On 2017-01-14 at 22:55 GMT+01:00 Bill wrote:
On Sat, Jan 14, 2017 at 10:08:11PM +0100, Loïc Grenié wrote:
> On 2017-01-14 at 10:37 GMT+01:00 Bill wrote:
>
> > On Sat, Jan 14, 2017 at 10:01:43AM +0100, Loïc Grenié wrote:
> > > > On Fri, Jan 13, 2017 at 11:00:28PM +0100, Loïc Grenié wrote:
> > > > > Package: pari
> > > > > Version: dd75740be
> > > > > Severity: normal
> > > > >
> > > > >      Hi Karim and Bill,
> > > > >
> > > > >      I think you have probably already guessed what I am going to say. With pthread,
> > > > >
> > > > > parfor(n=10,20,addprimes(nextprime(10^n)))
> > > > >
> > > > >     crashes in the free(old) of addp().
> > > > >
> > > > >   addprimes() should probably be either disabled or protected with
> > > > >   pthread.
> > > >
> > > > We need a way for (MPI or posix) threads to inherit primetab from the
> > > > master thread.
> > >
> > >      Right now posix threads inherit primetab (that's the reason why it
> > >   crashes in the first place).
> >
> > Yes, but inheritance is an important feature to keep.
> >
> > >   A new field primetab in mt_queue could
> > >   allow to copy the master thread primetab to the children (for posix).
> > >   For MPI a send_request_GEN(PMPI_primetab, primetab, i);
> > >   could work (with the obvious modifications to mt/mpi.c). In that
> > >   case addprimes() would be local to the child thread (in both cases).
> >
> > For posix threads, we should do it the same way as for modular_eqn:
> >
> > see
> > void
> > pari_thread_sync(void)
> > {
> >   pari_pthread_init_varstate();
> >   pari_pthread_init_seadata();
> > }
> >
> > static GEN global_modular_eqn;
> > static THREAD GEN modular_eqn;
> > void
> > pari_init_seadata(void)  { global_modular_eqn = NULL; }
> > void
> > pari_thread_init_seadata(void)  { modular_eqn = global_modular_eqn; }
> > void
> > pari_pthread_init_seadata(void)  { global_modular_eqn = modular_eqn; }
> >
>
>      What happens with MPI?

seadata is loaded on demand by the MPI thread, just as regular GP does.

      This probably means that, for addprimes(), something will have to be done
  for MPI.
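
      For the posix side, I imagine something along the lines of the
  modular_eqn pattern you quoted above. The sketch below is only an
  illustration: the pari_*_init_primetab names and the gcopy() call are
  my guesses, not existing code, and where the copy should really live
  (heap vs. stack) would need more care:

static GEN global_primetab;
static THREAD GEN primetab;

void
pari_init_primetab(void)  { global_primetab = NULL; }

void
pari_thread_init_primetab(void)
{ /* give each child its own copy of the master table, so that
   * addprimes() in a worker only affects that worker's copy */
  primetab = global_primetab? gcopy(global_primetab): NULL;
}

void
pari_pthread_init_primetab(void)
{ /* snapshot the master thread's table before the workers are spawned */
  global_primetab = primetab;
}

  pari_thread_sync() would then also call pari_pthread_init_primetab().
  For MPI, as I said above, the master could send the table to each child
  with something like send_request_GEN(PMPI_primetab, primetab, i) and the
  corresponding case added in mt/mpi.c.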

       Cheers,

           Loïc