paul zimmermann on Wed, 25 Apr 2018 17:19:43 +0200



Re: Comparison of multiple-precision floating-point software


       Dear Karim,

> Date: Wed, 25 Apr 2018 16:27:19 +0200
> From: Karim Belabas <Karim.Belabas@math.u-bordeaux.fr>
> 
> * paul zimmermann [2018-04-25 11:26]:
> >        Dear Pari developers,
> > 
> > I have updated my comparison of multiple-precision floating-point software:
> > 
> > https://members.loria.fr/PZimmermann/timings.html
> > 
> > Pari is within a factor of 2 of MPFR for basic arithmetic, except for the
> > square root, mul/sqr at 1000d, sqr/div at 10000d.
> > 
> > Your feedback is welcome.
> 
> Hi Paul,
> 
> Thanks for the data and the update!
> 
> We already had this discussion a while back (in Medicis times...)
> but I still believe you should enable the 3 precomputed constants
> (log(2), Pi, Euler) in your timings-pari.c:
> 
> - I believe it will make no difference at small accuracy and provide a
>   small improvement at huge accuracies; but it's important to me as a
>   developer to be aware of the caching impact!
> 
> - algorithms and implementations were chosen with the precondition that
>   some constants would be available for free [ at least from the second
>   call on ].

Feel free to try it yourself; it is easy to get an account on the gcc67
machine. It suffices to replace #if 0 by #if 1 at line 69 of timings-pari.c.
I bet it will give a speedup of less than 1% at best, since for 10000 digits
we evaluate each function at least 127 times:

x*y        took 0.044343 ms (32767 eval in 1453 ms)
x^2        took 0.033082 ms (32767 eval in 1084 ms)
x/y        took 0.102240 ms (16383 eval in 1675 ms)
sqrt(x)    took 0.066899 ms (16383 eval in 1096 ms)
exp(x)     took 4.843137 ms (255 eval in 1235 ms)
log(x)     took 3.199609 ms (511 eval in 1635 ms)
sin(x)     took 8.992126 ms (127 eval in 1142 ms)
cos(x)     took 8.874016 ms (127 eval in 1127 ms)
arccos(x)  took 13.496063 ms (127 eval in 1714 ms)
arctan(x)  took 13.393701 ms (127 eval in 1701 ms)

> I believe it's more relevant to compare actual user experience for
> a given system, not restricted to an idealized (in this case crippled)
> "benchmark environment". (All benchmarks will be biased towards specific
> uses which may or may not be the system's primary target, but I don't
> think one should add to the bias.)

As you say, every benchmark is biased in some way. You can simply ignore mine,
or create your own, less biased one.

Best regards,
Paul