Bill Allombert on Thu, 09 May 2019 14:15:34 +0200



Re: Evaluation of polynomials: precision loss


On Thu, May 09, 2019 at 04:31:33AM -0700, Ilya Zakharevich wrote:
> To calculate some Fourier transforms, I need to evaluate polynomials
> of very high degree.  Is PARI going to use Horner’s method?
> 
> If so, then the rounding errors would accumulate, right?  One can
> fight this by using Horner’s method for short runs only, then
> combining the short runs with Horner’s method in a power of the
> argument.  (Somewhat similar to “a step” of an FFT, but without
> the speedup.)
> 
> Additionally, for me the point of evaluation is a root of
> unity — which also has numerical errors.  There does not seem to exist
> a way to avoid *these* accumulating for a general polynomial — unless
> one does substpol(P,x^N,x0^N) first (calculating x0^N using
> transcendental functions), right?
> 
> However, the way I implemented substpol() initially, it would not
> optimize for a power of the variable, so it may be painfully
> slow, unless something has changed…

Well, it does now:
GEN
gsubstpol(GEN x, GEN T, GEN y)
{ ...
  if (typ(T) == t_POL && RgX_is_monomial(T) && gequal1(leading_coeff(T)))
  { /* T = t^d */
    long d = degpol(T);
    v = varn(T); z = (d==1)? x: gdeflate(x, v, d); /* NULL if x is not a polynomial in t^d */
    if (z) return gerepileupto(av, gsubst(z, v, y));
  }
  ...
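[To make the effect concrete: when P(x) = Q(x^d), which is the pattern gdeflate detects, it suffices to compute y = x0^d once (for a root of unity this can be done via the exponential, avoiding accumulated powering errors) and then run Horner on the much shorter Q.  A sketch in plain C with doubles, not PARI code; eval_deflated is a hypothetical name.]

```c
/* If P(x) = Q(x^d), evaluate P at x by one power y = x^d followed by
 * a Horner run over the deg(P)/d + 1 coefficients of Q.
 * q[i] is the coefficient of x^(d*i) in P. */
static double eval_deflated(const double *q, int m, double x, int d)
{
  double y = 1, r = 0;
  int i;
  for (i = 0; i < d; i++) y *= x; /* y = x^d */
  for (i = m - 1; i >= 0; i--) r = r * y + q[i];
  return r;
}
```

This is exactly the shape of substpol(P, x^N, x0^N): deflate P to Q, then substitute the precomputed value of x0^N.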

Cheers,
Bill.