Pari/GP Reference Documentation

Sums, products, integrals and similar functions


asympnum   contfraceval   contfracinit   derivnum   intcirc   intfuncinit   intnum   intnumgauss   intnumgaussinit   intnuminit   intnumromb   limitnum   prod   prodeuler   prodinf   solve   solvestep   sum   sumalt   sumdiv   sumdivmult   suminf   sumnum   sumnumap   sumnumapinit   sumnuminit   sumnummonien   sumnummonieninit   sumpos  
 

Although the gp calculator is programmable, it is useful to have a number of preprogrammed loops, including sums, products, and a certain number of recursions. Also, a number of functions from numerical analysis like numerical integration and summation of series will be described here.

One of the parameters in these loops must be the control variable, hence a simple variable name. In the descriptions, the letter X will always denote any simple variable name, and represents the formal parameter used in the function. The expression to be summed, integrated, etc. is any legal PARI expression, including of course expressions using loops.

Library mode. Since it is easier to program the loops directly in library mode, these functions are mainly useful for GP programming. On the other hand, numerical routines code a function (to be integrated, summed, etc.) with two parameters named

    GEN (*eval)(void*,GEN)
    void *E;  /* context: eval(E, x) must evaluate your function at x. */

see the Libpari manual for details.

Numerical integration. Starting with version 2.2.9 the "double exponential" univariate integration method is implemented in intnum and its variants. Romberg integration is still available under the name intnumromb, but is superseded. It is possible to compute integrals numerically to thousands of decimal places in reasonable time, as long as the integrand is regular. It is also reasonable to compute integrals in several variables numerically, although more than two becomes lengthy. The integration domain may be non-compact, and the integrand may have reasonable singularities at endpoints. To use intnum, you must split the integral into a sum of subintegrals where the function has no singularities except at the endpoints. Polynomials in logarithms are not considered singular, and neglecting these logs, singularities are assumed to be algebraic (asymptotic to C(x-a)^{-α} for some α > -1 when x is close to a), or to correspond to simple discontinuities of some (higher) derivative of the function. For instance, the point 0 is a singularity of abs(x).

See also the discrete summation methods below, sharing the prefix sum.

asympnum

Asymptotic expansion of expr, corresponding to a sequence u(n), assuming it has the shape u(n) ~ ∑_{i ≥ 0} a_i n^{-iα} with rational coefficients a_i of reasonable height; the algorithm is heuristic and performs repeated calls to limitnum, with k and alpha as in limitnum.

  ? f(n) = n! / (n^n*exp(-n)*sqrt(n));
  ? asympnum(f)
  %2 = []   \\ failure !
  ? l = limitnum(f)
  %3 = 2.5066282746310005024157652848110452530
  ? asympnum(n->f(n)/l) \\ normalize
  %4 = [1, 1/12, 1/288, -139/51840]

and we indeed get a few terms of Stirling's expansion. Note that it helps to normalize with a limit computed to higher accuracy:

  ? \p100
  ? L = limitnum(f)
  ? \p38
  ? asympnum(n->f(n)/L) \\ we get more terms!
  %6 = [1, 1/12, 1/288, -139/51840, -571/2488320, 163879/209018880,\
        5246819/75246796800, -534703531/902961561600]

If alpha is not an integer, loss of accuracy is expected, so it should be precomputed to double accuracy, say:

  ? \p38
  ? asympnum(n->-log(1-1/n^Pi),,Pi)
  %1 = [0, 1, 1/2, 1/3]
  ? asympnum(n->-log(1-1/sqrt(n)),,1/2)
  %2 = [0, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9, 1/10, 1/11, 1/12, \
    1/13, 1/14, 1/15, 1/16, 1/17, 1/18, 1/19, 1/20, 1/21, 1/22]
  
  ? localprec(100); a = Pi;
  ? asympnum(n->-log(1-1/n^a),,a) \\ better !
  %4 = [0, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9, 1/10, 1/11, 1/12]

The library syntax is asympnum(void *E, GEN (*u)(void *,GEN,long), long muli, GEN alpha, long prec), where u(E, n, prec) must return u(n) in precision prec. Also available is GEN asympnum0(GEN u, long muli, GEN alpha, long prec), where u must be a vector of sufficient length as above.

contfraceval

Given a continued fraction CF output by contfracinit, evaluate the first lim terms of the continued fraction at t (all terms if lim is negative or omitted; if positive, lim must be less than or equal to the length of CF).

The library syntax is GEN contfraceval(GEN CF, GEN t, long lim).

contfracinit

Given M representing the power series S = ∑_{n ≥ 0} M[n+1]z^n, transform it into a continued fraction; restrict to n ≤ lim if the latter is non-negative. M can be a vector, a power series, a polynomial, or a rational function. The result is a 2-component vector [A,B] such that S = M[1] / (1+A[1]z+B[1]z^2/(1+A[2]z+B[2]z^2/(1+...1/(1+A[lim/2]z)))). Does not work if any coefficient of M vanishes, nor for series for which certain partial denominators vanish.
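
For instance, a minimal hedged sketch using the exponential series, whose coefficients do not vanish and whose partial denominators are classically known to be nonzero:

  ? S = exp(z + O(z^21));
  ? CF = contfracinit(S);   \\ the [A,B] data described above
  ? contfraceval(CF, 1/2) - exp(1/2)
  \\ expect a tiny error: CF encodes a high-order Pade-type approximant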

The library syntax is GEN contfracinit(GEN M, long lim).

derivnum

Numerical derivation of expr with respect to X at X = a. The order of derivation is 1 by default.

  ? derivnum(x=0, sin(exp(x))) - cos(1)
  %1 = 0.E-38

A clumsier approach, which would not work in library mode, is

  ? f(x) = sin(exp(x))
  ? f'(0) - cos(1)
  %2 = 0.E-38

When a is a power series, derivnum(t = a, f) computes f'(a) = (f(a))'/a'.
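
For instance, a small hedged sketch of the power series case; the coefficients of the result come out as floating point approximations:

  ? a = exp(x + O(x^8));
  ? derivnum(t = a, log(t))   \\ f'(a) = 1/a: expect exp(-x), up to rounding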

If the parameter ind is present, it can be

* a non-negative integer m, in which case we return f^{(m)}(x);

* or a vector of orders, in which case we return the vector of derivatives.

  ? derivnum(x = 0, exp(sin(x)), 16) \\ 16-th derivative
  %1 = -52635599.000000000000000000000000000000
  
  ? round( derivnum(x = 0, exp(sin(x)), [0..13]) )  \\ 0-13-th derivatives
  %2 = [1, 1, 1, 0, -3, -8, -3, 56, 217, 64, -2951, -12672, 5973, 309376]

The library syntax is derivfunk(void *E, GEN (*eval)(void*,GEN), GEN a, GEN ind, long prec). Also available is GEN derivfun(void *E, GEN (*eval)(void *, GEN), GEN a, long prec).

intcirc

Numerical integration of (2iπ)^{-1}expr with respect to X on the circle |X - a| = R. In other words, when expr is a meromorphic function, the sum of the residues in the corresponding disk; tab is as in intnum, except that if computed with intnuminit it should be with the endpoints [-1, 1].

  ? \p105
  ? intcirc(s=1, 0.5, zeta(s)) - 1
  time = 496 ms.
  %1 = 1.2883911040127271720 E-101 + 0.E-118*I

The library syntax is intcirc(void *E, GEN (*eval)(void*,GEN), GEN a,GEN R,GEN tab, long prec).

intfuncinit

Initialize tables for use with integral transforms such as Fourier, Laplace or Mellin transforms, in order to compute ∫_a^b f(t) k(t,z) dt for some kernel k(t,z). The endpoints a and b are coded as in intnum, f is the function to which the integral transform is to be applied and the non-negative integer m is as in intnum: multiply the number of sampling points roughly by 2^m, hopefully increasing the accuracy. This function is particularly useful when the function f is hard to compute, such as a gamma product.

Limitation. The endpoints a and b must be at infinity, with the same asymptotic behaviour. Oscillating types are not supported. This is easily overcome by integrating vectors of functions, see example below.

Examples.

* numerical Fourier transform F(z) = ∫_{- oo }^{+ oo } f(t)e^{-2iπ z t} dt. First the easy case, assume that f decreases exponentially:

     f(t) = exp(-t^2);
     A = [-oo,1];
     B = [+oo,1];
     \p200
     T = intfuncinit(t = A,B , f(t));
     F(z) =
     { my(a = -2*I*Pi*z);
       intnum(t = A,B, exp(a*t), T);
     }
     ? F(1) - sqrt(Pi)*exp(-Pi^2)
     %1 = -1.3... E-212

Now the harder case, f decreases slowly: we must specify the oscillating behaviour. Thus, we cannot precompute usefully since everything depends on the point we evaluate at:

     f(t) = 1 / (1+ abs(t));
     \p200
     \\ Fourier cosine transform
     FC(z) =
     { my(a = 2*Pi*z);
       intnum(t = [-oo, a*I], [+oo, a*I], cos(a*t)*f(t));
     }
     FC(1)

* Fourier coefficients: we must integrate over a period, but intfuncinit does not support finite endpoints. The solution is to integrate a vector of functions !

  FourierSin(f, T, k) =  \\ first k sine Fourier coeffs
  {
    my (w = 2*Pi/T);
    my (v = vector(k+1));
    intnum(t = -T/2, T/2,
       my (z = exp(I*w*t));
       v[1] = z;
       for (j = 2, k, v[j] = v[j-1]*z);
       f(t) * imag(v)) * 2/T;
  }
  FourierSin(t->sin(2*t), 2*Pi, 10)

The same technique can be used instead of intfuncinit to integrate f(t) k(t,z) whenever the list of z-values is known beforehand.

Note that the above code includes an unrelated optimization: the sin(j w t) are computed as imaginary parts of exp(i j w t) and the latter by successive multiplications.

* numerical Mellin inversion F(z) = (2iπ)^{-1} ∫_{c -i oo }^{c+i oo } f(s)z^{-s} ds = (2π)^{-1} ∫_{- oo }^{+ oo } f(c + i t)e^{-log z(c + it)} dt. We take c = 2 in the program below:

     f(s) = gamma(s)^3;  \\ f(c+it) decreases as exp(-3Pi|t|/2)
     c = 2; \\ arbitrary
     A = [-oo,3*Pi/2];
     B = [+oo,3*Pi/2];
     T = intfuncinit(t=A,B, f(c + I*t));
     F(z) =
     { my (a = -log(z));
       intnum(t=A,B, exp(a*I*t), T)*exp(a*c) / (2*Pi);
     }

The library syntax is intfuncinit(void *E, GEN (*eval)(void*,GEN), GEN a,GEN b,long m, long prec).

intnum

Numerical integration of expr on ]a,b[ with respect to X, using the double-exponential method, and thus O(D log D) evaluations of the integrand in precision D. The integrand may have values belonging to a vector space over the real numbers; in particular, it can be complex-valued or vector-valued. But it is assumed that the function is regular on ]a,b[. If the endpoints a and b are finite and the function is regular there, the situation is simple:

  ? intnum(x = 0,1, x^2)
  %1 = 0.3333333333333333333333333333
  ? intnum(x = 0,Pi/2, [cos(x), sin(x)])
  %2 = [1.000000000000000000000000000, 1.000000000000000000000000000]

An endpoint equal to ± oo is coded as +oo or -oo, as expected:

  ? intnum(x = 1,+oo, 1/x^2)
  %3 = 1.000000000000000000000000000

In basic usage, it is assumed that the function does not decrease exponentially fast at infinity:

  ? intnum(x=0,+oo, exp(-x))
    ***   at top-level: intnum(x=0,+oo,exp(-
    ***                 ^--------------------
    *** exp: overflow in expo().

We shall see in a moment how to avoid that last problem, after describing the last optional argument tab.

The tab argument. The routine uses weights w_i, which are mostly independent of the function being integrated, evaluated at many sampling points x_i, and approximates the integral by ∑ w_i f(x_i). If tab is

* a non-negative integer m, we multiply the number of sampling points by 2^m, hopefully increasing accuracy. Note that the running time increases roughly by a factor 2^m. One may try consecutive values of m until they give the same value up to an accepted error.

* a set of integration tables containing precomputed x_i and w_i as output by intnuminit. This is useful if several integrations of the same type are performed (on the same kind of interval and functions, for a given accuracy): we skip a precomputation of O(D log D) elementary functions in accuracy D, whose running time has the same order of magnitude as the evaluation of the integrand. This is in particular useful for multivariate integrals.

Specifying the behavior at endpoints. This is done as follows. An endpoint a is either given as such (a scalar, real or complex, or +oo / -oo for ± oo), or as a two-component vector [a,α], to indicate the behavior of the integrand in a neighborhood of a.

If a is finite, the code [a,α] means the function has a singularity of the form (x-a)^{α}, up to logarithms. (If α ≥ 0, we only assume the function is regular, which is the default assumption.) If a wrong singularity exponent is used, the result will lose a catastrophic number of decimals:

  ? intnum(x=0, 1, x^(-1/2))         \\  assume x^{-1/2} is regular at 0
  %1 = 1.9999999999999999999999999999827931660
  ? intnum(x=[0,-1/2], 1, x^(-1/2))  \\  no, it's not
  %2 = 2.0000000000000000000000000000000000000
  ? intnum(x=[0,-1/10], 1, x^(-1/2)) \\  using a wrong exponent is bad
  %3 = 1.9999999999999999999999999999999901912

If a is ± oo , which is coded as +oo or -oo, the situation is more complicated, and [±oo,α] means:

* α = 0 (or no α at all, i.e. simply ±oo) assumes that the integrand tends to zero moderately quickly, at least as O(x^{-2}) but not exponentially fast.

* α > 0 assumes that the function tends to zero exponentially fast approximately as exp(-α x). This includes oscillating but quickly decreasing functions such as exp(-x)sin(x).

  ? intnum(x=0, +oo, exp(-2*x))
    ***   at top-level: intnum(x=0,+oo,exp(-
    ***                 ^--------------------
    *** exp: exponent (expo) overflow
  ? intnum(x=0, [+oo, 2], exp(-2*x))  \\  OK!
  %1 = 0.50000000000000000000000000000000000000
  ? intnum(x=0, [+oo, 3], exp(-2*x))  \\  imprecise exponent, still OK !
  %2 = 0.50000000000000000000000000000000000000
  ? intnum(x=0, [+oo, 10], exp(-2*x)) \\  wrong exponent  ==>  disaster
  %3 = 0.49999999999952372962457451698256707393

As the last example shows, the exponential decrease rate must be indicated to avoid overflow, but the method is robust enough for a rough guess to be acceptable.

* α < -1 assumes that the function tends to 0 slowly, like x^{α}. Here the algorithm is less robust and it is essential to give a sharp α, unless α ≤ -2 in which case we use the default algorithm as if α were missing (or equal to 0).

  ? intnum(x=1, +oo, x^(-3/2))         \\ default
  %1 = 1.9999999999999999999999999999646391207
  ? intnum(x=1, [+oo,-3/2], x^(-3/2))  \\ precise decrease rate
  %2 = 2.0000000000000000000000000000000000000
  ? intnum(x=1, [+oo,-11/10], x^(-3/2)) \\ worse than default
  %3 = 2.0000000000000000000000000089298011973

The last two codes are reserved for oscillating functions. Let k > 0 be real and g(x) a non-oscillating function tending slowly to 0 (e.g. like a negative power of x); then

* α = k * I assumes that the function behaves like cos(kx)g(x).

* α = -k* I assumes that the function behaves like sin(kx)g(x).

Here it is critical to give the exact value of k. If the oscillating part is not a pure sine or cosine, one must expand it into a Fourier series, use the above codings, and sum the resulting contributions. Otherwise you will get nonsense. Note that cos(kx), and similarly sin(kx), means that very function, and not a translated version such as cos(kx+a).

Note. If f(x) = cos(kx)g(x) where g(x) tends to zero exponentially fast as exp(-α x), it is up to the user to choose between [±oo,α] and [±oo,k*I], but a good rule of thumb is that if the oscillations are weaker than the exponential decrease, choose [±oo,α], otherwise choose [±oo,k*I], although the latter can reasonably be used in all cases, while the former cannot. To take a specific example, in the inverse Mellin transform, the integrand is almost always a product of an exponentially decreasing and an oscillating factor. If we choose the oscillating type of integral we perhaps obtain the best results, at the expense of having to recompute our functions for a different value of the variable z giving the transform, preventing us from using a function such as intfuncinit. On the other hand, using the exponential type of integral, we obtain less accurate results, but we skip expensive recomputations. See intfuncinit for more explanations.

We shall now see many examples to get a feeling for what the various parameters achieve. All examples below assume precision is set to 115 decimal digits. We first type

  ? \p 115

Apparent singularities. In many cases, apparent singularities can be ignored. For instance, if f(x) = 1/(exp(x)-1) - exp(-x)/x, then ∫_0^oo f(x) dx = γ, Euler's constant (Euler in GP). But

  ? f(x) = 1/(exp(x)-1) - exp(-x)/x
  ? intnum(x = 0, [oo,1],  f(x)) - Euler
  %1 = 0.E-115

But close to 0 the function f is computed with an enormous loss of accuracy, and we are in fact lucky that it gets multiplied by weights which are sufficiently close to 0 to hide this:

  ? f(1e-200)
  %2 = -3.885337784451458142 E84

A more robust solution is to define the function differently near special points, e.g. by a Taylor expansion

  ? F = truncate( f(t + O(t^10)) ); \\  expansion around t = 0
  ? poldegree(F)
  %4 = 7
  ? g(x) = if (x > 1e-18, f(x), subst(F,t,x)); \\  note that 7·18 > 115
  ? intnum(x = 0, [oo,1],  g(x)) - Euler
  %2 = 0.E-115

It is up to the user to determine constants such as the 10^{-18} and 10 used above.

True singularities. With true singularities the result is worse. For instance

  ? intnum(x = 0, 1,  x^(-1/2)) - 2
  %1 = -3.5... E-68 \\  only 68 correct decimals
  
  ? intnum(x = [0,-1/2], 1,  x^(-1/2)) - 2
  %2 = 0.E-114 \\  better

Oscillating functions.

  ? intnum(x = 0, oo, sin(x) / x) - Pi/2
  %1 = 16.19.. \\  nonsense
  ? intnum(x = 0, [oo,1], sin(x)/x) - Pi/2
  %2 = -0.006.. \\  bad
  ? intnum(x = 0, [oo,-I], sin(x)/x) - Pi/2
  %3 = 0.E-115 \\  perfect
  ? intnum(x = 0, [oo,-I], sin(2*x)/x) - Pi/2  \\  oops, wrong k
  %4 = 0.06...
  ? intnum(x = 0, [oo,-2*I], sin(2*x)/x) - Pi/2
  %5 = 0.E-115 \\  perfect
  
  ? intnum(x = 0, [oo,-I], sin(x)^3/x) - Pi/4
  %6 = -0.0008... \\  bad
  ? sin(x)^3 - (3*sin(x)-sin(3*x))/4
  %7 = O(x^17)

We may use the above linearization and compute two oscillating integrals with endpoints [oo, -I] and [oo, -3*I] respectively, or notice the obvious change of variable, and reduce to the single integral (1/2)∫_0^oo sin(x)/x dx. We finish with some more complicated examples:

  ? intnum(x = 0, [oo,-I], (1-cos(x))/x^2) - Pi/2
  %1 = -0.0003... \\  bad
  ? intnum(x = 0, 1, (1-cos(x))/x^2) \
  + intnum(x = 1, oo, 1/x^2) - intnum(x = 1, [oo,I], cos(x)/x^2) - Pi/2
  %2 = 0.E-115 \\  perfect
  
  ? intnum(x = 0, [oo, 1], sin(x)^3*exp(-x)) - 0.3
  %3 = -7.34... E-55 \\  bad
  ? intnum(x = 0, [oo,-I], sin(x)^3*exp(-x)) - 0.3
  %4 = 8.9... E-103 \\  better. Try higher m
  ? tab = intnuminit(0,[oo,-I], 1); \\  double number of sampling points
  ? intnum(x = 0, oo, sin(x)^3*exp(-x), tab) - 0.3
  %6 = 0.E-115 \\  perfect

Warning. Like sumalt, intnum often assigns a reasonable value to diverging integrals. Use these values at your own risk! For example:

  ? intnum(x = 0, [oo, -I], x^2*sin(x))
  %1 = -2.0000000000...

Note the formula ∫_0^oo sin(x)/x^s dx = cos(πs/2) Γ(1-s), a priori valid only for 0 < Re(s) < 2, but the right hand side provides an analytic continuation which may be evaluated at s = -2...

Multivariate integration. Using successive univariate integration with respect to different formal parameters, it is immediate to do naive multivariate integration. But it is important to use a suitable intnuminit to precompute data for the internal integrations at least!

For example, to compute the double integral on the unit disc x^2+y^2 ≤ 1 of the function x^2+y^2, we can write

  ? tab = intnuminit(-1,1);
  ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab),tab) - Pi/2
  %2 = -7.1... E-115 \\  OK
  

The first tab is essential, the second optional. Compare:

  ? tab = intnuminit(-1,1);
  time = 4 ms.
  ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2));
  time = 3,092 ms. \\  slow
  ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab), tab);
  time = 252 ms.  \\  faster
  ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab));
  time = 261 ms.  \\  the internal integral matters most

The library syntax is intnum(void *E, GEN (*eval)(void*,GEN), GEN a,GEN b,GEN tab, long prec), where an omitted tab is coded as NULL.

intnumgauss

Numerical integration of expr on the compact interval [a,b] with respect to X using Gauss-Legendre quadrature; tab is either omitted or precomputed with intnumgaussinit. As a convenience, it can be an integer n in which case we call intnumgaussinit(n) and use n-point quadrature.

  ? test(n, b = 1) = T=intnumgaussinit(n);\
      intnumgauss(x=-b,b, 1/(1+x^2),T) - 2*atan(b);
  ? test(0) \\ default
  %1 = -9.490148553624725335 E-22
  ? test(40)
  %2 = -6.186629001816965717 E-31
  ? test(50)
  %3 = -1.1754943508222875080 E-38
  ? test(50, 2) \\ double interval length
  %4 = -4.891779568527713636 E-21
  ? test(90, 2) \\ n must almost be doubled as well!
  %5 = -9.403954806578300064 E-38

On the other hand, we recommend splitting the integral and changing variables rather than increasing n too much:

  ? f(x) = 1/(1+x^2);
  ? b = 100;
  ? intnumgauss(x=0,1, f(x)) + intnumgauss(x=1,1/b, f(1/x)*(-1/x^2)) - atan(b)
  %3 = -1.0579449157400587572 E-37

The library syntax is GEN intnumgauss0(GEN X, GEN b, GEN expr, GEN tab = NULL, long prec).

intnumgaussinit

Initialize tables for n-point Gauss-Legendre integration of a smooth function f on a compact interval [a,b] at the current realprecision. If n is omitted, make a default choice n ~ realprecision, suitable for analytic functions on [-1,1]. The error is bounded by

((b-a)^{2n+1} (n!)^4) / ((2n+1) [(2n)!]^3) · f^{(2n)}(ξ), a < ξ < b,

so, if the interval length increases, n should be increased as well.

  ? T = intnumgaussinit();
  ? intnumgauss(t=-1,1,exp(t), T) - exp(1)+exp(-1)
  %1 = -5.877471754111437540 E-39
  ? intnumgauss(t=-10,10,exp(t), T) - exp(10)+exp(-10)
  %2 = -8.358367809712546836 E-35
  ? intnumgauss(t=-1,1,1/(1+t^2), T) - Pi/2
  %3 = -9.490148553624725335 E-22
  
  ? T = intnumgaussinit(50);
  ? intnumgauss(t=-1,1,1/(1+t^2), T) - Pi/2
  %5 = -1.1754943508222875080 E-38
  ? intnumgauss(t=-5,5,1/(1+t^2), T) - 2*atan(5)
  %6 = -1.2[...]E-8

On the other hand, we recommend splitting the integral and changing variables rather than increasing n too much, see intnumgauss.

The library syntax is GEN intnumgaussinit(long n, long prec).

intnuminit

Initialize tables for integration from a to b, where a and b are coded as in intnum. Only the compactness, the possible existence of singularities, the speed of decrease or the oscillations at infinity are taken into account, and not the values. For instance intnuminit(-1,1) is equivalent to intnuminit(0,Pi), and intnuminit([0,-1/2],oo) is equivalent to intnuminit([-1,-1/2], -oo); on the other hand, the order matters and intnuminit([0,-1/2], [1,-1/3]) is not equivalent to intnuminit([0,-1/3], [1,-1/2]) !

If m is present, it must be non-negative and we multiply the default number of sampling points by 2^m (increasing the running time by a similar factor).

The result is technical and liable to change in the future, but we document it here for completeness. Let x = φ(t), t ∈ ]- oo , oo [ be an internally chosen change of variable, achieving double exponential decrease of the integrand at infinity. The integrator intnum will compute h ∑_{|n| < N} φ'(nh) F(φ(nh)) for some integration step h and truncation parameter N. In basic use, let

  [h, x0, w0, xp, wp, xm, wm] = intnuminit(a,b);

* h is the integration step

* x_0 = φ(0) and w_0 = φ'(0),

* xp contains the φ(nh), 0 < n < N,

* xm contains the φ(nh), 0 < -n < N, or is empty.

* wp contains the φ'(nh), 0 < n < N,

* wm contains the φ'(nh), 0 < -n < N, or is empty.

The arrays xm and wm are left empty when φ is an odd function. In complicated situations when non-default behaviour is specified at end points, intnuminit may return up to 3 such arrays, corresponding to a splitting of up to 3 integrals of basic type.
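
A hedged illustration of this basic-use layout (which, as said above, is liable to change):

  ? [h, x0, w0, xp, wp, xm, wm] = intnuminit(-1, 1);
  ? #xp == #wp   \\ one weight phi'(nh) per node phi(nh): expect 1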

If the functions to be integrated later are of the form F = f(t) k(t,z) for some kernel k (e.g. Fourier, Laplace, Mellin,...), it is useful to also precompute the values of f(φ(nh)), which is accomplished by intfuncinit. The hard part is to determine the behaviour of F at endpoints, depending on z.

The library syntax is GEN intnuminit(GEN a, GEN b, long m, long prec).

intnumromb

Numerical integration of expr (smooth in ]a,b[), with respect to X. Suitable for low accuracy; if expr is very regular (e.g. analytic in a large region) and high accuracy is desired, try intnum first.

Set flag = 0 (or omit it altogether) when a and b are not too large, the function is smooth, and can be evaluated exactly everywhere on the interval [a,b].

If flag = 1, uses a general driver routine for doing numerical integration, making no particular assumption (slow).

flag = 2 is tailored for use when a or b is infinite, using the change of variable t = 1/X. One must have ab > 0, and in fact if for example b = + oo , then it is preferable to have a as large as possible, at least a ≥ 1.

If flag = 3, the function is allowed to be undefined at a (but right continuous) or b (left continuous), for example the function sin(x)/x between x = 0 and 1.

The user should not require too much accuracy: realprecision about 30 decimal digits (realbitprecision about 100 bits) is OK, but not much more. In addition, analytical cleanup of the integral must have been done: there must be no singularities in the interval or at the boundaries. In practice this can be accomplished with a change of variable. Furthermore, for improper integrals, where one or both of the limits of integration are plus or minus infinity, the function must decrease sufficiently rapidly at infinity, which can often be accomplished through integration by parts. Finally, the function to be integrated should not be very small (compared to the current precision) on the entire interval. This can of course be accomplished by just multiplying by an appropriate constant.

Note that infinity can be represented with essentially no loss of accuracy by an appropriate huge number. However beware of real underflow when dealing with rapidly decreasing functions. For example, in order to compute ∫_0^oo e^{-x^2} dx to 28 decimal digits, one can set infinity equal to 10, say, and certainly not to 1e1000.
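
A few hedged illustrations of the flags described above, at moderate accuracy:

  ? \p28
  ? intnumromb(x = 0, 1, x^2) - 1/3                 \\ flag 0: smooth on [0,1]
  ? intnumromb(x = 0, 10, exp(-x^2)) - sqrt(Pi)/2   \\ here 10 plays the role of oo
  ? intnumromb(x = 1, 1e5, 1/x^2, 2)                \\ flag 2: ab > 0, huge b for +oo; ~ 1 - 1e-5
  ? intnumromb(x = 0, 1, sin(x)/x, 3)               \\ flag 3: integrand undefined at x = 0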

The library syntax is intnumromb_bitprec(void *E, GEN (*eval)(void*,GEN), GEN a, GEN b, long flag, long bitprec), where eval(x, E) returns the value of the function at x. You may store any additional information required by eval in E, or set it to NULL. The historical variant intnumromb(..., long prec), where prec is expressed in words, not bits, is obsolete and should no longer be used.

limitnum

Lagrange-Zagier numerical extrapolation of expr, corresponding to a sequence u_n, either given by a closure n -> u(n) or by a vector of values. I.e., assuming that u_n tends to a finite limit ℓ, try to determine ℓ. This routine is purely numerical and heuristic, thus may or may not work on your examples; k is ignored if u is given by a vector, and otherwise is a multiplier such that we extrapolate from u(kn).

Assume that u_n has an asymptotic expansion in n^{-α} : u_n = ℓ + ∑_{i ≥ 1} a_i n^{-iα} for some a_i.

  ? limitnum(n -> n*sin(1/n))
  %1 = 1.0000000000000000000000000000000000000
  
  ? limitnum(n -> (1+1/n)^n) - exp(1)
  %2 = 0.E-37
  
  ? limitnum(n -> 2^(4*n+1)*(n!)^4 / (2*n)! /(2*n+1)! )
  %3 = 3.1415926535897932384626433832795028842
  ? Pi
  %4 = 3.1415926535897932384626433832795028842

If u_n is given by a vector, it must be long enough for the extrapolation to make sense: at least k times the current realprecision. The preferred format is thus a closure, although it becomes inconvenient when u_n cannot be directly computed in time polynomial in log n, for instance if it is defined as a sum or by induction. In that case, passing a vector of values is the best option. It usually pays off to interpolate u(kn) for some k > 1:

  ? limitnum(vector(10,n,(1+1/n)^n))
   ***                 ^--------------------
   *** limitnum: non-existent component in limitnum: index < 20
  \\ at this accuracy, we must have at least 20 values
  ? limitnum(vector(20,n,(1+1/n)^n)) - exp(1)
  %5 = -2.05... E-20
  ? limitnum(vector(20,n, m=10*n;(1+1/m)^m)) - exp(1) \\ better accuracy
  %6 = 0.E-37
  
  ? v = vector(20); s = 0;
  ? for(i=1,#v, s += 1/i; v[i]= s - log(i));
  ? limitnum(v) - Euler
  %9 = -1.6... E-19
  
  ? V = vector(200); s = 0;
  ? for(i=1,#V, s += 1/i; V[i]= s);
  ? v = vector(#V \ 10, i, V[10*i] - log(10*i));
  ? limitnum(v) - Euler
  %13 = 6.43... E-29

The library syntax is limitnum(void *E, GEN (*u)(void *,GEN,long), long muli, GEN alpha, long prec), where u(E, n, prec) must return u(n) in precision prec. Also available is GEN limitnum0(GEN u, long muli, GEN alpha, long prec), where u must be a vector of sufficient length as above.

prod

Product of expression expr, initialized at x, the formal parameter X going from a to b. As for sum, the main purpose of the initialization parameter x is to force the type of the operations being performed. For example if it is set equal to the integer 1, operations will start being done exactly. If it is set equal to the real 1., they will be done using real numbers having the default precision. If it is set equal to the power series 1+O(X^k) for a certain k, they will be done using power series of precision at most k. These are the three most common initializations.

As an extreme example, compare

  ? prod(i=1, 100, 1 - X^i);  \\  this has degree 5050 !!
  time = 128 ms.
  ? prod(i=1, 100, 1 - X^i, 1 + O(X^101))
  time = 8 ms.
  %2 = 1 - X - X^2 + X^5 + X^7 - X^12 - X^15 + X^22 + X^26 - X^35 - X^40 + \
  X^51 + X^57 - X^70 - X^77 + X^92 + X^100 + O(X^101)

Of course, in this specific case, it is faster to use eta, which is computed using Euler's formula.

  ? prod(i=1, 1000, 1 - X^i, 1 + O(X^1001));
  time = 589 ms.
  ? \ps1000
  seriesprecision = 1000 significant terms
  ? eta(X) - %
  time = 8ms.
  %4 = O(X^1001)

The library syntax is produit(GEN a, GEN b, char *expr, GEN x).

prodeuler

Product of expression expr, initialized at 1. (i.e. to a real number equal to 1 to the current realprecision), the formal parameter X ranging over the prime numbers between a and b.
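
For instance, a hedged sketch approximating ζ(2) via its Euler product; the truncation at 10^6 leaves a small (negative) error, so do not expect full accuracy:

  ? \p38
  ? 1 / prodeuler(p = 2, 10^6, 1 - 1/p^2) - zeta(2)
  \\ small negative error: the primes above 10^6 are missing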

The library syntax is prodeuler(void *E, GEN (*eval)(void*,GEN), GEN a,GEN b, long prec).

prodinf

Infinite product of expression expr, the formal parameter X starting at a. The evaluation stops when the relative error of the expression minus 1 is less than the default precision. In particular, non-convergent products result in infinite loops. The expression must always evaluate to an element of ℂ.

If flag = 1, do the product of the (1+expr) instead.
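
A minimal sketch of both forms, computing the same product ∏_{n ≥ 1} (1 - 2^{-n}):

  ? p1 = prodinf(n = 1, 1 - 2^(-n));   \\ flag 0: the expression itself tends to 1
  ? p2 = prodinf(n = 1, -2^(-n), 1);   \\ flag 1: product of 1 + expr
  ? p1 - p2   \\ expect (essentially) 0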

The library syntax is prodinf(void *E, GEN (*eval)(void*,GEN), GEN a, long prec) (flag = 0), or prodinf1 with the same arguments (flag = 1).

solve

Find a real root of expression expr between a and b, under the condition expr(X = a) * expr(X = b) ≤ 0. (You will get an error message roots must be bracketed in solve if this does not hold.) This routine uses Brent's method and can fail miserably if expr is not defined in the whole of [a,b] (try solve(x = 1, 2, tan(x))).
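
For instance, since sin(2) > 0 > sin(4), the root π is bracketed:

  ? solve(x = 2, 4, sin(x)) - Pi
  \\ expect essentially 0 at the current accuracy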

The library syntax is zbrent(void *E,GEN (*eval)(void*,GEN),GEN a,GEN b,long prec).

solvestep

Find zeros of a continuous function in the real interval [a,b] by naive interval splitting. This function is heuristic and may or may not find the intended zeros. Binary digits of flag mean

* 1: return as soon as one zero is found, otherwise return all zeros found;

* 2: refine the splitting until at least one zero is found (may loop indefinitely if there are no zeros);

* 4: do a multiplicative search (we must have a > 0 and step > 1), otherwise an additive search; step is the multiplicative or additive step.

* 8: refine the splitting until at least one zero is very close to an integer.

  ? solvestep(X=0,10,1,sin(X^2),1)
  %1 = 1.7724538509055160272981674833411451828
  ? solvestep(X=1,12,2,besselj(4,X),4)
  %2 = [7.588342434..., 11.064709488...]

The library syntax is solvestep(void *E, GEN (*eval)(void*,GEN), GEN a,GEN b, GEN step,long flag,long prec).

sum

Sum of expression expr, initialized at x, the formal parameter going from a to b. As for prod, the initialization parameter x may be given to force the type of the operations being performed.

As an extreme example, compare

  ? sum(i=1, 10^4, 1/i); \\  rational number: denominator has 4345 digits.
  time = 236 ms.
  ? sum(i=1, 10^4, 1/i, 0.)
  time = 8 ms.
  %2 = 9.787606036044382264178477904

The library syntax is somme(GEN a, GEN b, char *expr, GEN x).

sumalt

Numerical summation of the series expr, which should be an alternating series (-1)^k a_k, the formal variable X starting at a. This uses an algorithm of Cohen, Villegas and Zagier (Experiment. Math. 9 (2000), no. 1, 3--12).

If flag = 0, assuming that the a_k are the moments of a positive measure on [0,1], the relative error is O((3+√8)^{-n}) after using a_k for k ≤ n. If realprecision is p, we thus set n = log(10)p/log(3+√8) ~ 1.3 p; besides the time needed to compute the a_k, k ≤ n, the algorithm overhead is negligible: time O(p^2) and space O(p).

If flag = 1, use a variant with more complicated polynomials, see polzagier. If the a_k are the moments of w(x)dx where w (or only xw(x^2)) is a smooth function extending analytically to the whole complex plane, convergence is in O(14.4^{-n}). If xw(x^2) extends analytically to a smaller region, we still have exponential convergence, with worse constants. Usually faster when the computation of a_k is expensive. If realprecision is p, we thus set n = log(10)p/log(14.4) ~ 0.86 p; besides the time needed to compute the a_k, k ≤ n, the algorithm overhead is not negligible: time O(p^3) and space O(p^2). Thus, even if the analytic conditions for rigorous use are met, this variant is only worthwhile if the a_k are hard to compute, at least O(p^2) individually on average: otherwise we gain a small constant factor (1.5, say) in the number of needed a_k at the expense of a large overhead.

The conditions for rigorous use are hard to check but the routine is best used heuristically: even divergent alternating series can sometimes be summed by this method, as well as series which are not exactly alternating (see for example Section se:user_defined). It should be used to try and guess the value of an infinite sum. (However, see the example at the end of Section se:userfundef.)

If the series already converges geometrically, suminf is often a better choice:

  ? \p28
  ? sumalt(i = 1, -(-1)^i / i)  - log(2)
  time = 0 ms.
  %1 = -2.524354897 E-29
  ? suminf(i = 1, -(-1)^i / i)   \\  Had to hit C-C
    ***   at top-level: suminf(i=1,-(-1)^i/i)
    ***                                ^------
    *** suminf: user interrupt after 10min, 20,100 ms.
  ? \p1000
  ? sumalt(i = 1, -(-1)^i / i)  - log(2)
  time = 90 ms.
  %2 = 4.459597722 E-1002
  
  ? sumalt(i = 0, (-1)^i / i!) - exp(-1)
  time = 670 ms.
  %3 = -4.03698781490633483156497361352190615794353338591897830587 E-944
  ? suminf(i = 0, (-1)^i / i!) - exp(-1)
  time = 110 ms.
  %4 = -8.39147638 E-1000   \\   faster and more accurate

The library syntax is sumalt(void *E, GEN (*eval)(void*,GEN),GEN a,long prec). Also available is sumalt2 with the same arguments (flag = 1).

sumdiv

Sum of expression expr over the positive divisors of n. This function is a trivial wrapper essentially equivalent to

    D = divisors(n);
    for (i = 1, #D, X = D[i]; eval(expr))

(except that X is lexically scoped to the sumdiv loop). If expr is a multiplicative function, use sumdivmult.
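
For instance, with exact integer results:

  ? sumdiv(12, d, d)   \\ sigma(12) = 1 + 2 + 3 + 4 + 6 + 12
  %1 = 28
  ? sumdiv(12, d, moebius(d)*d)   \\ = (1-2)*(1-3), a multiplicative sum
  %2 = 2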

sumdivmult

Sum of multiplicative expression expr over the positive divisors d of n. Assume that expr evaluates to f(d) where f is multiplicative: f(1) = 1 and f(ab) = f(a)f(b) for coprime a and b.
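
For instance, f(d) = d^2 is multiplicative and we recover σ_2(12):

  ? sumdivmult(12, d, d^2)
  %1 = 210
  ? sumdiv(12, d, d^2)   \\ same value via the generic routine
  %2 = 210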

suminf

Infinite sum of expression expr, the formal parameter X starting at a. The evaluation stops when the relative error of the expression is less than the default precision for 3 consecutive evaluations. The expression must always evaluate to a complex number.

If the series converges slowly, make sure realprecision is low (even 28 digits may be too much). In this case, if the series is alternating or the terms have a constant sign, sumalt and sumpos should be used instead.

  ? \p28
  ? suminf(i = 1, -(-1)^i / i)   \\  Had to hit C-C
    ***   at top-level: suminf(i=1,-(-1)^i/i)
    ***                                ^------
    *** suminf: user interrupt after 10min, 20,100 ms.
  ? sumalt(i = 1, -(-1)^i / i) - log(2)
  time = 0 ms.
  %1 = -2.524354897 E-29

The library syntax is suminf(void *E, GEN (*eval)(void*,GEN), GEN a, long prec).

sumnum

Numerical summation of f(n) at high accuracy using Euler-MacLaurin, the variable n taking values from a to + oo , where f is assumed to have positive values and is a C^ oo function; a must be an integer and tab, if given, is the output of sumnuminit. The latter precomputes abscissas and weights, speeding up the computation; it also allows specifying the behaviour at infinity via sumnuminit([+oo, asymp]).

  ? \p500
  ? z3 = zeta(3);
  ? sumpos(n = 1, n^-3) - z3
  time = 2,332 ms.
  %2 = 2.438468843 E-501
  ? sumnum(n = 1, n^-3) - z3 \\ here slower than sumpos
  time = 2,752 ms.
  %3 = 0.E-500

Complexity. The function f will be evaluated at O(D log D) real arguments, where D ~ realprecision · log 10. The routine is geared towards slowly decreasing functions: if f decreases exponentially fast, then one of suminf or sumpos should be preferred. If f satisfies the stronger hypotheses required for Monien summation, i.e. if f(1/z) is holomorphic in a complex neighbourhood of [0,1], then sumnummonien will be faster since it only requires O(D/log D) evaluations:

  ? sumnummonien(n = 1, 1/n^3) - z3
  time = 1,985 ms.
  %3 = 0.E-500

The tab argument precomputes technical data not depending on the expression being summed and valid for a given accuracy, speeding up immensely later calls:

  ? tab = sumnuminit();
  time = 2,709 ms.
  ? sumnum(n = 1, 1/n^3, tab) - z3 \\ now much faster than sumpos
  time = 40 ms.
  %5 = 0.E-500
  
  ? tabmon = sumnummonieninit(); \\ Monien summation allows precomputations too
  time = 1,781 ms.
  ? sumnummonien(n = 1, 1/n^3, tabmon) - z3
  time = 2 ms.
  %7 = 0.E-500

The speedup due to precomputations becomes less impressive when the function f is expensive to evaluate, though:

  ? sumnum(n = 1, lngamma(1+1/n)/n, tab);
  time = 14,180 ms.
  
  ? sumnummonien(n = 1, lngamma(1+1/n)/n, tabmon); \\ fewer evaluations
  time = 717 ms.

Behaviour at infinity. By default, sumnum assumes that expr decreases slowly at infinity, but at least like O(n^{-2}). If the function decreases like n^{α} for some -2 < α < -1, then it must be indicated via

    tab = sumnuminit([+oo, alpha]); /* alpha < 0 slow decrease */

otherwise loss of accuracy is expected. If the function decreases quickly, like exp(-α n) for some α > 0, then it must be indicated via

    tab = sumnuminit([+oo, alpha]); /* alpha  > 0 exponential decrease */

otherwise exponent overflow will occur.

  ? sumnum(n=1,2^-n)
   ***   at top-level: sumnum(n=1,2^-n)
   ***                             ^----
   *** _^_: overflow in expo().
  ? tab = sumnuminit([+oo,log(2)]); sumnum(n=1,2^-n, tab)
  %1 = 1.000[...]

As a shortcut, one can also input

    sumnum(n = [a, asymp], f)

instead of

    tab = sumnuminit(asymp);
    sumnum(n = a, f, tab)

Further examples.

  ? \p200
  ? sumnum(n = 1, n^(-2)) - zeta(2) \\ accurate, fast
  time = 200 ms.
  %1 = -2.376364457868949779 E-212
  ? sumpos(n = 1, n^(-2)) - zeta(2)  \\ even faster
  time = 96 ms.
  %2 = 0.E-211
  ? sumpos(n=1,n^(-4/3)) - zeta(4/3)   \\ now much slower
  time = 13,045 ms.
  %3 = -9.980730723049589073 E-210
  ? sumnum(n=1,n^(-4/3)) - zeta(4/3)  \\ fast but inaccurate
  time = 365 ms.
  %4 = -9.85[...]E-85
  ? sumnum(n=[1,-4/3],n^(-4/3)) - zeta(4/3) \\ with decrease rate, now accurate
  time = 416 ms.
  %5 = -4.134874156691972616 E-210
  
  ? tab = sumnuminit([+oo,-4/3]);
  time = 196 ms.
  ? sumnum(n=1, n^(-4/3), tab) - zeta(4/3) \\ faster with precomputations
  time = 216 ms.
  %5 = -4.134874156691972616 E-210
  ? sumnum(n=1,-log(n)*n^(-4/3), tab) - zeta'(4/3)
  time = 321 ms.
  %7 = 7.224147951921607329 E-210

Note that in the case of slow decrease (α < 0), the exact decrease rate must be indicated, while in the case of exponential decrease, a rough value will do. In fact, for exponentially decreasing functions, sumnum is given for completeness and comparison purposes only: one of suminf or sumpos should always be preferred.

  ? sumnum(n=[1, 1], 2^-n) \\ pretend we decrease as exp(-n)
  time = 240 ms.
  %8 = 1.000[...] \\ perfect
  ? sumpos(n=1, 2^-n)
  %9 = 1.000[...] \\ perfect and instantaneous

The library syntax is sumnum(void *E, GEN (*eval)(void*, GEN), GEN a, GEN tab, long prec), where an omitted tab is coded as NULL.

sumnumap

Numerical summation of f(n) at high accuracy using Abel-Plana, the variable n taking values from a to + oo , where f is holomorphic in the right half-plane Re(z) > a; a must be an integer and tab, if given, is the output of sumnumapinit. The latter precomputes abscissas and weights, speeding up the computation; it also allows specifying the behaviour at infinity via sumnumapinit([+oo, asymp]).

  ? \p500
  ? z3 = zeta(3);
  ? sumpos(n = 1, n^-3) - z3
  time = 2,332 ms.
  %2 = 2.438468843 E-501
  ? sumnumap(n = 1, n^-3) - z3 \\ here slower than sumpos
  time = 2,565 ms.
  %3 = 0.E-500

Complexity. The function f will be evaluated at O(D log D) real arguments and O(D) complex arguments, where D ~ realprecision · log 10. The routine is geared towards slowly decreasing functions: if f decreases exponentially fast, then one of suminf or sumpos should be preferred. The default algorithm sumnum is usually a little slower than sumnumap but its initialization function sumnuminit becomes much faster as realprecision increases.

If f satisfies the stronger hypotheses required for Monien summation, i.e. if f(1/z) is holomorphic in a complex neighbourhood of [0,1], then sumnummonien will be faster since it only requires O(D/log D) evaluations:

  ? sumnummonien(n = 1, 1/n^3) - z3
  time = 1,128 ms.
  %3 = 0.E-500

The tab argument precomputes technical data not depending on the expression being summed and valid for a given accuracy, speeding up immensely later calls:

  ? tab = sumnumapinit();
  time = 2,567 ms.
  ? sumnumap(n = 1, 1/n^3, tab) - z3 \\ now much faster than sumpos
  time = 39 ms.
  %5 = 0.E-500
  
  ? tabmon = sumnummonieninit(); \\ Monien summation allows precomputations too
  time = 1,125 ms.
  ? sumnummonien(n = 1, 1/n^3, tabmon) - z3
  time = 2 ms.
  %7 = 0.E-500

The speedup due to precomputations becomes less impressive when the function f is expensive to evaluate, though:

  ? sumnumap(n = 1, lngamma(1+1/n)/n, tab);
  time = 10,762 ms.
  
  ? sumnummonien(n = 1, lngamma(1+1/n)/n, tabmon); \\ fewer evaluations
  time = 205 ms.

Behaviour at infinity. By default, sumnumap assumes that expr decreases slowly at infinity, but at least like O(n^{-2}). If the function decreases like n^{α} for some -2 < α < -1, then it must be indicated via

    tab = sumnumapinit([+oo, alpha]); /* alpha < 0 slow decrease */

otherwise loss of accuracy is expected. If the function decreases quickly, like exp(-α n) for some α > 0, then it must be indicated via

    tab = sumnumapinit([+oo, alpha]); /* alpha  > 0 exponential decrease */

otherwise exponent overflow will occur.

  ? sumnumap(n=1,2^-n)
   ***   at top-level: sumnumap(n=1,2^-n)
   ***                             ^----
   *** _^_: overflow in expo().
  ? tab = sumnumapinit([+oo,log(2)]); sumnumap(n=1,2^-n, tab)
  %1 = 1.000[...]

As a shortcut, one can also input

    sumnumap(n = [a, asymp], f)

instead of

    tab = sumnumapinit(asymp);
    sumnumap(n = a, f, tab)

Further examples.

  ? \p200
  ? sumnumap(n = 1, n^(-2)) - zeta(2) \\ accurate, fast
  time = 169 ms.
  %1 = -4.752728915737899559 E-212
  ? sumpos(n = 1, n^(-2)) - zeta(2)  \\ even faster
  time = 79 ms.
  %2 = 0.E-211
  ? sumpos(n=1,n^(-4/3)) - zeta(4/3)   \\ now much slower
  time = 10,518 ms.
  %3 = -9.980730723049589073 E-210
  ? sumnumap(n=1,n^(-4/3)) - zeta(4/3)  \\ fast but inaccurate
  time = 309 ms.
  %4 = -2.57[...]E-78
  ? sumnumap(n=[1,-4/3],n^(-4/3)) - zeta(4/3) \\ with decrease rate, now accurate
  time = 329 ms.
  %6 = -5.418110963941205497 E-210
  
  ? tab = sumnumapinit([+oo,-4/3]);
  time = 160 ms.
  ? sumnumap(n=1, n^(-4/3), tab) - zeta(4/3) \\ faster with precomputations
  time = 175 ms.
  %5 = -5.418110963941205497 E-210
  ? sumnumap(n=1,-log(n)*n^(-4/3), tab) - zeta'(4/3)
  time = 258 ms.
  %7 = 9.125239518216767153 E-210

Note that in the case of slow decrease (α < 0), the exact decrease rate must be indicated, while in the case of exponential decrease, a rough value will do. In fact, for exponentially decreasing functions, sumnumap is given for completeness and comparison purposes only: one of suminf or sumpos should always be preferred.

  ? sumnumap(n=[1, 1], 2^-n) \\ pretend we decrease as exp(-n)
  time = 240 ms.
  %8 = 1.000[...] \\ perfect
  ? sumpos(n=1, 2^-n)
  %9 = 1.000[...] \\ perfect and instantaneous

The library syntax is sumnumap(void *E, GEN (*eval)(void*, GEN), GEN a, GEN tab, long prec), where an omitted tab is coded as NULL.

sumnumapinit

Initialize tables for Abel--Plana summation of a series ∑ f(n), where f is holomorphic in a right half-plane. If given, asymp is of the form [+oo, α], as in intnum and indicates the decrease rate at infinity of functions to be summed. A positive α > 0 encodes an exponential decrease of type exp(-α n) and a negative -2 < α < -1 encodes a slow polynomial decrease of type n^{α}.

  ? \p200
  ? sumnumap(n=1, n^-2);
  time = 163 ms.
  ? tab = sumnumapinit();
  time = 160 ms.
  ? sumnumap(n=1, n^-2, tab); \\ faster
  time = 7 ms.
  
  ? tab = sumnumapinit([+oo, log(2)]); \\ decrease like 2^-n
  time = 164 ms.
  ? sumnumap(n=1, 2^-n, tab) - 1
  time = 36 ms.
  %5 = 3.0127431466707723218 E-282
  
  ? tab = sumnumapinit([+oo, -4/3]); \\ decrease like n^(-4/3)
  time = 166 ms.
  ? sumnumap(n=1, n^(-4/3), tab);
  time = 181 ms.

The library syntax is GEN sumnumapinit(GEN asymp = NULL, long prec).

sumnuminit

Initialize tables for Euler--MacLaurin delta summation of a series with positive terms. If given, asymp is of the form [+oo, α], as in intnum and indicates the decrease rate at infinity of functions to be summed. A positive α > 0 encodes an exponential decrease of type exp(-α n) and a negative -2 < α < -1 encodes a slow polynomial decrease of type n^{α}.

  ? \p200
  ? sumnum(n=1, n^-2);
  time = 200 ms.
  ? tab = sumnuminit();
  time = 188 ms.
  ? sumnum(n=1, n^-2, tab); \\ faster
  time = 8 ms.
  
  ? tab = sumnuminit([+oo, log(2)]); \\ decrease like 2^-n
  time = 200 ms.
  ? sumnum(n=1, 2^-n, tab)
  time = 44 ms.
  
  ? tab = sumnuminit([+oo, -4/3]); \\ decrease like n^(-4/3)
  time = 200 ms.
  ? sumnum(n=1, n^(-4/3), tab);
  time = 221 ms.

The library syntax is GEN sumnuminit(GEN asymp = NULL, long prec).

sumnummonien

Numerical summation ∑_{n ≥ a} f(n) at high accuracy, the variable n taking values from the integer a to + oo using Monien summation, which assumes that f(1/z) has a complex analytic continuation in a (complex) neighbourhood of the segment [0,1].

The function f is evaluated at O(D / log D) real arguments, where D ~ realprecision · log 10. By default, assume that f(n) = O(n^{-2}) and has a non-zero asymptotic expansion f(n) = ∑_{i ≥ 2} a_i n^{-i} at infinity. To handle more complicated behaviours and allow time-saving precomputations (for a given realprecision), see sumnummonieninit.
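
For instance, a hedged sketch: f(n) = 1/(n^2+n) satisfies the default hypotheses, since f(1/z) = z^2/(1+z) is analytic near [0,1], and the telescoping sum equals 1:

  ? \p38
  ? sumnummonien(n = 1, 1/(n^2 + n)) - 1
  \\ expect essentially 0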

The library syntax is GEN sumnummonien0(GEN n, GEN f, GEN tab = NULL, long prec).

sumnummonieninit

Initialize tables for Monien summation of a series ∑_{n ≥ n_0} f(n) where f(1/z) has a complex analytic continuation in a (complex) neighbourhood of the segment [0,1].

By default, assume that f(n) = O(n^{-2}) and has a non-zero asymptotic expansion f(n) = ∑_{i ≥ 2} a_i / n^i at infinity. Note that the sum starts at i = 2! The argument asymp allows specifying different expansions:

* a real number β > 0 means f(n) = ∑_{i ≥ 1} a_i / n^{i + β}. (Now the summation starts at i = 1.)

* a vector [α,β] of reals, where we must have α > 0 and α + β > 1 to ensure convergence, means that f(n) = ∑_{i ≥ 1} a_i / n^{α i + β}. Note that asymp = [1, β] is equivalent to asymp = β.

  ? \p57
  ? s = sumnum(n = 1, sin(1/sqrt(n)) / n); \\ reference point
  
  ? \p38
  ? sumnummonien(n = 1, sin(1/sqrt(n)) / n) - s
  %2 = -0.001[...] \\ completely wrong
  
  ? t = sumnummonieninit(1/2);  \\ f(n) = sum_i 1 / n^(i+1/2)
  ? sumnummonien(n = 1, sin(1/sqrt(n)) / n, t) - s
  %3 = 0.E-37 \\ now correct

(As a matter of fact, in the above summation, the result given by sumnum at \p38 is slightly incorrect, so we had to increase the accuracy to \p57.)

The argument w is used to sum expressions of the form ∑_{n ≥ n_0} f(n) w(n), for varying f as above, and fixed weight function w, where we further assume that the auxiliary sums g_w(m) = ∑_{n ≥ n_0} w(n) / n^{α m + β} converge for all m ≥ 1. Note that for non-negative integers k, and weight w(n) = (log n)^k, the function g_w(m) = ζ^{(k)}(α m + β) has a simple expression; for general weights, g_w is computed using sumnum. The following variants are available

* an integer k ≥ 0, to code w(n) = (log n)^k;

* a t_CLOSURE computing the values w(n), where we assume that w(n) = O(n^ε) for all ε > 0;

* a vector [w, fast], where w is a closure as above and fast is a scalar; we assume that w(n) = O(n^{fast+ε}); note that [w, 0] is equivalent to w alone. Note that if w decreases exponentially, suminf should be used instead.

The subsequent calls to sumnummonien must use the same value of n_0 as was used here.

  ? \p300
  ? sumnummonien(n = 1, n^-2*log(n)) + zeta'(2)
  time = 328 ms.
  %1 = -1.323[...]E-6 \\ completely wrong, f does not satisfy hypotheses !
  ? tab = sumnummonieninit(, 1); \\ codes w(n) = log(n)
  time = 3,993 ms.
  ? sumnummonien(n = 1, n^-2, tab) + zeta'(2)
  time = 41 ms.
  %3 = -5.562684646268003458 E-309  \\ now perfect
  
  ? tab = sumnummonieninit(, n->log(n)); \\ generic, slower
  time = 9,808 ms.
  ? sumnummonien(n = 1, n^-2, tab) + zeta'(2)
  time = 40 ms.
  %5 = -5.562684646268003458 E-309  \\ identical result

The library syntax is GEN sumnummonieninit(GEN asymp = NULL, GEN w = NULL, GEN n0 = NULL, long prec).

sumpos

Numerical summation of the series expr, which must be a series of terms having the same sign, the formal variable X starting at a. The algorithm used is Van Wijngaarden's trick for converting such a series into an alternating one, to which sumalt is then applied. For regular functions, the function sumnum is in general much faster once the initializations have been made using sumnuminit.

The routine is heuristic and assumes that expr is more or less a decreasing function of X. In particular, the result will be completely wrong if expr is 0 too often. Nor do we check that all terms have the same sign. Like sumalt, this function should be used to try and guess the value of an infinite sum.

If flag = 1, use sumalt(,1) instead of sumalt(,0), see Section se:sumalt. This variant requires more stringent analytic properties for rigorous use, but allows computing fewer series terms.

To reach accuracy 10^{-p}, both algorithms require O(p^2) space; furthermore, assuming the terms decrease polynomially (in O(n^{-C})), both need to compute O(p^2) terms. The sumpos(,1) variant has a smaller implied constant (roughly 1.5 times smaller). Since the sumalt(,1) overhead is now small compared to the time needed to compute series terms, this last variant should be about 1.5 times faster. On the other hand, the achieved accuracy may be much worse: as for sumalt, since conditions for rigorous use are hard to check, the routine is best used heuristically.
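
A minimal sketch of both variants on a series with positive, polynomially decreasing terms:

  ? \p38
  ? sumpos(n = 1, 1/n^2) - zeta(2)
  ? sumpos(n = 1, 1/n^2, 1) - zeta(2)   \\ flag 1: fewer terms, same heuristic caveats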

The library syntax is sumpos(void *E, GEN (*eval)(void*,GEN),GEN a,long prec). Also available is sumpos2 with the same arguments (flag = 1).