One of the parameters in these loops must be the control variable, hence a simple variable name. In the descriptions, the letter X will always denote any simple variable name, and represents the formal parameter used in the function. The expression to be summed, integrated, etc. is any legal PARI expression, including of course expressions using loops.

Library mode. Since it is easier to program the loops directly in library mode, these functions are mainly useful for GP programming. On the other hand, numerical routines code a function (to be integrated, summed, etc.) with two parameters named
  GEN (*eval)(void*, GEN)
  void *E;  \\ context: eval(E, x) must evaluate your function at x
See the Libpari manual for details.
Numerical integration.
The "double exponential" (DE) univariate
integration method is implemented in
Assume the bitprecision is b, so we try to achieve an absolute error less
than 2^{-b}. DE methods use O(b log b) function evaluations and should
work for both compact and non-compact intervals as long as the integrand is
the restriction of an analytic function to a suitable domain and its behaviour
at infinity is correctly described.
When integrating regular functions on a compact interval, away from
poles of the integrand, Gauss-Legendre integration (intnumgauss) is the
best choice. For maximal efficiency, abscissas and integration
weights can be precomputed, respectively using intnuminit (DE) or
intnumgaussinit (Gauss-Legendre).

Numerical summation.
Many numerical summation methods are available to approximate
∑_{n ≥ n0} f(n) at accuracy 2^{-b}: the overall best choice should
be sumnum, which uses Euler-MacLaurin summation. Also available are:
* Abel-Plana summation (sumnumap);

* Lagrange summation (sumnumlagrange);

* Sidi summation (sumnumsidi);

* Monien summation (sumnummonien).
* To sum rational functions, use sumnumrat.

All the functions so far require f to be the restriction to the integers of a regular function on the reals, and even on the complex numbers for Monien summation. The following algorithms allow functions defined only on the integers, under assumptions that are hard to verify. They are best used heuristically, since they are in fact often valid when those assumptions do not hold, and for instance often yield a result for divergent series (e.g., Borel resummation).
* To sum alternating series, use sumalt.

* To sum functions of a fixed sign, use sumpos.
asympnum(expr, {alpha = 1})
Asymptotic expansion of expr, corresponding to a sequence u(n),
assuming it has the shape
u(n) ~ ∑_{i ≥ 0} a_i n^{-iα}
with rational coefficients ai with reasonable height; the algorithm
is heuristic and performs repeated calls to limitnum, with alpha as in limitnum:
? f(n) = n! / (n^n*exp(-n)*sqrt(n)); ? asympnum(f) %2 = [] \\ failure ! ? localprec(57); l = limitnum(f) %3 = 2.5066282746310005024157652848110452530 ? asympnum(n->f(n)/l) \\ normalize %4 = [1, 1/12, 1/288, -139/51840, -571/2488320, 163879/209018880, 5246819/75246796800] and we indeed get a few terms of Stirling's expansion. Note that it definitely helps to normalize with a limit computed to higher accuracy (as a rule of thumb, multiply the bit accuracy by 1.612):
? l = limitnum(f) ? asympnum(n->f(n) / l) \\ failure again !!! %6 = []
We treat again the example of the Motzkin numbers M_n given
in limitnum:
\\ [Mk, Mk*2, ..., Mk*N] / (3^n / n^(3/2)) ? vM(N, k = 1) = { my(q = k*N, V); if (q == 1, return ([1/3])); V = vector(q); V[1] = V[2] = 1; for(n = 2, q - 1, V[n+1] = ((2*n + 1)*V[n] + 3*(n - 1)*V[n-1]) / (n + 2)); f = (n -> 3^n / n^(3/2)); return (vector(N, n, V[n*k] / f(n*k))); } ? localprec(100); l = limitnum(n->vM(n,10)); \\ 3/sqrt(12*Pi) ? \p38 ? asympnum(n->vM(n,10)/l) %2 = [1, -3/32, 101/10240, -1617/1638400, 505659/5242880000, ...]
If alpha is not a rational number, loss of accuracy is expected, so it should be precomputed to double accuracy, say:
? \p38 ? asympnum(n->log(1+1/n^Pi),Pi) %1 = [0, 1, -1/2, 1/3, -1/4, 1/5] ? localprec(76); a = Pi; ? asympnum(n->log(1+1/n^Pi), a) \\ more terms %3 = [0, 1, -1/2, 1/3, -1/4, 1/5, -1/6, 1/7, -1/8, 1/9, -1/10, 1/11, -1/12] ? #asympnum(n->log(1+1/sqrt(n)),1/2) \\ many more terms %4 = 49
The expression is evaluated for n = 1, 2,..., N for an N = O(B) if the current bit accuracy is B. If it is not defined for one of these values, translate or rescale accordingly:
? asympnum(n->log(1-1/n)) \\ can't evaluate at n = 1 !
 ***   at top-level: asympnum(n->log(1-1/n))
 ***                 ^-----------------------
 ***   in function asympnum: log(1-1/n)
 ***                         ^----------
 *** log: domain error in log: argument = 0
? asympnum(n->-log(1-1/(2*n)))
%5 = [0, 1/2, 1/8, 1/24, ...]
? asympnum(n->-log(1-1/(n+1)))
%6 = [0, 1, -1/2, 1/3, -1/4, ...]
The library syntax is
asympnumraw(expr, N, {alpha = 1})
Return the first N+1 terms of the asymptotic expansion of expr,
corresponding to a sequence u(n), as floating point numbers. Assume
that the expansion has the shape
u(n) ~ ∑_{i ≥ 0} a_i n^{-iα}
and return approximations of [a_0, a_1,..., a_N].
The algorithm is heuristic and performs repeated calls to limitnum, with alpha as in limitnum:
? f(n) = n! / (n^n*exp(-n)*sqrt(n)); ? asympnum(f) %2 = [] \\ failure ! ? v = asympnumraw(f, 10); ? v[1] - sqrt(2*Pi) %4 = 0.E-37 ? bestappr(v / v[1], 2^60) %5 = [1, 1/12, 1/288, -139/51840, -571/2488320, 163879/209018880,...]
and we indeed get a few terms of Stirling's expansion (the first 9 terms are correct). If u(n) has an asymptotic expansion in n^{-α} with α not an integer, the default alpha = 1 is inaccurate:
? f(n) = (1+1/n^(7/2))^(n^(7/2)); ? v1 = asympnumraw(f,10); ? v1[1] - exp(1) %8 = 4.62... E-12 ? v2 = asympnumraw(f,10,7/2); ? v2[1] - exp(1) %7 = 0.E-37
As in
The library syntax is
contfraceval(CF, t, {lim = -1})
Given a continued fraction CF output by contfracinit, evaluate the first lim terms of the continued fraction at t (all terms if lim is negative or omitted).
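For instance (a minimal sketch; the test series and evaluation point are arbitrary):

? M = vector(20, n, 1/(n-1)!); \\ Taylor coefficients of exp(z)
? CF = contfracinit(M);
? contfraceval(CF, 1/2) - exp(1/2) \\ should be tiny: the continued fraction converges fast near 0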
The library syntax is
contfracinit(M, {lim = -1})
Given M representing the power series S = ∑_{n ≥ 0} M[n+1] z^n,
transform it into a continued fraction in Euler form, using the
quotient-difference algorithm; restrict to n ≤ lim if lim is nonnegative. The result is a 2-component vector [A,B] such that

  S = M[1] / (1 + A[1]z + B[1]z^2 / (1 + A[2]z + B[2]z^2 / (1 + ... 1/(1 + A[lim/2]z)))).

This does not work if any coefficient of M vanishes, nor for series for which certain partial denominators vanish.
The library syntax is
derivnum(X = a, expr, {ind = 1})
Numerical derivation of expr with respect to X at X = a. The order of derivation is 1 by default.
? derivnum(x=0, sin(exp(x))) - cos(1)
%1 = 0.E-38
A clumsier approach, which would not work in library mode, is
? f(x) = sin(exp(x)) ? f'(0) - cos(1) %2 = 0.E-38
* When a is a numerical type (integer, rational number, real number or complex number), the result is a numerical approximation of the derivative;
* When a is a (polynomial, rational function or) power series, compute derivnum(x = a, expr) as f'(a) = (f(a))'/a':
? derivnum(x = 1 + t, sqrt(x)) %1 = 1/2 - 1/4*t + 3/16*t^2 - 5/32*t^3 + ... + O(t^16) ? derivnum(x = 1/(1 + t), sqrt(x)) %2 = 1/2 + 1/4*t - 1/16*t^2 + 1/32*t^3 + ... + O(t^16) ? derivnum(x = 1 + t + O(t^17), sqrt(x)) %3 = 1/2 - 1/4*t + 3/16*t^2 - 5/32*t^3 + ... + O(t^16)
If the parameter ind is present, it can be

* a nonnegative integer m, in which case we return f^{(m)}(x);

* or a vector of orders, in which case we return the vector of derivatives.
? derivnum(x = 0, exp(sin(x)), 16) \\ 16-th derivative %1 = -52635599.000000000000000000000000000000 ? round( derivnum(x = 0, exp(sin(x)), [0..13]) ) \\ 0-13-th derivatives %2 = [1, 1, 1, 0, -3, -8, -3, 56, 217, 64, -2951, -12672, 5973, 309376]
The library syntax is
intcirc(X = a, R, expr, {tab})
Numerical integration of (2iπ)^{-1} expr with respect to X on the circle
|X - a| = R.
In other words, when expr is a meromorphic
function, sum of the residues in the corresponding disk; tab is as in intnum:
? \p105 ? intcirc(s=1, 0.5, zeta(s)) - 1 time = 496 ms. %1 = 1.2883911040127271720 E-101 + 0.E-118*I
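As another quick sketch, Γ(s) is meromorphic with a single simple pole of residue 1 at s = 0 inside the disk |s| ≤ 1/2:

? intcirc(s = 0, 1/2, gamma(s)) - 1 \\ should be ~ 0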
The library syntax is
intfuncinit(t = a, b, f, {m = 0})
Initialize tables for use with integral transforms (such as Fourier,
Laplace or Mellin transforms) in order to compute
∫_a^b f(t) k(t,z) dt
for some kernel k(t,z).
The endpoints a and b are coded as in intnum.

Limitation. The endpoints a and b must be at infinity, with the same asymptotic behavior. Oscillating types are not supported. This is easily overcome by integrating vectors of functions, see example below.

Examples.

* Numerical Fourier transform F(z) = ∫_{-∞}^{+∞} f(t) e^{-2iπzt} dt. First the easy case, assume that f decreases exponentially:
f(t) = exp(-t^2); A = [-oo,1]; B = [+oo,1]; \p200 T = intfuncinit(t = A,B , f(t)); F(z) = { my(a = -2*I*Pi*z); intnum(t = A,B, exp(a*t), T); } ? F(1) - sqrt(Pi)*exp(-Pi^2) %1 = -1.3... E-212
Now the harder case, f decreases slowly: we must specify the oscillating behavior. Thus, we cannot precompute usefully since everything depends on the point we evaluate at:
f(t) = 1 / (1+ abs(t)); \p200 \\ Fourier cosine transform FC(z) = { my(a = 2*Pi*z); intnum(t = [-oo, a*I], [+oo, a*I], cos(a*t)*f(t)); } FC(1)
* Fourier coefficients: we must integrate over a period, but intfuncinit requires endpoints at infinity; instead, use a vector-valued integrand to compute many coefficients at once:
FourierSin(f, T, k) = \\ first k sine Fourier coeffs { my (w = 2*Pi/T); my (v = vector(k+1)); intnum(t = -T/2, T/2, my (z = exp(I*w*t)); v[1] = z; for (j = 2, k, v[j] = v[j-1]*z); f(t) * imag(v)) * 2/T; } FourierSin(t->sin(2*t), 2*Pi, 10)
The same technique can be used instead of intfuncinit whenever the list of z values is known in advance.

Note that the above code includes an unrelated optimization: the sin(j w t) are computed as imaginary parts of exp(i j w t), and the latter by successive multiplications.

* Numerical Mellin inversion F(z) = (2iπ)^{-1} ∫_{c-i∞}^{c+i∞} f(s) z^{-s} ds = (2π)^{-1} ∫_{-∞}^{+∞} f(c + it) e^{-log z (c + it)} dt. We take c = 2 in the program below:
f(s) = gamma(s)^3; \\ f(c+it) decreases as exp(-3Pi|t|/2) c = 2; \\ arbitrary A = [-oo,3*Pi/2]; B = [+oo,3*Pi/2]; T = intfuncinit(t=A,B, f(c + I*t)); F(z) = { my (a = -log(z)); intnum(t=A,B, exp(a*I*t), T)*exp(a*c) / (2*Pi); }
The library syntax is
intnum(X = a, b, expr, {tab})
Numerical integration of expr on ]a,b[ with respect to X, using the double-exponential method, and thus O(D log D) evaluations of the integrand in precision D. The integrand may have values belonging to a vector space over the real numbers; in particular, it can be complex-valued or vector-valued. But it is assumed that the function is regular on ]a,b[.

If the endpoints a and b are finite and the function is regular there, the situation is simple:
? intnum(x = 0,1, x^2) %1 = 0.3333333333333333333333333333 ? intnum(x = 0,Pi/2, [cos(x), sin(x)]) %2 = [1.000000000000000000000000000, 1.000000000000000000000000000]
An endpoint equal to ±∞ is coded as +oo or -oo, as expected:
? intnum(x = 1,+oo, 1/x^2) %3 = 1.000000000000000000000000000 In basic usage, it is assumed that the function does not decrease exponentially fast at infinity:
? intnum(x=0,+oo, exp(-x))
 ***   at top-level: intnum(x=0,+oo,exp(-
 ***                 ^--------------------
 *** exp: overflow in expo().
We shall see in a moment how to avoid that last problem, after describing the last optional argument tab.

The tab argument. The routine uses weights w_i, which are mostly independent of the function being integrated, evaluated at many sampling points x_i, and approximates the integral by ∑ w_i f(x_i). If tab is

* a nonnegative integer m, we multiply the number of sampling points by 2^m, hopefully increasing accuracy. Note that the running time increases roughly by a factor 2^m. One may try consecutive values of m until they give the same value up to an accepted error (see the sketch below);

* a set of integration tables containing precomputed x_i and w_i as output by intnuminit.
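For instance, a minimal stability check in m (the integrand and interval are arbitrary):

? for (m = 0, 2, print(intnum(x = 0, Pi, sin(x), m) - 2)) \\ exact value is 2; successive m should agree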
Specifying the behavior at endpoints. This is done as follows.
An endpoint a is either given as such (a scalar, real or complex; oo or -oo for ±∞), or as a two-component vector [a,α], to indicate the behavior of the integrand in a neighborhood of a.

If a is finite, the code [a,α] means the function has a singularity of the form (x-a)^α, up to logarithms. (If α ≥ 0, we only assume the function is regular, which is the default assumption.) If a wrong singularity exponent is used, the result will lose decimals:
? c = -9/10;
? intnum(x=0, 1, x^c) \\ assume x^(-9/10) is regular at 0
%1 = 9.9999839078827082322596783301939063944
? intnum(x=[0,c], 1, x^c) \\ no, it's not
%2 = 10.000000000000000000000000000000000000
? intnum(x=[0,c/2], 1, x^c) \\ using a wrong exponent is bad
%3 = 9.9999999997122749095442279375719919769
If a is ±∞, which is coded as +oo or -oo, the situation is more complicated, and [±oo, α] means:

* α = 0 (or no α at all, i.e. simply ±oo) assumes slow decrease of the integrand, at least like x^{-2};

* α > 0 assumes that the function tends to zero exponentially fast, approximately as exp(-α|x|). This includes oscillating but quickly decreasing functions such as exp(-x)sin(x).
? intnum(x=0, +oo, exp(-2*x))
 ***   at top-level: intnum(x=0,+oo,exp(-
 ***                 ^--------------------
 *** exp: exponent (expo) overflow
? intnum(x=0, [+oo, 2], exp(-2*x)) \\ OK!
%1 = 0.50000000000000000000000000000000000000
? intnum(x=0, [+oo, 3], exp(-2*x)) \\ imprecise exponent, still OK !
%2 = 0.50000000000000000000000000000000000000
? intnum(x=0, [+oo, 10], exp(-2*x)) \\ wrong exponent ==> disaster
%3 = 0.49999999999952372962457451698256707393
As the last example shows, the exponential decrease rate must be indicated to avoid overflow, but the method is robust enough for a rough guess to be acceptable.

* α < -1 assumes that the function tends to 0 slowly, like x^α. Here the algorithm is less robust and it is essential to give a sharp α, unless α ≤ -2, in which case we use the default algorithm as if α were missing (or equal to 0).
? intnum(x=1, +oo, x^(-3/2)) \\ default
%1 = 1.9999999999999999999999999999646391207
? intnum(x=1, [+oo,-3/2], x^(-3/2)) \\ precise decrease rate
%2 = 2.0000000000000000000000000000000000000
? intnum(x=1, [+oo,-11/10], x^(-3/2)) \\ worse than default
%3 = 2.0000000000000000000000000089298011973
The last two codes are reserved for oscillating functions. Let k > 0 be real and g(x) a nonoscillating function tending slowly to 0 (e.g. like a negative power of x); then

* α = k*I assumes that the function behaves like cos(kx)g(x);

* α = -k*I assumes that the function behaves like sin(kx)g(x).
Here it is critical to give the exact value of k. If the
oscillating part is not a pure sine or cosine, one must expand it into a
Fourier series, use the above codings, and sum the resulting contributions.
Otherwise you will get nonsense. Note that cos(kx), and similarly
sin(kx), means that very function, and not a translated version such as
cos(kx+a).

Note that the (slower) function intnumosc is more robust: it only requires the half-period of the oscillation:
? \pb1664 ? exponent(intnum(x=0,+oo, sinc(x)) - Pi/2) time = 308 ms. %1 = 5 \\ junk ? exponent(intnum(x=0,[+oo,-I], sinc(x)) - Pi/2) time = 493 ms. %2 = -1663 \\ perfect when k is given ? exponent(intnum(x=0,[+oo,-0.999*I], sinc(x)) - Pi/2) time = 604 ms. %3 = -14 \\ junk when k is off \\ intnumosc requires the half-period ? exponent(intnumosc(x=0, sinc(x), Pi) - Pi/2) time = 20,570 ms. %4 = -1663 \\ slower but perfect ? exponent(intnumosc(x=0, sinc(x), Pi, 1) - Pi/2) time = 7,976 ms. %4 = -1663 \\ also perfect in fast unsafe mode ? exponent(intnumosc(x=0, sinc(x), Pi+0.001, 1) - Pi/2) time = 23,115 ms. %5 = -1278 \\ loses some accuracy when period is off, but much less
Note. If f(x) = cos(kx)g(x) where g(x) tends to zero
exponentially fast as exp(-αx), it is up to the user to choose
between [±oo, α] and [±oo, k*I].

Power series limits. The limits a and b can be power series of nonnegative valuation, giving a power series expansion for the integral -- provided it exists.
? intnum(t=0,X + O(X^3), exp(t)) %4 = 1.000...*X - 0.5000...*X^2 + O(X^3) ? bestappr( intnum(t=0,X + O(X^17), exp(t)) )- exp(X) + 1 %5 = O(X^17)
The valuation of the limit cannot be negative: for instance,
∫_0^{1/X} (1+t^2)^{-1} dt = π/2 - X + O(X^3) is not computable in this way.
Polynomials and rational functions are also allowed, and are
converted to power series using the current seriesprecision:
? bestappr( intnum(t=1,1+X, 1/t) ) %6 = X - 1/2*X^2 + 1/3*X^3 - 1/4*X^4 + [...] + 1/15*X^15 + O(X^16) The function does not work if the integral is singular with the constant coefficient of the series as limit:
? intnum(t=X^2+O(X^4),1, 1/sqrt(t)) %8 = 2.000... - 6.236608109630992528 E28*X^2 + O(X^4)
however you can indicate the singular behavior as for ordinary endpoints:
? intnum(t=[X^2+O(X^4),-1/2],1, 1/sqrt(t)) %10 = 2.000000000000000000000000000-2.000000000000000000000000000*X^2+O(X^4)
which is translated internally to
? intnum(t=[0,-1/2],1, 1/sqrt(t))-intnum(t=[0,-1/2],X^2+O(X^4), 1/sqrt(t))
For this form the argument tab can be used only as an integer, not as a
table precomputed by intnuminit.

We shall now see many examples to get a feeling for what the various parameters achieve. All examples below assume precision is set to 115 decimal digits. We first type
? \p 115
Apparent singularities. In many cases, apparent singularities
can be ignored. For instance, if f(x) = 1/(exp(x)-1) - exp(-x)/x, then
∫_0^∞ f(x) dx = γ, Euler's constant:
? f(x) = 1/(exp(x)-1) - exp(-x)/x ? intnum(x = 0, [oo,1], f(x)) - Euler %1 = 0.E-115
But close to 0 the function f is computed with an enormous loss of accuracy, and we are in fact lucky that it gets multiplied by weights which are sufficiently close to 0 to hide this:
? f(1e-200) %2 = -3.885337784451458142 E84 A more robust solution is to define the function differently near special points, e.g. by a Taylor expansion
? F = truncate( f(t + O(t^10)) ); \\ expansion around t = 0
? poldegree(F)
%4 = 7
? g(x) = if (x > 1e-18, f(x), subst(F,t,x)); \\ note that 7 · 18 > 105
? intnum(x = 0, [oo,1], g(x)) - Euler
%2 = 0.E-115
It is up to the user to determine constants such as the 10^{-18} and 10 used above.

True singularities. With true singularities the result is worse. For instance
? intnum(x = 0, 1, x^(-1/2)) - 2 %1 = -3.5... E-68 \\ only 68 correct decimals ? intnum(x = [0,-1/2], 1, x^(-1/2)) - 2 %2 = 0.E-114 \\ better Oscillating functions.
? intnum(x = 0, oo, sin(x) / x) - Pi/2 %1 = 16.19.. \\ nonsense ? intnum(x = 0, [oo,1], sin(x)/x) - Pi/2 %2 = -0.006.. \\ bad ? intnum(x = 0, [oo,-I], sin(x)/x) - Pi/2 %3 = 0.E-115 \\ perfect ? intnum(x = 0, [oo,-I], sin(2*x)/x) - Pi/2 \\ oops, wrong k %4 = 0.06... ? intnum(x = 0, [oo,-2*I], sin(2*x)/x) - Pi/2 %5 = 0.E-115 \\ perfect ? intnum(x = 0, [oo,-I], sin(x)^3/x) - Pi/4 %6 = -0.0008... \\ bad ? sin(x)^3 - (3*sin(x)-sin(3*x))/4 %7 = O(x^17)
We may use the above linearization and compute two oscillating integrals with
endpoints [oo, -I] and [oo, -3*I], respectively. If the integrand is the sum of a non-oscillating and an oscillating part, split the integral accordingly:
? intnum(x = 0, [oo,-I], (1-cos(x))/x^2) - Pi/2 %1 = -0.0003... \\ bad ? intnum(x = 0, 1, (1-cos(x))/x^2) \ + intnum(x = 1, oo, 1/x^2) - intnum(x = 1, [oo,I], cos(x)/x^2) - Pi/2 %2 = 0.E-115 \\ perfect ? intnum(x = 0, [oo, 1], sin(x)^3*exp(-x)) - 0.3 %3 = -7.34... E-55 \\ bad ? intnum(x = 0, [oo,-I], sin(x)^3*exp(-x)) - 0.3 %4 = 8.9... E-103 \\ better. Try higher m ? tab = intnuminit(0,[oo,-I], 1); \\ double number of sampling points ? intnum(x = 0, oo, sin(x)^3*exp(-x), tab) - 0.3 %6 = 0.E-115 \\ perfect
Warning. Like sumalt, intnum may assign a value to divergent integrals:

? intnum(x = 0, [oo, -I], x^2*sin(x))
%1 = -2.0000000000...

Note the formula ∫_0^∞ sin(x) x^{-s} dx = cos(πs/2) Γ(1-s), a priori valid only for 0 < Re(s) < 2, but the right hand side provides an analytic continuation which may be evaluated at s = -2...
Multivariate integration.
Using successive univariate integration with respect to different formal
parameters, it is immediate to do naive multivariate integration. But it is
important to use a suitable tab parameter, precomputed with intnuminit, otherwise the expensive initializations are redone for each integral. For example, to compute the double integral on the unit disc x^2+y^2 ≤ 1 of the function x^2+y^2, we can write
? tab = intnuminit(-1,1); ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab),tab) - Pi/2 %2 = -7.1... E-115 \\ OK The first tab is essential, the second optional. Compare:
? tab = intnuminit(-1,1); time = 4 ms. ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2)); time = 3,092 ms. \\ slow ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab), tab); time = 252 ms. \\ faster ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab)); time = 261 ms. \\ the internal integral matters most
The library syntax is
intnumgauss(X = a, b, expr, {tab})
Numerical integration of expr on the compact interval [a,b] with
respect to X using Gauss-Legendre quadrature; tab, if present, is as output by intnumgaussinit:
? test(n, b = 1) = T=intnumgaussinit(n);\ intnumgauss(x=-b,b, 1/(1+x^2),T) - 2*atan(b); ? test(0) \\ default %1 = -9.490148553624725335 E-22 ? test(40) %2 = -6.186629001816965717 E-31 ? test(50) %3 = -1.1754943508222875080 E-38 ? test(50, 2) \\ double interval length %4 = -4.891779568527713636 E-21 ? test(90, 2) \\ n must almost be doubled as well! %5 = -9.403954806578300064 E-38
On the other hand, we recommend splitting the integral and changing variables rather than increasing n too much:
? f(x) = 1/(1+x^2); ? b = 100; ? intnumgauss(x=0,1, f(x)) + intnumgauss(x=1,1/b, f(1/x)*(-1/x^2)) - atan(b) %3 = -1.0579449157400587572 E-37
The library syntax is
intnumgaussinit({n})
Initialize tables for n-point Gauss-Legendre integration of
a smooth function f on a compact interval [a,b]. If n is omitted, make a
default choice n ~ B/4, where B is the current bit precision (realbitprecision). The error is

  ((b-a)^{2n+1} (n!)^4) / ((2n+1)! (2n)!) · f^{(2n)}(ξ) / (2n)!,   for some a < ξ < b.

If r denotes the distance of the nearest pole to the interval [a,b], then this is of the order of ((b-a) / (4r))^{2n}. In particular, the integral must be subdivided if the interval length b - a becomes close to 4r. The default choice n ~ B/4 makes this quantity of order 2^{-B} when b - a = r, as is the case when integrating 1/(1+t) on [0,1] for instance. If the interval length increases, n should be increased as well.

Specifically, the function returns a pair of vectors [x,w], where x contains the nonnegative roots of the n-th Legendre polynomial P_n and w the corresponding Gaussian integration weights Q_n(x_j)/P'_n(x_j) = 2 / ((1-x_j^2) P'_n(x_j)^2), such that

  ∫_{-1}^{1} f(t) dt ~ ∑_j w_j f(x_j).
? T = intnumgaussinit(); ? intnumgauss(t=-1,1,exp(t), T) - exp(1)+exp(-1) %1 = -5.877471754111437540 E-39 ? intnumgauss(t=-10,10,exp(t), T) - exp(10)+exp(-10) %2 = -8.358367809712546836 E-35 ? intnumgauss(t=-1,1,1/(1+t^2), T) - Pi/2 \\ b - a = 2r %3 = -9.490148553624725335 E-22 \\ ... loses half the accuracy ? T = intnumgaussinit(50); ? intnumgauss(t=-1,1,1/(1+t^2), T) - Pi/2 %5 = -1.1754943508222875080 E-38 ? intnumgauss(t=-5,5,1/(1+t^2), T) - 2*atan(5) %6 = -1.2[...]E-8
On the other hand, we recommend splitting the integral and changing variables
rather than increasing n too much, see intnumgauss.
The library syntax is
intnuminit(a, b, {m = 0})
Initialize tables for integration from
a to b, where a and b are coded as in intnum.

If m is present, it must be nonnegative and we multiply the default number of sampling points by 2^m (increasing the running time by a similar factor).
The result is technical and liable to change in the future, but we document
it here for completeness. Let x = φ(t), t ∈ ]- oo , oo [ be an
internally chosen change of variable, achieving double exponential decrease of
the integrand at infinity. The integral is then approximated by the sum h ∑_n φ'(nh) f(φ(nh)), and the integration tables have the shape

  [h, x0, w0, xp, wp, xm, wm] = intnuminit(a,b);

* h is the integration step;
* x0 = φ(0) and w0 = φ'(0);
* xp contains the φ(nh), 0 < n < N;
* xm contains the φ(nh), 0 < -n < N, or is empty;
* wp contains the φ'(nh), 0 < n < N;
* wm contains the φ'(nh), 0 < -n < N, or is empty.
The arrays xm and wm are left empty when φ is an odd
function. In complicated situations,
If the functions to be integrated later are of the form F = f(t) k(t,z)
for some kernel k (e.g. Fourier, Laplace, Mellin,...), it is
useful to also precompute the values of f(φ(nh)), which is accomplished
by intfuncinit.
The library syntax is
intnumosc(x = a, expr, H, {flag = 0}, {tab})
Numerical integration from a to oo of oscillating
quasi-periodic function expr of half-period H, meaning that we
at least expect the distance between the function's consecutive zeros to be
close to H: the sine or cosine functions (H = π) are paradigmatic
examples, but the Bessel Jν or Yν functions (H = π/2) can
also be handled. The integral from a to oo is computed
by summing the integral between two consecutive multiples of H;
flag determines the summation algorithm used: either 0 (Sidi extrapolation,
safe mode), 1 (Sidi extrapolation, unsafe mode), 2 (sumalt), 3 (sumnumlagrange) or 4 (sumpos).

The default is flag = 0; Sidi summation should be the most robust algorithm; you can try it in unsafe mode when the integrals between two consecutive multiples of H form an alternating series: this should be about twice as fast as the default and lose no accuracy. Sumpos should be by far the slowest method, but also very robust and may be able to handle integrals where Sidi fails. Sumalt should be fast but often wrong, especially when the integrals between two consecutive multiples of H do not form an alternating series, and Lagrange should be as fast as Sumalt but more often wrong.

When one of the Sidi modes runs into difficulties, it will return the result to the accuracy believed to be correct (the other modes do not perform extrapolation and do not have this property):
? f(x)=besselj(0,x)^4*log(x+1); ? \pb384 ? intnumosc(x = 0, f(x), Pi) %1 = 0.4549032054850867417 \\ fewer digits than expected ! ? bitprecision(%) %2 = 64 ? \g1 \\ increase debug level to see diagnostics ? intnumosc(x = 0, f(x), Pi) sumsidi: reached accuracy of 23 bits. %2 = 0.4549032054850867417 The algorithm could extrapolate the series to 23 bits of accuracy, then diverged. So only the absolute error is likely to be around 2-23 instead of the possible 2-64 (or the requested 2-384). We'll come back to this example at the end.
In case of difficulties, you may try to replace the half-(quasi)-period H
by a multiple, such as the quasi-period 2H: since we do not expect
alternating behaviour, in which case the unsafe Sidi mode and sumalt should be avoided (see the examples below).
First an alternating example:
? \pb384 \\ Sidi, safe mode ? exponent(intnumosc(x=0,sinc(x),Pi) - Pi/2) time = 183 ms. %1 = -383 ? exponent(intnumosc(x=0,sinc(x),2*Pi) - Pi/2) time = 224 ms. %2 = -383 \\ also works with 2H, a little slower \\ Sidi, unsafe mode ? exponent(intnumosc(x=0,sinc(x),Pi,1) - Pi/2) time = 79 ms. %3 = -383 \\ alternating: unsafe mode is fine and almost twice as fast ? exponent(intnumosc(x=0,sinc(x),2*Pi,1) - Pi/2) time = 86 ms. %4 = -285 \\ but this time 2H loses accuracy \\ Sumalt ? exponent(intnumosc(x=0,sinc(x),Pi,2) - Pi/2) time = 115 ms. \\ sumalt is just as accurate and fast %5 = -383 ? exponent(intnumosc(x=0,sinc(x),2*Pi,2) - Pi/2) time = 115 ms. %6 = -10 \\ ...but breaks completely with 2H \\ Lagrange ? exponent(intnumosc(x=0,sinc(x),Pi,3) - Pi/2) time = 100 ms. \\ junk %7 = 224 ? exponent(intnumosc(x=0,sinc(x),2*Pi,3) - Pi/2) time = 100 ms. %8 = -238 \\ ...a little better with 2H \\ Sumpos ? exponent(intnumosc(x=0,sinc(x),Pi,4) - Pi/2) time = 17,961 ms. %9 = 7 \\ junk; slow ? exponent(intnumosc(x=0,sinc(x),2*Pi,4) - Pi/2) time = 19,105 ms. %10 = -4 \\ still junk
Now a non-alternating one:
? exponent(intnumosc(x=0,sinc(x)^2,Pi) - Pi/2) time = 277 ms. %1 = -383 \\ safe mode is still perfect ? exponent(intnumosc(x=0,sinc(x)^2,Pi,1) - Pi/2) time = 97 ms. %2 = -284 \\ non-alternating; this time, Sidi's unsafe mode loses accuracy ? exponent(intnumosc(x=0,sinc(x)^2,Pi,2) - Pi/2) time = 113 ms. %3 = -10 \\ this time sumalt fails completely ? exponent(intnumosc(x=0,sinc(x)^2,Pi,3) - Pi/2) time = 103 ms. %4 = -237 \\ Lagrange loses accuracy (same with 2H = 2*Pi) ? exponent(intnumosc(x=0,sinc(x)^2,Pi,4) - Pi/2) time = 17,681 ms. %4 = -381 \\ and Sumpos is good but slow (perfect with 2H)
Examples of a different flavour:
? exponent(intnumosc(x = 0, besselj(0,x)*sin(3*x), Pi) - 1/sqrt(8)) time = 4,615 ms. %1 = -385 \\ more expensive but correct ? exponent(intnumosc(x = 0, besselj(0,x)*sin(3*x), Pi, 1) - 1/sqrt(8)) time = 1,424 ms. %2 = -279 \\ unsafe mode loses some accuracy (other modes return junk) ? S = log(2*Pi)- Euler - 1; ? exponent(intnumosc(t=1, (frac(t)/t)^2, 1/2) - S) time = 21 ms. %4 = -6 \\ junk ? exponent(intnumosc(t=1, (frac(t)/t)^2, 1) - S) time = 66ms. %5 = -384 \\ perfect with 2H ? exponent(intnumosc(t=1, (frac(t)/t)^2, 1, 1) - S) time = 20 ms. %6 = -286 \\ unsafe mode loses accuracy ? exponent(intnumosc(t=1, (frac(t)/t)^2, 1, 3) - S) time = 30 ms. %7 = -236 \\ and so does Lagrange (Sumalt fails) ? exponent(intnumosc(t=1, (frac(t)/t)^2, 1, 4) - S) time = 2,315 ms. %8 = -382 \\ Sumpos is perfect but slow Again, Sidi extrapolation behaves well, especially in safe mode, but 2H is required here.
If the integrand has singularities close to the interval of integration,
it is advisable to split the integral in two: use the more robust intnum close to the singularities, then intnumosc for the rest:
? \p38 ? f(x) = besselj(0,x)^3 * log(x); \\ mild singularity at 0 ? g() = intnumosc(x = 0, f(x), Pi); \\ direct ? h() = intnum(x = 0, Pi, f(x)) + intnumosc(x = Pi, f(x), Pi); \\ split at Pi ? G = g(); time = 293 ms. ? H = h(); time = 320 ms. \\ about as fast ? exponent(G-H) %6 = -12 \\ at least one of them is junk ? \p77 \\ increase accuracy ? G2=g(); H2=h(); ? exponent(G - G2) %8 = -13 \\ g() is not consistent ? exponent(H - H2) %9 = -128 \\ not a proof, but h() looks good
Finally, here is an example where all methods fail, even when splitting the integral, except Sumpos:
? \p38 ? f(x)=besselj(0,x)^4*log(x+1); ? F = intnumosc(x=0,f(x), Pi, 4) time = 2,437 ms. %2 = 0.45489838778971732178155161172638343214 ? \p76 \\ double accuracy to check ? exponent(F - intnumosc(x = 0,f(x), Pi, 4)) time = 18,817 ms. %3 = -122 \\ F was almost perfect
The library syntax is
intnumromb(X = a, b, expr, {flag = 0})
Numerical integration of expr (smooth in ]a,b[), with respect to
X. Suitable for low accuracy; if expr is very regular (e.g. analytic
in a large region) and high accuracy is desired, try intnum first.

Set flag = 0 (or omit it altogether) when a and b are not too large, the function is smooth, and can be evaluated exactly everywhere on the interval [a,b].

If flag = 1, uses a general driver routine for doing numerical integration, making no particular assumption (slow).

flag = 2 is tailored for being used when a or b are infinite, using the change of variable t = 1/X. One must have ab > 0, and in fact if for example b = +∞, then it is preferable to have a as large as possible, at least a ≥ 1.

If flag = 3, the function is allowed to be undefined at a (but right continuous) or b (left continuous), for example the function sin(x)/x between x = 0 and 1.
The user should not require too much accuracy: realprecision of about 30 decimal digits (realbitprecision about 100 bits) is a reasonable limit.
Note that infinity can be represented with essentially no loss of
accuracy by an appropriate huge number. However beware of real underflow
when dealing with rapidly decreasing functions. For example, in order to
compute ∫_0^∞ e^{-x^2} dx to 38 decimal digits, one can set infinity
equal to 10 for example, and certainly not to a much larger value like 1e1000, for which e^{-x^2} underflows.
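A few sketches matching the flag descriptions above (the integrands are arbitrary):

? intnumromb(x = 0, 1, 1/(1 + x^2)) - Pi/4 \\ smooth on a compact interval: default flag
? intnumromb(x = 0, 1, sin(x)/x, 3) \\ integrand undefined at x = 0: flag 3
? intnumromb(x = 1, 1e5, 1/x^2, 2) \\ 'infinity' coded by a huge number: flag 2; exact value is 1 - 1e-5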
The library syntax is
laurentseries(f, {M = seriesprecision}, {x = 'x})
Expand f as a Laurent series around x = 0 to order M. This function computes f(x + O(x^n)) until n is large enough: it must be possible to evaluate f on a power series with 0 constant term.
? laurentseries(t->sin(t)/(1-cos(t)), 5)
%1 = 2*x^-1 - 1/6*x - 1/360*x^3 - 1/15120*x^5 + O(x^6)
? laurentseries(log)
 ***   at top-level: laurentseries(log)
 ***                 ^------------------
 ***   in function laurentseries: log
 ***                              ^---
 *** log: domain error in log: series valuation != 0
Note that individual Laurent coefficients of order ≤ M
can be retrieved from s = laurentseries(f, M) via polcoef(s, i).
With respect to successive calls to derivnum, laurentseries is both faster and more precise:
? laurentseries(t->log(3+t),1) %1 = 1.0986122886681096913952452369225257047 + 1/3*x - 1/18*x^2 + O(x^3) ? derivnum(t=0,log(3+t),1) %2 = 0.33333333333333333333333333333333333333 ? derivnum(t=0,log(3+t),2) %3 = -0.11111111111111111111111111111111111111 ? f = x->sin(exp(x)); ? polcoef(laurentseries(x->f(x+2), 1), 1) %5 = 3.3129294231043339804683687620360224365 ? exp(2) * cos(exp(2)); %6 = 3.3129294231043339804683687620360224365 ? derivnum(x = 2, f(x)) %7 = 3.3129294231043339804683687620360224364 \\ 1 ulp off ? default(realprecision,115); ? for(i=1,10^4, laurentseries(x->f(x+2),1)) time = 279 ms. ? for(i=1,10^4, derivnum(x=2,f(x))) \\ ... and slower time = 1,134 ms.
The library syntax is
limitnum(expr, {alpha = 1})
Lagrange-Zagier numerical extrapolation of expr, corresponding to
a sequence u_n, either given by a closure n->u(n), or by a closure N->[u_1,...,u_N] (see below).

The routine assumes that u_n has an asymptotic expansion in n^{-α}:

  u_n = ℓ + ∑_{i ≥ 1} a_i n^{-iα}

for some a_i. It is purely numerical and heuristic, thus may or may not work on your examples. The expression will be evaluated for n = 1, 2, ..., N for an N = O(B) at a bit accuracy bounded by 1.612 B.
? limitnum(n -> n*sin(1/n)) %1 = 1.0000000000000000000000000000000000000 ? limitnum(n -> (1+1/n)^n) - exp(1) %2 = 0.E-37 ? limitnum(n -> 2^(4*n+1)*(n!)^4 / (2*n)! /(2*n+1)! ) - Pi %3 = 0.E -37
It is not mandatory to specify α when the u_n have an asymptotic expansion in n^{-1}. However, if the series in n^{-1} is lacunary, specifying α allows faster computation:
? \p1000 ? limitnum(n->(1+1/n^2)^(n^2)) - exp(1) time = 1min, 44,681 ms. %4 = 0.E-1001 ? limitnum(n->(1+1/n^2)^(n^2), 2) - exp(1) time = 27,271 ms. %5 = 0.E-1001 \\ still perfect, 4 times faster
When u_n has an asymptotic expansion in n^{-α} with α not an integer, leaving α unspecified yields an inexact limit. Supplying a suitable optional argument improves precision; the program runs faster when the optional argument gives a nonlacunary series.
? \p50 ? limitnum(n->(1+1/n^(7/2))^(n^(7/2))) - exp(1) time = 982 ms. %6 = 4.13[...] E-12 ? limitnum(n->(1+1/n^(7/2))^(n^(7/2)), 1/2) - exp(1) time = 16,745 ms. %7 = 0.E-57 ? limitnum(n->(1+1/n^(7/2))^(n^(7/2)), 7/2) - exp(1) time = 105 ms. %8 = 0.E-57
Alternatively, u_n may be given by a closure N->[u_1,...,u_N]; this is often much more efficient, e.g. when the u_n satisfy a recursion:
? \p2000 ? limitnum(n -> 2^(4*n+1)*(n!)^4 / (2*n)! /(2*n+1)! ) - Pi time = 1,755 ms. %9 = 0.E-2003 ? vu(N) = \\ exploit hypergeometric property { my(v = vector(N)); v[1] = 8./3;\ for (n=2, N, my(q = 4*n^2); v[n] = v[n-1]*q/(q-1));\ return(v); } ? limitnum(vu) - Pi \\ much faster time = 106 ms. %11 = 0.E-2003
All sums and recursions can be handled in the same way.
In the above it is essential that un be defined as a closure because
it must be evaluated at a higher precision than the one expected for the
limit. Make sure that the closure does not depend on a global variable which
would be computed at a priori fixed accuracy. For instance, precomputing the values once and for all at the current accuracy, then merely reading them from the closure, would yield wrong results.
Alternatively, and more clumsily, u_n may be given by a vector of values:
it must be long and precise enough for the extrapolation
to make sense. Let B be the current bit accuracy: the vector must contain at least about B/3 values (an error message tells you how many), computed, as above, at accuracy 1.612 B:
? limitnum(vector(10,n,(1+1/n)^n))
 *** limitnum: nonexistent component in limitnum: index < 43
\\ at this accuracy, we must have at least 43 values
? limitnum(vector(43,n,(1+1/n)^n)) - exp(1)
%12 = 0.E-37
? v = vector(43);
? s = 0; for(i=1,#v, s += 1/i; v[i]= s - log(i));
? limitnum(v) - Euler
%15 = -1.57[...] E-16
? v = vector(43); \\ ~ 128 bit * 1.612
? localbitprec(207);\
  s = 0; for(i=1,#v, s += 1/i; v[i]= s - log(i));
? limitnum(v) - Euler
%18 = 0.E-38
Because of the above problems, the preferred format is thus a closure, given either a single value or the vector of values [u_1,...,u_N]. The function distinguishes between the two formats by evaluating the closure at N != 1 and 1 and checking whether it yields vectors of respective lengths N and 1 or not.

Warning. The expression is evaluated for n = 1, 2,..., N for an N = O(B) if the current bit accuracy is B. If it is not defined for one of these values, translate or rescale accordingly:
? limitnum(n->log(1-1/n)) \\ can't evaluate at n = 1 !
 ***   at top-level: limitnum(n->log(1-1/n))
 ***                 ^-----------------------
 ***   in function limitnum: log(1-1/n)
 ***                         ^----------
 *** log: domain error in log: argument = 0
? limitnum(n->-log(1-1/(2*n)))
%19 = -6.11[...] E-58
We conclude with a complicated example. Since the function is heuristic, it is advisable to check whether it produces the same limit for u_n, u_{2n}, ..., u_{kn} for a suitable small multiplier k. The following function implements the recursion for the Motzkin numbers M_n, which count the number of ways to draw nonintersecting chords between n points on a circle:

  M_n = M_{n-1} + ∑_{i < n-1} M_i M_{n-2-i} = ((2n+1)M_{n-1} + (3n-3)M_{n-2}) / (n+2).

It is known that M_n^2 ~ 9^{n+1}/(12π n^3).
\\ [Mk, Mk*2, ..., Mk*N] / (3^n / n^(3/2)) vM(N, k = 1) = { my(q = k*N, V); if (q == 1, return ([1/3])); V = vector(q); V[1] = V[2] = 1; for(n = 2, q - 1, V[n+1] = ((2*n + 1)*V[n] + 3*(n - 1)*V[n-1]) / (n + 2)); f = (n -> 3^n / n^(3/2)); return (vector(N, n, V[n*k] / f(n*k))); } ? limitnum(vM) - 3/sqrt(12*Pi) \\ complete junk %1 = 35540390.753542730306762369615276452646 ? limitnum(N->vM(N,5)) - 3/sqrt(12*Pi) \\ M5n: better %2 = 4.130710262178469860 E-25 ? limitnum(N->vM(N,10)) - 3/sqrt(12*Pi) \\ M10n: perfect %3 = 0.E-38 ? \p2000 ? limitnum(N->vM(N,10)) - 3/sqrt(12*Pi) \\ also at high accuracy time = 409 ms. %4 = 1.1048895470044788191 E-2004 In difficult cases such as the above a multiplier of 5 to 10 is usually sufficient. The above example is typical: a good multiplier usually remains sufficient when the requested precision increases!
The library syntax is
prod(X = a, b, expr, {x = 1})
Product of expression
expr, initialized at x, the formal parameter X going from a to
b. As for sum, the main purpose of the initialization parameter x is to force the type of the operations being performed. As an extreme example, compare
? prod(i=1, 100, 1 - X^i); \\ this has degree 5050 !! time = 128 ms. ? prod(i=1, 100, 1 - X^i, 1 + O(X^101)) time = 8 ms. %2 = 1 - X - X^2 + X^5 + X^7 - X^12 - X^15 + X^22 + X^26 - X^35 - X^40 + \ X^51 + X^57 - X^70 - X^77 + X^92 + X^100 + O(X^101)
Of course, in this specific case, it is faster to use eta:
? prod(i=1, 1000, 1 - X^i, 1 + O(X^1001)); time = 589 ms. ? \ps1000 seriesprecision = 1000 significant terms ? eta(X) - % time = 8ms. %4 = O(X^1001)
The library syntax is
prodeuler(p = a, b, expr)
Product of expression expr, initialized at 1.0 (a real number to the current precision), the formal parameter p ranging over the prime numbers between a and b:
? prodeuler(p = 2, 10^4, 1 - p^-2) %1 = 0.60793306911405513018380499671124428015 ? P = 1; forprime(p = 2, 10^4, P *= (1 - p^-2)) ? exponent(numerator(P)) %3 = 22953
The function returns a floating point number because,
as the second expression shows, such products are usually intractably
large rational numbers when computed symbolically.
If the expression is a rational function, prodeulerrat computes the product over all primes:
? prodeulerrat(1 - p^-2) %4 = 0.60792710185402662866327677925836583343 ? 6/Pi^2 %3 = 0.60792710185402662866327677925836583343
The library syntax is
prodeulerrat(F, {s = 1}, {a = 2})
∏_{p ≥ a} F(p^s), where the product is taken over prime numbers and F is a rational function.
? prodeulerrat(1+1/q^3,1) %1 = 1.1815649490102569125693997341604542605 ? zeta(3)/zeta(6) %2 = 1.1815649490102569125693997341604542606
The library syntax is
prodinf(X = a, expr, {flag = 0})
Infinite product of expression expr, the formal parameter X starting at a. The evaluation stops when the relative error of the expression minus 1 is less than the default precision. In particular, divergent products result in infinite loops. The expression must always evaluate to an element of ℂ.

If flag = 1, do the product of the (1+expr) instead.
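As a sketch, one can check the classical identity ∏_{n ≥ 0} (1 + x^{2^n}) = 1/(1-x) at x = 1/2, using flag = 1:

? prodinf(n = 0, 0.5^(2^n), 1) - 2 \\ should be ~ 0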
The library syntax is
prodnumrat(F, a)
∏_{n ≥ a} F(n), where F - 1 is a rational function of degree less than or equal to -2.
? prodnumrat(1+1/x^2,1) %1 = 3.6760779103749777206956974920282606665
The library syntax is
solve(X = a, b, expr)
Find a real root of expression
expr between a and b.
If both a and b are finite, the condition is that
expr(X = a) * expr(X = b) ≤ 0. (You will get an error message otherwise.)
If only one of a and b is finite, say a, then b = ±∞. The routine will test all b = a ± 2^r, with r ≥ log_2(|a|), until it finds a bracket for the root which satisfies the abovementioned condition. If both a and b are infinite, the routine will test 0 and all ± 2^r, r ≥ 0, until it finds a bracket for the root which satisfies the condition.
This routine uses Brent's method and can fail miserably if expr is
not defined in the whole of [a,b] (try
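Two simple sketches (the brackets are arbitrary):

? solve(x = 2, 4, sin(x)) - Pi \\ sin changes sign on [2,4]
? solve(x = 1, 10, log(x) - 1) - exp(1) \\ root of log x = 1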
The library syntax is
solvestep(X = a, b, step, expr, {flag = 0})
Find zeros of a continuous function in the real interval [a,b] by naive interval splitting. This function is heuristic and may or may not find the intended zeros. Binary digits of flag mean:

* 1: return as soon as one zero is found, otherwise return all zeros found;

* 2: refine the splitting until at least one zero is found (may loop indefinitely if there are no zeros);

* 4: do a multiplicative search (we must have a > 0 and step > 1), otherwise an additive search; step is the multiplicative or additive step;

* 8: refine the splitting until at least one zero is very close to an integer.
? solvestep(X=0,10,1,sin(X^2),1) %1 = 1.7724538509055160272981674833411451828 ? solvestep(X=1,12,2,besselj(4,X),4) %2 = [7.588342434..., 11.064709488...]
The library syntax is
sum(X = a, b, expr, {x = 0})
Sum of expression expr,
initialized at x, the formal parameter going from a to b. As for prod, the main purpose of x is to force the type of the operations being performed. As an extreme example, compare
? sum(i=1, 10^4, 1/i); \\ rational number: denominator has 4345 digits. time = 236 ms. ? sum(i=1, 5000, 1/i, 0.) time = 8 ms. %2 = 9.787606036044382264178477904
sumalt(X = a, expr, {flag = 0})
Numerical summation of the series expr, which should be an alternating series ∑ (-1)^k a_k, the formal variable X starting at a. Use an algorithm of Cohen, Villegas and Zagier (Experiment. Math. 9 (2000), no. 1, 3–12).
If flag = 0, assuming that the a_k are the moments of a positive
measure on [0,1], the relative error is O((3+√8)^{-n}) after using
a_k for k ≤ n. If realprecision is p, we thus set n ~ 1.31 p.
If flag = 1, use a variant with more complicated polynomials, see polzagier.
The conditions for rigorous use are hard to check but the routine is best used heuristically: even divergent alternating series can sometimes be summed by this method, as well as series which are not exactly alternating (see for example Section se:user_defined). It should be used to try and guess the value of an infinite sum. (However, see the example at the end of Section se:userfundef.)
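As a sketch of such heuristic use, the Abel sum of the divergent series ∑ (-1)^n is 1/2, and sumalt recovers it:

? sumalt(n = 0, (-1)^n) - 1/2 \\ should be ~ 0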
If the series already converges geometrically, suminf is often a better choice:
? \p38
? sumalt(i = 1, -(-1)^i / i) - log(2)
time = 0 ms.
%1 = 0.E-38
? suminf(i = 1, -(-1)^i / i) \\ Had to hit <C-C>
The library syntax is
sumdiv(n, X, expr)
Sum of expression expr over the positive divisors of n. This function is a trivial wrapper essentially equivalent to
D = divisors(n); sum (i = 1, #D, my(X = D[i]); eval(expr))
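For instance (n = 12 is arbitrary):

? sumdiv(12, d, d) \\ sigma(12) = 1 + 2 + 3 + 4 + 6 + 12
%1 = 28
? sumdiv(12, d, 1) \\ number of divisors
%2 = 6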
If expr is a multiplicative function, use sumdivmult.
sumdivmult(n, d, expr)
Sum of multiplicative expression expr over the positive
divisors d of n. Assume that expr evaluates to f(d)
where f is multiplicative: f(1) = 1 and f(ab) = f(a)f(b) for coprime
a and b.
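For instance, with the multiplicative function f(d) = d^2 (n = 12 is arbitrary):

? sumdivmult(12, d, d^2)
%1 = 210
? sigma(12, 2) \\ same value
%2 = 210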
The library syntax is
sumeulerrat(F, {s = 1}, {a = 2})
∑_{p ≥ a} F(p^s), where the sum is taken over prime numbers and F is a rational function.
? sumeulerrat(1/p^2) %1 = 0.45224742004106549850654336483224793417 ? sumeulerrat(1/p, 2) %2 = 0.45224742004106549850654336483224793417
The library syntax is
suminf(X = a, expr)
Naive summation of expression expr, the formal parameter X going from a to infinity. The evaluation stops when the relative error of the expression is less than the default bit precision for 3 consecutive evaluations. The expressions must evaluate to a complex number.
If the expression tends slowly to 0, like n^{-a} for some a > 1,
make sure b = realbitprecision is low enough: the routine needs about 2^{b/a} evaluations of the expression, and the accuracy of the result may remain much lower than b bits:
? \pb25 realbitprecision = 25 significant bits (7 decimal digits displayed) ? exponent(suminf(i = 1, (-1)^i / i) + log(2)) time = 2min, 2,602 ms. %1 = -29 ? \pb45 realbitprecision = 45 significant bits (13 decimal digits displayed) ? exponent(suminf(i = 1, 1 / i^2) - zeta(2)) time = 2,186 ms. %2 = -23 \\ alternatives are much faster ? \pb 10000 realbitprecision = 10000 significant bits (3010 decimal digits displayed) ? exponent(sumalt(i = 1, (-1)^i / i) + log(2)) time = 25 ms. %3 = -10043 ? \pb 4000 realbitprecision = 4000 significant bits (1204 decimal digits displayed) ? exponent(sumpos(i = 1, 1 / i^2) - zeta(2)) time = 22,593 ms. %4 = -4030 ? exponent(sumnum(i = 1, 1 / i^2) - zeta(2)) time = 7,032 ms. %5 = -4031 \\ but suminf is perfect for geometrically converging series ? exponent(suminf(i = 1, 2^-i) - 1) time = 25 ms. %6 = -4003
The library syntax is
sumnum(n = a, f, {tab})
Numerical summation of f(n) at high accuracy using Euler-MacLaurin,
the variable n taking values from a to +∞, where f is assumed to
have positive values and to be a C^∞ function; tab, if given, is as output by sumnuminit:
? \p500 ? z3 = zeta(3); ? sumpos(n = 1, n^-3) - z3 time = 2,332 ms. %2 = 2.438468843 E-501 ? sumnum(n = 1, n^-3) - z3 \\ here slower than sumpos time = 2,752 ms. %3 = 0.E-500
Complexity.
The function f will be evaluated at O(D log D) real arguments,
where D is the bit accuracy (realbitprecision). If f satisfies the stronger hypotheses required for Monien summation, i.e. if f(1/z) is holomorphic in a complex neighbourhood of [0,1], then sumnummonien is faster, using only O(D / log D) evaluations:
? sumnummonien(n = 1, 1/n^3) - z3 time = 1,985 ms. %3 = 0.E-500
The initialization functions sumnuminit and sumnummonieninit allow the precomputations to be reused:
? tab = sumnuminit(); time = 2,709 ms. ? sumnum(n = 1, 1/n^3, tab) - z3 \\ now much faster than sumpos time = 40 ms. %5 = 0.E-500 ? tabmon = sumnummonieninit(); \\ Monien summation allows precomputations too time = 1,781 ms. ? sumnummonien(n = 1, 1/n^3, tabmon) - z3 time = 2 ms. %7 = 0.E-500 The speedup due to precomputations becomes less impressive when the function f is expensive to evaluate, though:
? sumnum(n = 1, lngamma(1+1/n)/n, tab); time = 14,180 ms. ? sumnummonien(n = 1, lngamma(1+1/n)/n, tabmon); \\ fewer evaluations time = 717 ms.
Behaviour at infinity.
By default, sumnum assumes that f decreases at least as n^{-2} at infinity; if it decreases like n^α for some -2 < α < -1, this must be indicated via
tab = sumnuminit([+oo, alpha]); /* alpha < 0 slow decrease */ otherwise loss of accuracy is expected. If the function decreases quickly, like exp(-α n) for some α > 0, then it must be indicated via
tab = sumnuminit([+oo, alpha]); /* alpha > 0 exponential decrease */ otherwise exponent overflow will occur.
? sumnum(n=1,2^-n)
 ***   at top-level: sumnum(n=1,2^-n)
 ***                 ^----
 *** _^_: overflow in expo().
? tab = sumnuminit([+oo,log(2)]); sumnum(n=1,2^-n, tab)
%1 = 1.000[...]
As a shortcut, one can also input
sumnum(n = [a, asymp], f) instead of
tab = sumnuminit(asymp); sumnum(n = a, f, tab) Further examples.
? \p200 ? sumnum(n = 1, n^(-2)) - zeta(2) \\ accurate, fast time = 200 ms. %1 = -2.376364457868949779 E-212 ? sumpos(n = 1, n^(-2)) - zeta(2) \\ even faster time = 96 ms. %2 = 0.E-211 ? sumpos(n=1,n^(-4/3)) - zeta(4/3) \\ now much slower time = 13,045 ms. %3 = -9.980730723049589073 E-210 ? sumnum(n=1,n^(-4/3)) - zeta(4/3) \\ fast but inaccurate time = 365 ms. %4 = -9.85[...]E-85 ? sumnum(n=[1,-4/3],n^(-4/3)) - zeta(4/3) \\ with decrease rate, now accurate time = 416 ms. %5 = -4.134874156691972616 E-210 ? tab = sumnuminit([+oo,-4/3]); time = 196 ms. ? sumnum(n=1, n^(-4/3), tab) - zeta(4/3) \\ faster with precomputations time = 216 ms. %5 = -4.134874156691972616 E-210 ? sumnum(n=1,-log(n)*n^(-4/3), tab) - zeta'(4/3) time = 321 ms. %7 = 7.224147951921607329 E-210
Note that in the case of slow decrease (α < 0), the exact
decrease rate must be indicated, while in the case of exponential decrease,
a rough value will do. In fact, for exponentially decreasing functions, sumpos is often a better choice:
? sumnum(n=[1, 1], 2^-n) \\ pretend we decrease as exp(-n) time = 240 ms. %8 = 1.000[...] \\ perfect ? sumpos(n=1, 2^-n) %9 = 1.000[...] \\ perfect and instantaneous Beware cancellation. The function f(n) is evaluated for huge values of n, so beware of cancellation in the evaluation:
? f(n) = 2 - 1/n - 2*n*log(1+1/n); \\ result is O(1/n^2) ? z = -2 + log(2*Pi) - Euler; ? sumnummonien(n=1, f(n)) - z time = 149 ms. %12 = 0.E-212 \\ perfect ? sumnum(n=1, f(n)) - z time = 116 ms. %13 = -948.216[...] \\ junk
As a workaround, evaluate the function via its asymptotic expansion for huge arguments:
? F = truncate( f(1/x + O(x^30)) ); ? sumnum(n=1, if(n > 1e7, subst(F,x,1/n), f(n))) - z %15 = 1.1 E-212 \\ now perfect
The library syntax is
sumnumap(n = a, f, {tab})
Numerical summation of f(n) at high accuracy using Abel-Plana,
the variable n taking values from a to + oo , where f is
holomorphic in the right half-plane Re(z) > a; tab, if given, is as output by sumnumapinit:
? \p500 ? z3 = zeta(3); ? sumpos(n = 1, n^-3) - z3 time = 2,332 ms. %2 = 2.438468843 E-501 ? sumnumap(n = 1, n^-3) - z3 \\ here slower than sumpos time = 2,565 ms. %3 = 0.E-500
Complexity.
The function f will be evaluated at O(D log D) real arguments
and O(D) complex arguments,
where D is the bit accuracy (realbitprecision).
If f satisfies the stronger hypotheses required for Monien summation,
i.e. if f(1/z) is holomorphic in a complex neighbourhood of [0,1],
then sumnummonien is faster:
? sumnummonien(n = 1, 1/n^3) - z3 time = 1,128 ms. %3 = 0.E-500
The initialization functions sumnumapinit and sumnummonieninit allow the precomputations to be reused:
? tab = sumnumapinit(); time = 2,567 ms. ? sumnumap(n = 1, 1/n^3, tab) - z3 \\ now much faster than sumpos time = 39 ms. %5 = 0.E-500 ? tabmon = sumnummonieninit(); \\ Monien summation allows precomputations too time = 1,125 ms. ? sumnummonien(n = 1, 1/n^3, tabmon) - z3 time = 2 ms. %7 = 0.E-500 The speedup due to precomputations becomes less impressive when the function f is expensive to evaluate, though:
? sumnumap(n = 1, lngamma(1+1/n)/n, tab); time = 10,762 ms. ? sumnummonien(n = 1, lngamma(1+1/n)/n, tabmon); \\ fewer evaluations time = 205 ms.
Behaviour at infinity.
By default, sumnumap assumes that f decreases at least as n^{-2} at infinity; if it decreases like n^α for some -2 < α < -1, this must be indicated via
tab = sumnumapinit([+oo, alpha]); /* alpha < 0 slow decrease */ otherwise loss of accuracy is expected. If the function decreases quickly, like exp(-α n) for some α > 0, then it must be indicated via
tab = sumnumapinit([+oo, alpha]); /* alpha > 0 exponential decrease */ otherwise exponent overflow will occur.
? sumnumap(n=1,2^-n)
 ***   at top-level: sumnumap(n=1,2^-n)
 ***                 ^----
 *** _^_: overflow in expo().
? tab = sumnumapinit([+oo,log(2)]); sumnumap(n=1,2^-n, tab)
%1 = 1.000[...]
As a shortcut, one can also input
sumnumap(n = [a, asymp], f) instead of
tab = sumnumapinit(asymp); sumnumap(n = a, f, tab) Further examples.
? \p200 ? sumnumap(n = 1, n^(-2)) - zeta(2) \\ accurate, fast time = 169 ms. %1 = -4.752728915737899559 E-212 ? sumpos(n = 1, n^(-2)) - zeta(2) \\ even faster time = 79 ms. %2 = 0.E-211 ? sumpos(n=1,n^(-4/3)) - zeta(4/3) \\ now much slower time = 10,518 ms. %3 = -9.980730723049589073 E-210 ? sumnumap(n=1,n^(-4/3)) - zeta(4/3) \\ fast but inaccurate time = 309 ms. %4 = -2.57[...]E-78 ? sumnumap(n=[1,-4/3],n^(-4/3)) - zeta(4/3) \\ decrease rate: now accurate time = 329 ms. %6 = -5.418110963941205497 E-210 ? tab = sumnumapinit([+oo,-4/3]); time = 160 ms. ? sumnumap(n=1, n^(-4/3), tab) - zeta(4/3) \\ faster with precomputations time = 175 ms. %5 = -5.418110963941205497 E-210 ? sumnumap(n=1,-log(n)*n^(-4/3), tab) - zeta'(4/3) time = 258 ms. %7 = 9.125239518216767153 E-210
Note that in the case of slow decrease (α < 0), the exact
decrease rate must be indicated, while in the case of exponential decrease,
a rough value will do. In fact, for exponentially decreasing functions, sumpos is often a better choice:
? sumnumap(n=[1, 1], 2^-n) \\ pretend we decrease as exp(-n) time = 240 ms. %8 = 1.000[...] \\ perfect ? sumpos(n=1, 2^-n) %9 = 1.000[...] \\ perfect and instantaneous
The library syntax is
sumnumapinit({asymp})
Initialize tables for Abel-Plana summation of a series ∑ f(n),
where f is holomorphic in a right half-plane.
If given, asymp codes the behavior of f at infinity, as described in sumnumap:
? \p200 ? sumnumap(n=1, n^-2); time = 163 ms. ? tab = sumnumapinit(); time = 160 ms. ? sumnumap(n=1, n^-2, tab); \\ faster time = 7 ms. ? tab = sumnumapinit([+oo, log(2)]); \\ decrease like 2^-n time = 164 ms. ? sumnumap(n=1, 2^-n, tab) - 1 time = 36 ms. %5 = 3.0127431466707723218 E-282 ? tab = sumnumapinit([+oo, -4/3]); \\ decrease like n^(-4/3) time = 166 ms. ? sumnumap(n=1, n^(-4/3), tab); time = 181 ms.
The library syntax is
sumnuminit({asymp})
Initialize tables for Euler-MacLaurin delta summation of a series with
positive terms. If given, asymp codes the behavior of f at infinity, as described in sumnum:
? \p200 ? sumnum(n=1, n^-2); time = 200 ms. ? tab = sumnuminit(); time = 188 ms. ? sumnum(n=1, n^-2, tab); \\ faster time = 8 ms. ? tab = sumnuminit([+oo, log(2)]); \\ decrease like 2^-n time = 200 ms. ? sumnum(n=1, 2^-n, tab) time = 44 ms. ? tab = sumnuminit([+oo, -4/3]); \\ decrease like n^(-4/3) time = 200 ms. ? sumnum(n=1, n^(-4/3), tab); time = 221 ms.
The library syntax is
sumnumlagrange(n = a, f, {tab})
Numerical summation of f(n) from n = a to + oo using Lagrange
summation; a must be an integer, and the optional argument tab is as output by sumnumlagrangeinit:
? \p1000 ? z3 = zeta(3); ? sumpos(n = 1, n^-3) - z3 time = 4,440 ms. %2 = -2.08[...] E-1001 ? sumnumlagrange(n = 1, n^-3) - z3 \\ much faster than sumpos time = 25 ms. %3 = 0.E-1001 ? tab = sumnumlagrangeinit(); time = 21 ms. ? sumnumlagrange(n = 1, n^-3, tab) - z3 time = 2 ms. /* even faster */ %5 = 0.E-1001 ? \p115 ? tab = sumnumlagrangeinit([1/3,1/3]); time = 316 ms. ? sumnumlagrange(n = 1, n^-(7/3), tab) - zeta(7/3) time = 24 ms. %7 = 0.E-115 ? sumnumlagrange(n = 1, n^(-2/3) - 3*(n^(1/3)-(n-1)^(1/3)), tab) - zeta(2/3) time = 32 ms. %8 = 1.0151767349262596893 E-115
Complexity.
The function f is evaluated at O(D) integer arguments,
where D is the bit accuracy (realbitprecision).
The library syntax is
sumnumlagrangeinit({asymp}, {c1})
Initialize tables for Lagrange summation of a series. By
default, assume that the remainder R(n) = ∑_{m ≥ n} f(m)
has an asymptotic expansion
R(n) = ∑_{m ≥ n} f(m) ~ ∑_{i ≥ 1} a_i / n^i
at infinity. The argument asymp changes this default; it can be

* a real number β, meaning R(n) = n^{-β} ∑_{i ≥ 1} a_i / n^i;

* a closure g, meaning R(n) = g(n) ∑_{i ≥ 1} a_i / n^i;

* a pair [α,β], where β is as above and α ∈ {2, 1, 1/2, 1/3, 1/4}. We let R_2(n) = R(n) - f(n)/2 and R_α(n) = R(n) for α != 2. Then R_α(n) = g(n) ∑_{i ≥ 1} a_i / n^{iα}. Note that the initialization times increase considerably for the α in this list (1/4 being the slowest).

The constant c1 is technical and computed by the program, but can be set by the user: the number of interpolation steps will be chosen close to c1·B, where B is the bit accuracy.
? \p2000 ? sumnumlagrange(n=1, n^-2); time = 173 ms. ? tab = sumnumlagrangeinit(); time = 172 ms. ? sumnumlagrange(n=1, n^-2, tab); time = 4 ms. ? \p115 ? sumnumlagrange(n=1, n^(-4/3)) - zeta(4/3); %1 = -0.1093[...] \\ junk: expansion in n^(1/3) time = 84 ms. ? tab = sumnumlagrangeinit([1/3,0]); \\ alpha = 1/3 time = 336 ms. ? sumnumlagrange(n=1, n^(-4/3), tab) - zeta(4/3) time = 84 ms. %3 = 1.0151767349262596893 E-115 \\ now OK ? tab = sumnumlagrangeinit(1/3); \\ alpha = 1, beta = 1/3: much faster time = 3ms ? sumnumlagrange(n=1, n^(-4/3), tab) - zeta(4/3) \\ ... but wrong %5 = -0.273825[...] \\ junk ! ? tab = sumnumlagrangeinit(-2/3); \\ alpha = 1, beta = -2/3 time = 3ms ? sumnumlagrange(n=1, n^(-4/3), tab) - zeta(4/3) %6 = 2.030353469852519379 E-115 \\ now OK
In the final example with ζ(4/3), the remainder R_1(n) is of the form n^{-1/3} ∑_{i ≥ 0} a_i / n^i, i.e. n^{2/3} ∑_{i ≥ 1} a_i / n^i. This explains the wrong result for β = 1/3 and the correction with β = -2/3.
The library syntax is
sumnummonien(n = a, f, {tab})
Numerical summation of ∑_{n ≥ a} f(n) at high accuracy, the variable n taking values from the integer a to +∞, using Monien summation, which assumes that f(1/z) has a complex analytic continuation in a (complex) neighbourhood of the segment [0,1].
The function f is evaluated at O(D / log D) real arguments,
where D is the bit accuracy (realbitprecision); tab, if given, is as output by sumnummonieninit.
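A small sketch: with f(n) = 1/(n^2+n), the function f(1/z) = z^2/(1+z) is indeed analytic near [0,1], and the sum telescopes to 1:

? sumnummonien(n = 1, 1/(n^2 + n)) - 1 \\ should be ~ 0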
The library syntax is
sumnummonieninit({asymp}, {w}, {n0 = 1})
Initialize tables for Monien summation of a series ∑_{n ≥ n0} f(n) where f(1/z) has a complex analytic continuation in a (complex) neighbourhood of the segment [0,1].
By default, assume that f(n) = O(n-2) and has a nonzero asymptotic
expansion
f(n) = ∑_{i ≥ 2} a_i / n^i
at infinity. Note that the sum starts at i = 2! The argument asymp allows to alter this; it can be

* a real number β > 0, meaning

  f(n) = ∑_{i ≥ 1} a_i / n^{i + β}

(the summation now starts at 1);
* a vector [α,β] of reals, where we must have α > 0
and α + β > 1 to ensure convergence, means that
f(n) = ∑_{i ≥ 1} a_i / n^{αi + β}.
Note that a wrong asymp yields a completely wrong result:
? \p57 ? s = sumnum(n = 1, sin(1/sqrt(n)) / n); \\ reference point ? \p38 ? sumnummonien(n = 1, sin(1/sqrt(n)) / n) - s %2 = -0.001[...] \\ completely wrong ? t = sumnummonieninit(1/2); \\ f(n) = ∑_i a_i / n^(i+1/2) ? sumnummonien(n = 1, sin(1/sqrt(n)) / n, t) - s %3 = 0.E-37 \\ now correct
(As a matter of fact, in the above summation, the
result given by sumnum is correct to the indicated accuracy.)
The argument w is used to sum expressions of the form
∑_{n ≥ n0} f(n) w(n),
for varying f as above, and fixed weight function w, where we
further assume that the auxiliary sums
g_w(m) = ∑_{n ≥ n0} w(n) / n^{αm + β}
converge for all m ≥ 1. Note that for nonnegative integers k,
and weight w(n) = (log n)^k, the function
g_w(m) = ζ^{(k)}(αm + β) has a simple expression;
for general weights, g_w is computed numerically. The argument w can be

* an integer k ≥ 0, coding w(n) = (log n)^k;

* a closure, coding arbitrary weight values w(n);
* a vector [w,
The subsequent calls to sumnummonien must use the same value of n0:
? \p300 ? sumnummonien(n = 1, n^-2*log(n)) + zeta'(2) time = 328 ms. %1 = -1.323[...]E-6 \\ completely wrong, f does not satisfy hypotheses ! ? tab = sumnummonieninit(, 1); \\ codes w(n) = log(n) time = 3,993 ms. ? sumnummonien(n = 1, n^-2, tab) + zeta'(2) time = 41 ms. %3 = -5.562684646268003458 E-309 \\ now perfect ? tab = sumnummonieninit(, n->log(n)); \\ generic, slower time = 9,808 ms. ? sumnummonien(n = 1, n^-2, tab) + zeta'(2) time = 40 ms. %5 = -5.562684646268003458 E-309 \\ identical result
The library syntax is
sumnumrat(F, a)
∑_{n ≥ a} F(n), where F is a rational function of degree less
than or equal to -2 and where poles of F at integers ≥ a are
omitted from the summation. The argument a must be an integer or -oo:
? sumnumrat(1/(x^2+1)^2,0) %1 = 1.3068369754229086939178621382829073480 ? sumnumrat(1/x^2, -oo) \\ value at x=0 is discarded %2 = 3.2898681336964528729448303332920503784 ? 2*zeta(2) %3 = 3.2898681336964528729448303332920503784
When deg F = -1, we define ∑_{-∞}^{+∞} F(n) := ∑_{n ≥ 0} (F(n) + F(-1-n)):
? sumnumrat(1/x, -oo) %4 = 0.E-38
The library syntax is
sumnumsidi(n = a, f, {safe = 1})
Numerical summation of f(n) from n = a to + oo using Sidi
summation; a must be an integer. The optional argument safe, 1 by default, can be set to 0 for a faster but less robust variant:
? \pb3328 ? z = zeta(2); ? exponent(sumnumsidi(n = 1, 1/n^2) - z) time = 1,507 ms. %2 = -3261 \\ already loses some decimals ? exponent(sumnumsidi(n = 1, 1/n^2, 0) - z) time = 442 ms. \\ unsafe is much faster %3 = -2108 \\ ... but very wrong ? l2 = log(2); ? exponent(sumnumsidi(n = 1,(-1)^(n-1)/n) - l2) time = 718 ms. %5 = -3328 \\ not so slow and perfect ? exponent(sumnumsidi(n = 1,(-1)^(n-1)/n, 0) - l2) time = 504 ms. %5 = -3328 \\ still perfect in unsafe mode, not so much faster
Complexity. If the bitprecision is b, we try to achieve an absolute error less than 2^{-b}. The function f is evaluated at O(b) consecutive integer arguments at bit accuracy 1.56 b (resp. b) in safe (resp. unsafe) mode.
The library syntax is
sumpos(X = a, expr, {flag = 0})
Numerical summation of the series expr, which must be a series of
terms having the same sign, the formal variable X starting at a. The
algorithm uses Van Wijngaarden's trick for converting such a series into
an alternating one, then sumalt. In particular, the function need only be defined at integers, compare:
? sumnum(n = 0, 1/n!)
 ***   at top-level: sumnum(n=1,1/n!)
 ***                 ^----
 *** incorrect type in gtos [integer expected] (t_FRAC).
? sumpos(n = 0, 1/n!) - exp(1)
%2 = -1.0862155548773347717 E-33
On the other hand, when the function accepts general real
numbers, it is usually advantageous to replace n by a real number, as in n *= 1. below:
? \p500 ? sumpos(n = 0, n^7 / (n^9+n+1)); time = 6,108 ms. ? sumpos(n = 0, n *= 1.; n^7 / (n^9+n+1)); time = 2,788 ms. ? sumnumrat(n^7 / (n^9+n+1), 0); time = 4 ms.
In the last example, sumnumrat is of course much faster still.
The routine is heuristic and assumes that expr is more or less a
decreasing function of X. In particular, the result will be completely
wrong if expr is 0 too often. We do not check either that all terms
have the same sign: as for sumalt, the routine may be used to try and guess the value of an infinite sum.
If flag = 1, use sumalt with flag 1 in the intermediate summations.
To reach accuracy 10^{-p}, both algorithms require O(p^2) space;
furthermore, assuming the terms decrease polynomially (in O(n^{-C})), both
need to compute O(p^2) terms.
The library syntax is