Pari/GP Reference Documentation
asympnum contfraceval contfracinit derivnum intcirc intfuncinit intnum intnumgauss intnumgaussinit intnuminit intnumromb limitnum prod prodeuler prodinf solve solvestep sum sumalt sumdiv sumdivmult suminf sumnum sumnumap sumnumapinit sumnuminit sumnummonien sumnummonieninit sumpos
One of the parameters in these loops must be the control variable, hence a simple variable name. In the descriptions, the letter X will always denote any simple variable name, and represents the formal parameter used in the function. The expression to be summed, integrated, etc. is any legal PARI expression, including of course expressions using loops.
Library mode. Since it is easier to program the loops directly in library mode, these functions are mainly useful for GP programming. On the other hand, numerical routines code a function (to be integrated, summed, etc.) with two parameters named

  GEN (*eval)(void*, GEN)
  void *E;  /* context: eval(E, x) must evaluate your function at x */

See the Libpari manual for details.
Numerical integration. Starting with version 2.2.9, the "double exponential" univariate integration method is implemented in intnum and its variants. See also the discrete summation methods below, sharing the prefix sum.
asympnum
Asymptotic expansion of expr, corresponding to a sequence u(n),
assuming it has the shape
u(n) ~ ∑_{i ≥ 0} a_i n^{-iα}
with rational coefficients a_i with reasonable height; the algorithm
is heuristic and performs repeated calls to limitnum, with
? f(n) = n! / (n^n*exp(-n)*sqrt(n));
? asympnum(f)
%2 = []  \\ failure !
? l = limitnum(f)
%3 = 2.5066282746310005024157652848110452530
? asympnum(n->f(n)/l)  \\ normalize
%4 = [1, 1/12, 1/288, -139/51840]

and we indeed get a few terms of Stirling's expansion. Note that it helps to normalize with a limit computed to higher accuracy:
? \p100
? L = limitnum(f)
? \p38
? asympnum(n->f(n)/L)  \\ we get more terms!
%6 = [1, 1/12, 1/288, -139/51840, -571/2488320, 163879/209018880,\
     5246819/75246796800, -534703531/902961561600]
If α is not a rational number, it helps to precompute it to higher accuracy:
? \p38
? asympnum(n->-log(1-1/n^Pi),,Pi)
%1 = [0, 1, 1/2, 1/3]
? asympnum(n->-log(1-1/sqrt(n)),,1/2)
%2 = [0, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9, 1/10, 1/11, 1/12, \
     1/13, 1/14, 1/15, 1/16, 1/17, 1/18, 1/19, 1/20, 1/21, 1/22]
? localprec(100); a = Pi;
? asympnum(n->-log(1-1/n^a),,a)  \\ better !
%4 = [0, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9, 1/10, 1/11, 1/12]
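The coefficient-peeling idea can be sketched in a few lines of Python, in double precision only (an illustration of the principle, not PARI's implementation, which works at high precision, calls limitnum repeatedly and reconstructs rational coefficients). The helper names nlimit and asymp_sketch are ours:

```python
from math import comb, factorial

def nlimit(u, N=12):
    # binomial-weight extrapolation, exact for u(n) = l + sum_{i<N} a_i/n^i
    return sum((-1)**(N - k) * comb(N, k) * k**N * u(k)
               for k in range(1, N + 1)) / factorial(N)

def asymp_sketch(u, terms=3):
    # peel off successive coefficients of u(n) ~ sum_{i>=0} a_i n^-i:
    # a_i is the limit of n^i * (u(n) - sum_{j<i} a_j n^-j)
    coeffs = []
    def residual(n):
        return u(n) - sum(a / n**j for j, a in enumerate(coeffs))
    for i in range(terms):
        coeffs.append(nlimit(lambda n: residual(n) * n**i))
    return coeffs

c = asymp_sketch(lambda n: 1 + 1/(2*n) + 1/(3*n*n))   # ~ [1, 1/2, 1/3]
```

Each peeling step amplifies the error of the previous coefficients, which is why the real routine insists on higher internal accuracy.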
The library syntax is
contfraceval
Given a continued fraction
The library syntax is
contfracinit
Given M representing the power series S = ∑_{n ≥ 0} M[n+1]z^n,
transform it into a continued fraction; restrict to n ≤
The library syntax is
derivnum
Numerical derivation of expr with respect to X at X = a. The order of derivation is 1 by default.
? derivnum(x=0, sin(exp(x))) - cos(1)
%1 = 0.E-38

A clumsier approach, which would not work in library mode, is
? f(x) = sin(exp(x))
? f'(0) - cos(1)
%2 = 0.E-38
When a is a power series, compute
If the parameter ind is present, it can be
* a non-negative integer m, in which case we return f^{(m)}(x);
* or a vector of orders, in which case we return the vector of derivatives.
? derivnum(x = 0, exp(sin(x)), 16)  \\ 16-th derivative
%1 = -52635599.000000000000000000000000000000
? round( derivnum(x = 0, exp(sin(x)), [0..13]) )  \\ 0-13-th derivatives
%2 = [1, 1, 1, 0, -3, -8, -3, 56, 217, 64, -2951, -12672, 5973, 309376]
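For intuition, a classical fourth-order central difference reproduces the first example in double precision (a hedged sketch of numerical derivation in general, not the algorithm derivnum actually uses, which evaluates at many points with controlled precision):

```python
import math

def deriv4(f, x, h=1e-5):
    # classical 4th-order central difference: truncation error O(h^4),
    # rounding error O(eps/h), roughly balanced near h ~ 1e-5 in doubles
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12*h)

err = deriv4(lambda x: math.sin(math.exp(x)), 0.0) - math.cos(1.0)
```

The step size trade-off shown in the comment is exactly why a fixed-precision finite difference cannot reach the 38-digit accuracy of the GP examples above.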
The library syntax is
intcirc
Numerical
integration of (2iπ)^{-1}expr with respect to X on the circle
|X-a| = R.
In other words, when expr is a meromorphic
function, sum of the residues in the corresponding disk; tab is as in
? \p105
? intcirc(s=1, 0.5, zeta(s)) - 1
time = 496 ms.
%1 = 1.2883911040127271720 E-101 + 0.E-118*I
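The principle is easy to reproduce: parametrizing the circle turns the contour integral into a periodic integral, for which the trapezoidal rule converges exponentially fast when the integrand is analytic. A double-precision Python sketch (function name is ours, not PARI's):

```python
import cmath

def intcirc_sketch(f, a, R, N=64):
    # (2*pi*i)^(-1) * integral of f over |z-a| = R: with z = R e^(it),
    # f(z) dz = f(a+z) * i z dt, so the normalized integral is the
    # average of f(a+z)*z over N equally spaced points
    s = 0.0
    for k in range(N):
        z = R * cmath.exp(2j * cmath.pi * k / N)
        s += f(a + z) * z
    return s / N

res = intcirc_sketch(lambda z: cmath.exp(z) / z, 0.0, 1.0)  # residue of e^z/z at 0
```

For f = e^z/z the 64-point average already agrees with the residue 1 to machine precision.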
The library syntax is
intfuncinit
Initialize tables for use with integral transforms such as Fourier,
Laplace or Mellin transforms, in order to compute
∫_a^b f(t) k(t,z) dt
for some kernel k(t,z).
The endpoints a and b are coded as in
Limitation: the endpoints a and b must be at infinity, with the same asymptotic behaviour. Oscillating types are not supported. This is easily overcome by integrating vectors of functions; see the example below.
Examples.
* Numerical Fourier transform
F(z) = ∫_{- oo }^{+ oo } f(t)e^{-2iπ z t} dt.
First the easy case: assume that f decreases exponentially:
f(t) = exp(-t^2);
A = [-oo,1];
B = [+oo,1];
\p200
T = intfuncinit(t = A,B , f(t));
F(z) =
{ my(a = -2*I*Pi*z);
  intnum(t = A,B, exp(a*t), T);
}
? F(1) - sqrt(Pi)*exp(-Pi^2)
%1 = -1.3... E-212

Now the harder case, f decreases slowly: we must specify the oscillating behaviour. Thus, we cannot usefully precompute anything since everything depends on the point we evaluate at:
f(t) = 1 / (1+ abs(t));
\p200
\\ Fourier cosine transform
FC(z) =
{ my(a = 2*Pi*z);
  intnum(t = [-oo, a*I], [+oo, a*I], cos(a*t)*f(t));
}
FC(1)
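The easy (exponentially decreasing) case above can be mimicked in plain double precision: since exp(-t^2) is below machine precision for |t| > 8, a truncated trapezoidal rule already gives the Fourier transform essentially exactly. A Python sketch of this, not PARI's double-exponential method (names are ours):

```python
import cmath, math

def fourier_sketch(f, z, L=8.0, N=400):
    # F(z) = int_{-L}^{L} f(t) exp(-2*pi*i*z*t) dt by the trapezoidal
    # rule; accurate when f decays below machine precision before |t| = L
    h = 2.0 * L / N
    s = sum(f(-L + k*h) * cmath.exp(-2j * cmath.pi * z * (-L + k*h))
            for k in range(N + 1))
    # endpoint terms get weight 1/2 in the trapezoidal rule
    s -= 0.5 * (f(-L) * cmath.exp(2j * cmath.pi * z * L)
                + f(L) * cmath.exp(-2j * cmath.pi * z * L))
    return s * h

F1 = fourier_sketch(lambda t: math.exp(-t*t), 1.0)
# the exact transform of exp(-t^2) is sqrt(pi) * exp(-pi^2 z^2)
```

For the slowly decreasing case, no such brute-force truncation works; that is precisely what the oscillatory endpoint codes of intnum address.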
* Fourier coefficients: we must integrate over a period, but
FourierSin(f, T, k) =  \\ first k sine Fourier coeffs
{ my (w = 2*Pi/T);
  my (v = vector(k+1));
  intnum(t = -T/2, T/2,
    my (z = exp(I*w*t));
    v[1] = z;
    for (j = 2, k, v[j] = v[j-1]*z);
    f(t) * imag(v)) * 2/T;
}
FourierSin(t->sin(2*t), 2*Pi, 10)
The same technique can be used instead of
Note that the above code includes an unrelated optimization: the sin(j w t) are computed as imaginary parts of exp(i j w t), and the latter by successive multiplications.
* Numerical Mellin inversion
F(z) = (2iπ)^{-1} ∫_{c -i oo }^{c+i oo } f(s)z^{-s} ds = (2π)^{-1} ∫_{- oo }^{+ oo } f(c + i t)e^{-log z(c + it)} dt.
We take c = 2 in the program below:
f(s) = gamma(s)^3;  \\ f(c+it) decreases as exp(-3Pi|t|/2)
c = 2;              \\ arbitrary
A = [-oo,3*Pi/2];
B = [+oo,3*Pi/2];
T = intfuncinit(t=A,B, f(c + I*t));
F(z) =
{ my (a = -log(z));
  intnum(t=A,B, exp(a*I*t), T)*exp(a*c) / (2*Pi);
}
The library syntax is
intnum
Numerical integration of expr on ]a,b[ with respect to X, using the double-exponential method, and thus O(D log D) evaluations of the integrand in precision D. The integrand may have values belonging to a vector space over the real numbers; in particular, it can be complex-valued or vector-valued. But it is assumed that the function is regular on ]a,b[. If the endpoints a and b are finite and the function is regular there, the situation is simple:
? intnum(x = 0,1, x^2)
%1 = 0.3333333333333333333333333333
? intnum(x = 0,Pi/2, [cos(x), sin(x)])
%2 = [1.000000000000000000000000000, 1.000000000000000000000000000]
An endpoint equal to ± oo is coded as
? intnum(x = 1,+oo, 1/x^2)
%3 = 1.000000000000000000000000000

In basic usage, it is assumed that the function does not decrease exponentially fast at infinity:
? intnum(x=0,+oo, exp(-x))
 ***   at top-level: intnum(x=0,+oo,exp(-
 ***                 ^--------------------
 *** exp: overflow in expo().

We shall see in a moment how to avoid that last problem, after describing the last optional argument tab.
The tab argument. The routine uses weights w_i, which are mostly independent of the function being integrated, evaluated at many sampling points x_i, and approximates the integral by ∑ w_i f(x_i). If tab is
* a non-negative integer m, we multiply the number of sampling points by 2^m, hopefully increasing accuracy. Note that the running time increases roughly by a factor 2^m. One may try consecutive values of m until they give the same value up to an accepted error.
* a set of integration tables containing precomputed x_i and w_i
as output by
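The change of variable underlying intnum is the tanh-sinh ("double exponential") substitution; a minimal double-precision Python sketch for a regular integrand on (-1,1) shows the idea (step h and truncation N are ad hoc; this is an illustration of the principle, not PARI's implementation, which also handles singular and infinite endpoints):

```python
import math

def intnum_de(f, h=0.06, N=100):
    # tanh-sinh rule on (-1,1): x = tanh(pi/2 sinh t); the transformed
    # integrand decays doubly exponentially in t, so the plain
    # trapezoidal rule in t is extremely accurate
    s = 0.0
    for k in range(-N, N + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u)**2   # dx/dt
        s += w * f(x)
    return s * h

val = intnum_de(math.exp)   # integral of e^x over (-1,1) = e - 1/e
```

The samples x_i = tanh((pi/2) sinh(nh)) and weights w_i are exactly the kind of data that intnuminit precomputes.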
Specifying the behavior at endpoints.
This is done as follows. An endpoint a is either given as such (a scalar,
real or complex, If a is finite, the code [a,α] means the function has a singularity of the form (x-a)^{α}, up to logarithms. (If α \ge 0, we only assume the function is regular, which is the default assumption.) If a wrong singularity exponent is used, the result will lose a catastrophic number of decimals:
? intnum(x=0, 1, x^(-1/2))          \\ assume x^{-1/2} is regular at 0
%1 = 1.9999999999999999999999999999827931660
? intnum(x=[0,-1/2], 1, x^(-1/2))   \\ no, it's not
%2 = 2.0000000000000000000000000000000000000
? intnum(x=[0,-1/10], 1, x^(-1/2))  \\ using a wrong exponent is bad
%3 = 1.9999999999999999999999999999999901912
If a is ± oo , which is coded as
* α = 0 (or no α at all, i.e. simply ±
* α > 0 assumes that the function tends to zero exponentially fast approximately as exp(-α x). This includes oscillating but quickly decreasing functions such as exp(-x)sin(x).
? intnum(x=0, +oo, exp(-2*x))
 ***   at top-level: intnum(x=0,+oo,exp(-
 ***                 ^--------------------
 *** exp: exponent (expo) overflow
? intnum(x=0, [+oo, 2], exp(-2*x))   \\ OK!
%1 = 0.50000000000000000000000000000000000000
? intnum(x=0, [+oo, 3], exp(-2*x))   \\ imprecise exponent, still OK !
%2 = 0.50000000000000000000000000000000000000
? intnum(x=0, [+oo, 10], exp(-2*x))  \\ wrong exponent ==> disaster
%3 = 0.49999999999952372962457451698256707393

As the last example shows, the exponential decrease rate must be indicated to avoid overflow, but the method is robust enough for a rough guess to be acceptable.
* α < -1 assumes that the function tends to 0 slowly, like x^{α}. Here the algorithm is less robust and it is essential to give a sharp α, unless α ≤ -2, in which case we use the default algorithm as if α were missing (or equal to 0).
? intnum(x=1, +oo, x^(-3/2))           \\ default
%1 = 1.9999999999999999999999999999646391207
? intnum(x=1, [+oo,-3/2], x^(-3/2))    \\ precise decrease rate
%2 = 2.0000000000000000000000000000000000000
? intnum(x=1, [+oo,-11/10], x^(-3/2))  \\ worse than default
%3 = 2.0000000000000000000000000089298011973

The last two codes are reserved for oscillating functions. Let k > 0 be real and g(x) a non-oscillating function tending slowly to 0 (e.g. like a negative power of x); then
* α = k*I assumes that the function behaves like cos(kx)g(x),
* α = -k*I assumes that the function behaves like sin(kx)g(x).
Here it is critical to give the exact value of k. If the oscillating part is not a pure sine or cosine, one must expand it into a Fourier series, use the above codings, and sum the resulting contributions. Otherwise you will get nonsense. Note that cos(kx) (and similarly sin(kx)) means that very function, and not a translated version such as cos(kx+a).
Note. If f(x) = cos(kx)g(x) where g(x) tends to zero
exponentially fast as exp(-α x), it is up to the user to choose
between [± We shall now see many examples to get a feeling for what the various parameters achieve. All examples below assume precision is set to 115 decimal digits. We first type
? \p 115
Apparent singularities. In many cases, apparent singularities
can be ignored. For instance, if f(x) = 1/(exp(x)-1) - exp(-x)/x, then ∫_0^ oo f(x)dx = γ, Euler's constant
? f(x) = 1/(exp(x)-1) - exp(-x)/x
? intnum(x = 0, [oo,1], f(x)) - Euler
%1 = 0.E-115

But close to 0 the function f is computed with an enormous loss of accuracy, and we are in fact lucky that it gets multiplied by weights which are sufficiently close to 0 to hide this:
? f(1e-200)
%2 = -3.885337784451458142 E84

A more robust solution is to define the function differently near special points, e.g. by a Taylor expansion
? F = truncate( f(t + O(t^10)) );  \\ expansion around t = 0
? poldegree(F)
%4 = 7
? g(x) = if (x > 1e-18, f(x), subst(F,t,x));  \\ note that 7*18 > 105
? intnum(x = 0, [oo,1], g(x)) - Euler
%2 = 0.E-115

It is up to the user to determine constants such as the 10^{-18} and 10 used above.
True singularities. With true singularities the result is worse. For instance
? intnum(x = 0, 1, x^(-1/2)) - 2
%1 = -3.5... E-68  \\ only 68 correct decimals
? intnum(x = [0,-1/2], 1, x^(-1/2)) - 2
%2 = 0.E-114       \\ better

Oscillating functions.
? intnum(x = 0, oo, sin(x) / x) - Pi/2
%1 = 16.19..     \\ nonsense
? intnum(x = 0, [oo,1], sin(x)/x) - Pi/2
%2 = -0.006..    \\ bad
? intnum(x = 0, [oo,-I], sin(x)/x) - Pi/2
%3 = 0.E-115     \\ perfect
? intnum(x = 0, [oo,-I], sin(2*x)/x) - Pi/2  \\ oops, wrong k
%4 = 0.06...
? intnum(x = 0, [oo,-2*I], sin(2*x)/x) - Pi/2
%5 = 0.E-115     \\ perfect
? intnum(x = 0, [oo,-I], sin(x)^3/x) - Pi/4
%6 = -0.0008...  \\ bad
? sin(x)^3 - (3*sin(x)-sin(3*x))/4
%7 = O(x^17)
We may use the above linearization and compute two oscillating integrals with
endpoints
? intnum(x = 0, [oo,-I], (1-cos(x))/x^2) - Pi/2
%1 = -0.0003...  \\ bad
? intnum(x = 0, 1, (1-cos(x))/x^2) \
  + intnum(x = 1, oo, 1/x^2) - intnum(x = 1, [oo,I], cos(x)/x^2) - Pi/2
%2 = 0.E-115  \\ perfect
? intnum(x = 0, [oo, 1], sin(x)^3*exp(-x)) - 0.3
%3 = -7.34... E-55  \\ bad
? intnum(x = 0, [oo,-I], sin(x)^3*exp(-x)) - 0.3
%4 = 8.9... E-103   \\ better. Try higher m
? tab = intnuminit(0,[oo,-I], 1);  \\ double number of sampling points
? intnum(x = 0, oo, sin(x)^3*exp(-x), tab) - 0.3
%6 = 0.E-115  \\ perfect
Warning. Like
? intnum(x = 0, [oo, -I], x^2*sin(x))
%1 = -2.0000000000...

Note the formula
∫_0^ oo sin(x)/x^s dx = cos(π s/2) Γ(1-s),
a priori valid only for 0 < Re(s) < 2, but the right hand side provides an analytic continuation which may be evaluated at s = -2...
Multivariate integration.
Using successive univariate integration with respect to different formal
parameters, it is immediate to do naive multivariate integration. But it is
important to use a suitable
For example, to compute the double integral on the unit disc x^2+y^2 ≤ 1 of the function x^2+y^2, we can write
? tab = intnuminit(-1,1);
? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab),tab) - Pi/2
%2 = -7.1... E-115  \\ OK

The first tab is essential, the second optional. Compare:
? tab = intnuminit(-1,1);
time = 4 ms.
? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2));
time = 3,092 ms.  \\ slow
? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab), tab);
time = 252 ms.    \\ faster
? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab));
time = 261 ms.    \\ the internal integral matters most
The library syntax is
intnumgauss
Numerical integration of expr on the compact interval [a,b] with
respect to X using Gauss-Legendre quadrature;
? test(n, b = 1) = T=intnumgaussinit(n);\
    intnumgauss(x=-b,b, 1/(1+x^2),T) - 2*atan(b);
? test(0)      \\ default
%1 = -9.490148553624725335 E-22
? test(40)
%2 = -6.186629001816965717 E-31
? test(50)
%3 = -1.1754943508222875080 E-38
? test(50, 2)  \\ double interval length
%4 = -4.891779568527713636 E-21
? test(90, 2)  \\ n must almost be doubled as well!
%5 = -9.403954806578300064 E-38

On the other hand, we recommend to split the integral and change variables rather than increasing n too much:
? f(x) = 1/(1+x^2);
? b = 100;
? intnumgauss(x=0,1, f(x)) + intnumgauss(x=1,1/b, f(1/x)*(-1/x^2)) - atan(b)
%3 = -1.0579449157400587572 E-37
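Gauss-Legendre nodes and weights are easy to generate by Newton iteration on the Legendre three-term recurrence. The following self-contained Python sketch mirrors the intnumgaussinit/intnumgauss pair in double precision (function names are ours, not PARI's):

```python
import math

def leggauss(n):
    # nodes = roots of the Legendre polynomial P_n, by Newton iteration;
    # weights w_i = 2 / ((1 - x_i^2) P_n'(x_i)^2)
    xs, ws = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))   # classical initial guess
        for _ in range(50):
            p0, p1 = 1.0, x
            for k in range(2, n + 1):                    # k P_k = (2k-1)x P_{k-1} - (k-1)P_{k-2}
                p0, p1 = p1, ((2*k - 1) * x * p1 - (k - 1) * p0) / k
            dp = n * (x * p1 - p0) / (x*x - 1.0)         # P_n'(x)
            dx = p1 / dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        xs.append(x)
        ws.append(2.0 / ((1.0 - x*x) * dp * dp))
    return xs, ws

def gauss_quad(f, a, b, n=40):
    xs, ws = leggauss(n)
    m, r = (a + b) / 2.0, (b - a) / 2.0                  # affine map [-1,1] -> [a,b]
    return r * sum(w * f(m + r*x) for x, w in zip(xs, ws))

val = gauss_quad(lambda x: 1.0 / (1.0 + x*x), -1.0, 1.0)  # ~ pi/2
```

As in the GP examples, accuracy in double precision saturates quickly; the error formula under intnumgaussinit explains why n must grow with the interval length.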
The library syntax is
intnumgaussinit
Initialize tables for n-point Gauss-Legendre integration of
a smooth function f on a compact
interval [a,b] at the current accuracy. The error is classically given by
((b-a)^{2n+1} (n!)^4)/((2n+1)[(2n)!]^3) f^{(2n)} (ξ),  a < ξ < b,
so, if the interval length increases, n should be increased as well.
? T = intnumgaussinit();
? intnumgauss(t=-1,1,exp(t), T) - exp(1)+exp(-1)
%1 = -5.877471754111437540 E-39
? intnumgauss(t=-10,10,exp(t), T) - exp(10)+exp(-10)
%2 = -8.358367809712546836 E-35
? intnumgauss(t=-1,1,1/(1+t^2), T) - Pi/2
%3 = -9.490148553624725335 E-22
? T = intnumgaussinit(50);
? intnumgauss(t=-1,1,1/(1+t^2), T) - Pi/2
%5 = -1.1754943508222875080 E-38
? intnumgauss(t=-5,5,1/(1+t^2), T) - 2*atan(5)
%6 = -1.2[...]E-8
On the other hand, we recommend to split the integral and change variables
rather than increasing n too much, see
The library syntax is
intnuminit
Initialize tables for integration from
a to b, where a and b are coded as in
If m is present, it must be non-negative and we multiply the default number of sampling points by 2^m (increasing the running time by a similar factor).
The result is technical and liable to change in the future, but we document
it here for completeness. Let x = φ(t), t ∈ ]- oo , oo [ be an
internally chosen change of variable, achieving double exponential decrease of
the integrand at infinity. The integrator
[h, x0, w0, xp, wp, xm, wm] = intnuminit(a,b);

* h is the integration step,
* x_0 = φ(0) and w_0 = φ'(0),
* xp contains the φ(nh), 0 < n < N,
* xm contains the φ(nh), 0 < -n < N, or is empty,
* wp contains the φ'(nh), 0 < n < N,
* wm contains the φ'(nh), 0 < -n < N, or is empty.
The arrays xm and wm are left empty when φ is an odd
function. In complicated situations when non-default behaviour is specified at
end points,
If the functions to be integrated later are of the form F = f(t) k(t,z)
for some kernel k (e.g. Fourier, Laplace, Mellin,...), it is
useful to also precompute the values of f(φ(nh)), which is accomplished
by
The library syntax is
intnumromb
Numerical integration of expr (smooth in ]a,b[), with respect to
X. Suitable for low accuracy; if expr is very regular (e.g. analytic
in a large region) and high accuracy is desired, try
Set flag = 0 (or omit it altogether) when a and b are not too large, the function is smooth, and can be evaluated exactly everywhere on the interval [a,b].
If flag = 1, uses a general driver routine for doing numerical integration, making no particular assumption (slow).
flag = 2 is tailored for being used when a or b are infinite using the change of variable t = 1/X. One must have ab > 0, and in fact if for example b = + oo , then it is preferable to have a as large as possible, at least a ≥ 1.
If flag = 3, the function is allowed to be undefined at a (but right continuous) or b (left continuous), for example the function sin(x)/x between x = 0 and 1.
The user should not require too much accuracy:
Note that infinity can be represented with essentially no loss of
accuracy by an appropriate huge number. However beware of real underflow
when dealing with rapidly decreasing functions. For example, in order to
compute ∫_0^ oo e^{-x^2}dx to 28 decimal digits, one can
set infinity equal to 10 for example, and certainly not to
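Romberg's scheme itself, trapezoidal refinement plus Richardson extrapolation, takes only a few lines; a double-precision Python sketch for illustration (not PARI's code, and without the flag variants above):

```python
def romberg(f, a, b, kmax=20, tol=1e-12):
    # R[k][m]: trapezoidal rule with 2^k panels, extrapolated m times
    R = [[(b - a) * (f(a) + f(b)) / 2.0]]
    h = b - a
    for k in range(1, kmax + 1):
        h /= 2.0
        # add the new midpoints to halve the trapezoidal step
        s = sum(f(a + (2*j - 1) * h) for j in range(1, 2**(k-1) + 1))
        row = [R[-1][0] / 2.0 + h * s]
        for m in range(1, k + 1):          # Richardson extrapolation
            row.append(row[m-1] + (row[m-1] - R[-1][m-1]) / (4**m - 1))
        R.append(row)
        if abs(R[-1][-1] - R[-2][-1]) < tol:
            break
    return R[-1][-1]

val = romberg(lambda x: 4.0 / (1.0 + x*x), 0.0, 1.0)   # ~ pi
```

The doubling of function evaluations at each level is why the manual warns against requesting too much accuracy from this routine.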
The library syntax is
limitnum
Lagrange-Zagier numerical extrapolation of expr, corresponding to a
sequence
u_n, either given by a closure
Assume that u_n has an asymptotic expansion in n^{-α}:
u_n = ℓ + ∑_{i ≥ 1} a_i n^{-iα}
for some a_i.
? limitnum(n -> n*sin(1/n))
%1 = 1.0000000000000000000000000000000000000
? limitnum(n -> (1+1/n)^n) - exp(1)
%2 = 0.E-37
? limitnum(n -> 2^(4*n+1)*(n!)^4 / (2*n)! /(2*n+1)! )
%3 = 3.1415926535897932384626433832795028842
? Pi
%4 = 3.1415926535897932384626433832795028842
If u_n is given by a vector, it must be long enough for the extrapolation
to make sense: at least k times the current
? limitnum(vector(10,n,(1+1/n)^n))
 ***                 ^--------------------
 *** limitnum: non-existent component in limitnum: index < 20
\\ at this accuracy, we must have at least 20 values
? limitnum(vector(20,n,(1+1/n)^n)) - exp(1)
%5 = -2.05... E-20
? limitnum(vector(20,n, m=10*n;(1+1/m)^m)) - exp(1)  \\ better accuracy
%6 = 0.E-37
? v = vector(20); s = 0;
? for(i=1,#v, s += 1/i; v[i]= s - log(i));
? limitnum(v) - Euler
%9 = -1.6... E-19
? V = vector(200); s = 0;
? for(i=1,#V, s += 1/i; V[i]= s);
? v = vector(#V \ 10, i, V[10*i] - log(10*i));
? limitnum(v) - Euler
%13 = 6.43... E-29
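In double precision, the extrapolation principle can be illustrated with explicit binomial weights: the combination below has total weight 1 and annihilates n^{-1}, ..., n^{-(N-1)} exactly (a sketch of the idea only; PARI's implementation works at high precision, which is essential because the weights amplify rounding error):

```python
import math
from math import comb, factorial

def limitnum_sketch(u, N=12):
    # weights (-1)^(N-k) C(N,k) k^N / N! sum to 1 and kill 1/n, ...,
    # 1/n^(N-1): the result is exact whenever
    # u(n) = l + a_1/n + ... + a_{N-1}/n^(N-1)
    return sum((-1)**(N - k) * comb(N, k) * k**N * u(k)
               for k in range(1, N + 1)) / factorial(N)

lim = limitnum_sketch(lambda n: n * math.sin(1.0 / n))   # tends to 1
```

The sum of the absolute weights grows quickly with N, so each extra digit of accuracy demands extra working precision; this is why limitnum needs at least a precision-dependent number of terms when given a vector.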
The library syntax is
prod
Product of expression
expr, initialized at x, the formal parameter X going from a to
b. As for
As an extreme example, compare
? prod(i=1, 100, 1 - X^i);  \\ this has degree 5050 !!
time = 128 ms.
? prod(i=1, 100, 1 - X^i, 1 + O(X^101))
time = 8 ms.
%2 = 1 - X - X^2 + X^5 + X^7 - X^12 - X^15 + X^22 + X^26 - X^35 - X^40 + \
     X^51 + X^57 - X^70 - X^77 + X^92 + X^100 + O(X^101)
Of course, in this specific case, it is faster to use
? prod(i=1, 1000, 1 - X^i, 1 + O(X^1001));
time = 589 ms.
? \ps1000
seriesprecision = 1000 significant terms
? eta(X) - %
time = 8 ms.
%4 = O(X^1001)
The library syntax is
prodeuler
Product of expression expr,
initialized at 1 (i.e. at a real number equal to 1 to the current
The library syntax is
prodinf
Infinite product of expression expr, the formal parameter X starting at a. The evaluation stops when the relative error of the expression minus 1 is less than the default precision. In particular, non-convergent products result in infinite loops. The expression must always evaluate to an element of ℂ. If flag = 1, do the product of the (1+expr) instead.
The library syntax is
solve
Find a real root of expression
expr between a and b, under the condition
expr(X = a) * expr(X = b) ≤ 0. (You will get an error message
The library syntax is
solvestep
Find zeros of a continuous function in the real interval [a,b] by naive interval splitting. This function is heuristic and may or may not find the intended zeros. Binary digits of flag mean
* 1: return as soon as one zero is found, otherwise return all zeros found;
* 2: refine the splitting until at least one zero is found (may loop indefinitely if there are no zeros);
* 4: do a multiplicative search (we must have a > 0 and step > 1), otherwise an additive search; step is the multiplicative or additive step;
* 8: refine the splitting until at least one zero is very close to an integer.
? solvestep(X=0,10,1,sin(X^2),1)
%1 = 1.7724538509055160272981674833411451828
? solvestep(X=1,12,2,besselj(4,X),4)
%2 = [7.588342434..., 11.064709488...]
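A naive Python model of the additive search makes the heuristic explicit (illustration only; PARI's version also implements the flag behaviours above, and a proper root refiner rather than plain bisection):

```python
import math

def solvestep_sketch(f, a, b, step):
    # scan [a,b] in additive steps; bisect each detected sign change
    zeros, x0, f0 = [], a, f(a)
    x = a + step
    while x <= b + 1e-12:
        f1 = f(x)
        if f0 == 0.0:
            zeros.append(x0)
        elif f0 * f1 < 0:
            lo, hi = x0, x
            for _ in range(80):                 # plain bisection
                mid = (lo + hi) / 2.0
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            zeros.append((lo + hi) / 2.0)
        x0, f0 = x, f1
        x += step
    return zeros

z = solvestep_sketch(lambda x: math.sin(x*x), 0.0, 10.0, 1.0)
# sqrt(pi) = 1.77245..., a zero of sin(x^2), lies in the step [1,2]
```

Like the real routine, this misses any pair of zeros falling inside a single step with no net sign change, which is exactly why it is labelled heuristic.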
The library syntax is
sum
Sum of expression expr,
initialized at x, the formal parameter going from a to b. As for
As an extreme example, compare
? sum(i=1, 10^4, 1/i);  \\ rational number: denominator has 4345 digits
time = 236 ms.
? sum(i=1, 5000, 1/i, 0.)
time = 8 ms.
%2 = 9.787606036044382264178477904
The library syntax is
sumalt
Numerical summation of the series expr, which should be an alternating series (-1)^k a_k, the formal variable X starting at a. Use an algorithm of Cohen, Villegas and Zagier (Experiment. Math. 9 (2000), no. 1, 3--12).
If flag = 0, assuming that the a_k are the moments of a positive
measure on [0,1], the relative error is O((3+√8)^{-n}) after using
a_k for k ≤ n. If
If flag = 1, use a variant with more complicated polynomials, see
The conditions for rigorous use are hard to check but the routine is best used heuristically: even divergent alternating series can sometimes be summed by this method, as well as series which are not exactly alternating (see for example Section se:user_defined). It should be used to try and guess the value of an infinite sum. (However, see the example at the end of Section se:userfundef.)
If the series already converges geometrically,
? \p28
? sumalt(i = 1, -(-1)^i / i) - log(2)
time = 0 ms.
%1 = -2.524354897 E-29
? suminf(i = 1, -(-1)^i / i) \\ Had to hit
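The Cohen-Rodriguez Villegas-Zagier scheme itself takes only a few lines; here is a double-precision Python transcription of Algorithm 1 of the cited paper (an illustration, not PARI's code):

```python
import math

def sumalt(a, n=30):
    # accelerated sum of sum_{k>=0} (-1)^k a(k); for a_k the moments of
    # a positive measure on [0,1], the relative error is about
    # (3 + sqrt(8))^(-n)
    d = (3.0 + math.sqrt(8.0)) ** n
    d = (d + 1.0 / d) / 2.0
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c
        s += c * a(k)
        b = (k + n) * (k - n) * b / ((k + 0.5) * (k + 1.0))
    return s / d

val = sumalt(lambda k: 1.0 / (k + 1))   # sum (-1)^k/(k+1) = log 2
```

With n = 30 the theoretical error is about 10^{-23}, i.e. far below double precision, matching the instantaneous convergence seen in the GP session above.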
The library syntax is
sumdiv
Sum of expression expr over the positive divisors of n. This function is a trivial wrapper essentially equivalent to
D = divisors(n); for (i = 1, #D, X = D[i]; eval(expr))
(except that
sumdivmult
Sum of multiplicative expression expr over the positive divisors d of n. Assume that expr evaluates to f(d) where f is multiplicative: f(1) = 1 and f(ab) = f(a)f(b) for coprime a and b.
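For multiplicative f, the sum factors over prime powers, which is the point of this function: ∑_{d | n} f(d) = ∏_{p^e || n} (1 + f(p) + ... + f(p^e)). A Python sketch using trial-division factorization (helper names and the fpk interface are ours):

```python
def sumdivmult_sketch(n, fpk):
    # fpk(p, k) must return f(p^k) for f multiplicative; then
    # sum_{d | n} f(d) = prod over p^e || n of (1 + f(p) + ... + f(p^e))
    s, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            s *= 1 + sum(fpk(p, k) for k in range(1, e + 1))
        p += 1
    if n > 1:                      # leftover prime factor
        s *= 1 + fpk(n, 1)
    return s

sigma12 = sumdivmult_sketch(12, lambda p, k: p**k)   # sigma(12) = 28
```

The gain over a plain sumdiv is that only the factorization of n and one value per prime power are needed, instead of one evaluation per divisor.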
suminf
Infinite sum of expression expr, the formal parameter X starting at a. The evaluation stops when the relative error of the expression is less than the default precision for 3 consecutive evaluations. The expression must always evaluate to a complex number.
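The stopping rule is easy to model in Python (a sketch with an ad hoc eps, not PARI's exact precision-dependent criterion):

```python
def suminf_sketch(f, a, eps=1e-30):
    # naive term-by-term summation; stop once the term is relatively
    # negligible 3 consecutive times
    s, n, small = 0.0, a, 0
    while small < 3:
        t = f(n)
        s += t
        small = small + 1 if abs(t) <= eps * abs(s) else 0
        n += 1
    return s

val = suminf_sketch(lambda i: 2.0**-i, 1)   # geometric series, sums to 1
```

As with the GP function, a slowly convergent series makes this loop essentially useless, which is why the alternatives below (sumalt, sumpos, sumnum) exist.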
If the series converges slowly, make sure
? \p28
? suminf(i = 1, -(-1)^i / i) \\ Had to hit
The library syntax is
sumnum
Numerical summation of f(n) at high accuracy using Euler-MacLaurin,
the variable n taking values from a to + oo , where f is assumed to
have positive values and is a C^ oo function;
? \p500
? z3 = zeta(3);
? sumpos(n = 1, n^-3) - z3
time = 2,332 ms.
%2 = 2.438468843 E-501
? sumnum(n = 1, n^-3) - z3  \\ here slower than sumpos
time = 2,752 ms.
%3 = 0.E-500
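The underlying Euler-MacLaurin formula can be checked directly in double precision: sum f(n) for n < N, then correct the tail by ∫_N^oo f + f(N)/2 - ∑_k B_{2k}/(2k)! f^{(2k-1)}(N). A Python sketch in which the user supplies the tail integral and the odd derivatives explicitly (the interface and names are ours; PARI computes everything automatically at high accuracy):

```python
import math

def sumnum_em(f, F, derivs, N=10, a=1):
    # F(N) = int_N^oo f; derivs = [f', f''', f^(5), ...]
    s = sum(f(n) for n in range(a, N))
    s += F(N) + f(N) / 2.0
    B = [1.0/6, -1.0/30, 1.0/42]          # Bernoulli numbers B_2, B_4, B_6
    for k, d in enumerate(derivs, start=1):
        s -= B[k-1] / math.factorial(2*k) * d(N)
    return s

z3 = sumnum_em(lambda n: n**-3.0,
               lambda N: 1.0 / (2.0 * N**2),          # int_N^oo x^-3 dx
               [lambda x: -3.0 * x**-4.0,             # f'
                lambda x: -60.0 * x**-6.0,            # f'''
                lambda x: -2520.0 * x**-8.0])         # f^(5)
# z3 approximates zeta(3) = 1.2020569...
```

Even with only three Bernoulli corrections and N = 10, the tail estimate is accurate to about 10^{-11}, which illustrates why the method scales to hundreds of digits when the derivatives are handled systematically.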
Complexity.
The function f will be evaluated at O(D log D) real arguments,
where D ~
? sumnummonien(n = 1, 1/n^3) - z3
time = 1,985 ms.
%3 = 0.E-500
The
? tab = sumnuminit();
time = 2,709 ms.
? sumnum(n = 1, 1/n^3, tab) - z3  \\ now much faster than sumpos
time = 40 ms.
%5 = 0.E-500
? tabmon = sumnummonieninit();  \\ Monien summation allows precomputations too
time = 1,781 ms.
? sumnummonien(n = 1, 1/n^3, tabmon) - z3
time = 2 ms.
%7 = 0.E-500

The speedup due to precomputations becomes less impressive when the function f is expensive to evaluate, though:
? sumnum(n = 1, lngamma(1+1/n)/n, tab);
time = 14,180 ms.
? sumnummonien(n = 1, lngamma(1+1/n)/n, tabmon);  \\ fewer evaluations
time = 717 ms.
Behaviour at infinity.
By default,
tab = sumnuminit([+oo, alpha]);  /* alpha < 0 slow decrease */

otherwise loss of accuracy is expected. If the function decreases quickly, like exp(-α n) for some α > 0, then it must be indicated via
tab = sumnuminit([+oo, alpha]);  /* alpha > 0 exponential decrease */

otherwise exponent overflow will occur.
? sumnum(n=1,2^-n)
 ***   at top-level: sumnum(n=1,2^-n)
 ***                 ^----
 *** _^_: overflow in expo().
? tab = sumnuminit([+oo,log(2)]); sumnum(n=1,2^-n, tab)
%1 = 1.000[...]

As a shortcut, one can also input
sumnum(n = [a, asymp], f)

instead of

tab = sumnuminit(asymp);
sumnum(n = a, f, tab)

Further examples.
? \p200
? sumnum(n = 1, n^(-2)) - zeta(2)  \\ accurate, fast
time = 200 ms.
%1 = -2.376364457868949779 E-212
? sumpos(n = 1, n^(-2)) - zeta(2)  \\ even faster
time = 96 ms.
%2 = 0.E-211
? sumpos(n=1,n^(-4/3)) - zeta(4/3)  \\ now much slower
time = 13,045 ms.
%3 = -9.980730723049589073 E-210
? sumnum(n=1,n^(-4/3)) - zeta(4/3)  \\ fast but inaccurate
time = 365 ms.
%4 = -9.85[...]E-85
? sumnum(n=[1,-4/3],n^(-4/3)) - zeta(4/3)  \\ with decrease rate, now accurate
time = 416 ms.
%5 = -4.134874156691972616 E-210
? tab = sumnuminit([+oo,-4/3]);
time = 196 ms.
? sumnum(n=1, n^(-4/3), tab) - zeta(4/3)  \\ faster with precomputations
time = 216 ms.
%5 = -4.134874156691972616 E-210
? sumnum(n=1,-log(n)*n^(-4/3), tab) - zeta'(4/3)
time = 321 ms.
%7 = 7.224147951921607329 E-210
Note that in the case of slow decrease (α < 0), the exact
decrease rate must be indicated, while in the case of exponential decrease,
a rough value will do. In fact, for exponentially decreasing functions,
? sumnum(n=[1, 1], 2^-n)  \\ pretend we decrease as exp(-n)
time = 240 ms.
%8 = 1.000[...]  \\ perfect
? sumpos(n=1, 2^-n)
%9 = 1.000[...]  \\ perfect and instantaneous
The library syntax is
sumnumap
Numerical summation of f(n) at high accuracy using Abel-Plana,
the variable n taking values from a to + oo , where f is
holomorphic in the right half-plane Re(z) > a;
? \p500
? z3 = zeta(3);
? sumpos(n = 1, n^-3) - z3
time = 2,332 ms.
%2 = 2.438468843 E-501
? sumnumap(n = 1, n^-3) - z3  \\ here slower than sumpos
time = 2,565 ms.
%3 = 0.E-500
Complexity.
The function f will be evaluated at O(D log D) real arguments
and O(D) complex arguments,
where D ~
If f satisfies the stronger hypotheses required for Monien summation,
i.e. if f(1/z) is holomorphic in a complex neighbourhood of [0,1],
then
? sumnummonien(n = 1, 1/n^3) - z3
time = 1,128 ms.
%3 = 0.E-500
The
? tab = sumnumapinit();
time = 2,567 ms.
? sumnumap(n = 1, 1/n^3, tab) - z3  \\ now much faster than sumpos
time = 39 ms.
%5 = 0.E-500
? tabmon = sumnummonieninit();  \\ Monien summation allows precomputations too
time = 1,125 ms.
? sumnummonien(n = 1, 1/n^3, tabmon) - z3
time = 2 ms.
%7 = 0.E-500

The speedup due to precomputations becomes less impressive when the function f is expensive to evaluate, though:
? sumnumap(n = 1, lngamma(1+1/n)/n, tab);
time = 10,762 ms.
? sumnummonien(n = 1, lngamma(1+1/n)/n, tabmon);  \\ fewer evaluations
time = 205 ms.
Behaviour at infinity.
By default,
tab = sumnumapinit([+oo, alpha]);  /* alpha < 0 slow decrease */

otherwise loss of accuracy is expected. If the function decreases quickly, like exp(-α n) for some α > 0, then it must be indicated via
tab = sumnumapinit([+oo, alpha]);  /* alpha > 0 exponential decrease */

otherwise exponent overflow will occur.
? sumnumap(n=1,2^-n)
 ***   at top-level: sumnumap(n=1,2^-n)
 ***                 ^----
 *** _^_: overflow in expo().
? tab = sumnumapinit([+oo,log(2)]); sumnumap(n=1,2^-n, tab)
%1 = 1.000[...]

As a shortcut, one can also input
sumnumap(n = [a, asymp], f)

instead of

tab = sumnumapinit(asymp);
sumnumap(n = a, f, tab)

Further examples.
? \p200
? sumnumap(n = 1, n^(-2)) - zeta(2)  \\ accurate, fast
time = 169 ms.
%1 = -4.752728915737899559 E-212
? sumpos(n = 1, n^(-2)) - zeta(2)  \\ even faster
time = 79 ms.
%2 = 0.E-211
? sumpos(n=1,n^(-4/3)) - zeta(4/3)  \\ now much slower
time = 10,518 ms.
%3 = -9.980730723049589073 E-210
? sumnumap(n=1,n^(-4/3)) - zeta(4/3)  \\ fast but inaccurate
time = 309 ms.
%4 = -2.57[...]E-78
? sumnumap(n=[1,-4/3],n^(-4/3)) - zeta(4/3)  \\ with decrease rate, now accurate
time = 329 ms.
%6 = -5.418110963941205497 E-210
? tab = sumnumapinit([+oo,-4/3]);
time = 160 ms.
? sumnumap(n=1, n^(-4/3), tab) - zeta(4/3)  \\ faster with precomputations
time = 175 ms.
%5 = -5.418110963941205497 E-210
? sumnumap(n=1,-log(n)*n^(-4/3), tab) - zeta'(4/3)
time = 258 ms.
%7 = 9.125239518216767153 E-210
Note that in the case of slow decrease (α < 0), the exact
decrease rate must be indicated, while in the case of exponential decrease,
a rough value will do. In fact, for exponentially decreasing functions,
? sumnumap(n=[1, 1], 2^-n)  \\ pretend we decrease as exp(-n)
time = 240 ms.
%8 = 1.000[...]  \\ perfect
? sumpos(n=1, 2^-n)
%9 = 1.000[...]  \\ perfect and instantaneous
The library syntax is
sumnumapinit
Initialize tables for Abel--Plana summation of a series ∑ f(n),
where f is holomorphic in a right half-plane.
If given,
? \p200
? sumnumap(n=1, n^-2);
time = 163 ms.
? tab = sumnumapinit();
time = 160 ms.
? sumnum(n=1, n^-2, tab);  \\ faster
time = 7 ms.
? tab = sumnumapinit([+oo, log(2)]);  \\ decrease like 2^-n
time = 164 ms.
? sumnumap(n=1, 2^-n, tab) - 1
time = 36 ms.
%5 = 3.0127431466707723218 E-282
? tab = sumnumapinit([+oo, -4/3]);  \\ decrease like n^(-4/3)
time = 166 ms.
? sumnumap(n=1, n^(-4/3), tab);
time = 181 ms.
The library syntax is
sumnuminit
Initialize tables for Euler--MacLaurin delta summation of a series with
positive terms. If given,
? \p200
? sumnum(n=1, n^-2);
time = 200 ms.
? tab = sumnuminit();
time = 188 ms.
? sumnum(n=1, n^-2, tab);  \\ faster
time = 8 ms.
? tab = sumnuminit([+oo, log(2)]);  \\ decrease like 2^-n
time = 200 ms.
? sumnum(n=1, 2^-n, tab)
time = 44 ms.
? tab = sumnuminit([+oo, -4/3]);  \\ decrease like n^(-4/3)
time = 200 ms.
? sumnum(n=1, n^(-4/3), tab);
time = 221 ms.
The library syntax is
sumnummonien
Numerical summation ∑_{n ≥ a} f(n) at high accuracy, the variable n taking values from the integer a to + oo using Monien summation, which assumes that f(1/z) has a complex analytic continuation in a (complex) neighbourhood of the segment [0,1].
The function f is evaluated at O(D / log D) real arguments,
where D ~
The library syntax is
sumnummonieninit
Initialize tables for Monien summation of a series ∑_{n ≥ n_0} f(n) where f(1/z) has a complex analytic continuation in a (complex) neighbourhood of the segment [0,1].
By default, assume that f(n) = O(n^{-2}) and has a non-zero asymptotic
expansion
f(n) = ∑_{i ≥ 2} a_i / n^i
at infinity. Note that the sum starts at i = 2! The argument
* a real number β > 0 means
f(n) = ∑_{i ≥ 1} a_i / n^{i + β}
(now the summation starts at 1);
* a vector [α,β] of reals, where we must have α > 0
and α + β > 1 to ensure convergence, means that
f(n) = ∑_{i ≥ 1} a_i / n^{α i + β}
Note that
? \p57
? s = sumnum(n = 1, sin(1/sqrt(n)) / n);  \\ reference point
? \p38
? sumnummonien(n = 1, sin(1/sqrt(n)) / n) - s
%2 = -0.001[...]  \\ completely wrong
? t = sumnummonieninit(1/2);  \\ f(n) = sum_i 1 / n^(i+1/2)
? sumnummonien(n = 1, sin(1/sqrt(n)) / n, t) - s
%3 = 0.E-37  \\ now correct
(As a matter of fact, in the above summation, the
result given by
The argument w is used to sum expressions of the form
∑_{n ≥ n_0} f(n) w(n),
for varying f as above, and fixed weight function w, where we
further assume that the auxiliary sums
g_w(m) = ∑_{n ≥ n_0} w(n) / n^{α m + β}
converge for all m ≥ 1. Note that for non-negative integers k,
and weight w(n) = (log n)^k, the function g_w(m) = ζ^{(k)}(α m +
β) has a simple expression; for general weights, g_w is
computed using
* an integer k ≥ 0, to code w(n) = (log n)^k;
* a
* a vector [w,
The subsequent calls to
? \p300
? sumnummonien(n = 1, n^-2*log(n)) + zeta'(2)
time = 328 ms.
%1 = -1.323[...]E-6  \\ completely wrong, f does not satisfy hypotheses !
? tab = sumnummonieninit(, 1);  \\ codes w(n) = log(n)
time = 3,993 ms.
? sumnummonien(n = 1, n^-2, tab) + zeta'(2)
time = 41 ms.
%3 = -5.562684646268003458 E-309  \\ now perfect
? tab = sumnummonieninit(, n->log(n));  \\ generic, slower
time = 9,808 ms.
? sumnummonien(n = 1, n^-2, tab) + zeta'(2)
time = 40 ms.
%5 = -5.562684646268003458 E-309  \\ identical result
The library syntax is
sumpos
Numerical summation of the series expr, which must be a series of
terms having the same sign, the formal variable X starting at a. The
algorithm used is Van Wijngaarden's trick for converting such a series into
an alternating one, then we use
The routine is heuristic and assumes that expr is more or less a
decreasing function of X. In particular, the result will be completely
wrong if expr is 0 too often. We do not check either that all terms
have the same sign. As
If flag = 1, use
To reach accuracy 10^{-p}, both algorithms require O(p^2) space;
furthermore, assuming the terms decrease polynomially (in O(n^{-C})), both
need to compute O(p^2) terms. The
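Van Wijngaarden's condensation is easy to transcribe: b_k = ∑_{j ≥ 0} 2^j f(2^j(k+1)) turns a positive decreasing series into the alternating series ∑ (-1)^k b_k, which is then fed to the same Cohen-Rodriguez Villegas-Zagier scheme as sumalt. A double-precision Python sketch (illustration only; the inner cutoff is ad hoc):

```python
import math

def sumalt(a, n=30):
    # Cohen-Rodriguez Villegas-Zagier alternating summation
    d = (3.0 + math.sqrt(8.0)) ** n
    d = (d + 1.0 / d) / 2.0
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c
        s += c * a(k)
        b = (k + n) * (k - n) * b / ((k + 0.5) * (k + 1.0))
    return s / d

def sumpos_sketch(f):
    # van Wijngaarden: sum_{n>=1} f(n) = sum_{k>=0} (-1)^k b_k with
    # b_k = sum_{j>=0} 2^j f(2^j (k+1)), f positive and decreasing
    def b(k):
        s, j = 0.0, 0
        while True:
            t = 2.0**j * f(2**j * (k + 1))
            s += t
            if t < 1e-19 * (s or 1.0):   # inner sum has converged
                return s
            j += 1
    return sumalt(b)

val = sumpos_sketch(lambda n: 1.0 / n**2)   # -> pi^2/6
```

As the text warns, this hinges on f being (roughly) positive and decreasing: the condensed terms b_k are otherwise meaningless.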
The library syntax is