### Sums, products, integrals and similar functions

#### derivnum(X = a,expr)

Numerical derivative of expr with respect to X, evaluated at X = a.

 ? derivnum(x=0,sin(exp(x))) - cos(1)
%1 = -1.262177448 E-29


A clumsier approach, which would not work in library mode, is

 ? f(x) = sin(exp(x))
? f'(0) - cos(1)
%1 = -1.262177448 E-29


When a is a power series, derivnum(t = a, f) is computed as f'(a) = (f(a))'/a'.

The library syntax is derivnum(void *E, GEN (*eval)(void*,GEN), GEN a, long prec). Also available is GEN derivfun(void *E, GEN (*eval)(void *, GEN), GEN a, long prec), which also allows power series for a.
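
For intuition, the simplest scheme of this kind is a symmetric difference quotient. The Python sketch below is purely illustrative (derivnum itself works at full realprecision with a more sophisticated method; the step h here is an assumption tuned to double precision):

```python
import math

def derivnum(f, a, h=1e-6):
    """Symmetric difference quotient: O(h^2) error for smooth f."""
    return (f(a + h) - f(a - h)) / (2 * h)

# d/dx sin(exp(x)) at x = 0 is cos(1) = 0.5403...
print(derivnum(lambda x: math.sin(math.exp(x)), 0.0))
```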

#### intcirc(X = a,R,expr,{tab})

Numerical integration of (2iPi)^{-1}expr with respect to X on the circle |X - a| = R. In other words, when expr is a meromorphic function, this computes the sum of its residues in the corresponding disk. tab is as in intnum, except that if computed with intnuminit it should be with the endpoints [-1, 1].

 ? \p105
? intcirc(s=1, 0.5, zeta(s)) - 1
%1 = -2.398082982 E-104 - 7.94487211 E-107*I


The library syntax is intcirc(void *E, GEN (*eval)(void*,GEN), GEN a,GEN R,GEN tab, long prec).
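
The underlying idea is classical: parametrizing the circle by X = a + R e^{2 Pi i t} turns the contour integral into a periodic integral over t in [0,1], for which the plain trapezoidal rule already converges exponentially fast for analytic integrands. A minimal Python sketch, not PARI's implementation (the node count n is an assumption):

```python
import cmath

def intcirc(f, a, R, n=64):
    """(2*pi*i)^(-1) times the integral of f over |X - a| = R.
    With X = a + R*exp(2*pi*i*k/n), dX = 2*pi*i*(X - a) dk/n, so the
    1/(2*pi*i) prefactor cancels and each node contributes f(X)*(X - a)/n."""
    s = 0j
    for k in range(n):
        z = a + R * cmath.exp(2j * cmath.pi * k / n)
        s += f(z) * (z - a)
    return s / n

# 1/(X^2 - 1) has a single pole inside |X - 1| = 1/2, of residue 1/2
print(intcirc(lambda z: 1 / (z**2 - 1), 1.0, 0.5))
```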

#### intfouriercos(X = a,b,z,expr,{tab})

Numerical integration of expr(X)cos(2Pi zX) from a to b, in other words Fourier cosine transform (from a to b) of the function represented by expr. Endpoints a and b are coded as in intnum, and are not necessarily at infinity, but if they are, oscillations (i.e. [[±1],alpha I]) are forbidden.

The library syntax is intfouriercos(void *E, GEN (*eval)(void*,GEN), GEN a, GEN b, GEN z, GEN tab, long prec).

#### intfourierexp(X = a,b,z,expr,{tab})

Numerical integration of expr(X)exp(-2iPi zX) from a to b, in other words Fourier transform (from a to b) of the function represented by expr. Note the minus sign. Endpoints a and b are coded as in intnum, and are not necessarily at infinity but if they are, oscillations (i.e. [[±1],alpha I]) are forbidden.

The library syntax is intfourierexp(void *E, GEN (*eval)(void*,GEN), GEN a, GEN b, GEN z, GEN tab, long prec).

#### intfouriersin(X = a,b,z,expr,{tab})

Numerical integration of expr(X)sin(2Pi zX) from a to b, in other words Fourier sine transform (from a to b) of the function represented by expr. Endpoints a and b are coded as in intnum, and are not necessarily at infinity but if they are, oscillations (i.e. [[±1],alpha I]) are forbidden.

The library syntax is intfouriersin(void *E, GEN (*eval)(void*,GEN), GEN a, GEN b, GEN z, GEN tab, long prec).
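
For finite endpoints such a transform is just an ordinary oscillatory integral. As a toy stand-in for these three functions, here is a midpoint-rule sketch of the sine case in Python (uniform sampling is an assumption that only makes sense for finite a, b and moderate z; PARI's doubly-exponential tables are far more accurate):

```python
import math

def intfouriersin(f, a, b, z, n=2000):
    """Midpoint-rule approximation of the Fourier sine transform:
    the integral of f(x)*sin(2*pi*z*x) on the finite interval [a, b]."""
    h = (b - a) / n
    s = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        s += f(x) * math.sin(2 * math.pi * z * x)
    return h * s

# For f = 1 on [0,1] and z = 1/4, the exact value is
# (1 - cos(pi/2))/(pi/2) = 2/pi = 0.6366...
print(intfouriersin(lambda x: 1.0, 0.0, 1.0, 0.25))
```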

#### intfuncinit(X = a,b,expr,{flag = 0},{m = 0})

Initialize tables for use with integral transforms such as intmellininv, etc., where a and b are coded as in intnum, expr is the function s(X) to which the integral transform is to be applied (which will multiply the weights of integration) and m is as in intnuminit. If flag is nonzero, assumes that s(-X) = \overline{s(X)}, which makes the computation twice as fast. See intmellininvshort for examples of the use of this function, which is particularly useful when the function s(X) is lengthy to compute, such as a gamma product.

The library syntax is intfuncinit(void *E, GEN (*eval)(void*,GEN), GEN a, GEN b, long m, long flag, long prec). Note that the order of m and flag is reversed compared to the GP syntax.

#### intlaplaceinv(X = sig,z,expr,{tab})

Numerical integration of (2iPi)^{-1}expr(X)e^{Xz} with respect to X on the line Re(X) = sig. In other words, inverse Laplace transform of the function corresponding to expr at the value z.

sig is coded as follows. Either it is a real number sigma, equal to the abscissa of integration, and then the integrand is assumed to be slowly decreasing when the imaginary part of the variable tends to ± oo . Or it is a two component vector [sigma,alpha], where sigma is as before, and either alpha = 0 for slowly decreasing functions, or alpha > 0 for functions decreasing like exp(-alpha t). Note that it is not necessary to choose the exact value of alpha. tab is as in intnum.

It is often a good idea to use this function with a value of m one or two higher than the default (which can be obtained from intnumstep), or to increase the abscissa of integration sigma. For example:

 ? \p 105
? intlaplaceinv(x=2, 1, 1/x) - 1
time = 350 ms.
%1 = 7.37... E-55 + 1.72... E-54*I \\ not so good
? m = intnumstep()
%2 = 7
? intlaplaceinv(x=2, 1, 1/x, m+1) - 1
time = 700 ms.
%3 = 3.95... E-97 + 4.76... E-98*I \\ better
? intlaplaceinv(x=2, 1, 1/x, m+2) - 1
time = 1400 ms.
%4 = 0.E-105 + 0.E-106*I \\ perfect but slow.
? intlaplaceinv(x=5, 1, 1/x) - 1
time = 340 ms.
%5 = -5.98... E-85 + 8.08... E-85*I \\ better than %1
? intlaplaceinv(x=5, 1, 1/x, m+1) - 1
time = 680 ms.
%6 = -1.09... E-106 + 0.E-104*I \\ perfect, fast.
? intlaplaceinv(x=10, 1, 1/x) - 1
time = 340 ms.
%7 = -4.36... E-106 + 0.E-102*I \\ perfect, fastest, but why sig = 10?
? intlaplaceinv(x=100, 1, 1/x) - 1
time = 330 ms.
%8 = 1.07... E-72 + 3.2... E-72*I \\ too far now...


The library syntax is intlaplaceinv(void *E, GEN (*eval)(void*,GEN), GEN sig,GEN z, GEN tab, long prec).

#### intmellininv(X = sig,z,expr,{tab})

Numerical integration of (2iPi)^{-1}expr(X)z^{-X} with respect to X on the line Re(X) = sig, in other words, inverse Mellin transform of the function corresponding to expr at the value z.

sig is coded as follows. Either it is a real number sigma, equal to the abscissa of integration, and then the integrand is assumed to decrease exponentially fast, of the order of exp(-t), when the imaginary part of the variable tends to ±oo. Or it is a two component vector [sigma,alpha], where sigma is as before, and either alpha = 0 for slowly decreasing functions, or alpha > 0 for functions decreasing like exp(-alpha t), such as gamma products. Note that it is not necessary to choose the exact value of alpha, and that alpha = 1 (equivalent to sig alone) is usually sufficient. tab is as in intnum.

Like all similar functions, this one is provided for the convenience of the user, who could use intnum directly. However, it is in general better to use intmellininvshort.

 ? \p 105
? intmellininv(s=2,4, gamma(s)^3);
time = 1,190 ms. \\ reasonable.
? \p 308
? intmellininv(s=2,4, gamma(s)^3);
time = 51,300 ms. \\ slow because of Gamma(s)^3.

The library syntax is intmellininv(void *E, GEN (*eval)(void*,GEN), GEN sig, GEN z, GEN tab, long prec).

#### intmellininvshort(sig,z,tab)

Numerical integration of (2iPi)^{-1}s(X)z^{-X} with respect to X on the line Re(X) = sig. In other words, inverse Mellin transform of s(X) at the value z. Here s(X) is implicitly contained in tab in intfuncinit format, typically

 tab = intfuncinit(T = [-1], [1], s(sig + I*T))

or similar commands. Take the example of the inverse Mellin transform of Gamma(s)^3 given in intmellininv:

? \p 105
? oo = [1]; \\ for clarity
? A = intmellininv(s=2,4, gamma(s)^3);
time = 2,500 ms. \\ not too fast because of Gamma(s)^3.
\\ function of real type, decreasing as exp(-3Pi/2 |t|)
? tab = intfuncinit(t=[-oo, 3*Pi/2],[oo, 3*Pi/2], gamma(2+I*t)^3, 1);
time = 1,370 ms.
? intmellininvshort(2,4, tab) - A
time = 50 ms.
%4 = -1.26... E-109 - 3.25... E-109*I \\ 50 times faster than A and perfect.
? tab2 = intfuncinit(t=-oo, oo, gamma(2+I*t)^3, 1);
? intmellininvshort(2,4, tab2)
%6 = -1.2...E-42 - 3.2...E-109*I  \\ 63 digits lost

In the computation of tab, it was not essential to include the exact exponential decrease of Gamma(2+it)^3. But as the last example shows, a rough indication must be given; otherwise slow decrease is assumed, resulting in a catastrophic loss of accuracy.

The library syntax is GEN intmellininvshort(GEN sig, GEN z, GEN tab, long prec).

#### intnum(X = a,b,expr,{tab})

Numerical integration of expr on ]a,b[ with respect to X. The integrand may have values belonging to a vector space over the real numbers; in particular, it can be complex-valued or vector-valued. But it is assumed that the function is regular on ]a,b[. If the endpoints a and b are finite and the function is regular there, the situation is simple:

? intnum(x = 0,1, x^2)
%1 = 0.3333333333333333333333333333
? intnum(x = 0,Pi/2, [cos(x), sin(x)])
%2 = [1.000000000000000000000000000, 1.000000000000000000000000000]

An endpoint equal to ±oo is coded as the single-component vector [±1]. You are welcome to set, e.g., oo = [1] or INFINITY = [1]; then using +oo, -oo, -INFINITY, etc. will have the expected behavior.

? oo = [1];  \\ for clarity
? intnum(x = 1,+oo, 1/x^2)
%2 = 1.000000000000000000000000000

In basic usage, it is assumed that the function does not decrease
exponentially fast at infinity:

? intnum(x=0,+oo, exp(-x))
***   at top-level: intnum(x=0,+oo,exp(-
***                 ^--------------------
*** exp: exponent (expo) overflow

We shall see in a moment how to avoid the last problem, after describing the last argument tab, which is both optional and technical. The routine uses weights, which are mostly independent of the function being integrated, evaluated at many sampling points. If tab is

* a positive integer m, we use 2^m sampling points, hopefully increasing accuracy. But note that the running time is roughly proportional to 2^m. One may try consecutive values of m until they give the same value up to an accepted error. If tab is omitted, the algorithm guesses a reasonable value for m depending on the current precision only, which should be sufficient for regular functions. That value may be obtained from intnumstep, and increased in case of difficulties.

* a set of integration tables as output by intnuminit, they are used directly. This is useful if several integrations of the same type are performed (on the same kind of interval and functions, for a given accuracy), in particular for multivariate integrals, since we then skip expensive precomputations.

**Specifying the behavior at endpoints.** This is done as follows. An endpoint a is either given as such (a scalar, real or complex, or [±1] for ±oo), or as a two component vector [a,alpha], to indicate the behavior of the integrand in a neighborhood of a.

If a is finite, the code [a,alpha] means the function has a singularity of the form (x-a)^{alpha}, up to logarithms. (If alpha >= 0, we only assume the function is regular, which is the default assumption.) If a wrong singularity exponent is used, the result will lose a catastrophic number of decimals:

? intnum(x=0, 1, x^(-1/2))         \\ assume x^{-1/2} is regular at 0
%1 = 1.999999999999999999990291881
? intnum(x=[0,-1/2], 1, x^(-1/2))  \\ no, it's not
%2 = 2.000000000000000000000000000
? intnum(x=[0,-1/10], 1, x^(-1/2))
%3 = 1.999999999999999999999946438 \\ using a wrong exponent is bad

If a is ±oo, which is coded as [±1], the situation is more complicated, and [[±1],alpha] means:

* alpha = 0 (or no alpha at all, i.e. simply [±1]) assumes that the integrand tends to zero, but not exponentially fast, and does not oscillate like sin(x)/x.

* alpha > 0 assumes that the function tends to zero exponentially fast, approximately as exp(-alpha x). This includes oscillating but quickly decreasing functions such as exp(-x)sin(x).

? oo = [1];
? intnum(x=0, +oo, exp(-2*x))
***   at top-level: intnum(x=0,+oo,exp(-
***                 ^--------------------
*** exp: exponent (expo) overflow
? intnum(x=0, [+oo, 2], exp(-2*x))
%1 = 0.5000000000000000000000000000 \\ OK!
? intnum(x=0, [+oo, 4], exp(-2*x))
%2 = 0.4999999999999999999961990984 \\ wrong exponent  ==>  imprecise result
? intnum(x=0, [+oo, 20], exp(-2*x))
%3 = 0.4999524997739071283804510227 \\ disaster

* alpha < -1 assumes that the function tends to 0 slowly, like x^{alpha}. Here it is essential to give the correct alpha, if possible, but on the other hand alpha <= -2 is equivalent to alpha = 0, in other words to no alpha at all.

The last two codes are reserved for oscillating functions. Let k > 0 be real, and g(x) a non-oscillating function tending slowly to 0 (e.g. like a negative power of x); then

* alpha = k*I assumes that the function behaves like cos(kx)g(x).

* alpha = -k*I assumes that the function behaves like sin(kx)g(x).

Here it is critical to give the exact value of k. If the oscillating part is not a pure sine or cosine, one must expand it into a Fourier series, use the above codings, and sum the resulting contributions. Otherwise you will get nonsense. Note that cos(kx), and similarly sin(kx), means that very function, and not a translated version such as cos(kx+a).

**Note.** If f(x) = cos(kx)g(x) where g(x) tends to zero exponentially fast as exp(-alpha x), it is up to the user to choose between [[±1],alpha] and [[±1],k*I], but a good rule of thumb is: if the oscillations are much weaker than the exponential decrease, choose [[±1],alpha]; otherwise choose [[±1],k*I], although the latter can reasonably be used in all cases, while the former cannot. To take a specific example, in the inverse Mellin transform the integrand is almost always the product of an exponentially decreasing factor and an oscillating one. If we choose the oscillating type of integral we perhaps obtain the best results, at the expense of having to recompute our functions for a different value of the variable z giving the transform, preventing us from using a function such as intmellininvshort. On the other hand, using the exponential type of integral, we obtain less accurate results, but we skip expensive recomputations. See intmellininvshort and intfuncinit for more explanations.

We shall now see many examples to get a feeling for what the various parameters achieve. All examples below assume precision is set to 105 decimal digits. We first type

? \p 105
? oo = [1]  \\ for clarity

**Apparent singularities.** Even if the function f(x) represented by expr has no singularities, it may be important to define the function differently near special points. For instance, if f(x) = 1/(exp(x)-1) - exp(-x)/x, then int_0^oo f(x)dx = gamma, Euler's constant. But

? f(x) = 1/(exp(x)-1) - exp(-x)/x
? intnum(x = 0, [oo,1],  f(x)) - Euler
%1 = 6.00... E-67

thus only correct to 67 decimal digits. This is because, close to 0, the function f is computed with an enormous loss of accuracy. A better solution is

? f(x) = 1/(exp(x)-1)-exp(-x)/x
? F = truncate( f(t + O(t^7)) ); \\ expansion around t = 0
? g(x) = if (x > 1e-18, f(x), subst(F,t,x))  \\ note that 6*18 > 105
? intnum(x = 0, [oo,1],  g(x)) - Euler
%2 = 0.E-106 \\ perfect

It is up to the user to determine constants such as the 10^{-18} and 7 used above.

**True singularities.** With true singularities the result is worse. For instance

? intnum(x = 0, 1,  1/sqrt(x)) - 2
%1 = -1.92... E-59 \\ only 59 correct decimals

? intnum(x = [0,-1/2], 1,  1/sqrt(x)) - 2
%2 = 0.E-105 \\ better

**Oscillating functions.**

? intnum(x = 0, oo, sin(x) / x) - Pi/2
%1 = 20.78.. \\ nonsense
? intnum(x = 0, [oo,1], sin(x)/x) - Pi/2
? intnum(x = 0, [oo,-I], sin(x)/x) - Pi/2
%3 = 0.E-105 \\ perfect
? intnum(x = 0, [oo,-I], sin(2*x)/x) - Pi/2  \\ oops, wrong k
%4 = 0.07...
? intnum(x = 0, [oo,-2*I], sin(2*x)/x) - Pi/2
%5 = 0.E-105 \\ perfect

? intnum(x = 0, [oo,-I], sin(x)^3/x) - Pi/4
? sin(x)^3 - (3*sin(x)-sin(3*x))/4
%7 = O(x^17)

We may use the above linearization and compute two oscillating integrals with "infinite endpoints" [oo, -I] and [oo, -3*I] respectively, or notice the obvious change of variable and reduce to the single integral (1/2) int_0^oo sin(x)/x dx. We finish with some more complicated examples:

? intnum(x = 0, [oo,-I], (1-cos(x))/x^2) - Pi/2
? intnum(x = 0, 1, (1-cos(x))/x^2) \
+ intnum(x = 1, oo, 1/x^2) - intnum(x = 1, [oo,I], cos(x)/x^2) - Pi/2
%2 = -2.18... E-106 \\ OK

? intnum(x = 0, [oo, 1], sin(x)^3*exp(-x)) - 0.3
%3 = 5.45... E-107 \\ OK
? intnum(x = 0, [oo,-I], sin(x)^3*exp(-x)) - 0.3
%4 = -1.33... E-89 \\ lost 16 decimals. Try higher m:
? m = intnumstep()
%5 = 7 \\ the value of m actually used above.
? tab = intnuminit(0,[oo,-I], m+1); \\ try m one higher.
? intnum(x = 0, oo, sin(x)^3*exp(-x), tab) - 0.3
%6 = 5.45... E-107 \\ OK this time.

**Warning.** Like sumalt, intnum often assigns a reasonable value to diverging integrals. Use these values at your own risk! For example:

? intnum(x = 0, [oo, -I], x^2*sin(x))
%1 = -2.0000000000...

Note the formula int_0^oo sin(x)/x^s dx = cos(Pi s/2) Gamma(1-s), a priori valid only for 0 < Re(s) < 2, but the right-hand side provides an analytic continuation which may be evaluated at s = -2...

**Multivariate integration.** Using successive univariate integration with respect to different formal parameters, it is immediate to do naive multivariate integration. But it is important to use a suitable intnuminit to precompute data for the internal integrations at least!

For example, to compute the double integral of the function x^2+y^2 on the unit disc x^2+y^2 <= 1, we can write

? tab = intnuminit(-1,1);
? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab), tab)

The first tab is essential, the second optional. Compare:

? tab = intnuminit(-1,1);
time = 30 ms.
? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2));
time = 54,410 ms. \\ slow
? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab), tab);
time = 7,210 ms.  \\ faster

However, the intnuminit program is usually pessimistic when it comes to choosing the integration step 2^{-m}. It is often possible to improve the speed by trial and error. Continuing the above example:

? test(M) =
{
tab = intnuminit(-1,1, M);
intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2,tab), tab) - Pi/2
}
? m = intnumstep() \\ what value of m did it take?
%1 = 7
? test(m - 1)
time = 1,790 ms.
%2 = -2.05... E-104 \\ 4 = 2^2 times faster and still OK.
? test(m - 2)
time = 430 ms.
%3 = -1.11... E-104 \\ 16 = 2^4 times faster and still OK.
? test(m - 3)
time = 120 ms.
%4 = -7.23... E-60 \\ 64 = 2^6 times faster, lost 45 decimals.

The library syntax is intnum(void *E, GEN (*eval)(void*,GEN), GEN a, GEN b, GEN tab, long prec), where an omitted tab is coded as NULL.
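
The change of variable behind intnum on a compact interval is the doubly-exponential (tanh-sinh) substitution x = tanh((Pi/2) sinh t), after which the trapezoidal rule with step h = 1/2^m converges extremely fast for integrands regular on ]-1,1[. A self-contained Python sketch for this interval type only (the truncation point tmax is an assumption tuned to double precision; PARI uses different substitutions for singular endpoints and for infinite intervals):

```python
import math

def intnum_de(f, m=7, tmax=4.0):
    """Tanh-sinh quadrature on ]-1, 1[ with step h = 1/2^m:
    abscissas phi(t) = tanh((pi/2)*sinh t) and weights phi'(t),
    summed by the trapezoidal rule over |t| <= tmax."""
    h = 1.0 / 2**m
    total = 0.0
    for k in range(-int(tmax / h), int(tmax / h) + 1):
        t = k * h
        u = (math.pi / 2) * math.sinh(t)
        w = (math.pi / 2) * math.cosh(t) / math.cosh(u)**2  # phi'(t)
        total += w * f(math.tanh(u))
    return h * total

print(intnum_de(lambda x: x**2))  # integral of x^2 over ]-1,1[ is 2/3
```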

#### intnuminit(a,b,{m = 0})

Initialize tables for integration from a to b, where a and b are coded as in intnum. Only the compactness, the possible existence of singularities, the speed of decrease or the oscillations at infinity are taken into account, and not the values. For instance intnuminit(-1,1) is equivalent to intnuminit(0,Pi), and intnuminit([0,-1/2],[1]) is equivalent to intnuminit([-1],[-1,-1/2]). If m is not given, it is computed according to the current precision. Otherwise the integration step is 1/2^m. Reasonable values of m are m = 6 or m = 7 for 100 decimal digits, and m = 9 for 1000 decimal digits.

The result is technical, but in some cases it is useful to know the output. Let x = phi(t) be the change of variable which is used. tab[1] contains the integer m as above, either given by the user or computed from the default precision, and can be recomputed directly using the function intnumstep. tab[2] and tab[3] contain respectively the abscissa and weight corresponding to t = 0 (phi(0) and phi'(0)). tab[4] and tab[5] contain the abscissas and weights corresponding to positive t = nh for 1 <= n <= N and h = 1/2^m (phi(nh) and phi'(nh)). Finally tab[6] and tab[7] contain either the abscissas and weights corresponding to negative t = nh for -N <= n <= -1, or may be empty (but not always) if phi(t) is an odd function (implicitly we would have tab[6] = -tab[4] and tab[7] = tab[5]).

The library syntax is GEN intnuminit(GEN a, GEN b, long m, long prec).

#### intnuminitgen(t,a,b,ph,{m = 0},{flag = 0})

Initialize tables for integration from a to b using abscissas ph(t) and weights ph'(t). Note that there is no equal sign after the variable name t since t always goes from -oo to +oo; it is ph(t) which goes from a to b, and this is not checked. If flag = 1 or 2, multiply the reserved table length by 4^{flag}, to avoid the corresponding error.

The library syntax is intnuminitgen(void *E, GEN (*eval)(void*,GEN), GEN a, GEN b, long m, long flag, long prec).

#### intnumromb(X = a,b,expr,{flag = 0})

Numerical integration of expr (smooth in ]a,b[) with respect to X. Suitable for low accuracy; if expr is very regular (e.g. analytic in a large region) and high accuracy is desired, try intnum first.

Set flag = 0 (or omit it altogether) when a and b are not too large, the
function is smooth, and can be evaluated exactly everywhere on the interval
[a,b].

If flag = 1, uses a general driver routine for doing numerical integration,
making no particular assumption (slow).

flag = 2 is tailored for use when a or b are infinite. One must have ab > 0, and in fact if for example b = +oo, then it is preferable to have a as large as possible, at least a >= 1.

If flag = 3, the function is allowed to be undefined (but continuous) at a
or b, for example the function sin(x)/x at x = 0.

The user should not require too much accuracy: 18 or 28 decimal digits is OK,
but not much more. In addition, analytical cleanup of the integral must have
been done: there must be no singularities in the interval or at the
boundaries. In practice this can be accomplished with a simple change of
variable. Furthermore, for improper integrals, where one or both of the
limits of integration are plus or minus infinity, the function must decrease
sufficiently rapidly at infinity. This can often be accomplished through
integration by parts. Finally, the function to be integrated should not be
very small (compared to the current precision) on the entire interval. This
can of course be accomplished by just multiplying by an appropriate constant.

Note that infinity can be represented with essentially no loss of accuracy by an appropriate huge number. However, beware of real underflow when dealing with rapidly decreasing functions. For example, in order to compute int_0^oo e^{-x^2}dx to 28 decimal digits, one can set infinity equal to 10, and certainly not to 1e1000.

The library syntax is intnumromb(void *E, GEN (*eval)(void*,GEN), GEN a, GEN b, long flag, long prec), where eval(x, E) returns the value of the function at x. You may store any additional information required by eval in E, or set it to NULL.

#### intnumstep()

Give the value of m used in all the intnum and sumnum programs, such that the integration step is equal to 1/2^m.

The library syntax is long intnumstep(long prec).

#### prod(X = a,b,expr,{x = 1})

Product of expression expr, initialized at x, the formal parameter X going from a to b. As for sum, the main purpose of the initialization parameter x is to force the type of the operations being performed. For example if it is set equal to the integer 1, operations will start being done exactly. If it is set equal to the real 1., they will be done using real numbers having the default precision. If it is set equal to the power series 1+O(X^k) for a certain k, they will be done using power series of precision at most k. These are the three most common initializations.

As an extreme example, compare

? prod(i=1, 100, 1 - X^i);  \\ this has degree 5050 !!
time = 128 ms.
? prod(i=1, 100, 1 - X^i, 1 + O(X^101))
time = 8 ms.
%2 = 1 - X - X^2 + X^5 + X^7 - X^12 - X^15 + X^22 + X^26 - X^35 - X^40 + \
X^51 + X^57 - X^70 - X^77 + X^92 + X^100 + O(X^101)

Of course, in this specific case, it is faster to use eta, which is computed using Euler's formula.

? prod(i=1, 1000, 1 - X^i, 1 + O(X^1001));
time = 589 ms.
? \ps1000
seriesprecision = 1000 significant terms
? eta(X) - %
time = 8ms.
%4 = O(X^1001)

The library syntax is produit(GEN a, GEN b, char *expr, GEN x).
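
The power-series initialization amounts to multiplying polynomials truncated to O(X^N). A Python sketch with plain coefficient lists (illustrative only; the function name is made up):

```python
def prod_trunc(factors, N):
    """Product of polynomials given as coefficient lists, truncated
    to O(X^N), mimicking prod(i=1,n, 1 - X^i, 1 + O(X^N))."""
    acc = [0] * N
    acc[0] = 1
    for f in factors:
        out = [0] * N
        for i, c in enumerate(f):
            if c == 0 or i >= N:
                continue  # coefficient of X^i would only affect O(X^N)
            for j in range(N - i):
                out[i + j] += c * acc[j]
        acc = out
    return acc

# prod_{i=1..100} (1 - X^i) mod X^12: the pentagonal number theorem
# predicts 1 - X - X^2 + X^5 + X^7 + O(X^12)
factors = [[1] + [0] * (i - 1) + [-1] for i in range(1, 101)]
print(prod_trunc(factors, 12))
```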

#### prodeuler(X = a,b,expr)

Product of expression expr, initialized at 1. (i.e. to a real number equal to 1 to the current realprecision), the formal parameter X ranging over the prime numbers between a and b.

The library syntax is prodeuler(void *E, GEN (*eval)(void*,GEN), GEN a, GEN b, long prec).
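
Truncating the range already illustrates the idea: for instance the Euler product prod_p (1 - p^{-2})^{-1} tends to zeta(2) = Pi^2/6. A Python sketch with a simple sieve (the bound 10^5 is an assumption; prodeuler itself only multiplies over the requested range, at full precision):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

def prodeuler(f, a, b):
    """Product of f(p) over primes a <= p <= b."""
    result = 1.0
    for p in primes_up_to(b):
        if p >= a:
            result *= f(p)
    return result

# partial Euler product for zeta(2); tends to pi^2/6 = 1.6449... as b grows
print(prodeuler(lambda p: 1 / (1 - p**-2.0), 2, 10**5))
```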

#### prodinf(X = a,expr,{flag = 0})

Infinite product of expression expr, the formal parameter X starting at a. The evaluation stops when the relative error of the expression minus 1 is less than the default precision. In particular, non-convergent products result in infinite loops. The expression must always evaluate to an element of C.

If flag = 1, do the product of the (1+expr) instead.

The library syntax is prodinf(void *E, GEN (*eval)(void*,GEN), GEN a, long prec) (flag = 0), or prodinf1 with the same arguments (flag = 1).
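
The stopping rule can be sketched as follows in Python (the tolerance 1e-15 stands in for the default precision; as in GP, a product whose factors do not tend to 1 never terminates):

```python
def prodinf(f, a, eps=1e-15):
    """Infinite product of f(k) for k >= a; stop once the current
    factor is within eps of 1."""
    result, k = 1.0, a
    while True:
        t = f(k)
        result *= t
        if abs(t - 1) < eps:
            return result
        k += 1

# prod_{k>=1} (1 + 2^-k) converges (its log is dominated by sum 2^-k)
print(prodinf(lambda k: 1 + 2.0**-k, 1))
```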

#### solve(X = a,b,expr)

Find a real root of expression expr between a and b, under the condition expr(X = a) * expr(X = b) <= 0. (You will get the error message "roots must be bracketed in solve" if this does not hold.) This routine uses Brent's method and can fail miserably if expr is not defined on the whole of [a,b] (try solve(x = 1, 2, tan(x))).

The library syntax is zbrent(void *E, GEN (*eval)(void*,GEN), GEN a, GEN b, long prec).
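
The bracketing condition is what makes the search safe. As an illustration, here is plain bisection in Python, a simplified stand-in for the Brent method actually used (Brent adds secant and inverse quadratic steps for speed, but relies on the same sign change):

```python
def solve(f, a, b, tol=1e-12):
    """Root of f in [a, b] by bisection; requires f(a)*f(b) <= 0."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("roots must be bracketed in solve")
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m                # sign change is in [a, m]
        else:
            a, fa = m, f(m)      # sign change is in [m, b]
    return (a + b) / 2

print(solve(lambda x: x * x - 2, 1.0, 2.0))  # sqrt(2) = 1.41421356...
```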

#### sum(X = a,b,expr,{x = 0})

Sum of expression expr, initialized at x, the formal parameter going from a to b. As for prod, the initialization parameter x may be given to force the type of the operations being performed.

As an extreme example, compare

? sum(i=1, 10^4, 1/i); \\ rational number: denominator has 4345 digits.
time = 236 ms.
? sum(i=1, 5000, 1/i, 0.)
time = 8 ms.
%2 = 9.787606036044382264178477904

The library syntax is somme(GEN a, GEN b, char *expr, GEN x).

#### sumalt(X = a,expr,{flag = 0})

Numerical summation of the series expr, which should be an alternating series, the formal variable X starting at a. Use an algorithm of Cohen, Villegas and Zagier (Experiment. Math. 9 (2000), no. 1, 3--12).

If flag = 1, use a variant with slightly different polynomials. Sometimes
faster.

The routine is heuristic and a rigorous proof assumes that the values of expr are the moments of a positive measure on [0,1]. Divergent alternating series can sometimes be summed by this method, as well as series which are not exactly alternating (see for example Section [Label: se:user_defined]). It should be used to try and guess the value of an infinite sum. (However, see the example at the end of Section [Label: se:userfundef].)

If the series already converges geometrically, suminf is often a better choice:

? \p28
? sumalt(i = 1, -(-1)^i / i)  - log(2)
time = 0 ms.
%1 = -2.524354897 E-29
? suminf(i = 1, -(-1)^i / i)   \\ Had to hit < C-C >
***   at top-level: suminf(i=1,-(-1)^i/i)
***                                ^------
*** suminf: user interrupt after 10min, 20,100 ms.
? \p1000
? sumalt(i = 1, -(-1)^i / i)  - log(2)
time = 90 ms.
%2 = 4.459597722 E-1002

? sumalt(i = 0, (-1)^i / i!) - exp(-1)
time = 670 ms.
%3 = -4.03698781490633483156497361352190615794353338591897830587 E-944
? suminf(i = 0, (-1)^i / i!) - exp(-1)
time = 110 ms.
%4 = -8.39147638 E-1000   \\  faster and more accurate

The library syntax is sumalt(void *E, GEN (*eval)(void*,GEN), GEN a, long prec). Also available is sumalt2 with the same arguments (flag = 1).
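
The algorithm itself is remarkably short. Below is a direct Python transcription of Algorithm 1 of the Cohen-Villegas-Zagier paper, summing sum_{k >= 0} (-1)^k a(k) with error roughly (3+sqrt(8))^{-n} (the level n = 30 is an assumption suited to double precision; sumalt works at the full realprecision):

```python
import math

def sumalt(a, n=30):
    """Cohen-Villegas-Zagier acceleration of sum_{k>=0} (-1)^k a(k):
    a weighted combination of the first n terms, with Chebyshev-like
    weights c/d; the error decreases like (3 + sqrt(8))^(-n)."""
    d = (3 + math.sqrt(8))**n
    d = (d + 1 / d) / 2
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c
        s += c * a(k)
        b = (k + n) * (k - n) * b / ((k + 0.5) * (k + 1))
    return s / d

print(sumalt(lambda k: 1 / (k + 1)))  # log(2) = 0.6931...
```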

#### sumdiv(n,X,expr)

Sum of expression expr over the positive divisors of n. This function is a trivial wrapper essentially equivalent to

 D = divisors(n);
 for (i = 1, #D, X = D[i]; eval(expr))

(except that X is lexically scoped to the sumdiv loop). If expr is a multiplicative function, use sumdivmult.

#### sumdivmult(n,d,expr)

Sum of multiplicative expression expr over the positive divisors d of n. Assume that expr evaluates to f(d) where f is multiplicative: f(1) = 1 and f(ab) = f(a)f(b) for coprime a and b.

#### suminf(X = a,expr)

Infinite sum of expression expr, the formal parameter X starting at a. The evaluation stops when the relative error of the expression is less than the default precision for 3 consecutive evaluations. The expression must always evaluate to a complex number.

If the series converges slowly, make sure realprecision is low (even 28 digits may be too much). In this case, if the series is alternating or the terms have a constant sign, sumalt or sumpos should be used instead:

? \p28
? suminf(i = 1, -(-1)^i / i)   \\ Had to hit < C-C >
***   at top-level: suminf(i=1,-(-1)^i/i)
***                                ^------
*** suminf: user interrupt after 10min, 20,100 ms.
? sumalt(i = 1, -(-1)^i / i) - log(2)
time = 0 ms.
%1 = -2.524354897 E-29

The library syntax is suminf(void *E, GEN (*eval)(void*,GEN), GEN a, long prec).
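
The stopping rule translates directly into Python (the tolerance stands in for the default precision; like suminf, this loops forever on a series whose terms do not tend to 0 fast enough):

```python
def suminf(f, a, eps=1e-16, runs=3):
    """Sum f(k) for k >= a, stopping after `runs` consecutive terms
    that are negligible relative to the partial sum."""
    s, k, small = 0.0, a, 0
    while small < runs:
        t = f(k)
        s += t
        small = small + 1 if abs(t) <= eps * max(abs(s), 1.0) else 0
        k += 1
    return s

print(suminf(lambda k: 2.0**-k, 1))  # geometric series, sums to 1
```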

#### sumnum(X = a,sig,expr,{tab},{flag = 0})

Numerical summation of expr, the variable X taking integer values from the ceiling of a to +oo, where expr is assumed to be a holomorphic function f(X) for Re(X) >= sigma.

The parameter sigma belongs to R and is coded in the argument sig as follows: it is either

* a real number sigma. Then the function f is assumed to decrease at least as 1/X^2 at infinity, but not exponentially;

* a two-component vector [sigma,alpha], where sigma is as before, alpha < -1. The function f is assumed to decrease like X^{alpha}. In particular, alpha <= -2 is equivalent to no alpha at all;

* a two-component vector [sigma,alpha], where sigma is as before, alpha > 0. The function f is assumed to decrease like exp(-alpha X). In this case it is essential that alpha be exactly the rate of exponential decrease, and it is usually a good idea to increase the default value of m used for the integration step. In practice, if the function is exponentially decreasing, sumnum is slower and less accurate than sumpos or suminf, so it should not be used.

The function uses the intnum routines and integration on the line Re(s) = sigma. The optional argument tab is as in intnum, except it must be initialized with sumnuminit.

When tab is not precomputed, sumnum can be slower than sumpos, when the latter is applicable. It is in general faster for slowly decreasing functions.

Finally, if flag is nonzero, we assume that the function f to be summed is of real type, i.e. satisfies \overline{f(z)} = f(\overline{z}), which speeds up the computation.

? \p 308
? a = sumpos(n=1, 1/(n^3+n+1));
time = 1,410 ms.
? tab = sumnuminit(2);
time = 1,620 ms. \\ slower but done once and for all.
? b = sumnum(n=1, 2, 1/(n^3+n+1), tab);
time = 460 ms. \\ 3 times as fast as sumpos
? a - b
%4 = -1.0... E-306 + 0.E-320*I \\ perfect.
? sumnum(n=1, 2, 1/(n^3+n+1), tab, 1) - a; \\ function of real type
time = 240 ms.
%2 = -1.0... E-306 \\ twice as fast, no imaginary part.
? c = sumnum(n=1, 2, 1/(n^2+1), tab, 1);
time = 170 ms. \\ fast
? d = sumpos(n=1, 1 / (n^2+1));
time = 2,700 ms. \\ slow.
? d - c
time = 0 ms.
%5 = 1.97... E-306 \\ perfect.

For slowly decreasing functions, we must indicate singularities:

? \p 308
? a = sumnum(n=1, 2, n^(-4/3));
time = 9,930 ms. \\ slow because of the computation of n^{-4/3}.
? a - zeta(4/3)
time = 110 ms.
%1 = -2.42... E-107 \\ lost 200 decimals because of singularity at  oo
? b = sumnum(n=1, [2,-4/3], n^(-4/3), /*omitted*/, 1); \\ of real type
time = 12,210 ms.
? b - zeta(4/3)
%3 = 1.05... E-300 \\ better

Since the complex values of the function are used, beware of
determination problems. For instance:

? \p 308
? tab = sumnuminit([2,-3/2]);
time = 1,870 ms.
? sumnum(n=1,[2,-3/2], 1/(n*sqrt(n)), tab,1) - zeta(3/2)
time = 690 ms.
%1 = -1.19... E-305 \\ fast and correct
? sumnum(n=1,[2,-3/2], 1/sqrt(n^3), tab,1) - zeta(3/2)
time = 730 ms.
%2 = -1.55... \\ nonsense. However
? sumnum(n=1,[2,-3/2], 1/n^(3/2), tab,1) - zeta(3/2)
time = 8,990 ms.
%3 = -1.19... E-305 \\ perfect, as 1/(n*sqrt{n}) above but much slower

For exponentially decreasing functions, sumnum is given for completeness, but one of suminf or sumpos should always be preferred. If you experiment with such functions and sumnum anyway, indicate the exact rate of decrease and increase m by 1 or 2:

? suminf(n=1, 2^(-n)) - 1
time = 10 ms.
%1 = -1.11... E-308 \\ fast and perfect
? sumpos(n=1, 2^(-n)) - 1
time = 10 ms.
%2 = -2.78... E-308 \\ also fast and perfect
? sumnum(n=1,2, 2^(-n)) - 1
%3 = -1.321115060 E320 + 0.E311*I \\ nonsense
? sumnum(n=1, [2,log(2)], 2^(-n), /*omitted*/, 1) - 1 \\ of real type
time = 5,860 ms.
%4 = -1.5... E-236 \\ slow and lost 70 decimals
? m = intnumstep()
%5 = 9
? sumnum(n=1,[2,log(2)], 2^(-n), m+1, 1) - 1
time = 11,770 ms.
%6 = -1.9... E-305 \\ now perfect, but slow.

The library syntax is sumnum(void *E, GEN (*eval)(void*,GEN), GEN a, GEN sig, GEN tab, long flag, long prec).

#### sumnumalt(X = a,sig,expr,{tab},{flag = 0})

Numerical summation of (-1)^X expr(X), the variable X taking integer values from the ceiling of a to +oo, where expr is assumed to be a holomorphic function for Re(X) >= sig (or sig[1]).

**Warning.** This function uses the intnum routines and is orders of magnitude slower than sumalt. It is only given for completeness and should not be used in practice.

**Warning 2.** The expression expr must not include the (-1)^X coefficient. Thus sumalt(n = a, (-1)^n f(n)) is (approximately) equal to sumnumalt(n = a, sig, f(n)).

sig is coded as in sumnum. However for slowly decreasing functions (where sig is coded as [sigma,alpha] with alpha < -1), it is not really important to indicate alpha. In fact, as for sumalt, the program will often give meaningful results (usually analytic continuations) even for divergent series. On the other hand the exponential decrease must be indicated.

tab is as in intnum, but if used must be initialized with sumnuminit. If flag is nonzero, assumes that the function f to be summed is of real type, i.e. satisfies \overline{f(z)} = f(\overline{z}), and is then twice as fast when tab is precomputed.

? \p 308
? tab = sumnuminit(2, /*omitted*/, -1); \\ abscissa sigma = 2, alternating sums.
time = 1,620 ms. \\ slow, but done once and for all.
? a = sumnumalt(n=1, 2, 1/(n^3+n+1), tab, 1);
time = 230 ms. \\ similar speed to sumnum
? b = sumalt(n=1, (-1)^n/(n^3+n+1));
time = 0 ms. \\ infinitely faster!
? a - b
time = 0 ms.
%1 = -1.66... E-308 \\ perfect

The library syntax is sumnumalt(void *E, GEN (*eval)(void*,GEN), GEN a, GEN sig, GEN tab, long flag, long prec).

#### sumnuminit(sig, {m = 0}, {sgn = 1})

Initialize tables for numerical summation using sumnum (with sgn = 1) or sumnumalt (with sgn = -1). sig is the abscissa of integration coded as in sumnum, and m is as in intnuminit.

The library syntax is
GEN sumnuminit(GEN sig, long m, long sgn, long prec).

#### sumpos(X = a,expr,{flag = 0})

Numerical summation of the series expr, whose terms must all have the same sign, the formal variable X starting at a. The algorithm used is Van Wijngaarden's trick for converting such a series into an alternating one, to which we then apply sumalt. For regular functions, the function sumnum is in general much faster once the initializations have been performed with sumnuminit.

The routine is heuristic and assumes that expr is more or less a decreasing function of X. In particular, the result will be completely wrong if expr is 0 too often. We do not check either that all terms have the same sign. As with sumalt, this function should be used to try and guess the value of an infinite sum.

If flag = 1, use slightly different polynomials. Sometimes faster.

The library syntax is sumpos(void *E, GEN (*eval)(void*,GEN), GEN a, long prec). Also available is sumpos2 with the same arguments (flag = 1).
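
Van Wijngaarden's trick rewrites S = sum_{k >= 1} f(k) (f positive and decreasing) as the alternating series sum_{j >= 1} (-1)^{j-1} b_j with b_j = sum_{i >= 0} 2^i f(2^i j), which is then fed to the sumalt acceleration. A Python sketch combining both (the cutoffs are assumptions tuned to double precision, not PARI's values):

```python
import math

def sumalt(a, n=30):
    """Cohen-Villegas-Zagier acceleration of sum_{k>=0} (-1)^k a(k)."""
    d = (3 + math.sqrt(8))**n
    d = (d + 1 / d) / 2
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c
        s += c * a(k)
        b = (k + n) * (k - n) * b / ((k + 0.5) * (k + 1))
    return s / d

def sumpos(f):
    """Sum of f(k), k >= 1, via van Wijngaarden's series-flipping trick."""
    def b(j):                       # alternating term b_{j+1}
        m, t, i = j + 1, 0.0, 0
        while True:
            term = 2.0**i * f(2**i * m)
            t += term
            if term < 1e-20 * max(t, 1.0):
                return t
            i += 1
    return sumalt(b)

print(sumpos(lambda n: 1.0 / n**2))  # zeta(2) = pi^2/6 = 1.6449...
```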