Pari/GP Reference Documentation
algdep bestapprnf charpoly concat dirpowers forqfvec lindep matadjoint matcompanion matconcat matdet matdetint matdetmod matdiagonal mateigen matfrobenius mathess mathilbert mathnf mathnfmod mathnfmodid mathouseholder matid matimage matimagecompl matimagemod matindexrank matintersect matinverseimage matinvmod matisdiagonal matker matkerint matkermod matmuldiagonal matmultodiagonal matpascal matpermanent matqr matrank matreduce matrix matrixqz matsize matsnf matsolve matsolvemod matsupplement mattranspose minpoly norml2 normlp powers qfauto qfautoexport qfbil qfcholesky qfcvp qfeval qfgaussred qfisom qfisominit qfjacobi qflll qflllgram qfminim qfminimize qfnorm qforbits qfparam qfperfection qfrep qfsign qfsolve setbinop setdelta setintersect setisset setminus setsearch setunion snfrank trace vecextract vecprod vecsearch vecsort vecsum vector vectorsmall vectorv | |
Note that most linear algebra functions operating on subspaces defined by
generating sets (such as
Since PARI does not have a strong typing system, scalars live in
unspecified commutative base rings. It is very difficult to write
robust linear algebra routines in such a general setting. We thus
assume that the base ring is a domain and work over its field of
fractions. If the base ring is not a domain, one gets an error as soon
as a nonzero pivot turns out to be noninvertible. Some functions,
e.g.
| |
algdep(z, k, {flag = 0}) | |
z being real/complex, or p-adic, finds a polynomial (in the variable
Internally,
? \p200 ? z = 2^(1/6)+3^(1/5); ? algdep(z, 30); \\ right in 63ms ? algdep(z, 30, 100); \\ wrong in 39ms ? algdep(z, 30, 170); \\ right in 61ms ? algdep(z, 30, 200); \\ wrong in 146ms ? \p250 ? z = 2^(1/6)+3^(1/5); \\ recompute to new, higher, accuracy ! ? algdep(z, 30); \\ right in 68ms ? algdep(z, 30, 200); \\ right in 68ms ? \p500 ? algdep(2^(1/6)+3^(1/5), 30); \\ right in 138ms ? \p1000 ? algdep(2^(1/6)+3^(1/5), 30); \\ right in 276s
The changes in
Proceeding by increments of 5 digits of accuracy,
\\ assume T contains the correct result, for comparison
forstep(d=100, 250, 5, \
  localprec(d); \
  print(d, " ", algdep(2^(1/6)+3^(1/5),30) == T))

This example is the test case studied in a 2000 paper by Borwein and Lisonek: Applications of integer relation algorithms, Discrete Math., 217, p. 65–82. The version of PARI tested there was 1.39, which succeeded reliably from precision 265 on, in about 1000 times as much time as the current version (on slower hardware of course).
Note that this function does not work if z is a power series. The function
The library syntax is
| |
bestapprnf(V, T, {rootT}) | |
T being an integral polynomial and V being a scalar, vector, or
matrix with complex coefficients, return a reasonable approximation of V
with polmods modulo T. T can also be any number field structure, in which
case the minimal polynomial attached to the structure (
? bestapprnf(sqrt(5), polcyclo(5)) %1 = Mod(-2*x^3 - 2*x^2 - 1, x^4 + x^3 + x^2 + x + 1) ? bestapprnf(sqrt(5), polcyclo(5), exp(4*I*Pi/5)) %2 = Mod(2*x^3 + 2*x^2 + 1, x^4 + x^3 + x^2 + x + 1)
When the output has huge rational coefficients, try to
increase the working
? T = x^3-2; vT = polroots(T); z = 3*2^(1/3)+1; ? bestapprnf(z, T, vT[1]) %2 = Mod(3*x + 1, x^3 - 2) ? bestapprnf(z, T, vT[2]) %3 = 4213714286230872/186454048314072 \\ close to 3*2^(1/3) + 1
The library syntax is
| |
charpoly(A, {v = 'x}, {flag = 5}) | |
characteristic polynomial of A with respect to the variable v, i.e. determinant of v*I-A if A is a square matrix.
? charpoly([1,2;3,4]) %1 = x^2 - 5*x - 2 ? charpoly([1,2;3,4],, 't) %2 = t^2 - 5*t - 2 If A is not a square matrix but a scalar, the function returns the characteristic polynomial of the map "multiplication by A":
? charpoly(Mod(x+2, x^3-2)) %1 = x^3 - 6*x^2 + 12*x - 10 ? charpoly(I) %2 = x^2 + 1 ? charpoly(quadgen(5)) %3 = x^2 - x - 1 ? charpoly(ffgen(ffinit(2,4))) %4 = Mod(1, 2)*x^4 + Mod(1, 2)*x^3 + Mod(1, 2)*x^2 + Mod(1, 2)*x + Mod(1, 2)

The value of flag is only significant for matrices; we advise sticking to the default value. Let n be the dimension of A.

If flag = 0, same method (Le Verrier's) as for computing the adjoint matrix, i.e. using the traces of the powers of A. Assumes that n! is invertible; uses O(n^4) scalar operations.

If flag = 1, uses Lagrange interpolation, which is usually the slowest method. Assumes that n! is invertible; uses O(n^4) scalar operations.

If flag = 2, uses the Hessenberg form. Assumes that the base ring is a field. Uses O(n^3) scalar operations, but suffers from coefficient explosion unless the base field is finite or ℝ.

If flag = 3, uses Berkowitz's division-free algorithm, valid over any ring (commutative, with unit). Uses O(n^4) scalar operations.

If flag = 4, A must be integral. Uses a modular algorithm: Hessenberg form for various small primes, then Chinese remainders.

If flag = 5 (default), uses the "best" method given A. This means we use Berkowitz unless the base ring is ℤ (use flag = 4) or a field where coefficient explosion does not occur, e.g. a finite field or the reals (use flag = 2).
The library syntax is
| |
concat(x, {y}) | |
Concatenation of x and y. If x or y is
not a vector or matrix, it is considered as a one-dimensional vector. All
types are allowed for x and y, but the sizes must be compatible. Note
that matrices are concatenated horizontally, i.e. the number of rows stays
the same. Using transpositions, one can concatenate them vertically,
but it is often simpler to use
? x = matid(2); y = 2*matid(2); ? concat(x,y) %2 = [1 0 2 0] [0 1 0 2] ? concat(x~,y~)~ %3 = [1 0] [0 1] [2 0] [0 2] ? matconcat([x;y]) %4 = [1 0] [0 1] [2 0] [0 2]
To concatenate vectors sideways (i.e. to obtain a two-row or two-column
matrix), use
? x = [1,2]; ? y = [3,4]; ? concat(x,y) %3 = [1, 2, 3, 4] ? Mat([x,y]~) %4 = [1 2] [3 4] ? matconcat([x;y]) %5 = [1 2] [3 4] Concatenating a row vector to a matrix having the same number of columns will add the row to the matrix (top row if the vector is x, i.e. comes first, and bottom row otherwise).
The empty matrix If y is omitted, x has to be a row vector or a list, in which case its elements are concatenated, from left to right, using the above rules.
? concat([1,2], [3,4]) %1 = [1, 2, 3, 4] ? a = [[1,2]~, [3,4]~]; concat(a) %2 = [1 3] [2 4] ? concat([1,2; 3,4], [5,6]~) %3 = [1 2 5] [3 4 6] ? concat([%, [7,8]~, [1,2,3,4]]) %5 = [1 2 5 7] [3 4 6 8] [1 2 3 4]
The library syntax is
| |
dirpowers(n, x) | |
For nonnegative n and complex number x, return the vector with n components [1^x, 2^x,..., n^x].
? dirpowers(5, 2) %1 = [1, 4, 9, 16, 25] ? dirpowers(5, 1/2) %2 = [1, 1.414..., 1.732..., 2.000..., 2.236...]
When n ≤ 0, the function returns the empty vector
The library syntax is
| |
forqfvec(v, q, b, expr) | |
q being a square and symmetric integral matrix representing a positive
definite quadratic form, evaluate
? forqfvec(v, [3,2;2,3], 3, print(v)) [0, 1]~ [1, 0]~ [-1, 1]~
The library syntax is
| |
lindep(v, {flag = 0}) | |
finds a small nontrivial integral linear combination between components of v. If none can be found return an empty vector. If v is a vector with real/complex entries we use a floating point (variable precision) LLL algorithm. If flag = 0 the accuracy is chosen internally using a crude heuristic. If flag > 0 the computation is done with an accuracy of flag decimal digits. To get meaningful results in the latter case, the parameter flag should be smaller than the number of correct decimal digits in the input.
? lindep([sqrt(2), sqrt(3), sqrt(2)+sqrt(3)]) %1 = [-1, -1, 1]~ If v is p-adic, flag is ignored and the algorithm LLL-reduces a suitable (dual) lattice.
? lindep([1, 2 + 3 + 3^2 + 3^3 + 3^4 + O(3^5)]) %2 = [1, -2]~ If v is a matrix (or a vector of column vectors, or a vector of row vectors), flag is ignored and the function returns a non trivial kernel vector if one exists, else an empty vector.
? lindep([1,2,3;4,5,6;7,8,9]) %3 = [1, -2, 1]~ ? lindep([[1,0], [2,0]]) %4 = [2, -1]~ ? lindep([[1,0], [0,1]]) %5 = []~ If v contains polynomials or power series over some base field, finds a linear relation with coefficients in the field.
? lindep([x*y, x^2 + y, x^2*y + x*y^2, 1]) %4 = [y, y, -1, -y^2]~
For better control, it is preferable to use
? v = [t^2+O(t^4), 1+O(t^2)]; L=lindep(v) %1 = [1, 0]~ ? v*L %2 = t^2+O(t^4) \\ small but not 0
The library syntax is
| |
matadjoint(M, {flag = 0}) | |
adjoint matrix of M, i.e. a matrix N
of cofactors of M, satisfying M*N = det(M)*Id. M must be a
(not necessarily invertible) square matrix of dimension n.
If flag is 0 or omitted, we try to use Leverrier-Faddeev's algorithm,
which assumes that n! is invertible. If it fails or flag = 1,
computes T =
? a = [1,2,3;3,4,5;6,7,8] * Mod(1,4); ? matadjoint(a) %2 = [Mod(1, 4) Mod(1, 4) Mod(2, 4)] [Mod(2, 4) Mod(2, 4) Mod(0, 4)] [Mod(1, 4) Mod(1, 4) Mod(2, 4)] Both algorithms use O(n^4) operations in the base ring. Over a field, they are usually slower than computing the characteristic polynomial or the inverse of M directly.
The library syntax is
| |
matcompanion(x) | |
The left companion matrix to the nonzero polynomial x.
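For instance, one may check that the characteristic polynomial of the companion matrix recovers the (monic) input polynomial; the session below is a small added illustration:

? T = x^3 - 2*x + 5;
? N = matcompanion(T);
? charpoly(N) == T
%3 = 1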
The library syntax is
| |
matconcat(v) | |
Returns a
? A=[1,2;3,4]; B=[5,6]~; C=[7,8]; D=9; ? matconcat([A, B]) \\ horizontal %1 = [1 2 5] [3 4 6] ? matconcat([A, C]~) \\ vertical %2 = [1 2] [3 4] [7 8] ? matconcat([A, B; C, D]) \\ block matrix %3 = [1 2 5] [3 4 6] [7 8 9] If the dimensions of the entries to concatenate do not match up, the above rules are extended as follows:
* each entry v_{i,j} of v has a natural length and height: 1 x 1 for a scalar, 1 x n for a

* let H_i be the maximum over j of the lengths of the v_{i,j}, let L_j be the maximum over i of the heights of the v_{i,j}. The dimensions of the (i,j)-th block in the concatenated matrix are H_i x L_j.

* a scalar s = v_{i,j} is considered as s times an identity matrix of the block dimension min(H_i, L_j)

* blocks are extended by 0 columns on the right and 0 rows at the bottom, as needed.
? matconcat([1, [2,3]~, [4,5,6]~]) \\ horizontal %4 = [1 2 4] [0 3 5] [0 0 6] ? matconcat([1, [2,3], [4,5,6]]~) \\ vertical %5 = [1 0 0] [2 3 0] [4 5 6] ? matconcat([B, C; A, D]) \\ block matrix %6 = [5 0 7 8] [6 0 0 0] [1 2 9 0] [3 4 0 9] ? U=[1,2;3,4]; V=[1,2,3;4,5,6;7,8,9]; ? matconcat(matdiagonal([U, V])) \\ block diagonal %7 = [1 2 0 0 0] [3 4 0 0 0] [0 0 1 2 3] [0 0 4 5 6] [0 0 7 8 9]
The library syntax is
| |
matdet(x, {flag = 0}) | |
Determinant of the square matrix x. If flag = 0, uses an appropriate algorithm depending on the coefficients: * integer entries: modular method due to Dixon, Pernet and Stein. * real or p-adic entries: classical Gaussian elimination using maximal pivot. * intmod entries: classical Gaussian elimination using first nonzero pivot. * other cases: Gauss-Bareiss. If flag = 1, uses classical Gaussian elimination with appropriate pivoting strategy (maximal pivot for real or p-adic coefficients). This is usually worse than the default.
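For example:

? matdet([1,2; 3,4])
%1 = -2
? matdet([1,2,3; 4,5,6; 7,8,9])   \\ singular matrix
%2 = 0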
The library syntax is
| |
matdetint(B) | |
Let B be an m x n matrix with integer coefficients. The determinant D of the lattice generated by the columns of B is the square root of det(BT B) if B has maximal rank m, and 0 otherwise. This function uses the Gauss-Bareiss algorithm to compute a positive multiple of D. When B is square, the function actually returns D = |det B|.
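For example, on a square matrix the result is |det B| itself:

? B = [4,1; 2,3];
? matdetint(B)
%2 = 10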
This function is useful in conjunction with
matdet( mathnfmod(B, matdetint(B)) )
Note that as soon as one of the dimensions gets large (m or n is larger
than 20, say), it will often be much faster to use
The library syntax is
| |
matdetmod(x, d) | |
Given a matrix x with
? A = [4,2,3; 4,5,6; 7,8,9] ? matdetmod(A,27) %2 = 9
Note that using the generic function
? matdet(A * Mod(1,27)) *** at top-level: matdet(A*Mod(1,27)) *** ^ — — — — — — *** matdet: impossible inverse in Fl_inv: Mod(3, 27).
The library syntax is
| |
matdiagonal(x) | |
x being a vector, creates the diagonal matrix whose diagonal entries are those of x.
? matdiagonal([1,2,3]); %1 = [1 0 0] [0 2 0] [0 0 3]
Block diagonal matrices are easily created using
? U=[1,2;3,4]; V=[1,2,3;4,5,6;7,8,9]; ? matconcat(matdiagonal([U, V])) %1 = [1 2 0 0 0] [3 4 0 0 0] [0 0 1 2 3] [0 0 4 5 6] [0 0 7 8 9]
The library syntax is
| |
mateigen(x, {flag = 0}) | |
Returns the (complex) eigenvectors of x as columns of a matrix. If flag = 1, return [L,H], where L contains the eigenvalues and H the corresponding eigenvectors; multiple eigenvalues are repeated according to the eigenspace dimension (which may be less than the eigenvalue multiplicity in the characteristic polynomial). This function first computes the characteristic polynomial of x and approximates its complex roots (λ_i), then tries to compute the eigenspaces as kernels of the x - λ_i. This algorithm is ill-conditioned and is likely to miss kernel vectors if some roots of the characteristic polynomial are close, in particular if it has multiple roots.
? A = [13,2; 10,14]; mateigen(A) %1 = [-1/2 2/5] [ 1 1] ? [L,H] = mateigen(A, 1); ? L %3 = [9, 18] ? H %4 = [-1/2 2/5] [ 1 1] ? A * H == H * matdiagonal(L) %5 = 1
For symmetric matrices, use
A = real(x); B = imag(x); y = matconcat([A, -B; B, A]);
and apply
The library syntax is
| |
matfrobenius(M, {flag}, {v = 'x}) | |
Returns the Frobenius form of
the square matrix
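As a quick illustrative check, the Frobenius form is similar to the input matrix, hence has the same characteristic polynomial:

? A = [0,2; 1,1];
? F = matfrobenius(A);
? charpoly(F) == charpoly(A)
%3 = 1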
The library syntax is
| |
mathess(x) | |
Returns a matrix similar to the square matrix x, which is in upper Hessenberg form (zero entries below the first subdiagonal).
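As a small illustrative check, the output is similar to x and therefore has the same characteristic polynomial:

? A = [1,2,3; 4,5,6; 7,8,9];
? H = mathess(A);
? charpoly(H) == charpoly(A)
%3 = 1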
The library syntax is
| |
mathilbert(n) | |
Creates the Hilbert matrix of order n ≥ 0, i.e. the square matrix H whose coefficient H[i,j] is 1 / (i+j-1). This matrix is ill-conditioned but its inverse has integer entries.
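For example:

? mathilbert(3)
%1 =
[  1 1/2 1/3]
[1/2 1/3 1/4]
[1/3 1/4 1/5]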
The library syntax is
| |
mathnf(M, {flag = 0}) | |
Let R be a Euclidean ring, equal to ℤ or to K[X] for some field K. If M is a (not necessarily square) matrix with entries in R, this routine finds the upper triangular Hermite normal form of M. If the rank of M is equal to its number of rows, this is a square matrix. In general, the columns of the result form a basis of the R-module spanned by the columns of M. The values of flag are: * 0 (default): only return the Hermite normal form H * 1 (complete output): return [H,U], where H is the Hermite normal form of M, and U is a transformation matrix such that MU = [0|H]. The matrix U belongs to GL(R). When M has a large kernel, the entries of U are in general huge. For these two values, we use a naive algorithm, which behaves well in small dimension only. Larger values correspond to different algorithms, are restricted to integer matrices, and all output the unimodular matrix U. From now on all matrices have integral entries. * flag = 4, returns [H,U] as in "complete output" above, using a variant of LLL reduction along the way. The matrix U is provably small in the L2 sense, and often close to optimal; but the reduction is in general slow, although provably polynomial-time. If flag = 5, uses Batut's algorithm and output [H,U,P], such that H and U are as before and P is a permutation of the rows such that P applied to MU gives H. This is in general faster than flag = 4 but the matrix U is usually worse; it is heuristically smaller than with the default algorithm. When the matrix is dense and the dimension is large (bigger than 100, say), flag = 4 will be fastest. When M has maximal rank, then
H = mathnfmod(M, matdetint(M)) will be even faster. You can then recover U as M^(-1)H.
? M = matrix(3,4,i,j,random([-5,5])) %1 = [ 0 2 3 0] [-5 3 -5 -5] [ 4 3 -5 4] ? [H,U] = mathnf(M, 1); ? U %3 = [-1 0 -1 0] [ 0 5 3 2] [ 0 3 1 1] [ 1 0 0 0] ? H %5 = [19 9 7] [ 0 9 1] [ 0 0 1] ? M*U %6 = [0 19 9 7] [0 0 9 1] [0 0 0 1]
For convenience, M is allowed to be a
? v = [116085838, 181081878, 314252913,10346840]; ? [H,U] = mathnf(v, 1); ? U %2 = [ 103 -603 15 -88] [-146 13 -1208 352] [ 58 220 678 -167] [-362 -144 381 -101] ? v*U %3 = [0, 0, 0, 1]
This also allows to input a matrix as a
? v = [[1,0,4]~, [3,3,4]~, [0,-4,-5]~]; mathnf(v) %1 = [47 32 12] [ 0 1 0] [ 0 0 1]
The library syntax is
| |
mathnfmod(x, d) | |
If x is a (not necessarily square) matrix of maximal rank with integer entries, and d is a multiple of the (nonzero) determinant of the lattice spanned by the columns of x, finds the upper triangular Hermite normal form of x.
If the rank of x is equal to its number of rows, the result is a square
matrix. In general, the columns of the result form a basis of the lattice
spanned by the columns of x. Even when d is known, this is in general
slower than
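For instance, on a full rank matrix the result coincides with the plain Hermite normal form, as in this illustrative check:

? B = [4,1; 2,3];
? mathnfmod(B, matdetint(B)) == mathnf(B)
%2 = 1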
The library syntax is
| |
mathnfmodid(x, d) | |
Outputs the (upper triangular) Hermite normal form of x concatenated with the diagonal matrix with diagonal d. Assumes that x has integer entries. Variant: if d is an integer instead of a vector, concatenate d times the identity matrix.
? m=[0,7;-1,0;-1,-1] %1 = [ 0 7] [-1 0] [-1 -1] ? mathnfmodid(m, [6,2,2]) %2 = [2 1 1] [0 1 0] [0 0 1] ? mathnfmodid(m, 10) %3 = [10 7 3] [ 0 1 0] [ 0 0 1]
The library syntax is
| |
mathouseholder(Q, v) | |
applies a sequence Q of Householder
transforms, as returned by
? m = [2,1; 3,2]; \\ some random matrix ? [Q,R] = matqr(m); ? Q %3 = [-0.554... -0.832...] [-0.832... 0.554...] ? R %4 = [-3.605... -2.218...] [0 0.277...] ? v = [1, 2]~; \\ some random vector ? Q * v %6 = [-2.218..., 0.277...]~ ? [q,r] = matqr(m, 1); ? exponent(r - R) \\ r is the same as R %8 = -128 ? q \\ but q has a different structure %9 = [[0.0494..., [5.605..., 3]]]] ? mathouseholder(q, v) \\ applied to v %10 = [-2.218..., 0.277...]~
The point of the Householder structure is that it efficiently
represents the linear operator v
? m = mathilbert(20); v = vectorv(20,i,i^2+1); ? [Q,R] = matqr(m); ? [q,r] = matqr(m, 1); ? \p100 ? [q2,r2] = matqr(m, 1); \\ recompute at higher accuracy ? exponent(R - r) %5 = -127 ? exponent(R - r2) %6 = -127 ? exponent(mathouseholder(q,v) - mathouseholder(q2,v)) %7 = -119 ? exponent(Q*v - mathouseholder(q2,v)) %8 = 9
We see that R is OK with or without a flag to
The library syntax is
| |
matid(n) | |
Creates the n x n identity matrix.
The library syntax is
| |
matimage(x, {flag = 0}) | |
Gives a basis for the image of the
matrix x as columns of a matrix. A priori the matrix can have entries of
any type. If flag = 0, use standard Gauss pivot. If flag = 1, use
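For example (illustrative session), the image of a rank 2 matrix is given by 2 columns:

? A = [1,2,3; 2,4,6; 1,1,1];
? matsize(matimage(A))
%2 = [3, 2]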
The library syntax is
| |
matimagecompl(x) | |
Gives the vector of the column indices which
are not extracted by the function
The library syntax is
| |
matimagemod(x, d, &U) | |
Gives a Howell basis (unique representation for submodules
of (ℤ/dℤ)^n)
for the image of the matrix x modulo d as columns of a matrix H. The
matrix x must have
? A = [2,1;0,2]; ? matimagemod(A,6,&U) %2 = [1 0] [0 2] ? U %3 = [5 1] [3 4] ? (A*U)%6 %4 = [1 0] [0 2] Caveat. In general the number of columns of the Howell form is not the minimal number of generators of the submodule. Example:
? matimagemod([1;2],4) %5 = [2 1] [0 2] Caveat 2. In general the matrix U is not invertible, even if A and H have the same size. Example:
? matimagemod([4,1;0,4],8,&U) %6 = [2 1] [0 4] ? U %7 = [0 0] [2 1]
The library syntax is
| |
matindexrank(M) | |
M being a matrix of rank r, returns a vector with two
The library syntax is
| |
matintersect(x, y) | |
x and y being two matrices with the same number of rows, finds a basis of the vector space equal to the intersection of the spaces spanned by the columns of x and y respectively. For efficiency, the columns of x (resp. y) should be independent.
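For example (illustrative session), two planes in dimension 3 generically intersect in a line:

? x = [1,0; 0,1; 0,0];  \\ the plane z = 0
? y = [1,1; 1,1; 0,1];
? matsize(matintersect(x, y))
%3 = [3, 1]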
The faster function
The library syntax is
| |
matinverseimage(x, y) | |
Given a matrix x and
a column vector or matrix y, returns a preimage z of y by x if one
exists (i.e. such that x z = y), an empty vector or matrix otherwise. The
complete inverse image is z + Ker x, where a basis of the kernel of
x may be obtained by
? M = [1,2;2,4]; ? matinverseimage(M, [1,2]~) %2 = [1, 0]~ ? matinverseimage(M, [3,4]~) %3 = []~ \\ no solution ? matinverseimage(M, [1,3,6;2,6,12]) %4 = [1 3 6] [0 0 0] ? matinverseimage(M, [1,2;3,4]) %5 = [;] \\ no solution ? K = matker(M) %6 = [-2] [1]
The library syntax is
| |
matinvmod(x, d) | |
Computes a left inverse of the matrix x modulo d. The matrix x must
have
? A = [3,1,2;1,2,1;3,1,1]; ? U = matinvmod(A,6) %2 = [1 1 3] [2 3 5] [1 0 5] ? (U*A)%6 %3 = [1 0 0] [0 1 0] [0 0 1] ? matinvmod(A,5) *** at top-level: matinvmod(A,5) *** ^ — — — — -- *** matinvmod: impossible inverse in gen_inv: 0.
The library syntax is
| |
matisdiagonal(x) | |
Returns true (1) if x is a diagonal matrix, false (0) if not.
The library syntax is
| |
matker(x, {flag = 0}) | |
Gives a basis for the kernel of the matrix x as columns of a matrix. The matrix can have entries of any type, provided they are compatible with the generic arithmetic operations (+, x and /). If x is known to have integral entries, set flag = 1.
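For example, reusing the matrix M = [1,2; 2,4] from the matinverseimage example above:

? matker([1,2; 2,4])
%1 =
[-2]
[ 1]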
The library syntax is
| |
matkerint(x, {flag = 0}) | |
Gives an LLL-reduced ℤ-basis
for the lattice equal to the kernel of the matrix x with rational entries.
flag is deprecated, kept for backward compatibility. The function
The library syntax is
| |
matkermod(x, d, &im) | |
Gives a Howell basis (unique representation for submodules
of (ℤ/dℤ)^n,
cf.
? A = [1,2,3;5,1,4] %1 = [1 2 3] [5 1 4] ? K = matkermod(A,6) %2 = [2 1] [2 1] [0 3] ? (A*K)%6 %3 = [0 0] [0 0]
The library syntax is
| |
matmuldiagonal(x, d) | |
Product of the matrix x by the diagonal
matrix whose diagonal entries are those of the vector d. Equivalent to,
but much faster than x*
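For instance (illustrative check):

? x = [1,2; 3,4]; d = [10, 100];
? matmuldiagonal(x, d) == x * matdiagonal(d)
%2 = 1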
The library syntax is
| |
matmultodiagonal(x, y) | |
Product of the matrices x and y assuming that the result is a diagonal matrix. Much faster than x*y in that case. The result is undefined if x*y is not diagonal.
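For instance (illustrative check), with y = x^(-1) the product is the identity matrix:

? x = [1,2; 3,4];
? matmultodiagonal(x, x^(-1)) == matid(2)
%2 = 1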
The library syntax is
| |
matpascal(n, {q}) | |
Creates as a matrix the lower triangular Pascal triangle of order n+1 (i.e. with binomial coefficients up to n). If q is given, compute the q-Pascal triangle (i.e. using q-binomial coefficients).
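For example:

? matpascal(3)
%1 =
[1 0 0 0]
[1 1 0 0]
[1 2 1 0]
[1 3 3 1]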
The library syntax is
| |
matpermanent(x) | |
Permanent of the square matrix x using Ryser's formula in Gray code order.
? n = 20; m = matrix(n,n,i,j, i!=j); ? matpermanent(m) %2 = 895014631192902121 ? n! * sum(i=0,n, (-1)^i/i!) %3 = 895014631192902121 This function runs in time O(2^n n) for a matrix of size n and is not implemented for n large.
The library syntax is
| |
matqr(M, {flag = 0}) | |
Returns [Q,R], the QR-decomposition of the square invertible matrix M with real entries: Q is orthogonal and R upper triangular. If flag = 1, the orthogonal matrix is returned as a sequence of Householder transforms: applying such a sequence is stabler and faster than multiplication by the corresponding Q matrix. More precisely, if
[Q,R] = matqr(M); [q,r] = matqr(M, 1);
then r = R and
mathouseholder(q, matid(#M)) == Q~, the inverse of Q. This function raises an error if the precision is too low or M is singular.
The library syntax is
| |
matrank(x) | |
Rank of the matrix x.
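For example:

? matrank([1,2,3; 4,5,6; 7,8,9])
%1 = 2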
The library syntax is
| |
matreduce(m) | |
Let m be a factorization matrix, i.e., a 2-column matrix whose
columns contain arbitrary "generators" and integer "exponents"
respectively. Returns the canonical form of m: the
first column is sorted with unique elements and the second one contains the
merged "exponents" (exponents of identical entries in the first column of
m are added, rows attached to 0 exponents are deleted). The generators are
sorted with respect to the universal
For convenience, this function also allows a vector m, which is handled as a
factorization with all exponents equal to 1, as in
? A=[x,2;y,4]; B=[x,-2; y,3; 3, 4]; C=matconcat([A,B]~) %1 = [x 2] [y 4] [x -2] [y 3] [3 4] ? matreduce(C) %2 = [3 4] [y 7] ? matreduce([x,x,y,x,z,x,y]) \\ vector argument %3 = [x 4] [y 2] [z 1] The following one-line functions will list elements occurring exactly once (resp. more than once) in the vector or list v:
unique(v) = [ x[1] | x <- matreduce(v)~, x[2] == 1 ]; duplicates(v) = [ x[1] | x <- matreduce(v)~, x[2] > 1 ]; ? v = [0,1,2,3,1,2]; ? unique(v) %2 = [0, 3] ? duplicates(v) %3 = [1, 2]
The library syntax is
| |
matrix(m, {n = m}, {X}, {Y}, {expr = 0}) | |
Creation of the m x n matrix whose coefficients are given by the expression expr. There are two formal parameters in expr, the first one (X) corresponding to the rows, the second (Y) to the columns, and X goes from 1 to m, Y goes from 1 to n. If one of the last 3 parameters is omitted, fill the matrix with zeroes. If n is omitted, return a square m x m matrix.
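For example:

? matrix(2,3, i,j, i+j)
%1 =
[2 3 4]
[3 4 5]
? matrix(2,2)   \\ expr omitted: zero matrix
%2 =
[0 0]
[0 0]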
| |
matrixqz(A, {p = 0}) | |
A being an m x n matrix in M_{m,n}(ℚ), let Im_ℚ A (resp. Im_ℤ A) be the ℚ-vector space (resp. the ℤ-module) spanned by the columns of A. This function has varying behavior depending on the sign of p:

If p ≥ 0, A is assumed to have maximal rank n ≤ m. The function returns a matrix B ∈ M_{m,n}(ℤ), with Im_ℚ B = Im_ℚ A, such that the GCD of all its n x n minors is coprime to p; in particular, if p = 0 (default), this GCD is 1.

If p = -1, returns a basis of the lattice ℤ^m ∩ Im_ℤ A.

If p = -2, returns a basis of the lattice ℤ^m ∩ Im_ℚ A.

Caveat. (p = -1 or -2) For efficiency reasons, we do not compute the HNF of the resulting basis.
? minors(x) = vector(#x[,1], i, matdet(x[^i,])); ? A = [3,1/7; 5,3/7; 7,5/7]; minors(A) %1 = [4/7, 8/7, 4/7] \\ determinants of all 2x2 minors ? B = matrixqz(A) %2 = [3 1] [5 2] [7 3] ? minors(%) %3 = [1, 2, 1] \\ B integral with coprime minors ? matrixqz(A,-1) %4 = [3 1] [5 3] [7 5] ? matrixqz(A,-2) %5 = [3 1] [5 2] [7 3]
The library syntax is
| |
matsize(x) | |
x being a vector or matrix, returns a row vector with two components, the first being the number of rows (1 for a row vector), the second the number of columns (1 for a column vector).
The library syntax is
| |
matsnf(X, {flag = 0}) | |
If X is a (singular or nonsingular) matrix, outputs the vector of elementary divisors of X, i.e. the diagonal of the Smith normal form of X, normalized so that d_n | d_{n-1} | ... | d_1. X must have integer or polynomial entries; in the latter case, X must be a square matrix. The binary digits of flag mean:

1 (complete output): if set, outputs [U,V,D], where U and V are two unimodular matrices such that UXV is the diagonal matrix D. Otherwise output only the diagonal of D. If X is not a square matrix, then D will be a square diagonal matrix padded with zeros on the left or the top.

4 (cleanup): if set, cleans up the output. This means that elementary divisors equal to 1 will be deleted, i.e. outputs a shortened vector D' instead of D. If complete output was required, returns [U',V',D'] so that U'XV' = D' holds. If this flag is set, X is allowed to be of the form `vector of elementary divisors' or [U,V,D] as would normally be output with the cleanup flag unset.
If v is an output from
? X = [27,0; 0,3; 1,1; 0,0]; matsnf(X) %1 = [0, 0, 3, 1] ? [U,V,D] = v = matsnf(X, 1); U*X*V == D %2 = 1 ? U %3 = [0 0 0 1] [1 9 -27 0] [0 1 0 0] [0 0 1 0] ? V %4 = [-1 1] [ 1 0] ? snfrank(v, 3) %5 = 3 Continuing the same example after cleanup:
? [U,V,D] = v = matsnf(X, 1+4); U*X*V == D %6 = 1 ? D %7 = [0] [0] [3] ? snfrank(v, 3) %8 = 3 ? snfrank(v, 2) %9 = 2
The library syntax is
| |
matsolve(M, B) | |
Let M be a left-invertible matrix and B a column vector such that there exists a solution X to the system of linear equations MX = B; return the (unique) solution X. This has the same effect as, but is faster than, M^(-1)*B. Uses Dixon's p-adic lifting method if M and B are integral and Gaussian elimination otherwise. When there is no solution, the function returns an X such that MX - B is nonzero although it has at least #M zero entries:
? M = [1,2;3,4;5,6]; ? B = [4,6,8]~; X = matsolve(M, B) %2 = [-2, 3]~ ? M*X == B %3 = 1 ? B = [1,2,4]~; X = matsolve(M, [1,2,4]~) %4 = [0, 1/2]~ ? M*X - B %5 = [0, 0, -1]~ Raises an exception if M is not left-invertible, even if there is a solution:
? M = [1,1;1,1]; matsolve(M, [1,1]~) *** at top-level: matsolve(M,[1,1]~) *** ^ — — — — — — *** matsolve: impossible inverse in gauss: [1, 1; 1, 1]. The function also works when B is a matrix and we return the unique matrix solution X provided it exists. Again, if there is no solution, the function returns an X such that MX - B is nonzero although it has at least #M zero rows.
The library syntax is
| |
matsolvemod(M, D, B, {flag = 0}) | |
M being any integral matrix, D a column vector of nonnegative integer moduli, and B an integral column vector, gives an integer solution to the system of congruences ∑_j m_{i,j} x_j = b_i (mod d_i) if one exists, otherwise returns the integer zero. Note that we explicitly allow d_i = 0, corresponding to an equality in ℤ. Shorthand notation: B (resp. D) can be given as a single integer, in which case all the b_i (resp. d_i) above are taken to be equal to B (resp. D). Again, D = 0 solves the linear system of equations over ℤ.
? M = [1,2;3,4]; ? matsolvemod(M, [3,4]~, [1,2]~) %2 = [10, 0]~ ? matsolvemod(M, 3, 1) \\ M X = [1,1]~ over F_3 %3 = [2, 1]~ ? matsolvemod(M, [3,0]~, [1,2]~) \\ x + 2y = 1 (mod 3), 3x + 4y = 2 (in Z) %4 = [6, -4]~ ? matsolvemod(M, 0, [1,2]~) \\ no solution in Z for x + 2y = 1, 3x + 4y = 2 %5 = 0 If flag = 1, all solutions are returned in the form of a two-component row vector [x,u], where x is an integer solution to the system of congruences and u is a matrix whose columns give a basis of the homogeneous system (so that all solutions can be obtained by adding x to any linear combination of columns of u). If no solution exists, returns zero.
The library syntax is
| |
matsupplement(x) | |
Assuming that the columns of the matrix x are linearly independent (if they are not, an error message is issued), finds a square invertible matrix whose first columns are the columns of x, i.e. supplement the columns of x to a basis of the whole space.
? matsupplement([1;2]) %1 = [1 0] [2 1] Raises an error if x has 0 columns, since (due to a long standing design bug), the dimension of the ambient space (the number of rows) is unknown in this case:
? matsupplement(matrix(2,0)) *** at top-level: matsupplement(matrix *** ^ — — — — — — -- *** matsupplement: sorry, suppl [empty matrix] is not yet implemented.
The library syntax is
| |
mattranspose(x) | |
Transpose of x (also x~). This has an effect only on vectors and matrices.
The library syntax is
| |
minpoly(A, {v = 'x}) | |
minimal polynomial of A with respect to the variable v, i.e. the monic polynomial P of minimal degree (in the variable v) such that P(A) = 0.
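For example:

? minpoly([1,1; 0,1])   \\ not diagonalizable: minimal = characteristic polynomial
%1 = x^2 - 2*x + 1
? minpoly(matid(3))
%2 = x - 1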
The library syntax is
| |
norml2(x) | |
Square of the L^2-norm of x. More precisely,
if x is a scalar,
? norml2( [ 1, 2, 3 ] ) \\ vector %1 = 14 ? norml2( [ 1, 2; 3, 4] ) \\ matrix %2 = 30 ? norml2( 2*I + x ) %3 = 5 ? norml2( [ [1,2], [3,4], 5, 6 ] ) \\ recursively defined %4 = 91
The library syntax is
| |
normlp(x, {p = oo}) | |
L^p-norm of x; sup norm if p is omitted or
* if p is omitted or
* otherwise,
? v = [1,-2,3]; normlp(v) \\ vector %1 = 3 ? normlp(v, +oo) \\ same, more explicit %2 = 3 ? M = [1,-2;-3,4]; normlp(M) \\ matrix %3 = 4 ? T = (1+I) + I*x^2; normlp(T) %4 = 1.4142135623730950488016887242096980786 ? normlp([[1,2], [3,4], 5, 6]) \\ recursively defined %5 = 6 ? normlp(v, 1) %6 = 6 ? normlp(M, 1) %7 = 10 ? normlp(T, 1) %8 = 2.4142135623730950488016887242096980786
The library syntax is
| |
powers(x, n, {x0}) | |
For nonnegative n, return the vector with n+1 components
[1, x,..., x^n] if
? powers(Mod(3,17), 4) %1 = [Mod(1, 17), Mod(3, 17), Mod(9, 17), Mod(10, 17), Mod(13, 17)] ? powers(Mat([1,2;3,4]), 3) %2 = [[1, 0; 0, 1], [1, 2; 3, 4], [7, 10; 15, 22], [37, 54; 81, 118]] ? powers(3, 5, 2) %3 = [2, 6, 18, 54, 162, 486]
When n < 0, the function returns the empty vector
The library syntax is
| |
qfauto(G, {fl}) | |
G being a square and symmetric matrix with integer entries representing a
positive definite quadratic form, outputs the automorphism group of the
associated lattice.
Since this requires computing the minimal vectors, the computations can
become very lengthy as the dimension grows. G can also be given by an
The output is a two-component vector [o,g] where o is the group order and g is the list of generators (as a vector). For each generator H, the equality G = ^t H G H holds. The interface of this function is experimental and will likely change in the future. This function implements an algorithm of Plesken and Souvignier, following Souvignier's implementation.
? K = matkerint(Mat(concat([vector(23,i,2*i+1), 51, 145]))); ? M = matdiagonal(vector(25,i,if(i==25,-1,1))); ? L24 = K~ * M * K; \\ the Leech lattice ? [o,g] = qfauto(L24); o %4 = 8315553613086720000 ? #g %5 = 2
The library syntax is
| |
qfautoexport(qfa, {flag}) | |
qfa being an automorphism group as output by
? G = qfauto([2,1;1,2]) %1 = [12, [[-1, 0; 0, -1], [0, -1; 1, 1], [1, 1; 0, -1]]] ? s = qfautoexport(G) %2 = "Group([[-1, 0], [0, -1]], [[0, -1], [1, 1]], [[1, 1], [0, -1]])" ? extern("echo \"Order("s");\" | gap -q") %3 = 12
The library syntax is
| |
qfbil(x, y, {q}) | |
This function is obsolete, use
The library syntax is
| |
qfcholesky(q) | |
Given a square symmetric
The library syntax is
| |
qfcvp(x, t, {B}, {m}, {flag = 0}) | |
x being a square and symmetric matrix of dimension d representing a positive definite quadratic form, and t a vector of the same dimension d. This function deals with the vectors whose squared distance to t is less than B, enumerated using the Fincke-Pohst algorithm, storing at most m vectors. There is no limit if m is omitted: beware that this may be a huge vector! The vectors are returned in no particular order. The function searches for the closest vectors to t if B is omitted or ≤ 0. The behavior is undefined if x is not positive definite (a "precision too low" error is most likely, although more precise error messages are possible). The precise behavior depends on flag.

* If flag = 0 (default), return [N, M, V], where N is the number of vectors enumerated (possibly larger than m), M ≤ B is the maximum squared distance found, and V is a matrix whose columns are found vectors.

* If flag = 1, ignore m and return [M,v], where v is a nonzero vector at squared distance M ≤ B. If no nonzero vector has distance ≤ B, return [].

In these two cases, x must have integral small entries: more precisely, we definitely must have d·|x|_oo^2 < 2^53 but even that may not be enough. The implementation uses low precision floating point computations for maximal speed and gives incorrect results when x has large entries. That condition is checked in the code and the routine raises an error if large rounding errors occur.
? M = [2,1;1,2]; t = [1/2, -1/2]; ? qfcvp(M, t, 0) %2 = [2, 0.5000000000000000000, [0, 1; 0, -1]] ? qfcvp(M, t, 1.5) %3 = [4, 1.5000000000000000000, [1, 0, 1, 0; 0, 0, -1, -1]]
The library syntax is
| |
qfeval({q}, x, {y}) | |
Evaluate the quadratic form q (given by a symmetric matrix) at the vector x; if y is present, evaluate the polar form at (x,y); if q omitted, use the standard Euclidean scalar product, corresponding to the identity matrix.
Roughly equivalent to
? x = [1,2,3]~; y = [-1,3,1]~; q = [1,2,3;2,2,-1;3,-1,9]; ? qfeval(q,x,y) %2 = 23 ? for(i=1,10^6, qfeval(q,x,y)) time = 661ms ? for(i=1,10^6, x~*q*y) time = 697ms
The speedup is noticeable for the quadratic form,
compared to
? for(i=1,10^6, qfeval(q,x)) time = 487ms The special case q = Id is handled faster if we omit q altogether:
? qfeval(,x,y) %6 = 8 ? q = matid(#x); ? for(i=1,10^6, qfeval(q,x,y)) time = 529 ms. ? for(i=1,10^6, qfeval(,x,y)) time = 228 ms. ? for(i=1,10^6, x~*y) time = 274 ms.
We also allow
? M = [1,2,3;4,5,6;7,8,9]; qfeval(,M) \\ Gram matrix %5 = [66 78 90] [78 93 108] [90 108 126] ? q = [1,2,3;2,2,-1;3,-1,9]; ? for(i=1,10^6, qfeval(q,M)) time = 2,008 ms. ? for(i=1,10^6, M~*q*M) time = 2,368 ms. ? for(i=1,10^6, qfeval(,M)) time = 1,053 ms. ? for(i=1,10^6, M~*M) time = 1,171 ms.
If q is a
? q = Qfb(2,3,4); x = [2,3]; ? qfeval(q, x) %2 = 62 ? Q = Mat(q) %3 = [ 2 3/2] [3/2 4] ? qfeval(Q, x) %4 = 62 ? for (i=1, 10^6, qfeval(q,x)) time = 758 ms. ? for (i=1, 10^6, qfeval(Q,x)) time = 1,110 ms.
Finally, when x is a
? q = Qfb(2,3,4); Q = Mat(q); x = [1,2;3,4]; ? qfeval(q, x) %2 = Qfb(47, 134, 96) ? qfeval(Q,x) %3 = [47 67] [67 96] ? for (i=1, 10^6, qfeval(q,x)) time = 701 ms. ? for (i=1, 10^6, qfeval(Q,x)) time = 1,639 ms.
The library syntax is
| |
qfgaussred(q, {flag = 0}) | |
decomposition into squares of the quadratic form represented by the symmetric matrix q. If flag = 0 (default), the result is a matrix M whose diagonal entries are the coefficients of the squares, and the off-diagonal entries on each line represent the bilinear forms. More precisely, if (a_{i,j}) denotes the output, one has

q(x) = ∑_i a_{i,i} (x_i + ∑_{j != i} a_{i,j} x_j)^2
? qfgaussred([0,1;1,0]) %1 = [1/2 1] [-1 -1/2] This means that 2xy = (1/2)(x+y)^2 - (1/2)(x-y)^2. Singular matrices are supported, in which case some diagonal coefficients vanish:
? qfgaussred([1,1;1,1]) %2 = [1 1] [1 0] This means that x^2 + 2xy + y^2 = (x+y)^2.
If flag = 1, return
? q = [0,1;1,0]; ? [U,V] = qfgaussred(q,1); D = matdiagonal(V); ? U~*D*U %5 = [0 1] [1 0] ? U+D-1 %6 = [1/2 1] [ -1 -1/2]
The library syntax is
| |
qfisom(G, H, {fl}, {grp}) | |
G, H being square and symmetric matrices with integer entries representing
positive definite quadratic forms, return an invertible matrix S such that
G = ^t S H S. This defines an isomorphism between the corresponding lattices.
Since this requires computing the minimal vectors, the computations can
become very lengthy as the dimension grows.
See
G can also be given by an This function implements an algorithm of Plesken and Souvignier, following Souvignier's implementation.
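For example (illustrative session), a form obtained from G by a unimodular change of basis is isomorphic to G, and the returned matrix satisfies the stated identity:

? G = [2,1; 1,2];
? S = [1,1; 0,1]; H = S~ * G * S;
? T = qfisom(G, H);
? G == T~ * H * T
%4 = 1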
The library syntax is
| |
qfisominit(G, {fl}, {m}) | |
G being a square and symmetric matrix with integer entries representing a
positive definite quadratic form, return an The interface of this function is experimental and will likely change in future release.
If present, the optional parameter fl must be a
*
*
If present, m must be the set of vectors of norm up to the maximal of the
diagonal entry of G, either as a matrix or as given by
The library syntax is
| |
qfjacobi(A) | |
Apply Jacobi's eigenvalue algorithm to the real symmetric matrix A. This returns [L, V], where * L is the vector of (real) eigenvalues of A, sorted in increasing order, * V is the corresponding orthogonal matrix of eigenvectors of A.
? \p19 ? A = [1,2;2,1]; mateigen(A) %1 = [-1 1] [ 1 1] ? [L, H] = qfjacobi(A); ? L %3 = [-1.000000000000000000, 3.000000000000000000]~ ? H %4 = [ 0.7071067811865475245 0.7071067811865475244] [-0.7071067811865475244 0.7071067811865475245] ? norml2( (A-L[1])*H[,1] ) \\ approximate eigenvector %5 = 9.403954806578300064 E-38 ? norml2(H*H~ - 1) %6 = 2.350988701644575016 E-38 \\ close to orthogonal
The library syntax is
| |
qflll(x, {flag = 0}) | |
LLL algorithm applied to the columns of the matrix x. The columns of x may be linearly dependent. The result is by default a unimodular transformation matrix T such that x.T is an LLL-reduced basis of the lattice generated by the column vectors of x. Note that if x is not of maximal rank T will not be square. The LLL parameters are (0.51,0.99), meaning that the Gram-Schmidt coefficients for the final basis satisfy |μi,j| ≤ 0.51, and the Lovász's constant is 0.99.
If flag = 0 (default), assume that x has either exact (integral or
rational) or real floating point entries. The matrix is rescaled, converted
to integers and the behavior is then as in flag = 1.
Computations involving Gram-Schmidt
vectors are approximate, with precision varying as needed (Lehmer's trick,
as generalized by Schnorr). Adapted from Nguyen and Stehlé's algorithm
and Stehlé's code (
If flag = 1, disable use of FLATTER algorithm; use If flag = 2, x should be an integer matrix whose columns are linearly independent. Returns a partially reduced basis for x, using an unpublished algorithm by Peter Montgomery: a basis is said to be partially reduced if |vi ± vj| ≥ |vi| for any two distinct basis vectors vi, vj. This is faster than flag = 1, esp. when one row is huge compared to the other rows (knapsack-style), and should quickly produce relatively short vectors. The resulting basis is not LLL-reduced in general. If LLL reduction is eventually desired, avoid this partial reduction: applying LLL to the partially reduced matrix is significantly slower than starting from a knapsack-type lattice. If flag = 3, as flag = 0, but the reduction is performed in place: the routine returns x.T. This is usually faster for knapsack-type lattices. If flag = 4, as flag = 0, returning a vector [K, T] of matrices: the columns of K represent a basis of the integer kernel of x (not LLL-reduced in general) and T is the transformation matrix such that x.T is an LLL-reduced ℤ-basis of the image of the matrix x. If flag = 5, case as flag = 4, but x may have polynomial coefficients. If flag = 8, same as flag = 0, but x may have polynomial coefficients.
? \p500 realprecision = 500 significant digits ? a = 2*cos(2*Pi/97); ? C = 10^450; ? v = powers(a,48); b = round(matconcat([matid(48),C*v]~)); ? p = b * qflll(b)[,1]; \\ tiny linear combination of powers of 'a' time = 4,470 ms. ? exponent(v * p / C) %5 = -1418 ? p3 = qflll(b,3)[,1]; \\ compute in place, faster time = 3,790 ms. ? p3 == p \\ same result %7 = 1 ? p2 = b * qflll(b,2)[,1]; \\ partial reduction: faster, not as good time = 343 ms. ? exponent(v * p2 / C) %9 = -1190
The library syntax is
| |
qflllgram(G, {flag = 0}) | |
Same as
If flag = 0 (default), assume that G has either exact (integral or
rational) or real floating point entries. The matrix is rescaled, converted
to integers.
Computations involving Gram-Schmidt vectors are approximate, with precision
varying as needed (Lehmer's trick, as generalized by Schnorr). Adapted from
Nguyen and Stehlé's algorithm and Stehlé's code ( If flag = 1, disable use of FLATTER algorithm. flag = 4: G has integer entries, gives the kernel and reduced image of x. flag = 5: same as 4, but G may have polynomial coefficients.
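For instance (illustrative check), with G the Gram matrix of an invertible basis, the default output is a unimodular transformation matrix:

? x = [1,4; 2,3]; G = x~ * x;
? T = qflllgram(G);
? abs(matdet(T))
%3 = 1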
The library syntax is
| |
qfminim(x, {B}, {m}, {flag = 0}) | |
x being a square and symmetric matrix of dimension d representing a positive definite quadratic form, this function deals with the vectors of x whose norm is less than or equal to B, enumerated using the Fincke-Pohst algorithm, storing at most m pairs of vectors: only one vector is given for each pair ± v. There is no limit if m is omitted: beware that this may be a huge vector! The vectors are returned in no particular order. The function searches for the minimal nonzero vectors if B is omitted. The behavior is undefined if x is not positive definite (a "precision too low" error is most likely, although more precise error messages are possible). The precise behavior depends on flag.

* If flag = 0 (default), return [N, M, V], where N is the number of vectors enumerated (an even number, possibly larger than 2m), M ≤ B is the maximum norm found, and V is a matrix whose columns are found vectors.

* If flag = 1, ignore m and return [M,v], where v is a nonzero vector of length M ≤ B. If no nonzero vector has length ≤ B, return []. If no explicit B is provided, return a vector of smallish norm, namely the vector of smallest length (usually the first one but not always) in an LLL-reduced basis for x.

In these two cases, x must have integral small entries: more precisely, we definitely must have d·|x|_oo^2 < 2^53 but even that may not be enough. The implementation uses low precision floating point computations for maximal speed and gives incorrect results when x has large entries. That condition is checked in the code and the routine raises an error if large rounding errors occur. A more robust, but much slower, implementation is chosen if the following flag is used:

* If flag = 2, x can have non integral real entries, but this is also useful when x has large integral entries. Return [N, M, V] as in case flag = 0, where M is returned as a floating point number. If x is inexact and B is omitted, the "minimal" vectors in V only have approximately the same norm (up to the internal working accuracy). This version is very robust but still offers no hard and fast guarantee about the result: it involves floating point operations performed at a high floating point precision depending on your input, but done without rigorous tracking of roundoff errors (as would be provided by interval arithmetic for instance). No example is known where the input is exact but the function returns a wrong result.
? x = matid(2); ? qfminim(x) \\ 4 minimal vectors of norm 1: ±[0,1], ±[1,0] %2 = [4, 1, [0, 1; 1, 0]] ? { x = \\ The Leech lattice [4, 2, 0, 0, 0,-2, 0, 0, 0, 0, 0, 0, 1,-1, 0, 0, 0, 1, 0,-1, 0, 0, 0,-2; 2, 4,-2,-2, 0,-2, 0, 0, 0, 0, 0, 0, 0,-1, 0, 0, 0, 0, 0,-1, 0, 1,-1,-1; 0,-2, 4, 0,-2, 0, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0, 1, 0, 0, 1,-1,-1, 0, 0; 0,-2, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 1,-1, 0, 0, 0, 1,-1, 0, 1,-1, 1, 0; 0, 0,-2, 0, 4, 0, 0, 0, 1,-1, 0, 0, 1, 0, 0, 0,-2, 0, 0,-1, 1, 1, 0, 0; -2, -2,0, 0, 0, 4,-2, 0,-1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0,-1, 1, 1; 0, 0, 0, 0, 0,-2, 4,-2, 0, 0, 0, 0, 0, 1, 0, 0, 0,-1, 0, 0, 0, 1,-1, 0; 0, 0, 0, 0, 0, 0,-2, 4, 0, 0, 0, 0,-1, 0, 0, 0, 0, 0,-1,-1,-1, 0, 1, 0; 0, 0, 0, 0, 1,-1, 0, 0, 4, 0,-2, 0, 1, 1, 0,-1, 0, 1, 0, 0, 0, 0, 0, 0; 0, 0, 0, 0,-1, 0, 0, 0, 0, 4, 0, 0, 1, 1,-1, 1, 0, 0, 0, 1, 0, 0, 1, 0; 0, 0, 0, 0, 0, 0, 0, 0,-2, 0, 4,-2, 0,-1, 0, 0, 0,-1, 0,-1, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,-2, 4,-1, 1, 0, 0,-1, 1, 0, 1, 1, 1,-1, 0; 1, 0,-1, 1, 1, 0, 0,-1, 1, 1, 0,-1, 4, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1,-1; -1,-1, 1,-1, 0, 0, 1, 0, 1, 1,-1, 1, 0, 4, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1; 0, 0, 0, 0, 0, 0, 0, 0, 0,-1, 0, 0, 0, 1, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0, 1, 1, 0, 4, 0, 0, 0, 0, 1, 1, 0, 0; 0, 0, 1, 0,-2, 0, 0, 0, 0, 0, 0,-1, 0, 0, 0, 0, 4, 1, 1, 1, 0, 0, 1, 1; 1, 0, 0, 1, 0, 0,-1, 0, 1, 0,-1, 1, 1, 0, 0, 0, 1, 4, 0, 1, 1, 0, 1, 0; 0, 0, 0,-1, 0, 1, 0,-1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 4, 0, 1, 1, 0, 1; -1, -1,1, 0,-1, 1, 0,-1, 0, 1,-1, 1, 0, 1, 0, 0, 1, 1, 0, 4, 0, 0, 1, 1; 0, 0,-1, 1, 1, 0, 0,-1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 4, 1, 0, 1; 0, 1,-1,-1, 1,-1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 4, 0, 1; 0,-1, 0, 1, 0, 1,-1, 1, 0, 1, 0,-1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 4, 1; -2,-1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 4]; } ? qfminim(x,,0) \\ 0: don't store minimal vectors time = 121 ms. %4 = [196560, 4, [;]] \\ 196560 minimal vectors of norm 4 ? qfminim(x) \\ store all minimal vectors ! time = 821 ms. ? qfminim(x,,0,2); \\ safe algorithm. Slower and unnecessary here. time = 5,540 ms. %6 = [196560, 4.000061035156250000, [;]] ? qfminim(x,,,2); \\ safe algorithm; store all minimal vectors time = 6,602 ms.
In this example, storing 0 vectors limits memory use; storing all of them
requires a
The library syntax is
| |
qfminimize(G) | |
Given a square symmetric matrix G with rational coefficients, and
non-zero determinant, of dimension n ≥ 1, return
? G = matdiagonal([650, -104329, -104329]); ? [H,U,c]=qfminimize(G); H %2 = [-1,0,0;0,-1,0;0,0,1] ? U %3 = [0,0,1/5;5/323,-1/323,0;-1/323,-5/323,0] ? c %4 = 1/26 ? c * U~ * G * U %5 = [-1,0,0;0,-1,0;0,0,1]
The library syntax is
| |
qfnorm(x, {q}) | |
This function is obsolete, use
The library syntax is
| |
qforbits(G, V) | |
Return the orbits of V under the action of the group of linear transformations generated by the set G. It is assumed that G contains minus identity, and only one vector in {v, -v} should be given. If G does not stabilize V, the function returns 0. In the example below, we compute representatives and lengths of the orbits of the vectors of norm ≤ 3 under the automorphisms of the lattice ℤ^6.
? Q=matid(6); G=qfauto(Q); V=qfminim(Q,3); ? apply(x->[x[1],#x],qforbits(G,V)) %2 = [[[0,0,0,0,0,1]~,6],[[0,0,0,0,1,-1]~,30],[[0,0,0,1,-1,-1]~,80]]
The library syntax is
| |
qfparam(G, sol, {flag = 0}) | |
Coefficients of binary quadratic forms that parametrize the solutions of the ternary quadratic form G, using the particular solution sol. flag is optional and can be 1, 2, or 3, in which case the flag-th form is reduced. The default is flag = 0 (no reduction).
? G = [1,0,0;0,1,0;0,0,-34]; ? M = qfparam(G, qfsolve(G)) %2 = [ 3 -10 -3] [-5 -6 5] [ 1 0 1] Indeed, the solutions can be parametrized as (3x^2-10xy-3y^2)^2 + (-5x^2-6xy+5y^2)^2 - 34(x^2+y^2)^2 = 0.
? v = y^2 * M*[1,x/y,(x/y)^2]~ %3 = [3*x^2 - 10*y*x - 3*y^2, -5*x^2 - 6*y*x + 5*y^2, -x^2 - y^2]~ ? v~*G*v %4 = 0
The library syntax is
| |
qfperfection(G) | |
G being a square and symmetric matrix with integer entries representing a positive definite quadratic form, outputs the perfection rank of the form. That is, gives the rank of the family of the s symmetric matrices v ^t v, where v runs through the minimal vectors. A form is perfect if and only if its perfection rank is d(d+1)/2 where d is the dimension of G. The algorithm computes the minimal vectors and its runtime is exponential in d.
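For instance (illustrative values): the form x^2 + y^2 has perfection rank 2 < 3, while the hexagonal form is perfect in dimension 2:

? qfperfection(matid(2))
%1 = 2
? qfperfection([2,1; 1,2])   \\ perfect: rank 2*3/2 = 3
%2 = 3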
The library syntax is
| |
qfrep(q, B, {flag = 0}) | |
q being a square and symmetric matrix with integer entries representing a positive definite quadratic form, count the vectors representing successive integers. * If flag = 0, count all vectors. Outputs the vector whose i-th entry, 1 ≤ i ≤ B is half the number of vectors v such that q(v) = i. * If flag = 1, count vectors of even norm. Outputs the vector whose i-th entry, 1 ≤ i ≤ B is half the number of vectors such that q(v) = 2i.
? q = [2, 1; 1, 3]; ? qfrep(q, 5) %2 = Vecsmall([0, 1, 2, 0, 0]) \\ 1 vector of norm 2, 2 of norm 3, etc. ? qfrep(q, 5, 1) %3 = Vecsmall([1, 0, 0, 1, 0]) \\ 1 vector of norm 2, 0 of norm 4, etc.
This routine uses a naive algorithm based on
The library syntax is
| |
qfsign(x) | |
Returns [p,m] the signature of the quadratic form represented by the symmetric matrix x. Namely, p (resp. m) is the number of positive (resp. negative) eigenvalues of x. The result is computed using Gaussian reduction.
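For example (illustrative session, output shown in the expected [p,m] form):

? qfsign([1,0; 0,-2])
%1 = [1, 1]
? qfsign(mathilbert(3))   \\ Hilbert matrices are positive definite
%2 = [3, 0]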
The library syntax is
| |
qfsolve(G) | |
Given a square symmetric matrix G of dimension n ≥ 1, solve over ℚ the quadratic equation ^t X G X = 0. The matrix G must have rational coefficients. When G is integral, the argument can also be a vector [G,F] where F is the factorization matrix of the absolute value of the determinant of G.
The solution might be a single nonzero column vector
( If no solution exists, returns an integer, that can be a prime p such that there is no local solution at p, or -1 if there is no real solution, or -2 if n = 2 and -det G is not a square (which implies there is a real solution, but no local solution at some p dividing det G).
? G = [1,0,0;0,1,0;0,0,-34]; ? qfsolve(G) %1 = [-3, -5, 1]~ ? qfsolve([1,0; 0,2]) %2 = -1 \\ no real solution ? qfsolve([1,0,0;0,3,0; 0,0,-2]) %3 = 3 \\ no solution in Q3 ? qfsolve([1,0; 0,-2]) %4 = -2 \\ no solution, n = 2
The library syntax is
| |
setbinop(f, X, {Y}) | |
The set whose elements are the f(x,y), where x, y run through X, Y respectively. If Y is omitted, assume that X = Y and that f is symmetric: f(x,y) = f(y,x) for all x,y in X.
? X = [1,2,3]; Y = [2,3,4]; ? setbinop((x,y)->x+y, X,Y) \\ set X + Y %2 = [3, 4, 5, 6, 7] ? setbinop((x,y)->x-y, X,Y) \\ set X - Y %3 = [-3, -2, -1, 0, 1] ? setbinop((x,y)->x+y, X) \\ set 2X = X + X %2 = [2, 3, 4, 5, 6]
The library syntax is
| |
setdelta(x, y) | |
Symmetric difference of the two sets x and y (see
? a=[1,2,2,3];b=[4,2,3,4]; ? setdelta(Set(a), Set(b)) %2 = [1, 4] \\ the symmetric difference of the two sets ? setdelta(a,b) %3 = [1, 2, 2, 3, 4, 2, 3, 4] \\ undefined result
The library syntax is
| |
setintersect(x, y) | |
Intersection of the two sets x and y (see
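For example:

? setintersect(Set([1,2,3]), Set([2,3,4]))
%1 = [2, 3]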
The library syntax is
| |
setisset(x) | |
Returns true (1) if x is a set, false (0) if
not. In PARI, a set is a row vector whose entries are strictly
increasing with respect to a (somewhat arbitrary) universal comparison
function. To convert any object into a set (this is most useful for
vectors, of course), use the function
? a = [3, 1, 1, 2]; ? setisset(a) %2 = 0 ? Set(a) %3 = [1, 2, 3]
The library syntax is
| |
setminus(x, y) | |
Difference of the two sets x and y (see
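For example (elements of x not in y):

? setminus(Set([1,2,3]), Set([2,3,4]))
%1 = [1]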
The library syntax is
| |
setsearch(S, x, {flag = 0}) | |
Determines whether x belongs to the set or sorted list S
(see We first describe the default behavior, when flag is zero or omitted. If x belongs to the set S, returns the index j such that S[j] = x, otherwise returns 0.
? T = [7,2,3,5]; S = Set(T); ? setsearch(S, 2) %2 = 1 ? setsearch(S, 4) \\ not found %3 = 0 ? setsearch(T, 7) \\ search in a randomly sorted vector %4 = 0 \\ WRONG !
If S is not a set, we also allow sorted lists with
respect to the
? L = List([1,4,2,3,2]); setsearch(L, 4) %1 = 0 \\ WRONG ! ? listsort(L, 1); L \\ sort L first %2 = List([1, 2, 3, 4]) ? setsearch(L, 4) %3 = 4 \\ now correct
If flag is nonzero, this function returns the index j where x should be
inserted, and 0 if it already belongs to S. This is meant to be used for
dynamically growing (sorted) lists, in conjunction with
? L = List([1,5,2,3,2]); listsort(L,1); L %1 = List([1,2,3,5]) ? j = setsearch(L, 4, 1) \\ 4 should have been inserted at index j %2 = 4 ? listinsert(L, 4, j); L %3 = List([1, 2, 3, 4, 5])
The library syntax is
| |
setunion(x, y) | |
Union of the two sets x and y (see
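For example:

? setunion(Set([1,2,3]), Set([2,3,4]))
%1 = [1, 2, 3, 4]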
The library syntax is
| |
snfrank(D, {q = 0}) | |
Assuming that D is a Smith normal form (i.e. vector of elementary divisors) for some module and q a power of an irreducible element or 0, returns the minimal number of generators for D/qD. For instance, if q = p^n where p is a prime number, this is the dimension of (p^{n-1}D)/(p^n D) as an 𝔽_p-vector space. An argument q = 0 may be omitted.
? snfrank([4,4,2], 2) %1 = 3 ? snfrank([4,4,2], 4) %2 = 2 ? snfrank([4,4,2], 8) %3 = 0 ? snfrank([4,4,2]) \\ or snfrank([4,4,2], 0) %4 = 3 The function also works for K[x]-modules:
? D=matsnf([-x-5,-1,-1,0; 0,x^2+10*x+26,-1,-x-5; 1,-x-5,-x-5,1; -1,0,0,1]); ? snfrank(D, x^2 + 10*x + 27) %6 = 2 ? A=matdiagonal([x-1,x^2+1,x-1,(x^2+1)^2,x,(x-1)^2]); D=matsnf(A); ? snfrank(D,x-1) %8 = 3 ? snfrank(D,(x-1)^2) %9 = 1 ? snfrank(D,(x-1)^3) %9 = 0 ? snfrank(D,x^2+1) %10 = 2
Finally this function supports any output from
The library syntax is
| |
trace(x) | |
This applies to quite general x. If x is not a matrix, it is equal to the sum of x and its conjugate, except for polmods where it is the trace as an algebraic number. For x a square matrix, it is the ordinary trace. If x is a nonsquare matrix (but not a vector), an error occurs.
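For example:

? trace([1,2; 3,4])
%1 = 5
? trace(3 + 4*I)            \\ x plus its conjugate
%2 = 6
? trace(Mod(x, x^2 - 5))    \\ trace of an algebraic number
%3 = 0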
The library syntax is
| |
vecextract(x, y, {z}) | |
Extraction of components of the vector or matrix x according to y. In case x is a matrix, its components are the columns of x. The parameter y is a component specifier, which is either an integer, a string describing a range, or a vector. If y is an integer, it is considered as a mask: the binary bits of y are read from right to left, but correspond to taking the components from left to right. For example, if y = 13 = (1101)_2 then the components 1, 3 and 4 are extracted.
If y is a vector ( If y is a string, it can be * a single (nonzero) index giving a component number (a negative index means we start counting from the end).
* a range of the form
In addition, if the first character in the string is If z is not omitted, x must be a matrix. y is then the row specifier, and z the column specifier, where the component specifier is as explained above.
? v = [a, b, c, d, e]; ? vecextract(v, 5) \\ mask %1 = [a, c] ? vecextract(v, [4, 2, 1]) \\ component list %2 = [d, b, a] ? vecextract(v, "2..4") \\ interval %3 = [b, c, d] ? vecextract(v, "-1..-3") \\ interval + reverse order %4 = [e, d, c] ? vecextract(v, "^2") \\ complement %5 = [a, c, d, e] ? vecextract(matid(3), "2..", "..") %6 = [0 1 0] [0 0 1]
The range notations * reverse order,
* omitting either a or b in
The library syntax is
| |
vecprod(v) | |
Return the product of the components of the vector v. Return 1 on an empty vector.
? vecprod([1,2,3]) %1 = 6 ? vecprod([]) %2 = 1
The library syntax is
| |
vecsearch(v, x, {cmpf}) | |
Determines whether x belongs to the sorted vector or list v: return the (positive) index where x was found, or 0 if it does not belong to v.
If the comparison function cmpf is omitted, we assume that v is sorted in
increasing order, according to the standard comparison function
If
? v = [1,3,4,5,7]; ? vecsearch(v, 3) %2 = 2 ? vecsearch(v, 6) %3 = 0 \\ not in the list ? vecsearch([7,6,5], 5) \\ unsorted vector: result undefined %4 = 0 Note that if we are sorting with respect to a key which is expensive to compute (e.g. a discriminant), one should rather precompute all keys, sort that vector and search in the vector of keys, rather than searching in the original vector with respect to a comparison function.
By abuse of notation, x is also allowed to be a matrix, seen as a vector
of its columns; again by abuse of notation, a
? v = vecsort([3,0,2; 1,0,2]) \\ sort matrix columns according to lex order %1 = [0 2 3] [0 2 1] ? vecsearch(v, [3,1]~) %2 = 3 ? vecsearch(v, [3,1]) \\ can search for x or x~ %3 = 3 ? vecsearch(v, [1,2]) %4 = 0 \\ not in the list
The library syntax is
| |
vecsort(x, {cmpf}, {flag = 0}) | |
Sorts the vector x in ascending order, using a mergesort method. x must be a list, vector or matrix (seen as a vector of its columns). Note that mergesort is stable, hence the initial ordering of "equal" entries (with respect to the sorting criterion) is not changed.
If
If * an integer k: sort according to the value of the k-th subcomponents of the components of x.
* a vector: sort lexicographically according to the components listed in
the vector. For example, if
* a comparison function:
* a key:
? vecsort([3,0,2; 1,0,2]) \\ sort columns according to lex order %1 = [0 2 3] [0 2 1] ? vecsort(v, (x,y)->y-x) \\ reverse sort ? vecsort(v, (x,y)->abs(x)-abs(y)) \\ sort by increasing absolute value ? vecsort(v, abs) \\ sort by increasing absolute value, using key ? cmpf(x,y) = my(dx = poldisc(x), dy = poldisc(y)); abs(dx) - abs(dy); ? v = [x^2+1, x^3-2, x^4+5*x+1] vecsort(v, cmpf) \\ comparison function ? vecsort(v, x->abs(poldisc(x))) \\ key
The A direct approach is also possible and equivalent to using a sorting key:
? T = [abs(poldisc(x)) | x<-v]; ? perm = vecsort(T,,1); \\ indirect sort ? vecextract(v, perm)
This also provides the vector T of all keys, which is
interesting for instance in later The binary digits of flag mean:
* 1: indirect sorting of the vector x, i.e. if x is an
n-component vector, returns a permutation of [1,2,...,n] which
applied to the components of x sorts x in increasing order.
For example, * 4: use descending instead of ascending order. * 8: remove "duplicate" entries with respect to the sorting function (keep the first occurring entry). For example:
? vecsort([Pi,Mod(1,2),z], (x,y)->0, 8) \\ make everything compare equal %1 = [3.141592653589793238462643383] ? vecsort([[2,3],[0,1],[0,3]], 2, 8) %2 = [[0, 1], [2, 3]]
The library syntax is
| |
vecsum(v) | |
Return the sum of the components of the vector v. Return 0 on an empty vector.
? vecsum([1,2,3]) %1 = 6 ? vecsum([]) %2 = 0
The library syntax is
| |
vector(n, {X}, {expr = 0}) | |
Creates a row vector (type
? vector(3,i, 5*i) %1 = [5, 10, 15] ? vector(3) %2 = [0, 0, 0] The variable X is lexically scoped to each evaluation of expr. Any change to X within expr does not affect subsequent evaluations: it still runs from 1 to n. A local change allows, for example, different indexing:
vector(10, i, i=i-1; f(i)) \\ i = 0, ..., 9 vector(10, i, i=2*i; f(i)) \\ i = 2, 4, ..., 20
This per-element scope for X differs from
n = 3
v = vector(n); vector(n, i, i++)            ---> [2, 3, 4]
v = vector(n); for (i = 1, n, v[i] = i++)   ---> [2, 0, 4]
| |
vectorsmall(n, {X}, {expr = 0}) | |
Creates a row vector of small integers (type
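A small illustration (the entries must fit into machine integers):

? vectorsmall(5, i, i^2)
%1 = Vecsmall([1, 4, 9, 16, 25])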
| |
vectorv(n, {X}, {expr = 0}) | |
As
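For example:

? vectorv(3, i, i^2)
%1 = [1, 4, 9]~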
| |