Vectors, matrices, linear algebra and sets

Note that most linear algebra functions operating on subspaces defined by generating sets (such as mathnf, qflll, etc.) take matrices as arguments. As usual, the generating vectors are taken to be the columns of the given matrix.

Since PARI does not have a strong typing system, scalars live in unspecified commutative base rings. It is very difficult to write robust linear algebra routines in such a general setting. We thus assume that the base ring is a domain and work over its field of fractions. If the base ring is not a domain, one gets an error as soon as a non-zero pivot turns out to be non-invertible. Some functions, e.g. mathnf or mathnfmod, specifically assume that the base ring is ℤ.




algdep(x, k, {flag = 0})

x being real/complex, or p-adic, finds a polynomial of degree at most k with integer coefficients having x as approximate root. Note that the polynomial which is obtained is not necessarily the "correct" one. In fact it is not even guaranteed to be irreducible. One can check the closeness either by a polynomial evaluation (use subst), or by computing the roots of the polynomial given by algdep (use polroots).
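
For instance, one can perform the suggested checks on a simple algebraic number (a sketch; the sign and normalization of the polynomial returned by algdep may vary, so outputs are not shown):

  ? z = 2^(1/3) + 1;
  ? T = algdep(z, 3)   \\ expect (up to sign) x^3 - 3*x^2 + 3*x - 3
  ? subst(T, x, z)     \\ should be close to 0
  ? polroots(T)        \\ one root should be close to z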

Internally, lindep([1,x,...,x^k], flag) is used. A non-zero value of flag may improve on the default behavior if the input number is known to a huge accuracy, and you suspect the last bits are incorrect (this truncates the number, throwing away the least significant bits), but default values are usually sufficient:

  ? \p200
  ? algdep(2^(1/6)+3^(1/5), 30);      \\ wrong in 0.8s
  ? algdep(2^(1/6)+3^(1/5), 30, 100); \\ wrong in 0.4s
  ? algdep(2^(1/6)+3^(1/5), 30, 170); \\ right in 0.8s
  ? algdep(2^(1/6)+3^(1/5), 30, 200); \\ wrong in 1.0s
  ? \p250
  ? algdep(2^(1/6)+3^(1/5), 30);      \\ right in 1.0s
  ? algdep(2^(1/6)+3^(1/5), 30, 200); \\ right in 1.0s
  ? \p500
  ? algdep(2^(1/6)+3^(1/5), 30);      \\ right in 2.9s
  ? \p1000
  ? algdep(2^(1/6)+3^(1/5), 30);      \\ right in 10.6s

The changes in realprecision only affect the quality of the initial approximation to 2^{1/6} + 3^{1/5}; algdep itself uses exact operations (the size of its operands depends on the accuracy of the input of course: more accurate input means slower operations).

Proceeding by increments of 5 digits of accuracy, algdep with default flag produces its first correct result at 205 digits, and from then on a steady stream of correct results.

The above example is the test case studied in a 2000 paper by Borwein and Lisonek: Applications of integer relation algorithms, Discrete Math., 217, p. 65--82. The version of PARI tested there was 1.39, which succeeded reliably from precision 265 on, in about 200 times as much time as the current version.

The library syntax is GEN algdep0(GEN x, long k, long flag). Also available is GEN algdep(GEN x, long k) (flag = 0).




charpoly(A, {v = 'x}, {flag = 5})

characteristic polynomial of A with respect to the variable v, i.e. determinant of v*I-A if A is a square matrix.

  ? charpoly([1,2;3,4])
  %1 = x^2 - 5*x - 2
  ? charpoly([1,2;3,4],, 't)
  %2 = t^2 - 5*t - 2

If A is not a matrix but a scalar, the function returns the characteristic polynomial of the map "multiplication by A":

  ? charpoly(Mod(x+2, x^3-2))
  %1 = x^3 - 6*x^2 + 12*x - 10
  ? charpoly(I)
  %2 = x^2 + 1
  ? charpoly(quadgen(5))
  %3 = x^2 - x - 1
  ? charpoly(ffgen(ffinit(2,4)))
  %4 = Mod(1, 2)*x^4 + Mod(1, 2)*x^3 + Mod(1, 2)*x^2 + Mod(1, 2)*x + Mod(1, 2)

The value of flag is only significant for matrices, and we advise sticking to the default value. Let n be the dimension of A.

If flag = 0, same method (Le Verrier's) as for computing the adjoint matrix, i.e. using the traces of the powers of A. Assumes that n! is invertible; uses O(n^4) scalar operations.

If flag = 1, uses Lagrange interpolation which is usually the slowest method. Assumes that n! is invertible; uses O(n^4) scalar operations.

If flag = 2, uses the Hessenberg form. Assumes that the base ring is a field. Uses O(n^3) scalar operations, but suffers from coefficient explosion unless the base field is finite or ℝ.

If flag = 3, uses Berkowitz's division free algorithm, valid over any ring (commutative, with unit). Uses O(n^4) scalar operations.

If flag = 4, A must be integral. Uses a modular algorithm: Hessenberg form for various small primes, then Chinese remainders.

If flag = 5 (default), uses the "best" method given A. This means we use Berkowitz unless the base ring is ℤ (use flag = 4) or a field where coefficient explosion does not occur, e.g. a finite field or the reals (use flag = 2).

The library syntax is GEN charpoly0(GEN A, long v = -1, long flag), where v is a variable number. Also available are GEN charpoly(GEN x, long v) (flag = 5), GEN caract(GEN A, long v) (flag = 1), GEN carhess(GEN A, long v) (flag = 2), GEN carberkowitz(GEN A, long v) (flag = 3) and GEN caradj(GEN A, long v, GEN *pt). In this last case, if pt is not NULL, *pt receives the address of the adjoint matrix of A (see matadjoint), so both can be obtained at once.




concat(x, {y})

Concatenation of x and y. If x or y is not a vector or matrix, it is considered as a one-dimensional vector. All types are allowed for x and y, but the sizes must be compatible. Note that matrices are concatenated horizontally, i.e. the number of rows stays the same. Using transpositions, one can concatenate them vertically, but it is often simpler to use matconcat.

  ? x = matid(2); y = 2*matid(2);
  ? concat(x,y)
  %2 =
  [1 0 2 0]
  
  [0 1 0 2]
  ? concat(x~,y~)~
  %3 =
  [1 0]
  
  [0 1]
  
  [2 0]
  
  [0 2]
  ? matconcat([x;y])
  %4 =
  [1 0]
  
  [0 1]
  
  [2 0]
  
  [0 2]

To concatenate vectors sideways (i.e. to obtain a two-row or two-column matrix), use Mat instead, or matconcat:

  ? x = [1,2];
  ? y = [3,4];
  ? concat(x,y)
  %3 = [1, 2, 3, 4]
  
  ? Mat([x,y]~)
  %4 =
  [1 2]
  
  [3 4]
  ? matconcat([x;y])
  %5 =
  [1 2]
  
  [3 4]

Concatenating a row vector to a matrix having the same number of columns will add the row to the matrix (top row if the vector is x, i.e. comes first, and bottom row otherwise).

The empty matrix [;] is considered to have a number of rows compatible with any operation, in particular concatenation. (Note that this is not the case for empty vectors [ ] or [ ]~.)

If y is omitted, x has to be a row vector or a list, in which case its elements are concatenated, from left to right, using the above rules.

  ? concat([1,2], [3,4])
  %1 = [1, 2, 3, 4]
  ? a = [[1,2]~, [3,4]~]; concat(a)
  %2 =
  [1 3]
  
  [2 4]
  
  ? concat([1,2; 3,4], [5,6]~)
  %3 =
  [1 2 5]
  
  [3 4 6]
  ? concat([%, [7,8]~, [1,2,3,4]])
  %5 =
  [1 2 5 7]
  
  [3 4 6 8]
  
  [1 2 3 4]

The library syntax is GEN concat(GEN x, GEN y = NULL). GEN concat1(GEN x) is a shortcut for concat(x,NULL).




forqfvec(v, q, b, expr)

q being a square and symmetric matrix representing a positive definite quadratic form, evaluate expr for all vectors v such that q(v) ≤ b. The formal variable v runs through all such vectors in turn.

  ? forqfvec(v, [3,2;2,3], 3, print(v))
  [0, 1]~
  [1, 0]~
  [-1, 1]~

The library syntax is void forqfvec0(GEN v, GEN q = NULL, GEN b). The following function is also available: void forqfvec(void *E, long (*fun)(void *, GEN, double), GEN q, GEN b): Evaluate fun(E,v,m) on all v such that q(v) < b, where v is a t_VECSMALL and m = q(v) is a C double. The function fun must return 0, unless forqfvec should stop, in which case, it should return 1.




lindep(v, {flag = 0})

finds a small non-trivial integral linear combination between components of v. If none can be found, returns an empty vector.

If v is a vector with real/complex entries we use a floating point (variable precision) LLL algorithm. If flag = 0 the accuracy is chosen internally using a crude heuristic. If flag > 0 the computation is done with an accuracy of flag decimal digits. To get meaningful results in the latter case, the parameter flag should be smaller than the number of correct decimal digits in the input.

  ? lindep([sqrt(2), sqrt(3), sqrt(2)+sqrt(3)])
  %1 = [-1, -1, 1]~

If v is p-adic, flag is ignored and the algorithm LLL-reduces a suitable (dual) lattice.

  ? lindep([1, 2 + 3 + 3^2 + 3^3 + 3^4 + O(3^5)])
  %2 = [1, -2]~

If v is a matrix, flag is ignored and the function returns a non-trivial kernel vector (i.e. a linear dependency between the columns).

  ? lindep([1,2,3;4,5,6;7,8,9])
  %3 = [1, -2, 1]~

If v contains polynomials or power series over some base field, finds a linear relation with coefficients in the field.

  ? lindep([x*y, x^2 + y, x^2*y + x*y^2, 1])
  %4 = [y, y, -1, -y^2]~

For better control, it is preferable to use t_POL rather than t_SER in the input, otherwise one gets a linear combination which is t-adically small, but not necessarily 0. Indeed, power series are first converted to the minimal absolute accuracy occurring among the entries of v (which can cause some coefficients to be ignored), then truncated to polynomials:

  ? v = [t^2+O(t^4), 1+O(t^2)]; L=lindep(v)
  %1 = [1, 0]~
  ? v*L
  %2 = t^2+O(t^4)  \\ small but not 0

The library syntax is GEN lindep0(GEN v, long flag). Also available are GEN lindep(GEN v) (real/complex entries, flag = 0), GEN lindep2(GEN v, long flag) (real/complex entries) GEN padic_lindep(GEN v) (p-adic entries) and GEN Xadic_lindep(GEN v) (polynomial entries). Finally GEN deplin(GEN v) returns a non-zero kernel vector for a t_MAT input.




listcreate()

Creates an empty list. This routine used to have a mandatory argument, which is now ignored (for backward compatibility). In fact, this function has become redundant and obsolete; it will disappear in future versions of PARI: just use List().




listinsert(L, x, n)

Inserts the object x at position n in L (which must be of type t_LIST). This has complexity O(#L - n + 1): all the remaining elements of the list (from position n+1 onwards) are shifted to the right.
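
A small illustration (history labels assume a fresh session):

  ? L = List([1,2,4]);
  ? listinsert(L, 3, 3); L
  %2 = List([1, 2, 3, 4])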

The library syntax is GEN listinsert(GEN L, GEN x, long n).




listkill(L)

Obsolete, retained for backward compatibility. Just use L = List() instead of listkill(L). In most cases, you won't even need that, e.g. local variables are automatically cleared when a user function returns.

The library syntax is void listkill(GEN L).




listpop(list, {n})

Removes the n-th element of the list list (which must be of type t_LIST). If n is omitted, or greater than the current length of the list, removes the last element. If the list is already empty, nothing is done. This runs in time O(#L - n + 1).
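
For example:

  ? L = List([1,2,3,4]);
  ? listpop(L); L       \\ remove the last element
  %2 = List([1, 2, 3])
  ? listpop(L, 1); L    \\ remove the first element
  %3 = List([2, 3])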

The library syntax is void listpop(GEN list, long n).




listput(list, x, {n})

Sets the n-th element of the list list (which must be of type t_LIST) equal to x. If n is omitted, or greater than the list length, appends x. You may put an element into an occupied cell (not changing the list length), but it is easier to use the standard list[n] = x construct. This runs in time O(#L) in the worst case (when the list must be reallocated), but in time O(1) on average: any number of successive listputs run in time O(#L), where #L denotes the final length of the list.

The library syntax is GEN listput(GEN list, GEN x, long n).




listsort(L, {flag = 0})

Sorts the t_LIST list in place, with respect to the (somewhat arbitrary) universal comparison function cmp. In particular, the ordering is the same as for sets and setsearch can be used on a sorted list.

  ? L = List([1,2,4,1,3,-1]); listsort(L); L
  %1 = List([-1, 1, 1, 2, 3, 4])
  ? setsearch(L, 4)
  %2 = 6
  ? setsearch(L, -2)
  %3 = 0

This is faster than the vecsort command since the list is sorted in place: no copy is made. No value returned.

If flag is non-zero, suppresses all repeated coefficients.

The library syntax is void listsort(GEN L, long flag).




matadjoint(M, {flag = 0})

adjoint matrix of M, i.e. a matrix N of cofactors of M, satisfying M*N = det(M)*Id. M must be a (not necessarily invertible) square matrix of dimension n. If flag is 0 or omitted, we try to use Leverrier-Faddeev's algorithm, which assumes that n! is invertible. If it fails or flag = 1, compute T = charpoly(M) independently first and return (-1)^{n-1} (T(x)-T(0))/x evaluated at M.

  ? a = [1,2,3;3,4,5;6,7,8] * Mod(1,4);
  ? matadjoint(a, 1)  \\ safe algorithm: 3! = 6 is not invertible mod 4
  %2 =
  [Mod(1, 4) Mod(1, 4) Mod(2, 4)]
  
  [Mod(2, 4) Mod(2, 4) Mod(0, 4)]
  
  [Mod(1, 4) Mod(1, 4) Mod(2, 4)]

Both algorithms use O(n^4) operations in the base ring, and are usually slower than computing the characteristic polynomial or the inverse of M directly.

The library syntax is GEN matadjoint0(GEN M, long flag). Also available are GEN adj(GEN x) (flag = 0) and GEN adjsafe(GEN x) (flag = 1).




matcompanion(x)

The left companion matrix to the non-zero polynomial x.
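
For instance (a sketch; whatever the exact convention for the companion matrix, its characteristic polynomial is the input polynomial, assumed monic here):

  ? M = matcompanion(x^3 + 2*x + 1);
  ? charpoly(M)
  %2 = x^3 + 2*x + 1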

The library syntax is GEN matcompanion(GEN x).




matconcat(v)

Returns a t_MAT built from the entries of v, which may be a t_VEC (concatenate horizontally), a t_COL (concatenate vertically), or a t_MAT (concatenate vertically each column, then concatenate horizontally the resulting matrices). The entries of v are always considered as matrices: they can themselves be a t_VEC (seen as a row matrix), a t_COL (seen as a column matrix), a t_MAT, or a scalar (seen as a 1 x 1 matrix).

  ? A=[1,2;3,4]; B=[5,6]~; C=[7,8]; D=9;
  ? matconcat([A, B]) \\ horizontal
  %1 =
  [1 2 5]
  
  [3 4 6]
  ? matconcat([A, C]~) \\ vertical
  %2 =
  [1 2]
  
  [3 4]
  
  [7 8]
  ? matconcat([A, B; C, D]) \\ block matrix
  %3 =
  [1 2 5]
  
  [3 4 6]
  
  [7 8 9]

If the dimensions of the entries to concatenate do not match up, the above rules are extended as follows:

* each entry v_{i,j} of v has a natural length and height: 1 x 1 for a scalar, 1 x n for a t_VEC of length n, n x 1 for a t_COL, m x n for an m x n t_MAT

* let H_i be the maximum over j of the heights of the v_{i,j}, and L_j the maximum over i of the lengths of the v_{i,j}. The dimensions of the (i,j)-th block in the concatenated matrix are H_i x L_j.

* a scalar s = v_{i,j} is considered as s times an identity matrix of the block dimension min(H_i, L_j)

* blocks are extended by 0 columns on the right and 0 rows at the bottom, as needed.

  ? matconcat([1, [2,3]~, [4,5,6]~]) \\ horizontal
  %4 =
  [1 2 4]
  
  [0 3 5]
  
  [0 0 6]
  ? matconcat([1, [2,3], [4,5,6]]~) \\ vertical
  %5 =
  [1 0 0]
  
  [2 3 0]
  
  [4 5 6]
  ? matconcat([B, C; A, D]) \\ block matrix
  %6 =
  [5 0 7 8]
  
  [6 0 0 0]
  
  [1 2 9 0]
  
  [3 4 0 9]
  ? U=[1,2;3,4]; V=[1,2,3;4,5,6;7,8,9];
  ? matconcat(matdiagonal([U, V])) \\ block diagonal
  %7 =
  [1 2 0 0 0]
  
  [3 4 0 0 0]
  
  [0 0 1 2 3]
  
  [0 0 4 5 6]
  
  [0 0 7 8 9]

The library syntax is GEN matconcat(GEN v).




matdet(x, {flag = 0})

Determinant of the square matrix x.

If flag = 0, uses an appropriate algorithm depending on the coefficients:

* integer entries: modular method due to Dixon, Pernet and Stein.

* real or p-adic entries: classical Gaussian elimination using maximal pivot.

* intmod entries: classical Gaussian elimination using first non-zero pivot.

* other cases: Gauss-Bareiss.

If flag = 1, uses classical Gaussian elimination with appropriate pivoting strategy (maximal pivot for real or p-adic coefficients). This is usually worse than the default.

The library syntax is GEN det0(GEN x, long flag). Also available are GEN det(GEN x) (flag = 0), GEN det2(GEN x) (flag = 1) and GEN ZM_det(GEN x) for integer entries.




matdetint(B)

Let B be an m x n matrix with integer coefficients, and let D be the determinant of the lattice generated by the columns of B, with the convention that D = 0 when this lattice does not have full rank m.

This function uses the Gauss-Bareiss algorithm to compute a positive multiple of D. When B is square, the function actually returns D = |det B|.

This function is useful in conjunction with mathnfmod, which needs to know such a multiple. If the rank is maximal and the matrix non-square, you can obtain D exactly using

    matdet( mathnfmod(B, matdetint(B)) )
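
For instance, with a (hypothetical) 2 x 3 matrix of maximal rank whose columns generate a lattice of determinant 2:

  ? B = [2,0,1; 0,2,1];   \\ columns (2,0), (0,2), (1,1) generate a lattice of determinant 2
  ? matdet( mathnfmod(B, matdetint(B)) )
  %2 = 2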

Note that as soon as one of the dimensions gets large (m or n is larger than 20, say), it will often be much faster to use mathnf(B, 1) or mathnf(B, 4) directly.

The library syntax is GEN detint(GEN B).




matdiagonal(x)

x being a vector, creates the diagonal matrix whose diagonal entries are those of x.

  ? matdiagonal([1,2,3])
  %1 =
  [1 0 0]
  
  [0 2 0]
  
  [0 0 3]

Block diagonal matrices are easily created using matconcat:

  ? U=[1,2;3,4]; V=[1,2,3;4,5,6;7,8,9];
  ? matconcat(matdiagonal([U, V]))
  %1 =
  [1 2 0 0 0]
  
  [3 4 0 0 0]
  
  [0 0 1 2 3]
  
  [0 0 4 5 6]
  
  [0 0 7 8 9]

The library syntax is GEN diagonal(GEN x).




mateigen(x, {flag = 0})

Returns the (complex) eigenvectors of x as columns of a matrix. If flag = 1, return [L,H], where L contains the eigenvalues and H the corresponding eigenvectors; multiple eigenvalues are repeated according to the eigenspace dimension (which may be less than the eigenvalue multiplicity in the characteristic polynomial).

This function first computes the characteristic polynomial of x and approximates its complex roots (λ_i), then tries to compute the eigenspaces as kernels of the x - λ_i. This algorithm is ill-conditioned and is likely to miss kernel vectors if some roots of the characteristic polynomial are close, in particular if it has multiple roots.

  ? A = [13,2; 10,14]; mateigen(A)
  %1 =
  [-1/2 2/5]
  
  [   1   1]
  ? [L,H] = mateigen(A, 1);
  ? L
  %3 = [9, 18]
  ? H
  %4 =
  [-1/2 2/5]
  
  [   1   1]

For symmetric matrices, use qfjacobi instead; for Hermitian matrices, compute

   A = real(x);
   B = imag(x);
   y = matconcat([A, -B; B, A]);

and apply qfjacobi to y.

The library syntax is GEN mateigen(GEN x, long flag, long prec). Also available is GEN eigen(GEN x, long prec) (flag = 0)




matfrobenius(M, {flag}, {v = 'x})

Returns the Frobenius form of the square matrix M. If flag = 1, returns only the elementary divisors as a vector of polynomials in the variable v. If flag = 2, returns a two-component vector [F,B] where F is the Frobenius form and B is the basis change so that M = B^{-1}FB.

The library syntax is GEN matfrobenius(GEN M, long flag, long v = -1), where v is a variable number.




mathess(x)

Returns a matrix similar to the square matrix x, which is in upper Hessenberg form (zero entries below the first subdiagonal).

The library syntax is GEN hess(GEN x).




mathilbert(n)

n being a long, creates the Hilbert matrix of order n, i.e. the matrix whose coefficient (i,j) is 1/(i+j-1).

The library syntax is GEN mathilbert(long n).




mathnf(M, {flag = 0})

Let R be a Euclidean ring, equal to ℤ or to K[X] for some field K. If M is a (not necessarily square) matrix with entries in R, this routine finds the upper triangular Hermite normal form of M. If the rank of M is equal to its number of rows, this is a square matrix. In general, the columns of the result form a basis of the R-module spanned by the columns of M.

The values 0,1,2,3 of flag have a binary meaning, analogous to the one in matsnf; in this case, binary digits of flag mean:

* 1 (complete output): if set, outputs [H,U], where H is the Hermite normal form of M, and U is a transformation matrix such that MU = [0|H]. The matrix U belongs to GL(R). When M has a large kernel, the entries of U are in general huge.

* 2 (generic input): Deprecated. If set, assume that R = K[X] is a polynomial ring; otherwise, assume that R = ℤ. This flag is now useless since the routine always checks whether the matrix has integral entries.

For these 4 values, we use a naive algorithm, which behaves well in small dimension only. Larger values correspond to different algorithms, are restricted to integer matrices, and all output the unimodular matrix U. From now on all matrices have integral entries.

If flag = 4, returns [H,U] as in "complete output" above, using a variant of LLL reduction along the way. The matrix U is provably small in the L_2 sense, and in general close to optimal; but the reduction is in general slow, although provably polynomial-time.

If flag = 5, uses Batut's algorithm and outputs [H,U,P], such that H and U are as before and P is a permutation of the rows such that P applied to MU gives H. This is in general faster than flag = 4 but the matrix U is usually worse; it is heuristically smaller than with the default algorithm.

When the matrix is dense and the dimension is large (bigger than 100, say), flag = 4 will be fastest. When M has maximal rank, then

    H = mathnfmod(M, matdetint(M))

will be even faster. You can then recover U as M^{-1}H.

  ? M = matrix(3,4,i,j,random([-5,5]))
  %1 =
  [ 0 2  3  0]
  
  [-5 3 -5 -5]
  
  [ 4 3 -5  4]
  
  ? [H,U] = mathnf(M, 1);
  ? U
  %3 =
  [-1 0 -1 0]
  
  [ 0 5  3 2]
  
  [ 0 3  1 1]
  
  [ 1 0  0 0]
  
  ? H
  %5 =
  [19 9 7]
  
  [ 0 9 1]
  
  [ 0 0 1]
  
  ? M*U
  %6 =
  [0 19 9 7]
  
  [0  0 9 1]
  
  [0  0 0 1]

For convenience, M is allowed to be a t_VEC, which is then automatically converted to a t_MAT, as per the Mat function. For instance to solve the generalized extended gcd problem, one may use

  ? v = [116085838, 181081878, 314252913,10346840];
  ? [H,U] = mathnf(v, 1);
  ? U
  %2 =
  [ 103 -603    15  -88]
  
  [-146   13 -1208  352]
  
  [  58  220   678 -167]
  
  [-362 -144   381 -101]
  ? v*U
  %3 = [0, 0, 0, 1]

This also allows one to input a matrix as a t_VEC of t_COLs of the same length (which Mat would concatenate to the t_MAT having those columns):

  ? v = [[1,0,4]~, [3,3,4]~, [0,-4,-5]~]; mathnf(v)
  %1 =
  [47 32 12]
  
  [ 0  1  0]
  
  [ 0  0  1]

The library syntax is GEN mathnf0(GEN M, long flag). Also available are GEN hnf(GEN M) (flag = 0) and GEN hnfall(GEN M) (flag = 1). To reduce huge relation matrices (sparse with small entries, say dimension 400 or more), you can use the pair hnfspec / hnfadd. Since this is quite technical and the calling interface may change, they are not documented yet. Look at the code in basemath/hnf_snf.c.




mathnfmod(x, d)

If x is a (not necessarily square) matrix of maximal rank with integer entries, and d is a multiple of the (non-zero) determinant of the lattice spanned by the columns of x, finds the upper triangular Hermite normal form of x.

If the rank of x is equal to its number of rows, the result is a square matrix. In general, the columns of the result form a basis of the lattice spanned by the columns of x. When d is known, this is in general much faster than mathnf and uses much less memory.

The library syntax is GEN hnfmod(GEN x, GEN d).




mathnfmodid(x, d)

Outputs the (upper triangular) Hermite normal form of x concatenated with the diagonal matrix with diagonal d. Assumes that x has integer entries. Variant: if d is an integer instead of a vector, concatenate d times the identity matrix.

  ? m=[0,7;-1,0;-1,-1]
  %1 =
  [ 0  7]
  
  [-1  0]
  
  [-1 -1]
  ? mathnfmodid(m, [6,2,2])
  %2 =
  [2 1 1]
  
  [0 1 0]
  
  [0 0 1]
  ? mathnfmodid(m, 10)
  %3 =
  [10 7 3]
  
  [ 0 1 0]
  
  [ 0 0 1]

The library syntax is GEN hnfmodid(GEN x, GEN d).




mathouseholder(Q, v)

applies a sequence Q of Householder transforms, as returned by matqr(M,1), to the vector or matrix v.

The library syntax is GEN mathouseholder(GEN Q, GEN v).




matid(n)

Creates the n x n identity matrix.

The library syntax is GEN matid(long n).




matimage(x, {flag = 0})

Gives a basis for the image of the matrix x as columns of a matrix. A priori the matrix can have entries of any type. If flag = 0, use standard Gauss pivot. If flag = 1, use matsupplement (much slower: keep the default flag!).

The library syntax is GEN matimage0(GEN x, long flag). Also available is GEN image(GEN x) (flag = 0).




matimagecompl(x)

Gives the vector of the column indices which are not extracted by the function matimage, as a permutation (t_VECSMALL). Hence the number of components of matimagecompl(x) plus the number of columns of matimage(x) is equal to the number of columns of the matrix x.

The library syntax is GEN imagecompl(GEN x).




matindexrank(x)

x being a matrix of rank r, returns a vector with two t_VECSMALL components y and z of length r giving a list of rows and columns respectively (starting from 1) such that the extracted matrix obtained from these two vectors using vecextract(x,y,z) is invertible.

The library syntax is GEN indexrank(GEN x).




matintersect(x, y)

x and y being two matrices with the same number of rows each of whose columns are independent, finds a basis of the ℚ-vector space equal to the intersection of the spaces spanned by the columns of x and y respectively. The faster function idealintersect can be used to intersect fractional ideals (projective ℤ_K modules of rank 1); the slower but much more general function nfhnf can be used to intersect general ℤ_K-modules.

The library syntax is GEN intersect(GEN x, GEN y).




matinverseimage(x, y)

Given a matrix x and a column vector or matrix y, returns a preimage z of y by x if one exists (i.e. such that x z = y), an empty vector or matrix otherwise. The complete inverse image is z + Ker x, where a basis of the kernel of x may be obtained by matker.

  ? M = [1,2;2,4];
  ? matinverseimage(M, [1,2]~)
  %2 = [1, 0]~
  ? matinverseimage(M, [3,4]~)
  %3 = []~    \\  no solution
  ? matinverseimage(M, [1,3,6;2,6,12])
  %4 =
  [1 3 6]
  
  [0 0 0]
  ? matinverseimage(M, [1,2;3,4])
  %5 = [;]    \\  no solution
  ? K = matker(M)
  %6 =
  [-2]
  
  [1]

The library syntax is GEN inverseimage(GEN x, GEN y).




matisdiagonal(x)

Returns true (1) if x is a diagonal matrix, false (0) if not.

The library syntax is GEN isdiagonal(GEN x).




matker(x, {flag = 0})

Gives a basis for the kernel of the matrix x as columns of a matrix. The matrix can have entries of any type, provided they are compatible with the generic arithmetic operations (+, × and /).

If x is known to have integral entries, set flag = 1.

The library syntax is GEN matker0(GEN x, long flag). Also available are GEN ker(GEN x) (flag = 0), GEN keri(GEN x) (flag = 1).




matkerint(x, {flag = 0})

Gives an LLL-reduced ℤ-basis for the lattice equal to the kernel of the matrix x, as columns of a matrix. The matrix x must have integer entries (rational entries are not permitted).

If flag = 0, uses an integer LLL algorithm.

If flag = 1, uses matrixqz(x,-2). Many orders of magnitude slower than the default: never use this.

The library syntax is GEN matkerint0(GEN x, long flag). See also GEN kerint(GEN x) (flag = 0), which is a trivial wrapper around

  ZM_lll(ZM_lll(x, 0.99, LLL_KER), 0.99, LLL_INPLACE);

Remove the outermost ZM_lll if LLL-reduction is not desired (saves time).




matmuldiagonal(x, d)

Product of the matrix x by the diagonal matrix whose diagonal entries are those of the vector d. Equivalent to, but much faster than x*matdiagonal(d).

The library syntax is GEN matmuldiagonal(GEN x, GEN d).




matmultodiagonal(x, y)

Product of the matrices x and y assuming that the result is a diagonal matrix. Much faster than x*y in that case. The result is undefined if x*y is not diagonal.

The library syntax is GEN matmultodiagonal(GEN x, GEN y).




matpascal(n, {q})

Creates as a matrix the lower triangular Pascal triangle of order n+1 (i.e. with binomial coefficients up to n). If q is given, compute the q-Pascal triangle (i.e. using q-binomial coefficients).
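
For example (a small sketch; the entries are the binomial coefficients binomial(i,j) for 0 ≤ i,j ≤ 3):

  ? matpascal(3)
  %1 =
  [1 0 0 0]
  
  [1 1 0 0]
  
  [1 2 1 0]
  
  [1 3 3 1]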

The library syntax is GEN matqpascal(long n, GEN q = NULL). Also available is GEN matpascal(GEN x).




matqr(M, {flag = 0})

Returns [Q,R], the QR-decomposition of the square invertible matrix M with real entries: Q is orthogonal and R upper triangular. If flag = 1, the orthogonal matrix is returned as a sequence of Householder transforms: applying such a sequence is stabler and faster than multiplication by the corresponding Q matrix. More precisely, if

    [Q,R] = matqr(M);
    [q,r] = matqr(M, 1);

then r = R and mathouseholder(q, M) is R; furthermore

    mathouseholder(q, matid(#M)) == Q~

the inverse of Q. This function raises an error if the precision is too low or M is singular.

The library syntax is GEN matqr(GEN M, long flag, long prec).




matrank(x)

Rank of the matrix x.

The library syntax is long rank(GEN x).




matrix(m, n, {X}, {Y}, {expr = 0})

Creation of the m x n matrix whose coefficients are given by the expression expr. There are two formal parameters in expr, the first one (X) corresponding to the rows, the second (Y) to the columns, and X goes from 1 to m, Y goes from 1 to n. If one of the last 3 parameters is omitted, fill the matrix with zeroes.
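
For example, a 2 x 3 matrix with (i,j) entry i+j, and the default zero filling:

  ? matrix(2,3, i,j, i+j)
  %1 =
  [2 3 4]
  
  [3 4 5]
  ? matrix(2,2)
  %2 =
  [0 0]
  
  [0 0]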




matrixqz(A, {p = 0})

A being an m x n matrix in M_{m,n}(ℚ), let Im_ℚ A (resp. Im_ℤ A) be the ℚ-vector space (resp. the ℤ-module) spanned by the columns of A. This function has varying behavior depending on the sign of p:

If p ≥ 0, A is assumed to have maximal rank n ≤ m. The function returns a matrix B in M_{m,n}(ℤ), with Im_ℚ B = Im_ℚ A, such that the GCD of all its n x n minors is coprime to p; in particular, if p = 0 (default), this GCD is 1.

  ? minors(x) = vector(#x[,1], i, matdet(x[^i,]));
  ? A = [3,1/7; 5,3/7; 7,5/7]; minors(A)
  %1 = [4/7, 8/7, 4/7]   \\ determinants of all 2x2 minors
  ? B = matrixqz(A)
  %2 =
  [3 1]
  
  [5 2]
  
  [7 3]
  ? minors(%)
  %3 = [1, 2, 1]   \\ B integral with coprime minors

If p = -1, returns the HNF basis of the lattice ℤ^m ∩ Im_ℤ A.

If p = -2, returns the HNF basis of the lattice ℤ^m ∩ Im_ℚ A.

  ? matrixqz(A,-1)
  %4 =
  [8 5]
  
  [4 3]
  
  [0 1]
  
  ? matrixqz(A,-2)
  %5 =
  [2 -1]
  
  [1 0]
  
  [0 1]

The library syntax is GEN matrixqz0(GEN A, GEN p = NULL).




matsize(x)

x being a vector or matrix, returns a row vector with two components, the first being the number of rows (1 for a row vector), the second the number of columns (1 for a column vector).

The library syntax is GEN matsize(GEN x).




matsnf(X, {flag = 0})

If X is a (singular or non-singular) matrix, outputs the vector of elementary divisors of X, i.e. the diagonal of the Smith normal form of X, normalized so that d_n | d_{n-1} | ... | d_1.

The binary digits of flag mean:

1 (complete output): if set, outputs [U,V,D], where U and V are two unimodular matrices such that UXV is the diagonal matrix D. Otherwise output only the diagonal of D. If X is not a square matrix, then D will be a square diagonal matrix padded with zeros on the left or the top.

2 (generic input): if set, allows polynomial entries, in which case the input matrix must be square. Otherwise, assume that X has integer entries (the matrix may have arbitrary shape).

4 (cleanup): if set, cleans up the output. This means that elementary divisors equal to 1 will be deleted, i.e. outputs a shortened vector D' instead of D. If complete output was required, returns [U',V',D'] so that U'XV' = D' holds. If this flag is set, X is allowed to be of the form `vector of elementary divisors' or [U,V,D] as would normally be output with the cleanup flag unset.
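
For example (a small sketch):

  ? matsnf([1,2; 3,4])
  %1 = [2, 1]
  ? matsnf([1,2; 3,4], 4)   \\ cleanup: remove elementary divisors equal to 1
  %2 = [2]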

The library syntax is GEN matsnf0(GEN X, long flag).




matsolve(M, B)

M being an invertible matrix and B a column vector, finds the solution X of MX = B, using Dixon p-adic lifting method if M and B are integral and Gaussian elimination otherwise. This has the same effect as, but is faster than, M^{-1}*B.
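
For example:

  ? M = [1,2; 3,4];
  ? matsolve(M, [5,6]~)
  %2 = [-4, 9/2]~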

The library syntax is GEN gauss(GEN M, GEN B). For integral input, the function GEN ZM_gauss(GEN M,GEN B) is also available.




matsolvemod(M, D, B, {flag = 0})

M being any integral matrix, D a column vector of non-negative integer moduli, and B an integral column vector, gives a small integer solution to the system of congruences ∑_j m_{i,j}x_j ≡ b_i (mod d_i) if one exists, otherwise returns zero. Shorthand notation: B (resp. D) can be given as a single integer, in which case all the b_i (resp. d_i) above are taken to be equal to B (resp. D).

  ? M = [1,2;3,4];
  ? matsolvemod(M, [3,4]~, [1,2]~)
  %2 = [-2, 0]~
  ? matsolvemod(M, 3, 1) \\ M X = [1,1]~ over F_3
  %3 = [-1, 1]~
  ? matsolvemod(M, [3,0]~, [1,2]~) \\ x + 2y = 1 (mod 3), 3x + 4y = 2 (in Z)
  %4 = [6, -4]~

If flag = 1, all solutions are returned in the form of a two-component row vector [x,u], where x is a small integer solution to the system of congruences and u is a matrix whose columns give a basis of the homogeneous system (so that all solutions can be obtained by adding x to any linear combination of columns of u). If no solution exists, returns zero.

The library syntax is GEN matsolvemod0(GEN M, GEN D, GEN B, long flag). Also available are GEN gaussmodulo(GEN M, GEN D, GEN B) (flag = 0) and GEN gaussmodulo2(GEN M, GEN D, GEN B) (flag = 1).




matsupplement(x)

Assuming that the columns of the matrix x are linearly independent (if they are not, an error message is issued), finds a square invertible matrix whose first columns are the columns of x, i.e. supplements the columns of x to a basis of the whole space.

  ? matsupplement([1;2])
  %1 =
  [1 0]
  
  [2 1]

Raises an error if x has 0 columns, since (due to a long standing design bug), the dimension of the ambient space (the number of rows) is unknown in this case:

  ? matsupplement(matrix(2,0))
    ***   at top-level: matsupplement(matrix
    ***                 ^--------------------
    *** matsupplement: sorry, suppl [empty matrix] is not yet implemented.

The library syntax is GEN suppl(GEN x).




mattranspose(x)

Transpose of x (also x~). This has an effect only on vectors and matrices.

The library syntax is GEN gtrans(GEN x).




minpoly(A, {v = 'x})

minimal polynomial of A with respect to the variable v, i.e. the monic polynomial P of minimal degree (in the variable v) such that P(A) = 0.
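
For example:

  ? minpoly([1,1; 0,1])   \\ a non-diagonalizable matrix
  %1 = x^2 - 2*x + 1
  ? minpoly(matid(3))     \\ compare with charpoly: (x-1)^3
  %2 = x - 1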

The library syntax is GEN minpoly(GEN A, long v = -1), where v is a variable number.




norml2(x)

Square of the L^2-norm of x. More precisely, if x is a scalar, norml2(x) is defined to be the square of the complex modulus of x (real t_QUADs are not supported). If x is a polynomial, a (row or column) vector or a matrix, norml2(x) is defined recursively as ∑_i norml2(x_i), where (x_i) run through the components of x. In particular, this yields the usual ∑ |x_i|^2 (resp. ∑ |x_{i,j}|^2) if x is a polynomial or vector (resp. matrix) with complex components.

  ? norml2( [ 1, 2, 3 ] )      \\ vector
  %1 = 14
  ? norml2( [ 1, 2; 3, 4] )   \\ matrix
  %2 = 30
  ? norml2( 2*I + x )
  %3 = 5
  ? norml2( [ [1,2], [3,4], 5, 6 ] )   \\ recursively defined
  %4 = 91

The library syntax is GEN gnorml2(GEN x).




normlp(x, {p})

L^p-norm of x; sup norm if p is omitted. More precisely, if x is a scalar, normlp(x, p) is defined to be abs(x). If x is a polynomial, a (row or column) vector or a matrix:

* if p is omitted, normlp(x) is defined recursively as max_i normlp(x_i), where (x_i) run through the components of x. In particular, this yields the usual sup norm if x is a polynomial or vector with complex components.

* otherwise, normlp(x, p) is defined recursively as (∑_i normlp^p(x_i,p))^{1/p}. In particular, this yields the usual (∑ |x_i|^p)^{1/p} if x is a polynomial or vector with complex components.

  ? v = [1,-2,3]; normlp(v)      \\ vector
  %1 = 3
  ? M = [1,-2;-3,4]; normlp(M)   \\ matrix
  %2 = 4
  ? T = (1+I) + I*x^2; normlp(T)
  %3 = 1.4142135623730950488016887242096980786
  ? normlp([[1,2], [3,4], 5, 6])   \\ recursively defined
  %4 = 6
  
  ? normlp(v, 1)
  %5 = 6
  ? normlp(M, 1)
  %6 = 10
  ? normlp(T, 1)
  %7 = 2.4142135623730950488016887242096980786

The library syntax is GEN gnormlp(GEN x, GEN p = NULL, long prec).




qfauto(G, {fl})

G being a square and symmetric matrix with integer entries representing a positive definite quadratic form, outputs the automorphism group of the associated lattice. Since this requires computing the minimal vectors, the computations can become very lengthy as the dimension grows. G can also be given by a qfisominit structure. See qfisominit for the meaning of fl.

The output is a two-component vector [o,g] where o is the group order and g is the list of generators (as a vector). For each generator H, the equality G = ^tH G H holds.

The interface of this function is experimental and will likely change in the future.

This function implements an algorithm of Plesken and Souvignier, following Souvignier's implementation.

The library syntax is GEN qfauto0(GEN G, GEN fl = NULL). Also available is GEN qfauto(GEN G, GEN fl) where G is a vector of zm.




qfautoexport(qfa, {flag})

qfa being an automorphism group as output by qfauto, export the underlying matrix group as a string suitable for GAP (flag omitted or flag = 0) or Magma (flag = 1). The following example computes the size of the matrix group using GAP:

  ? G = qfauto([2,1;1,2])
  %1 = [12, [[-1, 0; 0, -1], [0, -1; 1, 1], [1, 1; 0, -1]]]
  ? s = qfautoexport(G)
  %2 = "Group([[-1, 0], [0, -1]], [[0, -1], [1, 1]], [[1, 1], [0, -1]])"
  ? extern("echo \"Order("s");\" | gap -q")
  %3 = 12

The library syntax is GEN qfautoexport(GEN qfa, long flag).




qfbil(x, y, {q})

Evaluate the bilinear form q (symmetric matrix) at the vectors (x,y); if q omitted, use the standard Euclidean scalar product, corresponding to the identity matrix.

Roughly equivalent to x~ * q * y, but a little faster and more convenient (does not distinguish between column and row vectors):

  ? x = [1,2,3]~; y = [-1,0,1]~; qfbil(x,y)
  %1 = 2
  ? q = [1,2,3;2,2,-1;3,-1,0]; qfbil(x,y, q)
  %2 = -13
  ? for(i=1,10^6, qfbil(x,y,q))
  time = 568 ms.
  ? for(i=1,10^6, x~*q*y)
  time = 717 ms.

The associated quadratic form is also available, as qfnorm, slightly faster:

  ? for(i=1,10^6, qfnorm(x,q))
  time = 444ms
  ? for(i=1,10^6, qfnorm(x))
  time = 176 ms.
  ? for(i=1,10^6, qfbil(x,y))
  time = 208 ms.

The library syntax is GEN qfbil(GEN x, GEN y, GEN q = NULL).




qfgaussred(q)

decomposition into squares of the quadratic form represented by the symmetric matrix q. The result is a matrix whose diagonal entries are the coefficients of the squares, and the off-diagonal entries on each line represent the bilinear forms. More precisely, if (a_{ij}) denotes the output, one has q(x) = ∑_i a_{ii} (x_i + ∑_{j != i} a_{ij} x_j)^2

  ? qfgaussred([0,1;1,0])
  %1 =
  [1/2 1]
  
  [-1 -1/2]

This means that 2xy = (1/2)(x+y)^2 - (1/2)(x-y)^2.

The library syntax is GEN qfgaussred(GEN q). GEN qfgaussred_positive(GEN q) assumes that q is positive definite and is a little faster; returns NULL if a vector with negative norm occurs (non positive matrix or too many rounding errors).




qfisom(G, H, {fl})

G, H being square and symmetric matrices with integer entries representing positive definite quadratic forms, return an invertible matrix S such that G = ^tS H S. This defines an isomorphism between the corresponding lattices. Since this requires computing the minimal vectors, the computations can become very lengthy as the dimension grows. See qfisominit for the meaning of fl.

G can also be given by an qfisominit structure which is preferable if several forms H need to be compared to G.

This function implements an algorithm of Plesken and Souvignier, following Souvignier's implementation.
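
A minimal sketch with hypothetical matrices; by construction H is equivalent to G, and whatever isometry S is returned, it satisfies G = S~ * H * S:

  ? G = [2,1; 1,2];
  ? U = [1,1; 0,1]; H = U~ * G * U;   \\ H is equivalent to G by construction
  ? S = qfisom(G, H);
  ? G == S~ * H * S
  %4 = 1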

The library syntax is GEN qfisom0(GEN G, GEN H, GEN fl = NULL). Also available is GEN qfisom(GEN G, GEN H, GEN fl) where G is a vector of zm, and H is a zm.




qfisominit(G, {fl})

G being a square and symmetric matrix with integer entries representing a positive definite quadratic form, return an isom structure that allows computing isomorphisms between G and other quadratic forms faster.

The interface of this function is experimental and will likely change in future releases.

If present, the optional parameter fl must be a t_VEC with two components. It allows one to specify the invariants used, which can make the computation faster or slower. The components are

* fl[1] Depth of scalar product combination to use.

* fl[2] Maximum level of Bacher polynomials to use.

Since this function computes the minimal vectors, it can become very lengthy as the dimension of G grows.

The library syntax is GEN qfisominit0(GEN G, GEN fl = NULL). Also available is GEN qfisominit(GEN F, GEN fl) where F is a vector of zm.




qfjacobi(A)

Apply Jacobi's eigenvalue algorithm to the real symmetric matrix A. This returns [L, V], where

* L is the vector of (real) eigenvalues of A, sorted in increasing order,

* V is the corresponding orthogonal matrix of eigenvectors of A.

  ? \p19
  ? A = [1,2;2,1]; mateigen(A)
  %1 =
  [-1 1]
  
  [ 1 1]
  ? [L, H] = qfjacobi(A);
  ? L
  %3 = [-1.000000000000000000, 3.000000000000000000]~
  ? H
  %4 =
  [ 0.7071067811865475245 0.7071067811865475244]
  
  [-0.7071067811865475244 0.7071067811865475245]
  ? norml2( (A-L[1])*H[,1] )       \\ approximate eigenvector
  %5 = 9.403954806578300064 E-38
  ? norml2(H*H~ - 1)
  %6 = 2.350988701644575016 E-38   \\ close to orthogonal

The library syntax is GEN jacobi(GEN A, long prec).




qflll(x, {flag = 0})

LLL algorithm applied to the columns of the matrix x. The columns of x may be linearly dependent. The result is a unimodular transformation matrix T such that x.T is an LLL-reduced basis of the lattice generated by the column vectors of x. Note that if x is not of maximal rank, T will not be square. The LLL parameters are (0.51,0.99), meaning that the Gram-Schmidt coefficients for the final basis satisfy |μ_{i,j}| ≤ 0.51, and the Lovász constant is 0.99.

If flag = 0 (default), assume that x has either exact (integral or rational) or real floating point entries. The matrix is rescaled, converted to integers and the behavior is then as in flag = 1.

If flag = 1, assume that x is integral. Computations involving Gram-Schmidt vectors are approximate, with precision varying as needed (Lehmer's trick, as generalized by Schnorr). Adapted from Nguyen and Stehlé's algorithm and Stehlé's code (fplll-1.3).

If flag = 2, x should be an integer matrix whose columns are linearly independent. Returns a partially reduced basis for x, using an unpublished algorithm by Peter Montgomery: a basis is said to be partially reduced if |v_i ± v_j| ≥ |v_i| for any two distinct basis vectors v_i, v_j.

This is faster than flag = 1, esp. when one row is huge compared to the other rows (knapsack-style), and should quickly produce relatively short vectors. The resulting basis is not LLL-reduced in general. If LLL reduction is eventually desired, avoid this partial reduction: applying LLL to the partially reduced matrix is significantly slower than starting from a knapsack-type lattice.

If flag = 4, as flag = 1, returning a vector [K, T] of matrices: the columns of K represent a basis of the integer kernel of x (not LLL-reduced in general) and T is the transformation matrix such that x.T is an LLL-reduced ℤ-basis of the image of the matrix x.

If flag = 5, same as case 4, but x may have polynomial coefficients.

If flag = 8, same as case 0, but x may have polynomial coefficients.
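
A small sketch (the exact reduced basis depends on the implementation and may differ by signs or ordering, so no output is shown for the last command):

  ? M = [1,0,0; 0,1,0; 0,0,1; 100,101,102];   \\ knapsack-style lattice basis
  ? T = qflll(M);
  ? matsize(T)    \\ M has maximal rank, so T is square
  %3 = [3, 3]
  ? M * T         \\ columns form an LLL-reduced basis of the column lattice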

The library syntax is GEN qflll0(GEN x, long flag). Also available are GEN lll(GEN x) (flag = 0), GEN lllint(GEN x) (flag = 1), and GEN lllkerim(GEN x) (flag = 4).




qflllgram(G, {flag = 0})

Same as qflll, except that the matrix G = x~ * x is the Gram matrix of some lattice vectors x, and not the coordinates of the vectors themselves. In particular, G must now be a square symmetric real matrix, corresponding to a positive quadratic form (not necessarily definite: x need not have maximal rank). The result is a unimodular transformation matrix T such that x.T is an LLL-reduced basis of the lattice generated by the column vectors of x. See qflll for further details about the LLL implementation.

If flag = 0 (default), assume that G has either exact (integral or rational) or real floating point entries. The matrix is rescaled, converted to integers and the behavior is then as in flag = 1.

If flag = 1, assume that G is integral. Computations involving Gram-Schmidt vectors are approximate, with precision varying as needed (Lehmer's trick, as generalized by Schnorr). Adapted from Nguyen and Stehlé's algorithm and Stehlé's code (fplll-1.3).

flag = 4: G has integer entries, gives the kernel and reduced image of x.

flag = 5: same as 4, but G may have polynomial coefficients.

The library syntax is GEN qflllgram0(GEN G, long flag). Also available are GEN lllgram(GEN G) (flag = 0), GEN lllgramint(GEN G) (flag = 1), and GEN lllgramkerim(GEN G) (flag = 4).




qfminim(x, {b}, {m}, {flag = 0})

x being a square and symmetric matrix representing a positive definite quadratic form, this function deals with the vectors of x whose norm is less than or equal to b, enumerated using the Fincke-Pohst algorithm, storing at most m vectors (no limit if m is omitted). The function searches for the minimal non-zero vectors if b is omitted. The behavior is undefined if x is not positive definite (a "precision too low" error is most likely, although more precise error messages are possible). The precise behavior depends on flag.

If flag = 0 (default), seeks at most 2m vectors. The result is a three-component vector, the first component being the number of vectors found, the second being the maximum norm found, and the last vector is a matrix whose columns are the vectors found, only one being given for each pair ± v (at most m such pairs, unless m was omitted). The vectors are returned in no particular order.

If flag = 1, ignores m and returns [N,v], where v is a non-zero vector of length N ≤ b, or [] if no non-zero vector has length ≤ b. If no explicit b is provided, return a vector of smallish norm (smallest vector in an LLL-reduced basis).

In these two cases, x must have integral entries. The implementation uses low precision floating point computations for maximal speed, which gives incorrect results when x has large entries. (The condition is checked in the code and the routine raises an error if large rounding errors occur.) A more robust, but much slower, implementation is chosen if the following flag is used:

If flag = 2, x can have non-integral real entries. In this case, if b is omitted, the "minimal" vectors only have approximately the same norm, and m is an upper bound for the number of vectors that will be stored and returned (all minimal vectors are nevertheless enumerated). If m is omitted, all vectors found are stored and returned; note that this may be a huge vector!

  ? x = matid(2);
  ? qfminim(x)  \\  4 minimal vectors of norm 1: ±[0,1], ±[1,0]
  %2 = [4, 1, [0, 1; 1, 0]]
  ? { x =
  [4, 2, 0, 0, 0,-2, 0, 0, 0, 0, 0, 0, 1,-1, 0, 0, 0, 1, 0,-1, 0, 0, 0,-2;
   2, 4,-2,-2, 0,-2, 0, 0, 0, 0, 0, 0, 0,-1, 0, 0, 0, 0, 0,-1, 0, 1,-1,-1;
   0,-2, 4, 0,-2, 0, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0, 1, 0, 0, 1,-1,-1, 0, 0;
   0,-2, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 1,-1, 0, 0, 0, 1,-1, 0, 1,-1, 1, 0;
   0, 0,-2, 0, 4, 0, 0, 0, 1,-1, 0, 0, 1, 0, 0, 0,-2, 0, 0,-1, 1, 1, 0, 0;
  -2, -2,0, 0, 0, 4,-2, 0,-1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0,-1, 1, 1;
   0, 0, 0, 0, 0,-2, 4,-2, 0, 0, 0, 0, 0, 1, 0, 0, 0,-1, 0, 0, 0, 1,-1, 0;
   0, 0, 0, 0, 0, 0,-2, 4, 0, 0, 0, 0,-1, 0, 0, 0, 0, 0,-1,-1,-1, 0, 1, 0;
   0, 0, 0, 0, 1,-1, 0, 0, 4, 0,-2, 0, 1, 1, 0,-1, 0, 1, 0, 0, 0, 0, 0, 0;
   0, 0, 0, 0,-1, 0, 0, 0, 0, 4, 0, 0, 1, 1,-1, 1, 0, 0, 0, 1, 0, 0, 1, 0;
   0, 0, 0, 0, 0, 0, 0, 0,-2, 0, 4,-2, 0,-1, 0, 0, 0,-1, 0,-1, 0, 0, 0, 0;
   0, 0, 0, 0, 0, 0, 0, 0, 0, 0,-2, 4,-1, 1, 0, 0,-1, 1, 0, 1, 1, 1,-1, 0;
   1, 0,-1, 1, 1, 0, 0,-1, 1, 1, 0,-1, 4, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1,-1;
  -1,-1, 1,-1, 0, 0, 1, 0, 1, 1,-1, 1, 0, 4, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1;
   0, 0, 0, 0, 0, 0, 0, 0, 0,-1, 0, 0, 0, 1, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0;
   0, 0, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0, 1, 1, 0, 4, 0, 0, 0, 0, 1, 1, 0, 0;
   0, 0, 1, 0,-2, 0, 0, 0, 0, 0, 0,-1, 0, 0, 0, 0, 4, 1, 1, 1, 0, 0, 1, 1;
   1, 0, 0, 1, 0, 0,-1, 0, 1, 0,-1, 1, 1, 0, 0, 0, 1, 4, 0, 1, 1, 0, 1, 0;
   0, 0, 0,-1, 0, 1, 0,-1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 4, 0, 1, 1, 0, 1;
  -1, -1,1, 0,-1, 1, 0,-1, 0, 1,-1, 1, 0, 1, 0, 0, 1, 1, 0, 4, 0, 0, 1, 1;
   0, 0,-1, 1, 1, 0, 0,-1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 4, 1, 0, 1;
   0, 1,-1,-1, 1,-1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 4, 0, 1;
   0,-1, 0, 1, 0, 1,-1, 1, 0, 1, 0,-1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 4, 1;
  -2,-1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 4]; }
  ? qfminim(x,,0)  \\ the Leech lattice has 196560 minimal vectors of norm 4
  time = 648 ms.
  %4 = [196560, 4, [;]]
  ? qfminim(x,,0,2); \\ safe algorithm. Slower and unnecessary here.
  time = 18,161 ms.
  %5 = [196560, 4.000061035156250000, [;]]

In the last example, we store 0 vectors to limit memory use. All minimal vectors are nevertheless enumerated. Provided parisize is about 50MB, qfminim(x) succeeds in 2.5 seconds.

The library syntax is GEN qfminim0(GEN x, GEN b = NULL, GEN m = NULL, long flag, long prec). Also available are GEN minim(GEN x, GEN b = NULL, GEN m = NULL) (flag = 0), GEN minim2(GEN x, GEN b = NULL, GEN m = NULL) (flag = 1). GEN minim_raw(GEN x, GEN b = NULL, GEN m = NULL) (do not perform LLL reduction on x).




qfnorm(x, {q})

Evaluate the quadratic form q (symmetric matrix) at the vector x. If q omitted, use the standard Euclidean form, corresponding to the identity matrix.

Equivalent to x~ * q * x, but about twice faster and more convenient (does not distinguish between column and row vectors):

  ? x = [1,2,3]~; qfnorm(x)
  %1 = 14
  ? q = [1,2,3;2,2,-1;3,-1,0]; qfnorm(x, q)
  %2 = 23
  ? for(i=1,10^6, qfnorm(x,q))
  time = 384ms.
  ? for(i=1,10^6, x~*q*x)
  time = 729ms.

We also allow t_MATs of compatible dimensions for x, and return x~ * q * x in this case as well:

  ? M = [1,2,3;4,5,6;7,8,9]; qfnorm(M) \\ Gram matrix
  %5 =
  [66  78  90]
  
  [78  93 108]
  
  [90 108 126]
  
  ? for(i=1,10^6, qfnorm(M,q))
  time = 2,144 ms.
  ? for(i=1,10^6, M~*q*M)
  time = 2,793 ms.

The polar form is also available, as qfbil.

The library syntax is GEN qfnorm(GEN x, GEN q = NULL).




qfperfection(G)

G being a square and symmetric matrix with integer entries representing a positive definite quadratic form, outputs the perfection rank of the form. That is, gives the rank of the family of the s symmetric matrices v_iv_i^t, where s is half the number of minimal vectors and the v_i (1 ≤ i ≤ s) are the minimal vectors.

Since this requires computing the minimal vectors, the computations can become very lengthy as the dimension of G grows.
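
For instance (a small sketch, per the definition above): the hexagonal form has 3 pairs of minimal vectors and is perfect in dimension 2, while the square form is not:

  ? qfperfection([2,1; 1,2])
  %1 = 3
  ? qfperfection(matid(2))
  %2 = 2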

The library syntax is GEN perf(GEN G).




qfrep(q, B, {flag = 0})

q being a square and symmetric matrix with integer entries representing a positive definite quadratic form, count the vectors representing successive integers.

* If flag = 0, count all vectors. Outputs the vector whose i-th entry, 1 ≤ i ≤ B is half the number of vectors v such that q(v) = i.

* If flag = 1, count vectors of even norm. Outputs the vector whose i-th entry, 1 ≤ i ≤ B is half the number of vectors such that q(v) = 2i.

  ? q = [2, 1; 1, 3];
  ? qfrep(q, 5)
  %2 = Vecsmall([0, 1, 2, 0, 0]) \\ 1 vector of norm 2, 2 of norm 3, etc.
  ? qfrep(q, 5, 1)
  %3 = Vecsmall([1, 0, 0, 1, 0]) \\ 1 vector of norm 2, 0 of norm 4, etc.

This routine uses a naive algorithm based on qfminim, and will fail if any entry becomes larger than 2^{31} (or 2^{63}).

The library syntax is GEN qfrep0(GEN q, GEN B, long flag).




qfsign(x)

Returns [p,m] the signature of the quadratic form represented by the symmetric matrix x. Namely, p (resp. m) is the number of positive (resp. negative) eigenvalues of x. The result is computed using Gaussian reduction.
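
For example:

  ? qfsign(matid(3))
  %1 = [3, 0]
  ? qfsign([0,1; 1,0])   \\ the form 2xy is indefinite
  %2 = [1, 1]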

The library syntax is GEN qfsign(GEN x).




seralgdep(s, p, r)

finds a linear relation between powers (1, s, ..., s^p) of the series s, with polynomial coefficients of degree ≤ r. In case no relation is found, returns 0.

  ? s = 1 + 10*y - 46*y^2 + 460*y^3 - 5658*y^4 + 77740*y^5 + O(y^6);
  ? seralgdep(s, 2, 2)
  %2 = -x^2 + (8*y^2 + 20*y + 1)
  ? subst(%, x, s)
  %3 = O(y^6)
  ? seralgdep(s, 1, 3)
  %4 = (-77*y^2 - 20*y - 1)*x + (310*y^3 + 231*y^2 + 30*y + 1)
  ? seralgdep(s, 1, 2)
  %5 = 0

The main variable of the series must not be x, so that the result can be expressed as a polynomial in x.

The library syntax is GEN seralgdep(GEN s, long p, long r).




setbinop(f, X, {Y})

The set whose elements are the f(x,y), where x, y run through X, Y respectively. If Y is omitted, assume that X = Y and that f is symmetric: f(x,y) = f(y,x) for all x, y in X.

  ? X = [1,2,3]; Y = [2,3,4];
  ? setbinop((x,y)->x+y, X,Y) \\ set X + Y
  %2 = [3, 4, 5, 6, 7]
  ? setbinop((x,y)->x-y, X,Y) \\ set X - Y
  %3 = [-3, -2, -1, 0, 1]
  ? setbinop((x,y)->x+y, X)   \\ set 2X = X + X
  %4 = [2, 3, 4, 5, 6]

The library syntax is GEN setbinop(GEN f, GEN X, GEN Y = NULL).




setintersect(x, y)

Intersection of the two sets x and y (see setisset). If x or y is not a set, the result is undefined.

The library syntax is GEN setintersect(GEN x, GEN y).




setisset(x)

Returns true (1) if x is a set, false (0) if not. In PARI, a set is a row vector whose entries are strictly increasing with respect to a (somewhat arbitrary) universal comparison function. To convert any object into a set (this is most useful for vectors, of course), use the function Set.

  ? a = [3, 1, 1, 2];
  ? setisset(a)
  %2 = 0
  ? Set(a)
  %3 = [1, 2, 3]

The library syntax is long setisset(GEN x).




setminus(x, y)

Difference of the two sets x and y (see setisset), i.e. set of elements of x which do not belong to y. If x or y is not a set, the result is undefined.

The library syntax is GEN setminus(GEN x, GEN y).




setsearch(S, x, {flag = 0})

Determines whether x belongs to the set S (see setisset).

We first describe the default behaviour, when flag is zero or omitted. If x belongs to the set S, returns the index j such that S[j] = x, otherwise returns 0.

  ? T = [7,2,3,5]; S = Set(T);
  ? setsearch(S, 2)
  %2 = 1
  ? setsearch(S, 4)      \\ not found
  %3 = 0
  ? setsearch(T, 7)      \\ search in a randomly sorted vector
  %4 = 0 \\ WRONG !

If S is not a set, we also allow sorted lists with respect to the cmp sorting function, without repeated entries, as per listsort(L,1); otherwise the result is undefined.

  ? L = List([1,4,2,3,2]); setsearch(L, 4)
  %1 = 0 \\ WRONG !
  ? listsort(L, 1); L    \\ sort L first
  %2 = List([1, 2, 3, 4])
  ? setsearch(L, 4)
  %3 = 4                 \\ now correct

If flag is non-zero, this function returns the index j where x should be inserted, and 0 if it already belongs to S. This is meant to be used for dynamically growing (sorted) lists, in conjunction with listinsert.

  ? L = List([1,5,2,3,2]); listsort(L,1); L
  %1 = List([1,2,3,5])
  ? j = setsearch(L, 4, 1)  \\ 4 should have been inserted at index j
  %2 = 4
  ? listinsert(L, 4, j); L
  %3 = List([1, 2, 3, 4, 5])

The library syntax is long setsearch(GEN S, GEN x, long flag).




setunion(x, y)

Union of the two sets x and y (see setisset). If x or y is not a set, the result is undefined.

The library syntax is GEN setunion(GEN x, GEN y).




trace(x)

This applies to quite general x. If x is not a matrix, it is equal to the sum of x and its conjugate, except for polmods where it is the trace as an algebraic number.

For x a square matrix, it is the ordinary trace. If x is a non-square matrix (but not a vector), an error occurs.

The library syntax is GEN gtrace(GEN x).




vecextract(x, y, {z})

Extraction of components of the vector or matrix x according to y. In case x is a matrix, its components are the columns of x. The parameter y is a component specifier, which is either an integer, a string describing a range, or a vector.

If y is an integer, it is considered as a mask: the binary bits of y are read from right to left, but correspond to taking the components from left to right. For example, if y = 13 = (1101)_2 then the components 1,3 and 4 are extracted.

If y is a vector (t_VEC, t_COL or t_VECSMALL), which must have integer entries, these entries correspond to the component numbers to be extracted, in the order specified.

If y is a string, it can be

* a single (non-zero) index giving a component number (a negative index means we start counting from the end).

* a range of the form "a..b", where a and b are indexes as above. Any of a and b can be omitted; in this case, we take as default values a = 1 and b = -1, i.e. the first and last components respectively. We then extract all components in the interval [a,b], in reverse order if b < a.

In addition, if the first character in the string is ^, the complement of the given set of indices is taken.

If z is not omitted, x must be a matrix. y is then the row specifier, and z the column specifier, where the component specifier is as explained above.

  ? v = [a, b, c, d, e];
  ? vecextract(v, 5)         \\  mask
  %1 = [a, c]
  ? vecextract(v, [4, 2, 1]) \\  component list
  %2 = [d, b, a]
  ? vecextract(v, "2..4")    \\  interval
  %3 = [b, c, d]
  ? vecextract(v, "-1..-3")  \\  interval + reverse order
  %4 = [e, d, c]
  ? vecextract(v, "^2")      \\  complement
  %5 = [a, c, d, e]
  ? vecextract(matid(3), "2..", "..")
  %6 =
  [0 1 0]
  
  [0 0 1]

The range notations v[i..j] and v[^i] (for t_VEC or t_COL) and M[i..j, k..l] and friends (for t_MAT) implement a subset of the above, in a simpler and faster way, hence should be preferred in most common situations. The following features are not implemented in the range notation:

* reverse order,

* omitting either a or b in a..b.

The library syntax is GEN extract0(GEN x, GEN y, GEN z = NULL).




vecsearch(v, x, {cmpf})

Determines whether x belongs to the sorted vector or list v: return the (positive) index where x was found, or 0 if it does not belong to v.

If the comparison function cmpf is omitted, we assume that v is sorted in increasing order, according to the standard comparison function < , thereby restricting the possible types for x and the elements of v (integers, fractions or reals).

If cmpf is present, it is understood as a comparison function and we assume that v is sorted according to it, see vecsort for how to encode comparison functions.

  ? v = [1,3,4,5,7];
  ? vecsearch(v, 3)
  %2 = 2
  ? vecsearch(v, 6)
  %3 = 0 \\ not in the list
  ? vecsearch([7,6,5], 5) \\ unsorted vector: result undefined
  %4 = 0

By abuse of notation, v is also allowed to be a matrix, seen as the vector of its columns; again by abuse of notation, x may be given as a t_VEC, which is considered to belong to the matrix if its transpose is one of the matrix columns.

  ? v = vecsort([3,0,2; 1,0,2]) \\ sort matrix columns according to lex order
  %1 =
  [0 2 3]
  
  [0 2 1]
  ? vecsearch(v, [3,1]~)
  %2 = 3
  ? vecsearch(v, [3,1])  \\ can search for x or x~
  %3 = 3
  ? vecsearch(v, [1,2])
  %4 = 0 \\ not in the list

The library syntax is long vecsearch(GEN v, GEN x, GEN cmpf = NULL).




vecsort(x, {cmpf}, {flag = 0})

Sorts the vector x in ascending order, using a mergesort method. x must be a list, vector or matrix (seen as a vector of its columns). Note that mergesort is stable, hence the initial ordering of "equal" entries (with respect to the sorting criterion) is not changed.

If cmpf is omitted, we use the standard comparison function lex, thereby restricting the possible types for the elements of x (integers, fractions or reals and vectors of those). If cmpf is present, it is understood as a comparison function and we sort according to it. The following possibilities exist:

* an integer k: sort according to the value of the k-th subcomponents of the components of x.

* a vector: sort lexicographically according to the components listed in the vector. For example, if cmpf = [2,1,3], sort with respect to the second component, and when these are equal, with respect to the first, and when these are equal, with respect to the third.

* a comparison function (t_CLOSURE), with two arguments x and y, and returning an integer which is < 0, > 0 or = 0 if x < y, x > y or x = y respectively. The sign function is very useful in this context:

  ? vecsort([3,0,2; 1,0,2]) \\ sort columns according to lex order
  %1 =
  [0 2 3]
  
  [0 2 1]
  ? vecsort(v, (x,y)->sign(y-x))            \\  reverse sort
  ? vecsort(v, (x,y)->sign(abs(x)-abs(y)))  \\  sort by increasing absolute value
  ? cmpf(x,y) = my(dx = poldisc(x), dy = poldisc(y)); sign(abs(dx) - abs(dy))
  ? vecsort([x^2+1, x^3-2, x^4+5*x+1], cmpf)

The last example used the named cmpf instead of an anonymous function, and sorts polynomials with respect to the absolute value of their discriminant. A more efficient approach would use precomputations to ensure a given discriminant is computed only once:

  ? DISC = vector(#v, i, abs(poldisc(v[i])));
  ? perm = vecsort(vector(#v,i,i), (x,y)->sign(DISC[x]-DISC[y]))
  ? vecextract(v, perm)

Similar ideas apply whenever we sort according to the values of a function which is expensive to compute.

The binary digits of flag mean:

* 1: indirect sorting of the vector x, i.e. if x is an n-component vector, returns a permutation of [1,2,...,n] which applied to the components of x sorts x in increasing order. For example, vecextract(x, vecsort(x,,1)) is equivalent to vecsort(x).

* 4: use descending instead of ascending order.

* 8: remove "duplicate" entries with respect to the sorting function (keep the first occurring entry). For example:

    ? vecsort([Pi,Mod(1,2),z], (x,y)->0, 8)   \\  make everything compare equal
    %1 = [3.141592653589793238462643383]
    ? vecsort([[2,3],[0,1],[0,3]], 2, 8)
    %2 = [[0, 1], [2, 3]]

The library syntax is GEN vecsort0(GEN x, GEN cmpf = NULL, long flag).




vecsum(v)

Return the sum of the components of the vector v.
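
For example:

  ? vecsum([1, 2, 1/3])
  %1 = 10/3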

The library syntax is GEN vecsum(GEN v).




vector(n, {X}, {expr = 0})

Creates a row vector (type t_VEC) with n components whose components are the expression expr evaluated at the integer points between 1 and n. If one of the last two arguments is omitted, fill the vector with zeroes.

  ? vector(3,i, 5*i)
  %1 = [5, 10, 15]
  ? vector(3)
  %2 = [0, 0, 0]

The variable X is lexically scoped to each evaluation of expr. Any change to X within expr does not affect subsequent evaluations; X still runs from 1 to n. A local change allows, for example, different indexing:

  vector(10, i, i=i-1; f(i)) \\ i = 0, ..., 9
  vector(10, i, i=2*i; f(i)) \\ i = 2, 4, ..., 20

This per-element scope for X differs from for loop evaluations, as the following example shows:

  n = 3
  v = vector(n); vector(n, i, i++)            ----> [2, 3, 4]
  v = vector(n); for (i = 1, n, v[i] = i++)   ----> [2, 0, 4]




vectorsmall(n, {X}, {expr = 0})

Creates a row vector of small integers (type t_VECSMALL) with n components whose components are the expression expr evaluated at the integer points between 1 and n. If one of the last two arguments is omitted, fill the vector with zeroes.
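
For example:

  ? vectorsmall(3, i, i^2)
  %1 = Vecsmall([1, 4, 9])
  ? vectorsmall(3)
  %2 = Vecsmall([0, 0, 0])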




vectorv(n, {X}, {expr = 0})

As vector, but returns a column vector (type t_COL).
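
For example:

  ? vectorv(3, i, i^2)
  %1 = [1, 4, 9]~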