Karim Belabas on Fri, 07 Mar 2014 22:40:03 +0100


Re: bb_field.{sub,div}

* Peter Bruin [2014-02-05 15:43]:

[ sorry I'm late in this thread. Busy with too many other things :-(  ]

> > What will be the difference with gen_Gauss ?
> The same as the difference between inverseimage() and gauss()
> (matinverseimage() and matsolve() in GP): the latter is only for solving
> A*X = B with A invertible.  In my application, A is non-square with full
> column rank (i.e. A is the matrix of an injective but non-surjective
> linear map) and the column space of B is known to be contained in that
> of A, so there exists a unique X solving the system.
> In earlier PARI versions (maybe 2-3 years ago), matsolve() could solve
> such a system, but at some point it stopped working

Hum. It has always been documented that A had to be invertible and I
couldn't find a stable version where it worked with non-square matrices
(neither 2.1, 2.3 nor 2.5). It may have worked in some testing versions, though.

On the other hand, it's an undocumented feature that the underlying
libpari function gauss() accepts NULL for B and then returns a
left inverse in the case of a non-square matrix with full column rank.
None of the specialized domain-specific variants, e.g. FpM_gauss,
supports this.
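To make the situation concrete, here is a toy sketch (plain Python, nothing to do with the libpari internals; the prime and sizes are arbitrary examples) of solving A*x = b by Gaussian elimination on the augmented matrix when A is non-square with full column rank, which is the case matinverseimage() handles:

```python
# Toy sketch (not libpari code): solve A*x = b over F_p when A is
# non-square with full column rank, by row-reducing [A | b].
# Assumes b lies in the column space of A, so the solution is unique.

P = 101  # example prime modulus

def solve_full_column_rank(A, b, p=P):
    """Return the unique x with A*x = b (mod p), assuming A has full
    column rank and b is in the column space of A."""
    m, n = len(A), len(A[0])
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    row = 0
    for col in range(n):
        # find a pivot in this column (exists by the rank assumption)
        piv = next(r for r in range(row, m) if M[r][col] % p)
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], -1, p)            # normalize pivot to 1
        M[row] = [v * inv % p for v in M[row]]
        for r in range(m):
            if r != row and M[r][col] % p:       # clear the column
                f = M[r][col]
                M[r] = [(v - f * w) % p for v, w in zip(M[r], M[row])]
        row += 1
    return [M[i][n] for i in range(n)]

# 3x2 example: A has full column rank, b = A * (5, 7)
A = [[1, 2], [3, 4], [5, 6]]
b = [sum(a * x for a, x in zip(row, [5, 7])) % P for row in A]
print(solve_full_column_rank(A, b))  # -> [5, 7]
```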

> and I quickly found out that matinverseimage() provided the
> functionality that I needed.

Yes, that's the documented "proper" solution.

> Now I switched from matrices with t_FFELT entries to FlxqM for
> efficiency reasons.  I did try FlxqM_gauss(), but it crashed; I did not
> check why exactly, but I assume because it implicitly also requires A to
> be invertible. 

It does, explicitly :-) The documentation states that the *_gauss variants
behave like gauss(), and gauss() officially requires a square invertible
matrix, the undocumented feature above notwithstanding.

As a matter of fact, it's a little sad that we have so many "Gauss pivot"
variants as primitives, instead of standard algorithms such as LUP
factorization. Currently we have

- RgM_pivots (allows image / supplement / rank profile)
- gauss_pivot_ker (analogous but more expensive; also allows kernel)
- gauss (computes A^(-1) B)

Other operations are then implemented in terms of these non-standard
primitives (e.g. inverseimage reduces to kernel computation). These three
are not too complicated and share most of the non-trivial code (pivot
search), but various minor memory-use optimizations have made them
diverge (e.g. avoiding a few operations, specialized gerepile calls).

General linear algebra over fields has never been a strong focus of
PARI, but cleaning up this set of ad hoc historical functions would be useful.
Again, it's a bit sad that domain specialization (which allows large
efficiency gains!) occurred before that cleanup. As I see it, we could

- merge RgM_pivots / gauss_pivot_ker

- replace gauss(A,B) by a proper PA = LU factorization plus (independent)
  back-substitution to solve the triangular linear systems associated to B.
  This would allow matinverseimage to be implemented directly (more
  efficiently for fixed A and varying B), without reducing to ker().
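A toy sketch (plain Python over the rationals, not a proposed implementation) of that scheme: factor PA = LU once, then solve each right-hand side with two triangular substitutions.

```python
# Toy sketch of PA = LU with partial pivoting, for a square invertible A.
from fractions import Fraction

def lup(A):
    """Return (L, U, perm) with (PA)[i] = A[perm[i]] and PA = L*U."""
    n = len(A)
    U = [[Fraction(v) for v in row] for row in A]
    L = [[Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    perm = list(range(n))
    for k in range(n):
        piv = next(r for r in range(k, n) if U[r][k] != 0)  # pivot row
        U[k], U[piv] = U[piv], U[k]
        perm[k], perm[piv] = perm[piv], perm[k]
        L[k][:k], L[piv][:k] = L[piv][:k], L[k][:k]
        for r in range(k + 1, n):                # eliminate below pivot
            L[r][k] = U[r][k] / U[k][k]
            U[r] = [u - L[r][k] * v for u, v in zip(U[r], U[k])]
    return L, U, perm

def lup_solve(L, U, perm, b):
    """Solve A*x = b given the factorization of A from lup()."""
    n = len(b)
    pb = [Fraction(b[p]) for p in perm]
    y = []
    for i in range(n):                           # forward: L y = P b
        y.append(pb[i] - sum(L[i][j] * y[j] for j in range(i)))
    x = [Fraction(0)] * n
    for i in reversed(range(n)):                 # backward: U x = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[0, 2, 1], [1, 1, 1], [2, 0, 3]]
L_, U_, p_ = lup(A)                    # factor once...
print(lup_solve(L_, U_, p_, [4, 3, 7]))  # ...then solve many b's cheaply
```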

> >> Last week I also wrote a bb_field implementation of Strassen's matrix
> >> multiplication algorithm (for personal use; as far as I understand it
> >> is considered to be outside the scope of PARI).
> >
> > I do not think it is outside the scope of PARI. Rather so far we did
> > not have any application for it.
> I see.  But any application of matrix multiplication (over exact rings,
> because fast multiplication is numerically less stable) is of course a
> potential application of Strassen's algorithm, since you can just switch
> between classical and Strassen multiplication inside the multiplication
> routine, depending on whether the dimensions are larger than some tuning
> parameter.  From some small experiments that I did, it seems that over
> finite fields (I used a field of size 29^10), Strassen multiplication
> already becomes worthwhile for dimensions larger than about 16 (using the
> classical algorithm for sizes <= 16 as base cases).

I agree. It is useful to have Strassen multiplication. And matrix
multiplication is a simpler primitive than LUP :-)
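For reference, a toy sketch (plain Python over Z, not tuned PARI code) of the threshold scheme Peter describes, using his empirical cut-off of 16; it assumes square matrices whose size is a power of two:

```python
# Toy Strassen multiplication with a classical-multiplication cut-off.
THRESHOLD = 16  # Peter's empirical cross-over point

def mat_mul(A, B):
    """Classical O(n^3) product of square matrices (lists of lists)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply square matrices whose size is a power of two."""
    n = len(A)
    if n <= THRESHOLD:
        return mat_mul(A, B)           # base case: classical algorithm
    h = n // 2
    quad = lambda M, r, c: [row[c:c + h] for row in M[r:r + h]]
    A11, A12, A21, A22 = (quad(A, r, c) for r in (0, h) for c in (0, h))
    B11, B12, B21, B22 = (quad(B, r, c) for r in (0, h) for c in (0, h))
    # seven recursive products instead of eight
    M1 = strassen(mat_add(A11, A22), mat_add(B11, B22))
    M2 = strassen(mat_add(A21, A22), B11)
    M3 = strassen(A11, mat_sub(B12, B22))
    M4 = strassen(A22, mat_sub(B21, B11))
    M5 = strassen(mat_add(A11, A12), B22)
    M6 = strassen(mat_sub(A21, A11), mat_add(B11, B12))
    M7 = strassen(mat_sub(A12, A22), mat_add(B21, B22))
    C11 = mat_add(mat_sub(mat_add(M1, M4), M5), M7)
    C12 = mat_add(M3, M5)
    C21 = mat_add(M2, M4)
    C22 = mat_add(mat_sub(mat_add(M1, M3), M2), M6)
    return [C11[i] + C12[i] for i in range(h)] + \
           [C21[i] + C22[i] for i in range(h)]
```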

> However, after that I committed some industrial espionage and took a
> look at what FLINT does; it uses Kronecker substitution to reduce to
> multiplying matrices with integer coefficients.  This could actually be
> a more promising way of speeding up matrix multiplication over finite
> fields (one could then still apply Strassen, but I suspect the
> dimensions at which this becomes profitable are much larger).

That's also an interesting option.
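For the curious, a toy sketch of the packing step itself (plain Python; an actual FlxqM product along these lines would apply this entrywise, do one integer matrix product, then unpack and reduce the result):

```python
# Toy Kronecker substitution: multiply polynomials with small
# non-negative integer coefficients by packing each polynomial into a
# single big integer (evaluation at 2^b) and doing one integer product.

def kronecker_mul(f, g, coeff_bound):
    """Multiply polynomials f, g (coefficient lists, low degree first)
    whose coefficients lie in [0, coeff_bound)."""
    # b bits per slot: enough for any coefficient of f*g, each of which
    # is < len(f) * coeff_bound^2, so the slots never overlap.
    b = (len(f) * coeff_bound * coeff_bound).bit_length()
    pack = lambda p: sum(c << (b * i) for i, c in enumerate(p))
    prod = pack(f) * pack(g)          # one big-integer multiplication
    mask = (1 << b) - 1
    return [(prod >> (b * i)) & mask for i in range(len(f) + len(g) - 1)]

# (x + 2x^2) * (3 + x) = 3x + 7x^2 + 2x^3
print(kronecker_mul([0, 1, 2], [3, 1], 10))  # -> [0, 3, 7, 2]
```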


Karim Belabas, IMB (UMR 5251)  Tel: (+33) (0)5 40 00 26 17
Universite Bordeaux 1          Fax: (+33) (0)5 40 00 69 50
351, cours de la Liberation    http://www.math.u-bordeaux1.fr/~kbelabas/
F-33405 Talence (France)       http://pari.math.u-bordeaux1.fr/  [PARI/GP]