Lorenz Minder on Mon, 25 May 2009 10:37:47 +0200


 Re: Another problem with matrix inversion

Hi,

BA:
> On Thu, May 21, 2009 at 04:43:46AM +0200, Lorenz Minder wrote:
> > > Sorry for the rushed our reply, but maybe you would prefer it to
> > > a long delayed one.
> >
> > I absolutely do, yes.  Thanks for taking the time.
>
> But I am afraid this led to a misunderstanding. Sorry about that.

Maybe, and my rushed reply to your reply probably irritated both of us
to some degree.  In any case, I apologize for my harsh tone in that
last mail.

I still think your "rushed reply" was a good thing; after all, it
helped clear up the non-controversial issues.

> > I'll look in the test suite, not sure how this works.  (Documentation
>
> Test-suite is merely a matter of adding a file src/test/in/foo or
> expanding an existing file that is not used by 'make bench'.

Ok.  Is in/linear used by 'make bench'?  How can I tell?

As a tangential issue, when I ran the tests a couple of months ago I
got a number of failures (Linux on PPC, GMP kernel IIRC), all of which
were stack overflows.  Should I redo this (with unpatched source) and
send you guys the results?  (At the time I didn't bother because it
didn't look like something as serious as wrong output.)

> > I strongly suggest to go back to the way I did it originally, namely
> > simply replace FpM_gauss() and Flm_gauss().  Let me quote from the
>
> I have an alternative:
> 1) We add functions FpM_invsafe/Flm_invsafe that would do the same for
> matrix inversion that Fp_invsafe does for integers. Maybe even
> FpM_gausssafe if needed.  This is a minimal change that uses a
> well-defined interface.

I'm not too sure I get this right.  Fp_invsafe() gives an inverse if it
exists, so FpM_invsafe would do the same (i.e., it would behave like
ZnM_gauss(*, NULL, modulus))?  Or do you mean that FpM_invsafe does what
FpM_gauss() does except err out when p is detected to be nonprime?
Assuming that it then returns the failing residue class that would be
useful, of course.

I think though that FpM_gausssafe() and Flm_gausssafe() are more
important than the *_invsafe() variants.  (Also, *_invsafe() is
trivial if we have *_gausssafe().)
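Just to pin down the convention I have in mind for the scalar case,
here is a throwaway sketch in plain Python (my reading of the intended
semantics, not PARI's actual code; name and return convention are my
guesses):

```python
from math import gcd

def fp_invsafe(a, p):
    # Return the inverse of a mod p if it exists.  Otherwise return a
    # nontrivial factor of p when one is exposed (the interesting case
    # for the splitting strategy), or None when a is 0 mod p.
    g = gcd(a % p, p)
    if g == 1:
        return pow(a, -1, p)
    return None if g == p else g
```

So fp_invsafe(3, 9) would hand back the factor 3, which is exactly the
datum the matrix routine would want to propagate to its caller.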

> I can do it if you want.
>
> 2) We add a function that takes a factored modulus and uses Chinese
> remaindering and Hensel lifting on top of FpM_invsafe to invert a
> matrix.  Hensel lifting alleviates the need to change FpM_gauss to
> look for an invertible pivot. This is similar to how gen_Shanks_sqrtn
> works.

Ok, so I guess it's the 2nd possibility above, right?
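(For my own benefit, the lifting step as I understand it: if X is an
inverse of A mod p^m, one Newton step X <- X(2I - AX) gives an inverse
mod p^(2m).  A throwaway plain-Python illustration of the math, not
PARI code; all names are mine:)

```python
def mat_mul(A, B, n):
    # product of square matrices over Z/nZ
    k = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) % n
             for j in range(k)] for i in range(k)]

def hensel_lift_inverse(A, X, p, e):
    # Given X with A*X == I (mod p), return an inverse of A mod p^e
    # by Newton iteration, doubling the precision at each step.
    k = len(A)
    m = 1
    while m < e:
        m = min(2 * m, e)
        n = p ** m
        AX = mat_mul(A, X, n)
        T = [[(2 * int(i == j) - AX[i][j]) % n for j in range(k)]
             for i in range(k)]        # T = 2I - A*X  (mod p^m)
        X = mat_mul(X, T, n)
    return X
```

This is indeed why no pivot search is needed: elimination is only ever
performed mod p itself.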

> 3) We add a function that takes an unfactored modulus, calls the
> function above as if the modulus were prime, and, if it is not,
> refines the factorisation and iterates.

This sounds reasonable.  The only cost compared to the current solution
seems to be that Gaussian elimination is restarted from scratch if the
modulus is proved non-prime.  If that is the case, I think that would
be acceptable.
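To make sure I understand the control flow of 3), here is a throwaway
plain-Python sketch of what I would implement (all names are mine; the
prime-power branch would use the Hensel lifting from 2) and is left
out here):

```python
from math import gcd

def mat_mul(A, B, n):
    k = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) % n
             for j in range(k)] for i in range(k)]

def inv_mod_safe(A, n):
    # Gauss-Jordan mod n "as if n were prime": take the first nonzero
    # pivot; if it shares a factor with n, return that gcd as a witness.
    # Returns (inverse, None), (None, witness), or (None, None) if singular.
    k = len(A)
    M = [row[:] + [int(i == j) for j in range(k)] for i, row in enumerate(A)]
    for c in range(k):
        piv = next((r for r in range(c, k) if M[r][c] % n), None)
        if piv is None:
            return None, None
        g = gcd(M[piv][c], n)
        if g != 1:
            return None, g
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, n)
        M[c] = [x * inv % n for x in M[c]]
        for r in range(k):
            if r != c:
                f = M[r][c]
                M[r] = [(x - f * y) % n for x, y in zip(M[r], M[c])]
    return [row[k:] for row in M], None

def coprime_split(n, g):
    # n = u*v with u divisible by every prime dividing g, gcd(u, v) = 1
    u, v = g, n // g
    while (h := gcd(u, v)) != 1:
        u, v = u * h, v // h
    return u, v

def invert_refining(A, n):
    # Pretend n is prime; on failure, split the modulus at the witness
    # and recombine the two partial inverses by CRT.
    X, g = inv_mod_safe(A, n)
    if g is None:
        return X                      # inverse found, or truly singular
    u, v = coprime_split(n, g)
    if v == 1:
        raise NotImplementedError("prime power: Hensel lifting goes here")
    Xu, Xv = invert_refining(A, u), invert_refining(A, v)
    if Xu is None or Xv is None:
        return None
    t = pow(u, -1, v)
    return [[(xu + u * ((xv - xu) * t % v)) % n
             for xu, xv in zip(ru, rv)] for ru, rv in zip(Xu, Xv)]
```

For A = [[5,1],[1,1]] and n = 35, the first pivot exposes the factor 5,
the modulus is split into 5 and 7, and the two inverses are glued back
together; nothing is factored unless the elimination forces it.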

> > libpari manual, the part that talks about Fp* and Fq* functions:
> >
> >   7.2 Modular arithmetic.
> >
> >   These routines implement univariate polynomial arithmetic and linear
> >   algebra over finite fields, in fact over finite rings of the form
> >   (Z/pZ)[X]/(T), where p is not necessarily prime and T \in (Z/pZ)[X]
> >   is possibly reducible; and finite extensions thereof. All this can be
> >   emulated with t_INTMOD and t_POLMOD coefficients and using generic
> >   routines, at a considerable loss of efficiency.
> >
> > As far as I can see, this states unambiguously that p does not have to
> > be prime.  Don't you agree?
>
> I do not.

A clear answer to a clear question.  (I appreciate that.)

> I designed this interface, and this says that the Fp operations
> perform as if the ring is a finite field. This means they do not go
> out of their way to handle non-fields.  The whole point of the Fp
> operations is to get rid of genericity, which is quite costly.

Yes, this is also in part the reason why my last patch was quite long
without doing that much.

I'm still not sure how your *safe() proposal would fix the problem with
the proliferation of new Zn*-routines that the previous approach would
cause.  Can you elaborate on that?

> In practice, inside PARI this is used for a non-field only for prime
> powers, to handle operations over Z/p^mZ (e.g. Hensel lifting).
>
> > I suggest we start thinking of this patch as what it is, a bug fix.
>
> it is not a "bug fix":

Ok, I take your word for it.

> the PARI documentation is quite clear on that:
> ==============================================================================
> Vectors, matrices, linear algebra and sets:
>
>    Since PARI does not have a strong typing system, scalars live in
>    unspecified commutative  base  rings.   It is very difficult to
>    write robust linear algebra routines  in  such  a general setting.
>    We thus assume that the base ring is a domain and work over its
>    field of fractions.  If the base ring is not a domain, one  gets
>    an error as soon as a non-zero pivot turns out to be
>    non-invertible.  Some  functions,   e.g. mathnf or mathnfmod,
>    specifically assume that the base ring is Z.
> ==============================================================================
> This describes exactly the situation here, so it is not a bug.
>
> Now I agree that, as a matter of "quality of implementation", it
> would be nice if PARI were able to invert all invertible matrices.
> But this is not a bug.

It's more like a deficiency, then. (-: Yes, I'm joking.)

> OK, I have made some tests and I agree that matadjoint is very slow,
> even compared to plain RgM_inv without the is_FpM optimisation.

I would be interested in the numbers, if you still have them at hand.
What I noticed is that matdet() (when using slow Gaussian elimination)
is still usable on a slow machine, whereas matadjoint(,1) wouldn't work
for me even on a really fast machine.  (It was very slow, but the worst
part was that it needed way too much RAM.)

Somewhat unrelatedly, do you have a reference describing the Berkowitz
algorithm and why it works?
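(My current understanding of the recursion itself, written out as
throwaway plain Python so you can check whether I have it right; this
is the textbook Samuelson-Berkowitz scheme as I understand it, and
certainly not PARI's implementation:)

```python
def berkowitz_charpoly(A):
    # Division-free characteristic polynomial det(x*I - A), returned as
    # the coefficient list [1, c1, ..., cn].  Only +, -, * are used, so
    # it works over any commutative ring.
    n = len(A)
    coeffs = [1]                      # charpoly of the empty matrix
    for k in range(n - 1, -1, -1):
        # split the trailing principal submatrix as [[a, R], [C, M]]
        a = A[k][k]
        R = A[k][k + 1:]
        C = [row[k] for row in A[k + 1:]]
        M = [row[k + 1:] for row in A[k + 1:]]
        m = n - k
        # first column of the Toeplitz matrix: 1, -a, -R*C, -R*M*C, ...
        col = [1, -a]
        v = C[:]
        for _ in range(m - 1):
            col.append(-sum(r * x for r, x in zip(R, v)))
            v = [sum(mi * vi for mi, vi in zip(Mi, v)) for Mi in M]
        # multiply the (m+1) x m lower-triangular Toeplitz matrix by
        # the coefficient vector of the previous principal submatrix
        coeffs = [sum(col[i - j] * coeffs[j]
                      for j in range(len(coeffs)) if 0 <= i - j < len(col))
                  for i in range(m + 1)]
    return coeffs
```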

> There is always a cost (in storage/ memory/ testing/ maintenance/
> consistency/ etc.) associated with extra code in PARI. This has to be
> balanced against the use cases.

Yes, and it is not in my interest at all to send you an unmaintainable
mess.  My understanding of your concern is that the main problem right
now is that the amount of code may not really be warranted by the
admittedly rather rare use case?  (I'm not sure what is meant by
storage/memory, and I hope to do the main testing myself and add
appropriate test cases.)

> > > It would be nice if the whole strategy was abstracted to be reused in
> > > similar
> > > situation.
> >
> > FqM_gauss() has the analogous bug of course, and it may be possible to
> > reuse the same kind of logic, and possibly share some code if p is a
> > prime but the polynomial reducible.  But if p=9, we get things like
> > (3x + 1)^3 = 1, and I'm pretty sure that will screw us, although I
>
> Agreed. However, factoring polynomials over a field is much easier
> than factoring integers, so this trick is not very interesting for
> polynomials.

Maybe.  Of course factoring polynomials is not that expensive, but by
far the most popular use case of these functions is an irreducible
modulus, and I think any fix for the reducible case should not
penalize the common case in terms of running time.  Therefore I'd
consider it a bad idea to factor the modulus (or test its
irreducibility) in every call to FqM_gauss().

On the other hand, when the modulus is detected to be non-invertible, it
might possibly be easier to just factor the whole thing instead of doing
the splitting trick in this case, yes.  (Although it seems to me that
it would need probably roughly the same amount of code.)

> > > Do you think there is really a need for 'ZlM' ?  In that range, factoring
> > > the modulus is trivial, and that would reduce the size of the patch
> > > somewhat.
> >
> > For Flm_gauss() the problem could indeed be solved differently, but
> > I'm not sure it's so much easier, especially since the case of p having
> > square factors needs to be considered, and the elimination routines
> > fixed accordingly in any case.
>
> This is why I suggest to use Hensel lifting.

Ok, that can be done.

> PS:
> Please use Fp_invsafe() (and implement a function Fl_invsafe for the Fl case).

I can do that. It means an additional duplicate gcd()-call in the
splitting case; I don't like the look of that too much, but it can be
done.

> Could you avoid the code duplication between Z_find_coprime_factors and
> find_coprime_factors ? I do not think this is performance critical enough
> to warrant it.

Yes, indeed.

So here is my question:  What should I do to move towards resolving
this in the best possible way?  Obviously I can get rid of
find_coprime_factors() and fix the other issues with my last patch,
and implement the lifting in the m = p^t case instead of relying on
the search for a better pivot in FpM_gausssafe().

But how is it integrated? I.e., how do I name the functions and where
can I hook them in?

Best,
--Lorenz
--
