Lorenz Minder on Fri, 15 May 2009 05:49:16 +0200


Re: Another problem with matrix inversion


> On Thu, May 14, 2009 at 04:03:14AM +0200, Lorenz Minder wrote:
> > BA:  
> > Ok. (What would you do with, e.g., FpM_gauss()?  Leave it there
> > or have it simply call ZnM_gauss()?)
> I would prefer if ZnM_gauss() would call FpM_gauss() that would not be
> changed too much (maybe only make it return NULL for division by zero 
> error).

I'll give that some thought, although it seems necessary to distinguish
between "matrix not invertible" and "nonzero non-invertible element
detected".

I take it the API for FpM_gauss() must not change, right?
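To make the distinction concrete, here is a minimal Python sketch (the names and semantics are mine, not PARI's): during elimination over Z/mZ, a pivot column can fail in two different ways, and only one of them means the matrix is singular.

```python
# Toy illustration, not PARI's API: over Z/mZ a column either has a unit
# pivot, has no nonzero entry at all (matrix not invertible), or contains a
# nonzero zero divisor x with 1 < gcd(x, m) < m -- which splits the modulus.
from math import gcd

def classify_column(col, m):
    col = [x % m for x in col]
    for x in col:
        d = gcd(x, m)
        if 1 < d < m:
            return ("zero divisor found", d)    # m splits as d * (m // d)
    if all(x == 0 for x in col):
        return ("no pivot: not invertible", None)
    return ("unit pivot available", None)

# e.g. modulo 255 = 3*5*17, the entry 10 exposes the factor 5:
# classify_column([0, 10, 7], 255) -> ("zero divisor found", 5)
```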

> In any case if we restrict the problem to matrix inversion, we could
> implement
> a division free algorithm, like 
> matinv(A)=matadjoint(A,1)/matdet(A)

Oh yes, very good point; I'd completely forgotten about this.

> At this point the only trouble is to pick the fastest algorithm depending 
> on the matrices entries.
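The division-free identity above can be sketched in Python as follows (illustrative names, not PARI's; cofactor expansion keeps the sketch short but is only practical for small n):

```python
# Sketch of matinv(A) = matadjoint(A) / matdet(A) over Z/mZ.
# Inversion succeeds exactly when det(A) is a unit mod m.
from math import gcd

def minor(A, i, j):
    # delete row i and column j
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det_mod(A, m):
    # cofactor expansion along the first row; exponential, small n only
    if len(A) == 1:
        return A[0][0] % m
    return sum((-1) ** j * A[0][j] * det_mod(minor(A, 0, j), m)
               for j in range(len(A))) % m

def matinv_mod(A, m):
    # A^{-1} = det(A)^{-1} * adj(A) (mod m)
    n = len(A)
    d = det_mod(A, m)
    if gcd(d, m) != 1:
        return None  # det(A) is not a unit mod m
    dinv = pow(d, -1, m)
    # adj(A)[i][j] = (-1)^(i+j) * det(minor(A, j, i))
    return [[dinv * (-1) ** (i + j) * det_mod(minor(A, j, i), m) % m
             for j in range(n)] for i in range(n)]
```

For example, with m = 255 and A = [[1,2],[3,5]] (det = -1, a unit), this returns [[250,2],[3,254]].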

I did some tests to see how this behaves for t_INTMOD over small moduli.
I took m = 2^8 - 1 = 3*5*17 as an example. I created random invertible
matrices mod m of size n * n.

Here are the results:

matadjoint technique:
  n = 150, stack usage between 128MB and 256MB, running time 67s.
  n = 170, stack usage between 256MB and 512MB, running time 112s.
chinese remaindering:
  n = 150, stack usage < 8MB, running time 0.15s.
  n = 170, stack usage < 8MB, running time 0.227s.

For me, the most worrying thing here is the excessive memory usage,
which suggests that something with n = 200 variables or so will be
difficult to solve on a typical PC.

The numbers suggest that for t_INTMOD, the splitting trick, or possibly
adapting matsolvemod(), will fare much better, unless matadjoint() can
be sped up by orders of magnitude, and its memory usage can be contained.

Nonetheless, the matadjoint() technique could be used for other non-field
rings for which nothing faster is available.
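For comparison, the chinese-remaindering route can be sketched like this (a hedged toy version in Python; the caller supplies the coprime factorization, and a plain Gauss-Jordan over Z/pZ stands in for FpM_gauss()):

```python
# Invert a matrix mod m = p1*...*pk by inverting mod each prime and
# CRT-combining the entries. Names are illustrative, not PARI's.

def gauss_inv_mod_p(A, p):
    # Gauss-Jordan inversion over Z/pZ, p prime; None if singular mod p
    n = len(A)
    M = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] % p), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)
        M[col] = [x * inv % p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

def crt_pair(a, p, b, q):
    # combine x = a (mod p), x = b (mod q), with gcd(p, q) = 1
    return (a + (b - a) * pow(p, -1, q) % q * p) % (p * q)

def matinv_crt(A, primes):
    n = len(A)
    invs = [gauss_inv_mod_p(A, p) for p in primes]
    if any(I is None for I in invs):
        return None
    m, R = primes[0], invs[0]
    for p, I in zip(primes[1:], invs[1:]):
        R = [[crt_pair(R[i][j], m, I[i][j], p) for j in range(n)]
             for i in range(n)]
        m *= p
    return R
```

With m = 255 = 3*5*17 this reproduces the same inverse as the adjugate formula, but each elimination runs over a tiny prime field.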

> > > And furthermore what application do you have in mind ?
> > 
> > In my case, I need to solve a set of multivariate equations over GF(q).
> > With the right change of variables I can (in some cases) reduce this to
> > equations with only products, e.g. x*y*z = const_1, y*z*v = const_2,
> > etc.  Then I take the log (to some fixed basis) which gives me a set
> > of linear equations in Z/(q - 1)Z.
> > 
> > Of course, q - 1 is not prime.
> So I expect you are using Pohlig-Hellman algorithm along the way,
> so you know how to factor q-1, and it might be more efficient to 
> use full chinese remainder than this trick.

Yes, I can indeed solve my problem without the need to modify PARI.
(That was never the point I intended to make; the point was just that
it's unnecessarily tedious to do this by hand when PARI could perfectly
well do it for me and for everyone else solving similar problems.)

In fact, in my case matsolvemod() is the answer, and it is not even
tedious. I still have to see whether an efficient matrix inversion
algorithm can be built on top of it; I haven't got around to that yet.
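For what it's worth, the change of variables described earlier in the thread can be illustrated with a toy example (Python; q = 7 and the equations are made up for illustration, and a brute-force search stands in for a proper linear solver over Z/(q-1)Z):

```python
# Multiplicative equations over GF(q)^* become linear equations over
# Z/(q-1)Z after taking discrete logs to a fixed generator g.
q, g = 7, 3  # 3 generates GF(7)^*
dlog = {pow(g, k, q): k for k in range(q - 1)}  # brute-force log table

# system: x*y == 6, y*z == 5, x*z == 4  (all mod 7)
# after logs: a+b = l1, b+c = l2, a+c = l3  (mod 6), with x = g^a etc.
l1, l2, l3 = dlog[6], dlog[5], dlog[4]

# exhaustive search over Z/6Z in place of a real mod-(q-1) solver;
# note 2 is not invertible mod 6, so the system has several solutions
sols = [(a, b, c)
        for a in range(6) for b in range(6) for c in range(6)
        if (a + b - l1) % 6 == 0 and (b + c - l2) % 6 == 0
        and (a + c - l3) % 6 == 0]
xyz = [(pow(g, a, q), pow(g, b, q), pow(g, c, q)) for a, b, c in sols]
```

Exponentiating the solutions back gives (x, y, z) = (3, 2, 6) and (4, 5, 1), both of which satisfy the original multiplicative system mod 7.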
