William Hart on Fri, 25 Jun 2010 19:21:08 +0200


Re: worrying comment from John McKay

You shouldn't trust *any* computation on any computer, full stop. It always makes me laugh when people quote computations in papers as though they were theorems. This is all the more so when it relies on code they wrote themselves and for which they have no test code!! Fortunately, plenty of papers get approved by referees who have never seen the code which produced the computations and which the authors let bit-rot on their hard drives and never expose to public embarrassment. So this issue shouldn't affect you in practice.

But also in practice, it is wise to assume that *ALL* computer software is broken. There are no exceptions. That means you should check that the results you are getting are reasonable. In other words, if *you* care that the results you are getting are meaningful, then you need to write test code to check this!!
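For integer routines, such a check can be as simple as throwing random inputs at the code and testing algebraic identities that any correct implementation must satisfy. A minimal sketch in Python (the name `mul_under_test` is a placeholder for whatever routine you actually want to check, not anything from GMP or Pari):

```python
import random

def mul_under_test(a, b):
    # Stand-in for the routine under test, e.g. a wrapper
    # around a bignum library call.
    return a * b

# Randomized consistency checks: these cannot *prove* correctness,
# but they catch a surprising number of real bugs.
for _ in range(1000):
    a = random.getrandbits(256)
    b = random.getrandbits(256)
    p = mul_under_test(a, b)
    assert p == mul_under_test(b, a)         # commutativity
    assert mul_under_test(a, 0) == 0         # annihilator
    assert mul_under_test(a, 1) == a         # identity
    assert p % 9 == (a % 9) * (b % 9) % 9    # casting out nines
```

Identity-based checks like these are cheap enough to leave running in a test suite forever.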

Now, with that disclaimer out of the way: in modern computers, the same CPU unit may actually perform both floating point and integer arithmetic (in fact in some CPUs it is likely a floating point unit that does the integer multiply)! There's nothing inherently wrong with this. The result is formally "proven" to be correct by chip engineers. It would be a big deal if the chip produced incorrect integer results (it has actually happened in the past, but we hope the chip manufacturers learned from that expensive recall)!
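The reason this can be done safely is that a 64-bit IEEE double has a 53-bit significand, so every integer up to 2^53 is represented exactly; a quick Python illustration of where exactness holds and where it breaks:

```python
# Every integer up to 2**53 is exactly representable in a double;
# beyond that, low bits are rounded away.
exact = 2**53
assert float(exact) == exact
assert float(exact + 1) == float(exact)   # 2**53 + 1 rounds back down

# Products that stay below 2**53 are computed exactly in floats...
a, b = 3037000499, 2                      # a*b < 2**53
assert int(float(a) * float(b)) == a * b

# ...which is why hardware can safely route integer multiplies through
# a floating-point unit, provided the operand ranges are proven safe.
big = 2**60 + 1
assert int(float(big)) != big             # rounding discarded the low bit
```

The engineering obligation is exactly to prove that every operand combination the unit accepts stays inside the exact range.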

Even if GMP or MPIR did decide to use floating point for integer computations one day, there'd be nothing wrong with this either (there's a proposal to investigate Fast Hartley Transforms for the inner part of the Schoenhage-Strassen routine). It boils down to whether *proven* rounding has been used. This means that, modulo possible coding bugs (which you assume *do* exist in every implementation), the result is proven to be correct (unless otherwise documented).
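To make "proven rounding" concrete, here is a toy sketch (not GMP's actual code, and deliberately naive): multiply two integers via a floating-point convolution of their digit arrays, then round each coefficient back to an integer. The final `round()` is only justified when the accumulated floating-point error in every coefficient is provably below 1/2 — for the small digit values here that bound clearly holds, and real FFT-based multipliers must prove the analogous bound for the operand sizes they accept:

```python
def mul_via_float_convolution(a, b, base=10):
    # Convolve the base-10 digit arrays of a and b in floating point.
    # Each coefficient is a sum of single-digit products, so it is far
    # below 2**53 and exact; rounding to the nearest integer is proven
    # correct. FFT multipliers need the same bound, proven carefully.
    da = [int(d) for d in str(a)[::-1]]
    db = [int(d) for d in str(b)[::-1]]
    coeffs = [0.0] * (len(da) + len(db))
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            coeffs[i + j] += float(x) * float(y)
    # Round each coefficient and recombine with carries via base**k.
    return sum(int(round(c)) * base**k for k, c in enumerate(coeffs))

a, b = 123456789, 987654321
assert mul_via_float_convolution(a, b) == a * b
```

The point is not efficiency (this is quadratic) but the shape of the argument: floats in the middle, a rigorous error bound, and an exact integer out the end.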

Essentially, the people coding this stuff into GMP/MPIR/Pari know what can be trusted and what can't. For example, you can usually trust the chip, not so much the math.h library (at least not on every system), but you can trust MPFR (whose specific purpose is reliable rounding), etc. That is all taken into account so that (unless otherwise documented, and modulo programmers' bugs) the answer is mathematically reliable.
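A tiny illustration of that distinction: each individual IEEE operation on the chip is correctly rounded, yet errors still accumulate across a computation, and that gap is precisely what a correctly-rounded library like MPFR is designed to control. A Python sketch, using the stdlib `fractions` module as the exact reference:

```python
from fractions import Fraction

# Each individual addition is correctly rounded by the hardware, but
# 0.1 is not exactly representable in binary, so error accumulates.
total = 0.0
for _ in range(10):
    total += 0.1
assert total != 1.0       # ten correctly-rounded steps, wrong sum

# Exact rational arithmetic pins the answer down; here stdlib
# Fractions stand in for the guarantees a library like MPFR provides.
exact = sum([Fraction(1, 10)] * 10)
assert exact == 1
```

So "trust the chip" and "trust MPFR" are claims about specified rounding behaviour, not claims that every computation built on top of them is automatically exact.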

But again, assume *ALL* computer software is broken. I recall a legend that a particular well-known system (which shall remain nameless - it wasn't Pari) once issued a version which returned 6 for the third prime (or something like that)....


--- On Sat, 6/26/10, Karim Belabas <Karim.Belabas@math.u-bordeaux1.fr> wrote:

> From: Karim Belabas <Karim.Belabas@math.u-bordeaux1.fr>
> Subject: Re: worrying comment from John McKay
> To: "pari-users" <pari-users@list.cr.yp.to>
> Date: Saturday, June 26, 2010, 12:30 AM
> * Daniel Allcock [2010-06-25 14:58]:
> > John McKay told me something unlikely but theoretically worrying, and
> > I'd like to check into it.  Namely, that some integer-to-integer
> > functions use floating point ops for speed, and that the reconversion
> > to integers at the end of the calculation isn't or wasn't "properly"
> > justified.  Not quite sure what this means, but of course I do want to
> > trust my results.  What's the story?  (If it matters, I'm using 2.4.3
> > on a mac, with gmp.)
> 1) Some implementations of the Schoenhage-Strassen algorithm (to
> multiply large integers, and many other things...) rely on floating
> point computations which are not rigorous, for the sake of raw speed.
> This is not the case with the gmp implementation, which is (relatively)
> "slow" but rigorous.
> 2) Some numerical algorithms in PARI (e.g. intnum, sumalt) are
> inherently non-rigorous, in the sense that they are correct assuming
> some analytic properties of their input, which a mathematician could in
> principle check, but which can't be proven numerically. I don't think
> you were referring to that, but this is in principle specified in the
> function documentation.
> 3) Some a priori algebraic algorithms nevertheless use floating point
> computations in an unproven context (in most cases, because using
> proven bounds would just mean getting no result at all). To my
> knowledge, this is always documented, e.g. ??polgalois
>   Warning. The method used is that of resolvent polynomials and is
>   sensitive to the current precision. The precision is updated
>   internally but, in very rare cases, a wrong result may be returned
>   if the initial precision was not sufficient.
> In rare cases, this may not be so clear if you only read the function
> documentation, but it is conspicuous in the manual (e.g. bnfinit()
> assumes the truth of the GRH.)
> Cheers,
>     K.B.
> --
> Karim Belabas, IMB (UMR 5251)  Tel: (+33) (0)5 40 00 26 17
> Universite Bordeaux 1          Fax: (+33) (0)5 40 00 69 50
> 351, cours de la Liberation    http://www.math.u-bordeaux1.fr/~belabas/
> F-33405 Talence (France)       http://pari.math.u-bordeaux1.fr/  [PARI/GP]