Neill Clift on Fri, 29 Jan 2021 01:21:14 +0100



Linear factors of very large polynomials


Hi,

This is more a question about the limits of the computations PARI/GP can do. It's not very important, but it might be of interest to some people.

A few decades ago I realized that the password hashing algorithm in the OpenVMS operating system would eventually be invertible: it was just a matter of machines getting enough memory to do some large polynomial GCDs.

In 2017 I asked a few questions here and got PARI to actually find the linear factors of a large polynomial mod 2^64-59. The polynomial looked like this:

f(x) = x^16777213 - 24*x^16777153 - 120*x^3 - 198*x^2 - 264*x + 17748126081817865973

The constant term is different for each password that is to be inverted. I used FpX_roots(f, 2^64-59) to find the roots; I think it took about 12 hours.
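For anyone curious how that root-finding works, here is a toy-scale sketch of the gcd(f, x^p - x) idea behind FpX_roots: the roots of f in F_p are exactly the roots of gcd(f, x^p - x), and x^p mod f is computed by repeated squaring, so the huge exponent never materializes as a huge polynomial. This is my own illustration, not PARI's implementation; p = 101 and deg f = 4 so everything can be checked by brute force, whereas the real run used p = 2^64 - 59 and degree about 2^24.

```python
# Polynomials are coefficient lists over F_p, lowest degree first.

def poly_rem(a, b, p):
    """Remainder of a divided by b over F_p."""
    a = [c % p for c in a]
    inv = pow(b[-1], -1, p)                 # b need not be monic
    while len(a) >= len(b):
        c = a[-1] * inv % p
        s = len(a) - len(b)
        for i, bi in enumerate(b):
            a[s + i] = (a[s + i] - c * bi) % p
        a.pop()                             # leading term is now zero
        while a and a[-1] == 0:
            a.pop()
    return a or [0]

def poly_mul(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_gcd(a, b, p):
    while b != [0]:
        a, b = b, poly_rem(a, b, p)
    inv = pow(a[-1], -1, p)                 # normalize to monic
    return [c * inv % p for c in a]

def x_pow_mod(e, f, p):
    """x^e mod (f, p) by square and multiply."""
    r, base = [1], [0, 1]
    while e:
        if e & 1:
            r = poly_rem(poly_mul(r, base, p), f, p)
        base = poly_rem(poly_mul(base, base, p), f, p)
        e >>= 1
    return r

p = 101
# f = (x - 3)(x - 7)(x^2 - 2) mod p, where x^2 - 2 is irreducible mod 101
f = [59, 20, 19, 91, 1]
g = x_pow_mod(p, f, p)
g = g + [0] * max(0, 2 - len(g))            # make sure the x coefficient exists
g[1] = (g[1] - 1) % p                       # g = x^p - x  (mod f)
while len(g) > 1 and g[-1] == 0:
    g.pop()
g = poly_gcd(f, g, p)                       # product of the linear factors of f
roots = sorted(r for r in range(p)
               if sum(c * pow(r, i, p) for i, c in enumerate(g)) % p == 0)
print(roots)                                # the roots of f in F_p: [3, 7]
```

The point of the construction: the hard step is the modular exponentiation, whose intermediate polynomials never exceed roughly twice the degree of f; the subsequent GCD then strips f down to just its linear part.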

Now OpenVMS isn't a widely used operating system anymore, though I don't doubt some people are still running it somewhere, and it is still developed by some enthusiasts.

They haven't fixed this problem. Clearly a modern hash should be put in its place, but they have a field-size limitation of 64 bits. That can be changed, but it's more work.

I was thinking we could just increase the degree of the polynomial and maybe add more terms. We want the hashing to be slower and, of course, extracting the linear factors to be infeasible with any reasonable hardware.

The degree can't usefully exceed 2^64-59: since x^p ≡ x (mod p), any higher-degree terms could obviously just be reduced.
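To spell out that reduction (a toy-prime sketch of my own, not anything specific to OpenVMS): by Fermat's little theorem, as a function on F_p the exponent e can be folded down below p without changing any hash value.

```python
# Fermat: x^p = x (mod p), so x^e agrees with x^((e-1) mod (p-1) + 1) for
# every x in F_p -- the reduced exponent stays >= 1, so x = 0 agrees too.
p = 101                                  # toy prime; the real field uses 2^64 - 59
e = 10**9 + 7                            # some exponent far above p
e_red = (e - 1) % (p - 1) + 1
assert all(pow(x, e, p) == pow(x, e_red, p) for x in range(p))
print(e_red)                             # 7
```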

Would I be right in thinking the following?


  1. PARI, or any other software, could not do the polynomial GCD to find the roots if the degree caused the coefficient array to exceed current machine memory limits? I assume we are finding GCD(f(x), x^p - x), and the largest intermediate polynomial has degree around twice that of f(x), or something like that. During the GCD the polynomials probably become dense.
  2. I read that a random polynomial mod p has few roots — about one on average. So increasing the degree can give more solutions, but maybe not a huge number. I saw various f(x) with changed constant terms having 1-4 roots.
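On point 2, a quick exhaustive check over a tiny field (my own illustration): counting the roots of every monic cubic over F_7 shows the average number of roots per polynomial is exactly 1, consistent with the 1-4 roots seen per constant term.

```python
from itertools import product

# Count roots of every monic cubic x^3 + a*x^2 + b*x + c over F_7.
# For each candidate root r, exactly p^2 of the p^3 polynomials vanish at r
# (c is forced once a, b, r are fixed), so the average is exactly 1.
p = 7
total = 0
for a, b, c in product(range(p), repeat=3):
    total += sum(1 for r in range(p) if (r**3 + a*r**2 + b*r + c) % p == 0)
print(total / p**3)                      # 1.0
```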


I happen to work on addition chains, so I can find large exponents that have no nice way of being computed with fewer multiplies — for those, the binary method is close to optimal.
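For context on the binary-method cost (my own sketch, not from any addition-chain tool): squaring-and-multiplying an exponent e costs (bitlength(e) - 1) squarings plus one extra multiply per set bit after the leading one. A short addition chain can sometimes beat that — the classic small example is e = 15, where the binary method takes 6 multiplications but the chain 1, 2, 3, 6, 12, 15 takes only 5.

```python
def binary_cost(e):
    # Square-and-multiply: (bit length - 1) squarings plus one extra
    # multiplication for every set bit after the leading one.
    return (e.bit_length() - 1) + (bin(e).count("1") - 1)

print(binary_cost(15))       # 6 -- the addition chain 1,2,3,6,12,15 needs only 5
print(binary_cost(2**64 - 59))
```

Choosing exponents whose shortest addition chain is no better than this binary count is what keeps the hashing from being sped up by a clever evaluation order.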

Thanks for your thoughts.

Neill.