PARI/GP Frequently Asked Questions

This document was originally prepared by Karim Belabas, the PARI/GP maintainer between 1995 and 2023; it is now maintained as part of the PARI website. It strives to be useful for all versions of PARI/GP, but mostly documents the behaviour of the latest versions in the stable and testing branches. It may also hint at features introduced in the development version, available via GIT only.

  1. General Information
  2. Installation/Build Problems
  3. GP Specific
  4. Features
  5. System Specific


General Information

What is PARI/GP?

PARI/GP is a widely used computer algebra system designed for fast computations in number theory (factorizations, algebraic number theory, elliptic curves...), but also contains a large number of other useful functions to compute with mathematical entities such as matrices, polynomials, power series, algebraic numbers, etc., and a lot of transcendental functions.

PARI is a C library (C++-compatible). Writing C (or C++) programs and linking them against libpari is the only way to access the full set of PARI routines (most of them can be installed under gp, but this is not as portable), and it produces much faster code for low-level applications.

gp is an interactive shell giving access to PARI functions, easier to use. Low-level scripts are typically 4 times slower than directly written C code. High-level scripts, whose running time is dominated by a few calls to PARI functions, incur no penalty.

GP is the name of gp's scripting language.

What is gp2c?

It is the GP-to-C compiler. It compiles GP scripts to the C language, easing the task of writing PARI programs. It can automagically compile them to object code and load the resulting functions in gp. Low-level gp2c-compiled scripts typically run 3 or 4 times faster. gp2c currently only understands a subset of the GP language, and some functionalities are restricted to Unix operating systems.

What is GNU readline?

It is a library which provides line-editing facilities (cursor movement, command history, completion, etc.). All applications built using this library, for instance the bash shell, benefit from a consistent and convenient text-only user interface.

It is strongly advised to make sure that readline is present on your system before building gp yourself (readline is included in the Windows binary we distribute), and in particular before running Configure, so that the latter gets a chance to detect it. The gp shell will function without line editing, but typing will be tedious.

What is GNU MP?

GNU MP, or GMP for short, is a multiprecision library which in particular provides asymptotically fast routines for arithmetic operations between integers. PARI's multiprecision kernel can use a native implementation or make good use of GMP if it is available. We advise you to download and install GMP before Configuring PARI. Make sure you download version 4.2 or more recent, because of this problem.

See also: How do I enable the GMP kernel?

How can I report bugs or request new features?

Please use PARI/GP's Bug Tracking System. Others may see and comment on your request, and you will be individually notified whenever a solution is committed to the GIT server.

Need I formally cite PARI/GP in my paper?

If you have used PARI/GP in the preparation of a paper, then yes, we would appreciate that.

When an author formally cites the use of a computer algebra system or package in the bibliography of a paper, then system developers receive credit in citation indices. This practice encourages development of systems and packages. It provides recognition for this work that is similar to that enjoyed by those who publish research articles. Citing the precise version you used also makes your experimental results reproducible by others.

See also SIGSAM's 2002 statement of policy and bibliography of computer algebra systems.

You may cite PARI in the following form (BibTeX format)

      @manual{PARI2,
      organization = "{The PARI~Group}",
      title        = "{PARI/GP version \texttt{2.15.4}}",
      year         = 2023,
      address      = "Univ. Bordeaux",
      note         = "available from \url{}"
      }

(please change the version number to match the version you actually used), which would translate to something like

    The PARI~Group, PARI/GP version \texttt{2.15.4}, Univ. Bordeaux, 2023,

What are the licensing terms?

PARI/GP is free software. It is released under the GNU General Public License (GPL for short) as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.

Why use such a "restrictive" license as the GPL? Will you change it?

The short answer to the second question is no.

We are quite happy with the GPL. The original license (for all the 1.xx versions) was proprietary, too casually worded, and led to various conflicts. Starting from version 2.0, we wanted a well known license for the whole PARI/GP package, that would leave contributors secure with the future use of their code and, as far as possible, not prevent anyone from helping out. It was soon agreed that PARI/GP should become free software.

The GPL was a natural choice. It is certainly well known, and it satisfied every developer involved in the project, as well as their respective employers. For the latter, the alternatives would have been more restrictive, certainly no closer to the LGPL, BSD, or Artistic licenses. Most free Computer Algebra Systems also use the GPL, presumably for related reasons.


Is this the PARI Web site?

Yes. There used to be a separation between (developer-oriented) and (user-oriented). Both sites were merged on Oct 13th 2012.

Is there any documentation?

The testing distribution includes two User's Manuals (for the GP calculator and the libpari C library), a Tutorial, a too short Developer's guide, and some Reference Cards. Online or downloadable versions are available at

For more information about the algorithms used, check out Henri Cohen's two books (Graduate Texts in Mathematics 138 and 193, Springer).

Has the documentation been translated to ...?

No, English only. PARI is written by a small group of volunteers, and maintaining an adequate documentation in a single language has already proven to be a daunting task.

Please do not harass us with translation requests or lectures on linguistic imperialism. Some developers happen to be native French speakers, and are indeed aware of a number of disturbing linguistic issues. But this does not imply we have unlimited time to translate existing documents, not to mention creating and maintaining new ones.

PARI is free software, and you are welcome to undertake or sponsor a translation effort to the language of your choice. Please notify us if you do.

Version Information

What is the current version?

The current stable release is pari-2.15.1. The latest development branch is always available via GIT. The history of past releases since pari-2.0 is available through GIT and in our timeline.

What is this stable/testing thing?

We simultaneously maintain two versions (or branches) of PARI:

  • stable = odd minor version number, e.g. 2.13.x: only modified to fix big problems which are curable with minor changes. Neither minor improvements nor complicated changes get incorporated. The primary goal is not to upset working scripts or code. Thus you can update with confidence from one patchlevel to another within the same version branch, e.g. 2.13.x to 2.13.y. This branch may lag far behind testing: in this case, it will contain more known bugs and lack some useful features.

  • testing = even minor version number, e.g. 2.10.x: this is a development version, and nothing is guaranteed, although everything usually works. Large sections of code are modified, features appear or are cancelled, interfaces change according to the feedback we receive, many bugs are fixed and new ones sneak in. Eventually, the testing version coalesces into a stable one, and a new testing branch is started.

In both branches, numbered versions (or snapshots) are released whenever we feel they are ready: they contain no unsatisfactory work-in-progress sections, compile without warnings, and pass all regression tests on a large number of systems. Since this takes a lot of time, these releases are far less frequent than we would like. In the meantime, our GIT server allows you to download the most recent code on either branch.

What version should I use?

It depends on your needs; we describe a few plausible scenarios below. First, always use the most recent version in a given branch, stable or testing.

If stable can run your scripts without problems, and you do not need any of the newer features, just use stable!

If you just want to give PARI/GP a try, or have already run into problems with stable, then try out testing. In general, it contains fewer bugs, more features, and functions may be orders of magnitude faster.

If you run into problems with testing, please use our GIT server: your problem is probably already solved. If not, please report your problem using our Bug Tracking System, then update as soon as the problem is solved; usually one or two days later. (If you submit a bug report, you will be individually notified when a solution to your problem is committed to the GIT server.)

Release 2.9.5? What does this mean?

The version number has three components M.m.p, e.g. 2.9.5:

  • M is the Major version number, and only changed once since PARI/GP was created.
  • m is the minor version number. Odd version numbers correspond to stable versions, even numbers correspond to testing versions.
  • p is the patch revision and changes frequently.

The idea is that the patchlevel is bumped whenever a series of modifications has been completed, and the minor version number is increased when a testing version is deemed robust enough to become the new stable version. A new testing branch is then created.

This has a number of implications in terms of interface stability.
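Within gp, the built-in version() function returns these three components as a vector; for example, under release 2.15.4 one would see something like:

```gp
? version()
%1 = [2, 15, 4]
```

(Development versions may append extra identifying information to this vector.)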

When will the next version be available?

We plan one major release every two years and one bugfix release every six months. However if you are bothered by a particular problem or want to check out new improvements, do not wait for a new release, please use the development version available via GIT or the binaries from the latest Ateliers. The history of past releases is available through GIT and in our timeline.

Mailing lists

Can you unsubscribe me from (subscribe me to) the mailing list?

Please do it yourself: depending on the list(s) you are interested in, please send an email to one of

Include either subscribe or unsubscribe in the message subject. To unsubscribe, make sure that you use the same email account as when you originally subscribed.

I have a problem with one list. Who should I contact?

Please email

Installation/Build Problems

Math::Pari problem

Please report it directly to Math::Pari author and maintainer Ilya Zakharevich.

PariEmacs problem

Please report directly to the maintainer of the pariemacs package, Olivier Ramaré.

Getting PARI sources

I fetched xxx.gz from the Download page, and the files are corrupted

Some web browsers silently expand anything the web server tells them is gzip'd, yet still offer to save the file with the .gz extension, which is now inappropriate. You are probably looking at xxx under the name xxx.gz.

Under Unix, file xxx.gz will tell you what the file format really is. The file sizes should also be a tell-tale sign.

Where is your CVS / Subversion server?

They are gone. We now use GIT.

What is GIT?

GIT is an open-source revision control system. For the developers of PARI/GP, it provides network-transparent source control. For ordinary users it provides a very convenient way to obtain patched versions (source code only) in between releases, or follow development branches.

GIT clients are available for all major platforms: Unix, MacOS, Windows. In particular, the git command-line client is readily available in all Linux distributions.

When your GIT client connects to our GIT repository server for the first time, it copies the source code on your hard drive, exactly as if you had fetched and extracted a source release. PARI can then be compiled and installed in the usual way. Later connections upgrade your local copy using latest available sources, fetching only modified files. It is easy to restrict to a specific version branch, to released snapshots, or to revert your local repository to an older copy.

More details on GIT setup for PARI.

I upgraded from GIT and build fails

First, there is no guarantee that bleeding-edge sources from the development server will indeed compile. You may be unlucky and fetch a broken version. More probably, new code modules have been added which were not compiled in. Just run Configure again after updating; new files will then be taken into account by the newly generated Makefiles. You may also try a make clean if you still run into problems.


Configure not found

If you get an error message of the form command not found when trying to Configure pari, then '.' is not in your search path for commands. You need to type

    ./Configure

in the toplevel directory.

I built gp and line editing does not work!

You need to install GNU readline first, then run Configure. Check the Configure output to make sure the readline library is detected. You can help Configure with the

    --with-readline=path

flag if you installed your library in an exotic place, or if you installed several versions and Configure picks up the wrong one. Try a make clean before reporting a bug: it may be that a previous build with wrong Configure options is interfering with the now correct attempt.

I am using --with-xxx but Configure still picks the wrong library

For instance, you ran Configure --with-gmp=path, but the default libgmp is found instead of the intended one. The command is an alias for Configure --with-gmp-lib=path/lib --with-gmp-include=path/include, and a common problem is that lib64 was intended instead of lib. The solution is to spell out a specific override, as in
  Configure --with-gmp=/usr/local --with-gmp-lib=/usr/local/lib64

[Linux:] readline/ncurses not found

Linux distributions have separate readline and readline-devel packages. If only readline is installed, Configure will complain: readline is there only so that precompiled binaries can function properly; you need both packages installed to compile programs with readline support.

Configure may also complain about a missing ncurses library, in which case you should install the ncurses-devel package. (Some distributions let you install readline-devel without ncurses-devel, which is a bug in their package dependency handling.)

[Linux:] X11 graphic libraries not found

See the readline/ncurses variant of this FAQ above: you need the development packages installed. It has been reported that on SuSE distributions, one needs to install both xorg-x11-libs and xorg-x11.


Build fails with undefined symbols

Before submitting a bug report, try the following
    make clean
    ./Configure    (with proper options)
    make gp
Sometimes, a previous build (e.g. interrupted or after an update from GIT) interferes with subsequent compilation; make clean deletes all cached information from the build directory.

Build fails with a TLS reference mismatching non-TLS definitions

This means you compiled the sources successively with different values of the --mt= option in Configure. Just run make clean then make gp again.

This can also happen when typing

    Configure --tune
    Configure --mt=some_other_value
The first command implicitly compiles for --mt=single before running the resulting binary. Again make clean should fix later builds (and preserve tuning results).

My sources are in "directory name containing spaces". Build breaks mysteriously.

This was specifically reported on Mac OS X, while trying to compile GNU readline. The exact message was:

    make: *** No rule to make target `/Volumes/Macintosh', needed by `readline.o'.

This was caused by a directory named "Macintosh HD" up the directory chain between the user's home directory and the root. In particular the environment variable $HOME contained spaces, something that readline-5 configuration cannot handle. Some versions of PARI could not handle them either, but this should no longer be the case.

If renaming the offending directories is not an option, a simple solution was to do the following

   ln -s "$HOME/readline-5.0" /tmp/tmpreadline
then cd to /tmp/tmpreadline and restart the standard procedure. The above creates a symbolic link from /tmp/tmpreadline (a path without spaces) to the directory we intended to work in. Adapt to your specific needs.

Using GIT sources, compilation fails in parse.y

This yacc file must be processed with a recent enough GNU bison, version 2.4 or higher. The distribution already contains the compiled files parse.c and parse.h, hence this problem is specific to GIT: people downloading the distribution do not need bison.

I can't build a 64bit sparcv9 executable!

On the sparcv9 architecture, a 32bit binary is produced by default. To override this, you need to type in something like (GNU cc)

    env CC="gcc -m64" ./Configure --kernel=none-none

or (Forte cc)

    env CC="cc -xarch=v9" CFLAGS="-xarch=v9" ./Configure --kernel=none-none

You may want to specify kernel=none-gmp instead, but it is crucial to use the portable native kernel, and not the sparcv9 one, because the latter is 32bit-specific. (Since sparc64 does not support multiplication of 64bit operands in hardware, there was no point in porting the sparc32 PARI assembler kernel.)

You may run into further trouble if the linker picks up some 32-bit libraries instead of their 64-bit equivalent. This might happen for user-installed libraries like libgcc or libgmp. In this case, try to unset LD_LIBRARY_PATH (or reset it to a suitable value), and possibly set LD_RUN_PATH to the 64-bit directory.

I'm using SELinux and pari-dyn fails with 'Permission denied'

This came up under Fedora Core 6 as Bug #621. Andrew van Herick reported the problem and worked around it by adjusting SELinux security policy.

Using Fedora's graphical interfaces, this is done as follows:

  1. In Gnome, go to
        System->Administration->Security Level and Firewall.
  2. Click on the SELinux tab
  3. under
        Modify SELinux Policy->Memory Protection
    enable the box labeled
        Allow all unconfined executables to use libraries requiring text
        relocation that are not labeled textrel_shlib_t

I get an internal compiler error in ...

We do not try to support broken compilers. You may try to compile the faulty code module using a lower optimization level (e.g. -O instead of -O3), or switch to a stable version of a reliable compiler, e.g. gcc. Our continuous integration platform reports the current status of the master (testing) and stable branches with respect to a number of architectures and compilers.

I'm using gcc-2.96 and compilation fails in ...

gcc-2.96 is an unofficial version of gcc which was unfortunately shipped with many Linux distributions. These initial releases are badly broken on a number of architectures, ia64 for instance. Upgrade!

I'm using MacOS X and ... produces a segmentation fault.

As in Bug #565. First, try to upgrade your version of XCode. In particular, check the build number for gcc (e.g. in gp headers, Apple Computer Inc. build xxx). Builds < 5300 are very buggy, build 5363 works fine.

I'm using MacOS X and make bench fails horribly.

For instance bezout(32,432) returns [1,0,32]. First, try to update PARI to version 2.7.0 or above, which includes a workaround for the problem. If it still does not work, please notify us, then read on:

This is a painful bug in OS X Lion default compiler (LLVM build 2335.15.00), reported as Bug #1252. We suggest you try and install GCC-4.5, the simplest option being to use MacPorts; in this case

  port install gcc45
  export CC=/opt/local/bin/gcc-mp-4.5

should see you home. Another possibility, as a temporary workaround, which produces a slower gp binary, but avoids re-installing gcc:

  rm -f Odarwin*/mpker.o Odarwin*/mp.o
  make CFLAGS=-O0 gp

A [BUG] message appears in the 'program' test suite

In version 2.9 and above, this means something is genuinely wrong. Please notify us.

In pre-2.9 versions, the 'program' bench tried the install feature, which is not supported on all systems. Check the relevant 'diff' file mentioned at the end of make bench output. If the only difference comes from a failed installation or usage of addii, it only means that install() is not supported on your system; so you will not be able to use this feature. Besides that, you can safely ignore the Warning.


My own program compiled OK, but when I try to run it, I get an "error while loading shared libraries:"

On most platforms, make install builds and installs a shared PARI library (meaning that needed functions are loaded from the library at runtime, not copied into the executable at link time). By default this library goes to /usr/local/lib/. Unfortunately, this is often not enough to make your system aware of the new library.

Since the link command in your Makefile probably specifies something like

  $(CC) ... -L$(LIBDIR) -lpari -lm

with LIBDIR pointing at the directory containing the PARI shared library, the link phase succeeds. Unfortunately, this does not record in the binary where the shared library can be found, and running the binary fails with the above error. You may check with ldd my_prog which shared libraries your program needs, and which ones are found by the dynamic loader: this should fail with a message similar to the above.

The following solutions are available:
  • Run something like ldconfig to make your system aware of the new library. (Requires root powers.)
  • Hardcode a dynamic runpath in your binary, adding something like -Wl,-rpath "my exotic libpath" in the link command of your Makefile. See examples/Makefile for how to do it on your system. (This is the solution chosen for the gp binary.)
  • Require that all users of the program change their LD_LIBRARY_PATH. This can be as easy as adding
      export LD_LIBRARY_PATH=/usr/local/lib
    in a .bash_profile. Another possibility is to run
      env LD_LIBRARY_PATH=/usr/local/lib my_prog
    instead of my_prog.

GP Specific

How can I customize key bindings?

This is possible only if the readline library is compiled in (the gp header gives this information on startup). In this case, you need to create a .inputrc file. You can create this in your home directory under Unix; or any place you want provided you then set the environment variable INPUTRC to the full path of the .inputrc file.

In this file, you can associate actions to sequences of keypresses, e.g.

    "\e[1;5C": forward-word
    "\e[1;5D": backward-word
    "\e[A":  history-search-backward
    "\e[B":  history-search-forward

associates Esc + [ + 1 + ; + 5 + C, the sequence produced by hitting Control-RightArrow on my system, to the action forward-word: move the cursor to the right by one word. Similarly, the other lines associate Control-LeftArrow to backward-word, UpArrow to history-search-backward, and DownArrow to history-search-forward. For details, see readline's manual.

To determine what sequence is associated to a given combination of key presses, you can hit Control-V, then press the relevant keys. gp will print the corresponding sequence, with Esc coded as ^[. The list of bound commands and associated keybindings is available via bind -p in the bash shell.

I configured PARI to use readline, but TAB completion only works when the completion is unique, instead of showing all the possible completions?

This is the default readline behaviour. When there is more than one match, hitting TAB once only completes the longest common prefix. Hitting TAB a second time then displays the list of possible completions. You can get the second behaviour directly after a single TAB by adding

   set show-all-if-ambiguous

in $HOME/.inputrc (readline's configuration file).

How to keep the readline history across gp sessions ?

You need to define histfile in your gprc. E.g.

  histfile = "~/.gp_history"
This is not set by default because we chose to let users opt in to this feature, rather than write files behind their backs. Also, depending on the value of history-size in your inputrc (readline's initialization file), this file may grow without bound.

How to search readline's history of commands under gp?

Readline provides a wealth of functions to search through the command history for lines containing a specified string. There are two search modes: incremental (search as you type) and non-incremental (type a prefix, then start searching).

Some of these commands are bound by default (all of them incremental):

reverse-search-history (C-r)
    Search backward starting at the current line and moving `up'
    through the history as necessary.

forward-search-history (C-s)
    Search forward starting at the current line and moving `down'
    through the history as necessary.

Some are unbound (all of them non-incremental):

non-incremental-reverse-search-history
    Search backward starting at the current line and moving `up'
    through the history as necessary.

non-incremental-forward-search-history
    Search forward starting at the current line and moving `down'
    through the history as necessary.

history-search-forward
    Search forward through the history for the string of characters
    between the start of the current line and the point.

history-search-backward
    Search backward through the history for the string of characters
    between the start of the current line and the point.
The last two provide GAP/Magma-style history completion; to bind them to UpArrow and DownArrow, add
  "\e[A": history-search-backward
  "\e[B": history-search-forward

in $HOME/.inputrc (readline's configuration file).

Result 'x' is history entry %n but they don't behave the same!

The function simplify is applied to gp's result, before storing it in the history, but after the command is fully evaluated, in particular after any variable assignment takes place. Here's an example:

      ? a = x+y-x
      %1 = y
      ? variable(a)
      %2 = x
      ? variable(%1)
      %3 = y

a is still a polynomial of degree 0 in x, but %1 has been simplified. You can use simplify yourself whenever you want to ensure an object has the expected "simplest possible" type.

How can I sort by increasing absolute value ?

Replace the default comparison function by an anonymous closure:

  vecsort(v, (x,y) -> sign( abs(x) - abs(y) ));
or by a special-purpose auxiliary function
  cmp(x,y) = sign( abs(x) - abs(y) );
  vecsort(v, cmp);
The latter solution unfortunately creates a globally visible cmp function. You may use the following intermediate construction to restrict ad hoc subroutines to a function scope:
  Complicated_HighLevel_Function() =
  { my(cmp = (x,y) -> sign( abs(x) - abs(y) ));
    vecsort(v, cmp);
    vecsort(w, cmp);
  }
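As a quick check of the comparator above (the sample vector is chosen for illustration), sorting by increasing absolute value gives:

```gp
? vecsort([5, -3, 1, -2, 4], (x,y) -> sign(abs(x) - abs(y)))
%1 = [1, -2, -3, 4, 5]
```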
User functions

How can I write a `procedure' (void return value)?

Use return(), without arguments.
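For instance, a toy procedure (the name is illustrative) that prints a message and returns nothing:

```gp
greet(name) =
{
  print("Hello, ", name);
  return();  \\ `procedure': void return value
}
```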

I want to define functions within a user function

For instance, the following happens to work

    init(a) = f(x) = x + a

but this one does not

    init(a) = f(x) = x+a; g(x) = x*a

In fact, calling the latter defines the function f(x) whose definition is

    x+a; g(x) = x*a

which is not what was intended. In fact, the GP grammar states that the end of a function definition is the end of the longest expression sequence following the prototype. And the above is unfortunately a valid expression sequence. The solution is simple:

    init(a) = ( f(x) = x+a ); ( g(x) = x*a );

When a call to init is evaluated, the code within parentheses is evaluated first, defining separately f then g.
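With the closure semantics of current GP versions, the fixed definition behaves as expected; the values in the comments are what the definitions above should give:

```gp
? init(a) = ( f(x) = x+a ); ( g(x) = x*a );
? init(5);
? f(2)   \\ 2 + 5 = 7: the closure captured a = 5
? g(2)   \\ 2 * 5 = 10
```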

Uninitialized arguments are set to 0; I want an error instead

For the sake of backward compatibility with previous GP versions (which used the function arguments to declare local variables, before the introduction of my), an uninitialized argument to a user function is set to 0:

 ? fun(x, y) = print(y);
 ? fun(2, 3)
 3
 ? fun(2)
 0

In current GP, there is no reason to add unnecessary variables to the argument list, and we have the possibility to explicitly set a default value, as in

 ? fun(x, y = 0) = print(y);

So a missing argument is probably a mistake and it would be more useful to get a warning than to silently go on with incorrect data, as for built-in functions:

  ? sin()
  ***   too few arguments: sin()
  ***                          ^-

To enable this (backward incompatible) behaviour, set default(strictargs,1) before defining the function, or set strictargs = 1 in your gprc:

  ? default(strictargs, 1);
  ? fun(x, y = 0) = print(y);
  ? fun(1)
  ? fun()
  ***   at top-level: fun()
  ***                 ^-----
  ***   in function fun: x,y=0
  ***                    ^-----
  ***   missing mandatory argument 'x' in user function.

Can I install() a C function returning a GEN which may be NULL?

GEN being a pointer type, it is valid C to set a variable of type GEN to NULL and interpret this as a sentinel value. But this value is invalid to the GP interpreter, which expects only non-NULL objects. So when your installed function happens to return NULL, the evaluator will crash with a Segmentation Fault.

The correct solution, as when trying to install functions whose prototype is not supported by GP, is to write a small wrapper function

  GEN wrapper_fun()
  {
    GEN z = fun();
    if (!z) pari_err(...); // or any desired special handling
    return z;
  }
and install that one instead.

The following dirty trick also works in pure GP, without needing to mess with C code (not recommended). NULL is allowed in a very particular situation by the GP evaluator: to encode missing arguments in functions. So the following works:

  wrapNULL(a = "result_was_NULL") = a;

  install(Fp_sqrt,GG); \\ returns NULL when arg1 not a square mod arg2
  mysqrt(a, p) = wrapNULL(Fp_sqrt(a, p));

  ? mysqrt(1, 3)
  %1 = 1
  ? mysqrt(-1, 3)
  %2 = "result_was_NULL"

input() does not work from a script

It does. Read the following from a file (or copy-paste it)

    fun() =
    {
      print("enter an integer a: ");
      a = input();
      print("a^2 is "a^2);
    }

Then typing fun() from the gp prompt waits for some user input. The reason why the simpler

    print("enter an integer a: ");
    a = input();
    print("a^2 is "a^2)

does not work is that input() reads expressions from the program's standard input (stdin). As you are reading the script into gp, the script itself is fed into stdin. So input gets some data, specifically the return value of

    print("a^2 is "a^2)

which is 0, then returns immediately. In the meantime, the expanded "a^2 is"a^2 has been printed, as "a^2 is a^2" since at this point the value of a has not been modified. If you re-read the same commands into gp, it will now print a^2 is 0.

How can I read from a file one item at a time?

Indeed, read does not work since it reads in the whole file, and all intermediate values are discarded. The simplest solution is to write all data into a vector or list and save the latter.
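For instance, collecting the data in a vector and saving it with write (the filename is arbitrary); read then recovers the whole object in one call:

```gp
? v = [1, 2, 3];
? write("data.gp", v);  \\ appends the text [1, 2, 3] to data.gp
? w = read("data.gp");  \\ evaluates the file: w is again [1, 2, 3]
```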

Another solution is to use the readvec function. It returns a vector whose entries contain the results of all expression sequences from the file. For instance, if each line of file contains one expression, evaluating to 2, 4 and 6 respectively:

    ? readvec(file)
    %1 = [2, 4, 6]

A few amusing hacks: if you read in a file using \r (not with read), all history entries will be filled, and you can later access them individually. The following construction could be helpful:

    for (i = first, last,
      x = eval( Str("%", i) );    \\ the i-th history entry
      \\ ... process x ...
    )

On a Unix system, you can use the following trick (courtesy of Markus Endres) to input the n-th line of a given file:

    getline(n, file) = extern(Str("sed -n ", n, "p ", file));

On a Unix system, you can use the following beautiful hack (courtesy of Bill Allombert) using named fifo files: create a fifo and start reading from it. In another shell, write your data to it. For example:

    sh1$ mkfifo fifo
    sh1$ gp
    ? for(i=1, 10, print(i, read("fifo")))

    sh2$ perl -ne 'system("echo '\''$_'\'' > fifo")' < data

The possibility of handling file objects in GP without resorting to such extremities is in the TODO list.

How can I read space-separated data from a file, one line at a time?

Best answer: go and learn one of awk / sed / perl / python and preprocess your file!

Second best answer: you asked for it

    readsplit(file) =
    { my(cmd);
      cmd = "perl -ne \"chomp; print '[' . join(',', split(/ +/)) . ']\n';\"";
      eval( externstr(Str(cmd, " ", file)) );
    }

    ? system("cat foo")
    1 2 3
    4 5 6 7
    ? readsplit("foo")
    %2 = [[1, 2, 3], [4, 5, 6, 7]]

How can I read numbers from a file and store them into a vector?

Use readvec.
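
For instance, assuming a file data.txt (a hypothetical name) containing one expression per line, a minimal sketch:

    ? v = readvec("data.txt");
    ? #v       \\ the number of values read
    ? v[1]     \\ the first value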

How can I input integers in binary or hexadecimal?

With version 2.9 and later, use the prefix 0x for hexadecimal and 0b for binary:

? 0xff
%1 = 255
? 0b1111
%2 = 15
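
In the other direction, you can print an integer in hexadecimal via printf's %x conversion, and obtain its binary digits with binary:

    ? printf("%x\n", 255)
    ff
    ? binary(15)
    %1 = [1, 1, 1, 1]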

For older versions, you may use the following function, courtesy of Martin Larsen on pari-user:

    hextodec(s) =
    { my(v=Vec(s), a=10,b=11,c=12,d=13,e=14,f=15,
                   A=10,B=11,C=12,D=13,E=14,F=15, h);

      for(i=1,#v,h = (h<<4) + eval(v[i])); h;
    }

    ? hextodec("ff01")
    %1 = 65281
The above does not work with gp2c, because of eval. Here is a more robust (and more efficient) version:
    hextodec(s) =
    { my(v = Vecsmall(s), h = 0);
      for(i = 1, #v,
        h = (h<<4) + if(v[i]<=57, v[i]-48, if(v[i]<=70, v[i]-55, v[i]-87)));
      h;
    }
In pari-2.6.* and above, the above can be streamlined using the multi-branch form of if:
    hextodec(s) =
    { my(v = Vecsmall(s), h = 0);
      for(i = 1, #v, h = (h<<4) + if(v[i]<=57, v[i]-48,
                                     v[i]<=70, v[i]-55,
                                     v[i]-87));
      h;
    }

How to input n random bytes from /dev/urandom?

On a Unix system, you can for instance use the od utility to suitably format the source:

    RAND(n)= extern(Str("echo 0x`od -x -An -N", n, " /dev/urandom`"))

    ? RAND(8)
    %1 = 17880935609819212970
    ? RAND(8)
    %2 = 6728931097565027184

You can use RAND(8) as a seed for (the non-cryptographically secure PRNG) setrand. For cryptographic uses, you should generate all random data directly from RAND.

Is there a tutorial for 'extern'?

Here is a worked out example for Cremona's mwrank.


How can I loop over permutations?

In 2.10.0 and higher, use forperm(). It is more than n times faster than the following loop, which works in older versions:
    n = 4;
    for (i = 1, n!, print( numtoperm(n, i) ))
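
With forperm, the same enumeration might read (a minimal sketch):

    forperm([1..4], p, print(p))   \\ p runs through all 24 permutations of [1..4]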

How can I loop over all subsets of S?

In 2.10.0 and higher, you can use forsubset(). It is a little faster:
      forsubset(#S, s, print( vecextract(S, s) ))
In older versions:
    for (i = 0, 2^#S - 1, print( vecextract(S, i) ))

How can I loop over subsets of S with k elements?

In 2.10.0 and higher, you can use forsubset(). It is orders of magnitude faster:
      forsubset([#S,k], s, print( vecextract(S, s) ))
In older versions:
    forvec( v=vector(k, i,[1,#S]), print(vecextract(S,v)), 2)

How can I loop over partitions?

Use forpart(). Here is an interesting but much slower recursive implementation:
    aux(n, m, v) =
    { v = concat(v, m);
      if(!n, print(v), for(i=1, min(m,n), aux(n-i, i, v)));
    }
    partitionsgp(n) = for(i=1, n, aux(n-i, i, []))

How can I loop over divisors of polynomials?

Use fordiv. It is not specific to integers.
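
For instance, over the divisors of a polynomial:

    fordiv(x^2 - 1, d, print(d))   \\ prints the four monic divisors of (x-1)(x+1)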

Hardcoded Limits

I want bigger vectors, 2^24 is not enough

The limit is hardcoded into the object representation.

  • You can use a 64-bit machine: the limit goes up to 2^46 - 3
  • You can break up your list into smaller chunks (use vectors of vectors)

I want more variables, 16383 is not enough

Since pari-2.9, only polynomial variables are limited, not user variables. A polynomial variable springs into existence only when it is needed in an actual polynomial. I.e. 'z^2 creates a variable z, but z = 0 does not. Thus this limit should no longer be a problem.

It remains hardcoded into the object representation and cannot be easily increased. Note that on a 64-bit machine the limit goes from 2^14 - 1 to 2^16 - 1.

How can I increase maximal recursion depth?

The maximal recursion depth is governed by the maximal stack space allocated to the gp process (not to be confused with gp's stack). Use the following commands from the shell, before starting gp, to increase the process stack size to 20000 kbytes, say:

If your shell is sh-based (bash, ksh, sh, zsh):

    ulimit -s 20000

If your shell is csh-based (csh, tcsh):

    limit stacksize 20000

Timing functions do not take system time into account


    ? system("sleep 5")
    time = 0 ms
    ? for(i=1,10^4, write("testtimer.txt","12345678901"))
    time = 164 ms

where the second command actually requires about 10 seconds on my machine.

This is intentional. PARI supports four timing functions clock, times, ftime and getrusage. The last one is chosen by default, since it is the most reliable, and it reports user time, not system or wallclock time. In particular, it should not depend on the system load.

If you really want wallclock time, use getwalltime().

    ? T = getwalltime(); system("sleep 5"); getwalltime() - T
    %1 = 5000

Timer does not work when extern is involved

The time elapsed in commands called via extern is not taken into account by gp's timer. You can use getwalltime if you insist on wallclock time. You may also use

    extern("time cmd")

instead of

    extern("cmd")
A patch was proposed to take into account the children of the main gp process, but the current behaviour was deemed preferable.

How can I timeout a command?

Use alarm:

  ? alarm(1, 2*3)
  %1 = 6
  ? alarm(1, bnfinit(x^30 - 2))
  time = 1,000 ms.
  %2 = error("alarm interrupt after 1,000 ms.")
  ? E = alarm(1, bnfinit(x^30 - 2));
  time = 1,000 ms.
  ? E      \\ this is a t_ERROR object
  %3 = error("alarm interrupt after 1,000 ms.")


Multiprecision Kernel

Should I use PARI's native kernel or the GMP kernel?

The GMP kernel is faster than the native one. Here are detailed benchmarks for integer and floating point arithmetic, comparing the two kernels.

The only drawback of using the GMP kernel is that it is sometimes slightly slower for tiny inputs, and that the GMP-enabled PARI library is not compatible with the native one. In particular, all programs built using a native shared library must be rebuilt if it is replaced by a GMP-enabled one.

Can PARI handle arbitrarily large numbers?

It depends on what you mean by "handle".

On a 32-bit machine, PARI cannot even represent integers larger than 2^(2^29), or floating point numbers whose absolute binary exponent is larger than 2^30. But the main problem is that PARI's native kernel does not include enough fast algorithms that would enable it to process huge numbers in a decent time. (Integer multiplication is almost linear, but integer division is quadratic, for instance.) I would say mantissas of 10000 decimal digits are a sensible practical limit for intensive number crunching with PARI's native kernel.

Using the GMP kernel, almost linear algorithms are available, and only the physical limitations of PARI's representation of integers and floats remain.

Why are floating point operations so slow when precision is large?

Karatsuba multiplication was not enabled by default for type t_REAL up to and including version 2.2.4; a quadratic algorithm was used instead. Starting from version 2.2.5, this is no longer the case. Note that all division algorithms in the native kernel are quadratic, though.

Almost linear algorithms are available through the GMP kernel.

Why are divisions so slow?

All division algorithms in the current PARI kernel are quadratic.

Almost linear algorithms are available through the GMP kernel.

How do I enable the GMP kernel?


    Configure --with-gmp

or possibly

    Configure --with-gmp --builddir=some/exotic/directory

if you do not want to mess up with the binaries built using the native kernel.

Using the GMP kernel, my session dies on a huge division!

For instance

    (1 << (1<<25)) % ((1 << (1<<24)) + 1)

This is a general problem in GMP before version 4.2. It may allocate a huge amount of memory in the process stack space (using alloca). The best solution is to upgrade to GMP version 4.2 or higher.

Otherwise, one can increase the maximum size of the process stack segment before starting gp from the shell. Alternatively, you can configure an old GMP with

 configure --enable-alloca=malloc-notreentrant 

but this will slow down GMP.

Do you have Fast Fourier Transforms?

Not publicly exported. The library functions FFTinit and FFT are available for classic (complex) polynomial FFT. The integer arithmetic uses DFT-based almost-linear algorithms, but does not export the transformed inputs: you cannot precondition with respect to a constant operand, for instance.

Why is cos(Pi) = -1 but sin(Pi) = -5.04870979 E-29 and not exactly 0?

The exact values depend on the version of PARI and exact accuracy used.

For gp, Pi is a rational approximation to π, not exactly π; so this behaviour is a consequence of the floating point model. With default precision, the relative error is bounded by 10^-28, so Pi is within π·10^-28 of π. In particular, the absolute value of sin(Pi) is less than 3.03·10^-28 but need not be 0.

In fact cos(Pi) is not -1 either, but rather a real number close to -1:

    ? cos(Pi) + 1
    %1 = 1.2744735289059618216 E-57
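
Increasing the working accuracy shrinks both errors accordingly; for instance (orders of magnitude only, the exact values depend on the version):

    ? \p 100
    ? sin(Pi)       \\ now of order 10^-100
    ? cos(Pi) + 1   \\ now of order 10^-200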

How do I get the list of variables appearing in an expression ?

Here is a possibility, using 2.7.* syntax:

  vars(E) =
  { my(v, L, t = type(E));
    if (t == "t_RFRAC" || t == "t_POLMOD",
      return (setunion(vars(component(E,1)),
                       vars(component(E,2)))));
    v = variable(E); if (!v, return ([]));
    L = [v]; E = Vec(E);
    for (i = 1, #E, L = setunion(L, vars(E[i])));
    L;
  }

  ? vars([1/x+y^2, 0; Mod(a*z, z^2), O(t)])
  %1 = [x, y, z, t, a]

This code recursively inspects all components of an expression. It works around the following complication: there is no builtin function returning the components of an object, Vec works when applied to most types handled by variable, but t_RFRAC and t_POLMOD must still be treated separately.

In version 2.8.* and above, use the built-in variables

  ? variables([1/x+y^2, 0; Mod(a*z, z^2), O(t)])
  %1 = [x, y, z, t, a]

How can I expand a multivariate polynomial ?

You cannot. E.g

    ? simplify((a+b)*(c+d))
    %1 = (c + d)*a + (c + d)*b

The reason is that PARI only knows about univariate polynomials (whose coefficient may themselves be polynomials, thus emulating multivariate algebra). The result above is really a simplified form of

    ( c^1 + (d^1 + 0*d^0)*c^0 )*a^1 + ( (c^1 + (d^1 + 0*d^0)*c^0)*b^1 + 0*b^0 )*a^0

There is no flat or sparse representation of polynomials, and in particular no way to obtain parentheses-free expanded expressions like

    a*c + a*d + b*c + b*d

On the other hand, the following routine (courtesy of Michael Somos) comes close to what was requested:

    polmonomials(v) =
    { if (!v, return([]));
      if (type(v) != "t_POL", return([v]));
      concat( vector(poldegree(v)+1, n,
                     variable(v)^(n-1) * polmonomials(polcoeff(v,n-1))) );
    }

For instance

    ? polmonomials((a+b)*(c+d))
    %1 = [d*b, c*b, d*a, c*a]

Mod(y*x,y-1) returns 0!

If you expected to get Mod(x, y-1), you need to understand the concept of variable priority. Due to variable priority, the computation takes place in Q(y)[x] in which 0 is the correct result. Please see

??"Variable priorities, multivariate objects"@2
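
For comparison, taking the modulus in the higher-priority variable x gives the reduction one would expect:

    ? Mod(y*x, x - 1)
    %1 = Mod(y, x - 1)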

subst(p, x, polroots(p)[1]) is huge!

Assuming p has exact coefficients, the output of polroots is accurate to the current precision. This means that for each (non-zero) x in the output vector, there exists a complex root z of p such that the relative error |x - z| / |x| is small: its base-2 logarithm is less than -realbitprecision. This is guaranteed by the algorithm, due to Schönhage and refined by Gourdon, and any other behaviour is a bug worth reporting. On the other hand, the value p(x) itself may be large.

For instance, with 38 digits of accuracy:

    ? \p38
    ? p =  x^3 + x - 10^39;
    ? a = polroots(p)[2]; subst(p, x, a)
    %2 = -4.000000000000000000 + 0.E1*I \\ does not look like 0 !
    ? \p 200
    ? b = polroots(p)[2]; subst(p, x, b)
    %3 = -1.6172698447808779627 E-173 + 0.E-173*I \\ b fares better
    ? abs(a - b) / abs(a)
    %4 = 0.E-38                        \\ but a is indeed close to b

Do you support sparse matrices/polynomials?

No. As mentioned above, there is no sparse representation of polynomials; matrices are likewise stored densely.

Integer Factorization

How does PARI factor integers?

The integer factorizer implements a variety of algorithms: trial division (all primes up to a certain bound, plus a user-defined private prime table, via addprimes), pure powers, Shanks's SQUFOF, Pollard-Brent rho, Lenstra's ECM (with Brent and Montgomery improvements), and MPQS (self-initializing, single large prime variation). Output "primes" are BPSW-pseudoprimes.
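
For instance, the sixth Fermat number is split in a fraction of a second:

    ? factor(2^64 + 1)   \\ = 274177 * 67280421310721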

How efficient is factorint? How does it compare with other free implementations?

PARI's ECM is slower than GMP-ECM (portable), written by Paul Zimmermann on top of GMP. [comparative benches needed here].

PARI's Quadratic Sieve routinely (in less than 10 seconds) factors general numbers in the 60 digit range. About one minute is needed for 70 digits, about 10 minutes for 80 digits and about 2 days for 100 digits. It will not accept integers with more than 107 digits. [comparative benches needed here].

Are the factors guaranteed to be primes?

The factors output are not proven primes if they are larger than 2^64. They are Baillie-Pomerance-Selfridge-Wagstaff pseudoprimes, among which no composite has been found yet, although it is expected that infinitely many exist. If you need a proven factorization, use

    fa = factor(N);
    isprime( fa[,1] )
Another possibility is to set
    default(factor_proven, 1);
This ensures that a true primality proof is computed whenever we do not explicitly ask for partial factorization. This will slow down PARI if largish primes turn up often.

I monitor isprime() at \g2 and I get "untested integer" messages. Can I trust the output?

Yes! The message looks like:

    *** isprime: Warning: IFAC: untested integer declared prime.

It is harmless: during tests of Pocklington-Lehmer type, isprime() calls factor() which may produce this message, but it does go on to prove recursively that the prime factors encountered really are prime.

What primality tests do you use?

ispseudoprime(N) uses the deterministic Baillie-Pomerance-Selfridge-Wagstaff test, which is a strong Rabin-Miller test for base 2, followed by a strong Lucas test for the sequence (P, -1), where P is the smallest positive integer such that P^2 - 4 is not a square modulo N. It has been checked that no composite smaller than 2^64 passes this test.

ispseudoprime(N, k) with k > 0, performs a strong Rabin-Miller test for k randomly chosen bases (with end-matching to catch roots of -1). This variant is about as fast as ispseudoprime(N) when k = 3, and slower afterwards.

isprime(N) first performs an ispseudoprime test; for N < 2^64 this already proves primality. For larger N, a Selfridge-Pocklington-Lehmer p-1 test is performed if N-1 is smooth (factored part larger than N^(1/3); Konyagin-Pomerance or more involved algorithms based on lattice reduction are not implemented). Otherwise we use APR-CL (Adleman-Pomerance-Rumely, Cohen-Lenstra) for N < 2^1500 and ECPP for larger N.
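
For instance, the Mersenne prime 2^127 - 1 falls in the APR-CL range:

    ? ispseudoprime(2^127 - 1)
    %1 = 1
    ? isprime(2^127 - 1)   \\ proven, since 2^64 < N < 2^1500
    %2 = 1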

primecert(N) runs the Elliptic Curve Primality Prover (ECPP) and outputs a primality certificate.

PARI's APR-CL and ECPP can prove primality of numbers with 1000 decimal digits in about 10 minutes. Some examples on a 3.00GHz E5-2623 Intel Xeon processor.

    N                  # digits    time APRCL    time ECPP
    3^2833 - 2^2833    1,352       14min         6min 20s
    (772! + 1)/773     1,893       52min         32min

[A number of improvements should be implemented: N+1 test and combined N-1/N+1 tests, higher cyclotomic tests (Bosma-van der Hulst).]

Large examples using a computing cluster (PlaFRIM)

    N                    # digits    time ECPP
    fibonacci(130021)    27,173      1,807 days (31 days real time using 128 cores)

Do you have AKS?

In its present form (May 2021), the Agrawal-Kayal-Saxena primality test is not practical for large numbers. Efficiency not being a primary concern, it is a simple exercise in GP programming to implement the algorithm.

I monitor factorint() at \g4, and ECM seems to be stuck

You get messages of the form

    IFAC: trying Lenstra-Montgomery ECM
    ECM: working on 64 curves at a time; initializing for up to 50 rounds...
    ECM: time =      0 ms
    ECM: dsn = 12,  B1 = 1800,      B2 = 198000,    gss =  128*420
    ECM: time =  68920 ms, B1 phase done, p = 1801, setting up for B2
    ECM: time =   1210 ms, entering B2 phase, p = 2017
    ECM: time =  42990 ms
       dsn = 72,  B1 = 1073500,   B2 = 118085000, gss = 1024*420 :

and the last line is repeated seemingly forever.

The short answer is: ECM is still making progress, although the diagnostics do not show it (detailed explanation).

MPQS gives a "sizing marginal" warning

This problem is specific to version 2.2.10 and occurs while factoring smallish integers with MPQS. For instance

    *** factor: Warning: MPQS: sizing marginal, index1 too large or too many primes in A.

This diagnostic was left in place to detect suboptimal behaviour with respect to the chosen tunings. This is not a bug and the result is correct. The corresponding tuning problem was fixed in 2.2.11 (2005).

Number fields

Let K=nfinit(x^2+1). Routine xxx dies on ideal J = [5, 4; 0, 1]?

Assuming that K.zk = [1, x] (which it should be, given nfinit's current implementation), J is not an ideal. It is a matrix in HNF form, but the Z-module generated by its columns is not an OK-module (exercise).

    ? K = nfinit(x^2+1); J = [5, 4; 0, 1]; nfisideal(K, J)
    %1 = 0

In fact, the two ideals above 5 are

    ? [idealhnf(K, p) | p <- idealprimedec(K, 5)]
    %2 = [[5, 3; 0, 1], [5, 2; 0, 1]]

PARI routines transform suitable objects (element in K, prime ideal) to an ideal (matrix in HNF), but do not check that matrices of the right dimension do indeed correspond to ideals, which would be quite costly. If you are unsure about how to input ideals, use idealhnf systematically, or ideals produced by other PARI routines.

If you insist on finding the OK-module generated by the columns of your matrix, you may use something like

  IDEAL(nf, J) =
  { my(M,i,j);

    J = mathnf(J); M = [;];
    for (i = 1, poldegree(nf.pol),
      for (j = 1, #J,
        M = concat(M, nfeltmul(nf, J[,j], nf.zk[i]))
    )); mathnf(M);
  }

For better efficiency, J is put in HNF form first. You can also use directly idealhnf if your matrix does not have maximal rank, yielding the faster variant:

  IDEAL(nf, J) =
  { J = mathnf(J);
    mathnfmodid( idealhnf(nf, vecextract(J, "2..")), J[1,1] );
  }

As written, this assumes J is contained in OK and does not work if J has Z-rank 0 or 1, but you get the idea.

How to compute nfinit(P) when I cannot factor disc(P)?

Use something like

  nfinitpartial(P, B = 10^5) = nfinit([P, B])

This computes an nf structure from P, containing a tentative maximal order, which is only required to be maximal at primes less than B. The result is guaranteed to be correct whenever the field discriminant is B-smooth, i.e. has no prime divisor larger than B. If it has (at least two distinct) large prime divisors, on the other hand, the result will be wrong. If such prime factors of the discriminant are known, you can use

  addprimes(p)

for each such p, or addprimes(L) for a list L of such primes.

The output can be certified, independently of B: nfcertify(nf) outputs either

  • an empty vector: all is fine, the nf structure is correct;
  • or a vector of composite integers, which you must factor in order to produce a proven (possibly different) result; you may then use addprimes with the corresponding prime divisors.
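
A minimal check on a tiny field, where maximality is immediate (any monic irreducible P works the same way):

    ? nf = nfinit([x^2 + 1, 10^5]);
    ? nfcertify(nf)
    %2 = []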

Which computations depend on the truth of the GRH?

The routines bnfinit and quadclassunit assume the truth of the Generalized Riemann Hypothesis, in order to have a small set of generators for the class group. From 2.4.0 on, they assume no more than the GRH. They are marginally slower than before, but no longer output wrong results (that we know of !).

You can use

    bnf = bnfinit(P);
    bnfcertify(bnf)

to certify the bnf unconditionally. If the discriminant is large, this will be slow, and may overflow the possibilities of the implementation. Results obtained using a certified bnf do not depend on the GRH.

Note that output from quadclassunit cannot be certified. Use bnfinit instead if certification is important. Finally, quadunit and quadregulator do not assume the GRH (and are much slower than quadclassunit).

[ This paragraph is relevant for version numbers less than 2.4.0 ] In fact, bnfinit assumed more than the GRH, namely that the ideals of norm less than 0.3 log(disc(K))^2 generate the class group of K. This is not known even under the GRH, which implies the above with 4 in place of 0.3, a famous result of Loïc Grenié and Giuseppe Molteni; the constant can be further optimized depending on the signature and assuming the discriminant is large enough, but not down to 0.3 in any case. Also, there are counter-examples even for relatively large discriminants: unlucky random seeds actually gave a wrong result.

So, technically, any result computed using a bnf from bnfinit was conditional, depending on a heuristic strengthening of a result already depending on the GRH. One could make sure that no more than the GRH is used with

    bnfinit(P,, [0.3, 12])     /* notice the double comma after P */

instead of bnfinit(P).

This is slower, of course. At worst about 40 times slower when the original result would have been right. The routine quadclassunit had the same problem as bnfinit (0.3 being replaced by 0.2). If you want to use no more than the GRH, use

    quadclassunit(D,, [0.2, 6])

Besides all functions using a bnf argument, the routines bnfclassunit, bnfclgp, bnfreg are trivial wrappers around bnfinit and make the same assumptions. All these routines predate the introduction of member functions and are now obsolete: bnfinit gives much more information in about the same time, is about as easy to use (using member functions), and has the further advantage of being an init function.

Elliptic Curves

ellap(E,p) is very slow when p is large, or even raises an exception ?

If E/F_p has complex multiplication by a principal imaginary quadratic order, we use a very fast explicit formula (quadratic in log p). Otherwise we rely on Shanks-Mestre's baby-step giant step method, whose run-time is unfortunately exponential. Hence this naive algorithm becomes unreasonable when p has about 30 decimal digits.

To go beyond that (assuming you are running PARI version 2.4.3 or higher), please install the optional package SEAdata, implementing the Schoof-Elkies-Atkin algorithm. It should work at least up to 400 decimal digits, way beyond cryptographic sizes.
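
For instance, on a CM curve the fast explicit formula applies for arbitrarily large p:

    ? E = ellinit([0, 0, 0, 1, 0]);   \\ y^2 = x^3 + x, CM by Z[i]
    ? ellap(E, 2^127 - 1)             \\ p = 3 (mod 4): supersingular, a_p = 0
    %2 = 0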

Can I use Cremona's mwrank with gp ?

You may use SAGE. As for most external stand-alone programs without a graphical user interface, you may also call mwrank directly from gp, using extern. Note that PARI version 2.14.0 and higher will include an independent implementation ellrank, providing the functionality as a builtin.


What is a BIB ? What happens when my input doesn't make sense ?

Many routines expect operands of a specified type and shape, for instance nf, bnf, bnr, ell, modpr structures, or satisfying certain documented preconditions, e.g. an integer which is actually a prime. For reasons of efficiency, PARI routines trust the user input and only perform minimal sanity checks.

When a subtle undetected violation is performed, any of the following may occur: a regular exception is raised, the PARI stack overflows, a SIGSEGV or SIGBUS signal is generated, or we enter an infinite loop. The function can also quietly return a mathematically meaningless result: junk in, junk out.

Bugs in this class (Bad Input Bugs, or BIBs for short) are never fixed whenever a noticeable code complication or performance penalty would occur. All PARI objects and internals are exposed and at your disposal, which is both convenient and unsafe. There are plans to add more semantics to PARI types so that, e.g., an ell structure would record the fact that it is not an arbitrary vector, but actually associated to an elliptic curve. Not yet.

intnum() gives ridiculous results

[This paragraph applies to versions prior to 2.2.9.] For instance
    ? intnum(x = 0, 10^6, x^2 * exp(-x^2))
    %1 = 0e-55
    ? intnum(x=0, Pi, sin(2^11 * x)^2)
    %2 = 3.788253725 E-46

The first integral should be indistinguishable from the integral to infinity, which is sqrt(Pi)/4, and the second is exactly Pi/2.

Numeric integration evaluates the function at regularly spaced points in the integration interval, then uses a standard quadrature rule. This is tried for a few decreasing values of the subdivision diameter until the value appears to stabilize (in fact, was accurately predicted by interpolation from the four last values). This gives incorrect results for functions of period a large power of 2 as in the second example. Or if most of the weight is concentrated in a very small interval as in the first example, where the function is essentially 0 in most of the interval.

For the time being, the user must break the integral into harmless chunks, or perform the relevant changes of variables herself. E.g. make sure the function varies slowly, move singularities to the boundary or (better) remove them, restrict the interval to a region where the function is not negligible.

Starting from version 2.2.9, we use the "double exponential method" instead of standard Romberg, and get better results:

    ? intnum(x = 0, 10^6, x^2 * exp(-x^2))
    %1 = 0.4431134622969907111829434676
    ? intnum(x = 0, [oo, 2], x^2 * exp(-x^2)) / (sqrt(Pi) / 4)
    %2 = 1.000000000000000000000000000
    ? intnum(x=0, Pi, sin(2^11 * x)^2)
    %3 = 1.482067574055964668166042567   \\ not so good
    ? intnum(x=0, Pi, sin(2^11 * x)^2, 11) / (Pi/2)
    %4 = 1.000000000000000000000000000   \\ slower but perfect

It still helps a lot to split the integral into pieces, and indicate the growth rate and oscillation pattern of the function at their boundary, and possibly to use more sampling points for oscillating functions. Otherwise, loss of accuracy may be expected.

Modular objects behave strangely

For instance

    ? Mod(1,2) * Mod(1,3)
    %1 = Mod(0, 1)

    ? Mod(1,2)*x + Mod(1,3)
    %2 = Mod(1,2)*x

Elementary operations involving INTMODs and POLMODs with different moduli p and q first project both operands to the quotient modulo the ideal (p,q). In the above, the moduli 2 and 3 are coprime and we obtain the 0 ring. This explains the first example. For the second, notice that the result of Mod(1,2)*x is a polynomial in (Z/2Z)[x] whose constant coefficient is Mod(0,2).

Another example is

    ? Mod(0,2)^2
    %3 = Mod(0, 2)

whereas one might expect Mod(0,4) instead. Modular objects operate with respect to a fixed modulus, which is never increased (though it may be replaced by a strict divisor, as explained above).

To keep track of accuracy, you may use PADICs, but operations will be much slower than with INTMODs. If efficiency is a primary concern, use INTs, keeping track of denominators separately, and reduce with relevant moduli to avoid coefficient explosion.
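
For instance, in contrast with Mod(0,2)^2 above, squaring a 2-adic zero does increase the known accuracy:

    ? O(2)^2
    %1 = O(2^2)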

Is PARI suitable for high performance computation?

It depends on what you mean by that. PARI was not written to handle huge objects (e.g. millions of digits, dimension in the thousands). It can do it, but different algorithms should be used for these ranges, which are not currently implemented. PARI is quite good for heavy-duty computations in algebraic number theory.

System Specific


How can I get/use GP under Windows ?

A self-installing binary is available from the download page. See also the next question if you need more than gp (for instance gp2c, or more generally to compile programs using the PARI library).

How can I get GP2C under Windows ?

We do not distribute GP2C binaries for Windows: GP2C generates C files that need to be compiled with a C compiler. If you do not have a working C compilation environment, then GP2C cannot work for you.

We suggest to use the Windows subsystem for Linux (WSL), see our PARI with Windows tutorial. After

  sudo apt install pari-gp
  sudo apt install pari-gp2c
you have both gp2c and gp2c-run installed.

Do I need Cygwin or the Windows Subsystem for Linux to run GP under Windows ?

To run the gp binary, you don't. To use gp2c or compile C programs using libpari you will need one or the other. In this case, we recommend to use the Windows subsystem for Linux (WSL), see our PARI with Windows tutorial.

I cannot read in a GP file

You have created a text file and want to load its content in your GP session. The most direct solution is as follows: in the GP window, type (without hitting Return)

  \r 

then click and drag the file icon from the Desktop (or the folder containing it) to the GP window. This will complete the name of the file and you can now hit Return.

I now describe a better, more elaborate, long-term solution which assumes that you have Administrator rights (write access to system directories). I am assuming that you start the GP.EXE program by (double) clicking on the PARI link which was created on the windows Desktop. Check the Properties of the link in the right-click menu. If you did not tamper with it at installation time, it states that the program starts in directory

  C:\Program Files\PARI

This is where GP expects to find its input files. That is, if you type in

  \r foo

this will work if and only if there is a file foo (or foo.gp) in

  C:\Program Files\PARI

Update the path default in gprc.txt if you want to store your GP files in some other directory. You can use the click and drag trick to double check the name of the folder containing your GP files.

Troubleshooting: we advise to include an explicit .gp suffix. Some Windows editors, e.g. NotePad, automatically append a .txt suffix to the text files they create. Make sure your file is really called foo.gp and not foo.txt or foo.gp.txt.

Cut'n Paste does not work

Right click on the GP icon and change Properties->Misc: select 'Quick Edit' and deselect 'Fast Pasting'.

I cannot enter the ^-key

You are running a keyboard driver for some non-English languages. These drivers catch the ^ character to produce the circumflex accent and do not pass it properly to PARI. You may try to type a SPACE after the ^ dead key, to produce a true ^; it may or may not fix the problem.

The Delete key does not work

This depends on your Windows installation and the compilers used when building the binary. It may work properly, or it may emit a beep and print a tilde character '~'. In the latter case, no fix is known: use Backspace instead, or readline commands.

I would like a 64-bit gp.exe application.

Since pari-2.7.5, we distribute both 32-bit and 64-bit binaries.


How can I get/use gp under MacOS 9 and below

This version of gp is not maintained anymore. You can get a legacy binary gp-2.0.14.sit.bin from the FTP archives. This is a sit archive, which may be extracted using StuffIt expander.

How can I get/use gp under MacOS X and above

There is a Homebrew Formula for PARI/GP. You can also get gp from the MacPorts distribution. But both only provide the stable version of pari, and you may want to try new features.

Alternatively you can install Apple's Developer Tools, which should be on one of your installation's CDs. The current name of the tool suite is XCode. Then you can compile pari yourself:

  • download the Unix source archive (pari.tgz, tar.gz format). It is also possible to download PARI sources via GIT.
  • extract the content of the archive: tar zxf pari.tgz
  • change directory to the extracted folder parixxx/
  • read README file!

You probably want to read this FAQ, which solves a very common problem.

I can't get readline to work under MacOS X.4 (or later)

Towards the end, Configure prints the message

    ### Editline wrapper detected, building without readline support

You can still compile gp, but line editing, history and command completion will not work.


Although a libreadline.dylib library is part of OS 10.4, it is a fake: a simple link to a replacement library, editline, which is only partially compatible with readline. So we must install the true readline library.

Step 1

Download the readline sources from the GNU mirrors; the steps below assume the archive is called readline-8.0.tar.gz.
Step 2

Extract the archive (tar zxf readline-8.0.tar.gz), then copy-paste the following in a Terminal:

    cd readline-8.0
    ./configure
    make && sudo make install

This installs a proper shared library for readline, under /usr/local (thus it does not disturb your vendor's libraries and won't cause problems in other parts of the system). Now rejoice: you have solved the readline problem once and for all.

Step 3

Now, you can compile and install PARI, making sure that the proper readline is used:

    make clean
    ./Configure --with-readline=/usr/local
    make all && sudo make install

Caveat. It is in principle possible to install readline in a different location by using
    ./configure --prefix=...
when configuring readline. This will break some versions of readline, which will raise a runpath error on startup (dyld: library not loaded). The technical reason is that libreadline's install-name is set incorrectly when libreadline is created, in this case. If you run into this problem, remove the --prefix argument and try again.

How can I get hi-res graphics under MacOS?

Under MacOS 9 and below, you can't. With MacOS X, if you have installed an XFree86 server and the X11 developer's libraries, then compile gp as explained above: it should pick up the X11 installation. Starting from version 2.7.0, hi-res graphics should work out of the box: Configure will default to the plotps (up to version 2.9) or plotsvg (from 2.10 on) graphic engines, neither of which requires an external library.

make install fails for pari-2.7.0

Indeed it fails with

  ln -s libpari.dylib "/usr/local/lib"/libpari.dylib;
  ln: /usr/local/lib/libpari.dylib: File exists
  make[1]: *** [install-lib-dyn-link] Error 1
  make: *** [install] Error 2
This is a glitch affecting OS/X systems, when building without GMP. In the generated build Makefile, change the target install-lib-dyn-link by prepending a '-' (minus) sign to the command, so that line 498
        $(LN) $(LIBPARI_DYN) $(LIBDIR)/$(LIBPARI_SO);
becomes
        -$(LN) $(LIBPARI_DYN) $(LIBDIR)/$(LIBPARI_SO);
Then 'make install' should work: the leading '-' tells make to ignore the "File exists" error from ln.

PARI/GP Development
Last Modified: 2024-02-10 11:39:46
Copyleft © 2003-2022 the PARI group.