The simplest command line for the parallel PC GAMESS/Firefly run is as follows:

      PCGAMESS.EXE DIR0 DIR1 DIR2 ... DIRN < wp4 options >

Here, DIR0, DIR1, DIR2, etc. are the working directories of the master PC GAMESS/Firefly process (i.e., of MPI rank 0), of the second PC GAMESS/Firefly instance (MPI rank 1), of the third instance, and so on. Both absolute and relative paths are allowed; relative paths are resolved against the initial working directory from which you launched PC GAMESS/Firefly.

< wp4 options > are the optional wp4-specific options (see the WMPI documentation for the list).

For example, you can use something like the following:

      pcgamess.exe d:\mydir\wrk0 e:\mydir\wrk1 "f:\my dir\wrk2" -p4gm 10000000

Depending on the procgrp file used, the three directories above must exist prior to PC GAMESS/Firefly execution either all on a single computer, on two different computers, or on three different computers. The input file must reside in the master working directory (i.e., in d:\mydir\wrk0 in the example above).
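For the single-computer case, the per-rank directories can be created ahead of time. A minimal shell sketch (the paths are placeholders, not the drive-letter paths from the example; on Windows, use md in cmd.exe instead):

```shell
# Create per-rank working directories before launching pcgamess.exe.
# These paths are hypothetical -- substitute your own.
mkdir -p /tmp/mydir/wrk0 /tmp/mydir/wrk1 "/tmp/my dir/wrk2"

# The input file must be placed in the master (MPI rank 0) directory,
# e.g.:  cp job.inp /tmp/mydir/wrk0/
ls -d /tmp/mydir/wrk0
```

Note that a directory containing spaces must be quoted on the command line, exactly as in the example above.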

In the above example, the -p4gm 10000000 option sets the size of the global memory pool used by the WMPI libraries and the wp4 device to 10 MB.

Finally, it's a good idea to look at the more detailed instructions written by Prof. Ernst Schumacher at


  1. While running PC GAMESS/Firefly in parallel on a standalone SMP system, performance degradation is possible because of simultaneous I/O operations. In this case, the use of a high-quality RAID array or separate physical disks can help. If the problem persists, for SMP/multicore systems with two or more CPUs/cores (4 or 8, for example), the better solution is probably to switch to the direct computation methods, which require much less disk I/O.
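The switch to direct computation methods is made in the input file. For example, direct SCF is requested with the DIRSCF keyword of the $SCF group (standard GAMESS/Firefly input syntax; verify against the manual for your version):

```
 $SCF DIRSCF=.TRUE. $END
```

With DIRSCF=.TRUE., two-electron integrals are recomputed on the fly instead of being stored on disk, which trades extra CPU work for greatly reduced I/O contention.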

  2. The default value for AOINTS is DUP. It is probably optimal for low-speed networks (10 and 100 Mbps Ethernet). On the other hand, for faster networks and SMP systems the optimal value could be AOINTS=DIST. You can change the default by using the AOINTS keyword in the $SYSTEM group, so it is worth benchmarking both settings to see which is faster on your systems.
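To try the distributed mode described above, set AOINTS in the $SYSTEM group of the input file; a minimal sketch (other $SYSTEM options omitted):

```
 $SYSTEM AOINTS=DIST $END
```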

  3. If you experience unexpected problems during parallel PC GAMESS/Firefly execution, they are most likely WMPI-related. Namely, the most common problem is the overflow of the MPI/wp4 global memory pool during MPI broadcast operations (error # ::2). In this case, increasing the wp4 memory pool can help (-p4gm 40000000, for example). Another known problem, a deadlock of the compute processes that may be encountered while running on SMP systems with local 1 or more specified in the procgrp file, can be overcome by these two steps:

  4. There are four keywords in the $SYSTEM group that can help in the case of MPI-related problems. Do not modify the default values unless you are absolutely sure that you need to do this. They are as follows:

            MXBCST (integer) - the maximum size (in DP words) of a message
                               used in broadcast operations. Default is 32768.
                               You can change it to see whether this helps.
            MPISNC (logical) - activates a strategy in which calls to the
                               broadcast operation periodically synchronize
                               all MPI processes, thus freeing the wp4
                               global memory pool.
                               Default is false. Setting it to true should
                               resolve most buffer-overflow problems at the
                               cost of somewhat reduced performance.
            MXBNUM (integer) - the maximum number of broadcast operations
                               which can be performed before the global
                               synchronization call is done.
                               Relevant if MPISNC=.true. Default is 100.
            LENSNC (integer) - the maximum total length (in DP words) of all
                               messages which can be broadcasted before the
                               global synchronization call is done.
                               Relevant if MPISNC=.true. Default is dependent
                               on the number of processes used (meaningful values
                               vary from 20000 to, say, 262144 or even more).
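As a sketch, these keywords would be combined in the $SYSTEM group of the input file like this (the values are illustrative only; keep the defaults unless you actually hit MPI-related problems):

```
 $SYSTEM MPISNC=.TRUE. MXBCST=16384 MXBNUM=50 LENSNC=65536 $END
```

Here MPISNC=.TRUE. enables the periodic global synchronization, and the other three keywords tune how often it occurs and how large individual broadcasts may be.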


Last updated: March 18, 2009