Either several (at least two) Intel/AMD-based computers with identical or similar hardware configurations, running in a local network environment under Windows NT 4.0, Windows 2000, Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008, or Windows 7. Each computer can be either a single-CPU workstation or a dual- (four-, eight-, etc.) CPU/core SMP/multicore system; it does not matter.
Or, alternatively, a single Intel/AMD-based SMP/multicore system running under Windows NT 4.0/Win2K/WinXP/Win2003/WinVista/Win2008. In this case, it is desirable (although not necessary) to have a high-quality hardware RAID controller installed as well; this will considerably improve the overall performance of disk-intensive jobs. Other things that can help are:
The TCP/IP protocol must be installed and configured correctly on each system.
The 32-bit MPICH.NT package must be installed on all computers that will be used to run PC GAMESS/Firefly in parallel. Please consult the MPICH.NT documentation before you start experimenting with parallel PC GAMESS/Firefly runs.
The PC GAMESS/Firefly binaries must be present on all the computers on which you plan to run PC GAMESS/Firefly in parallel, as well as the proper MPI binding DLL.
Read the documentation on mpirun (part of MPICH.NT), its options (run mpirun without any parameters to see the list of supported options), and the format of the configfile.
Finally, carefully read these MUST READ documents:
Consult the MPICH.NT documentation and create an appropriate configfile. Below is an example:
    exe C:\PCGAMESS\PCGAMESS.EXE
    args -o C:\WORK\chk.out D:\PCGAMESS\1 C:\PCGAMESS\1 C:\PCGAMESS\2
    hosts
    P4 1
    DUATH2 2
Use mpirun to run PC GAMESS/Firefly in parallel on your Windows cluster. The simplest command-line arguments to pass to parallel PC GAMESS/Firefly jobs are as follows:
      DIR0 DIR1 DIR2 ... DIRN
Here, DIR0, DIR1, DIR2, etc. are the working directories of the master PC GAMESS/Firefly process (i.e., of MPI RANK=0), of the second instance of PC GAMESS/Firefly (MPI RANK=1), of the third instance, and so on. Both absolute and relative paths are allowed. Relative paths are relative to the initial working directory from which you launched PC GAMESS/Firefly.
For the particular configfile above, D:\PCGAMESS\1 must exist on host P4, while C:\PCGAMESS\1 and C:\PCGAMESS\2 must exist on host DUATH2 prior to PC GAMESS/Firefly execution! The input file must be in the master working directory (i.e., in D:\PCGAMESS\1 for this example).
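With the directories prepared, the job is launched via mpirun by pointing it at the configfile. As a sketch only, assuming the configfile above was saved as C:\PCGAMESS\config.txt (the file name and location are arbitrary choices for this illustration), the invocation could look like:

    mpirun C:\PCGAMESS\config.txt

Consult the MPICH.NT mpirun documentation for the exact set of supported options (run mpirun without any parameters to list them).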
When running PC GAMESS/Firefly in parallel on a standalone SMP system, performance degradation is possible because of simultaneous I/O operations. In this case, the use of a high-quality RAID or separate physical disks can help. If the problem persists, for SMP/multicore systems with two or more (e.g., 4 or 8) CPUs/cores the better solution is probably to switch to direct computation methods, which require much less disk I/O.
The default value for AOINTS is DUP. It is probably optimal for low-speed networks (10 and 100 Mbps Ethernet). On the other hand, for faster networks and SMP systems the optimal value could be AOINTS=DIST. You can change the default by using the AOINTS keyword in the $SYSTEM group, so you can check which setting is faster on your systems.
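For example, to try distributed integral storage, one would add the AOINTS keyword to the $SYSTEM group of the input file (a minimal fragment, using only the keyword described above):

     $SYSTEM AOINTS=DIST $END

Run the same job once with AOINTS=DUP and once with AOINTS=DIST and compare the timings to find the faster setting for your hardware.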
There are four keywords in the $SYSTEM group which can help in the case of MPI-related problems. Do not modify the default values unless you are absolutely sure that you need to do this. They are as follows:
MXBCST (integer) - the maximum size (in DP words) of the message used in a broadcast operation. Default is 32768. You can change it to see whether this helps.

MPISNC (logical) - activates the strategy in which calls of the broadcast operation periodically synchronize all MPI processes, thus freeing the wp4 global memory pool. Default is .false. Setting it to .true. should resolve most buffer-overflow problems at the cost of somewhat reduced performance.

MXBNUM (integer) - the maximum number of broadcast operations that can be performed before the global synchronization call is made. Relevant if MPISNC=.true. Default is 100.

LENSNC (integer) - the maximum total length (in DP words) of all messages that can be broadcast before the global synchronization call is made. Relevant if MPISNC=.true. The default depends on the number of processes used (meaningful values vary from 20000 to, say, 262144 or even more).
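As a sketch only, if buffer-overflow problems occur, a $SYSTEM group enabling the synchronization strategy could look like the following (the numeric values shown are illustrative choices, not recommendations — remember that the defaults should not be modified unless you are sure you need to):

     $SYSTEM MPISNC=.TRUE. MXBNUM=50 LENSNC=32768 $END

Setting MPISNC=.TRUE. alone activates the periodic synchronization with the default limits; MXBNUM and LENSNC only tighten or relax how often that synchronization occurs.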
Last updated: March 18, 2009