PC GAMESS v. 7.0.7 benchmarks and scalability on Myrinet Opteron Linux cluster


Number of     Test 1              Test 3              Test 5              Test 6
processors    time (s)  speedup   time (s)  speedup   time (s)  speedup   time (s)  speedup
     1         4990.2     1.00        n/a      n/a        n/a      n/a        n/a      n/a
     2         2485.0     2.01        n/a      n/a        n/a      n/a        n/a      n/a
     4         1263.7     3.95     2337.7     4.00     2759.8     4.00     7039.0     4.00
     8          651.3     7.66     1182.6     7.91     1527.6     7.23     3649.3     7.72
    16          346.3    14.41      618.3    15.12      880.2    12.54     1960.3    14.36
    32          195.3    25.55      314.9    29.69      572.6    19.28     1138.4    24.73

Tests 3, 5, and 6 have no single- and dual-processor results; their relative speedups are normalized so that the 4-processor run is taken as a speedup of 4.00.
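To make that normalization explicit, here is a minimal C sketch (illustrative only, not part of PC GAMESS) that recomputes the Test 5 relative speedups from the wall-clock times in the table, using the 4-processor run as the baseline:

/* Recompute relative speedups from wall-clock times.
 * Tests 3, 5, and 6 start from a 4-processor run assumed to
 * have ideal speedup, so S(p) = 4.00 * T(4) / T(p). */
#include <stdio.h>

int main(void) {
    /* Test 5 (CASSCF) wall-clock times in seconds, from the table above */
    const int    procs[]  = { 4, 8, 16, 32 };
    const double time_s[] = { 2759.8, 1527.6, 880.2, 572.6 };
    const int    n = sizeof procs / sizeof procs[0];

    const double base_speedup = 4.0;       /* assumed ideal at 4 CPUs */
    const double base_time    = time_s[0]; /* T(4) */

    for (int i = 0; i < n; ++i) {
        double speedup = base_speedup * base_time / time_s[i];
        printf("%2d CPUs: %8.1f s, speedup %5.2f\n",
               procs[i], time_s[i], speedup);
    }
    return 0;
}

The same formula reproduces the Test 3 and Test 6 columns; Test 1, which starts from a single-processor run, uses T(1)/T(p) directly.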

 

Graphical representation of scalability (speedup plot not reproduced in this text version)



OS and hardware description


Myricom's private Opteron cluster: 16 nodes, each a dual-processor (single-core) AMD Opteron 248 at 2.2 GHz on a TYAN S2891 baseboard with the nVidia nForce Professional 2200 chipset and 4 GB of DDR 333 RAM. Interconnect: Myri-10G network cards with CX4 ports (10G-PCIE-8A-C) and a Myri-10G switch, running MX-10G (Myrinet Express) version 1.2.0i and MPICH-MX 1.2.7.4 with the MX_RCACHE=1 flag. Operating system: Linux 2.6.17.11 (Ubuntu).



Tests description


Test 1, single-point direct DFT (B3LYP) energy plus gradient for a medium-size system (623 basis functions).

Test 3, single-point direct MP2 energy for a medium-size system (623 basis functions; the same system as in Test 1).

Test 5, single-point direct CASSCF(12,12) for a medium-size system (retinal molecule, cc-pVDZ, 565 Cartesian basis functions) using the ALDET code.

Test 6, single-point direct CIS energy plus gradient of the first excited state of a medium-size system (porphyrin molecule, cc-pVTZ (aug-cc on nitrogens), 1130 Cartesian basis functions, D2h group).


Test comments


All tests were run in standard parallel mode using dynamic load balancing over the p2p interface, with two processes per dual-CPU node. Wall clock times, in seconds, are those measured on the master node. Test 5 is the most communication-intensive and would scale better for a larger job. The most heavily used MPI calls in these tests are MPI_Allreduce and MPI_Bcast.
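For illustration only (this is not PC GAMESS source code), the sketch below shows the pattern these two calls typically implement in a run like this: rank 0 broadcasts shared input with MPI_Bcast, every process computes a partial result over its share of the work, and MPI_Allreduce makes the global sum available on all processes:

/* Broadcast-then-reduce pattern using the two MPI calls named above. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Master broadcasts shared input data (here, a single parameter). */
    double input = (rank == 0) ? 1.0 : 0.0;
    MPI_Bcast(&input, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* Each process computes a partial result over its share of the work
     * (a stand-in for dynamically balanced batches of integral work). */
    double partial = input * rank;

    /* Global sum, made available on every process. */
    double total = 0.0;
    MPI_Allreduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d processes: %f\n", nprocs, total);

    MPI_Finalize();
    return 0;
}

With MPICH-MX, such a program would be launched through mpirun (for example, mpirun -np 32 ./a.out) with MX_RCACHE=1 set in the environment, matching the software configuration described above.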


We are grateful to Scott Atchley (Myricom) for providing us with the benchmark data from this cluster.

Copyright © 2007 by Alex A. Granovsky

Press to visit PC GAMESS v. 7.0.7 benchmarks and scalability on large Opteron Pathscale Infinipath Infiniband Linux cluster page

Press to visit PC GAMESS v. 7.0.4 benchmarks and scalability on SKIF K-1000 (another large Opteron Infiniband Linux cluster) page

Press to visit PC GAMESS v. 7.0.4 benchmarks and scalability on 21-node Pentium 4 Infiniband Linux cluster page

Press to visit PC GAMESS' eight-core systems performance comparison page

Press to visit PC GAMESS' Woodcrest vs. Opteron performance comparison page

Press to visit PC GAMESS Pentium 4 family Xeon processor benchmarks page to compare the results of these benchmarks with those obtained on Xeon DP processors.

Press to visit PC GAMESS Pentium 4 family benchmarks page to compare the results of these benchmarks with those obtained on various Netburst (Pentium 4 and Pentium D) processors.

Press to visit the PC GAMESS vs. WinGamess performance comparison page to compare the results of these benchmarks with those obtained on older processors. Input files can be found there too.