Dawid
dawid.grabarek@pwr.edu.pl
Thank you for pointing that out. However, it does not work as it is supposed to: again, the performance is the same for 24 and 48 processors. I use the -xp=$NCPUS option, where, again, $NCPUS is the number of processors declared for my batch system. If I pass -xp a number different from the number of processors, e.g. -xp=4 with 24 processors, I get errors like this:
PROCESS 4 FAILED TO OPEN INPUT FILE, ERROR CODE: 21
PROCESS 8 FAILED TO OPEN INPUT FILE, ERROR CODE: 21
PROCESS 12 FAILED TO OPEN INPUT FILE, ERROR CODE: 21
PROCESS 16 FAILED TO OPEN INPUT FILE, ERROR CODE: 21
PROCESS 20 FAILED TO OPEN INPUT FILE, ERROR CODE: 21
Similarly, for other combinations some processes fail to open the input file, even though it is definitely present in both the working and scratch directories. Where does this error come from?
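For reference, the full command I use now is essentially my original call (quoted below) with the -xp option appended; the paths are unchanged:

mpirun -np $NCPUS /home/addiw17/firefly820_linux_openmpi_1.8/firefly820 -ex /home/addiw17/firefly820_linux_openmpi_1.8/ -xp=$NCPUS > firefly.out 2>&1

With -np and -xp both set to $NCPUS the job runs, but the scaling is still flat between 24 and 48 processors; any mismatch between the two values produces the errors above.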
Best regards,
Dawid
On Sat Dec 24 '16 8:47pm, Igor Polyakov wrote
---------------------------------------------
>Dear Dawid,
>Please look for the extreme parallel mode (-xp command line option) in the Firefly manual.
>Best regards, Igor
>On Fri Dec 23 '16 4:34pm, Dawid wrote
>-------------------------------------
>>Dear All,
>>I use the dynamically linked version of Firefly 8.2.0 for Linux/OpenMPI v. 1.8.x.
>>I am currently performing a geometry optimization in the first excited state
>>using the XMC-QDPT2 method with a 2-root SA-CASSCF reference wave function.
>>I have noticed that the performance is the same whether these calculations run on 8 or 48 processors,
>>i.e. the walltime required for one displacement step of the geometry optimization does not change.
>>Is this what I should expect? Is the xmcqdpt2 module properly
>>parallelized? I call Firefly with the following command:
>>mpirun -np $NCPUS /home/addiw17/firefly820_linux_openmpi_1.8/firefly820 -ex /home/addiw17/firefly820_linux_openmpi_1.8/ > firefly.out 2>&1
>>where $NCPUS is the number of CPUs that I declared for my batch system.
>>Best regards,
>>Dawid Grabarek