Veinardi Suendo
vsuendo@chem.itb.ac.id
Thank you very much for your explanation and suggestion. We will try to rerun our cancelled calculation; I hope it will give good results.
Best regards,
Veinardi
On Thu May 21 '09 8:50pm, Alex Granovsky wrote
----------------------------------------------
>Hi,
>1. The memory allocated using MWORDS/MEMORY keywords is allocated
>on a per process instance basis. You are trying to allocate
>approximately 4*3GB = 12 GB RAM but only have 8 GB of RAM available.
>This causes excessive paging, which seriously degrades performance
>and shortens the lifetime of your HDDs.
>Try rerunning your job requesting only, e.g., 260 MWORDS, or perhaps
>an even smaller amount.
>Actually, there are already several threads on this forum discussing
>optimal memory allocation strategies for parallel runs on SMP/multi-core systems.
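The per-process arithmetic behind point 1 can be sketched as follows (a sketch only; it assumes Firefly counts MEMORY in 8-byte words and starts one process per core, consistent with the figures quoted in this thread):

```python
# Rough per-process memory estimate for the settings discussed in this thread.
# Assumption: MEMORY is given in 8-byte words and is allocated per process.

WORD_BYTES = 8                  # one word = 8 bytes on a 64-bit build
memory_words = 400_000_000      # MEMORY=400000000 from the posted input file
n_processes = 4                 # one process per core on a Core2 Quad

per_process_gb = memory_words * WORD_BYTES / 1024**3
total_gb = per_process_gb * n_processes

print(f"per process: {per_process_gb:.1f} GB, total: {total_gb:.1f} GB")
# ~3 GB per process, ~12 GB in total -- well above the 8 GB of physical
# RAM, so the machine is forced into heavy swapping.
```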
>2. Note that the following p2p settings are only relevant for MP2 at present:
>
> $P2P xdlb=.t. mxbuf=2048 $END
>Moreover, mxbuf=2048 (the p2p buffer size in kilobytes) is only
>recommended for the shared-memory version of the p2p interface, which
>is currently available only on Windows and OS X. Otherwise, the much
>smaller default size is optimal. As it stands, you are setting the p2p
>buffer to 2 MB per process; it is allocated (and thus consumes some
>extra memory) but is never used.
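Putting both points together, a safer header for this Linux box might look as follows. This is only a sketch, not verified against this particular Firefly version: MEMORY follows the 260 MWORDS suggestion above, and the MP2-only/shared-memory-only $P2P options are dropped.

```
 $SYSTEM TIMLIM=31536000 MEMORY=260000000 KDIAG=0 FASTF=.TRUE. $END
 $P2P p2p=.t. dlb=.t. $END
```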
>Regards,
>Alex
>On Wed May 20 '09 10:04am, Veinardi Suendo wrote
>------------------------------------------------
>>Dear Prof. Alex and Firefly users,
>>Recently we have been working on CI calculations of the UV absorption
>>spectra of porphyrin-related molecules using Firefly. However, we ran
>>into problems when using a slightly larger basis set, and also with
>>TDDFT calculations. The calculation seems to run properly, but memory
>>becomes an important issue: the processors sit idle for long periods
>>while the program takes almost all of the memory. We ran the
>>calculation on a single PC with an Intel Core2 Quad Q9400 and 8 GB of
>>RAM, under 64-bit CentOS 5.3. At first everything was fine and the
>>program used all cores at 100 % (4x100%); however, once the CI
>>calculation started, it took all 8 GB of RAM plus 8 GB of swap, and I
>>do not know why. Is there an option to divide the job efficiently, so
>>that the processors are not left idle so much? Below is the header of
>>our input file:
>> $CONTRL SCFTYP=RHF CITYP=TDDFT INTTYP=HONDO
>> EXETYP=RUN MAXIT=100 MULT=1 LOCAL=NONE ICUT=11
>> DFTTYP=B3LYP ITOL=30 D5=1 ICHARG=0 $END
>> $SYSTEM TIMLIM=31536000 MEMORY=400000000 KDIAG=0 FASTF=.TRUE. $END
>> $P2P p2p=.t. dlb=.t. xdlb=.t. mxbuf=2048 $END
>> $SMP Call64=.t. $END
>> $BASIS GBASIS=N31 NGAUSS=6 NDFUNC=1 NPFUNC=1 $END
>> $D5 D5=.TRUE. F7=.TRUE. G9=.TRUE. $END
>> $GUESS GUESS=MOREAD NORB=302 KDIAG=0 PRTMO=.TRUE. $END
>> $INTGRL SCHWRZ=.TRUE. NOPK=1 PACKAO=.TRUE. $END
>> $TDDFT NSTATE=60 ISTATE=1 TDA=.FALSE. $END
>> $SCF DIRSCF=.TRUE. FDIFF=.FALSE. NCONV=6
>> EXTRAP=.TRUE. DAMP=.TRUE. SHIFT=.TRUE. RSTRCT=.TRUE.
>> DIIS=.TRUE. SOSCF=.FALSE. $END
>> $STATPT OPTTOL=0.0001 NSTEP=400 Method=GDIIS HESS=GUESS $END
>> $DATA
>>I am looking forward to hearing from you.
>>Best regards,
>>Veinardi