Alex Granovsky
gran@classic.chem.msu.su
This problem has been traced to a bug in the MKL matrix diagonalization
routine when it runs in multi-threaded mode on your particular
hardware (Core 2 Quad). The problem can be avoided by adding the following line to your input:
$smp dianp=1 $end
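For illustration, this is roughly how the top of the keyword section would look with the workaround in place; the $SYSTEM line below is taken from your own input, and all other groups stay exactly as they are:
 $SYSTEM TIMLIM=60000 MWORDS=200 MKLNP=2 NP=4 $END
 $SMP DIANP=1 $END
My understanding is that dianp=1 keeps the diagonalization single-threaded and thus off the buggy multi-threaded MKL code path, so the failure in DOVESHIFTX should no longer occur.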
Kind regards,
Alex
On Thu Mar 30 '17 9:46pm, Bernhard Dick wrote
---------------------------------------------
>Dear Alex,
>here is a ZIP with the requested files.
>best regards,
>Bernhard
>
>
>On Tue Mar 28 '17 3:33pm, Alex Granovsky wrote
>----------------------------------------------
>>Dear Bernhard,
>>I'm sorry for the delay in my reply. Could you please send me the entire
>>input file, including the $vec group, and also your basis set definition file?
>>All the best,
>>Alex
>>
>>
>>On Wed Mar 22 '17 0:37am, Bernhard Dick wrote
>>---------------------------------------------
>>>Dear Alex,
>>>I get a crash of an MCSCF run with the error message
>>> Fatal error in DOVESHIFTX, error code = -1
>>>immediately after the printout of the " DENSITY MATRIX OVER CANONICALIZED ACTIVE MO-S". The next part of the output should be
>>> -----------------------
>>> -MCHF- NATURAL ORBITALS
>>> -----------------------
>>>...
>>>The error is reproducible with this particular input. However, I have successfully run two other rather similar jobs (only singlets instead of triplets, and a different ISTSYM). Here is the keyword part of the input:
>>> $SYSTEM TIMLIM=60000 MWORDS=200 MKLNP=2 NP=4 $END
>>> $CONTRL SCFTYP=MCSCF RUNTYP=ENERGY EXETYP=RUN MULT=3 ICHARG=0
>>> MAXIT=200 COORD=UNIQUE fstint=.t. gencon=.t. inttyp=hondo icut=11
>>> NZVAR=0 MPLEVL=2 NOSYM=0 D5=.T. $END
>>> $BASIS GBASIS=cc-pvdz extfil=.t. $END
>>> $STATPT METHOD=GDIIS NSTEP=30 NOREG=5 HSSEND=.F. $END
>>> $MOORTH nostf=1 nozero=1 syms=1 symden=1 symvec=1 $end
>>> $SCF DIRSCF=.TRUE. DIIS=.TRUE. $END
>>> $MCSCF CISTEP=GUGA fullnr=.F. MAXIT=50 MINMEM=.T.
>>> fors=.t. acurcy=1d-7 engtol=1d-12 $END
>>> $DRT GROUP=C2 ISTSYM=2 NMCC=63 NDOC=5 NALP=2 NVAL=5 FORS=.T. $END
>>> $trans dirtrf=.t. $end
>>> $GUGDM2 wstate(1)=1 cutoff=1d-12 $end
>>> $GUGDIA ITERMX=2000 NSTATE=1 cvgtol=1d-7 $END
>>> $gugem pack2=.t. cutoff=1d-12 $end
>>> $XMCQDPT EDSHFT=0.02 ISTSYM=2 KSTATE(1)=1 $END
>>> $GUESS GUESS=MOREAD NORB=75 $END
>>>This is what Firefly says about the computer:
>>> Intel Core2/ Win32 Firefly version running under Windows NT
>>> Running on Intel CPU: Brand ID 0, Family 6, Model 23, Stepping 10
>>> CPU Brand String : Intel(R) Core(TM)2 Quad CPU Q9400 @ 2.66GHz
>>> CPU Features : x87 FPU, CMOV, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, MWAIT, EM64T
>>> Data cache size : L1 32 KB, L2 3072 KB, L3 0 KB
>>> max # of cores/package : 4
>>> max # of threads/package : 4
>>> max cache sharing level : 2
>>> Operating System successfully passed SSE support test.
>>>and somewhat later:
>>> This job is executing on 1 unique host(s)
>>> Minimum number of processes per host is: 1
>>> Maximum number of processes per host is: 1
>>> On master's host, detected 4 CPU core(s) in aggregate
>>> DGEMM will use 2 threads.
>>> Matrix diagonalization and inversion will use 2 threads.
>>> SMP/multicore aware parts of program will use 4 threads.
>>> Creating thread pool to serve up to 128 threads.
>>> Activating Call64 option.
>>>The two successful runs used the same computer. I am not very familiar with the MKLNP and NP parameters; I usually run in parallel with P2P, but I wanted to try XMCQDPT, and that apparently does not run in parallel.
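>>>For reference, this is how I read my settings against the output above (my assumption, please correct me if it is wrong):
>>> $SYSTEM TIMLIM=60000 MWORDS=200 MKLNP=2 NP=4 $END
>>>i.e. MKLNP=2 gives the 2 threads reported for DGEMM and for matrix diagonalization/inversion, and NP=4 gives the 4 threads for the SMP/multicore aware parts of the program.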
>>>best regards,
>>>Bernhard
>>>