
MD on gpu SCF fast update slow: solution?

Posted: Fri Apr 22, 2022 8:16 am
by paulfons
I have a system of 428 atoms consisting of graphene and 100 water molecules. The system is quite large, so I am running with vasp_gam. When I run it on 32 cores, the SCF steps are quite slow, with several minutes between steps. When I run the same input file with the OpenACC GPU version of vasp_gam, the SCF loops are very quick -- perhaps a few seconds per iteration -- but the update after the completion of an SCF loop is very slow and takes more than ten minutes. I note that the GPU version must be run as "mpirun -n 1 vasp_gam", i.e. with a single MPI rank. Is this the cause of the slow update (presumably updating the forces and calculating the new positions)? More importantly, is there something that can be done to speed up the calculation through INCAR (or other) settings? It seems strange that the GPU SCF steps are so fast, yet the updates for the next ionic move are a couple of orders of magnitude slower.
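For reference, the two launch commands being compared are roughly the following (a sketch only; the 32-core count and the single-rank GPU launch are as described above, and the GPU build is assumed to be the OpenACC port):

```
# CPU run: 32 MPI ranks, one per core
mpirun -n 32 vasp_gam

# GPU (OpenACC) run: a single MPI rank driving the GPU
mpirun -n 1 vasp_gam
```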

Re: MD on gpu SCF fast update slow: solution?

Posted: Mon Apr 25, 2022 9:17 am
by ferenc_karsai
Please send us the following files:

INCAR, KPOINTS, POSCAR, POTCAR
OUTCAR for the GPU calculation
OUTCAR for the pure CPU calculation

Is it a calculation using ML_LMLFF=.TRUE. (machine-learned force fields)?

Re: MD on gpu SCF fast update slow: solution?

Posted: Thu Apr 28, 2022 7:02 am
by paulfons
I am sorry for the delay in replying. This calculation was purely ab initio. The job was run with a single MPI rank on the GPU using "mpirun -n 1 vasp_gam". The terminal output is in the file nohup.log in the attached tar file, along with all of the input files. I was truly impressed with the speed of the SCF loops, but I don't understand why the ionic steps were so slow. I would be glad to try out any suggestions.

Re: MD on gpu SCF fast update slow: solution?

Posted: Tue May 17, 2022 4:53 am
by davidnormal
Maybe try the OpenMP parallelization?

`export OMP_NUM_THREADS=4`

See if there is any improvement.
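A minimal sketch of how that suggestion might be combined with the single-rank GPU launch described above (4 threads is just an illustrative value, not a tuned setting, and it assumes an OpenMP-enabled build):

```
# use 4 OpenMP threads per MPI rank
export OMP_NUM_THREADS=4

# keep a single MPI rank for the GPU, as before
mpirun -n 1 vasp_gam
```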