Hello,
I am trying to run a GW calculation with VASP version 6.5.1 but keep running into out-of-memory problems. During each run, VASP prints a warning about insufficient memory, quoting both the available memory per rank and the required memory. However, every time I increase the available memory, the quoted requirement also goes up, so I never have enough and the job always crashes. I have tried both increasing the total memory available and reducing the number of ranks.
I have tried running on 8 nodes x 14 tasks per node x 8 cpus per task, with a total allocation of 3.50 TB of memory. The WAVECAR that the calculation reads is 2 GB. The unit cell contains 152 atoms with a total of 320 electrons, and I use 8 k-points, all equivalent by symmetry. I am using the standard PAW_PBE potentials. All my parameters are shown below, and I have attached a zip file with the calculation inputs and the corresponding OUTCAR files. The workflow is to run VASP with INCAR.DFT, then INCAR.DIAG, then INCAR.GW; the out-of-memory error occurs on the third step, the GW step.
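For reference, the run order can be sketched as follows (the INCAR file names are the ones from my zip file; the echoed lines only illustrate the sequence and stand in for the actual copy and mpirun commands):

```shell
#!/bin/bash
# Illustration of the run order only: DFT ground state (INCAR.DFT), exact
# diagonalisation (INCAR.DIAG), then the GW step (INCAR.GW) that crashes.
plan=$(for step in DFT DIAG GW; do
    echo "cp INCAR.${step} INCAR"
    echo "mpirun -np \$SLURM_NTASKS vasp_std"
done)
echo "$plan"
```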
Does it make sense for the memory requirement to be this large when the WAVECAR is only 2 GB? And what could I try in order to reduce the memory requirement without losing convergence?
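For context on the scale I would naively expect: my understanding is that in a GW run the dominant array is usually not the orbitals from WAVECAR but the frequency-dependent response function chi(G,G',omega), which stores NOMEGA complex double-precision (16-byte) matrices of dimension NPW_chi x NPW_chi. A rough back-of-envelope, where NPW_CHI = 4000 is only a placeholder guess (not a value from my OUTCAR) and NOMEGA = 25 is from the INCAR below:

```shell
#!/bin/bash
# Back-of-envelope only: storage for NOMEGA complex (16-byte) NPW x NPW
# matrices. NPW_CHI is an assumed basis size, not read from any output file.
NOMEGA=25       # from the INCAR
NPW_CHI=4000    # placeholder guess for the response-function basis size
awk -v n="$NPW_CHI" -v w="$NOMEGA" 'BEGIN {
    printf "chi storage: %.1f GB\n", n * n * w * 16 / 2^30
}'
```

Even with this guessed basis size, a single frequency-resolved quantity is already a few times larger than the WAVECAR, so I can see how the total could exceed 2 GB, but not how it could approach my full 3.50 TB allocation.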
INCAR:
Code:
ISMEAR = 0
NBANDS = 600
ENCUT = 400
EDIFF = 1E-8
ALGO = EVGW0
LSPECTRAL = .TRUE.
NOMEGA = 25
NELM = 1
PRECFOCK = Fast
KPOINTS:
Code:
Automatically generated mesh
0
Gamma
2 2 2
0 0 0
Submit script:
Code:
#!/bin/bash -l
#SBATCH --time 10-00:00:00
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=14
#SBATCH --cpus-per-task=8
#SBATCH --exclusive
module purge
module load bluebear
module load bear-apps/2024a/live
module load VASP/6.5.1-intel-2024a-Wannier90
export OMP_NUM_THREADS=1
export TERM=xterm
mpirun -np $SLURM_NTASKS vasp_std
exit
