Problems running VASP: crashes, internal errors, "wrong" results.
Moderators: Global Moderator, Moderator
asrosen (Newbie; Posts: 27; Joined: Wed Oct 18, 2023 4:51 pm)
#1 by asrosen » Wed Feb 26, 2025 10:34 pm
I have compiled VASP 6.5.0 with libMBD 0.12.8 support using Intel OneAPI 2024.2. I ran a calculation with IVDW = 14 on one system, and everything worked as expected. However, when I try to model a primitive cell of Cu, it seg faults at geometry step 2.
Some things I tried that did not help:
- Switching to the master branch of libMBD
- Using only one core instead of the full node
- Using a full 1 TB of memory on the node
- Using a larger, off-equilibrium structure consisting of 8 Cu atoms (here it seg faulted at the first step)
- Using Ag instead of Cu
I have attached the relevant files. I used PAW_PBE Cu 22Jun2005 for the POTCAR.
ferenc_karsai (Global Moderator; Posts: 581; Joined: Mon Nov 04, 2019 12:44 pm)
#2 by ferenc_karsai » Thu Feb 27, 2025 9:40 am
Please also post the makefile.include you used and your complete toolchain.
#3 by asrosen » Thu Feb 27, 2025 12:48 pm
Of course. My makefile.include is shown below. For the toolchain, it's Intel OneAPI 2024.2 for the compilers, Intel OneAPI MPI 2021.13 for MPI, Intel OneAPI MKL 2024.2 for FFT, BLAS, LAPACK, and ScaLAPACK, and HDF5 1.14.4 built with Intel OneAPI 2024.2. I also built with DFTD4 3.7.0 support, although removing that does not seem to change anything.
Code:
# Default precompiler options
CPP_OPTIONS = -DHOST=\"LinuxIFC\" \
              -DMPI -DMPI_BLOCK=8000 -Duse_collective \
              -DscaLAPACK \
              -DCACHE_SIZE=4000 \
              -Davoidalloc \
              -Dvasp6 \
              -Dtbdyn \
              -Dfock_dblbuf \
              -D_OPENMP
CPP = fpp -f_com=no -free -w0 $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS)
FC = mpiifort -fc=ifx -qopenmp
FCL = mpiifort -fc=ifx
FREE = -free -names lowercase
FFLAGS = -assume byterecl -w
OFLAG = -O2
OFLAG_IN = $(OFLAG)
DEBUG = -O0
# For what used to be vasp.5.lib
CPP_LIB = $(CPP)
FC_LIB = $(FC)
CC_LIB = icx
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB = $(FREE)
OBJECTS_LIB = linpack_double.o
# For the parser library
CXX_PARS = icpx
LLIBS = -lstdc++
##
## Customize as of this point! Of course you may change the preceding
## part of this file as well if you like, but it should rarely be
## necessary ...
##
# When compiling on the target machine itself, change this to the
# relevant target when cross-compiling for another architecture
VASP_TARGET_CPU ?= -xHOST
FFLAGS += $(VASP_TARGET_CPU)
# Intel MKL (FFTW, BLAS, LAPACK, and scaLAPACK)
# (Note: for Intel Parallel Studio's MKL use -mkl instead of -qmkl)
FCL += -qmkl
MKLROOT ?= /opt/intel/oneapi/mkl/2024.2
LLIBS += -L$(MKLROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
INCS = -I$(MKLROOT)/include/fftw
# HDF5-support (optional but strongly recommended, and mandatory for some features)
CPP_OPTIONS+= -DVASP_HDF5
HDF5_ROOT ?= /usr/local/hdf5/oneapi-2024.2/intel-mpi/1.14.4
LLIBS += -L$(HDF5_ROOT)/lib -lhdf5_fortran
INCS += -I$(HDF5_ROOT)/include
# For the VASP-2-Wannier90 interface (optional)
#CPP_OPTIONS += -DVASP2WANNIER90
#WANNIER90_ROOT ?= /path/to/your/wannier90/installation
#LLIBS += -L$(WANNIER90_ROOT)/lib -lwannier
# For the fftlib library (hardly any benefit in combination with MKL's FFTs)
#FCL = mpiifort fftlib.o -qmkl
#CXX_FFTLIB = icpc -qopenmp -std=c++11 -DFFTLIB_USE_MKL -DFFTLIB_THREADSAFE
#INCS_FFTLIB = -I./include -I$(MKLROOT)/include/fftw
#LIBS += fftlib
# For machine learning library vaspml (experimental)
#CPP_OPTIONS += -Dlibvaspml
#CPP_OPTIONS += -DVASPML_USE_CBLAS
#CPP_OPTIONS += -DVASPML_USE_MKL
#CPP_OPTIONS += -DVASPML_DEBUG_LEVEL=3
#CXX_ML = mpiicpc -cxx=icpx -qopenmp
#CXXFLAGS_ML = -O3 -std=c++17 -Wall
#INCLUDE_ML =
# DFTD4
CPP_OPTIONS += -DDFTD4
LLIBS += $(shell pkg-config --libs dftd4) -lmulticharge -lmctc-lib -lmstore
INCS += $(shell pkg-config --cflags dftd4)
# LibMBD
CPP_OPTIONS += -DLIBMBD
LIBMBD_ROOT ?= /home/ROSENGROUP/software/libmbd/0.12.8
LLIBS += -L$(LIBMBD_ROOT)/lib64 -lmbd
INCS += -I$(LIBMBD_ROOT)/include/mbd
#4 by asrosen » Thu Feb 27, 2025 7:55 pm
A note for debugging purposes: if you are using libMBD 0.12.8 with Intel OneAPI, you need to make the change described in https://github.com/libmbd/libmbd/issues/65. Alternatively, you can use the master branch if you compile with -DCMAKE_BUILD_TYPE=Release.
In any case, here is how I built libMBD:
Code:
cmake -B build -DCMAKE_BUILD_TYPE=Release -DCMAKE_Fortran_COMPILER=ifx -DCMAKE_INSTALL_PREFIX=/home/ROSENGROUP/software/libmbd/0.12.8
make -C build install
The same error is obtained even with -DENABLE_SCALAPACK_MPI=ON.
#5 by ferenc_karsai » Fri Feb 28, 2025 9:49 am
I've handed over this problem to a colleague who wrote the libMBD interface in VASP. He will post suggestions when he knows more.
fabien_tran1 (Global Moderator; Posts: 502; Joined: Mon Sep 13, 2021 11:02 am)
#6 by fabien_tran1 » Fri Feb 28, 2025 2:30 pm
Hi,
I can reproduce the problem; it does not matter which compiler is used. This is probably the same problem that I encountered when I wrote the interface, and I think the (vague) explanation was that libMBD stops when unphysical values are obtained. A solution would be to use the implementation of MBD@rsSCS in VASP (IVDW=202), which is, however, slower.
#7 by asrosen » Fri Feb 28, 2025 2:41 pm
Thanks for confirming it was not a "me" issue!
Unfortunate about the crash. We probably can't afford the more expensive built-in approach, so we will have to forgo this method for the time being.
chuanlu_yang1 (Newbie; Posts: 7; Joined: Tue Apr 28, 2020 2:24 pm)
#8 by chuanlu_yang1 » Wed Oct 01, 2025 2:58 pm
Dear All,
I have met the same problem. I compiled libMBD 0.12.8 (in fact, the master branch was also tested and the error is the same) with VASP 6.5.1 using the following command:
cmake -B build FC=mpiifx CC=mpiicx CXX=mpiicpx -DENABLE_SCALAPACK_MPI=ON
When running an optimization task in VASP with IVDW=14, the calculation completes the first ionic step and reports the parallel mode as either k-points or atoms. However, at the second ionic step, the calculation fails, with the parallel mode switching to none.
I have tested different systems and various numbers of MPI processes (-np xx, including -np 1), but all cases lead to the same error.
Any suggestions would be greatly appreciated.
Thanks in advance,
Eric
#9 by fabien_tran1 » Thu Oct 02, 2025 8:50 am
Hi,
Could you please provide the input files of your calculations?
#10 by chuanlu_yang1 » Thu Oct 02, 2025 9:58 am
Dear fabien_tran1 , My INCAR is:
SYSTEM = WTe2-S
ISTART=0
ICHARG=2
ISMEAR = 0
SIGMA = 0.05
PREC = Accurate
ALGO = N
GGA = PE
NSW=1000
IBRION=2
ISIF=3
LOPTICS = .TRUE.
#KPAR=4
NPAR=1
NCORE=1
ENCUT=600
EDIFF = 1E-06
EDIFFG = -0.01
IVDW=14
#11 by fabien_tran1 » Thu Oct 02, 2025 10:44 am
Thanks, but please also post your POSCAR and KPOINTS.
#12 by chuanlu_yang1 » Thu Oct 02, 2025 2:35 pm
Thanks!
POSCAR:
WS2
1.00000000000000
5.7228794617476364 0.0000000000000000 0.0000000000000000
0.0000000000000000 3.3030984096326228 0.0000000000000000
0.0000000000000000 0.0000000000000000 31.5905332914634585
W S
2 4
Direct
0.8333020359500113 0.5000001089999984 0.5000108540000028
0.3333019519500115 -0.0000000000000000 0.5000108540000028
0.0000156555249908 0.0000000000000000 0.5530572995125599
0.5000156555249907 0.5000000000000000 0.5530572995125599
0.0000156555249908 0.0000000000000000 0.4469642884874428
0.5000156555249907 0.5000000000000000 0.4469642884874428
KPOINTS:
000
0
G
3 5 1
0 0 0
#13 by fabien_tran1 » Fri Oct 03, 2025 8:46 am
Hi,
I can reproduce your problem. libMBD crashes, and this does not seem related to MPI parallelization, since the crash also occurs in non-MPI mode. This is probably due to some problem in libMBD and not in VASP.
At the moment, I can only suggest that you use the method directly implemented in VASP. You did not specify the LIBMBD_METHOD tag in INCAR, which means that the default mbd-rsscs method is used. The corresponding method in VASP is IVDW=202, which is unfortunately slower than in libMBD.
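For reference, switching from the libMBD interface to the built-in implementation only requires changing the vdW tag in the INCAR posted above; a minimal sketch (all other tags unchanged):

```
# use VASP's built-in MBD@rsSCS instead of the libMBD interface
IVDW = 202
```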
#14 by chuanlu_yang1 » Sat Oct 04, 2025 11:00 am
I have traced the libMBD workflow carefully and found that the issue originates in the geometry transfer: during the second ionic step, the lattice parameters within libMBD are set to zero. This may be related to the VASP call, the libMBD interface, or the internal handling of the geometry data. Unfortunately, I have not yet found a solution.
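To illustrate the symptom: if one adds a debug print of the lattice handed to libMBD at each ionic step (the print itself would have to be inserted into the interface by hand; variable names below are hypothetical), the failure shows up as an all-zero 3x3 matrix at step 2. A minimal sketch of the check on such logged matrices:

```python
def first_zero_lattice(lattices, tol=1e-12):
    """Return the 1-based ionic step whose 3x3 lattice is entirely zero,
    or None if every step has a non-zero lattice.

    `lattices` is a sequence of 3x3 nested lists, one per ionic step,
    as one might log from the VASP -> libMBD geometry transfer.
    """
    for step, lattice in enumerate(lattices, start=1):
        if all(abs(x) < tol for row in lattice for x in row):
            return step
    return None


# The pattern reported in this thread: step 1 has a sane lattice,
# step 2 arrives as all zeros (values here are illustrative only).
cu = [[3.61, 0.0, 0.0], [0.0, 3.61, 0.0], [0.0, 0.0, 3.61]]
zero = [[0.0, 0.0, 0.0]] * 3
print(first_zero_lattice([cu, zero]))  # -> 2
```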
#15 by fabien_tran1 » Mon Oct 06, 2025 7:12 am
Thanks for tracking down the problem. I will also try to figure out what the problem is. Could you please give more details about your findings?