Modeling aqueous Potassium Hydroxide, energy drifts in AIMD calculations

jelle_lagerweij
Newbie
Posts: 15
Joined: Fri Oct 20, 2023 1:13 pm

Modeling aqueous Potassium Hydroxide, energy drifts in AIMD calculations

#1 Post by jelle_lagerweij » Mon Oct 23, 2023 10:10 am

Dear All,
I am a PhD candidate who started simulating KOH (aq) 5 months ago. My background is in molecular dynamics, mostly with LAMMPS, and I am now moving into QM simulations. This is mostly out of interest in MLFFs, which might provide my field with many useful insights. However, my question concerns a pure AIMD calculation, as the QM setup should be correct before starting an ML simulation. To introduce my research first: I am modeling a liquid-phase system of 110 H2O molecules and 1 KOH. My goal is to calculate the transport properties of this mixture. This is difficult with traditional MD because the OH- reacts with H2O molecules, and reactive force fields are typically inaccurate. This reaction is similar to the more famous process that H+ undergoes in water; both are typically called Grotthuss transfer reactions. I want to stress that I am not after the most accurate QM calculations possible. My aim is to obtain atomic forces adequate for modeling the atomic diffusion and the reactions while optimizing for calculation speed, as AIMD simulations for transport properties need over 100 000 time steps and computation time is scarce.

Now let's get to my simulation settings. I have (as per the instructions) attached a *.zip directory with my simulation inputs. I am running my simulations with the following POTCARs: PAW_PBE H_h_GW 12Apr2008, PAW_PBE O_GW_new 19Mar2012, and PAW_PBE K_sv_GW 31Mar2010. I selected these because I aim to use two different functionals: the GGA functional RPBE+DFT-D3 (a good reference functional with ample experience for water systems) and the metaGGA functional rVV10-r2SCAN (the literature states that this functional should be very good at modeling liquid systems). This means that the O and K POTCARs should support metaGGA functionals, which they do. I added both INCARs to the *.zip. I set ENCUT to 550 eV after checking how the properties converge for both functionals, see the figure Determining_ENCUT.png. In the same way, I set the smearing width and found the number of k-points (gamma-centered mesh with 1 k-point). I believe that I have set up both functionals correctly (with their respective vdW corrections), but please give feedback if you notice a mistake here. I set the time step to 0.5 fs and applied the Nosé-Hoover thermostat. Although I want to model 298 K, I increased the temperature a bit: research indicates that the melting temperature of pure water is somewhat elevated with both RPBE-D3 and rVV10-r2SCAN, and I want to make sure I am modeling a liquid-phase system.

During my simulations, I encountered 3 problems:
  • The easiest one is probably the energy drift I found in my systems. For Nosé-Hoover, the pseudo-Hamiltonian energy (which includes the thermostat energies as well) should be strictly conserved. However, I find a drift of approximately 5 eV for my entire system during 1.25 ps, and this drift continues for much longer simulation times. I added a figure showing these drifts as Pseudo_Hamiltionian.png. I assume that this is caused by not converging the electronic loop tightly enough; changing EDIFF to 1e-5 or 1e-6 could help (see the sketch after this list). To get adequate trajectories, what is a good trade-off between speed and accuracy here?
  • A little trickier are the issues I have with the rVV10-r2SCAN functional. It does not converge, although I tried multiple ALGO and PREC settings. An example of the simulation output is attached as r2SCAN.out. The strange part is that my pure H2O system does converge. That is a smaller system (64 H2O molecules), which I used to get my settings right. Somehow, the change in species (potassium is added) and the system size really impact the convergence. Are there tricks to solve this issue?
  • The last issue is more oriented toward my VASP use case. I can run my simulations on 3 different clusters: one with 48-core Intel Cascade Lake nodes, one with 128-core AMD Rome nodes, and one with 192-core AMD Genoa nodes (this feels like a luxury). I investigated the scaling of my VASP install using different numbers of cores and compilers, and I was quite disappointed with the result; I want to know if I did something wrong here. I added two figures indicating the calculation time for 1 ps and the parallel efficiency, as scaling_time.pdf and scaling_efficiency.pdf, respectively. Please do not be thrown off by the logarithmic axes. These scaling graphs are made using only 25 time steps; the initial time steps are more expensive, so I indicated a calculation with 2500 time steps (1.25 ps) as well. What we can see from these graphs is that the parallel efficiency quickly drops below 30%. I compiled my code depending on the platform, but all builds at least use the following packages: gcc 11, OpenMPI, OpenBLAS, netlib-scalapack, FFTW, and HDF5. Additionally, I set the NCORE tag in the INCARs to values from 2 up to the number of cores per socket. Are further improvements possible, or are gamma-point-only calculations with a single k-point just not that easy to scale up?
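For reference, the EDIFF candidates mentioned in the first point would enter the INCAR like this (a sketch; I have not settled on a value yet):

Code: Select all

EDIFF = 1E-5    ! tighter electronic convergence; also testing 1E-6 (the current runs used 5E-4)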
Thanks a lot for reading through this long post, and many thanks in advance for your feedback. On another note, I will take part in the Moving Ions with VASP seminars; maybe I will find additional answers there as well.
With kind regards,
Jelle Lagerweij

marie-therese.huebsch
Full Member
Posts: 182
Joined: Tue Jan 19, 2021 12:01 am

Re: Modeling aqueous Potassium Hydroxide, energy drifts in AIMD calculations

#2 Post by marie-therese.huebsch » Tue Oct 24, 2023 10:33 am

Hi,

Thank you for sharing your detailed description of your research and the challenges you've encountered during your AIMD simulations in VASP.

ad 1) Firstly, EDIFF = 5e-4 is indeed not a very tight convergence criterion, but your system is rather large. If you consider how many occupied bands your system has, the value might suffice. Please also attach the OUTCAR files to facilitate better advice. Secondly, your simulation corresponds to an NVT ensemble. I am not sure why you expect the total energy to be conserved. Of course, an obvious drift in the total energy of the system is concerning. Did you thermalize the system beforehand? Can you perform a blocked average on the temperature and confirm that it is at the desired equilibrium temperature? If not, run the simulation until it thermalizes, and perhaps increase the volume to avoid excess pressure. Also consider adjusting the Nosé mass (SMASS), which corresponds to the frequency of the temperature oscillations during the simulation.
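For concreteness, the thermostat tags I am referring to would be set along these lines (a sketch; the values are placeholders to be tuned for your system, not recommendations):

Code: Select all

MDALGO = 2      ! Nose-Hoover thermostat
SMASS  = 2.0    ! Nose mass, controls the period of the temperature oscillations (placeholder)
TEBEG  = 300    ! target temperature in K (placeholder)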

ad 2) As hinted at in the previous point, you need to consider the system size when choosing EDIFF. Convergence is reached when both the total-energy change (of the electronic degrees of freedom) and the band-structure-energy change ('change of eigenvalues') between two steps of the electronic minimization are smaller than EDIFF (in eV). That criterion is easier to fulfill if there are fewer eigenvalues. I recommend first performing an electronic minimization without updating the ionic positions (NSW=1). You may test increasing NELM to roughly 150 and increasing NBANDS slightly (you can find the default in the OUTCAR). If this fails, try ALGO=Normal, and then try keeping the charge density fixed with ICHARG=11. Please report your findings so I can provide further suggestions.
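In INCAR terms, that test would look roughly like this (the NBANDS value is a placeholder; read the default from your OUTCAR first):

Code: Select all

NSW    = 1       ! (almost) no ionic updates: test the electronic minimization by itself
NELM   = 150     ! allow more electronic steps per SCF cycle
NBANDS = 640     ! slightly above the default reported in the OUTCAR (placeholder)
ALGO   = Normal  ! blocked-Davidson, often more robust if the faster algorithms fail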

ad 3) Let me get back to you on this.

Best regards,
Marie-Therese

PS: I really appreciate the effort you put into preparing this post!

jelle_lagerweij
Newbie
Posts: 15
Joined: Fri Oct 20, 2023 1:13 pm

Re: Modeling aqueous Potassium Hydroxide, energy drifts in AIMD calculations

#3 Post by jelle_lagerweij » Wed Oct 25, 2023 9:26 am

Thanks for your quick reply; that makes this a lot easier. Your answer led to some new insights and also to new questions ;). Firstly, I added all the outputs of a single simulation in the folder OUTPUT in the attached *.zip. Note that these are the raw outputs, and my input file has some mistakes in its comments (for example, one comment says MDALGO selects the Langevin thermostat, although the setting actually selects Nosé-Hoover). The actual inputs should be correct and, except for the temperatures, consistent with my last post. It is only 25 time steps (a total of 12.5 fs), as larger sets introduced problems with file sizes.

There is a specific part of your answer that left me a bit confused. You wrote the following:
Secondly, your simulation corresponds to an NVT ensemble. I am not sure why you expect the total energy to be conserved.
This surprised me, because the Nosé-Hoover thermostat includes the heat bath in the system energies, so the system follows the Lagrangian L = E_kin(atoms) - E_pot(atoms) + E_kin(Nose-Hoover) - E_pot(Nose-Hoover). This makes the system fully conservative with respect to the total energy E_tot = E_kin(atoms) + E_pot(atoms) + E_kin(Nose-Hoover) + E_pot(Nose-Hoover) (note that only the signs change). I think this is explained very compactly on the VASP wiki as well: wiki/index.php/Nose-Hoover_thermostat. The E0 in the output also includes the Nosé-Hoover energies, as explained in wiki/index.php/OSZICAR.

From my experience in MD, the drift of the total energy (including Nosé-Hoover) is at the level of machine precision, as the velocity Verlet algorithm conserves the energy. I think my initial misunderstanding came from expecting the fluctuations of this energy to be at the same machine-precision level. That is not the case, simply because of how the energy landscape is produced. Potentials like Lennard-Jones or Coulomb interactions conserve energy purely because of their mathematical form (by design). The forces and energies in AIMD, however, are calculated with an iterative self-consistency loop and are therefore converged not to machine precision but only to the convergence criterion that is set (please point out if this understanding is wrong). In conclusion, fluctuations are to be expected, but they should be random and should not trend up or down consistently over long times. In my case, there is a consistent trend.

Instead of block averages, I use long-term running averages in my simulations to recognize energy drifts. I added energy output files of 10 ps AIMD calculations as well; there, a Gaussian moving-average filter with a width of 1 fs is used. Note that the total energy.pdf file shows the atomic total energy (without the Nosé-Hoover thermostat), and that pseudo hamiltionian.pdf shows the VASP total energy (E0) directly (with the same filter).
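Written out in Nosé's extended-system form (standard textbook notation with thermostat variable s, thermostat mass Q, and g degrees of freedom; this is my paraphrase of the standard derivation, not copied from the wiki), the Lagrangian and the conserved energy read:

\mathcal{L}_{\text{Nosé}} = \sum_i \frac{m_i s^2 \dot{\mathbf{r}}_i^2}{2} - U(\mathbf{r}) + \frac{Q \dot{s}^2}{2} - g k_B T \ln s

E_{\text{tot}} = \sum_i \frac{\mathbf{p}_i^2}{2 m_i s^2} + U(\mathbf{r}) + \frac{p_s^2}{2Q} + g k_B T \ln s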

I also noticed another relevant problem in my output files, see the slurm-2748211.out file. It printed the following line:

Code: Select all

Information: wavefunction orthogonal band ...
at many time points. This also became less frequent when I changed EDIFF (but it still occurs at times). These simulations are still running, so I will add them when they are done. I saw another forum post where this was discussed as a possible issue.

I will leave the r2SCAN simulations for now, as I should fix these issues first.

With kind regards,
Jelle Lagerweij

marie-therese.huebsch
Full Member
Posts: 182
Joined: Tue Jan 19, 2021 12:01 am

Re: Modeling aqueous Potassium Hydroxide, energy drifts in AIMD calculations

#4 Post by marie-therese.huebsch » Mon Oct 30, 2023 2:51 pm

Hi,
It is a good plan to address one issue at a time. So, let us focus on the energies and temperature of your MD runs.

The output you added rises in temperature from 326 K to 1283 K, so this is not a well-thermalized NVT calculation.

Regarding the other plots: While it is convenient to see a plot, I cannot properly judge data that I do not see directly as VASP output, particularly when I am not sure how the data was extracted and the raw data (numbers in a file) is not included. Thank you for understanding!

Could you please add all input files, but only the main output files that contain the data you want to analyze? Here, that is either the OUTCAR or the vaspout.h5.

As for the sentence in the stdout: It is a common issue for ALGO=Fast, which uses RMM-DIIS (https://www.vasp.at/wiki/index.php/RMM-DIIS). It can help to increase the number of bands (NBANDS) slightly, to set a tighter convergence criterion (EDIFF), or to set a longer warm-up phase with the Davidson algorithm (NELMDL). Though if it happens only rarely, it should not be detrimental to your MD simulation.
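In INCAR terms, these mitigations would look something like this (indicative values only, to be adapted to your system):

Code: Select all

NBANDS = 640    ! slightly more bands than the default (placeholder)
EDIFF  = 1E-6   ! tighter break condition for the electronic loop
NELMDL = -8     ! longer initial warm-up with the Davidson algorithm before RMM-DIIS takes over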

Sorry I could not provide better answers.
Marie-Therese

jelle_lagerweij
Newbie
Posts: 15
Joined: Fri Oct 20, 2023 1:13 pm

Re: Modeling aqueous Potassium Hydroxide, energy drifts in AIMD calculations

#5 Post by jelle_lagerweij » Tue Oct 31, 2023 10:46 am

Dear all,
I wanted to give a quick update. I found an energy-conserving setup after playing around with the time-step size and the EDIFF value. Using a time step of 0.5 fs and an EDIFF of 1E-5 seems to work fine, as the plot below shows. I used two different initial seeds (position- and velocity-wise) with EDIFF values of 1E-4, 1E-5, and 1E-6. Now the energy is well conserved, at the cost of additional computation time.
pseudo_hamiltonian.png
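For completeness, the combination that now conserves the pseudo-Hamiltonian for me looks like this (a sketch reconstructed from the values quoted above, not my full INCAR):

Code: Select all

IBRION = 0      ! molecular dynamics
MDALGO = 2      ! Nose-Hoover thermostat
POTIM  = 0.5    ! time step in fs
EDIFF  = 1E-5   ! electronic convergence that removed the drift for me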
There are only two short-term problems left to solve, the first of which is the parallelization efficiency. I want to be able to compute at least tens of picoseconds within an acceptable timeframe; for now, I can only reach about 1 day/ps with acceptable parallel efficiency (approximately 30%). This is still something I would love to get advice on.

I did look at the wiki page on parallelization; however, the KPAR tag is probably unsuitable for me (as I run gamma-point-only calculations). That leaves the parallelization over bands. I am planning to run my jobs on single nodes, as the largest node available to me has 192 cores, which should be enough. I did follow this tip from the VASP output file:

Code: Select all

 -----------------------------------------------------------------------------
|                                                                             |
|           W    W    AA    RRRRR   N    N  II  N    N   GGGG   !!!           |
|           W    W   A  A   R    R  NN   N  II  NN   N  G    G  !!!           |
|           W    W  A    A  R    R  N N  N  II  N N  N  G       !!!           |
|           W WW W  AAAAAA  RRRRR   N  N N  II  N  N N  G  GGG   !            |
|           WW  WW  A    A  R   R   N   NN  II  N   NN  G    G                |
|           W    W  A    A  R    R  N    N  II  N    N   GGGG   !!!           |
|                                                                             |
|     For optimal performance we recommend to set                             |
|       NCORE = 2 up to number-of-cores-per-socket                            |
|     NCORE specifies how many cores store one orbital (NPAR=cpu/NCORE).      |
|     This setting can greatly improve the performance of VASP for DFT.       |
|     The default, NCORE=1 might be grossly inefficient on modern             |
|     multi-core architectures or massively parallel machines. Do your        |
|     own testing! More info at https://www.vasp.at/wiki/index.php/NCORE      |
|     Unfortunately you need to use the default for GW and RPA                |
|     calculations (for HF NCORE is supported but not extensively tested      |
|     yet).                                                                   |
|                                                                             |
 -----------------------------------------------------------------------------
However, I already set NCORE at the bottom of my INCAR:

Code: Select all

#############################################################
# Additional settings for vasp efficiencies
NCORE = 2    ! '2 up to number-of-cores-per-socket'; more cores per band might increase computation speed
I still get this warning in some simulations. Whether it pops up seems to depend on the machine I am running on; one has VASP 6.4.0, the other 6.4.1. I replaced the VASP 6.4.0 install with 6.4.2 myself today; for the other cluster, I will have to ask support to do this for me. Maybe this was a known old 'bug'. I also did not set NBANDS by hand, so maybe I can make an improvement there. In short, my question is the following:
Which settings are ideal to run a large system (over 300 atoms) with a single k-point on many cores (up to 192) located on a single node?
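For reference, what I am currently testing for this looks as follows (a sketch; the NCORE value still needs a proper scan and NBANDS is just my current guess):

Code: Select all

NCORE  = 16     ! scanning values from 2 up to the number of cores per socket
NBANDS = 640    ! chosen so the bands divide evenly over the band groups (placeholder)
LREAL  = Auto   ! real-space projection, usually faster for cells of this size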
My second issue is a small warning about the wavefunction:

Code: Select all

Information: wavefunction orthogonal band  623  0.8974
I get this warning around 480 times in 5000 time steps (with different values at the end). Does this pose an issue for my results?

Thanks for your help already.
Yours sincerely,
Jelle Lagerweij

jelle_lagerweij
Newbie
Posts: 15
Joined: Fri Oct 20, 2023 1:13 pm

Re: Modeling aqueous Potassium Hydroxide, energy drifts in AIMD calculations

#6 Post by jelle_lagerweij » Tue Oct 31, 2023 11:51 am

Dear Marie-Therese Huebsch,
Thanks for your feedback. The system I sent was a new (fully non-equilibrated) test case for parallel efficiency; my other output files were too large to share on this forum. However, I understand that using it as an example for the thermostat is probably bad practice ;). As a temporary solution, I uploaded a more representative simulation via my country's national scientific-computing service. I am unsure whether this is within the VASP forum guidelines, as it is an external website: https://filesender.surf.nl/?s=download& ... 248752cbfe, and the storage will disappear on 14/11/2023 (so sadly future users with questions will not be able to see these files). This file location should be secure :). I added the input files INCAR, KPOINTS, POSCAR, and POTCAR, and the output files OUTCAR, sbatch_out (the raw output file, with some of the aforementioned warnings), and vaspout.h5.

This is actually the simulation behind the figure in my last post where the energy was nicely conserved (EDIFF=1e-5, run 1). I also have some other results of it to show. The temperature behaves nicely; I set it to 335 K in the INCAR and this is the result. Unfiltered (the shared file is indexed as ediff=10^-5 run 1):
temperature_no_filter.png
Filtered, with Gaussian moving average (the shared file is indexed as ediff=10^-5 run 1):
temperature_filter.png
So overall, I am quite happy with these results from an energy-conservation and temperature viewpoint. Only the pressure is significantly negative; however, I set my system to the experimental density, so some deviation there is to be expected, and it will not impact the properties I am studying anyway.

As I stated in my last post, I am mostly worried about the parallel efficiency for now. My aim is to calculate transport properties (with AIMD and MLMD), which means that I need long simulation times. Although I found that I need fewer days/ps than expected (the first few time steps are far more expensive to converge than later ones), the parallel performance is still not what I hoped for. I noticed that I did not specify the number of bands in my INCAR, and my OUTCAR states that it has 624 bands (line 1400 of the OUTCAR). This does not divide nicely over 32 cores (which I was using for testing). As you mentioned that more bands also help the convergence of the RMM-DIIS solver, I decided to increase it to NBANDS=640 and see what happens.
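The arithmetic behind that choice, for reference:

Code: Select all

NBANDS = 640    ! 640 = 32 * 20 bands per core, divides evenly; the default 624 = 32 * 19.5 does not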

Thanks a lot already,
Jelle

marie-therese.huebsch
Full Member
Posts: 182
Joined: Tue Jan 19, 2021 12:01 am

Re: Modeling aqueous Potassium Hydroxide, energy drifts in AIMD calculations

#7 Post by marie-therese.huebsch » Tue Nov 28, 2023 7:47 am

Hi Jelle,

Did you enjoy the VASP workshop? I am guessing you had plenty of opportunities to ask questions there, so let me ask whether any questions are still open. If so, could you please summarize them?

Cheers,
Marie-Therese
