Dear All,
I am seeking advice on an issue I'm encountering while training an on-the-fly MLFF for a H/Ru surface system using VASP version 6.4.2. I'm observing a large energy jump when the simulation switches from DFT to the MLFF, even when the step is considered "accurate."
My system consists of 2 H atoms on a Ru slab with 72 Ru atoms. During the simulation, whenever the MLFF is judged "accurate" based on the Bayesian error estimate and the DFT calculation is skipped, I see a sudden jump in the total energy in the OSZICAR file. The energy difference between the DFT-calculated steps and the MLFF-predicted steps is consistently close to 100 eV.
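To quantify the jump, I extracted the per-step free energies from the OSZICAR MD summary lines and took consecutive differences. A minimal Python sketch of that check (the sample lines and values below are invented for illustration and only mimic the layout of the real file, where SCF blocks for the DFT steps are interleaved):

```python
import re

# Hypothetical OSZICAR-style MD summary lines (values invented for
# illustration); only lines of the form "  N T= ... E= ... F= ..." are kept.
SAMPLE = """\
   1 T=   300. E= -0.63022245E+03 F= -0.63141109E+03 E0= -0.63139837E+03
   2 T=   298. E= -0.63021900E+03 F= -0.63140700E+03 E0= -0.63139500E+03
   3 T=   297. E= -0.53020100E+03 F= -0.53139000E+03 E0= -0.53137800E+03
"""

def free_energies(text):
    """Extract the F= (free-energy) value from each MD summary line."""
    pat = re.compile(
        r"^\s*\d+\s+T=\s*\S+\s+E=\s*\S+\s+F=\s*(-?[0-9.]+E[+-]\d+)", re.M
    )
    return [float(m.group(1)) for m in pat.finditer(text)]

def largest_jump(energies):
    """Largest absolute energy change between consecutive MD steps."""
    return max(abs(b - a) for a, b in zip(energies, energies[1:]))

print(largest_jump(free_energies(SAMPLE)))  # ~100 eV between steps 2 and 3
```

With my actual OSZICAR, the same consecutive-difference check is what gives the ~100 eV figure quoted above.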
My Question:
I understand that the ML and DFT energies can differ during training. However, why would such a large discrepancy (~100 eV) occur during a step that the algorithm has high confidence in? This seems too large to be a simple prediction error and might point to a more fundamental issue in my setup or understanding.
The relevant calculation files can be found in this archive:
https://www.dropbox.com/scl/fi/counsai5 ... 2ccrv&dl=0
Any insights into this behavior would be greatly appreciated.
Thank you for your time.
Best wishes,
Cheng-chau

