ML_NCSHMEM

ML_NCSHMEM = [integer]
Default: ML_NCSHMEM = Number of available ranks per computational node 

Description: Specifies the number of MPI ranks that share a single shared memory segment.


The total number of memory segments created equals the number of cores per node divided by ML_NCSHMEM. All memory segments have identical sizes, so a larger number of segments results in higher total memory consumption.
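
As a hypothetical illustration (the node size here is only an example), consider a node running 128 MPI ranks with

 ML_NCSHMEM = 32

in the INCAR file: VASP then creates 128/32 = 4 shared memory segments of identical size, whereas the default setting would create a single segment shared by all 128 ranks on the node.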

However, on systems with multiple NUMA domains, performance can degrade significantly during machine-learned force field inference if all domains access the same memory segment. For optimal performance, each NUMA domain should have its own dedicated shared memory segment. For more details, we refer to NCSHMEM.
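
As a sketch of how this can be achieved (the hardware layout is an assumption; check the actual NUMA topology of your nodes, e.g. with numactl --hardware): on a node with two NUMA domains and 64 MPI ranks pinned to each domain, setting

 ML_NCSHMEM = 64

creates two segments, so that, with consecutive rank placement, each NUMA domain accesses its own dedicated shared memory segment during force-field inference.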

Related tags and articles

ML_LMLFF, ML_MODE, NCSHMEM, Shared memory