ML_NCSHMEM
Revision as of 07:52, 17 March 2026
ML_NCSHMEM = [integer]
Default: ML_NCSHMEM = complex, see below (VASP.6.6.0 or higher)
                    = number of available ranks per computational node (else)
Description: Specifies the number of MPI ranks that share a single shared memory segment.
The total number of memory segments created equals the number of cores per node divided by ML_NCSHMEM. All memory segments have identical sizes, so a larger number of segments results in higher total memory consumption.
However, on systems with multiple NUMA domains, performance can degrade significantly during machine-learned force-field inference if all domains access the same memory segment. For optimal performance, each NUMA domain should have its own dedicated shared memory segment, i.e., ML_NCSHMEM should equal the number of ranks per NUMA domain. For more details, we refer to NCSHMEM.
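The trade-off above can be sketched with a short calculation. This is a hedged illustration, not VASP code: the helper names (`num_segments`, `total_memory`) and the example numbers (64 cores, 4 NUMA domains, 2 GiB per segment) are assumptions chosen for clarity; only the relation "segments per node = cores per node / ML_NCSHMEM" comes from the text.

```python
# Illustration of the ML_NCSHMEM trade-off (hypothetical helpers, not VASP internals).

def num_segments(cores_per_node: int, ml_ncshmem: int) -> int:
    """Shared-memory segments per node: cores per node divided by ML_NCSHMEM."""
    assert cores_per_node % ml_ncshmem == 0, "ML_NCSHMEM should divide the core count"
    return cores_per_node // ml_ncshmem

def total_memory(cores_per_node: int, ml_ncshmem: int, segment_gib: float) -> float:
    """All segments have identical size, so memory scales with the segment count."""
    return num_segments(cores_per_node, ml_ncshmem) * segment_gib

cores, numa_domains = 64, 4

# One segment spanning the whole node: lowest memory, but cross-NUMA access.
print(num_segments(cores, cores))                   # → 1

# One segment per NUMA domain (ML_NCSHMEM = ranks per domain): 4x the memory,
# but each domain accesses only its own segment.
print(num_segments(cores, cores // numa_domains))   # → 4
print(total_memory(cores, cores // numa_domains, segment_gib=2.0))  # → 8.0
```

Lowering ML_NCSHMEM further would create even more (identically sized) segments and raise total memory consumption without additional NUMA benefit.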
Related tags and articles
ML_LMLFF, ML_MODE, NCSHMEM, Shared memory