Dear VASP Admin, VASP colleagues,
We are working with the very nice modelBSE method. I have found a technical bottleneck: for the parallelization, this method uses only NPAR = total number of cores.
This is not practical when increasing the number of k-points, which hugely increases the memory per core. In principle, one can play with KPAR, or with NCORE and NPAR, to decrease the memory per node.
So my question: do you have a way of decreasing the memory per node other than using KPAR, NCORE, etc.?
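For reference, these are the tags I mean; the INCAR lines below are purely illustrative values, not a recommendation:

    KPAR  = 4    ! illustrative: distribute the k-points over 4 groups of cores
    NCORE = 8    ! illustrative: 8 cores work together on each orbital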
Sincerely,
Laurent Pedesseau
modelBSE
Global Moderator
Re: modelBSE
The large arrays for BSE (also modelBSE) are distributed by scaLAPACK. So increasing the number of cores in the calculation should bring down the amount of memory needed per core.
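To make this concrete, here is a back-of-the-envelope sketch (my own estimate, not VASP code, ignoring spin): the dense BSE Hamiltonian has one row and column per electron-hole pair, so its rank is roughly the number of k-points times the valence times the conduction bands included, with complex double-precision elements:

    # Back-of-the-envelope estimate of the dense BSE Hamiltonian memory.
    # Illustrative only -- not VASP code; problem sizes are hypothetical.
    def bse_memory_gb(nk, nv, nc, bytes_per_element=16):
        """Rank of the matrix is nk*nv*nc (one row per electron-hole pair);
        elements are complex double precision (16 bytes)."""
        rank = nk * nv * nc
        return rank ** 2 * bytes_per_element / 1024 ** 3

    total = bse_memory_gb(nk=512, nv=8, nc=16)   # hypothetical problem size
    for ncores in (64, 256, 1024):
        # ScaLAPACK's block-cyclic layout spreads the matrix nearly evenly
        print(f"{ncores:5d} cores -> ~{total / ncores:6.2f} GB/core of {total:.0f} GB total")

On this hypothetical problem the matrix is 64 GB in total, so going from 64 to 1024 cores takes the per-core share from about 1 GB down to about 0.06 GB.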
Newbie
Re: modelBSE
In the limit of having NPAR = total number of cores and NBANDS/NPAR... However, when we use modelBSE, in principle we need a huge number of k-points, which increases the memory much more than NBANDS does.
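To illustrate the point with the same rough model as above (band counts are hypothetical): the rank of the BSE matrix grows linearly with the number of k-points, so the memory grows quadratically with NK:

    # Illustrative: memory of the dense BSE matrix vs. number of k-points
    # (hypothetical band counts; complex double = 16 bytes per element)
    nv, nc, bytes_per_element = 8, 16, 16
    for nk in (128, 256, 512):
        rank = nk * nv * nc               # one row/column per e-h pair
        gb = rank ** 2 * bytes_per_element / 1024 ** 3
        print(f"NK={nk:4d} -> {gb:6.1f} GB total")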
Many thanks for your reply.
Global Moderator
Re: modelBSE
The block-cyclic distribution via scaLAPACK is in place regardless of the NPAR/NCORE values. The important line is "BSE (scaLAPACK) attempting allocation of XXX Gbyte". This number goes down almost linearly as the number of cores increases.
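To watch this scaling on your own runs, one could extract that figure from the output of runs with different core counts. A minimal sketch, assuming the line is written to a file such as OUTCAR (the run directory names here are hypothetical):

    # Sketch: extract the "attempting allocation" figure from several runs so
    # the scaling with core count can be compared; paths are assumptions.
    import re

    pattern = re.compile(r"BSE \(scaLAPACK\) attempting allocation of\s+([\d.]+)\s+Gbyte")
    for run in ("run_64cores/OUTCAR", "run_256cores/OUTCAR"):   # hypothetical paths
        with open(run) as f:
            for line in f:
                m = pattern.search(line)
                if m:
                    print(f"{run}: {float(m.group(1))} Gbyte")
                    break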