modelBSE



laurent_pedesseau1
Newbie
Posts: 2
Joined: Wed Nov 13, 2019 8:34 am

modelBSE

#1 Post by laurent_pedesseau1 » Tue Aug 31, 2021 2:49 pm

Dear VASP Admin, VASP colleagues,

We are working with the very nice modelBSE method. I found a technical bottleneck in its parallelization: this method uses only NPAR = total number of cores.

This is not practical if you are increasing the number of k-points, which hugely increases the memory per core. In principle, one can play with KPAR, or NCORE and NPAR, to decrease the memory per node, e.g. with settings like the sketch below.
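
For illustration, a minimal modelBSE INCAR of the kind I have in mind; all values here are placeholders, not tested settings:

    SYSTEM   = modelBSE sketch
    ALGO     = BSE       ! solve the Bethe-Salpeter equation
    LMODELHF = .TRUE.    ! model dielectric screening (modelBSE)
    AEXX     = 0.083     ! placeholder: 1/(dielectric constant) of the material
    HFSCREEN = 1.22      ! placeholder: range-separation parameter
    NBANDSO  = 4         ! placeholder: occupied bands in the BSE Hamiltonian
    NBANDSV  = 8         ! placeholder: virtual bands in the BSE Hamiltonian
    KPAR     = 4         ! placeholder: what one would like to use
    NCORE    = 8         ! placeholder: what one would like to use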

So my question: is there a way of decreasing the memory per node other than using KPAR, NCORE, etc.?

Sincerely,

Laurent Pedesseau

ferenc_karsai
Global Moderator
Posts: 460
Joined: Mon Nov 04, 2019 12:44 pm

Re: modelBSE

#2 Post by ferenc_karsai » Mon Sep 06, 2021 8:54 am

The large arrays for BSE (also modelBSE) are distributed by scaLAPACK, so increasing the number of cores in the calculation should bring down the amount of memory needed per core.
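
As a rough back-of-the-envelope estimate (assuming only the resonant, Tamm-Dancoff block is stored, in complex double precision, i.e. 16 bytes per matrix element):

    rank of the BSE Hamiltonian = NKPTS x NBANDSO x NBANDSV
    total memory                ~ rank^2 x 16 bytes

    e.g. 1000 k-points, 4 occupied and 8 virtual bands:
    rank  = 1000 x 4 x 8 = 32000
    total ~ 32000^2 x 16 bytes ~ 16.4 Gbyte
    on 64 cores                ~ 0.26 Gbyte per core

With the block-cyclic distribution this total is split over all cores, which is why adding cores reduces the per-core footprint.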

laurent_pedesseau1
Newbie
Posts: 2
Joined: Wed Nov 13, 2019 8:34 am

Re: modelBSE

#3 Post by laurent_pedesseau1 » Mon Sep 06, 2021 9:15 am

In the limit of NPAR = total number of cores, the bands are distributed as NBANDS/NPAR... However, when we use modelBSE we in principle need a huge number of k-points, and the k-points increase the memory much more than NBANDS does.

Many thanks for your reply

ferenc_karsai
Global Moderator
Posts: 460
Joined: Mon Nov 04, 2019 12:44 pm

Re: modelBSE

#4 Post by ferenc_karsai » Mon Sep 06, 2021 9:20 am

The block-cyclic distribution via scaLAPACK is transparent with respect to the NPAR/NCORE values. The important line is "BSE (scaLAPACK) attempting allocation of XXX Gbyte". This number goes down almost linearly with increasing number of cores.
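
To check this, you can search for that line in the output (assuming it is written to the OUTCAR, as for regular BSE runs), e.g.

    grep "attempting allocation" OUTCAR

and compare the reported value between runs with different core counts.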
