nanotools.mpidist module
This module defines the Mpidist class.
- class nanotools.mpidist.Mpidist(bndblk=4, bndprc=None, grdblk=5, grdprc=None, imgblk=1, imgprc=None, kptblk=1, kptprc=None, nrgblk=1, nrgprc=None, orbblk=32, orbprc=None, zptprc=None)[source]
Bases:
Base
Mpidist class.
The Mpidist class defines parameters to control the distribution of data and computational load over MPI processes. RESCU+ data is distributed using a generalized block-cyclic distribution scheme. The 2D block-cyclic distribution is implemented in ScaLAPACK, as described in the ScaLAPACK documentation. The main concepts are data blocks and process grids. Data block sizes determine the chunking of the data; process grids determine to which processes the data blocks are assigned. Efficiency is usually optimal when a process grid has a shape similar to that of the (distributed) array it stores.
- grdblk
Blocking factor for the grid. For instance, if
grdblk
is 10 and the domain is discretized on a ($20 \times 20 \times 20$) grid, the grid is split into eight ($10 \times 10 \times 10$) subgrids, which are distributed among processes.
Examples:
mpi.grdblk = 8
- Type:
int
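The block-cyclic chunking described above can be illustrated with a short sketch. This is not part of the nanotools API; `block_cyclic_owner` is a hypothetical helper that mimics the per-dimension mapping convention used by ScaLAPACK.

```python
# Illustrative sketch (not the RESCU+ implementation): map a 1D index to
# the process that owns it under block-cyclic distribution, as done
# independently along each dimension in ScaLAPACK.
def block_cyclic_owner(index, blk, nprc):
    """Owner of element `index` when data is chunked into blocks of size
    `blk` and the blocks are dealt cyclically over `nprc` processes."""
    return (index // blk) % nprc

# A 20-point axis with block size 10 distributed over 2 processes:
owners = [block_cyclic_owner(i, blk=10, nprc=2) for i in range(20)]
# Points 0-9 land on process 0 and points 10-19 on process 1,
# matching the subgrid splitting described for grdblk.
```

With more blocks than processes, ownership cycles: block 0 goes to process 0, block 1 to process 1, block 2 back to process 0, and so on.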
- grdprc
Process grid dimensions for the grids.
Examples:
mpi.grdprc = [2,3,4]
- kptblk
Blocking factor for the k-points.
Examples:
mpi.kptblk = 8
- Type:
int
- kptprc
Process grid dimensions for the k-points. Setting
kptprc
to 1 is tantamount to turning k-point parallelization off.
Examples:
mpi.kptprc = 1
- zptprc
Process grid dimensions for the energy points on the complex contour. Only useful in ground-state calculations for two-probe systems.
Examples:
mpi.zptprc = 1
- orbblk
Blocking factor for the atomic orbital matrices. A large value will result in load imbalance. A small value will result in high communication load. The ideal value usually lies between 32 and 128.
Examples:
mpi.orbblk = 32
- Type:
int
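The trade-off behind orbblk can be made concrete with a sketch. This is an illustrative 1D model, not the RESCU+ implementation: it counts how many matrix rows each process owns under block-cyclic distribution, showing why an overly large blocking factor causes load imbalance.

```python
# Illustrative sketch (hypothetical helper, not part of nanotools):
# rows owned by each process for an n-row matrix distributed
# block-cyclically with blocking factor blk over nprc processes.
def rows_per_process(n, blk, nprc):
    counts = [0] * nprc
    for row in range(n):
        counts[(row // blk) % nprc] += 1
    return counts

# Moderate blocking factor: rows spread nearly evenly.
small = rows_per_process(1000, blk=32, nprc=4)   # [256, 256, 256, 232]
# Oversized blocking factor: two processes receive no rows at all.
large = rows_per_process(1000, blk=512, nprc=4)  # [512, 488, 0, 0]
```

A very small blocking factor would balance the load even better, but in practice it raises communication overhead, hence the suggested range of 32 to 128.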
- orbprc
Process grid dimensions for the atomic orbital matrices. It is usually preferable to aim for a square process grid.
Examples:
mpi.orbprc = [20,20]
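A near-square process grid can be chosen mechanically. The sketch below is a hypothetical helper, not part of nanotools: it factors the total process count into the most nearly square pair of grid dimensions.

```python
# Illustrative sketch (not part of the RESCU+ API): choose the most
# nearly square 2D process grid (p, q) with p * q == nprc and p <= q.
def square_process_grid(nprc):
    p = int(nprc ** 0.5)
    while nprc % p != 0:  # walk down to the largest divisor <= sqrt(nprc)
        p -= 1
    return p, nprc // p

square_process_grid(400)  # (20, 20), matching the orbprc example above
square_process_grid(24)   # (4, 6)
```

For process counts with no divisor near the square root (e.g. primes), the result degenerates to a 1-by-n grid; in that case choosing a slightly different total process count may be preferable.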