
MPI Implementation

Unlike bp_ang and bp_mat, which have no communication overhead other than initializing and finalizing the MPI calculation, this program performs a number of MPI_Allreduce() steps in which the quantities computed by the individual processors are summed and the result is made available on every node. The elements of the interaction matrix are calculated by columns in hmx_lsj(); in addition, the diagonals are adjusted if the user provided energy corrections. Each node assembles only the columns j = myid+1, myid+1+nprocs, myid+1+2*nprocs, and so on. After all ncfg columns have been processed, the information is exchanged between the nodes by a global summation over all nodes.

*       ..each processor assembles every nprocs-th column,
*         starting at column myid+1.
        do j = myid+1,ncfg,nprocs
          call hmx_lsj(ncfg,j,nze,ind_jj,nij,istart,shift,
     :                 mycol,pflsj,njv)
        end do
*       ..gather all diagonals from the processors.
        call mpi_allr_dp(hii,ncfg) !gdsummpi(hii,ncfg,tm)
      end if
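
The wrapper mpi_allr_dp() is only called here; as a rough illustration of what such a global summation looks like, the sketch below sums a double precision array over all nodes so that every node ends up with the complete set of diagonals. It is a minimal sketch under the assumption that a scratch array may be allocated inside the wrapper; it is not the actual bp_eiv source.

*       ..sketch of a global summation wrapper (not the actual
*         bp_eiv routine): sum a(1:n) over all nodes; afterwards
*         every node holds the total.
        subroutine mpi_allr_dp(a,n)
        implicit none
        include 'mpif.h'
        integer n, ierr
        double precision a(n)
        double precision, allocatable :: w(:)
        allocate(w(n))
        call MPI_Allreduce(a, w, n, MPI_DOUBLE_PRECISION,
     :                     MPI_SUM, MPI_COMM_WORLD, ierr)
        a(1:n) = w(1:n)
        deallocate(w)
        return
        end

Because the result of MPI_Allreduce() is available on every node, no separate broadcast of the summed diagonals is needed afterwards.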

Node 0 reads the configuration list and the wave function, and broadcasts the data to the other nodes. Each node of bp_eiv then reads its own copies of the corresponding data files, hnr.lst.nnnn, hzeta.lst.nnnn, and hspin.lst.nnnn, as shown in Figure 9.34.

Figure 9.34: IO files for the MPI version.
[figure: tex/fig/mpi_io_bp_eiv.eps]
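
As a rough illustration of this input phase, the fragment below sketches the broadcast of the data read on node 0 and the construction of the per-node file name hnr.lst.nnnn from myid. The reader read_cfg_wfn(), the array wt, the counter ncfg and the character*4 variable idstr are names assumed only for the sketch; this is not the actual bp_eiv source.

*       ..node 0 reads the configuration list and the wave function,
*         then broadcasts them to the other nodes (sketch only).
        if (myid .eq. 0) call read_cfg_wfn(ncfg, wt)
        call MPI_Bcast(ncfg, 1, MPI_INTEGER, 0,
     :                 MPI_COMM_WORLD, ierr)
        call MPI_Bcast(wt, ncfg, MPI_DOUBLE_PRECISION, 0,
     :                 MPI_COMM_WORLD, ierr)
*       ..each node opens its own copy of the angular data; the
*         extension nnnn is built from the node number myid.
        write(idstr,'(i4.4)') myid
        open(unit=11, file='hnr.lst.'//idstr, status='old',
     :       form='unformatted')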


