setenv FC        "pgf90"
setenv FC_MPI    "mpif90"
setenv FFLAGS    "-O2 -byteswapio"
setenv MALLOC    sun
setenv MPI_FFLAGS                # not used
setenv CC        "g++"
setenv LDFLAGS                   # not used
setenv ATSP      ${HOME}/atsp2K
setenv lapack    "/usr/pgi/linux86/lib/liblapack.a"
setenv blas      "/usr/pgi/linux86/lib/libblas.a"
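These settings must be present in the environment before compiling. A minimal sketch in csh (the file name settings_linux.csh and the existence check are assumptions, not part of the distribution) of sourcing them and verifying the LAPACK/BLAS paths:

   #!/bin/csh
   # hedged sketch: load the compiler settings and check the external libraries
   source settings_linux.csh                # file holding the setenv lines above
   if ( ! -f "$lapack" || ! -f "$blas" ) then
       echo "LAPACK/BLAS archives not found; adjust the lapack/blas settings"
       exit 1
   endif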
The MPI test can be started with:
cd atsp2K/run/N_like/
./sh_ALL_mpi_linux

The MPI scripts are similar to the previously described serial scripts sh_mchf_E1 and sh_mchf_O1, except for the method used to execute each application. MPI runs need to be launched through the mpirun script supplied with the MPI distribution:
.....
mpirun -p4pg proc_file_nonh ${ATSP2K}/bin/nonh_mpi      # .....
.....
mpirun -p4pg proc_file_mchf ${ATSP2K}/bin/mchf_mpi > out_${s}.${nat}-${n} << EOF
.....
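The -p4pg flag is specific to MPICH's ch_p4 device; other MPI distributions use different launch options. The general form of such a call (a sketch, not an excerpt from the scripts) is:

   # procgroup file first, then the executable; input/output redirected as usual
   mpirun -p4pg <procgroup_file> <executable> < input_file > output_file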
The starting node (id = 0) requires a process file as an argument. The process file lists each node and the executable to be run on it.
%cat proc_file_nonh
hf7 0 ${ATSP}/bin/nonh_mpi
hf6 1 ${ATSP}/bin/nonh_mpi

and proc_file_mchf:
%cat proc_file_mchf
hf7 0 ${ATSP2K}/bin/mchf_mpi
hf6 1 ${ATSP2K}/bin/mchf_mpi
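In the p4 procgroup format used here, each line gives a host name, the number of processes to start on that host, and the full path of the executable; the first line names the host on which mpirun is invoked, and its count is 0 because the process started by mpirun itself already runs there. As a minimal sketch (host names and path are those of the listings above; the variable is expanded when the file is written), proc_file_nonh could be generated from csh with:

   # hedged sketch: write the two-host procgroup file for nonh_mpi
   # (hf7 is the host where mpirun is started; hf6 runs one additional process)
   echo "hf7 0 ${ATSP}/bin/nonh_mpi"  >  proc_file_nonh
   echo "hf6 1 ${ATSP}/bin/nonh_mpi"  >> proc_file_nonh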
Directory LSJ contains a script, sh_bp_mpi_linux, which facilitates the Breit-Pauli calculation (bp_ang_mpi, bp_mat_mpi, and bp_eiv_mpi). Each application is called with:
......
mpirun -p4pg proc_file_ang ${ATSP2K}/bin/bp_ang_mpi \
      <in_ang_${D}                # generate angular data
......
mpirun -p4pg proc_file_mat ${ATSP2K}/bin/bp_mat_mpi \
      <in_mat_${D}                # compute all contributions
mpirun -p4pg proc_file_eiv ${ATSP2K}/bin/bp_eiv_mpi \
      <in_eiv_${D}_${Z}           # compute eigenvectors
......
Three process files are required:
cat proc_file_ang
hf7 0 ${ATSP2K}/bin/bp_ang_mpi
hf6 1 ${ATSP2K}/bin/bp_ang_mpi

cat proc_file_mat
hf7 0 ${ATSP2K}/bin/bp_mat_mpi
hf6 1 ${ATSP2K}/bin/bp_mat_mpi

cat proc_file_eiv
hf7 0 ${ATSP2K}/bin/bp_eiv_mpi
hf6 1 ${ATSP2K}/bin/bp_eiv_mpi
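The three files differ only in the executable name, so they can be regenerated for another pair of hosts with a short csh loop, sketched below (host names hf7 and hf6 are those of the listings above):

   # hedged sketch: rebuild the three Breit-Pauli procgroup files in one pass
   foreach prog ( ang mat eiv )
       echo "hf7 0 ${ATSP2K}/bin/bp_${prog}_mpi"  >  proc_file_${prog}
       echo "hf6 1 ${ATSP2K}/bin/bp_${prog}_mpi"  >> proc_file_${prog}
   end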