
Example job-script

Job-script

The job script contains the options passed to the job scheduler (qsub options) and the commands to execute.

qsub options: lines that start with #PBS, followed by the arguments to be passed to the scheduler (the same options accepted by the qsub command).
Other lines: treated as commands to be executed by the shell.

Job script example: executing a compiled MPI program

#!/bin/csh                              ← Specify the shell to use for execution
#PBS -q SINGLE                          ← Specify the destination queue
#PBS -j oe                              ← Direct standard output and standard error to the same file
#PBS -l select=1:ncpus=16:mpiprocs=16   ← Specify the required resources (in this example, 16 MPI processes)
#PBS -N my-job                          ← Specify a job name

cd ${PBS_O_WORKDIR}                     ← Change to the directory from which the job was submitted

mpiexec_mpt dplace -s1 ./hello_mpi.exe  ← Run the compiled executable with MPI
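Once the script is saved (the file name my-job.csh below is only an example), it can be submitted and monitored with the standard PBS commands:

qsub my-job.csh             ← Submit the job script to the scheduler
qstat                       ← Check the status of submitted jobs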

Job-script samples

Samples of various other job scripts are shown below.

Sample PBS scripts are also available on the login node (lmpcc) under the following path:

lmpcc:/Samples
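For example, a sample script can be copied into your own directory and submitted as-is (the file name sample_mpi.csh below is hypothetical):

cp /Samples/sample_mpi.csh ~/       ← Copy a sample script (file name is an example)
qsub ~/sample_mpi.csh               ← Submit the copied script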

Resource specification example

Compute resources available for a calculation are counted in units called "chunks".

The default chunk size on this system is:

1 chunk:
1 CPU (16 CPU cores, ncpus=16) + about 384 GB of memory

If the resources are requested as below, 2 chunks (32 CPU cores in total) are provided.

#PBS -l select=2

If ncpus is specified explicitly as below, the chunk size changes. In this example, 2 chunks of 32 CPU cores each are provided (64 CPU cores in total).

#PBS -l select=2:ncpus=32
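For reference, a minimal sketch of a complete job script built on the request above, assuming 32-core chunks are accepted by the target queue and reusing the hello_mpi.exe example from the previous section:

#!/bin/csh
#PBS -l select=2:ncpus=32:mpiprocs=32   ← 2 chunks of 32 cores, 32 MPI processes each (64 in total)
#PBS -j oe
#PBS -N chunk-example

cd $PBS_O_WORKDIR

mpiexec_mpt dplace -s1 ./hello_mpi.exe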

MPI Job

1 CPU (16 CPU cores, 384 GB memory): a 16-process MPI parallel job using 0.25 node.

#!/bin/csh
#PBS -l select=1:ncpus=16:mpiprocs=16
#PBS -j oe
#PBS -N MPI-job

cd $PBS_O_WORKDIR

mpiexec_mpt dplace -s1 ./hello_mpi.exe

2 CPUs (32 CPU cores, 768 GB memory): a 16-process MPI parallel job using 0.5 node (16 CPU cores remain unused).

#!/bin/csh
#PBS -l select=2:ncpus=16:mpiprocs=8
#PBS -j oe
#PBS -N MPI-job

cd $PBS_O_WORKDIR

mpiexec_mpt dplace -s1 ./hello_mpi.exe

OpenMP Job

1 CPU (16 CPU cores), 16-thread OpenMP parallelized job

#!/bin/csh
#PBS -l select=1:ncpus=16
#PBS -j oe
#PBS -N OpenMP-job

cd $PBS_O_WORKDIR
setenv OMP_NUM_THREADS 16
setenv KMP_AFFINITY disabled

dplace ./a.out
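If an 8-core, 8-thread job (half a CPU) is wanted instead, the request and thread count can be reduced accordingly; a sketch, assuming an 8-core chunk is accepted by the queue:

#!/bin/csh
#PBS -l select=1:ncpus=8    ← 1 chunk of 8 CPU cores (example request)
#PBS -j oe
#PBS -N OpenMP-job8

cd $PBS_O_WORKDIR
setenv OMP_NUM_THREADS 8    ← 8 OpenMP threads
setenv KMP_AFFINITY disabled

dplace ./a.out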

Hybrid(MPI+OpenMP) Job

Using 2 CPUs (32 CPU cores): 4 MPI processes x 4 OpenMP threads per 16-core chunk (8 processes x 4 threads in total).

#!/bin/bash
#PBS -l select=2:ncpus=16:mpiprocs=4   <-- 4 processes per CPU
#PBS -j oe
#PBS -N hybrid-job

cd $PBS_O_WORKDIR

export OMP_NUM_THREADS=4    <-- 4 threads per process
export KMP_AFFINITY=disabled

mpiexec_mpt omplace -nt ${OMP_NUM_THREADS} ./hello_hyb.exe   <-- Run the hybrid executable

Materials Studio Job

Running an 8-way parallel DMol3 job within 1 CPU (8 of the 16 cores used)

#!/bin/csh
#PBS -l select=1
#PBS -j oe
#PBS -N DMOL3

cd $PBS_O_WORKDIR
setenv PATH ${PATH}:/work/opt/lm/BIOVIA/MaterialsStudio22.1/etc/DMol3/bin

RunDMol3.sh -np 8 test   <-- (within 1 chunk, specify fewer than 10 parallel cores)