
Example job-script

Job-script

The job script contains the options to be passed to the job scheduler (qsub options) and the commands to be executed.

qsub options: lines that start with #PBS, followed by the arguments to pass to the scheduler.
Other lines: treated as commands to be executed by the shell.

Job script example: running a compiled MPI program

#!/bin/csh                  ← Specify the shell to use for execution
#PBS -q SINGLE              ← Specify the destination queue
#PBS -j oe                  ← Direct standard output and error to the same file
#PBS -l select=1:ncpus=16:mpiprocs=16   ← Specify the required resources (here, 16 MPI processes)
#PBS -N my-job              ← Specify a job name

cd ${PBS_O_WORKDIR}         ← Change to the working directory

mpiexec_mpt dplace -s1 ./hello_mpi.exe           ← Run the compiled executable with MPI

Job-script samples

Other sample job scripts for various cases are shown below.

After logging in to pcc, copy the sample PBS scripts from the directory below and use them.

pcc:/work/Samples
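As a sketch of the workflow (the individual sample file name used here is hypothetical; only the directory /work/Samples is given above), the samples can be copied into a working directory and submitted with qsub:

```shell
# Copy the sample scripts to the current directory (directory from above).
cp -r /work/Samples ./Samples
cd Samples

# Submit a script to the scheduler; qsub prints the job ID on success.
# "mpi.sh" is a hypothetical sample file name.
qsub mpi.sh

# Check the state of your jobs in the queue.
qstat
```

These commands assume a PBS environment on the cluster itself and will not work elsewhere.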

Resource specification example (chunks)

The resources to be used are counted in units called chunks.

If resources are specified in PBS as follows, by default (1 chunk of 16 CPU cores × 2) = 2 CPUs are allocated.

#PBS -l select=2:ncpus=16

The number of CPUs allocated is the same if the resources are specified as follows.

#PBS -l select=1:ncpus=32

The difference between the two matters mainly when using MPI.

In the following example, 2 CPUs (32 cores) are used and 8 MPI processes are started per CPU.

#PBS -l select=2:ncpus=16:mpiprocs=8

In contrast, the following example also allocates 2 CPUs (32 cores), but the 16 MPI processes are not guaranteed to be distributed evenly across the two CPUs.

#PBS -l select=1:ncpus=32:mpiprocs=16
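One way to see how ranks are actually placed is to launch a trivial command under MPI. This is a minimal sketch, assuming the mpiexec_mpt launcher shown elsewhere on this page: each of the 16 ranks runs hostname, so with select=2 the per-host counts should show 8 ranks per chunk.

```shell
#!/bin/csh
#PBS -l select=2:ncpus=16:mpiprocs=8
#PBS -j oe
#PBS -N placement-check

cd $PBS_O_WORKDIR

# Each MPI rank prints its host name; counting lines per host shows
# how the 16 ranks were distributed over the two chunks.
mpiexec_mpt hostname | sort | uniq -c
```

This is a diagnostic job script for the PBS cluster, not a general-purpose command.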

MPI Job

A 16-process MPI job on 1 CPU (16 CPU cores, 384 GB memory), i.e. 0.25 node.

#!/bin/csh
#PBS -l select=1:ncpus=16:mpiprocs=16
#PBS -j oe
#PBS -N MPI-job

cd $PBS_O_WORKDIR

mpiexec_mpt dplace -s1 ./hello_mpi.exe

A 16-process MPI job on 2 CPUs (32 CPU cores, 768 GB memory), i.e. 0.5 node (16 cores will remain idle).

#!/bin/csh
#PBS -l select=2:ncpus=16:mpiprocs=8
#PBS -j oe
#PBS -N MPI-job

cd $PBS_O_WORKDIR

mpiexec_mpt dplace -s1 ./hello_mpi.exe

OpenMP Job

A 16-thread OpenMP job on 1 CPU (16 CPU cores)

#!/bin/csh
#PBS -l select=1:ncpus=16
#PBS -j oe
#PBS -N OpenMP-job

cd $PBS_O_WORKDIR
setenv OMP_NUM_THREADS 16
setenv KMP_AFFINITY disabled

dplace ./a.out

Hybrid(MPI+OpenMP) Job

Using 2 CPUs (32 CPU cores): 4 MPI processes × 4 OpenMP threads per 16-core CPU

#!/bin/bash
#PBS -l select=2:ncpus=16:mpiprocs=4   <-- 4 processes per CPU
#PBS -j oe
#PBS -N hybrid-job

cd $PBS_O_WORKDIR

export OMP_NUM_THREADS=4    <-- 4 threads per process
export KMP_AFFINITY=disabled

mpiexec_mpt omplace -nt ${OMP_NUM_THREADS} ./hello_hyb.exe   <-- Run

Materials Studio Job

An 8-way parallel DMol3 job using 8 cores of 1 CPU

#!/bin/csh
#PBS -l select=1
#PBS -j oe
#PBS -N DMOL3

cd $PBS_O_WORKDIR
setenv PATH ${PATH}:/work/opt/Accelrys/MaterialsStudio6.1/etc/DMol3/bin

RunDMol3.sh -np 8 test   <-- For 1 chunk, specify fewer than 10 parallel cores