
Example job-script

Job-script

The job script contains the options to be passed to the job scheduler (qsub options) and the commands to be executed.

qsub options: lines that start with #PBS, followed by the arguments to be passed to qsub.

Other lines: treated as commands to be executed by the shell.

Job-script example: executing a compiled MPI program

#!/bin/csh                    ← Specify the shell to use for execution
#PBS -q SINGLE                ← Specify the destination queue
#PBS -j oe                    ← Direct standard output and standard error to the same file
#PBS -l select=1:mpiprocs=8   ← Specify the required resources (in this example, 8 MPI processes)
#PBS -N my-job                ← Specify a job name

cd ${PBS_O_WORKDIR}           ← Change to the directory the job was submitted from

mpirun -r ssh -machinefile ${PBS_NODEFILE} -np 8 ./hello_mpi   ← Run the compiled executable
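Once written, a job script is submitted to the scheduler with qsub and can be monitored with qstat. A minimal sketch, assuming the script above was saved under a hypothetical filename run.csh:

```shell
# Submit the job script to PBS (run.csh is a hypothetical filename)
qsub run.csh

# Check the status of your own queued and running jobs
qstat -u $USER

# Cancel a job by its job ID if necessary
# qdel <job-id>
```

The job ID printed by qsub is the one used with qstat and qdel.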

Job-script sample

Samples of various other job-scripts are shown below.

After logging in to pcc, you can also copy the sample PBS scripts from the following directory and use them:

pcc:/work/Samples

Resources specification example

How to calculate the required resources in chunks

The default chunk size on this system is:

1 chunk = 0.5 CPU (8 CPU cores, ncpus=8) + 16 GB memory

If PBS is set as below, 2 chunks (16 CPU cores in total) are provided.

#PBS -l select=2

If the resources are specified as below, the chunk size changes: each chunk now contains 16 CPU cores, so the 2 chunks provide 32 CPU cores in total.

#PBS -l select=2:ncpus=16
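The chunk arithmetic above can be checked with a quick sketch; the numbers mirror the select=2:ncpus=16 example (total cores = chunks × cores per chunk):

```shell
# Total cores provided = number of chunks * cores per chunk (ncpus).
chunks=2
ncpus=16
echo "total cores: $((chunks * ncpus))"   # → total cores: 32
```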

MPI Job

A 16-process MPI job using 1 CPU (16 CPU cores, 32 GB memory), i.e. 0.5 node.

#!/bin/csh
#PBS -l select=2:ncpus=8:mpiprocs=8
#PBS -j oe
#PBS -N MPI-job

cd $PBS_O_WORKDIR

mpirun -machinefile ${PBS_NODEFILE} -np 16 ./hello_mpi

A 16-process MPI job using 2 CPUs (32 CPU cores, 64 GB memory), i.e. 1 node (16 CPU cores remain idle).

#!/bin/csh
#PBS -l select=4:ncpus=8:mpiprocs=4
#PBS -j oe
#PBS -N MPI-job

cd $PBS_O_WORKDIR

mpirun -machinefile ${PBS_NODEFILE} -np 16 ./hello_mpi

OpenMP Job

An 8-thread parallel job using 0.5 CPU (8 CPU cores).

#!/bin/csh
#PBS -l select=1
#PBS -j oe
#PBS -N OpenMP-job

cd $PBS_O_WORKDIR
setenv OMP_NUM_THREADS 8

./a.out
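If you prefer bash over csh (as the hybrid samples below use), the same OpenMP job can be sketched as follows; the resources are identical, and export simply replaces setenv:

```shell
#!/bin/bash
#PBS -l select=1
#PBS -j oe
#PBS -N OpenMP-job

cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=8    # 8 threads, matching the 8-core chunk

./a.out
```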

Hybrid(MPI+OpenMP) Job

Using 2 CPUs (32 CPU cores): 2 MPI processes x 4 OpenMP threads per 8-core chunk (8 processes in total).

#!/bin/bash
#PBS -l select=4:ncpus=8:mpiprocs=2   <-- 2 processes per 8-core chunk
#PBS -j oe
#PBS -N hybrid-job

cd $PBS_O_WORKDIR

export OMP_NUM_THREADS=4    <-- 4 threads per process

mpirun -machinefile ${PBS_NODEFILE} -np 8 ./hello_hyb   <-- Launch 8 processes in total

Using 2 CPUs (32 CPU cores): 4 MPI processes x 4 OpenMP threads (16 of the 32 cores are used).

#!/bin/bash
#PBS -l select=4:mpiprocs=2   <-- 4 chunks x 2 MPI slots per chunk ( = 8 slots reserved in total )
#PBS -j oe
#PBS -N hybrid-job

cd $PBS_O_WORKDIR

export OMP_NUM_THREADS=4    <-- 4 threads per process


mpirun -machinefile ${PBS_NODEFILE} -np 4 ./hello_hyb   <-- Launch 4 processes

Materials Studio Job

A DMol3 job using 1 chunk (8 CPU cores) with 8-way parallelism.

#!/bin/csh
#PBS -l select=1
#PBS -j oe
#PBS -N DMOL3

cd $PBS_O_WORKDIR
setenv PATH ${PATH}:/work/opt/Accelrys/MaterialsStudio6.1/etc/DMol3/bin

RunDMol3.sh -np 8 test   <-- (1 chunk: specify fewer than 10 parallel cores)