1-2. Checking your job
Checking the job 1 (listing)
List jobs that are currently running on the system or waiting to be executed.
% qstat [option]
The output will be displayed as in the example below:
Job id    Name        User        Time Use S Queue
--------- ----------- ----------- -------- - -----
16.altix  aims14      user1       00:15:30 R SINGLE
18.altix  aims14      user1       03:21:03 R SMALL
26.altix  airfoil     barry       00:21:03 R SMALL
27.altix  airfoil     barry       21:09:12 R SMALL
28.altix  myjob       user1              0 Q SINGLE
29.altix  tns3d       susan              0 Q LARGE
30.altix  airfoil     barry              0 Q SINGLE
31.altix  seq_35_3    donald             0 Q MEDIUM
- Job Id .... Unique Job ID
- Name .... Job name
- User .... User name
- Time Use .... Current execution time
- S(tatus) .... Job status (R: running, Q: queued/waiting, E: exiting)
- Queue .... Queue used
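As a quick way to summarize the listing, the status column can be tallied with awk. A minimal sketch, using the sample output above in place of a live `qstat` call:

```shell
# Count jobs per status (column 5, "S") in the qstat listing.
# The variable below reproduces the example output above; on a live
# system you would pipe `qstat` itself into the awk filter instead.
qstat_output='Job id Name User Time Use S Queue
--------- ----------- ----------- -------- - -----
16.altix aims14 user1 00:15:30 R SINGLE
18.altix aims14 user1 03:21:03 R SMALL
26.altix airfoil barry 00:21:03 R SMALL
27.altix airfoil barry 21:09:12 R SMALL
28.altix myjob user1 0 Q SINGLE
29.altix tns3d susan 0 Q LARGE
30.altix airfoil barry 0 Q SINGLE
31.altix seq_35_3 donald 0 Q MEDIUM'

# Skip the two header lines, then count each value of the S column.
echo "$qstat_output" | awk 'NR > 2 { count[$5]++ }
                            END    { for (s in count) print s, count[s] }'
```

On the sample above this prints one line per status, e.g. how many jobs are running (R) versus queued (Q).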
Checking the job 2 (details)
Check the job details.
Information such as when the job started execution, or why it has not started yet, is displayed.
% qstat -s <Job id>
% qstat -s 28
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
28.altix        user1    SINGLE   myjob          -- 1   1   8190mb 168:0 Q --
Not Running: User has reached queue SINGLE running job limit.
In the example above, the job with ID 28 is waiting for execution in the SINGLE queue: that queue allows each user only one running job, and another job from the same user is already running.
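The scheduler's "Not Running:" comment can also be extracted programmatically, which is handy when checking many queued jobs. A minimal sketch, with a heredoc standing in for a real `qstat -s 28` call:

```shell
# Extract the scheduler's explanation line from `qstat -s` output.
# The heredoc reproduces the example output above; on a live system
# you would run `qstat -s "$job_id"` instead.
qstat_s_output=$(cat <<'EOF'
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
28.altix        user1    SINGLE   myjob          -- 1   1   8190mb 168:0 Q --
   Not Running: User has reached queue SINGLE running job limit.
EOF
)

# The comment line always carries the "Not Running:" prefix.
echo "$qstat_s_output" | grep 'Not Running:'
```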
Checking the job 3 (other options)
Display detailed information
% qstat -f <Job id>
% qstat -x
Display jobs of a specific user
% qstat -u <user id>
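Besides `qstat -u`, a single user's jobs can also be filtered out of the plain listing by matching the User column. A minimal sketch using awk on sample lines copied from the listing above:

```shell
# Filter the qstat listing for one user's jobs (column 3), similar
# in effect to `qstat -u user1`. The sample lines mirror the example
# listing earlier in this section.
listing='16.altix aims14 user1 00:15:30 R SINGLE
26.altix airfoil barry 00:21:03 R SMALL
28.altix myjob user1 0 Q SINGLE'

echo "$listing" | awk -v u="user1" '$3 == u'
```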
Checking the job 4 (memory resources)
By displaying the detailed information of the job, the memory resources actually used can be checked.
Note: the reported value is not 100% accurate, and the XC30 does not report memory usage.
% qstat -xf <Job id> | grep used.mem
If the memory used exceeds the limit specified by the queue, the job may not terminate normally: an error may be returned, the calculation result may be empty, and so on.
If a job does not work properly, check its memory usage first; if the value is near the upper limit, submit the job to the next larger queue.
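Such a check can be scripted by comparing `resources_used.mem` against the requested `Resource_List.mem` in the `qstat -xf` output. A minimal sketch; the field names follow common PBS conventions, and the heredoc values are illustrative stand-ins for real output:

```shell
# Compare a job's reported memory usage against its requested limit.
# The heredoc stands in for real `qstat -xf <Job id>` output; the
# values are illustrative.
qstat_f_output=$(cat <<'EOF'
    Resource_List.mem = 8190mb
    resources_used.mem = 7985124kb
EOF
)

# Normalize a PBS memory string (kb/mb/gb suffix) to kilobytes.
to_kb() {
  case "$1" in
    *gb) echo $(( ${1%gb} * 1024 * 1024 )) ;;
    *mb) echo $(( ${1%mb} * 1024 )) ;;
    *kb) echo "${1%kb}" ;;
  esac
}

limit=$(to_kb "$(echo "$qstat_f_output" | awk '/Resource_List.mem/ {print $3}')")
used=$(to_kb "$(echo "$qstat_f_output" | awk '/resources_used.mem/ {print $3}')")

echo "used ${used}kb of limit ${limit}kb ($(( used * 100 / limit ))%)"
```

A usage percentage close to 100 is the signal, per the advice above, to resubmit to the next larger queue.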