Unable to use the “-v” variable option in PBS Professional 19.2.5

I was not able to use “-v file=test.m” with the latest version of PBS Professional, 19.2.5.

I was using the following command, and qsub rejected it, even though it used to work in earlier versions of PBS Professional:

$ qsub gpu.pbs -v file=test.m
usage: qsub [-a date_time] [-A account_string] [-c interval]
[-C directive_prefix] [-e path] [-f ] [-h ] [-I [-X]] [-j oe|eo] [-J X-Y[:Z]]
[-k keep] [-l resource_list] [-m mail_options] [-M user_list]
[-N jobname] [-o path] [-p priority] [-P project] [-q queue] [-r y|n]
[-R o|e|oe] [-S path] [-u user_list] [-W otherattributes=value...]
[-v variable_list] [-V ] [-z] [script | -- command [arg1 ...]]
qsub --version

The reason is that, by design, the job script has to be the last argument. Change the command accordingly:

$ qsub -v file=test.m gpu.pbs
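
For context, anything passed with -v is exported into the job's environment. A minimal sketch of what gpu.pbs might use it for (the script contents and the MATLAB invocation are illustrative; only the use of $file matters):

#!/bin/bash
#PBS -N gpu_job
#PBS -l select=1:ncpus=1:ngpus=1
cd "$PBS_O_WORKDIR"
# "file" was set by: qsub -v file=test.m gpu.pbs
echo "Running input file: $file"
matlab -batch "run('$file')"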

Configure PBS not to accept jobs that will run into Scheduled Down-Time

Step 1: Go to /pbs/pbs_home/sched_priv and edit the file dedicated_time

# vim /pbs/pbs_home/sched_priv/dedicated_time

Enter the start and end date/time of the down-time window in the format shown:

# FORMAT: FROM TO
# ---- --
# MM/DD/YYYY HH:MM MM/DD/YYYY HH:MM
For example:

01/08/2020 08:00 01/08/2020 20:00
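
The file can hold more than one down-time window, one entry per line, and a window may span several days. For example, an entry for a two-day maintenance window (dates are illustrative) would look like:

01/08/2020 08:00 01/10/2020 20:00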

Step 2: Reload the scheduler configuration by sending pbs_sched a SIGHUP (use the PID reported by ps):

# ps -eaf | grep -i pbs_sched
# kill -HUP 438652
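
If pgrep is available, the two commands above can be combined into a one-liner:

# kill -HUP $(pgrep pbs_sched)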

Step 3: Submit a job that would cross over the scheduled down-time; you should see something like this:

$ qstat -asn1
55445.hpc-mn1 user1 q32 MPI2 -- 3 96 -- 120:0 Q -- --Not Running: Job would cross dedicated time boundary
55454.hpc-mn1 user2 q32 MPI -- 1 4 -- 120:0 Q -- --Not Running: Job would cross dedicated time boundary
55455.hpc-mn1 user1 q32 MPI -- 1 4 -- 120:0 Q -- --Not Running: Job would cross dedicated time boundary
.....
.....
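
The same scheduler comment can also be checked for a single job (the job ID below is just taken from the listing above):

$ qstat -f 55445 | grep -i comment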

Ports used by PBS Analytics

The default http port for the PBSA service is 9000.
The default https port for the PBSA service is 9143.
The default https port for the PBSA data collector is 9343.
The default port for the PBSA MonetDB is 9200.
The default port for the Envision Tomcat-8 server is 9080.
The default https port for Envision is 9443.
The default port for the PBSA MongoDB is 9700.
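
If a firewall sits between the users and the PBSA host, these ports have to be opened. A sketch assuming firewalld on the PBSA server (trim the port list to the services you actually run):

# for p in 9000 9143 9343 9200 9080 9443 9700; do firewall-cmd --permanent --add-port=${p}/tcp; done
# firewall-cmd --reload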

Displaying node-level resource summary

P1: To view a node-level resource summary, similar to bhosts in Platform LSF

# pbsnodes -aSn
n003 job-busy 1 1 0 377gb/377gb 0/32 0/0 0/0 14654
n004 job-busy 1 1 0 377gb/377gb 0/32 0/0 0/0 14661
n005 free 9 9 0 346gb/346gb 21/32 0/0 0/0 14570,14571,14678,14443,14608,14609,14444,14678,14679
n006 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n008 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n009 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n010 job-busy 1 1 0 377gb/377gb 0/32 0/0 0/0 14665
n012 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n013 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n014 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n015 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n007 free 0 0 0 377gb/377gb 32/32 0/0 0/0 --
n016 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n017 job-busy 1 1 0 377gb/377gb 0/32 0/0 0/0 14676
n018 job-busy 1 1 0 377gb/377gb 0/32 0/0 0/0 14677
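
To list only the nodes in a particular state, the summary can simply be filtered with grep, for example:

# pbsnodes -aSn | grep free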

P2: To view the job summary, with scheduler comments explaining queued jobs, via qstat

# qstat -ans | less
Req'd Req'd Elap
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
40043.hpc-mn1 chunfei0 iworkq Ansys 144867 1 1 256mb 720:0 R 669:1
r001/11
Job run at Mon Oct 21 at 15:30 on (r001:ncpus=1:mem=262144kb:ngpus=1)
40092.hpc-mn1 e190013 iworkq Ansys 155351 1 1 256mb 720:0 R 667:0
r001/13
Job run at Mon Oct 21 at 17:41 on (r001:ncpus=1:mem=262144kb:ngpus=1)
42557.mn1 i180004 q32 LAMMPS -- 1 48 -- 72:00 Q --
--
Not Running: Insufficient amount of resource: ncpus (R: 48 A: 14 T: 2272)
42941.mn1 hpcsuppo iworkq Ansys 255754 1 1 256mb 720:0 R 290:2
hpc-r001/4
Job run at Wed Nov 06 at 10:18 on (r001:ncpus=1:mem=262144kb:ngpus=1)
....
....
....
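
To include recently finished jobs in the same view (assuming job history is enabled on the server), add the -x option:

# qstat -ansx | less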

Clearing the password cache for Altair Display Manager

If you are using Altair Display Manager and you encounter the error message java.util.concurrent.ExecutionException, clear the cached credentials with the following steps.


Resolution Step 1: 

Click the icon at the top left-hand corner of the browser window.


Resolution Step 2:

Click the Compute Manager Icon


Resolution Step 3:

At the top-right corner of the browser window, click the settings icon and select “Edit/Unregister”.


Resolution Step 4:

At the bottom left-hand corner, click “Unregister”.

Click “Yes” to confirm.


Resolution Step 5:

Click “Save”

Log out and log in again.

Adding a New Application to the Display Manager Portal for PBS-Pro

These are the steps to set up an application so that it is ready for the PBS-Pro Display Manager Console.

Step 1: Copy and Edit XML Files in the PBS PAS Repository

# cd /var/spool/pas/repository/applications/
# cp -Rv GlxSpheres Ansys

There are three important files that must be renamed to match the new application name:

# mv app-inp-GlxSpheres.xml app-inp-Ansys.xml
# mv app-conv-GlxSpheres.xml app-conv-Ansys.xml
# mv app-actions-GlxSpheres.xml app-actions-Ansys.xml
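
Equivalently, the three mv commands can be done in one bash loop (a sketch, assuming the XML files sit inside the newly copied Ansys directory):

# cd Ansys
# for f in *GlxSpheres*.xml; do mv "$f" "${f/GlxSpheres/Ansys}"; done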

Step 2: Change the contents of the XML files, replacing the original name (GlxSpheres) with the new application name (Ansys)

# sed -i "s/GlxSpheres/Ansys/g" *.xml
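
To confirm nothing was missed, check that no file still references the old name (the command should print nothing):

# grep -l GlxSpheres *.xml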

Step 3: Edit site-config.xml to include the path to the new application's executable

# cd /var/spool/pas/repository
# vim site-config.xml

Step 4: Updating Icons for the PBS-Pro Display Manager

See Updating Icons for the PBS-Pro Display Manager