^ Command ^ Example syntax ^ Meaning ^
|sbatch|sbatch <jobscript>| Submit a batch job.|
|srun|%%srun --pty -t 0-0:5:0 -p cpu /bin/bash -i%%|Start an interactive session for five minutes in the cpu queue.|
|squeue|squeue -u <userid>|View status of your jobs in the queue. Only non-completed jobs will be shown.|
|scontrol|scontrol show job <jobid>| Look at a running job in detail. For more information about the job, add the -dd parameter.|
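
For example, a minimal batch script (here named ''test.slurm''; the file name and the program line are placeholders) and the usual submit-and-check cycle look like this:

<code>
#!/bin/bash
#SBATCH --job-name=test         # a short name for the job
#SBATCH --partition=cpu         # the cpu queue, as in the srun example above
#SBATCH --ntasks=1              # run a single task
#SBATCH --time=0-0:10:0         # wall-time limit of ten minutes

hostname                        # replace with your actual program
</code>

<code>
sbatch test.slurm
squeue -u <userid>
</code>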
  
  
  * Request a node with 12GB of RAM (total): ''%%sbatch --mem=12G job_script%%''. To see how much memory is currently available on the nodes: ''%%sinfo --Node -l%%''

  * Request a node with 6GB of RAM per core (CPU): ''%%sbatch --mem-per-cpu=6G job_script%%''

  * Most of the Keck nodes have 24 GB of RAM (23936 MB), but there are two nodes which have 32 GB (31977 MB) of RAM (nodes w16 and w17). If your job needs more than 20GB of RAM (but less than 32GB), you can request one of the "high-memory" nodes with the following statements in your SLURM batch file (a complete example follows below):

  #SBATCH --mem=30G               # request allocation of 30GB RAM for the job
  #SBATCH --nodelist=w16          # send the job to w16 (or w17); pick a node which has no jobs running
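
Putting the two directives together, a sketch of a complete high-memory batch file (the job name and the program are placeholders) could look like this:

<code>
#!/bin/bash
#SBATCH --job-name=bigmem       # placeholder job name
#SBATCH --mem=30G               # request allocation of 30GB RAM for the job
#SBATCH --nodelist=w17          # pin the job to high-memory node w17 (or w16)

./my_program                    # placeholder for the actual computation
</code>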
  
  
  * Canceling jobs:
|scancel 1234                           | cancel job 1234|
|scancel -u myusername                  | cancel all my jobs|
|%%scancel -u myusername --state=running%%  | cancel all my running jobs|
|%%scancel -u myusername --state=pending%%  | cancel all my pending jobs|
  
  
|scontrol show job <jobid> -dd|show details for a running job, -dd requests more detail|
  
|%%sstat -j <jobid>.batch --format JobID,MaxRSS,MaxVMSize,NTasks%% | show status information for a running job; you can find all the fields you can specify with the %%--format%% parameter by running sstat -e|
|%%sacct -j <jobid> --format=JobId,AllocCPUs,State,ReqMem,MaxRSS,Elapsed,TimeLimit,CPUTime,ReqTres%%|get statistics on a completed job; you can find all the fields you can specify with the %%--format%% parameter by running sacct -e; you can specify the width of a field with % and a number, for example %%--format=JobID%15%% for 15 characters|
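
As a concrete example (the job ID is a placeholder), the following widens the JobID column to 15 characters while printing a few common fields:

<code>
sacct -j 123456 --format=JobID%15,State,Elapsed,MaxRSS
</code>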
  
  
  
  sbatch gaussian.slurm

You can verify that the job is in the queue:
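  squeue -u <userid>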
  
</code>
  
Please note that with older versions of Orca you have to load the appropriate MPI library to use it. This is a compatibility table between the Orca and MPI module versions:
  
|orca/4.0.0 | openmpi/2.0.1 |
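
For example, to use that pairing (assuming the environment-modules setup the module names above imply), load the matching MPI module together with Orca:

<code>
module load openmpi/2.0.1
module load orca/4.0.0
</code>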