wiki:slurm

Revisions compared: 2023/08/29 07:53 – [Example SLURM monitoring commands] – admin, and 2024/03/29 10:07 (current) – [SLURM useful commands] – admin
  
  
  * Request a node with 12GB of RAM (total): ''%%sbatch --mem=12G job_script%%''. To see how much memory is currently available on the nodes: ''%%sinfo --Node -l%%''

  * Request a node with 6GB of RAM per core (CPU): ''%%sbatch --mem-per-cpu=6G job_script%%''

  * Most of the Keck nodes have 24 GB of RAM (23936 MB), but there are two nodes which have 32 GB of RAM (31977 MB): nodes w16 and w17. If your job needs more than 20GB of RAM (but less than 32GB), you can request one of the "high-memory" nodes with the following statements in your SLURM batch file:

    #SBATCH --mem=30G               # request allocation of 30GB RAM for the job
    #SBATCH --nodelist=w16          # send the job to w16 (or use w17); pick the node that has no jobs running
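Putting these directives together, a complete high-memory batch script might look like the following sketch (the job name and the program invocation are placeholders for illustration, not part of this wiki's examples):

```shell
#!/bin/bash
#SBATCH --job-name=highmem_job    # placeholder job name for illustration
#SBATCH --mem=30G                 # request allocation of 30GB RAM for the job
#SBATCH --nodelist=w16            # or w17; pick the high-memory node with no jobs running

# Replace with your actual program invocation
srun ./my_program
```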
  
  * canceling jobs:
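For example, a job is cancelled by passing its job ID (printed by ''%%sbatch%%'' at submission time and shown by ''%%squeue%%'') to ''%%scancel%%''; ''%%12345%%'' below is a placeholder job ID:

```shell
scancel 12345        # cancel the job with ID 12345
scancel -u $USER     # cancel all jobs belonging to the current user
```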
  
    sbatch gaussian.slurm
You can verify that the job is in the queue:
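For instance, ''%%squeue%%'' restricted to your own user lists your pending and running jobs:

```shell
squeue -u $USER      # list this user's jobs; the ST column shows PD (pending) or R (running)
```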
  
wiki/slurm.1693320813.txt.gz · Last modified: 2023/08/29 07:53 by admin