wiki:slurm
^ Command ^ Example syntax ^ Meaning ^
|sbatch|sbatch <jobscript>|submit a batch job script to the queue|
|srun|%%srun --pty -t 0-0:5:0 -p cpu /%%|start an interactive session (here: 5 minutes on the cpu partition)|
|squeue|squeue -u <username>|list the queued and running jobs of a user|
|scontrol|scontrol show job <jobid>|show detailed information about a job|
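The ''sbatch'' row above expects a job script. A minimal sketch, assuming a script file named ''hello.sh'' (script and job names are illustrative; the ''cpu'' partition and time format come from the ''srun'' example):

```shell
#!/bin/bash
#SBATCH --job-name=hello       # name shown by squeue (illustrative)
#SBATCH --partition=cpu        # same partition as the srun example above
#SBATCH --time=0-0:5:0         # 5-minute time limit, same format as srun -t
#SBATCH --ntasks=1             # a single task

# The #SBATCH lines are directives read by sbatch; the rest runs
# as a normal bash script on the allocated node.
echo "Hello from $(hostname)"
```

Submit it with ''sbatch hello.sh'' and check it with ''squeue -u <username>''.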
  * Request a given amount of total RAM for the job: ''#SBATCH --mem=<size>''

  * Request a node with 6 GB of RAM per core (CPU): ''#SBATCH --mem-per-cpu=6G''

  * Most of the Keck nodes have 24 GB of RAM (23936 MB), but there are two nodes which have 32 GB (31977 MB) of RAM (nodes w16 and w17). If your job needs more than 20 GB of RAM (but less than 32 GB), you can request one of these nodes:

  #SBATCH --mem=30G
  #SBATCH --nodelist=w16   # or w17; pick the node which has no jobs running
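Put together, a complete job script requesting one of the 32 GB nodes might look like this (job name and program are hypothetical):

```shell
#!/bin/bash
#SBATCH --job-name=bigmem        # hypothetical job name
#SBATCH --ntasks=1
#SBATCH --mem=30G                # more than the 24 GB nodes can offer
#SBATCH --nodelist=w17           # or w16; pick whichever node is idle

./my_program                     # hypothetical executable needing ~30 GB of RAM
```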
  * canceling jobs:
|scancel 1234 | cancel job 1234|
|scancel -u myusername | cancel all my jobs|
|%%scancel -u myusername --state=running%% | cancel all my running jobs|
|%%scancel -u myusername --state=pending%% | cancel all my pending jobs|
|scontrol show job <jobid> | show detailed information about a pending or running job|
|%%sstat -j <jobid>%% | show live resource usage of a running job|
|%%sacct -j <jobid>%% | show accounting information for a finished job|
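Both ''sstat'' and ''sacct'' accept a ''--format'' option to select the columns of interest; for example (job id 1234 is a placeholder):

```shell
# live CPU and memory usage of a job that is still running
sstat -j 1234 --format=JobID,AveCPU,MaxRSS

# accounting record once the job has finished
sacct -j 1234 --format=JobID,JobName,State,Elapsed,MaxRSS
```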
You can verify that the job is in the queue:
cd $scratch_dir
module load orca/5.0.3
$ORCA_PATH/
</code>
Please note that with older versions of Orca you have to load the appropriate MPI library to use it. This is a compatibility table between different Orca and MPI module versions:
|orca/4.0.0 | openmpi/ |
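To see which Orca and MPI modules are actually installed on the cluster (assuming the usual environment-modules setup used by ''module load'' above), one can list them:

```shell
module avail orca      # list installed Orca versions
module avail openmpi   # list installed Open MPI versions
```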
wiki/slurm.1673114744.txt.gz · Last modified: 2023/01/07 10:05 by admin