wiki:slurm
wiki:slurm [2024/03/29 10:07] (current) – [SLURM useful commands] admin
^ Command ^ Example syntax ^ Meaning ^
|sbatch|%%sbatch <jobscript>%% |submit a batch job script to the queue|
|srun|%%srun --pty -t 0-0:5:0 -p cpu /bin/bash%% |start a 5-minute interactive shell on the cpu partition|
|squeue|%%squeue -u <username>%% |list your queued and running jobs|
|scontrol|%%scontrol show job <jobid>%% |show detailed information about a job|
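As a concrete example of the first row, the sketch below writes a minimal job script that could then be handed to ''sbatch''. The partition (''cpu'') and time limit are taken from the ''srun'' example above; the file name and job name are placeholders, not conventions of this cluster:

```shell
# Write a minimal SLURM job script. Partition and time limit follow the
# srun example above; job name and file name are placeholders.
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH -p cpu           # partition to run in
#SBATCH -t 0-0:5:0       # walltime: 5 minutes
#SBATCH -J demo          # job name (placeholder)
echo "Running on $(hostname)"
EOF

echo "wrote $(grep -c '^#SBATCH' job.sh) SBATCH directives to job.sh"
# → wrote 3 SBATCH directives to job.sh
# on the cluster, submit it with:  sbatch job.sh
```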
  * Request …

  * Request a node with 6GB of RAM per core (CPU): ''#SBATCH --mem-per-cpu=6G''

  * Most of the Keck nodes have 24 GB of RAM (23936 MB) but there are two nodes which have 32 GB (31977 MB) of RAM (nodes w16 and w17). If your job needs more than 20 GB of RAM (but less than 32 GB) you can request one of these nodes:

    #SBATCH --mem=30G
    #SBATCH --nodelist=w16   # or w17; pick a node which has no jobs running
  * Canceling jobs:
|scancel 1234 | cancel job 1234|
|scancel -u myusername | cancel all my jobs|
|%%scancel -u myusername --state=running%% | cancel all my running jobs|
|%%scancel -u myusername --state=pending%% | cancel all my pending jobs|
|%%scontrol show job <jobid>%% | show detailed information about a job|
|%%sstat -j <jobid>%% | show resource usage statistics of a running job|
|%%sacct -j <jobid>%% | show accounting data of a completed job|
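For instance, ''sacct'' can report selected fields for a job. In the sketch below ''12345'' is a placeholder job id, the ''--format'' fields are standard sacct column names, and the ''command -v'' guard only exists so the snippet also runs on a machine without SLURM:

```shell
jobid=12345   # placeholder; use a real job id from sbatch/squeue output

# Query accounting data if SLURM is present, otherwise fall back gracefully.
if command -v sacct >/dev/null 2>&1; then
    status_line=$(sacct -j "$jobid" --format=JobID,State,Elapsed,MaxRSS)
else
    status_line="sacct not available here; run this on the cluster"
fi
echo "$status_line"
```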
You can verify that the job is in the queue:
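For example, with ''squeue'' filtered to your own user name (the ''command -v'' guard is only there so this sketch also runs off-cluster):

```shell
user=${USER:-$(whoami)}   # your cluster user name

# Show only this user's jobs; print a fallback message without SLURM.
if command -v squeue >/dev/null 2>&1; then
    squeue -u "$user"
else
    echo "squeue not available here; run this on the cluster"
fi
```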
#SBATCH -e %j.err
set -xv

echo Running on host $(hostname)
echo "Job id: ${SLURM_JOB_ID}"
echo Time is $(date)
echo Directory is $(pwd)
echo "This job has allocated $SLURM_NPROCS processors in $SLURM_JOB_PARTITION partition"

# create a scratch directory on the SSD and copy all runtime data there
cd $scratch_dir

module load orca/5.0.3

$ORCA_PATH/
</code>
Please note that with older versions of Orca you have to load the appropriate MPI library to use it. This is a compatibility table between different Orca and MPI module versions:
|orca/4.0.0 | openmpi/ |
|orca/4.2.0 | openmpi/ |
|orca/4.2.1 | openmpi/ |
|orca/5.0.3 | no MPI loading necessary, it is built in |
==== OpenMolcas ====
wiki/slurm.1651692402.txt.gz · Last modified: 2022/05/04 12:26 by admin