Running jobs on GPUs
Keck II workstations w01-w10 and w13-w15.keck2.ucsd.edu each have an NVIDIA GTX 680 GPU installed. These machines can be used to run computationally intensive jobs.
All jobs must be submitted through the SGE queue manager. Rogue jobs will be terminated, and user accounts not adhering to this policy will be suspended.
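For reference, a job script is submitted and monitored with the standard SGE commands. This is a sketch; `myjob.sge` and the job ID `12345` are hypothetical placeholders:

```shell
qsub myjob.sge     # submit the job script to the queue
qstat -u $USER     # list your queued and running jobs
qdel 12345         # remove job 12345 from the queue if needed
```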
Example SGE scripts
These are example SGE scripts for running the most common applications on the GPUs.
Amber
The optimal AMBER job configuration on Keck II is one CPU and one GPU per run.
#!/bin/bash
#$ -cwd
#$ -q all.q
#$ -V
#$ -N AMBER_job
#$ -S /bin/bash
#$ -e sge.err
#$ -o sge.out
myrun=my_simulation_name
module load nvidia
module load amber
# select which GPU the run should use (device 1 here)
export CUDA_VISIBLE_DEVICES=1
# create a scratch directory on the SSD and copy all runtime data there
export scratch_dir=$(mktemp -d /scratch/${USER}.XXXXXX)
current_dir=$(pwd)
cp * "$scratch_dir"
cd "$scratch_dir"
# -c reads the starting coordinates and -r writes the restart file;
# reusing the same .rst name lets the job be resubmitted to continue the run
$AMBERHOME/bin/pmemd.cuda -O -i $myrun.in -o $myrun.out -r $myrun.rst \
-x $myrun.nc -p $myrun.prmtop -c $myrun.rst
# copy all data back and remove the scratch directory
cp * "$current_dir"
cd "$current_dir"
rm -rf "$scratch_dir"
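The scratch-directory pattern used in the script can be exercised on its own. In this sketch, /tmp stands in for /scratch and demo.in is a hypothetical input file, so it runs on any machine without the cluster:

```shell
# standalone sketch of the stage-in / compute / stage-out pattern
current_dir=$(pwd)
echo "some input" > demo.in                         # hypothetical input file
scratch_dir=$(mktemp -d /tmp/${USER:-user}.XXXXXX)  # private scratch directory
cp demo.in "$scratch_dir"                           # stage input to scratch
cd "$scratch_dir"
cp demo.in demo.out                                 # the compute step would run here
cp demo.out "$current_dir"                          # copy results back
cd "$current_dir"
rm -rf "$scratch_dir"                               # clean up scratch
```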
NAMD
Two CPUs and one GPU is the optimal configuration for a typical NAMD job on Keck II workstations.
Running NAMD on 2 CPUs/1 GPU
#!/bin/bash
#$ -cwd
#$ -q all.q
#$ -V
#$ -N NAMD_job
# request 2 CPU slots; must match the +p argument below
#$ -pe orte-host 2
#$ -S /bin/bash
#$ -e sge.err
#$ -o sge.out
module load nvidia
module load namd-cuda
# create a scratch directory and copy all runtime data there
export scratch_dir=$(mktemp -d /scratch/${USER}.XXXXXX)
current_dir=$(pwd)
cp * "$scratch_dir"
cd "$scratch_dir"
# 2 CPUs (+p2) / 1 GPU (+devices 1)
namd2 +idlepoll +p2 +devices 1 apoa1.namd >& apoa1-2.1.out
# copy all data back and remove the scratch directory
cp * "$current_dir"
cd "$current_dir"
rm -rf "$scratch_dir"
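If the slot count in `-pe orte-host` changes, the `+p` argument must change with it. SGE exports the granted slot count to the job as `$NSLOTS`, so the two can be kept in sync automatically. This is a sketch, not part of the original script:

```shell
# $NSLOTS is set by SGE to the number of slots granted by -pe orte-host
namd2 +idlepoll +p$NSLOTS +devices 1 apoa1.namd >& apoa1-${NSLOTS}.1.out
```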
wiki/gpu_jobs.1350589461.txt.gz · Last modified: 2012/10/18 12:44 by admin