Slurm has a cool command called `seff` (which stands for Slurm Efficiency). You can run `seff` on a completed job to see what resources (CPU and memory) your job actually used. Here is an example:
```
$ seff 502529
Job ID: 502529
Cluster: starlight
User/Group: jehalter/jehalter
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 36
CPU Utilized: 18:57:15
CPU Efficiency: 47.23% of 1-16:07:48 core-walltime
Job Wall-clock time: 01:06:53
Memory Utilized: 183.78 GB
Memory Efficiency: 49.01% of 375.00 GB
```
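If you don't remember the job ID, `sacct` can list your recent jobs; the flags below are standard `sacct` options (adjust the user and start date as needed):

```bash
# List your jobs since the given date, with their final state and elapsed time
sacct -u "$USER" --starttime=2020-09-01 --format=JobID,JobName,Partition,State,Elapsed
```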
In this case the job used only about half of the 36 cores and 375 GB it requested, so the next run could ask for less. For reference, here is the script that was submitted for this job:

```bash
#!/bin/bash
#SBATCH --partition=Pisces
#SBATCH --job-name=snp_listeria
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=36
#SBATCH --mem=375gb
#SBATCH --time=24:00:00
#SBATCH --output=%x_%j.out

echo "======================================================"
echo "Start Time  : $(date)"
echo "Submit Dir  : $SLURM_SUBMIT_DIR"
echo "Job ID/Name : $SLURM_JOBID / $SLURM_JOB_NAME"
echo "Num Tasks   : $SLURM_NTASKS total [$SLURM_NNODES nodes @ $SLURM_CPUS_ON_NODE CPUs/node]"
echo "Hostname    : $HOSTNAME"
echo "======================================================"
echo ""

cd $SLURM_SUBMIT_DIR

#module avail
#module purge
module load snp-pipeline/2.2.1
module load sra-tools/2.10.5
module list
echo ""

cfsan_snp_pipeline run -m soft -o outputDirectory -s cleanInputs/samples \
    cleanInputs/reference/CFSAN023463.HGAP.draft.fasta

echo ""
echo "======================================================"
echo "End Time    : $(date)"
```
Read more at https://urc.uncc.edu/research-clusters.
<aside>
⚠️ Some queues have changed names since 2020:
- Pisces (previously named Steelhead): Janies' lab
- Draco (previously named Hammerhead): requires special permissions
- Orion: the new general queue
</aside>
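If you are not sure which partitions exist now or what their limits are, `sinfo` (a standard Slurm command; the format string below is just one common choice) will list them:

```bash
# One-line summary per partition
sinfo -s

# Or pick the columns explicitly: partition, availability, time limit, node count
sinfo -o "%P %a %l %D"
```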