AImageLab-HPC

Node Features and Constraints

Last updated: March 29, 2026


By default, SLURM assigns your job to any available node in the requested partition. This page explains how to target specific GPU hardware using node features and the --constraint directive, which is useful when your workload has minimum VRAM requirements or needs a particular GPU architecture.

Node Features

Each compute node is tagged with a feature describing its GPU model and VRAM. The full list of available features is:

Feature            GPU model                    VRAM
gpu_1080_8G        NVIDIA GeForce GTX 1080      8 GB
gpu_2080_11G       NVIDIA GeForce RTX 2080      11 GB
gpu_2080Ti_11G     NVIDIA GeForce RTX 2080 Ti   11 GB
gpu_K80_12G        NVIDIA Tesla K80             12 GB
gpu_P100_16G       NVIDIA Tesla P100            16 GB
gpu_RTX5000_16G    NVIDIA Quadro RTX 5000       16 GB
gpu_RTX6000_24G    NVIDIA Quadro RTX 6000       24 GB
gpu_RTX_A5000_24G  NVIDIA RTX A5000             24 GB
gpu_A40_45G        NVIDIA A40                   45 GB
gpu_L40S_45G       NVIDIA L40S                  45 GB

Note: gpu_K80_12G and gpu_P100_16G are only available on the login nodes (all_serial partition, free of charge). All other features are available on all_usr_prod. The boost_usr_prod partition only contains nodes with gpu_A40_45G and gpu_L40S_45G.
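To confirm which features each node currently advertises, you can query SLURM directly. A possible invocation is sketched below; the exact node names and column widths in the output depend on the cluster configuration:

```shell
# List each node with its advertised features
# (%N = node name, %f = available features).
sinfo -o "%20N %f"
```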

Using --constraint

Specify one or more features with --constraint in your job script. Use | to express OR (any of the listed features is acceptable):

#SBATCH --constraint="gpu_L40S_45G|gpu_A40_45G"

If no constraint is specified, SLURM schedules the job on any available node in the partition, which generally leads to faster queue times.
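Putting the pieces together, a minimal job script might look like the following sketch; the job name, partition, GPU count, time limit, and the train.py command are illustrative placeholders, not cluster requirements:

```shell
#!/bin/bash
#SBATCH --job-name=train            # placeholder job name
#SBATCH --partition=all_usr_prod    # partition from this page
#SBATCH --gres=gpu:1                # request one GPU
#SBATCH --time=01:00:00             # placeholder time limit
#SBATCH --constraint="gpu_RTX6000_24G|gpu_RTX_A5000_24G|gpu_A40_45G|gpu_L40S_45G"

srun python train.py                # placeholder command
```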

Common Examples

Require at least 24 GB of VRAM:

#SBATCH --constraint="gpu_RTX6000_24G|gpu_RTX_A5000_24G|gpu_A40_45G|gpu_L40S_45G"

Require at least 16 GB of VRAM:

#SBATCH --constraint="gpu_RTX5000_16G|gpu_RTX6000_24G|gpu_RTX_A5000_24G|gpu_A40_45G|gpu_L40S_45G"

Require a specific GPU model:

#SBATCH --constraint="gpu_A40_45G"

Guidelines

Always include every GPU feature that satisfies your job’s minimum VRAM requirement, not just the exact match. For example, if your job needs 16 GB, also include the 24 GB and 45 GB features; this widens the pool of eligible nodes and reduces queue wait time.

Avoid adding constraints that are stricter than your workload actually requires. Unnecessarily narrow constraints reduce the set of eligible nodes, delay your job, and reduce throughput for other users.
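The minimum-VRAM pattern above can be automated with a small shell helper. This is a sketch, not an official tool: the feature list is copied from the table on this page (omitting gpu_K80_12G and gpu_P100_16G, which are login-node only), and the VRAM amount is parsed from the trailing _<N>G suffix of each feature name:

```shell
#!/bin/bash
# Feature names copied from the table on this page.
# gpu_K80_12G and gpu_P100_16G are excluded: login nodes only.
FEATURES="gpu_1080_8G gpu_2080_11G gpu_2080Ti_11G gpu_RTX5000_16G \
gpu_RTX6000_24G gpu_RTX_A5000_24G gpu_A40_45G gpu_L40S_45G"

# Build a --constraint value from all features with at least $1 GB of VRAM.
min_vram_constraint() {
  local min_gb=$1 out="" f vram
  for f in $FEATURES; do
    vram=${f##*_}        # trailing token, e.g. "24G"
    vram=${vram%G}       # strip the "G" to get a number
    if [ "$vram" -ge "$min_gb" ]; then
      out="${out:+$out|}$f"   # join with | (OR)
    fi
  done
  printf '%s\n' "$out"
}

min_vram_constraint 24
# prints gpu_RTX6000_24G|gpu_RTX_A5000_24G|gpu_A40_45G|gpu_L40S_45G
```

The output can be passed straight to sbatch, e.g. sbatch --constraint="$(min_vram_constraint 24)" job.sh.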