AImageLab-HPC

Environment and Customization

Last updated: March 30, 2026


All software installed on AImageLab-HPC is available as environment modules. This page describes how to use the module system and gives an overview of the main available packages.

The Module Command

A set of basic modules is preloaded at login. To manage modules in your session, use the module command:

Command                           Action
module avail                      Show all available modules
module load <package>             Load a module in the current shell session
module load <package>/<version>   Load a specific version of a module
module list                       Show the modules currently loaded
module unload <package>           Unload a specific module
module purge                      Unload all loaded modules
module help <package>             Show information and basic help for a module
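For example, a typical session combining these commands might look like the following (the module names are the ones used elsewhere on this page):

```shell
module load gcc/11.4.0-none-none   # load a specific version
module list                        # confirm what is currently loaded
module purge                       # return to a clean environment
```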

Compilers

GCC

GCC 11.4.0 is the primary compiler available on AImageLab-HPC:

module load gcc/11.4.0-none-none

The GNU compilers are:

  • gcc: C compiler
  • g++: C++ compiler
  • gfortran: Fortran compiler

Loading the module sets the following environment variables: CC=gcc, CXX=g++, and FC/F90/F77=gfortran.
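These variables let build scripts and Makefiles pick up the module's compilers without hard-coding paths. A minimal sketch (the source file names are illustrative):

```shell
module load gcc/11.4.0-none-none
$CC  -O2 -o hello hello.c       # equivalent to: gcc -O2 -o hello hello.c
$CXX -O2 -o hello++ hello.cpp   # equivalent to: g++ -O2 -o hello++ hello.cpp
```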

MPI - OpenMPI

OpenMPI 5.0.7, built with GCC 11.4.0, is available for parallel applications:

module load openmpi/5.0.7-gcc-11.4.0

MPI compiler wrappers:

  • mpicc: C (gcc)
  • mpic++ / mpicxx / mpiCC: C++ (g++)
  • mpif77 / mpif90 / mpifort: Fortran (gfortran)

Example - compiling and running an MPI C program:

module load openmpi/5.0.7-gcc-11.4.0
mpicc -o myexec myprog.c
mpirun -n 4 ./myexec
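A minimal myprog.c for the example above could look like this (a standard MPI hello-world sketch; each of the 4 ranks prints one line):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```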

CUDA and GPU Libraries

Two CUDA versions are available:

module load cuda/11.8.0-none-none    # for nodes with older GPUs (ailb-login-02)
module load cuda/12.6.3-none-none    # recommended for all other nodes

Note: The GPUs on ailb-login-02 are not compatible with CUDA 12.x. Use cuda/11.8.0-none-none when working on that node.

cuDNN is available paired with the corresponding CUDA version:

module load cudnn/8.7.0.84-11.8-none-none-cuda-11.8.0    # with CUDA 11.8
module load cudnn/8.9.7.29-12-none-none-cuda-12.6.3      # with CUDA 12.6

Always load the cuDNN build that matches your chosen CUDA version.

Python

Three Python versions are available:

module load python/3.9.21-gcc-11.4.0
module load python/3.10.16-gcc-11.4.0
module load python/3.11.11-gcc-11.4.0

For instructions on setting up a personal virtual environment and installing packages, refer to the Using Python article.
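As a quick sketch, after loading one of the Python modules above you can create and use a personal virtual environment with the standard venv module (the path ~/venvs/demo is illustrative):

```shell
# Create a virtual environment in your home directory
python3 -m venv "$HOME/venvs/demo"

# Activate it; python and pip now refer to the venv's own copies
source "$HOME/venvs/demo/bin/activate"
python --version

# Leave the venv when done
deactivate
```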

Deep Learning Stack

PyTorch

PyTorch is available in multiple builds, each compiled against a specific CUDA version. Choose the one that matches your target node:

# CUDA 11.8 - for ailb-login-02
module load py-torch/2.7.0-gcc-11.4.0-cuda-11.8.0

# CUDA 12.6 - recommended for all other nodes
module load py-torch/2.7.0-gcc-11.4.0-cuda-12.6.3
module load py-torch/2.8.0-gcc-11.4.0-cuda-12.6.3
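To confirm that the loaded build matches the node you are on, a quick check (run after loading one of the py-torch modules above) is:

```shell
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```

If torch.cuda.is_available() prints False on a GPU node, the CUDA version of the build likely does not match the node's driver.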

NumPy

module load py-numpy/1.26.4-gcc-11.4.0

FAISS

FAISS (Facebook AI Similarity Search) provides efficient similarity search and clustering of dense vectors:

module load faiss/1.11.0-gcc-11.4.0

AI Inference - Ollama

Ollama allows running large language models locally. Two versions are available:

module load ollama/0.13.1-gcc-11.4.0    # latest
module load ollama/0.11.0-gcc-11.4.0    # older

Basic usage:

module load ollama/0.13.1-gcc-11.4.0
ollama serve &
ollama pull llama3
ollama run llama3

Containers - Singularity

Singularity CE 4.1.0 is available for running containerised applications without requiring root privileges:

module load singularityce/4.1.0-gcc-11.4.0

Basic usage:

# Pull an image from Docker Hub
singularity pull docker://ubuntu:22.04

# Run a container
singularity run ubuntu_22.04.sif

# Execute a specific command inside a container
singularity exec ubuntu_22.04.sif cat /etc/os-release
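On GPU nodes, Singularity can expose the host's NVIDIA driver and GPUs inside the container via the standard --nv flag (the image name here is the one pulled above):

```shell
# Run a command with GPU support inside the container
singularity exec --nv ubuntu_22.04.sif nvidia-smi
```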

Other Notable Modules

Module             Version    Description
cmake              3.31.6     Cross-platform build system generator
boost              1.88.0     General-purpose C++ libraries
intel-oneapi-mkl   2024.2.2   Intel Math Kernel Library (BLAS, LAPACK, FFT)
eigen              3.4.0      C++ template library for linear algebra
go                 1.24.3     Go programming language toolchain
rust               1.85.0     Rust programming language toolchain