AImageLab-HPC

Jupyter Notebook on a Compute Node

Last updated: March 30, 2026


Running Jupyter as a SLURM job gives you access to a compute node’s GPU from an interactive notebook interface. This is particularly useful for exploratory debugging, inspecting tensors, and iterating quickly on code that requires GPU access.

Prerequisites

Jupyter must be installed in your virtual environment. If you have not set one up yet, see Using Python first.

source /homes/<username>/envs/my_env/bin/activate
pip install jupyterlab

Step 1 - Write the Job Script

Create a script (e.g. jupyter.sh) that requests a GPU, starts JupyterLab, and prints the connection URL:

#!/bin/bash
#SBATCH --job-name=jupyter
#SBATCH --partition=all_usr_prod
#SBATCH --qos=all_qos_dbg
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00
#SBATCH --output=/work/<project>/logs/jupyter_%j.out
#SBATCH --account=<project>

source /homes/<username>/envs/my_env/bin/activate

jupyter lab --no-browser --ip=0.0.0.0 --port=8888

QOS: all_qos_dbg is recommended for debugging sessions - it gives fast queue access with a 1-hour limit and 1 GPU. For longer sessions use a production QOS (your role-specific QOS, if you have one) instead of all_qos_dbg, and increase --time accordingly.
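If several users run Jupyter on the same node, a hard-coded port can collide with another session. A minimal variant of the script's last lines (assuming GNU shuf is available on the node) picks a random unprivileged port instead:

```shell
# Pick a random port to avoid collisions with other Jupyter sessions
# on the same node (availability of shuf on the node is an assumption).
PORT=$(shuf -i 8000-9999 -n 1)
echo "JupyterLab will listen on port ${PORT}"
jupyter lab --no-browser --ip=0.0.0.0 --port="${PORT}"
```

The chosen port is printed to the job log, so use it in place of 8888 when opening the tunnel in Step 3.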

Step 2 - Submit and Find the Compute Node

Submit the job:

sbatch jupyter.sh

Wait for the job to start, then find which node it landed on:

squeue --me

Look at the NODELIST column for the assigned node name (e.g. nico). You can also check the log file once the job is running:

tail -f /work/<project>/logs/jupyter_<job_id>.out

The log will contain a line like:

http://nico:8888/lab?token=abc123...

Note down the node name and the token.
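Both lookups can be scripted. A sketch, assuming the job is named jupyter as in Step 1 and the log path matches the #SBATCH --output line (--noheader, --states, and --format are standard squeue options):

```shell
# Print just the node name of your running "jupyter" job
# (%N selects the NODELIST field).
squeue --me --name=jupyter --states=RUNNING --noheader --format=%N

# Pull the first connection URL (node + token) out of the job log;
# substitute your actual project and job ID in the path.
grep -m1 -o 'http://[^ ]*token=[^ ]*' /work/<project>/logs/jupyter_<job_id>.out
```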

Step 3 - Set Up the SSH Tunnel

From your local machine, open an SSH tunnel through the login node to the compute node:

ssh -L 8888:<node>:8888 <username>@ailb-login-02.ing.unimore.it

Replace <node> with the node name from Step 2 (e.g. nico). Keep this terminal open for the duration of your session.

If port 8888 is already in use on your local machine, pick any free port (e.g. 8889):

ssh -L 8889:<node>:8888 <username>@ailb-login-02.ing.unimore.it
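To find out whether a local port is free before opening the tunnel, you can probe it with a quick sketch like the following (it relies on bash's /dev/tcp feature, so run it under bash):

```shell
PORT=8888
# Try a TCP connection to the port on the local machine: if it
# succeeds, something is already listening there; if it fails,
# the port is free for the tunnel.
if (exec 3<>"/dev/tcp/127.0.0.1/${PORT}") 2>/dev/null; then
  echo "port ${PORT} is already in use - pick another (e.g. 8889)"
else
  echo "port ${PORT} is free"
fi
```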

Step 4 - Connect in the Browser

Open your browser and navigate to:

http://localhost:8888

When prompted for a token, paste the token from the log output. Alternatively, use the full URL from the log directly (replacing the node hostname with localhost):

http://localhost:8888/lab?token=abc123...

You are now connected to a JupyterLab session running on a GPU node. You can verify GPU access inside a notebook:

import torch
print(torch.cuda.is_available())   # should print True
print(torch.cuda.get_device_name(0))

Stopping the Session

When you are done, shut down JupyterLab from the browser (File > Shut Down) or cancel the SLURM job:

scancel <job_id>

Closing the browser tab alone does not release the allocated resources - the SLURM job continues running until it is cancelled or the walltime expires.