Services

Instructions for using Triton and LUMI

On this page, you can find instructions for using Triton and LUMI.

Triton instructions

Logging into Triton is done according to the Aalto Scientific Computing instructions. Once you have access to the Triton computing environment, the necessary tools are already installed.
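
In practice, the connection is made over SSH; as a minimal example (the username is a placeholder, and the login address should be checked against the Aalto Scientific Computing instructions):

    ssh username@triton.aalto.fi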

Using Singularity on Triton does not fundamentally differ from using it outside the cluster. Triton has Apptainer installed; Apptainer is the continuation of the Singularity project after it joined the Linux Foundation. Using Apptainer does not differ from using Singularity, and on Triton both the ‘singularity‘ and ‘apptainer‘ commands work.

Detailed instructions can be found at https://coderefinery.github.io/hpc-containers.
 

Ready-made .sif images

Downloading ready-made images is possible, for example, from Sylabs Cloud
(at https://cloud.sylabs.io/library) and Docker Hub (at https://hub.docker.com/). Triton does not come with a preconfigured “remote client” connection to Sylabs Cloud, so the remote endpoint must first be added manually.

  • The remote endpoint is initialized with the apptainer remote add command, after which downloading images from Sylabs Cloud is possible:

    apptainer remote add --no-login SylabsCloud cloud.sylabs.io


After this, containers and images are downloaded using the pull and build commands.

  • The pull command is used, for example, as follows (downloading a container from a library):

    singularity pull library://<NAME>
  • The build command is used as follows:

    singularity build ubuntu.sif library://ubuntu

    When using the build command, the name of the resulting image file must be specified (ubuntu.sif above). In addition to downloading, the build command can also create a new image based on another image or on a definition file; a minimal example is shown below, and more about definition files can be found in the general instructions.
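
As a minimal sketch, a definition file can build on top of a Docker Hub base image and install packages into it (the file name, base image and package below are illustrative):

    Bootstrap: docker
    From: ubuntu:22.04

    %post
        apt-get update && apt-get install -y python3

The image is then built from the definition file, for example with:

    apptainer build myimage.sif myimage.def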


Building some containers can take a long time. For this purpose, on Triton and LUMI you can either reserve an interactive compute node or submit a build script to the queue with the ‘sbatch‘ command (see the example script after the list below).

  •       sinteractive --cores 4 --mem 4000 --tmp 10 --time 0:15:00

          The command reserves an interactive node with the given parameters.
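
A minimal build script for ‘sbatch‘ might look like the following; the resource requests and image name are illustrative and should be adjusted to your needs:

    #!/bin/bash
    #SBATCH --time=00:30:00
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=4G

    # Build the image from the Sylabs library (example image)
    apptainer build ubuntu.sif library://ubuntu

The script is submitted to the queue with, for example, ‘sbatch build.sh‘ (where build.sh is whatever name you gave the script).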

Alternatively, you can copy your own .sif images to the computing environment using the supported standard methods, rsync and sftp.
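
For example, with rsync a local image can be copied to your work directory roughly as follows (the username, login address and target path are placeholders):

    rsync -avz mycontainer.sif username@triton.aalto.fi:/scratch/work/username/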
 

Running containers on Triton

Further reading:

  • https://coderefinery.github.io/hpc-containers/intro_and_motivation/

  • https://scicomp.aalto.fi/triton/tut/interactive/

When you log into Triton, the service automatically directs you to a “login node”. From the login node you can access the data stored on Triton, but no computing power is available on this node. Therefore, when using containers, you must request resources from the Slurm queue, which gives you access to nodes that provide computing resources. There are two options for computing on Triton: either interactively with the "srun" command or by submitting a script with the "sbatch" command. You can get an interactive bash shell by running the command with the required options.

  • srun -p interactive --time=00:20:00 --mem=600M --pty bash

    Opens an interactive bash shell with 600 MB of memory and a 20-minute time limit reserved. The option "-p interactive" selects the interactive partition.

  • srun --gpus=1 --pty bash

    If GPU computing power is needed, a GPU must be requested when running the "srun" command. This command reserves one GPU, with the default time limit, for interactive use.

  • sinteractive

    Opens an interactive shell on a compute node using default resource values.

Once the job has started and you have confirmed that you are on a compute node, running Singularity/Apptainer containers works normally.
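
For example, once on a compute node (ubuntu.sif is a placeholder image name):

    apptainer exec ubuntu.sif cat /etc/os-release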

Alternatively, with the ‘sbatch‘ command you can submit batch jobs to the Slurm queue; the commands in the job script are then run inside the container.
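
A minimal sketch of such a job script is shown below; the resource requests, image name and command are illustrative:

    #!/bin/bash
    #SBATCH --time=00:10:00
    #SBATCH --mem=2G

    # Run the workload inside the container (placeholder names)
    apptainer exec ubuntu.sif python3 my_script.py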

NVIDIA and AMD GPUs

On nodes with GPU devices, it is possible to run GPU-accelerated containers using Apptainer's features. Assuming the host has the necessary CUDA/ROCm drivers, the container must be started with either CUDA or ROCm support, depending on the platform on which it is used. The ‘--nv‘ (NVIDIA) and ‘--rocm‘ (AMD) flags are used when running containers and can be added to the ‘run‘, ‘exec‘ and ‘shell‘ commands.
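
For example (the image and script names are placeholders):

    # On an NVIDIA GPU node, e.g. Triton GPU nodes
    apptainer exec --nv pytorch.sif python3 train.py

    # On an AMD GPU node, e.g. LUMI
    apptainer exec --rocm pytorch.sif python3 train.py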

LUMI instructions

Using Singularity in the LUMI computing environment does not fundamentally differ from using it outside the cluster. LUMI is equipped with Apptainer. Downloading ready-made images is possible from Sylabs Cloud (at https://cloud.sylabs.io/library) and Docker Hub (at https://hub.docker.com/). As on Triton, LUMI does not come with a preconfigured “remote client” connection to Sylabs Cloud, so the remote endpoint must first be added manually.

  • The remote endpoint is initialized with the apptainer remote add command, after which downloading images from Sylabs Cloud is possible:

    apptainer remote add --no-login SylabsCloud cloud.sylabs.io


Images from Docker registries can be pulled and converted into .sif format in the familiar way.
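
For example, pulling an image from Docker Hub converts it into a .sif file (the image and file names are illustrative):

    apptainer pull ubuntu.sif docker://ubuntu:22.04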

This service is provided by:

IT Services

For further support, please contact us.