<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

# Content
- [Slurm configuration](#slurm-configuration)
- [Slurm partitions](#slurm-partitions)
- [Interactive access to the nodes](#interactive-access-to-the-nodes)
- [Basic slurm script with an MPI application](#basic-slurm-script-with-an-mpi-application)
- [Other use-cases for slurm](#other-use-cases-for-slurm)
- [Run a program in the background](#run-a-program-in-the-background)
  - [Nohup](#nohup)
  - [Screen (recommended)](#screen-recommended)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->
By default and for each Slurm job, a directory named `job.<job_id>` will be created.
## Slurm partitions

There are two partitions to which you can submit jobs on atlas:
* public: This is the default partition and gives you access to all 4 nodes; it notably allows you to run MPI jobs.
* K80: This partition gives you access to the node on which the K80 GPGPU cards are installed.
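These partition names can be passed directly to Slurm's submission commands. A minimal sketch (the batch script name is a placeholder, and the commands require a Slurm installation):

```shell
# Summarize the available partitions and their node counts
sinfo -s

# Submit a job to the K80 partition instead of the default public one
# (my_gpu_job.sh is a hypothetical batch script)
sbatch -p K80 my_gpu_job.sh
```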
If you want to keep accessing a node for a certain period of time, you can allocate it in advance:

```
# Here you allocate a job with the following constraints:
# -t "02:00:00": the job will remain active for 2 hours
# -p K80: it will be submitted to the K80 partition
# -w atlas4: the job will target the atlas4 machine only
# --exclusive: you will have exclusive access to the node
salloc -t "02:00:00" -p K80 -w atlas4 --exclusive
```

> **INFO** This command only allocates the node exclusively for yourself.

You still have to connect to the node via ssh (`ssh atlas4`) before doing computations.
To allocate a node and connect to it automatically in a single step, you can use `srun`.
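A common way to do this, sketched here with the same options as the `salloc` example above (verify the exact options against your site's documentation), is `srun` with a pseudo-terminal:

```shell
# Allocate atlas4 exclusively for 2 hours and open an interactive shell on it
srun -t "02:00:00" -p K80 -w atlas4 --exclusive --pty bash
```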

Here is a basic slurm script to get you started:

```
#SBATCH -p public

# number of cores
#SBATCH -n 96

# Hyperthreading is enabled on atlas; if you do not want to use it,
# you must specify the following option
#SBATCH --ntasks-per-core 1

# min-max number of nodes
```
Then you can launch the application with `sbatch <name_of_the_script>`.
## Other use-cases for slurm
* Slurm with R: samples from the University of Michigan [(external link)](http://sph.umich.edu/biostat/computing/cluster/slurm.html)
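To illustrate the pattern used in such samples, a minimal batch script calling R could look like this (script name, file name, and time limit are hypothetical):

```shell
#!/bin/bash
#SBATCH -p public
#SBATCH -n 1
#SBATCH -t 01:00:00

# Run an R script non-interactively; analysis.R is a placeholder name
Rscript analysis.R
```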
## Run a program in the background
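With plain `nohup`, a command keeps running after you log out because it ignores the hangup signal (SIGHUP). A minimal sketch:

```shell
# Start a command in the background, immune to hangups;
# stdout and stderr are redirected to a log file
nohup sh -c 'echo "job done"' > nohup.log 2>&1 &

# Wait for the background job to finish, then inspect its output
wait
cat nohup.log
```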
There are many shortcuts you can use. Some of them are summed up here:
<ctrl+a> <d> : Detach from the current screen session
```

If you want to make screen more user-friendly, you can customize it so that the bottom status line displays all the terminals opened in screen and which one is currently active. There are some configuration examples at the following link: [screenrc examples](https://bbs.archlinux.org/viewtopic.php?id=55618)
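A minimal `~/.screenrc` along those lines (adapted from common examples; adjust colors and fields to taste) could be:

```
# Show a permanent status line at the bottom, listing all windows
# and highlighting the current one, plus hostname, date, and time
hardstatus alwayslastline
hardstatus string '%{= kG}[%H] %{= kw}%-w%{= gW}%n %t%{-}%+w %=%{= kG}%Y-%m-%d %c'
```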
See the manual for other features.