@@ -19,10 +19,11 @@ By default and for each Slurm job, a directory named `job.<job_id>` will be crea
## Slurm partitions
There are 3 partitions on which you can submit jobs on atlas:
* `public`: This partition allows you to access the 5 nodes. It is the default partition, and notably allows you to run MPI jobs;
* `K80`: This partition allows you to access the node on which the K80 GPGPU cards are installed;
* `nogpu`: This partition excludes `atlas4`. Select it if you do not need GPU resources (see the example after this list).
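As an illustration (a minimal sketch; `job.sh` is a hypothetical job script), the partition is selected with the standard `-p` option of `sbatch` or `srun`:
```bash
# Submit a CPU-only batch job to the nogpu partition
sbatch -p nogpu job.sh

# Request a shell on the GPU node by targeting the K80 partition
srun -p K80 --pty ${SHELL}
```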
## Interactive access to the nodes
...
...
@@ -46,7 +47,6 @@ To automatically allocate and connect, you can use
```bash
salloc -t "03:00:00" -p public -J "jobname" --exclusive -N 1 srun --pty ${SHELL}
```
> **INFO:** A wrapper for this interactive command, `compute`, will soon be available.
> **IMPORTANT NOTE:** Please be reasonable with your use of the `--exclusive` and `-t "XX:YY:ZZ"` options, as they could prevent other users from accessing the node. You can cancel a job with `scancel`.
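For example (a minimal sketch; `1234` is a placeholder job ID), you can list your jobs to find the job ID and then cancel it:
```bash
# List your own running and pending jobs to find the job ID
squeue -u $USER
# Cancel a specific job (1234 is a placeholder job ID)
scancel 1234
```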
...
...
@@ -88,10 +88,6 @@ Here is a basic slurm script to get you started: