By default and for each Slurm job, a directory named `job.<job_id>` will be created in `/scratch`.
## Slurm partitions
There are three partitions on which you can submit jobs on atlas:
* `public`: This partition allows you to access the 5 nodes. This is the default partition, which notably allows you to run MPI jobs;
* `K80`: This partition allows you to access the node on which the K80 GPGPU cards are installed;
* `nogpu`: This partition excludes `atlas4`. You should select it if you do not need GPU resources.
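If you are unsure which partition to pick, `sinfo` lists them along with their nodes. As a quick illustration (the script name `myjob.sh` is a placeholder), you can target a partition explicitly at submission time with `-p`:

```bash
# List the partitions and the nodes they contain
sinfo
# Submit a batch script to a given partition, e.g. nogpu
sbatch -p nogpu myjob.sh
```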
## Interactive access to the nodes
To automatically allocate a node and connect to it, you can use:
```bash
salloc -t "03:00:00" -p public -J "jobname" --exclusive -N 1 srun --pty ${SHELL}
```
> **IMPORTANT NOTE:** Please be reasonable with your use of `--exclusive` and `-t "XX:YY:ZZ"`, as they could prevent other users from accessing the node. You can cancel a job with `scancel`.
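For example (the job ID below is a placeholder), you can look up your jobs with `squeue` and cancel one by its ID:

```bash
# List your own jobs with their IDs
squeue -u $USER
# Cancel a given job (12345 is a placeholder ID)
scancel 12345
```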
Here is a basic Slurm script to get you started:

```bash
#!/bin/bash
export FEELPP_SCRATCHDIR=/scratch/job.$SLURM_JOB_ID
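# /scratch/job.$SLURM_JOB_ID is the per-job scratch directory
# created for each Slurm job; FEELPP_SCRATCHDIR points Feel++ at it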
#################### OPTIONAL:
# If you want to use modules, you first have to activate the module command
source /etc/profile.d/modules.sh
# Source the configuration for Feel++ or your custom configuration
PREVPATH=`pwd`
cd /data/software/config/etc
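# source the configuration file you need here, e.g.:
# source <config>.sh    (placeholder: pick the file matching your setup)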
cd ${PREVPATH}
# Load modules here
# This is an example of a module to load
module load gcc490.profile
####################
# Finally launch the job
# mpirun from Open MPI is natively interfaced with Slurm,
# so there is no need to specify the number of processors to use
cd <appdir>
mpirun --bind-to-core <appname> --config-file <appcfg.cfg>
mkdir -p /data/<login>/slurm
cp -r /scratch/job.$SLURM_JOB_ID /data/<login>/slurm
```
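Assuming you saved this script as `job.sh` (the name is just a placeholder), you would submit it and monitor its state like this:

```bash
# Submit to the default (public) partition
sbatch job.sh
# Check the job queue for your own jobs
squeue -u $USER
```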