Update slurm, authored by Matthieu Boileau
By default and for each Slurm job, a directory named `job.<job_id>` will be created in `/scratch`.
## Slurm partitions

There are 3 partitions to which you can submit jobs on atlas:

* `public`: This partition allows you to access the 5 nodes. This is the default partition, which notably allows you to run MPI jobs;
* `K80`: This partition allows you to access the node on which the K80 GPGPU cards are installed;
* `nogpu`: This partition excludes `atlas4`. You should select it if you do not need GPU resources (see the example below).
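For instance, the partition is selected with the `-p` option of `sbatch` (or `salloc`), and `sinfo` lists the partitions together with their nodes; the script name `myjob.slurm` below is only a placeholder:

```bash
# List the partitions and the nodes they contain
sinfo

# Submit to the default partition (public)
sbatch myjob.slurm

# Submit a CPU-only job to the nogpu partition
sbatch -p nogpu myjob.slurm

# Submit a job that needs the K80 GPUs
sbatch -p K80 myjob.slurm
```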
## Interactive access to the nodes

To automatically allocate resources and connect to a node, you can use:

```bash
salloc -t "03:00:00" -p public -J "jobname" --exclusive -N 1 srun --pty ${SHELL}
```
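If you do not need a whole node, a lighter request along the following lines leaves resources free for other users (the time limit and task count are only example values):

```bash
# Request a single task for one hour instead of an exclusive node
salloc -t "01:00:00" -p public -J "jobname" -n 1 srun --pty ${SHELL}
```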
> **IMPORTANT NOTE:** Please be reasonable with your use of the `--exclusive` and `-t "XX:YY:ZZ"` options, as they could prevent other users from accessing the node. You can cancel a job with `scancel`.
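As a minimal sketch, a job can be looked up with `squeue` and cancelled with `scancel` (the job id `12345` is a placeholder):

```bash
# List your own jobs to find the job id
squeue -u $USER

# Cancel the job with that id (12345 is a placeholder)
scancel 12345
```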
Here is a basic Slurm script to get you started:

export FEELPP_SCRATCHDIR=/scratch/job.$SLURM_JOB_ID
#################### OPTIONAL:
# Source the configuration for Feel++ or your custom configuration
PREVPATH=`pwd`
cd /data/software/config/etc
cd ${PREVPATH}
# Load modules here
# This is an example of a module to load
module load gcc490.profile
####################
# Finally launch the job
# mpirun from Open MPI is natively interfaced with Slurm
# No need to specify the number of processors to use
cd <appdir>
mpirun --bind-to-core <appname> --config-file <appcfg.cfg>
mkdir -p /data/<login>/slurm
cp -r /scratch/job.$SLURM_JOB_ID /data/<login>/slurm
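Since the script above appears only as fragments, here is a minimal, self-contained sketch of a complete submission script, keeping the placeholders used in this page (`<appdir>`, `<appname>`, `<appcfg.cfg>`, `<login>`); the `#SBATCH` values are only example settings, not site requirements:

```bash
#!/bin/bash
#SBATCH -J myjob                  # job name (placeholder)
#SBATCH -p public                 # partition to submit to
#SBATCH -N 2                      # number of nodes (example value)
#SBATCH -t 02:00:00               # wall-clock time limit (example value)
#SBATCH -o slurm.%j.out           # standard output file, %j expands to the job id

# Per-job scratch directory (created automatically for each job, see above)
export FEELPP_SCRATCHDIR=/scratch/job.$SLURM_JOB_ID

# Optional: load the compiler/MPI environment through modules
module load gcc490.profile

# Launch the application; mpirun picks the allocation up from Slurm
cd <appdir>
mpirun --bind-to-core <appname> --config-file <appcfg.cfg>

# Copy the results from scratch back to your /data space
mkdir -p /data/<login>/slurm
cp -r /scratch/job.$SLURM_JOB_ID /data/<login>/slurm
```

Submit the script with `sbatch`, for example `sbatch myjob.slurm`, and follow its state with `squeue -u $USER`.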