<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
# Content

- [Irma atlas cluster](#irma-atlas-cluster)
  - [Description](#description)
    - [Node configuration](#node-configuration)
    - [Storage](#storage)
  - [Switching between configuration: Modules](#switching-between-configuration-modules)
    - [Usage](#usage)
    - [Feel++ as a library](#feel-as-a-library)
  - [Switching network configuration](#switching-network-configuration)
  - [Slurm configuration](#slurm-configuration)
    - [Slurm partitions](#slurm-partitions)
    - [Interactive access to the nodes](#interactive-access-to-the-nodes)
    - [Basic slurm script with an MPI application](#basic-slurm-script-with-an-mpi-application)
    - [Other use-cases for slurm](#other-use-cases-for-slurm)
  - [Run a program in the background](#run-a-program-in-the-background)
    - [Nohup](#nohup)
    - [Screen (recommended)](#screen-recommended)
  - [Troubleshooting](#troobleshooting)
    - [My code runs slower on a computing server than on my laptop. What is the problem?](#my-code-runs-slower-on-a-computing-server-than-on-my-laptop-whay-is-the-problem)
  - [CMake](#cmake)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

<!-- DOCTOC SKIP -->
# Irma atlas cluster

## Description

### Node configuration
The cluster has one frontal node, named irma-atlas, and 4 compute nodes.

Frontal node (irma-atlas):

* 64 cores on 4 sockets (AMD Opteron 6386 SE, 2.8 GHz),
* 512 GB of RAM,
* 2.4 TB of SSD storage (directory: `/ssd`),
* 70 TB of data storage on 10'000 rpm HDDs (directory: `/data`),
* NFS mounts to access laboratory data (such as `/home`).

Compute nodes (x4):

* 24 cores on 2 sockets (Intel Xeon E5-2680 v3, 2.50 GHz), hyperthreaded,
* 256 GB of RAM,
* a 1 TB scratch directory (`/scratch`),
* NFS mounts to access the frontal node storage: `/ssd` and `/data`,
* NFS mounts to access laboratory data (such as `/home`).

Since November 25, 2015, one of the nodes has been equipped with 2 NVIDIA K80 GPGPU cards.

All nodes are interconnected with both 10 Gb Ethernet and 40 Gb InfiniBand cards.

The workload manager is [slurm](https://computing.llnl.gov/linux/slurm/).
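To compare a given machine against the figures above, a few standard commands help. A minimal sketch: `sinfo` is part of slurm and only exists where the cluster tools are installed, so it is guarded; `nproc` and `free` are standard Linux tools.

```shell
# Show slurm partitions and node states, if slurm is installed here.
command -v sinfo >/dev/null 2>&1 && sinfo
# Logical cores and memory visible on the current machine.
nproc
free -g
```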
### Storage

On the frontal node irma-atlas, you have access to several storage areas:

* Your home directory, `/home/<username>`. This directory is meant only for important files and should be kept to a minimal size.
* The `/data/<username>` directory. If this directory does not exist, create it so that your files are not mixed with those of other users. This partition has a size of 50 TB, so you can store large data there: simulation results, compilation-related files, libraries, and so on.
* The `/ssd/<username>` directory. If this directory does not exist, create it so that your files are not mixed with those of other users. This partition has a size of 2 TB and sits on SSDs for faster access. Use it for medium-sized data.
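Creating these per-user directories can be done once from the frontal node. A minimal sketch, assuming the `/data` and `/ssd` paths described above; the `chmod 700` (private directory) is a suggestion, not a stated site policy.

```shell
# Create personal work directories on the cluster partitions.
# A partition is skipped silently if it is not mounted on this machine.
user=${USER:-$(id -un)}
for base in /data /ssd; do
  [ -d "$base" ] || continue
  mkdir -p "$base/$user" && chmod 700 "$base/$user"
done
```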
## Switching between configuration: Modules
... | ... | |