# Node configuration
The configuration has a frontal node, named `atlas`, and 4 compute nodes `atlas[1-4]`.
## Frontal node `atlas`
* 64 cores on 4 sockets (AMD Opteron 6386 SE, 2.8 GHz)
* 512 GB of RAM
* 2.4 TB of SSD storage (directory: `/ssd`)
* 70 TB of data storage on 10,000 rpm HDDs (directory: `/data`)
* NFS mount to access laboratory data (such as `/home`)
## Compute nodes `atlas[1-4]` (x4)
* 24 cores on 2 sockets (Intel Xeon E5-2680 v3, 2.50 GHz), hyperthreaded
* 256 GB of RAM
* 1 TB of local scratch space (directory: `/scratch`)
* NFS mount to access frontal node data: `/ssd` and `/data`
* NFS mount to access laboratory data, such as `/home` (see the storage sketch after this list)
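
As a quick check of the storage layout described above, the following is a minimal sketch (assuming a standard GNU/Linux userland; the filesystem types and device names reported by `df` are site-specific and not taken from this documentation):

```bash
# Run on a compute node (atlas[1-4]) to see which of the documented
# directories are node-local and which are NFS mounts.
df -hT /scratch /ssd /data /home

# A filesystem type of "nfs"/"nfs4" indicates a network mount (/ssd,
# /data and /home), while /scratch should appear as a local filesystem.
```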

Since 25 November 2015, the `atlas4` node has been equipped with two NVIDIA K80 GPGPU cards.

All nodes are interconnected with both 10 Gb/s Ethernet and 40 Gb/s InfiniBand cards.

The workload manager is [Slurm](https://computing.llnl.gov/linux/slurm/).
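
As a hedged illustration of how jobs are typically submitted through Slurm, here is a minimal batch script sketch; the resource values, the job payload, and the GPU `gres` type name `k80` are assumptions rather than values documented on this page (check `sinfo` on `atlas` for the partitions and nodes actually available):

```bash
#!/bin/bash
# Minimal Slurm batch script sketch; the values below are illustrative
# assumptions, not site-mandated settings.
#SBATCH --job-name=example
#SBATCH --nodes=1                 # one compute node (atlas[1-4])
#SBATCH --ntasks=24               # one task per physical core
#SBATCH --time=01:00:00           # walltime limit
#SBATCH --output=example-%j.out   # %j expands to the job id
# To target the GPU node atlas4, a GPU request could be added here,
# e.g. "#SBATCH --gres=gpu:k80:2" (the gres type name is an assumption).

# Use the node-local scratch space described above for temporary files.
cd /scratch
srun hostname
```

Such a script would be submitted from the frontal node with `sbatch`, monitored with `squeue`, and the available partitions listed with `sinfo`.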