Update description authored by Matthieu Boileau
<!-- DOCTOC SKIP -->
# Node configuration
The cluster comprises a front-end node, named `irma-atlas`, and 4 compute nodes.
## Front-end node `irma-atlas`
* 64 cores on 4 sockets (AMD Opteron 6386 SE, 2.8 GHz),
* 512 GB of RAM,
* 70 TB for data storage (10,000 rpm HDD), mounted at `/data`,
* NFS mounts to access laboratory data (such as `/home`).
## Compute nodes `irma-atlas[1-4]` (x4)
* 24 cores on 2 sockets (Intel Xeon E5-2680 v3, 2.50 GHz), hyperthreaded,
* 256 GB of RAM.
Since 25 November 2015, the `irma-atlas4` node has been equipped with 2
Everything is interconnected with both 10 Gb Ethernet and 40 Gb InfiniBand cards.
The workload manager is [slurm](https://computing.llnl.gov/linux/slurm/).
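As an illustration, a minimal slurm batch script for running on one of the compute nodes might look as follows. This is a sketch only: the job name, time limit, and the program path under `/data` are assumptions for the example and are not prescribed by this document.

```shell
#!/bin/bash
#SBATCH --job-name=example        # job name shown by squeue (hypothetical)
#SBATCH --nodes=1                 # one of the irma-atlas[1-4] compute nodes
#SBATCH --ntasks=24               # one task per physical core of a compute node
#SBATCH --time=01:00:00           # wall-time limit (hh:mm:ss)
#SBATCH --output=%x-%j.out        # stdout/stderr file named job-name-jobid.out

# Launch an MPI program stored on the shared /data storage
# (my_mpi_program is a hypothetical binary name)
srun /data/$USER/my_mpi_program
```

Such a script would typically be submitted from the front-end node with `sbatch job.sh`, and the queue inspected with `squeue -u $USER`.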
# Storage
From every node, you have access to several storage areas.