# Nodes configuration
|
|
|
|
|
|
|
The configuration has a frontal node, named `atlas`, and 2 compute nodes, `atlas[5-6]`.
|
|
|
|
|
|
## Frontal node `atlas`
|
|
|
|
|
|
|
* 70 TB for data storage (10'000 rpm HDD, directory `/data`)
|
|
|
* NFS mount to access laboratory data (such as `/home`)
|
|
|
|
|
|
|
|
## Compute nodes `atlas[5-6]`
|
|
|
|
|
|
* 24 cores on 2 sockets (Intel Xeon E5-2680 v4 @ 2.40GHz), hyperthreaded
|
|
* 256 GB of RAM
|
|
|
* 1 TB scratch dir `/scratch`
|
|
|
* NFS mount to access frontal node data: `/ssd` and `/data`
|
|
|
* NFS mount to access laboratory data (such as `/home`)
|
|
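From the figures above, the aggregate capacity of the compute partition can be tallied; a minimal sketch (the hyperthreading factor of 2 is an assumption — the list only says the cores are hyperthreaded):

```python
# Aggregate capacity of the atlas[5-6] compute partition,
# using the per-node figures listed above.
NODES = 2
CORES_PER_NODE = 24
HT_FACTOR = 2                # assumed: 2 hardware threads per core
RAM_GB_PER_NODE = 256
SCRATCH_TB_PER_NODE = 1

logical_cpus = NODES * CORES_PER_NODE * HT_FACTOR
total_ram_gb = NODES * RAM_GB_PER_NODE
total_scratch_tb = NODES * SCRATCH_TB_PER_NODE

print(f"{logical_cpus} logical CPUs, {total_ram_gb} GB RAM, "
      f"{total_scratch_tb} TB scratch")
```

This gives 96 logical CPUs, 512 GB of RAM, and 2 TB of node-local scratch in total.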
|
|
|
|
|
|
|
|
|
|
|
|
|
Everything is interconnected with both 10 Gb Ethernet and 40 Gb InfiniBand cards.
|
|
|
The workload manager is [slurm](https://computing.llnl.gov/linux/slurm/).
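
Jobs are therefore submitted with `sbatch` rather than run directly on the nodes. A minimal job-script sketch (job name, task count, and time limit are illustrative, not site policy):

```shell
#!/bin/bash
#SBATCH --job-name=demo      # lines starting with #SBATCH are directives read by sbatch
#SBATCH --nodes=1
#SBATCH --ntasks=24          # one task per physical core of a compute node
#SBATCH --time=01:00:00

# SLURM_NTASKS is exported by Slurm at run time; default to 1 so the
# script also works as a local dry run.
ntasks=${SLURM_NTASKS:-1}
echo "Running ${ntasks} task(s) on $(hostname)"
```

Saved as `job.sh`, this would be submitted with `sbatch job.sh`; Slurm then schedules it on one of the compute nodes.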
|
|
|
|
|
|