Discovery is a Linux cluster composed of:

  • 86 8-Core (2x) AMD nodes (1378 cores),
  • 8 12-Core (4x) AMD nodes (364 cores),
  • 44 8-Core (2x) Intel nodes (1280 cores),
  • 18 12-Core (2x) Intel nodes (432 cores).

In aggregate, the cluster has 3200 cores, 14.5TB of memory, and more than 1.8PB of disk space.

Node Hardware Breakdown

Cell       | Vendor | CPU                                                    | Cores              | RAM              | Disk  | Scratch | Features          | Nodes
test nodes | Dell   | AMD Opteron 4284 (3.0GHz)                              | 16                 | 64GB             | 1TB   | 820GB   | amd               | x01-x03
e          | Dell   | AMD Opteron 4386 (3.1GHz)                              | 16                 | 64GB             | 1TB   | 820GB   | amd               | e01-e44
f          | Dell   | AMD Opteron 6348 (2.8GHz)                              | 48                 | 192GB            | 1.2TB | 849GB   | amd, ib4          | f01-f08
g          | HPE    | Intel Xeon E5-2640V3 (2.6GHz) + NVIDIA Tesla K80 (x24) | 16 (4992/card GPU) | 128GB (24GB GPU) | 1TB   | 820GB   | gpu               | g01-g12
h          | Dell   | Intel Xeon E5-2470 (2.3GHz)                            | 16                 | 64GB             | 1TB   | 820GB   | intel             | h01-h08
j          | Dell   | Intel Xeon E5-2690 (2.6GHz)                            | 24                 | 96GB             | 1TB   | 820GB   | intel, cellj, ib3 | j01-j18
k          | HPE    | Intel Xeon E5-2640V3 (2.6GHz)                          | 16                 | 64GB             | 1TB   | 820GB   | intel, cellk      | k01-k24
m          | HPE    | Intel Xeon E5-2643V4 (3.2GHz)                          | 16                 | 128GB            | 1TB   | 820GB   | test              | m01-m20
[Figure: Discovery Cluster Map]

Specialized Compute Nodes:

  • Discovery offers researchers specialized head nodes inside the cluster for dedicated compute. These nodes can be equipped with up to 64 compute cores and 1.5TB of memory.

Operating System:

  • CentOS 6.7 is used on Discovery, its supporting head nodes, and its compute nodes.

Node Names:

  • Compute nodes for queued jobs are managed by the scheduler.
  • GPU compute nodes are available to members only via the gpuq queue.
  • Interactive nodes (x01–x03 and g03) are available for testing your programs interactively before submitting them to the queue to run on the main cluster; a quick sanity-check sketch follows this list.
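
As a quick check before queuing work, the short Python sketch below (standard library only; the file name check_node.py is arbitrary) prints the hostname and visible core count of the node it runs on and, if nvidia-smi happens to be installed (as it should be on a GPU node such as g03), the attached GPUs. This is an illustrative sketch, not a supported site script.

    #!/usr/bin/env python
    # check_node.py -- quick interactive-node sanity check (illustrative sketch).
    # Uses only the Python standard library; works with Python 2.6+ or 3.
    import socket
    import subprocess
    from multiprocessing import cpu_count

    print("hostname: %s" % socket.gethostname())
    print("cores:    %d" % cpu_count())

    try:
        # Assumption: nvidia-smi is installed on the GPU nodes (g cell);
        # there it lists the Tesla K80 devices and their memory.
        subprocess.call(["nvidia-smi", "--query-gpu=name,memory.total",
                         "--format=csv"])
    except OSError:
        print("nvidia-smi not found (not a GPU node)")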

Node Interconnects

  • The majority of the compute nodes are connected via 10Gb Ethernet. The cluster itself is connected to Dartmouth’s Science DMZ, facilitating faster data transfers and stronger security.
  • Cells f and j have 40Gb InfiniBand connections for parallel compute (see the MPI sketch below).
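
To illustrate the kind of multi-node parallel job that benefits from the InfiniBand fabric on cells f and j, here is a minimal MPI sketch that prints one line per rank with the node it landed on. It assumes an MPI stack and the mpi4py Python bindings are available on the cluster, which this page does not confirm; the launch command (e.g., mpirun -np 96 python hello_mpi.py) depends on the local MPI installation.

    # hello_mpi.py -- minimal MPI "hello" sketch (assumes mpi4py is installed).
    # Each rank reports which node it landed on, e.g. one of f01-f08 or
    # j01-j18 when the job is placed on the InfiniBand-connected cells.
    from mpi4py import MPI
    import socket

    comm = MPI.COMM_WORLD
    print("rank %d of %d on %s"
          % (comm.Get_rank(), comm.Get_size(), socket.gethostname()))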