Carya Cluster
Carya is the latest addition to the HPE-DSI shared campus resource pool, providing public CPU and GPU nodes with shared access to storage resources. It hosts a total of 9,984 Intel CPU cores and 327,680 Nvidia GPU cores across 188 compute and 20 GPU nodes. All nodes are equipped with solid-state drives (SSDs) for local high-performance data storage; these local SSDs offer the fastest disk-based I/O on Carya.
Carya is operated by HPE-DSI. It is housed in the RCDC (Research Computing Data Core) and went into production in early September 2020. If you plan to use this system, please make sure your PI has been granted an allocation and that you have requested an account on the system. Please refer to the User Guide for specifics on running jobs, selecting resources, and the software available on this cluster.
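For orientation, a minimal batch script of the kind covered in the User Guide might look like the following. This is a sketch assuming Carya uses the Slurm scheduler common to HPE-DSI clusters; the partition name `batch` is a placeholder to be checked against the User Guide.

```bash
#!/bin/bash
#SBATCH --job-name=hello          # job name shown in the queue
#SBATCH --partition=batch         # hypothetical partition name; see the User Guide
#SBATCH --nodes=1                 # one compute node
#SBATCH --ntasks=48               # all 48 cores of a compute node
#SBATCH --time=01:00:00           # one-hour wall-clock limit
#SBATCH --output=hello_%j.out     # output file; %j expands to the job ID

# Print which node the job landed on
srun hostname
```

The script would be submitted with `sbatch hello.slurm` and monitored with `squeue -u $USER`, both standard Slurm commands.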
Theoretical peak performance is approximately 770 TFLOPS.
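As a back-of-the-envelope check (using publicly listed component specifications, not figures from the cluster documentation), theoretical peak is conventionally estimated by summing the peak rates of the CPU and GPU partitions:

$$
R_{\text{peak}} \;=\; \underbrace{N_{\text{CPU cores}} \times f_{\text{clock}} \times \frac{\text{FLOPs}}{\text{cycle}}}_{\text{CPU partition}} \;+\; \underbrace{N_{\text{GPU}} \times R_{\text{GPU peak}}}_{\text{GPU partition}}
$$

With the cluster's 64 V100 GPUs at Nvidia's listed 7&ndash;7.8 FP64 TFLOPS each (PCIe vs. SXM2 variant), the GPU partition alone accounts for roughly 450&ndash;500 TFLOPS; the CPU contribution depends on the sustained AVX-512 frequency assumed for the Xeon Gold 6252 cores.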
Node Type | CPU Type | CPU Socket Count | Total Cores | Usable Memory | SSD Disk Space | Node Count |
---|---|---|---|---|---|---|
Login HPE ProLiant DL380 | Intel Xeon G6252 | 2 | 48 | 128 GB | 8 TB | 1 |
Compute HPE ProLiant XL170r | Intel Xeon G6252 | 2 | 48 | 189 GB | 960 GB | 180 |
GPU Accelerator HPE ProLiant XL190r | Intel Xeon G6252, Nvidia V100 | CPU: 2, GPU: 2 | CPU: 48, GPU: 10,240 | CPU: 189 GB, GPU: 64 GB | 960 GB | 16 |
GPU Accelerator HPE ProLiant XL270d | Intel Xeon G6252, Nvidia V100 | CPU: 2, GPU: 8 | CPU: 48, GPU: 40,960 | CPU: 377 GB, GPU: 256 GB | 1.92 TB | 4 |
Large Memory HPE ProLiant DL380 | Intel Xeon G6252 | 2 | 48 | 755 GB | 1.92 TB | 8 |
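To illustrate how these node types map onto resource requests, a GPU job might be submitted along the following lines. Again this is a sketch assuming Slurm; the partition name and generic resource (GRES) syntax are assumptions to be verified against the User Guide.

```bash
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpu           # hypothetical GPU partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:2              # both V100s on an XL190r-class node
#SBATCH --mem=64G                 # well within the 189 GB on a GPU node
#SBATCH --time=00:30:00

# Report the GPUs visible to the job
nvidia-smi
```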
Interconnect: Carya nodes are connected via Mellanox InfiniBand HDR100 switches with a 100 Gb/s line rate.
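Users who want to confirm the fabric from a node can use the standard InfiniBand diagnostics, which report the port state and link rate (a sketch; availability of the tool on the nodes is an assumption):

```bash
# Query the local InfiniBand port; on an HDR100 fabric the
# reported rate should be 100 (Gb/s)
ibstat | grep -E 'State|Rate'
```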
Storage: Carya has 1.5 PB of shared hard-disk-based storage and 122 TB of shared flash storage space.