H2 ECSs are designed to meet high-end computational needs, such as molecular modeling and computational fluid dynamics. In addition to substantial CPU power, H2 ECSs offer low-latency RDMA networking over EDR InfiniBand NICs to support memory-intensive computational workloads.
HL1 ECSs are the second generation of high-performance computing ECSs and feature large memory capacity. They are interconnected through 100 Gbit/s RDMA InfiniBand NICs and support 56 Gbit/s shared high I/O storage.
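Inside a running H2 or HL1 ECS, the EDR InfiniBand NIC should be visible to the guest OS. The following is a minimal sketch for confirming this, assuming a Linux image with the rdma-core driver stack, which exposes devices under /sys/class/infiniband; device names and paths depend on the installed drivers.

```python
#!/usr/bin/env python3
"""Sketch: list InfiniBand devices visible inside an H2/HL1 ECS.

Assumes a Linux guest with the rdma-core / InfiniBand driver stack installed,
which exposes devices under /sys/class/infiniband.
"""
from pathlib import Path

IB_SYSFS = Path("/sys/class/infiniband")

def list_ib_devices() -> None:
    if not IB_SYSFS.is_dir():
        print("No InfiniBand devices found (driver stack not installed?)")
        return
    for dev in sorted(IB_SYSFS.iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            # "rate" reports e.g. "100 Gb/sec (4X EDR)" for EDR InfiniBand
            rate = (port / "rate").read_text().strip()
            state = (port / "state").read_text().strip()
            print(f"{dev.name} port {port.name}: {rate}, state {state}")

if __name__ == "__main__":
    list_ib_devices()
```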
| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Virtualization | Local Disks | Local Disk Size (TB) | Network | Hardware |
|---|---|---|---|---|---|---|---|---|---|---|---|
| h2.3xlarge.10 | 16 | 128 | 13/13 | 90 | 8 | 12 | KVM | 1 | 3.2 | 100 Gbit/s EDR InfiniBand | CPU: Intel® Xeon® E5-2667 v4 |
| h2.3xlarge.20 | 16 | 256 | 13/13 | 90 | 8 | 12 | KVM | 1 | 3.2 | 100 Gbit/s EDR InfiniBand | CPU: Intel® Xeon® E5-2667 v4 |
| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Virtualization | Network | Hardware |
|---|---|---|---|---|---|---|---|---|---|
| hl1.8xlarge.8 | 32 | 256 | 9/9 | 90 | 8 | 12 | KVM | 100 Gbit/s EDR InfiniBand | CPU: Intel® Xeon® Processor E5-2690 v4 |
H2 and HL1: High-performance computing (HPC), big data, and Artificial Intelligence (AI)
H2 and HL1 ECSs provide computing capabilities for clusters that require large memory, good inter-node connectivity, and high storage I/O. Typical application scenarios include HPC, big data, and AI. In an HPC solution, HL1 ECSs are well suited to the Lustre parallel distributed file system, which is widely used for large-scale cluster computing.
For example, in an HPC scenario, H2 ECSs can serve as compute nodes and HL1 ECSs as storage nodes.
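To make that division of labor concrete, here is a minimal sketch that requests H2 compute nodes and an HL1 storage node through the openstacksdk Python library, assuming the ECS endpoint is OpenStack-compatible; the cloud profile (ecs-cloud), image (hpc-centos-7.9), and network (hpc-net) names are placeholders, and the node counts are arbitrary.

```python
#!/usr/bin/env python3
"""Sketch: provision an HPC cluster split across H2 (compute) and HL1 (storage) flavors.

Assumes an OpenStack-compatible endpoint reachable through openstacksdk and a
configured cloud profile named "ecs-cloud"; image/network names are placeholders.
"""
import openstack

conn = openstack.connect(cloud="ecs-cloud")  # hypothetical clouds.yaml entry

CLUSTER = {
    # role: (flavor, count) -- flavors taken from the tables above
    "compute": ("h2.3xlarge.10", 4),   # HPC compute nodes
    "storage": ("hl1.8xlarge.8", 1),   # Lustre storage node
}

for role, (flavor, count) in CLUSTER.items():
    for i in range(count):
        server = conn.create_server(
            name=f"{role}-{i:02d}",
            flavor=flavor,
            image="hpc-centos-7.9",   # placeholder image name
            network="hpc-net",        # placeholder network name
            wait=True,
        )
        print(f"{server.name}: {flavor} -> {server.status}")
```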
High-performance computing ECSs support the following EVS disk types:
- High I/O (performance-optimized I)
- Ultra-high I/O (latency-optimized)
To obtain 56 Gbit/s shared high I/O storage, simply attach high I/O (performance-optimized I) or ultra-high I/O (latency-optimized) EVS disks to the target HL1 ECSs.
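The attachment itself can be scripted. Below is a minimal sketch using the openstacksdk Python library, again assuming an OpenStack-compatible endpoint; the cloud profile (ecs-cloud), server name (storage-00), and the volume type identifier (uh-l1 for ultra-high I/O, latency-optimized) are assumptions to be checked against the disk type names actually exposed in your region.

```python
#!/usr/bin/env python3
"""Sketch: attach a latency-optimized EVS disk to an HL1 ECS.

Assumes an OpenStack-compatible endpoint via openstacksdk; the volume type
name "uh-l1" (ultra-high I/O, latency-optimized) is an assumption, check the
disk types actually available in your region.
"""
import openstack

conn = openstack.connect(cloud="ecs-cloud")   # hypothetical clouds.yaml entry
server = conn.get_server("storage-00")        # target HL1 ECS, looked up by name

volume = conn.create_volume(
    size=500,                 # GiB
    name="lustre-ost-00",
    volume_type="uh-l1",      # assumed name for ultra-high I/O (latency-optimized)
    wait=True,
)
conn.attach_volume(server, volume, wait=True)
print(f"Attached {volume.name} ({volume.size} GiB) to {server.name}")
```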
| OS | Version |
|---|---|
| CentOS | |
| Oracle Linux | |
| Red Hat | Red Hat Enterprise Linux 7.9 64bit |
| SUSE | |