GPU-accelerated ECSs provide outstanding floating-point computing capabilities. They are suitable for applications that require real-time, highly concurrent massive computing.
Recommended: Computing-accelerated P2s, Inference-accelerated PI2, and Graphics-accelerated Enhancement G6
Available now: all GPU models other than the recommended ones. If the available models are sold out, use the recommended ones instead.
Overview
G6 ECSs use NVIDIA Tesla T4 GPUs, support DirectX, OpenGL, and Vulkan, and provide 16 GiB of GPU memory. The theoretical pixel rate is 101.8 Gpixel/s and the theoretical texture rate is 254.4 GTexel/s, meeting professional graphics processing requirements.
Select your desired GPU-accelerated ECS type and specifications.
Specifications
Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Memory (GiB) | Virtualization | Hardware
---|---|---|---|---|---|---|---|---|---|---
g6.10xlarge.7 | 40 | 280 | 25/15 | 200 | 16 | 8 | 1 × T4 | 16 | KVM | CPU: Intel® Xeon® Cascade Lake 6266
g6.20xlarge.7 | 80 | 560 | 30/30 | 400 | 32 | 16 | 2 × T4 | 32 | KVM | CPU: Intel® Xeon® Cascade Lake 6266
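Note that the PPS column uses a compact unit: "Max. PPS (10,000)" counts packets per second in units of 10,000, while bandwidth is already in Gbit/s. A minimal sketch of expanding the compact figures into absolute values (flavor data copied from the table above; the helper name is illustrative):

```python
# Compact figures from the G6 specifications table:
# (max bandwidth Gbit/s, assured bandwidth Gbit/s, max PPS in units of 10,000)
G6_FLAVORS = {
    "g6.10xlarge.7": (25, 15, 200),
    "g6.20xlarge.7": (30, 30, 400),
}

def max_pps(flavor: str) -> int:
    """Expand the 'Max. PPS (10,000)' column into absolute packets per second."""
    return G6_FLAVORS[flavor][2] * 10_000

print(max_pps("g6.10xlarge.7"))  # 2000000 packets per second
```

So a g6.10xlarge.7 ECS supports up to 2,000,000 packets per second.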
A g6.10xlarge.7 ECS exclusively uses a T4 GPU for professional graphics acceleration. Such an ECS can also be used for heavy-load CPU inference.
G6 ECS Features
Supported Common Software
G6 ECSs are used in graphics acceleration scenarios, such as video rendering, cloud desktop, and 3D visualization. If your software relies on DirectX or OpenGL hardware acceleration, use G6 ECSs. G6 ECSs support the following commonly used graphics processing software:
Notes
Overview
P3v ECSs use NVIDIA A100 GPUs and provide flexibility and ultra-high-performance computing. P3v ECSs have strengths in AI-based deep learning, scientific computing, Computational Fluid Dynamics (CFD), computational finance, seismic analysis, molecular modeling, and genomics. Theoretically, each A100 provides 19.5 TFLOPS of FP32 single-precision performance and 156 TFLOPS (sparsity disabled) or 312 TFLOPS (sparsity enabled) of TF32 peak tensor performance.
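The two TF32 figures differ by exactly the structured-sparsity factor: the A100 doubles tensor throughput for models pruned to NVIDIA's 2:4 structured-sparsity pattern. A quick arithmetic check of the figures quoted above:

```python
# TF32 peak tensor performance from the overview above.
DENSE_TF32_TFLOPS = 156   # sparsity disabled
SPARSITY_SPEEDUP = 2      # 2:4 structured sparsity doubles tensor throughput

sparse_tflops = DENSE_TF32_TFLOPS * SPARSITY_SPEEDUP
print(sparse_tflops)  # 312
```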
Specifications
Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | GPU | GPU Connection | GPU Memory (GiB) | Virtualization
---|---|---|---|---|---|---|---|---|---|---
p3v.3xlarge.8 | 12 | 96 | 17/5 | 200 | 4 | 4 | 1 × NVIDIA A100 80GB | N/A | 80 | KVM
p3v.6xlarge.8 | 24 | 192 | 25/9 | 400 | 8 | 8 | 2 × NVIDIA A100 80GB | NVLink | 160 | KVM
p3v.12xlarge.8 | 48 | 384 | 35/18 | 500 | 16 | 8 | 4 × NVIDIA A100 80GB | NVLink | 320 | KVM
p3v.24xlarge.8 | 96 | 768 | 40/36 | 850 | 32 | 8 | 8 × NVIDIA A100 80GB | NVLink | 640 | KVM
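The "GPU Memory (GiB)" column is simply the per-GPU memory (80 GiB per A100) multiplied by the GPU count. A small consistency check against the table above (flavor names and counts copied from the table):

```python
# Each A100 in a P3v flavor carries 80 GiB of GPU memory.
A100_MEMORY_GIB = 80

# GPU counts per flavor, from the P3v specifications table.
P3V_GPU_COUNTS = {
    "p3v.3xlarge.8": 1,
    "p3v.6xlarge.8": 2,
    "p3v.12xlarge.8": 4,
    "p3v.24xlarge.8": 8,
}

for flavor, gpus in P3V_GPU_COUNTS.items():
    total = gpus * A100_MEMORY_GIB
    print(f"{flavor}: {gpus} × A100 -> {total} GiB GPU memory")
```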
P3v ECS Features
Networks are user-defined: you can divide subnets and configure network access policies as needed. Mass storage with elastic capacity expansion and backup and restoration support keeps data secure. Auto Scaling lets you quickly add or remove ECSs.
Similar to other types of ECSs, P3v ECSs can be provisioned in a few minutes.
The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P3v ECSs.
Supported Common Software
P3v ECSs are used in computing acceleration scenarios, such as deep learning training and inference, scientific computing, molecular modeling, and seismic analysis. If the software requires GPU CUDA parallel computing, use P3v ECSs. P3v ECSs support the following commonly used software:
Notes
Overview
P2s ECSs use NVIDIA Tesla V100 GPUs to provide flexibility, high-performance computing, and cost-effectiveness. P2s ECSs provide outstanding general computing capabilities and have strengths in AI-based deep learning, scientific computing, Computational Fluid Dynamics (CFD), computational finance, seismic analysis, molecular modeling, and genomics.
Specifications
Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Connection | GPU Memory (GiB) | Virtualization | Hardware
---|---|---|---|---|---|---|---|---|---|---|---
p2s.2xlarge.8 | 8 | 64 | 10/4 | 50 | 4 | 4 | 1 × V100 | PCIe Gen3 | 1 × 32 | KVM | CPU: 2nd Generation Intel® Xeon® Scalable Processor 6278
p2s.4xlarge.8 | 16 | 128 | 15/8 | 100 | 8 | 8 | 2 × V100 | PCIe Gen3 | 2 × 32 | KVM | CPU: 2nd Generation Intel® Xeon® Scalable Processor 6278
p2s.8xlarge.8 | 32 | 256 | 25/15 | 200 | 16 | 8 | 4 × V100 | PCIe Gen3 | 4 × 32 | KVM | CPU: 2nd Generation Intel® Xeon® Scalable Processor 6278
p2s.16xlarge.8 | 64 | 512 | 30/30 | 400 | 32 | 8 | 8 × V100 | PCIe Gen3 | 8 × 32 | KVM | CPU: 2nd Generation Intel® Xeon® Scalable Processor 6278
Networks are user-defined: you can divide subnets and configure network access policies as needed. Mass storage with elastic capacity expansion and backup and restoration support keeps data secure. Auto Scaling lets you quickly add or remove ECSs.
Similar to other types of ECSs, P2s ECSs can be provisioned in a few minutes.
The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P2s ECSs.
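From inside a P2s ECS, you can verify which GPUs were allocated with `nvidia-smi --query-gpu=name,memory.total --format=csv,noheader`. A minimal sketch that parses such output; the sample string below is illustrative (what a p2s.4xlarge.8 with 2 × V100 32 GiB might report), not captured from a real ECS:

```python
# Illustrative output of:
#   nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
# Run the command on the ECS itself to get real data.
SAMPLE_OUTPUT = """\
Tesla V100-PCIE-32GB, 32768 MiB
Tesla V100-PCIE-32GB, 32768 MiB
"""

def parse_gpus(text: str) -> list[tuple[str, int]]:
    """Return (model name, memory in MiB) for each GPU line."""
    gpus = []
    for line in text.strip().splitlines():
        name, mem = line.split(", ")
        gpus.append((name, int(mem.split()[0])))
    return gpus

gpus = parse_gpus(SAMPLE_OUTPUT)
print(f"{len(gpus)} GPU(s): {gpus[0][0]}, {gpus[0][1]} MiB each")
```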
Supported Common Software
OS | Version
---|---
CentOS | CentOS 7.9 64bit
EulerOS | EulerOS 2.5 64bit
Oracle Linux | Oracle Linux Server release 7.6 64bit
Ubuntu | 
Windows | 
Overview
P2v ECSs use NVIDIA Tesla V100 GPUs and deliver high flexibility, high-performance computing, and high cost-effectiveness. These ECSs use NVLink for direct communication between GPUs, improving data transmission efficiency. P2v ECSs provide outstanding general computing capabilities and have strengths in AI-based deep learning, scientific computing, Computational Fluid Dynamics (CFD), computational finance, seismic analysis, molecular modeling, and genomics.
Specifications
Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Connection | GPU Memory (GiB) | Virtualization | Hardware
---|---|---|---|---|---|---|---|---|---|---|---
p2v.2xlarge.8 | 8 | 64 | 10/4 | 50 | 4 | 4 | 1 × V100 | N/A | 1 × 16 | KVM | CPU: Intel® Xeon® Skylake-SP Gold 6151 v5
p2v.4xlarge.8 | 16 | 128 | 15/8 | 100 | 8 | 8 | 2 × V100 | NVLink | 2 × 16 | KVM | CPU: Intel® Xeon® Skylake-SP Gold 6151 v5
p2v.8xlarge.8 | 32 | 256 | 25/15 | 200 | 16 | 8 | 4 × V100 | NVLink | 4 × 16 | KVM | CPU: Intel® Xeon® Skylake-SP Gold 6151 v5
p2v.16xlarge.8 | 64 | 512 | 30/30 | 400 | 32 | 8 | 8 × V100 | NVLink | 8 × 16 | KVM | CPU: Intel® Xeon® Skylake-SP Gold 6151 v5
Networks are user-defined: you can divide subnets and configure network access policies as needed. Mass storage with elastic capacity expansion and backup and restoration support keeps data secure. Auto Scaling lets you quickly add or remove ECSs.
Similar to other types of ECSs, P2v ECSs can be provisioned in a few minutes.
The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P2v ECSs.
Supported Common Software
OS | Version
---|---
CentOS | CentOS 7.9 64bit
EulerOS | EulerOS 2.5 64bit
Oracle Linux | Oracle Linux Server release 7.6 64bit
Ubuntu | 
Windows | 
Overview
Compared with P1 ECSs, P2 ECSs use NVIDIA Tesla V100 GPUs, which improve both single- and double-precision computing capabilities by 50% and offer up to 112 TFLOPS of deep learning computing capability.
Specifications
Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Memory (GiB) | Local Disks | Virtualization | Hardware
---|---|---|---|---|---|---|---|---|---|---|---
p2.2xlarge.8 | 8 | 64 | 5/1.6 | 35 | 2 | 12 | 1 × V100 | 1 × 16 | 1 × 800 GiB NVMe | KVM | CPU: Intel® Xeon® Processor E5-2690 v4
p2.4xlarge.8 | 16 | 128 | 8/3.2 | 70 | 4 | 12 | 2 × V100 | 2 × 16 | 2 × 800 GiB NVMe | KVM | CPU: Intel® Xeon® Processor E5-2690 v4
p2.8xlarge.8 | 32 | 256 | 10/6.5 | 140 | 8 | 12 | 4 × V100 | 4 × 16 | 4 × 800 GiB NVMe | KVM | CPU: Intel® Xeon® Processor E5-2690 v4
Networks are user-defined: you can divide subnets and configure network access policies as needed. Mass storage with elastic capacity expansion and backup and restoration support keeps data secure. Auto Scaling lets you quickly add or remove ECSs.
Similar to other types of ECSs, P2 ECSs can be provisioned in a few minutes.
The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P2 ECSs.
Supported Common Software
Data on the local NVMe SSDs attached to P2 ECSs may be lost in the event of a fault, for example, a disk or host fault. Store only temporary data on local NVMe SSDs. If you store important data on such a disk, back the data up securely.
OS | Version
---|---
CentOS | CentOS 7.9 64bit
EulerOS | EulerOS 2.5 64bit
Oracle Linux | Oracle Linux Server release 7.6 64bit
Ubuntu | 
Windows | 
Overview
P1 ECSs use NVIDIA Tesla P100 GPUs and provide flexibility, high performance, and cost-effectiveness. These ECSs support GPUDirect for direct communication between GPUs, improving data transmission efficiency. P1 ECSs provide outstanding general computing capabilities and have strengths in deep learning, graph databases, high-performance databases, Computational Fluid Dynamics (CFD), computational finance, seismic analysis, molecular modeling, and genomics. They are designed for scientific computing.
Specifications
Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Memory (GiB) | Local Disks (GiB) | Virtualization | Hardware
---|---|---|---|---|---|---|---|---|---|---|---
p1.2xlarge.8 | 8 | 64 | 5/1.6 | 35 | 2 | 12 | 1 × P100 | 1 × 16 | 1 × 800 | KVM | CPU: Intel® Xeon® Processor E5-2690 v4
p1.4xlarge.8 | 16 | 128 | 8/3.2 | 70 | 4 | 12 | 2 × P100 | 2 × 16 | 2 × 800 | KVM | CPU: Intel® Xeon® Processor E5-2690 v4
p1.8xlarge.8 | 32 | 256 | 10/6.5 | 140 | 8 | 12 | 4 × P100 | 4 × 16 | 4 × 800 | KVM | CPU: Intel® Xeon® Processor E5-2690 v4
Networks are user-defined: you can divide subnets and configure network access policies as needed. Mass storage with elastic capacity expansion and backup and restoration support keeps data secure. Auto Scaling lets you quickly add or remove ECSs.
Similar to other types of ECSs, P1 ECSs can be provisioned in a few minutes. You can configure specifications as needed. P1 ECSs with two, four, and eight GPUs will be supported later.
The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P1 ECSs.
Supported Common Software
P1 ECSs are used in computing acceleration scenarios, such as deep learning training, inference, scientific computing, molecular modeling, and seismic analysis. If the software requires GPU CUDA parallel computing, use P1 ECSs. P1 ECSs support the following commonly used software:
Data on the local NVMe SSDs attached to P1 ECSs may be lost in the event of a fault, for example, a disk or host fault. Store only temporary data on local NVMe SSDs. If you store important data on such a disk, back the data up securely.
Overview
PI2 ECSs use NVIDIA Tesla T4 GPUs dedicated to real-time AI inference. These ECSs use the INT8 compute capability of the T4 to deliver up to 130 TOPS of INT8 performance. PI2 ECSs can also be used for light-load training.
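Since each T4 delivers up to 130 TOPS of INT8 compute, the theoretical aggregate throughput of a multi-GPU PI2 flavor scales with the GPU count. A rough sketch, assuming ideal linear scaling (real inference workloads scale less than linearly across GPUs):

```python
# Up to 130 INT8 TOPS per T4, from the overview above.
T4_INT8_TOPS = 130

# GPU counts per flavor, from the PI2 specifications table.
PI2_GPU_COUNTS = {
    "pi2.2xlarge.4": 1,
    "pi2.4xlarge.4": 2,
    "pi2.8xlarge.4": 4,
    "pi2.16xlarge.4": 8,
}

for flavor, gpus in PI2_GPU_COUNTS.items():
    print(f"{flavor}: up to {gpus * T4_INT8_TOPS} TOPS INT8 (theoretical)")
```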
Specifications
Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Memory (GiB) | Local Disks | Virtualization | Hardware
---|---|---|---|---|---|---|---|---|---|---|---
pi2.2xlarge.4 | 8 | 32 | 10/4 | 50 | 4 | 4 | 1 × T4 | 1 × 16 | N/A | KVM | CPU: Intel® Xeon® Skylake 6151 3.0 GHz or Intel® Xeon® Cascade Lake 6278 2.6 GHz
pi2.4xlarge.4 | 16 | 64 | 15/8 | 100 | 8 | 8 | 2 × T4 | 2 × 16 | N/A | KVM | CPU: Intel® Xeon® Skylake 6151 3.0 GHz or Intel® Xeon® Cascade Lake 6278 2.6 GHz
pi2.8xlarge.4 | 32 | 128 | 25/15 | 200 | 16 | 8 | 4 × T4 | 4 × 16 | N/A | KVM | CPU: Intel® Xeon® Skylake 6151 3.0 GHz or Intel® Xeon® Cascade Lake 6278 2.6 GHz
pi2.16xlarge.4 | 64 | 256 | 30/30 | 400 | 32 | 8 | 8 × T4 | 8 × 16 | N/A | KVM | CPU: Intel® Xeon® Skylake 6151 3.0 GHz or Intel® Xeon® Cascade Lake 6278 2.6 GHz
PI2 ECS Features
Supported Common Software
PI2 ECSs are used in GPU-based inference computing scenarios, such as image recognition, speech recognition, and natural language processing. They can also be used for light-load training.
PI2 ECSs support the following commonly used software:
Notes
The resources of a PI2 ECS are released when the ECS is stopped. If resources are insufficient when you later start the stopped ECS, the start may fail. Therefore, if you need a PI2 ECS for a long time, keep it running.