An elastic resource pool provides compute resources (CPU and memory) for running DLI jobs. The unit is CU. One CU contains one CPU and 4 GB memory.
You can create multiple queues in an elastic resource pool. Compute resources can be shared among queues. You can properly set the resource pool allocation policy for queues to improve compute resource utilization.
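The CU-to-resource mapping above is simple arithmetic; the following minimal sketch (not a DLI API, names are illustrative) converts a pool's CU count into the CPU cores and memory it provides, using 1 CU = 1 CPU core + 4 GB memory:

```python
# Illustrative helper, assuming the stated ratio: 1 CU = 1 CPU core + 4 GB memory.
CPU_PER_CU = 1
MEM_GB_PER_CU = 4

def pool_capacity(cus: int) -> tuple[int, int]:
    """Return (cpu_cores, memory_gb) for a pool of `cus` CUs."""
    return cus * CPU_PER_CU, cus * MEM_GB_PER_CU

print(pool_capacity(64))  # (64, 256): a 64-CU pool has 64 cores and 256 GB memory
```

For example, the 16-CU minimum of the Basic edition corresponds to 16 CPU cores and 64 GB of memory.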
DLI offers compute resources in the specifications listed in Table 1.
| Edition | Specification | Constraint | Scenario |
|---|---|---|---|
| Basic | 16–64 CUs | For more constraints and limitations on elastic resource pools, see Constraints. | Suitable for testing scenarios with low resource consumption and low requirements for resource reliability and availability. |
| Standard | 64 CUs or higher | For more constraints and limitations on elastic resource pools, see Constraints. | Offers powerful computing capabilities, high availability, and flexible resource management. Suitable for large-scale computing tasks and business scenarios with long-term resource planning needs. |
Fixed resources cannot meet a range of requirements.
The amount of compute resources required by jobs changes at different times of the day. If resources cannot be scaled based on service requirements, they may be wasted or insufficient. Figure 1 shows the resource usage during a day.
Resources are isolated and cannot be shared.
Elastic resource pools can be accessed by different queues and automatically scaled to improve resource utilization and handle resource peaks.
You can use elastic resource pools to centrally manage and allocate resources. Multiple queues can be bound to an elastic resource pool to share the pooled resources.
Elastic resource pools support the CCE cluster architecture for heterogeneous resources, so you can centrally manage and allocate them.
Elastic resource pools have the following advantages:
- Resources of different queues are isolated to reduce the impact on each other.
- SQL jobs can run on independent Spark instances, reducing mutual impacts between jobs.
- Queue quotas are updated in real time based on workload and priority.
Using elastic resource pools has the following advantages:

| Advantage | No Elastic Resource Pool | With Elastic Resource Pool |
|---|---|---|
| Efficiency | You need to set scaling tasks repeatedly to improve resource utilization. | Dynamic scaling can be done in seconds. |
| Resource utilization | Resources cannot be shared among queues. For example, if queue 1 has 10 unused CUs and queue 2 requires more resources due to heavy load, queue 2 cannot use the idle resources of queue 1 and has to be scaled up. | Queues added to the same elastic resource pool can share compute resources. |
| | When you set up a data source, you must allocate a different network segment to each queue, which requires a large number of VPC network segments. | You can place multiple general-purpose queues in the same elastic resource pool into one network segment, simplifying the data source configuration. |
| Resource allocation | If resources are insufficient for the scale-out tasks of multiple queues, some queues will fail to be scaled out. | You can set the priority of each queue in the elastic resource pool based on its peak hours to ensure proper resource allocation. |
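To make the priority-based allocation idea concrete, here is an illustrative sketch (not DLI's actual scheduler; all names are hypothetical) of distributing a shared pool's CUs across queues by priority when total demand exceeds capacity:

```python
# Illustrative only: higher-priority queues are served first from a shared pool.
def allocate(pool_cus, demands):
    """demands: list of (queue_name, requested_cus, priority).
    Returns {queue_name: granted_cus}, serving higher priority first."""
    granted = {}
    remaining = pool_cus
    for name, requested, _priority in sorted(demands, key=lambda d: -d[2]):
        give = min(requested, remaining)  # grant what is left, up to the request
        granted[name] = give
        remaining -= give
    return granted

# A 64-CU pool with competing queues: the high-priority queue is filled first,
# and the low-priority queue receives only the remaining CUs.
print(allocate(64, [("etl", 48, 2), ("adhoc", 32, 1)]))  # {'etl': 48, 'adhoc': 16}
```

This mirrors the table above: without a shared pool, both queues would need separate scale-out tasks and one could fail outright; with a shared pool, contention is resolved by the priorities you set.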
You can perform the following operations on elastic resource pools: