An elastic resource pool provides compute resources (CPU and memory) for running DLI jobs. The unit is CU. One CU contains one CPU and 4 GB memory.
You can create multiple queues in an elastic resource pool, and the queues can share the pool's compute resources. Setting a proper resource allocation policy for the queues improves compute resource utilization.
DLI offers compute resources in the specifications listed in Table 1.
| Edition | Specification | Constraint | Scenario |
| --- | --- | --- | --- |
| Basic | 16–64 CUs | For more notes and constraints on elastic resource pools, see Notes and Constraints. | This edition is suitable for testing scenarios with low resource consumption and low requirements for resource reliability and availability. |
| Standard | 64 CUs or higher | For more notes and constraints on elastic resource pools, see Notes and Constraints. | This edition offers powerful computing capabilities, high availability, and flexible resource management. It is suitable for large-scale computing tasks and business scenarios with long-term resource planning needs. |
Before scaling in an elastic resource pool, the system checks resource usage to determine whether enough idle resources are available. If the idle resources cannot be released in multiples of the minimum scaling step, the scale-in may fail or complete only partially.
The scaling step varies with the resource specifications and is typically 16 CUs, 32 CUs, 48 CUs, or 64 CUs.
For example, suppose an elastic resource pool has a capacity of 192 CUs, queues in the pool are using 68 CUs for running jobs, and you plan to scale in to 64 CUs.
When the scale-in task runs, the system determines that only 124 CUs are idle. It releases one full step of 64 CUs, leaving 60 idle CUs, which is less than a full step and therefore cannot be released. As a result, after the scale-in task, the pool's capacity is reduced to 128 CUs rather than the planned 64 CUs.
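The scale-in arithmetic above can be sketched as a small calculation. This is an illustrative model of the documented behavior, not DLI's actual implementation; the function name and signature are assumptions:

```python
def scale_in_capacity(current_cus: int, target_cus: int,
                      used_cus: int, step: int = 64) -> int:
    """Estimate the capacity a pool actually reaches when scaling in.

    Only idle CUs can be released, and the pool shrinks in whole
    multiples of the scaling step (illustrative sketch only).
    """
    idle_cus = current_cus - used_cus
    requested = current_cus - target_cus      # CUs we want to release
    releasable = min(requested, idle_cus)     # busy CUs cannot be released
    released = (releasable // step) * step    # round down to a full step
    return current_cus - released

# The example from this section: a 192-CU pool with 68 CUs in use,
# scaling in toward 64 CUs. Only 124 CUs are idle, so a single
# 64-CU step is released and the pool lands at 128 CUs.
print(scale_in_capacity(192, 64, 68))  # 128
```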
Fixed resources cannot meet changing requirements.
The quantity of compute resources required by jobs changes at different times of the day. If resources cannot be scaled based on service requirements, they may be wasted or insufficient. Figure 1 shows the resource usage during a day.
Resources are isolated and cannot be shared.
Elastic resource pools can be accessed by different queues and automatically scaled to improve resource utilization and handle resource peaks.
You can use elastic resource pools to centrally manage and allocate resources. Multiple queues can be bound to an elastic resource pool to share the pooled resources.
Elastic resource pools support the CCE cluster architecture for heterogeneous resources so you can centrally manage and allocate them.
Elastic resource pools have the following advantages:
- Resources of different queues are isolated, reducing the impact that queues have on each other.
- SQL jobs can run on independent Spark instances, reducing mutual impact between jobs.
- Queue quotas are updated in real time based on workload and priority.
The following table compares queue usage with and without an elastic resource pool.
| Advantage | No Elastic Resource Pool | With Elastic Resource Pool |
| --- | --- | --- |
| Scale-out duration | Scaling out takes several minutes and must be done manually. | No manual intervention is required; dynamic scale-out completes in seconds. |
| Resource utilization | Resources cannot be shared among queues. For example, if queue 1 has 10 idle CUs and queue 2 requires more resources due to heavy load, queue 2 cannot use queue 1's idle resources; it has to be scaled up instead. | Queues added to the same elastic resource pool share its compute resources. |
| | When you configure a data source, you must allocate a different network segment to each queue, which requires a large number of VPC network segments. | You can place multiple general-purpose queues in the same elastic resource pool on one network segment, simplifying data source configuration. |
| Resource allocation | If resources are insufficient for the scale-out tasks of multiple queues, some queues fail to scale out. | You can set a priority for each queue in the elastic resource pool based on peak hours to ensure proper resource allocation. |
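Priority-based allocation within a pool can be pictured as higher-priority queues being satisfied first, with leftover CUs going to the rest. The sketch below is a hypothetical illustration of that idea; the function, the queue fields, and the greedy strategy are assumptions, not DLI's actual scheduling algorithm:

```python
def allocate_quota(total_cus: int, queues: list[dict]) -> dict:
    """Greedy sketch of priority-based CU allocation in a shared pool.

    Each queue is a dict with 'name', 'priority' (higher wins), and
    'demand' (CUs requested). Illustrative only.
    """
    remaining = total_cus
    quotas = {}
    for q in sorted(queues, key=lambda q: -q["priority"]):
        granted = min(q["demand"], remaining)  # cap at what is left
        quotas[q["name"]] = granted
        remaining -= granted
    return quotas

# Hypothetical example: a 128-CU pool shared by a high-priority ETL
# queue and a lower-priority ad hoc queue during peak hours.
queues = [
    {"name": "etl_night", "priority": 10, "demand": 96},
    {"name": "adhoc", "priority": 5, "demand": 64},
]
print(allocate_quota(128, queues))  # {'etl_night': 96, 'adhoc': 32}
```

Under this model, the ETL queue gets its full 96 CUs and the ad hoc queue receives only the remaining 32, which mirrors how setting queue priorities by peak hours steers scarce resources to the most important workloads.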
You can perform the following operations on elastic resource pools: