cloud-container-engine/umn/source/node_pools/managing_a_node_pool.rst


Managing a Node Pool

Notes and Constraints

The default node pool DefaultPool does not support the following management operations.

Configuring Kubernetes Parameters

CCE allows you to highly customize Kubernetes parameter settings on core components in a cluster. For more information, see the kubelet documentation.

This function is supported only in clusters of v1.15 and later. It is not displayed for clusters earlier than v1.15.

  1. Log in to the CCE console.

  2. Click the cluster name and access the cluster console. Choose Nodes in the navigation pane and click the Node Pools tab on the right.

  3. Choose More > Manage next to the node pool name.

  4. On the Manage Component page on the right, change the values of the following Kubernetes parameters:

    Table 1 kubelet parameters (Parameter, Description, Default Value, Remarks)

    cpu-manager-policy

    Specifies the CPU core binding configuration. For details, see CPU Core Binding <cce_10_0551>.

    • none: disables pods from exclusively occupying CPUs. Select this value if you want a large pool of shareable CPU cores.
    • static: enables pods to exclusively occupy CPUs. Select this value if your workload is sensitive to latency in CPU cache and scheduling.

    Default value: none

    Remarks: The values can be modified during the node pool lifecycle.

    kube-api-qps

    Queries per second (QPS) used when communicating with kube-apiserver. Default value: 100.

    kube-api-burst

    Burst used when communicating with kube-apiserver. Default value: 100.

    max-pods

    Maximum number of pods managed by kubelet. Default value: 40 or 20.

    pod-pids-limit

    PID limit in Kubernetes. Default value: -1.

    with-local-dns

    Whether to use the local IP address as the ClusterDNS of the node. Default value: false.

    event-qps

    QPS limit for event creation. Default value: 5.
    allowed-unsafe-sysctls

    Insecure system configuration allowed.

    Starting from v1.17.17, CCE enables pod security policies for kube-apiserver. You need to add corresponding configurations to allowedUnsafeSysctls of a pod security policy to make the policy take effect. (This configuration is not required for clusters earlier than v1.17.17.) For details, see Example of Enabling Unsafe Sysctls in Pod Security Policy <cce_10_0275__section155111941177>.

    Default value: []
    over-subscription-resource

    Whether to enable node oversubscription. If this parameter is set to true, the node oversubscription feature is enabled. For details, see Hybrid Deployment of Online and Offline Jobs <cce_10_0384>.

    Default value: true

    colocation

    Whether to enable node hybrid deployment. If this parameter is set to true, the node hybrid deployment feature is enabled. For details, see Hybrid Deployment of Online and Offline Jobs <cce_10_0384>.

    Default value: true

    kube-reserved-mem
    system-reserved-mem

    Reserved node memory. The default value depends on the node specifications. For details, see Formula for Calculating the Reserved Resources of a Node <cce_10_0178>. The sum of kube-reserved-mem and system-reserved-mem must be less than half of the node memory.
    topology-manager-policy

    Specifies the topology management policy. Valid values are as follows:

    • restricted: kubelet accepts only pods that achieve optimal NUMA alignment on the requested resources.
    • best-effort: kubelet preferentially selects pods that implement NUMA alignment on CPU and device resources.
    • none (default): the topology management policy is disabled.
    • single-numa-node: kubelet allows only pods that are aligned to the same NUMA node in terms of CPU and device resources.

    Default value: none

    Remarks: The values can be modified during the node pool lifecycle.

    NOTICE: Exercise caution when modifying topology-manager-policy and topology-manager-scope. Modifying either parameter restarts kubelet and recalculates the resource allocation of pods based on the modified policy. As a result, running pods may restart or even fail to receive any resources.

    topology-manager-scope

    Specifies the resource alignment granularity of the topology management policy. Valid values are as follows:

    • container (default)
    • pod

    Default value: container

    resolv-conf

    DNS resolution configuration file used by containers. Default value: null.
    Table 2 kube-proxy parameters (Parameter, Description, Default Value, Remarks)

    conntrack-min

    Maximum number of connection tracking entries (sysctl -w net.nf_conntrack_max). Default value: 131072. The values can be modified during the node pool lifecycle.

    conntrack-tcp-timeout-close-wait

    Timeout for TCP connections in the CLOSE_WAIT state (sysctl -w net.netfilter.nf_conntrack_tcp_timeout_close_wait). Default value: 1h0m0s.
    Table 3 Network component parameters (available only for CCE Turbo clusters)

    nic-threshold

    Low threshold of the number of bound ENIs : high threshold of the number of bound ENIs.

    Note: This parameter is being deprecated. Use the other four dynamic ENI pre-binding parameters instead.

    Default value: 0:0

    nic-minimum-target

    Minimum number of ENIs bound to the nodes in the node pool. Default value: 10.

    nic-maximum-target

    Maximum number of ENIs pre-bound to a node at the node pool level. Default value: 0.

    nic-warm-target

    Number of ENIs pre-bound to a node at the node pool level. Default value: 2.

    nic-max-above-warm-target

    Number of pre-bound ENIs that can be reclaimed from a node at the node pool level. Default value: 2.
    Table 4 Pod security group in a node pool (available only for CCE Turbo clusters)

    security_groups_for_nodepool

    • Default security group used by pods in a node pool. You can enter the security group ID. If this parameter is not set, the default security group of the cluster container network is used. A maximum of five security group IDs can be specified at the same time, separated by semicolons (;).
    • The priority of the security group is lower than that of the security group configured for the SecurityGroup <cce_10_0288> resource object.

    No default value.
    Table 5 Docker parameters (available only for node pools that use Docker)

    native-umask

    --exec-opt native.umask. Default value: normal. Cannot be changed.

    docker-base-size

    --storage-opts dm.basesize. Default value: 0. Cannot be changed.

    insecure-registry

    Address of an insecure image registry. Default value: false. Cannot be changed.

    limitcore

    Maximum size of a core file in a container, in bytes. Default value: 5368709120.

    default-ulimit-nofile

    Limit on the number of file handles in a container. Default value: {soft}:{hard}. The value cannot exceed the value of the kernel parameter nr_open and cannot be a negative number. You can run the following command to obtain the kernel parameter nr_open:

    sysctl -a | grep nr_open
  5. Click OK.
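
After the changes are applied, you can check the configuration that kubelet is actually running with. The following is a minimal sketch, assuming you have kubectl access to the cluster; replace <node-name> with the name of a node in the node pool (the configz endpoint is exposed by kubelet through the API server proxy):

  kubectl get nodes                                              # list the nodes in the cluster
  kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz"    # dump the effective kubelet configuration as JSON
  kubectl describe node <node-name>                              # Capacity/Allocatable shows the effective pod limit (max-pods)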
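
The kube-proxy parameters in Table 2 map directly to kernel parameters, so you can also verify them on the node itself. A quick check, assuming you have a shell on the node:

  sysctl net.nf_conntrack_max                                # corresponds to conntrack-min
  sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait   # corresponds to conntrack-tcp-timeout-close-wait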

Editing a Node Pool

  1. Log in to the CCE console.

  2. Click the cluster name and access the cluster console. Choose Nodes in the navigation pane and click the Node Pools tab on the right.

  3. Click Edit next to the name of the target node pool. On the Edit Node Pool page, edit the following parameters:

    Basic Settings

    Table 6 Basic settings (Parameter, Description)

    Node Pool Name

    Name of the node pool.
    Auto Scaling

    By default, this parameter is disabled.

    After you enable autoscaler by clicking the toggle, nodes in the node pool are automatically created or deleted based on service requirements.

    • Maximum Nodes and Minimum Nodes: You can set the maximum and minimum number of nodes to ensure that the number of nodes to be scaled is within a proper range.
    • Priority: A larger value indicates a higher priority. For example, if this parameter is set to 1 and 4 respectively for node pools A and B, B has a higher priority than A, and auto scaling is first triggered for B. If the priorities of multiple node pools are set to the same value, for example, 2, the node pools are not prioritized and the system performs scaling based on the minimum resource waste principle.
    • Cooldown Period: Required. The unit is minute. This parameter indicates the interval between the previous scale-out action and the next scale-in action.

    If the Autoscaler field is set to on, install the autoscaler add-on <cce_10_0154> to use the autoscaler feature.

    Advanced Settings

    Table 7 Advanced settings (Parameter, Description)
    K8s label

    Click Add Label to set the key-value pair attached to the Kubernetes objects (such as pods). A maximum of 20 labels can be added.

    Labels can be used to distinguish nodes. With workload affinity settings, container pods can be scheduled to a specified node. For more information, see Labels and Selectors.

    Note

    After a K8s label is modified, the existing nodes in the node pool are updated synchronously.

    Resource Tag

    You can add resource tags to classify resources.

    You can create predefined tags in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use these tags to improve tagging and resource migration efficiency.

    CCE will automatically create the "CCE-Dynamic-Provisioning-Node=node id" tag.

    Note

    After a resource tag is modified, the modification automatically takes effect when a node is added. For existing nodes, you need to manually reset the nodes for the modification to take effect.

    Taint

    This field is left blank by default. You can add taints to set anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters:

    • Key: A key must contain 1 to 63 characters starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key.
    • Value: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.).
    • Effect: Available options are NoSchedule, PreferNoSchedule, and NoExecute.

    For details, see Managing Node Taints <cce_10_0352>. For a quick way to check the labels and taints on the nodes in a node pool with kubectl, see the example after this procedure.

    Note

    After a taint is modified, the existing nodes in the node pool are updated synchronously.

    Edit Key pair

    Only node pools that use key pairs for login support key pair editing. You can select another key pair.

    Note

    The edited key pair automatically takes effect when a node is added. For existing nodes, you need to manually reset the nodes for the key pair to take effect.

  4. Click OK.

    In the node pool list, the node pool status becomes Scaling. After the status changes to Completed, the node pool parameters are modified successfully. The modified configuration will be synchronized to all nodes in the node pool.
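
After editing K8s labels or taints, you can confirm that they have been applied to the existing nodes in the node pool. The following is a minimal sketch, assuming kubectl access; <label-key> and <node-name> are placeholders:

  kubectl get nodes -L <label-key>                           # show each node together with the value of the given label
  kubectl describe node <node-name> | grep -i -A 3 taints    # list the taints applied to the node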

Deleting a Node Pool

Deleting a node pool will delete nodes in the pool. Pods on these nodes will be automatically migrated to available nodes in other node pools. If pods in the node pool have a specific node selector and none of the other nodes in the cluster satisfies the node selector, the pods will become unschedulable.

  1. Log in to the CCE console.
  2. Click the cluster name and access the cluster console. Choose Nodes in the navigation pane and click the Node Pools tab on the right.
  3. Choose More > Delete next to a node pool name to delete the node pool.
  4. Read the precautions in the Delete Node Pool dialog box.
  5. In the text box, enter the required confirmation text, and then click Yes to continue the deletion.
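
After the node pool is deleted, you can quickly check whether any pods were left unschedulable because no remaining node satisfies their node selectors. For example, with kubectl access:

  kubectl get pods --all-namespaces --field-selector status.phase=Pending   # list pods that are still waiting to be scheduled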

Copying a Node Pool

You can copy the configuration of an existing node pool to create a new node pool on the CCE console.

  1. Log in to the CCE console.
  2. Click the cluster name and access the cluster console. Choose Nodes in the navigation pane and click the Node Pools tab on the right.
  3. Choose More > Copy next to a node pool name to copy the node pool.
  4. The configurations of the selected node pool are replicated to the Clone Node Pool page. You can edit the configurations as required and click Next: Confirm.
  5. On the Confirm page, confirm the node pool configuration and click Create Now. Then, a new node pool is created based on the edited configuration.

Migrating a Node

Nodes in a node pool can currently be migrated only to the default node pool (DefaultPool) in the same cluster.

  1. Log in to the CCE console and click the cluster name to access the cluster.

  2. In the navigation pane, choose Nodes and switch to the Node Pools tab page.

  3. Click View Node in the Operation column of the node pool to be migrated.

  4. Select the nodes to be migrated and choose More > Migrate to migrate the nodes to the default node pool in batches.

    You can also choose More > Migrate in the Operation column of a single node to migrate the node.

  5. In the displayed Migrate Node window, confirm the information.

    Note

    The migration has no impacts on the original resource tags, Kubernetes labels, and taints of the node.
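
To confirm that the Kubernetes labels and taints are retained after the migration, you can inspect the node with kubectl; <node-name> is a placeholder:

  kubectl get node <node-name> --show-labels           # the labels should be unchanged after the migration
  kubectl describe node <node-name> | grep -i taints   # the taints should be unchanged after the migration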