Removing a Node

Scenario

Removing a node from a CCE cluster will reinstall the node OS and clear the CCE components on the node.

Removing a node will not delete the server (ECS) corresponding to the node. You are advised to remove nodes at off-peak hours to avoid impact on your services.

After a node is removed from the cluster, the node is still running and incurs fees.
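
If you prefer to move workloads off the node yourself before removing it (for example, during the off-peak window mentioned above), you can drain it with standard kubectl commands. The following is a minimal sketch, assuming you have kubectl access to the cluster; node-to-remove is a placeholder for the actual node name.

    # Minimal sketch, assuming kubectl access; "node-to-remove" is a placeholder node name.
    kubectl cordon node-to-remove        # mark the node unschedulable so no new pods land on it
    kubectl drain node-to-remove --ignore-daemonsets --delete-emptydir-data
    # On older kubectl versions, use --delete-local-data instead of --delete-emptydir-data.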

Procedure

  1. Log in to the CCE console. In the navigation pane, choose Resource Management > Nodes. In the same row as the target node, choose More > Remove.
  2. In the dialog box displayed, enter REMOVE, configure the login information required for re-installing the OS, and click Yes. Wait until the node is removed.

    After the node is removed, workload pods on the node are automatically migrated to other available nodes.
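
To confirm the result, you can check the cluster with kubectl. The following is a minimal verification sketch, assuming kubectl access; <node-name> and <namespace> are placeholders.

    # Minimal verification sketch; replace the placeholders with real names.
    kubectl get nodes                                  # the removed node should no longer be listed
    kubectl get pods -A -o wide | grep <node-name>     # should return no pods on the removed node
    kubectl get pods -n <namespace> -o wide            # workload pods should be Running on other nodes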

Handling Failed OS Reinstallation

If the OS reinstallation fails during node removal, perform the following steps to reinstall the OS and clear the CCE components on the node:

  1. Log in to the management console of the server and reinstall the OS.
  2. Log in to the server and run the following commands to clear the CCE components and LVM data:

    Write the following script to a file named clean.sh:

    # List the block devices before cleanup for reference.
    lsblk
    # Remove all LVM volume groups (-f also removes the logical volumes they contain),
    # wipe the physical volume labels, and clean up any remaining logical volumes.
    vgs --noheadings | awk '{print $1}' | xargs vgremove -f
    pvs --noheadings | awk '{print $1}' | xargs pvremove -f
    lvs --noheadings | awk '{print $1}' | xargs -i lvremove -f --select {}
    # Wipe the first 32 KiB (512 bytes x 64) of the first non-root, non-NVMe data disk
    # to destroy residual LVM metadata. Only the first matching disk is processed.
    function init_data_disk() {
        all_devices=$(lsblk -o KNAME,TYPE | grep disk | grep -v nvme | awk '{print $1}' | awk '{ print "/dev/"$1}')
        for device in ${all_devices[@]}; do
            # Skip the disk that holds the root file system (mounted at /).
            isRootDisk=$(lsblk -o KNAME,MOUNTPOINT $device 2>/dev/null| grep -E '[[:space:]]/$' | wc -l )
            if [[ ${isRootDisk} != 0 ]]; then
                continue
            fi
            dd if=/dev/urandom of=${device} bs=512 count=64
            return
        done
        # Exit with a non-zero status if no data disk was found.
        exit 1
    }
    init_data_disk
    # List the block devices again to confirm the result.
    lsblk

    Run the following command:

    bash clean.sh
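
After the script finishes, you can optionally confirm that no LVM metadata remains on the node. The following is a minimal check using standard LVM and util-linux commands; it is not part of the original procedure.

    vgs        # should report no volume groups
    pvs        # should report no physical volumes
    lsblk -f   # the data disk should no longer show an LVM2_member signature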