Update content

parent 60c08554d9
commit 0b0c0d553f
Binary files changed (not shown):

- one image modified (216 B)
- two images removed (338 KiB and 201 KiB)
- umn/source/_static/images/en-us_image_0000001629186693.png added (319 KiB)
- umn/source/_static/images/en-us_image_0000001629926113.png added (1.0 KiB)
@@ -30,28 +30,30 @@ For more information, see :ref:`Before You Start <cce_10_0302>`.

Procedure
---------

The cluster upgrade goes through check, backup, configuration and upgrade, and verification.

#. Log in to the CCE console and access the cluster console.

#. In the navigation pane, choose **Cluster Upgrade**. You can view the recommended version on the right.

#. Select the cluster version to be upgraded and click **Check**.

   .. note::

      - If your cluster has a new minor version, you do not need to select the cluster version. The latest minor version is used by default.
      - If your cluster has a new major version, you can select a version as required.
      - If your cluster is of the latest version, the check entry will be hidden.

#. Click **Start Check** and confirm the check. If there are abnormal or risky items in the cluster, handle the exceptions based on the check results displayed on the page and check again.

   - **Exceptions**: View the solution displayed on the page, handle the exceptions, and check again.
   - **Risk Items**: These may affect the cluster upgrade. Check each risk description to see whether you may be affected. If no risk exists, click **OK** next to the risk item to manually skip it and check again.

   After the check is passed, click **Next: Back Up**.

#. (Optional) Manually back up the data. Data is backed up during the upgrade following a default policy. You can click **Back Up** to manually back up data. If you do not need to manually back up data, click **Next: Configure & Upgrade**.

#. Configure the upgrade parameters.

   - **Add-on Upgrade Configuration**: Add-ons that have been installed in your cluster are listed. During the cluster upgrade, the system automatically upgrades the add-ons to be compatible with the target cluster version. You can click **Set** to re-define the add-on parameters.
@@ -59,21 +61,22 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other

     If a red dot |image1| is displayed on the right of an add-on, the add-on is incompatible with the target cluster version. During the upgrade, the add-on will be uninstalled and then re-installed. Ensure that the add-on parameters are correctly configured.

   - **Node Upgrade Configuration**: You can set the maximum number of nodes to be upgraded in a batch.

   - **Node Priority**: You can set priorities for nodes to be upgraded. If you do not set this parameter, the system will determine the nodes to upgrade in batches based on specific conditions. Before setting the node upgrade priority, you need to select a node pool. Nodes and node pools will be upgraded according to the priorities you specify.

     - **Add Upgrade Priority**: Add upgrade priorities for node pools.
     - **Add Node Priority**: After adding a node pool priority, you can set the upgrade sequence of nodes in the node pool. The system upgrades nodes in the sequence you specify. If you skip this setting, the system upgrades nodes based on the default policy.
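The priority rules above can be sketched as a small scheduling routine. This is an illustrative sketch only, not CCE's actual implementation; the pool names, priorities, and batch size below are hypothetical:

```python
# Order node pools by the upgrade priority you assign (a larger value means a
# higher priority), then split each pool's nodes into batches capped at the
# configured maximum number of nodes per batch.
def plan_upgrade_batches(pools, max_per_batch):
    """pools: list of (pool_name, priority, node_names) tuples."""
    batches = []
    # Pools with higher priority values are upgraded first.
    for name, _priority, nodes in sorted(pools, key=lambda p: -p[1]):
        for i in range(0, len(nodes), max_per_batch):
            batches.append((name, nodes[i:i + max_per_batch]))
    return batches

pools = [
    ("pool-a", 1, ["a1", "a2", "a3"]),
    ("pool-b", 4, ["b1", "b2"]),
]
print(plan_upgrade_batches(pools, 2))
# pool-b (priority 4) is upgraded before pool-a (priority 1)
```

When no priority is set, CCE instead picks batches based on its own internal conditions, so this ordering applies only if you configure priorities explicitly.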
#. After the configuration is complete, click **Upgrade** and confirm the upgrade. The cluster starts to be upgraded. You can view the process in the lower part of the page.

   During the upgrade, you can click **Suspend** on the right to suspend the cluster upgrade. To continue the upgrade, click **Continue**. When the progress bar reaches 100%, the cluster upgrade is complete.

   .. note::

      If an upgrade failure message is displayed during the cluster upgrade, rectify the fault as prompted and try again.

#. After the upgrade is complete, click **Next: Verify**. Verify the upgrade based on the displayed check items. After confirming that all check items are normal, click **Complete** and confirm that the post-upgrade check is complete.

   You can verify the cluster Kubernetes version on the **Clusters** page.

.. |image1| image:: /_static/images/en-us_image_0000001517743672.png
@@ -13,4 +13,4 @@ Check whether the current cce-controller-hpa add-on has compatibility restrictio

Solution
--------

The current cce-controller-hpa add-on has compatibility restrictions. An add-on that can provide metric APIs, for example, metric-server, must be installed in the cluster.
@@ -11,7 +11,7 @@ Check Item

Check the following aspects:

- Check whether the add-on status is normal.
- Check whether the add-on supports the target version.

Solution
--------
@@ -24,10 +24,6 @@ Solution

  The add-on cannot be automatically upgraded with the cluster. Log in to the CCE console and go to the target cluster. Choose **O&M** > **Add-ons** to manually upgrade the add-on.

- **Scenario 3: The add-on does not support the target cluster even if the add-on is upgraded to the latest version. In this case, go to the cluster console and choose Cluster Information > O&M > Add-ons in the navigation pane to manually uninstall the add-on.**

  For details about the supported add-on versions and replacement solutions, see the :ref:`help document <cce_10_0277>`.
@@ -30,11 +30,15 @@ Solution

.. table:: **Table 1** OSs that support the upgrade

   +-----------------+----------------------------------------------------------------------+
   | OS              | Restriction                                                          |
   +=================+======================================================================+
   | EulerOS 2.5/2.9 | None                                                                 |
   +-----------------+----------------------------------------------------------------------+
   | CentOS 7.7      | None                                                                 |
   +-----------------+----------------------------------------------------------------------+
   | Ubuntu 22.04    | Some sites cannot perform upgrade. If the check result shows the     |
   |                 | upgrade is not supported, contact technical support.                 |
   +-----------------+----------------------------------------------------------------------+

- **Scenario 3: There are unexpected node pool tags in the node.**
@@ -13,7 +13,7 @@ Check whether the containerd.sock file exists on the node. This file affects the

Solution
--------

**Scenario: The Docker used by the node is the customized Euler-docker.**

#. Log in to the node.
#. Run the **rpm -qa \| grep docker \| grep euleros** command. If the command output is not empty, the Docker used on the node is Euler-docker.
@@ -53,4 +53,4 @@ If cce-agent is not of the latest version, the automatic update fails. This prob

If you have any questions about the preceding operations, contact technical support.

.. |image1| image:: /_static/images/en-us_image_0000001629186693.png
@@ -21,7 +21,7 @@ Nginx Overview

Nginx is a lightweight web server. On CCE, you can quickly set up an Nginx web server.

This section uses the Nginx application as an example to describe how to create a workload. The creation takes about 5 minutes.

After Nginx is created successfully, you can access the Nginx web page.
@@ -53,7 +53,7 @@ The following is the procedure for creating a containerized workload from a cont

**Container Settings**

Enter **nginx:latest** in the **Image Name** text box.

**Service Settings**
@@ -67,7 +67,7 @@ The following is the procedure for creating a containerized workload from a cont

   - **Protocol**: Select **TCP**.
   - **Service Port**: Set this parameter to **8080**, which is mapped to the container port.
   - **Container Port**: port on which the application listens. For containers created using the Nginx image, set this parameter to **80**. For other applications, set this parameter to the port of the application.
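The port mapping configured above corresponds to a Kubernetes Service similar to the following. This is a sketch, not the exact manifest the console generates; the Service name, type, and selector label are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx               # hypothetical Service name
spec:
  type: LoadBalancer        # exposes an external access address
  selector:
    app: nginx              # must match the workload's pod label
  ports:
  - protocol: TCP
    port: 8080              # Service port
    targetPort: 80          # container port the nginx image listens on
```

Traffic arriving at Service port 8080 is forwarded to container port 80, which is why the two values differ in the settings above.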
#. Click **Create Workload**.
@@ -80,7 +80,7 @@ Accessing Nginx

#. Obtain the external access address of Nginx.

   Click the Nginx workload to enter its details page. On the **Access Mode** tab page, you can view the IP address of Nginx. The public IP address is the external access address.

#. Enter the **external access address** in the address box of a browser. The following shows the welcome page if you successfully access the workload.
@@ -33,9 +33,7 @@ Creating a MySQL Workload

**Container Settings**

Enter **mysql:5.7** in the **Image Name** text box.

Add the following four environment variables (details available in `MySQL <https://github.com/docker-library/docs/tree/master/mysql>`__):
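The four variables are the ones documented upstream for the `mysql` image; in YAML form they would look like the sketch below. The values are placeholders, not recommendations:

```yaml
env:
- name: MYSQL_ROOT_PASSWORD   # password for the MySQL root account (required)
  value: "********"
- name: MYSQL_DATABASE        # database created on first startup
  value: "database"
- name: MYSQL_USER            # non-root user created on first startup
  value: "db_user"
- name: MYSQL_PASSWORD        # password for MYSQL_USER
  value: "********"
```

In production, store the passwords in a Secret rather than as plain-text values.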
@@ -36,9 +36,7 @@ Creating a WordPress Blog Website

**Container Settings**

Enter **wordpress:php7.3** in the **Image Name** text box.

Add the following environment variables:
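For the `wordpress` image, the database connection variables documented upstream take roughly the form below. This is a sketch with placeholder values; `WORDPRESS_DB_HOST` would point at the MySQL Service created earlier, and the user, password, and database name must match the MySQL workload's variables:

```yaml
env:
- name: WORDPRESS_DB_HOST       # address of the MySQL Service, e.g. "mysql:3306"
  value: "mysql:3306"
- name: WORDPRESS_DB_USER       # must match MYSQL_USER
  value: "db_user"
- name: WORDPRESS_DB_PASSWORD   # must match MYSQL_PASSWORD
  value: "********"
- name: WORDPRESS_DB_NAME       # must match MYSQL_DATABASE
  value: "database"
```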
@@ -42,7 +42,7 @@ FAQs

#. **Is CCE suitable for users who have little experience in building images?**

   Yes. In addition to storing images created by yourself in **My Images**, CCE allows you to create containerized applications using open source images. For details, see :ref:`Creating a Deployment (Nginx) from an Image <cce_qs_0003>`.

#. **How do I create a workload using CCE?**
@@ -0,0 +1,14 @@

:original_name: cce_10_0655.html

.. _cce_10_0655:

Copying a Node Pool
===================

You can copy the configuration of an existing node pool to create a new node pool on the CCE console.

#. Log in to the CCE console.
#. Click the cluster name and access the cluster console. Choose **Nodes** in the navigation pane and click the **Node Pools** tab on the right.
#. Choose **More > Copy** next to a node pool name to copy the node pool.
#. The configurations of the selected node pool are replicated to the **Clone Node Pool** page. You can edit the configurations as required. For details about configuration items, see :ref:`Creating a Node Pool <cce_10_0012>`. After confirming the configuration, click **Next: Confirm**.
#. On the **Confirm** page, confirm the node pool configuration and click **Submit**. Then, a new node pool is created based on the edited configuration.
@@ -0,0 +1,24 @@

:original_name: cce_10_0657.html

.. _cce_10_0657:

Deleting a Node Pool
====================

Deleting a node pool will delete nodes in the pool. Pods on these nodes will be automatically migrated to available nodes in other node pools.

Precautions
-----------

- Deleting a node pool will delete all nodes in the node pool. Back up data in a timely manner to prevent data loss.
- Deleting a node will lead to pod migration, which may affect services. Perform this operation during off-peak hours. If pods in the node pool have a specific node selector and none of the other nodes in the cluster satisfies the node selector, the pods will become unschedulable.
- When a node pool is deleted, the system sets all nodes in it to the unschedulable state.
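The node selector caveat above can be illustrated with a pod spec. A sketch, with a hypothetical pod name and label: if the label `pool: gpu-pool` exists only on nodes of the deleted node pool, this pod cannot be rescheduled to any other node and stays in the Pending state:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod         # hypothetical name
spec:
  nodeSelector:
    pool: gpu-pool          # label present only on nodes of the deleted pool
  containers:
  - name: app
    image: nginx:latest
```

Before deleting the pool, either remove such node selectors or ensure nodes elsewhere in the cluster carry the same label.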

Procedure
---------

#. Log in to the CCE console.
#. Click the cluster name and access the cluster console. Choose **Nodes** in the navigation pane and click the **Node Pools** tab on the right.
#. Choose **More > Delete** next to a node pool name to delete the node pool.
#. Read the precautions in the **Delete Node Pool** dialog box.
#. In the text box, enter **Yes** to confirm that you want to continue the deletion.
@@ -5,43 +5,22 @@

Managing a Node Pool
====================

- :ref:`Configuring a Node Pool <cce_10_0652>`
- :ref:`Updating a Node Pool <cce_10_0653>`
- :ref:`Synchronizing Node Pools <cce_10_0654>`
- :ref:`Upgrading the OS <cce_10_0660>`
- :ref:`Copying a Node Pool <cce_10_0655>`
- :ref:`Migrating a Node <cce_10_0656>`
- :ref:`Deleting a Node Pool <cce_10_0657>`

.. toctree::
   :maxdepth: 1
   :hidden:

   configuring_a_node_pool
   updating_a_node_pool
   synchronizing_node_pools
   upgrading_the_os
   copying_a_node_pool
   migrating_a_node
   deleting_a_node_pool
@@ -0,0 +1,18 @@

:original_name: cce_10_0656.html

.. _cce_10_0656:

Migrating a Node
================

Nodes in a node pool can be migrated. Currently, nodes in a node pool can be migrated only to the default node pool (defaultpool) in the same cluster.

#. Log in to the CCE console and access the cluster console.
#. In the navigation pane, choose **Nodes** and switch to the **Node Pools** tab page.
#. Click **View Node** in the **Operation** column of the node pool to be migrated.
#. Click **More** > **Migrate** in the **Operation** column of the target node to migrate the node.
#. In the displayed **Migrate Node** window, confirm the information.

   .. note::

      The migration has no impact on the original resource tags, Kubernetes labels, and taints of the node.
@ -0,0 +1,98 @@
|
|||||||
|
:original_name: cce_10_0653.html
|
||||||
|
|
||||||
|
.. _cce_10_0653:
|
||||||
|
|
||||||
|
Updating a Node Pool
|
||||||
|
====================
|
||||||
|
|
||||||
|
Constraints
|
||||||
|
-----------
|
||||||
|
|
||||||
|
- When editing the resource tags of the node pool. The modified configuration takes effect only for new nodes. To synchronize the configuration to the existing nodes, you need to manually reset the existing nodes.
|
||||||
|
- Updates of kubernetes labels and taints are automatically synchronized to existing nodes. You do not need to reset nodes.
|
||||||
|
|
||||||
|
|
||||||
|
Updating a Node Pool
|
||||||
|
--------------------
|
||||||
|
|
||||||
|
#. Log in to the CCE console.
|
||||||
|
|
||||||
|
#. Click the cluster name and access the cluster console. Choose **Nodes** in the navigation pane and click the **Node Pools** tab on the right.
|
||||||
|
|
||||||
|
#. Click **Update** next to the name of the node pool you will edit. Configure the parameters in the displayed **Update Node Pool** page.
|
||||||
|
|
||||||
|
**Basic Settings**
|
||||||
|
|
||||||
|
.. table:: **Table 1** Basic settings
|
||||||
|
|
||||||
|
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||||
|
| Parameter | Description |
|
||||||
|
+===================================+=================================================================================================================================================================================================================================================================================================================================================================================================================================================+
|
||||||
|
| Node Pool Name | Name of the node pool. |
|
||||||
|
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||||
|
| Nodes | Modify the number of nodes based on service requirements. |
|
||||||
|
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||||
|
| Auto Scaling | By default, this parameter is disabled. |
|
||||||
|
| | |
|
||||||
|
| | After you enable autoscaler by clicking |image1|, nodes in the node pool are automatically created or deleted based on service requirements. |
|
||||||
|
| | |
|
||||||
|
| | - **Maximum Nodes** and **Minimum Nodes**: You can set the maximum and minimum number of nodes to ensure that the number of nodes to be scaled is within a proper range. |
|
||||||
|
| | |
|
||||||
|
| | - **Priority**: A larger value indicates a higher priority. For example, if this parameter is set to **1** and **4** respectively for node pools A and B, B has a higher priority than A, and auto scaling is first triggered for B. If the priorities of multiple node pools are set to the same value, for example, **2**, the node pools are not prioritized and the system performs scaling based on the minimum resource waste principle. |
|
||||||
|
| | |
|
||||||
|
| | After the priority is updated, the configuration takes effect within 1 minute. |
|
||||||
|
| | |
|
||||||
|
| | - **Cooldown Period**: Enter a period, in minutes. This field indicates the period during which the nodes added in the current node pool cannot be scaled in. |
|
||||||
|
| | |
|
||||||
|
| | If the **Autoscaler** field is set to on, install the :ref:`autoscaler add-on <cce_10_0154>` to use the autoscaler feature. |
|
||||||
|
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

**Advanced Settings**

.. table:: **Table 2** Advanced settings

   +------------------+-------------------------------------------------------------------------------------------------------+
   | Parameter        | Description                                                                                           |
   +==================+=======================================================================================================+
   | Kubernetes Label | Click **Add Label** to set the key-value pairs attached to Kubernetes objects (such as pods).         |
   |                  | A maximum of 20 labels can be added.                                                                  |
   |                  |                                                                                                       |
   |                  | Labels can be used to distinguish nodes. With workload affinity settings, container pods can be       |
   |                  | scheduled to a specified node. For more information, see                                              |
   |                  | `Labels and Selectors <https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/>`__. |
   |                  |                                                                                                       |
   |                  | .. note::                                                                                             |
   |                  |                                                                                                       |
   |                  |    After a **Kubernetes label** is modified, the existing nodes in the node pool are updated          |
   |                  |    synchronously.                                                                                     |
   +------------------+-------------------------------------------------------------------------------------------------------+
   | Resource Tag     | You can add resource tags to classify resources.                                                      |
   |                  |                                                                                                       |
   |                  | You can create **predefined tags** in Tag Management Service (TMS). Predefined tags are visible to    |
   |                  | all service resources that support the tagging function. You can use these tags to improve tagging    |
   |                  | and resource migration efficiency.                                                                    |
   |                  |                                                                                                       |
   |                  | CCE will automatically create the "CCE-Dynamic-Provisioning-Node=\ *node id*" tag.                    |
   |                  |                                                                                                       |
   |                  | .. note::                                                                                             |
   |                  |                                                                                                       |
   |                  |    After a **resource tag** is modified, the modification automatically takes effect when a node is   |
   |                  |    added. For existing nodes, you need to manually reset the nodes for the modification to take       |
   |                  |    effect.                                                                                            |
   +------------------+-------------------------------------------------------------------------------------------------------+
   | Taint            | This field is left blank by default. You can add taints to set anti-affinity for the node.            |
   |                  | A maximum of 10 taints are allowed for each node. Each taint contains the following parameters:       |
   |                  |                                                                                                       |
   |                  | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters,      |
   |                  |   digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be      |
   |                  |   used as the prefix of a key.                                                                        |
   |                  | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters,    |
   |                  |   including letters, digits, hyphens (-), underscores (_), and periods (.).                           |
   |                  | - **Effect**: Available options are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**.          |
   |                  |                                                                                                       |
   |                  | For details, see :ref:`Managing Node Taints <cce_10_0352>`.                                           |
   |                  |                                                                                                       |
   |                  | .. note::                                                                                             |
   |                  |                                                                                                       |
   |                  |    After a **taint** is modified, the existing nodes in the node pool are updated synchronously.      |
   +------------------+-------------------------------------------------------------------------------------------------------+
   | Edit Key pair    | Only node pools that use key pairs for login support key pair editing. You can select another         |
   |                  | key pair.                                                                                             |
   |                  |                                                                                                       |
   |                  | .. note::                                                                                             |
   |                  |                                                                                                       |
   |                  |    The edited key pair automatically takes effect when a node is added. For existing nodes, you       |
   |                  |    need to manually reset the nodes for the key pair to take effect.                                  |
   +------------------+-------------------------------------------------------------------------------------------------------+
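The **Kubernetes Label** and **Taint** settings above map directly to standard pod scheduling fields. As a minimal, hypothetical sketch (the label ``pool=gpu-pool``, the taint ``dedicated=gpu:NoSchedule``, and the pod and image names are illustrative assumptions, not values created by CCE), a pod that targets such a node pool could look like this:

.. code-block:: yaml

   # Hypothetical example: assumes the node pool was given the Kubernetes label
   # "pool=gpu-pool" and the taint "dedicated=gpu:NoSchedule" in Table 2.
   apiVersion: v1
   kind: Pod
   metadata:
     name: pool-bound-pod
   spec:
     # Schedule only onto nodes carrying the node pool's label.
     nodeSelector:
       pool: gpu-pool
     # Tolerate the node pool's taint so the pod is not repelled from it.
     tolerations:
       - key: dedicated
         operator: Equal
         value: gpu
         effect: NoSchedule
     containers:
       - name: app
         image: nginx:alpine

Pods without the toleration are repelled from the tainted nodes, while the ``nodeSelector`` keeps this pod off every other node pool.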

#. When the configuration is complete, click **OK**.

   After the node pool parameters are updated, go to the **Nodes** page to check whether the nodes in the node pool have been updated. You can reset a node to synchronize the configuration updates to it.

.. |image1| image:: /_static/images/en-us_image_0000001629926113.png
@ -37,15 +37,21 @@ Mapping between Node OSs and Container Engines

.. table:: **Table 2** Node OSs and container engines in CCE Turbo clusters

   +---------------------------+--------------+----------------+-------------------------------------------------+--------------------------+-------------------+
   | Node Type                 | OS           | Kernel Version | Container Engine                                | Container Storage Rootfs | Container Runtime |
   +===========================+==============+================+=================================================+==========================+===================+
   | Elastic Cloud Server (VM) | CentOS 7.x   | 3.x            | Docker                                          | OverlayFS                | runC              |
   +---------------------------+--------------+----------------+-------------------------------------------------+--------------------------+-------------------+
   |                           | EulerOS 2.5  | 3.x            | Docker                                          | OverlayFS                | runC              |
   +---------------------------+--------------+----------------+-------------------------------------------------+--------------------------+-------------------+
   |                           | EulerOS 2.9  | 4.x            | Docker                                          | OverlayFS                | runC              |
   |                           |              |                |                                                 |                          |                   |
   |                           |              |                | Clusters of v1.23 and later support containerd. |                          |                   |
   +---------------------------+--------------+----------------+-------------------------------------------------+--------------------------+-------------------+
   |                           | Ubuntu 22.04 | 4.x            | Docker                                          | OverlayFS                | runC              |
   |                           |              |                |                                                 |                          |                   |
   |                           |              |                | containerd                                      |                          |                   |
   +---------------------------+--------------+----------------+-------------------------------------------------+--------------------------+-------------------+

Differences in Tracing
----------------------
@ -75,6 +81,6 @@ Container Engine Version Description

.. note::

   - You are advised to use the containerd engine for Ubuntu nodes.
   - The open-source docker-ce on Ubuntu 18.04 nodes may trigger bugs when concurrent exec operations are performed (for example, when multiple exec probes are configured). You are advised to use HTTP/TCP probes instead.

- containerd: 1.6.14
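The probe advice in the note above can be sketched with a minimal, hypothetical pod spec (the pod name, image, port, and path are illustrative assumptions). An ``httpGet`` probe is issued by the kubelet over the network, so it avoids the concurrent in-container exec calls that exec probes trigger:

.. code-block:: yaml

   # Hypothetical sketch: an httpGet liveness probe used instead of an exec
   # probe, avoiding concurrent exec operations on Ubuntu 18.04 Docker nodes.
   apiVersion: v1
   kind: Pod
   metadata:
     name: probe-demo
   spec:
     containers:
       - name: web
         image: nginx:alpine
         livenessProbe:
           # The kubelet performs an HTTP GET against the container,
           # rather than spawning a process inside it.
           httpGet:
             path: /
             port: 80
           initialDelaySeconds: 10
           periodSeconds: 10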
@ -74,7 +74,7 @@ Scenario 2: The Original Node Is Not in DefaultPool

#. .. _cce_10_0276__li1992616214312:

   Copy the node pool and add nodes to it. For details, see :ref:`Copying a Node Pool <cce_10_0655>`.

#. Click **View Node** in the **Operation** column of the node pool. The IP address of the new node is displayed in the node list.
|
Loading…
x
Reference in New Issue
Block a user