Update content
This commit is contained in:
parent 4a8bc7b0af
commit 84a626cef4
@@ -12,7 +12,7 @@ Autoscaler is an important Kubernetes controller. It supports microservice scali
 When the CPU or memory usage of a microservice is too high, horizontal pod autoscaling is triggered to add pods to reduce the load. These pods can be automatically reduced when the load is low, allowing the microservice to run as efficiently as possible.

-CCE simplifies the creation, upgrade, and manual scaling of Kubernetes clusters, in which traffic loads change over time. To balance resource usage and workload performance of nodes, Kubernetes introduces the autoscaler add-on to automatically resize a cluster based on the resource usage required for workloads deployed in the cluster. For details, see :ref:`Creating a Node Scaling Policy <cce_10_0209>`.
+CCE simplifies the creation, upgrade, and manual scaling of Kubernetes clusters, in which traffic loads change over time. To balance resource usage and workload performance of nodes, Kubernetes introduces the autoscaler add-on to automatically adjust the number of nodes in a cluster based on the resource usage required for workloads deployed in the cluster. For details, see :ref:`Creating a Node Scaling Policy <cce_10_0209>`.

 Open source community: https://github.com/kubernetes/autoscaler
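As a concrete illustration of the pod-level scaling described above, a HorizontalPodAutoscaler that scales on CPU usage can be sketched as follows. This is a generic Kubernetes example added for orientation; the workload name **my-service** and the thresholds are placeholders, not values from these docs.

.. code-block:: yaml

   apiVersion: autoscaling/v2
   kind: HorizontalPodAutoscaler
   metadata:
     name: my-service-hpa          # placeholder name
   spec:
     scaleTargetRef:
       apiVersion: apps/v1
       kind: Deployment
       name: my-service            # assumed target workload
     minReplicas: 2
     maxReplicas: 10
     metrics:
     - type: Resource
       resource:
         name: cpu
         target:
           type: Utilization
           averageUtilization: 80  # add pods when average CPU usage stays above 80%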
@@ -65,10 +65,10 @@ Constraints
 Installing the Add-on
 ---------------------

-#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **autoscaler** on the right, and click **Install**.
+#. Log in to the CCE console and click the cluster name to access the cluster console. Choose **Add-ons** in the navigation pane, locate **autoscaler** on the right, and click **Install**.

 #. On the **Install Add-on** page, configure the specifications.

-   .. table:: **Table 1** Specifications configuration
+   .. table:: **Table 1** Add-on configuration

    +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------+
    | Parameter | Description |
@@ -88,7 +88,7 @@ Installing the Add-on
    | | |
    | | If you select **Custom**, you can adjust the number of pods as required. |
    +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------+
-   | Multi-AZ | - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ. |
+   | Multi-AZ | - **Preferred**: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ. |
    | | - **Required**: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run. |
    +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------+
    | Containers | CPU and memory quotas of the container allowed for the selected add-on specifications. |
@@ -98,7 +98,7 @@ Installing the Add-on

 #. Configure the add-on parameters.

-   .. table:: **Table 2** Parameters
+   .. table:: **Table 2** Add-on parameters

    +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------+
    | Parameter | Description |
@@ -2,8 +2,8 @@

 .. _cce_10_0129:

-CoreDNS (System Resource Add-On, Mandatory)
-===========================================
+coredns
+=======

 Introduction
 ------------
@@ -27,18 +27,18 @@ Open source community: https://github.com/coredns/coredns
 Constraints
 -----------

-When CoreDNS is running properly or being upgraded, ensure that the number of available nodes is greater than or equal to the number of CoreDNS instances and all CoreDNS instances are running. Otherwise, the upgrade will fail.
+When CoreDNS is running properly or being upgraded, ensure that the number of available nodes is greater than or equal to the number of the add-on pods and all the add-on pods are running. Otherwise, the upgrade will fail.

 Installing the Add-on
 ---------------------

 This add-on has been installed by default. If it is uninstalled due to some reasons, you can reinstall it by performing the following steps:

-#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **coredns** on the right, and click **Install**.
+#. Log in to the CCE console and click the cluster name to access the cluster console. Choose **Add-ons** in the navigation pane, locate **coredns** on the right, and click **Install**.

 #. On the **Install Add-on** page, configure the specifications.

-   .. table:: **Table 1** CoreDNS parameters
+   .. table:: **Table 1** Add-on configuration

    +-----------------------------------+-----------------------------------------------------------------------------------------------------------------+
    | Parameter | Description |
@@ -61,7 +61,7 @@ This add-on has been installed by default. If it is uninstalled due to some reas

 #. Configure the add-on parameters.

-   .. table:: **Table 2** CoreDNS add-on parameters
+   .. table:: **Table 2** Add-on parameters

    +-----------------------------------+---------------------------------------------------------------------------------------------------------------+
    | Parameter | Description |
@@ -80,7 +80,7 @@ This add-on has been installed by default. If it is uninstalled due to some reas
    | | |
    | | - **upstream_nameservers**: IP address of the upstream DNS server. |
    | | |
-   | | - servers:The servers configuration has been available since CoreDNS 1.23.1. You can customize the servers configuration. For details, see `dns-custom-nameservers <https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/>`__. |
+   | | - **servers**: nameservers, which are available in CoreDNS v1.23.1 and later versions. You can customize nameservers. For details, see `dns-custom-nameservers <https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers>`__. |
    | | |
    | | **plugins** indicates the configuration of each component in CoreDNS. Retain the default settings typically to prevent CoreDNS from being unavailable due to configuration errors. Each plugin component contains **name**, **parameters** (optional), and **configBlock** (optional). The format of the generated Corefile is as follows: |
    | | |
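For orientation on the **upstream_nameservers**/**servers** settings above, the Corefile that CoreDNS ultimately runs follows the upstream Kubernetes layout. A minimal sketch, with an illustrative upstream at 8.8.8.8 (see the dns-custom-nameservers page linked above for the authoritative reference):

.. code-block:: yaml

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: coredns
     namespace: kube-system
   data:
     Corefile: |
       .:53 {
           errors
           cache 30
           forward . 8.8.8.8    # illustrative upstream nameserver
       }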
@@ -2,8 +2,8 @@

 .. _cce_10_0066:

-everest (System Resource Add-On, Mandatory)
-===========================================
+everest
+=======

 Introduction
 ------------
@@ -24,11 +24,11 @@ Installing the Add-on

 This add-on has been installed by default. If it is uninstalled due to some reasons, you can reinstall it by performing the following steps:

-#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **everest** on the right, and click **Install**.
+#. Log in to the CCE console and click the cluster name to access the cluster console. Choose **Add-ons** in the navigation pane, locate **everest** on the right, and click **Install**.

 #. On the **Install Add-on** page, configure the specifications.

-   .. table:: **Table 1** everest parameters
+   .. table:: **Table 1** Add-on configuration

    +-----------------------------------+-------------------------------------------------------------------------------------------------+
    | Parameter | Description |
@@ -39,7 +39,7 @@ This add-on has been installed by default. If it is uninstalled due to some reas
    | | |
    | | If you select **Custom**, you can adjust the number of pods as required. |
    +-----------------------------------+-------------------------------------------------------------------------------------------------+
-   | Multi-AZ | - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ. |
+   | Multi-AZ | - **Preferred**: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ. |
    | | - **Required**: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run. |
    +-----------------------------------+-------------------------------------------------------------------------------------------------+
    | Containers | The everest add-on contains the everest-csi-controller and everest-csi-driver components. For details, see :ref:`Components <cce_10_0066__section0377457163618>`. |
@@ -55,7 +55,7 @@ This add-on has been installed by default. If it is uninstalled due to some reas
    | | |
    | | - everest-csi-driver |
    | | |
-   | | - CPU limit: 300 m for 200 or fewer nodes, 500 m for 1,000 nodes, and 800 m for 2,000 nodes |
+   | | - CPU limit: 300 m for 200 or fewer nodes, 500 m for 1000 nodes, and 800 m for 2000 nodes |
    | | - Memory limit: 300 MiB for 200 or fewer nodes, 600 MiB for 1000 nodes, and 900 MiB for 2000 nodes |
    +-----------------------------------+-------------------------------------------------------------------------------------------------+
@@ -70,7 +70,7 @@ This add-on has been installed by default. If it is uninstalled due to some reas
    +------------------------+----------+------------------+-----------------+-----------------+-----------------+-----------------+
    | 50 | 1000 | 2 | 250 m | 600 MiB | 300 m | 300 MiB |
    +------------------------+----------+------------------+-----------------+-----------------+-----------------+-----------------+
-   | 200 | 1,000 | 2 | 250 m | 1 GiB | 300 m | 300 MiB |
+   | 200 | 1000 | 2 | 250 m | 1 GiB | 300 m | 300 MiB |
    +------------------------+----------+------------------+-----------------+-----------------+-----------------+-----------------+
    | 1000 | 1000 | 2 | 350 m | 2 GiB | 500 m | 600 MiB |
    +------------------------+----------+------------------+-----------------+-----------------+-----------------+-----------------+
@@ -83,7 +83,7 @@ This add-on has been installed by default. If it is uninstalled due to some reas

 #. Configure the add-on parameters.

-   .. table:: **Table 3** everest add-on parameters
+   .. table:: **Table 3** Add-on parameters

    +------------------------------------+-----------------------------------------------------------------------------------+
    | Parameter | Description |
@@ -23,10 +23,10 @@ Constraints
 Installing the Add-on
 ---------------------

-#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **gpu-beta** or **gpu-device-plugin** on the right, and click **Install**.
+#. Log in to the CCE console and click the cluster name to access the cluster console. Choose **Add-ons** in the navigation pane, locate **gpu-beta** on the right, and click **Install**.

 #. On the **Install Add-on** page, configure the specifications.

-   .. table:: **Table 1** Add-on specifications
+   .. table:: **Table 1** Add-on configuration

    +-----------------------------------+----------------------------------------------------------------------------------------+
    | Parameter | Description |
@@ -6,25 +6,25 @@ Add-ons
 =======

 - :ref:`Overview <cce_10_0277>`
-- :ref:`CoreDNS (System Resource Add-On, Mandatory) <cce_10_0129>`
-- :ref:`everest (System Resource Add-On, Mandatory) <cce_10_0066>`
+- :ref:`coredns <cce_10_0129>`
+- :ref:`everest <cce_10_0066>`
 - :ref:`npd <cce_10_0132>`
 - :ref:`autoscaler <cce_10_0154>`
 - :ref:`metrics-server <cce_10_0205>`
 - :ref:`gpu-beta <cce_10_0141>`
-- :ref:`Volcano <cce_10_0193>`
-- :ref:`storage-driver (System Resource Add-On, Discarded) <cce_10_0127>`
+- :ref:`volcano <cce_10_0193>`
+- :ref:`storage-driver(Flexvolume, Deprecated) <cce_10_0127>`

 .. toctree::
    :maxdepth: 1
    :hidden:

    overview
-   coredns_system_resource_add-on_mandatory
-   everest_system_resource_add-on_mandatory
+   coredns
+   everest
    npd
    autoscaler
    metrics-server
    gpu-beta
    volcano
-   storage-driver_system_resource_add-on_discarded
+   storage-driverflexvolume_deprecated
@@ -16,27 +16,27 @@ The official community project and documentation are available at https://github
 Installing the Add-on
 ---------------------

-#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **metrics-server** on the right, and click **Install**.
+#. Log in to the CCE console and click the cluster name to access the cluster console. Choose **Add-ons** in the navigation pane, locate **metrics-server** on the right, and click **Install**.

 #. On the **Install Add-on** page, configure the specifications.

-   .. table:: **Table 1** metrics-server configuration
+   .. table:: **Table 1** Add-on configuration

-   +-----------------------------------+------------------------------------------------------------------------------------------------------------+
+   +-----------------------------------+----------------------------------------------------------------------------------------------------+
    | Parameter | Description |
-   +===================================+==============================================================================================================+
+   +===================================+====================================================================================================+
    | Add-on Specifications | Select **Single**, **Custom**, or **HA** for **Add-on Specifications**. |
-   +-----------------------------------+------------------------------------------------------------------------------------------------------------+
+   +-----------------------------------+----------------------------------------------------------------------------------------------------+
    | Pods | Number of pods that will be created to match the selected add-on specifications. |
    | | |
    | | If you select **Custom**, you can adjust the number of pods as required. |
-   +-----------------------------------+------------------------------------------------------------------------------------------------------------+
-   | Multi-AZ | - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ. |
+   +-----------------------------------+----------------------------------------------------------------------------------------------------+
+   | Multi-AZ | - **Preferred**: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ. |
    | | - **Required**: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run. |
-   +-----------------------------------+------------------------------------------------------------------------------------------------------------+
+   +-----------------------------------+----------------------------------------------------------------------------------------------------+
    | Containers | CPU and memory quotas of the container allowed for the selected add-on specifications. |
    | | |
    | | If you select **Custom**, you can adjust the container specifications as required. |
-   +-----------------------------------+------------------------------------------------------------------------------------------------------------+
+   +-----------------------------------+----------------------------------------------------------------------------------------------------+

 #. Click **Install**.
@@ -45,8 +45,8 @@ Components

 .. table:: **Table 2** metrics-server components

-   +----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+
-   | Component | Description | Resource Type |
-   +================+============================================================================================================================================================================+===============+
+   +---------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+
+   | Container Component | Description | Resource Type |
+   +=====================+==============================================================================================================================================================================+===============+
    | metrics-server | Aggregator for the monitored data of cluster core resources, which is used to collect and aggregate resource usage metrics obtained through the Metrics API in the cluster | Deployment |
-   +----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+
+   +---------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+
@@ -35,7 +35,7 @@ Installing the Add-on

 #. On the **Install Add-on** page, configure the specifications.

-   .. table:: **Table 1** npd configuration
+   .. table:: **Table 1** Add-on configuration

    +-----------------------+------------------------------------------------------------------------------------+
    | Parameter | Description |
@@ -51,7 +51,7 @@ Installing the Add-on

 Only v1.16.0 and later versions support the configurations.

-   .. table:: **Table 2** npd parameters
+   .. table:: **Table 2** Add-on parameters

    +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------+
    | Parameter | Description |
|
@ -13,25 +13,25 @@ CCE provides multiple types of add-ons to extend cluster functions and meet feat
|
||||
|
||||
.. table:: **Table 1** Add-on list
|
||||
|
||||
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
+-------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| Add-on Name | Introduction |
|
||||
+=========================================================================+=================================================================================================================================================================================================================================================================================================================================+
|
||||
| :ref:`CoreDNS (System Resource Add-On, Mandatory) <cce_10_0129>` | CoreDNS is a DNS server that provides domain name resolution for Kubernetes clusters through chain plug-ins. |
|
||||
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| :ref:`storage-driver (System Resource Add-On, Discarded) <cce_10_0127>` | storage-driver is a FlexVolume driver used to support IaaS storage services such as EVS, SFS, and OBS. |
|
||||
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| :ref:`everest (System Resource Add-On, Mandatory) <cce_10_0066>` | everest is a cloud native container storage system, which enables clusters of Kubernetes v1.15.6 or later to use cloud storage through the Container Storage Interface (CSI). |
|
||||
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
+=============================================================+=================================================================================================================================================================================================================================================================================================================================+
|
||||
| :ref:`coredns <cce_10_0129>` | CoreDNS is a DNS server that provides domain name resolution for Kubernetes clusters through chain plug-ins. |
|
||||
+-------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| :ref:`storage-driver(Flexvolume, Deprecated) <cce_10_0127>` | storage-driver is a FlexVolume driver used to support IaaS storage services such as EVS, SFS, and OBS. |
|
||||
+-------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| :ref:`everest <cce_10_0066>` | everest is a cloud native container storage system, which enables clusters of Kubernetes v1.15.6 or later to use cloud storage through the Container Storage Interface (CSI). |
|
||||
+-------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| :ref:`npd <cce_10_0132>` | node-problem-detector (npd for short) is an add-on that monitors abnormal events of cluster nodes and connects to a third-party monitoring platform. It is a daemon running on each node. It collects node issues from different daemons and reports them to the API server. The npd add-on can run as a DaemonSet or a daemon. |
|
||||
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
+-------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| :ref:`autoscaler <cce_10_0154>` | The autoscaler add-on resizes a cluster based on pod scheduling status and resource usage. |
|
||||
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
+-------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| :ref:`metrics-server <cce_10_0205>` | metrics-server is an aggregator for monitoring data of core cluster resources. |
|
||||
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
+-------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| :ref:`gpu-device-plugin (formerly gpu-beta) <cce_10_0141>` | gpu-device-plugin is a device management add-on that supports GPUs in containers. It supports only NVIDIA drivers. |
|
||||
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
+-------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| :ref:`volcano <cce_10_0193>` | Volcano provides general-purpose, high-performance computing capabilities, such as job scheduling, heterogeneous chip management, and job running management, serving end users through computing frameworks for different industries, such as AI, big data, gene sequencing, and rendering. |
|
||||
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
+-------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
|
||||
Add-on Lifecycle
|
||||
----------------
|
||||
|
@@ -2,8 +2,8 @@

 .. _cce_10_0127:

-storage-driver (System Resource Add-On, Discarded)
-==================================================
+storage-driver(Flexvolume, Deprecated)
+======================================

 Introduction
 ------------
@@ -16,7 +16,7 @@ Constraints
 -----------

 - For clusters created in CCE, Kubernetes v1.15.11 is a transitional version in which the FlexVolume add-on (storage-driver) is compatible with the CSI add-on (:ref:`everest <cce_10_0066>`). Clusters of v1.17 and later versions do not support FlexVolume anymore. Use the everest add-on.
-- The FlexVolume add-on will be maintained by Kubernetes developers, but new functionality will only be added to :ref:`everest (System Resource Add-On, Mandatory) <cce_10_0066>`. Do not create CCE storage that connects to the FlexVolume add-on (storage-driver) anymore. Otherwise, storage may malfunction.
+- The FlexVolume add-on will be maintained by Kubernetes developers, but new functionality will only be added to :ref:`everest <cce_10_0066>`. Do not create CCE storage that connects to the FlexVolume add-on (storage-driver) anymore. Otherwise, storage may malfunction.
 - This add-on can be installed only in **clusters of v1.13 or earlier**. By default, the :ref:`everest <cce_10_0066>` add-on is installed when clusters of v1.15 or later are created.

 .. note::
@@ -30,5 +30,5 @@ This add-on has been installed by default. If it is uninstalled due to some reas

 If storage-driver is not installed in a cluster, perform the following steps to install it:

-#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **storage-driver** on the right, and click **Install**.
+#. Log in to the CCE console and click the cluster name to access the cluster console. Choose **Add-ons** in the navigation pane, locate **storage-driver** on the right, and click **Install**.
 #. Click **Install** to install the add-on. Note that the storage-driver has no configurable parameters and can be directly installed.
@@ -2,7 +2,7 @@

 .. _cce_10_0193:

-Volcano
+volcano
 =======

 Introduction
@@ -29,11 +29,11 @@ Install and configure the Volcano add-on in CCE clusters. For details, see :ref:
 Installing the Add-on
 ---------------------

-#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **volcano** on the right, and click **Install**.
+#. Log in to the CCE console and click the cluster name to access the cluster console. Choose **Add-ons** in the navigation pane, locate **volcano** on the right, and click **Install**.

 #. On the **Install Add-on** page, configure the specifications.

-   .. table:: **Table 1** Volcano specifications
+   .. table:: **Table 1** Add-on configuration

    +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------+
    | Parameter | Description |
@@ -62,7 +62,7 @@ Installing the Add-on
    | | |
    | | For example, for 2000 nodes and 20,000 pods, Number of target nodes x Number of target pods = 40 million, which is close to the specification of 700/70000 (Number of cluster nodes x Number of pods = 49 million). According to the following table, you are advised to set the CPU request value to 4000 m and the limit value to 5500 m. |
    | | |
-   | | - Memory request value: It is recommended that 2.4 GiB memory be allocated to every 1,000 nodes and 1 GiB memory be allocated to every 10,000 pods. The memory request value is the sum of these two values. (The obtained value may be different from the recommended value in :ref:`Table 2 <cce_10_0193__table4742829185912>`. You can use either of them.) |
+   | | - Memory request value: It is recommended that 2.4 GiB memory be allocated to every 1000 nodes and 1 GiB memory be allocated to every 10,000 pods. The memory request value is the sum of these two values. (The obtained value may be different from the recommended value in :ref:`Table 2 <cce_10_0193__table4742829185912>`. You can use either of them.) |
    | | |
    | | Memory request = Number of target nodes/1000 x 2.4 GiB + Number of target pods/10000 x 1 GiB |
    | | |
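To make the memory formula above concrete, take the earlier example of 2000 target nodes and 20,000 target pods: Memory request = 2000/1000 x 2.4 GiB + 20000/10000 x 1 GiB = 4.8 GiB + 2 GiB = 6.8 GiB.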
@@ -21,6 +21,7 @@ Constraints
 -----------

 - Auto scaling policies apply to node pools. When the number of nodes in a node pool is 0 and the scaling policy is based on CPU or memory usage, node scaling is not triggered.
+- Node scale-in will cause PVC/PV data loss for the :ref:`local PVs <cce_10_0391>` associated with the node. These PVCs and PVs cannot be restored or used again. In a node scale-in, the pod that uses the local PV is evicted from the node. A new pod is created and stays in the pending state. This is because the PVC used by the pod has a node label, due to which the pod cannot be scheduled.
 - When autoscaler is used, some taints or annotations may affect auto scaling. Therefore, do not use the following taints or annotations in clusters:

   - **ignore-taint.cluster-autoscaler.kubernetes.io**: The taint works on nodes. Kubernetes-native autoscaler supports protection against abnormal scale outs and periodically evaluates the proportion of available nodes in the cluster. When the proportion of non-ready nodes exceeds 45%, protection will be triggered. In this case, all nodes with the **ignore-taint.cluster-autoscaler.kubernetes.io** taint in the cluster are filtered out from the autoscaler template and recorded as non-ready nodes, which affects cluster scaling.
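For clarity, the taint to avoid has the following shape in a node specification. This is an illustrative sketch only; the value and effect shown are placeholders.

.. code-block:: yaml

   # Do not apply this taint on nodes in clusters that rely on autoscaler:
   spec:
     taints:
     - key: ignore-taint.cluster-autoscaler.kubernetes.io
       value: "true"         # placeholder value
       effect: NoSchedule    # placeholder effect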
@@ -264,9 +264,7 @@ Expanding the Capacity of a Data Disk Used by Pod (basesize)

 .. important::

-   Resetting a node may make unavailable the node-specific resources (such as local storage and workloads scheduled to this node). Exercise caution when performing this operation to avoid impact on running services.
-
-#. Click **Yes**.
+   Resetting a node may make the node-specific resources (such as local storage and workloads scheduled to this node) unavailable. Exercise caution when performing this operation to avoid impact on running services.

 #. Reconfigure node parameters.
@@ -277,7 +275,7 @@ Expanding the Capacity of a Data Disk Used by Pod (basesize)
 **Storage Settings**: Click **Expand** next to the data disk to set the following parameters:

 - **Allocate Disk Space**: storage space used by the container engine to store the Docker/containerd working directory, container image data, and image metadata. Defaults to 90% of the data disk.
-- **Allocate Pod Basesize**: CCE allows you to set an upper limit for the disk space occupied by each workload pod (including the space occupied by container images). This setting prevents the pods from taking all the disk space available, which may cause service exceptions. It is recommended that the value be smaller than or equal to 80% of the container engine space.
+- **Allocate Pod Basesize**: CCE allows you to set an upper limit for the disk space occupied by each workload pod (including the space occupied by container images). This setting prevents the pods from taking all the disk space available, which may cause service exceptions. It is recommended that the value be less than or equal to 80% of the container engine space.

 .. note::
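As a worked example of the defaults described above (the disk size is illustrative): a 100 GiB data disk leaves 90 GiB for the container engine (90% of the disk), so a pod basesize of at most 72 GiB (80% of 90 GiB) stays within the recommendation.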
@@ -287,11 +285,11 @@ Expanding the Capacity of a Data Disk Used by Pod (basesize)

 - When the rootfs uses OverlayFS, most nodes do not support custom pod basesize. The storage space of a single container is not limited and defaults to the container engine space.

-  Only EulerOS 2.9 nodes in clusters of 1.19.16, 1.21.3, 1.23.3, and later versions support custom pod basesize.
+  Only nodes running EulerOS 2.9 in clusters of 1.19.16, 1.21.3, 1.23.3, and later versions support custom pod basesize.

-- In the case of using Docker on EulerOS 2.9 nodes, **basesize** will not take effect if **CAP_SYS_RESOURCE** or **privileged** is configured for a container.
+- In the case of using Docker on nodes running EulerOS 2.9, **basesize** will not take effect if **CAP_SYS_RESOURCE** or **privileged** is configured for a container.

-#. After the node is reset, log in to the node and run the following command to access the container and check whether the container storage capacity has been expanded:
+#. After the node is reset, log in to the node and run the following command to access the container and check whether the container storage capacity has been expanded.

    **docker exec -it** *container_id* **/bin/sh** or **kubectl exec -it** *container_id* **/bin/sh**
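Once inside the container, a command such as **df -h** can then show whether the root filesystem size reflects the new basesize; the exact output depends on the container engine and storage driver in use.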
@@ -311,7 +309,7 @@ Cloud storage:

 - You can expand the capacity of automatically created pay-per-use volumes on the console. The procedure is as follows:

-   #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **More** in the **Operation** column of the target PVC and select **Scale-out**.
+   #. Choose **Storage** in the navigation pane and click the **PersistentVolumeClaims (PVCs)** tab. Locate the row containing the target PVC and choose **More** > **Scale-out** in the **Operation** column.
    #. Enter the capacity to be added and click **OK**.

 - For SFS Turbo, expand the capacity on the SFS console and then change the capacity in the PVC.
@@ -28,37 +28,37 @@ If the node status is abnormal, contact technical support.

 If the container network is abnormal and your services are affected, contact technical support and confirm the abnormal network access path.

-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | Source | Destination | Destination Type | Possible Fault |
-+==============================================+==============================================================================+======================================+==========================================================================+
++=========================================================================+==============================================================================+======================================+==========================================================================+
 | - Pods (inside a cluster) | Public IP address of Service ELB | Cluster traffic load balancing entry | No record. |
 | - Nodes (inside a cluster) | | | |
-| - Nodes in the same VPC (outside a cluster) | | | |
-| - Third-party clouds | | | |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
+| - Cloud servers outside the cluster but in the same VPC as the cluster | | | |
+| - Outside the VPC to which the cluster belongs | | | |
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | Private IP address of Service ELB | Cluster traffic load balancing entry | No record. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | Public IP address of ingress ELB | Cluster traffic load balancing entry | No record. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | Private IP address of ingress ELB | Cluster traffic load balancing entry | No record. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | Public IP address of NodePort Service | Cluster traffic entry | The kube proxy configuration is overwritten. This fault has been rectified in the upgrade process. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | Private IP address of NodePort Service | Cluster traffic entry | No record. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | ClusterIP Service | Service network plane | No record. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | Non NodePort Service port | Container network | No record. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | Cross-node pods | Container network plane | No record. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | Pods on the same node | Container network plane | No record. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | Service and pod domain names are resolved by CoreDNS. | Domain name resolution | No record. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | External domain names are resolved based on the CoreDNS hosts configuration. | Domain name resolution | After CoreDNS is upgraded, the configuration is overwritten. This fault has been rectified in the add-on upgrade process. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | External domain names are resolved based on the CoreDNS upstream server. | Domain name resolution | After CoreDNS is upgraded, the configuration is overwritten. This fault has been rectified in the add-on upgrade process. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
 | | External domain names are not resolved by CoreDNS. | Domain name resolution | No record. |
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------+
@ -63,7 +63,7 @@ You can upgrade CoreDNS separately after confirming the configuration difference

To retain the different configurations, use either of the following methods:

- Set **parameterSyncStrategy** to **force**. Manually enter the differential configuration. For details, see :ref:`CoreDNS (System Resource Add-On, Mandatory) <cce_10_0129>`.
- Set **parameterSyncStrategy** to **force**. Manually enter the differential configuration. For details, see :ref:`coredns <cce_10_0129>`.
- If **parameterSyncStrategy** is set to **inherit**, differentiated configurations are automatically inherited. The system automatically parses, identifies, and inherits differentiated parameters.
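
The strategy itself is set in the add-on's configuration. As a rough, unverified sketch of where the field sits (the exact path may differ; see the coredns add-on reference linked above):

.. code-block::

   # Hypothetical excerpt of a coredns add-on configuration
   spec:
     values:
       parameterSyncStrategy: force   # "force": manually supply the differential configuration
                                      # "inherit": differentiated parameters are inherited automatically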
|image1|
@ -5,12 +5,10 @@

Cluster Network Settings
========================

- :ref:`Switching a Node Subnet <cce_10_0464>`
- :ref:`Adding a Container CIDR Block for a Cluster <cce_10_0680>`

.. toctree::
   :maxdepth: 1
   :hidden:

   switching_a_node_subnet
   adding_a_container_cidr_block_for_a_cluster

@ -1,31 +0,0 @@

:original_name: cce_10_0464.html

.. _cce_10_0464:

Switching a Node Subnet
=======================

Scenario
--------

This section describes how to switch subnets for nodes in a cluster.

Constraints
-----------

- Nodes can be switched only to subnets in the same VPC as the cluster. The security group of the node cannot be changed during the switch.

Procedure
---------

#. Log in to the ECS console.
#. Click **More** > **Manage Network** > **Change VPC** in the **Operation** column of the target ECS.
#. Configure the parameters for changing the VPC.

   - **VPC**: Select the same VPC as that of the cluster.
   - **Subnet**: Select the subnet to switch to.
   - **Private IP Address**: Select **Assign new** or **Use existing** as required.
   - **Security Group**: Select the security group used by the cluster nodes. Otherwise, the node will become unavailable.

#. Click **OK**.
#. Go to the CCE console and reset the node. You can use the default parameter settings. For details, see :ref:`Resetting a Node <cce_10_0003>`.

@ -16,7 +16,7 @@ Constraints

The following table lists the constraints on setting rate limiting for inter-pod access:

+-------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------+----------------------------------------------------------------------------------+
| Constraint Type         | Tunnel network model                                                  | VPC network model                                                     | Cloud Native 2.0 Network Model                                                   |
| Constraint Type         | Tunnel Network Model                                                  | VPC Network Model                                                     | Cloud Native 2.0 Network Model                                                   |
+=========================+=======================================================================+=======================================================================+==================================================================================+
| Supported versions      | All versions                                                          | Clusters of v1.19.10 and later                                        | Clusters of v1.19.10 and later                                                   |
+-------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------+----------------------------------------------------------------------------------+
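
In Kubernetes, pod-level rate limiting is usually expressed through bandwidth annotations on the pod. A minimal sketch (the annotation keys are the standard community ones; the values are illustrative):

.. code-block::

   apiVersion: v1
   kind: Pod
   metadata:
     name: nginx
     annotations:
       kubernetes.io/ingress-bandwidth: 100M   # limit on inbound pod traffic
       kubernetes.io/egress-bandwidth: 100M    # limit on outbound pod traffic
   spec:
     containers:
       - name: nginx
         image: nginx:alpine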
@ -7,7 +7,7 @@ DNS Configuration

Every Kubernetes cluster has a built-in DNS add-on (Kube-DNS or CoreDNS) that provides domain name resolution for workloads in the cluster. Under a high concurrency of DNS queries, Kube-DNS/CoreDNS may hit a performance bottleneck and occasionally fail to fulfill DNS queries. Kubernetes workloads sometimes also initiate unnecessary DNS queries, which can overload DNS when many queries run concurrently. Tuning the DNS configuration of workloads reduces the risk of DNS query failures to some extent.
For more information about DNS, see :ref:`CoreDNS (System Resource Add-On, Mandatory) <cce_10_0129>`.
For more information about DNS, see :ref:`coredns <cce_10_0129>`.
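
One common tuning, sketched below with standard Kubernetes fields (the ndots value is only an example), is lowering **ndots** so that short external names do not fan out across the cluster search domains:

.. code-block::

   apiVersion: v1
   kind: Pod
   metadata:
     name: dns-tuned
   spec:
     dnsPolicy: ClusterFirst
     dnsConfig:
       options:
         - name: ndots
           value: "2"   # fewer search-domain expansions for external lookups
     containers:
       - name: app
         image: nginx:alpine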

DNS Configuration Items
-----------------------

@ -29,7 +29,6 @@ Interconnecting with ELB

| | | - **performance**: dedicated load balancer, which can be used only in clusters of v1.17 and later. | |
+------------------------------+-----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+
| kubernetes.io/ingress.class | String | - **cce**: The self-developed ELB ingress is used. | Only clusters of v1.21 or earlier |
| | | - **nginx**: Nginx ingress is used. | |
| | | | |
| | | This parameter is mandatory when an ingress is created by calling the API. | |
| | | | |

@ -166,7 +165,8 @@ Data Structure

| eip_type | Yes for public network load balancers | String | EIP type. |
| | | | |
| | | | - **5_bgp**: dynamic BGP |
| | | | - **5_sbgp**: static BGP |
| | | | |
| | | | The specific type varies with regions. For details, see the EIP console. |
+----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| available_zone | Yes | Array of strings | AZ where the load balancer is located. |
| | | | |
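
These fields surface in Service and ingress manifests through the **kubernetes.io/elb.autocreate** annotation. A hedged sketch (field names follow the table above; the values are illustrative and region-dependent):

.. code-block::

   apiVersion: v1
   kind: Service
   metadata:
     name: nginx-auto-lb
     annotations:
       # Values here are assumptions for illustration; check your region's EIP types and AZs
       kubernetes.io/elb.autocreate: '{"type":"public","bandwidth_name":"cce-bandwidth-demo","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","available_zone":["az1"]}'
   spec:
     type: LoadBalancer
     selector:
       app: nginx
     ports:
       - port: 80
         targetPort: 80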
@ -44,7 +44,7 @@ This section uses an Nginx workload as an example to describe how to add an ELB

- **Instance Name**: Enter a load balancer name.
- **Public Access**: If enabled, an EIP with 5 Mbit/s bandwidth will be created.
- **Subnet**, **AZ**, and **Specifications** (available only for dedicated load balancers): Configure the subnet, AZ, and specifications. Only HTTP- or HTTPS-compliant dedicated load balancers can be automatically created.

- **Listener**: Ingress configures a listener for the load balancer, which listens for requests from the load balancer and distributes traffic. After the configuration is complete, a listener is created on the load balancer. The default listener name is *k8s__<Protocol type>_<Port number>*, for example, *k8s_HTTP_80*.

@ -329,7 +329,8 @@ The following describes how to run the kubectl command to automatically create a

| eip_type | Yes for public network load balancers | String | EIP type. |
| | | | |
| | | | - **5_bgp**: dynamic BGP |
| | | | - **5_sbgp**: static BGP |
| | | | |
| | | | The specific type varies with regions. For details, see the EIP console. |
+----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| available_zone | Yes | Array of strings | AZ where the load balancer is located. |
| | | | |

@ -368,7 +369,7 @@ The following describes how to run the kubectl command to automatically create a

**kubectl get ingress**

If information similar to the following is displayed, the ingress has been created successfully and the workload is accessible.
If information similar to the following is displayed, the ingress has been created and the workload is accessible.

.. code-block::
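
   NAME        HOSTS   ADDRESS      PORTS   AGE
   ingress-1   *       10.3.1.205   80      8s

The output above is only an illustrative sketch; the actual name, address, and age depend on your ingress and cluster.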
@ -24,7 +24,7 @@ The cluster-internal domain name format is *<Service name>*.\ *<Namespace of the

Creating a ClusterIP Service
----------------------------

#. Log in to the CCE console and access the cluster console.
#. Log in to the CCE console and click the cluster name to access the cluster console.
#. Choose **Networking** in the navigation pane and click **Create Service** in the upper right corner.
#. Set intra-cluster access parameters.
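
For comparison, a minimal ClusterIP Service manifest matching these console settings could look like the following (the name, selector, and ports are placeholders):

.. code-block::

   apiVersion: v1
   kind: Service
   metadata:
     name: nginx-clusterip
     namespace: default
   spec:
     type: ClusterIP          # intra-cluster access only
     selector:
       app: nginx             # must match the labels of the target pods
     ports:
       - port: 80             # Service port
         targetPort: 80       # container port
         protocol: TCP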
@ -41,7 +41,7 @@ Constraints

- CCE Turbo clusters support only cluster-level service affinity.
- Dedicated ELB load balancers can be used only in clusters of v1.17 and later.
- Dedicated load balancers must be of the network type (TCP/UDP) supporting private networks (with a private IP). If the Service needs to support HTTP, the specifications of dedicated load balancers must use HTTP/HTTPS (application load balancing) in addition to TCP/UDP (network load balancing).
- In a CCE cluster, if the cluster-level affinity is configured for a LoadBalancer Service, requests are distributed to the node ports of each node using SNAT when entering the cluster. The number of node ports cannot exceed the number of available node ports on the node. If the service affinity is at the node level (Local), there is no such constraint. In a CCE Turbo cluster, this constraint applies to shared ELB load balancers, but not dedicated ones. Use dedicated ELB load balancers in CCE Turbo clusters.
- In a CCE cluster, if the cluster-level affinity is configured for a LoadBalancer Service, requests are distributed to the node ports of each node using SNAT when entering the cluster. The number of node ports cannot exceed the number of available node ports on the node. If the service affinity is at the node level (Local), there is no such constraint. In a CCE Turbo cluster, this constraint applies to shared load balancers, but not dedicated ones. Use dedicated load balancers in CCE Turbo clusters. (See the affinity sketch after this list.)
- When the cluster service forwarding (proxy) mode is IPVS, the node IP cannot be configured as the external IP of the Service. Otherwise, the node becomes unavailable.
- In a cluster using the IPVS proxy mode, if the ingress and Service use the same ELB load balancer, the ingress cannot be accessed from the nodes and containers in the cluster because kube-proxy mounts the LoadBalancer Service address to the ipvs-0 bridge. This bridge intercepts the traffic of the load balancer connected to the ingress. You are advised to use different ELB load balancers for the ingress and Service.
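
In YAML terms, the affinity level discussed above maps to the standard **externalTrafficPolicy** field; a minimal sketch:

.. code-block::

   apiVersion: v1
   kind: Service
   metadata:
     name: nginx-lb
   spec:
     type: LoadBalancer
     externalTrafficPolicy: Local   # node-level affinity; Cluster (the default) is cluster-level
     selector:
       app: nginx
     ports:
       - port: 80
         targetPort: 80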
@ -74,7 +74,7 @@ Creating a LoadBalancer Service

- **Instance Name**: Enter a load balancer name.
- **Public Access**: If enabled, an EIP with 5 Mbit/s bandwidth will be created.
- **Subnet**, **AZ**, and **Specifications** (available only for dedicated load balancers): Configure the subnet, AZ, and specifications. Currently, only dedicated load balancers of the network type (TCP/UDP) can be automatically created.

You can click **Edit** in the **Set ELB** area and configure load balancer parameters in the **Set ELB** dialog box.

@ -621,7 +621,8 @@ You can set the Service when creating a workload using kubectl. This section use

| eip_type | Yes for public network load balancers | String | EIP type. |
| | | | |
| | | | - **5_bgp**: dynamic BGP |
| | | | - **5_sbgp**: static BGP |
| | | | |
| | | | The specific type varies with regions. For details, see the EIP console. |
+----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| available_zone | Yes | Array of strings | AZ where the load balancer is located. |
| | | | |

@ -35,8 +35,8 @@ CCE supports passthrough networking. You can configure the **annotation** of **k

When a LoadBalancer Service (configured with elb.pass-through) is accessed within the cluster, the access is first forwarded to the load balancer, and then to the pods.
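
As a sketch, the annotation mentioned above sits on the Service metadata (the key follows the CCE naming used elsewhere in this guide; treat the exact key as an assumption to verify):

.. code-block::

   apiVersion: v1
   kind: Service
   metadata:
     name: nginx-passthrough
     annotations:
       kubernetes.io/elb.pass-through: "true"   # assumed key: in-cluster access also goes through the ELB
   spec:
     type: LoadBalancer
     selector:
       app: nginx
     ports:
       - port: 80
         targetPort: 80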

Notes and Constraints
---------------------
Constraints
-----------

- After passthrough networking is configured for a dedicated load balancer, containers on the node where the workload runs cannot be accessed through the Service.
- Passthrough networking is not supported for clusters of v1.15 or earlier.

@ -491,7 +491,8 @@ Data Structure

| eip_type | Yes for public network load balancers | String | EIP type. |
| | | | |
| | | | - **5_bgp**: dynamic BGP |
| | | | - **5_sbgp**: static BGP |
| | | | |
| | | | The specific type varies with regions. For details, see the EIP console. |
+----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| available_zone | Yes | Array of strings | AZ where the load balancer is located. |
| | | | |

@ -28,7 +28,7 @@ Constraints

Creating a NodePort Service
---------------------------

#. Log in to the CCE console and click the cluster name to access the cluster.
#. Log in to the CCE console and click the cluster name to access the cluster console.
#. Choose **Networking** in the navigation pane and click **Create Service** in the upper right corner.
#. Set node access parameters.
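
A minimal NodePort manifest corresponding to these settings might look like the following (the name, selector, and ports are placeholders; nodePort is optional and must fall in the cluster's node port range, typically 30000-32767):

.. code-block::

   apiVersion: v1
   kind: Service
   metadata:
     name: nginx-nodeport
   spec:
     type: NodePort
     selector:
       app: nginx
     ports:
       - port: 80          # Service port inside the cluster
         targetPort: 80    # container port
         nodePort: 30080   # omit to let Kubernetes pick a free port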
@ -20,7 +20,7 @@ Procedure

#. Log in to the CCE console.

#. Click the cluster name to access the cluster console. Choose **Nodes** in the navigation pane on the left and click the **Node Pools** tab on the right.
#. Click the cluster name to access the cluster console. Choose **Nodes** in the navigation pane and click the **Node Pools** tab on the right.

#. In the upper right corner of the page, click **Create Node Pool**.

@ -43,7 +43,7 @@ Procedure

| | |
| | - **Maximum Nodes** and **Minimum Nodes**: You can set the maximum and minimum number of nodes to ensure that the number of nodes to be scaled is within a proper range. |
| | |
| | - **Priority**: Set this parameter based on service requirements. A larger value indicates a higher priority. For example, if this parameter is set to **1** and **4** respectively for node pools A and B, B has a higher priority than A. If the priorities of multiple node pools are set to the same value, for example, **2**, the node pools are not prioritized and the system performs scaling based on the minimum resource waste principle. |
| | - **Priority**: Set this parameter based on service requirements. A larger value indicates a higher priority. For example, if this parameter is set to **1** and **4** respectively for node pools A and B, B has a higher priority than A. If the priorities of multiple node pools are set to the same value, these node pools are not prioritized and they will be scaled out by following the rule of maximizing resource utilization. |
| | |
| | .. note:: |
| | |
@ -92,7 +92,7 @@ Procedure

| | |
| | - ECS (VM): Containers run on ECSs. Only the ECSs that can be bound with multiple NICs are supported. |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Container engine | CCE clusters support Docker and containerd in some scenarios. |
| Container Engine | CCE clusters support Docker and containerd in some scenarios. |
| | |
| | - VPC network clusters of v1.23 and later versions support containerd. Tunnel network clusters of v1.23.2-r0 and later versions support containerd. |
| | - For a CCE Turbo cluster, both **Docker** and **containerd** are supported. For details, see :ref:`Mapping between Node OSs and Container Engines <cce_10_0462__section159298451879>`. |
@ -105,7 +105,7 @@ Procedure

| | |
| | **Private image**: You can use private images. |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Login mode | - **Key Pair** |
| Login Mode | - **Key Pair** |
| | |
| | Select the key pair used to log in to the node. You can select a shared key. |
| | |

@ -44,8 +44,8 @@ Description of DefaultPool

DefaultPool is not a real node pool. It only **classifies** nodes that are not in the user-created node pools. These nodes are directly created on the console or by calling APIs. DefaultPool does not support any user-created node pool functions, including scaling and parameter configuration. DefaultPool cannot be edited, deleted, expanded, or auto scaled, and nodes in it cannot be migrated.

Applicable Scenarios
--------------------
Application Scenarios
---------------------

When a large-scale cluster is required, you are advised to use node pools to manage nodes.

@ -14,6 +14,7 @@ Constraints

-----------

- VM nodes that are being used by CCE do not support deletion on the ECS page.
- Deleting a node will cause PVC/PV data loss for the :ref:`local PVs <cce_10_0391>` associated with the node. These PVCs and PVs cannot be restored or used again. In this scenario, the pod that uses the local PV is evicted from the node. A new pod is created and stays in the pending state. This is because the PVC used by the pod has a node label, due to which the pod cannot be scheduled.

Precautions
-----------
@ -26,7 +27,7 @@ Precautions

Procedure
---------

#. Log in to the CCE console and click the cluster name to access the cluster.
#. Log in to the CCE console and click the cluster name to access the cluster console.
#. In the navigation pane, choose **Nodes**. In the same row as the node you will delete, choose **More** > **Delete**.
#. In the **Delete Node** dialog box, click **Yes**.

@ -21,6 +21,7 @@ Constraints

- A CCE node can be removed only when it is in the **Active**, **Abnormal**, or **Error** status.
- A CCE node in the **Active** status can have its OS re-installed and CCE components cleared after it is removed.
- If the OS fails to be re-installed after the node is removed, manually re-install the OS. After the re-installation, log in to the node and run the clearance script to clear CCE components. For details, see :ref:`Handling Failed OS Reinstallation <cce_10_0338__section149069481111>`.
- Removing a node will cause PVC/PV data loss for the :ref:`local PV <cce_10_0391>` associated with the node. These PVCs and PVs cannot be restored or used again. In this scenario, the pod that uses the local PV is evicted from the node. A new pod is created and stays in the pending state. This is because the PVC used by the pod has a node label, due to which the pod cannot be scheduled.

Precautions
-----------

@ -27,6 +27,7 @@ Precautions

- The IP addresses of the workload pods on the node will change, but the container network access is not affected.
- Ensure that there is remaining EVS disk quota.
- While the node is being deleted, the backend will set the node to the unschedulable state.
- Resetting a node will cause PVC/PV data loss for the :ref:`local PV <cce_10_0391>` associated with the node. These PVCs and PVs cannot be restored or used again. In this scenario, the pod that uses the local PV is evicted from the reset node. A new pod is created and stays in the pending state. This is because the PVC used by the pod has a node label, due to which the pod cannot be scheduled. After the node is reset, the pod may be scheduled to the reset node. In this case, the pod is always in the creating state because the underlying logical volume corresponding to the PVC does not exist.

Procedure
---------

@ -174,7 +174,7 @@ The following shows how to use a hostPath volume. Compared with emptyDir, the ty

   logs:
     rotate: Hourly
     annotations:
       pathPattern: '**'
       format: ''
   volumes:
   - hostPath:
@ -185,9 +185,9 @@ The following shows how to use a hostPath volume. Compared with emptyDir, the ty

.. table:: **Table 2** Parameter description

+--------------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+-------------------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Parameter | Function | Description |
+================================+=========================+=======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+
+=====================================+=========================+=======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+
| extendPathMode | Extended host path | Extended host paths contain pod IDs or container names to distinguish different containers into which the host path is mounted. |
| | | |
| | | A level-3 directory is added to the original volume directory/subdirectory. You can easily obtain the files output by a single Pod. |
@ -197,7 +197,7 @@ The following shows how to use a hostPath volume. Compared with emptyDir, the ty

| | | - **PodName**: name of a pod. |
| | | - **PodUID/ContainerName**: ID of a pod or name of a container. |
| | | - **PodName/ContainerName**: name of a pod or container. |
+--------------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+-------------------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| policy.logs.rotate | Log dump | Log dump refers to rotating log files on a local host. |
| | | |
| | | - **Enabled**: AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped immediately. A new **.zip** file is generated in the directory where the log file locates. For a log file, AOM stores only the latest 20 **.zip** files. When the number of **.zip** files exceeds 20, earlier **.zip** files will be deleted. After the dump is complete, the log file in AOM will be cleared. |
@ -208,7 +208,20 @@ The following shows how to use a hostPath volume. Compared with emptyDir, the ty

| | | - AOM rotates log files using copytruncate. Before enabling log dumping, ensure that log files are written in the append mode. Otherwise, file holes may occur. |
| | | - Currently, mainstream log components such as Log4j and Logback support log file rotation. If you have set rotation for log files, skip the configuration. Otherwise, conflicts may occur. |
| | | - You are advised to configure log file rotation for your own services to flexibly control the size and number of rolled files. |
+--------------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+-------------------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| policy.logs.annotations.pathPattern | Collection path | A collection path narrows down the scope of collection to specified logs. |
| | | |
| | | - If no collection path is specified, log files in **.log**, **.trace**, and **.out** formats will be collected from the specified path. |
| | | - **/Path/**/** indicates that all log files in **.log**, **.trace**, and **.out** formats will be recursively collected from the specified path and all subdirectories at 5 levels deep. |
| | | - \* in log file names indicates a fuzzy match. |
| | | |
| | | Example: The collection path **/tmp/**/test*.log** indicates that all **.log** files prefixed with **test** will be collected from **/tmp** and subdirectories at 5 levels deep. |
| | | |
| | | .. caution:: |
| | | |
| | | Ensure that the ICAgent version is 5.12.22 or later. |
+-------------------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| policy.logs.annotations.format | Multi-line log matching | Some program logs (for example, Java program logs) contain a log message that occupies multiple lines. By default, the log collection system collects logs line by line. To display a multi-line log as a single message, enable the multi-line log function and use the log time or regular pattern mode. When a line matches the preset time format or regular expression, it is treated as the start of a new log message, and all lines before the next matching line belong to that message. |
| | | |
| | | The format is as follows: |
@ -226,7 +239,7 @@ The following shows how to use a hostPath volume. Compared with emptyDir, the ty

| | | |
| | | - **time**: log time. Enter a time wildcard. For example, if the time in the log is 2017-01-01 23:59:59, the wildcard is YYYY-MM-DD hh:mm:ss. |
| | | - **regular**: regular pattern. Enter a regular expression. |
+--------------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+-------------------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Viewing Logs
------------

@ -126,8 +126,7 @@ Prerequisites

     namespace: default
     annotations:
       everest.io/disk-volume-type: SAS          # EVS disk type.
       everest.io/crypt-key-id: <your_key_id>    # (Optional) Encryption key ID. Mandatory for an encrypted disk.
     labels:
       failure-domain.beta.kubernetes.io/region: <your_region>    # Region of the node where the application is to be deployed.
       failure-domain.beta.kubernetes.io/zone: <your_zone>        # AZ of the node where the application is to be deployed.
   spec:
@ -14,7 +14,7 @@ Precautions

- The snapshot function is available **only for clusters of v1.15 or later** and requires the CSI-based everest add-on.
- The subtype (common I/O, high I/O, or ultra-high I/O), disk mode (SCSI or VBD), data encryption, sharing status, and capacity of an EVS disk created from a snapshot must be the same as those of the disk associated with the snapshot. These attributes cannot be modified after being queried or set.
- The disk must be available or in use. During the free trial, you can create up to 7 snapshots per disk.
- Snapshots can be created only for EVS disks that are available or in use, and a maximum of seven snapshots can be created for a single EVS disk.
- Snapshots can be created only for PVCs created using the storage class (whose name starts with csi) provided by the everest add-on. Snapshots cannot be created for PVCs created using the Flexvolume storage class whose name is ssd, sas, or sata.
- Snapshot data of encrypted disks is stored encrypted, and that of non-encrypted disks is stored non-encrypted.
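
For reference, a CSI snapshot is requested with a standard VolumeSnapshot object. A minimal sketch (the snapshot class name and PVC name below are assumptions for illustration):

.. code-block::

   apiVersion: snapshot.storage.k8s.io/v1
   kind: VolumeSnapshot
   metadata:
     name: snapshot-evs-demo
     namespace: default
   spec:
     volumeSnapshotClassName: csi-disk-snapclass   # assumed everest snapshot class name
     source:
       persistentVolumeClaimName: pvc-evs-demo     # PVC backed by a csi storage class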
@ -126,8 +126,7 @@ Constraints

     namespace: default
     annotations:
       everest.io/disk-volume-type: SAS          # EVS disk type.
       everest.io/crypt-key-id: <your_key_id>    # (Optional) Encryption key ID. Mandatory for an encrypted disk.
     labels:
       failure-domain.beta.kubernetes.io/region: <your_region>    # Region of the node where the application is to be deployed.
       failure-domain.beta.kubernetes.io/zone: <your_zone>        # AZ of the node where the application is to be deployed.
   spec:
@ -155,8 +155,7 @@ Using an Existing EVS Disk on the Console

       everest.io/disk-mode: SCSI          # Device type of the EVS disk. Only SCSI is supported.
       everest.io/disk-volume-type: SAS    # EVS disk type.
       storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner
       everest.io/crypt-key-id: <your_key_id>    # (Optional) Encryption key ID. Mandatory for an encrypted disk.
   persistentVolumeReclaimPolicy: Delete    # Reclaim policy.
   storageClassName: csi-disk               # Storage class name. The value must be csi-disk for EVS disks.

.. table:: **Table 2** Key parameters

@ -225,8 +224,7 @@ Using an Existing EVS Disk on the Console

     namespace: default
     annotations:
       everest.io/disk-volume-type: SAS          # EVS disk type.
       everest.io/crypt-key-id: <your_key_id>    # (Optional) Encryption key ID. Mandatory for an encrypted disk.
     labels:
       failure-domain.beta.kubernetes.io/region: <your_region>    # Region of the node where the application is to be deployed.
       failure-domain.beta.kubernetes.io/zone: <your_zone>        # AZ of the node where the application is to be deployed.
   spec:
@ -30,7 +30,7 @@ You can mount a path on the host to a specified container path. A hostPath volum

+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Parameter | Description |
+===================================+=============================================================================================================================================================================================================================================================================================================================================================================================================================================================+
| Storage Type | Select **HostPath**. |
| Volume Type | Select **HostPath**. |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Host Path | Path of the host to which the local volume is to be mounted, for example, **/etc/hosts**. |
| | |

@ -28,6 +28,6 @@ Constraints

-----------

- Local PVs are supported only when the cluster version is v1.21.2-r0 or later and the everest add-on version is 2.1.23 or later. Version 2.1.23 or later is recommended.
- Deleting, removing, resetting, or scaling in a node will cause the PVC/PV data of the local PV associated with the node to be lost, which cannot be restored or used again. For details, see :ref:`Removing a Node <cce_10_0338>`, :ref:`Deleting a Node <cce_10_0186>`, :ref:`Resetting a Node <cce_10_0003>`, and :ref:`Scaling In a Node <cce_10_0291>`. In these scenarios, the pod that uses the local PV is evicted from the node. A new pod will be created and stay in the pending state. This is because the PVC used by the pod has a node label, due to which the pod cannot be scheduled. After the node is reset, the pod may be scheduled to the reset node. In this case, the pod remains in the creating state because the underlying logical volume corresponding to the PVC does not exist.
- Deleting, removing, resetting, or scaling in a node will cause the PVC/PV data of the local PV associated with the node to be lost, which cannot be restored or used again. For details, see :ref:`Deleting a Node <cce_10_0186>`, :ref:`Removing a Node <cce_10_0338>`, :ref:`Resetting a Node <cce_10_0003>`, and :ref:`Scaling a Node <cce_10_0291>`. In these scenarios, the pod that uses the local PV is evicted from the node. A new pod will be created and stay in the pending state. This is because the PVC used by the pod has a node label, due to which the pod cannot be scheduled. After the node is reset, the pod may be scheduled to the reset node. In this case, the pod remains in the creating state because the underlying logical volume corresponding to the PVC does not exist.
- Do not manually delete the corresponding storage pool or detach data disks from the node. Otherwise, exceptions such as data loss may occur.
- A local PV cannot be mounted to multiple workloads or jobs at the same time.
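
For context, a workload requests a local PV through an ordinary PVC that points at the everest local storage class. A minimal sketch (the storage class name **csi-local** is an assumption to verify against your cluster):

.. code-block::

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: pvc-local-demo
     namespace: default
   spec:
     accessModes:
       - ReadWriteOnce             # local PVs cannot be shared across workloads
     resources:
       requests:
         storage: 10Gi
     storageClassName: csi-local   # assumed local-PV storage class name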
@ -16,7 +16,7 @@ Constraints

-----------

- Local PVs are supported only when the cluster version is v1.21.2-r0 or later and the everest add-on version is 2.1.23 or later. Version 2.1.23 or later is recommended.
- Deleting, removing, resetting, or scaling in a node will cause the PVC/PV data of the local PV associated with the node to be lost, which cannot be restored or used again. For details, see :ref:`Removing a Node <cce_10_0338>`, :ref:`Deleting a Node <cce_10_0186>`, :ref:`Resetting a Node <cce_10_0003>`, and :ref:`Scaling In a Node <cce_10_0291>`. In these scenarios, the pod that uses the local PV is evicted from the node. A new pod will be created and stay in the pending state. This is because the PVC used by the pod has a node label, due to which the pod cannot be scheduled. After the node is reset, the pod may be scheduled to the reset node. In this case, the pod remains in the creating state because the underlying logical volume corresponding to the PVC does not exist.
- Deleting, removing, resetting, or scaling in a node will cause the PVC/PV data of the local PV associated with the node to be lost, which cannot be restored or used again. For details, see :ref:`Deleting a Node <cce_10_0186>`, :ref:`Removing a Node <cce_10_0338>`, :ref:`Resetting a Node <cce_10_0003>`, and :ref:`Scaling a Node <cce_10_0291>`. In these scenarios, the pod that uses the local PV is evicted from the node. A new pod will be created and stay in the pending state. This is because the PVC used by the pod has a node label, due to which the pod cannot be scheduled. After the node is reset, the pod may be scheduled to the reset node. In this case, the pod remains in the creating state because the underlying logical volume corresponding to the PVC does not exist.
- Do not manually delete the corresponding storage pool or detach data disks from the node. Otherwise, exceptions such as data loss may occur.
- A local PV cannot be mounted to multiple workloads or jobs at the same time.

@ -97,6 +97,7 @@ You can use the **mountOptions** field to configure mount options in a PV. The o

         storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner
         everest.io/obs-volume-type: STANDARD
         everest.io/region: <your_region>        # Region where the OBS volume is located.
       nodePublishSecretRef:                     # Custom secret of the OBS volume.
         name: <your_secret_name>                # Custom secret name.
         namespace: <your_namespace>             # Namespace of the custom secret.
@ -15,7 +15,7 @@ The everest add-on version must be **1.2.8 or later**. The add-on identifies the

Constraints
-----------

Mount options cannot be configured for secure containers.
Mount options cannot be configured for Kata containers.

.. _cce_10_0337__section14888047833:

@ -43,7 +43,7 @@ The everest add-on in CCE presets the options described in :ref:`Table 1 <cce_10

+-------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| timeo | 600 | Waiting time before the NFS client retransmits a request. The unit is 0.1 seconds. Recommended value: **600** |
+-------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| hard/soft | Leave it blank. | Mounting mode. |
| hard/soft | Leave it blank. | Mount mode. |
| | | |
| | | - **hard**: If the NFS request times out, the client keeps resending the request until the request is successful. |
| | | - **soft**: If the NFS request times out, the client returns an error to the invoking program. |
@ -53,14 +53,14 @@ The everest add-on in CCE presets the options described in :ref:`Table 1 <cce_10

You can set other mount options if needed. For details, see `Mounting an NFS File System to ECSs (Linux) <https://docs.otc.t-systems.com/en-us/usermanual/sfs/en-us_topic_0034428728.html>`__.

Setting Mount Options in a PV
-----------------------------
Configuring Mount Options in a PV
---------------------------------

You can use the **mountOptions** field to set mount options in a PV. The options you can configure in **mountOptions** are listed in :ref:`SFS Volume Mount Options <cce_10_0337__section14888047833>`.
You can use the **mountOptions** field to configure mount options in a PV. The options you can configure in **mountOptions** are listed in :ref:`SFS Volume Mount Options <cce_10_0337__section14888047833>`.

#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl <cce_10_0107>`.

#. Set mount options in a PV. Example:
#. Configure mount options in a PV. Example:

   .. code-block::
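
      # Illustrative PV excerpt only: the driver, class name, and volume ID are
      # assumptions for the sketch; the mount options follow the table above.
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv-sfs-example
      spec:
        accessModes:
          - ReadWriteMany
        capacity:
          storage: 10Gi
        mountOptions:
          - vers=3          # NFS protocol version
          - timeo=600       # retransmission timeout, in units of 0.1 s
          - hard            # keep retrying when an NFS request times out
        csi:
          driver: nas.csi.everest.io        # assumed everest SFS driver name
          volumeHandle: <your_volume_id>    # ID of the SFS file system
        storageClassName: csi-nas           # assumed SFS storage class name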
@ -124,7 +124,7 @@ You can use the **mountOptions** field to set mount options in a PV. The options

Setting Mount Options in a StorageClass
---------------------------------------

You can use the **mountOptions** field to set mount options in a StorageClass. The options you can configure in **mountOptions** are listed in :ref:`SFS Volume Mount Options <cce_10_0337__section14888047833>`.
You can use the **mountOptions** field to configure mount options in a StorageClass. The options you can configure in **mountOptions** are listed in :ref:`SFS Volume Mount Options <cce_10_0337__section14888047833>`.

#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl <cce_10_0107>`.

@ -92,10 +92,7 @@ Automatically Creating an SFS File System on the Console

     name: pvc-sfs-auto
     namespace: default
     annotations:
       everest.io/crypt-key-id: <your_key_id>          # (Optional) ID of the key for encrypting file systems.
       everest.io/crypt-alias: sfs/default             # (Optional) Key name. Mandatory for encrypting volumes.
       everest.io/crypt-domain-id: <your_domain_id>    # (Optional) ID of the tenant to which an encrypted volume belongs. Mandatory for encrypting volumes.
   spec:
     accessModes:
       - ReadWriteMany    # The value must be ReadWriteMany for SFS.
     resources:
@ -54,7 +54,7 @@ You can use the **mountOptions** field to configure mount options in a PV. The o

#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl <cce_10_0107>`.

#. Set mount options in a PV. Example:
#. Configure mount options in a PV. Example:

   .. code-block::