Update content

OpenTelekomCloud Proposal Bot 2022-10-31 11:23:46 +00:00
parent 4a35211c44
commit 291c194b26
78 changed files with 98 additions and 217 deletions


@ -184,7 +184,6 @@ DNS policies can be set on a per-pod basis. Currently, Kubernetes supports four
.. figure:: /_static/images/en-us_image_0186273271.png
:alt: **Figure 1** Routing
**Figure 1** Routing
Upgrading the Add-on


@ -77,7 +77,6 @@ Obtaining the Driver Link from Public Network
.. figure:: /_static/images/en-us_image_0000001280466745.png
:alt: **Figure 1** Setting parameters
**Figure 1** Setting parameters
5. After confirming the driver information, click **SEARCH**. A page is displayed, showing the driver information, as shown in :ref:`Figure 2 <cce_01_0141__fig7873421145213>`. Click **DOWNLOAD**.
@ -87,7 +86,6 @@ Obtaining the Driver Link from Public Network
.. figure:: /_static/images/en-us_image_0181616313.png
:alt: **Figure 2** Driver information
**Figure 2** Driver information
6. Obtain the driver link in either of the following ways:
@ -101,7 +99,6 @@ Obtaining the Driver Link from Public Network
.. figure:: /_static/images/en-us_image_0181616314.png
:alt: **Figure 3** Obtaining the link
**Figure 3** Obtaining the link
Uninstalling the Add-on


@ -51,7 +51,6 @@ Using the CCE Console
.. figure:: /_static/images/en-us_image_0000001190658439.png
:alt: **Figure 1** Node affinity scheduling policy
**Figure 1** Node affinity scheduling policy
Using kubectl


@ -59,7 +59,6 @@ Workload affinity determines the pods as which the target workload will be deplo
.. figure:: /_static/images/en-us_image_0000001144578756.png
:alt: **Figure 1** Pod affinity scheduling policy
**Figure 1** Pod affinity scheduling policy
Using kubectl


@ -59,7 +59,6 @@ Workload anti-affinity determines the pods from which the target workload will b
.. figure:: /_static/images/en-us_image_0000001144738550.png
:alt: **Figure 1** Pod anti-affinity scheduling policy
**Figure 1** Pod anti-affinity scheduling policy
Using kubectl


@ -44,7 +44,6 @@ A simple scheduling policy allows you to configure affinity between workloads an
.. figure:: /_static/images/en-us_image_0165899095.png
:alt: **Figure 1** Affinity between workloads
**Figure 1** Affinity between workloads
- **Anti-affinity between workloads**: For details, see :ref:`Workload-Workload Anti-Affinity <cce_01_0227>`. Constraining multiple instances of the same workload from being deployed on the same node reduces the impact of system breakdowns. Anti-affinity deployment is also recommended for workloads that may interfere with each other.
@ -56,7 +55,6 @@ A simple scheduling policy allows you to configure affinity between workloads an
.. figure:: /_static/images/en-us_image_0165899282.png
:alt: **Figure 2** Anti-affinity between workloads
**Figure 2** Anti-affinity between workloads
.. important::


@ -42,7 +42,6 @@ autoscaler Architecture
.. figure:: /_static/images/en-us_image_0000001199848585.png
:alt: **Figure 1** autoscaler architecture
**Figure 1** autoscaler architecture
**Description**


@ -44,9 +44,9 @@ HPA can work with Metrics Server to implement auto scaling based on the CPU and
Use the formula: ratio = currentMetricValue/desiredMetricValue
- When \|ratio 1.0\| ≤ tolerance, scaling will not be performed.
+ When \|ratio - 1.0\| <= tolerance, scaling will not be performed.
- When \|ratio 1.0\| > tolerance, the desired value is calculated using the formula mentioned above.
+ When \|ratio - 1.0\| > tolerance, the desired value is calculated using the formula mentioned above.
The default value is 0.1 in the current community version.
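As a sketch (not the actual HPA controller source), the decision rule above can be written in Python. The replica calculation `desiredReplicas = ceil(currentReplicas x ratio)` is the standard community formula, and the 0.1 tolerance is the default noted above:

```python
import math

def desired_replicas(current_replicas, current_metric, desired_metric, tolerance=0.1):
    """Sketch of the HPA rule: no scaling while the metric ratio stays
    within the tolerance band around 1.0; otherwise scale by the ratio."""
    ratio = current_metric / desired_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # within tolerance: scaling is not performed
    return math.ceil(current_replicas * ratio)
```

For example, with 2 replicas and current CPU usage of 52% against a 50% target, the ratio is 1.04, which is within the default tolerance, so no scaling occurs; usage of 100% against the same target doubles the replicas to 4.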


@ -45,7 +45,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001144779790.png
:alt: **Figure 1** Expanding trace details
**Figure 1** Expanding trace details
#. Click **View Trace** in the **Operation** column. The trace details are displayed.
@ -54,7 +53,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001144620002.png
:alt: **Figure 2** Viewing event details
**Figure 2** Viewing event details
.. |image1| image:: /_static/images/en-us_image_0144054048.gif


@ -22,7 +22,6 @@ The following figure shows the architecture of a Kubernetes cluster.
.. figure:: /_static/images/en-us_image_0267028603.png
:alt: **Figure 1** Kubernetes cluster architecture
**Figure 1** Kubernetes cluster architecture
**Master node**
@ -127,5 +126,4 @@ Cluster Lifecycle
.. figure:: /_static/images/en-us_image_0000001160731158.png
:alt: **Figure 2** Cluster status transition
**Figure 2** Cluster status transition


@ -36,7 +36,7 @@ This parameter affects the maximum number of pods that can be created on a node.
|image1|
- By default, a node occupies three container IP addresses (network address, gateway address, and broadcast address). Therefore, the number of container IP addresses that can be allocated to a node equals the number of selected container IP addresses minus 3. For example, in the preceding figure, **the number of container IP addresses that can be allocated to a node is 125 (128 3)**.
+ By default, a node occupies three container IP addresses (network address, gateway address, and broadcast address). Therefore, the number of container IP addresses that can be allocated to a node equals the number of selected container IP addresses minus 3. For example, in the preceding figure, **the number of container IP addresses that can be allocated to a node is 125 (128 - 3)**.
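The arithmetic above amounts to a one-line helper (illustrative only; the 3 subtracted addresses are the network, gateway, and broadcast addresses the node occupies by default):

```python
def allocatable_container_ips(selected_ips):
    """Container IPs usable on a node = selected IPs minus the 3 occupied
    addresses (network, gateway, broadcast)."""
    return selected_ips - 3
```

Selecting 128 container IP addresses therefore leaves 125 for allocation, matching the figure.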
.. _cce_01_0348__section16296174054019:


@ -85,7 +85,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001144208440.png
:alt: **Figure 1** Obtaining the access address
**Figure 1** Obtaining the access address
In addition, the **X-Remote-Group** header field, that is, the user group name, is supported. During role binding, a role can be bound to a group and carry user group information when you access the cluster.


@ -27,7 +27,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001150420952.png
:alt: **Figure 1** Creating a CCE Turbo cluster
**Figure 1** Creating a CCE Turbo cluster
#. On the page displayed, set the following parameters:


@ -38,7 +38,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001190168507.png
:alt: **Figure 1** Deleting a cluster
**Figure 1** Deleting a cluster
#. Click **Yes** to start deleting the cluster.


@ -23,7 +23,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001190859184.png
:alt: **Figure 1** Downloading a certificate
**Figure 1** Downloading a certificate
.. important::


@ -35,7 +35,7 @@ Automatic Cluster Scale-out
+-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Maximum Nodes | Maximum number of nodes to which the cluster can scale out. |
| | |
- |                                   | 1 ≤ Maximum Nodes < cluster node quota                                                                                                                                                                                                   |
+ |                                   | 1 <= Maximum Nodes < cluster node quota                                                                                                                                                                                                  |
| | |
| | .. note:: |
| | |


@ -21,7 +21,6 @@ Choose **Resource Management** > **Clusters** and check whether there is an upgr
.. figure:: /_static/images/en-us_image_0000001190048341.png
:alt: **Figure 1** Cluster with the upgrade flag
**Figure 1** Cluster with the upgrade flag
Cluster Upgrade


@ -40,7 +40,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other
.. figure:: /_static/images/en-us_image_0000001229793402.png
:alt: **Figure 1** Upgrading a cluster
**Figure 1** Upgrading a cluster
.. note::
@ -56,7 +55,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other
.. figure:: /_static/images/en-us_image_0000001280171657.png
:alt: **Figure 2** Determining whether to back up the entire master node
**Figure 2** Determining whether to back up the entire master node
#. Check the version information, last update/upgrade time, available upgrade version, and upgrade history of the current cluster.
@ -67,7 +65,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other
.. figure:: /_static/images/en-us_image_0000001274316069.png
:alt: **Figure 3** Cluster upgrade page
**Figure 3** Cluster upgrade page
#. Click **Upgrade** on the right. Set the upgrade parameters.
@ -89,7 +86,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other
.. figure:: /_static/images/en-us_image_0000001229794946.png
:alt: **Figure 4** Configuring upgrade parameters
**Figure 4** Configuring upgrade parameters
#. Read the upgrade instructions carefully, and select **I have read the upgrade instructions**. Click **Upgrade**.
@ -98,7 +94,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other
.. figure:: /_static/images/en-us_image_0000001280421317.png
:alt: **Figure 5** Final step before upgrade
**Figure 5** Final step before upgrade
#. After you click **Upgrade**, the cluster upgrade starts. You can view the upgrade process in the lower part of the page.
@ -109,7 +104,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other
.. figure:: /_static/images/en-us_image_0000001280181541.png
:alt: **Figure 6** Cluster upgrade in process
**Figure 6** Cluster upgrade in process
#. When the upgrade progress reaches 100%, the cluster is upgraded. The version information will be properly displayed, and no upgrade is required.
@ -118,7 +112,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other
.. figure:: /_static/images/en-us_image_0000001236582394.png
:alt: **Figure 7** Upgrade completed
**Figure 7** Upgrade completed
#. After the upgrade is complete, verify the cluster Kubernetes version on the **Clusters** page.
@ -127,7 +120,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other
.. figure:: /_static/images/en-us_image_0000001236263298.png
:alt: **Figure 8** Verifying the upgrade success
**Figure 8** Verifying the upgrade success
.. |image1| image:: /_static/images/en-us_image_0000001159118361.png


@ -45,7 +45,6 @@ On the `Kubernetes release <https://github.com/kubernetes/kubernetes/blob/master
.. figure:: /_static/images/en-us_image_0000001283755568.png
:alt: **Figure 1** Downloading kubectl
**Figure 1** Downloading kubectl
**Installing and configuring kubectl**
@ -83,7 +82,6 @@ Currently, CCE supports two-way authentication for domain names.
.. figure:: /_static/images/en-us_image_0000001243407853.png
:alt: **Figure 2** Two-way authentication disabled for domain names
**Figure 2** Two-way authentication disabled for domain names
Common Issue (Error from server Forbidden)


@ -16,7 +16,6 @@ Complete the following tasks to get started with CCE.
.. figure:: /_static/images/en-us_image_0000001178352608.png
:alt: **Figure 1** Procedure for getting started with CCE
**Figure 1** Procedure for getting started with CCE
#. **Authorize an IAM user to use CCE.**


@ -20,7 +20,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001177874150.png
:alt: **Figure 1** Cluster specifications in CCE 1.0
**Figure 1** Cluster specifications in CCE 1.0
.. table:: **Table 1** Parameters for creating a cluster
@ -39,7 +38,7 @@ Procedure
+---------------------------+------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| \*VPC | VPCs created in CCE 1.0 can be used in CCE 2.0. | VPC where the new cluster is located. |
| | | |
- |                           |                                                                                          | If no VPCs are available, click **Create aVPC**.                                                                                                                                                                                                                                                    |
+ |                           |                                                                                          | If no VPCs are available, click **Create a VPC**.                                                                                                                                                                                                                                                   |
+---------------------------+------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| \*Subnet | Subnets created in CCE 1.0 can be used in CCE 2.0. | Subnet in which the cluster will run. |
+---------------------------+------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+


@ -26,7 +26,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001178352594.png
:alt: **Figure 1** Generate the Docker login command
**Figure 1** Generate the Docker login command
#. Log in to the CCE 1.0 console, and obtain the docker login configuration file **dockercfg.json**.
@ -35,7 +34,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001223473833.png
:alt: **Figure 2** Obtain the docker login configuration file
**Figure 2** Obtain the docker login configuration file
#. Log in to the Docker client as user **root**, and copy the **dockercfg.json** file obtained in Step 2 and the image migration tool to the **/root** directory.
@ -58,5 +56,4 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001223393885.png
:alt: **Figure 3** Migrate the image
**Figure 3** Migrate the image


@ -37,7 +37,6 @@ Setting the Path for Storing Container Logs
.. figure:: /_static/images/en-us_image_0000001190538599.png
:alt: **Figure 1** Adding a log policy
**Figure 1** Adding a log policy
#. Set **Storage Type** to **Host Path** or **Container Path**.


@ -62,7 +62,7 @@ The cluster monitoring page displays the monitoring status of cluster resources,
.. note::
- Allocatable node resources (CPU or memory) = Total amount Reserved amount Eviction thresholds. For details, see :ref:`Formula for Calculating the Reserved Resources of a Node <cce_01_0178>`.
+ Allocatable node resources (CPU or memory) = Total amount - Reserved amount - Eviction thresholds. For details, see :ref:`Formula for Calculating the Reserved Resources of a Node <cce_01_0178>`.
On the cluster monitoring page, you can also view monitoring data of nodes, workloads, and pods. You can click |image3| to view the detailed data.
@ -82,8 +82,8 @@ The node list page also displays the data about the allocable resources of the n
The calculation formulas are as follows:
- - Allocatable CPU = Total CPU Requested CPU of all pods Reserved CPU for other resources
- - Allocatable memory = Total memory Requested memory of all pods Reserved memory for other resources
+ - Allocatable CPU = Total CPU - Requested CPU of all pods - Reserved CPU for other resources
+ - Allocatable memory = Total memory - Requested memory of all pods - Reserved memory for other resources
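The two formulas above can be sketched as a small helper (an illustration of the arithmetic only, not CCE console code; the figures in the example are hypothetical):

```python
def allocatable(total, requested_by_pods, reserved_for_other):
    """Allocatable resource = total - requested by all pods - reserved for
    other resources (the same formula applies to CPU and memory)."""
    return total - requested_by_pods - reserved_for_other

# Hypothetical 8-vCPU node: pods request 3.5 vCPUs, 0.5 vCPU is reserved,
# so 4.0 vCPUs remain allocatable.
cpu_allocatable = allocatable(8.0, 3.5, 0.5)
```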
Viewing Workload Monitoring Data
--------------------------------


@ -36,7 +36,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001144779784.png
:alt: **Figure 1** Namespace-level network policy
**Figure 1** Namespace-level network policy
Network Isolation Description


@ -32,7 +32,6 @@ Isolating Namespaces
.. figure:: /_static/images/en-us_image_0000001098645539.png
:alt: **Figure 1** One namespace for one environment
**Figure 1** One namespace for one environment
- **Isolating namespaces by application**
@ -43,7 +42,6 @@ Isolating Namespaces
.. figure:: /_static/images/en-us_image_0000001098403383.png
:alt: **Figure 2** Grouping workloads into different namespaces
**Figure 2** Grouping workloads into different namespaces
Deleting a Namespace


@ -14,7 +14,6 @@ Developed by CCE, Cloud Native Network 2.0 deeply integrates Elastic Network Int
.. figure:: /_static/images/en-us_image_0000001231949185.png
:alt: **Figure 1** Cloud Native Network 2.0
**Figure 1** Cloud Native Network 2.0
**Pod-to-pod communication**
@ -55,7 +54,6 @@ In the Cloud Native Network 2.0 model, BMS nodes use ENIs and ECS nodes use sub-
.. figure:: /_static/images/en-us_image_0000001172076961.png
:alt: **Figure 2** IP address management in Cloud Native Network 2.0
**Figure 2** IP address management in Cloud Native Network 2.0
- Pod IP addresses are allocated from **Pod Subnet** you configure from the VPC.
@ -86,7 +84,6 @@ In addition, a subnet can be added to the container CIDR block after a cluster i
.. figure:: /_static/images/en-us_image_0000001159831938.png
:alt: **Figure 3** Configuring CIDR blocks
**Figure 3** Configuring CIDR blocks
Example of Cloud Native Network 2.0 Access
@ -98,7 +95,6 @@ Create a CCE Turbo cluster, which contains three ECS nodes.
.. figure:: /_static/images/en-us_image_0000001198867835.png
:alt: **Figure 4** Cluster network
**Figure 4** Cluster network
Access the details page of one node. You can see that the node has one primary NIC and one extended NIC, and both of them are ENIs. The extended NIC belongs to the container CIDR block and is used to mount a sub-ENI to the pod.


@ -14,7 +14,6 @@ The container tunnel network is constructed on but independent of the node netwo
.. figure:: /_static/images/en-us_image_0000001145535931.png
:alt: **Figure 1** Container tunnel network
**Figure 1** Container tunnel network
**Pod-to-pod communication**
@ -61,12 +60,11 @@ The container tunnel network allocates container IP addresses according to the f
.. figure:: /_static/images/en-us_image_0000001198861255.png
:alt: **Figure 2** IP address allocation of the container tunnel network
**Figure 2** IP address allocation of the container tunnel network
Maximum number of nodes that can be created in the cluster using the container tunnel network = Number of IP addresses in the container CIDR block / Size of the IP CIDR block allocated to the node by the container CIDR block at a time (16 by default)
- For example, if the container CIDR block is 172.16.0.0/16, the number of IP addresses is 65536. If 16 IP addresses are allocated to a node at a time, a maximum of 4096 (65536/16) nodes can be created in the cluster. This is an extreme case. If 4096 nodes are created, a maximum of 16 pods can be created for each node because only 16 IP CIDR block\s are allocated to each node. In addition, the number of nodes that can be created in a cluster also depends on the node network and cluster scale.
+ For example, if the container CIDR block is 172.16.0.0/16, the number of IP addresses is 65536. If 16 IP addresses are allocated to a node at a time, a maximum of 4096 (65536/16) nodes can be created in the cluster. This is an extreme case. If 4096 nodes are created, a maximum of 16 pods can be created for each node because only 16 IP CIDR block\\s are allocated to each node. In addition, the number of nodes that can be created in a cluster also depends on the node network and cluster scale.
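The node-count calculation for the container tunnel network can be sketched as follows (illustrative; `block_size` is the per-allocation chunk of 16 addresses described above):

```python
def max_tunnel_nodes(cidr_prefix_len, block_size=16):
    """Maximum nodes = IP addresses in the container CIDR block divided by
    the IP block allocated to a node at a time (16 by default)."""
    total_ips = 2 ** (32 - cidr_prefix_len)
    return total_ips // block_size
```

A /16 container CIDR block (65536 addresses) thus caps the cluster at 4096 nodes in the extreme case discussed above.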
Recommendation for CIDR Block Planning
--------------------------------------


@ -14,7 +14,6 @@ The VPC network uses VPC routing to integrate with the underlying network. This
.. figure:: /_static/images/en-us_image_0000001116237931.png
:alt: **Figure 1** VPC network model
**Figure 1** VPC network model
**Pod-to-pod communication**
@ -58,7 +57,6 @@ The VPC network allocates container IP addresses according to the following rule
.. figure:: /_static/images/en-us_image_0000001153101092.png
:alt: **Figure 2** IP address management of the VPC network
**Figure 2** IP address management of the VPC network
Maximum number of nodes that can be created in the cluster using the VPC network = Number of IP addresses in the container CIDR block / Number of IP addresses in the CIDR block allocated to the node by the container CIDR block
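This division can be sketched in Python (illustrative only; the /16 container CIDR and the /25 per-node block in the example are hypothetical values):

```python
def max_vpc_nodes(container_prefix_len, node_block_prefix_len):
    """Maximum nodes = IPs in the container CIDR block divided by the IPs
    in the CIDR block allocated to each node."""
    return 2 ** (32 - container_prefix_len) // 2 ** (32 - node_block_prefix_len)

# Hypothetical: a /16 container CIDR (65536 IPs) with a /25 block per node
# (128 IPs) allows at most 512 nodes.
limit = max_vpc_nodes(16, 25)
```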
@ -91,7 +89,6 @@ Create a cluster using the VPC network model.
.. figure:: /_static/images/en-us_image_0000001198980979.png
:alt: **Figure 3** Cluster network
**Figure 3** Cluster network
The cluster contains one node.


@ -17,7 +17,6 @@ An ingress is an independent resource in the Kubernetes cluster and defines rule
.. figure:: /_static/images/en-us_image_0000001238003081.png
:alt: **Figure 1** Ingress diagram
**Figure 1** Ingress diagram
The following describes the ingress-related definitions:
@ -41,5 +40,4 @@ ELB Ingress Controller is deployed on the master node and bound to the load bala
.. figure:: /_static/images/en-us_image_0000001192723190.png
:alt: **Figure 2** Working principle of ELB Ingress Controller
**Figure 2** Working principle of ELB Ingress Controller


@ -135,7 +135,6 @@ This section uses an Nginx workload as an example to describe how to add an ELB
.. figure:: /_static/images/en-us_image_0000001192723194.png
:alt: **Figure 1** Accessing the /healthz interface of defaultbackend
**Figure 1** Accessing the /healthz interface of defaultbackend
Updating an Ingress


@ -64,7 +64,6 @@ Using Ingress Rules
.. figure:: /_static/images/en-us_image_0259557735.png
:alt: **Figure 1** podSelector
**Figure 1** podSelector
- **Using namespaceSelector to specify the access scope**
@ -95,7 +94,6 @@ Using Ingress Rules
.. figure:: /_static/images/en-us_image_0259558489.png
:alt: **Figure 2** namespaceSelector
**Figure 2** namespaceSelector
Using Egress Rules
@ -133,7 +131,6 @@ Diagram:
.. figure:: /_static/images/en-us_image_0000001340138373.png
:alt: **Figure 3** ipBlock
**Figure 3** ipBlock
You can define ingress and egress in the same rule.
@ -172,7 +169,6 @@ Diagram:
.. figure:: /_static/images/en-us_image_0000001287883210.png
:alt: **Figure 4** Using both ingress and egress
**Figure 4** Using both ingress and egress
Adding a Network Policy on the Console


@ -50,7 +50,6 @@ A Service is used for pod access. With a fixed IP address, a Service forwards ac
.. figure:: /_static/images/en-us_image_0258889981.png
:alt: **Figure 1** Accessing pods through a Service
**Figure 1** Accessing pods through a Service
You can configure the following types of Services:
@ -73,7 +72,6 @@ Services forward requests using layer-4 TCP and UDP protocols. Ingresses forward
.. figure:: /_static/images/en-us_image_0258961458.png
:alt: **Figure 2** Ingress and Service
**Figure 2** Ingress and Service
For details about the ingress, see :ref:`Overview <cce_01_0094>`.
@ -100,7 +98,6 @@ Workload access scenarios can be categorized as follows:
.. figure:: /_static/images/en-us_image_0000001160748146.png
:alt: **Figure 3** Network access diagram
**Figure 3** Network access diagram
.. |image1| image:: /_static/images/en-us_image_0000001159292060.png


@ -77,7 +77,7 @@ You can set the Service when creating a workload on the CCE console. An Nginx wo
- **Protocol**: protocol used by the Service.
- **Container Port**: port defined in the container image and on which the workload listens. The Nginx application listens on port 80.
- - **Access Port**: port mapped to the container port at the load balancer's IP address. The workload can be accessed at <*Load balancer's IP address*>:<*Access port*>. The port number range is 165535.
+ - **Access Port**: port mapped to the container port at the load balancer's IP address. The workload can be accessed at <*Load balancer's IP address*>:<*Access port*>. The port number range is 1-65535.
#. After the configuration is complete, click **OK**.
@ -173,7 +173,6 @@ After an ENI LoadBalancer Service is created, you can view the listener forwardi
.. figure:: /_static/images/en-us_image_0000001204449561.png
:alt: **Figure 1** ELB forwarding
**Figure 1** ELB forwarding
You can find that a listener is created for the load balancer. The backend server address is the IP address of the pod, and the service port is the container port. This is because the pod uses an ENI or sub-ENI. When traffic passes through the load balancer, it directly forwards the traffic to the pod. This is the same as that described in :ref:`Scenario <cce_01_0114__section025118182286>`.


@ -19,7 +19,6 @@ The cluster-internal domain name format is *<Service name>*.\ *<Namespace of the
.. figure:: /_static/images/en-us_image_0000001117575950.png
:alt: **Figure 1** Intra-cluster access (ClusterIP)
**Figure 1** Intra-cluster access (ClusterIP)
Adding a Service When Creating a Workload
@ -35,7 +34,7 @@ You can set the access type (Service) when creating a workload on the CCE consol
- **Protocol**: protocol used by the Service.
- **Container Port**: port on which the workload listens. The Nginx application listens on port 80.
- - **Access Port**: a port mapped to the container port at the cluster-internal IP address. The workload can be accessed at <cluster-internal IP address>:<access port>. The port number range is 165535.
+ - **Access Port**: a port mapped to the container port at the cluster-internal IP address. The workload can be accessed at <cluster-internal IP address>:<access port>. The port number range is 1-65535.
#. After the configuration, click **OK** and then **Next: Configure Advanced Settings**. On the page displayed, click **Create**.
#. Click **View Deployment Details** or **View StatefulSet Details**. On the **Services** tab page, obtain the access address, for example, 10.247.74.100:8080.
@ -58,7 +57,7 @@ You can set the Service after creating a workload. This has no impact on the wor
- **Protocol**: protocol used by the Service.
- **Container Port**: port on which the workload listens. The Nginx application listens on port 80.
- - **Access Port**: port mapped to the container port at the cluster-internal IP address. The workload can be accessed at <cluster-internal IP address>:<access port>. The port number range is 165535.
+ - **Access Port**: port mapped to the container port at the cluster-internal IP address. The workload can be accessed at <cluster-internal IP address>:<access port>. The port number range is 1-65535.
#. Click **Create**. The ClusterIP Service will be added for the workload.


@ -18,7 +18,6 @@ In this access mode, requests are transmitted through an ELB load balancer to a
.. figure:: /_static/images/en-us_image_0000001163928763.png
:alt: **Figure 1** LoadBalancer
**Figure 1** LoadBalancer
Notes and Constraints
@ -98,7 +97,7 @@ You can set the Service when creating a workload on the CCE console. An Nginx wo
- **Protocol**: protocol used by the Service.
- **Container Port**: port defined in the container image and on which the workload listens. The Nginx application listens on port 80.
- - **Access Port**: port mapped to the container port at the load balancer's IP address. The workload can be accessed at <*Load balancer's IP address*>:<*Access port*>. The port number range is 165535.
+ - **Access Port**: port mapped to the container port at the load balancer's IP address. The workload can be accessed at <*Load balancer's IP address*>:<*Access port*>. The port number range is 1-65535.
#. After the configuration is complete, click **OK**.
@ -335,7 +334,6 @@ You can set the access type when creating a workload using kubectl. This section
.. figure:: /_static/images/en-us_image_0276664171.png
:alt: **Figure 2** Accessing Nginx through the LoadBalancer Service
**Figure 2** Accessing Nginx through the LoadBalancer Service
Using kubectl to Create a Service (Automatically Creating a Load Balancer)
@ -642,7 +640,6 @@ You can add a Service when creating a workload using kubectl. This section uses
.. figure:: /_static/images/en-us_image_0000001093275701.png
:alt: **Figure 3** Accessing Nginx through the LoadBalancer Service
**Figure 3** Accessing Nginx through the LoadBalancer Service
.. _cce_01_0014__section52631714117:


@ -14,7 +14,6 @@ A Service is exposed on each node's IP address at a static port (NodePort). A Cl
.. figure:: /_static/images/en-us_image_0000001163847995.png
:alt: **Figure 1** NodePort access
**Figure 1** NodePort access
Notes and Constraints
@ -54,7 +53,7 @@ You can set the access type when creating a workload on the CCE console. An Ngin
- **Access Port**: node port (with a private IP address) to which the container port will be mapped. You are advised to select **Automatically generated**.
- **Automatically generated**: The system automatically assigns a port number.
- - **Specified port**: You have to manually specify a fixed node port number in the range of 3000032767. Ensure that the port is unique in a cluster.
+ - **Specified port**: You have to manually specify a fixed node port number in the range of 30000-32767. Ensure that the port is unique in a cluster.
#. After the configuration is complete, click **OK**.
#. Click **Next: Configure Advanced Settings**. On the page displayed, click **Create**.
@ -96,7 +95,7 @@ You can set the Service after creating a workload. This has no impact on the wor
- **Access Port**: node port (with a private IP address) to which the container port will be mapped. You are advised to select **Automatically generated**.
- **Automatically generated**: The system automatically assigns a port number.
- - **Specified port**: You have to manually specify a fixed node port number in the range of 3000032767. Ensure that the port is unique in a cluster.
+ - **Specified port**: You have to manually specify a fixed node port number in the range of 30000-32767. Ensure that the port is unique in a cluster.
#. Click **Create**. A NodePort Service will be added for the workload.


@ -21,7 +21,6 @@ For example, an application uses Deployments to create the frontend and backend.
.. figure:: /_static/images/en-us_image_0258894622.png
:alt: **Figure 1** Inter-pod access
**Figure 1** Inter-pod access
Using Services for Pod Access
@ -36,7 +35,6 @@ In the preceding example, a Service is added for the frontend pod to access the
.. figure:: /_static/images/en-us_image_0258889981.png
:alt: **Figure 2** Accessing pods through a Service
**Figure 2** Accessing pods through a Service
Service Types


@ -26,7 +26,6 @@ Node Pool Architecture
.. figure:: /_static/images/en-us_image_0269288708.png
:alt: **Figure 1** Overall architecture of a node pool
**Figure 1** Overall architecture of a node pool
Generally, all nodes in a node pool have the following same attributes:


@ -106,7 +106,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0144042759.png
:alt: **Figure 1** Creating a partition
**Figure 1** Creating a partition
c. Configure the start and last sectors as follows for example:


@ -215,8 +215,8 @@ Procedure
The calculation formula is as follows:
- - Allocatable CPUs = Total CPUs Requested CPUs of all pods Reserved CPUs for other resources
- - Allocatable memory = Total memory Requested memory of all pods Reserved memory for other resources
+ - Allocatable CPUs = Total CPUs - Requested CPUs of all pods - Reserved CPUs for other resources
+ - Allocatable memory = Total memory - Requested memory of all pods - Reserved memory for other resources
.. |image1| image:: /_static/images/en-us_image_0273156799.png
.. |image2| image:: /_static/images/en-us_image_0220702939.png


@ -22,35 +22,35 @@ Total reserved amount = Reserved memory for system components + Reserved memory
.. table:: **Table 1** Reservation rules for system components
- +---------------------+-------------------------------------------------------------------------+
- | Total Memory (TM)   | Reserved Memory for System Components                                   |
- +=====================+=========================================================================+
- | TM 8 GB             | 0 MB                                                                    |
- +---------------------+-------------------------------------------------------------------------+
- | 8 GB < TM ≤ 16 GB   | [(TM 8 GB) x 1024 x 10%] MB                                             |
- +---------------------+-------------------------------------------------------------------------+
- | 16 GB < TM ≤ 128 GB | [8 GB x 1024 x 10% + (TM 16 GB) x 1024 x 6%] MB                         |
- +---------------------+-------------------------------------------------------------------------+
- | TM > 128 GB         | (8 GB x 1024 x 10% + 112 GB x 1024 x 6% + (TM 128 GB) x 1024 x 2%) MB   |
- +---------------------+-------------------------------------------------------------------------+
+ +----------------------+-------------------------------------------------------------------------+
+ | Total Memory (TM)    | Reserved Memory for System Components                                   |
+ +======================+=========================================================================+
+ | TM <= 8 GB           | 0 MB                                                                    |
+ +----------------------+-------------------------------------------------------------------------+
+ | 8 GB < TM <= 16 GB   | [(TM - 8 GB) x 1024 x 10%] MB                                           |
+ +----------------------+-------------------------------------------------------------------------+
+ | 16 GB < TM <= 128 GB | [8 GB x 1024 x 10% + (TM - 16 GB) x 1024 x 6%] MB                       |
+ +----------------------+-------------------------------------------------------------------------+
+ | TM > 128 GB          | (8 GB x 1024 x 10% + 112 GB x 1024 x 6% + (TM - 128 GB) x 1024 x 2%) MB |
+ +----------------------+-------------------------------------------------------------------------+
.. table:: **Table 2** Reservation rules for kubelet
+-------------------+--------------------------------+-------------------------------------------------+
| Total Memory (TM) | Number of Pods | Reserved Memory for kubelet |
+===================+================================+=================================================+
| TM ≤ 2 GB | - | TM x 25% |
+-------------------+--------------------------------+-------------------------------------------------+
| TM > 2 GB | 0 < Max. pods on a node ≤ 16 | 700 MB |
+-------------------+--------------------------------+-------------------------------------------------+
| | 16 < Max. pods on a node ≤ 32 | [700 + (Max. pods on a node − 16) x 18.75] MB |
+-------------------+--------------------------------+-------------------------------------------------+
| | 32 < Max. pods on a node ≤ 64 | [1024 + (Max. pods on a node − 32) x 6.25] MB |
+-------------------+--------------------------------+-------------------------------------------------+
| | 64 < Max. pods on a node ≤ 128 | [1230 + (Max. pods on a node − 64) x 7.80] MB |
+-------------------+--------------------------------+-------------------------------------------------+
| | Max. pods on a node > 128 | [1740 + (Max. pods on a node − 128) x 11.20] MB |
+-------------------+--------------------------------+-------------------------------------------------+
+-------------------+---------------------------------+-------------------------------------------------+
| Total Memory (TM) | Number of Pods | Reserved Memory for kubelet |
+===================+=================================+=================================================+
| TM <= 2 GB | - | TM x 25% |
+-------------------+---------------------------------+-------------------------------------------------+
| TM > 2 GB | 0 < Max. pods on a node <= 16 | 700 MB |
+-------------------+---------------------------------+-------------------------------------------------+
| | 16 < Max. pods on a node <= 32 | [700 + (Max. pods on a node - 16) x 18.75] MB |
+-------------------+---------------------------------+-------------------------------------------------+
| | 32 < Max. pods on a node <= 64 | [1024 + (Max. pods on a node - 32) x 6.25] MB |
+-------------------+---------------------------------+-------------------------------------------------+
| | 64 < Max. pods on a node <= 128 | [1230 + (Max. pods on a node - 64) x 7.80] MB |
+-------------------+---------------------------------+-------------------------------------------------+
| | Max. pods on a node > 128 | [1740 + (Max. pods on a node - 128) x 11.20] MB |
+-------------------+---------------------------------+-------------------------------------------------+
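The reservation rules in Table 1 and Table 2 are piecewise-linear formulas, so they translate directly into code. The sketch below is an illustration of those tables only; the function names are hypothetical, not CCE APIs, and the "TM x 25%" row is interpreted as 25% of total memory expressed in MB.

```python
# Sketch of the memory reservation rules in Table 1 and Table 2.
# tm_gb is total node memory in GB; results are in MB.
# Function names are illustrative, not CCE APIs.

def reserved_for_system(tm_gb):
    """Table 1: memory reserved for system components, in MB."""
    if tm_gb <= 8:
        return 0
    if tm_gb <= 16:
        return (tm_gb - 8) * 1024 * 0.10
    if tm_gb <= 128:
        return 8 * 1024 * 0.10 + (tm_gb - 16) * 1024 * 0.06
    return 8 * 1024 * 0.10 + 112 * 1024 * 0.06 + (tm_gb - 128) * 1024 * 0.02

def reserved_for_kubelet(tm_gb, max_pods):
    """Table 2: memory reserved for kubelet, in MB."""
    if tm_gb <= 2:
        # "TM x 25%", read here as 25% of total memory, converted to MB.
        return tm_gb * 1024 * 0.25
    if max_pods <= 16:
        return 700
    if max_pods <= 32:
        return 700 + (max_pods - 16) * 18.75
    if max_pods <= 64:
        return 1024 + (max_pods - 32) * 6.25
    if max_pods <= 128:
        return 1230 + (max_pods - 64) * 7.80
    return 1740 + (max_pods - 128) * 11.20

# Example: a 32 GB node allowing up to 30 pods.
total_reserved_mb = reserved_for_system(32) + reserved_for_kubelet(32, 30)
```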
.. important::
@ -61,17 +61,17 @@ Rules for Reserving Node CPU
.. table:: **Table 3** Node CPU reservation rules
+---------------------------+------------------------------------------------------------------------+
| Total CPU Cores (Total) | Reserved CPU Cores |
+===========================+========================================================================+
| Total ≤ 1 core | Total x 6% |
+---------------------------+------------------------------------------------------------------------+
| 1 core < Total ≤ 2 cores | 1 core x 6% + (Total − 1 core) x 1% |
+---------------------------+------------------------------------------------------------------------+
| 2 cores < Total ≤ 4 cores | 1 core x 6% + 1 core x 1% + (Total − 2 cores) x 0.5% |
+---------------------------+------------------------------------------------------------------------+
| Total > 4 cores | 1 core x 6% + 1 core x 1% + 2 cores x 0.5% + (Total − 4 cores) x 0.25% |
+---------------------------+------------------------------------------------------------------------+
+----------------------------+------------------------------------------------------------------------+
| Total CPU Cores (Total) | Reserved CPU Cores |
+============================+========================================================================+
| Total <= 1 core | Total x 6% |
+----------------------------+------------------------------------------------------------------------+
| 1 core < Total <= 2 cores | 1 core x 6% + (Total - 1 core) x 1% |
+----------------------------+------------------------------------------------------------------------+
| 2 cores < Total <= 4 cores | 1 core x 6% + 1 core x 1% + (Total - 2 cores) x 0.5% |
+----------------------------+------------------------------------------------------------------------+
| Total > 4 cores | 1 core x 6% + 1 core x 1% + 2 cores x 0.5% + (Total - 4 cores) x 0.25% |
+----------------------------+------------------------------------------------------------------------+
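The CPU reservation in Table 3 accumulates a percentage per bracket, much like a progressive tax schedule. A minimal sketch, with an illustrative function name that is not a CCE API:

```python
# Sketch of the CPU reservation rules in Table 3.
# total is the node's CPU core count; the result is in cores.
# Function name is illustrative, not a CCE API.

def reserved_cpu_cores(total):
    if total <= 1:
        return total * 0.06
    reserved = 1 * 0.06          # 6% of the first core
    if total <= 2:
        return reserved + (total - 1) * 0.01
    reserved += 1 * 0.01         # 1% of the second core
    if total <= 4:
        return reserved + (total - 2) * 0.005
    reserved += 2 * 0.005        # 0.5% of cores 3-4
    return reserved + (total - 4) * 0.0025  # 0.25% of the rest

# Example: an 8-core node reserves about 0.09 cores in total.
reserved = reserved_cpu_cores(8)
```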
.. important::

View File

@ -15,7 +15,6 @@ In a rolling upgrade, a new node is created, existing workloads are migrated to
.. figure:: /_static/images/en-us_image_0295359661.png
:alt: **Figure 1** Workload migration
**Figure 1** Workload migration
Notes and Constraints

View File

@ -39,7 +39,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001190302085.png
:alt: **Figure 1** Resetting the selected node
**Figure 1** Resetting the selected node
#. Click **Yes** and wait until the node is reset.

View File

@ -31,7 +31,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001190302087.png
:alt: **Figure 1** Nodes details page
**Figure 1** Nodes details page
#. In the upper right corner of the ECS details page, click **Stop**. In the **Stop ECS** dialog box, click **Yes**.
@ -40,5 +39,4 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001144342232.png
:alt: **Figure 2** ECS details page
**Figure 2** ECS details page

View File

@ -27,7 +27,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001144502022.png
:alt: **Figure 1** Synchronizing node data
**Figure 1** Synchronizing node data
After the synchronization is complete, the "Sync success" message is displayed in the upper right corner.

View File

@ -28,7 +28,6 @@ Process Flow
.. figure:: /_static/images/en-us_image_0000001120226646.png
:alt: **Figure 1** Process of assigning CCE permissions
**Figure 1** Process of assigning CCE permissions
#. .. _cce_01_0188__li10176121316284:

View File

@ -22,7 +22,6 @@ Role and ClusterRole specify actions that can be performed on specific resources
.. figure:: /_static/images/en-us_image_0000001142984374.png
:alt: **Figure 1** Role binding
**Figure 1** Role binding
On the CCE console, you can assign permissions to a user or user group to access resources in one or multiple namespaces. By default, the CCE console provides the following ClusterRoles:
@ -139,7 +138,6 @@ The **subjects** section binds a Role with an IAM user so that the IAM user can
.. figure:: /_static/images/en-us_image_0262051194.png
:alt: **Figure 2** A RoleBinding binds the Role to the user.
**Figure 2** A RoleBinding binds the Role to the user.
You can also specify a user group in the **subjects** section. In this case, all users in the user group obtain the permissions defined in the Role.

View File

@ -32,7 +32,6 @@ In general, you configure CCE permissions in two scenarios. The first is creatin
.. figure:: /_static/images/en-us_image_0000001168537057.png
:alt: **Figure 1** Illustration on CCE permissions
**Figure 1** Illustration on CCE permissions
These permissions allow you to manage resource users at a finer granularity.

View File

@ -28,7 +28,6 @@ Check Item 1: Whether the Security Group Is Modified
.. figure:: /_static/images/en-us_image_0000001223473841.png
:alt: **Figure 1** Viewing inbound rules of the security group
**Figure 1** Viewing inbound rules of the security group
Inbound rule parameter description:
@ -42,7 +41,6 @@ Check Item 1: Whether the Security Group Is Modified
.. figure:: /_static/images/en-us_image_0000001178192662.png
:alt: **Figure 2** Viewing outbound rules of the security group
**Figure 2** Viewing outbound rules of the security group
.. _cce_faq_00039__section11822101617614:
@ -64,5 +62,4 @@ Check Item 2: Whether the DHCP Function of the Subnet Is Disabled
.. figure:: /_static/images/en-us_image_0000001223473843.png
:alt: **Figure 3** DHCP description in the VPC API Reference
**Figure 3** DHCP description in the VPC API Reference

Some files were not shown because too many files have changed in this diff.