diff --git a/umn/source/add-ons/autoscaler.rst b/umn/source/add-ons/autoscaler.rst index edb2c9d..c61be59 100644 --- a/umn/source/add-ons/autoscaler.rst +++ b/umn/source/add-ons/autoscaler.rst @@ -128,7 +128,6 @@ Installing the Add-on #. When the configuration is complete, click **Install**. -.. _cce_10_0154__section59676731017: Description of the Scale-In Cool-Down Period -------------------------------------------- diff --git a/umn/source/add-ons/coredns_system_resource_add-on_mandatory.rst b/umn/source/add-ons/coredns_system_resource_add-on_mandatory.rst index 812cbf8..0013d3c 100644 --- a/umn/source/add-ons/coredns_system_resource_add-on_mandatory.rst +++ b/umn/source/add-ons/coredns_system_resource_add-on_mandatory.rst @@ -136,7 +136,6 @@ This add-on has been installed by default. If it is uninstalled due to some reas | | } | +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. _cce_10_0129__table1420814384015: .. table:: **Table 2** Default plugin configuration of the active zone of coredns @@ -195,6 +194,4 @@ DNS policies can be set on a per-pod basis. Currently, Kubernetes supports four .. 
figure:: /_static/images/en-us_image_0000001199021308.png - :alt: **Figure 1** Routing - - **Figure 1** Routing + :alt: diff --git a/umn/source/add-ons/gpu-beta.rst b/umn/source/add-ons/gpu-beta.rst index 75ac615..72f3291 100644 --- a/umn/source/add-ons/gpu-beta.rst +++ b/umn/source/add-ons/gpu-beta.rst @@ -54,7 +54,6 @@ If GPU information is returned, the device is available and the add-on is succes |image1| -.. _cce_10_0141__section95451728192112: Obtaining the Driver Link from Public Network --------------------------------------------- @@ -66,21 +65,15 @@ Obtaining the Driver Link from Public Network 4. Select the driver information on the **NVIDIA Driver Downloads** page, as shown in :ref:`Figure 1 `. **Operating System** must be **Linux 64-bit**. - .. _cce_10_0141__fig11696366517: .. figure:: /_static/images/en-us_image_0000001531533921.png - :alt: **Figure 1** Setting parameters - - **Figure 1** Setting parameters + :alt: 5. After confirming the driver information, click **SEARCH**. A page is displayed, showing the driver information, as shown in :ref:`Figure 2 `. Click **DOWNLOAD**. - .. _cce_10_0141__fig7873421145213: .. figure:: /_static/images/en-us_image_0000001531373685.png - :alt: **Figure 2** Driver information - - **Figure 2** Driver information + :alt: 6. Obtain the driver link in either of the following ways: @@ -88,11 +81,8 @@ Obtaining the Driver Link from Public Network - Method 2: As shown in :ref:`Figure 3 `, click **AGREE & DOWNLOAD** to download the driver. Then, upload the driver to OBS and record the OBS URL. By using this method, you do not need to bind an EIP to GPU nodes. - .. _cce_10_0141__fig5901194614534: .. figure:: /_static/images/en-us_image_0000001531533045.png - :alt: **Figure 3** Obtaining the link - - **Figure 3** Obtaining the link + :alt: .. 
|image1| image:: /_static/images/en-us_image_0000001238225460.png diff --git a/umn/source/add-ons/npd.rst b/umn/source/add-ons/npd.rst index 10d233e..d381af6 100644 --- a/umn/source/add-ons/npd.rst +++ b/umn/source/add-ons/npd.rst @@ -276,7 +276,6 @@ Check items cover events and statuses. | | | Threshold: 90% | +-----------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -.. _cce_10_0132__section1471610580474: Node-problem-controller Fault Isolation --------------------------------------- @@ -330,7 +329,6 @@ You can modify **add-onnpc.customConditionToTaint** according to the following t | Npc.affinity | Node affinity of the controller | N/A | +--------------------------------+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ -.. _cce_10_0132__table147438134911: .. table:: **Table 7** Fault isolation rule configuration diff --git a/umn/source/add-ons/volcano.rst b/umn/source/add-ons/volcano.rst index 34772c3..ba260b6 100644 --- a/umn/source/add-ons/volcano.rst +++ b/umn/source/add-ons/volcano.rst @@ -76,7 +76,6 @@ Installing the Add-on server_cert: '' server_key: '' - .. _cce_10_0193__table562185146: .. 
table:: **Table 2** Volcano Plugins diff --git a/umn/source/auto_scaling/scaling_a_node/creating_a_node_scaling_policy.rst b/umn/source/auto_scaling/scaling_a_node/creating_a_node_scaling_policy.rst index aa2a7fc..f20c96c 100644 --- a/umn/source/auto_scaling/scaling_a_node/creating_a_node_scaling_policy.rst +++ b/umn/source/auto_scaling/scaling_a_node/creating_a_node_scaling_policy.rst @@ -132,7 +132,6 @@ The following is a YAML example of a node scaling policy: targetNodepoolIds: - 7d48eca7-3419-11ea-bc29-0255ac1001a8 -.. _cce_10_0209__table18763092201: .. table:: **Table 1** Key parameters diff --git a/umn/source/auto_scaling/scaling_a_node/node_scaling_mechanisms.rst b/umn/source/auto_scaling/scaling_a_node/node_scaling_mechanisms.rst index d117a1a..f86b108 100644 --- a/umn/source/auto_scaling/scaling_a_node/node_scaling_mechanisms.rst +++ b/umn/source/auto_scaling/scaling_a_node/node_scaling_mechanisms.rst @@ -37,12 +37,9 @@ autoscaler Architecture :ref:`Figure 1 ` shows the autoscaler architecture and its core modules: -.. _cce_10_0296__fig114831750115719: .. figure:: /_static/images/en-us_image_0000001199501290.png - :alt: **Figure 1** autoscaler architecture - - **Figure 1** autoscaler architecture + :alt: **Description** diff --git a/umn/source/auto_scaling/scaling_a_workload/creating_an_hpa_policy_for_workload_auto_scaling.rst b/umn/source/auto_scaling/scaling_a_workload/creating_an_hpa_policy_for_workload_auto_scaling.rst index 9bff10e..79ac09b 100644 --- a/umn/source/auto_scaling/scaling_a_workload/creating_an_hpa_policy_for_workload_auto_scaling.rst +++ b/umn/source/auto_scaling/scaling_a_workload/creating_an_hpa_policy_for_workload_auto_scaling.rst @@ -30,7 +30,6 @@ Procedure #. Set policy parameters. - .. _cce_10_0208__table8638121213265: .. 
table:: **Table 1** HPA policy parameters diff --git a/umn/source/auto_scaling/using_hpa_and_ca_for_auto_scaling_of_workloads_and_nodes.rst b/umn/source/auto_scaling/using_hpa_and_ca_for_auto_scaling_of_workloads_and_nodes.rst index 5acb210..fae4773 100644 --- a/umn/source/auto_scaling/using_hpa_and_ca_for_auto_scaling_of_workloads_and_nodes.rst +++ b/umn/source/auto_scaling/using_hpa_and_ca_for_auto_scaling_of_workloads_and_nodes.rst @@ -21,12 +21,9 @@ HPA and CA work with each other. HPA requires sufficient cluster resources for s As shown in :ref:`Figure 1 `, HPA performs scale-out based on the monitoring metrics. When cluster resources are insufficient, newly created pods are in Pending state. CA then checks these pending pods and selects the most appropriate node pool based on the configured scaling policy to scale out the node pool. -.. _cce_10_0300__cce_bestpractice_00282_fig6540132372015: .. figure:: /_static/images/en-us_image_0000001290111529.png - :alt: **Figure 1** HPA and CA working flows - - **Figure 1** HPA and CA working flows + :alt: Using HPA and CA can easily implement auto scaling in most scenarios. In addition, the scaling process of nodes and pods can be easily observed. @@ -81,13 +78,11 @@ Preparations docker build -t hpa-example:latest . - d. .. _cce_10_0300__cce_bestpractice_00282_li108181514125: (Optional) Log in to the SWR console, choose **Organization Management** in the navigation pane, and click **Create Organization** in the upper right corner to create an organization. Skip this step if you already have an organization. - e. .. _cce_10_0300__cce_bestpractice_00282_li187221141362: In the navigation pane, choose **My Images** and then click **Upload Through Client**. On the page displayed, click **Generate a temporary login command** and click |image1| to copy the command. 
diff --git a/umn/source/best_practice/auto_scaling/using_hpa_and_ca_for_auto_scaling_of_workloads_and_nodes.rst b/umn/source/best_practice/auto_scaling/using_hpa_and_ca_for_auto_scaling_of_workloads_and_nodes.rst index 1904a80..2ec9a2b 100644 --- a/umn/source/best_practice/auto_scaling/using_hpa_and_ca_for_auto_scaling_of_workloads_and_nodes.rst +++ b/umn/source/best_practice/auto_scaling/using_hpa_and_ca_for_auto_scaling_of_workloads_and_nodes.rst @@ -21,12 +21,9 @@ HPA and CA work with each other. HPA requires sufficient cluster resources for s As shown in :ref:`Figure 1 `, HPA performs scale-out based on the monitoring metrics. When cluster resources are insufficient, newly created pods are in Pending state. CA then checks these pending pods and selects the most appropriate node pool based on the configured scaling policy to scale out the node pool. -.. _cce_bestpractice_00282__fig6540132372015: .. figure:: /_static/images/en-us_image_0000001290111529.png - :alt: **Figure 1** HPA and CA working flows - - **Figure 1** HPA and CA working flows + :alt: Using HPA and CA can easily implement auto scaling in most scenarios. In addition, the scaling process of nodes and pods can be easily observed. @@ -81,13 +78,11 @@ Preparations docker build -t hpa-example:latest . - d. .. _cce_bestpractice_00282__li108181514125: (Optional) Log in to the SWR console, choose **Organization Management** in the navigation pane, and click **Create Organization** in the upper right corner to create an organization. Skip this step if you already have an organization. - e. .. _cce_bestpractice_00282__li187221141362: In the navigation pane, choose **My Images** and then click **Upload Through Client**. On the page displayed, click **Generate a temporary login command** and click |image1| to copy the command. 
diff --git a/umn/source/best_practice/cluster/connecting_to_multiple_clusters_using_kubectl.rst b/umn/source/best_practice/cluster/connecting_to_multiple_clusters_using_kubectl.rst index d701eb0..6515b3d 100644 --- a/umn/source/best_practice/cluster/connecting_to_multiple_clusters_using_kubectl.rst +++ b/umn/source/best_practice/cluster/connecting_to_multiple_clusters_using_kubectl.rst @@ -17,9 +17,7 @@ This section describes how to configure access to multiple clusters by modifying .. figure:: /_static/images/en-us_image_0261820020.png - :alt: **Figure 1** Using kubectl to connect to multiple clusters - - **Figure 1** Using kubectl to connect to multiple clusters + :alt: Prerequisites ------------- diff --git a/umn/source/best_practice/container/using_hostaliases_to_configure_etc_hosts_in_a_pod.rst b/umn/source/best_practice/container/using_hostaliases_to_configure_etc_hosts_in_a_pod.rst index f7f1718..a1ea46c 100644 --- a/umn/source/best_practice/container/using_hostaliases_to_configure_etc_hosts_in_a_pod.rst +++ b/umn/source/best_practice/container/using_hostaliases_to_configure_etc_hosts_in_a_pod.rst @@ -65,7 +65,6 @@ Procedure | spec | Mandatory | Detailed description of the pod. For details, see :ref:`Table 2 `. | +------------+--------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. _cce_bestpractice_00226__en-us_topic_0226102200_en-us_topic_0179003345_table33531919193: .. table:: **Table 2** spec field description @@ -77,7 +76,6 @@ Procedure | containers | Mandatory | For details, see :ref:`Table 3 `. | +-------------+--------------------+----------------------------------------------------------------------------------------------------------------------------+ - .. _cce_bestpractice_00226__en-us_topic_0226102200_en-us_topic_0179003345_table196127172016: .. 
table:: **Table 3** containers field description diff --git a/umn/source/best_practice/devops/interconnecting_gitlab_with_swr_and_cce_for_ci_cd.rst b/umn/source/best_practice/devops/interconnecting_gitlab_with_swr_and_cce_for_ci_cd.rst index 4a1c088..79ffeb2 100644 --- a/umn/source/best_practice/devops/interconnecting_gitlab_with_swr_and_cce_for_ci_cd.rst +++ b/umn/source/best_practice/devops/interconnecting_gitlab_with_swr_and_cce_for_ci_cd.rst @@ -14,9 +14,7 @@ GitLab provides powerful CI/CD functions and is widely used in software developm .. figure:: /_static/images/en-us_image_0000001291567729.png - :alt: **Figure 1** GitLab CI/CD process - - **Figure 1** GitLab CI/CD process + :alt: This section describes how to interconnect GitLab with SWR and CCE for CI/CD. @@ -111,7 +109,6 @@ Log in to `GitLab `__, choose **Settings** > **CI/CD** The command output displays the login key pair. -.. _cce_bestpractice_0324__section171541431101910: Creating a Pipeline ------------------- diff --git a/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/installing_the_migration_tool.rst b/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/installing_the_migration_tool.rst index ea94f39..6b23dfa 100644 --- a/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/installing_the_migration_tool.rst +++ b/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/installing_the_migration_tool.rst @@ -54,7 +54,6 @@ In this example, MinIO is installed on a temporary ECS outside the cluster. wget https://dl.minio.io/server/minio/release/linux-amd64/minio chmod +x minio -#. .. _cce_bestpractice_0310__li126129251432: Set the username and password of MinIO. @@ -79,7 +78,6 @@ In this example, MinIO is installed on a temporary ECS outside the cluster. #. Use a browser to access http://{*EIP of the node where MinIO resides*}:30840. The MinIO console page is displayed. -.. 
_cce_bestpractice_0310__section138392220432: Installing Velero ----------------- @@ -108,7 +106,6 @@ Download the latest, stable binary file from https://github.com/vmware-tanzu/vel tar -xvf velero-v1.7.0-linux-amd64.tar.gz cp ./velero-v1.7.0-linux-amd64/velero /usr/local/bin -#. .. _cce_bestpractice_0310__li197871715322: Create the access key file **credentials-velero** for the backup object storage. @@ -124,7 +121,6 @@ Download the latest, stable binary file from https://github.com/vmware-tanzu/vel aws_access_key_id = {AK} aws_secret_access_key = {SK} -#. .. _cce_bestpractice_0310__li1722825643415: Deploy the Velero server. Change the value of **--bucket** to the name of the created object storage bucket. In this example, the bucket name is **velero**. For more information about custom installation parameters, see `Customize Velero Install `__. diff --git a/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/migrating_resources_in_a_cluster.rst b/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/migrating_resources_in_a_cluster.rst index 37da7e9..f26538d 100644 --- a/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/migrating_resources_in_a_cluster.rst +++ b/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/migrating_resources_in_a_cluster.rst @@ -21,12 +21,10 @@ Prerequisites - CCE does not support EVS disks of the **ReadWriteMany** type. If resources of this type exist in the source cluster, change the storage type to **ReadWriteOnce**. - Velero integrates the Restic tool to back up and restore storage volumes. Currently, the storage volumes of the HostPath type are not supported. For details, see `Restic Restrictions `__. If you need to back up storage volumes of this type, replace the hostPath volumes with local volumes by referring to :ref:`Storage Volumes of the HostPath Type Cannot Be Backed Up `. 
If a backup task involves storage of the HostPath type, the storage volumes of this type will be automatically skipped and a warning message will be generated. This will not cause a backup failure. -.. _cce_bestpractice_0311__section750718193288: Backing Up Applications in the Source Cluster --------------------------------------------- -#. .. _cce_bestpractice_0311__li686918502812: (Optional) If you need to back up the data of a specified storage volume in the pod, add an annotation to the pod. The annotation template is as follows: @@ -100,7 +98,6 @@ Backing Up Applications in the Source Cluster |image1| -.. _cce_bestpractice_0311__section482103142819: Restoring Applications in the Target Cluster -------------------------------------------- diff --git a/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/planning_resources_for_the_target_cluster.rst b/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/planning_resources_for_the_target_cluster.rst index 21126fc..2dfbfbc 100644 --- a/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/planning_resources_for_the_target_cluster.rst +++ b/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/planning_resources_for_the_target_cluster.rst @@ -9,9 +9,8 @@ CCE allows you to customize cluster resources to meet various service requiremen .. important:: - After a cluster is created, the resource parameters marked with asterisks (``*``) in :ref:`Table 1 ` cannot be modified. + After a cluster is created, the resource parameters marked with asterisks (``*``) in :ref:`Table 1 ` cannot be modified. -.. _cce_bestpractice_0308__table1841815113913: .. 
table:: **Table 1** CCE cluster planning diff --git a/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/solution_overview.rst b/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/solution_overview.rst index eb667e6..8483053 100644 --- a/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/solution_overview.rst +++ b/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/solution_overview.rst @@ -14,7 +14,6 @@ In terms of performance, an on-premises cluster has poor scalability due to its Now you can address the preceding challenges by using CCE, a service that allows easy cluster management and flexible scaling, integrated with application service mesh and Helm charts to simplify cluster O&M and reduce operations costs. CCE is easy to use and delivers high performance, security, reliability, openness, and compatibility. This section describes the solution and procedure for migrating on-premises clusters to CCE. -.. _cce_bestpractice_0307__section96147345128: Migration Solution ------------------ @@ -27,7 +26,6 @@ This section describes a cluster migration solution, which applies to the follow Before the migration, you need to analyze all resources in the source clusters and then determine the migration solution. Resources that can be migrated include resources inside and outside the clusters, as listed in the following table. -.. _cce_bestpractice_0307__table1126932541820: .. table:: **Table 1** Resources that can be migrated @@ -55,12 +53,9 @@ Before the migration, you need to analyze all resources in the source clusters a :ref:`Figure 1 ` shows the migration process. You can migrate resources outside a cluster as required. -.. _cce_bestpractice_0307__fig203631140201419: .. 
figure:: /_static/images/en-us_image_0000001172392670.png - :alt: **Figure 1** Migration solution diagram - - **Figure 1** Migration solution diagram + :alt: Migration Process ----------------- diff --git a/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/troubleshooting.rst b/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/troubleshooting.rst index 864f9c6..6d81f9d 100644 --- a/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/troubleshooting.rst +++ b/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/troubleshooting.rst @@ -5,7 +5,6 @@ Troubleshooting =============== -.. _cce_bestpractice_0314__section11197194820367: Storage Volumes of the HostPath Type Cannot Be Backed Up -------------------------------------------------------- @@ -70,7 +69,6 @@ Both HostPath and Local volumes are local storage volumes. However, the Restic t NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE mysql-pv 5Gi RWO Delete Available local 3s -.. _cce_bestpractice_0314__section321054511332: Backup Tool Resources Are Insufficient -------------------------------------- diff --git a/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/updating_resources_accordingly.rst b/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/updating_resources_accordingly.rst index 0824c54..53a34a8 100644 --- a/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/updating_resources_accordingly.rst +++ b/umn/source/best_practice/migration/migrating_on-premises_kubernetes_clusters_to_cce/updating_resources_accordingly.rst @@ -5,7 +5,6 @@ Updating Resources Accordingly ============================== -.. 
_cce_bestpractice_0312__section7125750134820: Updating Images --------------- @@ -30,7 +29,6 @@ The WordPress and MySQL images used in this example can be pulled from SWR. Ther #. Check the running status of the workload. -.. _cce_bestpractice_0312__section41282507482: Updating Services ----------------- @@ -57,7 +55,6 @@ After the cluster is migrated, the Service of the source cluster may fail to tak #. Use a browser to check whether the Service is available. -.. _cce_bestpractice_0312__section746195321414: Updating the Storage Class -------------------------- @@ -173,7 +170,6 @@ As the storage infrastructures of clusters may be different, storage volumes can NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE pvc Bound pvc-4c8e655a-1dbc-4897-ae6c-446b502f5e77 5Gi RWX local 13s -.. _cce_bestpractice_0312__section728213614323: Updating Databases ------------------ diff --git a/umn/source/best_practice/networking/obtaining_the_client_source_ip_address_for_a_container.rst b/umn/source/best_practice/networking/obtaining_the_client_source_ip_address_for_a_container.rst index 8c90f70..fa2860b 100644 --- a/umn/source/best_practice/networking/obtaining_the_client_source_ip_address_for_a_container.rst +++ b/umn/source/best_practice/networking/obtaining_the_client_source_ip_address_for_a_container.rst @@ -66,7 +66,6 @@ To obtain source IP addresses, perform the following steps: g. To modify a listener, locate the listener and click |image3| on the right of its name. h. Enable **Obtain Client IP Address**. -.. _cce_bestpractice_00035__section6340152911914: NodePort -------- diff --git a/umn/source/best_practice/networking/planning_cidr_blocks_for_a_cluster.rst b/umn/source/best_practice/networking/planning_cidr_blocks_for_a_cluster.rst index 77090e8..6fb4169 100644 --- a/umn/source/best_practice/networking/planning_cidr_blocks_for_a_cluster.rst +++ b/umn/source/best_practice/networking/planning_cidr_blocks_for_a_cluster.rst @@ -27,9 +27,7 @@ Basic Concepts .. 
figure:: /_static/images/en-us_image_0261818822.png - :alt: **Figure 1** VPC CIDR block architecture - - **Figure 1** VPC CIDR block architecture + :alt: By default, ECSs in all subnets of the same VPC can communicate with one another, while ECSs in different VPCs cannot communicate with each other. @@ -59,12 +57,9 @@ Single-VPC Single-Cluster Scenarios - Container CIDR Block: cannot overlap with the subnet CIDR block. - Service CIDR Block: cannot overlap with the subnet CIDR block or the container CIDR block. -.. _cce_bestpractice_00004__en-us_topic_0099587154_fig15791152874920: .. figure:: /_static/images/en-us_image_0000001392318380.png - :alt: **Figure 2** Network CIDR block planning in the single-VPC single-cluster scenario (CCE cluster) - - **Figure 2** Network CIDR block planning in the single-VPC single-cluster scenario (CCE cluster) + :alt: :ref:`Figure 3 ` shows the CIDR block planning for a **CCE Turbo cluster** (cloud native network 2.0). @@ -73,12 +68,9 @@ Single-VPC Single-Cluster Scenarios - Container Subnet CIDR Block: The container subnet is included in the VPC CIDR block and can overlap with the subnet CIDR block or even be the same as the subnet CIDR block. Note that the container subnet size determines the maximum number of containers in the cluster because IP addresses in the VPC are directly allocated to containers. After a cluster is created, you can only add container subnets but cannot delete them. You are advised to set a larger IP address segment for the container subnet to prevent insufficient container IP addresses. - Service CIDR Block: cannot overlap with the subnet CIDR block or the container CIDR block. -.. _cce_bestpractice_00004__fig19746213285: .. 
figure:: /_static/images/en-us_image_0000001392280374.png - :alt: **Figure 3** CIDR block planning in the single-VPC single-cluster scenario (CCE Turbo cluster) - - **Figure 3** CIDR block planning in the single-VPC single-cluster scenario (CCE Turbo cluster) + :alt: **Single-VPC Multi-Cluster Scenarios** -------------------------------------- @@ -92,12 +84,9 @@ Pod packets are forwarded through VPC routes. CCE automatically configures a rou - Container CIDR Block: If multiple VPC network model clusters exist in a single VPC, the container CIDR blocks of all clusters cannot overlap because the clusters use the same routing table. In this case, CCE clusters are partially interconnected. A pod of a cluster can directly access the pods of another cluster, but cannot access the Services of the cluster. - Service CIDR Block: can be used only in clusters. Therefore, the service CIDR blocks of different clusters can overlap, but cannot overlap with the subnet CIDR block and container CIDR block of the cluster to which the clusters belong. -.. _cce_bestpractice_00004__en-us_topic_0099587154_fig69527530400: .. figure:: /_static/images/en-us_image_0261818824.png - :alt: **Figure 4** VPC network - multi-cluster scenario - - **Figure 4** VPC network - multi-cluster scenario + :alt: **Tunnel Network** @@ -108,12 +97,9 @@ Though at some cost of performance, the tunnel encapsulation enables higher inte - Container CIDR Block: The container CIDR blocks of all clusters can overlap. In this case, pods in different clusters cannot be directly accessed using IP addresses. It is recommended that ELB be used for the cross-cluster access between containers. - Service CIDR Block: can be used only in clusters. Therefore, the service CIDR blocks of different clusters can overlap, but cannot overlap with the subnet CIDR block and container CIDR block of the cluster to which the clusters belong. -.. _cce_bestpractice_00004__en-us_topic_0099587154_fig8672112184219: .. 
figure:: /_static/images/en-us_image_0261818885.png - :alt: **Figure 5** Tunnel network - multi-cluster scenario - - **Figure 5** Tunnel network - multi-cluster scenario + :alt: **Cloud native network 2.0 network model** (CCE Turbo cluster) @@ -126,9 +112,7 @@ In this mode, container IP addresses are allocated from the VPC CIDR block. ELB .. figure:: /_static/images/en-us_image_0000001392259910.png - :alt: **Figure 6** Cloud native network 2.0 network model - multi-cluster scenario - - **Figure 6** Cloud native network 2.0 network model - multi-cluster scenario + :alt: **Coexistence of Clusters in Multi-Network** @@ -148,9 +132,7 @@ In the VPC network model, after creating a peering connection, you need to add r .. figure:: /_static/images/en-us_image_0261818886.png - :alt: **Figure 7** VPC Network - VPC interconnection scenario - - **Figure 7** VPC Network - VPC interconnection scenario + :alt: When creating a VPC peering connection between containers across VPCs, pay attention to the following points: @@ -162,9 +144,7 @@ In the tunnel network model, after creating a peering connection, you need to ad .. figure:: /_static/images/en-us_image_0000001082048529.png - :alt: **Figure 8** Tunnel network - VPC interconnection scenario - - **Figure 8** Tunnel network - VPC interconnection scenario + :alt: Pay attention to the following: diff --git a/umn/source/best_practice/networking/selecting_a_network_model.rst b/umn/source/best_practice/networking/selecting_a_network_model.rst index b48f95c..224abb7 100644 --- a/umn/source/best_practice/networking/selecting_a_network_model.rst +++ b/umn/source/best_practice/networking/selecting_a_network_model.rst @@ -15,25 +15,19 @@ CCE uses self-proprietary, high-performance container networking add-ons to supp .. 
figure:: /_static/images/en-us_image_0000001145545261.png - :alt: **Figure 1** Container tunnel network - - **Figure 1** Container tunnel network + :alt: - **VPC network**: The container network uses VPC routing to integrate with the underlying network. This network model is applicable to performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the route quota in a VPC network. Each node is assigned a CIDR block of a fixed size. VPC networks are free from tunnel encapsulation overhead and outperform container tunnel networks. In addition, as VPC routing includes routes to node IP addresses and container network segment, container pods in the cluster can be directly accessed from outside the cluster. .. figure:: /_static/images/en-us_image_0261818875.png - :alt: **Figure 2** VPC network - - **Figure 2** VPC network + :alt: - **Cloud Native Network 2.0**: The container network deeply integrates the elastic network interface (ENI) capability of VPC, uses the VPC CIDR block to allocate container addresses, and supports passthrough networking to containers through a load balancer. .. figure:: /_static/images/en-us_image_0000001352539924.png - :alt: **Figure 3** Cloud Native Network 2.0 - - **Figure 3** Cloud Native Network 2.0 + :alt: The following table lists the differences between the network models. diff --git a/umn/source/best_practice/storage/how_do_i_change_the_storage_class_used_by_a_cluster_of_v1.15_from_flexvolume_to_csi_everest.rst b/umn/source/best_practice/storage/how_do_i_change_the_storage_class_used_by_a_cluster_of_v1.15_from_flexvolume_to_csi_everest.rst index ea004c8..48f30de 100644 --- a/umn/source/best_practice/storage/how_do_i_change_the_storage_class_used_by_a_cluster_of_v1.15_from_flexvolume_to_csi_everest.rst +++ b/umn/source/best_practice/storage/how_do_i_change_the_storage_class_used_by_a_cluster_of_v1.15_from_flexvolume_to_csi_everest.rst @@ -18,7 +18,6 @@ Procedure #. 
(Optional) Back up data to prevent data loss in case of exceptions. -#. .. _cce_bestpractice_0107__li1219802032512: Configure a YAML file of the PV in the CSI format according to the PV in the FlexVolume format and associate the PV with the existing storage. @@ -223,7 +222,6 @@ Procedure | storageClassName | Name of the Kubernetes storage class. Set this field to **csi-sfsturbo** for SFS Turbo volumes. | +----------------------------------+-------------------------------------------------------------------------------------------------------------------------+ -#. .. _cce_bestpractice_0107__li1710710385418: Configure a YAML file of the PVC in the CSI format according to the PVC in the FlexVolume format and associate the PVC with the PV created in :ref:`2 `. @@ -401,7 +399,6 @@ Procedure | volumeName | Name of the PV. Set this parameter to the name of the static PV created in :ref:`2 `. | +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -#. .. _cce_bestpractice_0107__li487255772614: Upgrade the workload to use a new PVC. diff --git a/umn/source/best_practice/storage/mounting_an_object_storage_bucket_of_a_third-party_tenant.rst b/umn/source/best_practice/storage/mounting_an_object_storage_bucket_of_a_third-party_tenant.rst index 9c6ce2a..f1f4e00 100644 --- a/umn/source/best_practice/storage/mounting_an_object_storage_bucket_of_a_third-party_tenant.rst +++ b/umn/source/best_practice/storage/mounting_an_object_storage_bucket_of_a_third-party_tenant.rst @@ -12,12 +12,9 @@ Scenario The CCE cluster of a SaaS service provider needs to be mounted with the OBS bucket of a third-party tenant, as shown in :ref:`Figure 1 `. -.. _cce_bestpractice_00199__fig1315433183918: .. 
figure:: /_static/images/en-us_image_0268523694.png - :alt: **Figure 1** Mounting an OBS bucket of a third-party tenant - - **Figure 1** Mounting an OBS bucket of a third-party tenant + :alt: #. :ref:`The third-party tenant authorizes the SaaS service provider to access the OBS buckets or parallel file systems ` by setting the bucket policy and bucket ACL. #. :ref:`The SaaS service provider statically imports the OBS buckets and parallel file systems of the third-party tenant `. @@ -30,7 +27,6 @@ Precautions - Only clusters where the everest add-on of v1.1.11 or later has been installed (the cluster version must be v1.15 or later) can be mounted with OBS buckets of third-party tenants. - The service platform of the SaaS service provider needs to manage the lifecycle of the third-party bucket PVs. When a PVC is deleted separately, the PV is not deleted. Instead, it will be retained. To do so, you need to call the native Kubernetes APIs to create and delete static PVs. -.. _cce_bestpractice_00199__section193471249193310: Authorizing the SaaS Service Provider to Access the OBS Buckets --------------------------------------------------------------- @@ -46,9 +42,7 @@ The following uses an OBS bucket as an example to describe how to set a bucket p .. figure:: /_static/images/en-us_image_0000001325377749.png - :alt: **Figure 2** Creating a bucket policy - - **Figure 2** Creating a bucket policy + :alt: - **Policy Mode**: Select **Customized**. - **Effect**: Select **Allow**. @@ -58,7 +52,6 @@ The following uses an OBS bucket as an example to describe how to set a bucket p 4. In the navigation pane, choose **Permissions** > **Bucket ACLs**. In the right pane, click **Add**. Enter the account ID or account name of the authorized user, select **Read** and **Write** for **Access to Bucket**, select **Read** and **Write** for **Access to ACL**, and click **OK**. -.. 
_cce_bestpractice_00199__en-us_topic_0196817407_section155006183017: Statically Importing OBS Buckets and Parallel File Systems ---------------------------------------------------------- @@ -171,7 +164,6 @@ Statically Importing OBS Buckets and Parallel File Systems storageClassName: csi-obs-mountoption #The value must be the same as the storage class associated with the bound PV. volumeName: obsfscheck #Replace the name with the actual PV name of the parallel file system. -- .. _cce_bestpractice_00199__li1235812419467: **(Optional) Creating a custom OBS storage class to associate with a static PV:** diff --git a/umn/source/charts/deploying_an_application_from_a_chart.rst b/umn/source/charts/deploying_an_application_from_a_chart.rst index 2d9716e..93083df 100644 --- a/umn/source/charts/deploying_an_application_from_a_chart.rst +++ b/umn/source/charts/deploying_an_application_from_a_chart.rst @@ -47,7 +47,6 @@ The Redis workload is used as an example to illustrate the chart specifications. As listed in :ref:`Table 1 `, the parameters marked with \* are mandatory. - .. _cce_10_0146__tb7d789a3467e4fe9b4385a51f3460321: .. table:: **Table 1** Parameters in the directory structure of a chart @@ -98,7 +97,6 @@ Creating a Release #. Set workload installation parameters by referring to :ref:`Table 2 `. - .. _cce_10_0146__t26bc1c499f114b5185e5edcf61e44d95: .. table:: **Table 2** Installation parameters diff --git a/umn/source/cloud_trace_service_cts/querying_cts_logs.rst b/umn/source/cloud_trace_service_cts/querying_cts_logs.rst index 3c0459c..67e3cb3 100644 --- a/umn/source/cloud_trace_service_cts/querying_cts_logs.rst +++ b/umn/source/cloud_trace_service_cts/querying_cts_logs.rst @@ -43,17 +43,13 @@ Procedure .. figure:: /_static/images/en-us_image_0000001243981141.png - :alt: **Figure 1** Expanding trace details - - **Figure 1** Expanding trace details + :alt: #. Click **View Trace** in the **Operation** column. The trace details are displayed. .. 
figure:: /_static/images/en-us_image_0000001244141139.png - :alt: **Figure 2** Viewing event details - - **Figure 2** Viewing event details + :alt: .. |image1| image:: /_static/images/en-us_image_0000001244141141.gif .. |image2| image:: /_static/images/en-us_image_0000001199341250.png diff --git a/umn/source/clusters/cluster_overview/basic_cluster_information.rst b/umn/source/clusters/cluster_overview/basic_cluster_information.rst index f15add4..c64fdf6 100644 --- a/umn/source/clusters/cluster_overview/basic_cluster_information.rst +++ b/umn/source/clusters/cluster_overview/basic_cluster_information.rst @@ -20,9 +20,7 @@ The following figure shows the architecture of a Kubernetes cluster. .. figure:: /_static/images/en-us_image_0267028603.png - :alt: **Figure 1** Kubernetes cluster architecture - - **Figure 1** Kubernetes cluster architecture + :alt: **Master node** diff --git a/umn/source/clusters/obtaining_a_cluster_certificate.rst b/umn/source/clusters/obtaining_a_cluster_certificate.rst index 2bc42d4..716177c 100644 --- a/umn/source/clusters/obtaining_a_cluster_certificate.rst +++ b/umn/source/clusters/obtaining_a_cluster_certificate.rst @@ -21,9 +21,7 @@ Procedure .. figure:: /_static/images/en-us_image_0000001199181228.png - :alt: **Figure 1** Downloading a certificate - - **Figure 1** Downloading a certificate + :alt: .. important:: diff --git a/umn/source/clusters/upgrading_a_cluster/before_you_start.rst b/umn/source/clusters/upgrading_a_cluster/before_you_start.rst index 4e31c57..3792c48 100644 --- a/umn/source/clusters/upgrading_a_cluster/before_you_start.rst +++ b/umn/source/clusters/upgrading_a_cluster/before_you_start.rst @@ -30,7 +30,6 @@ Notes and Constraints In kubelet 1.16 and later versions, `QoS classes `__ are different from those in earlier versions. In kubelet 1.15 and earlier versions, only containers in **spec.containers** are counted. 
In kubelet 1.16 and later versions, containers in both **spec.containers** and **spec.initContainers** are counted. The QoS class of a pod will change after the upgrade. As a result, the container in the pod restarts. You are advised to modify the QoS class of the service container before the upgrade to avoid this problem. For details, see :ref:`Table 1 `. - .. _cce_10_0302__table10713231143911: .. table:: **Table 1** QoS class changes before and after the upgrade diff --git a/umn/source/clusters/upgrading_a_cluster/migrating_services_across_clusters_of_different_versions.rst b/umn/source/clusters/upgrading_a_cluster/migrating_services_across_clusters_of_different_versions.rst index fba3587..57b83c8 100644 --- a/umn/source/clusters/upgrading_a_cluster/migrating_services_across_clusters_of_different_versions.rst +++ b/umn/source/clusters/upgrading_a_cluster/migrating_services_across_clusters_of_different_versions.rst @@ -10,7 +10,7 @@ Application Scenarios This section describes how to migrate services from a cluster of an earlier version to a cluster of a later version in CCE. -This operation is applicable when a cross-version cluster upgrade is required (for example, upgrade from v1.7.\* or v1.9.\* to 1.17.*) and new clusters can be created for service migration. +This operation is applicable when a cross-version cluster upgrade is required (for example, upgrade from v1.7.\* or v1.9.\* to 1.17.\*) and new clusters can be created for service migration. Prerequisites ------------- diff --git a/umn/source/clusters/upgrading_a_cluster/performing_replace_rolling_upgrade.rst b/umn/source/clusters/upgrading_a_cluster/performing_replace_rolling_upgrade.rst index a8c4c29..7dde1d1 100644 --- a/umn/source/clusters/upgrading_a_cluster/performing_replace_rolling_upgrade.rst +++ b/umn/source/clusters/upgrading_a_cluster/performing_replace_rolling_upgrade.rst @@ -44,7 +44,6 @@ Procedure #. 
On the cluster upgrade page, review or configure basic information by referring to :ref:`Table 1 `. - .. _cce_10_0120__table924319911495: .. table:: **Table 1** Basic information diff --git a/umn/source/clusters/upgrading_a_cluster/upgrade_overview.rst b/umn/source/clusters/upgrading_a_cluster/upgrade_overview.rst index 6dcb5f3..b9a6fb2 100644 --- a/umn/source/clusters/upgrading_a_cluster/upgrade_overview.rst +++ b/umn/source/clusters/upgrading_a_cluster/upgrade_overview.rst @@ -19,11 +19,8 @@ Log in to the CCE console and check whether the message "New version available" .. figure:: /_static/images/en-us_image_0000001482796460.png - :alt: **Figure 1** Cluster with the upgrade flag + :alt: - **Figure 1** Cluster with the upgrade flag - -.. _cce_10_0197__section19981121648: Cluster Upgrade --------------- @@ -69,7 +66,6 @@ The upgrade processes are the same for master nodes. The differences between the | **Replace upgrade** | The latest worker node image is used to reset the node OS. | This is the fastest upgrade mode and requires few manual interventions. | Data or configurations on the node will be lost, and services will be interrupted for a period of time. | +----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -.. 
_cce_10_0197__section191131551162610: Precautions for Major Version Upgrade ------------------------------------- diff --git a/umn/source/clusters/using_kubectl_to_run_a_cluster/connecting_to_a_cluster_using_kubectl.rst b/umn/source/clusters/using_kubectl_to_run_a_cluster/connecting_to_a_cluster_using_kubectl.rst index 0be6fa1..4dcca21 100644 --- a/umn/source/clusters/using_kubectl_to_run_a_cluster/connecting_to_a_cluster_using_kubectl.rst +++ b/umn/source/clusters/using_kubectl_to_run_a_cluster/connecting_to_a_cluster_using_kubectl.rst @@ -33,7 +33,6 @@ CCE allows you to access a cluster through a **VPC network** or a **public netwo Download kubectl and the configuration file. Copy the file to your client, and configure kubectl. After the configuration is complete, you can access your Kubernetes clusters. Procedure: -#. .. _cce_10_0107__li194691356201712: Download kubectl. @@ -41,11 +40,8 @@ Download kubectl and the configuration file. Copy the file to your client, and c .. figure:: /_static/images/en-us_image_0000001336475537.png - :alt: **Figure 1** Downloading kubectl + :alt: - **Figure 1** Downloading kubectl - -#. .. _cce_10_0107__li34691156151712: Obtain the kubectl configuration file (kubeconfig). @@ -102,7 +98,6 @@ Download kubectl and the configuration file. Copy the file to your client, and c For details about the cluster two-way authentication, see :ref:`Two-Way Authentication for Domain Names `. -.. _cce_10_0107__section1559919152711: Two-Way Authentication for Domain Names --------------------------------------- @@ -119,12 +114,9 @@ Currently, CCE supports two-way authentication for domain names. - If the domain name two-way authentication is not supported, **kubeconfig.json** contains the **"insecure-skip-tls-verify": true** field, as shown in :ref:`Figure 2 `. To use two-way authentication, you can download the **kubeconfig.json** file again and enable two-way authentication for the domain names. - .. _cce_10_0107__fig1941342411: .. 
figure:: /_static/images/en-us_image_0000001199021320.png - :alt: **Figure 2** Two-way authentication disabled for domain names - - **Figure 2** Two-way authentication disabled for domain names + :alt: Common Issue (Error from server Forbidden) ------------------------------------------ diff --git a/umn/source/conf.py b/umn/source/conf.py index b8567a6..b502764 100644 --- a/umn/source/conf.py +++ b/umn/source/conf.py @@ -18,7 +18,7 @@ import os import sys extensions = [ - 'otcdocstheme', + 'otcdocstheme' ] otcdocs_auto_name = False @@ -98,4 +98,9 @@ html_static_path = ['_static'] html_copy_source = False # -- Options for PDF output -------------------------------------------------- -latex_documents = [] +latex_documents = [ + ('index', + 'None.tex', + u'Cloud Container Engine - User Guide', + u'OpenTelekomCloud', 'manual'), +] diff --git a/umn/source/configuration_center/cluster_secrets.rst b/umn/source/configuration_center/cluster_secrets.rst index e062cd3..149239f 100644 --- a/umn/source/configuration_center/cluster_secrets.rst +++ b/umn/source/configuration_center/cluster_secrets.rst @@ -13,7 +13,6 @@ By default, CCE creates the following secrets in each namespace: The functions of these secrets are described as follows. -.. _cce_10_0388__section11760122012591: default-secret -------------- diff --git a/umn/source/configuration_center/creating_a_configmap.rst b/umn/source/configuration_center/creating_a_configmap.rst index 0124ef5..4745f24 100644 --- a/umn/source/configuration_center/creating_a_configmap.rst +++ b/umn/source/configuration_center/creating_a_configmap.rst @@ -27,7 +27,6 @@ Procedure #. Set parameters. - .. _cce_10_0152__table16321825732: .. table:: **Table 1** Parameters for creating a ConfigMap @@ -104,7 +103,6 @@ Related Operations After creating a configuration item, you can update or delete it as described in :ref:`Table 2 `. -.. _cce_10_0152__table1619535674020: .. 
table:: **Table 2** Related operations diff --git a/umn/source/configuration_center/creating_a_secret.rst b/umn/source/configuration_center/creating_a_secret.rst index 877105a..5765718 100644 --- a/umn/source/configuration_center/creating_a_secret.rst +++ b/umn/source/configuration_center/creating_a_secret.rst @@ -19,7 +19,6 @@ Procedure #. Set parameters. - .. _cce_10_0153__table16321825732: .. table:: **Table 1** Parameters for creating a secret @@ -122,7 +121,6 @@ After creating a secret, you can update or delete it as described in :ref:`Table The secret list contains system secret resources that can be queried only. The system secret resources cannot be updated or deleted. -.. _cce_10_0153__table555785274319: .. table:: **Table 2** Related Operations @@ -144,7 +142,6 @@ After creating a secret, you can update or delete it as described in :ref:`Table | | #. Follow the prompts to delete the secrets. | +-----------------------------------+------------------------------------------------------------------------------------------------------+ -.. _cce_10_0153__section175000605919: Base64 Encoding --------------- diff --git a/umn/source/configuration_center/using_a_configmap.rst b/umn/source/configuration_center/using_a_configmap.rst index 0a3664b..1926ff0 100644 --- a/umn/source/configuration_center/using_a_configmap.rst +++ b/umn/source/configuration_center/using_a_configmap.rst @@ -25,7 +25,6 @@ The following example shows how to use a ConfigMap. When a ConfigMap is used in a pod, the pod and ConfigMap must be in the same cluster and namespace. -.. _cce_10_0015__section1737733192813: Setting Workload Environment Variables -------------------------------------- @@ -85,7 +84,6 @@ To add all data in a ConfigMap to environment variables, use the **envFrom** par name: cce-configmap restartPolicy: Never -.. 
_cce_10_0015__section17930105710189: Setting Command Line Parameters ------------------------------- @@ -122,7 +120,6 @@ After the pod runs, the following information is displayed: Hello CCE -.. _cce_10_0015__section1490261161916: Attaching a ConfigMap to the Workload Data Volume ------------------------------------------------- diff --git a/umn/source/configuration_center/using_a_secret.rst b/umn/source/configuration_center/using_a_secret.rst index 9945789..ede7507 100644 --- a/umn/source/configuration_center/using_a_secret.rst +++ b/umn/source/configuration_center/using_a_secret.rst @@ -32,7 +32,6 @@ The following example shows how to use a secret. When a secret is used in a pod, the pod and secret must be in the same cluster and namespace. -.. _cce_10_0016__section472505211214: Configuring the Data Volume of a Pod ------------------------------------ @@ -84,7 +83,6 @@ In addition, you can specify the directory and permission to access a secret. Th To mount a secret to a data volume, you can also perform operations on the CCE console. When creating a workload, set advanced settings for the container, choose **Data Storage > Local Volume**, click **Add Local Volume**, and select **Secret**. For details, see :ref:`Secret `. -.. _cce_10_0016__section207271352141216: Setting Environment Variables of a Pod -------------------------------------- diff --git a/umn/source/instruction.rst b/umn/source/instruction.rst index 880b4c2..916695f 100644 --- a/umn/source/instruction.rst +++ b/umn/source/instruction.rst @@ -14,9 +14,7 @@ Complete the following tasks to get started with CCE. .. figure:: /_static/images/en-us_image_0000001178352608.png - :alt: **Figure 1** Procedure for getting started with CCE - - **Figure 1** Procedure for getting started with CCE + :alt: #. :ref:`Charts (Helm) `\ Authorize an IAM user to use CCE. 
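The ConfigMap and secret hunks above both document injecting key-value data into container environment variables (`envFrom` for a whole ConfigMap, `valueFrom` for a single secret key). A minimal pod sketch combining the two patterns — the ConfigMap name `cce-configmap` comes from the example above; the secret name `mysecret` and its `password` key are illustrative, not from the source:

```yaml
# Sketch of the two env-injection patterns described in the ConfigMap/secret sections.
apiVersion: v1
kind: Pod
metadata:
  name: env-example
spec:
  restartPolicy: Never
  containers:
    - name: demo
      image: busybox
      command: ["env"]            # prints the injected variables and exits
      envFrom:
        - configMapRef:
            name: cce-configmap   # every key in the ConfigMap becomes an env variable
      env:
        - name: DB_PASSWORD       # a single key picked out of a secret
          valueFrom:
            secretKeyRef:
              name: mysecret      # illustrative secret name
              key: password       # illustrative key
```

After `kubectl apply -f` on a cluster where the ConfigMap and secret exist, the pod log would list each ConfigMap key alongside `DB_PASSWORD`.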
diff --git a/umn/source/logging/using_icagent_to_collect_container_logs.rst b/umn/source/logging/using_icagent_to_collect_container_logs.rst index 407b7a5..079c54e 100644 --- a/umn/source/logging/using_icagent_to_collect_container_logs.rst +++ b/umn/source/logging/using_icagent_to_collect_container_logs.rst @@ -23,9 +23,7 @@ Using ICAgent to Collect Logs .. figure:: /_static/images/en-us_image_0000001199181298.png - :alt: **Figure 1** Adding a log policy - - **Figure 1** Adding a log policy + :alt: #. Set **Storage Type** to **Host Path** or **Container Path**. diff --git a/umn/source/migrating_data_from_cce_1.0_to_cce_2.0/migrating_applications.rst b/umn/source/migrating_data_from_cce_1.0_to_cce_2.0/migrating_applications.rst index 36f0f5a..6a9c297 100644 --- a/umn/source/migrating_data_from_cce_1.0_to_cce_2.0/migrating_applications.rst +++ b/umn/source/migrating_data_from_cce_1.0_to_cce_2.0/migrating_applications.rst @@ -17,11 +17,10 @@ Common Kubernetes resources include Deployments, StatefulSets, jobs, DaemonSets, Procedure --------- -#. .. _cce_01_9995__li156087595210: Export resource files from CCE 1.0. - **kubectl** **get** *{resource} {name}* -**n** *{namespace}* -**oyaml** --**export** > *{namespace}_{resource}_{name}*\ **.yaml** + **kubectl** **get** *{resource} {name}* -**n** *{namespace}* -**oyaml** --**export** > *{namespace}\_{resource}\_{name}*\ **.yaml** Assume that the following resource files are exported: @@ -33,7 +32,7 @@ Procedure #. Switch to the CCE 2.0 clusters and run the following kubectl command to create the resources exported in :ref:`1 `. 
- **kubectl create -f**\ *{namespace}_{resource}_{name}*\ **.yaml** + **kubectl create -f**\ *{namespace}\_{resource}\_{name}*\ **.yaml** Examples of creating resource files: diff --git a/umn/source/migrating_data_from_cce_1.0_to_cce_2.0/migrating_clusters.rst b/umn/source/migrating_data_from_cce_1.0_to_cce_2.0/migrating_clusters.rst index 1e0d6d3..ef1deff 100644 --- a/umn/source/migrating_data_from_cce_1.0_to_cce_2.0/migrating_clusters.rst +++ b/umn/source/migrating_data_from_cce_1.0_to_cce_2.0/migrating_clusters.rst @@ -18,9 +18,7 @@ Procedure .. figure:: /_static/images/en-us_image_0000001177874150.png - :alt: **Figure 1** Cluster specifications in CCE 1.0 - - **Figure 1** Cluster specifications in CCE 1.0 + :alt: .. table:: **Table 1** Parameters for creating a cluster @@ -87,7 +85,6 @@ Procedure #. Set the parameters based on :ref:`Table 2 `. - .. _cce_01_9996__table16351025186: .. table:: **Table 2** Parameters for adding a node diff --git a/umn/source/migrating_data_from_cce_1.0_to_cce_2.0/migrating_images.rst b/umn/source/migrating_data_from_cce_1.0_to_cce_2.0/migrating_images.rst index 5eaa47d..858a016 100644 --- a/umn/source/migrating_data_from_cce_1.0_to_cce_2.0/migrating_images.rst +++ b/umn/source/migrating_data_from_cce_1.0_to_cce_2.0/migrating_images.rst @@ -24,17 +24,13 @@ Procedure .. figure:: /_static/images/en-us_image_0000001178352594.png - :alt: **Figure 1** Generate the Docker login command - - **Figure 1** Generate the Docker login command + :alt: #. Log in to the CCE 1.0 console, and obtain the docker login configuration file **dockercfg.json**. .. figure:: /_static/images/en-us_image_0000001223473833.png - :alt: **Figure 2** Obtain the docker login configuration file - - **Figure 2** Obtain the docker login configuration file + :alt: #. Log in to the Docker client as user **root**, and copy the **dockercfg.json** file obtained in Step 2 and the image migration tool to the **/root** directory. @@ -54,6 +50,4 @@ Procedure .. 
figure:: /_static/images/en-us_image_0000001223393885.png - :alt: **Figure 3** Migrate the image - - **Figure 3** Migrate the image + :alt: diff --git a/umn/source/monitoring_and_alarm/monitoring_overview.rst b/umn/source/monitoring_and_alarm/monitoring_overview.rst index 2578d8d..6e88c2d 100644 --- a/umn/source/monitoring_and_alarm/monitoring_overview.rst +++ b/umn/source/monitoring_and_alarm/monitoring_overview.rst @@ -15,7 +15,6 @@ CCE works with AOM to comprehensively monitor clusters. When a node is created, The ICAgent collects custom metrics of applications and uploads them to AOM. For details, see :ref:`Custom Monitoring `. -.. _cce_10_0182__section205486212251: Resource Metrics ---------------- diff --git a/umn/source/namespaces/creating_a_namespace.rst b/umn/source/namespaces/creating_a_namespace.rst index 74a688d..2717de1 100644 --- a/umn/source/namespaces/creating_a_namespace.rst +++ b/umn/source/namespaces/creating_a_namespace.rst @@ -46,7 +46,6 @@ Creating a Namespace #. Set namespace parameters based on :ref:`Table 1 `. - .. _cce_10_0278__table5523151617575: .. table:: **Table 1** Parameters for creating a namespace diff --git a/umn/source/namespaces/managing_namespaces.rst b/umn/source/namespaces/managing_namespaces.rst index 7346156..02708f0 100644 --- a/umn/source/namespaces/managing_namespaces.rst +++ b/umn/source/namespaces/managing_namespaces.rst @@ -30,9 +30,7 @@ Isolating Namespaces .. figure:: /_static/images/en-us_image_0000001199021298.png - :alt: **Figure 1** One namespace for one environment - - **Figure 1** One namespace for one environment + :alt: - **Isolating namespaces by application** @@ -40,9 +38,7 @@ Isolating Namespaces .. 
figure:: /_static/images/en-us_image_0000001243981147.png - :alt: **Figure 2** Grouping workloads into different namespaces - - **Figure 2** Grouping workloads into different namespaces + :alt: Deleting a Namespace -------------------- diff --git a/umn/source/namespaces/setting_a_resource_quota.rst b/umn/source/namespaces/setting_a_resource_quota.rst index 7a030b5..6e16f6b 100644 --- a/umn/source/namespaces/setting_a_resource_quota.rst +++ b/umn/source/namespaces/setting_a_resource_quota.rst @@ -29,7 +29,6 @@ Cluster Scale Recommended Number of Pods Starting from clusters of v1.21 and later, the default `Resource Quotas `__ are created when a namespace is created if you have enabled **enable-resource-quota** in :ref:`Managing Cluster Components `. :ref:`Table 1 ` lists the resource quotas based on cluster specifications. You can modify them according to your service requirements. -.. _cce_10_0287__table371165714613: .. table:: **Table 1** Default resource quotas diff --git a/umn/source/networking/accessing_public_networks_from_a_container.rst b/umn/source/networking/accessing_public_networks_from_a_container.rst index 67560d5..d0c0b7b 100644 --- a/umn/source/networking/accessing_public_networks_from_a_container.rst +++ b/umn/source/networking/accessing_public_networks_from_a_container.rst @@ -13,12 +13,9 @@ Containers can access public networks in either of the following ways: You can use NAT Gateway to enable container pods in a VPC to access public networks. NAT Gateway provides source network address translation (SNAT), which translates private IP addresses to a public IP address by binding an elastic IP address (EIP) to the gateway, providing secure and efficient access to the Internet. :ref:`Figure 1 ` shows the SNAT architecture. The SNAT function allows the container pods in a VPC to access the Internet without being bound to an EIP. 
SNAT supports a large number of concurrent connections, which makes it suitable for applications involving a large number of requests and connections. -.. _cce_10_0400__cce_bestpractice_00274_0_en-us_topic_0241700138_en-us_topic_0144420145_fig34611314153619: .. figure:: /_static/images/en-us_image_0000001192028618.png - :alt: **Figure 1** SNAT - - **Figure 1** SNAT + :alt: To enable a container pod to access the Internet, perform the following steps: diff --git a/umn/source/networking/configuring_intra-vpc_access.rst b/umn/source/networking/configuring_intra-vpc_access.rst index 495945a..ce17b8b 100644 --- a/umn/source/networking/configuring_intra-vpc_access.rst +++ b/umn/source/networking/configuring_intra-vpc_access.rst @@ -7,7 +7,6 @@ Configuring Intra-VPC Access This section describes how to access an intranet from a container (outside the cluster in a VPC), including intra-VPC access and cross-VPC access. -.. _cce_10_0399__section1940319933: Intra-VPC Access ---------------- @@ -58,7 +57,6 @@ The performance of accessing an intranet from a container varies depending on th --- 192.168.10.25 ping statistics --- 4 packets transmitted, 4 packets received, 0% packet loss -.. _cce_10_0399__section44190754210: Cross-VPC Access ---------------- diff --git a/umn/source/networking/container_network_models/cloud_native_network_2.0.rst b/umn/source/networking/container_network_models/cloud_native_network_2.0.rst index c24088f..d3a969b 100644 --- a/umn/source/networking/container_network_models/cloud_native_network_2.0.rst +++ b/umn/source/networking/container_network_models/cloud_native_network_2.0.rst @@ -12,9 +12,7 @@ Developed by CCE, Cloud Native Network 2.0 deeply integrates Elastic Network Int .. 
figure:: /_static/images/en-us_image_0000001199181336.png - :alt: **Figure 1** Cloud Native Network 2.0 - - **Figure 1** Cloud Native Network 2.0 + :alt: **Pod-to-pod communication** @@ -58,9 +56,7 @@ In addition, a subnet can be added to the container CIDR block after a cluster i .. figure:: /_static/images/en-us_image_0000001244261171.png - :alt: **Figure 2** Configuring CIDR blocks - - **Figure 2** Configuring CIDR blocks + :alt: Example of Cloud Native Network 2.0 Access ------------------------------------------ diff --git a/umn/source/networking/container_network_models/container_tunnel_network.rst b/umn/source/networking/container_network_models/container_tunnel_network.rst index 7767566..50ced30 100644 --- a/umn/source/networking/container_network_models/container_tunnel_network.rst +++ b/umn/source/networking/container_network_models/container_tunnel_network.rst @@ -12,9 +12,7 @@ The container tunnel network is constructed on but independent of the node netwo .. figure:: /_static/images/en-us_image_0000001199341330.png - :alt: **Figure 1** Container tunnel network - - **Figure 1** Container tunnel network + :alt: **Pod-to-pod communication** @@ -56,9 +54,7 @@ The container tunnel network allocates container IP addresses according to the f .. 
figure:: /_static/images/en-us_image_0000001244141217.png - :alt: **Figure 2** IP address allocation of the container tunnel network - - **Figure 2** IP address allocation of the container tunnel network + :alt: Maximum number of nodes that can be created in the cluster using the container tunnel network = Number of IP addresses in the container CIDR block / Size of the IP CIDR block allocated to the node by the container CIDR block at a time (16 by default) diff --git a/umn/source/networking/container_network_models/overview.rst b/umn/source/networking/container_network_models/overview.rst index 491b6a9..3213d74 100644 --- a/umn/source/networking/container_network_models/overview.rst +++ b/umn/source/networking/container_network_models/overview.rst @@ -20,7 +20,6 @@ Network Model Comparison After a cluster is created, the network model cannot be changed. -.. _cce_10_0281__en-us_topic_0146398798_table715802210336: .. table:: **Table 1** Network model comparison diff --git a/umn/source/networking/container_network_models/vpc_network.rst b/umn/source/networking/container_network_models/vpc_network.rst index 6ee0769..c232eff 100644 --- a/umn/source/networking/container_network_models/vpc_network.rst +++ b/umn/source/networking/container_network_models/vpc_network.rst @@ -12,9 +12,7 @@ The VPC network uses VPC routing to integrate with the underlying network. This .. figure:: /_static/images/en-us_image_0000001199181338.png - :alt: **Figure 1** VPC network model - - **Figure 1** VPC network model + :alt: **Pod-to-pod communication** @@ -41,7 +39,6 @@ Applicable Scenarios - High performance requirements: As no tunnel encapsulation is required, the VPC network model delivers the performance close to that of a VPC network when compared with the container tunnel network model. Therefore, the VPC network model is applicable to scenarios that have high requirements on performance, such as AI computing and big data computing. 
- Small- and medium-scale networking: The VPC network is limited by the VPC route quota. Currently, a maximum of 200 nodes are supported by default. If there are large-scale networking requirements, you can increase the VPC route quota. -.. _cce_10_0283__section1574982552114: Container IP Address Management ------------------------------- @@ -55,9 +52,7 @@ The VPC network allocates container IP addresses according to the following rule .. figure:: /_static/images/en-us_image_0000001244261173.png - :alt: **Figure 2** IP address management of the VPC network - - **Figure 2** IP address management of the VPC network + :alt: Maximum number of nodes that can be created in the cluster using the VPC network = Number of IP addresses in the container CIDR block /Number of IP addresses in the CIDR block allocated to the node by the container CIDR block diff --git a/umn/source/networking/dns/dns_configuration.rst b/umn/source/networking/dns/dns_configuration.rst index b6cb174..bd61388 100644 --- a/umn/source/networking/dns/dns_configuration.rst +++ b/umn/source/networking/dns/dns_configuration.rst @@ -30,7 +30,7 @@ Run the **cat /etc/resolv.conf** command on a Linux node or container to view th The value **ndots:5** means that if a domain name has fewer than 5 dots (.), DNS queries will be attempted by combining the domain name with each domain in the search list in turn. If no match is found after all the domains in the search list are tried, the domain name is then used for DNS query. If the domain name has 5 or more than 5 dots, it will be tried first for DNS query. In case that the domain name cannot be resolved, DNS queries will be attempted by combining the domain name with each domain in the search list in turn. 
- For example, the domain name **www.***.com** has only two dots (smaller than the value of **ndots**), and therefore the sequence of DNS queries is as follows: **www.***.default.svc.cluster.local**, **www.***.com.svc.cluster.local**, **www.***.com.cluster.local**, and **www.***.com**. This means that at least seven DNS queries will be initiated before the domain name is resolved into an IP address. It is clear that when many unnecessary DNS queries will be initiated to access an external domain name. There is room for improvement in workload's DNS configuration. + For example, the domain name **www.**\*.com** has only two dots (smaller than the value of **ndots**), and therefore the sequence of DNS queries is as follows: **www.**\*.default.svc.cluster.local**, **www.**\*.com.svc.cluster.local**, **www.**\*.com.cluster.local**, and **www.**\*.com**. This means that at least seven DNS queries will be initiated before the domain name is resolved into an IP address. Clearly, many unnecessary DNS queries will be initiated when accessing an external domain name. There is room for improvement in the workload's DNS configuration. .. note:: @@ -80,7 +80,6 @@ When creating a workload using a YAML file, you can configure the DNS settings i The **dnsPolicy** field is used to configure a DNS policy for an application. The default value is **ClusterFirst**. The DNS parameters in **dnsConfig** will be merged to the DNS file generated according to **dnsPolicy**. The merge rules are later explained in :ref:`Table 2 `. Currently, **dnsPolicy** supports the following four values: -.. _cce_10_0365__table144443315261: .. table:: **Table 1** dnsPolicy @@ -116,7 +115,6 @@ The **dnsPolicy** field is used to configure a DNS policy for an application. Th The **dnsConfig** field is used to configure DNS parameters for workloads. The configured parameters are merged to the DNS configuration file generated according to **dnsPolicy**. 
If **dnsPolicy** is set to **None**, the workload's DNS configuration file is specified by the **dnsConfig** field. If **dnsPolicy** is not set to **None**, the DNS parameters configured in **dnsConfig** are added to the DNS configuration file generated according to **dnsPolicy**. -.. _cce_10_0365__table16581121652515: .. table:: **Table 2** dnsConfig diff --git a/umn/source/networking/dns/overview.rst b/umn/source/networking/dns/overview.rst index b3b0ea8..6e922df 100644 --- a/umn/source/networking/dns/overview.rst +++ b/umn/source/networking/dns/overview.rst @@ -47,9 +47,7 @@ When a user accesses the *Service name:Port* of the Nginx pod, the IP address of .. figure:: /_static/images/en-us_image_0000001244261167.png - :alt: **Figure 1** Example of domain name resolution in a cluster - - **Figure 1** Example of domain name resolution in a cluster + :alt: Related Operations ------------------ diff --git a/umn/source/networking/dns/using_coredns_for_custom_domain_name_resolution.rst b/umn/source/networking/dns/using_coredns_for_custom_domain_name_resolution.rst index 4453ad2..13cbd31 100644 --- a/umn/source/networking/dns/using_coredns_for_custom_domain_name_resolution.rst +++ b/umn/source/networking/dns/using_coredns_for_custom_domain_name_resolution.rst @@ -28,7 +28,6 @@ Precautions Improper modification on CoreDNS configuration may cause domain name resolution failures in the cluster. Perform tests before and after the modification. -.. _cce_10_0361__section5202157467: Configuring the Stub Domain for CoreDNS --------------------------------------- @@ -103,7 +102,6 @@ You can also modify the ConfigMap as follows: resourceVersion: "8663493" uid: bba87142-9f8d-4056-b8a6-94c3887e9e1d -.. 
_cce_10_0361__section106211954135311: Modifying the CoreDNS Hosts Configuration File ---------------------------------------------- @@ -161,7 +159,6 @@ Modifying the CoreDNS Hosts Configuration File After modifying the hosts file in CoreDNS, you do not need to configure the hosts file in each pod. -.. _cce_10_0361__section2213823544: Adding the CoreDNS Rewrite Configuration to Point the Domain Name to Services in the Cluster -------------------------------------------------------------------------------------------- @@ -208,7 +205,6 @@ Use the Rewrite plug-in of CoreDNS to resolve a specified domain name to the dom selfLink: /api/v1/namespaces/kube-system/configmaps/coredns uid: be64aaad-1629-441f-8a40-a3efc0db9fa9 -.. _cce_10_0361__section677819913541: Using CoreDNS to Cascade Self-Built DNS --------------------------------------- diff --git a/umn/source/networking/ingresses/ingress_overview.rst b/umn/source/networking/ingresses/ingress_overview.rst index 11cc203..af83be1 100644 --- a/umn/source/networking/ingresses/ingress_overview.rst +++ b/umn/source/networking/ingresses/ingress_overview.rst @@ -12,12 +12,9 @@ A Service is generally used to forward access requests based on TCP and UDP and An ingress is an independent resource in the Kubernetes cluster and defines rules for forwarding external access traffic. As shown in :ref:`Figure 1 `, you can customize forwarding rules based on domain names and URLs to implement fine-grained distribution of access traffic. -.. _cce_10_0094__fig18155819416: .. figure:: /_static/images/en-us_image_0000001243981115.png - :alt: **Figure 1** Ingress diagram - - **Figure 1** Ingress diagram + :alt: The following describes the ingress-related definitions: @@ -35,9 +32,6 @@ ELB Ingress Controller is deployed on the master node and bound to the load bala #. When Ingress Controller detects that the ingress object changes, it reconfigures the listener and backend server route on the ELB side according to the traffic access rule. #. 
When a user accesses a workload, the traffic is forwarded to the corresponding backend service port based on the forwarding policy configured on ELB, and then forwarded to each associated workload through the Service. -.. _cce_10_0094__fig122542486129: .. figure:: /_static/images/en-us_image_0000001199501200.png - :alt: **Figure 2** Working principle of ELB Ingress Controller - - **Figure 2** Working principle of ELB Ingress Controller + :alt: diff --git a/umn/source/networking/ingresses/using_elb_ingresses_on_the_console.rst b/umn/source/networking/ingresses/using_elb_ingresses_on_the_console.rst index d05c1b6..91b8ad7 100644 --- a/umn/source/networking/ingresses/using_elb_ingresses_on_the_console.rst +++ b/umn/source/networking/ingresses/using_elb_ingresses_on_the_console.rst @@ -40,7 +40,7 @@ This section uses an Nginx workload as an example to describe how to add an ELB Dedicated load balancers must support HTTP and the network type must support private networks. - - **Listener Configuration**: Ingress configures a listener for the load balancer, which listens to requests from the load balancer and distributes traffic. After the configuration is complete, a listener is created on the load balancer. The default listener name is *k8s___*, for example, *k8s_HTTP_80*. + - **Listener Configuration**: Ingress configures a listener for the load balancer, which listens to requests from the load balancer and distributes traffic. After the configuration is complete, a listener is created on the load balancer. The default listener name is *k8s\_\_\_*, for example, *k8s_HTTP_80*. - **Front-End Protocol**: **HTTP** and **HTTPS** are available. @@ -78,7 +78,7 @@ This section uses an Nginx workload as an example to describe how to add an ELB - **Prefix match**: If the URL is set to **/healthz**, the URL that meets the prefix can be accessed. For example, **/healthz/v1** and **/healthz/v2**. - **Exact match**: The URL can be accessed only when it is fully matched. 
For example, if the URL is set to **/healthz**, only /healthz can be accessed. - - **Regular expression**: The URL is matched based on the regular expression. For example, if the regular expression is **/[A-Za-z0-9_.-]+/test**, all URLs that comply with this rule can be accessed, for example, **/abcA9/test** and **/v1-Ab/test**. Two regular expression standards are supported: POSIX and Perl. + - **Regular expression**: The URL is matched based on the regular expression. For example, if the regular expression is **/[A-Za-z0-9\_.-]+/test**, all URLs that comply with this rule can be accessed, for example, **/abcA9/test** and **/v1-Ab/test**. Two regular expression standards are supported: POSIX and Perl. - **URL**: access path to be registered, for example, **/healthz**. @@ -90,7 +90,6 @@ This section uses an Nginx workload as an example to describe how to add an ELB - **Destination Service**: Select an existing Service or create a Service. Services that do not meet search criteria are automatically filtered out. - - .. _cce_10_0251__li118614181492: **Destination Service Port**: Select the access port of the destination Service. @@ -121,13 +120,10 @@ This section uses an Nginx workload as an example to describe how to add an ELB #. Access the /healthz interface of the workload, for example, workload **defaultbackend**. - a. Obtain the access address of the **/healthz** interface of the workload. The access address consists of the load balancer IP address, external port, and mapping URL, for example, 10.**.**.**:80/healthz. + a. Obtain the access address of the **/healthz** interface of the workload. The access address consists of the load balancer IP address, external port, and mapping URL, for example, 10.*\*.*\*.*\*:80/healthz. - b. Enter the URL of the /healthz interface, for example, http://10.**.**.**:80/healthz, in the address box of the browser to access the workload, as shown in :ref:`Figure 1 `. + b. 
Enter the URL of the /healthz interface, for example, http://10.*\*.*\*.*\*:80/healthz, in the address box of the browser to access the workload, as shown in :ref:`Figure 1 `. - .. _cce_10_0251__fig17115192714367: .. figure:: /_static/images/en-us_image_0000001199181230.png - :alt: **Figure 1** Accessing the /healthz interface of defaultbackend - - **Figure 1** Accessing the /healthz interface of defaultbackend + :alt: diff --git a/umn/source/networking/ingresses/using_kubectl_to_create_an_elb_ingress.rst b/umn/source/networking/ingresses/using_kubectl_to_create_an_elb_ingress.rst index fe30dc0..7c0ffe0 100644 --- a/umn/source/networking/ingresses/using_kubectl_to_create_an_elb_ingress.rst +++ b/umn/source/networking/ingresses/using_kubectl_to_create_an_elb_ingress.rst @@ -20,7 +20,6 @@ Prerequisites - A NodePort Service has been configured for the workload. For details about how to configure the Service, see :ref:`NodePort `. - Dedicated load balancers must be the application type (HTTP/HTTPS) supporting private networks (with a private IP). -.. _cce_10_0252__section084115985013: Ingress Description of networking.k8s.io/v1 ------------------------------------------- @@ -39,7 +38,6 @@ Compared with v1beta1, v1 has the following differences in parameters: |image1| -.. 
_cce_10_0252__section3675115714214: Creating an Ingress - Automatically Creating a Load Balancer ------------------------------------------------------------ @@ -241,7 +239,7 @@ The following describes how to run the kubectl command to automatically create a | | | | | | | | | - If a public network load balancer will be automatically created, set this parameter to the following value: | | | | | | - | | | | {"type":"public","bandwidth_name":"cce-bandwidth-``******``","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"} | + | | | | {"type":"public","bandwidth_name":"cce-bandwidth-\``******`\`","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"} | | | | | | | | | | - If a private network load balancer will be automatically created, set this parameter to the following value: | | | | | | @@ -272,49 +270,48 @@ The following describes how to run the kubectl command to automatically create a | | | | - **Prefix**: matching based on the URL prefix separated by a slash (/). The match is case-sensitive, and elements in the path are matched one by one. A path element refers to a list of labels in the path separated by a slash (/). | +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. _cce_10_0252__table268711532210: .. 
table:: **Table 2** Data structure of the elb.autocreate field - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Mandatory | Type | Description | - +======================+=======================================+=================+===============================================================================================================================================================================================+ - | type | No | String | Network type of the load balancer. | - | | | | | - | | | | - **public**: public network load balancer | - | | | | - **inner**: private network load balancer | - | | | | | - | | | | Default: **inner** | - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_name | Yes for public network load balancers | String | Bandwidth name. The default value is **cce-bandwidth-*****\***. | - | | | | | - | | | | Value range: a string of 1 to 64 characters, including lowercase letters, digits, and underscores (_). The value must start with a lowercase letter and end with a lowercase letter or digit. | - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_chargemode | No | String | Bandwidth mode. 
| - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_size | Yes for public network load balancers | Integer | Bandwidth size. The value ranges from 1 Mbit/s to 2000 Mbit/s by default. The actual range varies depending on the configuration in each region. | - | | | | | - | | | | - The minimum increment for bandwidth adjustment varies depending on the bandwidth range. The details are as follows: | - | | | | | - | | | | - The minimum increment is 1 Mbit/s if the allowed bandwidth ranges from 0 Mbit/s to 300 Mbit/s (with 300 Mbit/s included). | - | | | | - The minimum increment is 50 Mbit/s if the allowed bandwidth ranges from 300 Mbit/s to 1000 Mbit/s. | - | | | | - The minimum increment is 500 Mbit/s if the allowed bandwidth is greater than 1000 Mbit/s. | - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_sharetype | Yes for public network load balancers | String | Bandwidth type. | - | | | | | - | | | | **PER**: dedicated bandwidth. | - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | eip_type | Yes for public network load balancers | String | EIP type. 
| - | | | | | - | | | | - **5_bgp**: dynamic BGP | - | | | | - **5_sbgp**: static BGP | - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | name | No | String | Name of the automatically created load balancer. | - | | | | | - | | | | Value range: a string of 1 to 64 characters, including lowercase letters, digits, and underscores (_). The value must start with a lowercase letter and end with a lowercase letter or digit. | - | | | | | - | | | | Default: **cce-lb+ingress.UID** | - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +----------------------+---------------------------------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Type | Description | + +======================+=======================================+=================+================================================================================================================================================================================================+ + | type | No | String | Network type of the load balancer. 
| + | | | | | + | | | | - **public**: public network load balancer | + | | | | - **inner**: private network load balancer | + | | | | | + | | | | Default: **inner** | + +----------------------+---------------------------------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_name | Yes for public network load balancers | String | Bandwidth name. The default value is **cce-bandwidth-\*****\***. | + | | | | | + | | | | Value range: a string of 1 to 64 characters, including lowercase letters, digits, and underscores (\_). The value must start with a lowercase letter and end with a lowercase letter or digit. | + +----------------------+---------------------------------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_chargemode | No | String | Bandwidth mode. | + +----------------------+---------------------------------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_size | Yes for public network load balancers | Integer | Bandwidth size. The value ranges from 1 Mbit/s to 2000 Mbit/s by default. The actual range varies depending on the configuration in each region. | + | | | | | + | | | | - The minimum increment for bandwidth adjustment varies depending on the bandwidth range. The details are as follows: | + | | | | | + | | | | - The minimum increment is 1 Mbit/s if the allowed bandwidth ranges from 0 Mbit/s to 300 Mbit/s (with 300 Mbit/s included). 
| + | | | | - The minimum increment is 50 Mbit/s if the allowed bandwidth ranges from 300 Mbit/s to 1000 Mbit/s. | + | | | | - The minimum increment is 500 Mbit/s if the allowed bandwidth is greater than 1000 Mbit/s. | + +----------------------+---------------------------------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_sharetype | Yes for public network load balancers | String | Bandwidth type. | + | | | | | + | | | | **PER**: dedicated bandwidth. | + +----------------------+---------------------------------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | eip_type | Yes for public network load balancers | String | EIP type. | + | | | | | + | | | | - **5_bgp**: dynamic BGP | + | | | | - **5_sbgp**: static BGP | + +----------------------+---------------------------------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | name | No | String | Name of the automatically created load balancer. | + | | | | | + | | | | Value range: a string of 1 to 64 characters, including lowercase letters, digits, and underscores (\_). The value must start with a lowercase letter and end with a lowercase letter or digit. | + | | | | | + | | | | Default: **cce-lb+ingress.UID** | + +----------------------+---------------------------------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ #. 
Create an ingress. @@ -335,11 +332,10 @@ The following describes how to run the kubectl command to automatically create a NAME HOSTS ADDRESS PORTS AGE ingress-test * 121.**.**.** 80 10s -#. Enter **http://121.**.**.*\*:80** in the address box of the browser to access the workload (for example, :ref:`Nginx workload `). +#. Enter **http://121.*\*.*\*.*\*:80** in the address box of the browser to access the workload (for example, :ref:`Nginx workload `). - **121.**.**.*\*** indicates the IP address of the unified load balancer. + **121.*\*.*\*.*\*** indicates the IP address of the unified load balancer. -.. _cce_10_0252__section32300431736: Creating an Ingress - Interconnecting with an Existing Load Balancer -------------------------------------------------------------------- @@ -579,7 +575,6 @@ Ingress supports TLS certificate configuration and secures your Services with HT | secretName | No | String | This parameter is mandatory if HTTPS is used. Set this parameter to the name of the created secret. | +--------------------------------------+-----------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. _cce_10_0252__table9419191416246: .. table:: **Table 5** tls_ciphers_policy parameter description @@ -622,9 +617,9 @@ Ingress supports TLS certificate configuration and secures your Services with HT NAME HOSTS ADDRESS PORTS AGE ingress-test * 121.**.**.** 80 10s -#. Enter **https://121.**.**.*\*:443** in the address box of the browser to access the workload (for example, :ref:`Nginx workload `). +#. Enter **https://121.*\*.*\*.*\*:443** in the address box of the browser to access the workload (for example, :ref:`Nginx workload `). - **121.**.**.*\*** indicates the IP address of the unified load balancer. 
+ **121.*\*.*\*.*\*** indicates the IP address of the unified load balancer. Using HTTP/2 ------------ @@ -709,7 +704,6 @@ Table 6 HTTP/2 parameters | | | | Note: **HTTP/2 can be enabled or disabled only when the listener uses HTTPS.** This parameter is invalid and defaults to **false** when the listener protocol is HTTP. | +--------------------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -.. _cce_10_0252__section0555194782414: Configuring the Server Name Indication (SNI) -------------------------------------------- diff --git a/umn/source/networking/network_policies.rst b/umn/source/networking/network_policies.rst index 231d442..b98c374 100644 --- a/umn/source/networking/network_policies.rst +++ b/umn/source/networking/network_policies.rst @@ -63,9 +63,7 @@ Using Ingress Rules .. figure:: /_static/images/en-us_image_0259557735.png - :alt: **Figure 1** podSelector - - **Figure 1** podSelector + :alt: - **Using namespaceSelector to specify the access scope** @@ -90,12 +88,9 @@ Using Ingress Rules :ref:`Figure 2 ` shows how namespaceSelector selects ingress sources. - .. _cce_10_0059__en-us_topic_0249851123_fig127351855617: .. figure:: /_static/images/en-us_image_0259558489.png - :alt: **Figure 2** namespaceSelector - - **Figure 2** namespaceSelector + :alt: Using Egress Rules ------------------ @@ -130,9 +125,7 @@ Diagram: .. figure:: /_static/images/en-us_image_0000001340138373.png - :alt: **Figure 3** ipBlock - - **Figure 3** ipBlock + :alt: You can define ingress and egress in the same rule. @@ -168,9 +161,7 @@ Diagram: .. 
figure:: /_static/images/en-us_image_0000001287883210.png - :alt: **Figure 4** Using both ingress and egress - - **Figure 4** Using both ingress and egress + :alt: Creating a Network Policy on the Console ---------------------------------------- @@ -188,7 +179,6 @@ Creating a Network Policy on the Console |image2| - .. _cce_10_0059__table166419994515: .. table:: **Table 1** Adding an inbound rule @@ -208,17 +198,17 @@ Creating a Network Policy on the Console .. table:: **Table 2** Adding an outbound rule - +------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +========================+===================================================================================================================================================================================================================================================================================================================================================================================+ - | Protocol & Port | Select the protocol type and port. Currently, TCP and UDP are supported. If this parameter is not specified, the protocol type is not limited. 
| - +------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Destination CIDR Block | Allows requests to be routed to a specified CIDR block (and not to the exception CIDR blocks). Separate the destination and exception CIDR blocks by vertical bars (|), and separate multiple exception CIDR blocks by commas (,). For example, 172.17.0.0/16|172.17.1.0/24,172.17.2.0/24 indicates that 172.17.0.0/16 is accessible, but not for 172.17.1.0/24 or 172.17.2.0/24. | - +------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Destination Namespace | Select a namespace whose objects can be accessed. If this parameter is not specified, the source object belongs to the same namespace as the current policy. | - +------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Destination Pod Label | Allow access to the pods with this label, if not specified, all pods in the namespace can be accessed. 
| - +------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +========================+====================================================================================================================================================================================================================================================================================================================================================================================+ + | Protocol & Port | Select the protocol type and port. Currently, TCP and UDP are supported. If this parameter is not specified, the protocol type is not limited. | + +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Destination CIDR Block | Allows requests to be routed to a specified CIDR block (and not to the exception CIDR blocks). 
Separate the destination and exception CIDR blocks by vertical bars (\|), and separate multiple exception CIDR blocks by commas (,). For example, 172.17.0.0/16|172.17.1.0/24,172.17.2.0/24 indicates that 172.17.0.0/16 is accessible, but not for 172.17.1.0/24 or 172.17.2.0/24. | + +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Destination Namespace | Select a namespace whose objects can be accessed. If this parameter is not specified, the source object belongs to the same namespace as the current policy. | + +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Destination Pod Label | Allow access to the pods with this label, if not specified, all pods in the namespace can be accessed. | + +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ #. Click **OK**. 
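The ingress and egress mechanisms described in the network-policy section above (podSelector, namespaceSelector, and ipBlock with exception CIDR blocks) can be combined in a single NetworkPolicy object. The following is a minimal sketch using the standard Kubernetes NetworkPolicy API; the pod and namespace labels (**role: db**, **project: q1**) and the policy name are hypothetical, while the CIDR blocks reuse the 172.17.x.x example from Table 2:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: demo-policy          # hypothetical name
  namespace: default
spec:
  podSelector:               # pods this policy applies to
    matchLabels:
      role: db               # hypothetical label
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:     # allow traffic only from namespaces with this label
        matchLabels:
          project: q1        # hypothetical label
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 172.17.0.0/16  # destination CIDR block (accessible)
        except:              # exception CIDR blocks (not accessible)
        - 172.17.1.0/24
        - 172.17.2.0/24
```

As in the console behavior described above, omitting a selector or port widens the rule: an unspecified protocol/port matches all traffic, and an unspecified namespace restricts the rule to the policy's own namespace.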
diff --git a/umn/source/networking/overview.rst b/umn/source/networking/overview.rst index 2211858..104ed99 100644 --- a/umn/source/networking/overview.rst +++ b/umn/source/networking/overview.rst @@ -10,7 +10,6 @@ You can learn about a cluster network from the following two aspects: - What is a cluster network like? A cluster consists of multiple nodes, and pods (or containers) are running on the nodes. Nodes and containers need to communicate with each other. For details about the cluster network types and their functions, see :ref:`Cluster Network Structure `. - How is pod access implemented in a cluster? Accessing a pod or container is a process of accessing services of a user. Kubernetes provides :ref:`Service ` and :ref:`Ingress ` to address pod access issues. This section summarizes common network access scenarios. You can select the proper scenario based on site requirements. For details about the network access scenarios, see :ref:`Access Scenarios `. -.. _cce_10_0010__section1131733719195: Cluster Network Structure ------------------------- @@ -39,7 +38,6 @@ All nodes in the cluster are located in a VPC and use the VPC network. The conta Service is also a Kubernetes object. Each Service has a fixed IP address. When creating a cluster on CCE, you can specify the Service CIDR block. The Service CIDR block cannot overlap with the node or container CIDR block. The Service CIDR block can be used only within a cluster. -.. _cce_10_0010__section1860619221134: Service ------- @@ -48,9 +46,7 @@ A Service is used for pod access. With a fixed IP address, a Service forwards ac .. figure:: /_static/images/en-us_image_0258889981.png - :alt: **Figure 1** Accessing pods through a Service - - **Figure 1** Accessing pods through a Service + :alt: You can configure the following types of Services: @@ -60,7 +56,6 @@ You can configure the following types of Services: For details about the Service, see :ref:`Service Overview `. -.. 
_cce_10_0010__section1248852094313: Ingress ------- @@ -69,13 +64,10 @@ Services forward requests using layer-4 TCP and UDP protocols. Ingresses forward .. figure:: /_static/images/en-us_image_0258961458.png - :alt: **Figure 2** Ingress and Service - - **Figure 2** Ingress and Service + :alt: For details about the ingress, see :ref:`Ingress Overview `. -.. _cce_10_0010__section1286493159: Access Scenarios ---------------- @@ -95,8 +87,6 @@ Workload access scenarios can be categorized as follows: .. figure:: /_static/images/en-us_image_0000001244261169.png - :alt: **Figure 3** Network access diagram - - **Figure 3** Network access diagram + :alt: .. |image1| image:: /_static/images/en-us_image_0000001199181334.png diff --git a/umn/source/networking/services/intra-cluster_access_clusterip.rst b/umn/source/networking/services/intra-cluster_access_clusterip.rst index 38ff599..ab5d986 100644 --- a/umn/source/networking/services/intra-cluster_access_clusterip.rst +++ b/umn/source/networking/services/intra-cluster_access_clusterip.rst @@ -14,12 +14,9 @@ The cluster-internal domain name format is **.\ *` shows the mapping relationships between access channels, container ports, and access ports. -.. _cce_10_0011__fig192245420557: .. figure:: /_static/images/en-us_image_0000001243981117.png - :alt: **Figure 1** Intra-cluster access (ClusterIP) - - **Figure 1** Intra-cluster access (ClusterIP) + :alt: Creating a ClusterIP Service ---------------------------- diff --git a/umn/source/networking/services/loadbalancer.rst b/umn/source/networking/services/loadbalancer.rst index 20cf11c..395bf50 100644 --- a/umn/source/networking/services/loadbalancer.rst +++ b/umn/source/networking/services/loadbalancer.rst @@ -5,7 +5,6 @@ LoadBalancer ============ -.. _cce_10_0014__section19854101411508: Scenario -------- @@ -18,9 +17,7 @@ In this access mode, requests are transmitted through an ELB load balancer to a .. 
figure:: /_static/images/en-us_image_0000001244141181.png - :alt: **Figure 1** LoadBalancer - - **Figure 1** LoadBalancer + :alt: When **CCE Turbo clusters and dedicated load balancers** are used, passthrough networking is supported to reduce service latency and ensure zero performance loss. @@ -28,9 +25,7 @@ External access requests are directly forwarded from a load balancer to pods. In .. figure:: /_static/images/en-us_image_0000001249073211.png - :alt: **Figure 2** Passthrough networking - - **Figure 2** Passthrough networking + :alt: Notes and Constraints --------------------- @@ -87,7 +82,6 @@ Creating a LoadBalancer Service - **Type**: This function is disabled by default. You can select **Source IP address**. Listeners ensure session stickiness based on IP addresses. Requests from the same IP address will be forwarded to the same backend server. - **Health Check**: This function is disabled by default. The health check is for the load balancer. When TCP is selected during the :ref:`port settings `, you can choose either TCP or HTTP. When UDP is selected during the :ref:`port settings `, only UDP is supported. By default, the service port (Node Port and container port of the Service) is used for health check. You can also specify another port for health check. After the port is specified, a service port named **cce-healthz** will be added for the Service. - - .. _cce_10_0014__li388800117144: **Port Settings** @@ -99,7 +93,6 @@ Creating a LoadBalancer Service #. Click **OK**. -.. _cce_10_0014__section1984211714368: Using kubectl to Create a Service (Using an Existing Load Balancer) ------------------------------------------------------------------- @@ -222,7 +215,6 @@ You can set the access type when creating a workload using kubectl. This section | kubernetes.io/elb.health-check-option | No | :ref:`Table 3 ` Object | ELB health check configuration items. 
| +-------------------------------------------+-----------------+----------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. _cce_10_0014__table43592047133910: .. table:: **Table 2** Data structure of the elb.session-affinity-option field @@ -234,7 +226,6 @@ You can set the access type when creating a workload using kubectl. This section | | | | Value range: 1 to 60. Default value: **60** | +---------------------+-----------------+-----------------+------------------------------------------------------------------------------------------------------------------------------+ - .. _cce_10_0014__table236017471397: .. table:: **Table 3** Data structure description of the **elb.health-check-option** field @@ -311,11 +302,8 @@ You can set the access type when creating a workload using kubectl. This section .. figure:: /_static/images/en-us_image_0000001243981181.png - :alt: **Figure 3** Accessing Nginx through the LoadBalancer Service + :alt: - **Figure 3** Accessing Nginx through the LoadBalancer Service - -.. _cce_10_0014__section12168131904611: Using kubectl to Create a Service (Automatically Creating a Load Balancer) -------------------------------------------------------------------------- @@ -493,7 +481,6 @@ You can add a Service when creating a workload using kubectl. This section uses | externalTrafficPolicy | No | String | If sticky session is enabled, add this parameter so that requests are transferred to a fixed node. If a LoadBalancer Service with this parameter set to **Local** is created, a client can access the target backend only if the client is installed on the same node as the backend. 
| +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. _cce_10_0014__table939522754617: .. table:: **Table 5** Data structure of the elb.autocreate field @@ -502,7 +489,7 @@ You can add a Service when creating a workload using kubectl. This section uses +======================+=======================================+==================+==================================================================================================================================================================================================================================================================================================================================================================================+ | name | No | String | Name of the load balancer that is automatically created. | | | | | | - | | | | Value range: 1 to 64 characters, including lowercase letters, digits, and underscores (_). The value must start with a lowercase letter and end with a lowercase letter or digit. | + | | | | Value range: 1 to 64 characters, including lowercase letters, digits, and underscores (\_). The value must start with a lowercase letter and end with a lowercase letter or digit. 
| | | | | | | | | | Default: **cce-lb+service.UID** | +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -513,9 +500,9 @@ You can add a Service when creating a workload using kubectl. This section uses | | | | | | | | | Default: **inner** | +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_name | Yes for public network load balancers | String | Bandwidth name. The default value is **cce-bandwidth-*****\***. | + | bandwidth_name | Yes for public network load balancers | String | Bandwidth name. The default value is **cce-bandwidth-\*****\***. | | | | | | - | | | | Value range: 1 to 64 characters, including lowercase letters, digits, and underscores (_). The value must start with a lowercase letter and end with a lowercase letter or digit. | + | | | | Value range: 1 to 64 characters, including lowercase letters, digits, and underscores (\_). The value must start with a lowercase letter and end with a lowercase letter or digit. 
| +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | bandwidth_chargemode | No | String | Bandwidth mode. | +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -600,9 +587,7 @@ You can add a Service when creating a workload using kubectl. This section uses .. figure:: /_static/images/en-us_image_0000001199021334.png - :alt: **Figure 4** Accessing Nginx through the LoadBalancer Service - - **Figure 4** Accessing Nginx through the LoadBalancer Service + :alt: ELB Forwarding -------------- @@ -615,7 +600,6 @@ In the passthrough networking scenario (CCE Turbo + dedicated load balancer), af You can see that a listener is created for the load balancer. The backend server address is the IP address of the pod, and the service port is the container port. This is because the pod uses an ENI or sub-ENI. When traffic passes through the load balancer, it directly forwards the traffic to the pod. This is the same as that described in :ref:`Scenario `. -.. 
_cce_10_0014__section52631714117: Why a Cluster Fails to Access Services by Using the ELB Address --------------------------------------------------------------- diff --git a/umn/source/networking/services/nodeport.rst b/umn/source/networking/services/nodeport.rst index f952ae7..36d4c46 100644 --- a/umn/source/networking/services/nodeport.rst +++ b/umn/source/networking/services/nodeport.rst @@ -12,9 +12,7 @@ A Service is exposed on each node's IP address at a static port (NodePort). A Cl .. figure:: /_static/images/en-us_image_0000001199501230.png - :alt: **Figure 1** NodePort access - - **Figure 1** NodePort access + :alt: Notes and Constraints --------------------- @@ -188,7 +186,6 @@ You can run kubectl commands to set the access type. This section uses a Nginx w / # -.. _cce_10_0142__section18134208069: externalTrafficPolicy (Service Affinity) ---------------------------------------- diff --git a/umn/source/networking/services/service_annotations.rst b/umn/source/networking/services/service_annotations.rst index b194ce2..2de30e8 100644 --- a/umn/source/networking/services/service_annotations.rst +++ b/umn/source/networking/services/service_annotations.rst @@ -79,7 +79,6 @@ The annotations of a Service are the parameters that need to be specified for co | | | The default value is **false**, indicating that the host network is not used. | | | +-------------------------------------------+----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+------------------------------------------------+ -.. _cce_10_0385__table148341447193017: .. 
table:: **Table 2** Data structure of the elb.autocreate field @@ -88,7 +87,7 @@ The annotations of a Service are the parameters that need to be specified for co +======================+=======================================+==================+==================================================================================================================================================================================================================================================================================================================================================================================+ | name | No | String | Name of the load balancer that is automatically created. | | | | | | - | | | | Value range: 1 to 64 characters, including lowercase letters, digits, and underscores (_). The value must start with a lowercase letter and end with a lowercase letter or digit. | + | | | | Value range: 1 to 64 characters, including lowercase letters, digits, and underscores (\_). The value must start with a lowercase letter and end with a lowercase letter or digit. 
| | | | | | | | | | Default: **cce-lb+service.UID** | +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -99,9 +98,9 @@ The annotations of a Service are the parameters that need to be specified for co | | | | | | | | | Default: **inner** | +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_name | Yes for public network load balancers | String | Bandwidth name. The default value is **cce-bandwidth-*****\***. | + | bandwidth_name | Yes for public network load balancers | String | Bandwidth name. The default value is **cce-bandwidth-\*****\***. | | | | | | - | | | | Value range: 1 to 64 characters, including lowercase letters, digits, and underscores (_). The value must start with a lowercase letter and end with a lowercase letter or digit. | + | | | | Value range: 1 to 64 characters, including lowercase letters, digits, and underscores (\_). The value must start with a lowercase letter and end with a lowercase letter or digit. 
| +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | bandwidth_chargemode | No | String | Bandwidth mode. | +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -141,7 +140,6 @@ The annotations of a Service are the parameters that need to be specified for co | | | | ] | +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -.. _cce_10_0385__table19192143412319: .. table:: **Table 3** Data structure description of the **elb.health-check-option** field @@ -173,7 +171,6 @@ The annotations of a Service are the parameters that need to be specified for co | | | | The value can contain 1 to 10,000 characters. | +-----------------+-----------------+-----------------+------------------------------------------------------------------------------------+ -.. 
_cce_10_0385__table3340195463412: .. table:: **Table 4** Data structure of the elb.session-affinity-option field diff --git a/umn/source/networking/services/service_overview.rst b/umn/source/networking/services/service_overview.rst index 8d87e99..668a87d 100644 --- a/umn/source/networking/services/service_overview.rst +++ b/umn/source/networking/services/service_overview.rst @@ -16,12 +16,9 @@ After a pod is created, the following problems may occur if you directly access For example, an application uses Deployments to create the frontend and backend. The frontend calls the backend for computing, as shown in :ref:`Figure 1 `. Three pods are running in the backend, which are independent and replaceable. When a backend pod is re-created, the new pod is assigned with a new IP address, of which the frontend pod is unaware. -.. _cce_10_0249__en-us_topic_0249851121_fig2173165051811: .. figure:: /_static/images/en-us_image_0258894622.png - :alt: **Figure 1** Inter-pod access - - **Figure 1** Inter-pod access + :alt: Using Services for Pod Access ----------------------------- @@ -30,12 +27,9 @@ Kubernetes Services are used to solve the preceding pod access problems. A Servi In the preceding example, a Service is added for the frontend pod to access the backend pods. In this way, the frontend pod does not need to be aware of the changes on backend pods, as shown in :ref:`Figure 2 `. -.. _cce_10_0249__en-us_topic_0249851121_fig163156154816: .. 
figure:: /_static/images/en-us_image_0258889981.png - :alt: **Figure 2** Accessing pods through a Service - - **Figure 2** Accessing pods through a Service + :alt: Service Types ------------- diff --git a/umn/source/node_pools/creating_a_node_pool.rst b/umn/source/node_pools/creating_a_node_pool.rst index 73ede10..07e222d 100644 --- a/umn/source/node_pools/creating_a_node_pool.rst +++ b/umn/source/node_pools/creating_a_node_pool.rst @@ -196,8 +196,8 @@ Procedure +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Taint | This parameter is left blank by default. You can add taints to set anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters: | | | | - | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | - | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.). | + | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (\_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | + | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (\_), and periods (.). | | | - **Effect**: Available options are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**. | | | | | | For details, see :ref:`Managing Node Taints `. 
| diff --git a/umn/source/node_pools/managing_a_node_pool.rst b/umn/source/node_pools/managing_a_node_pool.rst index ccac6d4..535b78e 100644 --- a/umn/source/node_pools/managing_a_node_pool.rst +++ b/umn/source/node_pools/managing_a_node_pool.rst @@ -198,8 +198,8 @@ Editing a Node Pool +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Taint | This field is left blank by default. You can add taints to set anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters: | | | | - | | - **Key**: A key must contain 1 to 63 characters starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | - | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.). | + | | - **Key**: A key must contain 1 to 63 characters starting with a letter or digit. Only letters, digits, hyphens (-), underscores (\_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | + | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (\_), and periods (.). | | | - **Effect**: Available options are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**. | | | | | | For details, see :ref:`Managing Node Taints `. | @@ -230,7 +230,6 @@ Deleting a node pool will delete nodes in the pool. Pods on these nodes will be #. Read the precautions in the **Delete Node Pool** dialog box. #. 
In the text box, click **Yes** to confirm that you want to continue the deletion. -.. _cce_10_0222__section550619571556: Copying a Node Pool ------------------- diff --git a/umn/source/node_pools/node_pool_overview.rst b/umn/source/node_pools/node_pool_overview.rst index d75ae98..b449ca5 100644 --- a/umn/source/node_pools/node_pool_overview.rst +++ b/umn/source/node_pools/node_pool_overview.rst @@ -37,7 +37,6 @@ CCE provides the following extended attributes for node pools: - Node pool OS - Maximum number of pods on each node in a node pool -.. _cce_10_0081__section16928123042115: Description of DefaultPool -------------------------- diff --git a/umn/source/nodes/adding_nodes_for_management.rst b/umn/source/nodes/adding_nodes_for_management.rst index 23e0311..8f7a574 100644 --- a/umn/source/nodes/adding_nodes_for_management.rst +++ b/umn/source/nodes/adding_nodes_for_management.rst @@ -108,8 +108,8 @@ Procedure +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Taint | This field is left blank by default. You can add taints to set anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters: | | | | - | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | - | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.). | + | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (\_), and periods (.) 
are allowed. A DNS subdomain name can be used as the prefix of a key. | + | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (\_), and periods (.). | | | - **Effect**: Available options are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**. | | | | | | .. important:: | diff --git a/umn/source/nodes/creating_a_node.rst b/umn/source/nodes/creating_a_node.rst index 677daed..7b7985b 100644 --- a/umn/source/nodes/creating_a_node.rst +++ b/umn/source/nodes/creating_a_node.rst @@ -151,8 +151,8 @@ After a cluster is created, you can create nodes for the cluster. +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Taint | This parameter is left blank by default. You can add taints to set anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters: | | | | - | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | - | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.). | + | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (\_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | + | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (\_), and periods (.). 
| | | - **Effect**: Available options are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**. | | | | | | For details, see :ref:`Managing Node Taints `. | diff --git a/umn/source/nodes/managing_node_labels.rst b/umn/source/nodes/managing_node_labels.rst index 22fbd55..f9ab8f4 100644 --- a/umn/source/nodes/managing_node_labels.rst +++ b/umn/source/nodes/managing_node_labels.rst @@ -21,7 +21,6 @@ Inherent Label of a Node After a node is created, some fixed labels exist and cannot be deleted. For details about these labels, see :ref:`Table 1 `. -.. _cce_10_0004__table83962234533: .. table:: **Table 1** Inherent label of a node diff --git a/umn/source/nodes/managing_node_taints.rst b/umn/source/nodes/managing_node_taints.rst index 4d41705..4c4913a 100644 --- a/umn/source/nodes/managing_node_taints.rst +++ b/umn/source/nodes/managing_node_taints.rst @@ -89,7 +89,6 @@ This operation will add a taint to the node. You can use kubectl to view the con On the CCE console, perform the same operations again to remove the taint and set the node to be schedulable. -.. _cce_10_0352__section2047442210417: Tolerations ----------- diff --git a/umn/source/nodes/node_overview/container_engine.rst b/umn/source/nodes/node_overview/container_engine.rst index 340d1c0..cba2f57 100644 --- a/umn/source/nodes/node_overview/container_engine.rst +++ b/umn/source/nodes/node_overview/container_engine.rst @@ -10,7 +10,6 @@ Introduction to Container Engines Container engines, one of the most important components of Kubernetes, manage the lifecycle of images and containers. The kubelet interacts with a container runtime through the Container Runtime Interface (CRI). -.. 
_cce_10_0462__section159298451879: Mapping between Node OSs and Container Engines ---------------------------------------------- diff --git a/umn/source/nodes/node_overview/data_disk_space_allocation.rst b/umn/source/nodes/node_overview/data_disk_space_allocation.rst index b094458..1739b8b 100644 --- a/umn/source/nodes/node_overview/data_disk_space_allocation.rst +++ b/umn/source/nodes/node_overview/data_disk_space_allocation.rst @@ -16,7 +16,6 @@ When creating a node, you need to configure a data disk whose capacity is greate - :ref:`Allocate Pod Basesize `: indicates the base size of a container, that is, the upper limit of the disk space occupied by each workload pod (including the space occupied by container images). This setting prevents the pods from taking all the disk space available, which may cause service exceptions. It is recommended that the value be smaller than or equal to 80% of the container engine space. This parameter is related to the node OS and container storage rootfs and is not supported in some scenarios. -.. _cce_10_0341__section10653143445411: Setting Container Engine Space ------------------------------ @@ -59,7 +58,6 @@ Using rootfs for container storage in CCE - CCE cluster: EulerOS 2.5 nodes use Device Mapper and EulerOS 2.9 nodes use OverlayFS. CentOS 7.x nodes in clusters earlier than v1.19.16 use Device Mapper, and use OverlayFS in clusters of v1.19.16 and later. - CCE Turbo cluster: BMSs use Device Mapper. ECSs use OverlayFS. -.. 
_cce_10_0341__section12119191161518: Allocating Basesize for Pods ---------------------------- diff --git a/umn/source/nodes/node_overview/formula_for_calculating_the_reserved_resources_of_a_node.rst b/umn/source/nodes/node_overview/formula_for_calculating_the_reserved_resources_of_a_node.rst index 5378bb2..ad3f55b 100644 --- a/umn/source/nodes/node_overview/formula_for_calculating_the_reserved_resources_of_a_node.rst +++ b/umn/source/nodes/node_overview/formula_for_calculating_the_reserved_resources_of_a_node.rst @@ -107,7 +107,6 @@ Rules for Reserving Node CPU | Total > 4 cores | 1 core x 6% + 1 core x 1% + 2 cores x 0.5% + (Total - 4 cores) x 0.25% | +----------------------------+------------------------------------------------------------------------+ -.. _cce_10_0178__section1057416013173: Default Maximum Number of Pods on a Node ---------------------------------------- diff --git a/umn/source/nodes/node_overview/maximum_number_of_pods_that_can_be_created_on_a_node.rst b/umn/source/nodes/node_overview/maximum_number_of_pods_that_can_be_created_on_a_node.rst index 7cf6523..e059824 100644 --- a/umn/source/nodes/node_overview/maximum_number_of_pods_that_can_be_created_on_a_node.rst +++ b/umn/source/nodes/node_overview/maximum_number_of_pods_that_can_be_created_on_a_node.rst @@ -11,7 +11,6 @@ The maximum number of pods that can be created on a node is determined by the fo - Maximum number of pods of a node (maxPods): Set this parameter when creating a node. It is a configuration item of kubelet. -- .. _cce_10_0348__li5286959123611: Number of ENIs of a CCE Turbo cluster node: In a CCE Turbo cluster, ECS nodes use sub-ENIs and BMS nodes use ENIs. The maximum number of pods that can be created on a node depends on the number of ENIs that can be used by the node. 
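The per-node pod capacity factors listed above can be combined in a minimal sketch. A node reserves three container IP addresses (the network, gateway, and broadcast addresses), so a block of 128 addresses leaves 125 for pods. The function names below are illustrative only, not a CCE or Kubernetes API, and the ENI limit that applies to CCE Turbo nodes is omitted.

```python
def allocatable_container_ips(cidr_prefix_len: int) -> int:
    """Container IPs a node can hand out: block size minus the 3 reserved
    addresses (network, gateway, and broadcast)."""
    return 2 ** (32 - cidr_prefix_len) - 3

def effective_max_pods(max_pods: int, cidr_prefix_len: int) -> int:
    """The lower of kubelet's maxPods setting and the node's allocatable
    container IP addresses bounds the pods the node can actually run."""
    return min(max_pods, allocatable_container_ips(cidr_prefix_len))

print(allocatable_container_ips(25))   # 125 (128 addresses - 3 reserved)
print(effective_max_pods(110, 25))     # 110: capped by kubelet's maxPods
print(effective_max_pods(256, 25))     # 125: capped by available container IPs
```

Pods that use the host network (``hostNetwork: true``) do not consume container IP addresses, so they fall outside the IP-based bound sketched here.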
@@ -29,7 +28,6 @@ When creating a pod, you can select the container network or host network for th - Container network (default): **Each pod is assigned an IP address by the cluster networking add-ons, which occupies the IP addresses of the container network**. - Host network: The pod uses the host network (**hostNetwork: true** needs to be configured for the pod) and occupies the host port. The pod IP address is the host IP address. The pod does not occupy the IP addresses of the container network. To use the host network, you must confirm whether the container ports conflict with the host ports. Do not use the host network unless you know exactly which host port is used by which container. -.. _cce_10_0348__section10770192193714: Number of Container IP Addresses That Can Be Allocated on a Node ---------------------------------------------------------------- @@ -40,7 +38,6 @@ This parameter affects the maximum number of pods that can be created on a node. By default, a node occupies three container IP addresses (network address, gateway address, and broadcast address). Therefore, the number of container IP addresses that can be allocated to a node equals the number of selected container IP addresses minus 3. For example, in the preceding figure, **the number of container IP addresses that can be allocated to a node is 125 (128 - 3)**. -.. _cce_10_0348__section16296174054019: Maximum Number of Pods on a Node -------------------------------- diff --git a/umn/source/nodes/node_overview/precautions_for_using_a_node.rst b/umn/source/nodes/node_overview/precautions_for_using_a_node.rst index 63fa3f8..b1d4641 100644 --- a/umn/source/nodes/node_overview/precautions_for_using_a_node.rst +++ b/umn/source/nodes/node_overview/precautions_for_using_a_node.rst @@ -16,7 +16,6 @@ A container cluster consists of a set of worker machines, called nodes, that run CCE uses high-performance Elastic Cloud Servers (ECSs) as nodes to build highly available Kubernetes clusters. -.. 
_cce_10_0461__section1667513391595: Supported Node Specifications ----------------------------- diff --git a/umn/source/nodes/performing_rolling_upgrade_for_nodes.rst b/umn/source/nodes/performing_rolling_upgrade_for_nodes.rst index f6d0645..4d61daa 100644 --- a/umn/source/nodes/performing_rolling_upgrade_for_nodes.rst +++ b/umn/source/nodes/performing_rolling_upgrade_for_nodes.rst @@ -10,12 +10,9 @@ Scenario In a rolling upgrade, a new node is created, existing workloads are migrated to the new node, and then the old node is deleted. :ref:`Figure 1 ` shows the migration process. -.. _cce_10_0276__fig1689610598118: .. figure:: /_static/images/en-us_image_0000001199181340.png - :alt: **Figure 1** Workload migration - - **Figure 1** Workload migration + :alt: Notes and Constraints --------------------- @@ -27,7 +24,6 @@ Notes and Constraints Scenario 1: The Original Node Is in DefaultPool ----------------------------------------------- -#. .. _cce_10_0276__li375022715214: Create a node pool. For details, see :ref:`Creating a Node Pool `. @@ -72,7 +68,6 @@ Scenario 1: The Original Node Is in DefaultPool Scenario 2: The Original Node Is Not in DefaultPool --------------------------------------------------- -#. .. _cce_10_0276__li1992616214312: Copy the node pool and add nodes to it. For details, see :ref:`Copying a Node Pool `. diff --git a/umn/source/nodes/removing_a_node.rst b/umn/source/nodes/removing_a_node.rst index 045fdb0..7a04bcd 100644 --- a/umn/source/nodes/removing_a_node.rst +++ b/umn/source/nodes/removing_a_node.rst @@ -41,7 +41,6 @@ Procedure After the node is removed, workload pods on the node are automatically migrated to other available nodes. -.. 
_cce_10_0338__section149069481111: Handling Failed OS Reinstallation --------------------------------- diff --git a/umn/source/nodes/resetting_a_node.rst b/umn/source/nodes/resetting_a_node.rst index 27fac2c..9279bff 100644 --- a/umn/source/nodes/resetting_a_node.rst +++ b/umn/source/nodes/resetting_a_node.rst @@ -42,7 +42,6 @@ The new console allows you to reset nodes in batches. You can also use private i - For nodes in the DefaultPool node pool, the parameter setting page is displayed. Set the parameters by referring to :ref:`4 `. - For a node you create in a node pool, resetting the node does not support parameter configuration. You can directly use the configuration image of the node pool to reset the node. -#. .. _cce_10_0003__li1646785611239: Specify node parameters. @@ -107,8 +106,8 @@ The new console allows you to reset nodes in batches. You can also use private i +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Taint | This field is left blank by default. You can add taints to set anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters: | | | | - | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | - | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.). | + | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (\_), and periods (.) are allowed. 
A DNS subdomain name can be used as the prefix of a key. | + | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (\_), and periods (.). | | | - **Effect**: Available options are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**. | | | | | | .. important:: | diff --git a/umn/source/nodes/stopping_a_node.rst b/umn/source/nodes/stopping_a_node.rst index f9d9c1a..20c6360 100644 --- a/umn/source/nodes/stopping_a_node.rst +++ b/umn/source/nodes/stopping_a_node.rst @@ -29,6 +29,4 @@ Procedure .. figure:: /_static/images/en-us_image_0000001244261119.png - :alt: **Figure 1** ECS details page - - **Figure 1** ECS details page + :alt: diff --git a/umn/source/nodes/synchronizing_data_with_cloud_servers.rst b/umn/source/nodes/synchronizing_data_with_cloud_servers.rst index 60381f0..a8c9428 100644 --- a/umn/source/nodes/synchronizing_data_with_cloud_servers.rst +++ b/umn/source/nodes/synchronizing_data_with_cloud_servers.rst @@ -32,8 +32,6 @@ Procedure .. figure:: /_static/images/en-us_image_0000001243981203.png - :alt: **Figure 1** Synchronizing server data - - **Figure 1** Synchronizing server data + :alt: After the synchronization is complete, the **ECS data synchronization requested** message is displayed in the upper right corner. diff --git a/umn/source/permissions_management/cluster_permissions_iam-based.rst b/umn/source/permissions_management/cluster_permissions_iam-based.rst index 93b372a..813752f 100644 --- a/umn/source/permissions_management/cluster_permissions_iam-based.rst +++ b/umn/source/permissions_management/cluster_permissions_iam-based.rst @@ -26,11 +26,8 @@ Process Flow .. figure:: /_static/images/en-us_image_0000001244261073.png - :alt: **Figure 1** Process of assigning CCE permissions + :alt: - **Figure 1** Process of assigning CCE permissions - -#. .. _cce_10_0188__li10176121316284: Create a user group and assign permissions to it. 
diff --git a/umn/source/permissions_management/namespace_permissions_kubernetes_rbac-based.rst b/umn/source/permissions_management/namespace_permissions_kubernetes_rbac-based.rst index 8c42416..72a7f85 100644 --- a/umn/source/permissions_management/namespace_permissions_kubernetes_rbac-based.rst +++ b/umn/source/permissions_management/namespace_permissions_kubernetes_rbac-based.rst @@ -20,9 +20,7 @@ Role and ClusterRole specify actions that can be performed on specific resources .. figure:: /_static/images/en-us_image_0000001244261071.png - :alt: **Figure 1** Role binding - - **Figure 1** Role binding + :alt: On the CCE console, you can assign permissions to a user or user group to access resources in one or multiple namespaces. By default, the CCE console provides the following ClusterRoles: @@ -31,14 +29,12 @@ On the CCE console, you can assign permissions to a user or user group to access - admin (O&M): read and write permissions on most resources in all namespaces, and read-only permission on nodes, storage volumes, namespaces, and quota management. - cluster-admin (administrator): read and write permissions on all resources in all namespaces. -.. _cce_10_0189__section207514572488: Cluster Permissions (IAM-based) and Namespace Permissions (Kubernetes RBAC-based) --------------------------------------------------------------------------------- Users with different cluster permissions (assigned using IAM) have different namespace permissions (assigned using Kubernetes RBAC). :ref:`Table 1 ` lists the namespace permissions of different users. -.. _cce_10_0189__cce_10_0187_table886210176509: .. table:: **Table 1** Differences in namespace permissions @@ -82,7 +78,6 @@ You can regulate users' or user groups' access to Kubernetes resources in a sing #. Click **OK**. -.. 
_cce_10_0189__section1273861718819: Using kubectl to Configure Namespace Permissions ------------------------------------------------ @@ -137,9 +132,7 @@ The **subjects** section binds a Role with an IAM user so that the IAM user can .. figure:: /_static/images/en-us_image_0262051194.png - :alt: **Figure 2** A RoleBinding binds the Role to the user. - - **Figure 2** A RoleBinding binds the Role to the user. + :alt: You can also specify a user group in the **subjects** section. In this case, all users in the user group obtain the permissions defined in the Role. diff --git a/umn/source/permissions_management/permission_dependency_of_the_cce_console.rst b/umn/source/permissions_management/permission_dependency_of_the_cce_console.rst index 57f987c..d9f8528 100644 --- a/umn/source/permissions_management/permission_dependency_of_the_cce_console.rst +++ b/umn/source/permissions_management/permission_dependency_of_the_cce_console.rst @@ -25,7 +25,6 @@ To grant an IAM user the permissions to view or use resources of other cloud ser - AOM does not support resource-level monitoring. After operation permissions on specific resources are configured using IAM's fine-grained cluster resource management function, IAM users can view cluster monitoring information on the **Dashboard** page of the CCE console, but cannot view the data on non-fine-grained metrics. -.. _cce_10_0190__table99001215575: .. table:: **Table 1** Dependency policies diff --git a/umn/source/permissions_management/permissions_overview.rst b/umn/source/permissions_management/permissions_overview.rst index 190e97f..e73bdc5 100644 --- a/umn/source/permissions_management/permissions_overview.rst +++ b/umn/source/permissions_management/permissions_overview.rst @@ -30,20 +30,16 @@ In general, you configure CCE permissions in two scenarios. The first is creatin .. 
figure:: /_static/images/en-us_image_0000001199181266.png - :alt: **Figure 1** Illustration on CCE permissions - - **Figure 1** Illustration on CCE permissions + :alt: These permissions allow you to manage resource users at a finer granularity. -.. _cce_10_0187__section1464135853519: Cluster Permissions (IAM-based) and Namespace Permissions (Kubernetes RBAC-based) --------------------------------------------------------------------------------- Users with different cluster permissions (assigned using IAM) have different namespace permissions (assigned using Kubernetes RBAC). :ref:`Table 1 ` lists the namespace permissions of different users. -.. _cce_10_0187__table886210176509: .. table:: **Table 1** Differences in namespace permissions diff --git a/umn/source/permissions_management/pod_security/configuring_a_pod_security_policy.rst b/umn/source/permissions_management/pod_security/configuring_a_pod_security_policy.rst index 1ba83b4..2dc948d 100644 --- a/umn/source/permissions_management/pod_security/configuring_a_pod_security_policy.rst +++ b/umn/source/permissions_management/pod_security/configuring_a_pod_security_policy.rst @@ -25,7 +25,6 @@ Before modifying the global default PSP, ensure that a CCE cluster has been crea #. Modify the parameters as required. For details, see `PodSecurityPolicy `__. -.. _cce_10_0275__section155111941177: Example of Enabling Unsafe Sysctls in Pod Security Policy --------------------------------------------------------- diff --git a/umn/source/permissions_management/pod_security/configuring_pod_security_admission.rst b/umn/source/permissions_management/pod_security/configuring_pod_security_admission.rst index c775122..9cd260a 100644 --- a/umn/source/permissions_management/pod_security/configuring_pod_security_admission.rst +++ b/umn/source/permissions_management/pod_security/configuring_pod_security_admission.rst @@ -9,7 +9,6 @@ Before using `Pod Security Admission `). 
You can set these labels in a namespace to define the pod security standard level to be used. However, do not change the pod security standard level in system namespaces such as kube-system. Otherwise, pods in the system namespace may be faulty. -.. _cce_10_0466__table198561415448: .. table:: **Table 2** Pod security admission labels @@ -50,7 +48,6 @@ Kubernetes defines three types of labels for Pod Security Admission (see :ref:`T Pods are often created indirectly, by creating a workload object such as a Deployment or Job. To help catch violations early, both the audit and warning modes are applied to the workload resources. However, the enforce mode is applied only to the resulting pod objects. -.. _cce_10_0466__section4761636371: Enforcing Pod Security Admission with Namespace Labels ------------------------------------------------------ diff --git a/umn/source/product_bulletin/risky_operations_on_cluster_nodes.rst b/umn/source/product_bulletin/risky_operations_on_cluster_nodes.rst index 4595197..ba475ab 100644 --- a/umn/source/product_bulletin/risky_operations_on_cluster_nodes.rst +++ b/umn/source/product_bulletin/risky_operations_on_cluster_nodes.rst @@ -12,7 +12,7 @@ Precautions for Using a Cluster - The containerized network canal of CCE nodes uses a CIDR block as the CIDR block of the container network. This CIDR block can be configured during cluster creation and defaults to 172.16.0.0/16. The Docker service creates a docker0 bridge by default. The default docker0 address is 172.17.0.1. When creating a cluster, ensure that the CIDR block of the VPC in the cluster is different from the CIDR blocks of the container network and the docker0 bridge. If VPC peering connections are used, also ensure that the CIDR block of the peer VPC is different from the CIDR blocks of the container network and the docker0 bridge. - For a cluster of Kubernetes v1.15, the DNS server of nodes in the cluster uses the DNS address in the VPC subnet, not the CoreDNS address of Kubernetes.
Ensure that the DNS address in the subnet exists and is configurable. - For a cluster of Kubernetes v1.17, a single-plane network is used for nodes. When a multi-plane network is used, if you bind a NIC to an ECS, you need to configure the NIC information on the node and restart the NIC. -- Do not modify the security groups, Elastic Volume Service (EVS) disks, and other resources created by CCE. Otherwise, clusters may not function properly. The resources created by CCE are labeled **cce**, for example, **cce-evs-jwh9pcl7-***\***. +- Do not modify the security groups, Elastic Volume Service (EVS) disks, and other resources created by CCE. Otherwise, clusters may not function properly. The resources created by CCE are labeled **cce**, for example, **cce-evs-jwh9pcl7-\***\***. - When adding a node, ensure that the DNS server in the subnet can resolve the domain name of the corresponding service. Otherwise, the node cannot be installed properly. Precautions for Using a Node diff --git a/umn/source/reference/how_do_i_change_the_mode_of_the_docker_device_mapper.rst b/umn/source/reference/how_do_i_change_the_mode_of_the_docker_device_mapper.rst index f90a3ea..81fdb8c 100644 --- a/umn/source/reference/how_do_i_change_the_mode_of_the_docker_device_mapper.rst +++ b/umn/source/reference/how_do_i_change_the_mode_of_the_docker_device_mapper.rst @@ -64,7 +64,6 @@ Procedure - If the command output contains the preceding information, the Docker Device Mapper mode of the current node is **direct-lvm**. You do not need to change the mode. - If the command output does not contain the preceding information or a message indicating that a file such as **daemon.json** is unavailable is displayed, the Docker Device Mapper mode of the current node is not **direct-lvm**. Change the mode by following :ref:`2 `. -#. .. 
_cce_faq_00096__li5148151113134: (Optional) If no elastic IP address is bound to the node for which the Docker Device Mapper mode needs to be changed, bind an elastic IP address. @@ -102,7 +101,6 @@ Procedure privateKey: serverId: - .. _cce_faq_00096__table43203543121749: .. table:: **Table 1** Parameter description @@ -134,7 +132,6 @@ Procedure | hosts | | Host array structure [1]. You can set multiple nodes for which you want to change the Device Mapper mode. The following parameters must be included: **user**, **password/privateKey**, and **serverId**. For details about the host array structure, see :ref:`Table 2 `. | +-------------------+---+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. _cce_faq_00096__table1718722614567: .. table:: **Table 2** Parameter description about the host array structure diff --git a/umn/source/reference/how_do_i_rectify_the_fault_when_the_cluster_status_is_unavailable.rst b/umn/source/reference/how_do_i_rectify_the_fault_when_the_cluster_status_is_unavailable.rst index a6c470e..d8cee79 100644 --- a/umn/source/reference/how_do_i_rectify_the_fault_when_the_cluster_status_is_unavailable.rst +++ b/umn/source/reference/how_do_i_rectify_the_fault_when_the_cluster_status_is_unavailable.rst @@ -13,7 +13,6 @@ Fault Locating - :ref:`Check Item 1: Whether the Security Group Is Modified ` - :ref:`Check Item 2: Whether the DHCP Function of the Subnet Is Disabled ` -.. _cce_faq_00039__section48059154014: Check Item 1: Whether the Security Group Is Modified ---------------------------------------------------- @@ -26,9 +25,7 @@ Check Item 1: Whether the Security Group Is Modified .. 
figure:: /_static/images/en-us_image_0000001223473841.png - :alt: **Figure 1** Viewing inbound rules of the security group - - **Figure 1** Viewing inbound rules of the security group + :alt: Inbound rule parameter description: @@ -39,11 +36,8 @@ Check Item 1: Whether the Security Group Is Modified .. figure:: /_static/images/en-us_image_0000001178192662.png - :alt: **Figure 2** Viewing outbound rules of the security group + :alt: - **Figure 2** Viewing outbound rules of the security group - -.. _cce_faq_00039__section11822101617614: Check Item 2: Whether the DHCP Function of the Subnet Is Disabled ----------------------------------------------------------------- @@ -60,6 +54,4 @@ Check Item 2: Whether the DHCP Function of the Subnet Is Disabled .. figure:: /_static/images/en-us_image_0000001223473843.png - :alt: **Figure 3** DHCP description in the VPC API Reference - - **Figure 3** DHCP description in the VPC API Reference + :alt: diff --git a/umn/source/reference/how_do_i_troubleshoot_insufficient_eips_when_a_node_is_added.rst b/umn/source/reference/how_do_i_troubleshoot_insufficient_eips_when_a_node_is_added.rst index f55c6cf..a5d80c1 100644 --- a/umn/source/reference/how_do_i_troubleshoot_insufficient_eips_when_a_node_is_added.rst +++ b/umn/source/reference/how_do_i_troubleshoot_insufficient_eips_when_a_node_is_added.rst @@ -12,9 +12,7 @@ When a node is added, **EIP** is set to **Automatically assign**. The node canno .. figure:: /_static/images/en-us_image_0000001223393901.png - :alt: **Figure 1** Purchasing an EIP - - **Figure 1** Purchasing an EIP + :alt: Solution -------- @@ -33,17 +31,13 @@ Two methods are available to solve the problem. .. figure:: /_static/images/en-us_image_0000001223152423.png - :alt: **Figure 2** Unbinding an EIP - - **Figure 2** Unbinding an EIP + :alt: #. Return to the **Create Node** page on the CCE console and click **Use existing** to add an EIP. .. 
figure:: /_static/images/en-us_image_0000001223272345.png - :alt: **Figure 3** Using an unbound EIP - - **Figure 3** Using an unbound EIP + :alt: - Method 2: Increase the EIP quota. diff --git a/umn/source/reference/how_do_i_use_kubectl_to_set_the_workload_access_type_to_loadbalancer_elb.rst b/umn/source/reference/how_do_i_use_kubectl_to_set_the_workload_access_type_to_loadbalancer_elb.rst index 0012e83..f92db0e 100644 --- a/umn/source/reference/how_do_i_use_kubectl_to_set_the_workload_access_type_to_loadbalancer_elb.rst +++ b/umn/source/reference/how_do_i_use_kubectl_to_set_the_workload_access_type_to_loadbalancer_elb.rst @@ -154,39 +154,38 @@ Procedure | targetPort | String | Container port on the CCE console. | +-------------------------------------+-----------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. _cce_faq_00099__table957018613817: .. table:: **Table 2** elb.autocreate parameters - +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Type | Description | - +=======================+=======================+=============================================================================================================================+ - | name | String | Name of the load balancer that is automatically created. | - | | | | - | | | The value is a string of 1 to 64 characters that consist of letters, digits, underscores (_), and hyphens (-). 
| - +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------+ - | type | String | Network type of the load balancer. | - | | | | - | | | - **public**: public network load balancer. | - | | | - **inner**: private network load balancer. | - +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_name | String | Bandwidth name. The default value is **cce-bandwidth-*****\***. | - | | | | - | | | The value is a string of 1 to 64 characters that consist of letters, digits, underscores (_), hyphens (-), and periods (.). | - +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_chargemode | String | Bandwidth billing mode. | - | | | | - | | | The value is **traffic**, indicating that the billing is based on traffic. | - +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_size | Integer | Bandwidth size. Set this parameter based on the bandwidth range supported by the region. | - +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_sharetype | String | Bandwidth sharing mode. | - | | | | - | | | - **PER**: dedicated bandwidth. | - | | | - **WHOLE**: shared bandwidth. | - +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------+ - | eip_type | String | EIP type. 
| - +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------+ + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Type | Description | + +=======================+=======================+==============================================================================================================================+ + | name | String | Name of the load balancer that is automatically created. | + | | | | + | | | The value is a string of 1 to 64 characters that consist of letters, digits, underscores (\_), and hyphens (-). | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------+ + | type | String | Network type of the load balancer. | + | | | | + | | | - **public**: public network load balancer. | + | | | - **inner**: private network load balancer. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_name | String | Bandwidth name. The default value is **cce-bandwidth-\*****\***. | + | | | | + | | | The value is a string of 1 to 64 characters that consist of letters, digits, underscores (\_), hyphens (-), and periods (.). | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_chargemode | String | Bandwidth billing mode. | + | | | | + | | | The value is **traffic**, indicating that the billing is based on traffic. 
| + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_size | Integer | Bandwidth size. Set this parameter based on the bandwidth range supported by the region. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_sharetype | String | Bandwidth sharing mode. | + | | | | + | | | - **PER**: dedicated bandwidth. | + | | | - **WHOLE**: shared bandwidth. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------+ + | eip_type | String | EIP type. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------+ #. Create a workload. diff --git a/umn/source/reference/what_can_i_do_if_my_cluster_status_is_available_but_the_node_status_is_unavailable.rst b/umn/source/reference/what_can_i_do_if_my_cluster_status_is_available_but_the_node_status_is_unavailable.rst index a27c353..11db564 100644 --- a/umn/source/reference/what_can_i_do_if_my_cluster_status_is_available_but_the_node_status_is_unavailable.rst +++ b/umn/source/reference/what_can_i_do_if_my_cluster_status_is_available_but_the_node_status_is_unavailable.rst @@ -17,7 +17,6 @@ Fault Locating - :ref:`Check Item 5: Whether the Disk Is Abnormal ` - :ref:`Check Item 6: Whether Internal Components Are Normal ` -.. _cce_faq_00120__section745921416917: Check Item 1: Whether the Node Is Overloaded -------------------------------------------- @@ -39,7 +38,6 @@ Check Item 1: Whether the Node Is Overloaded After the node becomes available, the workload is restored. -.. 
_cce_faq_00120__section19793128323: Check Item 2: Whether the ECS Is Deleted or Faulty -------------------------------------------------- @@ -51,7 +49,6 @@ Check Item 2: Whether the ECS Is Deleted or Faulty - If the cluster is unavailable, contact technical support to rectify the fault. - If the cluster is available but some nodes in the cluster are unavailable, go to :ref:`2 `. -#. .. _cce_faq_00120__li20888175614212: Log in to the ECS console. In the navigation pane, choose **Elastic Cloud Server** to view the ECS status. @@ -60,7 +57,6 @@ Check Item 2: Whether the ECS Is Deleted or Faulty - If the ECS status is **Faulty**, restart the ECS. If the ECS is still faulty, contact technical support to rectify the fault. - If the ECS status is **Running**, log in to the ECS to locate the fault according to :ref:`Check Item 6: Whether Internal Components Are Normal `. -.. _cce_faq_00120__section13620173173419: Check Item 3: Whether You Can Log In to the ECS ----------------------------------------------- @@ -73,13 +69,10 @@ Check Item 3: Whether You Can Log In to the ECS .. figure:: /_static/images/en-us_image_0000001178034104.png - :alt: **Figure 1** Check the node name on the VM and whether the node can be logged in to - - **Figure 1** Check the node name on the VM and whether the node can be logged in to + :alt: If the node names are inconsistent and the password and key cannot be used to log in to the node, Cloud-Init problems occurred when an ECS was created. In this case, restart the node and submit a service ticket to the ECS personnel to locate the root cause. -.. _cce_faq_00120__section39419166416: Check Item 4: Whether the Security Group Is Modified ---------------------------------------------------- @@ -92,9 +85,7 @@ Check Item 4: Whether the Security Group Is Modified .. 
figure:: /_static/images/en-us_image_0000001223393891.png - :alt: **Figure 2** Viewing inbound rules of the security group - - **Figure 2** Viewing inbound rules of the security group + :alt: Inbound rule parameter description: @@ -105,11 +96,8 @@ Check Item 4: Whether the Security Group Is Modified .. figure:: /_static/images/en-us_image_0000001223393887.png - :alt: **Figure 3** Viewing outbound rules of the security group + :alt: - **Figure 3** Viewing outbound rules of the security group - -.. _cce_faq_00120__section165209286116: Check Item 5: Whether the Disk Is Abnormal ------------------------------------------ @@ -118,7 +106,6 @@ After a node is created in a cluster of v1.7.3-r7 or a later version, a 100 GB d Click the node name to check whether the data disk attached to the node has been detached. If the disk has been detached, attach a data disk to the node again and restart the node. Then the node can be recovered. -.. _cce_faq_00120__section89551837167: Check Item 6: Whether Internal Components Are Normal ---------------------------------------------------- diff --git a/umn/source/reference/what_is_the_relationship_between_clusters_vpcs_and_subnets.rst b/umn/source/reference/what_is_the_relationship_between_clusters_vpcs_and_subnets.rst index f738568..bc940c2 100644 --- a/umn/source/reference/what_is_the_relationship_between_clusters_vpcs_and_subnets.rst +++ b/umn/source/reference/what_is_the_relationship_between_clusters_vpcs_and_subnets.rst @@ -17,9 +17,6 @@ As shown in :ref:`Figure 1 ` - :ref:`Check Item 4: NAT Gateway + Port ` -.. 
_cce_faq_00202__section138152218598: Check Item 2: Node IP Address and Node Port ------------------------------------------- @@ -61,7 +59,6 @@ After finding the node port, access : of the node where the co #. A custom route is incorrectly configured for the node. #. The label of the pod does not match that of the Service (created using kubectl or API). -.. _cce_faq_00202__section3966114610915: Check Item 3: ELB IP Address and Port ------------------------------------- @@ -81,7 +78,6 @@ There are several possible causes if : of the ELB cannot be ac #. For UDP access, the ICMP port of the node has not been allowed in the inbound rules. #. The label of the pod does not match that of the Service (created using kubectl or API). -.. _cce_faq_00202__section77791227101111: Check Item 4: NAT Gateway + Port -------------------------------- diff --git a/umn/source/reference/workload_abnormalities/failed_to_pull_an_image.rst b/umn/source/reference/workload_abnormalities/failed_to_pull_an_image.rst index c9bbbad..0822167 100644 --- a/umn/source/reference/workload_abnormalities/failed_to_pull_an_image.rst +++ b/umn/source/reference/workload_abnormalities/failed_to_pull_an_image.rst @@ -15,7 +15,6 @@ Fault Locating - :ref:`Check Item 3: Checking Whether an Incorrect Key Is Used or the Key Expires When a Third-Party Image Is Used ` - :ref:`Check Item 4: Checking Whether Disk Space Is Insufficient ` -.. _cce_faq_00015__section629791052512: Check Item 1: Checking Whether **imagePullSecret** Is Specified When You Use kubectl to Create a Workload --------------------------------------------------------------------------------------------------------- @@ -56,7 +55,6 @@ The example shows the situation when you fail to pull an image when **imagePullS When pulling an image from a third-party image repository, set **imagePullSecrets** to the created secret name. -.. 
_cce_faq_00015__section819316261313:

 Check Item 2: Checking Whether the Image Address Is Correct When a Third-Party Image Is Used
 --------------------------------------------------------------------------------------------

@@ -78,7 +76,6 @@ The following information is displayed when you fail to pull an image due to inc

 You can either edit your YAML file to modify the image address or log in to the CCE console to replace the image on the **Upgrade** tag page of the workload details page.

-.. _cce_faq_00015__section9312113135616:

 Check Item 3: Checking Whether an Incorrect Key Is Used or the Key Expires When a Third-Party Image Is Used
 -----------------------------------------------------------------------------------------------------------

@@ -91,7 +88,6 @@ Generally, a third-party image repository can be accessed only after authenticat

 If the secret is incorrect or expires, images will fail to be pulled.

-.. _cce_faq_00015__section165209286116:

 Check Item 4: Checking Whether Disk Space Is Insufficient
 ---------------------------------------------------------
diff --git a/umn/source/reference/workload_abnormalities/failed_to_restart_a_container.rst b/umn/source/reference/workload_abnormalities/failed_to_restart_a_container.rst
index 4b645bb..bd60887 100644
--- a/umn/source/reference/workload_abnormalities/failed_to_restart_a_container.rst
+++ b/umn/source/reference/workload_abnormalities/failed_to_restart_a_container.rst
@@ -43,7 +43,6 @@ Fault Locating

 - :ref:`Check Item 7: Whether the Container Ports in the Same Pod Conflict with Each Other `
 - :ref:`Check Item 8: Whether the Container Startup Command Is Correctly Configured `

-.. 
_cce_faq_00018__section2524165018111:

 Check Item 1: Whether There Are Processes that Keep Running in the Container
 ----------------------------------------------------------------------------

@@ -62,7 +61,6 @@ Check Item 1: Whether There Are Processes that Keep Running in the Container

 If no running process in the container, the status code **Exited (0)** is displayed.

-.. _cce_faq_00018__section1766510426482:

 Check Item 2: Whether Health Check Fails to Be Performed
 --------------------------------------------------------

@@ -75,7 +73,6 @@ If the liveness-type (workload liveness probe) health check is configured for th

 On the workload details page, choose **Upgrade** > **Advanced Settings** > **Health Check** to check whether the health check policy is properly set and whether services are normal.

-.. _cce_faq_00018__section1833513213713:

 Check Item 3: Whether the User Service Has a Bug
 ------------------------------------------------

@@ -100,15 +97,12 @@ Check whether the workload startup command is correctly executed or whether the

 .. figure:: /_static/images/en-us_image_0000001223473849.png
-   :alt: **Figure 1** Incorrect startup command of the container
-
-   **Figure 1** Incorrect startup command of the container
+   :alt:

 As shown above, the container fails to be started due to an incorrect startup command. For other errors, rectify the bugs based on the logs.

 Solution: Re-create a workload and configure a correct startup command.

-.. _cce_faq_00018__section060854916109:

 Check Item 4: Whether the Upper Limit of Container Resources Has Been Reached
 -----------------------------------------------------------------------------

@@ -123,7 +117,6 @@ If the upper limit of container resources has been reached, OOM will be displaye

 When a workload is created, if the requested resources exceed the configured upper limit, the system OOM is triggered and the container exits unexpectedly.

-.. 
_cce_faq_00018__section169421237111219:

 Check Item 5: Whether the Container Disk Space Is Insufficient
 --------------------------------------------------------------

@@ -146,7 +139,6 @@ The following message refers to the Thin Pool disk that is allocated from the Do

 #. Expand the disk capacity. For details, see the method of expanding the data disk capacity of a node.

-.. _cce_faq_00018__section1548114151414:

 Check Item 6: Whether the Resource Limits Are Improperly Set for the Container
 ------------------------------------------------------------------------------

@@ -161,7 +153,6 @@ If the resource limits set for the container during workload creation are less t

 Modify the container specifications.

-.. _cce_faq_00018__section17679197145618:

 Check Item 7: Whether the Container Ports in the Same Pod Conflict with Each Other
 ----------------------------------------------------------------------------------

@@ -180,15 +171,12 @@ Check Item 7: Whether the Container Ports in the Same Pod Conflict with Each Oth

 .. figure:: /_static/images/en-us_image_0000001178192674.png
-   :alt: **Figure 2** Container restart failure due to a container port conflict
-
-   **Figure 2** Container restart failure due to a container port conflict
+   :alt:

 **Solution**

 Re-create the workload and set a port number that is not used by any other pod.

-.. 
_cce_faq_00018__section1842111295128:

 Check Item 8: Whether the Container Startup Command Is Correctly Configured
 ---------------------------------------------------------------------------
diff --git a/umn/source/reference/workload_abnormalities/failed_to_schedule_an_instance.rst b/umn/source/reference/workload_abnormalities/failed_to_schedule_an_instance.rst
index 1a23b00..b2f74e7 100644
--- a/umn/source/reference/workload_abnormalities/failed_to_schedule_an_instance.rst
+++ b/umn/source/reference/workload_abnormalities/failed_to_schedule_an_instance.rst
@@ -14,7 +14,6 @@ Fault Locating

 - :ref:`Check Item 3: Checking the Affinity and Anti-Affinity Configuration of the Workload `
 - :ref:`Check Item 4: Checking Whether the Workload's Volume and Node Reside in the Same AZ `

-.. _cce_faq_00098__section1678344818322:

 Viewing K8s Event Information
 -----------------------------

@@ -40,7 +39,6 @@ As shown in the following figure, the K8s event is "0/163 nodes are available: 1

 The following is the fault locating procedure:

-.. _cce_faq_00098__section133416392418:

 Check Item 1: Checking Whether a Node Is Available in the Cluster
 -----------------------------------------------------------------

@@ -54,7 +52,6 @@ For example, the event "0/1 nodes are available: 1 node(s) were not ready, 1 nod

 - Add a node and migrate the pods to the new available node to ensure that services are running properly. Then, rectify the fault on the unavailable node. For details about the troubleshooting process, see the methods in the node FAQs.
 - Create a new node or repair the faulty one.

-.. _cce_faq_00098__section29231833141817:

 Check Item 2: Checking Whether Node Resources (CPU and Memory) Are Sufficient
 -----------------------------------------------------------------------------

@@ -77,7 +74,6 @@ If the requested workload resources exceed the available resources of the node w

 On the ECS console, modify node specifications to expand node resources.

-.. 
_cce_faq_00098__section794092214205:

 Check Item 3: Checking the Affinity and Anti-Affinity Configuration of the Workload
 -----------------------------------------------------------------------------------

@@ -108,7 +104,6 @@ Inappropriate affinity policies will cause pod scheduling to fail.

 0/1 nodes are available: 1

-.. _cce_faq_00098__section197421559143010:

 Check Item 4: Checking Whether the Workload's Volume and Node Reside in the Same AZ
 -----------------------------------------------------------------------------------
diff --git a/umn/source/storage/overview.rst b/umn/source/storage/overview.rst
index b2dd45e..804df73 100644
--- a/umn/source/storage/overview.rst
+++ b/umn/source/storage/overview.rst
@@ -16,7 +16,6 @@ The following figure shows how a storage volume is used between containers in a

 A volume will no longer exist if the pod to which it is mounted does not exist. However, files in the volume may outlive the volume, depending on the volume type.

-.. _cce_10_0307__section16559121287:

 Volume Types
 ------------

@@ -60,15 +59,12 @@ You can bind PVCs to PVs in a pod so that the pod can use storage resources. The

 .. figure:: /_static/images/en-us_image_0000001244141191.png
-   :alt: **Figure 1** PVC-to-PV binding
-
-   **Figure 1** PVC-to-PV binding
+   :alt:

 PVs describes storage resources in the cluster. PVCs are requests for those resources.

 The following sections will describe how to use kubectl to connect to storage resources. If you do not want to create storage resources or PVs manually, you can use :ref:`StorageClasses `.

-.. _cce_10_0307__section19926174743310:

 StorageClass
 ------------

@@ -95,9 +91,7 @@ CCE allows you to mount local and cloud storage volumes listed in :ref:`Volume T

 .. figure:: /_static/images/en-us_image_0000001203385342.png
-   :alt: **Figure 2** Volume types supported by CCE
-
-   **Figure 2** Volume types supported by CCE
+   :alt:

 .. 
table:: **Table 1** Detailed description of cloud storage services
diff --git a/umn/source/storage/persistentvolumeclaims_pvcs.rst b/umn/source/storage/persistentvolumeclaims_pvcs.rst
index a90b650..6ba47fa 100644
--- a/umn/source/storage/persistentvolumeclaims_pvcs.rst
+++ b/umn/source/storage/persistentvolumeclaims_pvcs.rst
@@ -28,7 +28,6 @@ When a PVC is created, the system checks whether there is an available PV with t

 | Storage class | storageclass | storageclass | The settings must be consistent. |
 +---------------+-------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+

-.. _cce_10_0378__section43881411172418:

 Volume Access Modes
 -------------------
diff --git a/umn/source/storage/persistentvolumes_pvs.rst b/umn/source/storage/persistentvolumes_pvs.rst
index 3a447d0..01ce7ca 100644
--- a/umn/source/storage/persistentvolumes_pvs.rst
+++ b/umn/source/storage/persistentvolumes_pvs.rst
@@ -36,7 +36,6 @@ PVs can be mounted to the host system only in the mode supported by underlying s

 SFS Turbo x Y
 ============ ============= =============

-.. _cce_10_0379__section19999142414413:

 PV Reclaim Policy
 -----------------
diff --git a/umn/source/storage/setting_mount_options.rst b/umn/source/storage/setting_mount_options.rst
index 8d9508a..ed05801 100644
--- a/umn/source/storage/setting_mount_options.rst
+++ b/umn/source/storage/setting_mount_options.rst
@@ -12,14 +12,12 @@ You can mount cloud storage volumes to your containers and use these volumes as

 This section describes how to set mount options when mounting SFS and OBS volumes. You can set mount options in a PV and bind the PV to a PVC. Alternatively, set mount options in a StorageClass and use the StorageClass to create a PVC.
In this way, PVs can be dynamically created and inherit mount options configured in the StorageClass by default.

-.. _cce_10_0337__section14888047833:

 SFS Volume Mount Options
 ------------------------

 The everest add-on in CCE presets the options described in :ref:`Table 1 ` for mounting SFS volumes. You can set other mount options if needed. For details, see `Mounting an NFS File System to ECSs (Linux) `__.

-.. _cce_10_0337__table128754351546:

 .. table:: **Table 1** SFS volume mount options

@@ -40,14 +38,12 @@ The everest add-on in CCE presets the options described in :ref:`Table 1

 OBS Volume Mount Options
 ------------------------

 When mounting file storage, the everest add-on presets the options described in :ref:`Table 2 ` and :ref:`Table 3 ` by default. The options in :ref:`Table 2 ` are mandatory.

-.. _cce_10_0337__table1688593020213:

 .. table:: **Table 2** Mandatory mount options configured by default

@@ -71,7 +67,6 @@ When mounting file storage, the everest add-on presets the options described in

 | sigv2 | Specifies the signature version. Used by default in object buckets. |
 +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+

-.. _cce_10_0337__table9886123010217:

 .. table:: **Table 3** Optional mount options configured by default
diff --git a/umn/source/storage/using_local_disks_as_storage_volumes.rst b/umn/source/storage/using_local_disks_as_storage_volumes.rst
index 2b23dbd..3161148 100644
--- a/umn/source/storage/using_local_disks_as_storage_volumes.rst
+++ b/umn/source/storage/using_local_disks_as_storage_volumes.rst
@@ -19,7 +19,6 @@ CCE supports four types of local volumes.

 The following describes how to mount these four types of volumes.

-.. _cce_10_0377__section196700523438:

 hostPath
 --------

@@ -32,7 +31,6 @@ You can mount a path on the host to a specified container path. A hostPath volum

 #. Set parameters for adding a local volume, as listed in :ref:`Table 1 `.

-   .. _cce_10_0377__table14312815449:

    .. 
table:: **Table 1** Setting parameters for mounting a hostPath volume

@@ -78,7 +76,6 @@ You can mount a path on the host to a specified container path. A hostPath volum

 | | You can click |image1| to add multiple paths and subpaths. |
 +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

-.. _cce_10_0377__section550555216467:

 emptyDir
 --------

@@ -91,7 +88,6 @@ emptyDir applies to temporary data storage, disaster recovery, and runtime data

 #. Set the local volume type to **emptyDir** and set parameters for adding a local volume, as described in :ref:`Table 2 `.

-   .. _cce_10_0377__table1867417102475:

    .. table:: **Table 2** Setting parameters for mounting an emptyDir volume

@@ -132,7 +128,6 @@ emptyDir applies to temporary data storage, disaster recovery, and runtime data

 | | You can click |image2| to add multiple paths and subpaths. |
 +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

-.. _cce_10_0377__section18638191594712:

 ConfigMap
 ---------

@@ -145,7 +140,6 @@ The data stored in a ConfigMap can be referenced in a volume of type ConfigMap.
 #. Set the local volume type to **ConfigMap** and set parameters for adding a local volume, as shown in :ref:`Table 3 `.

-   .. _cce_10_0377__table1776324831114:

    .. table:: **Table 3** Setting parameters for mounting a ConfigMap volume

@@ -180,7 +174,6 @@ The data stored in a ConfigMap can be referenced in a volume of type ConfigMap.

 | | You can click |image3| to add multiple paths and subpaths. |
 +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

-.. _cce_10_0377__section10197243134710:

 Secret
 ------

@@ -193,7 +186,6 @@ You can mount a secret as a volume to the specified container path. Contents in

 #. Set the local volume type to **Secret** and set parameters for adding a local volume, as shown in :ref:`Table 4 `.

-   .. _cce_10_0377__table861818920109:

    .. table:: **Table 4** Setting parameters for mounting a secret volume
diff --git a/umn/source/storage_flexvolume/how_do_i_change_the_storage_class_used_by_a_cluster_of_v1.15_from_flexvolume_to_csi_everest.rst b/umn/source/storage_flexvolume/how_do_i_change_the_storage_class_used_by_a_cluster_of_v1.15_from_flexvolume_to_csi_everest.rst
index 7f529c5..08a5134 100644
--- a/umn/source/storage_flexvolume/how_do_i_change_the_storage_class_used_by_a_cluster_of_v1.15_from_flexvolume_to_csi_everest.rst
+++ b/umn/source/storage_flexvolume/how_do_i_change_the_storage_class_used_by_a_cluster_of_v1.15_from_flexvolume_to_csi_everest.rst
@@ -18,7 +18,6 @@ Procedure

 #. (Optional) Back up data to prevent data loss in case of exceptions.

-#. .. 
_cce_10_0343__cce_bestpractice_0107_li1219802032512:

    Configure a YAML file of the PV in the CSI format according to the PV in the FlexVolume format and associate the PV with the existing storage.

@@ -223,7 +222,6 @@ Procedure

 | storageClassName | Name of the Kubernetes storage class. Set this field to **csi-sfsturbo** for SFS Turbo volumes. |
 +----------------------------------+-------------------------------------------------------------------------------------------------------------------------+

-#. .. _cce_10_0343__cce_bestpractice_0107_li1710710385418:

    Configure a YAML file of the PVC in the CSI format according to the PVC in the FlexVolume format and associate the PVC with the PV created in :ref:`2 `.

@@ -401,7 +399,6 @@ Procedure

 | volumeName | Name of the PV. Set this parameter to the name of the static PV created in :ref:`2 `. |
 +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

-#. .. _cce_10_0343__cce_bestpractice_0107_li487255772614:

    Upgrade the workload to use a new PVC.
diff --git a/umn/source/storage_flexvolume/using_evs_disks_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_evs_disk.rst b/umn/source/storage_flexvolume/using_evs_disks_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_evs_disk.rst
index cfe4975..4c7d2c5 100644
--- a/umn/source/storage_flexvolume/using_evs_disks_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_evs_disk.rst
+++ b/umn/source/storage_flexvolume/using_evs_disks_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_evs_disk.rst
@@ -33,7 +33,6 @@ Procedure

    **Clusters from v1.11.7 to v1.13**

-   - .. _cce_10_0313__li0648350102513:

      **Example YAML file for the PV:**

@@ -149,7 +148,6 @@ Procedure

    **Clusters from v1.11 to v1.11.7**

-   - .. 
_cce_10_0313__li19211184720504:

      **Example YAML file for the PV:**

@@ -249,7 +247,6 @@ Procedure

    **Clusters of v1.9**

-   - .. _cce_10_0313__li813222310297:

      **Example YAML file for the PV:**

@@ -366,11 +363,9 @@ Procedure

    If you skip this step in this example or when creating a static PV or PVC, ensure that the EVS disk associated with the static PV has been unbound from the node before you delete the node.

-   a. .. _cce_10_0313__li6891526204113:

       Obtain the tenant token. For details, see `Obtaining a User Token `__.

-   b. .. _cce_10_0313__li17017349418:

       Obtain the EVS access address **EVS_ENDPOINT**. For details, see `Regions and Endpoints `__.
diff --git a/umn/source/storage_flexvolume/using_evs_disks_as_storage_volumes/overview.rst b/umn/source/storage_flexvolume/using_evs_disks_as_storage_volumes/overview.rst
index 5720bcb..7597050 100644
--- a/umn/source/storage_flexvolume/using_evs_disks_as_storage_volumes/overview.rst
+++ b/umn/source/storage_flexvolume/using_evs_disks_as_storage_volumes/overview.rst
@@ -9,9 +9,7 @@ To achieve persistent storage, CCE allows you to mount the storage volumes creat

 .. figure:: /_static/images/en-us_image_0000001248663503.png
-   :alt: **Figure 1** Mounting EVS volumes to CCE
-
-   **Figure 1** Mounting EVS volumes to CCE
+   :alt:

 Description
 -----------
diff --git a/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_obs_bucket.rst b/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_obs_bucket.rst
index 5efd49b..c12cdac 100644
--- a/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_obs_bucket.rst
+++ b/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_obs_bucket.rst
@@ -36,7 +36,6 @@ Procedure

    **Clusters from v1.11 to v1.13**

-   - .. 
_cce_10_0326__li45671840132016:

      **Example YAML file for the PV:**

@@ -133,7 +132,6 @@ Procedure

    **Clusters of v1.9**

-   - .. _cce_10_0326__li154036581589:

      **Example YAML file for the PV:**
diff --git a/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/overview.rst b/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/overview.rst
index c05a7c4..f82faa1 100644
--- a/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/overview.rst
+++ b/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/overview.rst
@@ -9,9 +9,7 @@ CCE allows you to mount a volume created from an Object Storage Service (OBS) bu

 .. figure:: /_static/images/en-us_image_0000001249023453.png
-   :alt: **Figure 1** Mounting OBS volumes to CCE
-
-   **Figure 1** Mounting OBS volumes to CCE
+   :alt:

 Storage Class
 -------------
diff --git a/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_file_system.rst b/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_file_system.rst
index bed800b..4518ee7 100644
--- a/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_file_system.rst
+++ b/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_file_system.rst
@@ -31,7 +31,6 @@ Procedure

    **Clusters from v1.11 to v1.13**

-   - .. _cce_10_0319__li1252510101515:

      **Example YAML file for the PV:**

@@ -127,7 +126,6 @@ Procedure

    **Clusters of v1.9**

-   - .. 
_cce_10_0319__li10858156164514:

      **Example YAML file for the PV:**
diff --git a/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/overview.rst b/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/overview.rst
index 6567c03..01bba86 100644
--- a/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/overview.rst
+++ b/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/overview.rst
@@ -9,9 +9,7 @@ CCE allows you to mount a volume created from a Scalable File Service (SFS) file

 .. figure:: /_static/images/en-us_image_0000001201823500.png
-   :alt: **Figure 1** Mounting SFS volumes to CCE
-
-   **Figure 1** Mounting SFS volumes to CCE
+   :alt:

 Description
 -----------
diff --git a/umn/source/storage_flexvolume/using_sfs_turbo_file_systems_as_storage_volumes/overview.rst b/umn/source/storage_flexvolume/using_sfs_turbo_file_systems_as_storage_volumes/overview.rst
index 68c379d..6c0910b 100644
--- a/umn/source/storage_flexvolume/using_sfs_turbo_file_systems_as_storage_volumes/overview.rst
+++ b/umn/source/storage_flexvolume/using_sfs_turbo_file_systems_as_storage_volumes/overview.rst
@@ -9,9 +9,7 @@ CCE allows you to mount a volume created from an SFS Turbo file system to a cont

 .. figure:: /_static/images/en-us_image_0000001202103502.png
-   :alt: **Figure 1** Mounting SFS Turbo volumes to CCE
-
-   **Figure 1** Mounting SFS Turbo volumes to CCE
+   :alt:

 Description
 -----------
diff --git a/umn/source/workloads/configuring_a_container/scheduling_policy_affinity_anti-affinity.rst b/umn/source/workloads/configuring_a_container/scheduling_policy_affinity_anti-affinity.rst
index 8952f33..0446301 100644
--- a/umn/source/workloads/configuring_a_container/scheduling_policy_affinity_anti-affinity.rst
+++ b/umn/source/workloads/configuring_a_container/scheduling_policy_affinity_anti-affinity.rst
@@ -240,9 +240,7 @@ In the preceding example, the node scheduling priority is as follows. Nodes with

 .. 
figure:: /_static/images/en-us_image_0000001202101148.png
-   :alt: **Figure 1** Scheduling priority
-
-   **Figure 1** Scheduling priority
+   :alt:

 Workload Affinity (podAffinity)
 -------------------------------
diff --git a/umn/source/workloads/configuring_a_container/setting_container_lifecycle_parameters.rst b/umn/source/workloads/configuring_a_container/setting_container_lifecycle_parameters.rst
index 1700061..2378297 100644
--- a/umn/source/workloads/configuring_a_container/setting_container_lifecycle_parameters.rst
+++ b/umn/source/workloads/configuring_a_container/setting_container_lifecycle_parameters.rst
@@ -16,7 +16,6 @@ CCE provides the following lifecycle callback functions:

 - **Post-Start**: executed immediately after a container is started. For details, see :ref:`Post-Start Processing `.
 - **Pre-Stop**: executed before a container is stopped. The pre-stop processing function helps you ensure that the services running on the pods can be completed in advance in the case of pod upgrade or deletion. For details, see :ref:`Pre-Stop Processing `.

-.. _cce_10_0105__section54912655316:

 Startup Commands
 ----------------

@@ -62,7 +61,6 @@ If the commands and arguments used to run a container are set during application

 | | If there are multiple arguments, separate them in different lines. |
 +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+

-.. _cce_10_0105__section15243544163715:

 Post-Start Processing
 ---------------------

@@ -95,7 +93,6 @@ Post-Start Processing

 | | - **Host**: (optional) IP address of the request. The default value is the IP address of the node where the container resides.
|
 +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

-.. _cce_10_0105__section2334114473712:

 Pre-Stop Processing
 -------------------
diff --git a/umn/source/workloads/configuring_a_container/using_a_third-party_image.rst b/umn/source/workloads/configuring_a_container/using_a_third-party_image.rst
index 026dac6..49a1e66 100644
--- a/umn/source/workloads/configuring_a_container/using_a_third-party_image.rst
+++ b/umn/source/workloads/configuring_a_container/using_a_third-party_image.rst
@@ -20,7 +20,6 @@ The node where the workload is running is accessible from public networks.

 Using the Console
 -----------------

-#. .. _cce_10_0009__li16481144064414:

    Create a secret for accessing a third-party image repository.
diff --git a/umn/source/workloads/cpu_core_binding/binding_cpu_cores.rst b/umn/source/workloads/cpu_core_binding/binding_cpu_cores.rst
index 754725a..121f407 100644
--- a/umn/source/workloads/cpu_core_binding/binding_cpu_cores.rst
+++ b/umn/source/workloads/cpu_core_binding/binding_cpu_cores.rst
@@ -27,7 +27,6 @@ Prerequisites:

 You can use :ref:`Scheduling Policy (Affinity/Anti-affinity) ` to schedule the configured pods to the nodes where the static CPU policy is enabled. In this way, cores can be bound.

-.. 
_cce_10_0351__section173918176434:

 Enabling the CPU Management Policy
 ----------------------------------
diff --git a/umn/source/workloads/creating_a_cron_job.rst b/umn/source/workloads/creating_a_cron_job.rst
index 1fb71f1..0ecf1d2 100644
--- a/umn/source/workloads/creating_a_cron_job.rst
+++ b/umn/source/workloads/creating_a_cron_job.rst
@@ -66,8 +66,8 @@ Using the CCE Console

 - **Policy Settings**: specifies when a new cron job is executed. Policy settings in YAML are implemented using cron expressions.

   - A cron job is executed at a fixed interval. The unit can be minute, hour, day, or month. For example, if a cron job is executed every 30 minutes, the cron expression is **\*/30 \* \* \* \***, the execution time starts from 0 in the unit range, for example, **00:00:00**, **00:30:00**, **01:00:00**, and **...**.
-  - The cron job is executed at a fixed time (by month). For example, if a cron job is executed at 00:00 on the first day of each month, the cron expression is **0 0 1 \*/1 \***, and the execution time is **\****-01-01 00:00:00**, **\****-02-01 00:00:00**, and **...**.
-  - The cron job is executed at a fixed time (by week). For example, if a cron job is executed at 00:00 every Monday, the cron expression is **0 0 \* \* 1**, and the execution time is **\****-**-01 00:00:00 on Monday**, **\****-**-08 00:00:00 on Monday**, and **...**.
+  - The cron job is executed at a fixed time (by month). For example, if a cron job is executed at 00:00 on the first day of each month, the cron expression is **0 0 1 \*/1 \***, and the execution time is **\***\*-01-01 00:00:00**, **\***\*-02-01 00:00:00**, and **...**.
+  - The cron job is executed at a fixed time (by week). For example, if a cron job is executed at 00:00 every Monday, the cron expression is **0 0 \* \* 1**, and the execution time is **\***\*-\*\*-01 00:00:00 on Monday**, **\***\*-\*\*-08 00:00:00 on Monday**, and **...**.
   - For details about how to use cron expressions, see `cron `__.

 .. 
note::

@@ -181,7 +180,6 @@ Related Operations

 After a cron job is created, you can perform operations listed in :ref:`Table 1 `.

-.. _cce_10_0151__t6d520710097a4ee098eae42bcb508608:

 .. table:: **Table 1** Other operations
diff --git a/umn/source/workloads/creating_a_deployment.rst b/umn/source/workloads/creating_a_deployment.rst
index 3972467..c7c0bee 100644
--- a/umn/source/workloads/creating_a_deployment.rst
+++ b/umn/source/workloads/creating_a_deployment.rst
@@ -77,7 +77,6 @@ Using the CCE Console

 #. Click **Create Workload** in the lower right corner.

-.. _cce_10_0047__section155246177178:

 Using kubectl
 -------------

@@ -119,7 +118,6 @@ The following procedure uses Nginx as an example to describe how to create a wor

 For details about these parameters, see :ref:`Table 1 `.

-   .. _cce_10_0047__table132326831016:

    .. table:: **Table 1** Deployment YAML parameters
diff --git a/umn/source/workloads/creating_a_job.rst b/umn/source/workloads/creating_a_job.rst
index 1e54800..6e8e4df 100644
--- a/umn/source/workloads/creating_a_job.rst
+++ b/umn/source/workloads/creating_a_job.rst
@@ -71,7 +71,6 @@ Using the CCE Console

 #. Click **Create Workload** in the lower right corner.

-.. _cce_10_0150__section450152719412:

 Using kubectl
 -------------

@@ -176,7 +175,6 @@ Related Operations

 After a one-off job is created, you can perform operations listed in :ref:`Table 2 `.

-.. _cce_10_0150__t84075653e7544394939d13740fad0c20:

 .. table:: **Table 2** Other operations
diff --git a/umn/source/workloads/gpu_scheduling.rst b/umn/source/workloads/gpu_scheduling.rst
index e571069..ca47222 100644
--- a/umn/source/workloads/gpu_scheduling.rst
+++ b/umn/source/workloads/gpu_scheduling.rst
@@ -86,9 +86,7 @@ To use GPUs on the CCE console, select the GPU quota and specify the percentage

 .. 
figure:: /_static/images/en-us_image_0000001397733101.png
-   :alt: **Figure 1** Using GPUs
-
-   **Figure 1** Using GPUs
+   :alt:

 GPU Node Labels
 ---------------
diff --git a/umn/source/workloads/managing_workloads_and_jobs.rst b/umn/source/workloads/managing_workloads_and_jobs.rst
index 93652dc..822e1fd 100644
--- a/umn/source/workloads/managing_workloads_and_jobs.rst
+++ b/umn/source/workloads/managing_workloads_and_jobs.rst
@@ -38,7 +38,6 @@ After a workload is created, you can upgrade, monitor, roll back, or delete the

 | Stop/Start | You can only start or stop a cron job. |
 +-------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

-.. _cce_10_0007__section7200124254011:

 Monitoring a Workload
 ---------------------

@@ -49,7 +48,6 @@ You can view the CPU and memory usage of Deployments and pods on the CCE console

 #. Click the **Deployments** tab and click **Monitor** of the target workload. On the page that is displayed, you can view CPU usage and memory usage of the workload.
 #. Click the workload name. On the **Pods** tab page, click the **Monitor** of the target pod to view its CPU and memory usage.

-.. _cce_10_0007__cce_01_0007_section51511928173817:

 Viewing Logs
 ------------

@@ -62,7 +60,6 @@ You can view logs of Deployments, StatefulSets, DaemonSets, and jobs. This secti

 On the displayed **View Log** window, you can view logs by time.

-.. _cce_10_0007__cce_01_0007_section17604174417381:

 Upgrading a Workload
 --------------------

@@ -84,7 +81,6 @@ Before replacing an image or image version, upload the new image to the SWR serv

 #. Upgrade the workload based on service requirements. The method for setting parameter is the same as that for creating a workload.
 #. 
After the update is complete, click **Upgrade Workload**, manually confirm the YAML file, and submit the upgrade. -.. _cce_10_0007__cce_01_0007_section21669213390: Editing a YAML file ------------------- @@ -96,7 +92,6 @@ You can modify and download the YAML files of Deployments, StatefulSets, DaemonS #. Click **Edit** and then **OK** to save the changes. #. (Optional) In the **Edit YAML** window, click **Download** to download the YAML file. -.. _cce_10_0007__cce_01_0007_section13324541124815: Rolling Back a Workload (Available Only for Deployments) -------------------------------------------------------- @@ -107,7 +102,6 @@ CCE records the release history of all Deployments. You can roll back a Deployme #. Click the **Deployments** tab, choose **More > Roll Back** in the **Operation** column of the target workload. #. Switch to the **Change History** tab page, click **Roll Back to This Version** of the target version, manually confirm the YAML file, and click **OK**. -.. _cce_10_0007__section132451237607: Redeploying a Workload ---------------------- @@ -118,7 +112,6 @@ After you redeploy a workload, all pods in the workload will be restarted. This #. Click the **Deployments** tab and choose **More** > **Redeploy** in the **Operation** column of the target workload. #. In the dialog box that is displayed, click **Yes** to redeploy the workload. -.. _cce_10_0007__cce_01_0007_section12087915401: Disabling/Enabling Upgrade (Available Only for Deployments) ----------------------------------------------------------- @@ -139,7 +132,6 @@ Only Deployments support this operation. #. Click the **Deployments** tab and choose **More** > **Disable/Enable Upgrade** in the **Operation** column of the workload. #. In the dialog box that is displayed, click **Yes**. -.. _cce_10_0007__cce_01_0007_section5931193015488: Managing Labels --------------- @@ -158,9 +150,7 @@ If you set **key** to **role** and **value** to **frontend** when using workload .. 
figure:: /_static/images/en-us_image_0000001408895746.png - :alt: **Figure 1** Label example - - **Figure 1** Label example + :alt: #. Log in to the CCE console, go to an existing cluster, and choose **Workloads** in the navigation pane. #. Click the **Deployments** tab and choose **More** > **Manage Label** in the **Operation** column of the target workload. @@ -168,9 +158,8 @@ If you set **key** to **role** and **value** to **frontend** when using workload .. note:: - A key-value pair must contain 1 to 63 characters starting and ending with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. + A key-value pair must contain 1 to 63 characters starting and ending with a letter or digit. Only letters, digits, hyphens (-), underscores (\_), and periods (.) are allowed. -.. _cce_10_0007__cce_01_0007_section14423721191418: Deleting a Workload/Job ----------------------- @@ -190,7 +179,6 @@ You can delete a workload or job that is no longer needed. Deleted workloads or - If the node where the pod is located is unavailable or shut down and the workload cannot be deleted, you can forcibly delete the pod from the pod list on the workload details page. - Ensure that the storage volumes to be deleted are not used by other workloads. If these volumes are imported or have snapshots, you can only unbind them. -.. _cce_10_0007__cce_01_0007_section1947616516301: Viewing Events -------------- diff --git a/umn/source/workloads/overview.rst b/umn/source/workloads/overview.rst index fadd735..14192e5 100644 --- a/umn/source/workloads/overview.rst +++ b/umn/source/workloads/overview.rst @@ -18,12 +18,9 @@ Pods can be used in either of the following ways: - Multiple containers that need to be coupled and share resources run in a pod. In this scenario, an application contains a main container and several sidecar containers, as shown in :ref:`Figure 1 `. 
For example, the main container is a web server that provides file services from a fixed directory, and a sidecar container periodically downloads files to the directory. - .. _cce_10_0006__en-us_topic_0254767870_fig347141918551: .. figure:: /_static/images/en-us_image_0258392378.png - :alt: **Figure 1** Pod - - **Figure 1** Pod + :alt: In Kubernetes, pods are rarely created directly. Instead, controllers such as Deployments and jobs, are used to manage pods. Controllers can create and manage multiple pods, and provide replica management, rolling upgrade, and self-healing capabilities. A controller generally uses a pod template to create corresponding pods. @@ -34,9 +31,7 @@ A pod is the smallest and simplest unit that you create or deploy in Kubernetes. .. figure:: /_static/images/en-us_image_0258095884.png - :alt: **Figure 2** Relationship between a Deployment and pods - - **Figure 2** Relationship between a Deployment and pods + :alt: A Deployment can contain one or more pods. These pods have the same role. Therefore, the system automatically distributes requests to multiple pods of a Deployment. @@ -73,9 +68,7 @@ DaemonSets are closely related to nodes. If a node becomes faulty, the DaemonSet .. figure:: /_static/images/en-us_image_0258871213.png - :alt: **Figure 3** DaemonSet - - **Figure 3** DaemonSet + :alt: Job and Cron Job ---------------- diff --git a/umn/source/workloads/security_group_policies.rst b/umn/source/workloads/security_group_policies.rst index 127a1df..7ac082f 100644 --- a/umn/source/workloads/security_group_policies.rst +++ b/umn/source/workloads/security_group_policies.rst @@ -24,7 +24,6 @@ Using the Console #. Set the parameters as described in :ref:`Table 1 `. - .. _cce_10_0288__table572616321913: .. table:: **Table 1** Configuration parameters @@ -79,7 +78,6 @@ Using kubectl :ref:`Table 2 ` describes the parameters in the YAML file. - .. _cce_10_0288__table132326831016: .. 
table:: **Table 2** Description diff --git a/umn/source/workloads/volcano_scheduling/hybrid_deployment_of_online_and_offline_jobs.rst b/umn/source/workloads/volcano_scheduling/hybrid_deployment_of_online_and_offline_jobs.rst index 49a4ad0..072a72a 100644 --- a/umn/source/workloads/volcano_scheduling/hybrid_deployment_of_online_and_offline_jobs.rst +++ b/umn/source/workloads/volcano_scheduling/hybrid_deployment_of_online_and_offline_jobs.rst @@ -24,9 +24,7 @@ Hybrid deployment of online and offline jobs in a cluster can better utilize clu .. figure:: /_static/images/en-us_image_0000001378942548.png - :alt: **Figure 1** Resource oversubscription - - **Figure 1** Resource oversubscription + :alt: Oversubscription for Hybrid Deployment -------------------------------------- @@ -107,7 +105,6 @@ Notes and Constraints - If **cpu-manager-policy** is set to static core binding on a node, do not assign the QoS class of Guaranteed to offline pods. If core binding is required, change the pods to online pods. Otherwise, offline pods may occupy the CPUs of online pods, causing online pod startup failures, and offline pods fail to be started although they are successfully scheduled. - If **cpu-manager-policy** is set to static core binding on a node, do not bind cores to all online pods. Otherwise, online pods occupy all CPU or memory resources, leaving a small number of oversubscribed resources. -.. _cce_10_0384__section1940910414220: Configuring Oversubscription Labels for Scheduling -------------------------------------------------- @@ -116,7 +113,6 @@ If the label **volcano.sh/oversubscription=true** is configured for a node in th Ensure that you have correctly configure labels because the scheduler does not check the add-on and node configurations. -.. _cce_10_0384__table152481219311: .. table:: **Table 1** Configuring oversubscription labels for scheduling @@ -200,7 +196,6 @@ Using Hybrid Deployment Annotations: ... volcano.sh/evicting-cpu-high-watermark: 70 - .. 
_cce_10_0384__table1853397191112: .. table:: **Table 2** Node oversubscription annotations