Update content

OpenTelekomCloud Proposal Bot 2023-05-25 16:25:59 +00:00
parent 687ed90baf
commit e9a09373b4
126 changed files with 216 additions and 639 deletions

View File

@ -128,7 +128,6 @@ Installing the Add-on
#. When the configuration is complete, click **Install**.
.. _cce_10_0154__section59676731017:
Description of the Scale-In Cool-Down Period
--------------------------------------------

View File

@ -136,7 +136,6 @@ This add-on has been installed by default. If it is uninstalled due to some reas
| | } |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. _cce_10_0129__table1420814384015:
.. table:: **Table 2** Default plugin configuration of the active zone of coredns
@ -195,6 +194,4 @@ DNS policies can be set on a per-pod basis. Currently, Kubernetes supports four
.. figure:: /_static/images/en-us_image_0000001199021308.png
:alt: **Figure 1** Routing
**Figure 1** Routing
:alt:

View File

@ -54,7 +54,6 @@ If GPU information is returned, the device is available and the add-on is succes
|image1|
.. _cce_10_0141__section95451728192112:
Obtaining the Driver Link from Public Network
---------------------------------------------
@ -66,21 +65,15 @@ Obtaining the Driver Link from Public Network
4. Select the driver information on the **NVIDIA Driver Downloads** page, as shown in :ref:`Figure 1 <cce_10_0141__fig11696366517>`. **Operating System** must be **Linux 64-bit**.
.. _cce_10_0141__fig11696366517:
.. figure:: /_static/images/en-us_image_0000001531533921.png
:alt: **Figure 1** Setting parameters
**Figure 1** Setting parameters
:alt:
5. After confirming the driver information, click **SEARCH**. A page is displayed, showing the driver information, as shown in :ref:`Figure 2 <cce_10_0141__fig7873421145213>`. Click **DOWNLOAD**.
.. _cce_10_0141__fig7873421145213:
.. figure:: /_static/images/en-us_image_0000001531373685.png
:alt: **Figure 2** Driver information
**Figure 2** Driver information
:alt:
6. Obtain the driver link in either of the following ways:
@ -88,11 +81,8 @@ Obtaining the Driver Link from Public Network
- Method 2: As shown in :ref:`Figure 3 <cce_10_0141__fig5901194614534>`, click **AGREE & DOWNLOAD** to download the driver. Then, upload the driver to OBS and record the OBS URL. By using this method, you do not need to bind an EIP to GPU nodes.
.. _cce_10_0141__fig5901194614534:
.. figure:: /_static/images/en-us_image_0000001531533045.png
:alt: **Figure 3** Obtaining the link
**Figure 3** Obtaining the link
:alt:
.. |image1| image:: /_static/images/en-us_image_0000001238225460.png

View File

@ -276,7 +276,6 @@ Check items cover events and statuses.
| | | Threshold: 90% |
+-----------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. _cce_10_0132__section1471610580474:
Node-problem-controller Fault Isolation
---------------------------------------
@ -330,7 +329,6 @@ You can modify **add-onnpc.customConditionToTaint** according to the following t
| Npc.affinity | Node affinity of the controller | N/A |
+--------------------------------+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+
.. _cce_10_0132__table147438134911:
.. table:: **Table 7** Fault isolation rule configuration

View File

@ -76,7 +76,6 @@ Installing the Add-on
server_cert: ''
server_key: ''
.. _cce_10_0193__table562185146:
.. table:: **Table 2** Volcano Plugins

View File

@ -132,7 +132,6 @@ The following is a YAML example of a node scaling policy:
targetNodepoolIds:
- 7d48eca7-3419-11ea-bc29-0255ac1001a8
.. _cce_10_0209__table18763092201:
.. table:: **Table 1** Key parameters

View File

@ -37,12 +37,9 @@ autoscaler Architecture
:ref:`Figure 1 <cce_10_0296__fig114831750115719>` shows the autoscaler architecture and its core modules:
.. _cce_10_0296__fig114831750115719:
.. figure:: /_static/images/en-us_image_0000001199501290.png
:alt: **Figure 1** autoscaler architecture
**Figure 1** autoscaler architecture
:alt:
**Description**

View File

@ -30,7 +30,6 @@ Procedure
#. Set policy parameters.
.. _cce_10_0208__table8638121213265:
.. table:: **Table 1** HPA policy parameters

View File

@ -21,12 +21,9 @@ HPA and CA work with each other. HPA requires sufficient cluster resources for s
As shown in :ref:`Figure 1 <cce_10_0300__cce_bestpractice_00282_fig6540132372015>`, HPA performs scale-out based on the monitoring metrics. When cluster resources are insufficient, newly created pods are in Pending state. CA then checks these pending pods and selects the most appropriate node pool based on the configured scaling policy to scale out the node pool.
.. _cce_10_0300__cce_bestpractice_00282_fig6540132372015:
.. figure:: /_static/images/en-us_image_0000001290111529.png
:alt: **Figure 1** HPA and CA working flows
**Figure 1** HPA and CA working flows
:alt:
Using HPA and CA can easily implement auto scaling in most scenarios. In addition, the scaling process of nodes and pods can be easily observed.
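As a quick illustration of the HPA side of this flow, an autoscaler can be attached to an existing Deployment with kubectl. This is only a sketch; the Deployment name (**hpa-example**) and the thresholds are placeholders:

   # Scale between 1 and 10 replicas, targeting 50% average CPU utilization
   kubectl autoscale deployment hpa-example --cpu-percent=50 --min=1 --max=10
   # Watch the HPA status and the current/target metric values
   kubectl get hpa hpa-example --watch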
@ -81,13 +78,11 @@ Preparations
docker build -t hpa-example:latest .
d. .. _cce_10_0300__cce_bestpractice_00282_li108181514125:
(Optional) Log in to the SWR console, choose **Organization Management** in the navigation pane, and click **Create Organization** in the upper right corner to create an organization.
Skip this step if you already have an organization.
e. .. _cce_10_0300__cce_bestpractice_00282_li187221141362:
In the navigation pane, choose **My Images** and then click **Upload Through Client**. On the page displayed, click **Generate a temporary login command** and click |image1| to copy the command.

View File

@ -21,12 +21,9 @@ HPA and CA work with each other. HPA requires sufficient cluster resources for s
As shown in :ref:`Figure 1 <cce_bestpractice_00282__fig6540132372015>`, HPA performs scale-out based on the monitoring metrics. When cluster resources are insufficient, newly created pods are in Pending state. CA then checks these pending pods and selects the most appropriate node pool based on the configured scaling policy to scale out the node pool.
.. _cce_bestpractice_00282__fig6540132372015:
.. figure:: /_static/images/en-us_image_0000001290111529.png
:alt: **Figure 1** HPA and CA working flows
**Figure 1** HPA and CA working flows
:alt:
Using HPA and CA can easily implement auto scaling in most scenarios. In addition, the scaling process of nodes and pods can be easily observed.
@ -81,13 +78,11 @@ Preparations
docker build -t hpa-example:latest .
d. .. _cce_bestpractice_00282__li108181514125:
(Optional) Log in to the SWR console, choose **Organization Management** in the navigation pane, and click **Create Organization** in the upper right corner to create an organization.
Skip this step if you already have an organization.
e. .. _cce_bestpractice_00282__li187221141362:
In the navigation pane, choose **My Images** and then click **Upload Through Client**. On the page displayed, click **Generate a temporary login command** and click |image1| to copy the command.

View File

@ -17,9 +17,7 @@ This section describes how to configure access to multiple clusters by modifying
.. figure:: /_static/images/en-us_image_0261820020.png
:alt: **Figure 1** Using kubectl to connect to multiple clusters
**Figure 1** Using kubectl to connect to multiple clusters
:alt:
Prerequisites
-------------

View File

@ -65,7 +65,6 @@ Procedure
| spec | Mandatory | Detailed description of the pod. For details, see :ref:`Table 2 <cce_bestpractice_00226__en-us_topic_0226102200_en-us_topic_0179003345_table33531919193>`. |
+------------+--------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. _cce_bestpractice_00226__en-us_topic_0226102200_en-us_topic_0179003345_table33531919193:
.. table:: **Table 2** spec field description
@ -77,7 +76,6 @@ Procedure
| containers | Mandatory | For details, see :ref:`Table 3 <cce_bestpractice_00226__en-us_topic_0226102200_en-us_topic_0179003345_table196127172016>`. |
+-------------+--------------------+----------------------------------------------------------------------------------------------------------------------------+
.. _cce_bestpractice_00226__en-us_topic_0226102200_en-us_topic_0179003345_table196127172016:
.. table:: **Table 3** containers field description

View File

@ -14,9 +14,7 @@ GitLab provides powerful CI/CD functions and is widely used in software developm
.. figure:: /_static/images/en-us_image_0000001291567729.png
:alt: **Figure 1** GitLab CI/CD process
**Figure 1** GitLab CI/CD process
:alt:
This section describes how to interconnect GitLab with SWR and CCE for CI/CD.
@ -111,7 +109,6 @@ Log in to `GitLab <https://www.gitlab.com/>`__, choose **Settings** > **CI/CD**
The command output displays the login key pair.
.. _cce_bestpractice_0324__section171541431101910:
Creating a Pipeline
-------------------

View File

@ -54,7 +54,6 @@ In this example, MinIO is installed on a temporary ECS outside the cluster.
wget https://dl.minio.io/server/minio/release/linux-amd64/minio
chmod +x minio
#. .. _cce_bestpractice_0310__li126129251432:
Set the username and password of MinIO.
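A minimal sketch of this step, assuming the credentials are passed through environment variables before the server is started. The values, data directory, and console port are placeholders, and older MinIO releases use **MINIO_ACCESS_KEY**/**MINIO_SECRET_KEY** instead:

   export MINIO_ROOT_USER=minio          # username (placeholder)
   export MINIO_ROOT_PASSWORD=minio1234  # password (placeholder, at least 8 characters)
   ./minio server /opt/minio/data --console-address ":30840"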
@ -79,7 +78,6 @@ In this example, MinIO is installed on a temporary ECS outside the cluster.
#. Use a browser to access http://{*EIP of the node where MinIO resides*}:30840. The MinIO console page is displayed.
.. _cce_bestpractice_0310__section138392220432:
Installing Velero
-----------------
@ -108,7 +106,6 @@ Download the latest, stable binary file from https://github.com/vmware-tanzu/vel
tar -xvf velero-v1.7.0-linux-amd64.tar.gz
cp ./velero-v1.7.0-linux-amd64/velero /usr/local/bin
#. .. _cce_bestpractice_0310__li197871715322:
Create the access key file **credentials-velero** for the backup object storage.
@ -124,7 +121,6 @@ Download the latest, stable binary file from https://github.com/vmware-tanzu/vel
aws_access_key_id = {AK}
aws_secret_access_key = {SK}
#. .. _cce_bestpractice_0310__li1722825643415:
Deploy the Velero server. Change the value of **--bucket** to the name of the created object storage bucket. In this example, the bucket name is **velero**. For more information about custom installation parameters, see `Customize Velero Install <https://velero.io/docs/v1.7/customize-installation/>`__.
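A sketch of what the resulting command could look like when MinIO is used as the S3-compatible backend. The plugin version, EIP, and S3 API port are placeholders:

   velero install \
     --provider aws \
     --plugins velero/velero-plugin-for-aws:v1.3.0 \
     --bucket velero \
     --secret-file ./credentials-velero \
     --use-restic \
     --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://{EIP}:9000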

View File

@ -21,12 +21,10 @@ Prerequisites
- CCE does not support EVS disks of the **ReadWriteMany** type. If resources of this type exist in the source cluster, change the storage type to **ReadWriteOnce**.
- Velero integrates the Restic tool to back up and restore storage volumes. Currently, the storage volumes of the HostPath type are not supported. For details, see `Restic Restrictions <https://velero.io/docs/v1.7/restic/#limitations>`__. If you need to back up storage volumes of this type, replace the hostPath volumes with local volumes by referring to :ref:`Storage Volumes of the HostPath Type Cannot Be Backed Up <cce_bestpractice_0314__section11197194820367>`. If a backup task involves storage of the HostPath type, the storage volumes of this type will be automatically skipped and a warning message will be generated. This will not cause a backup failure.
.. _cce_bestpractice_0311__section750718193288:
Backing Up Applications in the Source Cluster
---------------------------------------------
#. .. _cce_bestpractice_0311__li686918502812:
(Optional) If you need to back up the data of a specified storage volume in the pod, add an annotation to the pod. The annotation template is as follows:
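With Velero's Restic integration, volumes are typically opted in per pod through the **backup.velero.io/backup-volumes** annotation. A hedged sketch, with the namespace, pod, and volume names as placeholders:

   kubectl -n default annotate pod/my-app-0 backup.velero.io/backup-volumes=data-volume,log-volume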
@ -100,7 +98,6 @@ Backing Up Applications in the Source Cluster
|image1|
.. _cce_bestpractice_0311__section482103142819:
Restoring Applications in the Target Cluster
--------------------------------------------

View File

@ -9,9 +9,8 @@ CCE allows you to customize cluster resources to meet various service requiremen
.. important::
After a cluster is created, the resource parameters marked with asterisks (``*``) in :ref:`Table 1 <cce_bestpractice_0308__table1841815113913>` cannot be modified.
After a cluster is created, the resource parameters marked with asterisks (\``*`\`) in :ref:`Table 1 <cce_bestpractice_0308__table1841815113913>` cannot be modified.
.. _cce_bestpractice_0308__table1841815113913:
.. table:: **Table 1** CCE cluster planning

View File

@ -14,7 +14,6 @@ In terms of performance, an on-premises cluster has poor scalability due to its
Now you can address the preceding challenges by using CCE, a service that allows easy cluster management and flexible scaling, integrated with application service mesh and Helm charts to simplify cluster O&M and reduce operations costs. CCE is easy to use and delivers high performance, security, reliability, openness, and compatibility. This section describes the solution and procedure for migrating on-premises clusters to CCE.
.. _cce_bestpractice_0307__section96147345128:
Migration Solution
------------------
@ -27,7 +26,6 @@ This section describes a cluster migration solution, which applies to the follow
Before the migration, you need to analyze all resources in the source clusters and then determine the migration solution. Resources that can be migrated include resources inside and outside the clusters, as listed in the following table.
.. _cce_bestpractice_0307__table1126932541820:
.. table:: **Table 1** Resources that can be migrated
@ -55,12 +53,9 @@ Before the migration, you need to analyze all resources in the source clusters a
:ref:`Figure 1 <cce_bestpractice_0307__fig203631140201419>` shows the migration process. You can migrate resources outside a cluster as required.
.. _cce_bestpractice_0307__fig203631140201419:
.. figure:: /_static/images/en-us_image_0000001172392670.png
:alt: **Figure 1** Migration solution diagram
**Figure 1** Migration solution diagram
:alt:
Migration Process
-----------------

View File

@ -5,7 +5,6 @@
Troubleshooting
===============
.. _cce_bestpractice_0314__section11197194820367:
Storage Volumes of the HostPath Type Cannot Be Backed Up
--------------------------------------------------------
@ -70,7 +69,6 @@ Both HostPath and Local volumes are local storage volumes. However, the Restic t
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mysql-pv 5Gi RWO Delete Available local 3s
.. _cce_bestpractice_0314__section321054511332:
Backup Tool Resources Are Insufficient
--------------------------------------

View File

@ -5,7 +5,6 @@
Updating Resources Accordingly
==============================
.. _cce_bestpractice_0312__section7125750134820:
Updating Images
---------------
@ -30,7 +29,6 @@ The WordPress and MySQL images used in this example can be pulled from SWR. Ther
#. Check the running status of the workload.
.. _cce_bestpractice_0312__section41282507482:
Updating Services
-----------------
@ -57,7 +55,6 @@ After the cluster is migrated, the Service of the source cluster may fail to tak
#. Use a browser to check whether the Service is available.
.. _cce_bestpractice_0312__section746195321414:
Updating the Storage Class
--------------------------
@ -173,7 +170,6 @@ As the storage infrastructures of clusters may be different, storage volumes can
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc Bound pvc-4c8e655a-1dbc-4897-ae6c-446b502f5e77 5Gi RWX local 13s
.. _cce_bestpractice_0312__section728213614323:
Updating Databases
------------------

View File

@ -66,7 +66,6 @@ To obtain source IP addresses, perform the following steps:
g. To modify a listener, locate the listener and click |image3| on the right of its name.
h. Enable **Obtain Client IP Address**.
.. _cce_bestpractice_00035__section6340152911914:
NodePort
--------

View File

@ -27,9 +27,7 @@ Basic Concepts
.. figure:: /_static/images/en-us_image_0261818822.png
:alt: **Figure 1** VPC CIDR block architecture
**Figure 1** VPC CIDR block architecture
:alt:
By default, ECSs in all subnets of the same VPC can communicate with one another, while ECSs in different VPCs cannot communicate with each other.
@ -59,12 +57,9 @@ Single-VPC Single-Cluster Scenarios
- Container CIDR Block: cannot overlap with the subnet CIDR block.
- Service CIDR Block: cannot overlap with the subnet CIDR block or the container CIDR block.
.. _cce_bestpractice_00004__en-us_topic_0099587154_fig15791152874920:
.. figure:: /_static/images/en-us_image_0000001392318380.png
:alt: **Figure 2** Network CIDR block planning in the single-VPC single-cluster scenario (CCE cluster)
**Figure 2** Network CIDR block planning in the single-VPC single-cluster scenario (CCE cluster)
:alt:
:ref:`Figure 3 <cce_bestpractice_00004__fig19746213285>` shows the CIDR block planning for a **CCE Turbo cluster** (cloud native network 2.0).
@ -73,12 +68,9 @@ Single-VPC Single-Cluster Scenarios
- Container Subnet CIDR Block: The container subnet is included in the VPC CIDR block and can overlap with the subnet CIDR block or even be the same as the subnet CIDR block. Note that the container subnet size determines the maximum number of containers in the cluster because IP addresses in the VPC are directly allocated to containers. After a cluster is created, you can only add container subnets but cannot delete them. You are advised to set a larger IP address segment for the container subnet to prevent insufficient container IP addresses.
- Service CIDR Block: cannot overlap with the subnet CIDR block or the container CIDR block.
.. _cce_bestpractice_00004__fig19746213285:
.. figure:: /_static/images/en-us_image_0000001392280374.png
:alt: **Figure 3** CIDR block planning in the single-VPC single-cluster scenario (CCE Turbo cluster)
**Figure 3** CIDR block planning in the single-VPC single-cluster scenario (CCE Turbo cluster)
:alt:
**Single-VPC Multi-Cluster Scenarios**
--------------------------------------
@ -92,12 +84,9 @@ Pod packets are forwarded through VPC routes. CCE automatically configures a rou
- Container CIDR Block: If multiple VPC network model clusters exist in a single VPC, the container CIDR blocks of all clusters cannot overlap because the clusters use the same routing table. In this case, CCE clusters are partially interconnected. A pod of a cluster can directly access the pods of another cluster, but cannot access the Services of the cluster.
- Service CIDR Block: can be used only in clusters. Therefore, the service CIDR blocks of different clusters can overlap, but cannot overlap with the subnet CIDR block and container CIDR block of the cluster to which the clusters belong.
.. _cce_bestpractice_00004__en-us_topic_0099587154_fig69527530400:
.. figure:: /_static/images/en-us_image_0261818824.png
:alt: **Figure 4** VPC network - multi-cluster scenario
**Figure 4** VPC network - multi-cluster scenario
:alt:
**Tunnel Network**
@ -108,12 +97,9 @@ Though at some cost of performance, the tunnel encapsulation enables higher inte
- Container CIDR Block: The container CIDR blocks of all clusters can overlap. In this case, pods in different clusters cannot be directly accessed using IP addresses. It is recommended that ELB be used for the cross-cluster access between containers.
- Service CIDR Block: can be used only in clusters. Therefore, the service CIDR blocks of different clusters can overlap, but cannot overlap with the subnet CIDR block and container CIDR block of the cluster to which the clusters belong.
.. _cce_bestpractice_00004__en-us_topic_0099587154_fig8672112184219:
.. figure:: /_static/images/en-us_image_0261818885.png
:alt: **Figure 5** Tunnel network - multi-cluster scenario
**Figure 5** Tunnel network - multi-cluster scenario
:alt:
**Cloud native network 2.0 network model** (CCE Turbo cluster)
@ -126,9 +112,7 @@ In this mode, container IP addresses are allocated from the VPC CIDR block. ELB
.. figure:: /_static/images/en-us_image_0000001392259910.png
:alt: **Figure 6** Cloud native network 2.0 network model - multi-cluster scenario
**Figure 6** Cloud native network 2.0 network model - multi-cluster scenario
:alt:
**Coexistence of Clusters in Multi-Network**
@ -148,9 +132,7 @@ In the VPC network model, after creating a peering connection, you need to add r
.. figure:: /_static/images/en-us_image_0261818886.png
:alt: **Figure 7** VPC Network - VPC interconnection scenario
**Figure 7** VPC Network - VPC interconnection scenario
:alt:
When creating a VPC peering connection between containers across VPCs, pay attention to the following points:
@ -162,9 +144,7 @@ In the tunnel network model, after creating a peering connection, you need to ad
.. figure:: /_static/images/en-us_image_0000001082048529.png
:alt: **Figure 8** Tunnel network - VPC interconnection scenario
**Figure 8** Tunnel network - VPC interconnection scenario
:alt:
Pay attention to the following:

View File

@ -15,25 +15,19 @@ CCE uses self-proprietary, high-performance container networking add-ons to supp
.. figure:: /_static/images/en-us_image_0000001145545261.png
:alt: **Figure 1** Container tunnel network
**Figure 1** Container tunnel network
:alt:
- **VPC network**: The container network uses VPC routing to integrate with the underlying network. This network model is applicable to performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the route quota in a VPC network. Each node is assigned a CIDR block of a fixed size. VPC networks are free from tunnel encapsulation overhead and outperform container tunnel networks. In addition, as VPC routing includes routes to node IP addresses and container network segment, container pods in the cluster can be directly accessed from outside the cluster.
.. figure:: /_static/images/en-us_image_0261818875.png
:alt: **Figure 2** VPC network
**Figure 2** VPC network
:alt:
- **Cloud Native Network 2.0**: The container network deeply integrates the elastic network interface (ENI) capability of VPC, uses the VPC CIDR block to allocate container addresses, and supports passthrough networking to containers through a load balancer.
.. figure:: /_static/images/en-us_image_0000001352539924.png
:alt: **Figure 3** Cloud Native Network 2.0
**Figure 3** Cloud Native Network 2.0
:alt:
The following table lists the differences between the network models.

View File

@ -18,7 +18,6 @@ Procedure
#. (Optional) Back up data to prevent data loss in case of exceptions.
#. .. _cce_bestpractice_0107__li1219802032512:
Configure a YAML file of the PV in the CSI format according to the PV in the FlexVolume format and associate the PV with the existing storage.
@ -223,7 +222,6 @@ Procedure
| storageClassName | Name of the Kubernetes storage class. Set this field to **csi-sfsturbo** for SFS Turbo volumes. |
+----------------------------------+-------------------------------------------------------------------------------------------------------------------------+
#. .. _cce_bestpractice_0107__li1710710385418:
Configure a YAML file of the PVC in the CSI format according to the PVC in the FlexVolume format and associate the PVC with the PV created in :ref:`2 <cce_bestpractice_0107__li1219802032512>`.
@ -401,7 +399,6 @@ Procedure
| volumeName | Name of the PV. Set this parameter to the name of the static PV created in :ref:`2 <cce_bestpractice_0107__li1219802032512>`. |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
#. .. _cce_bestpractice_0107__li487255772614:
Upgrade the workload to use a new PVC.

View File

@ -12,12 +12,9 @@ Scenario
The CCE cluster of a SaaS service provider needs to be mounted with the OBS bucket of a third-party tenant, as shown in :ref:`Figure 1 <cce_bestpractice_00199__fig1315433183918>`.
.. _cce_bestpractice_00199__fig1315433183918:
.. figure:: /_static/images/en-us_image_0268523694.png
:alt: **Figure 1** Mounting an OBS bucket of a third-party tenant
**Figure 1** Mounting an OBS bucket of a third-party tenant
:alt:
#. :ref:`The third-party tenant authorizes the SaaS service provider to access the OBS buckets or parallel file systems <cce_bestpractice_00199__section193471249193310>` by setting the bucket policy and bucket ACL.
#. :ref:`The SaaS service provider statically imports the OBS buckets and parallel file systems of the third-party tenant <cce_bestpractice_00199__en-us_topic_0196817407_section155006183017>`.
@ -30,7 +27,6 @@ Precautions
- Only clusters where the everest add-on of v1.1.11 or later has been installed (the cluster version must be v1.15 or later) can be mounted with OBS buckets of third-party tenants.
- The service platform of the SaaS service provider needs to manage the lifecycle of the third-party bucket PVs. When a PVC is deleted separately, the PV is not deleted. Instead, it will be retained. To do so, you need to call the native Kubernetes APIs to create and delete static PVs.
.. _cce_bestpractice_00199__section193471249193310:
Authorizing the SaaS Service Provider to Access the OBS Buckets
---------------------------------------------------------------
@ -46,9 +42,7 @@ The following uses an OBS bucket as an example to describe how to set a bucket p
.. figure:: /_static/images/en-us_image_0000001325377749.png
:alt: **Figure 2** Creating a bucket policy
**Figure 2** Creating a bucket policy
:alt:
- **Policy Mode**: Select **Customized**.
- **Effect**: Select **Allow**.
@ -58,7 +52,6 @@ The following uses an OBS bucket as an example to describe how to set a bucket p
4. In the navigation pane, choose **Permissions** > **Bucket ACLs**. In the right pane, click **Add**. Enter the account ID or account name of the authorized user, select **Read** and **Write** for **Access to Bucket**, select **Read** and **Write** for **Access to ACL**, and click **OK**.
.. _cce_bestpractice_00199__en-us_topic_0196817407_section155006183017:
Statically Importing OBS Buckets and Parallel File Systems
----------------------------------------------------------
@ -171,7 +164,6 @@ Statically Importing OBS Buckets and Parallel File Systems
storageClassName: csi-obs-mountoption #The value must be the same as the storage class associated with the bound PV.
volumeName: obsfscheck #Replace the name with the actual PV name of the parallel file system.
- .. _cce_bestpractice_00199__li1235812419467:
**(Optional) Creating a custom OBS storage class to associate with a static PV:**

View File

@ -47,7 +47,6 @@ The Redis workload is used as an example to illustrate the chart specifications.
As listed in :ref:`Table 1 <cce_10_0146__tb7d789a3467e4fe9b4385a51f3460321>`, the parameters marked with \* are mandatory.
.. _cce_10_0146__tb7d789a3467e4fe9b4385a51f3460321:
.. table:: **Table 1** Parameters in the directory structure of a chart
@ -98,7 +97,6 @@ Creating a Release
#. Set workload installation parameters by referring to :ref:`Table 2 <cce_10_0146__t26bc1c499f114b5185e5edcf61e44d95>`.
.. _cce_10_0146__t26bc1c499f114b5185e5edcf61e44d95:
.. table:: **Table 2** Installation parameters

View File

@ -43,17 +43,13 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001243981141.png
:alt: **Figure 1** Expanding trace details
**Figure 1** Expanding trace details
:alt:
#. Click **View Trace** in the **Operation** column. The trace details are displayed.
.. figure:: /_static/images/en-us_image_0000001244141139.png
:alt: **Figure 2** Viewing event details
**Figure 2** Viewing event details
:alt:
.. |image1| image:: /_static/images/en-us_image_0000001244141141.gif
.. |image2| image:: /_static/images/en-us_image_0000001199341250.png

View File

@ -20,9 +20,7 @@ The following figure shows the architecture of a Kubernetes cluster.
.. figure:: /_static/images/en-us_image_0267028603.png
:alt: **Figure 1** Kubernetes cluster architecture
**Figure 1** Kubernetes cluster architecture
:alt:
**Master node**

View File

@ -21,9 +21,7 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001199181228.png
:alt: **Figure 1** Downloading a certificate
**Figure 1** Downloading a certificate
:alt:
.. important::

View File

@ -30,7 +30,6 @@ Notes and Constraints
In kubelet 1.16 and later versions, `QoS classes <https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/>`__ are different from those in earlier versions. In kubelet 1.15 and earlier versions, only containers in **spec.containers** are counted. In kubelet 1.16 and later versions, containers in both **spec.containers** and **spec.initContainers** are counted. The QoS class of a pod will change after the upgrade. As a result, the container in the pod restarts. You are advised to modify the QoS class of the service container before the upgrade to avoid this problem. For details, see :ref:`Table 1 <cce_10_0302__table10713231143911>`.
.. _cce_10_0302__table10713231143911:
.. table:: **Table 1** QoS class changes before and after the upgrade
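To confirm the effective QoS class of a pod before and after the upgrade, it can be read directly from the pod status. A small sketch; the pod name is a placeholder:

   kubectl get pod my-app-0 -o jsonpath='{.status.qosClass}'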

View File

@ -10,7 +10,7 @@ Application Scenarios
This section describes how to migrate services from a cluster of an earlier version to a cluster of a later version in CCE.
This operation is applicable when a cross-version cluster upgrade is required (for example, upgrade from v1.7.\* or v1.9.\* to 1.17.*) and new clusters can be created for service migration.
This operation is applicable when a cross-version cluster upgrade is required (for example, upgrade from v1.7.\* or v1.9.\* to 1.17.\*) and new clusters can be created for service migration.
Prerequisites
-------------

View File

@ -44,7 +44,6 @@ Procedure
#. On the cluster upgrade page, review or configure basic information by referring to :ref:`Table 1 <cce_10_0120__table924319911495>`.
.. _cce_10_0120__table924319911495:
.. table:: **Table 1** Basic information

View File

@ -19,11 +19,8 @@ Log in to the CCE console and check whether the message "New version available"
.. figure:: /_static/images/en-us_image_0000001482796460.png
:alt: **Figure 1** Cluster with the upgrade flag
:alt:
**Figure 1** Cluster with the upgrade flag
.. _cce_10_0197__section19981121648:
Cluster Upgrade
---------------
@ -69,7 +66,6 @@ The upgrade processes are the same for master nodes. The differences between the
| **Replace upgrade** | The latest worker node image is used to reset the node OS. | This is the fastest upgrade mode and requires few manual interventions. | Data or configurations on the node will be lost, and services will be interrupted for a period of time. |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. _cce_10_0197__section191131551162610:
Precautions for Major Version Upgrade
-------------------------------------

View File

@ -33,7 +33,6 @@ CCE allows you to access a cluster through a **VPC network** or a **public netwo
Download kubectl and the configuration file. Copy the file to your client, and configure kubectl. After the configuration is complete, you can access your Kubernetes clusters. Procedure:
#. .. _cce_10_0107__li194691356201712:
Download kubectl.
@ -41,11 +40,8 @@ Download kubectl and the configuration file. Copy the file to your client, and c
.. figure:: /_static/images/en-us_image_0000001336475537.png
:alt: **Figure 1** Downloading kubectl
:alt:
**Figure 1** Downloading kubectl
#. .. _cce_10_0107__li34691156151712:
Obtain the kubectl configuration file (kubeconfig).
@ -102,7 +98,6 @@ Download kubectl and the configuration file. Copy the file to your client, and c
For details about the cluster two-way authentication, see :ref:`Two-Way Authentication for Domain Names <cce_10_0107__section1559919152711>`.
.. _cce_10_0107__section1559919152711:
Two-Way Authentication for Domain Names
---------------------------------------
@ -119,12 +114,9 @@ Currently, CCE supports two-way authentication for domain names.
- If the domain name two-way authentication is not supported, **kubeconfig.json** contains the **"insecure-skip-tls-verify": true** field, as shown in :ref:`Figure 2 <cce_10_0107__fig1941342411>`. To use two-way authentication, you can download the **kubeconfig.json** file again and enable two-way authentication for the domain names.
.. _cce_10_0107__fig1941342411:
.. figure:: /_static/images/en-us_image_0000001199021320.png
:alt: **Figure 2** Two-way authentication disabled for domain names
**Figure 2** Two-way authentication disabled for domain names
:alt:
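One quick way to check whether the downloaded file still skips certificate verification is to search it directly. A sketch, assuming the file is saved as **kubeconfig.json** in the current directory:

   grep "insecure-skip-tls-verify" kubeconfig.json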
Common Issue (Error from server Forbidden)
------------------------------------------

View File

@ -18,7 +18,7 @@ import os
import sys
extensions = [
'otcdocstheme',
'otcdocstheme'
]
otcdocs_auto_name = False
@ -98,4 +98,9 @@ html_static_path = ['_static']
html_copy_source = False
# -- Options for PDF output --------------------------------------------------
latex_documents = []
latex_documents = [
('index',
'None.tex',
u'Cloud Container Engine - User Guide',
u'OpenTelekomCloud', 'manual'),
]

View File

@ -13,7 +13,6 @@ By default, CCE creates the following secrets in each namespace:
The functions of these secrets are described as follows.
.. _cce_10_0388__section11760122012591:
default-secret
--------------

View File

@ -27,7 +27,6 @@ Procedure
#. Set parameters.
.. _cce_10_0152__table16321825732:
.. table:: **Table 1** Parameters for creating a ConfigMap
@ -104,7 +103,6 @@ Related Operations
After creating a configuration item, you can update or delete it as described in :ref:`Table 2 <cce_10_0152__table1619535674020>`.
.. _cce_10_0152__table1619535674020:
.. table:: **Table 2** Related operations

View File

@ -19,7 +19,6 @@ Procedure
#. Set parameters.
.. _cce_10_0153__table16321825732:
.. table:: **Table 1** Parameters for creating a secret
@ -122,7 +121,6 @@ After creating a secret, you can update or delete it as described in :ref:`Table
The secret list contains system secret resources that can be queried only. The system secret resources cannot be updated or deleted.
.. _cce_10_0153__table555785274319:
.. table:: **Table 2** Related Operations
@ -144,7 +142,6 @@ After creating a secret, you can update or delete it as described in :ref:`Table
| | #. Follow the prompts to delete the secrets. |
+-----------------------------------+------------------------------------------------------------------------------------------------------+
.. _cce_10_0153__section175000605919:
Base64 Encoding
---------------
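On any Linux host, a value can be encoded and decoded with the base64 tool. The string below is only an example:

   echo -n "admin" | base64          # prints YWRtaW4=
   echo -n "YWRtaW4=" | base64 -d    # prints admin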

View File

@ -25,7 +25,6 @@ The following example shows how to use a ConfigMap.
When a ConfigMap is used in a pod, the pod and ConfigMap must be in the same cluster and namespace.
.. _cce_10_0015__section1737733192813:
Setting Workload Environment Variables
--------------------------------------
@ -85,7 +84,6 @@ To add all data in a ConfigMap to environment variables, use the **envFrom** par
name: cce-configmap
restartPolicy: Never
.. _cce_10_0015__section17930105710189:
Setting Command Line Parameters
-------------------------------
@ -122,7 +120,6 @@ After the pod runs, the following information is displayed:
Hello CCE
.. _cce_10_0015__section1490261161916:
Attaching a ConfigMap to the Workload Data Volume
-------------------------------------------------

View File

@ -32,7 +32,6 @@ The following example shows how to use a secret.
When a secret is used in a pod, the pod and secret must be in the same cluster and namespace.
.. _cce_10_0016__section472505211214:
Configuring the Data Volume of a Pod
------------------------------------
@ -84,7 +83,6 @@ In addition, you can specify the directory and permission to access a secret. Th
To mount a secret to a data volume, you can also perform operations on the CCE console. When creating a workload, set advanced settings for the container, choose **Data Storage > Local Volume**, click **Add Local Volume**, and select **Secret**. For details, see :ref:`Secret <cce_10_0377__section10197243134710>`.
.. _cce_10_0016__section207271352141216:
Setting Environment Variables of a Pod
--------------------------------------

View File

@ -14,9 +14,7 @@ Complete the following tasks to get started with CCE.
.. figure:: /_static/images/en-us_image_0000001178352608.png
:alt: **Figure 1** Procedure for getting started with CCE
**Figure 1** Procedure for getting started with CCE
:alt:
#. :ref:`Charts (Helm) <cce_10_0019>`\ Authorize an IAM user to use CCE.

View File

@ -23,9 +23,7 @@ Using ICAgent to Collect Logs
.. figure:: /_static/images/en-us_image_0000001199181298.png
:alt: **Figure 1** Adding a log policy
**Figure 1** Adding a log policy
:alt:
#. Set **Storage Type** to **Host Path** or **Container Path**.

View File

@ -17,11 +17,10 @@ Common Kubernetes resources include Deployments, StatefulSets, jobs, DaemonSets,
Procedure
---------
#. .. _cce_01_9995__li156087595210:
Export resource files from CCE 1.0.
**kubectl** **get** *{resource} {name}* -**n** *{namespace}* -**oyaml** --**export** > *{namespace}_{resource}_{name}*\ **.yaml**
**kubectl** **get** *{resource} {name}* -**n** *{namespace}* -**oyaml** --**export** > *{namespace}\_{resource}\_{name}*\ **.yaml**
Assume that the following resource files are exported:
@ -33,7 +32,7 @@ Procedure
#. Switch to the CCE 2.0 clusters and run the following kubectl command to create the resources exported in :ref:`1 <cce_01_9995__li156087595210>`.
**kubectl create -f**\ *{namespace}_{resource}_{name}*\ **.yaml**
**kubectl create -f**\ *{namespace}\_{resource}\_{name}*\ **.yaml**
Examples of creating resource files:

View File

@ -18,9 +18,7 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001177874150.png
:alt: **Figure 1** Cluster specifications in CCE 1.0
**Figure 1** Cluster specifications in CCE 1.0
:alt:
.. table:: **Table 1** Parameters for creating a cluster
@ -87,7 +85,6 @@ Procedure
#. Set the parameters based on :ref:`Table 2 <cce_01_9996__table16351025186>`.
.. _cce_01_9996__table16351025186:
.. table:: **Table 2** Parameters for adding a node

View File

@ -24,17 +24,13 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001178352594.png
:alt: **Figure 1** Generate the Docker login command
**Figure 1** Generate the Docker login command
:alt:
#. Log in to the CCE 1.0 console, and obtain the docker login configuration file **dockercfg.json**.
.. figure:: /_static/images/en-us_image_0000001223473833.png
:alt: **Figure 2** Obtain the docker login configuration file
**Figure 2** Obtain the docker login configuration file
:alt:
#. Log in to the Docker client as user **root**, and copy the **dockercfg.json** file obtained in Step 2 and the image migration tool to the **/root** directory.
@ -54,6 +50,4 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001223393885.png
:alt: **Figure 3** Migrate the image
**Figure 3** Migrate the image
:alt:

View File

@ -15,7 +15,6 @@ CCE works with AOM to comprehensively monitor clusters. When a node is created,
The ICAgent collects custom metrics of applications and uploads them to AOM. For details, see :ref:`Custom Monitoring <cce_10_0201>`.
.. _cce_10_0182__section205486212251:
Resource Metrics
----------------

View File

@ -46,7 +46,6 @@ Creating a Namespace
#. Set namespace parameters based on :ref:`Table 1 <cce_10_0278__table5523151617575>`.
.. _cce_10_0278__table5523151617575:
.. table:: **Table 1** Parameters for creating a namespace

View File

@ -30,9 +30,7 @@ Isolating Namespaces
.. figure:: /_static/images/en-us_image_0000001199021298.png
:alt: **Figure 1** One namespace for one environment
**Figure 1** One namespace for one environment
:alt:
- **Isolating namespaces by application**
@ -40,9 +38,7 @@ Isolating Namespaces
.. figure:: /_static/images/en-us_image_0000001243981147.png
:alt: **Figure 2** Grouping workloads into different namespaces
**Figure 2** Grouping workloads into different namespaces
:alt:
Deleting a Namespace
--------------------

View File

@ -29,7 +29,6 @@ Cluster Scale Recommended Number of Pods
Starting from clusters of v1.21 and later, the default `Resource Quotas <https://kubernetes.io/docs/concepts/policy/resource-quotas/?spm=a2c4g.11186623.2.8.d882712bd1i8ae>`__ are created when a namespace is created if you have enabled **enable-resource-quota** in :ref:`Managing Cluster Components <cce_10_0213>`. :ref:`Table 1 <cce_10_0287__table371165714613>` lists the resource quotas based on cluster specifications. You can modify them according to your service requirements.
.. _cce_10_0287__table371165714613:
.. table:: **Table 1** Default resource quotas
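To check which quotas were actually created for a given namespace, they can be listed with kubectl. A sketch, with the namespace name as a placeholder:

   kubectl get resourcequota -n my-namespace
   kubectl describe resourcequota -n my-namespace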

View File

@ -13,12 +13,9 @@ Containers can access public networks in either of the following ways:
You can use NAT Gateway to enable container pods in a VPC to access public networks. NAT Gateway provides source network address translation (SNAT), which translates private IP addresses to a public IP address by binding an elastic IP address (EIP) to the gateway, providing secure and efficient access to the Internet. :ref:`Figure 1 <cce_10_0400__cce_bestpractice_00274_0_en-us_topic_0241700138_en-us_topic_0144420145_fig34611314153619>` shows the SNAT architecture. The SNAT function allows the container pods in a VPC to access the Internet without being bound to an EIP. SNAT supports a large number of concurrent connections, which makes it suitable for applications involving a large number of requests and connections.
.. _cce_10_0400__cce_bestpractice_00274_0_en-us_topic_0241700138_en-us_topic_0144420145_fig34611314153619:
.. figure:: /_static/images/en-us_image_0000001192028618.png
:alt: **Figure 1** SNAT
**Figure 1** SNAT
:alt:
To enable a container pod to access the Internet, perform the following steps:

View File

@ -7,7 +7,6 @@ Configuring Intra-VPC Access
This section describes how to access an intranet from a container (outside the cluster in a VPC), including intra-VPC access and cross-VPC access.
.. _cce_10_0399__section1940319933:
Intra-VPC Access
----------------
@ -58,7 +57,6 @@ The performance of accessing an intranet from a container varies depending on th
--- 192.168.10.25 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
.. _cce_10_0399__section44190754210:
Cross-VPC Access
----------------

Some files were not shown because too many files have changed in this diff.