Update content

OpenTelekomCloud Proposal Bot 2023-06-19 09:09:03 +00:00
parent 3dd0a60a5a
commit 60c08554d9
119 changed files with 2119 additions and 1043 deletions

@@ -20,7 +20,7 @@ Installing the Add-on
 #. Select **Single**, **Custom**, or **HA** for **Add-on Specifications**.
 - **Pods**: Set the number of pods based on service requirements.
-- Multi AZ
+- **Multi AZ**:
 - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ.
 - **Required**: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, not all pods can run.

@@ -36,7 +36,6 @@ Installing the Add-on
 #. On the **Install Add-on** page, select the add-on specifications and set related parameters.
 - **Pods**: Set the number of pods based on service requirements.
-- **Multi AZ**:
 - **Containers**: Select a proper container quota based on service requirements.
 #. Set the npd parameters and click **Install**.

@@ -23,7 +23,7 @@ When a third-party enterprise needs to use this application, a suit of **Tomcat
 **Figure 1** Application architecture
-As shown in :ref:`Figure 1 <cce_bestpractice_0003__fig78809934014>`, the application is a standard Tomcat application, and its backend interconnects with MongoDB and MySQL databases. For this type of applications, there is no need to split its architecture. The entire application is packed as an image, and the mongoDB database is deployed in the same image as the Tomcat application. In this way, the application can be deployed or upgraded through the image.
+As shown in :ref:`Figure 1 <cce_bestpractice_0003__fig78809934014>`, the application is a standard Tomcat application, and its backend interconnects with MongoDB and MySQL databases. For this type of applications, there is no need to split the architecture. The entire application is built as an image, and the MongoDB database is deployed in the same image as the Tomcat application. In this way, the application can be deployed or upgraded through the image.
 - Interconnecting with the MongoDB database for storing user files.
 - Interconnecting with the MySQL database for storing third-party enterprise data. The MySQL database is an external cloud database.
@@ -35,7 +35,7 @@ In this example, the application was deployed on a VM. During application deploy
 By using containers, you can easily pack application code, configurations, and dependencies and convert them into easy-to-use building blocks. This achieves the environmental consistency and version management, as well as improves the development and operation efficiency. Containers ensure quick, reliable, and consistent deployment of applications and prevent applications from being affected by deployment environment.
-.. table:: **Table 1** Comparison between the tow deployment modes
+.. table:: **Table 1** Comparison between the two deployment modes
 +---------------------------------+--------------------------------------+------------------------------------------------+
 | Item | Before: Application Deployment on VM | After: Application Deployment Using Containers |

@@ -55,7 +55,7 @@ Procedure
 | | | |
 | | | a. Log in to the management console. |
 | | | |
-| | | b. In the service list, choose **Security and Compliance** > **Data Encryption Workshop**. |
+| | | b. In the service list, choose **Data Encryption Workshop** under **Security & Compliance**. |
 | | | |
 | | | c. In the navigation pane, choose **Key Pair Service**. On the **Private Key Pairs** tab page, click **Create Key Pair**. |
 | | | |
@@ -68,7 +68,7 @@ Procedure
 #. Create a cluster and a node.
-a. Log in to the CCE console, choose **Clusters**, and click **Buy** next to **CCE cluster**.
+a. Log in to the CCE console. Choose **Clusters**. On the displayed page, select the type of the cluster to be created and click Create.
 Configure cluster parameters and select the VPC created in :ref:`1 <cce_bestpractice_0010__li1025612329217>`.

@@ -148,9 +148,9 @@ In the VPC network model, after creating a peering connection, you need to add r
 .. figure:: /_static/images/en-us_image_0261818886.png
-   :alt: **Figure 7** VPC Network - VPC interconnection scenario
+   :alt: **Figure 7** VPC network - VPC interconnection scenario
-   **Figure 7** VPC Network - VPC interconnection scenario
+   **Figure 7** VPC network - VPC interconnection scenario
 When creating a VPC peering connection between containers across VPCs, pay attention to the following points:

@@ -5,7 +5,7 @@
 Selecting a Network Model
 =========================
-CCE uses self-proprietary, high-performance container networking add-ons to support the tunnel network, Cloud Native Network 2.0, and VPC network models.
+CCE uses proprietary, high-performance container networking add-ons to support the tunnel network, Cloud Native Network 2.0, and VPC network models.
 .. caution::

@@ -214,36 +214,6 @@ Other types of storage resources can be defined in the similar way. You can use
 reclaimPolicy: Delete
 volumeBindingMode: Immediate
-Specifying an Enterprise Project for Storage Classes
-----------------------------------------------------
-CCE allows you to specify an enterprise project when creating EVS disks and OBS PVCs. The created storage resources (EVS disks and OBS) belong to the specified enterprise project. **The enterprise project can be the enterprise project to which the cluster belongs or the default enterprise project.**
-If you do no specify any enterprise project, the enterprise project in StorageClass is used by default. The created storage resources by using the csi-disk and csi-obs storage classes of CCE belong to the default enterprise project.
-If you want the storage resources created from the storage classes to be in the same enterprise project as the cluster, you can customize a storage class and specify the enterprise project ID, as shown below.
-.. note::
-   To use this function, the everest add-on must be upgraded to 1.2.33 or later.
-.. code-block::
-   kind: StorageClass
-   apiVersion: storage.k8s.io/v1
-   metadata:
-     name: csi-disk-epid #Customize a storage class name.
-   provisioner: everest-csi-provisioner
-   parameters:
-     csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io
-     csi.storage.k8s.io/fstype: ext4
-     everest.io/disk-volume-type: SAS
-     everest.io/enterprise-project-id: 86bfc701-9d9e-4871-a318-6385aa368183 #Specify the enterprise project ID.
-     everest.io/passthrough: 'true'
-   reclaimPolicy: Delete
-   allowVolumeExpansion: true
-   volumeBindingMode: Immediate
 Setting a Default Storage Class
 -------------------------------

@@ -12,7 +12,6 @@ Change History
 +===================================+=====================================================================================+
 | 2023-05-30 | - Added\ :ref:`Configuring a Node Pool <cce_10_0652>`. |
 | | - Added\ :ref:`Configuring Health Check for Multiple Ports <cce_10_0684>`. |
-| | - Added\ :ref:`NetworkAttachmentDefinition <cce_10_0196>`. |
 | | - Updated\ :ref:`Creating a Node <cce_10_0363>`. |
 | | - Updated\ :ref:`Creating a Node Pool <cce_10_0012>`. |
 | | - Updated\ :ref:`OS Patch Notes for Cluster Nodes <cce_bulletin_0301>`. |

@@ -12,26 +12,25 @@ The following table lists the differences between CCE Turbo clusters and CCE clu
 .. table:: **Table 1** Cluster types
 +-----------------+-----------------------------+--------------------------------------------------------------------+--------------------------------------------------+
 | Dimension | Sub-dimension | CCE Turbo cluster | CCE cluster |
 +=================+=============================+====================================================================+==================================================+
 | Cluster | Positioning | Next-gen container cluster, with accelerated computing, networking, and scheduling. Designed for Cloud Native 2.0 | Standard cluster for common commercial use |
 +-----------------+-----------------------------+--------------------------------------------------------------------+--------------------------------------------------+
-| | Node type | Hybrid deployment of VMs and bare-metal servers | Hybrid deployment of VMs and bare-metal servers |
+| | Node type | Deployment of VMs | Hybrid deployment of VMs and bare metal servers |
 +-----------------+-----------------------------+--------------------------------------------------------------------+--------------------------------------------------+
 | Network | Model | **Cloud Native Network 2.0**: applies to large-scale and high-performance scenarios. | **Cloud-native network 1.0**: applies to common, smaller-scale scenarios. |
 | | | | |
-| | | Max networking scale: 2,000 nodes | - Tunnel network model |
+| | | Max networking scale: 2,000 nodes | - Container tunnel network model |
 | | | | - VPC network model |
 +-----------------+-----------------------------+--------------------------------------------------------------------+--------------------------------------------------+
 | | Network performance | Flattens the VPC network and container network into one. No performance loss. | Overlays the VPC network with the container network, causing certain performance loss. |
 +-----------------+-----------------------------+--------------------------------------------------------------------+--------------------------------------------------+
-| | Container network isolation | Associates pods with security groups. Unifies security isolation in and out the cluster via security groups' network policies. | - Tunnel network model: supports network policies for intra-cluster communications. |
+| | Container network isolation | Associates pods with security groups. Unifies security isolation in and out the cluster via security groups' network policies. | - Container tunnel network model: supports network policies for intra-cluster communications. |
 | | | | - VPC network model: supports no isolation. |
 +-----------------+-----------------------------+--------------------------------------------------------------------+--------------------------------------------------+
-| Security | Isolation | - Physical machine: runs Kata containers, allowing VM-level isolation. | Common containers are deployed and isolated by cgroups. |
-| | | - VM: runs common containers, isolated by cgroups. | |
+| Security | Isolation | - VM: runs common containers, isolated by cgroups. | Common containers are deployed and isolated by cgroups. |
 +-----------------+-----------------------------+--------------------------------------------------------------------+--------------------------------------------------+
 QingTian Architecture
 ---------------------

@@ -7,9 +7,11 @@ Upgrading a Cluster
 - :ref:`Upgrade Overview <cce_10_0197>`
 - :ref:`Before You Start <cce_10_0302>`
+- :ref:`Post-Upgrade Verification <cce_10_0560>`
 - :ref:`Performing Replace/Rolling Upgrade <cce_10_0120>`
 - :ref:`Performing In-place Upgrade <cce_10_0301>`
 - :ref:`Migrating Services Across Clusters of Different Versions <cce_10_0210>`
+- :ref:`Troubleshooting for Pre-upgrade Check Exceptions <cce_10_0550>`
 .. toctree::
    :maxdepth: 1
@@ -17,6 +19,8 @@ Upgrading a Cluster
 upgrade_overview
 before_you_start
+post-upgrade_verification/index
 performing_replace_rolling_upgrade
 performing_in-place_upgrade
 migrating_services_across_clusters_of_different_versions
+troubleshooting_for_pre-upgrade_check_exceptions/index

@@ -0,0 +1,26 @@
:original_name: cce_10_0560.html
.. _cce_10_0560:
Post-Upgrade Verification
=========================
- :ref:`Service Verification <cce_10_0561>`
- :ref:`Pod Check <cce_10_0562>`
- :ref:`Node and Container Network Check <cce_10_0563>`
- :ref:`Node Label and Taint Check <cce_10_0564>`
- :ref:`New Node Check <cce_10_0565>`
- :ref:`New Pod Check <cce_10_0566>`
- :ref:`Node Skipping Check for Reset <cce_10_0567>`
.. toctree::
:maxdepth: 1
:hidden:
service_verification
pod_check
node_and_container_network_check
node_label_and_taint_check
new_node_check
new_pod_check
node_skipping_check_for_reset

@@ -0,0 +1,21 @@
:original_name: cce_10_0565.html
.. _cce_10_0565:
New Node Check
==============
Check Item
----------
Check whether nodes can be created in the cluster.
Procedure
---------
Go to the CCE console and access the cluster console. Choose **Nodes** in the navigation pane, and click **Create Node**.
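If you prefer the CLI, the same check can be sketched with kubectl. This is a minimal sketch that assumes kubectl access to the cluster is already configured; the node name is a placeholder.

.. code-block::

   # List all nodes and confirm that the newly created node registers and becomes Ready
   kubectl get nodes -o wide

   # If the new node stays NotReady, inspect its conditions and events (replace <node-name>)
   kubectl describe node <node-name>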
Solution
--------
If nodes cannot be created in your cluster after the cluster is upgraded, contact technical support.

@@ -0,0 +1,64 @@
:original_name: cce_10_0566.html
.. _cce_10_0566:
New Pod Check
=============
Check Item
----------
- Check whether pods can be created on the existing nodes after the cluster is upgraded.
- Check whether pods can be created on new nodes after the cluster is upgraded.
Procedure
---------
After creating a node based on :ref:`New Node Check <cce_10_0565>`, create a DaemonSet workload to create pods on each node.
Go to the CCE console, access the cluster console, and choose **Workloads** in the navigation pane. On the displayed page, switch to the **DaemonSets** tab page and click **Create Workload** or **Create from YAML** in the upper right corner.
You are advised to use an image that you routinely use for tests as the base image. You can deploy pods by referring to the following YAML file.
.. note::
In this test, the YAML file deploys a DaemonSet in the default namespace, uses **nginx:perl** as the base image, requests 10m CPU and 10 MiB memory, and limits the resources to 100m CPU and 50 MiB memory.
.. code-block::

   apiVersion: apps/v1
   kind: DaemonSet
   metadata:
     name: post-upgrade-check
     namespace: default
   spec:
     selector:
       matchLabels:
         app: post-upgrade-check
         version: v1
     template:
       metadata:
         labels:
           app: post-upgrade-check
           version: v1
       spec:
         containers:
           - name: container-1
             image: nginx:perl
             imagePullPolicy: IfNotPresent
             resources:
               requests:
                 cpu: 10m
                 memory: 10Mi
               limits:
                 cpu: 100m
                 memory: 50Mi
After the workload is created, check whether the pod status of the workload is normal.
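A quick way to confirm this from the CLI is sketched below; it assumes kubectl access and reuses the labels from the preceding YAML file.

.. code-block::

   # Confirm that the DaemonSet has scheduled one pod per node and that all pods are ready
   kubectl get daemonset post-upgrade-check -n default

   # List the test pods and the nodes they run on
   kubectl get pods -n default -l app=post-upgrade-check -o wide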
After the check is complete, go to the CCE console and access the cluster console. Choose **Workloads** in the navigation pane. On the displayed page, switch to the **DaemonSets** tab page, choose **More** > **Delete** in the **Operation** column of the **post-upgrade-check** workload to delete the test workload.
Solution
--------
If the pod cannot be created or the pod status is abnormal, contact technical support and specify whether the exception occurs on new nodes or existing nodes.

@@ -0,0 +1,68 @@
:original_name: cce_10_0563.html
.. _cce_10_0563:
Node and Container Network Check
================================
Check Item
----------
- Check whether the nodes are running properly.
- Check whether the node network is normal.
- Check whether the container network is normal.
Procedure
---------
The node status reflects whether the node components and the node network are running properly.
Go to the CCE console and access the cluster console. Choose **Nodes** in the navigation pane. You can filter nodes by status to check whether any nodes are abnormal.
|image1|
The container network affects services. Check whether your services are available.
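For a CLI spot check, the following sketch assumes kubectl access; the namespace, pod name, and target address are placeholders, and it assumes the container image provides curl.

.. code-block::

   # Confirm that all nodes are Ready
   kubectl get nodes

   # Spot-check the container network from inside a running pod (placeholders)
   kubectl exec -n <namespace> <pod-name> -- curl -sS <service-or-pod-ip>:<port>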
Solution
--------
If the node status is abnormal, contact technical support.
If the container network is abnormal and your services are affected, contact technical support and confirm the abnormal network access path.
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| Source | Destination | Destination Type | Possible Fault |
+==============================================+==============================================================================+======================================+======================================================================================================================================+
| - Pods (inside a cluster) | Public IP address of Service ELB | Cluster traffic load balancing entry | No record. |
| - Nodes (inside a cluster) | | | |
| - Nodes in the same VPC (outside a cluster) | | | |
| - Third-party clouds | | | |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | Private IP address of Service ELB | Cluster traffic load balancing entry | No record. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | Public IP address of ingress ELB | Cluster traffic load balancing entry | No record. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | Private IP address of ingress ELB | Cluster traffic load balancing entry | No record. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | Public IP address of NodePort Service | Cluster traffic entry | The kube-proxy configuration is overwritten. This fault has been rectified in the upgrade process. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | Private IP address of NodePort Service | Cluster traffic entry | No record. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | ClusterIP Service | Service network plane | No record. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | Non NodePort Service port | Container network | No record. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | Cross-node pods | Container network plane | No record. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | Pods on the same node | Container network plane | No record. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | Service and pod domain names are resolved by CoreDNS. | Domain name resolution | No record. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | External domain names are resolved based on the CoreDNS hosts configuration. | Domain name resolution | After the coredns add-on is upgraded, the configuration is overwritten. This fault has been rectified in the add-on upgrade process. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | External domain names are resolved based on the CoreDNS upstream server. | Domain name resolution | After the coredns add-on is upgraded, the configuration is overwritten. This fault has been rectified in the add-on upgrade process. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| | External domain names are not resolved by CoreDNS. | Domain name resolution | No record. |
+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
.. |image1| image:: /_static/images/en-us_image_0000001518062524.png

@@ -0,0 +1,26 @@
:original_name: cce_10_0564.html
.. _cce_10_0564:
Node Label and Taint Check
==========================
Check Item
----------
- Check whether any node labels are lost.
- Check whether unexpected taints exist.
Procedure
---------
Go to the CCE console, access the cluster console, and choose **Nodes** in the navigation pane. On the displayed page, click the **Nodes** tab, select all nodes, and click **Manage Labels and Taints** to view the labels and taints of the selected nodes.
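Alternatively, a rough CLI check (assuming kubectl access) can list the labels and taints of all nodes:

.. code-block::

   # List all nodes with their labels
   kubectl get nodes --show-labels

   # List the taint keys of each node
   kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'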
Solution
--------
User labels are not changed during the cluster upgrade. If you find that labels are lost or added abnormally, contact technical support.
If you find a new taint (**node.kubernetes.io/upgrade**) on a node, the node may be skipped during the upgrade. For details, see :ref:`Node Skipping Check for Reset <cce_10_0567>`.
If you find that other taints are added to the node, contact technical support.

@@ -0,0 +1,22 @@
:original_name: cce_10_0567.html
.. _cce_10_0567:
Node Skipping Check for Reset
=============================
Check Item
----------
After the cluster is upgraded, the nodes that failed to be upgraded need to be reset.
Procedure
---------
Go back to the previous step, or open the upgrade details on the upgrade history page, to identify the nodes that were skipped during the upgrade.
The skipped nodes are displayed on the upgrade details page. Reset the skipped nodes after the upgrade is complete. For details about how to reset a node, see :ref:`Resetting a Node <cce_10_0003>`.
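As a rough cross-check from the CLI (assuming kubectl access), skipped nodes can also be spotted by the **node.kubernetes.io/upgrade** taint described in :ref:`Node Label and Taint Check <cce_10_0564>`:

.. code-block::

   # Nodes skipped during the upgrade may carry the node.kubernetes.io/upgrade taint
   kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'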
.. note::
Resetting a node will reset all node labels, which may affect workload scheduling. Before resetting a node, check and retain the labels that you have manually added to the node.

@@ -0,0 +1,31 @@
:original_name: cce_10_0562.html
.. _cce_10_0562:
Pod Check
=========
Check Item
----------
- Check whether unexpected pods exist in the cluster.
- Check whether any pods in the cluster restart unexpectedly.
Procedure
---------
Go to the CCE console and access the cluster console. Choose **Workloads** in the navigation pane. On the displayed page, switch to the **Pods** tab page. Select **All namespaces**, click **Status**, and check whether abnormal pods exist.
|image1|
View the **Restarts** column to check whether there are pods that are restarted abnormally.
|image2|
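The same checks can be approximated with kubectl (a sketch, assuming kubectl access to the cluster):

.. code-block::

   # List pods that are neither Running nor Succeeded in all namespaces
   kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded

   # Sort pods by the restart count of their first container to spot unexpected restarts
   kubectl get pods -A --sort-by='.status.containerStatuses[0].restartCount'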
Solution
--------
If there are abnormal pods in your cluster after the cluster upgrade, contact technical support.
.. |image1| image:: /_static/images/en-us_image_0000001518222492.png
.. |image2| image:: /_static/images/en-us_image_0000001518062540.png

@@ -0,0 +1,28 @@
:original_name: cce_10_0561.html
.. _cce_10_0561:
Service Verification
====================
Check Item
----------
After the cluster is upgraded, check whether the services are running normally.
Procedure
---------
Different services have different verification modes. Select a suitable one and verify your services before and after the upgrade.
You can verify the service from the following aspects:
- The service page is available.
- No alarms or events are generated on the platform.
- No error logs are generated for key processes.
- The API dial tests are normal (see the sketch below).
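A dial test can be as simple as probing a service endpoint before and after the upgrade and comparing the results. A minimal sketch follows; the URL is a placeholder for your own endpoint.

.. code-block::

   # Probe a service endpoint and record the HTTP status code and total request time
   curl -sS -o /dev/null -w 'HTTP %{http_code}  time_total=%{time_total}s\n' https://<your-service-endpoint>/healthz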
Solution
--------
If your online services are abnormal after the cluster upgrade, contact technical support.

@@ -0,0 +1,16 @@
:original_name: cce_10_0479.html
.. _cce_10_0479:
cce-hpa-controller Restriction Check
====================================
Check Item
----------
Check whether the current cce-hpa-controller add-on has compatibility restrictions.
Solution
--------
The current cce-hpa-controller add-on has compatibility restrictions. An add-on that can provide metrics APIs, for example, metrics-server, must be installed in the cluster.
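To verify that such an add-on is present, a quick check (a sketch, assuming kubectl access and a metrics-server-style provider) is:

.. code-block::

   # Check whether a resource metrics API provider is registered and available
   kubectl get apiservices v1beta1.metrics.k8s.io

   # If the metrics API is served, resource metrics can be queried
   kubectl top nodes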

@@ -0,0 +1,78 @@
:original_name: cce_10_0493.html
.. _cce_10_0493:
Checking CoreDNS Configuration Consistency
==========================================
Check Item
----------
Check whether the current CoreDNS key configuration file (Corefile) differs from the Helm release record. The difference may be overwritten during the add-on upgrade, affecting domain name resolution in the cluster.
Solution
--------
You can upgrade the coredns add-on separately after confirming the configuration differences.
#. For details about how to configure kubectl, see :ref:`Connecting to a Cluster Using kubectl <cce_10_0107>`.
#. .. _cce_10_0493__en-us_topic_0000001548755413_li1178291934910:
Obtain the Corefile that takes effect currently.
.. code-block::

   kubectl get cm -nkube-system coredns -o jsonpath='{.data.Corefile}' > corefile_now.txt
   cat corefile_now.txt
#. .. _cce_10_0493__en-us_topic_0000001548755413_li111544111811:
Obtain the Corefile in the Helm release record (requires Python 3 with PyYAML).
.. code-block::

   latest_release=`kubectl get secret -nkube-system -l owner=helm -l name=cceaddon-coredns --sort-by=.metadata.creationTimestamp | awk 'END{print $1}'`
   kubectl get secret -nkube-system $latest_release -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -d | python -m json.tool | python -c "
   import json,sys,re,yaml;
   manifests = json.load(sys.stdin)['manifest']
   files = re.split('(?:^|\s*\n)---\s*',manifests)
   for file in files:
       if 'coredns/templates/configmap.yaml' in file and 'Corefile' in file:
           corefile = yaml.safe_load(file)['data']['Corefile']
           print(corefile,end='')
           exit(0);
   print('error')
   exit(1);
   " > corefile_record.txt
   cat corefile_record.txt
#. Compare the output information of :ref:`2 <cce_10_0493__en-us_topic_0000001548755413_li1178291934910>` and :ref:`3 <cce_10_0493__en-us_topic_0000001548755413_li111544111811>`.
.. code-block::
diff corefile_now.txt corefile_record.txt -y;
|image1|
#. Return to the CCE console and click the cluster name to go to the cluster console. On the **Add-ons** page, select the coredns add-on and click **Upgrade**.
To retain the differentiated configurations, use either of the following methods:
- Set **parameterSyncStrategy** to **force**. You need to manually enter the differentiated configurations. For details, see :ref:`coredns (System Resource Add-On, Mandatory) <cce_10_0129>`.
- If **parameterSyncStrategy** is set to **inherit**, differentiated configurations are automatically inherited. The system automatically parses, identifies, and inherits differentiated parameters.
|image2|
#. Click **OK**. After the add-on upgrade is complete, check whether all CoreDNS instances are available and whether the Corefile meets the expectation.
.. code-block::
kubectl get cm -nkube-system coredns -o jsonpath='{.data.Corefile}'
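To confirm that all CoreDNS instances are available, you can also check the workload status from the CLI. This is a sketch; the deployment name and the grep pattern are assumptions and may differ in your cluster.

.. code-block::

   # The deployment name "coredns" is an assumption; adjust it to match your cluster
   kubectl -n kube-system rollout status deployment/coredns
   kubectl -n kube-system get pods -o wide | grep coredns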
#. Change the value of **parameterSyncStrategy** to **ensureConsistent** to enable configuration consistency verification.
Use the parameter configuration function of CCE add-on management to modify the Corefile configuration to avoid differences.
.. |image1| image:: /_static/images/en-us_image_0000001628843805.png
.. |image2| image:: /_static/images/en-us_image_0000001578443828.png

@@ -0,0 +1,28 @@
:original_name: cce_10_0487.html
.. _cce_10_0487:
Checking Deprecated Kubernetes APIs
===================================
Check Item
----------
The system scans the audit logs of the past day to check whether any APIs deprecated in the target Kubernetes version have been called.
.. note::
Due to the limited time range of audit logs, this check item is only an auxiliary method. APIs to be deprecated may have been used in the cluster, but their usage is not included in the audit logs of the past day. Check the API usage carefully.
Solution
--------
**Description**
The check result shows that your cluster calls a deprecated API of the target cluster version through kubectl or other applications. Rectify the issue before the upgrade. Otherwise, these API calls will be intercepted by kube-apiserver after the upgrade. For details about each deprecated API, see `Deprecated API Migration Guide <https://kubernetes.io/docs/reference/using-api/deprecation-guide/>`__.
**Cases**
The extensions/v1beta1 and networking.k8s.io/v1beta1 Ingress APIs are deprecated in clusters of v1.22. If you upgrade a CCE cluster from v1.19 or v1.21 to v1.23, existing resources are not affected, but calls to the v1beta1 API versions may be intercepted when Ingresses are created or edited.
For details about the YAML configuration structure changes, see :ref:`Using kubectl to Create an ELB Ingress <cce_10_0252>`.
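As a rough pre-check (a sketch, assuming kubectl access to the cluster), you can confirm which Ingress API versions the cluster serves and list the Ingresses whose manifests may need to be re-applied with the **networking.k8s.io/v1** schema:

.. code-block::

   # List the networking API versions served by the cluster
   kubectl api-versions | grep networking.k8s.io

   # List existing Ingresses; update their manifests to networking.k8s.io/v1 if they still use v1beta1
   kubectl get ingress -A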

Some files were not shown because too many files have changed in this diff.