Changes to cce_umn from docs/doc-exports#473 (Escape .*/ and -*/ asterisks inside
Reviewed-by: Hasko, Vladimir <vladimir.hasko@t-systems.com>
Co-authored-by: proposalbot <proposalbot@otc-service.com>
Co-committed-by: proposalbot <proposalbot@otc-service.com>
This commit is contained in:
parent
a70a2c8b2e
commit
e2743695aa
@@ -87,7 +87,7 @@ When a cluster is created, the following security groups are created to ensure c
 - The source IP addresses defined in the security group rules must be permitted.
 - **4789** (required only for clusters using the container tunnel network model): used for network access between containers.
 - **10250**: used by the master node to access the kubelet component of a worker node (for example, run the kubectl exec {pod} command).
-- **30000**-**32767**: external access port (Nodeport) of a node. These ports need be specified when you create a Service. These ports must permit requests from VPC, container, and ELB CIDR blocks.
+- **30000**\ ``-``\ **32767**: external access port (Nodeport) of a node. These ports need be specified when you create a Service. These ports must permit requests from VPC, container, and ELB CIDR blocks.

 After a cluster is created, you can view the created security group on the VPC console.
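The hunk above swaps a bare hyphen between two strong spans for an escaped inline literal, joined to its neighbors with escaped spaces so reST renders a literal `-` as one token. A minimal sketch of that rewrite as a regex substitution; the helper name and pattern are assumptions for illustration, not the actual proposalbot code:

```python
import re

def escape_hyphen_between_strong(line: str) -> str:
    """Wrap a bare '-' that sits between two **strong** spans in an inline
    literal, joined with escaped spaces (``\\ ``), mirroring the substitution
    made in this commit. Hypothetical helper, not the bot's implementation."""
    return re.sub(r"(\*\*\d+\*\*)-(\*\*\d+\*\*)", r"\1\\ ``-``\\ \2", line)

print(escape_hyphen_between_strong("**30000**-**32767**"))
```

Applied to the port-range text, this yields exactly the escaped form seen on the `+` line of the hunk.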
@@ -66,7 +66,7 @@ CCE provides the following upgrade modes based on the cluster version and deploy
 +======================+===========================================================================================================================================================================================================================================================================================================================================================================================================================================+=========================================================================+===========================================================================================================================================================================================================+
 | **In-place upgrade** | Kubernetes components, network components, and CCE management components are upgraded on the node. During the upgrade, service pods and networks are not affected. The **SchedulingDisabled** label will be added to all existing nodes. After the upgrade is complete, you can properly use existing nodes. | You do not need to migrate services, ensuring service continuity. | In-place upgrade does not upgrade the OS of a node. If you want to upgrade the OS, clear the corresponding node after the node upgrade is complete and reset the node to upgrade the OS to a new version. |
 +----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| **Rolling upgrade** | Only the Kubernetes components and certain network components are upgraded on the node. The **SchedulingDisabled** label will be added to all existing nodes to ensure that the running applications are not affected. **After the upgrade is complete, you need to manually create nodes and gradually release the old nodes**, thereby migrating your applications to the new nodes. In this mode, you can control the upgrade process. | Services are not interrupted. | - |
+| **Rolling upgrade** | Only the Kubernetes components and certain network components are upgraded on the node. The **SchedulingDisabled** label will be added to all existing nodes to ensure that the running applications are not affected. **After the upgrade is complete, you need to manually create nodes and gradually release the old nodes**, thereby migrating your applications to the new nodes. In this mode, you can control the upgrade process. | Services are not interrupted. | ``-`` |
 +----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 | **Replace upgrade** | The latest worker node image is used to reset the node OS. | This is the fastest upgrade mode and requires few manual interventions. | Data or configurations on the node will be lost, and services will be interrupted for a period of time. |
 +----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
@@ -95,4 +95,9 @@ html_title = "Cloud Container Service - User Guide"
 html_static_path = ['_static']

 # -- Options for PDF output --------------------------------------------------
-latex_documents = []
+latex_documents = [
+    ('index',
+     'None.tex',
+     u'Cloud Container Service - User Guide',
+     u'OpenTelekomCloud', 'manual'),
+]
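The new `latex_documents` value follows Sphinx's documented 5-tuple shape: (startdocname, targetname, title, author, theme). A small sketch that checks the entry's structure; the assertions are illustrative only, not part of the docs build:

```python
# Structural check of the latex_documents entry added in the hunk above.
# Sphinx expects (startdocname, targetname, title, author, theme) per its
# configuration documentation; 'manual' and 'howto' are the built-in themes.
latex_documents = [
    ('index',
     'None.tex',
     u'Cloud Container Service - User Guide',
     u'OpenTelekomCloud', 'manual'),
]

for startdoc, target, title, author, theme in latex_documents:
    assert target.endswith('.tex'), "LaTeX target file should end in .tex"
    assert theme in ('manual', 'howto'), "built-in LaTeX themes"

print(f"{len(latex_documents)} LaTeX document(s) configured")
```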
@@ -31,7 +31,7 @@ The following table compares the two versions.
 ================== ====================================
 Container registry Image repository
 Cluster Resource Management > Hybrid Cluster
-Component template -
-App Designer -
+Component template ``-``
+App Designer ``-``
 App Manager Deployment
 ================== ====================================
@@ -132,7 +132,7 @@ The following describes how to run the kubectl command to automatically create a
 | | | | |
 | | | | Supported range: 1 to 65535 |
 +-------------------------------------------+-----------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| kubernetes.io/elb.subnet-id | - | String | ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. |
+| kubernetes.io/elb.subnet-id | ``-`` | String | ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. |
 | | | | |
 | | | | - Mandatory when a cluster of v1.11.7-r0 or earlier is to be automatically created. |
 | | | | - Optional for clusters later than v1.11.7-r0. It is left blank by default. |
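The table rows in this hunk document the `kubernetes.io/elb.subnet-id` annotation, the 1-100 character subnet-ID constraint, and a port in the supported 1-65535 range; the first hunk documented NodePorts 30000-32767. A hedged Python sketch assembling a Service manifest with those fields; the function name and the `subnet-0000` placeholder are invented for illustration:

```python
# Illustrative sketch (not from the docs): build the Service manifest the
# table describes, enforcing the documented ranges before emitting the dict.

def make_elb_service(name: str, subnet_id: str, port: int, node_port: int) -> dict:
    if not 1 <= port <= 65535:
        raise ValueError("port must be in 1-65535 (documented supported range)")
    if not 30000 <= node_port <= 32767:
        raise ValueError("nodePort must be in 30000-32767")
    if not 1 <= len(subnet_id) <= 100:
        raise ValueError("subnet ID must contain 1 to 100 characters")
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {
            "name": name,
            "annotations": {"kubernetes.io/elb.subnet-id": subnet_id},
        },
        "spec": {
            "type": "LoadBalancer",
            "ports": [{"port": port, "nodePort": node_port}],
        },
    }

svc = make_elb_service("nginx", "subnet-0000", 80, 30080)
print(svc["metadata"]["annotations"]["kubernetes.io/elb.subnet-id"])
```

Whether the annotation is mandatory depends on the cluster version, as the rows above note (required at v1.11.7-r0 or earlier, optional afterwards).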
@@ -214,7 +214,7 @@ You can set the access type when creating a workload using kubectl. This section
 | | | | |
 | | | | On the management console, click **Service List**, and choose **Networking** > **Elastic Load Balance**. Click the name of the target load balancer. On the **Summary** tab page, find and copy the ID. |
 +-------------------------------------------+-----------------+----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| kubernetes.io/elb.subnet-id | - | String | This parameter indicates the ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. |
+| kubernetes.io/elb.subnet-id | ``-`` | String | This parameter indicates the ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. |
 | | | | |
 | | | | - Mandatory when a cluster of v1.11.7-r0 or earlier is to be automatically created. |
 | | | | - Optional for clusters later than v1.11.7-r0. |
@@ -465,7 +465,7 @@ You can add a Service when creating a workload using kubectl. This section uses
 | | | | |
 | | | | Default value: **union** |
 +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| kubernetes.io/elb.subnet-id | - | String | This parameter indicates the ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. |
+| kubernetes.io/elb.subnet-id | ``-`` | String | This parameter indicates the ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. |
 | | | | |
 | | | | - Mandatory when a cluster of v1.11.7-r0 or earlier is to be automatically created. |
 | | | | - Optional for clusters later than v1.11.7-r0. |
@@ -33,9 +33,9 @@ This function is supported only in clusters of **v1.15 and later**. It is not di
 +-------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+------------------------------------------------------------+
 | | insecure-registry | Address of an insecure image registry | false | Cannot be changed. |
 +-------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+------------------------------------------------------------+
-| | limitcore | Limit on the number of cores | 5368709120 | - |
+| | limitcore | Limit on the number of cores | 5368709120 | ``-`` |
 +-------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+------------------------------------------------------------+
-| | default-ulimit-nofile | Limit on the number of handles in a container | {soft}:{hard} | - |
+| | default-ulimit-nofile | Limit on the number of handles in a container | {soft}:{hard} | ``-`` |
 +-------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+------------------------------------------------------------+
 | kube-proxy | conntrack-min | sysctl -w net.nf_conntrack_max | 131072 | The values can be modified during the node pool lifecycle. |
 +-------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+------------------------------------------------------------+
@@ -39,7 +39,7 @@ Total reserved amount = Reserved memory for system components + Reserved memory
 +-------------------+---------------------------------+-------------------------------------------------+
 | Total Memory (TM) | Number of Pods | Reserved Memory for kubelet |
 +===================+=================================+=================================================+
-| TM <= 2 GB | - | TM x 25% |
+| TM <= 2 GB | ``-`` | TM x 25% |
 +-------------------+---------------------------------+-------------------------------------------------+
 | TM > 2 GB | 0 < Max. pods on a node <= 16 | 700 MB |
 +-------------------+---------------------------------+-------------------------------------------------+
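The two visible rows give kubelet's reserved memory as TM x 25% when TM <= 2 GB, and a fixed 700 MB when TM > 2 GB with at most 16 pods. A small sketch of just those rows; the full table has further pod tiers that are not part of this hunk:

```python
def kubelet_reserved_memory_mb(total_memory_gb: float, max_pods: int) -> float:
    """Reserved memory for kubelet, covering only the two table rows visible
    in this hunk; larger-pod tiers exist in the full table but are omitted."""
    if total_memory_gb <= 2:
        return total_memory_gb * 1024 * 0.25  # TM x 25%, converted to MB
    if 0 < max_pods <= 16:
        return 700.0                          # fixed 700 MB
    raise NotImplementedError("tiers beyond 16 pods are not in this excerpt")

print(kubelet_reserved_memory_mb(2, 10))  # 512.0
print(kubelet_reserved_memory_mb(4, 16))  # 700.0
```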
@@ -67,7 +67,7 @@ Check Item 3: Whether You Can Log In to the ECS

 #. Log in to the management console. Choose **Service List** > **Computing** > **Elastic Cloud Server**.

-#. In the ECS list, locate the newly created node (generally named in the format of *cluster name*-*random number*) in the cluster and click **Remote Login** in the **Operation** column.
+#. In the ECS list, locate the newly created node (generally named in the format of *cluster name*\ ``-``\ *random number*) in the cluster and click **Remote Login** in the **Operation** column.

 #. Check whether the node name displayed on the page is the same as that on the VM and whether the password or key can be used to log in to the node.