forked from docs/doc-exports

CCE UMN: Added OS support information for features and cluster versions.

Reviewed-by: Eotvos, Oliver <oliver.eotvos@t-systems.com>
Co-authored-by: Dong, Qiu Jian <qiujiandong1@huawei.com>
Co-committed-by: Dong, Qiu Jian <qiujiandong1@huawei.com>

parent e0f19ed93a, commit 3d9cca138b
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,9 +1,16 @@
<a name="cce_01_0091"></a><a name="cce_01_0091"></a>
<h1 class="topictitle1">What Is Cloud Container Engine?</h1>
<div id="body0000001151475048"><p id="cce_01_0091__p828704812214">Cloud Container Engine (CCE) provides highly scalable, high-performance, enterprise-class Kubernetes clusters and supports Docker containers. With CCE, you can easily deploy, manage, and scale containerized applications on the cloud.</p>
<p id="cce_01_0091__p7288114822214">CCE is deeply integrated with the public cloud services, including high-performance computing (ECS), network (VPC, EIP, and ELB), and storage (EVS and SFS) services. It supports heterogeneous computing architectures such as GPU, ARM, and FPGA. By using multi-AZ and multi-region disaster recovery, CCE ensures high availability of Kubernetes clusters.</p>
<p id="cce_01_0091__p1495073743620">You can use CCE through the console, kubectl, and <a href="https://docs.otc.t-systems.com/en-us/api2/cce/cce_02_0344.html" target="_blank" rel="noopener noreferrer">APIs</a>. Before using the CCE service, learn about the concepts related to Kubernetes. For details, see <a href="https://kubernetes.io/docs/concepts/" target="_blank" rel="noopener noreferrer">https://kubernetes.io/docs/concepts/</a>.</p>
<ul id="cce_01_0091__ul2085315361497"><li id="cce_01_0091__li1985318365917">Junior users: You are advised to use the console. The console provides an intuitive interface for you to complete operations such as creating clusters or workloads.</li><li id="cce_01_0091__li162315481992">Advanced users: If you have experience in using kubectl, you are advised to use the kubectl, and <a href="https://docs.otc.t-systems.com/en-us/api2/cce/cce_02_0344.html" target="_blank" rel="noopener noreferrer">APIs</a> to perform operations. For details, see <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/" target="_blank" rel="noopener noreferrer">Kubernetes APIs</a> and <a href="https://kubernetes.io/docs/reference/kubectl/overview/" target="_blank" rel="noopener noreferrer">kubectl CLI</a>.</li></ul>
<div id="body32001227"><div class="section" id="cce_01_0091__en-us_topic_0000001499406010_section8343153913519"><h4 class="sectiontitle">Why CCE?</h4><p id="cce_01_0091__en-us_topic_0000001499406010_p72395598519">CCE is a one-stop platform integrating compute, networking, storage, and many other services. Supporting multi-AZ and multi-region disaster recovery, CCE ensures high availability of <a href="https://kubernetes.io/" target="_blank" rel="noopener noreferrer">Kubernetes</a> clusters.</p>
<p id="cce_01_0091__en-us_topic_0000001499406010_p1220816614522">For more information, see <a href="cce_productdesc_0003.html#cce_productdesc_0003">Product Advantages</a> and <a href="cce_productdesc_0007.html#cce_productdesc_0007">Application Scenarios</a>.</p>
</div>
<div class="section" id="cce_01_0091__en-us_topic_0000001499406010_section14578149155310"><h4 class="sectiontitle">Accessing CCE</h4><p id="cce_01_0091__en-us_topic_0000001499406010_p124041812145418">You can use CCE via the CCE console, kubectl, or Kubernetes APIs. <a href="#cce_01_0091__en-us_topic_0000001499406010_fig3404612135411">Figure 1</a> shows the process.</p>
<div class="fignone" id="cce_01_0091__en-us_topic_0000001499406010_fig3404612135411"><a name="cce_01_0091__en-us_topic_0000001499406010_fig3404612135411"></a><a name="en-us_topic_0000001499406010_fig3404612135411"></a><span class="figcap"><b>Figure 1 </b>Accessing CCE</span><br><span><img id="cce_01_0091__en-us_topic_0000001499406010_image104041112125417" src="en-us_image_0000001499565914.png"></span></div>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="en-us_topic_0000001550437509.html">Service Overview</a></div>
</div>
</div>
@@ -8,7 +8,12 @@
</th>
</tr>
</thead>
<tbody><tr id="cce_01_0300__row181091826101811"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p1510922618183">2023-02-10</p>
<tbody><tr id="cce_01_0300__row450749103813"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p195076943820">2023-05-30</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul1843612311567"><li id="cce_01_0300__li14362312065">Added<a href="cce_10_0652.html">Configuring a Node Pool</a>.</li><li id="cce_01_0300__li48641237869">Added<a href="cce_10_0684.html">Configuring Health Check for Multiple Ports</a>.</li><li id="cce_01_0300__li152057919719">Updated<a href="cce_10_0363.html">Creating a Node</a>.</li><li id="cce_01_0300__li53955101178">Updated<a href="cce_10_0012.html">Creating a Node Pool</a>.</li><li id="cce_01_0300__li16648154715219">Updated<a href="cce_bulletin_0301.html">OS Patch Notes for Cluster Nodes</a>.</li><li id="cce_01_0300__li7404516102217">Updated<a href="cce_productdesc_0005.html">Notes and Constraints</a>.</li></ul>
</td>
</tr>
<tr id="cce_01_0300__row181091826101811"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p1510922618183">2023-02-10</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul262319241116"><li id="cce_01_0300__li192921356122114">Supported the creation of clusters of v1.25.</li><li id="cce_01_0300__li12638261224">Added <a href="cce_10_0466.html">Configuring Pod Security Admission</a>.</li><li id="cce_01_0300__li1238583891617">Added <a href="cce_bulletin_0011.html">Vulnerability Fixing Policies</a>.</li><li id="cce_01_0300__li1132183918530">Updated <a href="cce_10_0252.html">Using kubectl to Create an ELB Ingress</a>.</li></ul>
</td>
@@ -1,15 +0,0 @@
<a name="cce_01_9994"></a><a name="cce_01_9994"></a>
<h1 class="topictitle1">Obtaining Resource Permissions</h1>
<div id="body32001227"><div class="p" id="cce_01_9994__en-us_topic_0000001162706450_p8060118">CCE works closely with multiple cloud services to support computing, storage, networking, and monitoring functions. When you log in to the CCE console for the first time, CCE automatically requests permissions to access those cloud services in the region where you run your applications. Specifically:<ul id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_ul3701191818917"><li id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_li10701131818911">Compute services<p id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_p1087644518126"><a name="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_li10701131818911"></a><a name="en-us_topic_0000001162706450_en-us_topic_0130767462_li10701131818911"></a>When you create a node in a cluster, an ECS is created accordingly. The prerequisite is that CCE have obtained the permissions to access Elastic Cloud Service (ECS).</p>
</li><li id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_li183546439915">Storage services<p id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_p1726215716134"><a name="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_li183546439915"></a><a name="en-us_topic_0000001162706450_en-us_topic_0130767462_li183546439915"></a>CCE allows you to mount storage to nodes and containers in a cluster. The prerequisite is that CCE have obtained the permissions to access services such as Elastic Volume Service (EVS), Scalable File Service (SFS), and Object Storage Service (OBS).</p>
</li><li id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_li1982014497913">Networking services<p id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_p113391343111318"><a name="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_li1982014497913"></a><a name="en-us_topic_0000001162706450_en-us_topic_0130767462_li1982014497913"></a>CCE allows containers in a cluster to be published as services that can be accessed by external systems. The prerequisite is that CCE have obtained the permissions to access services such as Virtual Private Cloud (VPC) and Elastic Load Balance (ELB).</p>
</li><li id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_li1828065516916">Container and monitoring services<p id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_p99237594139"><a name="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_li1828065516916"></a><a name="en-us_topic_0000001162706450_en-us_topic_0130767462_li1828065516916"></a>CCE supports functions such as container image pulling, monitoring, and logging. The prerequisite is that CCE have obtained the permissions to access services such as SoftWare Repository for Container (SWR) and Application Operations Management (AOM).</p>
</li></ul>
</div>
<p id="cce_01_9994__en-us_topic_0000001162706450_p175118118157">After you agree to delegate the permissions, an agency named <strong id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_b1568916310405">cce_admin_trust</strong> will be created for CCE in Identity and Access Management (IAM). The system account <strong id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_b48571932174019">op_svc_cce</strong> will be delegated the <strong id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_b981339420">Tenant Administrator</strong> role to perform operations on other cloud service resources. Tenant Administrator has the permissions on all cloud services except IAM, which calls the cloud services on which CCE depends. The delegation takes effect only in the current region. For details, see <a href="https://docs.otc.t-systems.com/en-us/usermanual/iam/iam_01_0054.html" target="_blank" rel="noopener noreferrer">Delegating Resource Access to Another Account</a>.</p>
<p id="cce_01_9994__en-us_topic_0000001162706450_p46591740151520">To use CCE in multiple regions, you need to request cloud resource permissions in each region. You can go to the IAM console, choose <strong id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_b867181312472">Agencies</strong>, and click <strong id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_b111022114549">cce_admin_trust</strong> to view the delegation records of each region.</p>
<div class="note" id="cce_01_9994__en-us_topic_0000001162706450_note158231511201611"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_p124671324103315">CCE may fail to run as expected if the Tenant Administrator role is not assigned. Therefore, do not delete or modify the <strong id="cce_01_9994__en-us_topic_0000001162706450_en-us_topic_0130767462_b17463155175611">cce_admin_trust</strong> agency when using CCE.</p>
</div></div>
</div>
@@ -93,7 +93,7 @@
</td>
<td class="cellrowborder" valign="top" width="33.33666633336666%" headers="mcps1.3.3.2.2.2.2.2.4.1.2 "><p id="cce_01_9996__p1226813566192">This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.</p>
</td>
<td class="cellrowborder" valign="top" width="33.33666633336666%" headers="mcps1.3.3.2.2.2.2.2.4.1.3 "><p id="cce_01_9996__p13737141735611">By default, <span class="uicontrol" id="cce_01_9996__uicontrol13753167101316"><b>RBAC</b></span> is selected. Read <a href="https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_0189.html" target="_blank" rel="noopener noreferrer">Namespace Permissions (Kubernetes RBAC-based)</a> and select <span class="uicontrol" id="cce_01_9996__uicontrol1663915553130"><b>I am aware of the above limitations and read the CCE Role Management Instructions</b></span>.</p>
<td class="cellrowborder" valign="top" width="33.33666633336666%" headers="mcps1.3.3.2.2.2.2.2.4.1.3 "><p id="cce_01_9996__p13737141735611">By default, <span class="uicontrol" id="cce_01_9996__uicontrol13753167101316"><b>RBAC</b></span> is selected. Read <a href="https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0189.html" target="_blank" rel="noopener noreferrer">Namespace Permissions (Kubernetes RBAC-based)</a> and select <span class="uicontrol" id="cce_01_9996__uicontrol1663915553130"><b>I am aware of the above limitations and read the CCE Role Management Instructions</b></span>.</p>
<p id="cce_01_9996__p16141515161117">After RBAC is enabled, users access resources in the cluster according to fine-grained permissions policies.</p>
</td>
</tr>
@@ -101,7 +101,7 @@
</td>
<td class="cellrowborder" valign="top" width="33.33666633336666%" headers="mcps1.3.3.2.2.2.2.2.4.1.2 "><p id="cce_01_9996__p1214101252312">This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.</p>
</td>
<td class="cellrowborder" valign="top" width="33.33666633336666%" headers="mcps1.3.3.2.2.2.2.2.4.1.3 "><p id="cce_01_9996__p933784218111">The authentication mechanism performs permission control on resources in a cluster. For example, you can grant user A to read and write applications in a namespace, while granting user B to only read resources in a cluster. For details about role-based permission control, see <a href="https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_0085.html" target="_blank" rel="noopener noreferrer">Controlling Cluster Permissions</a>.</p>
<td class="cellrowborder" valign="top" width="33.33666633336666%" headers="mcps1.3.3.2.2.2.2.2.4.1.3 "><p id="cce_01_9996__p933784218111">The authentication mechanism performs permission control on resources in a cluster. For example, you can grant user A to read and write applications in a namespace, while granting user B to only read resources in a cluster. For details about role-based permission control, see <a href="https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0189.html" target="_blank" rel="noopener noreferrer">Namespace Permissions (Kubernetes RBAC-based)</a>.</p>
<ul id="cce_01_9996__ul208851410646"><li id="cce_01_9996__li198851101547">By default, X.509 authentication instead of <span class="uicontrol" id="cce_01_9996__uicontrol1371105874614"><b>Enhanced authentication</b></span> is enabled. X.509 is a standard defining the format of public key certificates. X.509 certificates are used in many Internet protocols.</li><li id="cce_01_9996__li1033718534516">If permission control on a cluster is required, select <strong id="cce_01_9996__b1631132022213">Enhanced authentication</strong> and then <strong id="cce_01_9996__b113212042216">Authenticating Proxy</strong>.<p id="cce_01_9996__p129632614510">Click <strong id="cce_01_9996__b185463373227">Upload</strong> next to <strong id="cce_01_9996__b1354616374228">CA Root Certificate</strong> to upload a valid certificate. Select the check box to confirm that the uploaded certificate is valid.</p>
<p id="cce_01_9996__p36719411534">If the certificate is invalid, the cluster cannot be created. The uploaded certificate file must be smaller than 1 MB and in .crt or .cer format.</p>
</li></ul>
@@ -159,7 +159,7 @@
<tr id="cce_01_9996__row178313381813"><td class="cellrowborder" valign="top" headers="mcps1.3.3.2.5.2.1.2.4.1.1 "><p id="cce_01_9996__p08318320187">OS</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.3.2.5.2.1.2.4.1.2 "><p id="cce_01_9996__p1258174011292">Select an operating system for the node.</p>
<p id="cce_01_9996__p47999261331">Reinstalling OSs or modifying OS configurations could make nodes unavailable. Exercise caution when performing these operations. For more information, see <a href="cce_bulletin_0054.html">Risky Operations on Cluster Nodes</a>.</p>
<p id="cce_01_9996__p47999261331">Reinstalling OSs or modifying OS configurations could make nodes unavailable. Exercise caution when performing these operations. For more information, see <a href="cce_10_0054.html">High-Risk Operations and Solutions</a>.</p>
</td>
</tr>
<tr id="cce_01_9996__row950585532910"><td class="cellrowborder" valign="top" width="22%" headers="mcps1.3.3.2.5.2.1.2.4.1.1 "><p id="cce_01_9996__p25051955142914">VPC</p>
@@ -290,7 +290,7 @@
</table>
</div>
</p></li><li id="cce_01_9996__li62331449191411"><span>Click <span class="uicontrol" id="cce_01_9996__uicontrol09511938122317"><b>Next</b></span> to install add-ons.</span><p><p id="cce_01_9996__p16242151917508">System resource add-ons must be installed. Advanced functional add-ons are optional.</p>
<p id="cce_01_9996__p987031411110">You can also install optional add-ons after the cluster is created. To do so, choose <span class="uicontrol" id="cce_01_9996__uicontrol143592045195114"><b>Add-ons</b></span> in the navigation pane of the CCE console and select the add-on you will install. For details, see <a href="https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_0064.html" target="_blank" rel="noopener noreferrer">Add-ons</a>.</p>
<p id="cce_01_9996__p987031411110">You can also install optional add-ons after the cluster is created. To do so, choose <span class="uicontrol" id="cce_01_9996__uicontrol143592045195114"><b>Add-ons</b></span> in the navigation pane of the CCE console and select the add-on you will install. For details, see <a href="https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0064.html" target="_blank" rel="noopener noreferrer">Add-ons</a>.</p>
</p></li><li id="cce_01_9996__li15071642201916"><span>Click <strong id="cce_01_9996__b1627013211343">Create Now</strong>. Check all the configurations, and click <strong id="cce_01_9996__b8661184415419">Submit</strong>.</span><p><p id="cce_01_9996__p1150715424195">It takes 6 to 10 minutes to create a cluster. Information indicating the progress of the creation process will be displayed.</p>
</p></li></ol>
</div>
File diff suppressed because it is too large
@@ -4,12 +4,12 @@
<div id="body1522665832344"><p id="cce_10_0006__p11116113204610">CCE provides Kubernetes-native container deployment and management and supports lifecycle management of container workloads, including creation, configuration, monitoring, auto scaling, upgrade, uninstall, service discovery, and load balancing.</p>
<div class="section" id="cce_10_0006__section9645114684816"><h4 class="sectiontitle">Pod</h4><p id="cce_10_0006__en-us_topic_0254767870_p356108173515">A pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod encapsulates one or more containers, storage volumes, a unique network IP address, and options that govern how the containers should run.</p>
<p id="cce_10_0006__en-us_topic_0254767870_p4629172611480">Pods can be used in either of the following ways:</p>
<ul id="cce_10_0006__en-us_topic_0254767870_ul062982617481"><li id="cce_10_0006__en-us_topic_0254767870_li1629172611482">A container is running in a pod. This is the most common usage of pods in Kubernetes. You can view the pod as a single encapsulated container, but Kubernetes directly manages pods instead of containers.</li><li id="cce_10_0006__en-us_topic_0254767870_li1962932615480">Multiple containers that need to be coupled and share resources run in a pod. In this scenario, an application contains a main container and several sidecar containers, as shown in <a href="#cce_10_0006__en-us_topic_0254767870_fig347141918551">Figure 1</a>. For example, the main container is a web server that provides file services from a fixed directory, and a sidecar container periodically downloads files to the directory.<div class="fignone" id="cce_10_0006__en-us_topic_0254767870_fig347141918551"><a name="cce_10_0006__en-us_topic_0254767870_fig347141918551"></a><a name="en-us_topic_0254767870_fig347141918551"></a><span class="figcap"><b>Figure 1 </b>Pod</span><br><span><img id="cce_10_0006__en-us_topic_0254767870_image1835215316361" src="en-us_image_0258392378.png"></span></div>
<ul id="cce_10_0006__en-us_topic_0254767870_ul062982617481"><li id="cce_10_0006__en-us_topic_0254767870_li1629172611482">A container is running in a pod. This is the most common usage of pods in Kubernetes. You can view the pod as a single encapsulated container, but Kubernetes directly manages pods instead of containers.</li><li id="cce_10_0006__en-us_topic_0254767870_li1962932615480">Multiple containers that need to be coupled and share resources run in a pod. In this scenario, an application contains a main container and several sidecar containers, as shown in <a href="#cce_10_0006__en-us_topic_0254767870_fig347141918551">Figure 1</a>. For example, the main container is a web server that provides file services from a fixed directory, and a sidecar container periodically downloads files to the directory.<div class="fignone" id="cce_10_0006__en-us_topic_0254767870_fig347141918551"><a name="cce_10_0006__en-us_topic_0254767870_fig347141918551"></a><a name="en-us_topic_0254767870_fig347141918551"></a><span class="figcap"><b>Figure 1 </b>Pod</span><br><span><img id="cce_10_0006__en-us_topic_0254767870_image1835215316361" src="en-us_image_0000001518222716.png"></span></div>
</li></ul>
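<p>For illustration, a minimal single-container pod manifest might look as follows (the pod name and image are example values, not taken from this document):</p>
<pre>apiVersion: v1
kind: Pod
metadata:
  name: nginx                # example pod name
spec:
  containers:
  - name: nginx              # the single container in this pod
    image: nginx:alpine      # example image
    ports:
    - containerPort: 80      # port the container listens on</pre>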
<p id="cce_10_0006__en-us_topic_0254767870_p9163143619182">In Kubernetes, pods are rarely created directly. Instead, controllers such as Deployments and jobs, are used to manage pods. Controllers can create and manage multiple pods, and provide replica management, rolling upgrade, and self-healing capabilities. A controller generally uses a pod template to create corresponding pods.</p>
</div>
<div class="section" id="cce_10_0006__section1972719357496"><h4 class="sectiontitle">Deployment</h4><p id="cce_10_0006__en-us_topic_0249851113_p13243347131615">A pod is the smallest and simplest unit that you create or deploy in Kubernetes. It is designed to be an ephemeral, one-off entity. A pod can be evicted when node resources are insufficient and disappears along with a cluster node failure. Kubernetes provides controllers to manage pods. Controllers can create and manage pods, and provide replica management, rolling upgrade, and self-healing capabilities. The most commonly used controller is Deployment.</p>
<div class="fignone" id="cce_10_0006__en-us_topic_0249851113_fig12546173933714"><span class="figcap"><b>Figure 2 </b>Relationship between a Deployment and pods</span><br><span><img id="cce_10_0006__en-us_topic_0249851113_image5671529113711" src="en-us_image_0258095884.png"></span></div>
<div class="fignone" id="cce_10_0006__en-us_topic_0249851113_fig12546173933714"><span class="figcap"><b>Figure 2 </b>Relationship between a Deployment and pods</span><br><span><img id="cce_10_0006__en-us_topic_0249851113_image5671529113711" src="en-us_image_0000001569023033.png"></span></div>
<p id="cce_10_0006__en-us_topic_0249851113_p35371248184511">A Deployment can contain one or more pods. These pods have the same role. Therefore, the system automatically distributes requests to multiple pods of a Deployment.</p>
<p id="cce_10_0006__en-us_topic_0249851113_p11715188281">A Deployment integrates a lot of functions, including online deployment, rolling upgrade, replica creation, and restoration of online jobs. To some extent, Deployments can be used to realize unattended rollout, which greatly reduces difficulties and operation risks in the rollout process.</p>
</div>
@@ -18,12 +18,12 @@
<p id="cce_10_0006__en-us_topic_0249896621_p97277467269">With detailed analysis, it is found that each part of distributed stateful applications plays a different role. For example, the database nodes are deployed in active/standby mode, and pods are dependent on each other. In this case, you need to meet the following requirements for the pods:</p>
<ul id="cce_10_0006__en-us_topic_0249896621_ul1181724132317"><li id="cce_10_0006__en-us_topic_0249896621_li10181102419231">A pod can be recognized by other pods. Therefore, a pod must have a fixed identifier.</li><li id="cce_10_0006__en-us_topic_0249896621_li81819249237">Each pod has an independent storage device. After a pod is deleted and then restored, the data read from the pod must be the same as the previous one. Otherwise, the pod status is inconsistent.</li></ul>
<p id="cce_10_0006__en-us_topic_0249896621_p929315724313">To address the preceding requirements, Kubernetes provides StatefulSets.</p>
<ol id="cce_10_0006__en-us_topic_0249896621_ol117020203559"><li id="cce_10_0006__en-us_topic_0249896621_li183871501692">A StatefulSet provides a fixed name for each pod following a fixed number ranging from 0 to N. After a pod is rescheduled, the pod name and the host name remain unchanged.</li><li id="cce_10_0006__en-us_topic_0249896621_li1789810518913">A StatefulSet provides a fixed access domain name for each pod through the headless Service (described in following sections).</li><li id="cce_10_0006__en-us_topic_0249896621_li43183204569">The StatefulSet creates PersistentVolumeClaims (PVCs) with fixed identifiers to ensure that pods can access the same persistent data after being rescheduled.<p id="cce_10_0006__en-us_topic_0249896621_p8536185392116"><a name="cce_10_0006__en-us_topic_0249896621_li43183204569"></a><a name="en-us_topic_0249896621_li43183204569"></a><span><img id="cce_10_0006__en-us_topic_0249896621_image9125145402111" src="en-us_image_0258203193.png"></span></p>
<ol id="cce_10_0006__en-us_topic_0249896621_ol117020203559"><li id="cce_10_0006__en-us_topic_0249896621_li183871501692">A StatefulSet provides a fixed name for each pod following a fixed number ranging from 0 to N. After a pod is rescheduled, the pod name and the host name remain unchanged.</li><li id="cce_10_0006__en-us_topic_0249896621_li1789810518913">A StatefulSet provides a fixed access domain name for each pod through the headless Service (described in following sections).</li><li id="cce_10_0006__en-us_topic_0249896621_li43183204569">The StatefulSet creates PersistentVolumeClaims (PVCs) with fixed identifiers to ensure that pods can access the same persistent data after being rescheduled.<p id="cce_10_0006__en-us_topic_0249896621_p8536185392116"><a name="cce_10_0006__en-us_topic_0249896621_li43183204569"></a><a name="en-us_topic_0249896621_li43183204569"></a><span><img id="cce_10_0006__en-us_topic_0249896621_image9125145402111" src="en-us_image_0000001517743628.png"></span></p>
</li></ol>
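<p>The following sketch shows how the fixed naming, headless Service, and per-pod PVCs described above appear in a StatefulSet manifest (all names, images, and sizes are example values):</p>
<pre>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql-headless   # headless Service that gives each pod a fixed domain name
  replicas: 2                   # pods are named mysql-0 and mysql-1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7        # example image
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:         # a PVC with a fixed identifier is created for each pod
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi         # example size</pre>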
</div>
<div class="section" id="cce_10_0006__section7846281504"><h4 class="sectiontitle">DaemonSet</h4><p id="cce_10_0006__en-us_topic_0249851114_p441104813815">A DaemonSet runs a pod on each node in a cluster and ensures that there is only one pod. This works well for certain system-level applications, such as log collection and resource monitoring, since they must run on each node and need only a few pods. A good example is kube-proxy.</p>
<p id="cce_10_0006__en-us_topic_0249851114_p5986375820">DaemonSets are closely related to nodes. If a node becomes faulty, the DaemonSet will not create the same pods on other nodes.</p>
<div class="fignone" id="cce_10_0006__en-us_topic_0249851114_fig27588261914"><span class="figcap"><b>Figure 3 </b>DaemonSet</span><br><span><img id="cce_10_0006__en-us_topic_0249851114_image13336133243518" src="en-us_image_0258871213.png"></span></div>
<div class="fignone" id="cce_10_0006__en-us_topic_0249851114_fig27588261914"><span class="figcap"><b>Figure 3 </b>DaemonSet</span><br><span><img id="cce_10_0006__en-us_topic_0249851114_image13336133243518" src="en-us_image_0000001518062772.png"></span></div>
</div>
<div class="section" id="cce_10_0006__section153173319578"><h4 class="sectiontitle">Job and Cron Job</h4><p id="cce_10_0006__en-us_topic_0249851115_p10889736123218">Jobs and cron jobs allow you to run short lived, one-off tasks in batch. They ensure the task pods run to completion.</p>
<ul id="cce_10_0006__en-us_topic_0249851115_ul197714911354"><li id="cce_10_0006__en-us_topic_0249851115_li47711097352">A job is a resource object used by Kubernetes to control batch tasks. Jobs are different from long-term servo tasks (such as Deployments and StatefulSets). The former is started and terminated at specific times, while the latter runs unceasingly unless being terminated. The pods managed by a job will be automatically removed after successfully completing tasks based on user configurations.</li><li id="cce_10_0006__en-us_topic_0249851115_li249061111353">A cron job runs a job periodically on a specified schedule. A cron job object is similar to a line of a crontab file in Linux.</li></ul>
File diff suppressed because it is too large
@@ -4,7 +4,7 @@
<div id="body1522665832344"><p id="cce_10_0010__p13310145119810">You can learn about a cluster network from the following two aspects:</p>
<ul id="cce_10_0010__ul65247121891"><li id="cce_10_0010__li14524161214917">What is a cluster network like? A cluster consists of multiple nodes, and pods (or containers) are running on the nodes. Nodes and containers need to communicate with each other. For details about the cluster network types and their functions, see <a href="#cce_10_0010__section1131733719195">Cluster Network Structure</a>.</li><li id="cce_10_0010__li55241612391">How is pod access implemented in a cluster? Accessing a pod or container is a process of accessing services of a user. Kubernetes provides <a href="#cce_10_0010__section1860619221134">Service</a> and <a href="#cce_10_0010__section1248852094313">Ingress</a> to address pod access issues. This section summarizes common network access scenarios. You can select the proper scenario based on site requirements. For details about the network access scenarios, see <a href="#cce_10_0010__section1286493159">Access Scenarios</a>.</li></ul>
<div class="section" id="cce_10_0010__section1131733719195"><a name="cce_10_0010__section1131733719195"></a><a name="section1131733719195"></a><h4 class="sectiontitle">Cluster Network Structure</h4><p id="cce_10_0010__p3299181794916">All nodes in the cluster are located in a VPC and use the VPC network. The container network is managed by dedicated network add-ons.</p>
<p id="cce_10_0010__p452843519446"><span><img id="cce_10_0010__image94831936164418" src="en-us_image_0000001199181334.png"></span></p>
<p id="cce_10_0010__p452843519446"><span><img id="cce_10_0010__image94831936164418" src="en-us_image_0000001518222536.png"></span></p>
<ul id="cce_10_0010__ul1916179122617"><li id="cce_10_0010__li13455145754315"><strong id="cce_10_0010__b19468105563811">Node Network</strong><p id="cce_10_0010__p17682193014812">A node network assigns IP addresses to hosts (nodes in the figure above) in a cluster. You need to select a VPC subnet as the node network of the CCE cluster. The number of available IP addresses in a subnet determines the maximum number of nodes (including master nodes and worker nodes) that can be created in a cluster. This quantity is also affected by the container network. For details, see the container network model.</p>
</li><li id="cce_10_0010__li16131141644715"><strong id="cce_10_0010__b1975815172433">Container Network</strong><p id="cce_10_0010__p523322010499">A container network assigns IP addresses to containers in a cluster. CCE inherits the IP-Per-Pod-Per-Network network model of Kubernetes. That is, each pod has an independent IP address on a network plane and all containers in a pod share the same network namespace. All pods in a cluster exist in a directly connected flat network. They can access each other through their IP addresses without using NAT. Kubernetes only provides a network mechanism for pods, but does not directly configure pod networks. The configuration of pod networks is implemented by specific container network add-ons. The container network add-ons are responsible for configuring networks for pods and managing container IP addresses.</p>
<p id="cce_10_0010__p3753153443514">Currently, CCE supports the following container network models:</p>
@@ -14,20 +14,20 @@
</li></ul>
</div>
<div class="section" id="cce_10_0010__section1860619221134"><a name="cce_10_0010__section1860619221134"></a><a name="section1860619221134"></a><h4 class="sectiontitle">Service</h4><p id="cce_10_0010__p314709111318">A Service is used for pod access. With a fixed IP address, a Service forwards access traffic to pods and performs load balancing for these pods.</p>
<div class="fignone" id="cce_10_0010__en-us_topic_0249851121_fig163156154816"><span class="figcap"><b>Figure 1 </b>Accessing pods through a Service</span><br><span><img id="cce_10_0010__en-us_topic_0249851121_image1926812771312" src="en-us_image_0258889981.png"></span></div>
<div class="fignone" id="cce_10_0010__en-us_topic_0249851121_fig163156154816"><span class="figcap"><b>Figure 1 </b>Accessing pods through a Service</span><br><span><img id="cce_10_0010__en-us_topic_0249851121_image1926812771312" src="en-us_image_0000001517743432.png"></span></div>
<p id="cce_10_0010__p831948183818">You can configure the following types of Services:</p>
<ul id="cce_10_0010__ul953218444116"><li id="cce_10_0010__li87791418174620">ClusterIP: used to make the Service only reachable from within a cluster.</li><li id="cce_10_0010__li17876227144612">NodePort: used for access from outside a cluster. A NodePort Service is accessed through the port on the node.</li><li id="cce_10_0010__li94953274615">LoadBalancer: used for access from outside a cluster. It is an extension of NodePort, to which a load balancer routes, and external systems only need to access the load balancer.</li></ul>
<p id="cce_10_0010__p1677717174140">For details about the Service, see <a href="cce_10_0249.html">Service Overview</a>.</p>
</div>
<div class="section" id="cce_10_0010__section1248852094313"><a name="cce_10_0010__section1248852094313"></a><a name="section1248852094313"></a><h4 class="sectiontitle">Ingress</h4><p id="cce_10_0010__p96672218193">Services forward requests using layer-4 TCP and UDP protocols. Ingresses forward requests using layer-7 HTTP and HTTPS protocols. Domain names and paths can be used to achieve finer granularities.</p>
<div class="fignone" id="cce_10_0010__fig816719454212"><span class="figcap"><b>Figure 2 </b>Ingress and Service</span><br><span><img id="cce_10_0010__en-us_topic_0249851122_image8371183511310" src="en-us_image_0258961458.png"></span></div>
<div class="fignone" id="cce_10_0010__fig816719454212"><span class="figcap"><b>Figure 2 </b>Ingress-Service</span><br><span><img id="cce_10_0010__en-us_topic_0249851122_image8371183511310" src="en-us_image_0000001517903016.png"></span></div>
<p id="cce_10_0010__p174691141141410">For details about the ingress, see <a href="cce_10_0094.html">Ingress Overview</a>.</p>
</div>
<div class="section" id="cce_10_0010__section1286493159"><a name="cce_10_0010__section1286493159"></a><a name="section1286493159"></a><h4 class="sectiontitle">Access Scenarios</h4><p id="cce_10_0010__p1558001514155">Workload access scenarios can be categorized as follows:</p>
<ul id="cce_10_0010__ul125010117542"><li id="cce_10_0010__li1466355519018">Intra-cluster access: A ClusterIP Service is used for workloads in the same cluster to access each other.</li><li id="cce_10_0010__li1014011111110">Access from outside a cluster: A Service (NodePort or LoadBalancer type) or an ingress is recommended for a workload outside a cluster to access workloads in the cluster.<ul id="cce_10_0010__ul101426119117"><li id="cce_10_0010__li1014213113116">Access through the internet requires an EIP to be bound the node or load balancer.</li><li id="cce_10_0010__li2501311125411">Access through an intranet uses only the intranet IP address of the node or load balancer. If workloads are located in different VPCs, a peering connection is required to enable communication between different VPCs.</li></ul>
</li><li id="cce_10_0010__li1066365520014">External access initiated by a workload:<ul id="cce_10_0010__ul17529512239"><li id="cce_10_0010__li26601017165619">Accessing an intranet: The workload accesses the intranet address, but the implementation method varies depending on container network models. Ensure that the peer security group allows the access requests from the container CIDR block. </li><li id="cce_10_0010__li8257105318237">Accessing a public network: You need to assign an EIP to the node where the workload runs (when the VPC network or tunnel network model is used), bind an EIP to the pod IP address (when the Cloud Native Network 2.0 model is used), or configure SNAT rules through the NAT gateway. For details, see <a href="cce_10_0400.html">Accessing Public Networks from a Container</a>.</li></ul>
<ul id="cce_10_0010__ul125010117542"><li id="cce_10_0010__li1466355519018">Intra-cluster access: A ClusterIP Service is used for workloads in the same cluster to access each other.</li><li id="cce_10_0010__li1014011111110">Access from outside a cluster: A Service (NodePort or LoadBalancer type) or an ingress is recommended for a workload outside a cluster to access workloads in the cluster.<ul id="cce_10_0010__ul101426119117"><li id="cce_10_0010__li8904911447">Access through the internet requires an EIP to be bound the node or load balancer.</li><li id="cce_10_0010__li2501311125411">Access through the intranet requires an internal IP address to be bound the node or load balancer. If workloads are located in different VPCs, a peering connection is required to enable communication between different VPCs.</li></ul>
</li><li id="cce_10_0010__li1066365520014">The workload accesses the external network.<ul id="cce_10_0010__ul17529512239"><li id="cce_10_0010__li26601017165619">Accessing an intranet: The workload accesses the intranet address, but the implementation method varies depending on container network models. Ensure that the peer security group allows the access requests from the container CIDR block. </li><li id="cce_10_0010__li8257105318237">Accessing a public network: You need to assign an EIP to the node where the workload runs (when the VPC network or tunnel network model is used), bind an EIP to the pod IP address (when the Cloud Native Network 2.0 model is used), or configure SNAT rules through the NAT gateway. For details, see <a href="cce_10_0400.html">Accessing Public Networks from a Container</a>.</li></ul>
</li></ul>
<div class="fignone" id="cce_10_0010__fig13795829151515"><span class="figcap"><b>Figure 3 </b>Network access diagram</span><br><span><img id="cce_10_0010__image445972519529" src="en-us_image_0000001244261169.png"></span></div>
<div class="fignone" id="cce_10_0010__fig13795829151515"><span class="figcap"><b>Figure 3 </b>Network access diagram</span><br><span><img id="cce_10_0010__image445972519529" src="en-us_image_0000001568822741.png"></span></div>
</div>
</div>
<div>
@@ -4,7 +4,7 @@
<div id="body1522736584192"><div class="section" id="cce_10_0011__section13559184110492"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0011__p32401248184910">ClusterIP Services allow workloads in the same cluster to use their cluster-internal domain names to access each other.</p>
<p id="cce_10_0011__p653753053815">The cluster-internal domain name format is <em id="cce_10_0011__i8179113533712"><Service name></em>.<em id="cce_10_0011__i14179133519374"><Namespace of the workload></em><strong id="cce_10_0011__b164892813716">.svc.cluster.local:</strong><em id="cce_10_0011__i19337102815712"><Port></em>, for example, <strong id="cce_10_0011__b8115811381">nginx.default.svc.cluster.local:80</strong>.</p>
<p id="cce_10_0011__p1778412445517"><a href="#cce_10_0011__fig192245420557">Figure 1</a> shows the mapping relationships between access channels, container ports, and access ports.</p>
<div class="fignone" id="cce_10_0011__fig192245420557"><a name="cce_10_0011__fig192245420557"></a><a name="fig192245420557"></a><span class="figcap"><b>Figure 1 </b>Intra-cluster access (ClusterIP)</span><br><span><img id="cce_10_0011__image1942163010278" src="en-us_image_0000001243981117.png"></span></div>
<div class="fignone" id="cce_10_0011__fig192245420557"><a name="cce_10_0011__fig192245420557"></a><a name="fig192245420557"></a><span class="figcap"><b>Figure 1 </b>Intra-cluster access (ClusterIP)</span><br><span><img id="cce_10_0011__image1942163010278" src="en-us_image_0000001569023045.png"></span></div>
</div>
<div class="section" id="cce_10_0011__section51925078171335"><h4 class="sectiontitle">Creating a ClusterIP Service</h4><ol id="cce_10_0011__ol1321170617144"><li id="cce_10_0011__li41731123658"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0011__li836916478329"><span>Choose <strong id="cce_10_0011__b85507206148">Networking</strong> in the navigation pane and click <strong id="cce_10_0011__b1938115214148">Create Service</strong> in the upper right corner.</span></li><li id="cce_10_0011__li3476651017144"><span>Set intra-cluster access parameters.</span><p><ul id="cce_10_0011__ul4446314017144"><li id="cce_10_0011__li6462394317144"><strong id="cce_10_0011__b181470402505">Service Name</strong>: Service name, which can be the same as the workload name.</li><li id="cce_10_0011__li89543531070"><strong id="cce_10_0011__b2091115317145">Service Type</strong>: Select <strong id="cce_10_0011__b291265312145">ClusterIP</strong>.</li><li id="cce_10_0011__li4800017144"><strong id="cce_10_0011__b3997151161512">Namespace</strong>: Namespace to which the workload belongs.</li><li id="cce_10_0011__li43200017144"><strong id="cce_10_0011__b16251723161514">Selector</strong>: Add a label and click <strong id="cce_10_0011__b157041550131611">Add</strong>. A Service selects a pod based on the added label. You can also click <strong id="cce_10_0011__b796831114161">Reference Workload Label</strong> to reference the label of an existing workload. In the dialog box that is displayed, select a workload and click <strong id="cce_10_0011__b1117311264160">OK</strong>.</li><li id="cce_10_0011__li388800117144"><strong id="cce_10_0011__b150413392315954">Port Settings</strong><ul id="cce_10_0011__ul13757123384316"><li id="cce_10_0011__li475711338435"><strong id="cce_10_0011__b712192113108">Protocol</strong>: protocol used by the Service.</li><li id="cce_10_0011__li353122153610"><strong id="cce_10_0011__b2766425101013">Service Port</strong>: port used by the Service. The port number ranges from 1 to 65535.</li><li id="cce_10_0011__li177581033194316"><strong id="cce_10_0011__b2045852761014">Container Port</strong>: port on which the workload listens. For example, Nginx uses port 80 by default.</li></ul>
</li></ul>
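<p>The console settings above correspond to a Service manifest like the following sketch (the names, label, and ports are example values):</p>
<pre>apiVersion: v1
kind: Service
metadata:
  name: nginx                # Service name; can be the same as the workload name
  namespace: default         # namespace to which the workload belongs
spec:
  type: ClusterIP            # intra-cluster access only
  selector:
    app: nginx               # label selector matching the target pods
  ports:
  - protocol: TCP
    port: 80                 # Service port (1 to 65535)
    targetPort: 80           # container port on which the workload listens</pre>
<p>Workloads in the same cluster could then reach this Service at <strong>nginx.default.svc.cluster.local:80</strong>, as described above.</p>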
File diff suppressed because it is too large
@@ -4,20 +4,20 @@
<div id="body1522736584192"><div class="section" id="cce_10_0014__section19854101411508"><a name="cce_10_0014__section19854101411508"></a><a name="section19854101411508"></a><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0014__p1858152125017">A workload can be accessed from public networks through a load balancer, which is more secure and reliable than EIP.</p>
<p id="cce_10_0014__p18345124185316">The LoadBalancer access address is in the format of <IP address of public network load balancer>:<access port>, for example, <strong id="cce_10_0014__b11546131414542">10.117.117.117:80</strong>.</p>
<p id="cce_10_0014__p7801158125217">In this access mode, requests are transmitted through an ELB load balancer to a node and then forwarded to the destination pod through the Service.</p>
<div class="fignone" id="cce_10_0014__fig1454926316508"><span class="figcap"><b>Figure 1 </b>LoadBalancer</span><br><span><img id="cce_10_0014__image846021786" src="en-us_image_0000001244141181.png"></span></div>
<div class="fignone" id="cce_10_0014__fig1454926316508"><span class="figcap"><b>Figure 1 </b>LoadBalancer</span><br><span><img id="cce_10_0014__image846021786" src="en-us_image_0000001569022961.png"></span></div>
<p id="cce_10_0014__p3662933103112">When <strong id="cce_10_0014__b7582529103312">CCE Turbo clusters and dedicated load balancers</strong> are used, passthrough networking is supported to reduce service latency and ensure zero performance loss.</p>
<p id="cce_10_0014__p655815372328">External access requests are directly forwarded from a load balancer to pods. Internal access requests can be forwarded to a pod through a Service.</p>
<div class="fignone" id="cce_10_0014__fig44531612193618"><span class="figcap"><b>Figure 2 </b>Passthrough networking</span><br><span><img id="cce_10_0014__image5485375324" src="en-us_image_0000001249073211.png"></span></div>
<div class="fignone" id="cce_10_0014__fig44531612193618"><span class="figcap"><b>Figure 2 </b>Passthrough networking</span><br><span><img id="cce_10_0014__image5485375324" src="en-us_image_0000001517903124.png"></span></div>
</div>
<div class="section" id="cce_10_0014__section11642143794611"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_10_0014__ul1801539464"><li id="cce_10_0014__li1529952816473">LoadBalancer Services allow workloads to be accessed from public networks through <strong id="cce_10_0014__b1511118124819">ELB</strong>. This access mode has the following restrictions:<ul id="cce_10_0014__ul1241483374717"><li id="cce_10_0014__li162242024131019">It is recommended that automatically created load balancers not be used by other resources. Otherwise, these load balancers cannot be completely deleted, causing residual resources.</li><li id="cce_10_0014__li1080453124610">Do not change the listener name for the load balancer in clusters of v1.15 and earlier. Otherwise, the load balancer cannot be accessed.</li></ul>
</li><li id="cce_10_0014__li128551156114310">After a Service is created, if the affinity setting is switched from the cluster level to the node level, the connection tracing table will not be cleared. You are advised not to modify the Service affinity setting after the Service is created. If you need to modify it, create a Service again.</li><li id="cce_10_0014__li1553715571314">If the service affinity is set to the node level (that is, <strong id="cce_10_0014__b16405133417613">externalTrafficPolicy</strong> is set to <strong id="cce_10_0014__b12712364614">Local</strong>), the cluster may fail to access the Service by using the ELB address. For details, see <a href="#cce_10_0014__section52631714117">Why a Cluster Fails to Access Services by Using the ELB Address</a>.</li><li id="cce_10_0014__li62831358182017">CCE Turbo clusters support only cluster-level service affinity.</li><li id="cce_10_0014__li35821536336">Dedicated ELB load balancers can be used only in clusters of v1.17 and later.</li><li id="cce_10_0014__li188391194225">Dedicated load balancers must be the network type (TCP/UDP) supporting private networks (with a private IP). If the Service needs to support HTTP, the specifications of dedicated load balancers must use HTTP/HTTPS (application load balancing) in addition to TCP/UDP (network load balancing).</li><li id="cce_10_0014__li2627549105716">If you create a LoadBalancer Service on the CCE console, a random node port is automatically generated. If you use kubectl to create a LoadBalancer Service, a random node port is generated unless you specify one.</li><li id="cce_10_0014__li93797513138">In a CCE cluster, if the cluster-level affinity is configured for a LoadBalancer Service, requests are distributed to the node ports of each node using SNAT when entering the cluster. The number of node ports cannot exceed the number of available node ports on the node. If the Service affinity is at the node level (local), there is no such constraint. In a CCE Turbo cluster, this constraint applies to shared ELB load balancers, but not dedicated ones. You are advised to use dedicated ELB load balancers in CCE Turbo clusters.</li><li id="cce_10_0014__li1031414582416">When the cluster service forwarding (proxy) mode is IPVS, the node IP cannot be configured as the external IP of the Service. Otherwise, the node is unavailable.</li><li id="cce_10_0014__li202253469362">In a cluster using the IPVS proxy mode, if the ingress and Service use the same ELB load balancer, the ingress cannot be accessed from the nodes and containers in the cluster because kube-proxy mounts the LoadBalancer Service address to the ipvs-0 bridge. This bridge intercepts the traffic of the load balancer connected to the ingress. You are advised to use different ELB load balancers for the ingress and Service.</li></ul>
</div>
<div class="section" id="cce_10_0014__section1325012312139"><h4 class="sectiontitle">Creating a LoadBalancer Service</h4><ol id="cce_10_0014__ol751935681319"><li id="cce_10_0014__li41731123658"><span>Log in to the CCE console and click the cluster name to access the cluster.</span></li><li id="cce_10_0014__li1651955651312"><span>Choose <strong id="cce_10_0014__b20811412124117">Networking</strong> in the navigation pane and click <strong id="cce_10_0014__b4811612104119">Create Service</strong> in the upper right corner.</span></li><li id="cce_10_0014__li185190567138"><span>Set parameters.</span><p><ul id="cce_10_0014__ul4446314017144"><li id="cce_10_0014__li6462394317144"><strong id="cce_10_0014__b186253818421">Service Name</strong>: Specify a Service name, which can be the same as the workload name.</li><li id="cce_10_0014__li89543531070"><strong id="cce_10_0014__b555284112425">Access Type</strong>: Select <strong id="cce_10_0014__b655313416422">LoadBalancer</strong>.</li><li id="cce_10_0014__li4800017144"><strong id="cce_10_0014__b462512137439">Namespace</strong>: Namespace to which the workload belongs.</li><li id="cce_10_0014__li1758110116149"><strong id="cce_10_0014__b325014537434">Service Affinity</strong>: For details, see <a href="cce_10_0142.html#cce_10_0142__section18134208069">externalTrafficPolicy (Service Affinity)</a>.<ul id="cce_10_0014__ul158101161412"><li id="cce_10_0014__li105815113141"><strong id="cce_10_0014__b16659151119444">Cluster level</strong>: The IP addresses and access ports of all nodes in a cluster can be used to access the workload associated with the Service. Service access will cause performance loss due to route redirection, and the source IP address of the client cannot be obtained.</li><li id="cce_10_0014__li185817117145"><strong id="cce_10_0014__b187631494415">Node level</strong>: Only the IP address and access port of the node where the workload is located can access the workload associated with the Service. Service access will not cause performance loss due to route redirection, and the source IP address of the client can be obtained.</li></ul>
</li><li id="cce_10_0014__li43200017144"><strong id="cce_10_0014__b964616495410">Selector</strong>: Add a label and click <strong id="cce_10_0014__b1664616492411">Add</strong>. A Service selects a pod based on the added label. You can also click <strong id="cce_10_0014__b13284181916449">Reference Workload Label</strong> to reference the label of an existing workload. In the dialog box that is displayed, select a workload and click <strong id="cce_10_0014__b18284181915445">OK</strong>.</li><li id="cce_10_0014__li14384123818176"><strong id="cce_10_0014__b2310182654418">Load Balancer</strong><p id="cce_10_0014__p4855423189">Select the load balancer to interconnect. Only load balancers in the same VPC as the cluster are supported. If no load balancer is available, click <strong id="cce_10_0014__b1221291200">Create Load Balancer</strong> to create one on the ELB console.</p>
<p id="cce_10_0014__p17766202114215">You can click <strong id="cce_10_0014__b135601348463">Edit</strong> and configure load balancer parameters in the <strong id="cce_10_0014__b72171221472">Load Balancer</strong> dialog box.</p>
<p id="cce_10_0014__p17766202114215">You can click the edit icon in the row of <strong id="cce_10_0014__b4667115817168">Set ELB</strong> to configure load balancer parameters.</p>
<ul id="cce_10_0014__ul943963914228"><li id="cce_10_0014__li8170555132211"><strong id="cce_10_0014__b169881428124716">Distribution Policy</strong>: Three algorithms are available: weighted round robin, weighted least connections algorithm, or source IP hash.<div class="note" id="cce_10_0014__note14170205516225"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0014__ul1717075520227"><li id="cce_10_0014__li15170955152215"><strong id="cce_10_0014__b8139183255011">Weighted round robin</strong>: Requests are forwarded to different servers based on their weights, which indicate server processing performance. Backend servers with higher weights receive proportionately more requests, whereas equal-weighted servers receive the same number of requests. This algorithm is often used for short connections, such as HTTP services.</li><li id="cce_10_0014__li12170185532213"><strong id="cce_10_0014__b9879547125012">Weighted least connections</strong>: In addition to the weight assigned to each server, the number of connections processed by each backend server is also considered. Requests are forwarded to the server with the lowest connections-to-weight ratio. Building on <strong id="cce_10_0014__b19132751145011">least connections</strong>, the <strong id="cce_10_0014__b71328516505">weighted least connections</strong> algorithm assigns a weight to each server based on their processing capability. This algorithm is often used for persistent connections, such as database connections.</li><li id="cce_10_0014__li0170105502211"><strong id="cce_10_0014__b8109955125015">Source IP hash</strong>: The source IP address of each request is calculated using the hash algorithm to obtain a unique hash key, and all backend servers are numbered. The generated key allocates the client to a particular server. This enables requests from different clients to be distributed in load balancing mode and ensures that requests from the same client are forwarded to the same server. This algorithm applies to TCP connections without cookies.</li></ul>
</div></div>
</li><li id="cce_10_0014__li0170115513227"><strong id="cce_10_0014__b43411498117">Type</strong>: This function is disabled by default. You can select <strong id="cce_10_0014__b0394332121213">Source IP address</strong>. Listeners ensure session stickiness based on IP addresses. Requests from the same IP address will be forwarded to the same backend server.</li><li id="cce_10_0014__li14170655112210"><strong id="cce_10_0014__b9887155318122">Health Check</strong>: This function is disabled by default. The health check is for the load balancer. When TCP is selected during the <a href="#cce_10_0014__li388800117144">port settings</a>, you can choose either TCP or HTTP. When UDP is selected during the <a href="#cce_10_0014__li388800117144">port settings</a>, only UDP is supported.. By default, the service port (Node Port and container port of the Service) is used for health check. You can also specify another port for health check. After the port is specified, a service port named <strong id="cce_10_0014__b159511449710">cce-healthz</strong> will be added for the Service.</li></ul>
</li><li id="cce_10_0014__li0170115513227"><strong id="cce_10_0014__b43411498117">Type</strong>: This function is disabled by default. You can select <strong id="cce_10_0014__b0394332121213">Source IP address</strong>. Listeners ensure session stickiness based on IP addresses. Requests from the same IP address will be forwarded to the same backend server.</li><li id="cce_10_0014__li14170655112210"><strong id="cce_10_0014__b166191646173">Health Check</strong>: configured for the load balancer. When TCP is selected during the <a href="#cce_10_0014__li388800117144">port settings</a>, you can choose either TCP or HTTP. When UDP is selected during the <a href="#cce_10_0014__li388800117144">port settings</a>, only UDP is supported.. By default, the service port (Node Port and container port of the Service) is used for health check. You can also specify another port for health check. After the port is specified, a service port named <strong id="cce_10_0014__b159511449710">cce-healthz</strong> will be added for the Service.</li></ul>
|
||||
</li><li id="cce_10_0014__li388800117144"><a name="cce_10_0014__li388800117144"></a><a name="li388800117144"></a><strong id="cce_10_0014__b89301584315175">Port Settings</strong><ul id="cce_10_0014__ul3499201217144"><li id="cce_10_0014__li4649265917144"><strong id="cce_10_0014__b147114610479">Protocol</strong>: protocol used by the Service.</li><li id="cce_10_0014__li353122153610"><strong id="cce_10_0014__b69812211813">Service Port</strong>: port used by the Service. The port number ranges from 1 to 65535.</li><li id="cce_10_0014__li475042104417"><strong id="cce_10_0014__b7688424818">Container Port</strong>: port on which the workload listens. For example, Nginx uses port 80 by default.</li></ul>
</li><li id="cce_10_0014__li104962251243"><strong id="cce_10_0014__b12052012556">Annotation</strong>: The LoadBalancer Service has some advanced CCE functions, which are implemented by annotations. For details, see <a href="cce_10_0385.html">Service Annotations</a>. When you use kubectl to create a container, annotations will be used. For details, see <a href="#cce_10_0014__section1984211714368">Using kubectl to Create a Service (Using an Existing Load Balancer)</a> and <a href="#cce_10_0014__section12168131904611">Using kubectl to Create a Service (Automatically Creating a Load Balancer)</a>.</li></ul>
</p></li><li id="cce_10_0014__li552017569135"><span>Click <strong id="cce_10_0014__b911916813568">OK</strong>.</span></li></ol>
@ -272,11 +272,11 @@ spec:
kubernetes ClusterIP 10.247.0.1 <none> 443/TCP 3d
<strong id="cce_10_0014__b1214411223310">nginx LoadBalancer 10.247.130.196 10.78.42.242 80:31540/TCP 51s</strong></pre>
</p></li><li id="cce_10_0014__li167017242"><span>Enter the URL in the address box of the browser, for example, <strong id="cce_10_0014__b842352706164951">10.78.42.242:80</strong>. <strong id="cce_10_0014__b84235270616505">10.78.42.242</strong> indicates the IP address of the load balancer, and <strong id="cce_10_0014__b842352706165024">80</strong> indicates the access port displayed on the CCE console.</span><p><p id="cce_10_0014__p167058343415">Nginx is accessible.</p>
<div class="fignone" id="cce_10_0014__fig1498213713356"><span class="figcap"><b>Figure 3 </b>Accessing Nginx through the LoadBalancer Service</span><br><span><img id="cce_10_0014__image4983479359" src="en-us_image_0000001569182677.png"></span></div>
</p></li></ol>
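<p>Alternatively, you can check connectivity from any host that can reach the load balancer, using the example IP address from the previous step:</p>
<pre class="screen">curl http://10.78.42.242:80</pre>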
</div>
<div class="section" id="cce_10_0014__section12168131904611"><a name="cce_10_0014__section12168131904611"></a><a name="section12168131904611"></a><h4 class="sectiontitle">Using kubectl to Create a Service (Automatically Creating a Load Balancer)</h4><p id="cce_10_0014__p1036918271467">You can add a Service when creating a workload using kubectl. This section uses an Nginx workload as an example to describe how to add a LoadBalancer Service using kubectl.</p>
<ol id="cce_10_0014__ol1236962794610"><li id="cce_10_0014__li103401710124914"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0014__li337012724615"><span>Create and edit the <strong id="cce_10_0014__b1606652833">nginx-deployment.yaml</strong> and <strong id="cce_10_0014__b291529992">nginx-elb-svc.yaml</strong> files.</span><p><p id="cce_10_0014__p1137014271463">The file names are user-defined. <strong id="cce_10_0014__b799277615">nginx-deployment.yaml</strong> and <strong id="cce_10_0014__b594514403">nginx-elb-svc.yaml</strong> are merely example file names.</p>
<p id="cce_10_0014__p153702275465"><strong id="cce_10_0014__b13370127184617">vi nginx-deployment.yaml</strong></p>
<pre class="screen" id="cce_10_0014__screen17370112710466">apiVersion: apps/v1
kind: Deployment
@ -381,7 +381,7 @@ spec:
</td>
<td class="cellrowborder" valign="top" width="49.19%" headers="mcps1.3.5.3.2.2.9.2.5.1.4 "><p id="cce_10_0014__p17331169135014">Select a proper load balancer type as required.</p>
<p id="cce_10_0014__p143311298508">The value can be:</p>
<ul id="cce_10_0014__ul3415201212612"><li id="cce_10_0014__en-us_topic_0000001243981073_li735384716395"><strong id="cce_10_0014__b486405467">union</strong>: shared load balancer</li><li id="cce_10_0014__en-us_topic_0000001243981073_li1535310477392"><strong id="cce_10_0014__b167451413695">performance</strong>: dedicated load balancer, which can be used only in clusters of v1.17 and later.</li></ul>
</td>
</tr>
<tr id="cce_10_0014__row790233013543"><td class="cellrowborder" valign="top" width="24.85%" headers="mcps1.3.5.3.2.2.9.2.5.1.1 "><p id="cce_10_0014__p143324917501">kubernetes.io/elb.subnet-id</p>
@ -422,10 +422,10 @@ spec:
</td>
<td class="cellrowborder" valign="top" width="13.639999999999999%" headers="mcps1.3.5.3.2.2.9.2.5.1.3 "><p id="cce_10_0014__p933315995019">String</p>
</td>
<td class="cellrowborder" valign="top" width="49.19%" headers="mcps1.3.5.3.2.2.9.2.5.1.4 "><p id="cce_10_0014__p0333699508">This parameter indicates the load balancing algorithm of the backend server group. The default value is <strong id="cce_10_0014__b1080902346">ROUND_ROBIN</strong>.</p>
<p id="cce_10_0014__p6333198503">Options:</p>
<ul id="cce_10_0014__ul13337919508"><li id="cce_10_0014__li3333893501"><strong id="cce_10_0014__b1497936315">ROUND_ROBIN</strong>: weighted round robin algorithm</li><li id="cce_10_0014__li93331191509"><strong id="cce_10_0014__b1755179996">LEAST_CONNECTIONS</strong>: weighted least connections algorithm</li><li id="cce_10_0014__li1333379105016"><strong id="cce_10_0014__b2003040388">SOURCE_IP</strong>: source IP hash algorithm</li></ul>
<p id="cce_10_0014__p833315910507">When the value is <strong id="cce_10_0014__b346635155">SOURCE_IP</strong>, the weights of backend servers in the server group are invalid.</p>
</td>
</tr>
<tr id="cce_10_0014__row1533329185018"><td class="cellrowborder" valign="top" width="24.85%" headers="mcps1.3.5.3.2.2.9.2.5.1.1 "><p id="cce_10_0014__p1533313905015">kubernetes.io/elb.health-check-flag</p>
@ -435,7 +435,7 @@ spec:
<td class="cellrowborder" valign="top" width="13.639999999999999%" headers="mcps1.3.5.3.2.2.9.2.5.1.3 "><p id="cce_10_0014__p16333993504">String</p>
</td>
<td class="cellrowborder" valign="top" width="49.19%" headers="mcps1.3.5.3.2.2.9.2.5.1.4 "><p id="cce_10_0014__p1833315910509">Whether to enable the ELB health check.</p>
<ul id="cce_10_0014__ul19333199205012"><li id="cce_10_0014__li8333109155020">Enabling health check: Leave this parameter blank or set it to <strong id="cce_10_0014__b2035367413">on</strong>.</li><li id="cce_10_0014__li103330914504">Disabling health check: Set this parameter to <strong id="cce_10_0014__b522599623">off</strong>.</li></ul>
<p id="cce_10_0014__p510323641317">If this parameter is enabled, the <a href="#cce_10_0014__table236017471397">kubernetes.io/elb.health-check-option</a> field must also be specified at the same time.</p>
</td>
</tr>
@ -455,7 +455,7 @@ spec:
<td class="cellrowborder" valign="top" width="13.639999999999999%" headers="mcps1.3.5.3.2.2.9.2.5.1.3 "><p id="cce_10_0014__p43315965016">String</p>
</td>
<td class="cellrowborder" valign="top" width="49.19%" headers="mcps1.3.5.3.2.2.9.2.5.1.4 "><p id="cce_10_0014__p533113915503">Listeners ensure session stickiness based on IP addresses. Requests from the same IP address will be forwarded to the same backend server.</p>
<ul id="cce_10_0014__ul113311191508"><li id="cce_10_0014__li11331189195017">Disabling sticky session: Do not set this parameter.</li><li id="cce_10_0014__li133313914502">Enabling sticky session: Set this parameter to <strong id="cce_10_0014__b1430522096">SOURCE_IP</strong>, indicating that the sticky session is based on the source IP address.</li></ul>
</td>
</tr>
<tr id="cce_10_0014__row1421317512156"><td class="cellrowborder" valign="top" width="24.85%" headers="mcps1.3.5.3.2.2.9.2.5.1.1 "><p id="cce_10_0014__p12624252121514">kubernetes.io/elb.session-affinity-option</p>
@ -633,8 +633,8 @@ spec:
<pre class="screen" id="cce_10_0014__screen94033273464">NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.247.0.1 <none> 443/TCP 3d
<strong id="cce_10_0014__b15405527194615">nginx LoadBalancer 10.247.130.196 10.78.42.242 80:31540/TCP 51s</strong></pre>
</p></li><li id="cce_10_0014__li1940672734614"><span>Enter the URL in the address box of the browser, for example, <strong id="cce_10_0014__b1816549858">10.78.42.242:80</strong>. <strong id="cce_10_0014__b1608201852">10.78.42.242</strong> indicates the IP address of the load balancer, and <strong id="cce_10_0014__b152673428">80</strong> indicates the access port displayed on the CCE console.</span><p><p id="cce_10_0014__p184066272466">The Nginx is accessible.</p>
<div class="fignone" id="cce_10_0014__fig2406102717469"><span class="figcap"><b>Figure 4 </b>Accessing Nginx through the LoadBalancer Service</span><br><span><img id="cce_10_0014__image13406827194620" src="en-us_image_0000001517743552.png"></span></div>
</p></li></ol>
</div>
<div class="section" id="cce_10_0014__section18120261746"><h4 class="sectiontitle">ELB Forwarding</h4><p id="cce_10_0014__p394033612383">After a Service of the LoadBalancer type is created, you can view the listener forwarding rules of the load balancer on the ELB console.</p>
@ -651,11 +651,11 @@ kubernetes ClusterIP 10.247.0.1 <none> 443/TCP
</td>
<td class="cellrowborder" valign="top" width="10.768923107689233%"><p id="cce_10_0014__p895021493310">Client</p>
</td>
<td class="cellrowborder" valign="top" width="15.608439156084392%"><p id="cce_10_0014__p7950111483311">Container Tunnel Network Cluster (IPVS)</p>
</td>
<td class="cellrowborder" valign="top" width="17.588241175882413%"><p id="cce_10_0014__p995011423320">VPC Network Cluster (IPVS)</p>
</td>
<td class="cellrowborder" valign="top" width="18.21817818218178%"><p id="cce_10_0014__p18950201416330">Container Tunnel Network Cluster (iptables)</p>
</td>
<td class="cellrowborder" valign="top" width="19.52804719528047%"><p id="cce_10_0014__p1595151433311">VPC Network Cluster (iptables)</p>
</td>
@ -4,7 +4,7 @@
<div id="body1522667123001"><p id="cce_10_0018__p78381781804">CCE works with AOM to collect workload logs. When creating a node, CCE installs the ICAgent for you (the DaemonSet named <strong id="cce_10_0018__b3710330164314">icagent</strong> in the kube-system namespace of the cluster). After the ICAgent collects workload logs and reports them to AOM, you can view workload logs on the CCE or AOM console.</p>
<div class="section" id="cce_10_0018__section17884754413"><h4 class="sectiontitle">Notes and Constraints</h4><p id="cce_10_0018__p23831558355">The ICAgent only collects <strong id="cce_10_0018__b39280572146">*.log</strong>, <strong id="cce_10_0018__b1793513574146">*.trace</strong>, and <strong id="cce_10_0018__b29351157191412">*.out</strong> text log files.</p>
</div>
<div class="section" id="cce_10_0018__section1951732710"><h4 class="sectiontitle">Using ICAgent to Collect Logs</h4><ol id="cce_10_0018__ol1253654833013"><li id="cce_10_0018__li19284854163014"><span>When <a href="cce_10_0047.html">creating a workload</a>, set logging for the container.</span></li><li id="cce_10_0018__li2427158104715"><span>Click <span><img id="cce_10_0018__image134281583473" src="en-us_image_0000001569182673.png"></span> to add a log policy.</span><p><div class="p" id="cce_10_0018__p9862125810472">The following uses Nginx as an example. Log policies vary depending on workloads.<div class="fignone" id="cce_10_0018__fig19856172153216"><span class="figcap"><b>Figure 1 </b>Adding a log policy</span><br><span><img id="cce_10_0018__image168953502558" src="en-us_image_0000001569022957.png"></span></div>
</div>
</p></li><li id="cce_10_0018__li1479392315150"><span>Set <strong id="cce_10_0018__b5461630195419">Storage Type</strong> to <span class="uicontrol" id="cce_10_0018__uicontrol105212302547"><b>Host Path</b></span> or <span class="uicontrol" id="cce_10_0018__uicontrol1752103095410"><b>Container Path</b></span>.</span><p>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0018__table115901715550" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Configuring log policies</caption><thead align="left"><tr id="cce_10_0018__row45851074554"><th align="left" class="cellrowborder" valign="top" width="22.12%" id="mcps1.3.3.2.3.2.1.2.3.1.1"><p id="cce_10_0018__p115843785517">Parameter</p>
@ -135,7 +135,7 @@ spec:
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0018__table1332817095114" frame="border" border="1" rules="all"><caption><b>Table 2 </b>Parameter description</caption><thead align="left"><tr id="cce_10_0018__row63291603518"><th align="left" class="cellrowborder" valign="top" width="17.06%" id="mcps1.3.4.7.2.4.1.1"><p id="cce_10_0018__p53291009514">Parameter</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="19.23%" id="mcps1.3.4.7.2.4.1.2"><p id="cce_10_0018__p3329208519">Explanation</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="63.71%" id="mcps1.3.4.7.2.4.1.3"><p id="cce_10_0018__p93291706517">Description</p>
</th>
@ -146,8 +146,8 @@ spec:
<td class="cellrowborder" valign="top" width="19.23%" headers="mcps1.3.4.7.2.4.1.2 "><p id="cce_10_0018__p6329709512">Extended host path</p>
</td>
<td class="cellrowborder" valign="top" width="63.71%" headers="mcps1.3.4.7.2.4.1.3 "><p id="cce_10_0018__p32881805119">Extended host paths contain pod IDs or container names to distinguish different containers into which the host path is mounted.</p>
<p id="cce_10_0018__p1728888115112">A level-3 directory is added to the original volume directory/subdirectory. You can easily obtain the files output by a single <span class="keyword" id="cce_10_0018__keyword1766445251">Pod</span>.</p>
<ul id="cce_10_0018__ul2028828105113"><li id="cce_10_0018__li428815865110"><strong id="cce_10_0018__b466439911">None</strong>: No extended path is configured. </li><li id="cce_10_0018__li62889814517"><strong id="cce_10_0018__b746148577">PodUID</strong>: ID of a pod.</li><li id="cce_10_0018__li528818135113"><strong id="cce_10_0018__b678656736">PodName</strong>: name of a pod.</li><li id="cce_10_0018__li62882084517"><strong id="cce_10_0018__b1079307725">PodUID/ContainerName</strong>: ID of a pod or name of a container.</li><li id="cce_10_0018__li528898175110"><strong id="cce_10_0018__b8818125942116">PodName/ContainerName</strong>: name of a pod or container.</li></ul>
</td>
</tr>
<tr id="cce_10_0018__row732915085118"><td class="cellrowborder" valign="top" width="17.06%" headers="mcps1.3.4.7.2.4.1.1 "><p id="cce_10_0018__p17329004514">policy.logs.rotate</p>
@ -155,7 +155,7 @@ spec:
<td class="cellrowborder" valign="top" width="19.23%" headers="mcps1.3.4.7.2.4.1.2 "><p id="cce_10_0018__p123292055113">Log dump</p>
</td>
<td class="cellrowborder" valign="top" width="63.71%" headers="mcps1.3.4.7.2.4.1.3 "><p id="cce_10_0018__p1017113396539">Log dump refers to rotating log files on a local host.</p>
<ul id="cce_10_0018__ul1617120398533"><li id="cce_10_0018__li71711639105316"><strong id="cce_10_0018__b228801547">Enabled</strong>: AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped immediately. A new <strong id="cce_10_0018__b618877522">.zip</strong> file is generated in the directory where the log file is located. For a log file, AOM stores only the latest 20 <strong id="cce_10_0018__b67462932">.zip</strong> files. When the number of <strong id="cce_10_0018__b478147095">.zip</strong> files exceeds 20, earlier <strong id="cce_10_0018__b1992183573">.zip</strong> files will be deleted. After the dump is complete, the log file in AOM will be cleared.</li><li id="cce_10_0018__li817133985315"><strong id="cce_10_0018__b1231713624">Disabled</strong>: AOM does not dump log files.</li></ul>
<div class="note" id="cce_10_0018__note121711639195319"><span class="notetitle"> NOTE: </span><div class="notebody"><ul id="cce_10_0018__ul817183918533"><li id="cce_10_0018__li9171183945310">AOM rotates log files using copytruncate. Before enabling log dumping, ensure that log files are written in the append mode. Otherwise, file holes may occur.</li><li id="cce_10_0018__li1117153914535">Currently, mainstream log components such as Log4j and Logback support log file rotation. If you have set rotation for log files, skip the configuration. Otherwise, conflicts may occur.</li><li id="cce_10_0018__li317113915532">You are advised to configure log file rotation for your own services to flexibly control the size and number of rolled files.</li></ul>
</div></div>
</td>
@ -3,13 +3,13 @@
<h1 class="topictitle1">Querying CTS Logs</h1>
<div id="body1525226397666"><div class="section" id="cce_10_0026__section19908104613460"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0026__p1349415403233">After you enable CTS, the system starts recording operations on CCE resources. Operation records of the last 7 days can be viewed on the CTS management console.</p>
</div>
<div class="section" id="cce_10_0026__section208814582456"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0026__ol968681862911"><li id="cce_10_0026__li18356228445"><span>Log in to the management console.</span></li><li id="cce_10_0026__li14905725134512"><span>Click <span><img id="cce_10_0026__image1180502423211" src="en-us_image_0000001569182497.gif"></span> in the upper left corner and select a region.</span></li><li id="cce_10_0026__li56856187296"><span>Choose <strong id="cce_10_0026__b161841334316020">Service List</strong> from the main menu. Choose <strong id="cce_10_0026__b14174101155814">Management & Deployment</strong> > <strong id="cce_10_0026__b1917414113585">Cloud Trace Service</strong>.</span></li><li id="cce_10_0026__li6685018122920"><span>In the navigation pane of the CTS console, choose <strong id="cce_10_0026__b091641316584">Cloud Trace Service</strong> > <strong id="cce_10_0026__b6917813165811">Trace List</strong>.</span></li><li id="cce_10_0026__li0686618152911"><span>On the <strong id="cce_10_0026__b156310494616044">Trace List</strong> page, query operation records based on the search criteria. Currently, the trace list supports trace query based on the combination of the following search criteria:</span><p><ul id="cce_10_0026__ul2686318142919"><li id="cce_10_0026__li9685018132914"><strong id="cce_10_0026__b147767585916113">Trace Source</strong>, <strong id="cce_10_0026__b33843206916113">Resource Type</strong>, and <strong id="cce_10_0026__b104136949616113">Search By</strong><p id="cce_10_0026__p068517181297">Select the search criteria from the drop-down lists. Select <strong id="cce_10_0026__b987393825817">CCE</strong> from the <strong id="cce_10_0026__b1287312387583">Trace Source</strong> drop-down list.</p>
<p id="cce_10_0026__p26851618102915">If you select <strong id="cce_10_0026__b23175131216221">Trace name</strong> from the <strong id="cce_10_0026__b172899127516221">Search By</strong> drop-down list, specify the trace name.</p>
<p id="cce_10_0026__p7685191818293">If you select <strong id="cce_10_0026__b33083335616231">Resource ID</strong> from the <strong id="cce_10_0026__b153919820216231">Search By</strong> drop-down list, select or enter a specific resource ID.</p>
<p id="cce_10_0026__p166851718102917">If you select <strong id="cce_10_0026__b50135831116238">Resource name</strong> from the <strong id="cce_10_0026__b186507588316238">Search By</strong> drop-down list, select or enter a specific resource name.</p>
</li><li id="cce_10_0026__li1968671815297"><strong id="cce_10_0026__b168444573616245">Operator</strong>: Select a specific operator (at user level rather than account level).</li><li id="cce_10_0026__li368641832910"><strong id="cce_10_0026__b113712261116258">Trace Status</strong>: Set this parameter to any of the following values: <strong id="cce_10_0026__b135890568716258">All trace statuses</strong>, <strong id="cce_10_0026__b192911413716258">normal</strong>, <strong id="cce_10_0026__b59570413316258">warning</strong>, and <strong id="cce_10_0026__b169117565716258">incident</strong>.</li><li id="cce_10_0026__li12686118112916">Time range: You can query traces generated during any time range in the last seven days.</li></ul>
</p></li><li id="cce_10_0026__li01301836122914"><span>Click <span><img id="cce_10_0026__image07291172331" src="en-us_image_0000001569182505.png"></span> on the left of a trace to expand its details, as shown below.</span><p><div class="fignone" id="cce_10_0026__fig1324117817394"><span class="figcap"><b>Figure 1 </b>Expanding trace details</span><br><span><img id="cce_10_0026__image19242788396" src="en-us_image_0000001569022781.png"></span></div>
</p></li><li id="cce_10_0026__li186863182294"><span>Click <strong id="cce_10_0026__b25871212163720">View Trace</strong> in the <strong id="cce_10_0026__b1597141217374">Operation</strong> column. The trace details are displayed.</span><p><div class="fignone" id="cce_10_0026__fig365411360512"><span class="figcap"><b>Figure 2 </b>Viewing event details</span><br><span><img id="cce_10_0026__image21436386418" src="en-us_image_0000001517743372.png"></span></div>
</p></li></ol>
</div>
</div>
@ -3,7 +3,7 @@
<h1 class="topictitle1">Creating a CCE Cluster</h1>
<div id="body1505899032898"><p id="cce_10_0028__p126541913151116">On the CCE console, you can easily create Kubernetes clusters. Kubernetes can manage container clusters at scale. A cluster manages a group of node resources.</p>
<p id="cce_10_0028__p162026117205">In CCE, you can create a CCE cluster to manage VMs. By using high-performance network models, hybrid clusters provide a multi-scenario, secure, and stable runtime environment for containers.</p>
<div class="section" id="cce_10_0028__section1386743114294"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_10_0028__ul686414167496"><li id="cce_10_0028__li190817135320">During node creation, software packages are downloaded from OBS using the domain name. You need to use a private DNS server to resolve the OBS domain name, and set the DNS server address of the subnet where the node resides to a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name.</li><li id="cce_10_0028__li124606217339">You can create a maximum of 50 clusters in a single region.</li><li id="cce_10_0028__li1186441616491">After a cluster is created, the following items cannot be changed:<ul id="cce_10_0028__ul1386431634910"><li id="cce_10_0028__li6864131614492">Cluster type</li><li id="cce_10_0028__li359558115311">Number of master nodes in the cluster</li><li id="cce_10_0028__li452948112016">AZ of a master node</li><li id="cce_10_0028__li1686412165496">Network configuration of the cluster, such as the VPC, subnet, container CIDR block, Service CIDR block, and kube-proxy (forwarding) settings</li><li id="cce_10_0028__li1686451618494">Network model. For example, change <strong id="cce_10_0028__b16979154810810">Tunnel network</strong> to <strong id="cce_10_0028__b1297916485820">VPC network</strong>.</li></ul>
</li></ul>
</div>
<div class="section" id="cce_10_0028__section176228482126"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0028__ol1233331493511"><li id="cce_10_0028__li833491416359"><span>Log in to the CCE console. Choose <strong id="cce_10_0028__b1563535515135">Clusters</strong>. On the displayed page, click <strong id="cce_10_0028__b1861116237141">Create</strong> next to <strong id="cce_10_0028__b1563618552135">CCE cluster</strong>.</span></li><li id="cce_10_0028__li1569162220359"><span>Set cluster parameters.</span><p><div class="p" id="cce_10_0028__p5653205823718"><strong id="cce_10_0028__b14641318112618">Basic Settings</strong><ul id="cce_10_0028__ul5395195853710"><li id="cce_10_0028__li1739455810379"><strong id="cce_10_0028__b15847145841720">Cluster Name</strong></li><li id="cce_10_0028__li163957587379"><strong id="cce_10_0028__b89145218188">Cluster Version</strong>: Select the Kubernetes version used by the cluster.</li><li id="cce_10_0028__li5395358163711"><strong id="cce_10_0028__b01681447141713">Cluster Scale</strong>: maximum number of nodes that can be managed by the cluster. </li><li id="cce_10_0028__li467617271013"><strong id="cce_10_0028__b1538713714413">HA</strong>: distribution mode of master nodes. By default, master nodes are randomly distributed in different AZs to improve DR capabilities.<div class="p" id="cce_10_0028__p15811036101">You can also expand advanced settings and customize the master node distribution mode. The following two modes are supported:<ul id="cce_10_0028__ul729432918812"><li id="cce_10_0028__li1529418293815"><strong id="cce_10_0028__b939210361624">Random</strong>: Master nodes are created in different AZs for DR.</li><li id="cce_10_0028__li103958393117"><strong id="cce_10_0028__b5810610331">Custom</strong>: You can determine the location of each master node.<ul id="cce_10_0028__ul1220719413117"><li id="cce_10_0028__li62941529381"><strong id="cce_10_0028__b292085817517">Host</strong>: Master nodes are created on different hosts in the same AZ.</li><li id="cce_10_0028__li32946293815"><strong id="cce_10_0028__b01923920215">Custom</strong>: You can determine the location of each master node.</li></ul>
@ -20,12 +20,14 @@
</div></div>
</li></ul>
</li><li id="cce_10_0028__li8833185203815"><strong id="cce_10_0028__b891711174919">Description</strong>: The value can contain a maximum of 200 English characters.</li></ul>
</p></li><li id="cce_10_0028__li9641724418"><span>Click <strong id="cce_10_0028__b194907314482">Next: Add-on Configuration</strong>.</span><p><p id="cce_10_0028__en-us_topic_0000001243981077_p157905523575"><strong id="cce_10_0028__b595244015487">Domain Name Resolution</strong>: Uses the <a href="cce_10_0129.html">coredns</a> add-on, installed by default, to resolve domain names and connect to the cloud DNS server.</p>
<p id="cce_10_0028__en-us_topic_0000001243981077_p292215338261"><strong id="cce_10_0028__b3546177134911">Container Storage</strong>: Uses the <a href="cce_10_0066.html">everest</a> add-on, installed by default, to provide container storage based on CSI and connect to cloud storage services such as EVS.</p>
<div class="p" id="cce_10_0028__en-us_topic_0000001243981077_p1042341817336"><strong id="cce_10_0028__b078412875610">Service logs</strong><ul id="cce_10_0028__en-us_topic_0000001243981077_ul1532032363417"><li id="cce_10_0028__en-us_topic_0000001243981077_li078322903611">Using ICAgent:<p id="cce_10_0028__en-us_topic_0000001243981077_p5238153093619"><a name="cce_10_0028__en-us_topic_0000001243981077_li078322903611"></a><a name="en-us_topic_0000001243981077_li078322903611"></a>A log collector provided by Application Operations Management (AOM), reporting logs to AOM and Log Tank Service (LTS) according to the log collection rules you configured.</p>
<p id="cce_10_0028__en-us_topic_0000001243981077_p161195033716">You can collect stdout logs as required.</p>
</li></ul>
</div>
<p id="cce_10_0028__en-us_topic_0000001243981077_p357714145121"><strong id="cce_10_0028__b167302337554">Overload Control</strong>: If overload control is enabled, concurrent requests are dynamically controlled based on the resource pressure of master nodes to keep them and the cluster available.</p>
</p></li><li id="cce_10_0028__li72711456163617"><span>After setting the parameters, click <span class="uicontrol" id="cce_10_0028__uicontrol677013344165"><b>Next: Confirm</b></span>. After confirming that the cluster configuration information is correct, select <strong id="cce_10_0028__b10770193415164">I have read and understand the preceding instructions</strong> and click <strong id="cce_10_0028__b4771183411610">Submit</strong>.</span><p><p id="cce_10_0028__p1020211168316">It takes about 6 to 10 minutes to create a cluster. You can click <strong id="cce_10_0028__b1712383711547">Back to Cluster List</strong> to perform other operations on the cluster or click <strong id="cce_10_0028__b3123193725416">Go to Cluster Events</strong> to view the cluster details.</p>
</p></li></ol>
</div>
<div class="section" id="cce_10_0028__section125261255139"><h4 class="sectiontitle">Related Operations</h4><ul id="cce_10_0028__ul912451119262"><li id="cce_10_0028__li1030825181117">After creating a cluster, you can use the Kubernetes command line (CLI) tool kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</li><li id="cce_10_0028__li312413114263">Add nodes to the cluster. For details, see <a href="cce_10_0363.html">Creating a Node</a>.</li></ul>
@ -4,7 +4,7 @@
<div id="body1506157580881"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0213.html">Cluster Configuration Management</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0212.html">Deleting a Cluster</a></strong><br>
</li>
22
docs/cce/umn/cce_10_00356.html
Normal file
@ -0,0 +1,22 @@
<a name="cce_10_00356"></a><a name="cce_10_00356"></a>
<h1 class="topictitle1">Accessing a Container</h1>
<div id="body0000001151211236"><div class="section" id="cce_10_00356__section7379040716"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_00356__p1134114511811">If you encounter unexpected problems when using a container, you can log in to the container for debugging.</p>
</div>
<div class="section" id="cce_10_00356__section1293318163114"><h4 class="sectiontitle">Logging In to a Container Using kubectl</h4><ol id="cce_10_00356__ol1392823394416"><li id="cce_10_00356__li1681024195710"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_00356__li1020013819415"><span id="cce_10_00356__p49510201338">Run the following command to view the created pod:</span><p><pre class="screen" id="cce_10_00356__screen156898195914">kubectl get pod</pre>
<div class="p" id="cce_10_00356__p18257204595920">The example output is as follows:<pre class="screen" id="cce_10_00356__screen7944553592">NAME READY STATUS RESTARTS AGE
nginx-59d89cb66f-mhljr 1/1 Running 0 11m</pre>
</div>
</p></li><li id="cce_10_00356__li356233617436"><span>Query the name of the container in the pod.</span><p><pre class="screen" id="cce_10_00356__screen5352174217439">kubectl get po <i><span class="varname" id="cce_10_00356__varname373018473433">nginx-59d89cb66f-mhljr</span></i> -o jsonpath='{range .spec.containers[*]}{.name}{end}{"\n"}'</pre>
<div class="p" id="cce_10_00356__p3651112824414">The example output is as follows:<pre class="screen" id="cce_10_00356__screen1965142811442">container-1</pre>
</div>
</p></li><li id="cce_10_00356__li15567184714456"><span>Run the following command to log in to the container named <strong id="cce_10_00356__b1875816432427">container-1</strong> in <strong id="cce_10_00356__b46855020427">nginx-59d89cb66f-mhljrPod</strong>:</span><p><pre class="screen" id="cce_10_00356__screen208681724173519">kubectl exec -it <i><span class="varname" id="cce_10_00356__varname42937231455">nginx-59d89cb66f-mhljr</span></i> -c <i><span class="varname" id="cce_10_00356__varname115981226164513">container-1</span></i> -- /bin/sh</pre>
</p></li><li id="cce_10_00356__li1582141517375"><span>To exit the container, run the <strong id="cce_10_00356__b15873927134616">exit</strong> command.</span></li></ol>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0046.html">Workloads</a></div>
</div>
</div>
@ -5,7 +5,7 @@
</div>
<div class="section" id="cce_10_0036__section1489437103610"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_10_0036__ul0917755162415"><li id="cce_10_0036__li1891719552246">Deleting a node will lead to pod migration, which may affect services. Therefore, delete nodes during off-peak hours.</li><li id="cce_10_0036__li791875552416">Unexpected risks may occur during node deletion. Back up related data in advance.</li><li id="cce_10_0036__li15918105582417">While the node is being deleted, the backend will set the node to the unschedulable state.</li><li id="cce_10_0036__li12918145520241">Only worker nodes can be stopped.</li></ul>
</div>
<div class="section" id="cce_10_0036__section14341135612442"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0036__ol5687174923613"><li id="cce_10_0036__li133915311359"><span>Log in to the CCE console and click the cluster name to access the cluster.</span></li><li id="cce_10_0036__li6687049203616"><span>In the navigation pane, choose <strong id="cce_10_0036__b06131727172613">Nodes</strong>. In the right pane, click the name of the node to be stopped.</span></li><li id="cce_10_0036__li117301253183717"><span>In the upper right corner of the ECS details page, click <strong id="cce_10_0036__b1247467161417">Stop</strong> in the instance status area. In the displayed dialog box, click <strong id="cce_10_0036__b12474177131414">Yes</strong>.</span><p><div class="fignone" id="cce_10_0036__fig19269101385311"><span class="figcap"><b>Figure 1 </b>ECS details page</span><br><span><img id="cce_10_0036__image6847636155" src="en-us_image_0000001518062704.png"></span></div>
</p></li></ol>
</div>
</div>
@ -1,6 +1,6 @@
<a name="cce_10_0045"></a><a name="cce_10_0045"></a>
<h1 class="topictitle1">ConfigMaps and Secrets</h1>
<div id="body1507606688948"></div>
<div>
<ul class="ullinks">
@ -24,6 +24,8 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0551.html">CPU Core Binding</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_00356.html">Accessing a Container</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0386.html">Pod Labels and Annotations</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0423.html">Volcano Scheduling</a></strong><br>
@ -7,7 +7,7 @@
</div></div>
</li></ul>
</div>
<div class="section" id="cce_10_0047__section1996635141916"><h4 class="sectiontitle">Using the CCE Console</h4><ol id="cce_10_0047__ol2012902601117"><li id="cce_10_0047__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0047__li2075471341"><span>Click the cluster name to access the cluster details page, choose <strong id="cce_10_0047__b177043716583">Workloads</strong> in the navigation pane, and click <strong id="cce_10_0047__b3710187125815">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0047__li67891737151520"><span>Set basic information about the workload. </span><p><div class="p" id="cce_10_0047__p1259466151612"><strong id="cce_10_0047__b1493704971917">Basic Info</strong><ul id="cce_10_0047__ul6954101318184"><li id="cce_10_0047__li11514131617185"><strong id="cce_10_0047__b17688966208">Workload Type</strong>: Select <strong id="cce_10_0047__b19319191110206">Deployment</strong>. For details about workload types, see <a href="cce_10_0006.html">Overview</a>.</li><li id="cce_10_0047__li129541213101814"><strong id="cce_10_0047__b112962517203">Workload Name</strong>: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed.</li><li id="cce_10_0047__li179541813111814"><strong id="cce_10_0047__b15588183732013">Namespace</strong>: Select the namespace of the workload. The default value is <strong id="cce_10_0047__b2393134262015">default</strong>. You can also click <span class="uicontrol" id="cce_10_0047__uicontrol342862818214"><b>Create Namespace</b></span> to create one. For details, see <a href="cce_10_0278.html">Creating a Namespace</a>.</li><li id="cce_10_0047__li18955181315189"><strong id="cce_10_0047__b1997313316218">Pods</strong>: Enter the number of pods.</li><li id="cce_10_0047__li11753142112539"><strong id="cce_10_0047__b1236151911112">Container Runtime</strong>: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences between runC and Kata, see <a href="cce_10_0463.html">Kata Containers and Common Containers</a>.</li><li id="cce_10_0047__li1295571341818"><strong id="cce_10_0047__b197683014275">Time Zone Synchronization</strong>: Specify whether to enable time zone synchronization. After time zone synchronization is enabled, the container and node use the same time zone. The time zone synchronization function depends on the local disk mounted to the container. Do not modify or delete the time zone. For details, see <a href="cce_10_0354.html">Configuring Time Zone Synchronization</a>.</li></ul>
</div>
<div class="p" id="cce_10_0047__p206571518181616"><strong id="cce_10_0047__b062716554277">Container Settings</strong><ul id="cce_10_0047__ul42071022103320"><li id="cce_10_0047__li8770480458">Container Information<div class="p" id="cce_10_0047__p10493941854"><a name="cce_10_0047__li8770480458"></a><a name="li8770480458"></a>Multiple containers can be configured in a pod. You can click <span class="uicontrol" id="cce_10_0047__uicontrol2024214181967"><b>Add Container</b></span> on the right to configure multiple containers for the pod.<ul id="cce_10_0047__ul10714183717111"><li id="cce_10_0047__li1471463741113"><strong id="cce_10_0047__b996714571008">Basic Info</strong>: See <a href="cce_10_0396.html">Setting Basic Container Information</a>.</li><li id="cce_10_0047__li127141737191112"><strong id="cce_10_0047__b854817461802">Lifecycle</strong>: See <a href="cce_10_0105.html">Setting Container Lifecycle Parameters</a>.</li><li id="cce_10_0047__li9714123711114"><strong id="cce_10_0047__b1428584819018">Health Check</strong>: See <a href="cce_10_0112.html">Setting Health Check for a Container</a>.</li><li id="cce_10_0047__li5714123721119"><strong id="cce_10_0047__b547425013014">Environment Variables</strong>: See <a href="cce_10_0113.html">Setting an Environment Variable</a>.</li><li id="cce_10_0047__li571418378113"><strong id="cce_10_0047__b187511352406">Data Storage</strong>: See <a href="cce_10_0307.html">Overview</a>.<div class="note" id="cce_10_0047__note101269342356"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0047__p17126153413513">If the workload contains more than one pod, EVS volumes cannot be mounted.</p>
</div></div>
@ -10,7 +10,7 @@
</div></div>
</li></ul>
</div>
|
||||
<div class="section" id="cce_10_0048__section16385130102112"><h4 class="sectiontitle">Using the CCE Console</h4><ol id="cce_10_0048__ol2012902601117"><li id="cce_10_0048__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0048__li2075471341"><span>Click the cluster name to access the cluster details page, choose <strong id="cce_10_0048__b12863171618585">Workloads</strong> in the navigation pane, and click the <strong id="cce_10_0048__b7869116135814">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0048__li67891737151520"><span>Set basic information about the workload. </span><p><div class="p" id="cce_10_0048__p1259466151612"><strong id="cce_10_0048__b64930521915">Basic Info</strong><ul id="cce_10_0048__ul6954101318184"><li id="cce_10_0048__li11514131617185"><strong id="cce_10_0048__b19311135410116">Workload Type</strong>: Select <strong id="cce_10_0048__b0311195410110">StatefulSet</strong>. For details about workload types, see <a href="cce_10_0006.html">Overview</a>.</li><li id="cce_10_0048__li129541213101814"><strong id="cce_10_0048__b16758113322">Workload Name</strong>: Enter the name of the workload.</li><li id="cce_10_0048__li179541813111814"><strong id="cce_10_0048__b122601351320">Namespace</strong>: Select the namespace of the workload. The default value is <strong id="cce_10_0048__b0261451528">default</strong>. You can also click <span class="uicontrol" id="cce_10_0048__uicontrol1319579216"><b>Create Namespace</b></span> to create one. For details, see <a href="cce_10_0278.html">Creating a Namespace</a>.</li><li id="cce_10_0048__li18955181315189"><strong id="cce_10_0048__b124441488216">Pods</strong>: Enter the number of pods.</li><li id="cce_10_0048__li11753142112539"><strong id="cce_10_0048__b171946101820">Container Runtime</strong>: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences between runC and Kata, see <a href="cce_10_0463.html">Kata Containers and Common Containers</a>.</li><li id="cce_10_0048__li1295571341818"><strong id="cce_10_0048__b9718913120">Time Zone Synchronization</strong>: Specify whether to enable time zone synchronization. After time zone synchronization is enabled, the container and node use the same time zone. The time zone synchronization function depends on the local disk mounted to the container. Do not modify or delete the time zone. For details, see <a href="cce_10_0354.html">Configuring Time Zone Synchronization</a>.</li></ul>
<div class="section" id="cce_10_0048__section16385130102112"><h4 class="sectiontitle">Using the CCE Console</h4><ol id="cce_10_0048__ol2012902601117"><li id="cce_10_0048__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0048__li2075471341"><span>Click the cluster name to access the cluster details page, choose <strong id="cce_10_0048__b12863171618585">Workloads</strong> in the navigation pane, and click the <strong id="cce_10_0048__b7869116135814">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0048__li67891737151520"><span>Set basic information about the workload. </span><p><div class="p" id="cce_10_0048__p1259466151612"><strong id="cce_10_0048__b64930521915">Basic Info</strong><ul id="cce_10_0048__ul6954101318184"><li id="cce_10_0048__li11514131617185"><strong id="cce_10_0048__b19311135410116">Workload Type</strong>: Select <strong id="cce_10_0048__b0311195410110">StatefulSet</strong>. For details about workload types, see <a href="cce_10_0006.html">Overview</a>.</li><li id="cce_10_0048__li129541213101814"><strong id="cce_10_0048__b16758113322">Workload Name</strong>: Enter the name of the workload. Enter 1 to 52 characters starting with a lowercase letter and ending with a letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed.</li><li id="cce_10_0048__li179541813111814"><strong id="cce_10_0048__b122601351320">Namespace</strong>: Select the namespace of the workload. The default value is <strong id="cce_10_0048__b0261451528">default</strong>. You can also click <span class="uicontrol" id="cce_10_0048__uicontrol1319579216"><b>Create Namespace</b></span> to create one. For details, see <a href="cce_10_0278.html">Creating a Namespace</a>.</li><li id="cce_10_0048__li18955181315189"><strong id="cce_10_0048__b124441488216">Pods</strong>: Enter the number of pods.</li><li id="cce_10_0048__li11753142112539"><strong id="cce_10_0048__b171946101820">Container Runtime</strong>: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences between runC and Kata, see <a href="cce_10_0463.html">Kata Containers and Common Containers</a>.</li><li id="cce_10_0048__li1295571341818"><strong id="cce_10_0048__b9718913120">Time Zone Synchronization</strong>: Specify whether to enable time zone synchronization. After time zone synchronization is enabled, the container and node use the same time zone. The time zone synchronization function depends on the local disk mounted to the container. Do not modify or delete the time zone. For details, see <a href="cce_10_0354.html">Configuring Time Zone Synchronization</a>.</li></ul>
</div>
<div class="p" id="cce_10_0048__p206571518181616"><strong id="cce_10_0048__b163231218124">Container Settings</strong><ul id="cce_10_0048__ul42071022103320"><li id="cce_10_0048__li8770480458">Container Information<div class="p" id="cce_10_0048__p10493941854"><a name="cce_10_0048__li8770480458"></a><a name="li8770480458"></a>Multiple containers can be configured in a pod. You can click <span class="uicontrol" id="cce_10_0048__uicontrol75255211621"><b>Add Container</b></span> on the right to configure multiple containers for the pod.<ul id="cce_10_0048__ul481018470119"><li id="cce_10_0048__li18101047191117"><strong id="cce_10_0048__b2751850468">Basic Info</strong>: See <a href="cce_10_0396.html">Setting Basic Container Information</a>.</li><li id="cce_10_0048__li4810204715113"><strong id="cce_10_0048__b179585713463">Lifecycle</strong>: See <a href="cce_10_0105.html">Setting Container Lifecycle Parameters</a>.</li><li id="cce_10_0048__li4810134791115"><strong id="cce_10_0048__b528315118462">Health Check</strong>: See <a href="cce_10_0112.html">Setting Health Check for a Container</a>.</li><li id="cce_10_0048__li1810447181110"><strong id="cce_10_0048__b14754135814516">Environment Variables</strong>: See <a href="cce_10_0113.html">Setting an Environment Variable</a>.</li><li id="cce_10_0048__li4810124731117"><strong id="cce_10_0048__b125710561453">Data Storage</strong>: See <a href="cce_10_0307.html">Overview</a>.<div class="note" id="cce_10_0048__note101269342356"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0048__ul26865762616"><li id="cce_10_0048__li4956180135815">StatefulSets support dynamically provisioned EVS volumes.<p id="cce_10_0048__p270761115810"><a name="cce_10_0048__li4956180135815"></a><a name="li4956180135815"></a>Dynamic mounting is achieved by using the <strong id="cce_10_0048__b35631133121417"><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#volume-claim-templates" target="_blank" rel="noopener noreferrer">volumeClaimTemplates</a></strong> field and depends on the dynamic creation capability of StorageClass. A StatefulSet associates each pod with a unique PVC using the <strong id="cce_10_0048__b1490093413557">volumeClaimTemplates</strong> field, and the PVCs are bound to their corresponding PVs. Therefore, after the pod is rescheduled, the original data can still be mounted thanks to the PVC.</p>
</li><li id="cce_10_0048__li126861777269">After a workload is created, the storage that is dynamically mounted cannot be updated.</li></ul>
@ -26,7 +26,7 @@
<p id="cce_10_0048__p13343123113612">You can also create a Service after creating a workload. For details about the Service, see <a href="cce_10_0249.html">Service Overview</a>.</p>
<div class="p" id="cce_10_0048__p310913521612"><strong id="cce_10_0048__b1384716276315">Advanced Settings</strong><ul id="cce_10_0048__ul142811417"><li id="cce_10_0048__li0421513417"><strong id="cce_10_0048__b6900932134716">Upgrade</strong>: See <a href="cce_10_0397.html">Configuring the Workload Upgrade Policy</a>.</li><li id="cce_10_0048__li5292111713411"><strong id="cce_10_0048__b1827703613471">Scheduling</strong>: See <a href="cce_10_0232.html">Scheduling Policy (Affinity/Anti-affinity)</a>.</li><li id="cce_10_0048__li206428507436"><strong id="cce_10_0048__b1840219331836">Instances Management Policies</strong><p id="cce_10_0048__p151323251334">For some distributed systems, the StatefulSet sequence is unnecessary and/or should not occur. These systems require only uniqueness and identifiers.</p>
<ul id="cce_10_0048__ul758812493316"><li id="cce_10_0048__li258832417338"><strong id="cce_10_0048__b13534251116">OrderedReady</strong>: The StatefulSet will deploy, delete, or scale pods in order and one by one. (The StatefulSet continues only after the previous pod is ready or deleted.) This is the default policy.</li><li id="cce_10_0048__li1558862416338"><strong id="cce_10_0048__b112293521039">Parallel</strong>: The StatefulSet will create pods in parallel to match the desired scale without waiting, and will delete all pods at once.</li></ul>
</li><li id="cce_10_0048__li13285132913414"><strong id="cce_10_0048__b8773133531614">Toleration</strong>: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. For details, see <a href="cce_10_0352.html#cce_10_0352__section2047442210417">Tolerations</a>.</li><li id="cce_10_0048__li179714209414"><strong id="cce_10_0048__b4641659154115">Labels and Annotations</strong>: See <a href="cce_10_0386.html">Pod Labels and Annotations</a>.</li><li id="cce_10_0048__li1917237124111"><strong id="cce_10_0048__b18993554214">DNS</strong>: See <a href="cce_10_0365.html">DNS Configuration</a>.</li></ul>
</div>
</p></li><li id="cce_10_0048__li01417411620"><span>Click <strong id="cce_10_0048__b2185122912413">Create Workload</strong> in the lower right corner.</span></li></ol>
</div>
322
docs/cce/umn/cce_10_0054.html
Normal file
File diff suppressed because it is too large
@ -4,7 +4,7 @@
<div id="body8662426"><div class="section" id="cce_10_0063__section127666327248"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0063__p192873216229">After a node scaling policy is created, you can delete, edit, disable, enable, or clone the policy.</p>
</div>
<div class="section" id="cce_10_0063__section102878407207"><h4 class="sectiontitle">Viewing a Node Scaling Policy</h4><p id="cce_10_0063__p713741135215">You can view the associated node pool, rules, and scaling history of a node scaling policy and rectify faults according to the error information displayed.</p>
<ol id="cce_10_0063__ol17409123885219"><li id="cce_10_0063__li148293318248"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0063__li3967519744"><span>Choose <strong id="cce_10_0063__b75474128512">Node Scaling</strong> in the navigation pane and click <span><img id="cce_10_0063__image1254712122518" src="en-us_image_0000001244261161.png"></span> in front of the policy to be viewed.</span></li><li id="cce_10_0063__li641003813527"><span>In the expanded area, the <span class="uicontrol" id="cce_10_0063__uicontrol864413924614"><b>Associated Node Pools</b></span>, <span class="uicontrol" id="cce_10_0063__uicontrol1164419910465"><b>Rules</b></span>, and <span class="uicontrol" id="cce_10_0063__uicontrol1964516974613"><b>Scaling History</b></span> tab pages are displayed. If the policy is abnormal, locate and rectify the fault based on the error information.</span><p><div class="note" id="cce_10_0063__note13404926203311"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0063__p268214718213">You can also disable or enable auto scaling on the <strong id="cce_10_0063__b57750163232">Node Pools</strong> page.</p>
<ol id="cce_10_0063__ol17409123885219"><li id="cce_10_0063__li148293318248"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0063__li3967519744"><span>Choose <strong id="cce_10_0063__b75474128512">Node Scaling</strong> in the navigation pane and click <span><img id="cce_10_0063__image1254712122518" src="en-us_image_0000001517743464.png"></span> in front of the policy to be viewed.</span></li><li id="cce_10_0063__li641003813527"><span>In the expanded area, the <span class="uicontrol" id="cce_10_0063__uicontrol864413924614"><b>Associated Node Pools</b></span>, <span class="uicontrol" id="cce_10_0063__uicontrol1164419910465"><b>Rules</b></span>, and <span class="uicontrol" id="cce_10_0063__uicontrol1964516974613"><b>Scaling History</b></span> tab pages are displayed. If the policy is abnormal, locate and rectify the fault based on the error information.</span><p><div class="note" id="cce_10_0063__note13404926203311"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0063__p268214718213">You can also disable or enable auto scaling on the <strong id="cce_10_0063__b57750163232">Node Pools</strong> page.</p>
<ol type="a" id="cce_10_0063__ol15169162582120"><li id="cce_10_0063__li13169425162117">Log in to the CCE console and access the cluster console.</li><li id="cce_10_0063__li716942518219">In the navigation pane, choose <strong id="cce_10_0063__b189612560310">Nodes</strong> and switch to the <strong id="cce_10_0063__b19818721244">Node Pools</strong> tab page.</li><li id="cce_10_0063__li2016919259214">Click <span class="uicontrol" id="cce_10_0063__uicontrol1689716319372"><b>Edit</b></span> of the node pool to be operated. In the <span class="uicontrol" id="cce_10_0063__uicontrol3989194019311"><b>Edit Node Pool</b></span> dialog box that is displayed, set the limits of the number of nodes.</li></ol>
</div></div>
</p></li></ol>
@ -14,6 +14,7 @@
</div></div>
</li></ul>
</div>
</p></li><li id="cce_10_0066__li921715919291"><span>Whether to deploy the add-on instance across multiple AZs.</span><p><ul id="cce_10_0066__ul4214181752714"><li id="cce_10_0066__en-us_topic_0000001199341168_li5214161718270"><strong id="cce_10_0066__b12879443056025">Preferred</strong>: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ.</li><li id="cce_10_0066__en-us_topic_0000001199341168_li4214917142716"><strong id="cce_10_0066__b3515872306111">Required</strong>: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, not all pods can run.</li></ul>
</p></li><li id="cce_10_0066__li1188334214330"><span>Set related parameters.</span><p><div class="p" id="cce_10_0066__p11883194243320">In everest 1.2.26 or later, the performance of attaching a large number of EVS volumes is optimized. The following three parameters are provided:<ul id="cce_10_0066__ul68831142163317"><li id="cce_10_0066__li17883174214333"><strong id="cce_10_0066__b143976403492">csi_attacher_worker_threads</strong>: number of workers that can concurrently mount EVS volumes. The default value is <strong id="cce_10_0066__b1666753135014">60</strong>.</li><li id="cce_10_0066__li9883114223314"><strong id="cce_10_0066__b1573145185014">csi_attacher_detach_worker_threads</strong>: number of workers that can concurrently unmount EVS volumes. The default value is <strong id="cce_10_0066__b11574452507">60</strong>.</li><li id="cce_10_0066__li14883742193315"><strong id="cce_10_0066__b42378570502">volume_attaching_flow_ctrl</strong>: maximum number of EVS volumes that can be mounted by the everest add-on within one minute. The default value is <strong id="cce_10_0066__b111476236513">0</strong>, indicating that the EVS volume mounting performance is determined by the underlying storage resources.</li></ul>
</div>
<p id="cce_10_0066__p1088304217337">The preceding three parameters are associated with each other and are constrained by the underlying storage resources in the region where the cluster is located. If you want to mount a large number of volumes (more than 500 EVS volumes per minute), you can contact the customer service personnel and configure the parameters under their guidance to prevent the everest add-on from running abnormally due to improper parameter settings.</p>
@ -4,7 +4,7 @@
<div id="body1508729244098"><div class="section" id="cce_10_0083__section11873141710246"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0083__p799618243249">After an HPA policy is created, you can update, clone, edit, and delete the policy, as well as edit the YAML file.</p>
</div>
<div class="section" id="cce_10_0083__section14993443181414"><h4 class="sectiontitle">Checking an HPA Policy</h4><p id="cce_10_0083__p713741135215">You can view the rules, status, and events of an HPA policy and handle exceptions based on the error information displayed.</p>
<ol id="cce_10_0083__ol17409123885219"><li id="cce_10_0083__li754610559213"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0083__li4409153817525"><span>In the navigation pane, choose <strong id="cce_10_0083__b9595121512611">Workload Scaling</strong>. On the <span class="uicontrol" id="cce_10_0083__uicontrol124101738135219"><b>HPA Policies</b></span> tab page, click <span><img id="cce_10_0083__image1569143785619" src="en-us_image_0000001244261103.png"></span> next to the target HPA policy.</span></li><li id="cce_10_0083__li641003813527"><span>In the expanded area, you can view the <span class="uicontrol" id="cce_10_0083__uicontrol783043616"><b>Rules</b></span>, <span class="uicontrol" id="cce_10_0083__uicontrol79110193616"><b>Status</b></span>, and <span class="uicontrol" id="cce_10_0083__uicontrol897073610"><b>Events</b></span> tab pages. If the policy is abnormal, locate and rectify the fault based on the error information.</span><p><div class="note" id="cce_10_0083__note13404926203311"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0083__p1793618441931">You can also view the created HPA policy on the workload details page.</p>
<ol id="cce_10_0083__ol17409123885219"><li id="cce_10_0083__li754610559213"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0083__li4409153817525"><span>In the navigation pane, choose <strong id="cce_10_0083__b9595121512611">Workload Scaling</strong>. On the <span class="uicontrol" id="cce_10_0083__uicontrol124101738135219"><b>HPA Policies</b></span> tab page, click <span><img id="cce_10_0083__image1569143785619" src="en-us_image_0000001568902521.png"></span> next to the target HPA policy.</span></li><li id="cce_10_0083__li641003813527"><span>In the expanded area, you can view the <span class="uicontrol" id="cce_10_0083__uicontrol783043616"><b>Rules</b></span>, <span class="uicontrol" id="cce_10_0083__uicontrol79110193616"><b>Status</b></span>, and <span class="uicontrol" id="cce_10_0083__uicontrol897073610"><b>Events</b></span> tab pages. If the policy is abnormal, locate and rectify the fault based on the error information.</span><p><div class="note" id="cce_10_0083__note13404926203311"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0083__p1793618441931">You can also view the created HPA policy on the workload details page.</p>
<ol type="a" id="cce_10_0083__ol1691347738"><li id="cce_10_0083__li5468556932">Log in to the CCE console and access the cluster console.</li><li id="cce_10_0083__li87313521749">In the navigation pane, choose <strong id="cce_10_0083__b01748420311">Workloads</strong>. Click the workload name to view its details.</li><li id="cce_10_0083__li1769110474318">On the workload details page, swich to the <strong id="cce_10_0083__b3716156354">Auto Scaling</strong> tab page to view the HPA policies. You can also view the scaling policies you configured in <strong id="cce_10_0083__b81591132105417">Workload Scaling</strong>.</li></ol>
</div></div>
@ -3,14 +3,14 @@
<h1 class="topictitle1">Ingress Overview</h1>
<div id="body0000001159453456"><div class="section" id="cce_10_0094__section17868123416122"><h4 class="sectiontitle">Why We Need Ingresses</h4><p id="cce_10_0094__p19813582419">A Service is generally used to forward access requests based on TCP and UDP and provide layer-4 load balancing for clusters. However, in actual scenarios, if there is a large number of HTTP/HTTPS access requests on the application layer, the Service cannot meet the forwarding requirements. Therefore, the Kubernetes cluster provides an HTTP-based access mode, that is, ingress.</p>
<p id="cce_10_0094__p168757241679">An ingress is an independent resource in the Kubernetes cluster and defines rules for forwarding external access traffic. As shown in <a href="#cce_10_0094__fig18155819416">Figure 1</a>, you can customize forwarding rules based on domain names and URLs to implement fine-grained distribution of access traffic.</p>
<div class="fignone" id="cce_10_0094__fig18155819416"><a name="cce_10_0094__fig18155819416"></a><a name="fig18155819416"></a><span class="figcap"><b>Figure 1 </b>Ingress diagram</span><br><span><img class="eddx" id="cce_10_0094__image98185817414" src="en-us_image_0000001243981115.png"></span></div>
<div class="fignone" id="cce_10_0094__fig18155819416"><a name="cce_10_0094__fig18155819416"></a><a name="fig18155819416"></a><span class="figcap"><b>Figure 1 </b>Ingress diagram</span><br><span><img class="eddx" id="cce_10_0094__image98185817414" src="en-us_image_0000001517903200.png"></span></div>
<p id="cce_10_0094__p128258846">The following describes the ingress-related definitions:</p>
<ul id="cce_10_0094__ul2875811411"><li id="cce_10_0094__li78145815413">Ingress object: a set of access rules that forward requests to specified Services based on domain names or URLs. It can be added, deleted, modified, and queried by calling APIs.</li><li id="cce_10_0094__li148115817417">Ingress Controller: an executor for request forwarding. It monitors the changes of resource objects such as ingresses, Services, endpoints, secrets (mainly TLS certificates and keys), nodes, and ConfigMaps in real time, parses rules defined by ingresses, and forwards requests to the corresponding backend Services.</li></ul>
</div>
<div class="section" id="cce_10_0094__section162271821192312"><h4 class="sectiontitle">Working Principle of ELB Ingress Controller</h4><p id="cce_10_0094__p172542048121220">ELB Ingress Controller developed by CCE implements layer-7 network access for the internet and intranet (in the same VPC) based on ELB and distributes access traffic to the corresponding Services using different URLs.</p>
<p id="cce_10_0094__p4254124831218">ELB Ingress Controller is deployed on the master node and bound to the load balancer in the VPC where the cluster resides. Different domain names, ports, and forwarding policies can be configured for the same load balancer (with the same IP address). <a href="#cce_10_0094__fig122542486129">Figure 2</a> shows the working principle of ELB Ingress Controller.</p>
<ol id="cce_10_0094__ol525410483123"><li id="cce_10_0094__li8254184813127">A user creates an ingress object and configures a traffic access rule in the ingress, including the load balancer, URL, SSL, and backend service port.</li><li id="cce_10_0094__li1225474817126">When Ingress Controller detects that the ingress object changes, it reconfigures the listener and backend server route on the ELB side according to the traffic access rule.</li><li id="cce_10_0094__li115615167193">When a user accesses a workload, the traffic is forwarded to the corresponding backend service port based on the forwarding policy configured on ELB, and then forwarded to each associated workload through the Service.</li></ol>
<div class="fignone" id="cce_10_0094__fig122542486129"><a name="cce_10_0094__fig122542486129"></a><a name="fig122542486129"></a><span class="figcap"><b>Figure 2 </b>Working principle of ELB Ingress Controller</span><br><span><img class="eddx" id="cce_10_0094__image725424815120" src="en-us_image_0000001199501200.png"></span></div>
<div class="fignone" id="cce_10_0094__fig122542486129"><a name="cce_10_0094__fig122542486129"></a><a name="fig122542486129"></a><span class="figcap"><b>Figure 2 </b>Working principle of ELB Ingress Controller</span><br><span><img class="eddx" id="cce_10_0094__image725424815120" src="en-us_image_0000001568822925.png"></span></div>
</div>
</div>
<div>
@ -12,15 +12,18 @@
</li></ul>
</div>
<p id="cce_10_0107__p2842139103716">Download kubectl and the configuration file. Copy the file to your client, and configure kubectl. After the configuration is complete, you can access your Kubernetes clusters. Procedure:</p>
<ol id="cce_10_0107__ol6469105613170"><li id="cce_10_0107__li194691356201712"><a name="cce_10_0107__li194691356201712"></a><a name="li194691356201712"></a><span>Download kubectl.</span><p><div class="p" id="cce_10_0107__p1828181171820">On the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/README.md" target="_blank" rel="noopener noreferrer">Kubernetes release</a> page, click the corresponding link based on the cluster version, click <strong id="cce_10_0107__b1450212240525">Client Binaries</strong>, and download the corresponding platform software package. Alternatively, you can install kubectl with curl following the guide in <a href="https://kubernetes.io/docs/tasks/tools/#kubectl" target="_blank" rel="noopener noreferrer">Install Tools</a>.<div class="fignone" id="cce_10_0107__fig978018401170"><span class="figcap"><b>Figure 1 </b>Downloading kubectl</span><br><span><img id="cce_10_0107__image17910133212172" src="en-us_image_0000001336475537.png"></span></div>
</div>
<ol id="cce_10_0107__ol6469105613170"><li id="cce_10_0107__li194691356201712"><span>Download kubectl.</span><p><p id="cce_10_0107__p53069487256">Prepare a computer that can access the public network and install kubectl in CLI mode. You can run the <strong id="cce_10_0107__b188225352515">kubectl version</strong> command to check whether kubectl has been installed. If kubectl has been installed, skip this step.</p>
<p id="cce_10_0107__p125851851153510">This section uses the Linux environment as an example to describe how to install and configure kubectl. For details, see <a href="https://kubernetes.io/docs/tasks/tools/#kubectl" target="_blank" rel="noopener noreferrer">Installing kubectl</a>.</p>
<ol type="a" id="cce_10_0107__ol735517018289"><li id="cce_10_0107__li551132463520">Log in to your client and download kubectl.<pre class="screen" id="cce_10_0107__screen8511142418352">cd /home
curl -LO https://dl.k8s.io/release/<em id="cce_10_0107__i13511182443516">{v1.25.0}</em>/bin/linux/amd64/kubectl</pre>
<p id="cce_10_0107__p6511924173518"><em id="cce_10_0107__i1251116243353"><strong id="cce_10_0107__b3575202815342">{v1.25.0}</strong></em> specifies the version number. Replace it as required.</p>
</li><li id="cce_10_0107__li1216814211286">Install kubectl.<pre class="screen" id="cce_10_0107__screen16892115815271">chmod +x kubectl
mv -f kubectl /usr/local/bin</pre>
</li></ol>
</p></li><li id="cce_10_0107__li34691156151712"><a name="cce_10_0107__li34691156151712"></a><a name="li34691156151712"></a><span>Obtain the kubectl configuration file (kubeconfig).</span><p><p id="cce_10_0107__p1295818109256">On the <strong id="cce_10_0107__b450013549611">Connection Information</strong> pane on the cluster details page, click <strong id="cce_10_0107__b136512181078">Learn more</strong> next to <strong id="cce_10_0107__b177317221173">kubectl</strong>. On the window displayed, download the configuration file.</p>
<div class="note" id="cce_10_0107__note191638104210"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0107__ul795610485546"><li id="cce_10_0107__li495634817549">The kubectl configuration file <strong id="cce_10_0107__b11741123981418">kubeconfig.json</strong> is used for cluster authentication. If the file is leaked, your clusters may be attacked.</li><li id="cce_10_0107__li62692399615">By default, two-way authentication is disabled for domain names in the current cluster. You can run the <strong id="cce_10_0107__b76312129249">kubectl config use-context externalTLSVerify</strong> command to enable two-way authentication. For details, see <a href="#cce_10_0107__section1559919152711">Two-Way Authentication for Domain Names</a>. For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, you need to bind the EIP again and download <strong id="cce_10_0107__b940713611819">kubeconfig.json</strong> again.</li><li id="cce_10_0107__li16956194817544">The Kubernetes permissions assigned by the configuration file downloaded by IAM users are the same as those assigned to the IAM users on the CCE console.</li><li id="cce_10_0107__li1537643019239">If the KUBECONFIG environment variable is configured in the Linux OS, kubectl preferentially loads the KUBECONFIG environment variable instead of <strong id="cce_10_0107__b5859154717398">$home/.kube/config</strong>.</li></ul>
</div></div>
</p></li><li id="cce_10_0107__li25451059122317"><span>Configure kubectl.</span><p><div class="p" id="cce_10_0107__p109826082413">Install and configure kubectl (A Linux OS is used as an example).<ol type="a" id="cce_10_0107__ol2291154772010"><li id="cce_10_0107__li102911547102012">Copy the kubectl downloaded in <a href="#cce_10_0107__li194691356201712">1</a> and the configuration file downloaded in <a href="#cce_10_0107__li34691156151712">2</a> to the <strong id="cce_10_0107__b1554218410219">/home</strong> directory on your client.</li><li id="cce_10_0107__li1484374852013">Log in to your client and configure kubectl. If you have installed kubectl, skip this step.<pre class="screen" id="cce_10_0107__screen14487191815222">cd /home
chmod +x kubectl
mv -f kubectl /usr/local/bin</pre>
</li><li id="cce_10_0107__li114766383477">Log in to your client and configure the kubeconfig file.<pre class="screen" id="cce_10_0107__screen849155210477">cd /home
</p></li><li id="cce_10_0107__li25451059122317"><a name="cce_10_0107__li25451059122317"></a><a name="li25451059122317"></a><span>Configure kubectl.</span><p><div class="p" id="cce_10_0107__p109826082413">Configure kubectl (A Linux OS is used).<ol type="a" id="cce_10_0107__ol2291154772010"><li id="cce_10_0107__li102911547102012">Log in to your client and copy the kubeconfig.json configuration file downloaded in <a href="#cce_10_0107__li34691156151712">2</a> to the <strong id="cce_10_0107__b13872204384014">/home</strong> directory on your client.</li><li id="cce_10_0107__li114766383477">Configure the kubectl authentication file.<pre class="screen" id="cce_10_0107__screen849155210477">cd /home
mkdir -p $HOME/.kube
mv -f kubeconfig.json $HOME/.kube/config</pre>
</li><li id="cce_10_0107__li1480512253214">Switch the kubectl access mode based on service scenarios.<ul id="cce_10_0107__ul91037595229"><li id="cce_10_0107__li5916145112313">Run this command to enable intra-VPC access:<pre class="screen" id="cce_10_0107__screen279213242247">kubectl config use-context internal</pre>
@ -33,12 +36,16 @@ mv -f kubeconfig.json $HOME/.kube/config</pre>
</p></li></ol>
</div>
<div class="section" id="cce_10_0107__section1559919152711"><a name="cce_10_0107__section1559919152711"></a><a name="section1559919152711"></a><h4 class="sectiontitle">Two-Way Authentication for Domain Names</h4><p id="cce_10_0107__p138948491274">Currently, CCE supports two-way authentication for domain names.</p>
<ul id="cce_10_0107__ul88981331482"><li id="cce_10_0107__li1705116151915">Two-way authentication is disabled for domain names by default. You can run the <strong id="cce_10_0107__b198732542582">kubectl config use-context externalTLSVerify</strong> command to switch to the externalTLSVerify context to enable it.</li><li id="cce_10_0107__li1807459174818">When an EIP is bound to or unbound from a cluster, or a custom domain name is configured or updated, the cluster server certificate will be added the latest cluster access address (including the EIP bound to the cluster and all custom domain names configured for the cluster).</li><li id="cce_10_0107__li17898153310483">Asynchronous cluster synchronization takes about 5 to 10 minutes. You can view the synchronization result in <strong id="cce_10_0107__b196404619200">Synchronize Certificate</strong> in <strong id="cce_10_0107__b364620682012">Operation Records</strong>.</li><li id="cce_10_0107__li614337712">For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, you need to bind the EIP again and download <strong id="cce_10_0107__b121611451417">kubeconfig.json</strong> again.</li><li id="cce_10_0107__li5950658165414">If the domain name two-way authentication is not supported, <strong id="cce_10_0107__b56091346184712">kubeconfig.json</strong> contains the <strong id="cce_10_0107__b1961534614476">"insecure-skip-tls-verify": true</strong> field, as shown in <a href="#cce_10_0107__fig1941342411">Figure 2</a>. To use two-way authentication, you can download the <strong id="cce_10_0107__b549311585216">kubeconfig.json</strong> file again and enable two-way authentication for the domain names.<div class="fignone" id="cce_10_0107__fig1941342411"><a name="cce_10_0107__fig1941342411"></a><a name="fig1941342411"></a><span class="figcap"><b>Figure 2 </b>Two-way authentication disabled for domain names</span><br><span><img id="cce_10_0107__image3414621613" src="en-us_image_0000001199021320.png"></span></div>
<ul id="cce_10_0107__ul88981331482"><li id="cce_10_0107__li1705116151915">Two-way authentication is disabled for domain names by default. You can run the <strong id="cce_10_0107__b198732542582">kubectl config use-context externalTLSVerify</strong> command to switch to the externalTLSVerify context to enable it.</li><li id="cce_10_0107__li1807459174818">When an EIP is bound to or unbound from a cluster, or a custom domain name is configured or updated, the cluster server certificate will be added the latest cluster access address (including the EIP bound to the cluster and all custom domain names configured for the cluster).</li><li id="cce_10_0107__li17898153310483">Asynchronous cluster synchronization takes about 5 to 10 minutes. You can view the synchronization result in <strong id="cce_10_0107__b196404619200">Synchronize Certificate</strong> in <strong id="cce_10_0107__b364620682012">Operation Records</strong>.</li><li id="cce_10_0107__li614337712">For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, you need to bind the EIP again and download <strong id="cce_10_0107__b121611451417">kubeconfig.json</strong> again.</li><li id="cce_10_0107__li5950658165414">If the domain name two-way authentication is not supported, <strong id="cce_10_0107__b56091346184712">kubeconfig.json</strong> contains the <strong id="cce_10_0107__b1961534614476">"insecure-skip-tls-verify": true</strong> field, as shown in <a href="#cce_10_0107__fig1941342411">Figure 1</a>. To use two-way authentication, you can download the <strong id="cce_10_0107__b549311585216">kubeconfig.json</strong> file again and enable two-way authentication for the domain names.<div class="fignone" id="cce_10_0107__fig1941342411"><a name="cce_10_0107__fig1941342411"></a><a name="fig1941342411"></a><span class="figcap"><b>Figure 1 </b>Two-way authentication disabled for domain names</span><br><span><img id="cce_10_0107__image3414621613" src="en-us_image_0000001568822965.png"></span></div>
</li></ul>
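<p>Conceptually, the downloaded <strong>kubeconfig.json</strong> defines several contexts that differ in the API server address used and in whether the server certificate is verified. The following sketch only illustrates how the <strong>internal</strong> and <strong>externalTLSVerify</strong> contexts relate; the actual file content, cluster names, and credentials are generated by CCE.</p>
<pre class="screen"># Sketch only - the real kubeconfig.json is generated by CCE
contexts:
- name: internal              # private API server address inside the VPC
  context:
    cluster: internalCluster
    user: user
- name: externalTLSVerify     # EIP address with server certificate verification
  context:
    cluster: externalClusterTLSVerify
    user: user
current-context: internal     # switch with: kubectl config use-context externalTLSVerify</pre>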
</div>
<div class="section" id="cce_10_0107__section1628510591883"><h4 class="sectiontitle">Common Issue (Error from server Forbidden)</h4><p id="cce_10_0107__p75241832114916">When you use kubectl to create or query Kubernetes resources, the following output is returned:</p>
<p id="cce_10_0107__p581934618458"># kubectl get deploy Error from server (Forbidden): deployments.apps is forbidden: User "0c97ac3cb280f4d91fa7c0096739e1f8" cannot list resource "deployments" in API group "apps" in the namespace "default"</p>
<div class="section" id="cce_10_0107__section1628510591883"><h4 class="sectiontitle">Common Issues</h4><ul id="cce_10_0107__ul1374831051115"><li id="cce_10_0107__li4748810121112"><strong id="cce_10_0107__b456677171119">Error from server Forbidden</strong><p id="cce_10_0107__p75241832114916">When you use kubectl to create or query Kubernetes resources, the following output is returned:</p>
<pre class="screen" id="cce_10_0107__screen5530165114117"># kubectl get deploy Error from server (Forbidden): deployments.apps is forbidden: User "0c97ac3cb280f4d91fa7c0096739e1f8" cannot list resource "deployments" in API group "apps" in the namespace "default"</pre>
<p id="cce_10_0107__p1418636115119">The cause is that the user does not have the permissions to operate the Kubernetes resources. For details about how to assign permissions, see <a href="cce_10_0189.html">Namespace Permissions (Kubernetes RBAC-based)</a>.</p>
</li><li id="cce_10_0107__li0365152110"><strong id="cce_10_0107__b1829619716131">The connection to the server localhost:8080 was refused</strong><p id="cce_10_0107__p1776396131212">When you use kubectl to create or query Kubernetes resources, the following output is returned:</p>
<pre class="screen" id="cce_10_0107__screen197636617124">The connection to the server localhost:8080 was refused - did you specify the right host or port?</pre>
<p id="cce_10_0107__p87631764129">The cause is that cluster authentication is not configured for the kubectl client. For details, see <a href="#cce_10_0107__li25451059122317">3</a>.</p>
</li></ul>
</div>
</div>
<div>
@ -3,7 +3,7 @@
<h1 class="topictitle1">Setting Health Check for a Container</h1>
<div id="body1512535109871"><div class="section" id="cce_10_0112__section1731112174912"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0112__p8242924192"><span class="keyword" id="cce_10_0112__keyword22817116429">Health check</span> regularly checks the health status of containers during container running. If the health check function is not configured, a pod cannot detect application exceptions or automatically restart the application to restore it. This will result in a situation where the pod status is normal but the application in the pod is abnormal.</p>
<p id="cce_10_0112__a77e71e69afde4757ab0ef6087b2e30de">Kubernetes provides the following health check probes:</p>
<ul id="cce_10_0112__ul1867812287915"><li id="cce_10_0112__li574951765020"><strong id="cce_10_0112__b1209722181417">Liveness probe</strong> (livenessProbe): checks whether a container is still alive. It is similar to the <strong id="cce_10_0112__b1821422218147">ps</strong> command that checks whether a process exists. If the liveness check of a container fails, the cluster restarts the container. If the liveness check is successful, no operation is executed.</li><li id="cce_10_0112__li36781028792"><strong id="cce_10_0112__b1729242134220">Readiness probe</strong> (readinessProbe): checks whether a container is ready to process user requests. Upon that the container is detected unready, service traffic will not be directed to the container. It may take a long time for some applications to start up before they can provide services. This is because that they need to load disk data or rely on startup of an external module. In this case, the application process is running, but the application cannot provide services. To address this issue, this health check probe is used. If the container readiness check fails, the cluster masks all requests sent to the container. If the container readiness check is successful, the container can be accessed. </li><li id="cce_10_0112__li142001552181016"><strong id="cce_10_0112__b86001053354">Startup probe</strong> (startupProbe): checks when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, ensuring that those probes do not interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before they are started.</li></ul>
<ul id="cce_10_0112__ul1867812287915"><li id="cce_10_0112__li574951765020"><strong id="cce_10_0112__b125689275012">Liveness probe</strong> (livenessProbe): checks whether a container is still alive. It is similar to the <strong id="cce_10_0112__b17568627507">ps</strong> command that checks whether a process exists. If the liveness check of a container fails, the cluster restarts the container. If the liveness check is successful, no operation is executed.</li><li id="cce_10_0112__li36781028792"><strong id="cce_10_0112__b1729242134220">Readiness probe</strong> (readinessProbe): checks whether a container is ready to process user requests. Upon that the container is detected unready, service traffic will not be directed to the container. It may take a long time for some applications to start up before they can provide services. This is because that they need to load disk data or rely on startup of an external module. In this case, the application process is running, but the application cannot provide services. To address this issue, this health check probe is used. If the container readiness check fails, the cluster masks all requests sent to the container. If the container readiness check is successful, the container can be accessed. </li><li id="cce_10_0112__li142001552181016"><strong id="cce_10_0112__b86001053354">Startup probe</strong> (startupProbe): checks when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, ensuring that those probes do not interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting terminated by the kubelet before they are started.</li></ul>
</div>
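<p>A minimal sketch showing all three probes on one container; the image, path, port, and thresholds are illustrative assumptions.</p>
<pre class="screen">apiVersion: v1
kind: Pod
metadata:
  name: probe-example
spec:
  containers:
  - name: container-0
    image: nginx:latest
    startupProbe:              # liveness and readiness checks start only after this succeeds
      httpGet:
        path: /health-check
        port: 80
      failureThreshold: 30
      periodSeconds: 3
    livenessProbe:             # the container is restarted if this check fails
      httpGet:
        path: /health-check
        port: 80
      periodSeconds: 10
    readinessProbe:            # traffic is withheld until this check succeeds
      exec:
        command: ["sh", "-c", "wget -q -O /dev/null http://127.0.0.1:80/health-check"]
      periodSeconds: 10</pre>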
<div class="section" id="cce_10_0112__section476025319384"><h4 class="sectiontitle">Check Method</h4><ul id="cce_10_0112__ul2492162133910"><li id="cce_10_0112__li19505918465"><strong id="cce_10_0112__b84235270695216"><span class="keyword" id="cce_10_0112__keyword122935940517318">HTTP request</span></strong><p id="cce_10_0112__p17738122617398">This health check mode is applicable to containers that provide HTTP/HTTPS services. The cluster periodically initiates an HTTP/HTTPS GET request to such containers. If the return code of the HTTP/HTTPS response is within 200–399, the probe is successful. Otherwise, the probe fails. In this health check mode, you must specify a container listening port and an HTTP/HTTPS request path.</p>
<p id="cce_10_0112__p051511331505">For example, for a container that provides HTTP services, the HTTP check path is <strong id="cce_10_0112__b2043313277265">/health-check</strong>, the port is 80, and the host address is optional (which defaults to the container IP address). Here, 172.16.0.186 is used as an example, and we can get such a request: GET http://172.16.0.186:80/health-check. The cluster periodically initiates this request to the container. You can also add one or more headers to an HTTP request. For example, set the request header name to <strong id="cce_10_0112__b1372104918911">Custom-Header</strong> and the corresponding value to <strong id="cce_10_0112__b9755554916">example</strong>.</p>
@ -11,9 +11,9 @@
<p id="cce_10_0112__p1525113371164">For example, if you have a Nginx container with service port 80, after you specify TCP port 80 for container listening, the cluster will periodically initiate a TCP connection to port 80 of the container. If the connection is successful, the probe is successful. Otherwise, the probe fails.</p>
</li><li id="cce_10_0112__li104061647154310"><strong id="cce_10_0112__b84235270695818"><span class="keyword" id="cce_10_0112__keyword1395397266173145">CLI</span></strong><p id="cce_10_0112__p105811510164113">CLI is an efficient tool for health check. When using the CLI, you must specify an executable command in a container. The cluster periodically runs the command in the container. If the command output is 0, the health check is successful. Otherwise, the health check fails.</p>
<p id="cce_10_0112__p1658131014413">The CLI mode can be used to replace the HTTP request-based and TCP port-based health check.</p>
<ul id="cce_10_0112__ul16409174744313"><li id="cce_10_0112__li7852728174119">For a TCP port, you can write a program script to connect to a container port. If the connection is successful, the script returns <strong id="cce_10_0112__b11599347141615">0</strong>. Otherwise, the script returns <strong id="cce_10_0112__b11599443121612">–1</strong>.</li><li id="cce_10_0112__li241104715431">For an HTTP request, you can write a program script to run the <strong id="cce_10_0112__b1767410318172">wget</strong> command for a container.<p id="cce_10_0112__p16488203413413"><strong id="cce_10_0112__b422541134110">wget http://127.0.0.1:80/health-check</strong></p>
<ul id="cce_10_0112__ul16409174744313"><li id="cce_10_0112__li7852728174119">For a TCP port, you can use a program script to connect to a container port. If the connection is successful, the script returns <strong id="cce_10_0112__b167704361017">0</strong>. Otherwise, the script returns <strong id="cce_10_0112__b177010361817">–1</strong>.</li><li id="cce_10_0112__li241104715431">For an HTTP request, you can use the script command to run the <strong id="cce_10_0112__b18539192117411">wget</strong> command to detect the container.<p id="cce_10_0112__p16488203413413"><strong id="cce_10_0112__b422541134110">wget http://127.0.0.1:80/health-check</strong></p>
<p id="cce_10_0112__p13488133464119">Check the return code of the response. If the return code is within 200–399, the script returns <strong id="cce_10_0112__b14498132912217">0</strong>. Otherwise, the script returns <strong id="cce_10_0112__b427293111227">–1</strong>. </p>
<div class="notice" id="cce_10_0112__note124141947164311"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_10_0112__ul7414047164318"><li id="cce_10_0112__li81561727181416">Put the program to be executed in the container image so that the program can be executed. </li><li id="cce_10_0112__li204153475437">If the command to be executed is a shell script, do not directly specify the script as the command, but add a script parser. For example, if the script is <strong id="cce_10_0112__b1134963416348">/data/scripts/health_check.sh</strong>, you must specify <strong id="cce_10_0112__b183492034173418">sh/data/scripts/health_check.sh</strong> for command execution. The reason is that the cluster is not in the terminal environment when executing programs in a container. </li></ul>
</div></div>
</li></ul>
</li><li id="cce_10_0112__li198471623132818"><strong id="cce_10_0112__b51081513324">gRPC Check</strong><div class="p" id="cce_10_0112__p489181312320">gRPC checks can configure startup, liveness, and readiness probes for your gRPC application without exposing any HTTP endpoint, nor do you need an executable. Kubernetes can connect to your workload via gRPC and query its status.<div class="notice" id="cce_10_0112__note621111643611"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_10_0112__ul7170123014392"><li id="cce_10_0112__li6171630113911">The gRPC check is supported only in CCE clusters of v1.25 or later.</li><li id="cce_10_0112__li0171193083917">To use gRPC for check, your application must support the <a href="https://github.com/grpc/grpc/blob/master/doc/health-checking.md" target="_blank" rel="noopener noreferrer">gRPC health checking protocol</a>.</li><li id="cce_10_0112__li8171163015392">Similar to HTTP and TCP probes, if the port is incorrect or the application does not support the health checking protocol, the check fails.</li></ul>
@ -9,7 +9,7 @@
<p id="cce_10_0113__p78261119155911">Environment variables can be set in the following modes:</p>
<ul id="cce_10_0113__ul1669104610598"><li id="cce_10_0113__li266913468594"><strong id="cce_10_0113__b116911771613">Custom</strong></li><li id="cce_10_0113__li13148164912599"><strong id="cce_10_0113__b151552136536">Added from ConfigMap</strong>: Import all keys in a ConfigMap as environment variables.</li><li id="cce_10_0113__li1855315291026"><strong id="cce_10_0113__b5398577535">Added from ConfigMap key</strong>: Import a key in a ConfigMap as the value of an environment variable. For example, if you import <strong id="cce_10_0113__b766612214405">configmap_value</strong> of <strong id="cce_10_0113__b1650503044013">configmap_key</strong> in a ConfigMap as the value of environment variable <strong id="cce_10_0113__b1518565015405">key1</strong>, an environment variable named <strong id="cce_10_0113__b434614560403">key1</strong> with its value <strong id="cce_10_0113__b215043504113">is configmap_value</strong> exists in the container.</li><li id="cce_10_0113__li1727795616592"><strong id="cce_10_0113__b675162614437">Added from secret</strong>: Import all keys in a secret as environment variables.</li><li id="cce_10_0113__li93353201773"><strong id="cce_10_0113__b0483141614480">Added from secret key</strong>: Import the value of a key in a secret as the value of an environment variable. For example, if you import <strong id="cce_10_0113__b20412138105018">secret_value</strong> of <strong id="cce_10_0113__b1248714112506">secret_key</strong> in secret <strong id="cce_10_0113__b2010675411500">secret-example</strong> as the value of environment variable <strong id="cce_10_0113__b1260612005113">key2</strong>, an environment variable named <strong id="cce_10_0113__b2906162410511">key2</strong> with its value <strong id="cce_10_0113__b26293438519">secret_value</strong> exists in the container.</li><li id="cce_10_0113__li1749760535"><strong id="cce_10_0113__b31881558104120">Variable value/reference</strong>: Use the field defined by a pod as the value of the environment variable, for example, the pod name.</li><li id="cce_10_0113__li16129071317"><strong id="cce_10_0113__b11429919184010">Resource Reference</strong>: Use the field defined by a container as the value of the environment variable, for example, the CPU limit of the container.</li></ul>
</div>
<div class="section" id="cce_10_0113__section13829152011595"><h4 class="sectiontitle">Adding Environment Variables</h4><ol id="cce_10_0113__ol4904646935"><li id="cce_10_0113__li190412461831"><span>Log in to the CCE console. When creating a workload, select <strong id="cce_10_0113__b23218253516">Environment Variables</strong> under <strong id="cce_10_0113__b1763818524318">Container Settings</strong>.</span></li><li id="cce_10_0113__li468251942720"><span>Set environment variables.</span><p><p id="cce_10_0113__p886115513386"><span><img id="cce_10_0113__image486125516381" src="en-us_image_0000001247802971.png"></span></p>
<div class="section" id="cce_10_0113__section13829152011595"><h4 class="sectiontitle">Adding Environment Variables</h4><ol id="cce_10_0113__ol4904646935"><li id="cce_10_0113__li190412461831"><span>Log in to the CCE console. When creating a workload, select <strong id="cce_10_0113__b23218253516">Environment Variables</strong> under <strong id="cce_10_0113__b1763818524318">Container Settings</strong>.</span></li><li id="cce_10_0113__li468251942720"><span>Set environment variables.</span><p><p id="cce_10_0113__p886115513386"><span><img id="cce_10_0113__image486125516381" src="en-us_image_0000001569022913.png"></span></p>
</p></li></ol>
</div>
<div class="section" id="cce_10_0113__section19591158201313"><h4 class="sectiontitle">YAML Example</h4><pre class="screen" id="cce_10_0113__screen1034117614147">apiVersion: apps/v1
@ -31,6 +31,11 @@
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.2.2.1.2.3.1.2 "><p id="cce_10_0129__p93701640145120">Number of pods that will be created to match the selected add-on specifications.</p>
</td>
</tr>
<tr id="cce_10_0129__row158631947192617"><td class="cellrowborder" valign="top" width="24%" headers="mcps1.3.3.3.2.2.1.2.3.1.1 "><p id="cce_10_0129__p15864134742615">Multi AZ</p>
</td>
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.2.2.1.2.3.1.2 "><ul id="cce_10_0129__ul4214181752714"><li id="cce_10_0129__li5214161718270"><strong id="cce_10_0129__b14832131496">Preferred</strong>: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ.</li><li id="cce_10_0129__li4214917142716"><strong id="cce_10_0129__b821216403910">Required</strong>: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, not all pods can run.</li></ul>
</td>
</tr>
<tr id="cce_10_0129__row4370840165119"><td class="cellrowborder" valign="top" width="24%" headers="mcps1.3.3.3.2.2.1.2.3.1.1 "><p id="cce_10_0129__p937054045117">Containers</p>
</td>
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.2.2.1.2.3.1.2 "><p id="cce_10_0129__p1437014065110">CPU and memory quotas of the container allowed for the selected add-on specifications.</p>
@ -39,7 +44,7 @@
<tr id="cce_10_0129__row53701440125116"><td class="cellrowborder" valign="top" width="24%" headers="mcps1.3.3.3.2.2.1.2.3.1.1 "><p id="cce_10_0129__p8370124035118">Parameters</p>
</td>
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.2.2.1.2.3.1.2 "><ul id="cce_10_0129__ul221832131618"><li id="cce_10_0129__li12588227858"><strong id="cce_10_0129__b192571453114019">parameterSyncStrategy</strong>: indicates whether to configure consistency check when an add-on is upgraded.<ul id="cce_10_0129__ul9522414615"><li id="cce_10_0129__li19121561511"><strong id="cce_10_0129__b690911564401">ensureConsistent</strong>: indicates that the configuration consistency check is enabled. If the configuration recorded in the cluster is inconsistent with the actual configuration, the add-on cannot be upgraded.</li><li id="cce_10_0129__li119784364"><strong id="cce_10_0129__b13174144274110">force</strong>: indicates that the configuration consistency check is ignored during an upgrade. Ensure that the current effective configuration is the same as the original configuration. After the add-on is upgraded, restore the value of <strong id="cce_10_0129__b4346786426">parameterSyncStrategy</strong> to <strong id="cce_10_0129__b1024311118426">ensureConsistent</strong> and enable the configuration consistency check again.</li></ul>
</li><li id="cce_10_0129__li132885218163"><strong id="cce_10_0129__b234014615612">stub_domains</strong>: DNS servers for user-defined domain names. The format is a key-value pair, where the key is a DNS domain name suffix and the value is one or more DNS server IP addresses.</li><li id="cce_10_0129__li1821862111168"><strong id="cce_10_0129__b2047082195610">upstream_nameservers</strong>: IP addresses of the upstream DNS servers.</li><li id="cce_10_0129__li93661612125"><strong id="cce_10_0129__b16664191691620">servers</strong>: The servers configuration has been available since CoreDNS 1.23.1 and can be customized. For details, see <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" target="_blank" rel="noopener noreferrer">dns-custom-nameservers</a>. <strong id="cce_10_0129__b116642016121614">plugins</strong> indicates the configuration of each plugin in CoreDNS (https://coredns.io/manual/plugins/). You are advised to retain the default configuration in common scenarios to prevent CoreDNS from becoming unavailable due to configuration errors. Each plugin entry contains <strong id="cce_10_0129__b116641916181618">name</strong>, <strong id="cce_10_0129__b466431611161">parameters</strong> (optional), and <strong id="cce_10_0129__b36641716161612">configBlock</strong> (optional). The format of the generated Corefile is as follows:<p id="cce_10_0129__p17731113172317">$name $parameters {</p>
<p id="cce_10_0129__p1122616245232">$configBlock</p>
<p id="cce_10_0129__p2035773019227">}</p>
<p id="cce_10_0129__p187693475389"><a href="#cce_10_0129__table1420814384015">Table 2</a> describes common plugins.</p>
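<p>A hypothetical sketch of a <strong>servers</strong> configuration and the Corefile stanza it could generate is shown below. The port, zone, and plugin values are illustrative only, and the exact parameter schema may differ from this sketch.</p>
<pre class="screen">{
  "servers": [{
    "port": 5353,
    "zones": [{"zone": "."}],
    "plugins": [
      {"name": "errors"},
      {"name": "cache", "parameters": 30},
      {"name": "forward", "parameters": ". /etc/resolv.conf"}
    ]
  }]
}</pre>
<p>The generated Corefile stanza would then be:</p>
<pre class="screen">.:5353 {
    errors
    cache 30
    forward . /etc/resolv.conf
}</pre>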
@ -178,7 +183,7 @@
<ol id="cce_10_0129__ol1895815493314"><li id="cce_10_0129__li29576413330">The query is first sent to the DNS caching layer in CoreDNS.</li><li id="cce_10_0129__li79589463318">The caching layer examines the suffix of the request and forwards it to the matching DNS server:<ul id="cce_10_0129__ul29582417338"><li id="cce_10_0129__li495814453313">Names with the cluster suffix, for example, <strong id="cce_10_0129__b11610940133413">.cluster.local</strong>: The request is sent to CoreDNS.</li></ul>
<ul id="cce_10_0129__ul189581349330"><li id="cce_10_0129__li169582413313">Names with the stub domain suffix, for example, <strong id="cce_10_0129__b208218633511">.acme.local</strong>: The request is sent to the custom DNS resolver configured for that stub domain, which listens, for example, on 1.2.3.4.</li><li id="cce_10_0129__li195815453320">Names that do not match any suffix (for example, <strong id="cce_10_0129__b13519452133513">widget.com</strong>): The request is forwarded to the upstream DNS. See the configuration example below.</li></ul>
</li></ol>
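<p>Following the example above, a minimal sketch of the corresponding DNS parameters (the IP addresses are illustrative only):</p>
<pre class="screen">{
  "stub_domains": {
    "acme.local": ["1.2.3.4"]
  },
  "upstream_nameservers": ["8.8.8.8"]
}</pre>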
<div class="fignone" id="cce_10_0129__fig7582181514118"><span class="figcap"><b>Figure 1 </b>Routing</span><br><span><img id="cce_10_0129__image23305161015" src="en-us_image_0000001199021308.png"></span></div>
<div class="fignone" id="cce_10_0129__fig7582181514118"><span class="figcap"><b>Figure 1 </b>Routing</span><br><span><img id="cce_10_0129__image23305161015" src="en-us_image_0000001568902577.png"></span></div>
</div>
<div>
<div class="familylinks">
File diff suppressed because it is too large
Load Diff
@ -146,7 +146,7 @@ kubectl exec -it podname -c containername bash</pre>
<div class="notice" id="cce_10_0139__note10339910193519"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_10_0139__p133911104351">Resource names cannot be updated.</p>
</div></div>
<p id="cce_10_0139__p17348134894517"><strong id="cce_10_0139__b1687517815316">apply*</strong></p>
<p id="cce_10_0139__p19348174824514">The <strong id="cce_10_0139__b1921172612543">apply</strong> command provides stricter control over resource updating than the <strong id="cce_10_0139__b12982204475517">patch</strong> and <strong id="cce_10_0139__b16365194710554">edit</strong> commands. The <strong id="cce_10_0139__b165890325617">apply</strong> command applies a configuration to a resource and maintains a set of configuration files in source control. Whenever there is an update, the configuration file is pushed to the server, and then kubectl <strong id="cce_10_0139__b8568153011477">apply</strong> applies the latest configuration to the resource. Kubernetes compares the new configuration file with the original one and updates only the changed fields instead of the whole file. Configuration that is not contained in the <strong id="cce_10_0139__b20182121210106">-f</strong> flag remains unchanged. Unlike the <strong id="cce_10_0139__b1230317108526">replace</strong> command, which deletes the resource and creates a new one, the <strong id="cce_10_0139__b133041210175212">apply</strong> command updates the original resource directly. Similar to a Git operation, the <strong id="cce_10_0139__b1586387101315">apply</strong> command adds an annotation to the resource to mark the current apply.</p>
<pre class="screen" id="cce_10_0139__screen2875112953610">kubectl apply -f</pre>
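<p>For example, to preview the changes and then apply a configuration file (a minimal sketch; <strong>nginx-deployment.yaml</strong> is a hypothetical file name):</p>
<pre class="screen">kubectl diff -f nginx-deployment.yaml
kubectl apply -f nginx-deployment.yaml</pre>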
<p id="cce_10_0139__p23488489456"><strong id="cce_10_0139__b1122801693111">patch</strong></p>
<p id="cce_10_0139__p1234884824518">If you want to modify attributes of a running container without first deleting it or using the <strong id="cce_10_0139__b116294617221">replace</strong> command, use the <strong id="cce_10_0139__b11555328102214">patch</strong> command. The <strong id="cce_10_0139__b9307161152315">patch</strong> command updates fields of a resource using a strategic merge patch, a JSON merge patch, or a JSON patch. For example, to change a pod label from <strong id="cce_10_0139__b1895214246257">app=nginx1</strong> to <strong id="cce_10_0139__b8964637172517">app=nginx2</strong> while the pod is running, use the following command:</p>
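<p>A minimal sketch of such a command, using a strategic merge patch (the pod name <strong>nginx-pod</strong> is a placeholder):</p>
<pre class="screen">kubectl patch pod nginx-pod -p '{"metadata":{"labels":{"app":"nginx2"}}}'</pre>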
@ -15,11 +15,11 @@
<p id="cce_10_0141__p1428853218123">Container:</p>
<pre class="screen" id="cce_10_0141__screen11900202601214">cd /usr/local/nvidia/bin && ./nvidia-smi</pre>
<p id="cce_10_0141__p7254950101912">If GPU information is returned, the device is available and the add-on is successfully installed.</p>
<p id="cce_10_0141__p78452015208"><span><img id="cce_10_0141__image5372171217135" src="en-us_image_0000001238225460.png"></span></p>
<p id="cce_10_0141__p78452015208"><span><img id="cce_10_0141__image5372171217135" src="en-us_image_0000001518062812.png"></span></p>
</div>
<div class="section" id="cce_10_0141__section95451728192112"><a name="cce_10_0141__section95451728192112"></a><a name="section95451728192112"></a><h4 class="sectiontitle">Obtaining the Driver Link from Public Network</h4><ol id="cce_10_0141__ol1138125974915"><li id="cce_10_0141__li19138125912498"><span>Log in to the CCE console.</span></li><li id="cce_10_0141__li111387599493"><span>Click <strong id="cce_10_0141__b7473141016405">Create Node</strong> and select the GPU node to be created in the <strong id="cce_10_0141__b13473161014016">Specifications</strong> area. The GPU card model of the node is displayed in the lower part of the page.</span></li></ol><ol start="3" id="cce_10_0141__ol195031456154814"><li id="cce_10_0141__li165032056184815"><span>Visit <em id="cce_10_0141__i2070996145418"><a href="https://www.nvidia.com/Download/Find.aspx?lang=en" target="_blank" rel="noopener noreferrer">https://www.nvidia.com/Download/Find.aspx?lang=en</a></em>.</span></li><li id="cce_10_0141__li16232124410505"><span>Select the driver information on the <span class="uicontrol" id="cce_10_0141__uicontrol1291212498541"><b>NVIDIA Driver Downloads</b></span> page, as shown in <a href="#cce_10_0141__fig11696366517">Figure 1</a>. <span class="uicontrol" id="cce_10_0141__uicontrol1650164444518"><b>Operating System</b></span> must be <strong id="cce_10_0141__b10981947121910">Linux 64-bit</strong>.</span><p><div class="fignone" id="cce_10_0141__fig11696366517"><a name="cce_10_0141__fig11696366517"></a><a name="fig11696366517"></a><span class="figcap"><b>Figure 1 </b>Setting parameters</span><br><span><img id="cce_10_0141__image18514163918398" src="en-us_image_0000001531533921.png"></span></div>
</p></li><li id="cce_10_0141__li1682301014493"><span>After confirming the driver information, click <span class="uicontrol" id="cce_10_0141__uicontrol1411775314551"><b>SEARCH</b></span>. A page is displayed, showing the driver information, as shown in <a href="#cce_10_0141__fig7873421145213">Figure 2</a>. Click <span class="uicontrol" id="cce_10_0141__uicontrol163131533185618"><b>DOWNLOAD</b></span>.</span><p><div class="fignone" id="cce_10_0141__fig7873421145213"><a name="cce_10_0141__fig7873421145213"></a><a name="fig7873421145213"></a><span class="figcap"><b>Figure 2 </b>Driver information</span><br><span><img id="cce_10_0141__image6928629163818" src="en-us_image_0000001531373685.png"></span></div>
</p></li><li id="cce_10_0141__li624514474513"><span>Obtain the driver link in either of the following ways:</span><p><ul id="cce_10_0141__ul18225815213"><li id="cce_10_0141__li68351817115313">Method 1: As shown in <a href="#cce_10_0141__fig5901194614534">Figure 3</a>, find <em id="cce_10_0141__i964302410817">url=/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run</em> in the browser address box. Then, supplement it to obtain the driver link <a href="https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run" target="_blank" rel="noopener noreferrer">https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run</a>. By using this method, you must bind an EIP to each GPU node.</li><li id="cce_10_0141__li193423205231">Method 2: As shown in <a href="#cce_10_0141__fig5901194614534">Figure 3</a>, click <span class="uicontrol" id="cce_10_0141__uicontrol1435254915542"><b>AGREE & DOWNLOAD</b></span> to download the driver. Then, upload the driver to OBS and record the OBS URL. By using this method, you do not need to bind an EIP to GPU nodes.<div class="fignone" id="cce_10_0141__fig5901194614534"><a name="cce_10_0141__fig5901194614534"></a><a name="fig5901194614534"></a><span class="figcap"><b>Figure 3 </b>Obtaining the link</span><br><span><img id="cce_10_0141__image293362819366" src="en-us_image_0000001531533045.png"></span></div>
<div class="section" id="cce_10_0141__section95451728192112"><a name="cce_10_0141__section95451728192112"></a><a name="section95451728192112"></a><h4 class="sectiontitle">Obtaining the Driver Link from Public Network</h4><ol id="cce_10_0141__ol1138125974915"><li id="cce_10_0141__li19138125912498"><span>Log in to the CCE console.</span></li><li id="cce_10_0141__li111387599493"><span>Click <strong id="cce_10_0141__b7473141016405">Create Node</strong> and select the GPU node to be created in the <strong id="cce_10_0141__b13473161014016">Specifications</strong> area. The GPU card model of the node is displayed in the lower part of the page.</span></li></ol><ol start="3" id="cce_10_0141__ol195031456154814"><li id="cce_10_0141__li165032056184815"><span>Visit <em id="cce_10_0141__i2070996145418"><a href="https://www.nvidia.com/Download/Find.aspx?lang=en" target="_blank" rel="noopener noreferrer">https://www.nvidia.com/Download/Find.aspx?lang=en</a></em>.</span></li><li id="cce_10_0141__li16232124410505"><span>Select the driver information on the <span class="uicontrol" id="cce_10_0141__uicontrol1291212498541"><b>NVIDIA Driver Downloads</b></span> page, as shown in <a href="#cce_10_0141__fig11696366517">Figure 1</a>. <span class="uicontrol" id="cce_10_0141__uicontrol1650164444518"><b>Operating System</b></span> must be <strong id="cce_10_0141__b10981947121910">Linux 64-bit</strong>.</span><p><div class="fignone" id="cce_10_0141__fig11696366517"><a name="cce_10_0141__fig11696366517"></a><a name="fig11696366517"></a><span class="figcap"><b>Figure 1 </b>Setting parameters</span><br><span><img id="cce_10_0141__image18514163918398" src="en-us_image_0000001518062808.png"></span></div>
</p></li><li id="cce_10_0141__li1682301014493"><span>After confirming the driver information, click <span class="uicontrol" id="cce_10_0141__uicontrol1411775314551"><b>SEARCH</b></span>. A page is displayed, showing the driver information, as shown in <a href="#cce_10_0141__fig7873421145213">Figure 2</a>. Click <span class="uicontrol" id="cce_10_0141__uicontrol163131533185618"><b>DOWNLOAD</b></span>.</span><p><div class="fignone" id="cce_10_0141__fig7873421145213"><a name="cce_10_0141__fig7873421145213"></a><a name="fig7873421145213"></a><span class="figcap"><b>Figure 2 </b>Driver information</span><br><span><img id="cce_10_0141__image6928629163818" src="en-us_image_0000001517743660.png"></span></div>
</p></li><li id="cce_10_0141__li624514474513"><span>Obtain the driver link in either of the following ways:</span><p><ul id="cce_10_0141__ul18225815213"><li id="cce_10_0141__li68351817115313">Method 1: As shown in <a href="#cce_10_0141__fig5901194614534">Figure 3</a>, find <em id="cce_10_0141__i964302410817">url=/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run</em> in the browser address box. Then, supplement it to obtain the driver link <a href="https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run" target="_blank" rel="noopener noreferrer">https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run</a>. By using this method, you must bind an EIP to each GPU node.</li><li id="cce_10_0141__li193423205231">Method 2: As shown in <a href="#cce_10_0141__fig5901194614534">Figure 3</a>, click <span class="uicontrol" id="cce_10_0141__uicontrol1435254915542"><b>AGREE & DOWNLOAD</b></span> to download the driver. Then, upload the driver to OBS and record the OBS URL. By using this method, you do not need to bind an EIP to GPU nodes.<div class="fignone" id="cce_10_0141__fig5901194614534"><a name="cce_10_0141__fig5901194614534"></a><a name="fig5901194614534"></a><span class="figcap"><b>Figure 3 </b>Obtaining the link</span><br><span><img id="cce_10_0141__image293362819366" src="en-us_image_0000001517903240.png"></span></div>
</li></ul>
</p></li></ol>
</div>
@ -2,7 +2,7 @@
<h1 class="topictitle1">NodePort</h1>
<div id="body1553224785332"><div class="section" id="cce_10_0142__section13654155944916"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0142__p028915126124">A Service is exposed on each node's IP address at a static port (NodePort). A ClusterIP Service, to which the NodePort Service will route, is automatically created. By requesting <NodeIP>:<NodePort>, you can access a NodePort Service from outside the cluster.</p>
<div class="fignone" id="cce_10_0142__fig6819133414131"><span class="figcap"><b>Figure 1 </b>NodePort access</span><br><span><img id="cce_10_0142__image10510139711" src="en-us_image_0000001199501230.png"></span></div>
<div class="fignone" id="cce_10_0142__fig6819133414131"><span class="figcap"><b>Figure 1 </b>NodePort access</span><br><span><img id="cce_10_0142__image10510139711" src="en-us_image_0000001517743380.png"></span></div>
</div>
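<p>A minimal sketch of a NodePort Service manifest (the Service name, selector, and port values are illustrative only):</p>
<pre class="screen">apiVersion: v1
kind: Service
metadata:
  name: nodeport-service      # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: nginx                # routes traffic to pods labeled app=nginx
  ports:
  - port: 8080                # port of the automatically created ClusterIP Service
    targetPort: 80            # container port that receives the traffic
    nodePort: 30120           # static port opened on each node (default range: 30000-32767)</pre>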
<div class="section" id="cce_10_0142__section8501151104219"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_10_0142__ul1685519569431"><li id="cce_10_0142__li1585575616436">By default, a NodePort Service is accessed within a VPC. If you need to use an EIP to access a NodePort Service through public networks, bind an EIP to the node in the cluster in advance.</li><li id="cce_10_0142__li128551156114310">After a Service is created, if the affinity setting is switched from the cluster level to the node level, the connection tracing table will not be cleared. You are advised not to modify the Service affinity setting after the Service is created. If you need to modify it, create a Service again.</li><li id="cce_10_0142__li62831358182017">CCE Turbo clusters support only cluster-level service affinity.</li><li id="cce_10_0142__li217783916207">In VPC network mode, when container A is published through a NodePort service and the service affinity is set to the node level (that is, <strong id="cce_10_0142__b6606348122314">externalTrafficPolicy</strong> is set to <strong id="cce_10_0142__b1893585192319">local</strong>), container B deployed on the same node cannot access container A through the node IP address and NodePort service.</li><li id="cce_10_0142__li14613571073">When a NodePort service is created in a cluster of v1.21.7 or later, the port on the node is not displayed using <strong id="cce_10_0142__b9614551172511">netstat</strong> by default. If the cluster forwarding mode is <strong id="cce_10_0142__b1668514182617">iptables</strong>, run the <strong id="cce_10_0142__b17716161211103">iptables -t nat -L</strong> command to view the port. If the cluster forwarding mode is <strong id="cce_10_0142__b1037289122614">ipvs</strong>, run the <strong id="cce_10_0142__b23917223106">ipvsadm -nL</strong> command to view the port.</li></ul>
</div>
@ -33,7 +33,7 @@
</td>
<td class="cellrowborder" valign="top" width="78%" headers="mcps1.3.3.3.2.4.2.2.3.1.2 "><p id="cce_10_0146__p1678472115013">Describes configuration parameters required by templates.</p>
<div class="notice" id="cce_10_0146__note11415171194911"><span class="noticetitle"> NOTICE: </span><div class="noticebody"><p id="cce_10_0146__p394216481648">Make sure that the image address set in the <strong id="cce_10_0146__b169837156417">values.yaml</strong> file is the same as the image address in the container image repository. Otherwise, an exception occurs when you create a workload, and the system displays a message indicating that the image fails to be pulled.</p>
<p id="cce_10_0146__p04177113498">To obtain the image address, perform the following operations: Log in to the CCE console. In the navigation pane, choose <strong id="cce_10_0146__b860412174116">Image Repository</strong> to access the SWR console. Choose <strong id="cce_10_0146__b10171926114117">My Images</strong> > <strong id="cce_10_0146__b12372684119">Private Images</strong> and click the name of the uploaded image. On the <strong id="cce_10_0146__b223726104111">Image Tags</strong> tab page, obtain the image address from the pull command. You can click <span><img id="cce_10_0146__image292113414153" src="en-us_image_0000001206959574.png"></span> to copy the command in the <strong id="cce_10_0146__b723192619418">Image Pull Command</strong> column.</p>
<p id="cce_10_0146__p04177113498">To obtain the image address, perform the following operations: Log in to the CCE console. In the navigation pane, choose <strong id="cce_10_0146__b860412174116">Image Repository</strong> to access the SWR console. Choose <strong id="cce_10_0146__b10171926114117">My Images</strong> > <strong id="cce_10_0146__b12372684119">Private Images</strong> and click the name of the uploaded image. On the <strong id="cce_10_0146__b223726104111">Image Tags</strong> tab page, obtain the image address from the pull command. You can click <span><img id="cce_10_0146__image292113414153" src="en-us_image_0000001517743456.png"></span> to copy the command in the <strong id="cce_10_0146__b723192619418">Image Pull Command</strong> column.</p>
</div></div>
</td>
</tr>
@ -8,7 +8,7 @@
</div>
<div class="section" id="cce_10_0150__s50bf087555b1437aa249c1259138706c"><h4 class="sectiontitle">Prerequisites</h4><p id="cce_10_0150__p1695632510556">Resources have been created. For details, see <a href="cce_10_0363.html">Creating a Node</a>. If clusters and nodes are available, you need not create them again.</p>
</div>
<div class="section" id="cce_10_0150__sb8a02965b2624dbbabab320046ca4973"><h4 class="sectiontitle">Using the CCE Console</h4><ol id="cce_10_0150__ol2012902601117"><li id="cce_10_0150__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0150__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0150__b94442390613">Workloads</strong> in the navigation pane, and click the <strong id="cce_10_0150__b1844413910614">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0150__li67891737151520"><span>Set basic information about the workload. </span><p><div class="p" id="cce_10_0150__p1259466151612"><strong id="cce_10_0150__b784105815422">Basic Info</strong><ul id="cce_10_0150__ul6954101318184"><li id="cce_10_0150__li11514131617185"><strong id="cce_10_0150__b188592071436">Workload Type</strong>: Select <strong id="cce_10_0150__b117010384313">Job</strong>. For details about workload types, see <a href="cce_10_0006.html">Overview</a>.</li><li id="cce_10_0150__li129541213101814"><strong id="cce_10_0150__b92691311154315">Workload Name</strong>: Enter the name of the workload.</li><li id="cce_10_0150__li179541813111814"><strong id="cce_10_0150__b187251219436">Namespace</strong>: Select the namespace of the workload. The default value is <strong id="cce_10_0150__b98729126437">default</strong>. You can also click <span class="uicontrol" id="cce_10_0150__uicontrol7512314164311"><b>Create Namespace</b></span> to create one. For details, see <a href="cce_10_0278.html">Creating a Namespace</a>.</li><li id="cce_10_0150__li18955181315189"><strong id="cce_10_0150__b1692581513436">Pods</strong>: Enter the number of pods.</li><li id="cce_10_0150__li11753142112539"><strong id="cce_10_0150__b52461818134319">Container Runtime</strong>: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences between runC and Kata, see <a href="cce_10_0463.html">Kata Containers and Common Containers</a>.</li></ul>
<div class="section" id="cce_10_0150__sb8a02965b2624dbbabab320046ca4973"><h4 class="sectiontitle">Using the CCE Console</h4><ol id="cce_10_0150__ol2012902601117"><li id="cce_10_0150__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0150__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0150__b94442390613">Workloads</strong> in the navigation pane, and click <strong id="cce_10_0150__b1844413910614">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0150__li67891737151520"><span>Set basic information about the workload. </span><p><div class="p" id="cce_10_0150__p1259466151612"><strong id="cce_10_0150__b784105815422">Basic Info</strong><ul id="cce_10_0150__ul6954101318184"><li id="cce_10_0150__li11514131617185"><strong id="cce_10_0150__b188592071436">Workload Type</strong>: Select <strong id="cce_10_0150__b117010384313">Job</strong>. For details about workload types, see <a href="cce_10_0006.html">Overview</a>.</li><li id="cce_10_0150__li129541213101814"><strong id="cce_10_0150__b92691311154315">Workload Name</strong>: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed.</li><li id="cce_10_0150__li179541813111814"><strong id="cce_10_0150__b187251219436">Namespace</strong>: Select the namespace of the workload. The default value is <strong id="cce_10_0150__b98729126437">default</strong>. You can also click <span class="uicontrol" id="cce_10_0150__uicontrol7512314164311"><b>Create Namespace</b></span> to create one. For details, see <a href="cce_10_0278.html">Creating a Namespace</a>.</li><li id="cce_10_0150__li18955181315189"><strong id="cce_10_0150__b1692581513436">Pods</strong>: Enter the number of pods.</li><li id="cce_10_0150__li11753142112539"><strong id="cce_10_0150__b52461818134319">Container Runtime</strong>: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences between runC and Kata, see <a href="cce_10_0463.html">Kata Containers and Common Containers</a>.</li></ul>
</div>
<div class="p" id="cce_10_0150__p206571518181616"><strong id="cce_10_0150__b855213715437">Container Settings</strong><ul id="cce_10_0150__ul42071022103320"><li id="cce_10_0150__li8770480458">Container Information<div class="p" id="cce_10_0150__p10493941854"><a name="cce_10_0150__li8770480458"></a><a name="li8770480458"></a>Multiple containers can be configured in a pod. You can click <span class="uicontrol" id="cce_10_0150__uicontrol1760133894311"><b>Add Container</b></span> on the right to configure multiple containers for the pod.<ul id="cce_10_0150__ul3107134161216"><li id="cce_10_0150__li310624131220"><strong id="cce_10_0150__b16677312174910">Basic Info</strong>: See <a href="cce_10_0396.html">Setting Basic Container Information</a>.</li><li id="cce_10_0150__li51065491219"><strong id="cce_10_0150__b214013152497">Lifecycle</strong>: See <a href="cce_10_0105.html">Setting Container Lifecycle Parameters</a>.</li><li id="cce_10_0150__li9107648124"><strong id="cce_10_0150__b145617174914">Environment Variables</strong>: See <a href="cce_10_0113.html">Setting an Environment Variable</a>.</li><li id="cce_10_0150__li16107114201215"><strong id="cce_10_0150__b298814182495">Data Storage</strong>: See <a href="cce_10_0307.html">Overview</a>.<div class="note" id="cce_10_0150__note101269342356"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0150__p17126153413513">If the workload contains more than one pod, EVS volumes cannot be mounted.</p>
</div></div>
@ -9,7 +9,7 @@
</div>
<div class="section" id="cce_10_0151__s50bf087555b1437aa249c1259138706c"><h4 class="sectiontitle">Prerequisites</h4><p id="cce_10_0151__p777418357559">Resources have been created. For details, see <a href="cce_10_0363.html">Creating a Node</a>.</p>
</div>
<div class="section" id="cce_10_0151__section345135735520"><h4 class="sectiontitle">Using the CCE Console</h4><ol id="cce_10_0151__ol2012902601117"><li id="cce_10_0151__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0151__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0151__b1885417579613">Workloads</strong> in the navigation pane, and click the <strong id="cce_10_0151__b1685418571868">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0151__li67891737151520"><span>Set basic information about the workload.</span><p><div class="p" id="cce_10_0151__p1259466151612"><strong id="cce_10_0151__b12586554174511">Basic Info</strong><ul id="cce_10_0151__ul6954101318184"><li id="cce_10_0151__li11514131617185"><strong id="cce_10_0151__b011355684516">Workload Type</strong>: Select <strong id="cce_10_0151__b8113135614512">Cron Job</strong>. For details about workload types, see <a href="cce_10_0006.html">Overview</a>.</li><li id="cce_10_0151__li129541213101814"><strong id="cce_10_0151__b144732064465">Workload Name</strong>: Enter the name of the workload.</li><li id="cce_10_0151__li179541813111814"><strong id="cce_10_0151__b13284275464">Namespace</strong>: Select the namespace of the workload. The default value is <strong id="cce_10_0151__b228411710464">default</strong>. You can also click <span class="uicontrol" id="cce_10_0151__uicontrol2139109154616"><b>Create Namespace</b></span> to create one. For details, see <a href="cce_10_0278.html">Creating a Namespace</a>.</li><li id="cce_10_0151__li11753142112539"><strong id="cce_10_0151__b5332810174614">Container Runtime</strong>: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences between runC and Kata, see <a href="cce_10_0463.html">Kata Containers and Common Containers</a>.</li></ul>
<div class="section" id="cce_10_0151__section345135735520"><h4 class="sectiontitle">Using the CCE Console</h4><ol id="cce_10_0151__ol2012902601117"><li id="cce_10_0151__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0151__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0151__b1885417579613">Workloads</strong> in the navigation pane, and click <strong id="cce_10_0151__b1685418571868">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0151__li67891737151520"><span>Set basic information about the workload.</span><p><div class="p" id="cce_10_0151__p1259466151612"><strong id="cce_10_0151__b12586554174511">Basic Info</strong><ul id="cce_10_0151__ul6954101318184"><li id="cce_10_0151__li11514131617185"><strong id="cce_10_0151__b011355684516">Workload Type</strong>: Select <strong id="cce_10_0151__b8113135614512">Cron Job</strong>. For details about workload types, see <a href="cce_10_0006.html">Overview</a>.</li><li id="cce_10_0151__li129541213101814"><strong id="cce_10_0151__b144732064465">Workload Name</strong>: Enter the name of the workload. Enter 1 to 52 characters starting with a lowercase letter and ending with a letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed.</li><li id="cce_10_0151__li179541813111814"><strong id="cce_10_0151__b13284275464">Namespace</strong>: Select the namespace of the workload. The default value is <strong id="cce_10_0151__b228411710464">default</strong>. You can also click <span class="uicontrol" id="cce_10_0151__uicontrol2139109154616"><b>Create Namespace</b></span> to create one. For details, see <a href="cce_10_0278.html">Creating a Namespace</a>.</li><li id="cce_10_0151__li11753142112539"><strong id="cce_10_0151__b5332810174614">Container Runtime</strong>: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences between runC and Kata, see <a href="cce_10_0463.html">Kata Containers and Common Containers</a>.</li></ul>
</div>
<div class="p" id="cce_10_0151__p206571518181616"><strong id="cce_10_0151__b101031654614">Container Settings</strong><ul id="cce_10_0151__ul42071022103320"><li id="cce_10_0151__li8770480458">Container Information<div class="p" id="cce_10_0151__p10493941854"><a name="cce_10_0151__li8770480458"></a><a name="li8770480458"></a>Multiple containers can be configured in a pod. You can click <span class="uicontrol" id="cce_10_0151__uicontrol127468176466"><b>Add Container</b></span> on the right to configure multiple containers for the pod.<ul id="cce_10_0151__ul3659182116126"><li id="cce_10_0151__li106592216120"><strong id="cce_10_0151__b20340145912491">Basic Info</strong>: See <a href="cce_10_0396.html">Setting Basic Container Information</a>.</li><li id="cce_10_0151__li1865942191210"><strong id="cce_10_0151__b198848416507">Lifecycle</strong>: See <a href="cce_10_0105.html">Setting Container Lifecycle Parameters</a>.</li><li id="cce_10_0151__li365922119124"><strong id="cce_10_0151__b1468148185018">Environment Variables</strong>: See <a href="cce_10_0113.html">Setting an Environment Variable</a>.</li></ul>
</div>
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
@ -34,6 +34,11 @@
<ul id="cce_10_0154__ul1634319358233"><li id="cce_10_0154__li2343135152319"><strong id="cce_10_0154__b15151353408">Single</strong>: The add-on is deployed with only one pod.</li><li id="cce_10_0154__li33431235192311"><strong id="cce_10_0154__b4574123141513">HA50</strong>: The add-on is deployed with two pods, serving a cluster with 50 nodes and ensuring high availability.</li><li id="cce_10_0154__li482912427514"><strong id="cce_10_0154__b1411151181815">HA200</strong>: The add-on is deployed with two pods, serving a cluster with 200 nodes and ensuring high availability. Each pod uses more resources than those of the <strong id="cce_10_0154__b2191221161916">HA50</strong> specification.</li><li id="cce_10_0154__li7281273523"><strong id="cce_10_0154__b42561826161918">Custom</strong>: You can customize the number of pods and specifications as required.</li></ul>
</td>
</tr>
<tr id="cce_10_0154__row881114426378"><td class="cellrowborder" valign="top" width="22.89%" headers="mcps1.3.4.2.2.2.1.2.3.1.1 "><p id="cce_10_0154__p15864134742615">Multi AZ</p>
</td>
<td class="cellrowborder" valign="top" width="77.11%" headers="mcps1.3.4.2.2.2.1.2.3.1.2 "><ul id="cce_10_0154__ul4214181752714"><li id="cce_10_0154__en-us_topic_0000001199341168_li5214161718270"><strong id="cce_10_0154__b4856778946027">Preferred</strong>: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ.</li><li id="cce_10_0154__en-us_topic_0000001199341168_li4214917142716"><strong id="cce_10_0154__b3983534296113">Required</strong>: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, not all pods can run.</li></ul>
</td>
</tr>
</tbody>
</table>
</div>
@ -16,7 +16,7 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0465.html">Pod Security</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0477_0.html">Service Account Token Security Improvement</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0477.html">Service Account Token Security Improvement</a></strong><br>
</li>
</ul>
</div>
@ -3,7 +3,7 @@
<h1 class="topictitle1">Obtaining a Cluster Certificate</h1>
<div id="body1556615866530"><div class="section" id="cce_10_0175__section160213214302"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0175__p1840111417517">This section describes how to obtain the cluster certificate from the console and use it to access Kubernetes clusters.</p>
</div>
<div class="section" id="cce_10_0175__section1590914113306"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0175__ol898314521505"><li id="cce_10_0175__li4829928181812"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0175__li179831852301"><span>Choose <strong id="cce_10_0175__b1998785317534">Cluster Information</strong> from the navigation pane and click <strong id="cce_10_0175__b18988553105313">Download</strong> next to <strong id="cce_10_0175__b1398875317539">Authentication Mode</strong> in the <strong id="cce_10_0175__b169881453125311">Connection Information</strong> area.</span></li><li id="cce_10_0175__li1979910715109"><span>In the <span class="uicontrol" id="cce_10_0175__uicontrol13516511412"><b>Download X.509 Certificate</b></span> dialog box displayed, select the certificate expiration time and download the X.509 certificate of the cluster as prompted.</span><p><div class="fignone" id="cce_10_0175__fig873583013712"><span class="figcap"><b>Figure 1 </b>Downloading a certificate</span><br><span><img id="cce_10_0175__image282017212710" src="en-us_image_0000001199181228.png"></span></div>
<div class="section" id="cce_10_0175__section1590914113306"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0175__ol898314521505"><li id="cce_10_0175__li4829928181812"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0175__li179831852301"><span>Choose <strong id="cce_10_0175__b1998785317534">Cluster Information</strong> from the navigation pane and click <strong id="cce_10_0175__b18988553105313">Download</strong> next to <strong id="cce_10_0175__b1398875317539">Authentication Mode</strong> in the <strong id="cce_10_0175__b169881453125311">Connection Information</strong> area.</span></li><li id="cce_10_0175__li1979910715109"><span>In the <span class="uicontrol" id="cce_10_0175__uicontrol13516511412"><b>Download X.509 Certificate</b></span> dialog box displayed, select the certificate expiration time and download the X.509 certificate of the cluster as prompted.</span><p><div class="fignone" id="cce_10_0175__fig873583013712"><span class="figcap"><b>Figure 1 </b>Downloading a certificate</span><br><span><img id="cce_10_0175__image282017212710" src="en-us_image_0000001568822637.png"></span></div>
<div class="notice" id="cce_10_0175__note21816913343"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_10_0175__ul45041635102414"><li id="cce_10_0175__li050403542411">The downloaded certificate contains three files: <strong id="cce_10_0175__b1790092752911">client.key</strong>, <strong id="cce_10_0175__b990002710298">client.crt</strong>, and <strong id="cce_10_0175__b690015272292">ca.crt</strong>. Keep these files secure.</li><li id="cce_10_0175__li150414359248">Certificates are not required for mutual access between containers in a cluster.</li></ul>
</div></div>
</p></li></ol>
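<p>For example, you can use the downloaded files with kubectl to access the cluster API server. This is a minimal sketch; replace <cluster-address> and <port> with the values from the cluster's connection information:</p>
<pre class="screen">kubectl get nodes --server=https://<cluster-address>:<port> --certificate-authority=ca.crt --client-certificate=client.crt --client-key=client.key</pre>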
@ -1,14 +1,14 @@
<a name="cce_10_0178"></a><a name="cce_10_0178"></a>
<h1 class="topictitle1">Formula for Calculating the Reserved Resources of a Node</h1>
<div id="body1524713564673"><p id="cce_10_0178__p591031155413">Some of the resources on the <span class="keyword" id="cce_10_0178__span2167926204817">node</span> need to run some necessary <span class="keyword" id="cce_10_0178__span416772624814">Kubernetes</span> system components and resources to make the node as part of your cluster. Therefore, the total number of node resources and the number of assignable node resources in Kubernetes are different. The larger the node specifications, the more the containers deployed on the node. Therefore, Kubernetes needs to reserve more resources.</p>
<div id="body1524713564673"><p id="cce_10_0178__p591031155413">Some resources on the <span class="keyword" id="cce_10_0178__span2167926204817">node</span> are reserved to run the <span class="keyword" id="cce_10_0178__span416772624814">Kubernetes</span> system components that make the node part of your cluster. Therefore, the total amount of node resources differs from the amount of allocatable resources in Kubernetes. The larger the node specifications, the more containers can be deployed on the node, so more node resources need to be reserved to run Kubernetes components.</p>
<p id="cce_10_0178__p9278143105716">To ensure node stability, a certain amount of CCE node resources will be reserved for Kubernetes components (such as kubelet, kube-proxy, and docker) based on the node specifications.</p>
<p id="cce_10_0178__p169111185415">CCE calculates the resources that can be allocated to user nodes as follows:</p>
<p id="cce_10_0178__p15911151155419"><strong id="cce_10_0178__b1713634541010">Allocatable resources = Total amount - Reserved amount - Eviction threshold</strong></p>
<p id="cce_10_0178__p16231944205019"> The memory eviction threshold is fixed at 100 MB.</p>
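<p>For example, on a node with 8192 MiB of memory in total and a hypothetical reserved amount of 1024 MiB, the allocatable memory would be 8192 - 1024 - 100 = 7068 MiB.</p>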
<p id="cce_10_0178__p20379184215543">When the memory consumed by all pods on a node increases, the following behaviors may occur:</p>
<ol id="cce_10_0178__ol17831358175420"><li id="cce_10_0178__li20783658135414">If the memory is greater than or equal to the allocatable amount on the node, kubelet is triggered to evict pods.</li><li id="cce_10_0178__li167841588547">When the memory approaches the allocatable amount and eviction threshold (total minus reserved), OS OOM is triggered.</li></ol>
<div class="section" id="cce_10_0178__section16856143934620"><h4 class="sectiontitle">Rules for Reserving Node Memory </h4><p id="cce_10_0178__p671312452573">You can use the following formula calculate how much memory you should reserve for running containers on a node:</p>
<ol id="cce_10_0178__ol17831358175420"><li id="cce_10_0178__li20783658135414">When the available memory on a node is lower than the eviction threshold, kubelet is triggered to evict pods. For details about the Kubernetes eviction threshold, see <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction" target="_blank" rel="noopener noreferrer">Node-pressure Eviction</a>.</li><li id="cce_10_0178__li167841588547">If a node triggers an OS Out-Of-Memory (OOM) event before kubelet reclaims memory, the system terminates the container. However, kubelet does not evict the pod; it restarts the container based on the pod's RestartPolicy.</li></ol>
<div class="section" id="cce_10_0178__section16856143934620"><h4 class="sectiontitle">Rules for Reserving Node Memory</h4><p id="cce_10_0178__p671312452573">You can use the following formula to calculate how much memory you should reserve for running containers on a node:</p>
<p id="cce_10_0178__p1695774420316">Total reserved amount = Reserved memory for system components + Reserved memory for kubelet to manage pods</p>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0178__table19962121035915" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Reservation rules for system components</caption><thead align="left"><tr id="cce_10_0178__row15963910105914"><th align="left" class="cellrowborder" valign="top" width="34.61%" id="mcps1.3.8.4.2.3.1.1"><p id="cce_10_0178__p1696331012593">Total Memory (TM)</p>
@ -7,7 +7,7 @@
<div class="section" id="cce_10_0184__section299918342346"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_10_0184__ul121015107312"><li id="cce_10_0184__li3101810193119">Data, including the VM status, ECS names, number of CPUs, size of memory, ECS specifications, and public IP addresses, can be synchronized.<p id="cce_10_0184__p2901244183112"><a name="cce_10_0184__li3101810193119"></a><a name="li3101810193119"></a>If an ECS name is specified as the Kubernetes node name, the change of the ECS name cannot be synchronized to the CCE console.</p>
</li><li id="cce_10_0184__li8102110103118">Data, such as the OS and image ID, cannot be synchronized. (Such parameters cannot be modified on the ECS console.)</li></ul>
</div>
<div class="section" id="cce_10_0184__section2076543461216"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0184__ol1882502762811"><li id="cce_10_0184__li849281818811"><span>Log in to the CCE console.</span></li><li id="cce_10_0184__li13907715194011"><span>Click the cluster name to access the cluster console. Choose <span class="uicontrol" id="cce_10_0184__uicontrol1346492517398"><b>Nodes</b></span> in the navigation pane.</span></li><li id="cce_10_0184__li1382582719286"><span>Choose <strong id="cce_10_0184__b19930131353810">More</strong> > <strong id="cce_10_0184__b398116376321">Sync Server Data</strong> next to the node.</span><p><div class="fignone" id="cce_10_0184__fig983933294015"><span class="figcap"><b>Figure 1 </b>Synchronizing server data</span><br><span><img id="cce_10_0184__image3503275220" src="en-us_image_0000001243981203.png"></span></div>
<div class="section" id="cce_10_0184__section2076543461216"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0184__ol1882502762811"><li id="cce_10_0184__li849281818811"><span>Log in to the CCE console.</span></li><li id="cce_10_0184__li13907715194011"><span>Click the cluster name to access the cluster console. Choose <span class="uicontrol" id="cce_10_0184__uicontrol1346492517398"><b>Nodes</b></span> in the navigation pane.</span></li><li id="cce_10_0184__li1382582719286"><span>Choose <strong id="cce_10_0184__b19930131353810">More</strong> > <strong id="cce_10_0184__b398116376321">Sync Server Data</strong> next to the node.</span><p><div class="fignone" id="cce_10_0184__fig983933294015"><span class="figcap"><b>Figure 1 </b>Synchronizing server data</span><br><span><img id="cce_10_0184__image3503275220" src="en-us_image_0000001517743520.png"></span></div>
<p id="cce_10_0184__p17635154314012">After the synchronization is complete, the <strong id="cce_10_0184__b64827914309">ECS data synchronization requested</strong> message is displayed in the upper right corner.</p>
</p></li></ol>
</div>
@ -3,9 +3,7 @@
<h1 class="topictitle1">Deleting a Node</h1>
<div id="body1559203372010"><div class="section" id="cce_10_0186__section748912450371"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0186__p19987354135915">When a node in a CCE cluster is deleted, services running on the node will also be deleted. Exercise caution when performing this operation.</p>
</div>
<div class="section" id="cce_10_0186__section1999130951"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_10_0186__ul1687437184518"><li id="cce_10_0186__li450618444619">After a CCE cluster is deleted, the ECS nodes in the cluster are also deleted.</li><li id="cce_10_0186__li2755131805912"><div class="notice" id="cce_10_0186__note151271315132912"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_10_0186__p1614541592911">For clusters of v1.17.11 or later, after a VM is deleted on the ECS console, the corresponding node in the CCE cluster is automatically deleted.</p>
</div></div>
</li></ul>
<div class="section" id="cce_10_0186__section1999130951"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_10_0186__ul1687437184518"><li id="cce_10_0186__li1927258101220">VM nodes that are being used by CCE cannot be deleted on the ECS console.</li></ul>
</div>
<div class="section" id="cce_10_0186__section83421713122615"><h4 class="sectiontitle">Precautions</h4><ul id="cce_10_0186__ul189321612123615"><li id="cce_10_0186__li159325122367">Deleting a node will lead to pod migration, which may affect services. Perform this operation during off-peak hours.</li><li id="cce_10_0186__li29322128366">Unexpected risks may occur during the operation. Back up related data in advance.</li><li id="cce_10_0186__li893261218365">During the operation, the backend will set the node to the unschedulable state.</li><li id="cce_10_0186__li139331412133615">Only worker nodes can be deleted.</li></ul>
</div>
Some files were not shown because too many files have changed in this diff