forked from docs/doc-exports
CCE UMN 20230213 version for new console
Reviewed-by: Eotvos, Oliver <oliver.eotvos@t-systems.com> Co-authored-by: Dong, Qiu Jian <qiujiandong1@huawei.com> Co-committed-by: Dong, Qiu Jian <qiujiandong1@huawei.com>
This commit is contained in: parent 5a1135747c, commit e00cefc755
<a name="cce_01_0003"></a><a name="cce_01_0003"></a>
<h1 class="topictitle1">Resetting a Node</h1>
<div id="body1522736306717"><div class="section" id="cce_01_0003__section87051629113714"><h4 class="sectiontitle">Scenario</h4><p id="cce_01_0003__p13461109175017">You can reset a node to modify the node configuration, such as the node OS and login mode.</p>
<p id="cce_01_0003__p341155285120">Resetting a node will reinstall the node OS and the Kubernetes software on the node. If a node is unavailable because you modify the node configuration, you can reset the node to rectify the fault.</p>
</div>
<div class="section" id="cce_01_0003__section0339185914138"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_01_0003__ul975585510397"><li id="cce_01_0003__li15755125513910">The cluster version must be v1.13 or later.</li></ul>
</div>
<div class="section" id="cce_01_0003__section83421713122615"><h4 class="sectiontitle">Notes</h4><ul id="cce_01_0003__ul189321612123615"><li id="cce_01_0003__li139331412133615">Only worker nodes can be reset. If a node is still unavailable after being reset, delete it and create a new one.</li><li id="cce_01_0003__li133748101461"><strong id="cce_01_0003__b1724353045920">Resetting a node will reinstall the node OS and interrupt workload services running on the node. Therefore, perform this operation during off-peak hours.</strong></li><li id="cce_01_0003__li11336171744612"><strong id="cce_01_0003__b1654931913492">Data in the system disk and Docker data disks will be cleared. Back up important data before resetting the node.</strong></li><li id="cce_01_0003__li159325122367"><strong id="cce_01_0003__b18976436631">When an extra data disk is mounted to a node, data in this disk will be cleared if the disk has not been unmounted before the node reset. To prevent data loss, back up data in advance and mount the data disk again after the node reset is complete.</strong></li><li id="cce_01_0003__li18904821103817">The IP addresses of the workload pods on the node will change, but container network access is not affected.</li><li id="cce_01_0003__li33901348389">Ensure that there is sufficient remaining EVS disk quota.</li><li id="cce_01_0003__li893261218365">While the node is being reset, the backend sets it to the unschedulable state.</li></ul>
</div>
<div class="section" id="cce_01_0003__section144215001311"><h4 class="sectiontitle">Procedure</h4><ol id="cce_01_0003__ol856117271009"><li id="cce_01_0003__li1856111271603"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0003__b2784174517571">Resource Management</strong> > <strong id="cce_01_0003__b11790194510573">Nodes</strong>. In the same row as the node you will reset, choose <strong id="cce_01_0003__b1979011451573">More</strong> > <strong id="cce_01_0003__b679034518574">Reset</strong>.</span></li><li id="cce_01_0003__li18203142544612"><span>In the dialog box displayed, enter <strong id="cce_01_0003__b14488101611562">RESET</strong> and reconfigure the key pair for login.</span><p><div class="fignone" id="cce_01_0003__fig10143855684"><span class="figcap"><b>Figure 1 </b>Resetting the selected node</span><br><span><img id="cce_01_0003__image119110714912" src="en-us_image_0000001190302085.png"></span></div>
</p></li><li id="cce_01_0003__li6145420265"><span>Click <span class="uicontrol" id="cce_01_0003__uicontrol102341912773"><b>Yes</b></span> and wait until the node is reset.</span><p><p id="cce_01_0003__p1557151272914">After the node is reset, pods on it are automatically migrated to other available nodes. </p>
</p></li></ol>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0183.html">Nodes</a></div>
</div>
</div>
<a name="cce_01_0004"></a><a name="cce_01_0004"></a>
<h1 class="topictitle1">Managing Node Labels</h1>
<div id="body1523168310157"><div class="section" id="cce_01_0004__section825504204814"><h4 class="sectiontitle">Node Label Usage Scenario</h4><p id="cce_01_0004__p780125519482">Node labels are mainly used in the following scenarios:</p>
<ul id="cce_01_0004__ul1269074720287"><li id="cce_01_0004__li1269054722816">Node management: Node labels are used to classify nodes.</li><li id="cce_01_0004__li13690184719287">Affinity and anti-affinity between a workload and node:<ul id="cce_01_0004__ul1329315507281"><li id="cce_01_0004__li17292050192815">Some workloads are CPU-intensive, some are memory-intensive, and some are I/O-intensive; if they run on the same nodes, they may affect each other. In this case, you are advised to add different labels to nodes. When deploying a workload, you can select nodes with specified labels for affinity deployment to ensure that the system runs properly. Alternatively, node anti-affinity deployment can be used.</li><li id="cce_01_0004__li1229255012816">A system can be divided into multiple modules, each consisting of multiple microservices. To ensure the efficiency of subsequent O&M, you can add a module label to each node so that each module is deployed on its corresponding nodes, does not interfere with other modules, and can be developed and maintained independently.</li></ul>
</li></ul>
</div>
<div class="section" id="cce_01_0004__section74111324152813"><h4 class="sectiontitle"><span class="keyword" id="cce_01_0004__en-us_topic_0000001199181148_keyword544709935144944">Inherent Label of a Node</span></h4><p id="cce_01_0004__en-us_topic_0000001199181148_p096179164111">After a node is created, some fixed labels exist and cannot be deleted. For details about these labels, see <a href="#cce_01_0004__en-us_topic_0000001199181148_table83962234533">Table 1</a>.</p>
<div class="tablenoborder"><a name="cce_01_0004__en-us_topic_0000001199181148_table83962234533"></a><a name="en-us_topic_0000001199181148_table83962234533"></a><table cellpadding="4" cellspacing="0" summary="" id="cce_01_0004__en-us_topic_0000001199181148_table83962234533" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Inherent label of a node</caption><thead align="left"><tr id="cce_01_0004__en-us_topic_0000001199181148_row941112314533"><th align="left" class="cellrowborder" valign="top" width="34%" id="mcps1.4.2.3.2.3.1.1"><p id="cce_01_0004__en-us_topic_0000001199181148_p1541113238536">Key</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="66%" id="mcps1.4.2.3.2.3.1.2"><p id="cce_01_0004__en-us_topic_0000001199181148_p1741119232538">Description</p>
</th>
</tr>
</thead>
<tbody><tr id="cce_01_0004__en-us_topic_0000001199181148_row846191265011"><td class="cellrowborder" valign="top" width="34%" headers="mcps1.4.2.3.2.3.1.1 "><p id="cce_01_0004__en-us_topic_0000001199181148_p107369065316">New: topology.kubernetes.io/region</p>
<p id="cce_01_0004__en-us_topic_0000001199181148_p841172365311">Old: failure-domain.beta.kubernetes.io/region</p>
</td>
<td class="cellrowborder" valign="top" width="66%" headers="mcps1.4.2.3.2.3.1.2 "><p id="cce_01_0004__en-us_topic_0000001199181148_p38743391437">Region where the node is located</p>
</td>
</tr>
<tr id="cce_01_0004__en-us_topic_0000001199181148_row11957143818577"><td class="cellrowborder" valign="top" width="34%" headers="mcps1.4.2.3.2.3.1.1 "><p id="cce_01_0004__en-us_topic_0000001199181148_p67558441573">New: topology.kubernetes.io/zone</p>
<p id="cce_01_0004__en-us_topic_0000001199181148_p27551644185718">Old: failure-domain.beta.kubernetes.io/zone</p>
</td>
<td class="cellrowborder" valign="top" width="66%" headers="mcps1.4.2.3.2.3.1.2 "><p id="cce_01_0004__en-us_topic_0000001199181148_p5141948155717">AZ where the node is located</p>
</td>
</tr>
<tr id="cce_01_0004__en-us_topic_0000001199181148_row186452248235"><td class="cellrowborder" valign="top" width="34%" headers="mcps1.4.2.3.2.3.1.1 "><p id="cce_01_0004__en-us_topic_0000001199181148_p935922465512">New: node.kubernetes.io/baremetal</p>
<p id="cce_01_0004__en-us_topic_0000001199181148_p1664611247230">Old: failure-domain.beta.kubernetes.io/is-baremetal</p>
</td>
<td class="cellrowborder" valign="top" width="66%" headers="mcps1.4.2.3.2.3.1.2 "><p id="cce_01_0004__en-us_topic_0000001199181148_p10646132416235">Whether the node is a bare metal node</p>
<p id="cce_01_0004__en-us_topic_0000001199181148_p878819218284"><strong id="cce_01_0004__en-us_topic_0000001199181148_b137781937201815">false</strong> indicates that the node is not a bare metal node.</p>
</td>
</tr>
<tr id="cce_01_0004__en-us_topic_0000001199181148_row5551359185318"><td class="cellrowborder" valign="top" width="34%" headers="mcps1.4.2.3.2.3.1.1 "><p id="cce_01_0004__en-us_topic_0000001199181148_p126155014549">node.kubernetes.io/instance-type</p>
</td>
<td class="cellrowborder" valign="top" width="66%" headers="mcps1.4.2.3.2.3.1.2 "><p id="cce_01_0004__en-us_topic_0000001199181148_p11552159195316">Node specifications</p>
</td>
</tr>
<tr id="cce_01_0004__en-us_topic_0000001199181148_row11411923145318"><td class="cellrowborder" valign="top" width="34%" headers="mcps1.4.2.3.2.3.1.1 "><p id="cce_01_0004__en-us_topic_0000001199181148_p89228418575">kubernetes.io/arch</p>
</td>
<td class="cellrowborder" valign="top" width="66%" headers="mcps1.4.2.3.2.3.1.2 "><p id="cce_01_0004__en-us_topic_0000001199181148_p3103855811">Node processor architecture</p>
</td>
</tr>
<tr id="cce_01_0004__en-us_topic_0000001199181148_row18812913105819"><td class="cellrowborder" valign="top" width="34%" headers="mcps1.4.2.3.2.3.1.1 "><p id="cce_01_0004__en-us_topic_0000001199181148_p981281335818">kubernetes.io/hostname</p>
</td>
<td class="cellrowborder" valign="top" width="66%" headers="mcps1.4.2.3.2.3.1.2 "><p id="cce_01_0004__en-us_topic_0000001199181148_p1981261314582">Node name</p>
</td>
</tr>
<tr id="cce_01_0004__en-us_topic_0000001199181148_row15479121185815"><td class="cellrowborder" valign="top" width="34%" headers="mcps1.4.2.3.2.3.1.1 "><p id="cce_01_0004__en-us_topic_0000001199181148_p1931642055818">kubernetes.io/os</p>
</td>
<td class="cellrowborder" valign="top" width="66%" headers="mcps1.4.2.3.2.3.1.2 "><p id="cce_01_0004__en-us_topic_0000001199181148_p7479101125812">OS type</p>
</td>
</tr>
<tr id="cce_01_0004__en-us_topic_0000001199181148_row85011821447"><td class="cellrowborder" valign="top" width="34%" headers="mcps1.4.2.3.2.3.1.1 "><p id="cce_01_0004__en-us_topic_0000001199181148_p950218211147">node.kubernetes.io/subnetid</p>
</td>
<td class="cellrowborder" valign="top" width="66%" headers="mcps1.4.2.3.2.3.1.2 "><p id="cce_01_0004__en-us_topic_0000001199181148_p950282110419">ID of the subnet where the node is located.</p>
</td>
</tr>
<tr id="cce_01_0004__en-us_topic_0000001199181148_row15411523165312"><td class="cellrowborder" valign="top" width="34%" headers="mcps1.4.2.3.2.3.1.1 "><p id="cce_01_0004__en-us_topic_0000001199181148_p2411192310532">os.architecture</p>
</td>
<td class="cellrowborder" valign="top" width="66%" headers="mcps1.4.2.3.2.3.1.2 "><p id="cce_01_0004__en-us_topic_0000001199181148_p1741162315319">Node processor architecture</p>
<p id="cce_01_0004__en-us_topic_0000001199181148_p11218831135415">For example, <strong id="cce_01_0004__en-us_topic_0000001199181148_b842352706145330">amd64</strong> indicates an AMD 64-bit processor.</p>
</td>
</tr>
<tr id="cce_01_0004__en-us_topic_0000001199181148_row17411162365318"><td class="cellrowborder" valign="top" width="34%" headers="mcps1.4.2.3.2.3.1.1 "><p id="cce_01_0004__en-us_topic_0000001199181148_p8411102345311">os.name</p>
</td>
<td class="cellrowborder" valign="top" width="66%" headers="mcps1.4.2.3.2.3.1.2 "><p id="cce_01_0004__en-us_topic_0000001199181148_p7411112315537">Node OS name</p>
</td>
</tr>
<tr id="cce_01_0004__en-us_topic_0000001199181148_row1041115238531"><td class="cellrowborder" valign="top" width="34%" headers="mcps1.4.2.3.2.3.1.1 "><p id="cce_01_0004__en-us_topic_0000001199181148_p2411323135319">os.version</p>
</td>
<td class="cellrowborder" valign="top" width="66%" headers="mcps1.4.2.3.2.3.1.2 "><p id="cce_01_0004__en-us_topic_0000001199181148_p641192311530">Node OS kernel version</p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="cce_01_0004__section33951611481"><h4 class="sectiontitle">Adding a Node Label</h4><ol id="cce_01_0004__ol4618636938"><li id="cce_01_0004__li19833133920416"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0004__b1435391912552">Resource Management</strong> > <strong id="cce_01_0004__b163534191551">Nodes</strong>.</span></li><li id="cce_01_0004__li161816361831"><span>In the same row as the node for which you will add labels, choose <strong id="cce_01_0004__b527481414242">Operation</strong> > <strong id="cce_01_0004__b1891522117305">More</strong> > <strong id="cce_01_0004__b557931762416">Manage Labels</strong>.</span></li><li id="cce_01_0004__li461622412711"><span>In the dialog box displayed, click <span class="uicontrol" id="cce_01_0004__uicontrol579618153013"><b>Add Label</b></span> below the label list, enter the key and value of the label to be added, and click <span class="uicontrol" id="cce_01_0004__uicontrol51683011307"><b>OK</b></span>.</span><p><p id="cce_01_0004__p12647141114247">As shown in the figure, the key is <strong id="cce_01_0004__b842352706145648">deploy_qa</strong> and the value is <strong id="cce_01_0004__b842352706145652">true</strong>, indicating that the node is used to deploy the QA (test) environment.</p>
</p></li><li id="cce_01_0004__li68199221571"><span>After the label is added, click <strong id="cce_01_0004__b16434125017341">Manage Labels</strong>. Then, you will see the label that you have added.</span></li></ol>
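A label added this way can then steer scheduling. A minimal sketch of a workload that targets nodes carrying the <strong>deploy_qa=true</strong> label from the example above (the Deployment name and image are illustrative placeholders, not from this document):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qa-app                # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qa-app
  template:
    metadata:
      labels:
        app: qa-app
    spec:
      nodeSelector:
        deploy_qa: "true"     # schedule only onto nodes carrying this label
      containers:
      - name: qa-app
        image: nginx:alpine   # placeholder image
```

If no node carries the label, the pods stay in the Pending state until a matching node exists.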
</div>
<div class="section" id="cce_01_0004__section947332017485"><h4 class="sectiontitle"><span class="keyword" id="cce_01_0004__keyword1945340339145914">Deleting a Node Label</span></h4><p id="cce_01_0004__p189437339488">Only labels added by users can be deleted. Labels that are fixed on the node cannot be deleted.</p>
<ol id="cce_01_0004__ol13733101093118"><li id="cce_01_0004__li1460522594314"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0004__b1845712155516">Resource Management</strong> > <strong id="cce_01_0004__b845772175513">Nodes</strong>.</span></li><li id="cce_01_0004__li13733141003110"><span>In the same row as the node for which you will delete labels, choose <strong id="cce_01_0004__b1812010163281">Operation</strong> > <strong id="cce_01_0004__b125337420401">More</strong> > <strong id="cce_01_0004__b712010161281">Manage Labels</strong>.</span></li><li id="cce_01_0004__li3733110193114"><span>Click <strong id="cce_01_0004__b84235270615037">Delete</strong>, and then click <strong id="cce_01_0004__b84235270615040">OK</strong> to delete the label.</span><p><p id="cce_01_0004__p22371816555"><strong id="cce_01_0004__b11451165073419">Label updated successfully</strong> is displayed.</p>
</p></li></ol>
</div>
<div class="section" id="cce_01_0004__section3741052121216"><h4 class="sectiontitle">Searching for a Node by Label</h4><ol id="cce_01_0004__ol175957485139"><li id="cce_01_0004__li559574831319"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0004__b54653245551">Resource Management</strong> > <strong id="cce_01_0004__b8465824135518">Nodes</strong>.</span></li><li id="cce_01_0004__li459515480135"><span>In the upper right corner of the node list, click <strong id="cce_01_0004__b328816538273">Search by Label</strong>.</span></li><li id="cce_01_0004__li990316161147"><span>Enter a Kubernetes label to find the target node.</span></li></ol>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0183.html">Nodes</a></div>
</div>
</div>
<a name="cce_01_0009"></a><a name="cce_01_0009"></a>
<h1 class="topictitle1">Using a Third-Party Image</h1>
<div id="body1523239642063"><div class="section" id="cce_01_0009__section96721544452"><h4 class="sectiontitle">Scenario</h4><p id="cce_01_0009__p106141253452">CCE allows you to create workloads using images pulled from third-party image repositories.</p>
<p id="cce_01_0009__p1261413531252">Generally, a third-party image repository can be accessed only after authentication (using your account and password). CCE uses secret-based authentication to pull images. Therefore, create a secret for an image repository before pulling images from it.</p>
</div>
<div class="section" id="cce_01_0009__section14876601632"><h4 class="sectiontitle">Prerequisites</h4><p id="cce_01_0009__p545510319312">The node where the workload is running is accessible from public networks. You can access public networks through <a href="cce_01_0014.html">LoadBalancer</a>.</p>
</div>
<div class="section" id="cce_01_0009__section0402183334411"><h4 class="sectiontitle">Using the Console</h4><ol id="cce_01_0009__ol1748117409446"><li id="cce_01_0009__li16481144064414"><a name="cce_01_0009__li16481144064414"></a><a name="li16481144064414"></a><span>Create a secret for accessing a third-party image repository.</span><p><p id="cce_01_0009__p75695254516">In the navigation pane, choose <strong id="cce_01_0009__b11729201473811">Configuration Center</strong> > <strong id="cce_01_0009__b7729114133811">Secret</strong>, and click <strong id="cce_01_0009__b17729214133816">Create Secret</strong>. <strong id="cce_01_0009__b19729914183819">Type</strong> must be set to <strong id="cce_01_0009__b07293141388">kubernetes.io/dockerconfigjson</strong>. For details, see <a href="cce_01_0153.html">Creating a Secret</a>.</p>
<p id="cce_01_0009__p819111064514">Enter the user name and password used to access the third-party image repository.</p>
</p></li><li id="cce_01_0009__li13221161713456"><span>Create a workload. For details, see <a href="cce_01_0047.html">Creating a Deployment</a> or <a href="cce_01_0048.html">Creating a StatefulSet</a>. If the workload will be created from a third-party image, set the image parameters as follows:</span><p><ol type="a" id="cce_01_0009__ol8645134085919"><li id="cce_01_0009__li17283133917595">Set <strong id="cce_01_0009__b95353255715312">Secret Authentication</strong> to <strong id="cce_01_0009__b22537885915312">Yes</strong>.</li><li id="cce_01_0009__li886114816598">Select the secret created in step <a href="#cce_01_0009__li16481144064414">1</a>.</li><li id="cce_01_0009__li25271611405">Enter the image address.</li></ol>
</p></li><li id="cce_01_0009__li1682113518595"><span>Click <span class="uicontrol" id="cce_01_0009__uicontrol122057192514"><b>Create</b></span>.</span></li></ol>
</div>
<div class="section" id="cce_01_0009__section18217101117197"><h4 class="sectiontitle">Using kubectl</h4><ol id="cce_01_0009__ol84677271516"><li id="cce_01_0009__li2338171784610"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_01_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_01_0009__li54671627213"><span>Create a secret of the dockercfg type using kubectl.</span><p><pre class="screen" id="cce_01_0009__screen1466527017">kubectl create secret docker-registry <strong id="cce_01_0009__b184651127812">myregistrykey </strong>--docker-server=<strong id="cce_01_0009__b124669278112">DOCKER_REGISTRY_SERVER</strong> --docker-username=<strong id="cce_01_0009__b9466927114">DOCKER_USER</strong> --docker-password=<strong id="cce_01_0009__b1046662715116">DOCKER_PASSWORD</strong> --docker-email=<strong id="cce_01_0009__b54661627119">DOCKER_EMAIL</strong></pre>
<p id="cce_01_0009__p164665271714">In the preceding commands, <strong id="cce_01_0009__b740124517418">myregistrykey</strong> indicates the secret name, and other parameters are described as follows:</p>
<ul id="cce_01_0009__ul84670278112"><li id="cce_01_0009__li4467142711112"><strong id="cce_01_0009__b640184594119">DOCKER_REGISTRY_SERVER</strong>: address of a third-party image repository, for example, <strong id="cce_01_0009__b240104584114">www.3rdregistry.com</strong> or <strong id="cce_01_0009__b1440215458415">10.10.10.10:443</strong></li><li id="cce_01_0009__li13467127716"><strong id="cce_01_0009__b164021745114117">DOCKER_USER</strong>: account used for logging in to a third-party image repository</li><li id="cce_01_0009__li746782712110"><strong id="cce_01_0009__b1539245574117">DOCKER</strong><strong id="cce_01_0009__b4392185511418">_PASSWORD</strong>: password used for logging in to a third-party image repository</li><li id="cce_01_0009__li1546712278117"><strong id="cce_01_0009__b10402845154110">DOCKER_EMAIL</strong>: email of a third-party image repository</li></ul>
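The credentials passed above are stored base64-encoded inside the secret. A minimal local sketch of the <strong>auth</strong> field and the general shape of the generated payload, using placeholder credentials (no cluster is needed to run this):

```shell
# Placeholder values for illustration only
DOCKER_REGISTRY_SERVER=www.3rdregistry.com
DOCKER_USER=janedoe
DOCKER_PASSWORD=example-password

# The auth field is "user:password", base64-encoded
AUTH=$(printf '%s:%s' "$DOCKER_USER" "$DOCKER_PASSWORD" | base64)

# Approximate shape of the .dockerconfigjson that kubelet reads when pulling
cat > dockerconfig.json <<EOF
{"auths":{"$DOCKER_REGISTRY_SERVER":{"username":"$DOCKER_USER","auth":"$AUTH"}}}
EOF

# Decoding auth recovers the original credentials
printf '%s' "$AUTH" | base64 -d
```

Because the encoding is reversible, treat the secret itself as sensitive data.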
</p></li><li id="cce_01_0009__li161523518110"><span>Use a third-party image to create a workload.</span><p><div class="p" id="cce_01_0009__p13583471429">A dockercfg secret is used for authentication when you obtain a private image. The following is an example of using myregistrykey for authentication.<pre class="screen" id="cce_01_0009__screen0583771125">apiVersion: v1
kind: Pod
metadata:
name: foo
namespace: default
spec:
containers:
- name: foo
image: www.3rdregistry.com/janedoe/awesomeapp:v1
imagePullSecrets:
- name: myregistrykey #Use the created secret.</pre>
</div>
</p></li></ol>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0130.html">Configuring a Container</a></div>
</div>
</div>
<a name="cce_01_0010"></a><a name="cce_01_0010"></a>
<h1 class="topictitle1">Overview</h1>
<div id="body1522665832344"><p id="cce_01_0010__p13310145119810">You can learn about a cluster network from the following two aspects:</p>
<ul id="cce_01_0010__ul65247121891"><li id="cce_01_0010__li14524161214917">What is a cluster network like? A cluster consists of multiple nodes, and pods (or containers) run on those nodes. Nodes and containers need to communicate with each other. For details about the cluster network types and their functions, see <a href="#cce_01_0010__section1131733719195">Cluster Network Structure</a>.</li><li id="cce_01_0010__li55241612391">How is pod access implemented in a cluster? Accessing a pod or container is essentially accessing the service an application provides. Kubernetes provides <a href="#cce_01_0010__section1860619221134">Service</a> and <a href="#cce_01_0010__section1248852094313">Ingress</a> to address pod access issues. This section summarizes common network access scenarios. You can select the proper scenario based on site requirements. For details, see <a href="#cce_01_0010__section1286493159">Access Scenarios</a>.</li></ul>
<div class="section" id="cce_01_0010__section1131733719195"><a name="cce_01_0010__section1131733719195"></a><a name="section1131733719195"></a><h4 class="sectiontitle">Cluster Network Structure</h4><p id="cce_01_0010__p3299181794916">All nodes in the cluster are located in a VPC and use the VPC network. The container network is managed by dedicated network add-ons.</p>
<p id="cce_01_0010__p452843519446"><span><img id="cce_01_0010__image94831936164418" src="en-us_image_0000001159292060.png"></span></p>
<ul id="cce_01_0010__ul1916179122617"><li id="cce_01_0010__li13455145754315"><strong id="cce_01_0010__b19468105563811">Node Network</strong><p id="cce_01_0010__p17682193014812">A node network assigns IP addresses to hosts (nodes in the figure above) in a cluster. You need to select a VPC subnet as the node network of the CCE cluster. The number of available IP addresses in a subnet determines the maximum number of nodes (including master nodes and worker nodes) that can be created in a cluster. This quantity is also affected by the container network. For details, see the container network model.</p>
</li><li id="cce_01_0010__li16131141644715"><strong id="cce_01_0010__b1975815172433">Container Network</strong><p id="cce_01_0010__p523322010499">A container network assigns IP addresses to containers in a cluster. CCE inherits the IP-Per-Pod-Per-Network network model of Kubernetes. That is, each pod has an independent IP address on a network plane and all containers in a pod share the same network namespace. All pods in a cluster exist in a directly connected flat network. They can access each other through their IP addresses without using NAT. Kubernetes only provides a network mechanism for pods, but does not directly configure pod networks. The configuration of pod networks is implemented by specific container network add-ons. The container network add-ons are responsible for configuring networks for pods and managing container IP addresses.</p>
<p id="cce_01_0010__p3753153443514">Currently, CCE supports the following container network models:</p>
<ul id="cce_01_0010__ul1751111534368"><li id="cce_01_0010__li133611549182410">Container tunnel network: The container tunnel network is constructed on but independent of the node network through tunnel encapsulation. This network model uses VXLAN to encapsulate Ethernet packets into UDP packets and transmits them in tunnels. Open vSwitch serves as the backend virtual switch.</li><li id="cce_01_0010__li285944033514">VPC network: The VPC network uses VPC routing to integrate with the underlying network. This network model is applicable to performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the route quota in a VPC network. Each node is assigned a CIDR block of a fixed size. This networking model is free from tunnel encapsulation overhead and outperforms the container tunnel network model. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in the cluster can be directly accessed from outside the cluster.</li><li id="cce_01_0010__li5395140132618">Developed by CCE, Cloud Native Network 2.0 deeply integrates Elastic Network Interfaces (ENIs) and Sub Network Interfaces (sub-ENIs) of VPC. Container IP addresses are allocated from the VPC CIDR block. ELB passthrough networking is supported to direct access requests to containers. Security groups and elastic IPs (EIPs) are bound to deliver high performance.</li></ul>
<p id="cce_01_0010__p397482011109">The performance, networking scale, and application scenarios of a container network vary according to the container network model. For details about the functions and features of different container network models, see <a href="cce_01_0281.html">Overview</a>.</p>
</li><li id="cce_01_0010__li9139522183714"><strong id="cce_01_0010__b1885317214113">Service Network</strong><p id="cce_01_0010__p584703114499">Service is also a Kubernetes object. Each Service has a fixed IP address. When creating a cluster on CCE, you can specify the Service CIDR block. The Service CIDR block cannot overlap with the node or container CIDR block. The Service CIDR block can be used only within a cluster.</p>
</li></ul>
</div>
<div class="section" id="cce_01_0010__section1860619221134"><a name="cce_01_0010__section1860619221134"></a><a name="section1860619221134"></a><h4 class="sectiontitle">Service</h4><p id="cce_01_0010__p314709111318">A Service is used for pod access. With a fixed IP address, a Service forwards access traffic to pods and performs load balancing for these pods.</p>
<div class="fignone" id="cce_01_0010__en-us_topic_0249851121_fig163156154816"><span class="figcap"><b>Figure 1 </b>Accessing pods through a Service</span><br><span><img id="cce_01_0010__en-us_topic_0249851121_image1926812771312" src="en-us_image_0258889981.png"></span></div>
<p id="cce_01_0010__p831948183818">You can configure the following types of Services:</p>
<ul id="cce_01_0010__ul953218444116"><li id="cce_01_0010__li87791418174620">ClusterIP: used to make the Service reachable only from within a cluster.</li><li id="cce_01_0010__li17876227144612">NodePort: used for access from outside a cluster. A NodePort Service is accessed through a port on each node.</li><li id="cce_01_0010__li94953274615">LoadBalancer: used for access from outside a cluster. It is an extension of NodePort: a load balancer routes external traffic to the node ports, so external systems only need to access the load balancer.</li><li id="cce_01_0010__li1462811212513">ENI LoadBalancer: used for access from outside a cluster. An ENI LoadBalancer Service directs traffic from a load balancer directly to backend pods, reducing latency and avoiding performance loss for containerized applications.</li></ul>
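As a sketch, a NodePort Service exposing a set of pods outside the cluster might look like this (the Service name, label, and ports are illustrative, not from this document):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc         # hypothetical Service name
spec:
  type: NodePort          # reachable from outside via <node IP>:<nodePort>
  selector:
    app: nginx            # pods carrying this label receive the traffic
  ports:
  - port: 80              # cluster-internal Service port
    targetPort: 80        # container port on the pods
    nodePort: 30080       # port opened on every node
```

Changing <strong>type</strong> to <strong>ClusterIP</strong> (and dropping <strong>nodePort</strong>) would restrict the same Service to in-cluster access.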
<p id="cce_01_0010__p1677717174140">For details about the Service, see <a href="cce_01_0249.html">Overview</a>.</p>
</div>
<div class="section" id="cce_01_0010__section1248852094313"><a name="cce_01_0010__section1248852094313"></a><a name="section1248852094313"></a><h4 class="sectiontitle">Ingress</h4><p id="cce_01_0010__p96672218193">Services forward requests using layer-4 TCP and UDP protocols. Ingresses forward requests using layer-7 HTTP and HTTPS protocols. Domain names and paths can be used to achieve finer granularities.</p>
<div class="fignone" id="cce_01_0010__fig816719454212"><span class="figcap"><b>Figure 2 </b>Ingress and Service</span><br><span><img id="cce_01_0010__en-us_topic_0249851122_image8371183511310" src="en-us_image_0258961458.png"></span></div>
<p id="cce_01_0010__p174691141141410">For details about the ingress, see <a href="cce_01_0094.html">Overview</a>.</p>
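As a sketch, an ingress that routes layer-7 traffic by domain name and path might look like this (the host, path, and Service name are illustrative; older clusters may use a beta API version instead of networking.k8s.io/v1):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress       # hypothetical name
spec:
  rules:
  - host: www.example.com     # layer-7 routing by domain name
    http:
      paths:
      - path: /api            # and by URL path
        pathType: Prefix
        backend:
          service:
            name: api-svc     # assumed existing Service
            port:
              number: 80
```

The ingress forwards matching HTTP/HTTPS requests to the named Service, which then load-balances across its pods.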
</div>
<div class="section" id="cce_01_0010__section1286493159"><a name="cce_01_0010__section1286493159"></a><a name="section1286493159"></a><h4 class="sectiontitle">Access Scenarios</h4><p id="cce_01_0010__p1558001514155">Workload access scenarios can be categorized as follows:</p>
<ul id="cce_01_0010__ul125010117542"><li id="cce_01_0010__li1466355519018">Intra-cluster access: A ClusterIP Service is used for workloads in the same cluster to access each other.</li><li id="cce_01_0010__li1014011111110">Access from outside a cluster: A Service (NodePort or LoadBalancer type) or an ingress is recommended for a workload outside a cluster to access workloads in the cluster.<ul id="cce_01_0010__ul101426119117"><li id="cce_01_0010__li1014213113116">Access through the internet requires an EIP to be bound to the node or load balancer.</li><li id="cce_01_0010__li2501311125411">Access through an intranet uses only the intranet IP address of the node or load balancer. If workloads are located in different VPCs, a peering connection is required to enable communication between the VPCs.</li></ul>
|
||||
</li><li id="cce_01_0010__li1066365520014">External access initiated by a workload:<ul id="cce_01_0010__ul17529512239"><li id="cce_01_0010__li26601017165619">Accessing an intranet: The workload accesses the intranet address, but the implementation method varies depending on container network models. Ensure that the peer security group allows the access requests from the container CIDR block. </li><li id="cce_01_0010__li8257105318237">Accessing a public network: You need to assign an EIP to the node where the workload runs, or configure SNAT rules through the NAT gateway.</li></ul>
</li></ul>
<div class="fignone" id="cce_01_0010__fig13795829151515"><span class="figcap"><b>Figure 3 </b>Network access diagram</span><br><span><img id="cce_01_0010__image445972519529" src="en-us_image_0000001160748146.png"></span></div>
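For example, access from outside the cluster through a node port can be sketched as follows (the name, label, and port values are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport         # placeholder name
spec:
  type: NodePort               # exposes the Service on every node's IP
  selector:
    app: nginx                 # placeholder label of the target pods
  ports:
  - port: 80                   # Service port inside the cluster
    targetPort: 80             # container port
    nodePort: 30080            # port opened on each node (30000-32767)
```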
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0020.html">Networking</a></div>
</div>
</div>
@ -1,34 +0,0 @@
<a name="cce_01_0013"></a><a name="cce_01_0013"></a>
<h1 class="topictitle1">Managing Pods</h1>
<div id="body1564122277019"><div class="section" id="cce_01_0013__section204441317154411"><h4 class="sectiontitle">Scenario</h4><p id="cce_01_0013__p962213172051">A pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod encapsulates an application's container (or, in some cases, multiple containers), storage resources, a unique network identity (IP address), as well as options that govern how the container(s) should run. A pod represents a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.</p>
<p id="cce_01_0013__p589774718597">Pods in a Kubernetes cluster can be used in either of the following ways:</p>
<ul id="cce_01_0013__ul9462959175919"><li id="cce_01_0013__li16462559205910"><strong id="cce_01_0013__b1627921013913">Pods that run a single container.</strong> The "one-container-per-pod" model is the most common Kubernetes use case. In this case, a pod functions as a wrapper around a single container, and Kubernetes manages the pods rather than the containers directly.</li><li id="cce_01_0013__li64621959125917"><strong id="cce_01_0013__b746765415107">Pods that run multiple containers that need to work together.</strong> A pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. The possible scenarios are as follows:<ul id="cce_01_0013__ul17380211501"><li id="cce_01_0013__li194310912018">Content management systems, file and data loaders, local cache managers, etc;</li><li id="cce_01_0013__li4431292016">Log and checkpoint backup, compression, rotation, snapshotting, etc;</li><li id="cce_01_0013__li1543229100">Data change watchers, log tailers, logging and monitoring adapters, event publishers, etc;</li><li id="cce_01_0013__li64321096015">Proxies, bridges, adapters, etc;</li><li id="cce_01_0013__li9433291007">Controllers, managers, configurators, and updaters</li></ul>
</li></ul>
<p id="cce_01_0013__p580215651118">You can easily manage pods on CCE, such as editing YAML files and monitoring pods.</p>
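The "multiple co-located containers" case described above can be sketched as a pod that runs an application container next to a log-tailing sidecar (the pod name, image names, and paths are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar            # placeholder name
spec:
  containers:
  - name: app
    image: nginx                    # main application container
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx     # writes its logs to the shared volume
  - name: log-tailer
    image: busybox                  # sidecar that reads the shared logs
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}                    # shared scratch volume with pod lifetime
```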
</div>
<div class="section" id="cce_01_0013__section13937181015513"><h4 class="sectiontitle">Editing a YAML File</h4><p id="cce_01_0013__p879119319360">To edit and download the YAML file of a pod online, do as follows:</p>
<ol id="cce_01_0013__ol1879112311361"><li id="cce_01_0013__li279113103612"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0013__b6666163091312">Workloads</strong> > <strong id="cce_01_0013__b767211308131">Pods</strong>.</span></li><li id="cce_01_0013__li379163103617"><span>Click <strong id="cce_01_0013__b18107151114183">Edit YAML</strong> in the same row as the target pod. In the <strong id="cce_01_0013__b11621193891817">Edit YAML</strong> dialog box that is displayed, modify the YAML file of the pod.</span></li><li id="cce_01_0013__li97921133367"><span>Click <strong id="cce_01_0013__b24151514186">Edit</strong> and then <strong id="cce_01_0013__b242045117184">OK</strong> to save the changes.</span><p><div class="note" id="cce_01_0013__note1365975191714"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_01_0013__p766045111714">If a pod is created by another workload, its YAML file cannot be modified individually on the <strong id="cce_01_0013__b17661536142912">Pods</strong> page.</p>
</div></div>
</p></li><li id="cce_01_0013__li87324268415"><span>(Optional) In the <strong id="cce_01_0013__b1045015126211">Edit YAML</strong> window, click <strong id="cce_01_0013__b1245615128214">Download</strong> to download the YAML file.</span></li></ol>
</div>
<div class="section" id="cce_01_0013__section88077333511"><h4 class="sectiontitle">Monitoring Pods</h4><p id="cce_01_0013__p785625243110">On the CCE console, you can view the CPU and memory usage, upstream and downstream rates, and disk read/write rates of a workload pod to determine the required resource specifications.</p>
<ol id="cce_01_0013__ol121998089396"><li id="cce_01_0013__li9879311402"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0013__b163290354135">Workloads</strong> > <strong id="cce_01_0013__b143298357136">Pods</strong>.</span></li><li id="cce_01_0013__li2774856895942"><span>Click <strong id="cce_01_0013__b0397052418">Monitoring</strong> in the same row as the target pod to view the CPU and memory usage, upstream and downstream rates, and disk read/write rates of the pod.</span><p><div class="note" id="cce_01_0013__note23359528201758"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_01_0013__p8909166201758">You cannot view the monitoring data of a pod that is not running.</p>
</div></div>
</p></li></ol>
</div>
<div class="section" id="cce_01_0013__section1917010503513"><h4 class="sectiontitle"><span class="keyword" id="cce_01_0013__keyword17515174102412">Deleting a Pod</span></h4><p id="cce_01_0013__p44461328132920">If a pod is no longer needed, you can delete it. Deleted pods cannot be recovered. Exercise caution when performing this operation.</p>
<ol id="cce_01_0013__ol16301162312555"><li id="cce_01_0013__li12308471214"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0013__b43361435121318">Workloads</strong> > <strong id="cce_01_0013__b11336173519133">Pods</strong>.</span></li><li id="cce_01_0013__li23014231555"><span>Click <strong id="cce_01_0013__b19516108195318">Delete</strong> in the same row as the pod to be deleted.</span><p><p id="cce_01_0013__p11245223162515">Read the system prompt carefully. A deleted pod cannot be restored.</p>
</p></li><li id="cce_01_0013__li1566102365617"><span>Click <strong id="cce_01_0013__b1910253213588">Yes</strong> to delete the pod.</span><p><div class="note" id="cce_01_0013__note1933510551189"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_01_0013__ul204031813191914"><li id="cce_01_0013__li7404151371913">If the node where the pod is located is unavailable or shut down and the workload cannot be deleted, you can forcibly delete the pod from the pod list on the workload details page.</li><li id="cce_01_0013__li10404113191914">Ensure that the storage volumes to be deleted are not used by other workloads. If these volumes are imported or have snapshots, you can only unbind them.</li></ul>
</div></div>
</p></li></ol>
</div>
<div class="section" id="cce_01_0013__section367274071320"><h4 class="sectiontitle">Helpful Links</h4><ul id="cce_01_0013__ul1423164118113"><li id="cce_01_0013__li184231841710"><a href="https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns" target="_blank" rel="noopener noreferrer">The Distributed System Toolkit: Patterns for Composite Containers</a></li><li id="cce_01_0013__li24235411313"><a href="https://kubernetes.io/blog/2016/06/container-design-patterns/" target="_blank" rel="noopener noreferrer">Container Design Patterns</a></li></ul>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0046.html">Workloads</a></div>
</div>
</div>
@ -1,86 +0,0 @@
<a name="cce_01_0016"></a><a name="cce_01_0016"></a>
<h1 class="topictitle1">Using a Secret</h1>
<div id="body1523236302435"><div class="notice" id="cce_01_0016__note13556115019429"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_01_0016__p39918120437">The following secrets are used by the CCE system. Do not perform any operations on them.</p>
<ul id="cce_01_0016__ul13678122414717"><li id="cce_01_0016__li19678132424718">Do not perform operations on secrets in the kube-system namespace.</li><li id="cce_01_0016__li206781324204718">Do not perform operations on default-secret or paas.elb in any namespace. default-secret is used to pull private images from SWR, and paas.elb is used to connect Services in a namespace to the ELB service.</li></ul>
</div></div>
<ul id="cce_01_0016__ul784252913353"><li id="cce_01_0016__li0842229163518"><a href="#cce_01_0016__section472505211214">Configuring the Data Volume of a Pod</a></li><li id="cce_01_0016__li56474053519"><a href="#cce_01_0016__section207271352141216">Setting Environment Variables of a Pod</a></li></ul>
<p id="cce_01_0016__p1119921953413">The following example shows how to use a secret.</p>
<pre class="screen" id="cce_01_0016__screen1032654214366">apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
  username: ******  # The value must be Base64-encoded.
  password: ******  # The value must be Base64-encoded.</pre>
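The Base64 values in the data section can be generated on the command line; for example (the value `admin` is a placeholder):

```shell
# Base64-encode a secret value for the data section of a Secret manifest.
# printf (or echo -n) avoids encoding a trailing newline character.
printf '%s' 'admin' | base64        # YWRtaW4=
# Decode to verify:
printf '%s' 'YWRtaW4=' | base64 -d
```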
<div class="notice" id="cce_01_0016__note250219438139"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_01_0016__p132702112389">When a secret is used in a pod, the pod and secret must be in the same cluster and namespace.</p>
</div></div>
<div class="section" id="cce_01_0016__section472505211214"><a name="cce_01_0016__section472505211214"></a><a name="section472505211214"></a><h4 class="sectiontitle">Configuring the Data Volume of a Pod</h4><div class="p" id="cce_01_0016__p9949138153913">A secret can be used as a file in a pod. As shown in the following example, the username and password of the <strong id="cce_01_0016__b18969914308">mysecret</strong> secret are saved in the <strong id="cce_01_0016__b396199143010">/etc/foo</strong> directory as files.<pre class="screen" id="cce_01_0016__screen1237163595414">apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
secret:
secretName: <span style="color:#FF0000;">mysecret</span></pre>
</div>
<div class="p" id="cce_01_0016__p234055713124">In addition, you can specify the path and permission used to access a secret. In the following example, the username is stored in the <strong id="cce_01_0016__b5930378811228">/etc/foo/my-group/my-username</strong> file in the container.<pre class="screen" id="cce_01_0016__screen12711144211311">apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
volumes:
- name: foo
secret:
secretName: <span style="color:#FF0000;">mysecret</span>
items:
- key: <span style="color:#FF0000;">username</span>
path: <span style="color:#FF0000;">my-group/my-username</span>
mode: <span style="color:#FF0000;">511</span></pre>
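Note that the mode value above is a decimal integer: in JSON manifests, which do not support octal notation, 511 in decimal represents the octal file mode 0777 (rwxrwxrwx). The correspondence can be checked in a shell:

```shell
# Convert the decimal secret-volume mode 511 to its octal file-mode form.
printf '%o\n' 511    # prints 777
```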
</div>
<p id="cce_01_0016__p2031118172817">To mount a secret to a data volume, you can also perform operations on the CCE console. When creating a workload, set advanced settings for the container, choose <strong id="cce_01_0016__b159591743143117">Data Storage > Local Volume</strong>, click <strong id="cce_01_0016__b89592043143116">Add Local Volume</strong>, and select <strong id="cce_01_0016__b10959144373116">Secret</strong>. For details, see <a href="cce_01_0053.html#cce_01_0053__en-us_topic_0000001199341206_section10197243134710">Secret</a>.</p>
</div>
<div class="section" id="cce_01_0016__section207271352141216"><a name="cce_01_0016__section207271352141216"></a><a name="section207271352141216"></a><h4 class="sectiontitle">Setting Environment Variables of a Pod</h4><div class="p" id="cce_01_0016__p765814229403">A secret can be used as an environment variable of a pod. As shown in the following example, the username and password of the <strong id="cce_01_0016__b106728345329">mysecret</strong> secret are defined as environment variables of the pod.<pre class="screen" id="cce_01_0016__screen67991151399">apiVersion: v1
kind: Pod
metadata:
name: secret-env-pod
spec:
containers:
- name: mycontainer
image: redis
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: <span style="color:#FF0000;">mysecret</span>
key: <span style="color:#FF0000;">username</span>
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: <span style="color:#FF0000;">mysecret</span>
key: <span style="color:#FF0000;">password</span>
restartPolicy: Never</pre>
</div>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0045.html">Configuration Center</a></div>
</div>
</div>
@ -1,11 +0,0 @@
<a name="cce_01_0019"></a><a name="cce_01_0019"></a>

<h1 class="topictitle1">Charts (Helm)</h1>
<div id="body1522665832345"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_01_0143.html">My Charts</a></strong><br>
</li>
</ul>
</div>
@ -1,21 +0,0 @@
<a name="cce_01_0020"></a><a name="cce_01_0020"></a>

<h1 class="topictitle1">Networking</h1>
<div id="body1506570432072"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_01_0010.html">Overview</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0280.html">Container Network Models</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0247.html">Services</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0248.html">Ingress</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0059.html">Network Policies</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0288.html">SecurityGroups</a></strong><br>
</li>
</ul>
</div>
@ -1,28 +0,0 @@
<a name="cce_01_0023"></a><a name="cce_01_0023"></a>
<h1 class="topictitle1">kubectl Usage Guide</h1>
<div id="body1522736585969"><p id="cce_01_0023__p835118311455">Before running <span class="keyword" id="cce_01_0023__keyword153407726816273">kubectl</span> commands, you should be familiar with kubectl and its basic operations. For details, see <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/" target="_blank" rel="noopener noreferrer">Kubernetes API</a> and <a href="https://kubernetes.io/docs/reference/kubectl/overview/" target="_blank" rel="noopener noreferrer">kubectl CLI</a>.</p>
<p id="cce_01_0023__p11412125284012">Go to the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/README.md" target="_blank" rel="noopener noreferrer">Kubernetes release page</a> to download kubectl corresponding to the cluster version or a later version.</p>
<div class="section" id="cce_01_0023__section223415528535"><h4 class="sectiontitle">Cluster Connection</h4><ul id="cce_01_0023__ul9218101025420"><li id="cce_01_0023__li421811020546"><a href="cce_01_0107.html">Connecting to a Kubernetes cluster using kubectl</a></li></ul>
</div>
<div class="section" id="cce_01_0023__section81661268550"><h4 class="sectiontitle">Workload Creation</h4><ul id="cce_01_0023__ul043813218553"><li id="cce_01_0023__li54381320555"><a href="cce_01_0047.html#cce_01_0047__section155246177178">Creating a Deployment using kubectl</a></li><li id="cce_01_0023__li1143816328559"><a href="cce_01_0048.html#cce_01_0048__section113441881214">Creating a StatefulSet using kubectl</a></li></ul>
</div>
<div class="section" id="cce_01_0023__section1294518341552"><h4 class="sectiontitle">Workload <span class="keyword" id="cce_01_0023__keyword2125179630163515">Affinity</span>/<span class="keyword" id="cce_01_0023__keyword1303019494163518">Anti-affinity</span> Scheduling</h4><ul id="cce_01_0023__ul16157916145711"><li id="cce_01_0023__li16157161613574"><a href="cce_01_0225.html#cce_01_0225__section711574271117">Example YAML for workload-node affinity</a></li><li id="cce_01_0023__li1015719169575"><a href="cce_01_0226.html#cce_01_0226__section1361482522712">Example YAML for workload-node anti-affinity</a></li><li id="cce_01_0023__li16157141613572"><a href="cce_01_0220.html#cce_01_0220__section5140193643912">Example YAML for workload-workload affinity</a></li><li id="cce_01_0023__li0158131605713"><a href="cce_01_0227.html#cce_01_0227__section1894310152317">Example YAML for workload-workload anti-affinity</a></li><li id="cce_01_0023__li9158416125718"><a href="cce_01_0228.html#cce_01_0228__section4201420133117">Example YAML for workload-AZ affinity</a></li><li id="cce_01_0023__li8158151665719"><a href="cce_01_0229.html#cce_01_0229__section102822029173111">Example YAML for workload-AZ anti-affinity</a></li></ul>
</div>
<div class="section" id="cce_01_0023__section557132035713"><h4 class="sectiontitle"><span class="keyword" id="cce_01_0023__keyword1218303978163611">Workload Access Mode</span> Settings</h4><ul id="cce_01_0023__ul153741345812"><li id="cce_01_0023__li143751314587"><a href="cce_01_0011.html#cce_01_0011__section9813121512319">Implementing intra-cluster access using kubectl</a></li><li id="cce_01_0023__li103781345818"><a href="cce_01_0142.html#cce_01_0142__section7114174773118">Implementing node access using kubectl</a></li><li id="cce_01_0023__li193715130582"><a href="cce_01_0014.html#cce_01_0014__section1984211714368">Implementing Layer 4 load balancing using kubectl</a></li><li id="cce_01_0023__li123781319584"><a href="cce_01_0252.html">Implementing Layer 7 load balancing using kubectl</a></li></ul>
</div>
<div class="section" id="cce_01_0023__section927251814582"><h4 class="sectiontitle"><span class="keyword" id="cce_01_0023__keyword1569546850163650">Advanced Workload Settings</span></h4><ul id="cce_01_0023__ul1043016377582"><li id="cce_01_0023__li743017374580"><a href="cce_01_0105.html#cce_01_0105__section151181981167">Example YAML for setting the container lifecycle</a></li></ul>
</div>
<div class="section" id="cce_01_0023__section1660674011584"><h4 class="sectiontitle">Job Management</h4><ul id="cce_01_0023__ul1990515291014"><li id="cce_01_0023__li14905629916"><a href="cce_01_0150.html#cce_01_0150__section450152719412">Creating a job using kubectl</a></li><li id="cce_01_0023__li6905192917112"><a href="cce_01_0151.html#cce_01_0151__section13519162224919">Creating a cron job using kubectl</a></li></ul>
</div>
<div class="section" id="cce_01_0023__section12376151215916"><h4 class="sectiontitle"><span class="keyword" id="cce_01_0023__keyword1611535067163714">Configuration Center</span></h4><ul id="cce_01_0023__ul5692715111"><li id="cce_01_0023__li4617272111"><a href="cce_01_0152.html#cce_01_0152__section639712716372">Creating a ConfigMap using kubectl</a></li><li id="cce_01_0023__li16827616"><a href="cce_01_0153.html#cce_01_0153__section821112149514">Creating a secret using kubectl</a></li></ul>
</div>
<div class="section" id="cce_01_0023__section274418453590"><h4 class="sectiontitle">Storage Management</h4><ul id="cce_01_0023__ul56081523813"><li id="cce_01_0023__li1160882312115"><a href="cce_01_0379.html">Creating a PV using kubectl</a></li><li id="cce_01_0023__li106081723819"><a href="cce_01_0378.html">Creating a PVC using kubectl</a></li></ul>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0140.html">Using kubectl to Run a Cluster</a></div>
</div>
</div>
@ -1,21 +0,0 @@
<a name="cce_01_0026"></a><a name="cce_01_0026"></a>
<h1 class="topictitle1">Querying CTS Logs</h1>
<div id="body1525226397666"><div class="section" id="cce_01_0026__section19908104613460"><h4 class="sectiontitle">Scenario</h4><p id="cce_01_0026__p1349415403233">After you enable CTS, the system starts recording operations on CCE resources. Operation records of the last 7 days can be viewed on the CTS management console.</p>
</div>
<div class="section" id="cce_01_0026__section208814582456"><h4 class="sectiontitle">Procedure</h4><ol id="cce_01_0026__ol968681862911"><li id="cce_01_0026__li18356228445"><span>Log in to the management console.</span></li><li id="cce_01_0026__li14905725134512"><span>Click <span><img id="cce_01_0026__image1180502423211" src="en-us_image_0144054048.gif"></span> in the upper left corner and select a region.</span></li><li id="cce_01_0026__li56856187296"><span>Choose <strong id="cce_01_0026__b161841334316020">Service List</strong> from the main menu. Choose <strong id="cce_01_0026__b14174101155814">Management & Deployment</strong> > <strong id="cce_01_0026__b1917414113585">Cloud Trace Service</strong>.</span></li><li id="cce_01_0026__li6685018122920"><span>In the navigation pane of the CTS console, choose <strong id="cce_01_0026__b091641316584">Cloud Trace Service</strong> > <strong id="cce_01_0026__b6917813165811">Trace List</strong>.</span></li><li id="cce_01_0026__li0686618152911"><span>On the <strong id="cce_01_0026__b156310494616044">Trace List</strong> page, query operation records based on the search criteria. Currently, the trace list supports trace query based on the combination of the following search criteria:</span><p><ul id="cce_01_0026__ul2686318142919"><li id="cce_01_0026__li9685018132914"><strong id="cce_01_0026__b147767585916113">Trace Source</strong>, <strong id="cce_01_0026__b33843206916113">Resource Type</strong>, and <strong id="cce_01_0026__b104136949616113">Search By</strong><p id="cce_01_0026__p068517181297">Select the search criteria from the drop-down lists. Select <strong id="cce_01_0026__b987393825817">CCE</strong> from the <strong id="cce_01_0026__b1287312387583">Trace Source</strong> drop-down list.</p>
<p id="cce_01_0026__p26851618102915">If you select <strong id="cce_01_0026__b23175131216221">Trace name</strong> from the <strong id="cce_01_0026__b172899127516221">Search By</strong> drop-down list, specify the trace name.</p>
<p id="cce_01_0026__p7685191818293">If you select <strong id="cce_01_0026__b33083335616231">Resource ID</strong> from the <strong id="cce_01_0026__b153919820216231">Search By</strong> drop-down list, select or enter a specific resource ID.</p>
<p id="cce_01_0026__p166851718102917">If you select <strong id="cce_01_0026__b50135831116238">Resource name</strong> from the <strong id="cce_01_0026__b186507588316238">Search By</strong> drop-down list, select or enter a specific resource name.</p>
</li><li id="cce_01_0026__li1968671815297"><strong id="cce_01_0026__b168444573616245">Operator</strong>: Select a specific operator (at user level rather than account level).</li><li id="cce_01_0026__li368641832910"><strong id="cce_01_0026__b113712261116258">Trace Status</strong>: Set this parameter to any of the following values: <strong id="cce_01_0026__b135890568716258">All trace statuses</strong>, <strong id="cce_01_0026__b192911413716258">normal</strong>, <strong id="cce_01_0026__b59570413316258">warning</strong>, and <strong id="cce_01_0026__b169117565716258">incident</strong>.</li><li id="cce_01_0026__li12686118112916">Time range: You can query traces generated during any time range in the last seven days.</li></ul>
</p></li><li id="cce_01_0026__li01301836122914"><span>Click <span><img id="cce_01_0026__image07291172331" src="en-us_image_0144049227.png"></span> on the left of a trace to expand its details, as shown below.</span><p><div class="fignone" id="cce_01_0026__fig1324117817394"><span class="figcap"><b>Figure 1 </b>Expanding trace details</span><br><span><img id="cce_01_0026__image19242788396" src="en-us_image_0000001144779790.png"></span></div>
</p></li><li id="cce_01_0026__li186863182294"><span>Click <strong id="cce_01_0026__b25871212163720">View Trace</strong> in the <strong id="cce_01_0026__b1597141217374">Operation</strong> column. The trace details are displayed.</span><p><div class="fignone" id="cce_01_0026__fig365411360512"><span class="figcap"><b>Figure 2 </b>Viewing event details</span><br><span><img id="cce_01_0026__image21436386418" src="en-us_image_0000001144620002.png"></span></div>
</p></li></ol>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0024.html">Cloud Trace Service (CTS)</a></div>
</div>
</div>
@ -1,31 +0,0 @@
<a name="cce_01_0027"></a><a name="cce_01_0027"></a>

<h1 class="topictitle1">Clusters</h1>
<div id="body1505899032898"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_01_0002.html">Cluster Overview</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0342.html">CCE Turbo Clusters and CCE Clusters</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0298.html">Creating a CCE Turbo Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0028.html">Creating a CCE Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0140.html">Using kubectl to Run a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0157.html">Setting Cluster Auto Scaling</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0215.html">Upgrading a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0031.html">Managing a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0175.html">Obtaining a Cluster Certificate</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0085.html">Controlling Cluster Permissions</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0347.html">Cluster Parameters</a></strong><br>
</li>
</ul>
</div>
@ -1,17 +0,0 @@
<a name="cce_01_0030"></a><a name="cce_01_0030"></a>

<h1 class="topictitle1">Namespaces</h1>
<div id="body1505966783221"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_01_0278.html">Creating a Namespace</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0285.html">Managing Namespaces</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0286.html">Configuring a Namespace-level Network Policy</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0287.html">Setting a Resource Quota</a></strong><br>
</li>
</ul>
</div>
@ -1,19 +0,0 @@
<a name="cce_01_0031"></a><a name="cce_01_0031"></a>

<h1 class="topictitle1">Managing a Cluster</h1>
<div id="body1506157580881"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_01_0212.html">Deleting a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0214.html">Hibernating and Waking Up a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0213.html">Configuring Kubernetes Parameters</a></strong><br>
</li>
</ul>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0027.html">Clusters</a></div>
</div>
</div>
@ -1,76 +0,0 @@
<a name="cce_01_0033"></a><a name="cce_01_0033"></a>
<h1 class="topictitle1">Creating a Node</h1>
<div id="body1505899032898"><div class="section" id="cce_01_0033__section1372154273312"><h4 class="sectiontitle">Scenario</h4><p id="cce_01_0033__p18355115417330">A node is a virtual or physical machine that provides computing resources. Sufficient nodes must be available in your project to ensure that operations, such as creating workloads, can be performed.</p>
</div>
<div class="section" id="cce_01_0033__section103205496263"><h4 class="sectiontitle">Prerequisites</h4><ul id="cce_01_0033__ul1074615557447"><li id="cce_01_0033__li474645512448">At least one cluster is available. For details on how to create a cluster, see <a href="cce_01_0028.html">Creating a CCE Cluster</a>.</li><li id="cce_01_0033__li1919515393327">A key pair has been created. The key pair will be used for identity authentication upon remote node login.</li></ul>
</div>
<div class="section" id="cce_01_0033__section172601129872"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_01_0033__ul1963914497274"><li id="cce_01_0033__li190817135320">During the node creation, software packages are downloaded from OBS using the domain name. You need to use a private DNS server to resolve the OBS domain name, and configure the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name.</li><li id="cce_01_0033__li186405498273">Only KVM nodes can be created. Non-KVM nodes cannot be used after being created.</li><li id="cce_01_0033__li1164034982716">Once a node is created, its AZ cannot be changed.</li><li id="cce_01_0033__li121985211519">CCE supports GPUs through an add-on named <a href="cce_01_0141.html">gpu-beta</a>. You need to install this add-on to use GPU-enabled nodes in your cluster.</li></ul>
</div>
<div class="section" id="cce_01_0033__section19320144922620"><h4 class="sectiontitle">Procedure</h4><ol id="cce_01_0033__ol1432013497268"><li id="cce_01_0033__li28183131548"><span>Log in to the CCE console. Use either of the following methods to add a node:</span><p><ul id="cce_01_0033__ul05711943842"><li id="cce_01_0033__li105711143440">In the navigation pane, choose <strong id="cce_01_0033__b51641611872">Resource Management</strong> > <strong id="cce_01_0033__b14171111179">Nodes</strong>. Select the cluster to which the node will belong and click <strong id="cce_01_0033__b121721116719">Create</strong><strong id="cce_01_0033__b61721211471"> Node</strong> on the upper part of the node list page.</li><li id="cce_01_0033__li1857110431343">In the navigation pane, choose <strong id="cce_01_0033__b19396182">Resource Management</strong> > <strong id="cce_01_0033__b41001861288">Clusters</strong>. In the card view of the cluster to which you will add nodes, click <strong id="cce_01_0033__b121000613817"></strong><strong id="cce_01_0033__b17100063818">Create</strong><strong id="cce_01_0033__b1710110618812"> Node</strong>.</li></ul>
</p></li><li id="cce_01_0033__li8652222123119"><span>Select a region and an AZ.</span><p><ul id="cce_01_0033__ul587193815262"><li id="cce_01_0033__li1687143810264"><strong id="cce_01_0033__cce_01_0028_b1536414140279">Current Region</strong>: geographic location of the nodes to be created.</li><li id="cce_01_0033__li25561440112615"><strong id="cce_01_0033__cce_01_0028_b14136152617272">AZ</strong>: Set this parameter based on the site requirements. An AZ is a physical region where resources use independent power supply and networks. AZs are physically isolated but interconnected through an internal network.<p id="cce_01_0033__cce_01_0028_p1820919422327">You are advised to deploy worker nodes in different AZs after the cluster is created to make your workloads more reliable. When creating a cluster, you can deploy nodes only in one AZ.</p>
</li></ul>
</p></li><li id="cce_01_0033__li14558457324"><span>Configure node parameters.</span><p><ul id="cce_01_0033__ul9458227102717"><li id="cce_01_0033__li1345852792710"><strong id="cce_01_0033__cce_01_0028_b333451116287">Node Type</strong><ul id="cce_01_0033__cce_01_0028_ul521215421329"><li id="cce_01_0033__cce_01_0028_li12105427323"><strong id="cce_01_0033__cce_01_0028_b1557215574277">VM node</strong>: A VM node will be created in the cluster.</li></ul>
</li><li id="cce_01_0033__li592014281271"><strong id="cce_01_0033__cce_01_0028_b0421527102814">Node Name</strong>: Enter a node name. A node name contains 1 to 56 characters starting with a lowercase letter and not ending with a hyphen (-). Only lowercase letters, digits, and hyphens (-) are allowed.</li><li id="cce_01_0033__li85443322812"><strong id="cce_01_0033__cce_01_0028_b16100183432814">Specifications</strong>: Select node specifications that best fit your business needs.<ul id="cce_01_0033__cce_01_0028_ul2581647145016"><li id="cce_01_0033__cce_01_0028_li1458211473505"><strong id="cce_01_0033__cce_01_0028_b62505295394">General-purpose</strong>: provides a balance of computing, memory, and network resources. It is a good choice for many applications, such as web servers, workload development, workload testing, and small-scale databases.</li><li id="cce_01_0033__cce_01_0028_li64951927195119"><strong id="cce_01_0033__cce_01_0028_b19897143519544">Memory-optimized</strong>: provides higher memory capacity than general-purpose nodes and is suitable for relational databases, NoSQL, and other workloads that are both memory-intensive and data-intensive.</li><li id="cce_01_0033__cce_01_0028_li1050769193112"><strong id="cce_01_0033__cce_01_0028_b2152663587">GPU-accelerated</strong>: provides powerful floating-point computing and is suitable for real-time, highly concurrent massive computing. Graphics processing units (GPUs) of P series are suitable for deep learning, scientific computing, and CAE. GPUs of G series are suitable for 3D animation rendering and CAD. <strong id="cce_01_0033__cce_01_0028_b193285495448">GPU-accelerated nodes can be created only in clusters of v1.11 or later</strong>. GPU-accelerated nodes are available only in certain regions.</li><li id="cce_01_0033__cce_01_0028_li7155144113514"><strong id="cce_01_0033__cce_01_0028_b1243213616911">General computing-plus</strong>: provides stable performance and exclusive resources to enterprise-class workloads with high and stable computing performance.</li><li id="cce_01_0033__cce_01_0028_li162155965212"><strong id="cce_01_0033__cce_01_0028_b19525182091018">Disk-intensive</strong>: supports <a href="cce_01_0053.html">local disk storage</a> and provides high network performance. It is designed for workloads requiring high throughput and data switching, such as big data workloads.</li></ul>
<p id="cce_01_0033__cce_01_0028_p9915153163017">To ensure node stability, CCE automatically reserves some resources to run necessary system components. For details, see <a href="cce_01_0178.html">Formula for Calculating the Reserved Resources of a Node</a>.</p>
</li><li id="cce_01_0033__li171631839172816"><strong id="cce_01_0033__cce_01_0028_b16835153671716">OS</strong>: Select an OS for the node to be created.
<p id="cce_01_0033__cce_01_0028_p294212842712">Reinstalling the OS or modifying OS configurations could make the node unavailable. Exercise caution when performing these operations.</p>
</li><li id="cce_01_0033__li1870117486571"><strong id="cce_01_0033__cce_01_0028_b192514299515">System Disk</strong>: Set the system disk space of the worker node. The value ranges from <span id="cce_01_0033__cce_01_0028_text1330724217365">40 GB</span> to 1024 GB. The default value is <span id="cce_01_0033__cce_01_0028_text159658476365">40 GB</span>.<p id="cce_01_0033__cce_01_0028_p1922117423324">By default, system disks support <span id="cce_01_0033__cce_01_0028_text0858201711334">Common I/O (SATA), High I/O (SAS), and Ultra-high I/O (SSD)</span> EVS disks.</p>
<div class="p" id="cce_01_0033__cce_01_0028_p196385417139"><strong id="cce_01_0033__cce_01_0028_b2864728125818">Encryption</strong>: Data disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption function. <strong id="cce_01_0033__cce_01_0028_b134171320013">This function is available only in certain regions.</strong><ul id="cce_01_0033__cce_01_0028_ul6195114261211"><li id="cce_01_0033__cce_01_0028_li5195194211127"><strong id="cce_01_0033__cce_01_0028_b593951417016">Encryption</strong> is not selected by default.</li><li id="cce_01_0033__cce_01_0028_li5195184201217">After you select <strong id="cce_01_0033__cce_01_0028_b135345191605">Encryption</strong>, you can select an existing key in the displayed <strong id="cce_01_0033__cce_01_0028_b125655191803">Encryption Setting</strong> dialog box. If no key is available, click the link next to the drop-down box to create a key. After the key is created, click the refresh icon.</li></ul>
</div>
</li><li id="cce_01_0033__li12223421320"><a name="cce_01_0033__li12223421320"></a><a name="li12223421320"></a><strong id="cce_01_0033__cce_01_0028_b97242441917">Data Disk</strong>: Set the data disk space of the worker node. The value ranges from 100 GB to 32,768 GB. The default value is 100 GB. The EVS disk types provided for the data disk are the same as those for the system disk.<div class="caution" id="cce_01_0033__cce_01_0028_note164211755115812"><span class="cautiontitle"><img src="public_sys-resources/caution_3.0-en-us.png"> </span><div class="cautionbody"><p id="cce_01_0033__cce_01_0028_p342185565817">If the data disk is detached or damaged, the Docker service becomes abnormal and the node becomes unavailable. You are advised not to delete the data disk.</p>
</div></div>
<ul id="cce_01_0033__cce_01_0028_ul295084018332"><li id="cce_01_0033__cce_01_0028_li2512044204319"><strong id="cce_01_0033__cce_01_0028_b1077365616343">LVM</strong>: If this option is selected, CCE data disks are managed by the Logical Volume Manager (LVM). On this condition, you can adjust the disk space allocation for different resources. This option is selected for the first disk by default and cannot be unselected. You can choose to enable or disable LVM for new data disks.<ul id="cce_01_0033__cce_01_0028_ul5276249174617"><li id="cce_01_0033__cce_01_0028_li17431415461">This option is selected by default, indicating that LVM management is enabled.</li><li id="cce_01_0033__cce_01_0028_li129920427482">You can deselect the check box to disable LVM management.<div class="caution" id="cce_01_0033__cce_01_0028_note144345175814"><span class="cautiontitle"><img src="public_sys-resources/caution_3.0-en-us.png"> </span><div class="cautionbody"><ul id="cce_01_0033__cce_01_0028_ul148441643567"><li id="cce_01_0033__cce_01_0028_li1022153213250">Disk space of the data disks managed by LVM will be allocated according to the ratio you set.</li><li id="cce_01_0033__cce_01_0028_li78443455617">When creating a node in a cluster of v1.13.10 or later, if LVM is not selected for a data disk, follow instructions in <a href="cce_01_0344.html">Adding a Second Data Disk to a Node in a CCE Cluster</a> to fill in the pre-installation script and format the data disk. Otherwise, the data disk will still be managed by LVM.</li><li id="cce_01_0033__cce_01_0028_li284412417565">When creating a node in a cluster earlier than v1.13.10, you must format the data disks that are not managed by LVM. Otherwise, either these data disks or the first data disk will be managed by LVM.</li></ul>
</div></div>
</li></ul>
</li><li id="cce_01_0033__cce_01_0028_li13480132822014"><strong id="cce_01_0033__cce_01_0028_b17382155322214">Encryption</strong>: Data disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption function.<div class="p" id="cce_01_0033__cce_01_0028_p59911430112010"><strong id="cce_01_0033__cce_01_0028_b12100193914235">This function is supported only for clusters of v1.13.10 or later in certain regions,</strong> and is not displayed for clusters earlier than v1.13.10.<ul id="cce_01_0033__cce_01_0028_ul1998753519474"><li id="cce_01_0033__cce_01_0028_li3162162914479"><strong id="cce_01_0033__cce_01_0028_b2533191113497">Encryption</strong> is not selected by default.</li><li id="cce_01_0033__cce_01_0028_li169101850184415">After you select <strong id="cce_01_0033__cce_01_0028_b8585114154919">Encryption</strong>, you can select an existing key in the displayed <strong id="cce_01_0033__cce_01_0028_b75851314134919">Encryption Setting</strong> dialog box. If no key is available, click the link next to the drop-down box to create a key. After the key is created, click the refresh icon.</li></ul>
</div>
</li><li id="cce_01_0033__cce_01_0028_li542714325504"><strong id="cce_01_0033__cce_01_0028_b813723085014">Add Data Disk</strong>: Currently, a maximum of two data disks can be attached to a node. After the node is created, you can go to the ECS console to attach more data disks. This function is available only to clusters of certain versions.</li><li id="cce_01_0033__cce_01_0028_li28006368331"><strong id="cce_01_0033__cce_01_0028_b5674125201617">Data disk space allocation</strong>: Click <span><img id="cce_01_0033__cce_01_0028_image331213511271" src="en-us_image_0273156799.png"></span> to specify the resource ratio for <strong id="cce_01_0033__cce_01_0028_b12931104261718">Kubernetes Space</strong> and <strong id="cce_01_0033__cce_01_0028_b9882456172">User Space</strong>. Disk space of the data disks managed by LVM will be allocated according to the ratio you set. This function is available only to clusters of certain versions.<ul id="cce_01_0033__cce_01_0028_ul15312175122713"><li id="cce_01_0033__cce_01_0028_li1312051142712"><strong id="cce_01_0033__cce_01_0028_b154411322111416">Kubernetes Space</strong>: You can specify the ratio of the data disk space for storing Docker and kubelet resources. Docker resources include the Docker working directory, Docker images, and image metadata. kubelet resources include pod configuration files, secrets, and emptyDirs.<p id="cce_01_0033__cce_01_0028_p20126104712595">The Docker space cannot be less than 10%, and the space size cannot be less than 60 GB. The kubelet space cannot be less than 10%.</p>
<p id="cce_01_0033__cce_01_0028_p6174194615594">The Docker space size is determined by your service requirements. For details, see <a href="cce_01_0341.html">Data Disk Space Allocation</a>.</p>
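The sizing rules above (Docker and kubelet each at least 10%, and an absolute 60 GB floor for the Docker space) can be sketched as a small check. This is an illustrative simplification, not part of the CCE console; the function and parameter names are assumptions.

```python
def check_kubernetes_space(disk_size_gb: float, docker_pct: float, kubelet_pct: float) -> list:
    """Validate a Kubernetes-space split against the documented rules.

    docker_pct and kubelet_pct are percentages of the data disk space.
    Returns a list of rule violations; an empty list means the split is acceptable.
    """
    problems = []
    if docker_pct < 10:
        problems.append("Docker space cannot be less than 10%")
    if kubelet_pct < 10:
        problems.append("kubelet space cannot be less than 10%")
    # The absolute size given to Docker must also clear the 60 GB floor.
    if disk_size_gb * docker_pct / 100 < 60:
        problems.append("Docker space size cannot be less than 60 GB")
    return problems

print(check_kubernetes_space(100, 80, 20))  # [] - 80 GB for Docker, 20% for kubelet
print(check_kubernetes_space(100, 50, 10))  # 50 GB for Docker violates the 60 GB floor
```

On a 100 GB data disk, a 50% Docker share fails even though it clears the 10% minimum, because 50 GB is below the 60 GB floor.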
</li><li id="cce_01_0033__cce_01_0028_li3312145116273"><strong id="cce_01_0033__cce_01_0028_b12105194652812">User Space</strong>: You can set the ratio of the disk space that is not allocated to Kubernetes resources and the path to which the user space is mounted.<div class="note" id="cce_01_0033__cce_01_0028_note168401426124510"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_01_0033__cce_01_0028_p732075742919">Note that the mount path cannot be <strong id="cce_01_0033__cce_01_0028_b12678109121212">/</strong>, <strong id="cce_01_0033__cce_01_0028_b286214119126">/home/paas</strong>, <strong id="cce_01_0033__cce_01_0028_b1857531417123">/var/paas</strong>, <strong id="cce_01_0033__cce_01_0028_b750101831214">/var/lib</strong>, <strong id="cce_01_0033__cce_01_0028_b320519229122">/var/script</strong>, <strong id="cce_01_0033__cce_01_0028_b5501152417124">/var/log</strong>, <strong id="cce_01_0033__cce_01_0028_b979114261127">/mnt/paas</strong>, or <strong id="cce_01_0033__cce_01_0028_b2113144491215">/opt/cloud</strong>, and cannot conflict with the system directories (such as <strong id="cce_01_0033__cce_01_0028_b9614105712121">bin</strong>, <strong id="cce_01_0033__cce_01_0028_b139416594126">lib</strong>, <strong id="cce_01_0033__cce_01_0028_b1137591171316">home</strong>, <strong id="cce_01_0033__cce_01_0028_b109825231317">root</strong>, <strong id="cce_01_0033__cce_01_0028_b11543194171312">boot</strong>, <strong id="cce_01_0033__cce_01_0028_b1371816571311">dev</strong>, <strong id="cce_01_0033__cce_01_0028_b19151678139">etc</strong>, <strong id="cce_01_0033__cce_01_0028_b9328181018132">lost+found</strong>, <strong id="cce_01_0033__cce_01_0028_b84469113135">mnt</strong>, <strong id="cce_01_0033__cce_01_0028_b210216136131">proc</strong>, <strong id="cce_01_0033__cce_01_0028_b6971014171310">sbin</strong>, <strong id="cce_01_0033__cce_01_0028_b1611714166139">srv</strong>, <strong 
id="cce_01_0033__cce_01_0028_b387417178138">tmp</strong>, <strong id="cce_01_0033__cce_01_0028_b101401319171317">var</strong>, <strong id="cce_01_0033__cce_01_0028_b33846205138">media</strong>, <strong id="cce_01_0033__cce_01_0028_b248112213133">opt</strong>, <strong id="cce_01_0033__cce_01_0028_b198571622141310">selinux</strong>, <strong id="cce_01_0033__cce_01_0028_b392817279133">sys</strong>, and <strong id="cce_01_0033__cce_01_0028_b9694829151318">usr</strong>). Otherwise, the system or node installation will fail.</p>
</div></div>
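The mount-path restrictions in the note above can be expressed as a short check. The reserved paths and system directory names are taken from this section; the helper itself is an illustrative sketch, not a CCE API.

```python
# User-space mount paths that are explicitly disallowed, per the note above.
RESERVED_PATHS = {
    "/", "/home/paas", "/var/paas", "/var/lib", "/var/script",
    "/var/log", "/mnt/paas", "/opt/cloud",
}
# Top-level system directories the mount path must not conflict with.
SYSTEM_DIRS = {
    "bin", "lib", "home", "root", "boot", "dev", "etc", "lost+found",
    "mnt", "proc", "sbin", "srv", "tmp", "var", "media", "opt",
    "selinux", "sys", "usr",
}

def is_allowed_mount_path(path: str) -> bool:
    """Return True if the user-space mount path avoids all reserved locations."""
    if path in RESERVED_PATHS:
        return False
    # Reject any path whose first component is a system directory, e.g. /tmp/cache.
    top_level = path.lstrip("/").split("/", 1)[0]
    return top_level not in SYSTEM_DIRS

print(is_allowed_mount_path("/data"))     # True
print(is_allowed_mount_path("/var/log"))  # False
```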
</li></ul>
</li></ul>
<div class="p" id="cce_01_0033__cce_01_0028_p19423163173413"><strong id="cce_01_0033__cce_01_0028_b1184048101710">If the cluster version is v1.13.10-r0 or later and the node specification is <span id="cce_01_0033__cce_01_0028_text15505352268">Disk-intensive</span>, the following options are displayed for data disks:</strong><ul id="cce_01_0033__cce_01_0028_ul83121151162719"><li id="cce_01_0033__cce_01_0028_li14312751122720"><strong id="cce_01_0033__cce_01_0028_b52749232337">EVS</strong>: Parameters are the same as those when the node type is not <span id="cce_01_0033__cce_01_0028_text1334715452614">Disk-intensive</span>. For details, see <a href="#cce_01_0033__li12223421320">Data Disk</a> above.</li><li id="cce_01_0033__cce_01_0028_li17312351192713"><strong id="cce_01_0033__cce_01_0028_b1397611819250">Local disk</strong>: Local disks may break down and do not ensure data reliability. It is recommended that you store service data in EVS disks, which are more reliable than local disks.<div class="p" id="cce_01_0033__cce_01_0028_p103124515276">Local disk parameters are as follows:<ul id="cce_01_0033__cce_01_0028_ul731225122719"><li id="cce_01_0033__cce_01_0028_li20312185114276"><strong id="cce_01_0033__cce_01_0028_b12607039194011">Disk Mode</strong>: If the node type is <strong id="cce_01_0033__cce_01_0028_b9607143994010">disk-intensive</strong>, the supported disk mode is HDD.</li><li id="cce_01_0033__cce_01_0028_li2312551172713"><strong id="cce_01_0033__cce_01_0028_b18107122625717">Read/Write Mode</strong>: When multiple local disks exist, you can set the read/write mode. The serial and sequential modes are supported. <strong id="cce_01_0033__cce_01_0028_b1058165413386">Sequential</strong> indicates that data is read and written in linear mode. When a disk is used up, the next disk is used. 
<strong id="cce_01_0033__cce_01_0028_b115014811394">Serial</strong> indicates that data is read and written in striping mode, allowing multiple local disks to be read and written at the same time.</li><li id="cce_01_0033__cce_01_0028_li3312185192713"><strong id="cce_01_0033__cce_01_0028_b11515316378">Kubernetes Space</strong>: You can specify the ratio of the data disk space for storing Docker and kubelet resources. Docker resources include the Docker working directory, Docker images, and image metadata. kubelet resources include pod configuration files, secrets, and emptyDirs.</li><li id="cce_01_0033__cce_01_0028_li1131255118278"><strong id="cce_01_0033__cce_01_0028_b822438863">User Space</strong>: You can set the ratio of the disk space that is not allocated to Kubernetes resources and the path to which the user space is mounted.</li></ul>
</div>
</li></ul>
<div class="notice" id="cce_01_0033__cce_01_0028_note731210512270"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_01_0033__cce_01_0028_ul1631215519271"><li id="cce_01_0033__cce_01_0028_li1731213514278">The ratio of disk space allocated to the Kubernetes space and user space must be equal to 100% in total. You can click <span><img id="cce_01_0033__cce_01_0028_image3376115316309" src="en-us_image_0220702939.png"></span> to refresh the data after you have modified the ratio.</li><li id="cce_01_0033__cce_01_0028_li1931211515272">By default, disks run in the direct-lvm mode. If data disks are removed, the loop-lvm mode will be used and this will impair system stability.</li></ul>
</div></div>
</div>
</li><li id="cce_01_0033__li1488319138297"><strong id="cce_01_0033__cce_01_0028_b480013111544">VPC</strong>: A VPC where the current cluster is located. This parameter cannot be changed and is displayed only for clusters of v1.13.10-r0 or later.</li><li id="cce_01_0033__li796613104535"><strong id="cce_01_0033__cce_01_0028_b95758189385">Subnet</strong>: A subnet improves network security by providing exclusive network resources that are isolated from other networks. You can select any subnet in the cluster VPC. Cluster nodes can belong to different subnets.<p id="cce_01_0033__cce_01_0028_p4796113615141">During the node creation, software packages are downloaded from OBS using the domain name. You need to use a private DNS server to resolve the OBS domain name, and configure the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name.</p>
<p id="cce_01_0033__cce_01_0028_p02618161333"></p>
</li></ul>
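The node name rule stated earlier in this step (1 to 56 characters, starting with a lowercase letter, not ending with a hyphen, and containing only lowercase letters, digits, and hyphens) can be captured by a single regular expression. This is an illustrative sketch; the console performs its own validation.

```python
import re

# One char: a lowercase letter. Longer names: lowercase letter, then up to 54
# characters of [a-z0-9-], then a final [a-z0-9] so the name cannot end in "-".
NODE_NAME_RE = re.compile(r"^[a-z](?:[a-z0-9-]{0,54}[a-z0-9])?$")

def is_valid_node_name(name: str) -> bool:
    """Return True if the name satisfies the documented node name rule."""
    return bool(NODE_NAME_RE.fullmatch(name))

print(is_valid_node_name("gpu-node-01"))  # True
print(is_valid_node_name("Node-01"))      # False: uppercase letter
print(is_valid_node_name("node-"))        # False: trailing hyphen
```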
</p></li><li id="cce_01_0033__li1073152764211"><span><strong id="cce_01_0033__cce_01_0028_b777614111929">EIP</strong>: an independent public IP address. If the nodes to be created require public network access, select <strong id="cce_01_0033__cce_01_0028_b0875102912213">Automatically assign</strong> or <strong id="cce_01_0033__cce_01_0028_b1190393312212">Use existing</strong>. </span><p><div class="p" id="cce_01_0033__cce_01_0028_p132804247236">An EIP bound to the node allows public network access. EIP bandwidth can be modified at any time. An ECS without a bound EIP cannot access the Internet or be accessed by public networks. <ul id="cce_01_0033__cce_01_0028_ul83111819172314"><li id="cce_01_0033__cce_01_0028_li631131920236"><strong id="cce_01_0033__cce_01_0028_b49331471105">Do not use</strong>: A node without an EIP cannot be accessed from public networks. It can be used only as a cloud server for deploying services or clusters on a private network.</li><li id="cce_01_0033__cce_01_0028_li143119192235"><strong id="cce_01_0033__cce_01_0028_b34240613419">Automatically assign</strong>: An EIP with specified configurations is automatically assigned to each node. If the number of EIPs is smaller than the number of nodes, the EIPs are randomly bound to the nodes.<p id="cce_01_0033__cce_01_0028_p18311519152312">Configure the EIP specifications, billing factor, bandwidth type, and bandwidth size as required. When creating an ECS, ensure that the elastic IP address quota is sufficient.</p>
</li><li id="cce_01_0033__cce_01_0028_li103111519182314"><strong id="cce_01_0033__cce_01_0028_b836911281616">Use existing</strong>: Existing EIPs are assigned to the nodes to be created.</li></ul>
<div class="note" id="cce_01_0033__cce_01_0028_note731161915238"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_01_0033__cce_01_0028_p2031191911236">By default, VPC's SNAT feature is disabled for CCE. If SNAT is enabled, you do not need to use EIPs to access public networks. For details about SNAT, see <a href="cce_01_0188.html#cce_01_0188__section1437818291149">Custom Policies</a>.</p>
</div></div>
</div>
</p></li><li id="cce_01_0033__li022815451332"><span><strong id="cce_01_0033__cce_01_0028_b15704748145913">Login Mode</strong>:</span><p><ul id="cce_01_0033__cce_01_0028_ul10227542183218"><li id="cce_01_0033__cce_01_0028_li922784273210"><span class="parmvalue" id="cce_01_0033__cce_01_0028_parmvalue20179883119"><b>Key pair</b></span>: Select the key pair used to log in to the node. You can select a shared key.<p id="cce_01_0033__cce_01_0028_p102267427326">A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click <strong id="cce_01_0033__cce_01_0028_b193902038879">Create a key pair</strong>.</p>
<div class="notice" id="cce_01_0033__cce_01_0028_note1476122284"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_01_0033__cce_01_0028_p37611821585">When creating a node using a key pair, IAM users can select only the key pairs created by their own, regardless of whether these users are in the same group. For example, user B cannot use the key pair created by user A to create a node, and the key pair is not displayed in the drop-down list on the CCE console.</p>
</div></div>
</li></ul>
</p></li><li id="cce_01_0033__li1824844253210"><span><strong id="cce_01_0033__cce_01_0028_b149511434123320">Advanced ECS Settings</strong> (optional): Click <span><img id="cce_01_0033__cce_01_0028_image13227184214322" src="en-us_image_0183134608.png"></span> to show advanced ECS settings.</span><p><ul id="cce_01_0033__cce_01_0028_ul124844233220"><li id="cce_01_0033__cce_01_0028_li17938610694"><strong id="cce_01_0033__cce_01_0028_b310182353710">ECS Group</strong>: An ECS group logically groups ECSs. The ECSs in the same ECS group comply with the same policy associated with the ECS group.<ul id="cce_01_0033__cce_01_0028_ul127311142095"><li id="cce_01_0033__cce_01_0028_li13731314598"><strong id="cce_01_0033__cce_01_0028_b68267735716">Anti-affinity</strong>: ECSs in an ECS group are deployed on different physical hosts to improve service reliability.</li></ul>
<p id="cce_01_0033__cce_01_0028_p7755165819820">Select an existing ECS group, or click <span class="uicontrol" id="cce_01_0033__cce_01_0028_uicontrol1048116461867"><b>Create ECS Group</b></span> to create one. After the ECS group is created, click the refresh button.</p>
</li><li id="cce_01_0033__cce_01_0028_li202301642143216"><strong id="cce_01_0033__cce_01_0028_b111489416348">Resource Tags</strong>: By adding tags to resources, you can classify resources.<p id="cce_01_0033__cce_01_0028_p10327184710426">You can create predefined tags in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use predefined tags to improve tag creation and migration efficiency. </p>
<p id="cce_01_0033__cce_01_0028_p2939181144320">CCE will automatically create the "CCE-Dynamic-Provisioning-Node=node id" tag. A maximum of 5 tags can be added.</p>
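As a sketch of the tagging behavior described above: the automatic tag key comes from this section, while the merge helper, its name, and the assumption that the 5-tag limit applies to user-supplied tags are illustrative only.

```python
AUTO_TAG_KEY = "CCE-Dynamic-Provisioning-Node"  # tag key added automatically by CCE

def merge_node_tags(user_tags: dict, node_id: str) -> dict:
    """Combine user tags with the automatic CCE tag, enforcing the 5-tag limit."""
    if len(user_tags) > 5:
        raise ValueError("a maximum of 5 tags can be added")
    tags = dict(user_tags)
    tags[AUTO_TAG_KEY] = node_id  # value is the node ID
    return tags

print(merge_node_tags({"env": "prod", "team": "payments"}, "node-123"))
```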
</li><li id="cce_01_0033__cce_01_0028_li11472519267"><strong id="cce_01_0033__cce_01_0028_b2837052124813">Agency</strong>: An agency is created by a tenant administrator on the IAM console. By creating an agency, you can share your cloud server resources with another account, or entrust a more professional person or team to manage your resources. To authorize an ECS or BMS to call cloud services, select <strong id="cce_01_0033__cce_01_0028_b7449141823516">Cloud service</strong> as the agency type, click <strong id="cce_01_0033__cce_01_0028_b1528322212357">Select</strong>, and then select <strong id="cce_01_0033__cce_01_0028_b1112992803513">ECS BMS</strong>.</li><li id="cce_01_0033__cce_01_0028_li623512421327"><strong id="cce_01_0033__cce_01_0028_b14234183363514">Pre-installation Script</strong>: Enter a maximum of 1,000 characters.<p id="cce_01_0033__cce_01_0028_p03368579295">The script will be executed before Kubernetes software is installed. Note that if the script is incorrect, Kubernetes software may fail to be installed. The script is usually used to format data disks.</p>
</li><li id="cce_01_0033__cce_01_0028_li824516422326"><strong id="cce_01_0033__cce_01_0028_b61796481356">Post-installation Script</strong>: Enter a maximum of 1,000 characters.<p id="cce_01_0033__cce_01_0028_p121041911114618">The script will be executed after Kubernetes software is installed and will not affect the installation. The script is usually used to modify Docker parameters.</p>
</li><li id="cce_01_0033__cce_01_0028_li524744213216"><strong id="cce_01_0033__cce_01_0028_b8454171752510">Subnet IP Address</strong>: Select <strong id="cce_01_0033__cce_01_0028_b4454121712258">Automatically assign IP address</strong> (recommended) or <strong id="cce_01_0033__cce_01_0028_b1454517102518">Manually assigning IP addresses</strong>.</li></ul>
</p></li><li id="cce_01_0033__li127380212350"><span><strong id="cce_01_0033__cce_01_0028_b15996143333619">Advanced Kubernetes Settings</strong>: (Optional) Click <span><img id="cce_01_0033__cce_01_0028_image024894211324" src="en-us_image_0183134479.png"></span> to show advanced cluster settings.</span><p><ul id="cce_01_0033__cce_01_0028_ul62521142103220"><li id="cce_01_0033__cce_01_0028_li1824984273218"><strong id="cce_01_0033__cce_01_0028_b182981344142016">Max Pods</strong>: maximum number of pods that can be created on a node, including the system's default pods. If the cluster uses the <strong id="cce_01_0033__cce_01_0028_b18337195111520">VPC network model</strong>, the maximum value is determined by the number of IP addresses that can be allocated to containers on each node.<p id="cce_01_0033__cce_01_0028_p1867372514496">This limit prevents the node from being overloaded by managing too many pods. For details, see <a href="cce_01_0348.html">Maximum Number of Pods That Can Be Created on a Node</a>.</p>
</li><li id="cce_01_0033__cce_01_0028_li13590205314442"><strong id="cce_01_0033__cce_01_0028_b153416485118">Maximum Data Space per Container</strong>: maximum data space that can be used by a container. The value ranges from 10 GB to 500 GB. If the value of this field is larger than the data disk space allocated to Docker resources, the latter will override the value specified here. Typically, 90% of the data disk space is allocated to Docker resources. This parameter is displayed only for clusters of v1.13.10-r0 and later.</li></ul>
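Both settings above follow the same "smaller value wins" pattern: in the VPC network model the pod count is capped by the container IP addresses available on the node, and the per-container data space is capped by the disk space allocated to Docker resources. A minimal sketch (function and parameter names are illustrative):

```python
def effective_max_pods(configured_max: int, container_ips_per_node: int) -> int:
    """In the VPC network model, the per-node pod limit is capped by the
    number of IP addresses the node can allocate to containers."""
    return min(configured_max, container_ips_per_node)

def effective_container_space(requested_gb: int, docker_space_gb: int) -> int:
    """The data space per container cannot exceed the disk space allocated
    to Docker resources; the smaller value takes effect."""
    return min(requested_gb, docker_space_gb)

print(effective_max_pods(110, 64))         # 64: IP capacity caps the configured limit
print(effective_container_space(500, 90))  # 90: the Docker space allocation overrides
```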
</p></li><li id="cce_01_0033__li13521229144413"><span><strong id="cce_01_0033__cce_01_0028_b73262163215">Nodes</strong>: The value cannot exceed the management scale you select when configuring cluster parameters. Set this parameter based on service requirements and the remaining quota displayed on the page. Click <span><img id="cce_01_0033__cce_01_0028_image9405213153219" src="en-us_image_0250508826.png"></span> to view the factors that affect the number of nodes to be added (depending on the factor with the minimum value). </span></li><li id="cce_01_0033__li18470101114216"><span>Click <strong id="cce_01_0033__b53783339427">Next: Confirm</strong>. After confirming that the configuration is correct, click <strong id="cce_01_0033__b142531613431">Submit</strong>.</span><p><div class="p" id="cce_01_0033__p1582601641210">The node list page is displayed. If the node status is <strong id="cce_01_0033__b40028583492531">Available</strong>, the node is added successfully. It takes about 6 to 10 minutes to create a node.<div class="note" id="cce_01_0033__note532851320120"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_01_0033__ul5328113181214"><li id="cce_01_0033__li15267150143511">Do not delete the security groups and related rules automatically configured during cluster creation. Otherwise, the cluster will exhibit unexpected behavior.</li></ul>
</div></div>
</div>
</p></li><li id="cce_01_0033__li8393044161317"><span>Click <strong id="cce_01_0033__b14948810192019">Back to Node List</strong>. The node has been created successfully if it changes to the <strong id="cce_01_0033__b8507105174319">Available</strong> state.</span><p><div class="note" id="cce_01_0033__note96535331218"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_01_0033__p454185814155">The allocatable resources are calculated based on the resource request value (<strong id="cce_01_0033__b15693124320235">Request</strong>), which indicates the upper limit of resources that can be requested by pods on this node, but does not indicate the actual available resources of the node.</p>
<p id="cce_01_0033__p73492457214">The calculation formula is as follows:</p>
<ul id="cce_01_0033__ul259653921"><li id="cce_01_0033__li1259253828">Allocatable CPUs = Total CPUs – Requested CPUs of all pods – Reserved CPUs for other resources</li><li id="cce_01_0033__li15913539216">Allocatable memory = Total memory – Requested memory of all pods – Reserved memory for other resources</li></ul>
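The formulas above can be applied directly. The numbers in this sketch are made-up examples, not values from any real node:

```python
def allocatable(total: float, requested_by_pods: float, reserved: float) -> float:
    """Allocatable resource = total - requested by all pods - reserved for
    other resources, per the formulas above."""
    return total - requested_by_pods - reserved

# Example: a hypothetical 8-vCPU, 16 GiB node.
print(allocatable(8, 3.5, 0.5))  # allocatable CPUs
print(allocatable(16, 6, 2))     # allocatable memory in GiB
```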
</div></div>
</p></li></ol>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0183.html">Nodes</a></div>
</div>
</div>
<a name="cce_01_0035"></a><a name="cce_01_0035"></a>
<h1 class="topictitle1">Node Pools</h1>
<div id="body1564122277019"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_01_0081.html">Node Pool Overview</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0012.html">Creating a Node Pool</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0222.html">Managing a Node Pool</a></strong><br>
</li>
</ul>
</div>
<a name="cce_01_0036"></a><a name="cce_01_0036"></a>
<h1 class="topictitle1">Stopping a Node</h1>
<div id="body1564130562761"><div class="section" id="cce_01_0036__section127213017388"><h4 class="sectiontitle">Scenario</h4><p id="cce_01_0036__p866311509249">After a node in the cluster is stopped, services on the node are also stopped. Before stopping a node, ensure that the interruption of the services running on it will not cause adverse impacts.</p>
</div>
<div class="section" id="cce_01_0036__section1489437103610"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_01_0036__ul0917755162415"><li id="cce_01_0036__li1891719552246">Stopping a node will lead to pod migration, which may affect services. Therefore, stop nodes during off-peak hours.</li><li id="cce_01_0036__li791875552416">Unexpected risks may occur when a node is stopped. Back up related data in advance.</li><li id="cce_01_0036__li15918105582417">While the node is being stopped, the backend will set the node to the unschedulable state.</li><li id="cce_01_0036__li12918145520241">Only worker nodes can be stopped.</li></ul>
|
||||
</div>
|
||||
||||
<div class="section" id="cce_01_0036__section14341135612442"><h4 class="sectiontitle">Procedure</h4><ol id="cce_01_0036__ol5687174923613"><li id="cce_01_0036__li133915311359"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0036__b1182817351012">Resource Management</strong> > <strong id="cce_01_0036__b12834113513018">Nodes</strong>.</span></li><li id="cce_01_0036__li6687049203616"><span>In the node list, click the name of the node to be stopped.</span></li><li id="cce_01_0036__li1528433717347"><span>On the node details page displayed, click the node name to go to the ECS details page.</span><p><div class="fignone" id="cce_01_0036__fig781172715419"><span class="figcap"><b>Figure 1 </b>Node details page</span><br><span><img id="cce_01_0036__image2788981136" src="en-us_image_0000001190302087.png"></span></div>
|
||||
</p></li><li id="cce_01_0036__li117301253183717"><span>In the upper right corner of the ECS details page, click <span class="uicontrol" id="cce_01_0036__uicontrol12772142432"><b>Stop</b></span>. In the <strong id="cce_01_0036__b14681416411">Stop ECS</strong> dialog box, click <span class="uicontrol" id="cce_01_0036__uicontrol4670148204310"><b>Yes</b></span>.</span><p><div class="fignone" id="cce_01_0036__fig19269101385311"><span class="figcap"><b>Figure 2 </b>ECS details page</span><br><span><img id="cce_01_0036__image6847636155" src="en-us_image_0000001144342232.png"></span></div>
|
||||
</p></li></ol>
|
||||
</div>
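<p>The "unschedulable" state mentioned above is the standard Kubernetes cordon flag on the node object, equivalent to running <code>kubectl cordon &lt;node-name&gt;</code>. A minimal sketch of the resulting node spec (the node name below is a placeholder, not a value from this document):</p>

```yaml
# Equivalent to `kubectl cordon <node-name>`; "my-worker-node" is a placeholder.
apiVersion: v1
kind: Node
metadata:
  name: my-worker-node
spec:
  unschedulable: true   # new pods will not be scheduled onto this node
```

<p>Pods already running on the node are not evicted by this flag alone; it only prevents new pods from being scheduled there.</p>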
|
||||
</div>
|
||||
<div>
|
||||
<div class="familylinks">
|
||||
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0183.html">Nodes</a></div>
|
||||
</div>
|
||||
</div>
|
||||
|
@@ -1,28 +0,0 @@
|
||||
<a name="cce_01_0042"></a><a name="cce_01_0042"></a>
|
||||
|
||||
<h1 class="topictitle1">Storage (CSI)</h1>
|
||||
<div id="body8662426"><p id="cce_01_0042__en-us_topic_0000001244261047_p8060118"></p>
|
||||
</div>
|
||||
<div>
|
||||
<ul class="ullinks">
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0307.html">Overview</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0053.html">Using Local Disks as Storage Volumes</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0379.html">PersistentVolumes (PVs)</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0378.html">PersistentVolumeClaims (PVCs)</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0380.html">StorageClass</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0211.html">Snapshots and Backups</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0336.html">Using a Custom AK/SK to Mount an OBS Volume</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0337.html">Setting Mount Options</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0393.html">Deployment Examples</a></strong><br>
|
||||
</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
@@ -1,18 +0,0 @@
|
||||
<a name="cce_01_0044"></a><a name="cce_01_0044"></a>
|
||||
|
||||
<h1 class="topictitle1">EVS Volumes</h1>
|
||||
<div id="body0000001365917780"><p id="cce_01_0044__p8060118"></p>
|
||||
</div>
|
||||
<div>
|
||||
<ul class="ullinks">
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0254.html">Using EVS Volumes</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0257.html">Creating a Pod Mounted with an EVS Volume</a></strong><br>
|
||||
</li>
|
||||
</ul>
|
||||
|
||||
<div class="familylinks">
|
||||
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0393.html">Deployment Examples</a></div>
|
||||
</div>
|
||||
</div>
|
||||
|
@@ -1,19 +0,0 @@
|
||||
<a name="cce_01_0045"></a><a name="cce_01_0045"></a>
|
||||
|
||||
<h1 class="topictitle1">Configuration Center</h1>
|
||||
<div id="body1507606688948"></div>
|
||||
<div>
|
||||
<ul class="ullinks">
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0152.html">Creating a ConfigMap</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0015.html">Using a ConfigMap</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0153.html">Creating a Secret</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0016.html">Using a Secret</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0388.html">Cluster Secrets</a></strong><br>
|
||||
</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
@@ -1,29 +0,0 @@
|
||||
<a name="cce_01_0046"></a><a name="cce_01_0046"></a>
|
||||
|
||||
<h1 class="topictitle1">Workloads</h1>
|
||||
<div id="body1508729244098"></div>
|
||||
<div>
|
||||
<ul class="ullinks">
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0006.html">Overview</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0047.html">Creating a Deployment</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0048.html">Creating a StatefulSet</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0216.html">Creating a DaemonSet</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0150.html">Creating a Job</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0151.html">Creating a Cron Job</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0013.html">Managing Pods</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0007.html">Managing Workloads and Jobs</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0057.html">Scaling a Workload</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0130.html">Configuring a Container</a></strong><br>
|
||||
</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,29 +0,0 @@
|
||||
<a name="cce_01_0051"></a><a name="cce_01_0051"></a>
|
||||
|
||||
<h1 class="topictitle1">Scheduling Policy Overview</h1>
|
||||
<div id="body1508311265137"><div class="section" id="cce_01_0051__section2169530191212"><h4 class="sectiontitle">Custom Scheduling Policies</h4><p id="cce_01_0051__p32318416269">You can configure node affinity, workload affinity, and workload anti-affinity in custom scheduling policies.</p>
|
||||
<ul id="cce_01_0051__ul79623122811"><li id="cce_01_0051__li29627118283"><a href="cce_01_0232.html">Node Affinity</a></li><li id="cce_01_0051__li1196251142818"><a href="cce_01_0233.html">Workload Affinity</a></li><li id="cce_01_0051__li129621913286"><a href="cce_01_0234.html">Workload Anti-Affinity</a></li></ul>
|
||||
<div class="note" id="cce_01_0051__note20751102210133"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_01_0051__p6751922131312">Custom scheduling policies depend on node labels and pod labels. You can use default labels or customize labels as required.</p>
|
||||
</div></div>
|
||||
</div>
|
||||
<div class="section" id="cce_01_0051__section10806164114720"><h4 class="sectiontitle"><span class="keyword" id="cce_01_0051__keyword156029512711">Simple Scheduling Policies</span></h4><p id="cce_01_0051__p9438052131112">A simple scheduling policy allows you to configure affinity between workloads and AZs, between workloads and nodes, and between workloads.</p>
|
||||
</div>
|
||||
<ul id="cce_01_0051__ul18117421673"><li id="cce_01_0051__li299944111719"><strong id="cce_01_0051__b1821161819367"><span class="keyword" id="cce_01_0051__keyword799915410710">Workload-AZ affinity</span></strong>: Multiple AZ-based scheduling policies (including affinity and anti-affinity policies) can be configured. However, scheduling is performed as long as one of the scheduling policies is met.<ul id="cce_01_0051__ul49994411177"><li id="cce_01_0051__li139999411474"><strong id="cce_01_0051__b480716381254">Affinity between workloads and AZs</strong>: <a href="cce_01_0228.html">Workload-AZ Affinity</a></li><li id="cce_01_0051__li699984114710"><strong id="cce_01_0051__b63931301060">Anti-affinity between workloads and AZs</strong>: <a href="cce_01_0229.html">Workload-AZ Anti-Affinity</a></li></ul>
|
||||
</li><li id="cce_01_0051__li189991416718"><strong id="cce_01_0051__b47342033183718"><span class="keyword" id="cce_01_0051__keyword999911419715">Workload-node affinity</span></strong>: Multiple node-based scheduling policies (including affinity and anti-affinity scheduling) can be configured. However, scheduling is performed as long as one of the scheduling policies is met. For example, if a cluster contains nodes A, B, and C and two scheduling policies are set (one policy defines node A as an affinity node and the other policy defines node B as an anti-affinity node), then the workload can be scheduled to any node other than B.<ul id="cce_01_0051__ul139991541672"><li id="cce_01_0051__li79994414710"><strong id="cce_01_0051__b665191917610">Affinity between workloads and nodes</strong>: <a href="cce_01_0225.html">Workload-Node Affinity</a></li><li id="cce_01_0051__li209991441278"><strong id="cce_01_0051__b885015376617">Anti-affinity between workloads and nodes</strong>: <a href="cce_01_0226.html">Workload-Node Anti-Affinity</a></li></ul>
|
||||
</li><li id="cce_01_0051__li10174219717"><strong id="cce_01_0051__b063422094317"><span class="keyword" id="cce_01_0051__keyword15999144118713">Workload-workload affinity</span></strong>: Multiple workload-based scheduling policies can be configured, but the labels in these policies must belong to the same workload.<ul id="cce_01_0051__ul20442773"><li id="cce_01_0051__li969982310122"><strong id="cce_01_0051__b1188711210372">Affinity between workloads</strong>: For details, see <a href="cce_01_0220.html">Workload-Workload Affinity</a>. You can deploy workloads on the same node to reduce consumption of network resources.<div class="p" id="cce_01_0051__p143703252129"><a href="#cce_01_0051__fig3017424713">Figure 1</a> shows an example of affinity deployment, in which all workloads are deployed on the same node.<div class="fignone" id="cce_01_0051__fig3017424713"><a name="cce_01_0051__fig3017424713"></a><a name="fig3017424713"></a><span class="figcap"><b>Figure 1 </b>Affinity between workloads</span><br><span><img id="cce_01_0051__image1681212182717" src="en-us_image_0165899095.png"></span></div>
|
||||
</div>
|
||||
</li><li id="cce_01_0051__li165871616121216"><strong id="cce_01_0051__b3951358165514">Anti-affinity between workloads</strong>: For details, see <a href="cce_01_0227.html">Workload-Workload Anti-Affinity</a>. Constraining multiple instances of the same workload from being deployed on the same node reduces the impact of system breakdowns. Anti-affinity deployment is also recommended for workloads that may interfere with each other.<div class="p" id="cce_01_0051__p20930162019121"><a href="#cce_01_0051__fig1505421971">Figure 2</a> shows an example of anti-affinity deployment, in which four workloads are deployed on four different nodes.<div class="fignone" id="cce_01_0051__fig1505421971"><a name="cce_01_0051__fig1505421971"></a><a name="fig1505421971"></a><span class="figcap"><b>Figure 2 </b>Anti-affinity between workloads</span><br><span><img id="cce_01_0051__image521533119278" src="en-us_image_0165899282.png"></span></div>
|
||||
</div>
|
||||
</li></ul>
|
||||
</li></ul>
|
||||
<div class="notice" id="cce_01_0051__note1899711411179"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_01_0051__p1584112714376">When setting workload-workload affinity and workload-node affinity, ensure that the affinity relationships do not contradict each other; otherwise, workload deployment will fail. </p>
|
||||
<p id="cce_01_0051__p59973411714">For example, Workload 3 will fail to be deployed when the following conditions are met:</p>
|
||||
<ul id="cce_01_0051__ul159971841573"><li id="cce_01_0051__li599712411278">Anti-affinity is configured for Workload 1 and Workload 2. Workload 1 is deployed on <strong id="cce_01_0051__b1334142191413">Node A</strong> and Workload 2 is deployed on <strong id="cce_01_0051__b1582394618141">Node B</strong>.</li><li id="cce_01_0051__li15997641772">Affinity is configured between Workload 2 and Workload 3, but the target node on which Workload 3 is to be deployed is <strong id="cce_01_0051__b080052872720">Node C</strong> or <strong id="cce_01_0051__b7805728172715">Node A</strong>.</li></ul>
|
||||
</div></div>
|
||||
</div>
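<p>Under the hood, the workload-workload affinity and anti-affinity described above map to the standard Kubernetes <code>podAffinity</code>/<code>podAntiAffinity</code> fields. A minimal sketch of the anti-affinity case (one replica per node); the Deployment name, labels, and image are illustrative assumptions, not values from this document:</p>

```yaml
# Illustrative only: "frontend", its labels, and the image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 4
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: no two pods with app=frontend may share a node.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: frontend
            topologyKey: kubernetes.io/hostname
      containers:
      - name: frontend
        image: nginx:alpine
```

<p>Swapping <code>podAntiAffinity</code> for <code>podAffinity</code> yields the co-location case; using a zone label (for example, <code>topology.kubernetes.io/zone</code>) as the <code>topologyKey</code> yields AZ-level rather than node-level placement.</p>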
|
||||
<div>
|
||||
<div class="familylinks">
|
||||
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0149.html">Affinity and Anti-Affinity Scheduling</a></div>
|
||||
</div>
|
||||
</div>
|
||||
|
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,25 +0,0 @@
|
||||
<a name="cce_01_0063"></a><a name="cce_01_0063"></a>
|
||||
|
||||
<h1 class="topictitle1">Managing Node Scaling Policies</h1>
|
||||
<div id="body8662426"><div class="section" id="cce_01_0063__section127666327248"><h4 class="sectiontitle">Scenario</h4><p id="cce_01_0063__p192873216229">After a node scaling policy is created, you can delete, edit, disable, enable, or clone the policy.</p>
|
||||
</div>
|
||||
<div class="section" id="cce_01_0063__section102878407207"><h4 class="sectiontitle">Viewing a Node Scaling Policy</h4><p id="cce_01_0063__p713741135215">You can view the associated node pool, rules, and scaling history of a node scaling policy and rectify faults according to the error information displayed.</p>
|
||||
<ol id="cce_01_0063__ol17409123885219"><li id="cce_01_0063__li4409153817525"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0063__b111931432181510">Auto Scaling</strong>. On the <strong id="cce_01_0063__b5194832101513">Node Scaling</strong> tab page, click <span><img id="cce_01_0063__image1569143785619" src="en-us_image_0254986677.png"></span> in front of the policy to be viewed.</span></li><li id="cce_01_0063__li641003813527"><span>In the expanded area, the <span class="uicontrol" id="cce_01_0063__uicontrol864413924614"><b>Associated Node Pool</b></span>, <span class="uicontrol" id="cce_01_0063__uicontrol1164419910465"><b>Execution Rules</b></span>, and <span class="uicontrol" id="cce_01_0063__uicontrol1964516974613"><b>Scaling Records</b></span> tab pages are displayed. If the policy is abnormal, locate and rectify the fault based on the error information.</span><p><div class="note" id="cce_01_0063__note13404926203311"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_01_0063__p2404132612336">You can also enable or disable auto scaling in <strong id="cce_01_0063__b1448235514169">Node Pools</strong>. Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0063__b1736151512172">Resource Management</strong> > <strong id="cce_01_0063__b144212018141718">Node Pools</strong>, and click <strong id="cce_01_0063__b55801823151717">Edit</strong> in the upper right corner of the node pool to be operated. In the <strong id="cce_01_0063__b69381433101711">Edit Node Pool</strong> dialog box displayed, you can enable <strong id="cce_01_0063__b1349019406176">Autoscaler</strong> and set the limits of the number of nodes.</p>
|
||||
</div></div>
|
||||
</p></li></ol>
|
||||
</div>
|
||||
<div class="section" id="cce_01_0063__section128584032017"><h4 class="sectiontitle">Deleting a Node Scaling Policy</h4><ol id="cce_01_0063__ol14644105712488"><li id="cce_01_0063__li2619151017014"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0063__b10225172110213">Auto Scaling</strong>. On the <strong id="cce_01_0063__b1949982762116">Node Scaling</strong> tab page, click <strong id="cce_01_0063__b134234323212">Delete</strong> in the <strong id="cce_01_0063__b47420381219">Operation</strong> column of the policy to be deleted.</span></li><li id="cce_01_0063__li19809141991015"><span>In the <span class="wintitle" id="cce_01_0063__wintitle1627710616488"><b>Delete Node Policy</b></span> dialog box displayed, confirm whether to delete the policy.</span></li><li id="cce_01_0063__li71817016278"><span>Enter <strong id="cce_01_0063__b291658184817">DELETE</strong> in the text box.</span></li><li id="cce_01_0063__li1340513385528"><span>Click <span class="uicontrol" id="cce_01_0063__uicontrol11111121018481"><b>OK</b></span> to delete the policy.</span></li></ol>
|
||||
</div>
|
||||
<div class="section" id="cce_01_0063__section5652756162214"><h4 class="sectiontitle">Editing a Node Scaling Policy</h4><ol id="cce_01_0063__ol067875612225"><li id="cce_01_0063__li1678156182213"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0063__b1117410505211">Auto Scaling</strong>. On the <strong id="cce_01_0063__b111759508210">Node Scaling</strong> tab page, click <strong id="cce_01_0063__b1217617503218">Edit</strong> in the <strong id="cce_01_0063__b20177450112112">Operation</strong> column of the policy.</span></li><li id="cce_01_0063__li56781856152211"><span>On the <span class="uicontrol" id="cce_01_0063__uicontrol7933134119486"><b>Edit Node Scaling Policy</b></span> page displayed, modify policy parameter values listed in <a href="cce_01_0209.html#cce_01_0209__table18763092201">Table 1</a>.</span></li><li id="cce_01_0063__li86781756112220"><span>After the configuration is complete, click <span class="uicontrol" id="cce_01_0063__uicontrol07463587480"><b>OK</b></span>.</span></li></ol>
|
||||
</div>
|
||||
<div class="section" id="cce_01_0063__section367810565223"><h4 class="sectiontitle">Cloning a Node Scaling Policy</h4><ol id="cce_01_0063__ol1283103252519"><li id="cce_01_0063__li1383103210258"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0063__b173376254223">Auto Scaling</strong>. On the <strong id="cce_01_0063__b7337112518226">Node Scaling</strong> tab page, click <strong id="cce_01_0063__b39590354227">More</strong> > <strong id="cce_01_0063__b93371825192217">Clone</strong> in the <strong id="cce_01_0063__b3338132519222">Operation</strong> column of the policy.</span></li><li id="cce_01_0063__li128363212514"><span>On the <span class="uicontrol" id="cce_01_0063__uicontrol162071440144911"><b>Create Node Scaling Policy</b></span> page displayed, certain parameters have been cloned. Add or modify other policy parameters based on service requirements.</span></li><li id="cce_01_0063__li383732172512"><span>Click <span class="uicontrol" id="cce_01_0063__uicontrol1449685524914"><b>Create Now</b></span> to clone the policy. The cloned policy is displayed in the policy list on the <span class="uicontrol" id="cce_01_0063__uicontrol15497195512498"><b>Node Scaling</b></span> tab page.</span></li></ol>
|
||||
</div>
|
||||
<div class="section" id="cce_01_0063__section4771832152513"><h4 class="sectiontitle">Enabling or Disabling a Node Scaling Policy</h4><ol id="cce_01_0063__ol0843321258"><li id="cce_01_0063__li38373213252"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0063__b6609121017289">Auto Scaling</strong>. On the <strong id="cce_01_0063__b1295315159281">Node Scaling</strong> tab page, click <strong id="cce_01_0063__b139831120132810">More</strong> > <strong id="cce_01_0063__b4864102812818">Disable</strong> or <strong id="cce_01_0063__b5842183217284">Enable</strong> in the <strong id="cce_01_0063__b181171538202812">Operation</strong> column of the policy.</span></li><li id="cce_01_0063__li78473252510"><span>In the dialog box displayed, confirm whether to disable or enable the node policy.</span></li><li id="cce_01_0063__li1384163216254"><span>Click <span class="uicontrol" id="cce_01_0063__uicontrol204341433135011"><b>Yes</b></span>. The policy status is displayed in the node scaling list.</span></li></ol>
|
||||
</div>
|
||||
</div>
|
||||
<div>
|
||||
<div class="familylinks">
|
||||
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0291.html">Scaling a Cluster/Node</a></div>
|
||||
</div>
|
||||
</div>
|
||||
|
@@ -1,23 +0,0 @@
|
||||
<a name="cce_01_0064"></a><a name="cce_01_0064"></a>
|
||||
|
||||
<h1 class="topictitle1">Add-ons</h1>
|
||||
<div id="body1529577481025"></div>
|
||||
<div>
|
||||
<ul class="ullinks">
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0277.html">Overview</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0129.html">coredns (System Resource Add-on, Mandatory)</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0127.html">storage-driver (System Resource Add-on, Mandatory)</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0066.html">everest (System Resource Add-on, Mandatory)</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0154.html">autoscaler</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0205.html">metrics-server</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_01_0141.html">gpu-beta</a></strong><br>
|
||||
</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
@@ -1,25 +0,0 @@
|
||||
<a name="cce_01_0066"></a><a name="cce_01_0066"></a>
|
||||
|
||||
<h1 class="topictitle1">everest (System Resource Add-on, Mandatory)</h1>
|
||||
<div id="body1529577481025"><div class="section" id="cce_01_0066__section25311744154917"><h4 class="sectiontitle">Introduction</h4><p id="cce_01_0066__p728554610430">Everest is a cloud-native container storage system. Based on Container Storage Interface (CSI), clusters of Kubernetes v1.15 or later can interconnect with cloud storage services such as EVS, OBS, SFS, and SFS Turbo.</p>
|
||||
<p id="cce_01_0066__p17820349194112"><strong id="cce_01_0066__b14107173616557">everest is a system resource add-on. It is installed by default when a cluster of Kubernetes v1.15 or later is created.</strong></p>
|
||||
</div>
|
||||
<div class="section" id="cce_01_0066__section202191122814"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_01_0066__ul39998883313"><li id="cce_01_0066__li1347222711015">If your cluster is upgraded from v1.13 to v1.15, <a href="cce_01_0127.html">storage-driver</a> is replaced by everest (v1.1.6 or later) for container storage. The takeover does not affect the original storage functions. For details about CSI and FlexVolume, see <a href="cce_01_0306.html#cce_01_0306__section86752053123513">Differences Between CSI and FlexVolume Plug-ins</a>.</li><li id="cce_01_0066__li3787162513612">In version 1.2.0 of the everest add-on, <strong id="cce_01_0066__b51541345758">key authentication</strong> is optimized when OBS is used. After the everest add-on is upgraded from a version earlier than 1.2.0, you need to restart all workloads that use OBS in the cluster. Otherwise, workloads may not be able to use OBS.</li><li id="cce_01_0066__li139991585338">By default, this add-on is installed in <strong id="cce_01_0066__b3236525404">clusters of v1.15 and later</strong>. For clusters of v1.13 and earlier, the <a href="cce_01_0127.html">storage-driver</a> add-on is installed by default.</li></ul>
|
||||
</div>
|
||||
<div class="section" id="cce_01_0066__section168341157155317"><h4 class="sectiontitle">Installing the Add-on</h4><p id="cce_01_0066__p11695354471">This add-on has been installed by default. If it is uninstalled due to some reasons, you can reinstall it by performing the following steps:</p>
|
||||
<ol id="cce_01_0066__ol9183433182510"><li id="cce_01_0066__li13183153352515"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0066__b131161714104112">Add-ons</strong>. On the <strong id="cce_01_0066__b161241014184112">Add-on Marketplace</strong> tab page, click <strong id="cce_01_0066__b1812616142410">Install Add-on</strong> under <strong id="cce_01_0066__b112741411416">everest</strong>.</span></li><li id="cce_01_0066__li65653111558"><span>On the <strong id="cce_01_0066__b1580216443408">Install Add-on</strong> page, select the cluster and the add-on version, and click <strong id="cce_01_0066__b08021644114014">Next: Configuration</strong>.</span></li><li id="cce_01_0066__li9455819152615"><span>Select <strong id="cce_01_0066__b32409326315">Single</strong> or <strong id="cce_01_0066__b102425321531">HA</strong> for <strong id="cce_01_0066__b724314322310">Add-on Specifications</strong>, and click <strong id="cce_01_0066__b172441032535">Install</strong>.</span><p><p id="cce_01_0066__p187721502282">After the add-on is installed, click <strong id="cce_01_0066__b1065713118572">Go Back to Previous Page</strong>. On the <strong id="cce_01_0066__b18657911125715">Add-on Instance</strong> tab page, select the corresponding cluster to view the running instance. This indicates that the add-on has been installed on each node in the cluster.</p>
|
||||
</p></li></ol>
|
||||
</div>
|
||||
<div class="section" id="cce_01_0066__section414918421496"><h4 class="sectiontitle">Upgrading the Add-on</h4><ol id="cce_01_0066__ol255316335402"><li id="cce_01_0066__li818142512414"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0066__b1314382919416">Add-ons</strong>. On the <strong id="cce_01_0066__b8149102984113">Add-on Instance</strong> tab page, click <strong id="cce_01_0066__b1814962964118">Upgrade</strong> under <strong id="cce_01_0066__b18150112916410">everest</strong>.</span><p><div class="note" id="cce_01_0066__note1625210332283"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_01_0066__ul163002418296"><li id="cce_01_0066__li33006482916">If the <strong id="cce_01_0066__b412131285913">Upgrade</strong> button is unavailable, the current add-on is already up-to-date and no upgrade is required.</li><li id="cce_01_0066__li53016432920">When the upgrade is complete, the original everest version on cluster nodes will be replaced by the latest version.</li></ul>
|
||||
</div></div>
|
||||
</p></li><li id="cce_01_0066__li11556163354015"><span>On the <strong id="cce_01_0066__b111255134617">Basic Information</strong> page, select the add-on version and click <strong id="cce_01_0066__b622165574615">Next</strong>.</span></li><li id="cce_01_0066__li132321726191110"><span>Select <strong id="cce_01_0066__b141941316548">Single</strong> or <strong id="cce_01_0066__b16194916741">HA</strong> for <strong id="cce_01_0066__b0194216643">Add-on Specifications</strong>, and click <strong id="cce_01_0066__b819471615415">Upgrade</strong>.</span></li></ol>
|
||||
</div>
|
||||
<div class="section" id="cce_01_0066__section610455514114"><h4 class="sectiontitle">Uninstalling the Add-on</h4><ol id="cce_01_0066__ol29784442018"><li id="cce_01_0066__li997812446010"><span>Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0066__b11223193611416">Add-ons</strong>. On the <strong id="cce_01_0066__b52291836174114">Add-on Instance</strong> tab page, click <strong id="cce_01_0066__b523083616419">Uninstall</strong> under <strong id="cce_01_0066__b13230193674110">everest</strong>.</span></li><li id="cce_01_0066__li20637152311120"><span>In the dialog box displayed, click <strong id="cce_01_0066__b183013316514">Yes</strong> to uninstall the add-on.</span></li></ol>
|
||||
</div>
|
||||
</div>
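<p>Once everest is running, dynamic provisioning of cloud storage works through ordinary PVCs. A minimal sketch, assuming the cluster exposes a StorageClass named <code>csi-disk</code> backed by the everest CSI driver (verify the actual name with <code>kubectl get storageclass</code>):</p>

```yaml
# Assumption: a StorageClass named "csi-disk" (everest-provisioned EVS) exists;
# the PVC name and size below are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: evs-pvc-example
spec:
  accessModes:
  - ReadWriteOnce          # an EVS disk attaches to a single node at a time
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk
```

<p>When the PVC is bound, everest creates the underlying EVS disk and the matching PV automatically; the claim can then be mounted in a pod like any other volume.</p>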
|
||||
<div>
|
||||
<div class="familylinks">
|
||||
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0064.html">Add-ons</a></div>
|
||||
</div>
|
||||
</div>
|
||||
|
@@ -1,68 +0,0 @@
|
||||
<a name="cce_01_0068"></a><a name="cce_01_0068"></a>
|
||||
|
||||
<h1 class="topictitle1">CCE Kubernetes Release Notes</h1>
|
||||
<div id="body1597718832332"><p id="cce_01_0068__p1684939205015">CCE has passed the Certified Kubernetes Conformance Program and is a certified Kubernetes offering. To enable interoperability from one Kubernetes installation to the next, you must upgrade your Kubernetes clusters before the maintenance period ends.</p>
|
||||
||||
<p id="cce_01_0068__p8592859164810">After a new Kubernetes version is released, CCE provides a summary of the changes in that version. For details, see <a href="#cce_01_0068__table826812711586">Table 1</a>.</p>
|
||||
|
||||
<div class="tablenoborder"><a name="cce_01_0068__table826812711586"></a><a name="table826812711586"></a><table cellpadding="4" cellspacing="0" summary="" id="cce_01_0068__table826812711586" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Cluster version differences</caption><thead align="left"><tr id="cce_01_0068__row182714712580"><th align="left" class="cellrowborder" valign="top" width="13.86%" id="mcps1.3.3.2.4.1.1"><p id="cce_01_0068__p9349145865810">Source Version</p>
|
||||
</th>
|
||||
<th align="left" class="cellrowborder" valign="top" width="15.079999999999998%" id="mcps1.3.3.2.4.1.2"><p id="cce_01_0068__p1727217165814">Target Version</p>
|
||||
</th>
|
||||
<th align="left" class="cellrowborder" valign="top" width="71.06%" id="mcps1.3.3.2.4.1.3"><p id="cce_01_0068__p0273577581">Description</p>
|
||||
</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody><tr id="cce_01_0068__row290616094011"><td class="cellrowborder" valign="top" width="13.86%" headers="mcps1.3.3.2.4.1.1 "><p id="cce_01_0068__p139075011403">v1.19</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="15.079999999999998%" headers="mcps1.3.3.2.4.1.2 "><p id="cce_01_0068__p18907120144015">v1.21</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="71.06%" headers="mcps1.3.3.2.4.1.3 "><ul id="cce_01_0068__ul10641513184013"><li id="cce_01_0068__li2641111311405">Changelog from v1.19 to v1.21<p id="cce_01_0068__p1641151318401"><a name="cce_01_0068__li2641111311405"></a><a name="li2641111311405"></a>Changelog from v1.20 to v1.21:</p>
<p id="cce_01_0068__p13641191313401"><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md" target="_blank" rel="noopener noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md</a></p>
<p id="cce_01_0068__p0641131304015">Changelog from v1.19 to v1.20:</p>
<p id="cce_01_0068__p56411813114017"><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md" target="_blank" rel="noopener noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md</a></p>
</li></ul>
</td>
</tr>
<tr id="cce_01_0068__row172031813811"><td class="cellrowborder" valign="top" width="13.86%" headers="mcps1.3.3.2.4.1.1 "><p id="cce_01_0068__p11204619817">v1.17</p>
</td>
<td class="cellrowborder" valign="top" width="15.079999999999998%" headers="mcps1.3.3.2.4.1.2 "><p id="cce_01_0068__p19204617812">v1.19</p>
</td>
<td class="cellrowborder" valign="top" width="71.06%" headers="mcps1.3.3.2.4.1.3 "><ul id="cce_01_0068__ul23941211161919"><li id="cce_01_0068__li73941211161914">Changelog from v1.17 to v1.19<p id="cce_01_0068__p1639413113193"><a name="cce_01_0068__li73941211161914"></a><a name="li73941211161914"></a>Changelog from v1.18 to v1.19:</p>
<p id="cce_01_0068__p939419114190"><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md" target="_blank" rel="noopener noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md</a></p>
<p id="cce_01_0068__p1239441131919">Changelog from v1.17 to v1.18:</p>
<p id="cce_01_0068__p03947111198"><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md" target="_blank" rel="noopener noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md</a></p>
</li></ul>
</td>
</tr>
<tr id="cce_01_0068__row697493244012"><td class="cellrowborder" valign="top" width="13.86%" headers="mcps1.3.3.2.4.1.1 "><p id="cce_01_0068__p1997553274012">v1.15</p>
</td>
<td class="cellrowborder" valign="top" width="15.079999999999998%" headers="mcps1.3.3.2.4.1.2 "><p id="cce_01_0068__p179759321403">v1.17</p>
</td>
<td class="cellrowborder" valign="top" width="71.06%" headers="mcps1.3.3.2.4.1.3 "><ul id="cce_01_0068__ul1758010404114"><li id="cce_01_0068__li229675174117">Changelog from v1.15 to v1.17<p id="cce_01_0068__p92964512419"><a name="cce_01_0068__li229675174117"></a><a name="li229675174117"></a>Changelog from v1.16 to v1.17:</p>
<p id="cce_01_0068__p029614534112"><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md" target="_blank" rel="noopener noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md</a></p>
<p id="cce_01_0068__p192967574110">Changelog from v1.15 to v1.16:</p>
<p id="cce_01_0068__p32965518419"><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md" target="_blank" rel="noopener noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md</a></p>
</li></ul>
</td>
</tr>
<tr id="cce_01_0068__row12962942134718"><td class="cellrowborder" valign="top" width="13.86%" headers="mcps1.3.3.2.4.1.1 "><p id="cce_01_0068__p496215422475">v1.13</p>
</td>
<td class="cellrowborder" valign="top" width="15.079999999999998%" headers="mcps1.3.3.2.4.1.2 "><p id="cce_01_0068__p12962164212472">v1.15</p>
</td>
<td class="cellrowborder" valign="top" width="71.06%" headers="mcps1.3.3.2.4.1.3 "><ul id="cce_01_0068__ul2574330163719"><li id="cce_01_0068__li2574113023720">Changelog from v1.13 to v1.15<p id="cce_01_0068__p18574133017378"><a name="cce_01_0068__li2574113023720"></a><a name="li2574113023720"></a>Changelog from v1.14 to v1.15:</p>
<p id="cce_01_0068__p1257423013717"><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md" target="_blank" rel="noopener noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md</a></p>
<p id="cce_01_0068__p1057473015371">Changelog from v1.13 to v1.14:</p>
<p id="cce_01_0068__p857413013712"><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.14.md" target="_blank" rel="noopener noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.14.md</a></p>
</li><li id="cce_01_0068__li1121011122093">After a cluster is upgraded from v1.13 to v1.15, the FlexVolume plug-in (storage-driver) is taken over by the CSI plug-in (everest v1.1.6 or later) for container storage. This takeover brings no function changes. However, you are advised not to create FlexVolume storage resources anymore, because they will not work in the cluster.</li></ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0215.html">Upgrading a Cluster</a></div>
</div>
</div>
<a name="cce_01_0085"></a><a name="cce_01_0085"></a>

<h1 class="topictitle1">Controlling Cluster Permissions</h1>
<div id="body1532060076739"><div class="section" id="cce_01_0085__section183951620327"><h4 class="sectiontitle">Scenario</h4><p id="cce_01_0085__p04292733219">This section describes how to control permissions on resources in a cluster, for example, to allow user A to read and write application data in a namespace while allowing user B to only read resource data in the cluster.</p>
</div>
<div class="section" id="cce_01_0085__section12024043015"><h4 class="sectiontitle">Procedure</h4><ol id="cce_01_0085__ol7212589179"><li id="cce_01_0085__li17672131923813"><span>If you need to perform permission control on the cluster, select <strong id="cce_01_0085__b126223145816">Enhanced authentication</strong> for <strong id="cce_01_0085__b96161139115810">Authentication Mode</strong> during cluster creation, upload your own <strong id="cce_01_0085__b1588515591578">CA certificate</strong>, <strong id="cce_01_0085__b7885155917577">client certificate</strong>, and <strong id="cce_01_0085__b788516594574">client certificate private key</strong> (for details about how to create a certificate, see <a href="https://kubernetes.io/docs/tasks/administer-cluster/certificates/" target="_blank" rel="noopener noreferrer">Certificates</a>), and select <strong id="cce_01_0085__b19886759175710">I have confirmed that the uploaded certificates are valid</strong>. For details, see <a href="cce_01_0028.html#cce_01_0028__table8638121213265">Table 1</a>.</span><p><div class="caution" id="cce_01_0085__note173064357597"><span class="cautiontitle"><img src="public_sys-resources/caution_3.0-en-us.png"> </span><div class="cautionbody"><ul id="cce_01_0085__ul63601125729"><li id="cce_01_0085__li16243349611">Upload a file <strong id="cce_01_0085__b2013902455712">smaller than 1 MB</strong>. The CA certificate and client certificate can be in <strong id="cce_01_0085__b151521824195714">.crt</strong> or <strong id="cce_01_0085__b7152192418579">.cer</strong> format. The private key of the client certificate can only be uploaded <strong id="cce_01_0085__b915202410571">unencrypted</strong>.</li><li id="cce_01_0085__li2030917431258">The validity period of the client certificate must be longer than five years.</li><li id="cce_01_0085__li082643315619">The uploaded CA certificate is used for both the authentication proxy and the kube-apiserver aggregation layer configuration. <strong id="cce_01_0085__b1630482613570">If the certificate is invalid, the cluster cannot be created</strong>.</li></ul>
</div></div>
</p></li><li id="cce_01_0085__li176675220325"><span>Create a role using kubectl.</span><p><div class="p" id="cce_01_0085__p2067061013303">The following example shows how to create a <strong id="cce_01_0085__b842352706105835">role</strong> and allow the role to read all pods in the default namespace. For details about the parameters, see the <a href="https://kubernetes.io/docs/reference/" target="_blank" rel="noopener noreferrer">official Kubernetes documentation</a>.<pre class="screen" id="cce_01_0085__screen88861958154417">kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]</pre>
</div>
</p></li><li id="cce_01_0085__li93082449442"><span>Bind the role to a user by using kubectl.</span><p><div class="p" id="cce_01_0085__p186706105302">In the following example, the <strong id="cce_01_0085__b84235270610591">RoleBinding</strong> assigns the role <strong id="cce_01_0085__b842352706105926">pod-reader</strong> in the default namespace to user <strong id="cce_01_0085__b842352706105945">jane</strong>. This policy allows user <strong id="cce_01_0085__b842352706105953">jane</strong> to read all pods in the default namespace. For details about the parameters, see the <a href="https://kubernetes.io/docs/reference/" target="_blank" rel="noopener noreferrer">official Kubernetes documentation</a>.<pre class="screen" id="cce_01_0085__screen577125963212">kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                # User name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader          # Name of the role that was created
  apiGroup: rbac.authorization.k8s.io</pre>
</div>
</p></li><li id="cce_01_0085__li1310204916362"><span>After a role is created and bound to a user, call a Kubernetes API by initiating an API request whose headers carry the user information and the certificates uploaded during cluster creation. For example, to call the pod query API, run the following command:</span><p><p id="cce_01_0085__p1562143141813"><strong id="cce_01_0085__b1421452712577">curl -k -H "X-Remote-User: <em id="cce_01_0085__i121422718571">jane</em>" --cacert /root/tls-ca.crt --key /root/tls.key --cert /root/tls.crt https://</strong><em id="cce_01_0085__i279218283574">192.168.23.5:5443</em><strong id="cce_01_0085__b16229142710572">/api/v1/namespaces/default/pods</strong></p>
<p id="cce_01_0085__p1640919457189">If <strong id="cce_01_0085__b84235270611758">200</strong> is returned, user <strong id="cce_01_0085__b1435861221319">jane</strong> is authorized to read pods in the cluster's default namespace. If <strong id="cce_01_0085__b8423527061183">403</strong> is returned, user <strong id="cce_01_0085__b175651715181314">jane</strong> is not authorized to read pods in the cluster's default namespace.</p>
<div class="note" id="cce_01_0085__note27661232194015"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_01_0085__p16318194716407">To prevent command execution failures, upload the certificates to the <strong id="cce_01_0085__b84235270611815">/root</strong> directory in advance.</p>
</div></div>
<p id="cce_01_0085__p9615195293417">The parameter descriptions are as follows:</p>
<ul id="cce_01_0085__ul1233451123514"><li id="cce_01_0085__li1933415103513"><strong id="cce_01_0085__b8423527061194">X-Remote-User: <em id="cce_01_0085__i84235269711852">jane</em></strong>: The request header is fixed at <strong id="cce_01_0085__b84235270611922">X-Remote-User</strong>, and <strong id="cce_01_0085__b84235270611934">jane</strong> is the username.</li><li id="cce_01_0085__li157711127123514"><strong id="cce_01_0085__b307734649111116"><em id="cce_01_0085__i223083934111116">tls-ca.crt</em></strong>: CA root certificate uploaded during cluster creation.</li><li id="cce_01_0085__li955721462919"><strong id="cce_01_0085__b84235270611133">tls.crt</strong>: client certificate that matches the CA root certificate uploaded during cluster creation.</li><li id="cce_01_0085__li101513210292"><strong id="cce_01_0085__b842352706111152">tls.key</strong>: private key of the client certificate uploaded during cluster creation.</li><li id="cce_01_0085__li12807123010577"><strong id="cce_01_0085__b842352706111158">192.168.23.5:5443</strong>: address for connecting to the cluster. To obtain the address, perform the following steps:<p id="cce_01_0085__p19544346153023">Log in to the CCE console. In the navigation pane, choose <strong id="cce_01_0085__b749082681318">Resource Management > Clusters</strong>. Click the name of the cluster to be connected and obtain the IP address and port number from <strong id="cce_01_0085__b131811181157">Internal API Server Address</strong> on the cluster details page.</p>
<div class="fignone" id="cce_01_0085__fig743763911913"><span class="figcap"><b>Figure 1 </b>Obtaining the access address</span><br><span><img id="cce_01_0085__image15123189152917" src="en-us_image_0000001144208440.png"></span></div>
</li></ul>
<p id="cce_01_0085__p128742427296">In addition, the <strong id="cce_01_0085__b84235270611175">X-Remote-Group</strong> header field (the user group name) is supported. During role binding, a role can be bound to a group, so that user group information is carried when you access the cluster.</p>
</p></li></ol>
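<p>As a quick cross-check before calling the API with curl, you can verify the binding with the standard <strong>kubectl auth can-i</strong> subcommand. The following is a minimal sketch; it assumes kubectl is already configured for this cluster and that your own account is allowed to impersonate users:</p>
<pre class="screen">kubectl auth can-i list pods --namespace default --as jane      # should print "yes"
kubectl auth can-i delete pods --namespace default --as jane    # should print "no" (the role grants read-only verbs)</pre>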
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0027.html">Clusters</a></div>
</div>
</div>
<a name="cce_01_0107"></a><a name="cce_01_0107"></a>

<h1 class="topictitle1">Connecting to a Cluster Using kubectl</h1>
<div id="body1512462600292"><div class="section" id="cce_01_0107__section14234115144"><h4 class="sectiontitle">Scenario</h4><p id="cce_01_0107__p133539491408">This section describes how to connect to a CCE cluster using kubectl.</p>
</div>
<div class="section" id="cce_01_0107__section17352373317"><h4 class="sectiontitle">Permission Description</h4><p id="cce_01_0107__p51211251156">When you access a cluster using kubectl, CCE uses the <strong id="cce_01_0107__b10486036194010">kubeconfig.json</strong> file generated on the cluster for authentication. This file contains user information, based on which CCE determines which Kubernetes resources can be accessed by kubectl. The permissions recorded in a <strong id="cce_01_0107__b16295666413">kubeconfig.json</strong> file vary from user to user.</p>
<p id="cce_01_0107__p142391810113">For details about user permissions, see <a href="cce_01_0187.html#cce_01_0187__section1464135853519">Cluster Permissions (IAM-based) and Namespace Permissions (Kubernetes RBAC-based)</a>.</p>
</div>
<div class="section" id="cce_01_0107__section37321625113110"><h4 class="sectiontitle">Using kubectl</h4><p id="cce_01_0107__p1059125311547"><strong id="cce_01_0107__b394518331235">Background</strong></p>
<p id="cce_01_0107__p7805114919351">To connect a client to a Kubernetes cluster, you can use kubectl. For details, see <a href="https://kubernetes.io/docs/tasks/tools/" target="_blank" rel="noopener noreferrer">Install Tools</a>.</p>
<p id="cce_01_0107__p1774017615515"><strong id="cce_01_0107__b1046619156404">Prerequisites</strong></p>
<div class="p" id="cce_01_0107__p13607162405518">CCE allows you to access a cluster through a <strong id="cce_01_0107__b668616652912">VPC network</strong> or a <strong id="cce_01_0107__b411181110295">public network</strong>.<ul id="cce_01_0107__ul126071124175518"><li id="cce_01_0107__li144192116548">VPC internal access: Clusters in the same VPC can access each other.</li><li id="cce_01_0107__li1460752419555"><span class="keyword" id="cce_01_0107__keyword1880824447101539">Public network access</span>: You need to prepare an ECS that can connect to a public network.</li></ul>
</div>
<div class="notice" id="cce_01_0107__note2967194410365"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_01_0107__p19671244103610">If public network access is used, the kube-apiserver of the cluster will be exposed to the public network and may be attacked. You are advised to configure Advanced Anti-DDoS for the EIP of the node where the kube-apiserver is located.</p>
</div></div>
<p id="cce_01_0107__p1815853784114"><strong id="cce_01_0107__b191882384018">Downloading kubectl</strong></p>
<p id="cce_01_0107__p681416394409">Download kubectl and the configuration file, copy them to your client, and configure kubectl. After the configuration is complete, you can use kubectl to access your Kubernetes clusters.</p>
<p id="cce_01_0107__p1616802216811">On the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/README.md" target="_blank" rel="noopener noreferrer">Kubernetes release</a> page, click the link corresponding to the cluster version, click <strong id="cce_01_0107__b175559455257">Client Binaries</strong>, and download the software package for your platform.</p>
<div class="fignone" id="cce_01_0107__fig978018401170"><span class="figcap"><b>Figure 1 </b>Downloading kubectl</span><br><span><img id="cce_01_0107__image17910133212172" src="en-us_image_0000001283755568.png"></span></div>
<p id="cce_01_0107__p2589215105519"><strong id="cce_01_0107__b1897857154210">Installing and configuring kubectl</strong></p>
<ol id="cce_01_0107__ol11839133213313"><li id="cce_01_0107__li8777174217117"><span>Log in to the CCE console, click <strong id="cce_01_0107__b12575820104612">Resource Management</strong> > <strong id="cce_01_0107__b10581182011469">Clusters</strong>, and choose <strong id="cce_01_0107__b558111203466">Command Line Tool</strong> > <strong id="cce_01_0107__b8582132015465">Kubectl</strong> under the cluster to be connected.</span></li><li id="cce_01_0107__li18450192584312"><span>On the <span class="uicontrol" id="cce_01_0107__uicontrol1146118348434"><b>Kubectl</b></span> tab page of the cluster details page, connect to the cluster as prompted.</span><p><div class="note" id="cce_01_0107__note191638104210"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_01_0107__ul795610485546"><li id="cce_01_0107__li495634817549">You can download the kubectl configuration file (<strong id="cce_01_0107__b91542151256">kubeconfig.json</strong>) on the <strong id="cce_01_0107__b18413151719257">kubectl</strong> tab page. This file is used for user cluster authentication. If the file is leaked, your clusters may be attacked.</li><li id="cce_01_0107__li127691420488">If two-way authentication is enabled for the current cluster and an EIP has been bound to the cluster, when the authentication fails (x509: certificate is valid), you need to bind the EIP and download the <strong id="cce_01_0107__b11941101920234">kubeconfig.json</strong> file again.</li><li id="cce_01_0107__li62692399615">By default, two-way authentication is disabled for domain names in the current cluster. You can run the <strong id="cce_01_0107__b208811223195">kubectl config use-context externalTLSVerify</strong> command to enable two-way authentication. For details, see <a href="#cce_01_0107__section1559919152711">Two-Way Authentication for Domain Names</a>. For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, you need to bind the EIP again and download <strong id="cce_01_0107__b18517305915">kubeconfig.json</strong> again.</li><li id="cce_01_0107__li16956194817544">The Kubernetes permissions assigned by the configuration file downloaded by an IAM user are the same as those assigned to that IAM user on the CCE console.</li><li id="cce_01_0107__li1537643019239">If the KUBECONFIG environment variable is configured in the Linux OS, kubectl preferentially loads the file specified by KUBECONFIG instead of <strong id="cce_01_0107__b1017204911811">$HOME/.kube/config</strong>.</li></ul>
</div></div>
</p></li></ol>
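<p>As a minimal sketch of the configuration step above (file names and paths assume the defaults described in this section), you can place the downloaded configuration file in the default kubectl path and verify the connection:</p>
<pre class="screen">mkdir -p $HOME/.kube
mv -f kubeconfig.json $HOME/.kube/config    # default path read by kubectl
kubectl cluster-info                        # verify that the cluster is reachable</pre>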
</div>
<div class="section" id="cce_01_0107__section1559919152711"><a name="cce_01_0107__section1559919152711"></a><a name="section1559919152711"></a><h4 class="sectiontitle">Two-Way Authentication for Domain Names</h4><p id="cce_01_0107__p138948491274">Currently, CCE supports two-way authentication for domain names.</p>
<ul id="cce_01_0107__ul88981331482"><li id="cce_01_0107__li1705116151915">Two-way authentication is disabled for domain names by default. You can run the <strong id="cce_01_0107__b138607331913">kubectl config use-context externalTLSVerify</strong> command to switch to the externalTLSVerify context to enable it.</li><li id="cce_01_0107__li1807459174818">When an EIP is bound to or unbound from a cluster, or a custom domain name is configured or updated, the cluster server certificate will be updated to include the latest cluster access addresses (including the EIP bound to the cluster and all custom domain names configured for the cluster).</li><li id="cce_01_0107__li17898153310483">Asynchronous cluster synchronization takes about 5 to 10 minutes.</li><li id="cce_01_0107__li614337712">For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, you need to bind the EIP again and download <strong id="cce_01_0107__b92571231093">kubeconfig.json</strong> again.</li><li id="cce_01_0107__li5950658165414">If domain name two-way authentication is not supported, <strong id="cce_01_0107__b1226538997">kubeconfig.json</strong> contains the <strong id="cce_01_0107__b13262381592">"insecure-skip-tls-verify": true</strong> field, as shown in <a href="#cce_01_0107__fig1941342411">Figure 2</a>. To use two-way authentication, you can download the <strong id="cce_01_0107__b10265401917">kubeconfig.json</strong> file again and enable two-way authentication for the domain names.<div class="fignone" id="cce_01_0107__fig1941342411"><a name="cce_01_0107__fig1941342411"></a><a name="fig1941342411"></a><span class="figcap"><b>Figure 2 </b>Two-way authentication disabled for domain names</span><br><span><img id="cce_01_0107__image3414621613" src="en-us_image_0000001243407853.png"></span></div>
</li></ul>
</div>
<div class="section" id="cce_01_0107__section1628510591883"><h4 class="sectiontitle">Common Issue (Error from server Forbidden)</h4><p id="cce_01_0107__p75241832114916">When you use kubectl to create or query Kubernetes resources, the following output is returned:</p>
<p id="cce_01_0107__p581934618458"># kubectl get deploy</p>
<p>Error from server (Forbidden): deployments.apps is forbidden: User "0c97ac3cb280f4d91fa7c0096739e1f8" cannot list resource "deployments" in API group "apps" in the namespace "default"</p>
<p id="cce_01_0107__p1418636115119">This is because the user does not have the permissions to operate the Kubernetes resources. For details about how to assign permissions, see <a href="cce_01_0189.html">Namespace Permissions (Kubernetes RBAC-based)</a>.</p>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0140.html">Using kubectl to Run a Cluster</a></div>
</div>
</div>
<a name="cce_01_0110"></a><a name="cce_01_0110"></a>

<h1 class="topictitle1">Monitoring and Logs</h1>
<div id="body0000001219165543"><p id="cce_01_0110__p8060118"></p>
</div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_01_0182.html">Monitoring Overview</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0018.html">Container Logs</a></strong><br>
</li>
</ul>
</div>
<a name="cce_01_0111"></a><a name="cce_01_0111"></a>

<h1 class="topictitle1">SFS Volumes</h1>
<div id="body0000001416079141"><p id="cce_01_0111__p8060118"></p>
</div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_01_0259.html">Using SFS Volumes</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0263.html">Creating a Deployment Mounted with an SFS Volume</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_01_0262.html">Creating a StatefulSet Mounted with an SFS Volume</a></strong><br>
</li>
</ul>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0393.html">Deployment Examples</a></div>
</div>
</div>
<a name="cce_01_0112"></a><a name="cce_01_0112"></a>

<h1 class="topictitle1">Setting Health Check for a Container</h1>
<div id="body1512535109871"><div class="section" id="cce_01_0112__section1731112174912"><h4 class="sectiontitle">Scenario</h4><p id="cce_01_0112__p8242924192"><span class="keyword" id="cce_01_0112__keyword22817116429">Health check</span> regularly checks the health status of containers during container running. If the health check function is not configured, a pod cannot detect service exceptions or automatically restart the service to restore it. This will result in a situation where the pod status is normal but the service in the pod is abnormal.</p>
<p id="cce_01_0112__a77e71e69afde4757ab0ef6087b2e30de">CCE provides the following health check probes:</p>
<ul id="cce_01_0112__ul1867812287915"><li id="cce_01_0112__li574951765020"><strong id="cce_01_0112__b1209722181417">Liveness probe</strong>: checks whether a container is still alive, similar to the <strong id="cce_01_0112__b1821422218147">ps</strong> command that checks whether a process exists. If the liveness check of a container fails, the cluster restarts the container. If the liveness check is successful, no operation is executed.</li><li id="cce_01_0112__li36781028792"><strong id="cce_01_0112__b1729242134220">Readiness probe</strong>: checks whether a container is ready to process user requests. If the container is detected as unready, service traffic will not be directed to it. It may take a long time for some applications to start up before they can provide services, because they need to load disk data or rely on the startup of an external module. In this case, the application process is running, but the application cannot provide services. This health check probe addresses that situation. If the container readiness check fails, the cluster masks all requests sent to the container. If the readiness check is successful, the container can be accessed.</li></ul>
</div>
<div class="section" id="cce_01_0112__section476025319384"><h4 class="sectiontitle"><span class="keyword" id="cce_01_0112__keyword94514523010">Health Check Methods</span></h4><ul id="cce_01_0112__ul2492162133910"><li id="cce_01_0112__li19505918465"><strong id="cce_01_0112__b84235270695216"><span class="keyword" id="cce_01_0112__keyword122935940517318">HTTP request</span></strong><p id="cce_01_0112__p17738122617398">This health check mode is applicable to containers that provide HTTP/HTTPS services. The cluster periodically initiates an HTTP/HTTPS GET request to such containers. If the return code of the HTTP/HTTPS response is within 200–399, the probe is successful. Otherwise, the probe fails. In this health check mode, you must specify a container listening port and an HTTP/HTTPS request path.</p>
<p id="cce_01_0112__p051511331505">For example, for a container that provides HTTP services, the HTTP check path is <strong id="cce_01_0112__b2043313277265">/health-check</strong>, the port is 80, and the host address is optional (which defaults to the container IP address). If the container IP address is 172.16.0.186, the resulting request is GET http://172.16.0.186:80/health-check. The cluster periodically initiates this request to the container.</p>
</li><li id="cce_01_0112__li92491637166"><strong id="cce_01_0112__b84235270695641"><span class="keyword" id="cce_01_0112__keyword84450853173134">TCP port</span></strong><p id="cce_01_0112__p14198132922215">For a container that provides TCP communication services, the cluster periodically establishes a TCP connection to the container. If the connection is successful, the probe is successful. Otherwise, the probe fails. In this health check mode, you must specify a container listening port.</p>
<p id="cce_01_0112__p1525113371164">For example, if you have an Nginx container with service port 80, after you specify TCP port 80 for container listening, the cluster will periodically initiate a TCP connection to port 80 of the container. If the connection is successful, the probe is successful. Otherwise, the probe fails.</p>
</li><li id="cce_01_0112__li104061647154310"><strong id="cce_01_0112__b84235270695818"><span class="keyword" id="cce_01_0112__keyword1395397266173145">CLI</span></strong><p id="cce_01_0112__p105811510164113">CLI is an efficient health check tool. When using the CLI, you must specify an executable command in a container. The cluster periodically runs the command in the container. If the exit code of the command is 0, the health check is successful. Otherwise, the health check fails.</p>
<p id="cce_01_0112__p1658131014413">The CLI mode can be used to replace HTTP request-based and TCP port-based health checks.</p>
<ul id="cce_01_0112__ul16409174744313"><li id="cce_01_0112__li7852728174119">For a TCP port, you can write a program script to connect to a container port. If the connection is successful, the script returns <strong id="cce_01_0112__b11599347141615">0</strong>. Otherwise, the script returns <strong id="cce_01_0112__b11599443121612">–1</strong>.</li><li id="cce_01_0112__li241104715431">For an HTTP request, you can write a program script to run the <strong id="cce_01_0112__b1767410318172">wget</strong> command for a container.<p id="cce_01_0112__p16488203413413"><strong id="cce_01_0112__b422541134110">wget http://127.0.0.1:80/health-check</strong></p>
<p id="cce_01_0112__p13488133464119">Check the return code of the response. If the return code is within 200–399, the script returns <strong id="cce_01_0112__b14498132912217">0</strong>. Otherwise, the script returns <strong id="cce_01_0112__b427293111227">–1</strong>.</p>
<div class="notice" id="cce_01_0112__note124141947164311"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_01_0112__ul7414047164318"><li id="cce_01_0112__li81561727181416">Put the program to be executed in the container image so that the program can be executed.</li><li id="cce_01_0112__li204153475437">If the command to be executed is a shell script, do not directly specify the script as the command, but add a script parser. For example, if the script is <strong id="cce_01_0112__b842352706102616">/data/scripts/health_check.sh</strong>, you must specify <strong id="cce_01_0112__b842352706102629">sh /data/scripts/health_check.sh</strong> for command execution. The reason is that the cluster is not in a terminal environment when executing programs in a container.</li></ul>
</div></div>
</li></ul>
</li></ul>
</div>
<div class="section" id="cce_01_0112__section2050653544516"><h4 class="sectiontitle">Common Parameter Description</h4>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_01_0112__t045a8ee10cb946eaa4c01da4319b7206" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Common parameter description</caption><thead align="left"><tr id="cce_01_0112__re3891f83a0b242b1bf3f178042398166"><th align="left" class="cellrowborder" valign="top" width="27%" id="mcps1.3.3.2.2.3.1.1"><p id="cce_01_0112__afec93a787dcb46788032cfc70a14a22e">Parameter</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="73%" id="mcps1.3.3.2.2.3.1.2"><p id="cce_01_0112__en-us_topic_0052519475_p74835383351">Description</p>
</th>
</tr>
</thead>
<tbody><tr id="cce_01_0112__r82f45c7641534b8d80da858ce9ce9be7"><td class="cellrowborder" valign="top" width="27%" headers="mcps1.3.3.2.2.3.1.1 "><p id="cce_01_0112__p183641821163711">Initial Delay (s)</p>
</td>
<td class="cellrowborder" valign="top" width="73%" headers="mcps1.3.3.2.2.3.1.2 "><p id="cce_01_0112__p173941610161614">Check delay time, in seconds. Set this parameter according to the normal startup time of services.</p>
<p id="cce_01_0112__en-us_topic_0052519475_p05855219373">For example, if this parameter is set to 30, the health check will be started 30 seconds after the container is started. The time is reserved for containerized services to start.</p>
</td>
</tr>
<tr id="cce_01_0112__rf8dd0b9b29af4b96bcf3efaecb0c4bb2"><td class="cellrowborder" valign="top" width="27%" headers="mcps1.3.3.2.2.3.1.1 "><p id="cce_01_0112__p36325348374">Timeout (s)</p>
</td>
<td class="cellrowborder" valign="top" width="73%" headers="mcps1.3.3.2.2.3.1.2 "><p id="cce_01_0112__p052822120161">Timeout duration, in seconds.</p>
<p id="cce_01_0112__a376926047bc64e0a9304d6c9828fc5a2">For example, if this parameter is set to <strong id="cce_01_0112__b84235270617502">10</strong>, the timeout for performing a health check is 10s. If this duration elapses, the health check is regarded as failed. If the parameter is left blank or set to <strong id="cce_01_0112__b84235270617523">0</strong>, the default timeout of 1s applies.</p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
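<p>The probe settings described in this section map directly onto a pod manifest. The following is a minimal sketch of an HTTP liveness probe; the image name, path, and port are placeholder values, not part of this document:</p>
<pre class="screen">apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: app
    image: nginx:alpine          # placeholder image
    livenessProbe:
      httpGet:
        path: /health-check      # HTTP request path
        port: 80                 # container listening port
      initialDelaySeconds: 30    # Initial Delay (s) in Table 1
      timeoutSeconds: 10         # Timeout (s) in Table 1</pre>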
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_01_0130.html">Configuring a Container</a></div>
</div>
</div>