forked from laiweijian4/doc-exports

CCE UMN for 24.2.0 version -20240428

Reviewed-by: Eotvos, Oliver <oliver.eotvos@t-systems.com>
Co-authored-by: Dong, Qiu Jian <qiujiandong1@huawei.com>
Co-committed-by: Dong, Qiu Jian <qiujiandong1@huawei.com>

parent 99a2d77599
commit 86fb05065f
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -8,9 +8,19 @@
</th>
</tr>
</thead>
<tbody><tr id="cce_01_0300__row125431763718"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p175401713377">2024-03-29</p>
<tbody><tr id="cce_01_0300__row450133482720"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p1051163432712">2024-05-30</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul178522051143711"><li id="cce_01_0300__li157281413144019">Modified the console style.</li><li id="cce_01_0300__li11783163645218">Added <a href="cce_bulletin_0061.html">CCE Console Upgrade</a>.</li><li id="cce_01_0300__li1485265114379">HCE OS 2.0 is supported. For details, see <a href="cce_bulletin_0301.html">OS Patch Notes for Cluster Nodes</a>.</li><li id="cce_01_0300__li143973152818">Updated <a href="cce_10_0405.html">Release Notes for CCE Cluster Versions</a>.</li><li id="cce_01_0300__li96317418918">Updated <a href="cce_10_0423.html">Volcano Scheduling</a>.</li><li id="cce_01_0300__li12925155011536">Added the CCE Advanced HPA add-on.</li><li id="cce_01_0300__li19887214113914">Updated <a href="cce_bestpractice_0000.html">Best Practice</a>.</li><li id="cce_01_0300__li1963584304012">Updated <a href="cce_faq_0000.html">FAQs</a>.</li></ul>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul164481231281"><li id="cce_01_0300__li15337202482817">Deleted section "OS Patch Notes for Cluster Nodes".</li><li id="cce_01_0300__li1533782402818">Added <a href="cce_10_0476.html">Node OS</a>.</li><li id="cce_01_0300__li102136527395">Described how to obtain the values of available_zone, l4_flavor_name, and l7_flavor_name.</li></ul>
</td>
</tr>
<tr id="cce_01_0300__row13794437162711"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p179512371275">2024-04-28</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul1113140112814"><li id="cce_01_0300__li0131704280">Supported the creation of clusters of v1.28.</li><li id="cce_01_0300__li333317613281">Supported IPv6.</li><li id="cce_01_0300__li1027417227372">Clusters of version 1.27 or later do not support nodes running EulerOS 2.5 or CentOS 7.7. For details, see <a href="cce_10_0302.html">Before You Start</a>.</li></ul>
</td>
</tr>
<tr id="cce_01_0300__row125431763718"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p175401713377">2024-03-29</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul178522051143711"><li id="cce_01_0300__li157281413144019">Modified the console style.</li><li id="cce_01_0300__li11783163645218">Added <a href="cce_bulletin_0061.html">CCE Console Upgrade</a>.</li><li id="cce_01_0300__li1485265114379">HCE OS 2.0 is supported.</li><li id="cce_01_0300__li143973152818">Updated <a href="cce_10_0405.html">Release Notes for CCE Cluster Versions</a>.</li><li id="cce_01_0300__li96317418918">Updated <a href="cce_10_0423.html">Volcano Scheduling</a>.</li><li id="cce_01_0300__li12925155011536">Added the CCE Advanced HPA add-on.</li><li id="cce_01_0300__li19887214113914">Updated <a href="cce_bestpractice_0000.html">Best Practice</a>.</li><li id="cce_01_0300__li1963584304012">Updated <a href="cce_faq_0000.html">FAQs</a>.</li></ul>
</td>
</tr>
<tr id="cce_01_0300__row15301133891"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p16301431995">2024-01-29</p>
@@ -30,7 +40,7 @@
</tr>
<tr id="cce_01_0300__row450749103813"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p195076943820">2023-05-30</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul1843612311567"><li id="cce_01_0300__li14362312065">Added <a href="cce_10_0652.html">Configuring a Node Pool</a>.</li><li id="cce_01_0300__li48641237869">Added <a href="cce_10_0684.html">Configuring Health Check for Multiple Ports</a>.</li><li id="cce_01_0300__li152057919719">Updated <a href="cce_10_0363.html">Creating a Node</a>.</li><li id="cce_01_0300__li53955101178">Updated <a href="cce_10_0012.html">Creating a Node Pool</a>.</li><li id="cce_01_0300__li16648154715219">Updated <a href="cce_bulletin_0301.html">OS Patch Notes for Cluster Nodes</a>.</li><li id="cce_01_0300__li7404516102217">Updated <a href="cce_productdesc_0005.html">Notes and Constraints</a>.</li></ul>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul1843612311567"><li id="cce_01_0300__li14362312065">Added <a href="cce_10_0652.html">Configuring a Node Pool</a>.</li><li id="cce_01_0300__li48641237869">Added <a href="cce_10_0684.html">Configuring Health Check for Multiple Ports</a>.</li><li id="cce_01_0300__li152057919719">Updated <a href="cce_10_0363.html">Creating a Node</a>.</li><li id="cce_01_0300__li53955101178">Updated <a href="cce_10_0012.html">Creating a Node Pool</a>.</li><li id="cce_01_0300__li7404516102217">Updated <a href="cce_productdesc_0005.html">Notes and Constraints</a>.</li></ul>
</td>
</tr>
<tr id="cce_01_0300__row181091826101811"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p1510922618183">2023-02-10</p>
@@ -40,7 +50,7 @@
</tr>
<tr id="cce_01_0300__row318811491127"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p1188749921">2022-12-20</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul3243191617515"><li id="cce_01_0300__li62969232067">Updated <a href="cce_bulletin_0301.html">OS Patch Notes for Cluster Nodes</a>.</li><li id="cce_01_0300__li202435167515">Added <a href="cce_10_0193.html">volcano</a>.</li><li id="cce_01_0300__li3694101821210">Added <a href="cce_10_0477.html">Service Account Token Security Improvement</a>.</li><li id="cce_01_0300__li13145192612434">Defined new permission management roles: CCE ReadOnlyAccess, CCE Administrator, and CCE FullAccess.</li></ul>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul3243191617515"><li id="cce_01_0300__li202435167515">Added <a href="cce_10_0193.html">volcano</a>.</li><li id="cce_01_0300__li3694101821210">Added <a href="cce_10_0477.html">Service Account Token Security Improvement</a>.</li><li id="cce_01_0300__li13145192612434">Defined new permission management roles: CCE ReadOnlyAccess, CCE Administrator, and CCE FullAccess.</li></ul>
</td>
</tr>
<tr id="cce_01_0300__row4557205544117"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p555719556417">2022-11-21</p>
@@ -50,7 +60,7 @@
</tr>
<tr id="cce_01_0300__row1722210871314"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p32228891316">2022-08-27</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><p id="cce_01_0300__p02221989131">EulerOS 2.9 is supported. For details, see <a href="cce_bulletin_0301.html">OS Patch Notes for Cluster Nodes</a>.</p>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><p id="cce_01_0300__p02221989131">EulerOS 2.9 is supported.</p>
</td>
</tr>
<tr id="cce_01_0300__row830935719612"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p153091857867">2022-07-13</p>
@@ -70,7 +80,7 @@
</tr>
<tr id="cce_01_0300__row1112142253113"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p612192283114">2022-04-14</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><p id="cce_01_0300__p3484184216310">Allowed cluster upgrade from v1.19 to v1.21. For details, see <a href="cce_10_0301.html">Performing In-place Upgrade</a>.</p>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><p id="cce_01_0300__p3484184216310">Allowed cluster upgrade from v1.19 to v1.21.</p>
</td>
</tr>
<tr id="cce_01_0300__row37762558124"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p777625517122">2022-03-24</p>
@@ -6,9 +6,9 @@
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0430.html">Basic Cluster Information</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0068.html">Kubernetes Release Notes</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0068.html">Kubernetes Version Release Notes</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0405.html">Release Notes for CCE Cluster Versions</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0405.html">Patch Version Release Notes</a></strong><br>
</li>
</ul>
@@ -6,9 +6,9 @@
</div>
<div class="section" id="cce_10_0003__section0339185914138"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0003__ul975585510397"><li id="cce_10_0003__li15755125513910">For CCE standard clusters and CCE Turbo clusters to support node resetting, the version must be v1.13 or later.</li></ul>
</div>
<div class="section" id="cce_10_0003__section83421713122615"><h4 class="sectiontitle">Precautions</h4><ul id="cce_10_0003__ul189321612123615"><li id="cce_10_0003__li139331412133615">Only worker nodes can be reset. If the node is still unavailable after the reset, delete the node and create a new one.</li><li id="cce_10_0003__li133748101461"><strong id="cce_10_0003__b1724353045920">Resetting a node will reinstall the node OS and interrupt workload services running on the node. Therefore, perform this operation during off-peak hours.</strong></li><li id="cce_10_0003__li11336171744612"><strong id="cce_10_0003__b1654931913492">Data in the system disk and Docker data disks will be cleared. Back up important data before resetting the node.</strong></li><li id="cce_10_0003__li159325122367"><strong id="cce_10_0003__b18976436631">When an extra data disk is mounted to a node, data in this disk will be cleared if the disk has not been unmounted before the node reset. To prevent data loss, back up data in advance and mount the data disk again after the node reset is complete.</strong></li><li id="cce_10_0003__li18904821103817">The IP addresses of the workload pods on the node will change, but the container network access is not affected.</li><li id="cce_10_0003__li33901348389">Ensure that there is sufficient remaining EVS disk quota.</li><li id="cce_10_0003__li893261218365">While the node is being deleted, the backend will set the node to the unschedulable state.</li><li id="cce_10_0003__li551825451813">Resetting a node will cause PVC/PV data loss for the <a href="cce_10_0391.html">local PV</a> associated with the node. These PVCs and PVs cannot be restored or used again. In this scenario, the pod that uses the local PV is evicted from the reset node. A new pod is created and stays in the pending state. This is because the PVC used by the pod has a node label, due to which the pod cannot be scheduled. After the node is reset, the pod may be scheduled to the reset node. In this case, the pod remains in the creating state because the underlying logical volume corresponding to the PVC does not exist.</li></ul>
<div class="section" id="cce_10_0003__section83421713122615"><h4 class="sectiontitle">Precautions</h4><ul id="cce_10_0003__ul189321612123615"><li id="cce_10_0003__li139331412133615">Only worker nodes can be reset. If the node is still unavailable after the reset, delete the node and create a new one.</li><li id="cce_10_0003__li133748101461"><strong id="cce_10_0003__b161591159125218">After a node is reset, the node OS will be reinstalled. Before resetting a node, <a href="cce_10_0605.html">drain</a> the node to gracefully evict the pods running on the node to other available nodes. Perform this operation during off-peak hours.</strong></li><li id="cce_10_0003__li11336171744612"><strong id="cce_10_0003__b3113619509">After a node is reset, its system disk and data disks will be cleared. Back up important data before resetting a node.</strong></li><li id="cce_10_0003__li159325122367"><strong id="cce_10_0003__b18976436631">After a worker node with an extra data disk attached is reset, the attachment will be cleared. In this case, attach the disk again and data will be retained.</strong></li><li id="cce_10_0003__li18904821103817">The IP addresses of the workload pods on the node will change, but the container network access is not affected.</li><li id="cce_10_0003__li33901348389">Ensure that there is sufficient remaining EVS disk quota.</li><li id="cce_10_0003__li893261218365">While the node is being deleted, the backend will set the node to the unschedulable state.</li><li id="cce_10_0003__li49618284552">Resetting a node will clear the Kubernetes labels and taints you added (those added by editing a node pool will not be lost). As a result, node-specific resources (such as local storage and workloads scheduled to this node) may be unavailable.</li><li id="cce_10_0003__li551825451813">Resetting a node will cause PVC/PV data loss for the <a href="cce_10_0391.html">local PV</a> associated with the node. These PVCs and PVs cannot be restored or used again. In this scenario, the pod that uses the local PV is evicted from the reset node. A new pod is created and stays in the pending state. This is because the PVC used by the pod has a node label, due to which the pod cannot be scheduled. After the node is reset, the pod may be scheduled to the reset node. In this case, the pod remains in the creating state because the underlying logical volume corresponding to the PVC does not exist.</li></ul>
</div>
<div class="section" id="cce_10_0003__section13505122310576"><h4 class="sectiontitle">Procedure</h4><p id="cce_10_0003__p354415259593">The new console allows you to reset nodes in batches. You can also use a private image to reset nodes in batches.</p>
<div class="section" id="cce_10_0003__section13505122310576"><h4 class="sectiontitle">Procedure</h4><p id="cce_10_0003__p354415259593">You can batch reset nodes using private images.</p>
<ol id="cce_10_0003__ol19107956331"><li id="cce_10_0003__li12107195613316"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0003__li314420611592"><span>In the navigation pane, choose <span class="uicontrol" id="cce_10_0003__uicontrol226720045103631"><b>Nodes</b></span>. On the displayed page, click the <strong id="cce_10_0003__b200115353103631">Nodes</strong> tab.</span></li><li id="cce_10_0003__li36690501449"><span>In the node list, select one or more nodes to be reset and choose <strong id="cce_10_0003__b75704965116">More</strong> > <strong id="cce_10_0003__b4241551145119">Reset Node</strong> in the <strong id="cce_10_0003__b9468195465310">Operation</strong> column.</span></li><li id="cce_10_0003__li2062015811615"><span>In the displayed dialog box, click <strong id="cce_10_0003__b143401521627">Next</strong>.</span><p><ul id="cce_10_0003__ul582555595914"><li id="cce_10_0003__li1582611556598">For nodes in the DefaultPool node pool, the parameter setting page is displayed. Set the parameters by referring to <a href="#cce_10_0003__li1646785611239">5</a>.</li><li id="cce_10_0003__li8826115511593">For a node you create in a node pool, resetting the node does not support parameter configuration. You can directly use the configuration image of the node pool to reset the node.</li></ul>
</p></li><li id="cce_10_0003__li1646785611239"><a name="cce_10_0003__li1646785611239"></a><a name="li1646785611239"></a><span>Specify node parameters.</span><p><div class="p" id="cce_10_0003__en-us_topic_0000001244141037_p67901445163816"><strong id="cce_10_0003__b31796610207">Compute Settings</strong>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0003__en-us_topic_0000001244141037_table0668137185810" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Configuration parameters</caption><thead align="left"><tr id="cce_10_0003__en-us_topic_0000001244141037_row46680715812"><th align="left" class="cellrowborder" valign="top" width="20.02%" id="mcps1.3.4.3.5.2.1.2.2.3.1.1"><p id="cce_10_0003__en-us_topic_0000001244141037_p186688710581">Parameter</p>
@@ -24,14 +24,13 @@
</tr>
<tr id="cce_10_0003__en-us_topic_0000001244141037_row20701239458"><td class="cellrowborder" valign="top" width="20.02%" headers="mcps1.3.4.3.5.2.1.2.2.3.1.1 "><p id="cce_10_0003__en-us_topic_0000001244141037_p66685314111">Container Engine</p>
</td>
<td class="cellrowborder" valign="top" width="79.97999999999999%" headers="mcps1.3.4.3.5.2.1.2.2.3.1.2 "><div class="p" id="cce_10_0003__en-us_topic_0000001244141037_p1015871915468">CCE clusters support Docker and containerd in some scenarios.<ul id="cce_10_0003__cce_10_0363_ul93311957155518"><li id="cce_10_0003__cce_10_0363_li1770423185614">VPC network clusters of v1.23 and later versions support containerd. Tunnel network clusters of v1.23.2-r0 and later versions support containerd.</li><li id="cce_10_0003__cce_10_0363_li1873025821011">For a CCE Turbo cluster, both <strong id="cce_10_0003__cce_10_0363_b86101911456">Docker</strong> and <strong id="cce_10_0003__cce_10_0363_b66114918457">containerd</strong> are supported. For details, see <a href="cce_10_0462.html#cce_10_0462__section159298451879">Mapping between Node OSs and Container Engines</a>.</li></ul>
</div>
<td class="cellrowborder" valign="top" width="79.97999999999999%" headers="mcps1.3.4.3.5.2.1.2.2.3.1.2 "><p id="cce_10_0003__en-us_topic_0000001244141037_p1015871915468">The container engines supported by CCE include Docker and containerd, which may vary depending on cluster types, cluster versions, and OSs. Select a container engine based on the information displayed on the CCE console. For details, see <a href="cce_10_0462.html#cce_10_0462__section159298451879">Mapping between Node OSs and Container Engines</a>.</p>
</td>
</tr>
<tr id="cce_10_0003__en-us_topic_0000001244141037_row146695755817"><td class="cellrowborder" valign="top" width="20.02%" headers="mcps1.3.4.3.5.2.1.2.2.3.1.1 "><p id="cce_10_0003__en-us_topic_0000001244141037_p176690785813">OS</p>
</td>
<td class="cellrowborder" valign="top" width="79.97999999999999%" headers="mcps1.3.4.3.5.2.1.2.2.3.1.2 "><div class="p" id="cce_10_0003__p872401816306">Select an OS type. Different types of nodes support different OSs.<ul id="cce_10_0003__cce_10_0363_ul95841381435"><li id="cce_10_0003__cce_10_0363_li95841981234"><strong id="cce_10_0003__cce_10_0363_en-us_topic_0000001138502763_b182401825201815">Public image</strong>: Select a public image for the node.</li><li id="cce_10_0003__cce_10_0363_li55846812310"><strong id="cce_10_0003__cce_10_0363_b81011681163">Private image</strong>: Select a private image for the node.</li></ul>
<div class="note" id="cce_10_0003__cce_10_0363_note139511353025"><span class="notetitle"> NOTE: </span><div class="notebody"><ul id="cce_10_0003__cce_10_0363_ul19652193785018"><li id="cce_10_0003__cce_10_0363_li765223714505">Service container runtimes share the kernel and underlying calls of nodes. To ensure compatibility, select a Linux distribution version that is the same as or close to that of the final service container image for the node OS.</li></ul>
<div class="note" id="cce_10_0003__cce_10_0363_note139511353025"><span class="notetitle"> NOTE: </span><div class="notebody"><p id="cce_10_0003__cce_10_0363_p619515468401">Service container runtimes share the kernel and underlying calls of nodes. To ensure compatibility, select a Linux distribution version that is the same as or close to that of the final service container image for the node OS.</p>
</div></div>
</div>
</td>
@@ -63,7 +62,7 @@
<tr id="cce_10_0003__cce_10_0198_row1966913718588"><td class="cellrowborder" valign="top" width="20.02%" headers="mcps1.3.4.3.5.2.3.1.2.3.1.1 "><p id="cce_10_0003__cce_10_0198_p0669147185817">Data Disk</p>
</td>
<td class="cellrowborder" valign="top" width="79.97999999999999%" headers="mcps1.3.4.3.5.2.3.1.2.3.1.2 "><p id="cce_10_0003__cce_10_0198_p17207615113820"><strong id="cce_10_0003__cce_10_0198_b034255085317">At least one data disk is required</strong> for the container runtime and kubelet. <strong id="cce_10_0003__cce_10_0198_b1034825011538">The data disk cannot be deleted or detached. Otherwise, the node will be unavailable.</strong></p>
<p id="cce_10_0003__cce_10_0198_p3752312011">Click <strong id="cce_10_0003__cce_10_0198_b291501816313">Expand</strong> and select <strong id="cce_10_0003__cce_10_0198_b325193733516">Allocate Disk Space</strong> to define the disk space occupied by the container runtime to store the working directories, container image data, and image metadata. For details about how to allocate data disk space, see <a href="cce_10_0341.html">Data Disk Space Allocation</a>.</p>
<p id="cce_10_0003__cce_10_0198_p3752312011">Click <strong id="cce_10_0003__cce_10_0198_b513813367555">Expand</strong> to configure <strong id="cce_10_0003__cce_10_0198_b13265825195416">Data Disk Space Allocation</strong>, which is used to allocate space for container engines, images, and ephemeral storage for them to run properly. For details about how to allocate data disk space, see <a href="cce_10_0341.html">Data Disk Space Allocation</a>.</p>
<p id="cce_10_0003__cce_10_0198_p1391618153118">For other data disks, a raw disk is created without any processing by default. You can also click <strong id="cce_10_0003__cce_10_0198_b16127101911540">Expand</strong> and select <strong id="cce_10_0003__cce_10_0198_b21351519135417">Mount Disk</strong> to mount the data disk to a specified directory.</p>
</td>
</tr>
@@ -78,22 +77,22 @@
</th>
</tr>
</thead>
<tbody><tr id="cce_10_0003__cce_10_0198_row653985164019"><td class="cellrowborder" valign="top" width="23.68%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.1 "><p id="cce_10_0003__cce_10_0198_p1253915513402">Kubernetes Label</p>
</td>
<td class="cellrowborder" valign="top" width="76.32%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.2 "><p id="cce_10_0003__cce_10_0198_p11258124419136">Click <strong id="cce_10_0003__cce_10_0198_b179325267572">Add</strong> to set the key-value pair attached to the Kubernetes objects (such as pods). A maximum of 20 labels can be added.</p>
<p id="cce_10_0003__cce_10_0198_p1442572821211">Labels can be used to distinguish nodes. With workload affinity settings, container pods can be scheduled to a specified node. For more information, see <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" target="_blank" rel="noopener noreferrer">Labels and Selectors</a>.</p>
</td>
</tr>
<tr id="cce_10_0003__cce_10_0198_row25394514014"><td class="cellrowborder" valign="top" width="23.68%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.1 "><p id="cce_10_0003__cce_10_0198_p25391859406">Resource Tag</p>
<tbody><tr id="cce_10_0003__cce_10_0198_row25394514014"><td class="cellrowborder" valign="top" width="23.68%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.1 "><p id="cce_10_0003__cce_10_0198_p25391859406">Resource Tag</p>
</td>
<td class="cellrowborder" valign="top" width="76.32%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.2 "><p id="cce_10_0003__cce_10_0198_p275333410342">You can add resource tags to classify resources.</p>
<p id="cce_10_0003__cce_10_0198_p117537347346">You can create <span class="uicontrol" id="cce_10_0003__cce_10_0198_uicontrol353353425315"><b>predefined tags</b></span> on the TMS console. The predefined tags are available to all resources that support tags. You can use predefined tags to improve the tag creation and resource migration efficiency.</p>
<p id="cce_10_0003__cce_10_0198_p117537347346">You can create <span class="uicontrol" id="cce_10_0003__cce_10_0198_uicontrol3931145514441"><b>predefined tags</b></span> on the TMS console. The predefined tags are available to all resources that support tags. You can use predefined tags to improve the tag creation and resource migration efficiency.</p>
<p id="cce_10_0003__cce_10_0198_p16753133419348">CCE will automatically create the "CCE-Dynamic-Provisioning-Node=<em id="cce_10_0003__cce_10_0198_i111451122126">node id</em>" tag.</p>
</td>
</tr>
<tr id="cce_10_0003__cce_10_0198_row67956313405"><td class="cellrowborder" valign="top" width="23.68%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.1 "><p id="cce_10_0003__cce_10_0198_p15926420402">Kubernetes Label</p>
</td>
<td class="cellrowborder" valign="top" width="76.32%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.2 "><p id="cce_10_0003__cce_10_0198_p1959217418400">Click <strong id="cce_10_0003__cce_10_0198_b18628111511245">Add Label</strong> to set the key-value pair attached to the Kubernetes objects (such as pods). A maximum of 20 labels can be added.</p>
<p id="cce_10_0003__cce_10_0198_p75921845403">Labels can be used to distinguish nodes. With workload affinity settings, container pods can be scheduled to a specified node. For more information, see <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" target="_blank" rel="noopener noreferrer">Labels and Selectors</a>.</p>
</td>
</tr>
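The label-based scheduling described above boils down to a simple rule: a workload's node selector matches a node only if every selector key-value pair appears in the node's labels. A minimal sketch in Python (a hypothetical helper for illustration, not CCE or Kubernetes code):

```python
# Hypothetical minimal matcher for equality-based label selection.
def matches_selector(node_labels: dict, selector: dict) -> bool:
    """A node matches when every selector key/value pair is present in its labels."""
    return all(node_labels.get(k) == v for k, v in selector.items())

# Illustrative node labels, e.g. added via the console's Add Label dialog.
nodes = {
    "node-1": {"module": "frontend", "disktype": "ssd"},
    "node-2": {"module": "backend"},
}

# Workload affinity: schedule only onto nodes labeled module=frontend.
selector = {"module": "frontend"}
eligible = [name for name, labels in nodes.items() if matches_selector(labels, selector)]
# eligible is ["node-1"]
```

An empty selector matches every node, which is why unlabeled workloads can land anywhere.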
<tr id="cce_10_0003__cce_10_0198_row115391952402"><td class="cellrowborder" valign="top" width="23.68%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.1 "><p id="cce_10_0003__cce_10_0198_p55391457404">Taint</p>
</td>
<td class="cellrowborder" valign="top" width="76.32%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.2 "><div class="p" id="cce_10_0003__cce_10_0198_p2875141354415">This parameter is left blank by default. You can add taints to configure anti-affinity for the node. A maximum of 20 taints are allowed for each node. Each taint contains the following parameters:<ul id="cce_10_0003__cce_10_0198_ul17274222121015"><li id="cce_10_0003__cce_10_0198_li227482216106"><strong id="cce_10_0003__cce_10_0198_b315322791211">Key</strong>: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key.</li><li id="cce_10_0003__cce_10_0198_li7274112241020"><strong id="cce_10_0003__cce_10_0198_b959462991212">Value</strong>: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.).</li><li id="cce_10_0003__cce_10_0198_li2274182211010"><strong id="cce_10_0003__cce_10_0198_b1842253117126">Effect</strong>: Available options are <strong id="cce_10_0003__cce_10_0198_b18422153171218">NoSchedule</strong>, <strong id="cce_10_0003__cce_10_0198_b2042312311128">PreferNoSchedule</strong>, and <strong id="cce_10_0003__cce_10_0198_b1042343111212">NoExecute</strong>.</li></ul>
<td class="cellrowborder" valign="top" width="76.32%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.2 "><div class="p" id="cce_10_0003__cce_10_0198_p2875141354415">This parameter is left blank by default. You can add taints to configure node anti-affinity. A maximum of 20 taints are allowed for each node. Each taint contains the following parameters:<ul id="cce_10_0003__cce_10_0198_ul17274222121015"><li id="cce_10_0003__cce_10_0198_li227482216106"><strong id="cce_10_0003__cce_10_0198_b315322791211">Key</strong>: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key.</li><li id="cce_10_0003__cce_10_0198_li7274112241020"><strong id="cce_10_0003__cce_10_0198_b959462991212">Value</strong>: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.).</li><li id="cce_10_0003__cce_10_0198_li2274182211010"><strong id="cce_10_0003__cce_10_0198_b1842253117126">Effect</strong>: Available options are <strong id="cce_10_0003__cce_10_0198_b18422153171218">NoSchedule</strong>, <strong id="cce_10_0003__cce_10_0198_b2042312311128">PreferNoSchedule</strong>, and <strong id="cce_10_0003__cce_10_0198_b1042343111212">NoExecute</strong>.</li></ul>
<div class="notice" id="cce_10_0003__cce_10_0198_note77443231113"><span class="noticetitle"> NOTICE: </span><div class="noticebody"><ul id="cce_10_0003__cce_10_0198_ul104271158181515"><li id="cce_10_0003__cce_10_0198_li042725811158">If taints are used, you must configure tolerations in the YAML files of pods. Otherwise, scale-up may fail or pods cannot be scheduled onto the added nodes.</li><li id="cce_10_0003__cce_10_0198_li642712581152">After a node pool is created, you can click <strong id="cce_10_0003__cce_10_0198_b15911477124">Edit</strong> to modify its configuration. The modification will be synchronized to all nodes in the node pool.</li></ul>
</div></div>
</div>
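The taint constraints listed above (key and value character rules, allowed effects) can be sketched as a small validator in Python. This is a hypothetical illustration of the documented rules, not Kubernetes' own validation code, and the optional DNS subdomain prefix of a key is not itself validated here:

```python
import re

# 1-63 chars, starting with a letter or digit; letters, digits, -, _, . allowed.
NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._-]{0,62}$")
EFFECTS = {"NoSchedule", "PreferNoSchedule", "NoExecute"}

def valid_taint(key: str, value: str, effect: str) -> bool:
    """Check a taint against the documented key/value/effect rules (sketch only)."""
    # A key may carry a DNS subdomain prefix, e.g. "example.com/dedicated";
    # only the name part after "/" is checked in this sketch.
    name = key.rsplit("/", 1)[-1]
    return bool(NAME_RE.match(name)) and bool(NAME_RE.match(value)) and effect in EFFECTS
```

For example, `valid_taint("example.com/dedicated", "gpu", "NoSchedule")` passes, while a key starting with a hyphen or an effect outside the three documented options does not.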
@ -101,19 +100,19 @@
|
||||
</tr>
|
||||
<tr id="cce_10_0003__cce_10_0198_row155390520404"><td class="cellrowborder" valign="top" width="23.68%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.1 "><p id="cce_10_0003__cce_10_0198_p054015516406">Max. Pods</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="76.32%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.2 "><p id="cce_10_0003__cce_10_0198_p18611194424216">Maximum number of pods that can run on the node, including the default system pods.</p>
|
||||
<td class="cellrowborder" valign="top" width="76.32%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.2 "><p id="cce_10_0003__cce_10_0198_p18611194424216">Maximum number of pods that can run on the node, including the default system pods. </p>
|
||||
<p id="cce_10_0003__cce_10_0198_p272611351429">This limit prevents the node from being overloaded with pods.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="cce_10_0003__cce_10_0198_row23431056203915"><td class="cellrowborder" valign="top" width="23.68%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.1 "><p id="cce_10_0003__cce_10_0198_p534319566391">Pre-installation Command</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="76.32%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.2 "><p id="cce_10_0003__cce_10_0198_p11569142494118">Enter commands. A maximum of 1000 characters are allowed.</p>
|
||||
<td class="cellrowborder" valign="top" width="76.32%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.2 "><p id="cce_10_0003__cce_10_0198_p1644103463319">Pre-installation script command, in which Chinese characters are not allowed. The script command will be Base64-transcoded. </p>
<p id="cce_10_0003__cce_10_0198_p03368579295">The script will be executed before Kubernetes software is installed. Note that if the script is incorrect, Kubernetes software may fail to be installed.</p>
</td>
</tr>
<tr id="cce_10_0003__cce_10_0198_row1167794673912"><td class="cellrowborder" valign="top" width="23.68%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.1 "><p id="cce_10_0003__cce_10_0198_p18677104643916">Post-installation Command</p>
</td>
<td class="cellrowborder" valign="top" width="76.32%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.2 "><p id="cce_10_0003__cce_10_0198_p1114204119418">Enter commands. A maximum of 1000 characters are allowed.</p>
<td class="cellrowborder" valign="top" width="76.32%" headers="mcps1.3.4.3.5.2.4.2.2.3.1.2 "><p id="cce_10_0003__cce_10_0198_p14193381618">Post-installation script command, in which Chinese characters are not allowed. The script command will be Base64-transcoded.</p>
<p id="cce_10_0003__cce_10_0198_p13471136154110">The script will be executed after Kubernetes software is installed, which does not affect the installation.</p>
</td>
</tr>
@@ -2,8 +2,7 @@
<h1 class="topictitle1">Managing Node Labels</h1>
<div id="body1523168310157"><div class="section" id="cce_10_0004__section825504204814"><h4 class="sectiontitle">Node Label Usage Scenario</h4><p id="cce_10_0004__p780125519482">Node labels are mainly used in the following scenarios:</p>
<ul id="cce_10_0004__ul1269074720287"><li id="cce_10_0004__li1269054722816">Node management: Node labels are used to classify nodes.</li><li id="cce_10_0004__li13690184719287">Affinity and anti-affinity between a workload and node:<ul id="cce_10_0004__ul1329315507281"><li id="cce_10_0004__li17292050192815">Different workloads have different resource requirements such as CPU, memory, and I/O. If a workload consumes too many resources in a cluster, other workloads in the same cluster may fail to run properly. In this case, you are advised to add different labels to nodes. When deploying a workload, you can select nodes with specified labels for affinity deployment to ensure the normal operation of the system. Otherwise, node anti-affinity deployment can be used.</li><li id="cce_10_0004__li1229255012816">A system can be divided into multiple modules. Each module consists of multiple microservices. To ensure efficient O&M, you can add a module label to each node so that each module can be deployed on the corresponding node. In this way, modules do not interfere with each other and microservices can be easily maintained on their nodes.</li></ul>
</li></ul>
<ul id="cce_10_0004__ul1269074720287"><li id="cce_10_0004__li1269054722816">Node management: Node labels are used to classify nodes.</li><li id="cce_10_0004__li13690184719287">Node affinity or anti-affinity for workloads: By adding labels to nodes, you can schedule pods to specific nodes through node affinity or prevent pods from being scheduled to specific nodes through node anti-affinity. For details, see <a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a>.</li></ul>
</div>
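As a sketch of the module scenario above, a node could be given a module label and a workload pinned to it via a node selector (the label key and value below are illustrative assumptions, not values from this document):

```yaml
# Sketch: schedule a Deployment only onto nodes carrying a
# hypothetical module=payment label added by the administrator.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payment-service
  template:
    metadata:
      labels:
        app: payment-service
    spec:
      nodeSelector:
        module: payment      # pods land only on nodes with this label
      containers:
      - name: app
        image: nginx:alpine
```

For richer affinity and anti-affinity rules, use the scheduling policies referenced above instead of a plain node selector.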
<div class="section" id="cce_10_0004__section74111324152813"><h4 class="sectiontitle"><span class="keyword" id="cce_10_0004__keyword544709935144944">Inherent Label of a Node</span></h4><p id="cce_10_0004__p096179164111">After a node is created, some fixed labels exist and cannot be deleted. For details about these labels, see <a href="#cce_10_0004__table83962234533">Table 1</a>.</p>
<div class="note" id="cce_10_0004__note1531361014395"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0004__p16314181043913">Do not manually change the inherent labels that are automatically added to a node. If the manually changed value conflicts with the system value, the system value is used.</p>
@@ -77,7 +77,7 @@
<div class="notice" id="cce_10_0007__note177339212275"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_10_0007__p998763711231">Before viewing logs, ensure that the time of the browser is the same as that on the backend server.</p>
</div></div>
<ol id="cce_10_0007__en-us_topic_0107283638_ol14644105712488"><li id="cce_10_0007__en-us_topic_0107283638_li2619151017014"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b153351729122716">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li1340513385528"><span>Click the <strong id="cce_10_0007__b24101331162716">Deployments</strong> tab and click <span class="uicontrol" id="cce_10_0007__uicontrol741018314276"><b>View Log</b></span> of the target workload.</span><p><p id="cce_10_0007__en-us_topic_0107283638_p17548132715421">In the displayed <strong id="cce_10_0007__b793112517535">View Log</strong> window, you can view logs.</p>
<div class="note" id="cce_10_0007__en-us_topic_0107283638_note216713316213"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0007__en-us_topic_0107283638_p101679316212">The displayed logs are standard output logs of containers and do not have persistence and advanced O&M capabilities. To use more comprehensive log capabilities, see <a href="cce_10_0553.html">Logs</a>. If the function of collecting standard output is enabled for the workload (enabled by default), you can go to AOM to view more workload logs. For details, see <a href="cce_10_0018.html">Connecting CCE to AOM</a>.</p>
<div class="note" id="cce_10_0007__en-us_topic_0107283638_note216713316213"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0007__en-us_topic_0107283638_p101679316212">The displayed logs are standard output logs of containers and do not have persistence and advanced O&M capabilities. To use more comprehensive log capabilities, see <a href="cce_10_0553.html">Logs</a>. If the function of collecting standard output is enabled for the workload (enabled by default), you can go to AOM to view more workload logs. For details, see <a href="cce_10_0018.html">Collecting Container Logs Using ICAgent</a>.</p>
</div></div>
</p></li></ol>
</div>
@@ -4,11 +4,11 @@
<div id="body1522665832344"><p id="cce_10_0010__p13310145119810">You can learn about a cluster network from the following two aspects:</p>
<ul id="cce_10_0010__ul65247121891"><li id="cce_10_0010__li14524161214917">What is a cluster network like? A cluster consists of multiple nodes, and pods (or containers) are running on the nodes. Nodes and containers need to communicate with each other. For details about the cluster network types and their functions, see <a href="#cce_10_0010__section1131733719195">Cluster Network Structure</a>.</li><li id="cce_10_0010__li55241612391">How is pod access implemented in a cluster? Accessing a pod or container is a process of accessing services of a user. Kubernetes provides <a href="#cce_10_0010__section1860619221134">Service</a> and <a href="#cce_10_0010__section1248852094313">Ingress</a> to address pod access issues. This section summarizes common network access scenarios. You can select the proper scenario based on site requirements. For details about the network access scenarios, see <a href="#cce_10_0010__section1286493159">Access Scenarios</a>.</li></ul>
<div class="section" id="cce_10_0010__section1131733719195"><a name="cce_10_0010__section1131733719195"></a><a name="section1131733719195"></a><h4 class="sectiontitle">Cluster Network Structure</h4><p id="cce_10_0010__p3299181794916">All nodes in the cluster are located in a VPC and use the VPC network. The container network is managed by dedicated network add-ons.</p>
<p id="cce_10_0010__p452843519446"><span><img id="cce_10_0010__image94831936164418" src="en-us_image_0000001750950104.png"></span></p>
<p id="cce_10_0010__p452843519446"><span><img id="cce_10_0010__image94831936164418" src="en-us_image_0000001897906049.png"></span></p>
<ul id="cce_10_0010__ul1916179122617"><li id="cce_10_0010__li13455145754315"><strong id="cce_10_0010__b19468105563811">Node Network</strong><p id="cce_10_0010__p17682193014812">A node network assigns IP addresses to hosts (nodes in the figure above) in a cluster. Select a VPC subnet as the node network of the CCE cluster. The number of available IP addresses in a subnet determines the maximum number of nodes (including master nodes and worker nodes) that can be created in a cluster. This quantity is also affected by the container network. For details, see the container network model.</p>
</li><li id="cce_10_0010__li16131141644715"><strong id="cce_10_0010__b1975815172433">Container Network</strong><p id="cce_10_0010__p523322010499">A container network assigns IP addresses to containers in a cluster. CCE inherits the IP-Per-Pod-Per-Network network model of Kubernetes. That is, each pod has an independent IP address on a network plane and all containers in a pod share the same network namespace. All pods in a cluster exist in a directly connected flat network. They can access each other through their IP addresses without using NAT. Kubernetes only provides a network mechanism for pods, but does not directly configure pod networks. The configuration of pod networks is implemented by specific container network add-ons. The container network add-ons are responsible for configuring networks for pods and managing container IP addresses.</p>
<p id="cce_10_0010__p3753153443514">Currently, CCE supports the following container network models:</p>
<ul id="cce_10_0010__ul1751111534368"><li id="cce_10_0010__li133611549182410">Container tunnel network: The container tunnel network is constructed on but independent of the node network through tunnel encapsulation. This network model uses VXLAN to encapsulate Ethernet packets into UDP packets and transmits them in tunnels. Open vSwitch serves as the backend virtual switch.</li><li id="cce_10_0010__li285944033514">VPC network: The VPC network uses VPC routing to integrate with the underlying network. This network model applies to performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the route quota in a VPC network. Each node is assigned a CIDR block of a fixed size. This networking model is free from tunnel encapsulation overhead and outperforms the container tunnel network model. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in a cluster can be directly accessed from outside the cluster.</li><li id="cce_10_0010__li5395140132618">Developed by CCE, Cloud Native 2.0 network deeply integrates Elastic Network Interfaces (ENIs) and Sub Network Interfaces (sub-ENIs) of VPC. Container IP addresses are allocated from the VPC CIDR block. ELB passthrough networking is supported to direct access requests to containers. Security groups and elastic IPs (EIPs) are bound to deliver high performance.</li></ul>
<ul id="cce_10_0010__ul1751111534368"><li id="cce_10_0010__li133611549182410">Container tunnel network: The container tunnel network is constructed on but independent of the node network through tunnel encapsulation. This network model uses VXLAN to encapsulate Ethernet packets into UDP packets and transmits them in tunnels. Open vSwitch serves as the backend virtual switch.</li><li id="cce_10_0010__li285944033514">VPC network: The VPC network uses VPC routing to integrate with the underlying network. This network model applies to performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the route quota in a VPC network. Each node is assigned a CIDR block of a fixed size. This networking model is free from tunnel encapsulation overhead and outperforms the container tunnel network model. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in a cluster can be directly accessed from outside the cluster.</li><li id="cce_10_0010__li5395140132618">Developed by CCE, Cloud Native 2.0 network deeply integrates Elastic Network Interfaces (ENIs) and Sub Network Interfaces (sub-ENIs) of VPC. Container IP addresses are allocated from the VPC CIDR block. ELB passthrough networking is supported to direct access requests to containers. Security groups and EIPs are bound to deliver high performance.</li></ul>
<p id="cce_10_0010__p397482011109">The performance, networking scale, and application scenarios of a container network vary according to the container network model. For details about the functions and features of different container network models, see <a href="cce_10_0281.html">Overview</a>.</p>
</li><li id="cce_10_0010__li9139522183714"><strong id="cce_10_0010__b1885317214113">Service Network</strong><p id="cce_10_0010__p584703114499">Service is also a Kubernetes object. Each Service has a static IP address. When creating a cluster on CCE, you can specify the Service CIDR block. The Service CIDR block cannot overlap with the node or container CIDR block. The Service CIDR block can be used only within a cluster.</p>
</li></ul>
@@ -25,9 +25,9 @@
</div>
<div class="section" id="cce_10_0010__section1286493159"><a name="cce_10_0010__section1286493159"></a><a name="section1286493159"></a><h4 class="sectiontitle">Access Scenarios</h4><p id="cce_10_0010__p1558001514155">Workload access scenarios can be categorized as follows:</p>
<ul id="cce_10_0010__ul125010117542"><li id="cce_10_0010__li1466355519018">Intra-cluster access: A ClusterIP Service is used for workloads in the same cluster to access each other.</li><li id="cce_10_0010__li1014011111110">Access from outside a cluster: A Service (NodePort or LoadBalancer type) or an ingress is recommended for a workload outside a cluster to access workloads in the cluster.<ul id="cce_10_0010__ul101426119117"><li id="cce_10_0010__li8904911447">Access through the public network: An EIP should be bound to the node or load balancer.</li><li id="cce_10_0010__li2501311125411">Access through the private network: The workload can be accessed through the internal IP address of the node or load balancer. If workloads are located in different VPCs, a peering connection is required to enable communication between different VPCs.</li></ul>
</li><li id="cce_10_0010__li1066365520014">The workload can access the external network as follows:<ul id="cce_10_0010__ul17529512239"><li id="cce_10_0010__li26601017165619">Accessing an intranet: The workload accesses the intranet address, but the implementation method varies depending on container network models. Ensure that the peer security group allows the access requests from the container CIDR block.</li><li id="cce_10_0010__li8257105318237">Accessing a public network: Assign an EIP to the node where the workload runs (when the VPC network or tunnel network model is used), bind an EIP to the pod IP address (when the Cloud Native Network 2.0 model is used), or configure SNAT rules through the NAT gateway. For details, see <a href="cce_10_0400.html">Accessing Public Networks from a Container</a>.</li></ul>
</li><li id="cce_10_0010__li1066365520014">The workload can access the external network as follows:<ul id="cce_10_0010__ul17529512239"><li id="cce_10_0010__li26601017165619">Accessing an intranet: The workload accesses the intranet address, but the implementation method varies depending on container network models. Ensure that the peer security group allows the access requests from the container CIDR block.</li><li id="cce_10_0010__li8257105318237">Accessing a public network: Assign an EIP to the node where the workload runs (when the VPC network or tunnel network model is used), bind an EIP to the pod IP address (when the Cloud Native Network 2.0 model is used), or configure SNAT rules through the NAT gateway. For details, see <a href="cce_10_0400.html">Accessing the Internet from a Container</a>.</li></ul>
</li></ul>
<div class="fignone" id="cce_10_0010__fig13795829151515"><span class="figcap"><b>Figure 3 </b>Network access diagram</span><br><span><img id="cce_10_0010__image445972519529" src="en-us_image_0000001797909889.png"></span></div>
<div class="fignone" id="cce_10_0010__fig13795829151515"><span class="figcap"><b>Figure 3 </b>Network access diagram</span><br><span><img id="cce_10_0010__image445972519529" src="en-us_image_0000001851586668.png"></span></div>
</div>
</div>
<div>
@@ -4,9 +4,9 @@
<div id="body1522736584192"><div class="section" id="cce_10_0011__section13559184110492"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0011__p32401248184910">ClusterIP Services allow workloads in the same cluster to use their cluster-internal domain names to access each other.</p>
<p id="cce_10_0011__p653753053815">The cluster-internal domain name format is <em id="cce_10_0011__i8179113533712"><Service name></em>.<em id="cce_10_0011__i14179133519374"><Namespace of the workload></em><strong id="cce_10_0011__b164892813716">.svc.cluster.local:</strong><em id="cce_10_0011__i19337102815712"><Port></em>, for example, <strong id="cce_10_0011__b8115811381">nginx.default.svc.cluster.local:80</strong>.</p>
<p id="cce_10_0011__p1778412445517"><a href="#cce_10_0011__fig192245420557">Figure 1</a> shows the mapping relationships between access channels, container ports, and access ports.</p>
<div class="fignone" id="cce_10_0011__fig192245420557"><a name="cce_10_0011__fig192245420557"></a><a name="fig192245420557"></a><span class="figcap"><b>Figure 1 </b>Intra-cluster access (ClusterIP)</span><br><span><img id="cce_10_0011__image1942163010278" src="en-us_image_0000001750791656.png"></span></div>
<div class="fignone" id="cce_10_0011__fig192245420557"><a name="cce_10_0011__fig192245420557"></a><a name="fig192245420557"></a><span class="figcap"><b>Figure 1 </b>Intra-cluster access (ClusterIP)</span><br><span><img id="cce_10_0011__image1942163010278" src="en-us_image_0000001898025885.png"></span></div>
</div>
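As a sketch, a ClusterIP Service reachable at <strong>nginx.default.svc.cluster.local:80</strong> (the example domain name above) could be declared as follows; the selector label is an illustrative assumption:

```yaml
# Sketch: a ClusterIP Service exposing port 80 of nginx pods.
# Reachable in-cluster as nginx.default.svc.cluster.local:80.
apiVersion: v1
kind: Service
metadata:
  name: nginx            # first part of the cluster-internal domain name
  namespace: default     # second part of the cluster-internal domain name
spec:
  type: ClusterIP
  selector:
    app: nginx           # assumed pod label; pods with it receive the traffic
  ports:
  - protocol: TCP
    port: 80             # Service port
    targetPort: 80       # container port the workload listens on
```

This declarative form is equivalent to the console procedure described in the next section.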
<div class="section" id="cce_10_0011__section51925078171335"><h4 class="sectiontitle">Creating a ClusterIP Service</h4><ol id="cce_10_0011__ol1321170617144"><li id="cce_10_0011__li41731123658"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0011__li836916478329"><span>In the navigation pane, choose <strong id="cce_10_0011__b18658321171411"><span id="cce_10_0011__text9765124722315">Services & Ingresses</span></strong>. In the upper right corner, click <span class="uicontrol" id="cce_10_0011__uicontrol132971717714"><b>Create Service</b></span>.</span></li><li id="cce_10_0011__li3476651017144"><span>Set intra-cluster access parameters.</span><p><ul id="cce_10_0011__ul4446314017144"><li id="cce_10_0011__li6462394317144"><strong id="cce_10_0011__b181470402505">Service Name</strong>: Service name, which can be the same as the workload name.</li><li id="cce_10_0011__li89543531070"><strong id="cce_10_0011__b2091115317145">Service Type</strong>: Select <strong id="cce_10_0011__b291265312145">ClusterIP</strong>.</li><li id="cce_10_0011__li4800017144"><strong id="cce_10_0011__b3997151161512">Namespace</strong>: Namespace to which the workload belongs.</li><li id="cce_10_0011__li43200017144"><strong id="cce_10_0011__b16251723161514">Selector</strong>: Add a label and click <strong id="cce_10_0011__b157041550131611">Confirm</strong>. A Service selects a pod based on the added label. You can also click <strong id="cce_10_0011__b796831114161">Reference Workload Label</strong> to reference the label of an existing workload. 
In the dialog box that is displayed, select a workload and click <strong id="cce_10_0011__b1117311264160">OK</strong>.</li><li id="cce_10_0011__li388800117144"><strong id="cce_10_0011__b150413392315954">Port Settings</strong><ul id="cce_10_0011__ul13757123384316"><li id="cce_10_0011__li475711338435"><strong id="cce_10_0011__b712192113108">Protocol</strong>: protocol used by the Service.</li><li id="cce_10_0011__li353122153610"><strong id="cce_10_0011__b2766425101013">Service Port</strong>: port used by the Service. The port number ranges from 1 to 65535.</li><li id="cce_10_0011__li177581033194316"><strong id="cce_10_0011__b2045852761014">Container Port</strong>: port on which the workload listens. For example, Nginx uses port 80 by default.</li></ul>
<div class="section" id="cce_10_0011__section51925078171335"><h4 class="sectiontitle">Creating a ClusterIP Service</h4><ol id="cce_10_0011__ol1321170617144"><li id="cce_10_0011__li41731123658"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0011__li836916478329"><span>In the navigation pane, choose <strong id="cce_10_0011__b18658321171411"><span id="cce_10_0011__text9765124722315">Services & Ingresses</span></strong>. In the upper right corner, click <span class="uicontrol" id="cce_10_0011__uicontrol132971717714"><b>Create Service</b></span>.</span></li><li id="cce_10_0011__li3476651017144"><span>Configure intra-cluster access parameters.</span><p><ul id="cce_10_0011__ul4446314017144"><li id="cce_10_0011__li6462394317144"><strong id="cce_10_0011__b181470402505">Service Name</strong>: Specify a Service name, which can be the same as the workload name.</li><li id="cce_10_0011__li89543531070"><strong id="cce_10_0011__b2091115317145">Service Type</strong>: Select <strong id="cce_10_0011__b291265312145">ClusterIP</strong>.</li><li id="cce_10_0011__li4800017144"><strong id="cce_10_0011__b3997151161512">Namespace</strong>: Namespace to which the workload belongs.</li><li id="cce_10_0011__li43200017144"><strong id="cce_10_0011__b16251723161514">Selector</strong>: Add a label and click <strong id="cce_10_0011__b157041550131611">Confirm</strong>. A Service selects a pod based on the added label. You can also click <strong id="cce_10_0011__b796831114161">Reference Workload Label</strong> to use the label of an existing workload. In the dialog box that is displayed, select a workload and click <strong id="cce_10_0011__b1117311264160">OK</strong>.</li><li id="cce_10_0011__li142435567390"><strong id="cce_10_0011__b11211151715470">IPv6</strong>: This function is disabled by default. After this function is enabled, the cluster IP address of the Service changes to an IPv6 address. 
<strong id="cce_10_0011__b11322182810261">This parameter is available only in clusters of v1.15 or later with IPv6 enabled (set during cluster creation).</strong></li><li id="cce_10_0011__li388800117144"><strong id="cce_10_0011__b150413392315954">Port Settings</strong><ul id="cce_10_0011__ul13757123384316"><li id="cce_10_0011__li475711338435"><strong id="cce_10_0011__b712192113108">Protocol</strong>: protocol used by the Service.</li><li id="cce_10_0011__li353122153610"><strong id="cce_10_0011__b2766425101013">Service Port</strong>: port used by the Service. The port number ranges from 1 to 65535.</li><li id="cce_10_0011__li177581033194316"><strong id="cce_10_0011__b2045852761014">Container Port</strong>: port on which the workload listens. For example, Nginx uses port 80 by default.</li></ul>
</li></ul>
</p></li><li id="cce_10_0011__li5563226917144"><span>Click <strong id="cce_10_0011__b15590122052614">OK</strong>.</span></li></ol>
</div>
@@ -6,16 +6,16 @@
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0681.html">Creating a LoadBalancer Service</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0385.html">Using Annotations to Configure Load Balancing</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0385.html">Using Annotations to Balance Load</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0683.html">Service Using HTTP or HTTPS</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0683.html">Configuring an HTTP or HTTPS Service</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0684.html">Configuring Health Check for Multiple Ports</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0729.html">Configuring Timeout for a Service</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0684.html">Configuring Health Check on Multiple Service Ports</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0685.html">Setting the Pod Ready Status Through the ELB Health Check</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0729.html">Configuring Timeout for a LoadBalancer Service</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0355.html">Enabling Passthrough Networking for LoadBalancer Services</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0084.html">Enabling ICMP Security Group Rules</a></strong><br>
@@ -1,10 +1,10 @@
<a name="cce_10_0018"></a><a name="cce_10_0018"></a>
<h1 class="topictitle1">Connecting CCE to AOM</h1>
<h1 class="topictitle1">Collecting Container Logs Using ICAgent</h1>
<div id="body1522667123001"><p id="cce_10_0018__p78381781804">CCE works with AOM to collect workload logs. When a node is created, ICAgent (a DaemonSet named <strong id="cce_10_0018__b13829819578">icagent</strong> in the <strong id="cce_10_0018__b697274313582">kube-system</strong> namespace of a cluster) of AOM is installed by default. ICAgent collects workload logs and reports them to AOM. You can view workload logs on the CCE or AOM console.</p>
<div class="section" id="cce_10_0018__section17884754413"><h4 class="sectiontitle">Constraints</h4><p id="cce_10_0018__p23831558355">ICAgent only collects text logs in .log, .trace, and .out formats.</p>
</div>
<div class="section" id="cce_10_0018__section1951732710"><h4 class="sectiontitle">Using ICAgent to Collect Logs</h4><ol id="cce_10_0018__ol1253654833013"><li id="cce_10_0018__li19284854163014"><span>When <a href="cce_10_0047.html">creating a workload</a>, set logging for the container.</span></li><li id="cce_10_0018__li2427158104715"><span>Click <span><img id="cce_10_0018__image134281583473" src="en-us_image_0000001750791484.png"></span> to add a log policy.</span><p><div class="p" id="cce_10_0018__p9862125810472">The following uses Nginx as an example. Log policies vary depending on workloads.<div class="fignone" id="cce_10_0018__fig19856172153216"><span class="figcap"><b>Figure 1 </b>Adding a log policy</span><br><span><img id="cce_10_0018__image664110265156" src="en-us_image_0000001865613281.png"></span></div>
<div class="section" id="cce_10_0018__section1951732710"><h4 class="sectiontitle">Using ICAgent to Collect Logs</h4><ol id="cce_10_0018__ol1253654833013"><li id="cce_10_0018__li19284854163014"><span>When <a href="cce_10_0047.html">creating a workload</a>, set logging for the container.</span></li><li id="cce_10_0018__li2427158104715"><span>Click <span><img id="cce_10_0018__image134281583473" src="en-us_image_0000001898026057.png"></span> to add a log policy.</span><p><div class="p" id="cce_10_0018__p9862125810472">The following uses Nginx as an example. Log policies vary depending on workloads.<div class="fignone" id="cce_10_0018__fig19856172153216"><span class="figcap"><b>Figure 1 </b>Adding a log policy</span><br><span><img id="cce_10_0018__image664110265156" src="en-us_image_0000001851587156.png"></span></div>
</div>
</p></li><li id="cce_10_0018__li1479392315150"><span>Set <strong id="cce_10_0018__b5461630195419">Volume Type</strong> to <span class="uicontrol" id="cce_10_0018__uicontrol105212302547"><b>hostPath</b></span> or <span class="uicontrol" id="cce_10_0018__uicontrol1752103095410"><b>EmptyDir</b></span>.</span><p>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0018__table115901715550" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Configuring log policies</caption><thead align="left"><tr id="cce_10_0018__row45851074554"><th align="left" class="cellrowborder" valign="top" width="22.12%" id="mcps1.3.3.2.3.2.1.2.3.1.1"><p id="cce_10_0018__p115843785517">Parameter</p>
@@ -25,7 +25,7 @@
</tr>
<tr id="cce_10_0018__row19587147165512"><td class="cellrowborder" valign="top" width="22.12%" headers="mcps1.3.3.2.3.2.1.2.3.1.1 "><p id="cce_10_0018__p1158647155518">Mount Path</p>
</td>
<td class="cellrowborder" valign="top" width="77.88000000000001%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><div class="p" id="cce_10_0018__p358711715554">Container path (for example, <strong id="cce_10_0018__b8656121314711">/tmp</strong>) to which the storage resources will be mounted.<div class="notice" id="cce_10_0018__note155879745516"><span class="noticetitle"> NOTICE: </span><div class="noticebody"><ul id="cce_10_0018__ul14587570556"><li id="cce_10_0018__li95877735510">Do not mount a volume to a system directory such as <strong id="cce_10_0018__b26844422295">/</strong> or <strong id="cce_10_0018__b968404214299">/var/run</strong>. Otherwise, an exception occurs. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, which leads to a container startup failure or workload creation failure.</li><li id="cce_10_0018__li1258777175519">If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host machine may be damaged.</li><li id="cce_10_0018__li1943916477113">AOM collects only the first 20 logs that have been modified recently. It collects logs from 2 levels of subdirectories by default.</li><li id="cce_10_0018__li545718441116">AOM only collects <span class="uicontrol" id="cce_10_0018__uicontrol27371025162017"><b>.log</b></span>, <span class="uicontrol" id="cce_10_0018__uicontrol874242592011"><b>.trace</b></span>, and <span class="uicontrol" id="cce_10_0018__uicontrol1974322522012"><b>.out</b></span> text logs in mounting paths.</li><li id="cce_10_0018__li866676185016">For details about how to set permissions for mount points in a container, see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" target="_blank" rel="noopener noreferrer">Configure a Security Context for a Pod or Container</a>.</li></ul>
<td class="cellrowborder" valign="top" width="77.88000000000001%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><div class="p" id="cce_10_0018__p358711715554">Container path (for example, <strong id="cce_10_0018__b8656121314711">/tmp</strong>) to which the storage resources will be mounted.<div class="notice" id="cce_10_0018__note155879745516"><span class="noticetitle"> NOTICE: </span><div class="noticebody"><ul id="cce_10_0018__ul14587570556"><li id="cce_10_0018__li95877735510">Do not mount storage to a system directory such as <strong id="cce_10_0018__b0630176102713">/</strong> or <strong id="cce_10_0018__b063118642719">/var/run</strong>, as this may cause container errors. Mount the volume to an empty directory. If the directory is not empty, ensure that it contains no files that affect container startup. Otherwise, such files will be replaced, and the container will fail to start or the workload will fail to be created.</li><li id="cce_10_0018__li1258777175519">If a volume is mounted to a high-risk directory, start the container with an account that has the minimum permissions. Otherwise, high-risk files on the host may be damaged.</li><li id="cce_10_0018__li1943916477113">AOM collects only the first 20 logs that have been modified recently. It collects logs from 2 levels of subdirectories by default.</li><li id="cce_10_0018__li545718441116">AOM only collects <span class="uicontrol" id="cce_10_0018__uicontrol27371025162017"><b>.log</b></span>, <span class="uicontrol" id="cce_10_0018__uicontrol874242592011"><b>.trace</b></span>, and <span class="uicontrol" id="cce_10_0018__uicontrol1974322522012"><b>.out</b></span> text logs in mounting paths.</li><li id="cce_10_0018__li866676185016">For details about how to set permissions for mount points in a container, see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" target="_blank" rel="noopener noreferrer">Configure a Security Context for a Pod or Container</a>.</li></ul>
</div></div>
</div>
</td>
@ -98,7 +98,7 @@ spec:
name: vol-log
imagePullSecrets:
- name: default-secret</pre>
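The hunk above shows only the tail of the emptyDir example. As a hedged, self-contained sketch of the same pattern (only <strong>vol-log</strong> and <strong>default-secret</strong> come from the fragment; the workload name and image are illustrative, and the mount path <strong>/var/log/nginx</strong> is taken from the surrounding text):

```yaml
# Hedged sketch: an emptyDir volume mounted at /var/log/nginx so that
# ICAgent can collect the container logs. Names other than vol-log and
# default-secret are illustrative, not values from this guide.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-log-demo          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-log-demo
  template:
    metadata:
      labels:
        app: nginx-log-demo
    spec:
      containers:
      - name: container-0       # illustrative name
        image: nginx:alpine     # illustrative image
        volumeMounts:
        - mountPath: /var/log/nginx
          name: vol-log
      volumes:
      - emptyDir: {}
        name: vol-log
      imagePullSecrets:
      - name: default-secret
```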
<p id="cce_10_0018__p878213715533">The following shows how to use a hostPath volume. Compared with emptyDir, the type of <strong id="cce_10_0018__b1818572291612">volumes</strong> is changed to <strong id="cce_10_0018__b8437327171619">hostPath</strong>, and the path on the host needs to be configured for this hostPath volume. In the following example, <span class="uicontrol" id="cce_10_0018__uicontrol046012383406"><b>/tmp/log</b></span> on the host is mounted to <span class="uicontrol" id="cce_10_0018__uicontrol1546533819400"><b>/var/log/nginx</b></span>. In this way, the ICAgent can collect logs in <strong id="cce_10_0018__b1246512382409">/var/log/nginx</strong>, without deleting the logs from <strong id="cce_10_0018__b64661838144012">/tmp/log</strong>.</p>
<p id="cce_10_0018__p878213715533">The following shows how to use a hostPath volume. Compared with emptyDir, the type of <strong id="cce_10_0018__b1818572291612">volumes</strong> is changed to <strong id="cce_10_0018__b8437327171619">hostPath</strong>, and the path on the host needs to be configured for this hostPath volume. In the following example, <span class="uicontrol" id="cce_10_0018__uicontrol046012383406"><b>/tmp/log</b></span> on the host is mounted to <span class="uicontrol" id="cce_10_0018__uicontrol1546533819400"><b>/var/log/nginx</b></span>. In this way, the ICAgent can collect logs in <strong id="cce_10_0018__b1246512382409">/var/log/nginx</strong>, without deleting the logs from <strong id="cce_10_0018__b64661838144012">/tmp/log</strong>.</p>
<pre class="screen" id="cce_10_0018__screen1347245314534">apiVersion: apps/v1
kind: Deployment
metadata:
@ -155,8 +155,8 @@ spec:
<td class="cellrowborder" valign="top" width="19.23%" headers="mcps1.3.4.7.2.4.1.2 "><p id="cce_10_0018__p6329709512">Extended host path</p>
</td>
<td class="cellrowborder" valign="top" width="63.71%" headers="mcps1.3.4.7.2.4.1.3 "><p id="cce_10_0018__p32881805119">Extended host paths contain pod IDs or container names to distinguish different containers into which the host path is mounted.</p>
<p id="cce_10_0018__p1728888115112">A level-3 directory is added to the original volume directory/subdirectory. You can easily obtain the files output by a single <span class="keyword" id="cce_10_0018__keyword1378884164">Pod</span>.</p>
<ul id="cce_10_0018__ul2028828105113"><li id="cce_10_0018__li428815865110"><strong id="cce_10_0018__b125940737">None</strong>: No extended path is configured. </li><li id="cce_10_0018__li62889814517"><strong id="cce_10_0018__b186838195">PodUID</strong>: ID of a pod.</li><li id="cce_10_0018__li528818135113"><strong id="cce_10_0018__b839851076">PodName</strong>: name of a pod.</li><li id="cce_10_0018__li62882084517"><strong id="cce_10_0018__b279684090">PodUID/ContainerName</strong>: pod ID and container name.</li><li id="cce_10_0018__li528898175110"><strong id="cce_10_0018__b8818125942116">PodName/ContainerName</strong>: pod name and container name.</li></ul>
<p id="cce_10_0018__p1728888115112">A level-3 directory is added to the original volume directory/subdirectory. You can easily obtain the files output by a single <span class="keyword" id="cce_10_0018__keyword1146433393">Pod</span>.</p>
<ul id="cce_10_0018__ul2028828105113"><li id="cce_10_0018__li428815865110"><strong id="cce_10_0018__b1203413527">None</strong>: No extended path is configured. </li><li id="cce_10_0018__li62889814517"><strong id="cce_10_0018__b1523679015">PodUID</strong>: ID of a pod.</li><li id="cce_10_0018__li528818135113"><strong id="cce_10_0018__b604021733">PodName</strong>: name of a pod.</li><li id="cce_10_0018__li62882084517"><strong id="cce_10_0018__b1376912744">PodUID/ContainerName</strong>: pod ID and container name.</li><li id="cce_10_0018__li528898175110"><strong id="cce_10_0018__b8818125942116">PodName/ContainerName</strong>: pod name and container name.</li></ul>
</td>
</tr>
<tr id="cce_10_0018__row732915085118"><td class="cellrowborder" valign="top" width="17.06%" headers="mcps1.3.4.7.2.4.1.1 "><p id="cce_10_0018__p17329004514">policy.logs.rotate</p>
@ -164,7 +164,7 @@ spec:
<td class="cellrowborder" valign="top" width="19.23%" headers="mcps1.3.4.7.2.4.1.2 "><p id="cce_10_0018__p123292055113">Log dump</p>
</td>
<td class="cellrowborder" valign="top" width="63.71%" headers="mcps1.3.4.7.2.4.1.3 "><p id="cce_10_0018__p1017113396539">Log dump refers to rotating log files on a local host.</p>
<ul id="cce_10_0018__ul1617120398533"><li id="cce_10_0018__li71711639105316"><strong id="cce_10_0018__b4837638192520">Enabled</strong>: AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped immediately. A new <strong id="cce_10_0018__b98429388254">.zip</strong> file is generated in the directory where the log file is located. For a log file, AOM stores only the latest 20 <strong id="cce_10_0018__b2216332192917">.zip</strong> files. When the number of <strong id="cce_10_0018__b1621653252914">.zip</strong> files exceeds 20, earlier <strong id="cce_10_0018__b1321623212917">.zip</strong> files will be deleted. After the dump is complete, the log file in AOM will be cleared.</li><li id="cce_10_0018__li817133985315"><strong id="cce_10_0018__b1683498653">Disabled</strong>: AOM does not dump log files.</li></ul>
<ul id="cce_10_0018__ul1617120398533"><li id="cce_10_0018__li71711639105316"><strong id="cce_10_0018__b4837638192520">Enabled</strong>: AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped immediately. A new <strong id="cce_10_0018__b98429388254">.zip</strong> file is generated in the directory where the log file is located. For a log file, AOM stores only the latest 20 <strong id="cce_10_0018__b2216332192917">.zip</strong> files. When the number of <strong id="cce_10_0018__b1621653252914">.zip</strong> files exceeds 20, earlier <strong id="cce_10_0018__b1321623212917">.zip</strong> files will be deleted. After the dump is complete, the log file in AOM will be cleared.</li><li id="cce_10_0018__li817133985315"><strong id="cce_10_0018__b1317453866">Disabled</strong>: AOM does not dump log files.</li></ul>
<div class="note" id="cce_10_0018__note121711639195319"><span class="notetitle"> NOTE: </span><div class="notebody"><ul id="cce_10_0018__ul817183918533"><li id="cce_10_0018__li9171183945310">AOM rotates log files using copytruncate. Before enabling log dumping, ensure that log files are written in the append mode. Otherwise, file holes may occur.</li><li id="cce_10_0018__li1117153914535">Currently, mainstream log components such as Log4j and Logback support log file rotation. If you have already set rotation for log files, skip the configuration. Otherwise, conflicts may occur.</li><li id="cce_10_0018__li317113915532">You are advised to configure log file rotation for your own services to flexibly control the size and number of rotated files.</li></ul>
</div></div>
</td>
@ -174,7 +174,7 @@ spec:
<td class="cellrowborder" valign="top" width="19.23%" headers="mcps1.3.4.7.2.4.1.2 "><p id="cce_10_0018__p14388112019519">Collection path</p>
</td>
<td class="cellrowborder" valign="top" width="63.71%" headers="mcps1.3.4.7.2.4.1.3 "><p id="cce_10_0018__p63882201153">A collection path narrows down the scope of collection to specified logs.</p>
<ul id="cce_10_0018__ul73883209510"><li id="cce_10_0018__li14388162011513">If no collection path is specified, log files in <strong id="cce_10_0018__b159316672">.log</strong>, <strong id="cce_10_0018__b1413162972">.trace</strong>, and <strong id="cce_10_0018__b1207442241">.out</strong> formats will be collected from the specified path.</li><li id="cce_10_0018__li03886201854"><strong id="cce_10_0018__b1215278268">/Path/**/</strong> indicates that all log files in <strong id="cce_10_0018__b771053601">.log</strong>, <strong id="cce_10_0018__b2076050429">.trace</strong>, and <strong id="cce_10_0018__b159772462">.out</strong> formats will be recursively collected from the specified path and all subdirectories at 5 levels deep.</li><li id="cce_10_0018__li1938811201058">* in log file names indicates a fuzzy match.</li></ul>
<ul id="cce_10_0018__ul73883209510"><li id="cce_10_0018__li14388162011513">If no collection path is specified, log files in <strong id="cce_10_0018__b473416318">.log</strong>, <strong id="cce_10_0018__b1965847560">.trace</strong>, and <strong id="cce_10_0018__b688464659">.out</strong> formats will be collected from the specified path.</li><li id="cce_10_0018__li03886201854"><strong id="cce_10_0018__b1024751887">/Path/**/</strong> indicates that all log files in <strong id="cce_10_0018__b120678913">.log</strong>, <strong id="cce_10_0018__b1780575222">.trace</strong>, and <strong id="cce_10_0018__b1093378982">.out</strong> formats will be recursively collected from the specified path and all subdirectories at 5 levels deep.</li><li id="cce_10_0018__li1938811201058">* in log file names indicates a fuzzy match.</li></ul>
<p id="cce_10_0018__p17388152013515">Example: The collection path <strong id="cce_10_0018__b19951612237">/tmp/**/test*.log</strong> indicates that all <strong id="cce_10_0018__b49571315239">.log</strong> files prefixed with <strong id="cce_10_0018__b4958101202315">test</strong> will be collected from <strong id="cce_10_0018__b695815172316">/tmp</strong> and subdirectories at 5 levels deep.</p>
<div class="caution" id="cce_10_0018__note153881220751"><span class="cautiontitle"> CAUTION: </span><div class="cautionbody"><p id="cce_10_0018__p938810204516">Ensure that the ICAgent version is 5.12.22 or later.</p>
</div></div>
@ -217,7 +217,7 @@ kubectl logs -f <pod_name> -n namespace (real-time query in tail -f mode)<
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0182.html">Collecting Data Plane Logs</a></div>
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0182.html">Collecting Container Logs</a></div>
</div>
</div>
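The hostPath variant described in this topic can be sketched with a volume fragment. A hedged excerpt, reusing only the <strong>vol-log</strong> name and the paths named in the text:

```yaml
# Hedged sketch: compared with the emptyDir variant, only the volume
# definition changes. /tmp/log must already exist on the host node; it is
# mounted into the container so logs survive outside the pod.
volumes:
- name: vol-log
  hostPath:
    path: /tmp/log
```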
@ -20,7 +20,7 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0399.html">Configuring Intra-VPC Access</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0400.html">Accessing Public Networks from a Container</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0400.html">Accessing the Internet from a Container</a></strong><br>
</li>
</ul>
</div>
@ -14,7 +14,7 @@
<p id="cce_10_0026__en-us_topic_0179639644_p1694171715446"><span><img id="cce_10_0026__en-us_topic_0179639644_image49418175441" src="en-us_image_0000001696838318.png"></span></p>
<p id="cce_10_0026__en-us_topic_0179639644_p109481744411"></p>
</li><li id="cce_10_0026__en-us_topic_0179639644_li1094161784410">Click <strong id="cce_10_0026__en-us_topic_0179639644_b139145611337">View Trace</strong> in the <strong id="cce_10_0026__en-us_topic_0179639644_b1591756103313">Operation</strong> column. The trace details are displayed.<p id="cce_10_0026__en-us_topic_0179639644_p1695161714447"><span><img id="cce_10_0026__en-us_topic_0179639644_image1904172011220" src="en-us_image_0000001758618249.png"></span></p>
</li><li id="cce_10_0026__en-us_topic_0179639644_li129561719446">For details about key fields in the trace structure, see section "Trace References" > "Trace Structure" and section "Trace References" > "Example Traces".</li></ol>
</li><li id="cce_10_0026__en-us_topic_0179639644_li129561719446">For details about key fields in the trace structure, see section "Trace References" > "Trace Structure" and section "Trace References" > "Example Traces" in the <em id="cce_10_0026__en-us_topic_0179639644_i949883013810">CTS User Guide</em>.</li></ol>
</div>
</div>
<div>
File diff suppressed because it is too large
@ -5,7 +5,7 @@
</div>
<div class="section" id="cce_10_0036__section1489437103610"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0036__ul0917755162415"><li id="cce_10_0036__li1891719552246">Deleting a node will lead to pod migration, which may affect services. Therefore, delete nodes during off-peak hours.</li><li id="cce_10_0036__li791875552416">Unexpected risks may occur during the operation. Back up related data in advance.</li><li id="cce_10_0036__li15918105582417">While the node is being deleted, the backend will set the node to the unschedulable state.</li><li id="cce_10_0036__li12918145520241">Only worker nodes can be stopped.</li></ul>
</div>
<div class="section" id="cce_10_0036__section14341135612442"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0036__ol5687174923613"><li id="cce_10_0036__li133915311359"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0036__li159521745431"><span>In the navigation pane, choose <span class="uicontrol" id="cce_10_0036__uicontrol378153945103635"><b>Nodes</b></span>. On the displayed page, click the <strong id="cce_10_0036__b1786259085103635">Nodes</strong> tab.</span></li><li id="cce_10_0036__li224719151931"><span>Locate the target node and click its name.</span></li><li id="cce_10_0036__li117301253183717"><span>In the upper right corner of the ECS details page, click <strong id="cce_10_0036__b2347626195316">Stop</strong>. In the displayed dialog box, click <strong id="cce_10_0036__b434722605318">Yes</strong>.</span><p><div class="fignone" id="cce_10_0036__fig19269101385311"><span class="figcap"><b>Figure 1 </b>ECS details page</span><br><span><img id="cce_10_0036__image124001418192" src="en-us_image_0000001817324166.png"></span></div>
<div class="section" id="cce_10_0036__section14341135612442"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0036__ol5687174923613"><li id="cce_10_0036__li133915311359"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0036__li159521745431"><span>In the navigation pane, choose <span class="uicontrol" id="cce_10_0036__uicontrol378153945103635"><b>Nodes</b></span>. On the displayed page, click the <strong id="cce_10_0036__b1786259085103635">Nodes</strong> tab.</span></li><li id="cce_10_0036__li224719151931"><span>Locate the target node and click its name.</span></li><li id="cce_10_0036__li117301253183717"><span>In the upper right corner of the ECS details page, click <strong id="cce_10_0036__b2347626195316">Stop</strong>. In the displayed dialog box, click <strong id="cce_10_0036__b434722605318">Yes</strong>.</span><p><div class="fignone" id="cce_10_0036__fig19269101385311"><span class="figcap"><b>Figure 1 </b>ECS details page</span><br><span><img id="cce_10_0036__image124001418192" src="en-us_image_0000001851745864.png"></span></div>
</p></li></ol>
</div>
</div>
@ -14,6 +14,8 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0007.html">Managing Workloads and Jobs</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0833.html">Managing Custom Resources</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0463.html">Kata Runtime and Common Runtime</a></strong><br>
</li>
</ul>
@ -3,7 +3,7 @@
<h1 class="topictitle1">Creating a Deployment</h1>
<div id="body1505966783091"><div class="section" id="cce_10_0047__section686591217411"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0047__p1695318124112">Deployments are workloads (for example, Nginx) that do not store any data or status. You can create Deployments on the CCE console or by running kubectl commands.</p>
</div>
<div class="section" id="cce_10_0047__section7271245481"><h4 class="sectiontitle">Prerequisites</h4><ul id="cce_10_0047__ul12960152618147"><li id="cce_10_0047__li596019263145">Before creating a workload, you must have an available cluster. For details on how to create a cluster, see <a href="cce_10_0028.html">Creating a CCE Cluster</a>.</li><li id="cce_10_0047__li19160540131415">To enable public access to a workload, ensure that an EIP or load balancer has been bound to at least one node in the cluster.<div class="note" id="cce_10_0047__note991371915511"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0047__p7914191915520">If a pod has multiple containers, ensure that the ports used by the containers do not conflict with each other. Otherwise, creating the Deployment will fail.</p>
<div class="section" id="cce_10_0047__section7271245481"><h4 class="sectiontitle">Prerequisites</h4><ul id="cce_10_0047__ul12960152618147"><li id="cce_10_0047__li596019263145">Before creating a workload, you must have an available cluster. For details on how to create a cluster, see <a href="cce_10_0028.html">Creating a CCE Standard/Turbo Cluster</a>.</li><li id="cce_10_0047__li19160540131415">To enable public access to a workload, ensure that an EIP or load balancer has been bound to at least one node in the cluster.<div class="note" id="cce_10_0047__note991371915511"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0047__p7914191915520">If a pod has multiple containers, ensure that the ports used by the containers do not conflict with each other. Otherwise, creating the Deployment will fail.</p>
</div></div>
</li></ul>
</div>
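The note above about port conflicts in multi-container pods can be illustrated with a hedged sketch; every name, image, and port here is an assumption for illustration, not a value from this guide:

```yaml
# Hedged sketch: two containers in one pod share the pod's network
# namespace, so they must listen on different ports. If both used port
# 80, creating the Deployment would fail as the note describes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-container-demo      # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: two-container-demo
  template:
    metadata:
      labels:
        app: two-container-demo
    spec:
      containers:
      - name: web
        image: nginx:alpine     # listens on port 80
        ports:
        - containerPort: 80
      - name: sidecar
        image: busybox
        command: ["httpd", "-f", "-p", "8080"]   # listens on 8080 instead
        ports:
        - containerPort: 8080
```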
@ -39,7 +39,7 @@
</tr>
<tr id="cce_10_0047__row161110459565"><td class="cellrowborder" valign="top" width="23%" headers="mcps1.3.3.2.3.2.2.2.1.1.2.1.2.1.3.1.1 "><p id="cce_10_0047__p56111845145612">CPU Quota</p>
</td>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.3.2.3.2.2.2.1.1.2.1.2.1.3.1.2 "><ul id="cce_10_0047__ul9168521572"><li id="cce_10_0047__li15168227577"><strong id="cce_10_0047__b3669018123014">Request</strong>: minimum number of CPU cores required by a container. The default value is 0.25 cores.</li><li id="cce_10_0047__li121681216579"><strong id="cce_10_0047__b833715229303">Limit</strong>: maximum number of CPU cores available for a container. Do not leave <strong id="cce_10_0047__b1257625123019">Limit</strong> unspecified. Otherwise, intensive use of container resources will occur and your workload may exhibit unexpected behavior.</li></ul>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.3.2.3.2.2.2.1.1.2.1.2.1.3.1.2 "><ul id="cce_10_0047__ul9168521572"><li id="cce_10_0047__li15168227577"><strong id="cce_10_0047__b3669018123014">Request</strong>: minimum number of CPU cores required by a container. The default value is 0.25 cores.</li><li id="cce_10_0047__li121681216579"><strong id="cce_10_0047__b67015378282">Limit</strong>: maximum number of CPU cores that can be used by a container. This prevents containers from using excessive resources.</li></ul>
<p id="cce_10_0047__p520715505213">If <strong id="cce_10_0047__b2160104553012">Request</strong> and <strong id="cce_10_0047__b16757125053014">Limit</strong> are not specified, the quota is not limited. For more information and suggestions about <strong id="cce_10_0047__b12633192718313">Request</strong> and <strong id="cce_10_0047__b3633227113119">Limit</strong>, see <a href="cce_10_0163.html">Configuring Container Specifications</a>.</p>
</td>
</tr>
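The Request and Limit settings described above map to the standard Kubernetes <strong>resources</strong> fields of a container spec. A hedged fragment, where 250m equals the 0.25-core console default mentioned above and the limit value is illustrative:

```yaml
# Hedged fragment: CPU request of 0.25 cores (the console default) and an
# explicit limit so the container cannot monopolize node CPU.
containers:
- name: container-0             # illustrative name
  image: nginx:alpine           # illustrative image
  resources:
    requests:
      cpu: 250m                 # minimum guaranteed CPU
    limits:
      cpu: 500m                 # illustrative hard cap
```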
@ -73,7 +73,7 @@
</div>
</li><li id="cce_10_0047__li127141737191112">(Optional) <strong id="cce_10_0047__b6712437288">Lifecycle</strong>: Configure operations to be performed in a specific phase of the container lifecycle, such as Startup Command, Post-Start, and Pre-Stop. For details, see <a href="cce_10_0105.html">Configuring Container Lifecycle Parameters</a>.</li><li id="cce_10_0047__li9714123711114">(Optional) <strong id="cce_10_0047__b20675191620295">Health Check</strong>: Set the liveness probe, ready probe, and startup probe as required. For details, see <a href="cce_10_0112.html">Configuring Container Health Check</a>.</li><li id="cce_10_0047__li5714123721119">(Optional) <strong id="cce_10_0047__b17656135219292">Environment Variables</strong>: Configure variables for the container running environment using key-value pairs. These variables transfer external information to containers running in pods and can be flexibly modified after application deployment. For details, see <a href="cce_10_0113.html">Configuring Environment Variables</a>.</li><li id="cce_10_0047__li571418378113">(Optional) <strong id="cce_10_0047__b1211902117322">Data Storage</strong>: Mount local storage or cloud storage to the container. The application scenarios and mounting modes vary with the storage type. For details, see <a href="cce_10_0374.html">Storage</a>.<div class="note" id="cce_10_0047__note101269342356"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0047__p17126153413513">If the workload contains more than one pod, EVS volumes cannot be mounted.</p>
</div></div>
</li><li id="cce_10_0047__li19714437161112">(Optional) <strong id="cce_10_0047__b347211410339">Security Context</strong>: Enter the user ID to assign container permissions, which protects the system and other containers from being affected.</li><li id="cce_10_0047__li1371419375118">(Optional) <strong id="cce_10_0047__b4129950193311">Logging</strong>: Report standard container output logs to AOM by default, without requiring manual settings. You can manually configure the log collection path. For details, see <a href="cce_10_0018.html">Connecting CCE to AOM</a>.<p id="cce_10_0047__p154878397159">To disable the standard output of the current workload, add the annotation <strong id="cce_10_0047__b882934924220">kubernetes.AOM.log.stdout: []</strong> in <a href="#cce_10_0047__li179714209414">Labels and Annotations</a>. For details about how to use this annotation, see <a href="cce_10_0386.html#cce_10_0386__table194691458405">Table 1</a>.</p>
</li><li id="cce_10_0047__li19714437161112">(Optional) <strong id="cce_10_0047__b347211410339">Security Context</strong>: Enter the user ID to assign container permissions, which protects the system and other containers from being affected.</li><li id="cce_10_0047__li1371419375118">(Optional) <strong id="cce_10_0047__b4129950193311">Logging</strong>: Report standard container output logs to AOM by default, without requiring manual settings. You can manually configure the log collection path. For details, see <a href="cce_10_0018.html">Collecting Container Logs Using ICAgent</a>.<p id="cce_10_0047__p154878397159">To disable the standard output of the current workload, add the annotation <strong id="cce_10_0047__b882934924220">kubernetes.AOM.log.stdout: []</strong> in <a href="#cce_10_0047__li179714209414">Labels and Annotations</a>. For details about how to use this annotation, see <a href="cce_10_0386.html#cce_10_0386__table194691458405">Table 1</a>.</p>
|
||||
</li></ul>
|
||||
</div>
|
||||
</li><li id="cce_10_0047__li1487514116369"><strong id="cce_10_0047__b479415459616">Image Access Credential</strong>: Select the credential used for accessing the image repository. The default value is <strong id="cce_10_0047__b157944451067">default-secret</strong>. You can use default-secret to access images in SWR. For details about <strong id="cce_10_0047__b582111347813">default-secret</strong>, see <a href="cce_10_0388.html#cce_10_0388__section11760122012591">default-secret</a>.</li><li id="cce_10_0047__li11649141318194">(Optional) <strong id="cce_10_0047__b513531164612">GPU</strong>: <strong id="cce_10_0047__b11135211134611">All</strong> is selected by default. The workload instance will be scheduled to the node of the specified GPU type.</li></ul>
|
||||
@ -81,10 +81,10 @@
<p id="cce_10_0047__p1447162741615"><strong id="cce_10_0047__b154561192487">(Optional) Service Settings</strong></p>
<p id="cce_10_0047__p102354303348">A Service provides external access for pods. With a static IP address, a Service forwards access traffic to pods and automatically balances load for these pods.</p>
<p id="cce_10_0047__p13343123113612">You can also create a Service after creating a workload. For details about Services of different types, see <a href="cce_10_0249.html">Overview</a>.</p>
<div class="p" id="cce_10_0047__p310913521612"><strong id="cce_10_0047__b204881212144816">(Optional) Advanced Settings</strong><ul id="cce_10_0047__ul142811417"><li id="cce_10_0047__li0421513417"><strong id="cce_10_0047__b15415314859">Upgrade</strong>: Specify the upgrade mode and parameters of the workload. <strong id="cce_10_0047__b153151558165913">Rolling upgrade</strong> and <strong id="cce_10_0047__b1621251402">Replace upgrade</strong> are available. For details, see <a href="cce_10_0397.html">Workload Upgrade Policies</a>.</li><li id="cce_10_0047__li5292111713411"><strong id="cce_10_0047__b289714923012">Scheduling</strong>: Configure affinity and anti-affinity policies for flexible workload scheduling. Load affinity and node affinity are provided.<ul id="cce_10_0047__ul16976133413332"><li id="cce_10_0047__li7687143311331"><strong id="cce_10_0047__b1243811103214">Load Affinity</strong>: Common load affinity policies are offered for quick load affinity deployment.<ul id="cce_10_0047__ul1865517492338"><li id="cce_10_0047__li84431255153310"><strong id="cce_10_0047__b2012211505446">Multi-AZ deployment is preferred</strong>: Workload pods are preferentially scheduled to nodes in different AZs through pod anti-affinity (<strong id="cce_10_0047__b101221150204413">podAntiAffinity</strong>). If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ but onto different nodes for high availability. If there are fewer nodes than pods, the extra pods will fail to run.</li><li id="cce_10_0047__li10775194183413"><strong id="cce_10_0047__b1667575214119">Forcible multi-AZ deployment</strong>: Workload pods are forcibly scheduled to nodes in different AZs through pod anti-affinity (<strong id="cce_10_0047__b10853186174217">podAntiAffinity</strong>). 
If there are fewer AZs than pods, the extra pods will fail to run.</li><li id="cce_10_0047__li177960111349"><strong id="cce_10_0047__b18931103644418">Custom policies</strong>: Affinity and anti-affinity policies can be customized as needed. For details, see <a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a>.</li></ul>
</li><li id="cce_10_0047__li136191442193318"><strong id="cce_10_0047__b540915914458">Node Affinity</strong>: Common node affinity policies are offered for quick node affinity deployment.<ul id="cce_10_0047__ul106562113415"><li id="cce_10_0047__li11588172453415"><strong id="cce_10_0047__b14488122019458">Node Affinity</strong>: Workload pods can be deployed on specified nodes through node affinity (<strong id="cce_10_0047__b1048816202454">nodeAffinity</strong>). If no node is specified, the pods will be randomly scheduled based on the default scheduling policy of the cluster.</li><li id="cce_10_0047__li12588142414347"><strong id="cce_10_0047__b1143642735217">Specified node pool scheduling</strong>: Workload pods can be deployed in a specified node pool through node affinity (<strong id="cce_10_0047__b1443715272523">nodeAffinity</strong>). If no node pool is specified, the pods will be randomly scheduled based on the default scheduling policy of the cluster.</li><li id="cce_10_0047__li14588192418347"><strong id="cce_10_0047__b145411819458">Custom policies</strong>: Affinity and anti-affinity policies can be customized as needed. For details, see <a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a>.</li></ul>
<div class="p" id="cce_10_0047__p310913521612"><strong id="cce_10_0047__b204881212144816">(Optional) Advanced Settings</strong><ul id="cce_10_0047__ul142811417"><li id="cce_10_0047__li0421513417"><strong id="cce_10_0047__b15415314859">Upgrade</strong>: Specify the upgrade mode and parameters of the workload. <strong id="cce_10_0047__b153151558165913">Rolling upgrade</strong> and <strong id="cce_10_0047__b1621251402">Replace upgrade</strong> are available. For details, see <a href="cce_10_0397.html">Workload Upgrade Policies</a>.</li><li id="cce_10_0047__li5292111713411"><strong id="cce_10_0047__b289714923012">Scheduling</strong>: Configure affinity and anti-affinity policies for flexible workload scheduling. Load affinity and node affinity are provided.<ul id="cce_10_0047__ul16976133413332"><li id="cce_10_0047__li7687143311331"><strong id="cce_10_0047__b1243811103214">Load Affinity</strong>: Common load affinity policies are offered for quick load affinity deployment.<ul id="cce_10_0047__ul1865517492338"><li id="cce_10_0047__li84431255153310"><strong id="cce_10_0047__b21119711352">Multi-AZ deployment is preferred</strong>: Workload pods are preferentially scheduled to nodes in different AZs through pod anti-affinity (<strong id="cce_10_0047__b156511824123612">podAntiAffinity</strong>). If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ but onto different nodes for high availability. If there are fewer nodes than pods, the extra pods will fail to run.</li><li id="cce_10_0047__li10775194183413"><strong id="cce_10_0047__b1667575214119">Forcible multi-AZ deployment</strong>: Workload pods are forcibly scheduled to nodes in different AZs through pod anti-affinity (<strong id="cce_10_0047__b10853186174217">podAntiAffinity</strong>). 
If there are fewer AZs than pods, the extra pods will fail to run.</li><li id="cce_10_0047__li177960111349"><strong id="cce_10_0047__b18931103644418">Custom policies</strong>: Affinity and anti-affinity policies can be customized as needed. For details, see <a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a>.</li></ul>
</li><li id="cce_10_0047__li136191442193318"><strong id="cce_10_0047__b540915914458">Node Affinity</strong>: Common node affinity policies are offered for quick node affinity deployment.<ul id="cce_10_0047__ul106562113415"><li id="cce_10_0047__li11588172453415"><strong id="cce_10_0047__b1354131044913">Node Affinity</strong>: Workload pods can be deployed on specified nodes through node affinity (<strong id="cce_10_0047__b17387313105016">nodeAffinity</strong>). If no node is specified, the pods will be randomly scheduled based on the default scheduling policy of the cluster.</li><li id="cce_10_0047__li12588142414347"><strong id="cce_10_0047__b1143642735217">Specified node pool scheduling</strong>: Workload pods can be deployed in a specified node pool through node affinity (<strong id="cce_10_0047__b1443715272523">nodeAffinity</strong>). If no node pool is specified, the pods will be randomly scheduled based on the default scheduling policy of the cluster.</li><li id="cce_10_0047__li14588192418347"><strong id="cce_10_0047__b145411819458">Custom policies</strong>: Affinity and anti-affinity policies can be customized as needed. For details, see <a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a>.</li></ul>
|
||||
</li></ul>
</li><li id="cce_10_0047__li13285132913414"><strong id="cce_10_0047__b15261142101217">Toleration</strong>: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. For details, see <a href="cce_10_0728.html">Taints and Tolerations</a>.</li><li id="cce_10_0047__li179714209414"><a name="cce_10_0047__li179714209414"></a><a name="li179714209414"></a><strong id="cce_10_0047__b562135212518">Labels and Annotations</strong>: Add labels or annotations for pods using key-value pairs. After entering the key and value, click <strong id="cce_10_0047__b1439805716617">Confirm</strong>. For details about how to use and configure labels and annotations, see <a href="cce_10_0386.html">Labels and Annotations</a>.</li><li id="cce_10_0047__li1917237124111"><strong id="cce_10_0047__b1428118321389">DNS</strong>: Configure a separate DNS policy for the workload. For details, see <a href="cce_10_0365.html">DNS Configuration</a>.</li><li id="cce_10_0047__li191696549535"><strong id="cce_10_0047__b563938103113">Network Configuration</strong><ul id="cce_10_0047__ul101792551538"><li id="cce_10_0047__li1985863319162">Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see <a href="cce_10_0382.html">Configuring QoS for a Pod</a>.</li><li id="cce_10_0047__li053620118549">IPv6 shared bandwidth: available only for clusters that support this function. After this function is enabled, you can configure a shared bandwidth for a pod with IPv6 dual-stack ENIs. For details, see <a href="cce_10_0604.html">Configuring Shared Bandwidth for a Pod with IPv6 Dual-Stack ENIs</a>.</li></ul>
</li></ul>
</div>
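The multi-AZ options above map to standard Kubernetes pod anti-affinity rules. Below is a minimal sketch of the spec fragment that the "preferred" policy corresponds to, written as a plain Python dict; the `app: my-workload` label is an illustrative placeholder, not something this guide prescribes.

```python
# Sketch of the pod spec fragment behind "Multi-AZ deployment is preferred":
# a soft podAntiAffinity rule keyed on the AZ topology label, so the
# scheduler spreads matching pods across AZs when it can.
# The label selector value "my-workload" is a hypothetical example.
preferred_multi_az = {
    "affinity": {
        "podAntiAffinity": {
            "preferredDuringSchedulingIgnoredDuringExecution": [
                {
                    "weight": 100,
                    "podAffinityTerm": {
                        "labelSelector": {"matchLabels": {"app": "my-workload"}},
                        # Pods matching the selector repel each other per AZ.
                        "topologyKey": "topology.kubernetes.io/zone",
                    },
                }
            ]
        }
    }
}

# "Forcible multi-AZ deployment" uses
# requiredDuringSchedulingIgnoredDuringExecution instead, which is why
# extra pods stay unschedulable when there are fewer AZs than replicas.
```

The soft (preferred) rule degrades gracefully, which matches the behavior described above: in a single-AZ cluster the pods still run, just on different nodes.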
</p></li><li id="cce_10_0047__li01417411620"><span>Click <strong id="cce_10_0047__b5824103317919">Create Workload</strong> in the lower right corner.</span></li></ol>
<div id="body1505966783091"><div class="section" id="cce_10_0048__section530452474212"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0048__p475763119422">StatefulSets are a type of workload whose data or status is stored while they are running. For example, MySQL is deployed as a StatefulSet because it needs to store new data.</p>
<p id="cce_10_0048__p167381126153418">A container can be migrated between different hosts, but data is not stored on the hosts. To store StatefulSet data persistently, attach HA storage volumes provided by CCE to the container.</p>
</div>
<div class="section" id="cce_10_0048__section6329175411713"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0048__ul3611113041018"><li id="cce_10_0048__li28151714171212">When you delete or scale a StatefulSet, the system does not delete the storage volumes associated with the StatefulSet to ensure data security.</li><li id="cce_10_0048__li9611230141012">When you delete a StatefulSet, reduce the number of replicas to <strong id="cce_10_0048__b20407050121312">0</strong> before deleting the StatefulSet so that pods in the StatefulSet can be stopped in order.</li><li id="cce_10_0048__li611418311218">When you create a StatefulSet, a headless Service is required for pod access. For details, see <a href="cce_10_0398.html">Headless Services</a>.</li><li id="cce_10_0048__li093214329312">When a node is unavailable, pods become <strong id="cce_10_0048__b69930313149">Unready</strong>. In this case, manually delete the pods of the StatefulSet so that the pods can be migrated to a normal node.</li></ul>
</div>
<div class="section" id="cce_10_0048__section1734962819219"><h4 class="sectiontitle">Prerequisites</h4><ul id="cce_10_0048__ul1685719423426"><li id="cce_10_0048__li612018144437">Before creating a workload, you must have an available cluster. For details on how to create a cluster, see <a href="cce_10_0028.html">Creating a CCE Standard/Turbo Cluster</a>.</li><li id="cce_10_0048__li19160540131415">To enable public access to a workload, ensure that an EIP or load balancer has been bound to at least one node in the cluster.<div class="note" id="cce_10_0048__note991371915511"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0048__p195248425512">If a pod has multiple containers, ensure that the ports used by the containers do not conflict with each other. Otherwise, creating the StatefulSet will fail.</p>
</div></div>
</li></ul>
</div>
</tr>
<tr id="cce_10_0048__cce_10_0047_row161110459565"><td class="cellrowborder" valign="top" width="23%" headers="mcps1.3.4.2.3.2.2.2.1.1.2.1.2.1.3.1.1 "><p id="cce_10_0048__cce_10_0047_p56111845145612">CPU Quota</p>
</td>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.4.2.3.2.2.2.1.1.2.1.2.1.3.1.2 "><ul id="cce_10_0048__cce_10_0047_ul9168521572"><li id="cce_10_0048__cce_10_0047_li15168227577"><strong id="cce_10_0048__cce_10_0047_b3669018123014">Request</strong>: minimum number of CPU cores required by a container. The default value is 0.25 cores.</li><li id="cce_10_0048__cce_10_0047_li121681216579"><strong id="cce_10_0048__cce_10_0047_b67015378282">Limit</strong>: maximum number of CPU cores that can be used by a container. This prevents containers from using excessive resources.</li></ul>
<p id="cce_10_0048__cce_10_0047_p520715505213">If <strong id="cce_10_0048__cce_10_0047_b2160104553012">Request</strong> and <strong id="cce_10_0048__cce_10_0047_b16757125053014">Limit</strong> are not specified, the quota is not limited. For more information and suggestions about <strong id="cce_10_0048__cce_10_0047_b12633192718313">Request</strong> and <strong id="cce_10_0048__cce_10_0047_b3633227113119">Limit</strong>, see <a href="cce_10_0163.html">Configuring Container Specifications</a>.</p>
</td>
</tr>
</li><li id="cce_10_0048__li4810204715113">(Optional) <strong id="cce_10_0048__cce_10_0047_b6712437288">Lifecycle</strong>: Configure operations to be performed in a specific phase of the container lifecycle, such as Startup Command, Post-Start, and Pre-Stop. For details, see <a href="cce_10_0105.html">Configuring Container Lifecycle Parameters</a>.</li><li id="cce_10_0048__li4810134791115">(Optional) <strong id="cce_10_0048__cce_10_0047_b20675191620295">Health Check</strong>: Set the liveness probe, readiness probe, and startup probe as required. For details, see <a href="cce_10_0112.html">Configuring Container Health Check</a>.</li><li id="cce_10_0048__li1810447181110">(Optional) <strong id="cce_10_0048__cce_10_0047_b17656135219292">Environment Variables</strong>: Configure variables for the container running environment using key-value pairs. These variables transfer external information to containers running in pods and can be flexibly modified after application deployment. For details, see <a href="cce_10_0113.html">Configuring Environment Variables</a>.</li><li id="cce_10_0048__li4810124731117">(Optional) <strong id="cce_10_0048__b11209164933310">Data Storage</strong>: Mount local storage or cloud storage to the container. The application scenarios and mounting modes vary with the storage type. For details, see <a href="cce_10_0374.html">Storage</a>.<div class="note" id="cce_10_0048__note101269342356"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0048__ul26865762616"><li id="cce_10_0048__li4956180135815">StatefulSets support dynamic attachment of EVS disks and local PVs. For details, see <a href="cce_10_0616.html">Dynamically Mounting an EVS Disk to a StatefulSet</a> and <a href="cce_10_0635.html">Dynamically Mounting a Local PV to a StatefulSet</a>.<p id="cce_10_0048__p270761115810">Dynamic mounting is achieved by using the <strong id="cce_10_0048__b5442124241413"><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#volume-claim-templates" target="_blank" rel="noopener noreferrer">volumeClaimTemplates</a></strong> field and depends on the dynamic creation capability of StorageClass. A StatefulSet associates each pod with a PVC using the <strong id="cce_10_0048__b15877917111512">volumeClaimTemplates</strong> field, and the PVC is bound to the corresponding PV. Therefore, after the pod is rescheduled, the original data can still be mounted based on the PVC name.</p>
</li><li id="cce_10_0048__li126861777269">After a workload is created, the storage that is dynamically mounted cannot be updated.</li></ul>
</div></div>
</li><li id="cce_10_0048__li1581013477116">(Optional) <strong id="cce_10_0048__cce_10_0047_b347211410339">Security Context</strong>: Assign container permissions to protect the system and other containers from being affected. Enter the user ID to assign container permissions and prevent systems and other containers from being affected.</li><li id="cce_10_0048__li128105471119">(Optional) <strong id="cce_10_0048__cce_10_0047_b4129950193311">Logging</strong>: Report standard container output logs to AOM by default, without requiring manual settings. You can manually configure the log collection path. For details, see <a href="cce_10_0018.html">Collecting Container Logs Using ICAgent</a>.<p id="cce_10_0048__cce_10_0047_p154878397159">To disable the standard output of the current workload, add the annotation <strong id="cce_10_0048__cce_10_0047_b882934924220">kubernetes.AOM.log.stdout: []</strong> in <a href="cce_10_0047.html#cce_10_0047__li179714209414">Labels and Annotations</a>. For details about how to use this annotation, see <a href="cce_10_0386.html#cce_10_0386__table194691458405">Table 1</a>.</p>
</li></ul>
</div>
</li><li id="cce_10_0048__li1487514116369"><strong id="cce_10_0048__cce_10_0047_b479415459616">Image Access Credential</strong>: Select the credential used for accessing the image repository. The default value is <strong id="cce_10_0048__cce_10_0047_b157944451067">default-secret</strong>. You can use default-secret to access images in SWR. For details about <strong id="cce_10_0048__cce_10_0047_b582111347813">default-secret</strong>, see <a href="cce_10_0388.html#cce_10_0388__section11760122012591">default-secret</a>.</li><li id="cce_10_0048__li11649141318194">(Optional) <strong id="cce_10_0048__cce_10_0047_b513531164612">GPU</strong>: <strong id="cce_10_0048__cce_10_0047_b11135211134611">All</strong> is selected by default. The workload instance will be scheduled to the node of the specified GPU type.</li></ul>
</div>
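The dynamic mounting described in the Data Storage note relies on the StatefulSet's <strong>volumeClaimTemplates</strong> field. Below is a minimal, illustrative sketch of that mechanism as a Python dict in the shape of a StatefulSet manifest; the names (<strong>web</strong>, <strong>web-headless</strong>) and the <strong>csi-disk</strong> storage class are placeholder assumptions, not values mandated by this guide.

```python
# Illustrative StatefulSet fragment: volumeClaimTemplates gives each
# replica its own PVC (data-web-0, data-web-1, ...), so a rescheduled
# pod re-attaches its original volume by PVC name.
# All names and the "csi-disk" StorageClass are hypothetical examples.
statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "web"},
    "spec": {
        "serviceName": "web-headless",  # a headless Service is required
        "replicas": 2,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [{
                    "name": "web",
                    "image": "nginx:alpine",
                    "volumeMounts": [{"name": "data", "mountPath": "/data"}],
                }]
            },
        },
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "storageClassName": "csi-disk",
                "resources": {"requests": {"storage": "10Gi"}},
            },
        }],
    },
}

# PVC names follow <template name>-<pod name>, which is what makes
# re-mounting after rescheduling possible:
pvc_names = [f"data-web-{i}" for i in range(statefulset["spec"]["replicas"])]
```

This is why the storage that is dynamically mounted cannot be updated after creation: the PVCs are owned per pod identity, not per workload revision.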
<p id="cce_10_0048__p75731743299"><strong id="cce_10_0048__b104641840113614">Headless Service Parameters</strong></p>
<p id="cce_10_0048__p757424310917">A headless Service is used to solve the problem of mutual access between pods in a StatefulSet. The headless Service provides a fixed access domain name for each pod. For details, see <a href="cce_10_0398.html">Headless Services</a>.</p>
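In Kubernetes terms, a headless Service is an ordinary Service whose <strong>clusterIP</strong> is set to <strong>None</strong>, so DNS returns per-pod records instead of one virtual IP. A minimal sketch as a Python dict; the <strong>web-headless</strong> name and <strong>app: web</strong> selector are illustrative placeholders.

```python
# Illustrative headless Service: clusterIP "None" is the only thing
# that distinguishes it from a regular ClusterIP Service.
# Names and selector values are hypothetical examples.
headless_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-headless"},
    "spec": {
        "clusterIP": "None",  # marks the Service as headless
        "selector": {"app": "web"},
        "ports": [{"port": 80, "targetPort": 80}],
    },
}

# Each StatefulSet pod then gets a stable DNS name of the form
# <pod>.<service>.<namespace>.svc.cluster.local, e.g.:
pod_fqdn = "web-0.web-headless.default.svc.cluster.local"
```

The fixed per-pod domain name is what allows pods in a StatefulSet to address each other directly.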
<p id="cce_10_0048__p1447162741615"><strong id="cce_10_0048__b4235027843479">(Optional) Service Settings</strong></p>
<p id="cce_10_0048__p102354303348">A Service provides external access for pods. With a static IP address, a Service forwards access traffic to pods and automatically balances load for these pods.</p>
<p id="cce_10_0048__p13343123113612">You can also create a Service after creating a workload. For details about Services of different types, see <a href="cce_10_0249.html">Overview</a>.</p>
<div class="p" id="cce_10_0048__p310913521612"><strong id="cce_10_0048__b21631580735239">(Optional) Advanced Settings</strong><ul id="cce_10_0048__ul142811417"><li id="cce_10_0048__li0421513417"><strong id="cce_10_0048__cce_10_0047_b15415314859">Upgrade</strong>: Specify the upgrade mode and parameters of the workload. <strong id="cce_10_0048__cce_10_0047_b153151558165913">Rolling upgrade</strong> and <strong id="cce_10_0048__cce_10_0047_b1621251402">Replace upgrade</strong> are available. For details, see <a href="cce_10_0397.html">Workload Upgrade Policies</a>.</li><li id="cce_10_0048__li206428507436"><strong id="cce_10_0048__b1840219331836">Pod Management Policies</strong><p id="cce_10_0048__p151323251334">For some distributed systems, ordered StatefulSet pod deployment is unnecessary or undesirable. Such systems require only uniqueness and stable identifiers.</p>
<ul id="cce_10_0048__ul758812493316"><li id="cce_10_0048__li258832417338"><strong id="cce_10_0048__b13534251116">OrderedReady</strong>: The StatefulSet will deploy, delete, or scale pods in order and one by one. (The StatefulSet continues only after the previous pod is ready or deleted.) This is the default policy.</li><li id="cce_10_0048__li1558862416338"><strong id="cce_10_0048__b112293521039">Parallel</strong>: The StatefulSet will create pods in parallel to match the desired scale without waiting, and will delete all pods at once.</li></ul>
</li><li id="cce_10_0048__li7127180594"><strong id="cce_10_0048__cce_10_0047_b289714923012">Scheduling</strong>: Configure affinity and anti-affinity policies for flexible workload scheduling. Load affinity and node affinity are provided.<ul id="cce_10_0048__cce_10_0047_ul16976133413332"><li id="cce_10_0048__cce_10_0047_li7687143311331"><strong id="cce_10_0048__cce_10_0047_b1243811103214">Load Affinity</strong>: Common load affinity policies are offered for quick load affinity deployment.<ul id="cce_10_0048__cce_10_0047_ul1865517492338"><li id="cce_10_0048__cce_10_0047_li84431255153310"><strong id="cce_10_0048__cce_10_0047_b21119711352">Multi-AZ deployment is preferred</strong>: Workload pods are preferentially scheduled to nodes in different AZs through pod anti-affinity (<strong id="cce_10_0048__cce_10_0047_b156511824123612">podAntiAffinity</strong>). If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ but onto different nodes for high availability. If there are fewer nodes than pods, the extra pods will fail to run.</li><li id="cce_10_0048__cce_10_0047_li10775194183413"><strong id="cce_10_0048__cce_10_0047_b1667575214119">Forcible multi-AZ deployment</strong>: Workload pods are forcibly scheduled to nodes in different AZs through pod anti-affinity (<strong id="cce_10_0048__cce_10_0047_b10853186174217">podAntiAffinity</strong>). If there are fewer AZs than pods, the extra pods will fail to run.</li><li id="cce_10_0048__cce_10_0047_li177960111349"><strong id="cce_10_0048__cce_10_0047_b18931103644418">Custom policies</strong>: Affinity and anti-affinity policies can be customized as needed. For details, see <a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a>.</li></ul>
</li><li id="cce_10_0048__cce_10_0047_li136191442193318"><strong id="cce_10_0048__cce_10_0047_b540915914458">Node Affinity</strong>: Common node affinity policies are offered for quick node affinity deployment.<ul id="cce_10_0048__cce_10_0047_ul106562113415"><li id="cce_10_0048__cce_10_0047_li11588172453415"><strong id="cce_10_0048__cce_10_0047_b1354131044913">Node Affinity</strong>: Workload pods can be deployed on specified nodes through node affinity (<strong id="cce_10_0048__cce_10_0047_b17387313105016">nodeAffinity</strong>). If no node is specified, the pods will be randomly scheduled based on the default scheduling policy of the cluster.</li><li id="cce_10_0048__cce_10_0047_li12588142414347"><strong id="cce_10_0048__cce_10_0047_b1143642735217">Specified node pool scheduling</strong>: Workload pods can be deployed in a specified node pool through node affinity (<strong id="cce_10_0048__cce_10_0047_b1443715272523">nodeAffinity</strong>). If no node pool is specified, the pods will be randomly scheduled based on the default scheduling policy of the cluster.</li><li id="cce_10_0048__cce_10_0047_li14588192418347"><strong id="cce_10_0048__cce_10_0047_b145411819458">Custom policies</strong>: Affinity and anti-affinity policies can be customized as needed. For details, see <a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a>.</li></ul>
</li></ul>
</li><li id="cce_10_0048__li13285132913414"><strong id="cce_10_0048__cce_10_0047_b15261142101217">Toleration</strong>: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. For details, see <a href="cce_10_0728.html">Taints and Tolerations</a>.</li><li id="cce_10_0048__li179714209414"><strong id="cce_10_0048__cce_10_0047_b562135212518">Labels and Annotations</strong>: Add labels or annotations for pods using key-value pairs. After entering the key and value, click <strong id="cce_10_0048__cce_10_0047_b1439805716617">Confirm</strong>. For details about how to use and configure labels and annotations, see <a href="cce_10_0386.html">Labels and Annotations</a>.</li><li id="cce_10_0048__li1917237124111"><strong id="cce_10_0048__cce_10_0047_b1428118321389">DNS</strong>: Configure a separate DNS policy for the workload. For details, see <a href="cce_10_0365.html">DNS Configuration</a>.</li><li id="cce_10_0048__li1985863319162"><strong id="cce_10_0048__b157014128328">Network Configuration</strong><ul id="cce_10_0048__ul9870163414162"><li id="cce_10_0048__li8488616152">Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see <a href="cce_10_0382.html">Configuring QoS for a Pod</a>.</li><li id="cce_10_0048__li246062816567">Whether to enable the static IP address: available only for clusters that support this function. After this function is enabled, you can set the interval for reclaiming expired pod IP addresses. For details, see <a href="cce_10_0603.html">Configuring a Static IP Address for a Pod</a>.</li><li id="cce_10_0048__li6361894173">IPv6 shared bandwidth: available only for clusters that support this function. After this function is enabled, you can configure a shared bandwidth for a pod with IPv6 dual-stack ENIs. 
For details, see <a href="cce_10_0604.html">Configuring Shared Bandwidth for a Pod with IPv6 Dual-Stack ENIs</a>.</li></ul>
</li></ul>
</div>
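The two pod management policies above are selected through the StatefulSet's <strong>spec.podManagementPolicy</strong> field. The sketch below contrasts them with a small illustrative helper; the <strong>web-N</strong> pod names are placeholders following the standard StatefulSet naming scheme.

```python
# podManagementPolicy controls ordering, not identity: pods keep their
# stable names (web-0, web-1, ...) under either policy.
ordered = {"spec": {"podManagementPolicy": "OrderedReady"}}  # default
parallel = {"spec": {"podManagementPolicy": "Parallel"}}

def creation_batches(policy, replicas):
    """Illustrative only: which pods are created together under each policy."""
    names = [f"web-{i}" for i in range(replicas)]
    if policy == "OrderedReady":
        # One pod at a time; the next starts only after the previous is ready.
        return [[n] for n in names]
    # Parallel: all pods are created (and deleted) at once.
    return [names]
```

For example, `creation_batches("OrderedReady", 3)` yields three sequential single-pod batches, while `creation_batches("Parallel", 3)` yields one batch of three.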
</p></li><li id="cce_10_0048__li01417411620"><span>Click <strong id="cce_10_0048__b2573105264313">Create Workload</strong> in the lower right corner.</span></li></ol>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.3 "><p id="cce_10_0054__p1864418479569">This operation cannot be undone.</p>
</td>
</tr>
<tr id="cce_10_0054__row273482911598"><td class="cellrowborder" rowspan="15" valign="top" width="14.42%" headers="mcps1.3.2.2.2.5.1.1 "><p id="cce_10_0054__p13120620202119">Worker node</p>
</td>
<td class="cellrowborder" valign="top" width="23.72%" headers="mcps1.3.2.2.2.5.1.2 "><p id="cce_10_0054__p4512165465912">Modifying the security group of a node in a cluster</p>
<div class="note" id="cce_10_0054__note733110531049"><span class="notetitle"> NOTE: </span><div class="notebody"><p id="cce_10_0054__p153411953747">Naming rule of a security group: <em id="cce_10_0054__i143944160222">Cluster name</em>-<strong id="cce_10_0054__b1339915165227">cce-node</strong>-<em id="cce_10_0054__i18400171662214">Random digits</em></p>
<td class="cellrowborder" valign="top" width="29.87%" headers="mcps1.3.2.2.2.5.1.4 "><p id="cce_10_0054__p051216549594">Restore the security group and allow traffic from the security group to pass through. </p>
</td>
</tr>
<tr id="cce_10_0054__row557362812212"><td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.1 "><p id="cce_10_0054__p20573192817215">Modifying the DNS configuration (<strong id="cce_10_0054__b1513011409559">/etc/resolv.conf</strong>) of a node</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.2 "><p id="cce_10_0054__p75737281922">Internal domain names cannot be accessed, which may lead to failures in functions such as add-ons or in-place node upgrades.</p>
<div class="note" id="cce_10_0054__note1029211411497"><span class="notetitle"> NOTE: </span><div class="notebody"><p id="cce_10_0054__p629216418913">If your service needs to use an on-premises DNS server, configure the DNS policy in the workload. Do not change the node's DNS address. For details, see <a href="cce_10_0365.html">DNS Configuration</a>.</p>
</div></div>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.3 "><p id="cce_10_0054__p18547192312618">Restore the DNS configuration based on the DNS configuration of a new node.</p>
</td>
</tr>
<tr id="cce_10_0054__row187355295597"><td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.1 "><p id="cce_10_0054__p14512185455918">Deleting the node</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.2 "><p id="cce_10_0054__p18512105411593">The node will become unavailable.</p>
</tr>
<tr id="cce_10_0054__row177351629205915"><td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.1 "><p id="cce_10_0054__p747565111218">Deleting or modifying the <strong id="cce_10_0054__b1246211210114">/opt/cloud/cce</strong> and <strong id="cce_10_0054__b046217123112">/var/paas</strong> directories, and deleting the data disk</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.2 "><p id="cce_10_0054__p6475751102112">The node will become unavailable.</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.3 "><p id="cce_10_0054__p3844322182018">Reset the node. For details, see <a href="cce_10_0003.html">Resetting a Node</a>.</p>
|
||||
</td>
|
||||
@ -139,7 +148,7 @@
|
||||
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.2 "><p id="cce_10_0054__p750482911912">The permissions will be abnormal.</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.3 "><p id="cce_10_0054__p135041829121913">Do not modify the permissions. Restore the permissions if they have been modified.</p>
</td>
</tr>
<tr id="cce_10_0054__row125701815131514"><td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.1 "><p id="cce_10_0054__p5475185114211">Formatting or partitioning system disks, Docker disks, and kubelet disks on nodes</p>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.3 "><p id="cce_10_0054__p1721942415199">Reset the node. For details, see <a href="cce_10_0003.html">Resetting a Node</a>.</p>
</td>
</tr>
<tr id="cce_10_0054__row19671655616"><td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.1 "><p id="cce_10_0054__p76916185616">Deleting system images such as <strong id="cce_10_0054__b5625191551511">cce-pause</strong> from the node</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.2 "><p id="cce_10_0054__p176516135618">Containers cannot be created and system images cannot be pulled.</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.3 "><p id="cce_10_0054__p106161613562">Copy the image from a functional node for restoration.</p>
</td>
</tr>
<tr id="cce_10_0054__row1528873904220"><td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.1 "><p id="cce_10_0054__p1028823917424">Changing the flavor of a node in a node pool on the ECS console</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.2 "><p id="cce_10_0054__p172881539184210">If a node's flavor differs from the flavor specified for its node pool, the number of nodes added during a node pool scale-out may differ from the expected number.</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.3 "><p id="cce_10_0054__p6288639124218">Change the node flavor to the one specified in the node pool, or delete the node and perform a node pool scale-out again.</p>
</td>
</tr>
</tbody>
</th>
<th align="left" class="cellrowborder" valign="top" width="33.33333333333333%" id="mcps1.3.3.2.2.4.1.2"><p id="cce_10_0054__p1511712819176">Impact</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="33.33333333333333%" id="mcps1.3.3.2.2.4.1.3"><p id="cce_10_0054__p1211762819175">Solution</p>
</th>
</tr>
</thead>
</td>
<td class="cellrowborder" valign="top" width="33.33333333333333%" headers="mcps1.3.3.2.2.4.1.2 "><p id="cce_10_0054__p1393045811172">The DNS in the cluster cannot work properly.</p>
</td>
<td class="cellrowborder" valign="top" width="33.33333333333333%" headers="mcps1.3.3.2.2.4.1.3 "><p id="cce_10_0054__p16261123010200">Restore the security group by referring to <a href="cce_10_0028.html">Creating a CCE Standard/Turbo Cluster</a> and allow traffic from the security group to pass through.</p>
</td>
</tr>
<tr id="cce_10_0054__row1140144414619"><td class="cellrowborder" valign="top" width="33.33333333333333%" headers="mcps1.3.3.2.2.4.1.1 "><p id="cce_10_0054__p137301051270">Deleting CRD resources of network-attachment-definitions of default-network</p>
</tr>
<tr id="cce_10_0054__row1834415386476"><td class="cellrowborder" valign="top" width="33.33333333333333%" headers="mcps1.3.3.2.2.4.1.1 "><p id="cce_10_0054__p16344103804711">Enabling the iptables firewall</p>
</td>
<td class="cellrowborder" valign="top" width="33.33333333333333%" headers="mcps1.3.3.2.2.4.1.2 "><p id="cce_10_0054__p2034483810479">By default, the iptables firewall is disabled on CCE. Enabling the firewall can leave the network inaccessible.</p>
<div class="note" id="cce_10_0054__note15101393365"><span class="notetitle"> NOTE: </span><div class="notebody"><p id="cce_10_0054__p6334856304">Do not enable the iptables firewall. If the iptables firewall must be enabled, check whether the rules configured in <strong id="cce_10_0054__b3640517153312">/etc/sysconfig/iptables</strong> and <strong id="cce_10_0054__b186411617123314">/etc/sysconfig/ip6tables</strong> in the test environment will affect the network.</p>
</div></div>
</td>
<td class="cellrowborder" valign="top" width="33.33333333333333%" headers="mcps1.3.3.2.2.4.1.3 "><p id="cce_10_0054__p99122414365">Disable the iptables firewall and check the rules configured in <strong id="cce_10_0054__b13193131183415">/etc/sysconfig/iptables</strong> and <strong id="cce_10_0054__b319321183411">/etc/sysconfig/ip6tables</strong>.</p>
</td>
</tr>
</tbody>
</th>
</tr>
</thead>
<tbody><tr id="cce_10_0059__row115473572295"><td class="cellrowborder" valign="top" width="14.000000000000002%" headers="mcps1.3.5.2.1.1.2.1.1.4.1.1 "><p id="cce_10_0059__p1181481843113">EulerOS 2.9</p>
</td>
<td class="cellrowborder" valign="top" width="22%" headers="mcps1.3.5.2.1.1.2.1.1.4.1.2 "><p id="cce_10_0059__p3954224155512">v1.23 or later</p>
</td>
<td class="cellrowborder" valign="top" width="64%" headers="mcps1.3.5.2.1.1.2.1.1.4.1.3 "><p id="cce_10_0059__p181413185315">4.18.0-147.5.1.6.h541.eulerosv2r9.x86_64</p>
<p id="cce_10_0059__p1589920212208">4.18.0-147.5.1.6.h766.eulerosv2r9.x86_64</p>
<p id="cce_10_0059__p3871581545">4.18.0-147.5.1.6.h998.eulerosv2r9.x86_64</p>
</td>
</tr>
</tbody>
- protocol: TCP
port: 6379</pre>
<p id="cce_10_0059__en-us_topic_0249851123_p88888434711">The following figure shows how podSelector works.</p>
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig139410543444"><span class="figcap"><b>Figure 1 </b>podSelector</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image185021946194414" src="en-us_image_0000001898025749.png"></span></div>
</li></ul>
<ul id="cce_10_0059__en-us_topic_0249851123_ul68309714213"><li id="cce_10_0059__en-us_topic_0249851123_li1283027192120"><strong id="cce_10_0059__en-us_topic_0249851123_b184891164227">Using namespaceSelector to specify the access scope</strong><pre class="screen" id="cce_10_0059__en-us_topic_0249851123_screen18399134874818">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
- protocol: TCP
port: 6379</pre>
<p id="cce_10_0059__en-us_topic_0249851123_p3874718155019">The following figure shows how namespaceSelector works.</p>
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig127351855617"><span class="figcap"><b>Figure 2 </b>namespaceSelector</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image141441335560" src="en-us_image_0000001897906237.png"></span></div>
</li></ul>
</div>
<div class="section" id="cce_10_0059__section20486817707"><h4 class="sectiontitle">Using Egress Rules</h4><p id="cce_10_0059__en-us_topic_0249851123_p1311606618">Egress supports not only podSelector and namespaceSelector, but also ipBlock.</p>
<div class="note" id="cce_10_0059__en-us_topic_0249851123_note16478276101"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0059__en-us_topic_0249851123_p1547814741018">Only clusters of version 1.23 or later support egress rules. Only nodes running EulerOS 2.9 are supported.</p>
</div></div>
<pre class="screen" id="cce_10_0059__en-us_topic_0249851123_screen14581393131">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
except:
- 172.16.0.40/32 # This CIDR block cannot be accessed. This value must fall within the range specified by <strong id="cce_10_0059__b52842121410">cidr</strong>.</pre>
<p id="cce_10_0059__en-us_topic_0249851123_p3245846202818">The following figure shows how ipBlock works.</p>
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig15678132552812"><span class="figcap"><b>Figure 3 </b>ipBlock</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image6270134419182" src="en-us_image_0000001851745580.png"></span></div>
<p id="cce_10_0059__en-us_topic_0249851123_p1260313810298">You can define ingress and egress in the same rule.</p>
|
||||
<pre class="screen" id="cce_10_0059__en-us_topic_0249851123_screen235835922918">apiVersion: networking.k8s.io/v1
|
||||
kind: NetworkPolicy
|
||||
@ -126,10 +118,10 @@ spec:
|
||||
matchLabels:
|
||||
role: web</pre>
|
||||
<p id="cce_10_0059__en-us_topic_0249851123_p17239137193116">The following figure shows how to use ingress and egress together.</p>
|
||||
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig14112102353618"><span class="figcap"><b>Figure 4 </b>Using both ingress and egress</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image103563915919" src="en-us_image_0000001797871009.png"></span></div>
|
||||
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig14112102353618"><span class="figcap"><b>Figure 4 </b>Using both ingress and egress</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image103563915919" src="en-us_image_0000001897906233.png"></span></div>
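<p>A network policy only restricts traffic for the pods that its podSelector matches; pods not selected by any policy remain open. A common baseline is therefore a default-deny policy with an empty selector, on top of which allow rules such as the examples above are layered. The following is a minimal sketch (the policy name is a placeholder):</p>
<pre class="screen">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny       # example name
spec:
  podSelector: {}          # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress</pre>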
</div>
<div class="section" id="cce_10_0059__section349662212313"><h4 class="sectiontitle">Creating a Network Policy on the Console</h4><ol id="cce_10_0059__ol10753729162012"><li id="cce_10_0059__li67621546123813"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0059__li275310297205"><span>Choose <strong id="cce_10_0059__b1684093473514"><span id="cce_10_0059__text144061727132711">Policies</span></strong> in the navigation pane, click the <span class="uicontrol" id="cce_10_0059__uicontrol8840143418354"><b>Network Policies</b></span> tab, and click <strong id="cce_10_0059__b1684043412358">Create Network Policy</strong> in the upper right corner.</span><p><ul id="cce_10_0059__ul1275420367216"><li id="cce_10_0059__li207540368218"><strong id="cce_10_0059__b5858127617589">Policy Name</strong>: Specify a network policy name.</li><li id="cce_10_0059__li86551950162110"><strong id="cce_10_0059__b2485142065319">Namespace</strong>: Select a namespace in which the network policy is applied.</li><li id="cce_10_0059__li1811145118419"><strong id="cce_10_0059__b1082493183618">Selector</strong>: Enter a label, select the pod to be associated, and click <strong id="cce_10_0059__b39962039143613">Add</strong>. You can also click <span class="uicontrol" id="cce_10_0059__uicontrol127315410439"><b>Reference Workload Label</b></span> to use the label of an existing workload.</li><li id="cce_10_0059__li20288331248"><strong id="cce_10_0059__b288315258371">Inbound Rule</strong>: Click <span><img id="cce_10_0059__image297081312440" src="en-us_image_0000001851745568.png"></span> to add an inbound rule. For details about parameter settings, see <a href="#cce_10_0059__table166419994515">Table 1</a>.<p id="cce_10_0059__p13464141094517"></p>
<p id="cce_10_0059__p1251071818275"><span><img id="cce_10_0059__image3789195442716" src="en-us_image_0000001897906213.png"></span></p>
<div class="p" id="cce_10_0059__p16644759445">
<div class="tablenoborder"><a name="cce_10_0059__table166419994515"></a><a name="table166419994515"></a><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0059__table166419994515" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Adding an inbound rule</caption><thead align="left"><tr id="cce_10_0059__row186401397458"><th align="left" class="cellrowborder" valign="top" width="15%" id="mcps1.3.8.2.2.2.1.4.6.1.2.3.1.1"><p id="cce_10_0059__p163919913452">Parameter</p>
</th>
</tr>
<tr id="cce_10_0059__row1564115912452"><td class="cellrowborder" valign="top" width="15%" headers="mcps1.3.8.2.2.2.1.4.6.1.2.3.1.1 "><p id="cce_10_0059__p8640792450">Source Pod Label</p>
</td>
<td class="cellrowborder" valign="top" width="85%" headers="mcps1.3.8.2.2.2.1.4.6.1.2.3.1.2 "><p id="cce_10_0059__p590912241576">Allow access from pods with this label. If this parameter is not specified, all pods in the namespace can access the port.</p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
</li><li id="cce_10_0059__li208969565264"><strong id="cce_10_0059__b13933104315451">Outbound Rule</strong>: Click <span><img id="cce_10_0059__image190375162714" src="en-us_image_0000001897906225.png"></span> to add an outbound rule. For details about parameter settings, see <a href="#cce_10_0059__table166419994515">Table 1</a>.<p id="cce_10_0059__p74227561415"><span><img id="cce_10_0059__image203216571849" src="en-us_image_0000001863378970.png"></span></p>
<div class="p" id="cce_10_0059__p1052121812515">
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0059__table940510264284" frame="border" border="1" rules="all"><caption><b>Table 2 </b>Adding an outbound rule</caption><thead align="left"><tr id="cce_10_0059__row15405182632814"><th align="left" class="cellrowborder" valign="top" width="15%" id="mcps1.3.8.2.2.2.1.5.5.1.2.3.1.1"><p id="cce_10_0059__p34051926152811">Parameter</p>
</th>
</div>
</div>
</li></ul>
</p></li><li id="cce_10_0059__li1513793212118"><span>After the configuration is complete, click <span class="uicontrol" id="cce_10_0059__uicontrol1498744718284"><b>OK</b></span>.</span></li></ol>
</div>
</div>
<div>
<div class="section" id="cce_10_0063__section102878407207"><h4 class="sectiontitle">Viewing a Node Scaling Policy</h4><p id="cce_10_0063__p713741135215">You can view the associated node pool, rules, and scaling history of a node scaling policy and rectify faults according to the error information displayed.</p>
<ol id="cce_10_0063__ol17409123885219"><li id="cce_10_0063__li148293318248"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li757116188514"><span>In the navigation pane, choose <span class="uicontrol" id="cce_10_0063__uicontrol885043603616"><b>Nodes</b></span>. On the page displayed, click the <strong id="cce_10_0063__b1785019363361">Node Pools</strong> tab and then the name of the node pool for which an auto scaling policy has been created to view the node pool details.</span></li><li id="cce_10_0063__li391162210375"><span>On the node pool details page, click the <strong id="cce_10_0063__b182822310377">Auto Scaling</strong> tab to view the auto scaling configuration and scaling records.</span></li></ol>
</div>
<div class="section" id="cce_10_0063__section128584032017"><h4 class="sectiontitle">Deleting a Node Scaling Policy</h4><ol id="cce_10_0063__ol14644105712488"><li id="cce_10_0063__li41181041153517"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li21181041113517"><span>In the navigation pane, choose <strong id="cce_10_0063__b1214315541372"><span id="cce_10_0063__text82292962415">Policies</span></strong>. On the page displayed, click the <strong id="cce_10_0063__b6742397389">Node Scaling Policies</strong> tab, locate the row containing the target policy, and choose <strong id="cce_10_0063__b1770171519392">More</strong> > <strong id="cce_10_0063__b88264165396">Delete</strong> in the <strong id="cce_10_0063__b7342193564112">Operation</strong> column.</span></li><li id="cce_10_0063__li19809141991015"><span>In the <span class="wintitle" id="cce_10_0063__wintitle195460432178"><b>Delete Node Scaling Policy</b></span> dialog box displayed, confirm whether to delete the policy.</span></li><li id="cce_10_0063__li1340513385528"><span>Click <span class="uicontrol" id="cce_10_0063__uicontrol12723105481711"><b>Yes</b></span> to delete the policy.</span></li></ol>
</div>
<div class="section" id="cce_10_0063__section5652756162214"><h4 class="sectiontitle">Editing a Node Scaling Policy</h4><ol id="cce_10_0063__ol067875612225"><li id="cce_10_0063__li1148617913919"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li19486498394"><span>In the navigation pane, choose <strong id="cce_10_0063__b19317105710390"><span id="cce_10_0063__text105014172246">Policies</span></strong>. On the page displayed, click the <strong id="cce_10_0063__b5317185793910">Node Scaling Policies</strong> tab, locate the row containing the target policy, and click <strong id="cce_10_0063__b822154212401">Edit</strong> in the <strong id="cce_10_0063__b152854419415">Operation</strong> column.</span></li><li id="cce_10_0063__li56781856152211"><span>On the <span class="uicontrol" id="cce_10_0063__uicontrol7933134119486"><b>Edit Node Scaling Policy</b></span> page displayed, configure policy parameters listed in <a href="cce_10_0209.html#cce_10_0209__table18763092201">Table 2</a>.</span></li><li id="cce_10_0063__li86781756112220"><span>After the configuration is complete, click <span class="uicontrol" id="cce_10_0063__uicontrol07463587480"><b>OK</b></span>.</span></li></ol>
</div>
<div class="section" id="cce_10_0063__section367810565223"><h4 class="sectiontitle">Cloning a Node Scaling Policy</h4><ol id="cce_10_0063__ol1283103252519"><li id="cce_10_0063__li20680159143911"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li1068085914390"><span>In the navigation pane, choose <strong id="cce_10_0063__b182079494212"><span id="cce_10_0063__text15369102114247">Policies</span></strong>. On the page displayed, click the <strong id="cce_10_0063__b1620711418426">Node Scaling Policies</strong> tab, locate the row containing the target policy, and choose <strong id="cce_10_0063__b1020734124213">More</strong> > <strong id="cce_10_0063__b620724164218">Clone</strong> in the <strong id="cce_10_0063__b82081045425">Operation</strong> column.</span></li><li id="cce_10_0063__li128363212514"><span>On the <span class="uicontrol" id="cce_10_0063__uicontrol162071440144911"><b>Clone Node Scaling Policy</b></span> page displayed, certain parameters have been cloned. Add or modify other policy parameters based on service requirements.</span></li><li id="cce_10_0063__li383732172512"><span>Click <strong id="cce_10_0063__b76092016183">OK</strong>.</span></li></ol>
</div>
<div class="section" id="cce_10_0063__section4771832152513"><h4 class="sectiontitle">Enabling or Disabling a Node Scaling Policy</h4><ol id="cce_10_0063__ol0843321258"><li id="cce_10_0063__li1221435414019"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li4214105494011"><span>In the navigation pane, choose <strong id="cce_10_0063__b4140849438"><span id="cce_10_0063__text1726717273247">Policies</span></strong>. On the page displayed, click the <strong id="cce_10_0063__b1814110454317">Node Scaling Policies</strong> tab, locate the row containing the target policy, and click <strong id="cce_10_0063__b0778424161820">Disable</strong> in the <strong id="cce_10_0063__b1977852441812">Operation</strong> column. If the policy is in the disabled state, click <span class="uicontrol" id="cce_10_0063__uicontrol177902431813"><b>Enable</b></span> in the <strong id="cce_10_0063__b47795246181">Operation</strong> column.</span></li><li id="cce_10_0063__li78473252510"><span>In the dialog box displayed, confirm whether to disable or enable the node policy.</span></li></ol>
</div>
</div>
<div>
|
||||
|
@ -22,7 +22,7 @@
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_10_0193.html">Volcano Scheduler</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_10_0127.html">CCE Container Storage (FlexVolume, Discarded)</a></strong><br>
|
||||
<li class="ulchildlink"><strong><a href="cce_10_0127.html">FlexVolume (Discarded)</a></strong><br>
|
||||
</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
@ -4,10 +4,10 @@
|
||||
<div id="body1529577481025"><div class="section" id="cce_10_0066__section25311744154917"><h4 class="sectiontitle">Introduction</h4><p id="cce_10_0066__p728554610430">Everest is a cloud native container storage system, which enables clusters of Kubernetes v1.15.6 or later to access cloud storage services through the Container Storage Interface (CSI).</p>
|
||||
<p id="cce_10_0066__p17820349194112"><strong id="cce_10_0066__b14107173616557">Everest is a system resource add-on. It is installed by default when a cluster of Kubernetes v1.15 or later is created.</strong></p>
|
||||
</div>
|
||||
<div class="section" id="cce_10_0066__section202191122814"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0066__ul39998883313"><li id="cce_10_0066__li1347222711015">If your cluster is upgraded from v1.13 to v1.15, <a href="cce_10_0127.html">storage-driver</a> will be replaced by Everest (v1.1.6 or later) for container storage. The takeover does not affect the original storage functions.</li><li id="cce_10_0066__li3787162513612">In version 1.2.0 of the Everest add-on, <strong id="cce_10_0066__b98652269315">key authentication</strong> is optimized when OBS is used. After upgrading Everest from a version earlier than 1.2.0, restart all workloads that use OBS in the cluster. Otherwise, workloads may not be able to use OBS.</li><li id="cce_10_0066__li139991585338">By default, this add-on is installed in <strong id="cce_10_0066__b3236525404">clusters of v1.15 and later</strong>. For clusters of v1.13 and earlier, the <a href="cce_10_0127.html">storage-driver</a> add-on is installed by default.</li></ul>
|
||||
<div class="section" id="cce_10_0066__section202191122814"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0066__ul39998883313"><li id="cce_10_0066__li1347222711015">If your cluster is upgraded from v1.13 to v1.15, <a href="cce_10_0127.html">storage-driver</a> will be replaced by Everest (v1.1.6 or later) for container storage. The takeover does not affect the original storage functions.</li><li id="cce_10_0066__li3787162513612">In version 1.2.0 of the Everest add-on, <strong id="cce_10_0066__b51541345758">key authentication</strong> is optimized when OBS is used. After the Everest add-on is upgraded from a version earlier than 1.2.0, restart all workloads that use OBS in the cluster. Otherwise, workloads may not be able to use OBS.</li><li id="cce_10_0066__li139991585338">By default, this add-on is installed in <strong id="cce_10_0066__b3236525404">clusters of v1.15 and later</strong>. For clusters of v1.13 and earlier, the <a href="cce_10_0127.html">storage-driver</a> add-on is installed by default.</li></ul>
|
||||
</div>
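The constraint above requires restarting every workload that uses OBS after upgrading Everest from a version earlier than 1.2.0. A minimal sketch of such a restart with kubectl, assuming a hypothetical Deployment named <strong>obs-demo</strong> in the <strong>default</strong> namespace (replace both with the workloads in your cluster that mount OBS volumes):

```shell
# Hypothetical names; substitute the Deployments that mount OBS volumes.
ns=default
deploy=obs-demo

# Build the rollout-restart command; run it only after verifying the target.
cmd="kubectl rollout restart deployment/${deploy} -n ${ns}"
echo "${cmd}"
# To execute once verified:
# eval "${cmd}"
```

Repeat for each OBS-backed workload (StatefulSets can be restarted the same way with <strong>statefulset/&lt;name&gt;</strong>).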
|
||||
<div class="section" id="cce_10_0066__section168341157155317"><h4 class="sectiontitle">Installing the Add-on</h4><p id="cce_10_0066__p11695354471">This add-on is installed by default. If it has been uninstalled for some reason, reinstall it by performing the following steps:</p>
|
||||
<ol id="cce_10_0066__ol9183433182510"><li id="cce_10_0066__li13183153352515"><span>Log in to the CCE console and click the cluster name to access the cluster console. Click <strong id="cce_10_0066__b311354871"><span id="cce_10_0066__text77103384818">Add-ons</span></strong> in the navigation pane, locate <strong id="cce_10_0066__b121415449719">CCE Container Storage (Everest)</strong> on the right, and click <strong id="cce_10_0066__b1614214419719">Install</strong>.</span></li><li id="cce_10_0066__li178033014157"><span>On the <strong id="cce_10_0066__b116470600010649">Install Add-on</strong> page, configure the specifications.</span><p>
|
||||
<ol id="cce_10_0066__ol9183433182510"><li id="cce_10_0066__li13183153352515"><span>Log in to the CCE console and click the cluster name to access the cluster console. Click <strong id="cce_10_0066__b112701937115017"><span id="cce_10_0066__text102706378505">Add-ons</span></strong> in the navigation pane, locate <strong id="cce_10_0066__b1527133713504">CCE Container Storage (Everest)</strong> on the right, and click <strong id="cce_10_0066__b10271193714503">Install</strong>.</span></li><li id="cce_10_0066__li178033014157"><span>On the <strong id="cce_10_0066__b116470600010649">Install Add-on</strong> page, configure the specifications.</span><p>
|
||||
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0066__table924319911495" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Everest parameters</caption><thead align="left"><tr id="cce_10_0066__row42442974913"><th align="left" class="cellrowborder" valign="top" width="14.000000000000002%" id="mcps1.3.3.3.2.2.1.2.3.1.1"><p id="cce_10_0066__p17244793496">Parameter</p>
|
||||
</th>
|
||||
<th align="left" class="cellrowborder" valign="top" width="86%" id="mcps1.3.3.3.2.2.1.2.3.1.2"><p id="cce_10_0066__p42441596495">Description</p>
|
||||
@ -16,8 +16,8 @@
|
||||
</thead>
|
||||
<tbody><tr id="cce_10_0066__row83701240105118"><td class="cellrowborder" valign="top" width="14.000000000000002%" headers="mcps1.3.3.3.2.2.1.2.3.1.1 "><p id="cce_10_0066__p3370040165116">Pods</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="86%" headers="mcps1.3.3.3.2.2.1.2.3.1.2 "><p id="cce_10_0066__p19217184416417">Number of instances for the add-on.</p>
|
||||
<p id="cce_10_0066__p3619164011407">High availability is not possible with a single add-on instance. If an error occurs on the node where the add-on instance runs, the add-on will fail.</p>
|
||||
<td class="cellrowborder" valign="top" width="86%" headers="mcps1.3.3.3.2.2.1.2.3.1.2 "><p id="cce_10_0066__p19217184416417">Number of pods for the add-on.</p>
|
||||
<p id="cce_10_0066__p3619164011407">High availability is not possible with a single pod. If an error occurs on the node where the pod runs, the add-on will fail.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="cce_10_0066__row4370840165119"><td class="cellrowborder" valign="top" width="14.000000000000002%" headers="mcps1.3.3.3.2.2.1.2.3.1.1 "><p id="cce_10_0066__p937054045117">Containers</p>
|
||||
@ -197,6 +197,12 @@
|
||||
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.3.2.1.2.3.1.2 "><p id="cce_10_0066__p11326534015">This field is left blank by default. You do not need to configure this parameter.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="cce_10_0066__row26371460231"><td class="cellrowborder" valign="top" width="24%" headers="mcps1.3.3.3.3.2.1.2.3.1.1 "><p id="cce_10_0066__p4637246112320">number_of_reserved_disks</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.3.2.1.2.3.1.2 "><p id="cce_10_0066__p5637546162319">Number of disks on the node reserved for custom use. This parameter is supported when the add-on version is 2.3.11 or later.</p>
|
||||
<p id="cce_10_0066__p0882173917168">Assume that a maximum of 20 EVS disks can be attached to a node, and the value of this parameter is set to <strong id="cce_10_0066__b74412434393">6</strong>. Then 14 (20-6) disks can be attached to this node when the system schedules the EVS disk attachment workloads. The reserved six disks include one system disk and one data disk that has been attached to the node. You can attach four EVS disks to this node as additional data disks or raw disks for a local storage pool.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="cce_10_0066__row153261731005"><td class="cellrowborder" valign="top" width="24%" headers="mcps1.3.3.3.3.2.1.2.3.1.1 "><p id="cce_10_0066__p163261731501">over_subscription</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.3.2.1.2.3.1.2 "><p id="cce_10_0066__p23261634014">Overcommitment ratio of the local storage pool (<strong id="cce_10_0066__b128402321515">local_storage</strong>). The default value is <strong id="cce_10_0066__b1321346135111">80</strong>. If the size of the local storage pool is 100 GB, it can be overcommitted to 180 GB.</p>
|
||||
@ -212,7 +218,7 @@
|
||||
</div>
|
||||
<div class="note" id="cce_10_0066__note153262031605"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><div class="p" id="cce_10_0066__p13326143607">In Everest 1.2.26 or later, the performance of attaching a large number of EVS volumes has been optimized. The following parameters can be configured:<ul id="cce_10_0066__ul43268320018"><li id="cce_10_0066__li173261638013">csi_attacher_worker_threads</li><li id="cce_10_0066__li1032623902">csi_attacher_detach_worker_threads</li><li id="cce_10_0066__li19326438013">volume_attaching_flow_ctrl</li></ul>
|
||||
</div>
|
||||
<p id="cce_10_0066__p113261314019">The preceding parameters are associated with each other and are constrained by the underlying storage resources in the region where the cluster is located. To attach a large number of volumes (more than 500 EVS volumes per minute), contact customer service and configure the parameters under their guidance to prevent the Everest add-on from running abnormally due to improper parameter settings.</p>
|
||||
<p id="cce_10_0066__p113261314019">The preceding parameters are associated with each other and are constrained by the underlying storage resources in the region where the cluster is located. To attach a large number of volumes (more than 500 EVS volumes per minute), contact the administrator and configure the parameters under their guidance to prevent the Everest add-on from running abnormally due to improper parameter settings.</p>
|
||||
</div></div>
|
||||
</p></li><li id="cce_10_0066__li155851217011"><span>Configure scheduling policies for the add-on.</span><p><div class="note" id="cce_10_0066__cce_10_0129_note32098410561"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0066__cce_10_0129_ul220911419567"><li id="cce_10_0066__cce_10_0129_li152095435618">Scheduling policies do not take effect on add-on instances of the DaemonSet type.</li><li id="cce_10_0066__cce_10_0129_li1720914445612">When configuring multi-AZ deployment or node affinity, ensure that there are nodes meeting the scheduling policy and that resources are sufficient in the cluster. Otherwise, the add-on cannot run.</li></ul>
|
||||
</div></div>
|
||||
@ -225,12 +231,12 @@
|
||||
</thead>
|
||||
<tbody><tr id="cce_10_0066__cce_10_0129_row162102049564"><td class="cellrowborder" valign="top" width="24%" headers="mcps1.3.3.3.4.2.2.2.3.1.1 "><p id="cce_10_0066__cce_10_0129_p421019416569">Multi AZ</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.4.2.2.2.3.1.2 "><ul id="cce_10_0066__cce_10_0129_ul122101425619"><li id="cce_10_0066__cce_10_0129_li142101342560"><strong id="cce_10_0066__cce_10_0129_b6395193820145">Preferred</strong>: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ.</li><li id="cce_10_0066__cce_10_0129_li52682031184214"><strong id="cce_10_0066__cce_10_0129_b1516164563318">Equivalent mode</strong>: Deployment pods of the add-on are evenly scheduled to the nodes in the cluster in each AZ. If a new AZ is added, you are advised to increase add-on pods for cross-AZ HA deployment. With the Equivalent multi-AZ deployment, the difference between the number of add-on pods in different AZs will be less than or equal to 1. If resources in one of the AZs are insufficient, pods cannot be scheduled to that AZ.</li><li id="cce_10_0066__cce_10_0129_li3210440562"><strong id="cce_10_0066__cce_10_0129_b4304164818353">Required</strong>: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run.</li></ul>
|
||||
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.4.2.2.2.3.1.2 "><ul id="cce_10_0066__cce_10_0129_ul122101425619"><li id="cce_10_0066__cce_10_0129_li142101342560"><strong id="cce_10_0066__cce_10_0129_b6395193820145">Preferred</strong>: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ.</li><li id="cce_10_0066__cce_10_0129_li52682031184214"><strong id="cce_10_0066__cce_10_0129_b8203192017422">Equivalent mode</strong>: Deployment pods of the add-on are evenly scheduled to the nodes in the cluster in each AZ. If a new AZ is added, you are advised to increase add-on pods for cross-AZ HA deployment. With the Equivalent multi-AZ deployment, the difference between the number of add-on pods in different AZs will be less than or equal to 1. If resources in one of the AZs are insufficient, pods cannot be scheduled to that AZ.</li><li id="cce_10_0066__cce_10_0129_li3210440562"><strong id="cce_10_0066__cce_10_0129_b105801282497">Required</strong>: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run.</li></ul>
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="cce_10_0066__cce_10_0129_row1121010416566"><td class="cellrowborder" valign="top" width="24%" headers="mcps1.3.3.3.4.2.2.2.3.1.1 "><p id="cce_10_0066__cce_10_0129_p12210114165612">Node Affinity</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.4.2.2.2.3.1.2 "><ul id="cce_10_0066__cce_10_0129_ul1621054145617"><li id="cce_10_0066__cce_10_0129_li1721017413562"><strong id="cce_10_0066__cce_10_0129_b2074619819545">Incompatibility</strong>: Node affinity is disabled for the add-on.</li><li id="cce_10_0066__cce_10_0129_li52109417563"><strong id="cce_10_0066__cce_10_0129_b7658101316551">Node Affinity</strong>: Specify the nodes where the add-on is deployed. If you do not specify the nodes, the add-on will be randomly scheduled based on the default cluster scheduling policy.</li><li id="cce_10_0066__cce_10_0129_li1421015415561"><strong id="cce_10_0066__cce_10_0129_b98581358205610">Specified Node Pool Scheduling</strong>: Specify the node pool where the add-on is deployed. If you do not specify the node pool, the add-on will be randomly scheduled based on the default cluster scheduling policy.</li><li id="cce_10_0066__cce_10_0129_li92101542568"><strong id="cce_10_0066__cce_10_0129_b634615619572">Custom Policies</strong>: Enter the labels of the nodes where the add-on is to be deployed for more flexible scheduling policies. If you do not specify node labels, the add-on will be randomly scheduled based on the default cluster scheduling policy.<p id="cce_10_0066__cce_10_0129_p19210104145617">If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on cannot run.</p>
|
||||
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.4.2.2.2.3.1.2 "><ul id="cce_10_0066__cce_10_0129_ul1621054145617"><li id="cce_10_0066__cce_10_0129_li1721017413562"><strong id="cce_10_0066__cce_10_0129_b2074619819545">Not configured</strong>: Node affinity is disabled for the add-on.</li><li id="cce_10_0066__cce_10_0129_li52109417563"><strong id="cce_10_0066__cce_10_0129_b7658101316551">Node Affinity</strong>: Specify the nodes where the add-on is deployed. If you do not specify the nodes, the add-on will be randomly scheduled based on the default cluster scheduling policy.</li><li id="cce_10_0066__cce_10_0129_li1421015415561"><strong id="cce_10_0066__cce_10_0129_b98581358205610">Specified Node Pool Scheduling</strong>: Specify the node pool where the add-on is deployed. If you do not specify the node pool, the add-on will be randomly scheduled based on the default cluster scheduling policy.</li><li id="cce_10_0066__cce_10_0129_li92101542568"><strong id="cce_10_0066__cce_10_0129_b634615619572">Custom Policies</strong>: Enter the labels of the nodes where the add-on is to be deployed for more flexible scheduling policies. If you do not specify node labels, the add-on will be randomly scheduled based on the default cluster scheduling policy.<p id="cce_10_0066__cce_10_0129_p19210104145617">If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on cannot run.</p>
|
||||
</li></ul>
|
||||
</td>
|
||||
</tr>
|
||||
@ -247,7 +253,7 @@
|
||||
</p></li><li id="cce_10_0066__li22191112338"><span>Click <span class="uicontrol" id="cce_10_0066__uicontrol215115115539"><b>Install</b></span>.</span></li></ol>
|
||||
</div>
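The <strong>number_of_reserved_disks</strong> and <strong>over_subscription</strong> parameters described above reduce to simple arithmetic. A sketch using the figures from the parameter table (a node supporting at most 20 EVS disks with 6 reserved, and an 80% overcommitment ratio on a 100 GB local storage pool):

```shell
# number_of_reserved_disks: disks left over for EVS volume attachment
max_disks=20        # maximum EVS disks attachable to the node
reserved=6          # number_of_reserved_disks (system disk, data disk, custom use)
attachable=$((max_disks - reserved))
echo "${attachable}"     # 14 disks available when scheduling EVS workloads

# over_subscription: effective capacity of the local_storage pool
pool_gb=100         # local storage pool size in GB
ratio=80            # over_subscription percentage (default 80)
effective=$((pool_gb + pool_gb * ratio / 100))
echo "${effective}"      # pool can be overcommitted to 180 GB
```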
|
||||
<div class="section" id="cce_10_0066__section0377457163618"><a name="cce_10_0066__section0377457163618"></a><a name="section0377457163618"></a><h4 class="sectiontitle">Components</h4>
|
||||
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0066__table1965341035819" frame="border" border="1" rules="all"><caption><b>Table 5 </b>Everest components</caption><thead align="left"><tr id="cce_10_0066__row1565319102582"><th align="left" class="cellrowborder" valign="top" width="16.831683168316832%" id="mcps1.3.4.2.2.4.1.1"><p id="cce_10_0066__p14653141018584">Component</p>
|
||||
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0066__table1965341035819" frame="border" border="1" rules="all"><caption><b>Table 5 </b>Add-on components</caption><thead align="left"><tr id="cce_10_0066__row1565319102582"><th align="left" class="cellrowborder" valign="top" width="16.831683168316832%" id="mcps1.3.4.2.2.4.1.1"><p id="cce_10_0066__p14653141018584">Component</p>
|
||||
</th>
|
||||
<th align="left" class="cellrowborder" valign="top" width="65.34653465346535%" id="mcps1.3.4.2.2.4.1.2"><p id="cce_10_0066__p065391025820">Description</p>
|
||||
</th>
|
||||
|
@ -1,9 +1,11 @@
|
||||
<a name="cce_10_0068"></a><a name="cce_10_0068"></a>
|
||||
|
||||
<h1 class="topictitle1">Kubernetes Release Notes</h1>
|
||||
<h1 class="topictitle1">Kubernetes Version Release Notes</h1>
|
||||
<div id="body8662426"></div>
|
||||
<div>
|
||||
<ul class="ullinks">
|
||||
<li class="ulchildlink"><strong><a href="cce_bulletin_0068.html">Kubernetes 1.28 Release Notes</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_bulletin_0059.html">Kubernetes 1.27 Release Notes</a></strong><br>
|
||||
</li>
|
||||
<li class="ulchildlink"><strong><a href="cce_bulletin_0058.html">Kubernetes 1.25 Release Notes</a></strong><br>
|
||||
|
@ -88,7 +88,7 @@
|
||||
</tr>
|
||||
<tr id="cce_10_0081__row18431117142713"><td class="cellrowborder" valign="top" width="21.942194219421943%" headers="mcps1.3.5.2.1.4.1.1 "><p id="cce_10_0081__p1657922832717">Changing node pool configurations</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p3579182816279">You can modify the node pool name, node quantity, Kubernetes labels (and their quantity), and taints and adjust the disk, OS, and container engine configurations of the node pool.</p>
|
||||
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p3579182816279">You can modify the node pool name, node quantity, Kubernetes labels (and their quantity), resource tags, and taints.</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><p id="cce_10_0081__p857917281274">The deleted or added Kubernetes labels and taints (as well as their quantity) will apply to all nodes in the node pool, which may cause pod re-scheduling. Therefore, exercise caution when performing this operation.</p>
|
||||
</td>
|
||||
@ -100,7 +100,7 @@
|
||||
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><p id="cce_10_0081__p8580132811270">Nodes in the default node pool cannot be migrated to other node pools, and nodes in a user-created node pool cannot be migrated to other user-created node pools.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="cce_10_0081__row425414163469"><td class="cellrowborder" valign="top" width="21.942194219421943%" headers="mcps1.3.5.2.1.4.1.1 "><p id="cce_10_0081__p7254616174618">Cloning a node pool</p>
|
||||
<tr id="cce_10_0081__row425414163469"><td class="cellrowborder" valign="top" width="21.942194219421943%" headers="mcps1.3.5.2.1.4.1.1 "><p id="cce_10_0081__p7254616174618">Copying a node pool</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p1025414163462">You can copy the configuration of an existing node pool to create a new node pool.</p>
|
||||
</td>
|
||||
|
@ -1,10 +1,10 @@
|
||||
<a name="cce_10_0083"></a><a name="cce_10_0083"></a>
|
||||
|
||||
<h1 class="topictitle1">Managing Workload Scaling Policies</h1>
|
||||
<div id="body1508729244098"><div class="section" id="cce_10_0083__section11873141710246"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0083__p799618243249">After an HPA policy is created, you can update, clone, edit, and delete the policy, as well as edit the YAML file.</p>
|
||||
<div id="body1508729244098"><div class="section" id="cce_10_0083__section11873141710246"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0083__p799618243249">After an HPA policy is created, you can update and delete the policy, as well as edit the YAML file.</p>
|
||||
</div>
|
||||
<div class="section" id="cce_10_0083__section14993443181414"><h4 class="sectiontitle">Checking an HPA Policy</h4><p id="cce_10_0083__p713741135215">You can view the rules, status, and events of an HPA policy and handle exceptions based on the error information displayed.</p>
|
||||
<ol id="cce_10_0083__ol17409123885219"><li id="cce_10_0083__li754610559213"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0083__li4409153817525"><span>In the navigation pane, choose <strong id="cce_10_0083__b9595121512611"><span id="cce_10_0083__text67571453104013">Policies</span></strong>. On the page displayed, click the <span class="uicontrol" id="cce_10_0083__uicontrol124101738135219"><b>HPA Policies</b></span> tab and then <span><img id="cce_10_0083__image1569143785619" src="en-us_image_0000001797871061.png"></span> next to the target HPA policy.</span></li><li id="cce_10_0083__li641003813527"><span>In the expanded area, choose <strong id="cce_10_0083__b486014516522">View Events</strong> in the <strong id="cce_10_0083__b77159572522">Operation</strong> column. If the policy malfunctions, locate and rectify the fault based on the error message displayed on the page.</span><p><div class="note" id="cce_10_0083__note13404926203311"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0083__p1793618441931">You can also view the created HPA policy on the workload details page.</p>
|
||||
<ol id="cce_10_0083__ol17409123885219"><li id="cce_10_0083__li754610559213"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0083__li4409153817525"><span>In the navigation pane, choose <strong id="cce_10_0083__b9595121512611"><span id="cce_10_0083__text67571453104013">Policies</span></strong>. On the page displayed, click the <span class="uicontrol" id="cce_10_0083__uicontrol124101738135219"><b>HPA Policies</b></span> tab and then <span><img id="cce_10_0083__image1569143785619" src="en-us_image_0000001851745612.png"></span> next to the target HPA policy.</span></li><li id="cce_10_0083__li641003813527"><span>In the expanded area, choose <strong id="cce_10_0083__b486014516522">View Events</strong> in the <strong id="cce_10_0083__b77159572522">Operation</strong> column. If the policy malfunctions, locate and rectify the fault based on the error message displayed on the page.</span><p><div class="note" id="cce_10_0083__note13404926203311"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0083__p1793618441931">You can also view the created HPA policy on the workload details page.</p>
|
||||
<ol type="a" id="cce_10_0083__ol1691347738"><li id="cce_10_0083__li5468556932">Log in to the CCE console and click the cluster name to access the cluster console.</li><li id="cce_10_0083__li87313521749">In the navigation pane, choose <strong id="cce_10_0083__b01748420311">Workloads</strong>. Click the workload name to view its details.</li><li id="cce_10_0083__li1769110474318">On the workload details page, switch to the <strong id="cce_10_0083__b3716156354">Auto Scaling</strong> tab page to view the HPA policies. You can also view the scaling policies you configured on the <strong id="cce_10_0083__b1389131612229"><span id="cce_10_0083__text85771564218">Policies</span></strong> page.</li></ol>
|
||||
</div></div>
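The console steps above for checking an HPA policy have a kubectl equivalent, since an HPA policy is a standard Kubernetes HorizontalPodAutoscaler object. A sketch, assuming a hypothetical policy named <strong>hpa-example</strong> in the <strong>default</strong> namespace (replace both with your own):

```shell
# Hypothetical HPA name and namespace; adjust to your cluster.
ns=default
hpa=hpa-example

# describe shows the policy's rules, current status, and recent events.
check="kubectl describe hpa ${hpa} -n ${ns}"
echo "${check}"
# Run it against your cluster once the names are confirmed:
# eval "${check}"
```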
|
||||
|
||||
|
@ -1,11 +1,69 @@
|
||||
<a name="cce_10_0084"></a><a name="cce_10_0084"></a>
|
||||
|
||||
<h1 class="topictitle1">Enabling ICMP Security Group Rules</h1>
|
||||
<div id="body1530866171131"><div class="section" id="cce_10_0084__section106079439418"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0084__p34679509418">If a workload uses UDP for both load balancing and health check, enable ICMP security group rules for the backend servers.</p>
|
||||
<div id="body1530866171131"><div class="section" id="cce_10_0084__section106079439418"><h4 class="sectiontitle">Application Scenarios</h4><p id="cce_10_0084__p34679509418">If a workload uses UDP for both load balancing and health check, enable ICMP security group rules for the backend servers.</p>
|
||||
</div>
|
||||
<div class="section" id="cce_10_0084__section865612352391"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0084__ol1999461164212"><li id="cce_10_0084__li1947817217515"><span>Log in to the ECS console, find the ECS corresponding to any node where the workload runs, and click the ECS name. On the displayed ECS details page, record the security group name.</span></li><li id="cce_10_0084__li2114123554110"><span>Log in to the VPC console. In the navigation pane on the left, choose <span class="uicontrol" id="cce_10_0084__uicontrol1587081612913"><b>Access Control > Security Groups</b></span>. In the security group list on the right, click the security group name obtained in step 1.</span></li><li id="cce_10_0084__li201591224113516"><span>On the page displayed, click the <span class="uicontrol" id="cce_10_0084__uicontrol9411262192"><b>Inbound Rules</b></span> tab and click <span class="uicontrol" id="cce_10_0084__uicontrol0982204218191"><b>Add Rule</b></span> to add an inbound rule for ECS. Then, click <span class="uicontrol" id="cce_10_0084__uicontrol28571458151915"><b>OK</b></span>.</span><p><div class="note" id="cce_10_0084__note18685241993"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0084__ul885417541386"><li id="cce_10_0084__li188541540813">You only need to add security group rules to any node where the workload runs.</li><li id="cce_10_0084__li385515419812">The security group must have rules to allow access from the CIDR block 100.125.0.0/16.</li></ul>
|
||||
</div></div>
|
||||
</p></li></ol>
|
||||
<div class="section" id="cce_10_0084__section865612352391"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0084__ol1999461164212"><li id="cce_10_0084__li2114123554110"><span>Log in to the CCE console, choose <span class="uicontrol" id="cce_10_0084__uicontrol16903135110235"><b>Service List</b></span> > <span class="uicontrol" id="cce_10_0084__uicontrol8903205152316"><b>Networking</b></span> > <span class="uicontrol" id="cce_10_0084__uicontrol2903851102314"><b>Virtual Private Cloud</b></span>, and choose <span class="uicontrol" id="cce_10_0084__uicontrol13903195119235"><b>Access Control</b></span> > <span class="uicontrol" id="cce_10_0084__uicontrol1903115192316"><b>Security Groups</b></span> in the navigation pane.</span></li><li id="cce_10_0084__li1211191111308"><span>In the security group list, locate the security group of the cluster. Click the <strong id="cce_10_0084__b104332046247">Inbound Rules</strong> tab page and then <strong id="cce_10_0084__b104331541248">Add Rule</strong>. In the <strong id="cce_10_0084__b143384162410">Add Inbound Rule</strong> dialog box, configure inbound parameters.</span><p>
|
||||
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0084__table14257503611" frame="border" border="1" rules="all"><thead align="left"><tr id="cce_10_0084__row02645133615"><th align="left" class="cellrowborder" valign="top" width="16.189999999999998%" id="mcps1.3.2.2.2.2.1.1.6.1.1"><p id="cce_10_0084__p84201847103620">Cluster Type</p>
|
||||
</th>
|
||||
<th align="left" class="cellrowborder" valign="top" width="12.690000000000001%" id="mcps1.3.2.2.2.2.1.1.6.1.2"><p id="cce_10_0084__p152616516364">ELB Type</p>
|
||||
</th>
|
||||
<th align="left" class="cellrowborder" valign="top" width="29.080000000000002%" id="mcps1.3.2.2.2.2.1.1.6.1.3"><p id="cce_10_0084__p02471639113710">Security Group</p>
|
||||
</th>
|
||||
<th align="left" class="cellrowborder" valign="top" width="14.66%" id="mcps1.3.2.2.2.2.1.1.6.1.4"><p id="cce_10_0084__p976365363811">Protocol & Port</p>
|
||||
</th>
|
||||
<th align="left" class="cellrowborder" valign="top" width="27.38%" id="mcps1.3.2.2.2.2.1.1.6.1.5"><p id="cce_10_0084__p0266553619">Allowed Source CIDR Block</p>
|
||||
</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody><tr id="cce_10_0084__row192719512365"><td class="cellrowborder" rowspan="2" valign="top" width="16.189999999999998%" headers="mcps1.3.2.2.2.2.1.1.6.1.1 "><p id="cce_10_0084__p16421114718364">CCE Standard</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="12.690000000000001%" headers="mcps1.3.2.2.2.2.1.1.6.1.2 "><p id="cce_10_0084__p127356361">Shared</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="29.080000000000002%" headers="mcps1.3.2.2.2.2.1.1.6.1.3 "><p id="cce_10_0084__p15247183923712">Node security group, which is named in the format of "{Cluster name}-cce-node-{Random ID}".</p>
|
||||
<p id="cce_10_0084__p10804654172115">If a custom node security group is bound to the cluster, select the target security group.</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="14.66%" headers="mcps1.3.2.2.2.2.1.1.6.1.4 "><p id="cce_10_0084__p137631953173816">All ICMP ports</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" width="27.38%" headers="mcps1.3.2.2.2.2.1.1.6.1.5 "><p id="cce_10_0084__p3277503613">100.125.0.0/16 for the shared load balancer</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="cce_10_0084__row158382183614"><td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.2.1.1.6.1.1 "><p id="cce_10_0084__p6584521113616">Dedicated</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.2.1.1.6.1.2 "><p id="cce_10_0084__p74112038174015">Node security group, which is named in the format of "{Cluster name}-cce-node-{Random ID}".</p>
|
||||
<p id="cce_10_0084__p121628210220">If a custom node security group is bound to the cluster, select the target security group.</p>
|
||||
</td>
|
||||
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.2.1.1.6.1.3 "><p id="cce_10_0084__p67631553143815">All ICMP ports</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.2.1.1.6.1.4 "><p id="cce_10_0084__p2584621153619">Backend subnet of the load balancer</p>
</td>
</tr>
<tr id="cce_10_0084__row93996213716"><td class="cellrowborder" rowspan="2" valign="top" width="16.189999999999998%" headers="mcps1.3.2.2.2.2.1.1.6.1.1 "><p id="cce_10_0084__p174008214378">CCE Turbo</p>
</td>
<td class="cellrowborder" valign="top" width="12.690000000000001%" headers="mcps1.3.2.2.2.2.1.1.6.1.2 "><p id="cce_10_0084__p141411333710">Shared</p>
</td>
<td class="cellrowborder" valign="top" width="29.080000000000002%" headers="mcps1.3.2.2.2.2.1.1.6.1.3 "><p id="cce_10_0084__p5441926412">Node security group, which is named in the format of "{Cluster name}-cce-node-{Random ID}".</p>
<p id="cce_10_0084__p946116302214">If a custom node security group is bound to the cluster, select the target security group.</p>
</td>
<td class="cellrowborder" valign="top" width="14.66%" headers="mcps1.3.2.2.2.2.1.1.6.1.4 "><p id="cce_10_0084__p171971715164120">All ICMP ports</p>
</td>
<td class="cellrowborder" valign="top" width="27.38%" headers="mcps1.3.2.2.2.2.1.1.6.1.5 "><p id="cce_10_0084__p1340013263717">100.125.0.0/16 for the shared load balancer</p>
</td>
</tr>
<tr id="cce_10_0084__row1723101111371"><td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.2.1.1.6.1.1 "><p id="cce_10_0084__p741517133374">Dedicated</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.2.1.1.6.1.2 "><p id="cce_10_0084__p5323165512407">ENI security group, which is named in the format of "{Cluster name}-cce-eni-{Random ID}".</p>
<p id="cce_10_0084__p25894256193">If a custom ENI security group is bound to the cluster, select the target security group.</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.2.1.1.6.1.3 "><p id="cce_10_0084__p121271511415">All ICMP ports</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.2.1.1.6.1.4 "><p id="cce_10_0084__p1749419109414">Backend subnet of the load balancer</p>
</td>
</tr>
</tbody>
</table>
</div>
</p></li><li id="cce_10_0084__li10545192216388"><span>Click <strong id="cce_10_0084__b195521425432">OK</strong>.</span></li></ol>
</div>
</div>
<div>
@ -3,17 +3,19 @@
<h1 class="topictitle1">Overview</h1>
<div id="body0000001159453456"><div class="section" id="cce_10_0094__section17868123416122"><h4 class="sectiontitle">Why We Need Ingresses</h4><p id="cce_10_0094__p19813582419">A Service is generally used to forward access requests based on TCP and UDP and provide layer-4 load balancing for clusters. However, in actual scenarios, if there is a large number of HTTP/HTTPS access requests on the application layer, the Service cannot meet the forwarding requirements. Therefore, the Kubernetes cluster provides an HTTP-based access mode, ingress.</p>
<p id="cce_10_0094__p168757241679">An ingress is an independent resource in the Kubernetes cluster and defines rules for forwarding external access traffic. As shown in <a href="#cce_10_0094__fig18155819416">Figure 1</a>, you can customize forwarding rules based on domain names and URLs to implement fine-grained distribution of access traffic.</p>
<div class="fignone" id="cce_10_0094__fig18155819416"><a name="cce_10_0094__fig18155819416"></a><a name="fig18155819416"></a><span class="figcap"><b>Figure 1 </b>Ingress diagram</span><br><span><img class="eddx" id="cce_10_0094__image98185817414" src="en-us_image_0000001750791624.png"></span></div>
<div class="fignone" id="cce_10_0094__fig18155819416"><a name="cce_10_0094__fig18155819416"></a><a name="fig18155819416"></a><span class="figcap"><b>Figure 1 </b>Ingress diagram</span><br><span><img class="eddx" id="cce_10_0094__image98185817414" src="en-us_image_0000001851587340.png"></span></div>
<p id="cce_10_0094__p128258846">The following describes the ingress-related definitions:</p>
<ul id="cce_10_0094__ul2875811411"><li id="cce_10_0094__li78145815413">Ingress object: a set of access rules that forward requests to specified Services based on domain names or URLs. It can be added, deleted, modified, and queried by calling APIs.</li><li id="cce_10_0094__li148115817417">Ingress Controller: an executor for request forwarding. It monitors the changes of resource objects such as ingresses, Services, endpoints, secrets (mainly TLS certificates and keys), nodes, and ConfigMaps in real time, parses rules defined by ingresses, and forwards requests to the corresponding backend Services.</li></ul>
<ul id="cce_10_0094__ul2875811411"><li id="cce_10_0094__li78145815413">Ingress object: a set of access rules that forward requests to specified Services based on domain names or URLs. It can be added, deleted, modified, and queried by calling APIs.</li><li id="cce_10_0094__li148115817417">Ingress Controller: an executor for request forwarding. It monitors the changes of resource objects such as ingresses, Services, endpoints, secrets (mainly TLS certificates and keys), nodes, and ConfigMaps in real time, parses rules defined by ingresses, and forwards requests to the target backend Services.</li></ul>
</div>
<div class="section" id="cce_10_0094__section162271821192312"><h4 class="sectiontitle">Working Principle of ELB Ingress Controller</h4><p id="cce_10_0094__p172542048121220">ELB Ingress Controller developed by CCE implements layer-7 network access for the internet and intranet (in the same VPC) based on ELB and distributes access traffic to the corresponding Services using different URLs.</p>
<p id="cce_10_0094__p4254124831218">ELB Ingress Controller is deployed on the master node and bound to the load balancer in the VPC where the cluster resides. Different domain names, ports, and forwarding policies can be configured for the same load balancer (with the same IP address). <a href="#cce_10_0094__fig122542486129">Figure 2</a> shows the working principle of ELB Ingress Controller.</p>
<div class="section" id="cce_10_0094__section162271821192312"><h4 class="sectiontitle">Working Rules of LoadBalancer Ingress Controller</h4><p id="cce_10_0094__p172542048121220">LoadBalancer Ingress Controller developed by CCE implements layer-7 network access for the internet and intranet (in the same VPC) based on ELB and distributes access traffic to the corresponding Services using different URLs.</p>
<p id="cce_10_0094__p4254124831218">LoadBalancer Ingress Controller is deployed on the master node and bound to the load balancer in the VPC where the cluster resides. Different domain names, ports, and forwarding policies can be configured for the same load balancer (with the same IP address). <a href="#cce_10_0094__fig122542486129">Figure 2</a> shows the working rules of LoadBalancer Ingress Controller.</p>
<ol id="cce_10_0094__ol525410483123"><li id="cce_10_0094__li8254184813127">A user creates an ingress object and configures a traffic access rule in the ingress, including the load balancer, URL, SSL, and backend service port.</li><li id="cce_10_0094__li1225474817126">When Ingress Controller detects that the ingress object changes, it reconfigures the listener and backend server route on the ELB side according to the traffic access rule.</li><li id="cce_10_0094__li115615167193">When a user accesses a workload, the traffic is forwarded to the corresponding backend service port based on the forwarding policy configured on ELB, and then forwarded to each associated workload through the Service.</li></ol>
<div class="fignone" id="cce_10_0094__fig122542486129"><a name="cce_10_0094__fig122542486129"></a><a name="fig122542486129"></a><span class="figcap"><b>Figure 2 </b>Working principle of ELB Ingress Controller</span><br><span><img class="eddx" id="cce_10_0094__image725424815120" src="en-us_image_0000001750791628.png"></span></div>
<div class="fignone" id="cce_10_0094__fig122542486129"><a name="cce_10_0094__fig122542486129"></a><a name="fig122542486129"></a><span class="figcap"><b>Figure 2 </b>Working rules of shared LoadBalancer ingresses in CCE standard and Turbo clusters</span><br><span><img class="eddx" id="cce_10_0094__image725424815120" src="en-us_image_0000001851587344.png"></span></div>
<p id="cce_10_0094__p3662933103112">When you use <strong id="cce_10_0094__b61532583920">a dedicated load balancer in a CCE Turbo cluster</strong>, pod IP addresses are allocated from the VPC and the load balancer can directly access the pods. When creating an ingress for external cluster access, you can use ELB to access a ClusterIP Service and use pods as the backend server of the ELB listener. In this way, external traffic can directly access the pods in the cluster without being forwarded by node ports.</p>
<div class="fignone" id="cce_10_0094__fig44531612193618"><span class="figcap"><b>Figure 3 </b>Working rules of passthrough networking for dedicated LoadBalancer ingresses in CCE Turbo clusters</span><br><span><img class="eddx" id="cce_10_0094__image6906154516408" src="en-us_image_0000001897906717.png"></span></div>
</div>
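<p>The working rules above can be sketched as an ingress manifest. This is a minimal illustration rather than an excerpt from this guide: the ingress name, domain, Service name, and load balancer ID are placeholder assumptions, and the <strong>kubernetes.io/elb.*</strong> annotations and <strong>cce</strong> ingress class follow the CCE LoadBalancer ingress convention (verify the exact annotations against your cluster version):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress                        # hypothetical name
  annotations:
    kubernetes.io/elb.id: <load-balancer-id>   # existing ELB in the same VPC as the cluster
    kubernetes.io/elb.port: '80'               # listener port on the load balancer
spec:
  ingressClassName: cce                        # handled by the LoadBalancer Ingress Controller
  rules:
    - host: www.example.com                    # domain-name-based forwarding
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: example-service          # backend Service that receives the traffic
                port:
                  number: 8080
```

<p>When the controller detects this object, it configures a listener on port 80 of the specified load balancer and forwards requests for www.example.com to the backend Service, matching steps 1 to 3 above.</p>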
<div class="section" id="cce_10_0094__section3565202819276"><a name="cce_10_0094__section3565202819276"></a><a name="section3565202819276"></a><h4 class="sectiontitle">Services Supported by Ingresses</h4><div class="p" id="cce_10_0094__p109298589133"><a href="#cce_10_0094__table143264518141">Table 1</a> lists the services supported by ELB Ingresses.
<div class="tablenoborder"><a name="cce_10_0094__table143264518141"></a><a name="table143264518141"></a><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0094__table143264518141" width="100%" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Services supported by ELB Ingresses</caption><thead align="left"><tr id="cce_10_0094__row1132645112145"><th align="left" class="cellrowborder" valign="top" width="15%" id="mcps1.3.3.2.2.2.5.1.1"><p id="cce_10_0094__p33261518148">Cluster Type</p>
<div class="section" id="cce_10_0094__section3565202819276"><a name="cce_10_0094__section3565202819276"></a><a name="section3565202819276"></a><h4 class="sectiontitle">Services Supported by Ingresses</h4><div class="p" id="cce_10_0094__p109298589133"><a href="#cce_10_0094__table143264518141">Table 1</a> lists the services supported by LoadBalancer ingresses.
<div class="tablenoborder"><a name="cce_10_0094__table143264518141"></a><a name="table143264518141"></a><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0094__table143264518141" width="100%" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Services supported by LoadBalancer ingresses</caption><thead align="left"><tr id="cce_10_0094__row1132645112145"><th align="left" class="cellrowborder" valign="top" width="15%" id="mcps1.3.3.2.2.2.5.1.1"><p id="cce_10_0094__p33261518148">Cluster Type</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="22%" id="mcps1.3.3.2.2.2.5.1.2"><p id="cce_10_0094__p15326195191413">ELB Type</p>
</th>
@ -132,7 +132,7 @@
</thead>
<tbody><tr id="cce_10_0105__row04201302279"><td class="cellrowborder" valign="top" width="23%" headers="mcps1.3.4.2.2.2.1.2.3.1.1 "><p id="cce_10_0105__p6420110192718">CLI</p>
</td>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.4.2.2.2.1.2.3.1.2 "><p id="cce_10_0105__p94204010271">Set commands to be executed in the container for pre-stop processing. The command format is <strong id="cce_10_0105__b992317765">Command Args[1] Args[2]...</strong>. <strong id="cce_10_0105__b1616752902">Command</strong> is a system command or a user-defined executable program. If no path is specified, an executable program in the default path will be selected. If multiple commands need to be executed, write the commands into a script for execution.</p>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.4.2.2.2.1.2.3.1.2 "><p id="cce_10_0105__p94204010271">Set commands to be executed in the container for pre-stop processing. The command format is <strong id="cce_10_0105__b1740229594">Command Args[1] Args[2]...</strong>. <strong id="cce_10_0105__b74996436">Command</strong> is a system command or a user-defined executable program. If no path is specified, an executable program in the default path will be selected. If multiple commands need to be executed, write the commands into a script for execution.</p>
<p id="cce_10_0105__p94203082712">Example command:</p>
<pre class="screen" id="cce_10_0105__screen6420190132712">exec:
command:
@ -1,12 +1,12 @@
<a name="cce_10_0107"></a><a name="cce_10_0107"></a>
<h1 class="topictitle1">Connecting to a Cluster Using kubectl</h1>
<div id="body1512462600292"><div class="section" id="cce_10_0107__section14234115144"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0107__p133539491408">This section uses a CCE standard cluster as an example to describe how to connect to a CCE cluster using <span class="keyword" id="cce_10_0107__keyword19467121518447">kubectl</span>.</p>
<div id="body1512462600292"><div class="section" id="cce_10_0107__section14234115144"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0107__p133539491408">This section uses a CCE standard cluster as an example to describe how to access a CCE cluster using <span class="keyword" id="cce_10_0107__keyword19467121518447">kubectl</span>.</p>
</div>
<div class="section" id="cce_10_0107__section17352373317"><h4 class="sectiontitle">Permissions</h4><p id="cce_10_0107__p51211251156">When you access a cluster using kubectl, CCE uses <strong id="cce_10_0107__b9161182320391"><span class="keyword" id="cce_10_0107__keyword1354319447418">kubeconfig</span>.json</strong> generated on the cluster for authentication. This file contains user information, based on which CCE determines which Kubernetes resources can be accessed by kubectl. The permissions recorded in a <strong id="cce_10_0107__b16295666413">kubeconfig.json</strong> file vary from user to user.</p>
<div class="section" id="cce_10_0107__section17352373317"><h4 class="sectiontitle">Permissions</h4><p id="cce_10_0107__p51211251156">When you access a cluster using kubectl, CCE uses <strong id="cce_10_0107__b204601556154217">kubeconfig</strong> generated on the cluster for authentication. This file contains user information, based on which CCE determines which Kubernetes resources can be accessed by kubectl. The permissions recorded in a <strong id="cce_10_0107__b16295666413">kubeconfig</strong> file vary from user to user.</p>
<p id="cce_10_0107__p142391810113">For details about user permissions, see <a href="cce_10_0187.html#cce_10_0187__section1464135853519">Cluster Permissions (IAM-based) and Namespace Permissions (Kubernetes RBAC-based)</a>.</p>
</div>
<div class="section" id="cce_10_0107__section37321625113110"><a name="cce_10_0107__section37321625113110"></a><a name="section37321625113110"></a><h4 class="sectiontitle">Using kubectl</h4><p id="cce_10_0107__p764905418355">To connect to a Kubernetes cluster from a PC, you can use kubectl, a Kubernetes command line tool. You can log in to the CCE console and click the name of the target cluster to access the cluster console. On the <strong id="cce_10_0107__b127302345555">Overview</strong> page, view the access address and kubectl connection procedure.</p>
<div class="section" id="cce_10_0107__section37321625113110"><a name="cce_10_0107__section37321625113110"></a><a name="section37321625113110"></a><h4 class="sectiontitle">Using kubectl</h4><p id="cce_10_0107__p764905418355">To connect to a Kubernetes cluster from a PC, you can use kubectl, a Kubernetes command line tool. You can log in to the CCE console and click the name of the target cluster to access the cluster console. On the <strong id="cce_10_0107__b127302345555"><span id="cce_10_0107__text869825054114"><strong>Overview</strong></span></strong> page, view the access address and kubectl connection procedure.</p>
<div class="p" id="cce_10_0107__p7805114919351">CCE allows you to access a cluster through a private network or a public network.<ul id="cce_10_0107__ul126071124175518"><li id="cce_10_0107__li144192116548"><span class="keyword" id="cce_10_0107__keyword13441034142917">Intranet access</span>: The client that accesses the cluster must be in the same VPC as the cluster.</li><li id="cce_10_0107__li1460752419555">Public access: The client that accesses the cluster must be able to access public networks and the cluster has been bound with a public network IP.<div class="notice" id="cce_10_0107__note2967194410365"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_10_0107__p19671244103610">To bind an EIP to the cluster, go to the <strong id="cce_10_0107__b1061217302"><span id="cce_10_0107__text6807412192418">Overview</span></strong> page and click <strong id="cce_10_0107__b021910485396">Bind</strong> next to <strong id="cce_10_0107__b132197480394">EIP</strong> in the <strong id="cce_10_0107__b14219164815396">Connection Information</strong> area. In a cluster with an EIP bound, kube-apiserver will be exposed to the Internet and may be attacked. To solve this problem, you can configure Advanced Anti-DDoS for the EIP of the node on which kube-apiserver runs.</p>
</div></div>
</li></ul>
@ -20,12 +20,12 @@ curl -LO https://dl.k8s.io/release/<em id="cce_10_0107__i13511182443516">{v1.25.
</li><li id="cce_10_0107__li1216814211286">Install kubectl.<pre class="screen" id="cce_10_0107__screen16892115815271">chmod +x kubectl
mv -f kubectl /usr/local/bin</pre>
</li></ol>
</p></li><li id="cce_10_0107__li34691156151712"><a name="cce_10_0107__li34691156151712"></a><a name="li34691156151712"></a><span><strong id="cce_10_0107__b196211619192411">Obtain the kubectl configuration file (kubeconfig).</strong></span><p><p id="cce_10_0107__p1295818109256">On the <strong id="cce_10_0107__b124401123203115">Overview</strong> page, locate the <strong id="cce_10_0107__b450013549611">Connection Info</strong> area, click <strong id="cce_10_0107__b136512181078">Configure</strong> next to <strong id="cce_10_0107__b177317221173">kubectl</strong>. On the window displayed, download the configuration file.</p>
<div class="note" id="cce_10_0107__note191638104210"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0107__ul795610485546"><li id="cce_10_0107__li495634817549">The kubectl configuration file <strong id="cce_10_0107__b11741123981418">kubeconfig.json</strong> is used for cluster authentication. If the file is leaked, your clusters may be attacked.</li><li id="cce_10_0107__li16956194817544">The Kubernetes permissions assigned by the configuration file downloaded by IAM users are the same as those assigned to the IAM users on the CCE console.</li><li id="cce_10_0107__li1537643019239">If the KUBECONFIG environment variable is configured in the Linux OS, kubectl preferentially loads the KUBECONFIG environment variable instead of <strong id="cce_10_0107__b5859154717398">$home/.kube/config</strong>.</li></ul>
</p></li><li id="cce_10_0107__li34691156151712"><a name="cce_10_0107__li34691156151712"></a><a name="li34691156151712"></a><span><strong id="cce_10_0107__b196211619192411">Obtain the kubectl configuration file (kubeconfig).</strong></span><p><p id="cce_10_0107__p1295818109256">On the <strong id="cce_10_0107__b124401123203115"><span id="cce_10_0107__text10158103924216"><strong>Overview</strong></span></strong> page, locate the <strong id="cce_10_0107__b450013549611">Connection Info</strong> area, click <strong id="cce_10_0107__b136512181078">Configure</strong> next to <strong id="cce_10_0107__b177317221173">kubectl</strong>. On the page displayed, download the configuration file.</p>
<div class="note" id="cce_10_0107__note191638104210"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0107__ul795610485546"><li id="cce_10_0107__li495634817549">The kubectl configuration file <strong id="cce_10_0107__b11741123981418">kubeconfig</strong> is used for cluster authentication. If the file is leaked, your clusters may be attacked.</li><li id="cce_10_0107__li16956194817544">The Kubernetes permissions assigned by the configuration file downloaded by IAM users are the same as those assigned to the IAM users on the CCE console.</li><li id="cce_10_0107__li1537643019239">If the KUBECONFIG environment variable is configured in the Linux OS, kubectl preferentially loads the KUBECONFIG environment variable instead of <strong id="cce_10_0107__b5859154717398">$home/.kube/config</strong>.</li></ul>
</div></div>
</p></li><li id="cce_10_0107__li25451059122317"><a name="cce_10_0107__li25451059122317"></a><a name="li25451059122317"></a><span>Configure kubectl.</span><p><div class="p" id="cce_10_0107__p109826082413">Configure kubectl (A Linux OS is used).<ol type="a" id="cce_10_0107__ol2291154772010"><li id="cce_10_0107__li102911547102012">Log in to your client and copy the kubeconfig.json configuration file downloaded in <a href="#cce_10_0107__li34691156151712">2</a> to the <strong id="cce_10_0107__b175828331240">/home</strong> directory on your client.</li><li id="cce_10_0107__li114766383477">Configure the kubectl authentication file.<pre class="screen" id="cce_10_0107__screen849155210477">cd /home
</p></li><li id="cce_10_0107__li25451059122317"><a name="cce_10_0107__li25451059122317"></a><a name="li25451059122317"></a><span>Configure kubectl.</span><p><div class="p" id="cce_10_0107__p109826082413">Configure kubectl (A Linux OS is used).<ol type="a" id="cce_10_0107__ol2291154772010"><li id="cce_10_0107__li102911547102012">Log in to your client and copy the <strong id="cce_10_0107__b156991854125914">kubeconfig.yaml</strong> file downloaded in <a href="#cce_10_0107__li34691156151712">2</a> to the <strong id="cce_10_0107__b175828331240">/home</strong> directory on your client.</li><li id="cce_10_0107__li114766383477">Configure the kubectl authentication file.<pre class="screen" id="cce_10_0107__screen849155210477">cd /home
mkdir -p $HOME/.kube
mv -f kubeconfig.json $HOME/.kube/config</pre>
mv -f kubeconfig.yaml $HOME/.kube/config</pre>
</li><li id="cce_10_0107__li1480512253214">Switch the kubectl access mode based on service scenarios.<ul id="cce_10_0107__ul91037595229"><li id="cce_10_0107__li5916145112313">Run this command to enable intra-VPC access:<pre class="screen" id="cce_10_0107__screen279213242247">kubectl config use-context internal</pre>
</li><li id="cce_10_0107__li113114274233">Run this command to enable public access (EIP required):<pre class="screen" id="cce_10_0107__screen965013316242">kubectl config use-context external</pre>
</li><li id="cce_10_0107__li104133512481">Run this command to enable public access and two-way authentication (EIP required):<pre class="screen" id="cce_10_0107__screen61712126498">kubectl config use-context externalTLSVerify</pre>
@ -36,7 +36,7 @@ mv -f kubeconfig.json $HOME/.kube/config</pre>
</p></li></ol>
</div>
<div class="section" id="cce_10_0107__section1559919152711"><a name="cce_10_0107__section1559919152711"></a><a name="section1559919152711"></a><h4 class="sectiontitle"><span class="keyword" id="cce_10_0107__keyword311020376452">Two-Way Authentication for Domain Names</span></h4><p id="cce_10_0107__p138948491274">CCE supports two-way authentication for domain names.</p>
<ul id="cce_10_0107__ul88981331482"><li id="cce_10_0107__li1705116151915">After an EIP is bound to an API Server, two-way domain name authentication will be disabled by default if kubectl is used to connect to the cluster. You can run <strong id="cce_10_0107__b198732542582">kubectl config use-context externalTLSVerify</strong> to switch to the externalTLSVerify context to enable the two-way domain name authentication.</li><li id="cce_10_0107__li1807459174818">When an EIP is bound to or unbound from a cluster, or a custom domain name is configured or updated, the cluster server certificate will be added the latest cluster access address (including the EIP bound to the cluster and all custom domain names configured for the cluster).</li><li id="cce_10_0107__li17898153310483">Asynchronous cluster synchronization takes about 5 to 10 minutes. You can view the synchronization result in <strong id="cce_10_0107__b196404619200">Synchronize Certificate</strong> in <strong id="cce_10_0107__b364620682012">Operation Records</strong>.</li><li id="cce_10_0107__li614337712">For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, bind the EIP again and download <strong id="cce_10_0107__b121611451417">kubeconfig.json</strong> again.</li><li id="cce_10_0107__li5950658165414">If the domain name two-way authentication is not supported, <strong id="cce_10_0107__b56091346184712">kubeconfig.json</strong> contains the <strong id="cce_10_0107__b1961534614476">"insecure-skip-tls-verify": true</strong> field, as shown in <a href="#cce_10_0107__fig1941342411">Figure 1</a>. 
To use two-way authentication, you can download the <strong id="cce_10_0107__b549311585216">kubeconfig.json</strong> file again and enable two-way authentication for the domain names.<div class="fignone" id="cce_10_0107__fig1941342411"><a name="cce_10_0107__fig1941342411"></a><a name="fig1941342411"></a><span class="figcap"><b>Figure 1 </b>Two-way authentication disabled for domain names</span><br><span><img id="cce_10_0107__image3414621613" src="en-us_image_0000001750791664.png"></span></div>
<ul id="cce_10_0107__ul88981331482"><li id="cce_10_0107__li1705116151915">After an EIP is bound to an API Server, two-way domain name authentication is disabled by default if kubectl is used to access the cluster. You can run <strong id="cce_10_0107__b198732542582">kubectl config use-context externalTLSVerify</strong> to enable two-way domain name authentication.</li><li id="cce_10_0107__li1807459174818">When an EIP is bound to or unbound from a cluster, or a custom domain name is configured or updated, the latest cluster access address (including the EIP bound to the cluster and all custom domain names configured for the cluster) will be added to the cluster server certificate.</li><li id="cce_10_0107__li17898153310483">Asynchronous cluster synchronization takes about 5 to 10 minutes. You can view the synchronization result in <strong id="cce_10_0107__b196404619200">Synchronize Certificate</strong> in <strong id="cce_10_0107__b364620682012">Operation Records</strong>.</li><li id="cce_10_0107__li614337712">For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, bind the EIP again and download <strong id="cce_10_0107__b121611451417">kubeconfig.yaml</strong> again.</li><li id="cce_10_0107__li5950658165414">If two-way domain name authentication is not supported, <strong id="cce_10_0107__b56091346184712">kubeconfig.yaml</strong> contains the <strong id="cce_10_0107__b1961534614476">"insecure-skip-tls-verify": true</strong> field, as shown in <a href="#cce_10_0107__fig1941342411">Figure 1</a>. To use two-way authentication, download the <strong id="cce_10_0107__b549311585216">kubeconfig.yaml</strong> file again and enable two-way authentication for the domain names.<div class="fignone" id="cce_10_0107__fig1941342411"><a name="cce_10_0107__fig1941342411"></a><a name="fig1941342411"></a><span class="figcap"><b>Figure 1 </b>Two-way authentication disabled for domain names</span><br><span><img id="cce_10_0107__image3414621613" src="en-us_image_0000001851587804.png"></span></div>
</li></ul>
</div>
<div class="section" id="cce_10_0107__section1628510591883"><h4 class="sectiontitle">FAQs</h4><ul id="cce_10_0107__ul1374831051115"><li id="cce_10_0107__li4748810121112"><strong id="cce_10_0107__b456677171119"><span class="keyword" id="cce_10_0107__keyword0702458114510">Error from server Forbidden</span></strong><p id="cce_10_0107__p75241832114916">When you use kubectl to create or query Kubernetes resources, the following output is returned:</p>
@ -3,7 +3,7 @@
<h1 class="topictitle1">Configuring Container Health Check</h1>
<div id="body1512535109871"><div class="section" id="cce_10_0112__section1731112174912"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0112__p8242924192"><span class="keyword" id="cce_10_0112__keyword22817116429">Health check</span> regularly checks the health status of containers during container running. If the health check function is not configured, a pod cannot detect application exceptions or automatically restart the application to restore it. This will result in a situation where the pod status is normal but the application in the pod is abnormal.</p>
<p id="cce_10_0112__a77e71e69afde4757ab0ef6087b2e30de">Kubernetes provides the following health check probes:</p>
<ul id="cce_10_0112__ul1867812287915"><li id="cce_10_0112__li574951765020"><strong id="cce_10_0112__b1644144411235">Liveness probe</strong> (livenessProbe): checks whether a container is still alive. It is similar to the <strong id="cce_10_0112__b5645134422313">ps</strong> command that checks whether a process exists. If the liveness check of a container fails, the cluster restarts the container. If the liveness check is successful, no operation is executed.</li><li id="cce_10_0112__li36781028792"><strong id="cce_10_0112__b1729242134220">Readiness probe</strong> (readinessProbe): checks whether a container is ready to process user requests. Upon that the container is detected unready, service traffic will not be directed to the container. It may take a long time for some applications to start up before they can provide services. This is because that they need to load disk data or rely on startup of an external module. In this case, the application process is running, but the application cannot provide services. To address this issue, this health check probe is used. If the container readiness check fails, the cluster masks all requests sent to the container. If the container readiness check is successful, the container can be accessed. </li><li id="cce_10_0112__li142001552181016"><strong id="cce_10_0112__b86001053354">Startup probe</strong> (startupProbe): checks when a containerized application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, ensuring that those probes do not interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting terminated by the kubelet before they are started.</li></ul>
<ul id="cce_10_0112__ul1867812287915"><li id="cce_10_0112__li574951765020"><strong id="cce_10_0112__b1644144411235">Liveness probe</strong> (livenessProbe): checks whether a container is still alive. It is similar to the <strong id="cce_10_0112__b5645134422313">ps</strong> command that checks whether a process exists. If the liveness check of a container fails, the cluster restarts the container. If the liveness check is successful, no operation is executed.</li><li id="cce_10_0112__li36781028792"><strong id="cce_10_0112__b1729242134220">Readiness probe</strong> (readinessProbe): checks whether a container is ready to process user requests. If a container is detected as unready, service traffic will not be directed to it. Some applications take a long time to start up before they can provide services, because they need to load disk data or wait for an external module to start up. In this case, although the application process has started, the application cannot provide services. To address this issue, this health check probe is used. If the container readiness check fails, the cluster masks all requests sent to the container. If the container readiness check is successful, the container can be accessed.</li><li id="cce_10_0112__li142001552181016"><strong id="cce_10_0112__b86001053354">Startup probe</strong> (startupProbe): checks when a containerized application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, ensuring that those probes do not interfere with the application startup. This can be used for liveness checks on slow-starting containers, preventing them from being terminated by the kubelet before they are up and running.</li></ul>
</div>
<div class="section" id="cce_10_0112__section476025319384"><h4 class="sectiontitle">Check Method</h4><ul id="cce_10_0112__ul2492162133910"><li id="cce_10_0112__li19505918465"><strong id="cce_10_0112__b84235270695216"><span class="keyword" id="cce_10_0112__keyword122935940517318">HTTP request</span></strong><p id="cce_10_0112__p17738122617398">This health check mode applies to containers that provide HTTP/HTTPS services. The cluster periodically initiates an HTTP/HTTPS GET request to such containers. If the return code of the HTTP/HTTPS response is within 200–399, the probe is successful. Otherwise, the probe fails. In this health check mode, you must specify a container listening port and an HTTP/HTTPS request path.</p>
<p id="cce_10_0112__p051511331505">For example, for a container that provides HTTP services, the HTTP check path is <strong id="cce_10_0112__b2043313277265">/health-check</strong>, the port is 80, and the host address is optional (which defaults to the container IP address). Here, 172.16.0.186 is used as an example, so the resulting request is GET http://172.16.0.186:80/health-check. The cluster periodically initiates this request to the container. You can also add one or more headers to an HTTP request. For example, set the request header name to <strong id="cce_10_0112__b1157115313232">Custom-Header</strong> and the corresponding value to <strong id="cce_10_0112__b195721853152316">example</strong>.</p>
@@ -13,7 +13,7 @@
<p id="cce_10_0112__p1658131014413">The CLI mode can be used to replace the HTTP request-based and TCP port-based health check.</p>
<ul id="cce_10_0112__ul16409174744313"><li id="cce_10_0112__li7852728174119">For a TCP port, you can use a script to connect to a container port. If the connection is successful, the script returns <strong id="cce_10_0112__b1610019014247">0</strong>. Otherwise, the script returns <strong id="cce_10_0112__b5100905245">–1</strong>.</li><li id="cce_10_0112__li241104715431">For an HTTP request, you can use a script to run the <strong id="cce_10_0112__b16819134246">wget</strong> command against the container.<p id="cce_10_0112__p16488203413413"><strong id="cce_10_0112__b422541134110">wget http://127.0.0.1:80/health-check</strong></p>
<p id="cce_10_0112__p13488133464119">Check the return code of the response. If the return code is within 200–399, the script returns <strong id="cce_10_0112__b14498132912217">0</strong>. Otherwise, the script returns <strong id="cce_10_0112__b427293111227">–1</strong>. </p>
<div class="notice" id="cce_10_0112__note124141947164311"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_10_0112__ul7414047164318"><li id="cce_10_0112__li81561727181416">Put the program to be executed in the container image so that the program can be executed.</li><li id="cce_10_0112__li204153475437">If the command to be executed is a shell script, do not directly specify the script as the command, but add a script parser. For example, if the script is <strong id="cce_10_0112__b9972128102411">/data/scripts/health_check.sh</strong>, you must specify <strong id="cce_10_0112__b11973988247">sh /data/scripts/health_check.sh</strong> for command execution. The reason is that the cluster is not in the terminal environment when executing programs in a container.</li></ul>
<div class="notice" id="cce_10_0112__note124141947164311"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_10_0112__ul7414047164318"><li id="cce_10_0112__li81561727181416">Put the program to be executed in the container image so that the program can be executed.</li><li id="cce_10_0112__li204153475437">If the command to be executed is a shell script, do not directly specify the script as the command, but add a script parser. For example, if the script is <strong id="cce_10_0112__b9972128102411">/data/scripts/health_check.sh</strong>, you must specify <strong id="cce_10_0112__b11973988247">sh /data/scripts/health_check.sh</strong> for command execution.</li></ul>
</div></div>
</li></ul>
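The CLI check described above can be sketched as a small script. This is a sketch only: the helper name is_healthy is an assumption, and the path and port are the examples used in this section.

```shell
#!/bin/sh
# Sketch of an exec-probe health check (helper name and paths are assumptions).
# Succeed (exit 0) only when the HTTP status code is within 200-399,
# the same success range used by the HTTP check mode.
is_healthy() {
  [ "$1" -ge 200 ] && [ "$1" -le 399 ]
}

# Inside the container, the probe command could resemble:
#   code=$(wget --server-response -q -O /dev/null \
#          http://127.0.0.1:80/health-check 2>&1 \
#          | awk '/HTTP\//{c=$2} END{print c}')
#   is_healthy "$code"
```

Package such a script in the container image and invoke it through a parser (for example, sh /data/scripts/health_check.sh), as the notice above explains.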
</li><li id="cce_10_0112__li198471623132818"><strong id="cce_10_0112__b51081513324">gRPC Check</strong><div class="p" id="cce_10_0112__p489181312320">With gRPC checks, you can configure startup, liveness, and readiness probes for your gRPC application without exposing an HTTP endpoint or requiring an executable. Kubernetes can connect to your workload via gRPC and obtain its status.<div class="notice" id="cce_10_0112__note621111643611"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_10_0112__ul7170123014392"><li id="cce_10_0112__li6171630113911">The gRPC check is supported only in CCE clusters of v1.25 or later.</li><li id="cce_10_0112__li0171193083917">To use gRPC checks, your application must support the <a href="https://github.com/grpc/grpc/blob/master/doc/health-checking.md" target="_blank" rel="noopener noreferrer">gRPC health checking protocol</a>.</li><li id="cce_10_0112__li8171163015392">Similar to HTTP and TCP probes, if the port is incorrect or the application does not support the health checking protocol, the check fails.</li></ul>
@@ -57,7 +57,7 @@
</td>
<td class="cellrowborder" valign="top" width="81%" headers="mcps1.3.3.2.2.3.1.2 "><p id="cce_10_0112__p9644133173213">Number of times the check is retried after a failure.</p>
<p id="cce_10_0112__p111011316163216">If a liveness probe gives up, the container is restarted. If a readiness probe gives up, the pod is marked as Unready.</p>
<p id="cce_10_0112__p446822117214">The default value is <strong id="cce_10_0112__b18801222192519">3</strong>. The minimum value is <strong id="cce_10_0112__b9698122717253">1</strong>.</p>
<p id="cce_10_0112__p446822117214">The default value is <strong id="cce_10_0112__b18399114614911">3</strong>, and the minimum value is <strong id="cce_10_0112__b6399144611918">1</strong>.</p>
</td>
</tr>
</tbody>
@@ -73,29 +73,29 @@ metadata:
spec:
containers:
- name: liveness
image: nginx:alpine
image: <i><span class="varname" id="cce_10_0112__varname107124316217"><image_address></span></i>
args:
- /server
livenessProbe:
httpGet:
path: /healthz
port: 80
httpHeaders:
livenessProbe: # Liveness probe
httpGet: # Checking an HTTP request is used as an example.
path: /healthz # The HTTP check path is <strong id="cce_10_0112__b1387514122610">/healthz</strong>.
port: 80 # The check port number is <strong id="cce_10_0112__b887104111261">80</strong>.
httpHeaders: # (Optional) The request header name is <strong id="cce_10_0112__b10558725276">Custom-Header</strong> and the value is <strong id="cce_10_0112__b14999615162717">Awesome</strong>.
- name: Custom-Header
value: Awesome
initialDelaySeconds: 3
periodSeconds: 3
readinessProbe:
exec:
command:
readinessProbe: # Readiness probe
exec: # Checking an execution command is used as an example.
command: # Command to be executed
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
startupProbe:
httpGet:
path: /healthz
port: 80
startupProbe: # Startup probe
httpGet: # Checking an HTTP request is used as an example.
path: /healthz # The HTTP check path is <strong id="cce_10_0112__b812787483">/healthz</strong>.
port: 80 # The check port number is <strong id="cce_10_0112__b561594217264">80</strong>.
failureThreshold: 30
periodSeconds: 10</pre>
</div>
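For the gRPC check described earlier, the probe is declared with a grpc field instead of httpGet or exec. A minimal sketch, assuming the application serves the gRPC health checking protocol on port 50051 (the port number is an assumption, not from this document):

```yaml
livenessProbe:           # gRPC check (CCE clusters of v1.25 or later)
  grpc:
    port: 50051          # assumed port serving the gRPC health checking protocol
  initialDelaySeconds: 5
  periodSeconds: 10
```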
@@ -7,9 +7,9 @@
<p id="cce_10_0113__p26271321192016">Configurations must be imported to a container as arguments. Otherwise, configurations will be lost after the container restarts.</p>
</div></div>
<p id="cce_10_0113__p78261119155911">Environment variables can be set in the following modes:</p>
<ul id="cce_10_0113__ul1669104610598"><li id="cce_10_0113__li266913468594"><strong id="cce_10_0113__b4564141914250">Custom</strong>: Enter the environment variable name and parameter value.</li><li id="cce_10_0113__li13148164912599"><strong id="cce_10_0113__b31161818143614">Added from ConfigMap key</strong>: Import all keys in a ConfigMap as environment variables.</li><li id="cce_10_0113__li1855315291026"><strong id="cce_10_0113__b5398577535">Added from ConfigMap</strong>: Import a key in a ConfigMap as the value of an environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if you import <strong id="cce_10_0113__b17273203011542">configmap_value</strong> of <strong id="cce_10_0113__b11273163035411">configmap_key</strong> in a ConfigMap as the value of environment variable <strong id="cce_10_0113__b327363011547">key1</strong>, an environment variable named <strong id="cce_10_0113__b17273123015548">key1</strong> whose value is <strong id="cce_10_0113__b72741430195420">configmap_value</strong> exists in the container.</li><li id="cce_10_0113__li1727795616592"><strong id="cce_10_0113__b675162614437">Added from secret</strong>: Import all keys in a secret as environment variables.</li><li id="cce_10_0113__li93353201773"><strong id="cce_10_0113__b0483141614480">Added from secret key</strong>: Import the value of a key in a secret as the value of an environment variable. 
As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if you import <strong id="cce_10_0113__b21956593546">secret_value</strong> of <strong id="cce_10_0113__b1619505935419">secret_key</strong> in secret <strong id="cce_10_0113__b171951159145419">secret-example</strong> as the value of environment variable <strong id="cce_10_0113__b1819555965412">key2</strong>, an environment variable named <strong id="cce_10_0113__b17195859155410">key2</strong> whose value is <strong id="cce_10_0113__b1219510597545">secret_value</strong> exists in the container.</li><li id="cce_10_0113__li1749760535"><strong id="cce_10_0113__b19931701407">Variable value/reference</strong>: Use the field defined by a pod as the value of the environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if the pod name is imported as the value of environment variable <strong id="cce_10_0113__b1571429125511">key3</strong>, an environment variable named <strong id="cce_10_0113__b8571202914556">key3</strong> exists in the container and its value is the pod name.</li><li id="cce_10_0113__li16129071317"><strong id="cce_10_0113__b1625513417292">Resource Reference</strong>: The value of <strong id="cce_10_0113__b176281198307">Request</strong> or <strong id="cce_10_0113__b186221022193017">Limit</strong> defined by the container is used as the value of the environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if you import the CPU limit of <strong id="cce_10_0113__b1144418567">container-1</strong> as the value of environment variable <strong id="cce_10_0113__b1563165375511">key4</strong>, an environment variable named <strong id="cce_10_0113__b35638539558">key4</strong> exists in the container and its value is the CPU limit of <strong id="cce_10_0113__b1425871495610">container-1</strong>.</li></ul>
<ul id="cce_10_0113__ul1669104610598"><li id="cce_10_0113__li266913468594"><strong id="cce_10_0113__b4564141914250">Custom</strong>: Enter the environment variable name and parameter value.</li><li id="cce_10_0113__li13148164912599"><strong id="cce_10_0113__b31161818143614">Added from ConfigMap key</strong>: Import all keys in a ConfigMap as environment variables.</li><li id="cce_10_0113__li1855315291026"><strong id="cce_10_0113__b5398577535">Added from ConfigMap</strong>: Import a key in a ConfigMap as the value of an environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if you import <strong id="cce_10_0113__b67861335193619">configmap_value</strong> of <strong id="cce_10_0113__b478643513618">configmap_key</strong> in <strong id="cce_10_0113__b14610123945714">configmap-example</strong> as the value of environment variable <strong id="cce_10_0113__b7786133573616">key1</strong>, an environment variable named <strong id="cce_10_0113__b678683518364">key1</strong> whose value is <strong id="cce_10_0113__b1378615359362">configmap_value</strong> is available in the container.</li><li id="cce_10_0113__li1727795616592"><strong id="cce_10_0113__b675162614437">Added from secret</strong>: Import all keys in a secret as environment variables.</li><li id="cce_10_0113__li93353201773"><strong id="cce_10_0113__b0483141614480">Added from secret key</strong>: Import the value of a key in a secret as the value of an environment variable. 
As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if you import <strong id="cce_10_0113__b12974122713812">secret_value</strong> of <strong id="cce_10_0113__b197472716385">secret_key</strong> in <strong id="cce_10_0113__b722441953910">secret-example</strong> as the value of environment variable <strong id="cce_10_0113__b8975627173810">key2</strong>, an environment variable named <strong id="cce_10_0113__b29756275384">key2</strong> whose value is <strong id="cce_10_0113__b097552703811">secret_value</strong> is available in the container.</li><li id="cce_10_0113__li1749760535"><strong id="cce_10_0113__b19931701407">Variable value/reference</strong>: Use the field defined by a pod as the value of the environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if the pod name is imported as the value of environment variable <strong id="cce_10_0113__b1939710417283">key3</strong>, an environment variable named <strong id="cce_10_0113__b11252186142914">key3</strong> whose value is the pod name is available in the container.</li><li id="cce_10_0113__li16129071317"><strong id="cce_10_0113__b1625513417292">Resource Reference</strong>: The value of <strong id="cce_10_0113__b176281198307">Request</strong> or <strong id="cce_10_0113__b186221022193017">Limit</strong> defined by the container is used as the value of the environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if you import the CPU limit of container-1 as the value of environment variable <strong id="cce_10_0113__b272674753017">key4</strong>, an environment variable named <strong id="cce_10_0113__b99015318423">key4</strong> whose value is the CPU limit of container-1 is available in the container.</li></ul>
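The Resource Reference mode above maps to a resourceFieldRef entry in the pod spec. A minimal sketch of importing the CPU limit of container-1 as the value of environment variable key4, consistent with the list above:

```yaml
- name: key4
  valueFrom:
    resourceFieldRef:
      containerName: container-1
      resource: limits.cpu   # the CPU limit of container-1 becomes the value of key4
```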
</div>
<div class="section" id="cce_10_0113__section13829152011595"><h4 class="sectiontitle">Adding Environment Variables</h4><ol id="cce_10_0113__ol4904646935"><li id="cce_10_0113__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0113__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0113__b1794501219430">Workloads</strong> in the navigation pane, and click the <strong id="cce_10_0113__b11945131216432">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0113__li190412461831"><span>When creating a workload, modify the container information in <strong id="cce_10_0113__b101361766447">Container Settings</strong> and click the <strong id="cce_10_0113__b8169124424315">Environment Variables</strong> tab.</span></li><li id="cce_10_0113__li468251942720"><span>Configure environment variables.</span><p><div class="fignone" id="cce_10_0113__fig164568529317"><a name="cce_10_0113__fig164568529317"></a><a name="fig164568529317"></a><span class="figcap"><b>Figure 1 </b>Configuring environment variables</span><br><span><img id="cce_10_0113__image91271459162610" src="en-us_image_0000001750950284.png"></span></div>
<div class="section" id="cce_10_0113__section13829152011595"><h4 class="sectiontitle">Adding Environment Variables</h4><ol id="cce_10_0113__ol4904646935"><li id="cce_10_0113__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0113__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0113__b1794501219430">Workloads</strong> in the navigation pane, and click <strong id="cce_10_0113__b11945131216432">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0113__li190412461831"><span>When creating a workload, modify the container information in <strong id="cce_10_0113__b101361766447">Container Settings</strong> and click the <strong id="cce_10_0113__b8169124424315">Environment Variables</strong> tab.</span></li><li id="cce_10_0113__li468251942720"><span>Configure environment variables.</span><p><div class="fignone" id="cce_10_0113__fig164568529317"><a name="cce_10_0113__fig164568529317"></a><a name="fig164568529317"></a><span class="figcap"><b>Figure 1 </b>Configuring environment variables</span><br><span><img id="cce_10_0113__image131385146481" src="en-us_image_0000001867802022.png"></span></div>
</p></li></ol>
</div>
<div class="section" id="cce_10_0113__section19591158201313"><h4 class="sectiontitle">YAML Example</h4><pre class="screen" id="cce_10_0113__screen1034117614147">apiVersion: apps/v1
@@ -45,12 +45,12 @@ spec:
valueFrom:
configMapKeyRef:
name: configmap-example
key: key1
key: configmap_key
- name: key2 # Added from secret key
valueFrom:
secretKeyRef:
name: secret-example
key: key2
key: secret_key
- name: key3 # Variable reference, which uses the field defined by a pod as the value of the environment variable.
valueFrom:
fieldRef:
@@ -11,7 +11,7 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0626.html">Configuring SFS Turbo Mount Options</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_bestpractice_00253.html">Dynamically Creating and Mounting Subdirectories of an SFS Turbo File System</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_bestpractice_00253_0.html">Using StorageClass to Dynamically Create a Subdirectory in an SFS Turbo File System</a></strong><br>
</li>
</ul>
@@ -1,6 +1,6 @@
<a name="cce_10_0127"></a><a name="cce_10_0127"></a>
<h1 class="topictitle1">CCE Container Storage (FlexVolume, Discarded)</h1>
<h1 class="topictitle1">FlexVolume (Discarded)</h1>
<div id="body1541037494110"><div class="section" id="cce_10_0127__section25311744154917"><h4 class="sectiontitle">Introduction</h4><p id="cce_10_0127__p1574910495496">CCE Container Storage (FlexVolume), also called storage-driver, functions as a standard Kubernetes FlexVolume plugin to allow containers to use EVS, SFS, OBS, and SFS Turbo storage resources. By installing and upgrading storage-driver, you can quickly install and update cloud storage capabilities.</p>
<p id="cce_10_0127__p5414123111414"><strong id="cce_10_0127__b2471165511315">FlexVolume is a system resource add-on. It is installed by default when a cluster of Kubernetes v1.13 or earlier is created.</strong></p>
</div>
@@ -10,7 +10,7 @@
</div>
<div class="section" id="cce_10_0127__section776571919194"><h4 class="sectiontitle">Installing the Add-on</h4><p id="cce_10_0127__p11975102684817">This add-on has been installed by default. If it is uninstalled due to some reasons, you can reinstall it by performing the following steps:</p>
<p id="cce_10_0127__p21112429503">If storage-driver is not installed in a cluster, perform the following steps to install it:</p>
<ol id="cce_10_0127__ol9183433182510"><li id="cce_10_0127__li13183153352515"><span>Log in to the CCE console and click the cluster name to access the cluster console. Choose <strong id="cce_10_0127__b636391011166"><span id="cce_10_0127__text77103384818">Add-ons</span></strong> in the navigation pane, locate <strong id="cce_10_0127__b8174151516816">CCE Container Storage (FlexVolume)</strong> on the right, and click <strong id="cce_10_0127__b31752151083">Install</strong>.</span></li><li id="cce_10_0127__li9455819152615"><span>Click <strong id="cce_10_0127__b227242216554">Install</strong> to install the add-on. Note that the storage-driver has no configurable parameters and can be directly installed.</span></li></ol>
<ol id="cce_10_0127__ol9183433182510"><li id="cce_10_0127__li13183153352515"><span>Log in to the CCE console and click the cluster name to access the cluster console. Choose <strong id="cce_10_0127__b9141193415148"><span id="cce_10_0127__text1114113345145">Add-ons</span></strong> in the navigation pane, locate <strong id="cce_10_0127__b191416342144">CCE Container Storage (FlexVolume)</strong> on the right, and click <strong id="cce_10_0127__b141411534191416">Install</strong>.</span></li><li id="cce_10_0127__li9455819152615"><span>Click <strong id="cce_10_0127__b227242216554">Install</strong> to install the add-on. Note that the storage-driver has no configurable parameters and can be directly installed.</span></li></ol>
</div>
</div>
<div>
@@ -294,12 +294,12 @@ $configBlock
</thead>
<tbody><tr id="cce_10_0129__row162102049564"><td class="cellrowborder" valign="top" width="24%" headers="mcps1.3.3.3.4.2.2.2.3.1.1 "><p id="cce_10_0129__p421019416569">Multi AZ</p>
</td>
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.4.2.2.2.3.1.2 "><ul id="cce_10_0129__ul122101425619"><li id="cce_10_0129__li142101342560"><strong id="cce_10_0129__b6395193820145">Preferred</strong>: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ.</li><li id="cce_10_0129__li52682031184214"><strong id="cce_10_0129__b1516164563318">Equivalent mode</strong>: Deployment pods of the add-on are evenly scheduled to the nodes in the cluster in each AZ. If a new AZ is added, you are advised to increase add-on pods for cross-AZ HA deployment. With the Equivalent multi-AZ deployment, the difference between the number of add-on pods in different AZs will be less than or equal to 1. If resources in one of the AZs are insufficient, pods cannot be scheduled to that AZ.</li><li id="cce_10_0129__li3210440562"><strong id="cce_10_0129__b4304164818353">Required</strong>: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run.</li></ul>
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.4.2.2.2.3.1.2 "><ul id="cce_10_0129__ul122101425619"><li id="cce_10_0129__li142101342560"><strong id="cce_10_0129__b6395193820145">Preferred</strong>: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ.</li><li id="cce_10_0129__li52682031184214"><strong id="cce_10_0129__b8203192017422">Equivalent mode</strong>: Deployment pods of the add-on are evenly scheduled to the nodes in the cluster in each AZ. If a new AZ is added, you are advised to increase add-on pods for cross-AZ HA deployment. With the Equivalent multi-AZ deployment, the difference between the number of add-on pods in different AZs will be less than or equal to 1. If resources in one of the AZs are insufficient, pods cannot be scheduled to that AZ.</li><li id="cce_10_0129__li3210440562"><strong id="cce_10_0129__b105801282497">Required</strong>: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run.</li></ul>
</td>
</tr>
<tr id="cce_10_0129__row1121010416566"><td class="cellrowborder" valign="top" width="24%" headers="mcps1.3.3.3.4.2.2.2.3.1.1 "><p id="cce_10_0129__p12210114165612">Node Affinity</p>
</td>
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.4.2.2.2.3.1.2 "><ul id="cce_10_0129__ul1621054145617"><li id="cce_10_0129__li1721017413562"><strong id="cce_10_0129__b2074619819545">Incompatibility</strong>: Node affinity is disabled for the add-on.</li><li id="cce_10_0129__li52109417563"><strong id="cce_10_0129__b7658101316551">Node Affinity</strong>: Specify the nodes where the add-on is deployed. If you do not specify the nodes, the add-on will be randomly scheduled based on the default cluster scheduling policy.</li><li id="cce_10_0129__li1421015415561"><strong id="cce_10_0129__b98581358205610">Specified Node Pool Scheduling</strong>: Specify the node pool where the add-on is deployed. If you do not specify the node pool, the add-on will be randomly scheduled based on the default cluster scheduling policy.</li><li id="cce_10_0129__li92101542568"><strong id="cce_10_0129__b634615619572">Custom Policies</strong>: Enter the labels of the nodes where the add-on is to be deployed for more flexible scheduling policies. If you do not specify node labels, the add-on will be randomly scheduled based on the default cluster scheduling policy.<p id="cce_10_0129__p19210104145617">If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on cannot run.</p>
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.3.3.4.2.2.2.3.1.2 "><ul id="cce_10_0129__ul1621054145617"><li id="cce_10_0129__li1721017413562"><strong id="cce_10_0129__b2074619819545">Not configured</strong>: Node affinity is disabled for the add-on.</li><li id="cce_10_0129__li52109417563"><strong id="cce_10_0129__b7658101316551">Node Affinity</strong>: Specify the nodes where the add-on is deployed. If you do not specify the nodes, the add-on will be randomly scheduled based on the default cluster scheduling policy.</li><li id="cce_10_0129__li1421015415561"><strong id="cce_10_0129__b98581358205610">Specified Node Pool Scheduling</strong>: Specify the node pool where the add-on is deployed. If you do not specify the node pool, the add-on will be randomly scheduled based on the default cluster scheduling policy.</li><li id="cce_10_0129__li92101542568"><strong id="cce_10_0129__b634615619572">Custom Policies</strong>: Enter the labels of the nodes where the add-on is to be deployed for more flexible scheduling policies. If you do not specify node labels, the add-on will be randomly scheduled based on the default cluster scheduling policy.<p id="cce_10_0129__p19210104145617">If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on cannot run.</p>
</li></ul>
</td>
</tr>
@@ -316,7 +316,7 @@ $configBlock
</p></li><li id="cce_10_0129__li9455819152615"><span>Click <strong id="cce_10_0129__b165814819135">Install</strong>.</span></li></ol>
</div>
<div class="section" id="cce_10_0129__section0377457163618"><h4 class="sectiontitle">Components</h4>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0129__table1965341035819" frame="border" border="1" rules="all"><caption><b>Table 6 </b>CoreDNS components</caption><thead align="left"><tr id="cce_10_0129__row1565319102582"><th align="left" class="cellrowborder" valign="top" width="28.000000000000004%" id="mcps1.3.4.2.2.4.1.1"><p id="cce_10_0129__p14653141018584">Component</p>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0129__table1965341035819" frame="border" border="1" rules="all"><caption><b>Table 6 </b>Add-on components</caption><thead align="left"><tr id="cce_10_0129__row1565319102582"><th align="left" class="cellrowborder" valign="top" width="28.000000000000004%" id="mcps1.3.4.2.2.4.1.1"><p id="cce_10_0129__p14653141018584">Component</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="45%" id="mcps1.3.4.2.2.4.1.2"><p id="cce_10_0129__p065391025820">Description</p>
</th>
@@ -345,7 +345,7 @@ $configBlock
<ol id="cce_10_0129__ol1895815493314"><li id="cce_10_0129__li29576413330">The query is first sent to the DNS caching layer in CoreDNS.</li><li id="cce_10_0129__li79589463318">From the caching layer, the suffix of the request is examined and then the request is forwarded to the corresponding DNS:<ul id="cce_10_0129__ul29582417338"><li id="cce_10_0129__li495814453313">Names with the cluster suffix, for example, <strong id="cce_10_0129__b11610940133413">.cluster.local</strong>: The request is sent to CoreDNS.</li></ul>
<ul id="cce_10_0129__ul189581349330"><li id="cce_10_0129__li169582413313">Names with the stub domain suffix, for example, <strong id="cce_10_0129__b208218633511">.acme.local</strong>: The request is sent to the configured custom DNS resolver that listens, for example, on 1.2.3.4.</li><li id="cce_10_0129__li195815453320">Names that do not match the suffix (for example, <strong id="cce_10_0129__b13519452133513">widget.com</strong>): The request is forwarded to the upstream DNS.</li></ul>
</li></ol>
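The routing rules above correspond to a CoreDNS Corefile along these lines. A sketch only; the stub domain acme.local, the resolver 1.2.3.4, and the cluster suffix are the examples from this section:

```
.:53 {
    cache 30                       # DNS caching layer
    kubernetes cluster.local       # names with the cluster suffix
    forward . /etc/resolv.conf     # unmatched names go to the upstream DNS
}
acme.local:53 {
    forward . 1.2.3.4              # stub domain sent to the custom resolver
}
```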
<div class="fignone" id="cce_10_0129__fig7582181514118"><span class="figcap"><b>Figure 1 </b>Routing</span><br><span><img id="cce_10_0129__image23305161015" src="en-us_image_0000001750791424.png"></span></div>
<div class="fignone" id="cce_10_0129__fig7582181514118"><span class="figcap"><b>Figure 1 </b>Routing</span><br><span><img id="cce_10_0129__image23305161015" src="en-us_image_0000001897906037.png"></span></div>
</div>
</div>
<div>
File diff suppressed because it is too large
@@ -23,11 +23,11 @@ cd /usr/local/nvidia/bin && ./nvidia-smi</pre>
</li><li id="cce_10_0141__li186276437397">Container:<pre class="screen" id="cce_10_0141__screen11900202601214">cd /usr/local/nvidia/bin && ./nvidia-smi</pre>
</li></ul>
<p id="cce_10_0141__p7254950101912">If GPU information is returned, the device is available and the add-on has been installed.</p>
<p id="cce_10_0141__p78452015208"><span><img id="cce_10_0141__image5372171217135" src="en-us_image_0000001750791684.png"></span></p>
<p id="cce_10_0141__p78452015208"><span><img id="cce_10_0141__image5372171217135" src="en-us_image_0000001898025961.png"></span></p>
</div>
<div class="section" id="cce_10_0141__section95451728192112"><a name="cce_10_0141__section95451728192112"></a><a name="section95451728192112"></a><h4 class="sectiontitle">Obtaining the Driver Link from Public Network</h4><ol id="cce_10_0141__ol1138125974915"><li id="cce_10_0141__li19138125912498"><span>Log in to the CCE console.</span></li><li id="cce_10_0141__li111387599493"><span>Click <strong id="cce_10_0141__b7473141016405">Create Node</strong> and select the GPU node to be created in the <strong id="cce_10_0141__b13473161014016">Specifications</strong> area. The GPU card model of the node is displayed in the lower part of the page.</span></li></ol><ol start="3" id="cce_10_0141__ol195031456154814"><li id="cce_10_0141__li165032056184815"><span>Visit <em id="cce_10_0141__i2070996145418"><a href="https://www.nvidia.com/Download/Find.aspx?lang=en" target="_blank" rel="noopener noreferrer">https://www.nvidia.com/Download/Find.aspx?lang=en</a></em>.</span></li><li id="cce_10_0141__li16232124410505"><span>Select the driver information on the <span class="uicontrol" id="cce_10_0141__uicontrol1291212498541"><b>NVIDIA Driver Downloads</b></span> page, as shown in <a href="#cce_10_0141__fig11696366517">Figure 1</a>. <span class="uicontrol" id="cce_10_0141__uicontrol1650164444518"><b>Operating System</b></span> must be <strong id="cce_10_0141__b10981947121910">Linux 64-bit</strong>.</span><p><div class="fignone" id="cce_10_0141__fig11696366517"><a name="cce_10_0141__fig11696366517"></a><a name="fig11696366517"></a><span class="figcap"><b>Figure 1 </b>Setting parameters</span><br><span><img id="cce_10_0141__image18514163918398" src="en-us_image_0000001750950592.png"></span></div>
</p></li><li id="cce_10_0141__li1682301014493"><span>After confirming the driver information, click <span class="uicontrol" id="cce_10_0141__uicontrol1411775314551"><b>SEARCH</b></span>. A page is displayed, showing the driver information, as shown in <a href="#cce_10_0141__fig7873421145213">Figure 2</a>. Click <span class="uicontrol" id="cce_10_0141__uicontrol163131533185618"><b>DOWNLOAD</b></span>.</span><p><div class="fignone" id="cce_10_0141__fig7873421145213"><a name="cce_10_0141__fig7873421145213"></a><a name="fig7873421145213"></a><span class="figcap"><b>Figure 2 </b>Driver information</span><br><span><img id="cce_10_0141__image6928629163818" src="en-us_image_0000001797871377.png"></span></div>
</p></li><li id="cce_10_0141__li624514474513"><span>Obtain the driver link in either of the following ways:</span><p><ul id="cce_10_0141__ul18225815213"><li id="cce_10_0141__li68351817115313">Method 1: As shown in <a href="#cce_10_0141__fig5901194614534">Figure 3</a>, find <em id="cce_10_0141__i1537731713469">url=/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run</em> in the browser address box. Then, supplement it to obtain the driver link <a href="https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run" target="_blank" rel="noopener noreferrer">https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run</a>. By using this method, you must bind an EIP to each GPU node.</li><li id="cce_10_0141__li193423205231">Method 2: As shown in <a href="#cce_10_0141__fig5901194614534">Figure 3</a>, click <span class="uicontrol" id="cce_10_0141__uicontrol1435254915542"><b>AGREE & DOWNLOAD</b></span> to download the driver. Then, upload the driver to OBS and record the OBS URL. By using this method, you do not need to bind an EIP to GPU nodes.<div class="fignone" id="cce_10_0141__fig5901194614534"><a name="cce_10_0141__fig5901194614534"></a><a name="fig5901194614534"></a><span class="figcap"><b>Figure 3 </b>Obtaining the link</span><br><span><img id="cce_10_0141__image293362819366" src="en-us_image_0000001750950604.png"></span></div>
<div class="section" id="cce_10_0141__section95451728192112"><a name="cce_10_0141__section95451728192112"></a><a name="section95451728192112"></a><h4 class="sectiontitle">Obtaining the Driver Link from Public Network</h4><ol id="cce_10_0141__ol1138125974915"><li id="cce_10_0141__li19138125912498"><span>Log in to the CCE console.</span></li><li id="cce_10_0141__li111387599493"><span>Click <strong id="cce_10_0141__b7473141016405">Create Node</strong> and select the GPU node to be created in the <strong id="cce_10_0141__b13473161014016">Specifications</strong> area. The GPU card model of the node is displayed in the lower part of the page.</span></li></ol><ol start="3" id="cce_10_0141__ol195031456154814"><li id="cce_10_0141__li165032056184815"><span>Visit <em id="cce_10_0141__i2070996145418"><a href="https://www.nvidia.com/Download/Find.aspx?lang=en" target="_blank" rel="noopener noreferrer">https://www.nvidia.com/Download/Find.aspx?lang=en</a></em>.</span></li><li id="cce_10_0141__li16232124410505"><span>Select the driver information on the <span class="uicontrol" id="cce_10_0141__uicontrol1291212498541"><b>NVIDIA Driver Downloads</b></span> page, as shown in <a href="#cce_10_0141__fig11696366517">Figure 1</a>. <span class="uicontrol" id="cce_10_0141__uicontrol1650164444518"><b>Operating System</b></span> must be <strong id="cce_10_0141__b10981947121910">Linux 64-bit</strong>.</span><p><div class="fignone" id="cce_10_0141__fig11696366517"><a name="cce_10_0141__fig11696366517"></a><a name="fig11696366517"></a><span class="figcap"><b>Figure 1 </b>Setting parameters</span><br><span><img id="cce_10_0141__image18514163918398" src="en-us_image_0000001851745764.png"></span></div>
</p></li><li id="cce_10_0141__li1682301014493"><span>After confirming the driver information, click <span class="uicontrol" id="cce_10_0141__uicontrol1411775314551"><b>SEARCH</b></span>. A page is displayed, showing the driver information, as shown in <a href="#cce_10_0141__fig7873421145213">Figure 2</a>. Click <span class="uicontrol" id="cce_10_0141__uicontrol163131533185618"><b>DOWNLOAD</b></span>.</span><p><div class="fignone" id="cce_10_0141__fig7873421145213"><a name="cce_10_0141__fig7873421145213"></a><a name="fig7873421145213"></a><span class="figcap"><b>Figure 2 </b>Driver information</span><br><span><img id="cce_10_0141__image6928629163818" src="en-us_image_0000001851587044.png"></span></div>
</p></li><li id="cce_10_0141__li624514474513"><span>Obtain the driver link in either of the following ways:</span><p><ul id="cce_10_0141__ul18225815213"><li id="cce_10_0141__li68351817115313">Method 1: As shown in <a href="#cce_10_0141__fig5901194614534">Figure 3</a>, find <em id="cce_10_0141__i1537731713469">url=/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run</em> in the browser address box. Then, supplement it to obtain the driver link <a href="https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run" target="_blank" rel="noopener noreferrer">https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run</a>. By using this method, you must bind an EIP to each GPU node.</li><li id="cce_10_0141__li193423205231">Method 2: As shown in <a href="#cce_10_0141__fig5901194614534">Figure 3</a>, click <span class="uicontrol" id="cce_10_0141__uicontrol1435254915542"><b>AGREE & DOWNLOAD</b></span> to download the driver. Then, upload the driver to OBS and record the OBS URL. By using this method, you do not need to bind an EIP to GPU nodes.<div class="fignone" id="cce_10_0141__fig5901194614534"><a name="cce_10_0141__fig5901194614534"></a><a name="fig5901194614534"></a><span class="figcap"><b>Figure 3 </b>Obtaining the link</span><br><span><img id="cce_10_0141__image293362819366" src="en-us_image_0000001897906445.png"></span></div>
</li></ul>
</p></li></ol>
</div>
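Method 1 above amounts to joining the `url=` fragment from the browser address box with the download host. A minimal sketch, assuming the `us.download.nvidia.com` host shown in the example link in this section:

```shell
# Sketch of method 1: join the "url=" fragment from the browser address
# box with the download host (assumption: us.download.nvidia.com, as in
# the example link in this section).
fragment="/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run"
driver_url="https://us.download.nvidia.com${fragment}"
echo "${driver_url}"
# On a GPU node with an EIP bound, the driver could then be fetched with:
# wget "${driver_url}"
```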
@@ -36,7 +36,7 @@ cd /usr/local/nvidia/bin && ./nvidia-smi</pre>
</p></li><li id="cce_10_0141__li32141341144910"><span>In the bucket list, click a bucket name, and then the <strong id="cce_10_0141__b162847101346">Overview</strong> page of the bucket is displayed.</span></li><li id="cce_10_0141__li644534394917"><span>In the navigation pane, choose <strong id="cce_10_0141__b1431216143510">Objects</strong>.</span></li><li id="cce_10_0141__li3518125314429"><span>Select the name of the target object and copy the driver link on the object details page.</span></li></ol>
</div>
<div class="section" id="cce_10_0141__section0377457163618"><h4 class="sectiontitle">Components</h4>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0141__table1965341035819" frame="border" border="1" rules="all"><caption><b>Table 1 </b>GPU component</caption><thead align="left"><tr id="cce_10_0141__row1565319102582"><th align="left" class="cellrowborder" valign="top" width="19%" id="mcps1.3.7.2.2.4.1.1"><p id="cce_10_0141__p14653141018584">Component</p>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0141__table1965341035819" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Add-on components</caption><thead align="left"><tr id="cce_10_0141__row1565319102582"><th align="left" class="cellrowborder" valign="top" width="19%" id="mcps1.3.7.2.2.4.1.1"><p id="cce_10_0141__p14653141018584">Component</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="60%" id="mcps1.3.7.2.2.4.1.2"><p id="cce_10_0141__p065391025820">Description</p>
</th>
@@ -2,12 +2,12 @@
<h1 class="topictitle1">NodePort</h1>
<div id="body1553224785332"><div class="section" id="cce_10_0142__section13654155944916"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0142__p028915126124">A Service is exposed on each node's IP address at a static port (NodePort). When you create a NodePort Service, Kubernetes automatically allocates an internal IP address (ClusterIP) of the cluster. When clients outside the cluster access &lt;NodeIP&gt;:&lt;NodePort&gt;, the traffic will be forwarded to the target pod through the ClusterIP of the NodePort Service.</p>
<div class="fignone" id="cce_10_0142__fig6819133414131"><span class="figcap"><b>Figure 1 </b>NodePort access</span><br><span><img id="cce_10_0142__image10510139711" src="en-us_image_0000001797870765.png"></span></div>
<div class="fignone" id="cce_10_0142__fig6819133414131"><span class="figcap"><b>Figure 1 </b>NodePort access</span><br><span><img id="cce_10_0142__image10510139711" src="en-us_image_0000001851586420.png"></span></div>
</div>
<div class="section" id="cce_10_0142__section8501151104219"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0142__ul1685519569431"><li id="cce_10_0142__li1585575616436">By default, a NodePort Service is accessed within a VPC. To use an EIP to access a NodePort Service through public networks, bind an EIP to the node in the cluster in advance.</li><li id="cce_10_0142__li128551156114310">After a Service is created, if the affinity setting is switched from the cluster level to the node level, the connection tracing table will not be cleared. Do not modify the Service affinity setting after the Service is created. To modify it, create a Service again.</li><li id="cce_10_0142__li62831358182017">CCE Turbo clusters support only cluster-level service affinity.</li><li id="cce_10_0142__li217783916207">In VPC network mode, when container A is published through a NodePort service and the service affinity is set to the node level (that is, <strong id="cce_10_0142__b1291203218520">externalTrafficPolicy</strong> is set to <strong id="cce_10_0142__b11911632135217">local</strong>), container B deployed on the same node cannot access container A through the node IP address and NodePort service.</li><li id="cce_10_0142__li14613571073">When a NodePort service is created in a cluster of v1.21.7 or later, the port on the node is not displayed using <strong id="cce_10_0142__b13256143512525">netstat</strong> by default. If the cluster forwarding mode is <strong id="cce_10_0142__b42563350522">iptables</strong>, run the <strong id="cce_10_0142__b62561135115212">iptables -t nat -L</strong> command to view the port. If the cluster forwarding mode is <strong id="cce_10_0142__b925763515218">IPVS</strong>, run the <strong id="cce_10_0142__b23917223106">ipvsadm -Ln</strong> command to view the port.</li></ul>
</div>
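The last constraint above can be verified on a node; a sketch for clusters of v1.21.7 or later, with the on-node lookup commands shown as comments because they require root access on a cluster node (the port value is a hypothetical example):

```shell
# Locating a NodePort that netstat does not show (clusters v1.21.7+).
node_port=31234   # hypothetical example; use your Service's actual nodePort
# iptables forwarding mode (run on a cluster node):
#   iptables -t nat -L | grep "${node_port}"
# IPVS forwarding mode (run on a cluster node):
#   ipvsadm -Ln | grep "${node_port}"
# Automatically assigned NodePorts fall in the default range 30000-32767.
if [ "${node_port}" -ge 30000 ] && [ "${node_port}" -le 32767 ]; then
  echo "port ${node_port} is in the default NodePort range"
fi
```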
<div class="section" id="cce_10_0142__section1325012312139"><h4 class="sectiontitle">Creating a NodePort Service</h4><ol id="cce_10_0142__ol751935681319"><li id="cce_10_0142__li41731123658"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0142__li1651955651312"><span>In the navigation pane, choose <strong id="cce_10_0142__b169702128151"><span id="cce_10_0142__text9765124722315">Services & Ingresses</span></strong>. In the upper right corner, click <span class="uicontrol" id="cce_10_0142__uicontrol69701128153"><b>Create Service</b></span>.</span></li><li id="cce_10_0142__li185190567138"><span>Set intra-cluster access parameters.</span><p><ul id="cce_10_0142__ul4446314017144"><li id="cce_10_0142__li6462394317144"><strong id="cce_10_0142__b845613814287">Service Name</strong>: Specify a Service name, which can be the same as the workload name.</li><li id="cce_10_0142__li89543531070"><strong id="cce_10_0142__b106597277362">Service Type</strong>: Select <span class="uicontrol" id="cce_10_0142__uicontrol5666142710366"><b>NodePort</b></span>.</li><li id="cce_10_0142__li4800017144"><strong id="cce_10_0142__b1263193014367">Namespace</strong>: Namespace to which the workload belongs.</li><li id="cce_10_0142__li1758110116149"><strong id="cce_10_0142__b38118349367">Service Affinity</strong>: For details, see <a href="cce_10_0249.html#cce_10_0249__section18134208069">externalTrafficPolicy (Service Affinity)</a>.<ul id="cce_10_0142__ul158101161412"><li id="cce_10_0142__li105815113141"><strong id="cce_10_0142__b2674164185210">Cluster level</strong>: The IP addresses and access ports of all nodes in a cluster can access the workload associated with the Service. Service access will cause performance loss due to route redirection, and the source IP address of the client cannot be obtained.</li><li id="cce_10_0142__li185817117145"><strong id="cce_10_0142__b465617445525">Node level</strong>: Only the IP address and access port of the node where the workload is located can access the workload associated with the Service. Service access will not cause performance loss due to route redirection, and the source IP address of the client can be obtained.</li></ul>
</li><li id="cce_10_0142__li43200017144"><strong id="cce_10_0142__b2600143835813">Selector</strong>: Add a label and click <strong id="cce_10_0142__b260020382582">Confirm</strong>. A Service selects a pod based on the added label. You can also click <strong id="cce_10_0142__b354965233619">Reference Workload Label</strong> to reference the label of an existing workload. In the dialog box that is displayed, select a workload and click <strong id="cce_10_0142__b16550125293615">OK</strong>.</li><li id="cce_10_0142__li388800117144"><strong id="cce_10_0142__b451552415715">Port Settings</strong><ul id="cce_10_0142__ul3499201217144"><li id="cce_10_0142__li4649265917144"><strong id="cce_10_0142__b28899114374">Protocol</strong>: protocol used by the Service.</li><li id="cce_10_0142__li353122153610"><strong id="cce_10_0142__b1852318551688">Service Port</strong>: port used by the Service. The port number ranges from 1 to 65535.</li><li id="cce_10_0142__li1578074917144"><strong id="cce_10_0142__b19416443712">Container Port</strong>: port on which the workload listens. For example, Nginx uses port 80 by default.</li><li id="cce_10_0142__li780902117144"><strong id="cce_10_0142__b11763931199">Node Port</strong>: You are advised to select <strong id="cce_10_0142__b1198741292">Auto</strong>. You can also specify a port. The default port ranges from 30000 to 32767.</li></ul>
<div class="section" id="cce_10_0142__section1325012312139"><h4 class="sectiontitle">Creating a NodePort Service</h4><ol id="cce_10_0142__ol751935681319"><li id="cce_10_0142__li41731123658"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0142__li1651955651312"><span>In the navigation pane, choose <strong id="cce_10_0142__b169702128151"><span id="cce_10_0142__text9765124722315">Services & Ingresses</span></strong>. In the upper right corner, click <span class="uicontrol" id="cce_10_0142__uicontrol69701128153"><b>Create Service</b></span>.</span></li><li id="cce_10_0142__li185190567138"><span>Configure intra-cluster access parameters.</span><p><ul id="cce_10_0142__ul4446314017144"><li id="cce_10_0142__li6462394317144"><strong id="cce_10_0142__b845613814287">Service Name</strong>: Specify a Service name, which can be the same as the workload name.</li><li id="cce_10_0142__li89543531070"><strong id="cce_10_0142__b106597277362">Service Type</strong>: Select <span class="uicontrol" id="cce_10_0142__uicontrol5666142710366"><b>NodePort</b></span>.</li><li id="cce_10_0142__li4800017144"><strong id="cce_10_0142__b1263193014367">Namespace</strong>: Namespace to which the workload belongs.</li><li id="cce_10_0142__li1758110116149"><strong id="cce_10_0142__b38118349367">Service Affinity</strong>: For details, see <a href="cce_10_0249.html#cce_10_0249__section18134208069">externalTrafficPolicy (Service Affinity)</a>.<ul id="cce_10_0142__ul158101161412"><li id="cce_10_0142__li105815113141"><strong id="cce_10_0142__b2674164185210">Cluster level</strong>: The IP addresses and access ports of all nodes in a cluster can access the workload associated with the Service. Service access will cause performance loss due to route redirection, and the source IP address of the client cannot be obtained.</li><li id="cce_10_0142__li185817117145"><strong id="cce_10_0142__b465617445525">Node level</strong>: Only the IP address and access port of the node where the workload is located can access the workload associated with the Service. Service access will not cause performance loss due to route redirection, and the source IP address of the client can be obtained.</li></ul>
</li><li id="cce_10_0142__li43200017144"><strong id="cce_10_0142__b2600143835813">Selector</strong>: Add a label and click <strong id="cce_10_0142__b260020382582">Confirm</strong>. A Service selects a pod based on the added label. You can also click <strong id="cce_10_0142__b354965233619">Reference Workload Label</strong> to use the label of an existing workload. In the dialog box that is displayed, select a workload and click <strong id="cce_10_0142__b16550125293615">OK</strong>.</li><li id="cce_10_0142__li142435567390"><strong id="cce_10_0142__b3860171074716">IPv6</strong>: This function is disabled by default. After this function is enabled, the cluster IP address of the Service changes to an IPv6 address. <strong id="cce_10_0142__b14552143617309">This parameter is available only in clusters of v1.15 or later with IPv6 enabled (set during cluster creation).</strong></li><li id="cce_10_0142__li388800117144"><strong id="cce_10_0142__b451552415715">Port Settings</strong><ul id="cce_10_0142__ul3499201217144"><li id="cce_10_0142__li4649265917144"><strong id="cce_10_0142__b28899114374">Protocol</strong>: protocol used by the Service.</li><li id="cce_10_0142__li353122153610"><strong id="cce_10_0142__b1852318551688">Service Port</strong>: port used by the Service. The port number ranges from 1 to 65535.</li><li id="cce_10_0142__li1578074917144"><strong id="cce_10_0142__b19416443712">Container Port</strong>: port on which the workload listens. For example, Nginx uses port 80 by default.</li><li id="cce_10_0142__li780902117144"><strong id="cce_10_0142__b11763931199">Node Port</strong>: You are advised to select <strong id="cce_10_0142__b1198741292">Auto</strong>. You can also specify a port. The default port ranges from 30000 to 32767.</li></ul>
</li></ul>
</p></li><li id="cce_10_0142__li552017569135"><span>Click <strong id="cce_10_0142__b1012031216378">OK</strong>.</span></li></ol>
</div>
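The console settings above correspond to a Service manifest such as the following sketch (names, labels, and ports are illustrative, not from this document; `nodePort` can be omitted to have one auto-assigned from the default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport        # Service name; may match the workload name
  namespace: default          # namespace to which the workload belongs
spec:
  type: NodePort
  externalTrafficPolicy: Cluster   # or Local for node-level affinity
  selector:
    app: nginx                # label configured under Selector
  ports:
  - protocol: TCP
    port: 8080                # Service port (1-65535)
    targetPort: 80            # container port the workload listens on
    nodePort: 30180           # optional; omit for automatic assignment
```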
@@ -1,7 +1,7 @@
<a name="cce_10_0144"></a><a name="cce_10_0144"></a>
<h1 class="topictitle1">Deploying an Application Through the Helm v3 Client</h1>
<div id="body0000001207271506"><div class="section" id="cce_10_0144__en-us_topic_0226102212_en-us_topic_0179003017_section121301535620"><h4 class="sectiontitle">Prerequisites</h4><p id="cce_10_0144__en-us_topic_0226102212_en-us_topic_0179003017_p114934552110">The Kubernetes cluster created on CCE has been connected to kubectl. For details, see <a href="cce_10_0107.html#cce_10_0107__section37321625113110">Using kubectl</a>.</p>
<div id="body0000001207271506"><div class="section" id="cce_10_0144__en-us_topic_0226102212_en-us_topic_0179003017_section121301535620"><h4 class="sectiontitle">Prerequisites</h4><ul id="cce_10_0144__ul3747634144719"><li id="cce_10_0144__li10747143414714">The Kubernetes cluster created on CCE has been connected to kubectl. For details, see <a href="cce_10_0107.html#cce_10_0107__section37321625113110">Using kubectl</a>.</li><li id="cce_10_0144__li143264387478">To pull a public image when deploying Helm, ensure that an EIP has been bound to the node.</li></ul>
</div>
<div class="section" id="cce_10_0144__en-us_topic_0226102212_en-us_topic_0179003017_section3719193213815"><a name="cce_10_0144__en-us_topic_0226102212_en-us_topic_0179003017_section3719193213815"></a><a name="en-us_topic_0226102212_en-us_topic_0179003017_section3719193213815"></a><h4 class="sectiontitle">Installing Helm v3</h4><p id="cce_10_0144__p81882426153">This section uses Helm v3.3.0 as an example.</p>
<p id="cce_10_0144__p1421305841217">For other versions, visit <a href="https://github.com/helm/helm/releases" target="_blank" rel="noopener noreferrer">https://github.com/helm/helm/releases</a>.</p>
@@ -18,17 +18,15 @@ version.BuildInfo{Version:"v3.3.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6
<ol id="cce_10_0144__ol198915561710"><li id="cce_10_0144__li125132594918"><a name="cce_10_0144__li125132594918"></a><a name="li125132594918"></a><span>Search for a chart from the <a href="https://artifacthub.io/packages/search?kind=0" target="_blank" rel="noopener noreferrer">Artifact Hub</a> repository recommended by Helm and configure the Helm repository.</span><p><pre class="screen" id="cce_10_0144__screen19268154117556">helm repo add <i><span class="varname" id="cce_10_0144__varname939372365611">{repo_name}</span></i> <i><span class="varname" id="cce_10_0144__varname11190152615617">{repo_addr}</span></i></pre>
<div class="p" id="cce_10_0144__p12751123516551">The following uses the <a href="https://artifacthub.io/packages/helm/bitnami/wordpress" target="_blank" rel="noopener noreferrer">WordPress chart</a> as an example:<pre class="screen" id="cce_10_0144__screen119515334490">helm repo add bitnami https://charts.bitnami.com/bitnami</pre>
</div>
</p></li><li id="cce_10_0144__li107097713613"><span>Run the <strong id="cce_10_0144__b117101971863">helm install</strong> command to install the chart.</span><p><ul id="cce_10_0144__ul6996161117614"><li id="cce_10_0144__li9996181113615">Default installation: This is the simplest method, which requires only two parameters.<pre class="screen" id="cce_10_0144__screen1313816261322">helm install <i><span class="varname" id="cce_10_0144__varname1850601013015">{release_name}</span></i> <i><span class="varname" id="cce_10_0144__varname11497121315010">{chart_name}</span></i></pre>
<div class="p" id="cce_10_0144__p1752181663">For example, to install WordPress, the WordPress chart added in <a href="#cce_10_0144__li125132594918">step 1</a> is <strong id="cce_10_0144__b181111283259">bitnami/wordpress</strong>, and the release name is <strong id="cce_10_0144__b225294462510">my-wordpress</strong>.<pre class="screen" id="cce_10_0144__screen15248191517114">helm install my-wordpress bitnami/wordpress</pre>
</div>
</li><li id="cce_10_0144__li198815354425">Custom installation: The default installation uses the default settings in the chart. Use custom installation to customize parameter settings. Run the <strong id="cce_10_0144__b488963516424">helm show values </strong><i><span class="varname" id="cce_10_0144__varname688063544218">{chart_name}</span></i> command to view the configurable options of the chart. For example, to view the configurable items of WordPress, run the following command:<pre class="screen" id="cce_10_0144__screen554719211295">helm show values bitnami/wordpress</pre>
<p id="cce_10_0144__p670418414295">Overwrite specified parameters by running the following commands:</p>
<pre class="screen" id="cce_10_0144__screen1473311313209">helm install my-wordpress bitnami/wordpress \
</p></li><li id="cce_10_0144__li107097713613"><span>Run the <strong id="cce_10_0144__b117101971863">helm install</strong> command to install the chart.</span><p><pre class="screen" id="cce_10_0144__screen1313816261322">helm install <i><span class="varname" id="cce_10_0144__varname1850601013015">{release_name}</span></i> <i><span class="varname" id="cce_10_0144__varname11497121315010">{chart_name}</span></i> --set <i><span class="varname" id="cce_10_0144__varname659972112719">key1=val1</span></i></pre>
<p id="cce_10_0144__p1752181663">For example, to install WordPress, the WordPress chart added in <a href="#cce_10_0144__li125132594918">1</a> is <strong id="cce_10_0144__b181111283259">bitnami/wordpress</strong>, the release name is <strong id="cce_10_0144__b225294462510">my-wordpress</strong>, and mandatory parameters have been configured.</p>
<pre class="screen" id="cce_10_0144__screen2069219521578">helm install my-wordpress bitnami/wordpress \
--set mariadb.primary.persistence.enabled=true \
--set mariadb.primary.persistence.storageClass=csi-disk \
--set mariadb.primary.persistence.size=10Gi \
--set persistence.enabled=false</pre>
</li></ul>
<div class="p" id="cce_10_0144__p928974513414">Run the <strong id="cce_10_0144__b488963516424">helm show values </strong><i><span class="varname" id="cce_10_0144__varname590872713456">{chart_name}</span></i> command to view the configurable options of the chart. For example, to view the configurable items of WordPress, run the following command:<pre class="screen" id="cce_10_0144__screen554719211295">helm show values bitnami/wordpress</pre>
</div>
</p></li><li id="cce_10_0144__li48391935194110"><span>View the installed chart release.</span><p><pre class="screen" id="cce_10_0144__screen774012498414">helm list</pre>
</p></li></ol>
</div>
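The steps in this section can be condensed into one script. A sketch assuming the bitnami repository and WordPress chart used in the examples above; the Helm commands are echoed rather than executed, since they require a host with Helm installed and a connected cluster:

```shell
# Condensed Helm v3 flow from this section (assumes the bitnami repo and
# WordPress chart used in the examples above).
repo_name="bitnami"
repo_addr="https://charts.bitnami.com/bitnami"
release="my-wordpress"
chart="${repo_name}/wordpress"

# Echoed for illustration; on a host with Helm configured, run directly.
echo "helm repo add ${repo_name} ${repo_addr}"
echo "helm show values ${chart}"
echo "helm install ${release} ${chart} --set persistence.enabled=false"
echo "helm list"
```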
@@ -33,7 +33,7 @@
</td>
<td class="cellrowborder" valign="top" width="78%" headers="mcps1.3.3.3.2.4.2.2.3.1.2 "><p id="cce_10_0146__p1678472115013">Describes configuration parameters required by templates.</p>
<div class="notice" id="cce_10_0146__note11415171194911"><span class="noticetitle"> NOTICE: </span><div class="noticebody"><p id="cce_10_0146__p394216481648">Make sure that the image address set in the <strong id="cce_10_0146__b169837156417">values.yaml</strong> file is the same as the image address in the container image repository. Otherwise, an exception occurs when you create a workload, and the system displays a message indicating that the image fails to be pulled.</p>
<p id="cce_10_0146__p04177113498">To obtain the image address, perform the following operations: Log in to the CCE console. In the navigation pane, choose <strong id="cce_10_0146__b860412174116">Image Repository</strong> to access the SWR console. Choose <strong id="cce_10_0146__b10171926114117">My Images</strong> > <strong id="cce_10_0146__b12372684119">Private Images</strong> and click the name of the uploaded image. On the <strong id="cce_10_0146__b223726104111">Image Tags</strong> tab page, obtain the image address from the pull command. You can click <span><img id="cce_10_0146__image292113414153" src="en-us_image_0000001797909977.png"></span> to copy the command in the <strong id="cce_10_0146__b723192619418">Image Pull Command</strong> column.</p>
<p id="cce_10_0146__p04177113498">To obtain the image address, perform the following operations: Log in to the CCE console. In the navigation pane, choose <strong id="cce_10_0146__b860412174116">Image Repository</strong> to access the SWR console. Choose <strong id="cce_10_0146__b10171926114117">My Images</strong> > <strong id="cce_10_0146__b12372684119">Private Images</strong> and click the name of the uploaded image. On the <strong id="cce_10_0146__b223726104111">Image Tags</strong> tab page, obtain the image address from the pull command. You can click <span><img id="cce_10_0146__image292113414153" src="en-us_image_0000001897906097.png"></span> to copy the command in the <strong id="cce_10_0146__b723192619418">Image Pull Command</strong> column.</p>
</div></div>
</td>
</tr>
@@ -101,7 +101,7 @@
</p></li><li id="cce_10_0146__le625c1f421df4758b99f9184828926a9"><span>Click <span class="uicontrol" id="cce_10_0146__uicontrol67604016460"><b>Install</b></span>.</span><p><p id="cce_10_0146__p128679414295">On the <strong id="cce_10_0146__b377004011358">Releases</strong> tab page, you can view the installation status of the release.</p>
</p></li></ol>
</div>
<div class="section" id="cce_10_0146__section5324101171010"><h4 class="sectiontitle">Upgrading a Chart-based Workload</h4><ol id="cce_10_0146__ol1541655017447"><li id="cce_10_0146__li1869015203020"><span>Log in to the CCE console and click the cluster name to access the cluster console. Choose <strong id="cce_10_0146__b1332619365361"><span id="cce_10_0146__text1020012374467">App Templates</span></strong> in the navigation pane and click the <strong id="cce_10_0146__b15333103663618">Releases</strong> tab.</span></li><li id="cce_10_0146__li9260155614447"><span>Click <strong id="cce_10_0146__b624761044718">Upgrade</strong> in the row where the desired workload resides and set the parameters for the workload.</span></li><li id="cce_10_0146__li1126255674414"><span>Select a chart version for <strong id="cce_10_0146__b420264371153445">Chart Version</strong>.</span></li><li id="cce_10_0146__li1126615644417"><span>Follow the prompts to modify the chart parameters. Click <strong id="cce_10_0146__b1224624192812">Upgrade</strong>, and then click <strong id="cce_10_0146__b2090164210284">Submit</strong>.</span></li><li id="cce_10_0146__li1327935644412"><span>Click <strong id="cce_10_0146__b824513662153516">Back to Release List</strong>. If the chart status changes to <strong id="cce_10_0146__b15735371333">Upgrade successful</strong>, the workload is successfully upgraded.</span></li></ol>
<div class="section" id="cce_10_0146__section5324101171010"><h4 class="sectiontitle">Upgrading a Chart-based Workload</h4><ol id="cce_10_0146__ol1541655017447"><li id="cce_10_0146__li1869015203020"><span>Log in to the CCE console and click the cluster name to access the cluster console. Choose <strong id="cce_10_0146__b1332619365361"><span id="cce_10_0146__text1020012374467">App Templates</span></strong> in the navigation pane and click the <strong id="cce_10_0146__b15333103663618">Releases</strong> tab.</span></li><li id="cce_10_0146__li9260155614447"><span>Click <strong id="cce_10_0146__b624761044718">Upgrade</strong> in the row where the desired workload resides and set the parameters for the workload.</span></li><li id="cce_10_0146__li1126255674414"><span>Select a chart version for <strong id="cce_10_0146__b420264371153445">Chart Version</strong>.</span></li><li id="cce_10_0146__li1126615644417"><span>Follow the prompts to modify the chart parameters. Confirm the modification and click <strong id="cce_10_0146__b1349265264015">Upgrade</strong>.</span></li><li id="cce_10_0146__li1327935644412"><span>If the execution status is <strong id="cce_10_0146__b8151125412417">Upgraded</strong>, the workload has been upgraded.</span></li></ol>
</div>
<div class="section" id="cce_10_0146__section13251511191012"><h4 class="sectiontitle">Rolling Back a Chart-based Workload</h4><ol id="cce_10_0146__ol675012341451"><li id="cce_10_0146__li855613303328"><span>Log in to the CCE console and click the cluster name to access the cluster console. Choose <strong id="cce_10_0146__b5287330381"><span id="cce_10_0146__text733717447461">App Templates</span></strong> in the navigation pane and click the <strong id="cce_10_0146__b192885313382">Releases</strong> tab.</span></li><li id="cce_10_0146__li15170194294515"><span>Click <strong id="cce_10_0146__b42851336162919">More</strong> > <strong id="cce_10_0146__b528583611291">Roll Back</strong> for the workload to be rolled back, select the workload version, and click <strong id="cce_10_0146__b14293173618298">Roll back</strong> <strong id="cce_10_0146__b4293436172920">to this version</strong>.</span><p><p id="cce_10_0146__p1917254212454">In the workload list, if the status is <strong id="cce_10_0146__b250435233917">Rollback successful</strong>, the workload is rolled back successfully.</p>
</p></li></ol>
@@ -40,7 +40,7 @@
</tr>
<tr id="cce_10_0150__cce_10_0047_row161110459565"><td class="cellrowborder" valign="top" width="23%" headers="mcps1.3.3.2.3.2.2.2.1.1.2.1.2.1.3.1.1 "><p id="cce_10_0150__cce_10_0047_p56111845145612">CPU Quota</p>
</td>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.3.2.3.2.2.2.1.1.2.1.2.1.3.1.2 "><ul id="cce_10_0150__cce_10_0047_ul9168521572"><li id="cce_10_0150__cce_10_0047_li15168227577"><strong id="cce_10_0150__cce_10_0047_b3669018123014">Request</strong>: minimum number of CPU cores required by a container. The default value is 0.25 cores.</li><li id="cce_10_0150__cce_10_0047_li121681216579"><strong id="cce_10_0150__cce_10_0047_b833715229303">Limit</strong>: maximum number of CPU cores available for a container. Do not leave <strong id="cce_10_0150__cce_10_0047_b1257625123019">Limit</strong> unspecified. Otherwise, intensive use of container resources will occur and your workload may exhibit unexpected behavior.</li></ul>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.3.2.3.2.2.2.1.1.2.1.2.1.3.1.2 "><ul id="cce_10_0150__cce_10_0047_ul9168521572"><li id="cce_10_0150__cce_10_0047_li15168227577"><strong id="cce_10_0150__cce_10_0047_b3669018123014">Request</strong>: minimum number of CPU cores required by a container. The default value is 0.25 cores.</li><li id="cce_10_0150__cce_10_0047_li121681216579"><strong id="cce_10_0150__cce_10_0047_b67015378282">Limit</strong>: maximum number of CPU cores that can be used by a container. This prevents containers from using excessive resources.</li></ul>
<p id="cce_10_0150__cce_10_0047_p520715505213">If <strong id="cce_10_0150__cce_10_0047_b2160104553012">Request</strong> and <strong id="cce_10_0150__cce_10_0047_b16757125053014">Limit</strong> are not specified, the quota is not limited. For more information and suggestions about <strong id="cce_10_0150__cce_10_0047_b12633192718313">Request</strong> and <strong id="cce_10_0150__cce_10_0047_b3633227113119">Limit</strong>, see <a href="cce_10_0163.html">Configuring Container Specifications</a>.</p>
</td>
</tr>
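The CPU Request and Limit fields above map to standard Kubernetes container resource fields. A minimal sketch, assuming an illustrative pod name and image (not from this document):

```yaml
# Illustrative sketch only: how the console's CPU Request/Limit settings
# appear in a pod manifest. The pod name and image are examples.
apiVersion: v1
kind: Pod
metadata:
  name: cpu-quota-demo
spec:
  containers:
  - name: app
    image: nginx:alpine
    resources:
      requests:
        cpu: 250m        # Request: 0.25 cores (the console default)
      limits:
        cpu: "1"         # Limit: the container may use at most 1 core
```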
@ -75,7 +75,7 @@
</li></ul>
<ul id="cce_10_0150__ul153491055201112"><li id="cce_10_0150__li4810204715113">(Optional) <strong id="cce_10_0150__cce_10_0047_b6712437288">Lifecycle</strong>: Configure operations to be performed in a specific phase of the container lifecycle, such as Startup Command, Post-Start, and Pre-Stop. For details, see <a href="cce_10_0105.html">Configuring Container Lifecycle Parameters</a>.</li><li id="cce_10_0150__li1810447181110">(Optional) <strong id="cce_10_0150__cce_10_0047_b17656135219292">Environment Variables</strong>: Configure variables for the container running environment using key-value pairs. These variables transfer external information to containers running in pods and can be flexibly modified after application deployment. For details, see <a href="cce_10_0113.html">Configuring Environment Variables</a>.</li><li id="cce_10_0150__li4810124731117">(Optional) <strong id="cce_10_0150__cce_10_0047_b1211902117322">Data Storage</strong>: Mount local storage or cloud storage to the container. The application scenarios and mounting modes vary with the storage type. For details, see <a href="cce_10_0374.html">Storage</a>.<div class="note" id="cce_10_0150__cce_10_0047_note101269342356"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0150__cce_10_0047_p17126153413513">If the workload contains more than one pod, EVS volumes cannot be mounted.</p>
</div></div>
</li><li id="cce_10_0150__li128105471119">(Optional) <strong id="cce_10_0150__cce_10_0047_b4129950193311">Logging</strong>: Report standard container output logs to AOM by default, without requiring manual settings. You can manually configure the log collection path. For details, see <a href="cce_10_0018.html">Connecting CCE to AOM</a>.<p id="cce_10_0150__cce_10_0047_p154878397159">To disable the standard output of the current workload, add the annotation <strong id="cce_10_0150__cce_10_0047_b882934924220">kubernetes.AOM.log.stdout: []</strong> in <a href="cce_10_0047.html#cce_10_0047__li179714209414">Labels and Annotations</a>. For details about how to use this annotation, see <a href="cce_10_0386.html#cce_10_0386__table194691458405">Table 1</a>.</p>
</li><li id="cce_10_0150__li128105471119">(Optional) <strong id="cce_10_0150__cce_10_0047_b4129950193311">Logging</strong>: Report standard container output logs to AOM by default, without requiring manual settings. You can manually configure the log collection path. For details, see <a href="cce_10_0018.html">Collecting Container Logs Using ICAgent</a>.<p id="cce_10_0150__cce_10_0047_p154878397159">To disable the standard output of the current workload, add the annotation <strong id="cce_10_0150__cce_10_0047_b882934924220">kubernetes.AOM.log.stdout: []</strong> in <a href="cce_10_0047.html#cce_10_0047__li179714209414">Labels and Annotations</a>. For details about how to use this annotation, see <a href="cce_10_0386.html#cce_10_0386__table194691458405">Table 1</a>.</p>
</li></ul>
</div>
</li><li id="cce_10_0150__li5559101392014"><strong id="cce_10_0150__cce_10_0047_b479415459616">Image Access Credential</strong>: Select the credential used for accessing the image repository. The default value is <strong id="cce_10_0150__cce_10_0047_b157944451067">default-secret</strong>. You can use default-secret to access images in SWR. For details about <strong id="cce_10_0150__cce_10_0047_b582111347813">default-secret</strong>, see <a href="cce_10_0388.html#cce_10_0388__section11760122012591">default-secret</a>.</li><li id="cce_10_0150__li4559151310203">(Optional) <strong id="cce_10_0150__cce_10_0047_b513531164612">GPU</strong>: <strong id="cce_10_0150__cce_10_0047_b11135211134611">All</strong> is selected by default. The workload instance will be scheduled to the node of the specified GPU type.</li></ul>
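In a manifest, the image access credential described above corresponds to the pod's `imagePullSecrets` field. A hedged sketch using the default-secret mentioned in the text; the pod name and image path are illustrative:

```yaml
# Sketch only: pulling an image from SWR with the default-secret credential.
# The pod name and repository path are examples, not real endpoints.
apiVersion: v1
kind: Pod
metadata:
  name: swr-pull-demo
spec:
  imagePullSecrets:
  - name: default-secret          # credential for accessing SWR images
  containers:
  - name: app
    image: swr.example.com/group/app:v1
```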
@ -83,7 +83,7 @@
<div class="p" id="cce_10_0150__p310913521612"><strong id="cce_10_0150__b104487882335241">(Optional) Advanced Settings</strong><ul id="cce_10_0150__ul67010503227"><li id="cce_10_0150__li179714209414"><strong id="cce_10_0150__cce_10_0047_b562135212518">Labels and Annotations</strong>: Add labels or annotations for pods using key-value pairs. After entering the key and value, click <strong id="cce_10_0150__cce_10_0047_b1439805716617">Confirm</strong>. For details about how to use and configure labels and annotations, see <a href="cce_10_0386.html">Labels and Annotations</a>.</li></ul>
<ul id="cce_10_0150__ul142811417"><li id="cce_10_0150__li1981131361"><strong id="cce_10_0150__b31333382212">Job Settings</strong><ul id="cce_10_0150__ul3224164372"><li id="cce_10_0150__li153401527710"><strong id="cce_10_0150__b58521141142118">Parallel Pods</strong>: Maximum number of pods that can run in parallel during job execution. The value cannot be greater than the total number of pods in the job.</li><li id="cce_10_0150__li10287691771"><strong id="cce_10_0150__b143534522119">Timeout (s)</strong>: Once a job runs longer than this duration, its status becomes failed and all pods in this job will be deleted. If you leave this parameter blank, the job will never time out.</li><li id="cce_10_0150__li138103518153">Completion Mode<ul id="cce_10_0150__ul51723941512"><li id="cce_10_0150__li17920114315153"><strong id="cce_10_0150__b699913411252">Non-indexed</strong>: A job is considered complete when all the pods are successfully executed. Completed pods are homologous to one another.</li><li id="cce_10_0150__li1358717341155"><strong id="cce_10_0150__b6574144582718">Indexed</strong>: Each pod gets an associated completion index from 0 to the number of pods minus 1. The job is considered complete when every pod allocated with an index is successfully executed. For an indexed job, pods are named in the format of $(job-name)-$(index).</li></ul>
</li><li id="cce_10_0150__li5539150161613"><strong id="cce_10_0150__b28515464300">Suspend Job</strong>: By default, a job is executed immediately after being created. The job's execution will be suspended if you enable this option, and resumed after you disable it.</li></ul>
</li><li id="cce_10_0150__li34513820295"><strong id="cce_10_0150__cce_10_0047_b748219141468">Network Configuration</strong><ul id="cce_10_0150__cce_10_0047_ul101792551538"><li id="cce_10_0150__cce_10_0047_li1985863319162">Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see <a href="cce_10_0382.html">Configuring QoS for a Pod</a>.</li></ul>
</li><li id="cce_10_0150__li34513820295"><strong id="cce_10_0150__cce_10_0047_b563938103113">Network Configuration</strong><ul id="cce_10_0150__cce_10_0047_ul101792551538"><li id="cce_10_0150__cce_10_0047_li1985863319162">Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see <a href="cce_10_0382.html">Configuring QoS for a Pod</a>.</li><li id="cce_10_0150__cce_10_0047_li053620118549">IPv6 shared bandwidth: available only for clusters that support this function. After this function is enabled, you can configure a shared bandwidth for a pod with IPv6 dual-stack ENIs. For details, see <a href="cce_10_0604.html">Configuring Shared Bandwidth for a Pod with IPv6 Dual-Stack ENIs</a>.</li></ul>
</li></ul>
</div>
</p></li><li id="cce_10_0150__li01417411620"><span>Click <strong id="cce_10_0150__b2573105264313">Create Workload</strong> in the lower right corner.</span></li></ol>
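The job settings described in this procedure (parallel pods, timeout, completion mode, and suspension) map to standard Kubernetes Job spec fields. A minimal sketch with illustrative names and values, not taken from this document:

```yaml
# Sketch only: Job spec fields corresponding to the console options above.
# The job name, image, and numbers are examples.
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-job-demo
spec:
  completions: 5                # total pods in the job
  parallelism: 2                # Parallel Pods: at most 2 run at a time
  activeDeadlineSeconds: 600    # Timeout (s): job fails after 600 seconds
  completionMode: Indexed       # pods get indexes 0..4, named $(job-name)-$(index)
  suspend: false                # Suspend Job: set to true to pause execution
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo done"]
```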
@ -41,7 +41,7 @@
</tr>
<tr id="cce_10_0151__cce_10_0047_row161110459565"><td class="cellrowborder" valign="top" width="23%" headers="mcps1.3.3.2.3.2.2.2.1.1.2.1.2.1.3.1.1 "><p id="cce_10_0151__cce_10_0047_p56111845145612">CPU Quota</p>
</td>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.3.2.3.2.2.2.1.1.2.1.2.1.3.1.2 "><ul id="cce_10_0151__cce_10_0047_ul9168521572"><li id="cce_10_0151__cce_10_0047_li15168227577"><strong id="cce_10_0151__cce_10_0047_b3669018123014">Request</strong>: minimum number of CPU cores required by a container. The default value is 0.25 cores.</li><li id="cce_10_0151__cce_10_0047_li121681216579"><strong id="cce_10_0151__cce_10_0047_b833715229303">Limit</strong>: maximum number of CPU cores available for a container. Do not leave <strong id="cce_10_0151__cce_10_0047_b1257625123019">Limit</strong> unspecified. Otherwise, intensive use of container resources will occur and your workload may exhibit unexpected behavior.</li></ul>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.3.2.3.2.2.2.1.1.2.1.2.1.3.1.2 "><ul id="cce_10_0151__cce_10_0047_ul9168521572"><li id="cce_10_0151__cce_10_0047_li15168227577"><strong id="cce_10_0151__cce_10_0047_b3669018123014">Request</strong>: minimum number of CPU cores required by a container. The default value is 0.25 cores.</li><li id="cce_10_0151__cce_10_0047_li121681216579"><strong id="cce_10_0151__cce_10_0047_b67015378282">Limit</strong>: maximum number of CPU cores that can be used by a container. This prevents containers from using excessive resources.</li></ul>
<p id="cce_10_0151__cce_10_0047_p520715505213">If <strong id="cce_10_0151__cce_10_0047_b2160104553012">Request</strong> and <strong id="cce_10_0151__cce_10_0047_b16757125053014">Limit</strong> are not specified, the quota is not limited. For more information and suggestions about <strong id="cce_10_0151__cce_10_0047_b12633192718313">Request</strong> and <strong id="cce_10_0151__cce_10_0047_b3633227113119">Limit</strong>, see <a href="cce_10_0163.html">Configuring Container Specifications</a>.</p>
</td>
</tr>
@ -86,7 +86,7 @@
</div></div>
</li><li id="cce_10_0151__li1112523911136"><strong id="cce_10_0151__b06862052175618">Job Records</strong>: You can set the number of successful and failed job records to retain. Setting a limit to <strong id="cce_10_0151__b26802462489">0</strong> keeps none of the jobs after they finish.</li></ul>
<div class="p" id="cce_10_0151__p310913521612"><strong id="cce_10_0151__b109654260535243">(Optional) Advanced Settings</strong><ul id="cce_10_0151__ul344721232613"><li id="cce_10_0151__li179714209414"><strong id="cce_10_0151__cce_10_0047_b562135212518">Labels and Annotations</strong>: Add labels or annotations for pods using key-value pairs. After entering the key and value, click <strong id="cce_10_0151__cce_10_0047_b1439805716617">Confirm</strong>. For details about how to use and configure labels and annotations, see <a href="cce_10_0386.html">Labels and Annotations</a>.</li></ul>
<ul id="cce_10_0151__ul142811417"><li id="cce_10_0151__li34513820295"><strong id="cce_10_0151__cce_10_0047_b748219141468">Network Configuration</strong><ul id="cce_10_0151__cce_10_0047_ul101792551538"><li id="cce_10_0151__cce_10_0047_li1985863319162">Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see <a href="cce_10_0382.html">Configuring QoS for a Pod</a>.</li></ul>
<ul id="cce_10_0151__ul142811417"><li id="cce_10_0151__li34513820295"><strong id="cce_10_0151__cce_10_0047_b563938103113">Network Configuration</strong><ul id="cce_10_0151__cce_10_0047_ul101792551538"><li id="cce_10_0151__cce_10_0047_li1985863319162">Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see <a href="cce_10_0382.html">Configuring QoS for a Pod</a>.</li><li id="cce_10_0151__cce_10_0047_li053620118549">IPv6 shared bandwidth: available only for clusters that support this function. After this function is enabled, you can configure a shared bandwidth for a pod with IPv6 dual-stack ENIs. For details, see <a href="cce_10_0604.html">Configuring Shared Bandwidth for a Pod with IPv6 Dual-Stack ENIs</a>.</li></ul>
</li></ul>
</div>
</p></li><li id="cce_10_0151__li01417411620"><span>Click <strong id="cce_10_0151__b040712154920">Create Workload</strong> in the lower right corner.</span></li></ol>
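The Job Records option above corresponds to the CronJob history-limit fields. A hedged sketch; the cron job name, schedule, and image are illustrative:

```yaml
# Sketch only: retaining a limited number of finished job records.
# The name, schedule, and image are examples.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: history-demo
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 3   # keep 3 successful jobs; 0 keeps none
  failedJobsHistoryLimit: 1       # keep 1 failed job
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: busybox
            command: ["date"]
```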
@ -33,12 +33,12 @@
<tr id="cce_10_0152__row133224252315"><td class="cellrowborder" valign="top" width="16%" headers="mcps1.3.3.2.3.2.1.2.3.1.1 "><p id="cce_10_0152__p23228259314">Data</p>
</td>
<td class="cellrowborder" valign="top" width="84%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><p id="cce_10_0152__p085820352295">Data of a ConfigMap, in the key-value pair format.</p>
<p id="cce_10_0152__p15328144616261">Click <span><img id="cce_10_0152__image12816235293" src="en-us_image_0000001797910077.png"></span> to add data. The value can be in string, JSON, or YAML format.</p>
<p id="cce_10_0152__p15328144616261">Click <span><img id="cce_10_0152__image12816235293" src="en-us_image_0000001851745504.png"></span> to add data. The value can be in string, JSON, or YAML format.</p>
</td>
</tr>
<tr id="cce_10_0152__row123142814330"><td class="cellrowborder" valign="top" width="16%" headers="mcps1.3.3.2.3.2.1.2.3.1.1 "><p id="cce_10_0152__p17322225134">Label</p>
</td>
<td class="cellrowborder" valign="top" width="84%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><p id="cce_10_0152__p055041093119">Label of the ConfigMap. Enter a key-value pair and click <strong id="cce_10_0152__b1027145220482">Add</strong>.</p>
<td class="cellrowborder" valign="top" width="84%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><p id="cce_10_0152__p055041093119">Label of the ConfigMap. Enter a key-value pair and click <strong id="cce_10_0152__b1027145220482">Confirm</strong>.</p>
</td>
</tr>
</tbody>
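The ConfigMap data and label fields described in the table above can also be written as a manifest. A minimal sketch; the names, keys, and values are illustrative:

```yaml
# Sketch only: a ConfigMap with key-value data and a label.
# All names and values are examples.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cce-configmap-demo
  labels:
    app: demo                     # label entered as a key-value pair
data:
  database.host: db.example.com   # plain string value
  config.yaml: |                  # values can also be JSON or YAML text
    logLevel: info
```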
@ -36,14 +36,14 @@
<tr id="cce_10_0153__row133224252315"><td class="cellrowborder" valign="top" width="28.000000000000004%" headers="mcps1.3.3.2.3.2.1.2.3.1.1 "><p id="cce_10_0153__p23228259314">Secret Data</p>
</td>
<td class="cellrowborder" valign="top" width="72%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><p id="cce_10_0153__p133223251335">Workload secret data can be used in containers.</p>
<ul id="cce_10_0153__ul180611337469"><li id="cce_10_0153__li108069333468">If <strong id="cce_10_0153__b10282732165415">Secret Type</strong> is <strong id="cce_10_0153__b1449613357547">Opaque</strong>, click <span><img id="cce_10_0153__image12816235293" src="en-us_image_0000001750950360.png"></span>. In the dialog box displayed, enter a key-value pair and select <strong id="cce_10_0153__b31811921195517">Auto Base64 Encoding</strong>.</li><li id="cce_10_0153__li1536053764716">If <strong id="cce_10_0153__b17791104012492">Secret Type</strong> is <strong id="cce_10_0153__b722045644918">kubernetes.io/dockerconfigjson</strong>, enter the account and password for logging in to the private image repository.</li><li id="cce_10_0153__li17736104214478">If <strong id="cce_10_0153__b1214075424815">Secret Type</strong> is <strong id="cce_10_0153__b37767205275">kubernetes.io/tls</strong> or <strong id="cce_10_0153__b162903173270">IngressTLS</strong>, upload the certificate file and private key file.<div class="note" id="cce_10_0153__note1890215211325"><span class="notetitle"> NOTE: </span><div class="notebody"><ul id="cce_10_0153__ul1280017919332"><li id="cce_10_0153__li14977104417334">A certificate is a self-signed or CA-signed credential used for identity authentication.</li><li id="cce_10_0153__li6236332143310">A certificate request is a request for a signature with a private key.</li></ul>
<ul id="cce_10_0153__ul180611337469"><li id="cce_10_0153__li108069333468">If <strong id="cce_10_0153__b10282732165415">Secret Type</strong> is <strong id="cce_10_0153__b1449613357547">Opaque</strong>, click <span><img id="cce_10_0153__image12816235293" src="en-us_image_0000001851745844.png"></span>. In the dialog box displayed, enter a key-value pair and select <strong id="cce_10_0153__b31811921195517">Auto Base64 Encoding</strong>.</li><li id="cce_10_0153__li1536053764716">If <strong id="cce_10_0153__b17791104012492">Secret Type</strong> is <strong id="cce_10_0153__b722045644918">kubernetes.io/dockerconfigjson</strong>, enter the account and password for logging in to the private image repository.</li><li id="cce_10_0153__li17736104214478">If <strong id="cce_10_0153__b1214075424815">Secret Type</strong> is <strong id="cce_10_0153__b37767205275">kubernetes.io/tls</strong> or <strong id="cce_10_0153__b162903173270">IngressTLS</strong>, upload the certificate file and private key file.<div class="note" id="cce_10_0153__note1890215211325"><span class="notetitle"> NOTE: </span><div class="notebody"><ul id="cce_10_0153__ul1280017919332"><li id="cce_10_0153__li14977104417334">A certificate is a self-signed or CA-signed credential used for identity authentication.</li><li id="cce_10_0153__li6236332143310">A certificate request is a request for a signature with a private key.</li></ul>
</div></div>
</li></ul>
</td>
</tr>
<tr id="cce_10_0153__row123142814330"><td class="cellrowborder" valign="top" width="28.000000000000004%" headers="mcps1.3.3.2.3.2.1.2.3.1.1 "><p id="cce_10_0153__p17322225134">Secret Label</p>
</td>
<td class="cellrowborder" valign="top" width="72%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><p id="cce_10_0153__p055041093119">Label of the secret. Enter a key-value pair and click <strong id="cce_10_0153__b161531566532">Add</strong>.</p>
<td class="cellrowborder" valign="top" width="72%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><p id="cce_10_0153__p055041093119">Label of the secret. Enter a key-value pair and click <span class="uicontrol" id="cce_10_0153__uicontrol9702173611313"><b>Confirm</b></span>.</p>
</td>
</tr>
</tbody>
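For the Opaque type described above, each value under `data` must be Base64-encoded (the console's Auto Base64 Encoding does this for you). A hedged sketch; the secret name, key, and value are illustrative:

```yaml
# Sketch only: an Opaque secret. "YWRtaW4=" is the Base64 encoding of
# "admin", e.g. produced with: printf 'admin' | base64
apiVersion: v1
kind: Secret
metadata:
  name: mysecret-opaque
  namespace: default
type: Opaque
data:
  username: YWRtaW4=
```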
@ -67,7 +67,7 @@ data:
kind: Secret
metadata:
name: mysecret #Secret name
namespace: default #Namespace. The default value is <strong id="cce_10_0153__b1366701944">default</strong>.
namespace: default #Namespace. The default value is <strong id="cce_10_0153__b1341938255">default</strong>.
data:
<strong id="cce_10_0153__b196671430132319">.dockerconfigjson: eyJh</strong><strong id="cce_10_0153__b1052142752319">*****</strong> # Content encoded using Base64.
<strong id="cce_10_0153__b18496153310233">type: kubernetes.io/dockerconfigjson</strong></pre>
@ -86,7 +86,7 @@ data:
apiVersion: v1
metadata:
name: mysecret #Secret name
namespace: default #Namespace. The default value is <strong id="cce_10_0153__b722759877">default</strong>.
namespace: default #Namespace. The default value is <strong id="cce_10_0153__b54705092">default</strong>.
data:
tls.crt: <strong id="cce_10_0153__b1479454093611">LS0tLS1CRU*****FURS0tLS0t</strong> # Certificate content, which must be encoded using Base64.
tls.key: <strong id="cce_10_0153__b3794134014361">LS0tLS1CRU*****VZLS0tLS0=</strong> # Private key content, which must be encoded using Base64.
@ -96,7 +96,7 @@ data:
apiVersion: v1
metadata:
name: mysecret #Secret name
namespace: default #Namespace. The default value is <strong id="cce_10_0153__b477417991">default</strong>.
namespace: default #Namespace. The default value is <strong id="cce_10_0153__b1229618289">default</strong>.
data:
tls.crt: <strong id="cce_10_0153__b4259755912">LS0tLS1CRU*****FURS0tLS0t</strong> # Certificate content, which must be encoded using Base64.
tls.key: <strong id="cce_10_0153__b1522022111010">LS0tLS1CRU*****VZLS0tLS0=</strong> # Private key content, which must be encoded using Base64.
@ -151,12 +151,12 @@
</thead>
<tbody><tr id="cce_10_0154__cce_10_0129_row162102049564"><td class="cellrowborder" valign="top" width="24%" headers="mcps1.3.4.2.4.2.2.2.3.1.1 "><p id="cce_10_0154__cce_10_0129_p421019416569">Multi AZ</p>
</td>
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.4.2.4.2.2.2.3.1.2 "><ul id="cce_10_0154__cce_10_0129_ul122101425619"><li id="cce_10_0154__cce_10_0129_li142101342560"><strong id="cce_10_0154__cce_10_0129_b6395193820145">Preferred</strong>: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ.</li><li id="cce_10_0154__cce_10_0129_li52682031184214"><strong id="cce_10_0154__cce_10_0129_b1516164563318">Equivalent mode</strong>: Deployment pods of the add-on are evenly scheduled to the nodes in the cluster in each AZ. If a new AZ is added, you are advised to increase add-on pods for cross-AZ HA deployment. With the Equivalent multi-AZ deployment, the difference between the number of add-on pods in different AZs will be less than or equal to 1. If resources in one of the AZs are insufficient, pods cannot be scheduled to that AZ.</li><li id="cce_10_0154__cce_10_0129_li3210440562"><strong id="cce_10_0154__cce_10_0129_b4304164818353">Required</strong>: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run.</li></ul>
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.4.2.4.2.2.2.3.1.2 "><ul id="cce_10_0154__cce_10_0129_ul122101425619"><li id="cce_10_0154__cce_10_0129_li142101342560"><strong id="cce_10_0154__cce_10_0129_b6395193820145">Preferred</strong>: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ.</li><li id="cce_10_0154__cce_10_0129_li52682031184214"><strong id="cce_10_0154__cce_10_0129_b8203192017422">Equivalent mode</strong>: Deployment pods of the add-on are evenly scheduled to the nodes in the cluster in each AZ. If a new AZ is added, you are advised to increase add-on pods for cross-AZ HA deployment. With the Equivalent multi-AZ deployment, the difference between the number of add-on pods in different AZs will be less than or equal to 1. If resources in one of the AZs are insufficient, pods cannot be scheduled to that AZ.</li><li id="cce_10_0154__cce_10_0129_li3210440562"><strong id="cce_10_0154__cce_10_0129_b105801282497">Required</strong>: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run.</li></ul>
</td>
</tr>
<tr id="cce_10_0154__cce_10_0129_row1121010416566"><td class="cellrowborder" valign="top" width="24%" headers="mcps1.3.4.2.4.2.2.2.3.1.1 "><p id="cce_10_0154__cce_10_0129_p12210114165612">Node Affinity</p>
</td>
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.4.2.4.2.2.2.3.1.2 "><ul id="cce_10_0154__cce_10_0129_ul1621054145617"><li id="cce_10_0154__cce_10_0129_li1721017413562"><strong id="cce_10_0154__cce_10_0129_b2074619819545">Incompatibility</strong>: Node affinity is disabled for the add-on.</li><li id="cce_10_0154__cce_10_0129_li52109417563"><strong id="cce_10_0154__cce_10_0129_b7658101316551">Node Affinity</strong>: Specify the nodes where the add-on is deployed. If you do not specify the nodes, the add-on will be randomly scheduled based on the default cluster scheduling policy.</li><li id="cce_10_0154__cce_10_0129_li1421015415561"><strong id="cce_10_0154__cce_10_0129_b98581358205610">Specified Node Pool Scheduling</strong>: Specify the node pool where the add-on is deployed. If you do not specify the node pool, the add-on will be randomly scheduled based on the default cluster scheduling policy.</li><li id="cce_10_0154__cce_10_0129_li92101542568"><strong id="cce_10_0154__cce_10_0129_b634615619572">Custom Policies</strong>: Enter the labels of the nodes where the add-on is to be deployed for more flexible scheduling policies. If you do not specify node labels, the add-on will be randomly scheduled based on the default cluster scheduling policy.<p id="cce_10_0154__cce_10_0129_p19210104145617">If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on cannot run.</p>
<td class="cellrowborder" valign="top" width="76%" headers="mcps1.3.4.2.4.2.2.2.3.1.2 "><ul id="cce_10_0154__cce_10_0129_ul1621054145617"><li id="cce_10_0154__cce_10_0129_li1721017413562"><strong id="cce_10_0154__cce_10_0129_b2074619819545">Not configured</strong>: Node affinity is disabled for the add-on.</li><li id="cce_10_0154__cce_10_0129_li52109417563"><strong id="cce_10_0154__cce_10_0129_b7658101316551">Node Affinity</strong>: Specify the nodes where the add-on is deployed. If you do not specify the nodes, the add-on will be randomly scheduled based on the default cluster scheduling policy.</li><li id="cce_10_0154__cce_10_0129_li1421015415561"><strong id="cce_10_0154__cce_10_0129_b98581358205610">Specified Node Pool Scheduling</strong>: Specify the node pool where the add-on is deployed. If you do not specify the node pool, the add-on will be randomly scheduled based on the default cluster scheduling policy.</li><li id="cce_10_0154__cce_10_0129_li92101542568"><strong id="cce_10_0154__cce_10_0129_b634615619572">Custom Policies</strong>: Enter the labels of the nodes where the add-on is to be deployed for more flexible scheduling policies. If you do not specify node labels, the add-on will be randomly scheduled based on the default cluster scheduling policy.<p id="cce_10_0154__cce_10_0129_p19210104145617">If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on cannot run.</p>
</li></ul>
</td>
</tr>
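The multi-AZ modes in the table above behave much like Kubernetes pod topology spread constraints across zones: roughly, Preferred/Equivalent correspond to `ScheduleAnyway` and Required to `DoNotSchedule`. An illustrative sketch under that assumption, not the add-on's actual manifest:

```yaml
# Sketch only: spreading pods evenly across availability zones.
# The deployment name and image are examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-demo
spec:
  replicas: 3
  selector:
    matchLabels: {app: spread-demo}
  template:
    metadata:
      labels: {app: spread-demo}
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                         # AZ pod counts differ by at most 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule   # "Required"-like behavior
        labelSelector:
          matchLabels: {app: spread-demo}
      containers:
      - name: app
        image: nginx:alpine
```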
@ -173,7 +173,7 @@
</p></li><li id="cce_10_0154__li855215375213"><span>After the configuration is complete, click <strong id="cce_10_0154__b842352706153736">Install</strong>.</span></li></ol>
</div>
<div class="section" id="cce_10_0154__section0377457163618"><h4 class="sectiontitle">Components</h4>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0154__table1965341035819" frame="border" border="1" rules="all"><caption><b>Table 5 </b>Autoscaler</caption><thead align="left"><tr id="cce_10_0154__row1565319102582"><th align="left" class="cellrowborder" valign="top" width="23%" id="mcps1.3.5.2.2.4.1.1"><p id="cce_10_0154__p14653141018584">Component</p>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0154__table1965341035819" frame="border" border="1" rules="all"><caption><b>Table 5 </b>Add-on components</caption><thead align="left"><tr id="cce_10_0154__row1565319102582"><th align="left" class="cellrowborder" valign="top" width="23%" id="mcps1.3.5.2.2.4.1.1"><p id="cce_10_0154__p14653141018584">Component</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="55.00000000000001%" id="mcps1.3.5.2.2.4.1.2"><p id="cce_10_0154__p065391025820">Description</p>
</th>
@ -55,7 +55,7 @@
</table>
</div>
<p id="cce_10_0163__p19399171422017"><strong id="cce_10_0163__b71614421416">Recommended configuration</strong></p>
<p id="cce_10_0163__p151281711539">Actual available memory of a node ≥ Sum of memory limits of all containers on the current node ≥ Sum of memory requests of all containers on the current node. You can view the actual available memory of a node on the CCE console (<strong id="cce_10_0163__b1537511249">Resource Management</strong> > <strong id="cce_10_0163__b1580194535">Nodes</strong> > <strong id="cce_10_0163__b84520670">Allocatable</strong>).</p>
<p id="cce_10_0163__p151281711539">Actual available memory of a node ≥ Sum of memory limits of all containers on the current node ≥ Sum of memory requests of all containers on the current node. You can view the actual available memory of a node on the CCE console (<strong id="cce_10_0163__b783749649">Resource Management</strong> > <strong id="cce_10_0163__b205432717">Nodes</strong> > <strong id="cce_10_0163__b1901761336">Allocatable</strong>).</p>
</li></ul>
<div class="note" id="cce_10_0163__note96535331218"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0163__p73492457214">The allocatable resources are calculated based on the resource request value (<strong id="cce_10_0163__b1985192711107">Request</strong>), which indicates the upper limit of resources that can be requested by pods on this node, but does not indicate the actual available resources of the node (for details, see <a href="#cce_10_0163__section17887209103612">Example of CPU and Memory Quota Usage</a>). The calculation formula is as follows:</p>
<ul id="cce_10_0163__ul259653921"><li id="cce_10_0163__li1259253828">Allocatable CPU = Total CPU – Requested CPU of all pods – Reserved CPU for other resources</li><li id="cce_10_0163__li15913539216">Allocatable memory = Total memory – Requested memory of all pods – Reserved memory for other resources</li></ul>
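The allocatable-resource formulas above can be sketched numerically. The capacity, request, and reserved figures below are assumed for illustration only, not taken from this document:

```python
# Illustrative arithmetic for the allocatable-resource formulas above.
# All numbers are assumed examples, not real node figures.
def allocatable(total, requested, reserved):
    """Allocatable = Total - sum of pod requests - reserved for other resources."""
    return total - sum(requested) - reserved

# Assumed node: 8 CPU cores total, pods request 2.0 + 1.5 cores, 0.5 core reserved
print(allocatable(8.0, [2.0, 1.5], 0.5))    # -> 4.0
# Assumed memory (GiB): 16 total, pods request 4 + 3, 1.5 reserved
print(allocatable(16.0, [4.0, 3.0], 1.5))   # -> 7.5
```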
@ -105,6 +105,42 @@
<p id="cce_10_0163__p16372013174">In this case, the remaining 1 core 5 GiB can be used by the next new pod.</p>
<p id="cce_10_0163__p57837416161">If pod 1 is under heavy load during peak hours, it will use more CPUs and memory within the limit. Therefore, the actual allocatable resources are fewer than 1 core 5 GiB.</p>
</div>
<div class="section" id="cce_10_0163__section1894434272116"><h4 class="sectiontitle">Quotas of Other Resources</h4><p id="cce_10_0163__p192125524716">Typically, nodes support local <span class="keyword" id="cce_10_0163__keyword159255594719">ephemeral storage</span>, which is provided by locally mounted writable devices or RAM. Ephemeral storage does not ensure long-term data availability. Pods can use local ephemeral storage to buffer data and store logs, or mount emptyDir storage volumes to containers. For details, see <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage" target="_blank" rel="noopener noreferrer">Local ephemeral storage</a>.</p>
<p id="cce_10_0163__p151131252102110">Kubernetes allows you to specify the requested value and limit value of ephemeral storage in container configurations to manage the local ephemeral storage. The following attributes can be configured for each container in a pod:</p>
<ul id="cce_10_0163__ul1925111194912"><li id="cce_10_0163__li16251618496">spec.containers[].resources.limits.ephemeral-storage</li></ul>
<ul id="cce_10_0163__ul593313522486"><li id="cce_10_0163__li19933165211487">spec.containers[].resources.requests.ephemeral-storage</li></ul>
<p id="cce_10_0163__p93012104509">In the following example, a pod contains two containers. Each container requests 2 GiB of local ephemeral storage, with a limit of 4 GiB. Therefore, the pod's request for local ephemeral storage totals 4 GiB, its limit totals 8 GiB, and the emptyDir volume can use at most 500 MiB of the local ephemeral storage.</p>
<pre class="screen" id="cce_10_0163__screen1751325713499">apiVersion: v1
kind: Pod
metadata:
name: frontend
spec:
containers:
- name: container-1
image: <i><span class="varname" id="cce_10_0163__varname1565816348216"><example_app_image></span></i>
<strong id="cce_10_0163__b187329132710">resources:</strong>
<strong id="cce_10_0163__b487929122710"> requests:</strong>
<strong id="cce_10_0163__b12881529142712"> ephemeral-storage: "2Gi"</strong>
<strong id="cce_10_0163__b148802942713"> limits:</strong>
<strong id="cce_10_0163__b208902992719"> ephemeral-storage: "4Gi"</strong>
volumeMounts:
- name: ephemeral
mountPath: "/tmp"
- name: container-2
image: <i><span class="varname" id="cce_10_0163__varname1457354611214"><example_log_aggregator_image></span></i>
<strong id="cce_10_0163__b1742443242711">resources:</strong>
<strong id="cce_10_0163__b15424123215277"> requests:</strong>
<strong id="cce_10_0163__b1424732162716"> ephemeral-storage: "2Gi"</strong>
<strong id="cce_10_0163__b1142519323272"> limits:</strong>
<strong id="cce_10_0163__b1242513214272"> ephemeral-storage: "4Gi"</strong>
volumeMounts:
- name: ephemeral
mountPath: "/tmp"
volumes:
- name: ephemeral
<strong id="cce_10_0163__b14273203720275">emptyDir:</strong>
<strong id="cce_10_0163__b727316376271"> sizeLimit: 500Mi</strong></pre>
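As a usage sketch (assuming the manifest above is saved as <strong>frontend.yaml</strong> and kubectl points to a reachable cluster), you can create the pod and print each container's ephemeral-storage request and limit:

```shell
kubectl apply -f frontend.yaml
# Print "<container>: <request>/<limit>" for each container in the pod.
kubectl get pod frontend -o jsonpath='{range .spec.containers[*]}{.name}: {.resources.requests.ephemeral-storage}/{.resources.limits.ephemeral-storage}{"\n"}{end}'
```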
</div>
</div>
<div>
<div class="familylinks">
@ -1,9 +1,9 @@
<a name="cce_10_0175"></a><a name="cce_10_0175"></a>
<h1 class="topictitle1">Connecting to a Cluster Using an X.509 Certificate</h1>
<div id="body1556615866530"><div class="section" id="cce_10_0175__section160213214302"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0175__p1840111417517">This section describes how to obtain the cluster certificate from the console and use it to access Kubernetes clusters.</p>
<div id="body1556615866530"><div class="section" id="cce_10_0175__section160213214302"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0175__p1840111417517">This section describes how to obtain the cluster certificate from the console and use it to access Kubernetes clusters.</p>
</div>
<div class="section" id="cce_10_0175__section1590914113306"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0175__ol898314521505"><li id="cce_10_0175__li4829928181812"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0175__li179831852301"><span>On the <strong id="cce_10_0175__b1562014204338">Overview</strong> page, locate the <strong id="cce_10_0175__b15595218133311">Connection Info</strong> area, and click <strong id="cce_10_0175__b17735142393311">Download</strong> next to <strong id="cce_10_0175__b11788192563319">X.509 certificate</strong>.</span></li><li id="cce_10_0175__li1979910715109"><span>In the <span class="uicontrol" id="cce_10_0175__uicontrol13516511412"><b>Obtain Certificate</b></span> dialog box displayed, select the certificate expiration time and download the <span class="keyword" id="cce_10_0175__keyword2331112794610">X.509 certificate</span> of the cluster as prompted.</span><p><div class="fignone" id="cce_10_0175__fig873583013712"><span class="figcap"><b>Figure 1 </b>Downloading a certificate</span><br><span><img id="cce_10_0175__image135275918501" src="en-us_image_0000001864147989.png"></span></div>
<div class="section" id="cce_10_0175__section1590914113306"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0175__ol898314521505"><li id="cce_10_0175__li4829928181812"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0175__li179831852301"><span>On the <strong id="cce_10_0175__b1562014204338">Overview</strong> page, locate the <strong id="cce_10_0175__b15595218133311">Connection Info</strong> area, and click <strong id="cce_10_0175__b17735142393311">Download</strong> next to <strong id="cce_10_0175__b11788192563319">X.509 certificate</strong>.</span></li><li id="cce_10_0175__li1979910715109"><span>In the <span class="uicontrol" id="cce_10_0175__uicontrol13516511412"><b>Obtain Certificate</b></span> dialog box displayed, select the certificate expiration time and download the <span class="keyword" id="cce_10_0175__keyword2331112794610">X.509 certificate</span> of the cluster as prompted.</span><p><div class="fignone" id="cce_10_0175__fig873583013712"><span class="figcap"><b>Figure 1 </b>Downloading a certificate</span><br><span><img id="cce_10_0175__image5191162792910" src="en-us_image_0000001898025121.png"></span></div>
<div class="notice" id="cce_10_0175__note21816913343"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_10_0175__ul45041635102414"><li id="cce_10_0175__li050403542411">The downloaded certificate contains three files: <strong id="cce_10_0175__b1790092752911">client.key</strong>, <strong id="cce_10_0175__b990002710298">client.crt</strong>, and <strong id="cce_10_0175__b690015272292">ca.crt</strong>. Keep these files secure.</li><li id="cce_10_0175__li150414359248">Certificates are not required for mutual access between containers in a cluster.</li></ul>
</div></div>
</p></li><li id="cce_10_0175__li067115818495"><span>Call native Kubernetes APIs using the cluster certificate.</span><p><p id="cce_10_0175__p1870145813497">For example, run the <strong id="cce_10_0175__b53321552399">curl</strong> command to call an API to view pod information. In the following example, <i><span class="varname" id="cce_10_0175__varname149655191957">192.168.0.18:5443</span></i> indicates the IP address of the API server in the cluster.</p>
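A minimal sketch of such a request (assuming the three downloaded certificate files were extracted to the current directory and the API server address is as above; the path lists pods in the <strong>default</strong> namespace):

```shell
curl --cacert ./ca.crt --cert ./client.crt --key ./client.key \
  https://192.168.0.18:5443/api/v1/namespaces/default/pods/
```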
@ -2,15 +2,18 @@
<h1 class="topictitle1">Node Overview</h1>
<div id="body0000001389082026"><div class="section" id="cce_10_0180__section192318418302"><h4 class="sectiontitle">Introduction</h4><p id="cce_10_0180__p8909587309">A container cluster consists of a set of worker machines, called nodes, that run containerized applications. A node can be a virtual machine (VM) or a physical machine (PM), depending on your service requirements. The components on a node include kubelet, container runtime, and kube-proxy.</p>
<div class="note" id="cce_10_0180__note62802132513"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0180__p17287216259">A Kubernetes cluster consists of master nodes and worker nodes. The nodes described in this section refer to <strong id="cce_10_0180__b214518810434">worker nodes</strong>, the computing nodes of a cluster that run containerized applications.</p>
<div class="note" id="cce_10_0180__note62802132513"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0180__p17287216259">A Kubernetes cluster consists of master nodes and worker nodes. The nodes described in this section refer to <strong id="cce_10_0180__b214518810434">worker nodes</strong>, which are computing nodes of a cluster that run containerized applications.</p>
</div></div>
<p id="cce_10_0180__p7224034124116">CCE uses high-performance Elastic Cloud Servers (ECSs) as nodes to build highly available Kubernetes clusters.</p>
</div>
<div class="section" id="cce_10_0180__section1667513391595"><h4 class="sectiontitle">Supported Node Specifications</h4><p id="cce_10_0180__p145110527493">Different regions support different node flavors, and node flavors may be changed. Log in to the CCE console and check whether the required node flavors are supported on the page for creating nodes.</p>
</div>
<div class="section" id="cce_10_0180__section270816512219"><h4 class="sectiontitle">Underlying File Storage System of Docker</h4><ul id="cce_10_0180__ul1042195318224"><li id="cce_10_0180__li1842253112217">In clusters of v1.15.6 or earlier, the underlying file storage system uses the XFS format.</li><li id="cce_10_0180__li1842185332216">In clusters of v1.15.11 or later, after a node is created or reset, the underlying file storage system uses the ext4 format.</li></ul>
<div class="section" id="cce_10_0180__section270816512219"><h4 class="sectiontitle">Underlying File Storage System of Containers</h4><p id="cce_10_0180__p91820457711"><strong id="cce_10_0180__b1463915471277">Docker</strong></p>
<ul id="cce_10_0180__ul1042195318224"><li id="cce_10_0180__li1842253112217">In clusters of v1.15.6 or earlier, the underlying Docker file storage system is in XFS format.</li><li id="cce_10_0180__li1842185332216">In clusters of v1.15.11 or later, after a node is created or reset, the underlying Docker file storage system changes to the ext4 format.</li></ul>
<p id="cce_10_0180__p18592419162315">For containerized applications that use the XFS format, pay attention to the impact of this change in the underlying file storage format. (Files are returned in a different order in different file systems. For example, if a Java application references a JAR package in a directory that contains multiple versions of that package and no version is specified, the package actually referenced is determined by the file system.)</p>
<p id="cce_10_0180__p177514150229">Run the <strong id="cce_10_0180__b12228135152816">docker info | grep "Backing Filesystem"</strong> command to check the format of the Docker underlying storage file used by the current node.</p>
<p id="cce_10_0180__p177514150229">Run the <strong id="cce_10_0180__b12228135152816">docker info | grep "Backing Filesystem"</strong> command to check the format of the underlying Docker storage file used by the current node.</p>
<p id="cce_10_0180__p775411518716"><strong id="cce_10_0180__b165551257173713">containerd</strong></p>
<p id="cce_10_0180__p66721751084">Nodes running on containerd use the ext4 file storage system.</p>
</div>
<div class="section" id="cce_10_0180__section1163534412367"><h4 class="sectiontitle">paas User and User Group</h4><p id="cce_10_0180__p2431914163714">When you create a node in a cluster, the <span class="keyword" id="cce_10_0180__keyword12385114255216">paas</span> user or <span class="keyword" id="cce_10_0180__keyword12946151191711">user group</span> is created on the node by default. CCE components and CCE add-ons on the node run as the non-root user <strong id="cce_10_0180__b1169194817229">paas</strong> (or the paas user group) to minimize their permissions. If the paas user or user group is modified, CCE components and pods may fail to run properly.</p>
<div class="notice" id="cce_10_0180__note1649203844910"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_10_0180__p7884254523">The normal running of CCE components depends on the paas user or user group. Pay attention to the following requirements:</p>
@ -1,10 +1,10 @@
<a name="cce_10_0182"></a><a name="cce_10_0182"></a>
<h1 class="topictitle1">Collecting Data Plane Logs</h1>
<h1 class="topictitle1">Collecting Container Logs</h1>
<div id="body0000001757642125"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0018.html">Connecting CCE to AOM</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0018.html">Collecting Container Logs Using ICAgent</a></strong><br>
</li>
</ul>