
CCE UMN update -20240625 version

Reviewed-by: Kovács, Zoltán <zkovacs@t-systems.com>
Co-authored-by: Dong, Qiu Jian <qiujiandong1@huawei.com>
Co-committed-by: Dong, Qiu Jian <qiujiandong1@huawei.com>
Authored by Dong, Qiu Jian on 2024-09-04 11:43:54 +00:00; committed by zuul
parent 64197bfe40
commit f7b9a88535
662 changed files with 19166 additions and 11583 deletions

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -8,7 +8,32 @@
</th>
</tr>
</thead>
<tbody><tr id="cce_01_0300__row450133482720"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p1051163432712">2024-05-30</p>
<tbody><tr id="cce_01_0300__row196562925719"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p1565620945712">2024-08-30</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><p id="cce_01_0300__p975316296574">Update:</p>
<ul id="cce_01_0300__ul13776134813571"><li id="cce_01_0300__li17522205165713">Updated <a href="cce_10_0406.html">Cloud Native Cluster Monitoring</a>.</li><li id="cce_01_0300__li913815612582">Updated <a href="cce_10_0789.html">Load-aware Scheduling</a>.</li></ul>
</td>
</tr>
<tr id="cce_01_0300__row5921952484"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p29285218814">2024-08-15</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><p id="cce_01_0300__p18948053792">Add:</p>
<ul id="cce_01_0300__ul15826614171120"><li id="cce_01_0300__li1879742710302">Added <strong id="cce_01_0300__b8789140102912">Default Security Group</strong> in <a href="cce_10_0028.html">Creating a CCE Standard/Turbo Cluster</a>.</li><li id="cce_01_0300__li168261814141112">Added <a href="cce_10_0426.html">Changing the Default Security Group of a Node</a>.</li><li id="cce_01_0300__li1487595561116">Added <a href="cce_faq_00392.html">How Do I Change the Security Group of Nodes in a Cluster in Batches?</a>.</li></ul>
</td>
</tr>
<tr id="cce_01_0300__row6517416151211"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p2517141612121">2024-08-07</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><p id="cce_01_0300__p28901632378">Add:</p>
<ul id="cce_01_0300__ul182643201212"><li id="cce_01_0300__li132582188180">Added <a href="cce_10_0658.html">Scaling a Node Pool</a>.</li><li id="cce_01_0300__li10394147151917">Added <a href="cce_10_0886.html">Accepting Nodes in a Node Pool</a>.</li><li id="cce_01_0300__li167854752011">Added <a href="cce_10_0789.html">Load-aware Scheduling</a>.</li><li id="cce_01_0300__li14675656102014">Added <a href="cce_10_0813.html">Configuration Cases for Resource Usage-based Scheduling</a>.</li><li id="cce_01_0300__li1941141102212">Added <a href="cce_10_0906.html">Adding a Pod Subnet for a Cluster</a>.</li><li id="cce_01_0300__li1324131822212">Added <a href="cce_10_0897.html">Binding a Security Group to a Pod Using an Annotation</a>.</li><li id="cce_01_0300__li1692141910227">Added <a href="cce_10_0841.html">Configuring SNI for a LoadBalancer Service</a>.</li><li id="cce_01_0300__li99871819102215">Added <a href="cce_10_0842.html">Configuring HTTP/2 for a LoadBalancer Service</a>.</li><li id="cce_01_0300__li18635420132211">Added <a href="cce_10_0831.html">Configuring a Blocklist/Trustlist Access Policy for a LoadBalancer Service</a>.</li><li id="cce_01_0300__li572992410224">Added <a href="cce_10_0832.html">Configuring a Blocklist/Trustlist Access Policy for a LoadBalancer Ingress</a>.</li><li id="cce_01_0300__li135171535122814">Added <a href="cce_10_0896.html">Configuring a Custom Header Forwarding Policy for a LoadBalancer Ingress</a>.</li><li id="cce_01_0300__li122509406288">Added <a href="cce_10_0859.html">Encrypting EVS Disks</a>.</li><li id="cce_01_0300__li1344364114281">Added <a href="cce_10_0860.html">Expanding the Capacity of an EVS Disk</a>.</li><li id="cce_01_0300__li7430942202819">Added <a href="cce_10_0839.html">Creating an SFS Turbo Subdirectory Using a Dynamic PV</a>.</li><li id="cce_01_0300__li10953104517105">Added <a href="cce_10_0649.html">Priorities for Scaling Node Pools</a>.</li><li id="cce_01_0300__li14310174319289">Added <a href="cce_bestpractice_10024.html">Protecting a CCE Cluster Against Overload</a>.</li><li id="cce_01_0300__li19277944102810">Added <a href="cce_bestpractice_10006.html">CoreDNS Configuration Optimization</a>.</li><li id="cce_01_0300__li1165912753610">Added <a href="cce_bestpractice_10041.html">Retaining the Original IP Address of a Pod</a>.</li><li id="cce_01_0300__li1751462813361">Added <a href="cce_faq_00440.html">What Should I Do If a Node Pool Is Abnormal?</a>.</li><li id="cce_01_0300__li12281182910364">Added <a href="cce_faq_00443.html">How Do I Modify ECS Configurations When an ECS Cannot Be Managed by a Node Pool?</a>.</li><li id="cce_01_0300__li187021652121518">Added <a href="cce_10_0864.html">Configuring a Cluster's API Server for Internet Access</a>.</li><li id="cce_01_0300__li14939181151719">Added <a href="cce_10_0883.html">Differences Between CCE Node mountPath Configurations and Community Native Configurations</a>.</li><li id="cce_01_0300__li690216184338">Added PVC parameter <strong id="cce_01_0300__b18102642133414">Storage Volume Name Prefix</strong>.</li><li id="cce_01_0300__li10150131316371">Added to all add-ons the change history.</li></ul>
<p id="cce_01_0300__p14691499714">Update:</p>
<ul id="cce_01_0300__ul693149374"><li id="cce_01_0300__li240625818158">Updated <a href="cce_bulletin_0089.html">Kubernetes 1.29 Release Notes</a> and <a href="cce_10_0405.html">Patch Version Release Notes</a>.</li><li id="cce_01_0300__li462814268913">Updated <a href="cce_10_0028.html">Creating a CCE Standard/Turbo Cluster</a>.</li><li id="cce_01_0300__li1986423119126">Updated <a href="cce_10_0213.html">Modifying Cluster Configurations</a>.</li><li id="cce_01_0300__li61531855181617">Updated <a href="cce_10_0003.html">Resetting a Node</a>.</li><li id="cce_01_0300__li24111121710">Updated <a href="cce_10_0605.html">Draining a Node</a>.</li><li id="cce_01_0300__li6240102101720">Updated <a href="cce_10_0012.html">Creating a Node Pool</a>.</li><li id="cce_01_0300__li318716320175">Updated <a href="cce_10_0653.html">Updating a Node Pool</a>.</li><li id="cce_01_0300__li189501631102112">Updated <a href="cce_10_0652.html">Modifying Node Pool Configurations</a>.</li><li id="cce_01_0300__li59214493715">Updated <a href="cce_10_0059.html">Configuring Network Policies to Restrict Pod Access</a>.</li><li id="cce_01_0300__li121251637152316">Updated <a href="cce_10_0014.html">LoadBalancer</a>.</li><li id="cce_01_0300__li8950123142117">Updated <a href="cce_10_0686.html">LoadBalancer Ingresses</a>.</li><li id="cce_01_0300__li1040203832312">Updated <a href="cce_10_0365.html">DNS Configuration</a>.</li><li id="cce_01_0300__li181512039202316">Updated <a href="cce_10_0406.html">Cloud Native Cluster Monitoring</a>.</li><li id="cce_01_0300__li14999103312818">Updated <a href="cce_10_0373.html">Monitoring Custom Metrics Using Cloud Native Cluster Monitoring</a>.</li><li id="cce_01_0300__li125103512812">Updated <a href="cce_10_0066.html">CCE Container Storage (Everest)</a>.</li><li id="cce_01_0300__li189311491274">Updated the Add-ons directory structure.</li></ul>
</td>
</tr>
<tr id="cce_01_0300__row058713234347"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p25882023143415">2024-06-26</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul118110437342"><li id="cce_01_0300__li38254316343">Supported the creation of clusters of v1.29. For details, see <a href="cce_bulletin_0089.html">Kubernetes 1.29 Release Notes</a> and <a href="cce_10_0405.html">Patch Version Release Notes</a>.</li><li id="cce_01_0300__li668895417342">Added the Cloud Native Cluster Monitoring add-on. For details, see <a href="cce_10_0406.html">Cloud Native Cluster Monitoring</a>.</li><li id="cce_01_0300__li16457344123612">Added <a href="cce_10_0373.html">Monitoring Custom Metrics Using Cloud Native Cluster Monitoring</a>.</li><li id="cce_01_0300__li19403177171618">Deleted section "Kubernetes Version Support Mechanism".</li><li id="cce_01_0300__li184038711166">Added <a href="cce_bulletin_0033.html">Kubernetes Version Policy</a>.</li><li id="cce_01_0300__li7668108144413">Added <a href="cce_10_0734.html">Configuring an EIP for a Pod</a>.</li><li id="cce_01_0300__li1389051010459">Added <a href="cce_10_0651.html">Configuring a Static EIP for a Pod</a>.</li><li id="cce_01_0300__li14982191412301">Update <a href="cce_10_0476.html">Node OS</a>.</li><li id="cce_01_0300__li38112312311">Update <a href="cce_productdesc_0005.html">Notes and Constraints</a>.</li></ul>
</td>
</tr>
<tr id="cce_01_0300__row450133482720"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p1051163432712">2024-05-30</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul164481231281"><li id="cce_01_0300__li15337202482817">Deleted section "OS Patch Notes for Cluster Nodes".</li><li id="cce_01_0300__li1533782402818">Added <a href="cce_10_0476.html">Node OS</a>.</li><li id="cce_01_0300__li102136527395">Describes how to obtain the value of the available_zone, l4_flavor_name and l7_flavor_name.</li></ul>
</td>

File diff suppressed because it is too large


@@ -33,6 +33,12 @@
<p id="cce_10_0004__p878819218284"><strong id="cce_10_0004__b137781937201815">false</strong> indicates that the node is not a bare metal node.</p>
</td>
</tr>
<tr id="cce_10_0004__row187691733134919"><td class="cellrowborder" valign="top" width="45%" headers="mcps1.4.2.4.2.3.1.1 "><p id="cce_10_0004__p16769633104916">node.kubernetes.io/container-engine</p>
</td>
<td class="cellrowborder" valign="top" width="55.00000000000001%" headers="mcps1.4.2.4.2.3.1.2 "><p id="cce_10_0004__p13769163354919">Container engine</p>
<p id="cce_10_0004__p16315959105512">Example: <strong id="cce_10_0004__b9316104812117">docker</strong> or <strong id="cce_10_0004__b11318749141118">containerd</strong></p>
</td>
</tr>
<tr id="cce_10_0004__row5551359185318"><td class="cellrowborder" valign="top" width="45%" headers="mcps1.4.2.4.2.3.1.1 "><p id="cce_10_0004__p126155014549">node.kubernetes.io/instance-type</p>
</td>
<td class="cellrowborder" valign="top" width="55.00000000000001%" headers="mcps1.4.2.4.2.3.1.2 "><p id="cce_10_0004__p11552159195316">Node specifications</p>
@@ -74,11 +80,6 @@
<td class="cellrowborder" valign="top" width="55.00000000000001%" headers="mcps1.4.2.4.2.3.1.2 "><p id="cce_10_0004__p641192311530">Node OS kernel version</p>
</td>
</tr>
<tr id="cce_10_0004__row23484510537"><td class="cellrowborder" valign="top" width="45%" headers="mcps1.4.2.4.2.3.1.1 "><p id="cce_10_0004__p1534935185313">node.kubernetes.io/container-engine</p>
</td>
<td class="cellrowborder" valign="top" width="55.00000000000001%" headers="mcps1.4.2.4.2.3.1.2 "><p id="cce_10_0004__p452411147537">Container engine used by the node.</p>
</td>
</tr>
<tr id="cce_10_0004__row157991762533"><td class="cellrowborder" valign="top" width="45%" headers="mcps1.4.2.4.2.3.1.1 "><p id="cce_10_0004__p1159194516538">accelerator</p>
</td>
<td class="cellrowborder" valign="top" width="55.00000000000001%" headers="mcps1.4.2.4.2.3.1.2 "><p id="cce_10_0004__p13799136175313">GPU node labels.</p>
@@ -93,7 +94,7 @@
</table>
</div>
</div>
<div class="section" id="cce_10_0004__section33951611481"><h4 class="sectiontitle">Adding or Deleting a Node Label</h4><ol id="cce_10_0004__ol4618636938"><li id="cce_10_0004__li56102343513"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0004__li12107195613316"><span>In the navigation pane, choose <strong id="cce_10_0004__b184671149151910">Nodes</strong>. On the displayed page, click the <strong id="cce_10_0004__b6486192182018">Nodes</strong> tab, select the target node and click <strong id="cce_10_0004__b186317458204">Labels and Taints</strong> in the upper left corner.</span></li><li id="cce_10_0004__li2568164932420"><span>In the displayed dialog box, click <span class="uicontrol" id="cce_10_0004__uicontrol197381013144411"><b>Add batch operations</b></span> under <span class="uicontrol" id="cce_10_0004__uicontrol147382132448"><b>Batch Operation</b></span>, and then choose <span class="uicontrol" id="cce_10_0004__uicontrol1973861354418"><b>Add/Update</b></span> or <span class="uicontrol" id="cce_10_0004__uicontrol2073819135443"><b>Delete</b></span>.</span><p><p id="cce_10_0004__p59891449182418">Enter the key and value of the label to be added or deleted, and click <strong id="cce_10_0004__b10531103420434">OK</strong>.</p>
<div class="section" id="cce_10_0004__section33951611481"><h4 class="sectiontitle">Adding or Deleting a Node Label</h4><ol id="cce_10_0004__ol4618636938"><li id="cce_10_0004__li56102343513"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0004__li12107195613316"><span>In the navigation pane, choose <strong id="cce_10_0004__b184671149151910">Nodes</strong>. On the displayed page, click the <strong id="cce_10_0004__b6486192182018">Nodes</strong> tab, select the target node and click <strong id="cce_10_0004__b186317458204">Labels and Taints</strong> in the upper left corner.</span></li><li id="cce_10_0004__li2568164932420"><span>In the displayed dialog box, click <span class="uicontrol" id="cce_10_0004__uicontrol197381013144411"><b>Add operation</b></span> under <span class="uicontrol" id="cce_10_0004__uicontrol147382132448"><b>Batch Operation</b></span>, and then choose <span class="uicontrol" id="cce_10_0004__uicontrol1973861354418"><b>Add/Update</b></span> or <span class="uicontrol" id="cce_10_0004__uicontrol2073819135443"><b>Delete</b></span>.</span><p><p id="cce_10_0004__p59891449182418">Enter the key and value of the label to be added or deleted, and click <strong id="cce_10_0004__b10531103420434">OK</strong>.</p>
<p id="cce_10_0004__p12647141114247">For example, the key is <strong id="cce_10_0004__b842352706145648">deploy_qa</strong> and the value is <strong id="cce_10_0004__b842352706145652">true</strong>, indicating that the node is used to deploy the QA (test) environment.</p>
</p></li><li id="cce_10_0004__li68199221571"><span>After the label is added, check the added label in node data.</span></li></ol>
</div>
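For reference, the node label from the example above (deploy_qa=true) can also be viewed and managed with kubectl rather than the console; a minimal sketch, where <node-name> is a placeholder for an actual node name:

# List nodes together with all of their labels
kubectl get nodes --show-labels

# Add or update the example label on a node
kubectl label nodes <node-name> deploy_qa=true --overwrite

# Delete the label (the trailing hyphen removes the key)
kubectl label nodes <node-name> deploy_qa-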


@@ -25,8 +25,8 @@
<p id="cce_10_0006__en-us_topic_0249851114_p5986375820">DaemonSets are closely related to nodes. If a node becomes faulty, the DaemonSet will not create the same pods on other nodes.</p>
<div class="fignone" id="cce_10_0006__en-us_topic_0249851114_fig27588261914"><span class="figcap"><b>Figure 3 </b>DaemonSet</span><br><span><img id="cce_10_0006__en-us_topic_0249851114_image13336133243518" src="en-us_image_0258871213.png"></span></div>
</div>
<div class="section" id="cce_10_0006__section153173319578"><h4 class="sectiontitle">Overview of Job and CronJob</h4><p id="cce_10_0006__en-us_topic_0249851115_p10889736123218">Jobs and cron jobs allow you to run short lived, one-off tasks in batch. They ensure the task pods run to completion.</p>
<ul id="cce_10_0006__en-us_topic_0249851115_ul197714911354"><li id="cce_10_0006__en-us_topic_0249851115_li47711097352">A job is a resource object used by Kubernetes to control batch tasks. Jobs are different from long-term servo tasks (such as Deployments and StatefulSets). The former is started and terminated at specific times, while the latter runs unceasingly unless being terminated. The pods managed by a job will be automatically removed after successfully completing tasks based on user configurations.</li><li id="cce_10_0006__en-us_topic_0249851115_li249061111353">A cron job runs a job periodically on a specified schedule. A cron job object is similar to a line of a crontab file in Linux.</li></ul>
<div class="section" id="cce_10_0006__section153173319578"><h4 class="sectiontitle">Overview of Job and CronJob</h4><p id="cce_10_0006__en-us_topic_0249851115_p10889736123218">Jobs and CronJobs allow you to run short lived, one-off tasks in batch. They ensure the task pods run to completion.</p>
<ul id="cce_10_0006__en-us_topic_0249851115_ul197714911354"><li id="cce_10_0006__en-us_topic_0249851115_li47711097352">A job is a resource object used by Kubernetes to control batch tasks. Jobs are different from long-term servo tasks (such as Deployments and StatefulSets). The former is started and terminated at specific times, while the latter runs unceasingly unless being terminated. The pods managed by a job will be automatically removed after successfully completing tasks based on user configurations.</li><li id="cce_10_0006__en-us_topic_0249851115_li249061111353">A CronJob runs a job periodically on a specified schedule. A CronJob object is similar to a line of a crontab file in Linux.</li></ul>
<p id="cce_10_0006__en-us_topic_0249851115_p166171774387">This run-to-completion feature of jobs is especially suitable for one-off tasks, such as continuous integration (CI).</p>
</div>
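To make the Job/CronJob distinction above concrete, here is a minimal, illustrative CronJob manifest; the name, image, and schedule are placeholders and are not taken from the CCE documentation:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron                  # hypothetical name
spec:
  schedule: "*/5 * * * *"           # crontab-style schedule: every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure  # Job pods must not use restartPolicy Always
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo Hello"]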
<div class="section" id="cce_10_0006__section3891192610218"><h4 class="sectiontitle">Workload Lifecycle</h4>


@@ -1,6 +1,6 @@
<a name="cce_10_0007"></a><a name="cce_10_0007"></a>
<h1 class="topictitle1">Managing Workloads and Jobs</h1>
<h1 class="topictitle1">Managing Workloads</h1>
<div id="body8662426"><div class="section" id="cce_10_0007__en-us_topic_0107283638_section430113764416"><h4 class="sectiontitle">Scenario</h4><div class="p" id="cce_10_0007__en-us_topic_0107283638_p723817425449">After a workload is created, you can upgrade, monitor, roll back, or delete the workload, as well as edit its YAML file.
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0007__en-us_topic_0107283638_table156143911815" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Workload/Job management</caption><thead align="left"><tr id="cce_10_0007__en-us_topic_0107283638_row856143916184"><th align="left" class="cellrowborder" valign="top" width="24.610000000000003%" id="mcps1.3.1.2.1.2.3.1.1"><p id="cce_10_0007__en-us_topic_0107283638_p1571039111814">Operation</p>
</th>
@@ -25,8 +25,8 @@
</tr>
<tr id="cce_10_0007__en-us_topic_0107283638_row1657153916181"><td class="cellrowborder" valign="top" width="24.610000000000003%" headers="mcps1.3.1.2.1.2.3.1.1 "><p id="cce_10_0007__en-us_topic_0107283638_p1457639131817"><a href="#cce_10_0007__en-us_topic_0107283638_section21669213390">Edit YAML</a></p>
</td>
<td class="cellrowborder" valign="top" width="75.39%" headers="mcps1.3.1.2.1.2.3.1.2 "><p id="cce_10_0007__en-us_topic_0107283638_p11572397189">You can modify and download YAML files of Deployments, StatefulSets, DaemonSets, CronJobs, and containers on the CCE console. YAML files of jobs can only be viewed, copied, and downloaded.</p>
<div class="note" id="cce_10_0007__note17426542882"><span class="notetitle"> NOTE: </span><div class="notebody"><p id="cce_10_0007__p142618426810">If an existing CronJob is modified, the new configuration takes effect for the new pods, and the existing pod continues to run without any change.</p>
<td class="cellrowborder" valign="top" width="75.39%" headers="mcps1.3.1.2.1.2.3.1.2 "><p id="cce_10_0007__en-us_topic_0107283638_p11572397189">You can modify and download YAML files of Deployments, StatefulSets, DaemonSets, CronJobs, and pods on the CCE console. YAML files of jobs can only be viewed, copied, and downloaded.</p>
<div class="note" id="cce_10_0007__note17426542882"><span class="notetitle"> NOTE: </span><div class="notebody"><p id="cce_10_0007__p142618426810">If an existing CronJob is modified, the new configuration takes effect for the new pods, and the existing pods continue to run without any change.</p>
</div></div>
</td>
</tr>
@@ -76,7 +76,7 @@
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section51511928173817"><a name="cce_10_0007__en-us_topic_0107283638_section51511928173817"></a><a name="en-us_topic_0107283638_section51511928173817"></a><h4 class="sectiontitle">Viewing Logs</h4><p id="cce_10_0007__en-us_topic_0107283638_p7643185724813">You can view logs of Deployments, StatefulSets, DaemonSets, and jobs. This section uses a Deployment as an example to describe how to view logs.</p>
<div class="notice" id="cce_10_0007__note177339212275"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_10_0007__p998763711231">Before viewing logs, ensure that the time of the browser is the same as that on the backend server.</p>
</div></div>
<ol id="cce_10_0007__en-us_topic_0107283638_ol14644105712488"><li id="cce_10_0007__en-us_topic_0107283638_li2619151017014"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b153351729122716">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li1340513385528"><span>Click the <strong id="cce_10_0007__b24101331162716">Deployments</strong> tab and click the <span class="uicontrol" id="cce_10_0007__uicontrol741018314276"><b>View Log</b></span> of the target workload.</span><p><p id="cce_10_0007__en-us_topic_0107283638_p17548132715421">In the displayed <strong id="cce_10_0007__b793112517535">View Log</strong> window, you can view logs.</p>
<ol id="cce_10_0007__en-us_topic_0107283638_ol14644105712488"><li id="cce_10_0007__en-us_topic_0107283638_li2619151017014"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b153351729122716">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li1340513385528"><span>Click the <strong id="cce_10_0007__b24101331162716">Deployments</strong> tab and click <span class="uicontrol" id="cce_10_0007__uicontrol741018314276"><b>View Log</b></span> of the target workload.</span><p><p id="cce_10_0007__en-us_topic_0107283638_p17548132715421">In the displayed <strong id="cce_10_0007__b793112517535">View Log</strong> window, you can view logs.</p>
<div class="note" id="cce_10_0007__en-us_topic_0107283638_note216713316213"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0007__en-us_topic_0107283638_p101679316212">The displayed logs are standard output logs of containers and do not have persistence and advanced O&amp;M capabilities. To use more comprehensive log capabilities, see <a href="cce_10_0553.html">Logs</a>. If the function of collecting standard output is enabled for the workload (enabled by default), you can go to AOM to view more workload logs. For details, see <a href="cce_10_0018.html">Collecting Container Logs Using ICAgent</a>.</p>
</div></div>
</p></li></ol>
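If you prefer the command line, the same standard output logs can be read with kubectl; a hedged sketch, with the workload, pod, container, and namespace names as placeholders:

# Follow the standard output of a pod managed by a Deployment (kubectl picks one pod)
kubectl logs -f deployment/<deployment-name> -n <namespace>

# Read the logs of a specific pod, optionally selecting a container
kubectl logs <pod-name> -c <container-name> -n <namespace>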
@@ -88,11 +88,11 @@
</div></div>
</p></li><li id="cce_10_0007__en-us_topic_0107283638_li8831149194314"><span>Upgrade the workload based on service requirements. The method for setting parameter is the same as that for creating a workload.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li3833189134315"><span>After the update is complete, click <span class="uicontrol" id="cce_10_0007__uicontrol5311635122814"><b>Upgrade Workload</b></span>, manually confirm the YAML file, and submit the upgrade.</span></li></ol>
</div>
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section21669213390"><a name="cce_10_0007__en-us_topic_0107283638_section21669213390"></a><a name="en-us_topic_0107283638_section21669213390"></a><h4 class="sectiontitle">Editing a YAML file</h4><p id="cce_10_0007__en-us_topic_0107283638_p879119319360">You can modify and download YAML files of Deployments, StatefulSets, DaemonSets, CronJobs, and containers on the CCE console. YAML files of jobs can only be viewed, copied, and downloaded. This section uses a Deployment as an example to describe how to edit the YAML file.</p>
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section21669213390"><a name="cce_10_0007__en-us_topic_0107283638_section21669213390"></a><a name="en-us_topic_0107283638_section21669213390"></a><h4 class="sectiontitle">Editing a YAML file</h4><p id="cce_10_0007__en-us_topic_0107283638_p879119319360">You can modify and download YAML files of Deployments, StatefulSets, DaemonSets, CronJobs, and pods on the CCE console. YAML files of jobs can only be viewed, copied, and downloaded. This section uses a Deployment as an example to describe how to edit the YAML file.</p>
<ol id="cce_10_0007__en-us_topic_0107283638_ol1879112311361"><li id="cce_10_0007__li635115103505"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b95501137142817">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__li1335171017509"><span>Click the <strong id="cce_10_0007__b1413614042816">Deployments</strong> tab and choose <strong id="cce_10_0007__b413716406287">More</strong> &gt; <strong id="cce_10_0007__b18137240202819">Edit YAML</strong> in the <strong id="cce_10_0007__b21377402282">Operation</strong> column of the target workload. In the dialog box that is displayed, modify the YAML file.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li97921133367"><span>Click <strong id="cce_10_0007__b1165164173410">OK</strong>.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li87324268415"><span>(Optional) In the <strong id="cce_10_0007__en-us_topic_0107283638_b8257102371317">Edit YAML</strong> window, click <strong id="cce_10_0007__en-us_topic_0107283638_b13222327121315">Download</strong> to download the YAML file.</span></li></ol>
</div>
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section13324541124815"><a name="cce_10_0007__en-us_topic_0107283638_section13324541124815"></a><a name="en-us_topic_0107283638_section13324541124815"></a><h4 class="sectiontitle">Rolling Back a Workload (Available Only for Deployments)</h4><p id="cce_10_0007__en-us_topic_0107283638_p252119142614">CCE records the release history of all Deployments. You can roll back a Deployment to a specified version.</p>
<ol id="cce_10_0007__en-us_topic_0107283638_ol165211495268"><li id="cce_10_0007__en-us_topic_0107283638_li0901438403"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b1982864212286">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li1254215491914"><span>Click the <strong id="cce_10_0007__b0953744172818">Deployments</strong> tab, choose <span class="uicontrol" id="cce_10_0007__uicontrol1195354418284"><b>More &gt; Roll Back</b></span> in the <strong id="cce_10_0007__b8954204472812">Operation</strong> column of the target workload.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li383212838"><span>Switch to the <strong id="cce_10_0007__b9222047122811">Change History</strong> tab page, click <span class="uicontrol" id="cce_10_0007__uicontrol13223154715289"><b>Roll Back to This Version</b></span> of the target version, manually confirm the YAML file, and click <span class="uicontrol" id="cce_10_0007__uicontrol5223104722812"><b>OK</b></span>.</span></li></ol>
<ol id="cce_10_0007__en-us_topic_0107283638_ol165211495268"><li id="cce_10_0007__en-us_topic_0107283638_li0901438403"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b1982864212286">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li1254215491914"><span>Click the <strong id="cce_10_0007__b0953744172818">Deployments</strong> tab and choose <span class="uicontrol" id="cce_10_0007__uicontrol1195354418284"><b>More &gt; Roll Back</b></span> in the <strong id="cce_10_0007__b8954204472812">Operation</strong> column of the target workload.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li383212838"><span>Switch to the <strong id="cce_10_0007__b9222047122811">Change History</strong> tab page, click <span class="uicontrol" id="cce_10_0007__uicontrol13223154715289"><b>Roll Back to This Version</b></span> of the target version, manually confirm the YAML file, and click <span class="uicontrol" id="cce_10_0007__uicontrol5223104722812"><b>OK</b></span>.</span></li></ol>
</div>
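The console rollback described above corresponds to the standard Kubernetes rollout commands; a minimal sketch, with the Deployment name and revision number as placeholders:

# Inspect the recorded revisions of a Deployment
kubectl rollout history deployment/<deployment-name>

# Roll back to a specific revision
kubectl rollout undo deployment/<deployment-name> --to-revision=2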
<div class="section" id="cce_10_0007__section132451237607"><a name="cce_10_0007__section132451237607"></a><a name="section132451237607"></a><h4 class="sectiontitle">Redeploying a Workload</h4><p id="cce_10_0007__p15601819195812">After you redeploy a workload, all pods in the workload will be restarted. This section uses Deployments as an example to illustrate how to redeploy a workload.</p>
<ol id="cce_10_0007__ol0529114105916"><li id="cce_10_0007__li152911415912"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b1861155692810">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__li2052917419597"><span>Click the <strong id="cce_10_0007__b13542145872817">Deployments</strong> tab and choose <strong id="cce_10_0007__b1454245820284">More</strong> &gt; <strong id="cce_10_0007__b6543165817284">Redeploy</strong> in the <strong id="cce_10_0007__b18543858112819">Operation</strong> column of the target workload.</span></li><li id="cce_10_0007__li052984175917"><span>In the dialog box that is displayed, click <span class="uicontrol" id="cce_10_0007__uicontrol8574100202910"><b>Yes</b></span> to redeploy the workload.</span></li></ol>
@@ -109,13 +109,13 @@
</div></div>
</p></li></ol>
</div>
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section14423721191418"><a name="cce_10_0007__en-us_topic_0107283638_section14423721191418"></a><a name="en-us_topic_0107283638_section14423721191418"></a><h4 class="sectiontitle">Deleting a Workload/Job</h4><p id="cce_10_0007__en-us_topic_0107283638_p44461328132920">You can delete a workload or job that is no longer needed. Deleted workloads or jobs cannot be recovered. Exercise caution when you perform this operation. This section uses a Deployment as an example to describe how to delete a workload.</p>
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section14423721191418"><a name="cce_10_0007__en-us_topic_0107283638_section14423721191418"></a><a name="en-us_topic_0107283638_section14423721191418"></a><h4 class="sectiontitle">Deleting a Workload/Job</h4><p id="cce_10_0007__en-us_topic_0107283638_p44461328132920">You can delete a workload or job that is no longer needed. Deleted workloads or jobs cannot be recovered. This section uses a Deployment as an example to describe how to delete a workload.</p>
<ol id="cce_10_0007__en-us_topic_0107283638_ol16301162312555"><li id="cce_10_0007__li1824612582414"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b4293132919298">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li23014231555"><span>In the same row as the workload you will delete, choose <strong id="cce_10_0007__en-us_topic_0107283638_b2032918125613">Operation</strong> &gt; <strong id="cce_10_0007__en-us_topic_0107283638_b0329141219611">More</strong> &gt; <strong id="cce_10_0007__en-us_topic_0107283638_b23291912765">Delete</strong>.</span><p><p id="cce_10_0007__en-us_topic_0107283638_p11245223162515">Read the system prompts carefully. A workload cannot be recovered after it is deleted. Exercise caution when performing this operation.</p>
</p></li><li id="cce_10_0007__en-us_topic_0107283638_li1566102365617"><span>Click <strong id="cce_10_0007__en-us_topic_0107283638_b2297164413617">Yes</strong>.</span><p><div class="note" id="cce_10_0007__en-us_topic_0107283638_note1933510551189"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0007__en-us_topic_0107283638_ul204031813191914"><li id="cce_10_0007__en-us_topic_0107283638_li7404151371913">If the node where the pod is located is unavailable or shut down and the workload cannot be deleted, you can forcibly delete the pod from the pod list on the workload details page.</li><li id="cce_10_0007__en-us_topic_0107283638_li10404113191914">Ensure that the storage volumes to be deleted are not used by other workloads. If these volumes are imported or have snapshots, you can only unbind them.</li></ul>
</div></div>
</p></li></ol>
</div>
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section1947616516301"><a name="cce_10_0007__en-us_topic_0107283638_section1947616516301"></a><a name="en-us_topic_0107283638_section1947616516301"></a><h4 class="sectiontitle">Events</h4><p id="cce_10_0007__p16951182315188">This section uses Deployments as an example to illustrate how to view events of a workload. To view the event of a job or cron jon, click <span class="uicontrol" id="cce_10_0007__uicontrol5141163802911"><b>View Event</b></span> in the <strong id="cce_10_0007__b5141193842916">Operation</strong> column of the target workload.</p>
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section1947616516301"><a name="cce_10_0007__en-us_topic_0107283638_section1947616516301"></a><a name="en-us_topic_0107283638_section1947616516301"></a><h4 class="sectiontitle">Events</h4><p id="cce_10_0007__p16951182315188">This section uses a Deployment as an example to describe how to view events of a workload. To view the event of a job or CronJob, click <span class="uicontrol" id="cce_10_0007__uicontrol5141163802911"><b>View Event</b></span> in the <strong id="cce_10_0007__b5141193842916">Operation</strong> column of the target workload.</p>
<ol id="cce_10_0007__ol114609411810"><li id="cce_10_0007__li146044118811"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b1144092910">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__li14460104111813"><span>On the <strong id="cce_10_0007__b10635642182913">Deployments</strong> tab page, click the target workload. In the <strong id="cce_10_0007__b1463504252913">Pods</strong> tab page, click the <span class="uicontrol" id="cce_10_0007__uicontrol96354422296"><b>View Events</b></span> to view the event name, event type, number of occurrences, Kubernetes event, first occurrence time, and last occurrence time.</span><p><div class="note" id="cce_10_0007__en-us_topic_0107283638_note645916250256"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0007__en-us_topic_0107283638_p2459102542512">Event data will be retained for one hour and then automatically deleted.</p>
</div></div>
</p></li></ol>


@@ -10,7 +10,7 @@
<p id="cce_10_0009__p819111064514">Enter the username and password used to access the third-party image repository.</p>
</p></li><li id="cce_10_0009__li13221161713456"><span>When creating a workload, enter a private image path in the format of <em id="cce_10_0009__i127371150203116">domainname/namespace/imagename:tag</em> in <span class="uicontrol" id="cce_10_0009__uicontrol153963238313"><b>Image Name</b></span> and select the key created in <a href="#cce_10_0009__li16481144064414">1</a>.</span></li><li id="cce_10_0009__li1682113518595"><span>Set other parameters and click <span class="uicontrol" id="cce_10_0009__uicontrol14664142510020"><b>Create Workload</b></span>.</span></li></ol>
</div>
<div class="section" id="cce_10_0009__section18217101117197"><h4 class="sectiontitle">Using kubectl</h4><ol id="cce_10_0009__ol84677271516"><li id="cce_10_0009__li2338171784610"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0009__li54671627213"><span>Use kubectl to create a secret of the kubernetes.io/dockerconfigjson.</span><p><pre class="screen" id="cce_10_0009__screen1466527017">kubectl create secret docker-registry <i><span class="varname" id="cce_10_0009__varname20740165882418">myregistrykey</span></i> -n <i><span class="varname" id="cce_10_0009__varname846884372519">default</span></i> --docker-server=<i><span class="varname" id="cce_10_0009__varname153949106259">DOCKER_REGISTRY_SERVER</span></i> --docker-username=<i><span class="varname" id="cce_10_0009__varname6836161311251">DOCKER_USER</span></i> --docker-password=<i><span class="varname" id="cce_10_0009__varname321011555243">DOCKER_PASSWORD</span></i> --docker-email=<i><span class="varname" id="cce_10_0009__varname17516111722514">DOCKER_EMAIL</span></i></pre>
<div class="section" id="cce_10_0009__section18217101117197"><h4 class="sectiontitle">Using kubectl</h4><ol id="cce_10_0009__ol84677271516"><li id="cce_10_0009__li2338171784610"><span>Use kubectl to access the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0009__li54671627213"><span>Use kubectl to create a secret of the kubernetes.io/dockerconfigjson.</span><p><pre class="screen" id="cce_10_0009__screen1466527017">kubectl create secret docker-registry <i><span class="varname" id="cce_10_0009__varname20740165882418">myregistrykey</span></i> -n <i><span class="varname" id="cce_10_0009__varname846884372519">default</span></i> --docker-server=<i><span class="varname" id="cce_10_0009__varname153949106259">DOCKER_REGISTRY_SERVER</span></i> --docker-username=<i><span class="varname" id="cce_10_0009__varname6836161311251">DOCKER_USER</span></i> --docker-password=<i><span class="varname" id="cce_10_0009__varname321011555243">DOCKER_PASSWORD</span></i> --docker-email=<i><span class="varname" id="cce_10_0009__varname17516111722514">DOCKER_EMAIL</span></i></pre>
<p id="cce_10_0009__p164665271714">In the preceding command, <em id="cce_10_0009__i18443812102618">myregistrykey</em> indicates the key name, <em id="cce_10_0009__i8904529112612">default</em> indicates the namespace where the key is located, and other parameters are as follows:</p>
<ul id="cce_10_0009__ul84670278112"><li id="cce_10_0009__li4467142711112"><strong id="cce_10_0009__b640184594119">DOCKER_REGISTRY_SERVER</strong>: address of a third-party image repository, for example, <strong id="cce_10_0009__b240104584114">www.3rdregistry.com</strong> or <strong id="cce_10_0009__b1440215458415">10.10.10.10:443</strong></li><li id="cce_10_0009__li13467127716"><strong id="cce_10_0009__b164021745114117">DOCKER_USER</strong>: account used for logging in to a third-party image repository</li><li id="cce_10_0009__li746782712110"><strong id="cce_10_0009__b1539245574117">DOCKER</strong><strong id="cce_10_0009__b4392185511418">_PASSWORD</strong>: password used for logging in to a third-party image repository</li><li id="cce_10_0009__li1546712278117"><strong id="cce_10_0009__b10402845154110">DOCKER_EMAIL</strong>: email of a third-party image repository</li></ul>
</p></li><li id="cce_10_0009__li161523518110"><span>Use a third-party image to create a workload.</span><p><div class="p" id="cce_10_0009__p13583471429">A kubernetes.io/dockerconfigjson secret is used for authentication when you obtain a private image. The following is an example of using the myregistrykey for authentication.<pre class="screen" id="cce_10_0009__screen0583771125">apiVersion: v1
@@ -30,7 +30,7 @@ spec:
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0130.html">Configuring a Container</a></div>
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0130.html">Configuring a Workload</a></div>
</div>
</div>
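Because the YAML example above is cut off by the diff, here is a minimal, self-contained sketch of how such a secret is typically referenced through imagePullSecrets; the pod name and image path are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo                              # hypothetical name
spec:
  containers:
  - name: app
    image: www.3rdregistry.com/namespace/imagename:tag  # private image path
  imagePullSecrets:
  - name: myregistrykey                                 # secret created with kubectl create secret docker-registry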


@@ -4,11 +4,11 @@
<div id="body1522665832344"><p id="cce_10_0010__p13310145119810">You can learn about a cluster network from the following two aspects:</p>
<ul id="cce_10_0010__ul65247121891"><li id="cce_10_0010__li14524161214917">What is a cluster network like? A cluster consists of multiple nodes, and pods (or containers) are running on the nodes. Nodes and containers need to communicate with each other. For details about the cluster network types and their functions, see <a href="#cce_10_0010__section1131733719195">Cluster Network Structure</a>.</li><li id="cce_10_0010__li55241612391">How is pod access implemented in a cluster? Accessing a pod or container is a process of accessing services of a user. Kubernetes provides <a href="#cce_10_0010__section1860619221134">Service</a> and <a href="#cce_10_0010__section1248852094313">Ingress</a> to address pod access issues. This section summarizes common network access scenarios. You can select the proper scenario based on site requirements. For details about the network access scenarios, see <a href="#cce_10_0010__section1286493159">Access Scenarios</a>.</li></ul>
<div class="section" id="cce_10_0010__section1131733719195"><a name="cce_10_0010__section1131733719195"></a><a name="section1131733719195"></a><h4 class="sectiontitle">Cluster Network Structure</h4><p id="cce_10_0010__p3299181794916">All nodes in the cluster are located in a VPC and use the VPC network. The container network is managed by dedicated network add-ons.</p>
<p id="cce_10_0010__p452843519446"><span><img id="cce_10_0010__image94831936164418" src="en-us_image_0000001897906049.png"></span></p>
<p id="cce_10_0010__p452843519446"><span><img id="cce_10_0010__image94831936164418" src="en-us_image_0000001981436297.png"></span></p>
<ul id="cce_10_0010__ul1916179122617"><li id="cce_10_0010__li13455145754315"><strong id="cce_10_0010__b19468105563811">Node Network</strong><p id="cce_10_0010__p17682193014812">A node network assigns IP addresses to hosts (nodes in the figure above) in a cluster. Select a VPC subnet as the node network of the CCE cluster. The number of available IP addresses in a subnet determines the maximum number of nodes (including master nodes and worker nodes) that can be created in a cluster. This quantity is also affected by the container network. For details, see the container network model.</p>
</li><li id="cce_10_0010__li16131141644715"><strong id="cce_10_0010__b1975815172433">Container Network</strong><p id="cce_10_0010__p523322010499">A container network assigns IP addresses to containers in a cluster. CCE inherits the IP-Per-Pod-Per-Network network model of Kubernetes. That is, each pod has an independent IP address on a network plane and all containers in a pod share the same network namespace. All pods in a cluster exist in a directly connected flat network. They can access each other through their IP addresses without using NAT. Kubernetes only provides a network mechanism for pods, but does not directly configure pod networks. The configuration of pod networks is implemented by specific container network add-ons. The container network add-ons are responsible for configuring networks for pods and managing container IP addresses.</p>
</li><li id="cce_10_0010__li16131141644715"><strong id="cce_10_0010__b1975815172433">Container Network</strong><p id="cce_10_0010__p523322010499">A container network assigns IP addresses to pods in a cluster. CCE inherits the IP-Per-Pod-Per-Network network model of Kubernetes. That is, each pod has an independent IP address on a network plane and all containers in a pod share the same network namespace. All pods in a cluster exist in a directly connected flat network. They can access each other through their IP addresses without using NAT. Kubernetes only provides a network mechanism for pods, but does not directly configure pod networks. The configuration of pod networks is implemented by specific container network add-ons. The container network add-ons are responsible for configuring networks for pods and managing container IP addresses.</p>
<p id="cce_10_0010__p3753153443514">Currently, CCE supports the following container network models:</p>
<ul id="cce_10_0010__ul1751111534368"><li id="cce_10_0010__li133611549182410">Container tunnel network: The container tunnel network is constructed on but independent of the node network through tunnel encapsulation. This network model uses VXLAN to encapsulate Ethernet packets into UDP packets and transmits them in tunnels. Open vSwitch serves as the backend virtual switch.</li><li id="cce_10_0010__li285944033514">VPC network: The VPC network uses VPC routing to integrate with the underlying network. This network model applies to performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the route quota in a VPC network. Each node is assigned a CIDR block of a fixed size. This networking model is free from tunnel encapsulation overhead and outperforms the container tunnel network model. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in a cluster can be directly accessed from outside the cluster.</li><li id="cce_10_0010__li5395140132618">Developed by CCE, Cloud Native 2.0 network deeply integrates Elastic Network Interfaces (ENIs) and Sub Network Interfaces (sub-ENIs) of VPC. Container IP addresses are allocated from the VPC CIDR block. ELB passthrough networking is supported to direct access requests to containers. Security groups and EIPs are bound to deliver high performance.</li></ul>
<ul id="cce_10_0010__ul1751111534368"><li id="cce_10_0010__li133611549182410">Container tunnel network: The container tunnel network is constructed on but independent of the node network through tunnel encapsulation. This network model uses VXLAN to encapsulate Ethernet packets into UDP packets and transmits them in tunnels. Open vSwitch serves as the backend virtual switch.</li><li id="cce_10_0010__li285944033514">VPC network: The VPC network model seamlessly combines VPC routing with the underlying network, making it ideal for high-performance scenarios. However, the maximum number of nodes allowed in a cluster is determined by the VPC route quota. Each node is assigned a CIDR block of a fixed size. The VPC network model outperforms the container tunnel network model in terms of performance because it does not have tunnel encapsulation overhead. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in a cluster can be directly accessed from outside the cluster.</li><li id="cce_10_0010__li5395140132618">Developed by CCE, Cloud Native 2.0 network deeply integrates Elastic Network Interfaces (ENIs) and Sub Network Interfaces (sub-ENIs) of VPC. Container IP addresses are allocated from the VPC CIDR block. ELB passthrough networking is supported to direct access requests to containers. Security groups and EIPs are bound to deliver high performance.</li></ul>
<p id="cce_10_0010__p397482011109">The performance, networking scale, and application scenarios of a container network vary according to the container network model. For details about the functions and features of different container network models, see <a href="cce_10_0281.html">Overview</a>.</p>
</li><li id="cce_10_0010__li9139522183714"><strong id="cce_10_0010__b1885317214113">Service Network</strong><p id="cce_10_0010__p584703114499">Service is also a Kubernetes object. Each Service has a static IP address. When creating a cluster on CCE, you can specify the Service CIDR block. The Service CIDR block cannot overlap with the node or container CIDR block. The Service CIDR block can be used only within a cluster.</p>
</li></ul>
@@ -27,7 +27,7 @@
<ul id="cce_10_0010__ul125010117542"><li id="cce_10_0010__li1466355519018">Intra-cluster access: A ClusterIP Service is used for workloads in the same cluster to access each other.</li><li id="cce_10_0010__li1014011111110">Access from outside a cluster: A Service (NodePort or LoadBalancer type) or an ingress is recommended for a workload outside a cluster to access workloads in the cluster.<ul id="cce_10_0010__ul101426119117"><li id="cce_10_0010__li8904911447">Access through the public network: An EIP should be bound to the node or load balancer.</li><li id="cce_10_0010__li2501311125411">Access through the private network: The workload can be accessed through the internal IP address of the node or load balancer. If workloads are located in different VPCs, a peering connection is required to enable communication between different VPCs.</li></ul>
</li><li id="cce_10_0010__li1066365520014">The workload can access the external network as follows:<ul id="cce_10_0010__ul17529512239"><li id="cce_10_0010__li26601017165619">Accessing an intranet: The workload accesses the intranet address, but the implementation method varies depending on container network models. Ensure that the peer security group allows the access requests from the container CIDR block.</li><li id="cce_10_0010__li8257105318237">Accessing a public network: Assign an EIP to the node where the workload runs (when the VPC network or tunnel network model is used), bind an EIP to the pod IP address (when the Cloud Native Network 2.0 model is used), or configure SNAT rules through the NAT gateway. For details, see <a href="cce_10_0400.html">Accessing the Internet from a Container</a>.</li></ul>
</li></ul>
<div class="fignone" id="cce_10_0010__fig13795829151515"><span class="figcap"><b>Figure 3 </b>Network access diagram</span><br><span><img id="cce_10_0010__image445972519529" src="en-us_image_0000001851586668.png"></span></div>
<div class="fignone" id="cce_10_0010__fig13795829151515"><span class="figcap"><b>Figure 3 </b>Network access diagram</span><br><span><img id="cce_10_0010__image445972519529" src="en-us_image_0000001981436301.png"></span></div>
</div>
</div>
<div>


@@ -4,14 +4,14 @@
<div id="body1522736584192"><div class="section" id="cce_10_0011__section13559184110492"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0011__p32401248184910">ClusterIP Services allow workloads in the same cluster to use their cluster-internal domain names to access each other.</p>
<p id="cce_10_0011__p653753053815">The cluster-internal domain name format is <em id="cce_10_0011__i8179113533712">&lt;Service name&gt;</em>.<em id="cce_10_0011__i14179133519374">&lt;Namespace of the workload&gt;</em><strong id="cce_10_0011__b164892813716">.svc.cluster.local:</strong><em id="cce_10_0011__i19337102815712">&lt;Port&gt;</em>, for example, <strong id="cce_10_0011__b8115811381">nginx.default.svc.cluster.local:80</strong>.</p>
<p id="cce_10_0011__p1778412445517"><a href="#cce_10_0011__fig192245420557">Figure 1</a> shows the mapping relationships between access channels, container ports, and access ports.</p>
<div class="fignone" id="cce_10_0011__fig192245420557"><a name="cce_10_0011__fig192245420557"></a><a name="fig192245420557"></a><span class="figcap"><b>Figure 1 </b>Intra-cluster access (ClusterIP)</span><br><span><img id="cce_10_0011__image1942163010278" src="en-us_image_0000001898025885.png"></span></div>
<div class="fignone" id="cce_10_0011__fig192245420557"><a name="cce_10_0011__fig192245420557"></a><a name="fig192245420557"></a><span class="figcap"><b>Figure 1 </b>Intra-cluster access (ClusterIP)</span><br><span><img id="cce_10_0011__image1942163010278" src="en-us_image_0000001981436829.png"></span></div>
</div>
<div class="section" id="cce_10_0011__section51925078171335"><h4 class="sectiontitle">Creating a ClusterIP Service</h4><ol id="cce_10_0011__ol1321170617144"><li id="cce_10_0011__li41731123658"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0011__li836916478329"><span>In the navigation pane, choose <strong id="cce_10_0011__b18658321171411"><span id="cce_10_0011__text9765124722315">Services &amp; Ingresses</span></strong>. In the upper right corner, click <span class="uicontrol" id="cce_10_0011__uicontrol132971717714"><b>Create Service</b></span>.</span></li><li id="cce_10_0011__li3476651017144"><span>Configure intra-cluster access parameters.</span><p><ul id="cce_10_0011__ul4446314017144"><li id="cce_10_0011__li6462394317144"><strong id="cce_10_0011__b181470402505">Service Name</strong>: Specify a Service name, which can be the same as the workload name.</li><li id="cce_10_0011__li89543531070"><strong id="cce_10_0011__b2091115317145">Service Type</strong>: Select <strong id="cce_10_0011__b291265312145">ClusterIP</strong>.</li><li id="cce_10_0011__li4800017144"><strong id="cce_10_0011__b3997151161512">Namespace</strong>: Namespace to which the workload belongs.</li><li id="cce_10_0011__li43200017144"><strong id="cce_10_0011__b16251723161514">Selector</strong>: Add a label and click <strong id="cce_10_0011__b157041550131611">Confirm</strong>. A Service selects a pod based on the added label. You can also click <strong id="cce_10_0011__b796831114161">Reference Workload Label</strong> to use the label of an existing workload. In the dialog box that is displayed, select a workload and click <strong id="cce_10_0011__b1117311264160">OK</strong>.</li><li id="cce_10_0011__li142435567390"><strong id="cce_10_0011__b11211151715470">IPv6</strong>: This function is disabled by default. After this function is enabled, the cluster IP address of the Service changes to an IPv6 address. <strong id="cce_10_0011__b11322182810261">This parameter is available only in clusters of v1.15 or later with IPv6 enabled (set during cluster creation).</strong></li><li id="cce_10_0011__li388800117144"><strong id="cce_10_0011__b150413392315954">Port Settings</strong><ul id="cce_10_0011__ul13757123384316"><li id="cce_10_0011__li475711338435"><strong id="cce_10_0011__b712192113108">Protocol</strong>: protocol used by the Service.</li><li id="cce_10_0011__li353122153610"><strong id="cce_10_0011__b2766425101013">Service Port</strong>: port used by the Service. The port number ranges from 1 to 65535.</li><li id="cce_10_0011__li177581033194316"><strong id="cce_10_0011__b2045852761014">Container Port</strong>: port on which the workload listens. For example, Nginx uses port 80 by default.</li></ul>
<div class="section" id="cce_10_0011__section51925078171335"><h4 class="sectiontitle">Creating a ClusterIP Service</h4><ol id="cce_10_0011__ol1321170617144"><li id="cce_10_0011__li41731123658"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0011__li836916478329"><span>In the navigation pane, choose <strong id="cce_10_0011__b18658321171411"><span id="cce_10_0011__text9765124722315">Services &amp; Ingresses</span></strong>. In the upper right corner, click <span class="uicontrol" id="cce_10_0011__uicontrol132971717714"><b>Create Service</b></span>.</span></li><li id="cce_10_0011__li3476651017144"><span>Configure intra-cluster access parameters.</span><p><ul id="cce_10_0011__ul4446314017144"><li id="cce_10_0011__li6462394317144"><strong id="cce_10_0011__b181470402505">Service Name</strong>: Specify a Service name, which can be the same as the workload name.</li><li id="cce_10_0011__li89543531070"><strong id="cce_10_0011__b2091115317145">Service Type</strong>: Select <strong id="cce_10_0011__b291265312145">ClusterIP</strong>.</li><li id="cce_10_0011__li4800017144"><strong id="cce_10_0011__b3997151161512">Namespace</strong>: namespace that the workload belongs to.</li><li id="cce_10_0011__li43200017144"><strong id="cce_10_0011__b16251723161514">Selector</strong>: Add a label and click <strong id="cce_10_0011__b157041550131611">Confirm</strong>. The Service will use this label to select pods. You can also click <strong id="cce_10_0011__b796831114161">Reference Workload Label</strong> to use the label of an existing workload. In the dialog box that is displayed, select a workload and click <strong id="cce_10_0011__b1117311264160">OK</strong>.</li><li id="cce_10_0011__li142435567390"><strong id="cce_10_0011__b2095128121518">IPv6</strong>: This function is disabled by default. After this function is enabled, the cluster IP address of the Service changes to an IPv6 address. <strong id="cce_10_0011__b11322182810261">This parameter is available only in clusters of v1.15 or later with IPv6 enabled (set during cluster creation).</strong></li><li id="cce_10_0011__li388800117144"><strong id="cce_10_0011__b150413392315954">Ports</strong><ul id="cce_10_0011__ul13757123384316"><li id="cce_10_0011__li475711338435"><strong id="cce_10_0011__b712192113108">Protocol</strong>: protocol used by the Service.</li><li id="cce_10_0011__li353122153610"><strong id="cce_10_0011__b2766425101013">Service Port</strong>: port used by the Service. The port number ranges from 1 to 65535.</li><li id="cce_10_0011__li177581033194316"><strong id="cce_10_0011__b2045852761014">Container Port</strong>: listener port of the workload. For example, Nginx uses port 80 by default.</li></ul>
</li></ul>
</p></li><li id="cce_10_0011__li5563226917144"><span>Click <strong id="cce_10_0011__b15590122052614">OK</strong>.</span></li></ol>
</div>
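<p>The console parameters above map directly onto a Kubernetes Service manifest. The following is a minimal sketch, not the exact YAML generated by the console; the <strong>app: nginx</strong> selector label and the port values are illustrative:</p>
<pre class="screen">apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip        # Service Name
  namespace: default           # Namespace of the workload
spec:
  type: ClusterIP              # Service Type
  selector:                    # Selector: label used to select pods
    app: nginx
  ports:
  - protocol: TCP              # Protocol
    port: 8080                 # Service Port
    targetPort: 80             # Container Port (Nginx listens on port 80 by default)</pre>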
<div class="section" id="cce_10_0011__section9813121512319"><h4 class="sectiontitle">Setting the Access Type Using kubectl</h4><p id="cce_10_0011__p1626583075113">You can run kubectl commands to set the access type (Service). This section uses an Nginx workload as an example to describe how to implement intra-cluster access using kubectl.</p>
<ol id="cce_10_0011__ol19191171513118"><li id="cce_10_0011__li2338171784610"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0011__li1020013819415"><span>Create and edit the <strong id="cce_10_0011__b1451217585494">nginx-deployment.yaml</strong> and <strong id="cce_10_0011__b0512195818499">nginx-clusterip-svc.yaml</strong> files.</span><p><p id="cce_10_0011__p1527690125210">The file names are user-defined. <strong id="cce_10_0011__b1073117514231">nginx-deployment.yaml</strong> and <strong id="cce_10_0011__b1373115162318">nginx-clusterip-svc.yaml</strong> are merely example file names.</p>
<div class="section" id="cce_10_0011__section9813121512319"><h4 class="sectiontitle">Setting the Access Type Using kubectl</h4><p id="cce_10_0011__p1626583075113">You can configure Service access using kubectl. This section uses an Nginx workload as an example to describe how to implement intra-cluster access using kubectl.</p>
<ol id="cce_10_0011__ol19191171513118"><li id="cce_10_0011__li2338171784610"><span>Use kubectl to access the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0011__li1020013819415"><span>Create and edit the <strong id="cce_10_0011__b1451217585494">nginx-deployment.yaml</strong> and <strong id="cce_10_0011__b0512195818499">nginx-clusterip-svc.yaml</strong> files.</span><p><p id="cce_10_0011__p1527690125210">The file names are user-defined. <strong id="cce_10_0011__b1073117514231">nginx-deployment.yaml</strong> and <strong id="cce_10_0011__b1373115162318">nginx-clusterip-svc.yaml</strong> are merely example file names.</p>
<div class="p" id="cce_10_0011__p7581950184318"><strong id="cce_10_0011__b111191541172515">vi nginx-deployment.yaml</strong><pre class="screen" id="cce_10_0011__screen47713471440">apiVersion: apps/v1
kind: Deployment
metadata:
@ -64,7 +64,7 @@ spec:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.247.0.1 &lt;none&gt; 443/TCP 4d6h
nginx-clusterip ClusterIP 10.247.74.52 &lt;none&gt; 8080/TCP 14m</pre>
</p></li><li id="cce_10_0011__li1847854017180"><span>Access a Service.</span><p><p id="cce_10_0011__p7601154373418">A Service can be accessed from containers or nodes in a cluster.</p>
</p></li><li id="cce_10_0011__li1847854017180"><span>Access the Service.</span><p><p id="cce_10_0011__p7601154373418">A Service can be accessed from containers or nodes in a cluster.</p>
<p id="cce_10_0011__p73315335616">Create a pod, access the pod, and run the <strong id="cce_10_0011__b91231031103314">curl</strong> command to access <em id="cce_10_0011__i34141151351">IP address:Port</em> or the domain name of the Service, as shown in the following figure.</p>
<p id="cce_10_0011__p1258754194715">The domain name suffix can be omitted. In the same namespace, you can directly use <strong id="cce_10_0011__b12708123293515">nginx-clusterip:8080</strong> for access. In other namespaces, you can use <strong id="cce_10_0011__b1747923918014">nginx-clusterip.default:8080</strong> for access.</p>
<pre class="screen" id="cce_10_0011__screen3418202633310"># kubectl run -i --tty --image nginx:alpine test --rm /bin/sh


@ -8,16 +8,22 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0385.html">Using Annotations to Balance Load</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0683.html">Configuring an HTTP or HTTPS Service</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0683.html">Configuring HTTP/HTTPS for a LoadBalancer Service</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0729.html">Configuring Timeout for a Service</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0841.html">Configuring SNI for a LoadBalancer Service</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0684.html">Configuring Health Check on Multiple Service Ports</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0842.html">Configuring HTTP/2 for a LoadBalancer Service</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0729.html">Configuring Timeout for a LoadBalancer Service</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0831.html">Configuring a Blocklist/Trustlist Access Policy for a LoadBalancer Service</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0684.html">Configuring Health Check on Multiple Ports of a LoadBalancer Service</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0355.html">Configuring Passthrough Networking for a LoadBalancer Service</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0685.html">Setting the Pod Ready Status Through the ELB Health Check</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0355.html">Enabling Passthrough Networking for LoadBalancer Services</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0084.html">Enabling ICMP Security Group Rules</a></strong><br>
</li>
</ul>


@ -24,7 +24,7 @@ data:
<pre class="screen" id="cce_10_0015__screen76458538178">Hello</pre>
</p></li></ol>
<p id="cce_10_0015__p2562105044215"><strong id="cce_10_0015__b1096818714819">Using kubectl</strong></p>
<ol id="cce_10_0015__ol1392823394416"><li id="cce_10_0015__li1681024195710"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0015__li1020013819415"><span>Create a file named <strong id="cce_10_0015__b796819481813">nginx-configmap.yaml</strong> and edit it.</span><p><p id="cce_10_0015__p106999147413"><strong id="cce_10_0015__b6469155655719">vi nginx-configmap.yaml</strong></p>
<ol id="cce_10_0015__ol1392823394416"><li id="cce_10_0015__li1681024195710"><span>Use kubectl to access the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0015__li1020013819415"><span>Create a file named <strong id="cce_10_0015__b796819481813">nginx-configmap.yaml</strong> and edit it.</span><p><p id="cce_10_0015__p106999147413"><strong id="cce_10_0015__b6469155655719">vi nginx-configmap.yaml</strong></p>
<p id="cce_10_0015__p58981554135312">Content of the YAML file:</p>
<ul id="cce_10_0015__ul11105134234513"><li id="cce_10_0015__li6846103416564"><strong id="cce_10_0015__b1369120812134">Added from ConfigMap</strong>: To add all data in a ConfigMap to environment variables, use the <strong id="cce_10_0015__b1156313267125">envFrom</strong> parameter. The keys in the ConfigMap will become names of environment variables in the workload.<pre class="screen" id="cce_10_0015__screen104944321312">apiVersion: apps/v1
kind: Deployment
@ -99,13 +99,13 @@ CCE</pre>
-c
echo $SPECIAL_LEVEL $SPECIAL_TYPE &gt; /usr/share/nginx/html/index.html</pre>
</li></ul>
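<p>The <strong>envFrom</strong> section of the manifest is omitted in the hunk above. A minimal sketch of the pod template fragment it describes is shown below; the container name and image are illustrative, and the ConfigMap name follows the <strong>cce-configmap</strong> example used elsewhere in this topic:</p>
<pre class="screen">spec:
  template:
    spec:
      containers:
      - name: container-1
        image: nginx:latest
        envFrom:                  # Import all keys in the ConfigMap as environment variables.
        - configMapRef:
            name: cce-configmap</pre>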
</p></li><li id="cce_10_0015__li7994114314516"><span>Set other workload parameters and click <span class="uicontrol" id="cce_10_0015__uicontrol353819371113"><b>Create Workload</b></span>.</span><p><p id="cce_10_0015__p399574314456">After the workload runs properly, <a href="cce_10_00356.html">log in to the container</a> and run the following statement to check whether the ConfigMap has been set as an environment variable of the workload:</p>
</p></li><li id="cce_10_0015__li7994114314516"><span>Configure other workload parameters and click <span class="uicontrol" id="cce_10_0015__uicontrol353819371113"><b>Create Workload</b></span>.</span><p><p id="cce_10_0015__p399574314456">After the workload runs properly, <a href="cce_10_00356.html">log in to the container</a> and run the following statement to check whether the ConfigMap has been set as an environment variable of the workload:</p>
<pre class="screen" id="cce_10_0015__screen13995134318451">cat /usr/share/nginx/html/index.html</pre>
<p id="cce_10_0015__p1995643134510">The example output is as follows:</p>
<pre class="screen" id="cce_10_0015__screen9995143134514">Hello CCE</pre>
</p></li></ol>
<p id="cce_10_0015__p4491185413187"><strong id="cce_10_0015__b4624652134016">Using kubectl</strong></p>
<ol id="cce_10_0015__ol34911754131817"><li id="cce_10_0015__li1949135461810"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0015__li94910544186"><span>Create a file named <strong id="cce_10_0015__b1168565594">nginx-configmap.yaml</strong> and edit it.</span><p><p id="cce_10_0015__p1249155491817"><strong id="cce_10_0015__b749135421815">vi nginx-configmap.yaml</strong></p>
<ol id="cce_10_0015__ol34911754131817"><li id="cce_10_0015__li1949135461810"><span>Use kubectl to access the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0015__li94910544186"><span>Create a file named <strong id="cce_10_0015__b1168565594">nginx-configmap.yaml</strong> and edit it.</span><p><p id="cce_10_0015__p1249155491817"><strong id="cce_10_0015__b749135421815">vi nginx-configmap.yaml</strong></p>
<div class="p" id="cce_10_0015__p1265811921915">As shown in the following example, the <strong id="cce_10_0015__b66288220412">cce-configmap</strong> ConfigMap is imported to the workload. <i><span class="varname" id="cce_10_0015__varname040473852310">SPECIAL_LEVEL</span></i> and <i><span class="varname" id="cce_10_0015__varname1840483822318">SPECIAL_TYPE</span></i> are the environment variable names in the workload, that is, the key names in the <strong id="cce_10_0015__b3940185910414">cce-configmap</strong> ConfigMap.<pre class="screen" id="cce_10_0015__screen4422017182118">apiVersion: apps/v1
kind: Deployment
metadata:
@ -162,7 +162,7 @@ spec:
<tr id="cce_10_0015__row030921189"><td class="cellrowborder" valign="top" width="15%" headers="mcps1.4.7.4.3.2.1.2.3.1.1 "><p id="cce_10_0015__p11311217185">Mount Path</p>
</td>
<td class="cellrowborder" valign="top" width="85%" headers="mcps1.4.7.4.3.2.1.2.3.1.2 "><p id="cce_10_0015__p1548510541219">Enter a mount point. After the ConfigMap volume is mounted, a configuration file with the key as the file name and value as the file content is generated in the mount path of the container.</p>
<div class="p" id="cce_10_0015__p53873531026">This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as <strong id="cce_10_0015__b172894059355716">/</strong> or <strong id="cce_10_0015__b207686195955716">/var/run</strong>. This may lead to container errors. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, which leads to a container startup failure or workload creation failure.<div class="notice" id="cce_10_0015__note1538785311211"><span class="noticetitle"> NOTICE: </span><div class="noticebody"><p id="cce_10_0015__p16387105311220">If the container is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged.</p>
<div class="p" id="cce_10_0015__p53873531026">This parameter specifies a container path to which a data volume will be mounted. Do not mount the volume to a system directory such as <strong id="cce_10_0015__b172894059355716">/</strong> or <strong id="cce_10_0015__b207686195955716">/var/run</strong>. This may lead to container errors. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, which leads to a container startup failure or workload creation failure.<div class="notice" id="cce_10_0015__note1538785311211"><span class="noticetitle"> NOTICE: </span><div class="noticebody"><p id="cce_10_0015__p16387105311220">If the container is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged.</p>
</div></div>
</div>
</td>
@ -188,7 +188,7 @@ spec:
<pre class="screen" id="cce_10_0015__screen6548113717229">Hello</pre>
</p></li></ol>
<p id="cce_10_0015__p1578674911813"><strong id="cce_10_0015__b3190959134010">Using kubectl</strong></p>
<ol id="cce_10_0015__ol14792145817332"><li id="cce_10_0015__li20792115815330"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0015__li97921458113311"><span>Create a file named <strong id="cce_10_0015__b156961159914">nginx-configmap.yaml</strong> and edit it.</span><p><p id="cce_10_0015__p178311975614"><strong id="cce_10_0015__b158312914569">vi nginx-configmap.yaml</strong></p>
<ol id="cce_10_0015__ol14792145817332"><li id="cce_10_0015__li20792115815330"><span>Use kubectl to access the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0015__li97921458113311"><span>Create a file named <strong id="cce_10_0015__b156961159914">nginx-configmap.yaml</strong> and edit it.</span><p><p id="cce_10_0015__p178311975614"><strong id="cce_10_0015__b158312914569">vi nginx-configmap.yaml</strong></p>
<p id="cce_10_0015__p19313173835612">As shown in the following example, after the ConfigMap volume is mounted, a configuration file with the key as the file name and value as the file content is generated in the <strong id="cce_10_0015__b172619112025">/etc/config</strong> directory of the container.</p>
<pre class="screen" id="cce_10_0015__screen11489958268">apiVersion: apps/v1
kind: Deployment
@ -216,7 +216,7 @@ spec:
configMap:
<strong id="cce_10_0015__b19854194815519">name: </strong><strong id="cce_10_0015__b37890323556"><i><span class="varname" id="cce_10_0015__varname82649213233">cce-configmap</span></i></strong> # Name of the referenced ConfigMap.</pre>
</p></li><li id="cce_10_0015__li495018318575"><span>Create a workload.</span><p><p id="cce_10_0015__p2955123118577"><strong id="cce_10_0015__b1695511311579">kubectl apply -f nginx-configmap.yaml</strong></p>
</p></li><li id="cce_10_0015__li184269385573"><span>After the workload runs properly, the <strong id="cce_10_0015__b9389122917137">SPECIAL_LEVEL</strong> and <strong id="cce_10_0015__b939172961317">SPECIAL_TYPE</strong> files are generated in the <strong id="cce_10_0015__b63928293132">/etc/config</strong> directory. The contents of the files are <strong id="cce_10_0015__b2829164701317">Hello</strong> and <strong id="cce_10_0015__b383812491135">CCE</strong>, respectively.</span><p><ol type="a" id="cce_10_0015__ol1763108185811"><li id="cce_10_0015__li4631187586">Run the following command to view the created pod:<pre class="screen" id="cce_10_0015__screen1263188105810">kubectl get pod | grep nginx-configmap</pre>
</p></li><li id="cce_10_0015__li184269385573"><span>After the workload runs properly, the <strong id="cce_10_0015__b9389122917137">SPECIAL_LEVEL</strong> and <strong id="cce_10_0015__b939172961317">SPECIAL_TYPE</strong> files will be generated in the <strong id="cce_10_0015__b63928293132">/etc/config</strong> directory. The contents of the files are <strong id="cce_10_0015__b2829164701317">Hello</strong> and <strong id="cce_10_0015__b383812491135">CCE</strong>, respectively.</span><p><ol type="a" id="cce_10_0015__ol1763108185811"><li id="cce_10_0015__li4631187586">Run the following command to view the created pod:<pre class="screen" id="cce_10_0015__screen1263188105810">kubectl get pod | grep nginx-configmap</pre>
<div class="p" id="cce_10_0015__p1631148185812">Expected output:<pre class="screen" id="cce_10_0015__screen1663178195820">nginx-configmap-*** 1/1 Running 0 2m18s</pre>
</div>
</li><li id="cce_10_0015__li1863120816581">Run the following command to view the <strong id="cce_10_0015__b4261152113151">SPECIAL_LEVEL</strong> or <strong id="cce_10_0015__b5806102417153">SPECIAL_TYPE</strong> file in the pod:<pre class="screen" id="cce_10_0015__screen163128105811">kubectl exec <i><span class="varname" id="cce_10_0015__varname663111814584">nginx-configmap-***</span></i> -- cat /etc/config/<i><span class="varname" id="cce_10_0015__varname1375216171013">SPECIAL_LEVEL</span></i></pre>


@ -22,12 +22,12 @@ data:
<ul id="cce_10_0016__ul259911812406"><li id="cce_10_0016__li1459919185403"><strong id="cce_10_0016__b16966165016278">Added from secret</strong>: Select a secret and import all keys in the secret as environment variables.</li><li id="cce_10_0016__li12862240165014"><strong id="cce_10_0016__b20968220132913">Added from secret key</strong>: Import the value of a key in a secret as the value of an environment variable.<ul id="cce_10_0016__ul15909447135011"><li id="cce_10_0016__li95213468509"><strong id="cce_10_0016__b199784317300">Variable Name</strong>: name of an environment variable in the workload. The name can be customized and is set to the key name selected in the secret by default.</li><li id="cce_10_0016__li591660145119"><strong id="cce_10_0016__b1268109153118">Variable Value/Reference</strong>: Select a secret and the key to be imported. The corresponding value is imported as a workload environment variable.</li></ul>
<p id="cce_10_0016__p3488115325013">For example, after you import the value of <span class="parmname" id="cce_10_0016__parmname9630135816408"><b>username</b></span> in secret <strong id="cce_10_0016__b6631145811406">mysecret</strong> as the value of workload environment variable <span class="parmname" id="cce_10_0016__parmname863285812402"><b>username</b></span>, an environment variable named <span class="parmname" id="cce_10_0016__parmname0633185814019"><b>username</b></span> exists in the container.</p>
</li></ul>
</p></li><li id="cce_10_0016__li72753567401"><span>Set other workload parameters and click <span class="uicontrol" id="cce_10_0016__uicontrol3387540627139"><b>Create Workload</b></span>.</span><p><p id="cce_10_0016__p2670183316271">After the workload runs properly, <a href="cce_10_00356.html">log in to the container</a> and run the following statement to check whether the secret has been set as an environment variable of the workload:</p>
</p></li><li id="cce_10_0016__li72753567401"><span>Configure other workload parameters and click <span class="uicontrol" id="cce_10_0016__uicontrol3387540627139"><b>Create Workload</b></span>.</span><p><p id="cce_10_0016__p2670183316271">After the workload runs properly, <a href="cce_10_00356.html">log in to the container</a> and run the following statement to check whether the secret has been set as an environment variable of the workload:</p>
<pre class="screen" id="cce_10_0016__screen15459445182819">printenv username</pre>
<p id="cce_10_0016__p413944715257">If the output is the same as the content in the secret, the secret has been set as an environment variable of the workload.</p>
</p></li></ol>
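<p>A minimal sketch of the two import options described above, expressed as a pod template fragment, is shown below; the container name and image are illustrative, and the secret and key names follow the <strong>mysecret</strong>/<strong>username</strong> example. The kubectl-based procedure that follows uses the same pattern:</p>
<pre class="screen">spec:
  template:
    spec:
      containers:
      - name: container-1
        image: nginx:latest
        envFrom:                  # Added from secret: import all keys as environment variables.
        - secretRef:
            name: mysecret
        env:                      # Added from secret key: import a single key as a variable.
        - name: username
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username</pre>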
<p id="cce_10_0016__p2562105044215"><strong id="cce_10_0016__b6015467293411">Using kubectl</strong></p>
<ol id="cce_10_0016__ol6921167164"><li id="cce_10_0016__li159211618168"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0016__li10921362161"><span>Create a file named <strong id="cce_10_0016__b1059510157499">nginx-secret.yaml</strong> and edit it.</span><p><p id="cce_10_0016__p1492110621619"><strong id="cce_10_0016__b192116619168">vi nginx-secret.yaml</strong></p>
<ol id="cce_10_0016__ol6921167164"><li id="cce_10_0016__li159211618168"><span>Use kubectl to access the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0016__li10921362161"><span>Create a file named <strong id="cce_10_0016__b1059510157499">nginx-secret.yaml</strong> and edit it.</span><p><p id="cce_10_0016__p1492110621619"><strong id="cce_10_0016__b192116619168">vi nginx-secret.yaml</strong></p>
<p id="cce_10_0016__p192114614169">Content of the YAML file:</p>
<ul id="cce_10_0016__ul11105134234513"><li id="cce_10_0016__li6846103416564"><strong id="cce_10_0016__b1715838155610">Added from secret</strong>: To add all data in a secret to environment variables, use the <strong id="cce_10_0016__b415816875619">envFrom</strong> parameter. The keys in the secret will become names of environment variables in a workload.<pre class="screen" id="cce_10_0016__screen104944321312">apiVersion: apps/v1
kind: Deployment
@ -93,7 +93,7 @@ spec:
</div>
<div class="section" id="cce_10_0016__section472505211214"><a name="cce_10_0016__section472505211214"></a><a name="section472505211214"></a><h4 class="sectiontitle">Configuring the Data Volume of a Workload</h4><p id="cce_10_0016__p196047901010">You can mount a secret as a volume to the specified container path. Contents in a secret are user-defined. Before that, create a secret. For details, see <a href="cce_10_0153.html">Creating a Secret</a>.</p>
<p id="cce_10_0016__p748195412417"><strong id="cce_10_0016__b201492784833249">Using the CCE console</strong></p>
<ol id="cce_10_0016__ol668714114817"><li id="cce_10_0016__li1179513219432"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0016__li13692141222712"><span>In the navigation pane on the left, click <strong id="cce_10_0016__b211914674314">Workloads</strong>. In the right pane, click the <strong id="cce_10_0016__b812012614435">Deployments</strong> tab. Click <strong id="cce_10_0016__b25719165432">Create Workload</strong> in the upper right corner.</span><p><p id="cce_10_0016__p89743143278">When creating a workload, click <span class="uicontrol" id="cce_10_0016__uicontrol818333124318"><b>Data Storage</b></span> in the <span class="uicontrol" id="cce_10_0016__uicontrol151851931144318"><b>Container Settings</b></span> area. Click <span class="uicontrol" id="cce_10_0016__uicontrol1218616318436"><b>Add Volume</b></span> and select <strong id="cce_10_0016__b318973114434">Secret</strong> from the drop-down list.</p>
<ol id="cce_10_0016__ol668714114817"><li id="cce_10_0016__li1179513219432"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0016__li13692141222712"><span>Choose <strong id="cce_10_0016__b211914674314">Workloads</strong> in the navigation pane. In the right pane, click the <strong id="cce_10_0016__b812012614435">Deployments</strong> tab. Click <strong id="cce_10_0016__b25719165432">Create Workload</strong> in the upper right corner.</span><p><p id="cce_10_0016__p89743143278">When creating a workload, click <span class="uicontrol" id="cce_10_0016__uicontrol818333124318"><b>Data Storage</b></span> in the <span class="uicontrol" id="cce_10_0016__uicontrol151851931144318"><b>Container Settings</b></span> area. Click <span class="uicontrol" id="cce_10_0016__uicontrol1218616318436"><b>Add Volume</b></span> and select <strong id="cce_10_0016__b318973114434">Secret</strong> from the drop-down list.</p>
</p></li><li id="cce_10_0016__li06877414482"><span>Select parameters for mounting a secret volume, as shown in <a href="#cce_10_0016__table861818920109">Table 1</a>.</span><p>
<div class="tablenoborder"><a name="cce_10_0016__table861818920109"></a><a name="table861818920109"></a><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0016__table861818920109" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Mounting a secret volume</caption><thead align="left"><tr id="cce_10_0016__row1962619171020"><th align="left" class="cellrowborder" valign="top" width="15%" id="mcps1.4.7.4.3.2.1.2.3.1.1"><p id="cce_10_0016__p196285991018">Parameter</p>
</th>
@ -110,7 +110,7 @@ spec:
<tr id="cce_10_0016__row12511814162518"><td class="cellrowborder" valign="top" width="15%" headers="mcps1.4.7.4.3.2.1.2.3.1.1 "><p id="cce_10_0016__p11311217185">Mount Path</p>
</td>
<td class="cellrowborder" valign="top" width="85%" headers="mcps1.4.7.4.3.2.1.2.3.1.2 "><p id="cce_10_0016__p1548510541219">Enter a mount point. After the secret volume is mounted, a secret file with the key as the file name and value as the file content is generated in the mount path of the container.</p>
<div class="p" id="cce_10_0016__p53873531026">This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as <strong id="cce_10_0016__b14073263005443">/</strong> or <strong id="cce_10_0016__b15813089835443">/var/run</strong>. This may cause container errors. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, which leads to a container startup failure or workload creation failure.<div class="notice" id="cce_10_0016__note1538785311211"><span class="noticetitle"> NOTICE: </span><div class="noticebody"><p id="cce_10_0016__p16387105311220">If the container is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged.</p>
<div class="p" id="cce_10_0016__p53873531026">This parameter specifies a container path to which a data volume will be mounted. Do not mount the volume to a system directory such as <strong id="cce_10_0016__b14073263005443">/</strong> or <strong id="cce_10_0016__b15813089835443">/var/run</strong>. This may cause container errors. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, which leads to a container startup failure or workload creation failure.<div class="notice" id="cce_10_0016__note1538785311211"><span class="noticetitle"> NOTICE: </span><div class="noticebody"><p id="cce_10_0016__p16387105311220">If the container is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged.</p>
</div></div>
</div>
</td>
@ -135,7 +135,7 @@ spec:
<p id="cce_10_0016__p151342044102718">The expected output is the same as the content in the secret.</p>
</p></li></ol>
<p id="cce_10_0016__p146758020252"><strong id="cce_10_0016__b9979523373411">Using kubectl</strong></p>
<ol id="cce_10_0016__ol1392823394416"><li id="cce_10_0016__li1681024195710"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0016__li1020013819415"><span>Create a file named <strong id="cce_10_0016__b10773122084914">nginx-secret.yaml</strong> and edit it.</span><p><p id="cce_10_0016__p106999147413"><strong id="cce_10_0016__b6469155655719">vi nginx-secret.yaml</strong></p>
<ol id="cce_10_0016__ol1392823394416"><li id="cce_10_0016__li1681024195710"><span>Use kubectl to access the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0016__li1020013819415"><span>Create a file named <strong id="cce_10_0016__b10773122084914">nginx-secret.yaml</strong> and edit it.</span><p><p id="cce_10_0016__p106999147413"><strong id="cce_10_0016__b6469155655719">vi nginx-secret.yaml</strong></p>
<div class="p" id="cce_10_0016__p9949138153913">In the following example, the username and password in the <strong id="cce_10_0016__b14926152314461">mysecret</strong> secret are saved in the <strong id="cce_10_0016__b8927122313461">/etc/foo</strong> directory as files.<pre class="screen" id="cce_10_0016__screen11489958268">apiVersion: apps/v1
kind: Deployment
metadata:
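# The rest of this manifest is not shown in full. Below is a minimal sketch of the
# volume definition it describes; apart from mysecret and /etc/foo, the names
# (container, image, volume) are illustrative.
spec:
  template:
    spec:
      containers:
      - name: container-1
        image: nginx:latest
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/foo          # username and password appear here as files
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: mysecret         # Name of the referenced secret.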


@ -4,9 +4,9 @@
<div id="body1522667123001"><p id="cce_10_0018__p78381781804">CCE works with AOM to collect workload logs. When a node is created, ICAgent (a DaemonSet named <strong id="cce_10_0018__b13829819578">icagent</strong> in the <strong id="cce_10_0018__b697274313582">kube-system</strong> namespace of a cluster) of AOM is installed by default. ICAgent collects workload logs and reports them to AOM. You can view workload logs on the CCE or AOM console.</p>
<div class="section" id="cce_10_0018__section17884754413"><h4 class="sectiontitle">Constraints</h4><p id="cce_10_0018__p23831558355">ICAgent only collects text logs in .log, .trace, and .out formats.</p>
</div>
<div class="section" id="cce_10_0018__section1951732710"><h4 class="sectiontitle">Using ICAgent to Collect Logs</h4><ol id="cce_10_0018__ol1253654833013"><li id="cce_10_0018__li19284854163014"><span>When <a href="cce_10_0047.html">creating a workload</a>, set logging for the container.</span></li><li id="cce_10_0018__li2427158104715"><span>Click <span><img id="cce_10_0018__image134281583473" src="en-us_image_0000001898026057.png"></span> to add a log policy.</span><p><div class="p" id="cce_10_0018__p9862125810472">The following uses Nginx as an example. Log policies vary depending on workloads.<div class="fignone" id="cce_10_0018__fig19856172153216"><span class="figcap"><b>Figure 1 </b>Adding a log policy</span><br><span><img id="cce_10_0018__image664110265156" src="en-us_image_0000001851587156.png"></span></div>
<div class="section" id="cce_10_0018__section1951732710"><h4 class="sectiontitle">Using ICAgent to Collect Logs</h4><ol id="cce_10_0018__ol1253654833013"><li id="cce_10_0018__li19284854163014"><span>When <a href="cce_10_0047.html">creating a workload</a>, set logging for the container.</span></li><li id="cce_10_0018__li2427158104715"><span>Click <span><img id="cce_10_0018__image134281583473" src="en-us_image_0000001950317236.png"></span> to add a log policy.</span><p><div class="p" id="cce_10_0018__p9862125810472">The following uses Nginx as an example. Log policies vary depending on workloads.<div class="fignone" id="cce_10_0018__fig19856172153216"><span class="figcap"><b>Figure 1 </b>Adding a log policy</span><br><span><img id="cce_10_0018__image664110265156" src="en-us_image_0000001981276785.png"></span></div>
</div>
</p></li><li id="cce_10_0018__li1479392315150"><span>Set <strong id="cce_10_0018__b5461630195419">Volume Type</strong> to <span class="uicontrol" id="cce_10_0018__uicontrol105212302547"><b>hostPath</b></span> or <span class="uicontrol" id="cce_10_0018__uicontrol1752103095410"><b>EmptyDir</b></span>.</span><p>
</p></li><li id="cce_10_0018__li1479392315150"><span>Set <strong id="cce_10_0018__b5461630195419">Volume Type</strong> to <span class="uicontrol" id="cce_10_0018__uicontrol105212302547"><b>hostPath</b></span> or <span class="uicontrol" id="cce_10_0018__uicontrol1752103095410"><b>emptyDir</b></span>.</span><p>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0018__table115901715550" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Configuring log policies</caption><thead align="left"><tr id="cce_10_0018__row45851074554"><th align="left" class="cellrowborder" valign="top" width="22.12%" id="mcps1.3.3.2.3.2.1.2.3.1.1"><p id="cce_10_0018__p115843785517">Parameter</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="77.88000000000001%" id="mcps1.3.3.2.3.2.1.2.3.1.2"><p id="cce_10_0018__p12584573550">Description</p>
@ -155,8 +155,8 @@ spec:
<td class="cellrowborder" valign="top" width="19.23%" headers="mcps1.3.4.7.2.4.1.2 "><p id="cce_10_0018__p6329709512">Extended host path</p>
</td>
<td class="cellrowborder" valign="top" width="63.71%" headers="mcps1.3.4.7.2.4.1.3 "><p id="cce_10_0018__p32881805119">Extended host paths contain pod IDs or container names to distinguish different containers into which the host path is mounted.</p>
<p id="cce_10_0018__p1728888115112">A level-3 directory is added to the original volume directory/subdirectory. You can easily obtain the files output by a single <span class="keyword" id="cce_10_0018__keyword1146433393">Pod</span>.</p>
<ul id="cce_10_0018__ul2028828105113"><li id="cce_10_0018__li428815865110"><strong id="cce_10_0018__b1203413527">None</strong>: No extended path is configured. </li><li id="cce_10_0018__li62889814517"><strong id="cce_10_0018__b1523679015">PodUID</strong>: ID of a pod.</li><li id="cce_10_0018__li528818135113"><strong id="cce_10_0018__b604021733">PodName</strong>: name of a pod.</li><li id="cce_10_0018__li62882084517"><strong id="cce_10_0018__b1376912744">PodUID/ContainerName</strong>: ID of a pod or name of a container.</li><li id="cce_10_0018__li528898175110"><strong id="cce_10_0018__b8818125942116">PodName/ContainerName</strong>: name of a pod or container.</li></ul>
<p id="cce_10_0018__p1728888115112">A level-3 directory is added to the original volume directory/subdirectory. You can easily obtain the files output by a single <span class="keyword" id="cce_10_0018__keyword1772976139">Pod</span>.</p>
<ul id="cce_10_0018__ul2028828105113"><li id="cce_10_0018__li428815865110"><strong id="cce_10_0018__b1197707643">None</strong>: No extended path is configured. </li><li id="cce_10_0018__li62889814517"><strong id="cce_10_0018__b1247156547">PodUID</strong>: ID of a pod.</li><li id="cce_10_0018__li528818135113"><strong id="cce_10_0018__b1115375169">PodName</strong>: name of a pod.</li><li id="cce_10_0018__li62882084517"><strong id="cce_10_0018__b481519217">PodUID/ContainerName</strong>: ID of a pod or name of a container.</li><li id="cce_10_0018__li528898175110"><strong id="cce_10_0018__b8818125942116">PodName/ContainerName</strong>: name of a pod or container.</li></ul>
</td>
</tr>
<tr id="cce_10_0018__row732915085118"><td class="cellrowborder" valign="top" width="17.06%" headers="mcps1.3.4.7.2.4.1.1 "><p id="cce_10_0018__p17329004514">policy.logs.rotate</p>
@ -164,7 +164,7 @@ spec:
<td class="cellrowborder" valign="top" width="19.23%" headers="mcps1.3.4.7.2.4.1.2 "><p id="cce_10_0018__p123292055113">Log dump</p>
</td>
<td class="cellrowborder" valign="top" width="63.71%" headers="mcps1.3.4.7.2.4.1.3 "><p id="cce_10_0018__p1017113396539">Log dump refers to rotating log files on a local host.</p>
<ul id="cce_10_0018__ul1617120398533"><li id="cce_10_0018__li71711639105316"><strong id="cce_10_0018__b4837638192520">Enabled</strong>: AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped immediately. A new <strong id="cce_10_0018__b98429388254">.zip</strong> file is generated in the directory where the log file locates. For a log file, AOM stores only the latest 20 <strong id="cce_10_0018__b2216332192917">.zip</strong> files. When the number of <strong id="cce_10_0018__b1621653252914">.zip</strong> files exceeds 20, earlier <strong id="cce_10_0018__b1321623212917">.zip</strong> files will be deleted. After the dump is complete, the log file in AOM will be cleared.</li><li id="cce_10_0018__li817133985315"><strong id="cce_10_0018__b1317453866">Disabled</strong>: AOM does not dump log files.</li></ul>
<ul id="cce_10_0018__ul1617120398533"><li id="cce_10_0018__li71711639105316"><strong id="cce_10_0018__b4837638192520">Enabled</strong>: AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped immediately. A new <strong id="cce_10_0018__b98429388254">.zip</strong> file is generated in the directory where the log file locates. For a log file, AOM stores only the latest 20 <strong id="cce_10_0018__b2216332192917">.zip</strong> files. When the number of <strong id="cce_10_0018__b1621653252914">.zip</strong> files exceeds 20, earlier <strong id="cce_10_0018__b1321623212917">.zip</strong> files will be deleted. After the dump is complete, the log file in AOM will be cleared.</li><li id="cce_10_0018__li817133985315"><strong id="cce_10_0018__b1930637705">Disabled</strong>: AOM does not dump log files.</li></ul>
<div class="note" id="cce_10_0018__note121711639195319"><span class="notetitle"> NOTE: </span><div class="notebody"><ul id="cce_10_0018__ul817183918533"><li id="cce_10_0018__li9171183945310">AOM rotates log files using copytruncate. Before enabling log dumping, ensure that log files are written in the append mode. Otherwise, file holes may occur.</li><li id="cce_10_0018__li1117153914535">Currently, mainstream log components such as Log4j and Logback support log file rotation. If you have already set rotation for log files, skip the configuration. Otherwise, conflicts may occur.</li><li id="cce_10_0018__li317113915532">You are advised to configure log file rotation for your own services to flexibly control the size and number of rolled files.</li></ul>
</div></div>
</td>
@ -174,7 +174,7 @@ spec:
<td class="cellrowborder" valign="top" width="19.23%" headers="mcps1.3.4.7.2.4.1.2 "><p id="cce_10_0018__p14388112019519">Collection path</p>
</td>
<td class="cellrowborder" valign="top" width="63.71%" headers="mcps1.3.4.7.2.4.1.3 "><p id="cce_10_0018__p63882201153">A collection path narrows down the scope of collection to specified logs.</p>
<ul id="cce_10_0018__ul73883209510"><li id="cce_10_0018__li14388162011513">If no collection path is specified, log files in <strong id="cce_10_0018__b473416318">.log</strong>, <strong id="cce_10_0018__b1965847560">.trace</strong>, and <strong id="cce_10_0018__b688464659">.out</strong> formats will be collected from the specified path.</li><li id="cce_10_0018__li03886201854"><strong id="cce_10_0018__b1024751887">/Path/**/</strong> indicates that all log files in <strong id="cce_10_0018__b120678913">.log</strong>, <strong id="cce_10_0018__b1780575222">.trace</strong>, and <strong id="cce_10_0018__b1093378982">.out</strong> formats will be recursively collected from the specified path and all subdirectories at 5 levels deep.</li><li id="cce_10_0018__li1938811201058">* in log file names indicates a fuzzy match.</li></ul>
<ul id="cce_10_0018__ul73883209510"><li id="cce_10_0018__li14388162011513">If no collection path is specified, log files in <strong id="cce_10_0018__b1467704559">.log</strong>, <strong id="cce_10_0018__b1015947165">.trace</strong>, and <strong id="cce_10_0018__b441913230">.out</strong> formats will be collected from the specified path.</li><li id="cce_10_0018__li03886201854"><strong id="cce_10_0018__b1440530493">/Path/**/</strong> indicates that all log files in <strong id="cce_10_0018__b358580375">.log</strong>, <strong id="cce_10_0018__b843315747">.trace</strong>, and <strong id="cce_10_0018__b436971263">.out</strong> formats will be recursively collected from the specified path and all subdirectories at 5 levels deep.</li><li id="cce_10_0018__li1938811201058">* in log file names indicates a fuzzy match.</li></ul>
<p id="cce_10_0018__p17388152013515">Example: The collection path <strong id="cce_10_0018__b19951612237">/tmp/**/test*.log</strong> indicates that all <strong id="cce_10_0018__b49571315239">.log</strong> files prefixed with <strong id="cce_10_0018__b4958101202315">test</strong> will be collected from <strong id="cce_10_0018__b695815172316">/tmp</strong> and subdirectories at 5 levels deep.</p>
<div class="caution" id="cce_10_0018__note153881220751"><span class="cautiontitle"> CAUTION: </span><div class="cautionbody"><p id="cce_10_0018__p938810204516">Ensure that ICAgent is of v5.12.22 or later.</p>
</div></div>


@ -6,7 +6,7 @@
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0010.html">Overview</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0280.html">Container Network Models</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0280.html">Container Network</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0247.html">Service</a></strong><br>
</li>
@ -14,10 +14,6 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0359.html">DNS</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0675.html">Container Network Settings</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0679.html">Cluster Network Settings</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0399.html">Configuring Intra-VPC Access</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0400.html">Accessing the Internet from a Container</a></strong><br>


@ -1,6 +1,6 @@
<a name="cce_10_0024"></a><a name="cce_10_0024"></a>
<h1 class="topictitle1">Cloud Trace Service</h1>
<h1 class="topictitle1">Log Auditing</h1>
<div id="body1525226397666"></div>
<div>
<ul class="ullinks">


@ -590,7 +590,7 @@
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0024.html">Cloud Trace Service</a></div>
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0024.html">Log Auditing</a></div>
</div>
</div>


@ -19,7 +19,7 @@
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0024.html">Cloud Trace Service</a></div>
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0024.html">Log Auditing</a></div>
</div>
</div>


@ -2,7 +2,7 @@
<h1 class="topictitle1">Creating a CCE Standard/Turbo Cluster</h1>
<div id="body1505899032898"><p id="cce_10_0028__p126541913151116">On the CCE console, you can easily create Kubernetes clusters. After a cluster is created, the master node is hosted by CCE. You only need to create worker nodes. In this way, you can implement cost-effective O&amp;M and efficient service deployment.</p>
<div class="section" id="cce_10_0028__section1386743114294"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0028__ul686414167496"><li id="cce_10_0028__li190817135320">During the node creation, software packages are downloaded from OBS using the domain name. A private DNS server must be used to resolve the OBS domain name. Therefore, the DNS server address of the subnet where the node resides must be set to the private DNS server address so that the node can access the private DNS server. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name.</li><li id="cce_10_0028__li124606217339">You can create a maximum of 50 clusters in a single region.</li><li id="cce_10_0028__li1186441616491">After a cluster is created, the following items cannot be changed:<ul id="cce_10_0028__ul1386431634910"><li id="cce_10_0028__li6864131614492">Cluster type</li><li id="cce_10_0028__li359558115311">Number of master nodes in the cluster</li><li id="cce_10_0028__li452948112016">AZ of a master node</li><li id="cce_10_0028__li1686412165496">Network configurations of the cluster, such as the VPC, subnet, Service CIDR block, IPv6 settings, and kube-proxy settings.</li><li id="cce_10_0028__li1686451618494">Network model. For example, change <strong id="cce_10_0028__b16979154810810">Tunnel network</strong> to <strong id="cce_10_0028__b1297916485820">VPC network</strong>.</li></ul>
<div class="section" id="cce_10_0028__section1386743114294"><h4 class="sectiontitle">Precautions</h4><ul id="cce_10_0028__ul686414167496"><li id="cce_10_0028__li1186441616491">After a cluster is created, the following items cannot be changed:<ul id="cce_10_0028__ul1386431634910"><li id="cce_10_0028__li6864131614492">Cluster type</li><li id="cce_10_0028__li359558115311">Number of master nodes in the cluster</li><li id="cce_10_0028__li452948112016">AZ of a master node</li><li id="cce_10_0028__li1686412165496">Network configurations of the cluster, such as the VPC, subnet, Service CIDR block, IPv6 settings, and kube-proxy settings</li><li id="cce_10_0028__li1686451618494">Network model. For example, change <strong id="cce_10_0028__b16979154810810">Tunnel network</strong> to <strong id="cce_10_0028__b1297916485820">VPC network</strong>.</li></ul>
</li></ul>
</div>
<div class="section" id="cce_10_0028__section176228482126"><h4 class="sectiontitle">Step 1: Log In to the CCE Console</h4><ol id="cce_10_0028__ol1233331493511"><li id="cce_10_0028__li16411127162211"><span>Log in to the CCE console.</span></li><li id="cce_10_0028__li833491416359"><span>On the <span class="uicontrol" id="cce_10_0028__uicontrol18993624144518"><b>Clusters</b></span> page, click <strong id="cce_10_0028__b1229822719327"></strong><strong id="cce_10_0028__b0633122963216">Create</strong> <strong id="cce_10_0028__b84923612393">Cluster</strong> in the upper right corner.</span></li></ol>
@ -41,7 +41,7 @@
<tr id="cce_10_0028__row169221520111012"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.4.4.1.3.1.1 "><p id="cce_10_0028__p1292217201100">Master Nodes</p>
</td>
<td class="cellrowborder" valign="top" width="80%" headers="mcps1.3.4.4.1.3.1.2 "><p id="cce_10_0028__p11922720121010">Select the number of master nodes. The master nodes are automatically hosted by CCE and deployed with Kubernetes cluster management components such as kube-apiserver, kube-controller-manager, and kube-scheduler.</p>
<ul id="cce_10_0028__ul592282031013"><li id="cce_10_0028__li0922122016106"><strong id="cce_10_0028__b1262385366">Multiple</strong>: Three master nodes will be created for high cluster availability.</li><li id="cce_10_0028__li29225205107"><strong id="cce_10_0028__b1859615018366">Single</strong>: Only one master node will be created in your cluster.</li></ul>
<ul id="cce_10_0028__ul592282031013"><li id="cce_10_0028__li9706202816307"><strong id="cce_10_0028__b741055618291">3 Masters</strong>: Three master nodes will be created for high cluster availability.</li><li id="cce_10_0028__li29225205107"><strong id="cce_10_0028__b1859615018366">Single</strong>: Only one master node will be created in your cluster.</li></ul>
<div class="p" id="cce_10_0028__p6922192001020">You can also select AZs for the master nodes. By default, AZs are allocated automatically for the master nodes.<ul id="cce_10_0028__ul16922182051017"><li id="cce_10_0028__li149221020151014"><strong id="cce_10_0028__b82531791387">Automatic</strong>: Master nodes are randomly distributed in different AZs for cluster DR. If the number of available AZs is less than the number of nodes to be created, CCE will create the nodes in the AZs with sufficient resources to preferentially ensure cluster creation. In this case, AZ-level DR may not be ensured.</li><li id="cce_10_0028__li12922920151015"><strong id="cce_10_0028__b181966766871353">Custom</strong>: Master nodes are deployed in specific AZs.<div class="p" id="cce_10_0028__p159229208105">If there is one master node in your cluster, you can select one AZ for the master node. If there are multiple master nodes in your cluster, you can select multiple AZs for the master nodes.<ul id="cce_10_0028__ul7922320191015"><li id="cce_10_0028__li149221220101019"><strong id="cce_10_0028__b28486679971353">AZ</strong>: Master nodes are deployed in different AZs for cluster DR.</li><li id="cce_10_0028__li99221420111018"><strong id="cce_10_0028__b212140326871353">Host</strong>: Master nodes are deployed on different hosts in the same AZ for cluster DR.</li><li id="cce_10_0028__li1292242021018"><strong id="cce_10_0028__b1048015443112">Custom</strong>: Master nodes are deployed in the AZs you specified.</li></ul>
</div>
</li></ul>
@ -70,6 +70,13 @@
<td class="cellrowborder" valign="top" width="80%" headers="mcps1.3.4.7.2.3.1.2 "><p id="cce_10_0028__p1792210200105">Select the subnet to which the master nodes belong. If no subnet is available, click <span class="uicontrol" id="cce_10_0028__uicontrol179221820151013"><b>Create Subnet</b></span> to create one. The value cannot be changed after the cluster is created.</p>
</td>
</tr>
<tr id="cce_10_0028__row193881250141720"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.4.7.2.3.1.1 "><p id="cce_10_0028__p626015111172">Default Security Group</p>
</td>
<td class="cellrowborder" valign="top" width="80%" headers="mcps1.3.4.7.2.3.1.2 "><div class="p" id="cce_10_0028__p12260105119173">Select the security group automatically generated by CCE or use the existing one as the default security group of the node.<div class="notice" id="cce_10_0028__note1426035117179"><span class="noticetitle"> NOTICE: </span><div class="noticebody"><p id="cce_10_0028__p102601451191715">The default security group must allow traffic from certain ports to ensure normal communication. Otherwise, the node cannot be created. </p>
</div></div>
</div>
</td>
</tr>
<tr id="cce_10_0028__row13923142019102"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.4.7.2.3.1.1 "><p id="cce_10_0028__p0923172021015">IPv6</p>
</td>
<td class="cellrowborder" valign="top" width="80%" headers="mcps1.3.4.7.2.3.1.2 "><p id="cce_10_0028__p9923122015109">If enabled, cluster resources, including nodes and workloads, can be accessed through IPv6 CIDR blocks.</p>
@ -155,7 +162,7 @@
</tr>
<tr id="cce_10_0028__row17321218162319"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.4.11.1.3.1.1 "><p id="cce_10_0028__p1482719142312">Overload Control</p>
</td>
<td class="cellrowborder" valign="top" width="80%" headers="mcps1.3.4.11.1.3.1.2 "><p id="cce_10_0028__p12482111932313">After this function is enabled, concurrent requests will be dynamically controlled based on the resource demands received by master nodes to ensure the stable running of the master nodes and the cluster. For details, see <a href="cce_10_0602.html">Cluster Overload Control</a>.</p>
<td class="cellrowborder" valign="top" width="80%" headers="mcps1.3.4.11.1.3.1.2 "><p id="cce_10_0028__p12482111932313">After this function is enabled, concurrent requests will be dynamically controlled based on the resource demands received by master nodes to ensure the stable running of the master nodes and the cluster. For details, see <a href="cce_10_0602.html">Enabling Overload Control for a Cluster</a>.</p>
</td>
</tr>
<tr id="cce_10_0028__row6924112016103"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.4.11.1.3.1.1 "><p id="cce_10_0028__p109241220111015">Disk Encryption for Master Nodes</p>
@ -167,8 +174,8 @@
</tr>
<tr id="cce_10_0028__row11925142091019"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.4.11.1.3.1.1 "><p id="cce_10_0028__p129241206101">Resource Tag</p>
</td>
<td class="cellrowborder" valign="top" width="80%" headers="mcps1.3.4.11.1.3.1.2 "><p id="cce_10_0028__p10924220161020">You can add resource tags to classify resources.</p>
<p id="cce_10_0028__p992432014101">You can create <span class="uicontrol" id="cce_10_0028__uicontrol719003014449"><b>predefined tags</b></span> on the TMS console. The predefined tags are available to all resources that support tags. You can use predefined tags to improve the tag creation and resource migration efficiency. </p>
<td class="cellrowborder" valign="top" width="80%" headers="mcps1.3.4.11.1.3.1.2 "><p id="cce_10_0028__p10924220161020">You can add resource tags to classify resources. A maximum of 20 resource tags can be added.</p>
<p id="cce_10_0028__p992432014101">You can create <span class="uicontrol" id="cce_10_0028__uicontrol1919112925819"><b>predefined tags</b></span> on the TMS console. The predefined tags are available to all resources that support tags. You can use predefined tags to improve the tag creation and resource migration efficiency. </p>
</td>
</tr>
<tr id="cce_10_0028__row5925122011010"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.4.11.1.3.1.1 "><p id="cce_10_0028__p129253204106">Description</p>
@ -214,7 +221,12 @@
</th>
</tr>
</thead>
<tbody><tr id="cce_10_0028__row179261420161011"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.5.5.2.1.3.1.1 "><p id="cce_10_0028__p44622153710">CCE Node Problem Detector</p>
<tbody><tr id="cce_10_0028__row12836181383713"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.5.5.2.1.3.1.1 "><p id="cce_10_0028__p1259551415374">Cloud Native Cluster Monitoring</p>
</td>
<td class="cellrowborder" valign="top" width="80%" headers="mcps1.3.5.5.2.1.3.1.2 "><p id="cce_10_0028__p3595141413374">(Optional) If selected, this add-on (<a href="cce_10_0406.html">Cloud Native Cluster Monitoring</a>) will be automatically installed. Cloud Native Cluster Monitoring collects monitoring metrics for your cluster and reports the metrics to AOM. The agent mode does not support HPA based on custom Prometheus statements. If related functions are required, install this add-on manually after the cluster is created.</p>
</td>
</tr>
<tr id="cce_10_0028__row179261420161011"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.5.5.2.1.3.1.1 "><p id="cce_10_0028__p44622153710">CCE Node Problem Detector</p>
</td>
<td class="cellrowborder" valign="top" width="80%" headers="mcps1.3.5.5.2.1.3.1.2 "><p id="cce_10_0028__p1192622071012">(Optional) If selected, this add-on (<a href="cce_10_0132.html">CCE Node Problem Detector</a>) will be automatically installed to detect faults and isolate nodes for prompt cluster troubleshooting.</p>
</td>
@ -258,7 +270,12 @@
</th>
</tr>
</thead>
<tbody><tr id="cce_10_0028__row52959235477"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.6.5.2.1.3.1.1 "><p id="cce_10_0028__p16295172334719">CCE Node Problem Detector</p>
<tbody><tr id="cce_10_0028__row16295523154716"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.6.5.2.1.3.1.1 "><p id="cce_10_0028__p8295182318471">Cloud Native Cluster Monitoring</p>
</td>
<td class="cellrowborder" valign="top" width="80%" headers="mcps1.3.6.5.2.1.3.1.2 "><p id="cce_10_0028__p13142125105312">Select an AOM instance for Cloud Native Cluster Monitoring to report metrics. If no AOM instance is available, click <strong id="cce_10_0028__b13662840164819">Creating Instance</strong> to create one.</p>
</td>
</tr>
<tr id="cce_10_0028__row52959235477"><td class="cellrowborder" valign="top" width="20%" headers="mcps1.3.6.5.2.1.3.1.1 "><p id="cce_10_0028__p16295172334719">CCE Node Problem Detector</p>
</td>
<td class="cellrowborder" valign="top" width="80%" headers="mcps1.3.6.5.2.1.3.1.2 "><p id="cce_10_0028__p158281157145915">This add-on is unconfigurable. After the cluster is created, choose <strong id="cce_10_0028__b18358104693712">Add-ons</strong> in the navigation pane of the cluster console and modify the configuration.</p>
</td>


@ -4,15 +4,17 @@
<div id="body1506157580881"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0213.html">Cluster Configuration Management</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0213.html">Modifying Cluster Configurations</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0602.html">Cluster Overload Control</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0602.html">Enabling Overload Control for a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0403.html">Changing Cluster Scale</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0426.html">Changing the Default Security Group of a Node</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0212.html">Deleting a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0214.html">Hibernating and Waking Up a Cluster</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0214.html">Hibernating or Waking Up a Cluster</a></strong><br>
</li>
</ul>


@ -8,6 +8,8 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0012.html">Creating a Node Pool</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0658.html">Scaling a Node Pool</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0222.html">Managing a Node Pool</a></strong><br>
</li>
</ul>


@ -1,9 +1,9 @@
<a name="cce_10_00356"></a><a name="cce_10_00356"></a>
<h1 class="topictitle1">Accessing a Container</h1>
<h1 class="topictitle1">Logging In to a Container</h1>
<div id="body0000001151211236"><div class="section" id="cce_10_00356__section7379040716"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_00356__p1134114511811">If you encounter unexpected problems when using a container, you can log in to the container to debug it.</p>
</div>
<div class="section" id="cce_10_00356__section1293318163114"><h4 class="sectiontitle">Logging In to a Container Using kubectl</h4><ol id="cce_10_00356__ol1392823394416"><li id="cce_10_00356__li1681024195710"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_00356__li1020013819415"><span>Run the following command to view the created pod:</span><p><pre class="screen" id="cce_10_00356__screen156898195914">kubectl get pod</pre>
<div class="section" id="cce_10_00356__section1293318163114"><h4 class="sectiontitle">Using kubectl</h4><ol id="cce_10_00356__ol1392823394416"><li id="cce_10_00356__li1681024195710"><span>Use kubectl to access the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_00356__li1020013819415"><span>Run the following command to view the created pod:</span><p><pre class="screen" id="cce_10_00356__screen156898195914">kubectl get pod</pre>
<div class="p" id="cce_10_00356__p18257204595920">The example output is as follows:<pre class="screen" id="cce_10_00356__screen7944553592">NAME READY STATUS RESTARTS AGE
nginx-59d89cb66f-mhljr 1/1 Running 0 11m</pre>
</div>
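<p>The remaining steps of this topic are truncated in this excerpt. As a hedged sketch of the typical next step (the pod name is taken from the example output above; the shell path is an assumption about the image), the login itself uses <strong>kubectl exec</strong>:</p>
<pre class="screen"># Open an interactive shell in the pod listed above (assumes /bin/sh exists in the image).
kubectl exec -it nginx-59d89cb66f-mhljr -- /bin/sh
# If the pod runs multiple containers, name the target container explicitly.
kubectl exec -it nginx-59d89cb66f-mhljr -c &lt;container-name&gt; -- /bin/sh</pre>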

View File

@ -1,11 +1,11 @@
<a name="cce_10_0036"></a><a name="cce_10_0036"></a>
<h1 class="topictitle1">Stopping a Node</h1>
<div id="body1564130562761"><div class="section" id="cce_10_0036__section127213017388"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0036__p866311509249">After a node in the cluster is stopped, services on the node are also stopped. Before stopping a node, ensure that discontinuity of the services on the node will not result in adverse impacts.</p>
<div id="body1564130562761"><div class="section" id="cce_10_0036__section127213017388"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0036__p866311509249">When a node in the cluster is stopped, all services on that node will also be stopped, and the node will no longer be available for scheduling. Check if your services will be affected before stopping a node.</p>
</div>
<div class="section" id="cce_10_0036__section1489437103610"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0036__ul0917755162415"><li id="cce_10_0036__li1891719552246">Deleting a node will lead to pod migration, which may affect services. Therefore, delete nodes during off-peak hours.</li><li id="cce_10_0036__li791875552416">Unexpected risks may occur during the operation. Back up related data in advance.</li><li id="cce_10_0036__li15918105582417">While the node is being deleted, the backend will set the node to the unschedulable state.</li><li id="cce_10_0036__li12918145520241">Only worker nodes can be stopped.</li></ul>
<div class="section" id="cce_10_0036__section1489437103610"><h4 class="sectiontitle">Precautions</h4><ul id="cce_10_0036__ul0917755162415"><li id="cce_10_0036__li1891719552246">Deleting a node will lead to pod migration, which may affect services. Perform this operation during off-peak hours.</li><li id="cce_10_0036__li791875552416">Unexpected risks may occur during the operation. Back up data beforehand.</li></ul>
</div>
<div class="section" id="cce_10_0036__section14341135612442"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0036__ol5687174923613"><li id="cce_10_0036__li133915311359"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0036__li159521745431"><span>In the navigation pane, choose <span class="uicontrol" id="cce_10_0036__uicontrol378153945103635"><b>Nodes</b></span>. On the displayed page, click the <strong id="cce_10_0036__b1786259085103635">Nodes</strong> tab.</span></li><li id="cce_10_0036__li224719151931"><span>Locate the target node and click its name.</span></li><li id="cce_10_0036__li117301253183717"><span>In the upper right corner of the ECS details page, click <strong id="cce_10_0036__b2347626195316">Stop</strong>. In the displayed dialog box, click <strong id="cce_10_0036__b434722605318">Yes</strong>.</span><p><div class="fignone" id="cce_10_0036__fig19269101385311"><span class="figcap"><b>Figure 1 </b>ECS details page</span><br><span><img id="cce_10_0036__image124001418192" src="en-us_image_0000001851745864.png"></span></div>
<div class="section" id="cce_10_0036__section14341135612442"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0036__ol5687174923613"><li id="cce_10_0036__li133915311359"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0036__li159521745431"><span>In the navigation pane, choose <span class="uicontrol" id="cce_10_0036__uicontrol378153945103635"><b>Nodes</b></span>. On the displayed page, click the <strong id="cce_10_0036__b1786259085103635">Nodes</strong> tab.</span></li><li id="cce_10_0036__li224719151931"><span>Locate the target node and click its name.</span></li><li id="cce_10_0036__li117301253183717"><span>In the upper right corner of the ECS details page, click <strong id="cce_10_0036__b2347626195316">Stop</strong>. In the displayed dialog box, click <strong id="cce_10_0036__b434722605318">Yes</strong>.</span><p><div class="fignone" id="cce_10_0036__fig19269101385311"><span class="figcap"><b>Figure 1 </b>ECS details page</span><br><span><img id="cce_10_0036__image124001418192" src="en-us_image_0000001981276729.png"></span></div>
</p></li></ol>
</div>
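<p>Not part of the documented procedure, but a common companion step: draining the node before stopping it lets its pods be rescheduled gracefully. A minimal sketch (the node name is a placeholder):</p>
<pre class="screen"># Mark the node unschedulable so no new pods are placed on it.
kubectl cordon &lt;node-name&gt;
# Evict pods that can be rescheduled elsewhere; DaemonSet pods are skipped.
kubectl drain &lt;node-name&gt; --ignore-daemonsets</pre>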
</div>

View File

@ -12,6 +12,10 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0616.html">Dynamically Mounting an EVS Disk to a StatefulSet</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0859.html">Encrypting EVS Disks</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0860.html">Expanding the Capacity of an EVS Disk</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0381.html">Snapshots and Backups</a></strong><br>
</li>
</ul>

View File

@ -8,15 +8,15 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0673.html">Creating a Workload</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0130.html">Configuring a Container</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0130.html">Configuring a Workload</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_00356.html">Accessing a Container</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_00356.html">Logging In to a Container</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0007.html">Managing Workloads and Jobs</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0007.html">Managing Workloads</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0833.html">Managing Custom Resources</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0463.html">Kata Runtime and Common Runtime</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0465.html">Pod Security</a></strong><br>
</li>
</ul>
</div>

View File

@ -7,7 +7,7 @@
</div></div>
</li></ul>
</div>
<div class="section" id="cce_10_0047__section1996635141916"><h4 class="sectiontitle">Using the CCE Console</h4><ol id="cce_10_0047__ol2012902601117"><li id="cce_10_0047__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0047__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0047__b1421120185819">Workloads</strong> in the navigation pane, and click the <strong id="cce_10_0047__b139221951155717">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0047__li67891737151520"><span>Set basic information about the workload. </span><p><div class="p" id="cce_10_0047__p1259466151612"><strong id="cce_10_0047__b1493704971917">Basic Info</strong><ul id="cce_10_0047__ul6954101318184"><li id="cce_10_0047__li11514131617185"><strong id="cce_10_0047__b17688966208">Workload Type</strong>: Select <strong id="cce_10_0047__b19319191110206">Deployment</strong>. For details about workload types, see <a href="cce_10_0006.html">Overview</a>.</li><li id="cce_10_0047__li129541213101814"><strong id="cce_10_0047__b12465144313510">Workload Name</strong>: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a lowercase letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed.</li><li id="cce_10_0047__li179541813111814"><strong id="cce_10_0047__b20501185611511">Namespace</strong>: Select the namespace of the workload. The default value is <strong id="cce_10_0047__b1508155615514">default</strong>. You can also click <span class="uicontrol" id="cce_10_0047__uicontrol342862818214"><b>Create Namespace</b></span> to create one. For details, see <a href="cce_10_0278.html">Creating a Namespace</a>.</li><li id="cce_10_0047__li18955181315189"><strong id="cce_10_0047__b1997313316218">Pods</strong>: Enter the number of pods of the workload.</li><li id="cce_10_0047__li11753142112539"><strong id="cce_10_0047__b1111971612">Container Runtime</strong>: A CCE standard cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences, see <a href="cce_10_0463.html">Kata Runtime and Common Runtime</a>.</li><li id="cce_10_0047__li1295571341818"><strong id="cce_10_0047__b4596419068">Time Zone Synchronization</strong>: Specify whether to enable time zone synchronization. After time zone synchronization is enabled, the container and node use the same time zone. The time zone synchronization function depends on the local disk mounted to the container. Do not modify or delete the time zone. For details, see <a href="cce_10_0354.html">Configuring Time Zone Synchronization</a>.</li></ul>
<div class="section" id="cce_10_0047__section1996635141916"><h4 class="sectiontitle">Using the CCE Console</h4><ol id="cce_10_0047__ol2012902601117"><li id="cce_10_0047__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0047__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0047__b1421120185819">Workloads</strong> in the navigation pane, and click the <strong id="cce_10_0047__b139221951155717">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0047__li67891737151520"><span>Set basic information about the workload. </span><p><div class="p" id="cce_10_0047__p1259466151612"><strong id="cce_10_0047__b1493704971917">Basic Info</strong><ul id="cce_10_0047__ul6954101318184"><li id="cce_10_0047__li11514131617185"><strong id="cce_10_0047__b17688966208">Workload Type</strong>: Select <strong id="cce_10_0047__b19319191110206">Deployment</strong>. For details about workload types, see <a href="cce_10_0006.html">Overview</a>.</li><li id="cce_10_0047__li129541213101814"><strong id="cce_10_0047__b12465144313510">Workload Name</strong>: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a lowercase letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed.</li><li id="cce_10_0047__li179541813111814"><strong id="cce_10_0047__b20501185611511">Namespace</strong>: Select the namespace of the workload. The default value is <strong id="cce_10_0047__b1508155615514">default</strong>. You can also click <span class="uicontrol" id="cce_10_0047__uicontrol342862818214"><b>Create Namespace</b></span> to create one. For details, see <a href="cce_10_0278.html">Creating a Namespace</a>.</li><li id="cce_10_0047__li18955181315189"><strong id="cce_10_0047__b1997313316218">Pods</strong>: Enter the number of pods of the workload.</li><li id="cce_10_0047__li11753142112539"><strong id="cce_10_0047__b1111971612">Container Runtime</strong>: A CCE standard cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences, see <a href="cce_10_0463.html">Secure Runtime and Common Runtime</a>.</li><li id="cce_10_0047__li1295571341818"><strong id="cce_10_0047__b4596419068">Time Zone Synchronization</strong>: Specify whether to enable time zone synchronization. After time zone synchronization is enabled, the container and node use the same time zone. The time zone synchronization function depends on the local disk mounted to the container. Do not modify or delete the time zone. For details, see <a href="cce_10_0354.html">Configuring Time Zone Synchronization</a>.</li></ul>
</div>
<div class="p" id="cce_10_0047__p206571518181616"><strong id="cce_10_0047__b062716554277">Container Settings</strong><ul id="cce_10_0047__ul42071022103320"><li id="cce_10_0047__li8770480458">Container Information<div class="p" id="cce_10_0047__p10493941854"><a name="cce_10_0047__li8770480458"></a><a name="li8770480458"></a>Multiple containers can be configured in a pod. You can click <span class="uicontrol" id="cce_10_0047__uicontrol2024214181967"><b>Add Container</b></span> on the right to configure multiple containers for the pod.<ul id="cce_10_0047__ul10714183717111"><li id="cce_10_0047__li1471463741113"><strong id="cce_10_0047__b2309121414294">Basic Info</strong>: Configure basic information about the container.
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0047__table128216444815" frame="border" border="1" rules="all"><thead align="left"><tr id="cce_10_0047__row0282348486"><th align="left" class="cellrowborder" valign="top" width="23%" id="mcps1.3.3.2.3.2.2.2.1.1.2.1.2.1.3.1.1"><p id="cce_10_0047__p3282147483">Parameter</p>
@ -81,16 +81,16 @@
<p id="cce_10_0047__p1447162741615"><strong id="cce_10_0047__b154561192487">(Optional) Service Settings</strong></p>
<p id="cce_10_0047__p102354303348">A Service provides external access for pods. With a static IP address, a Service forwards access traffic to pods and automatically balances load for these pods.</p>
<p id="cce_10_0047__p13343123113612">You can also create a Service after creating a workload. For details about Services of different types, see <a href="cce_10_0249.html">Overview</a>.</p>
<div class="p" id="cce_10_0047__p310913521612"><strong id="cce_10_0047__b204881212144816">(Optional) Advanced Settings</strong><ul id="cce_10_0047__ul142811417"><li id="cce_10_0047__li0421513417"><strong id="cce_10_0047__b15415314859">Upgrade</strong>: Specify the upgrade mode and parameters of the workload. <strong id="cce_10_0047__b153151558165913">Rolling upgrade</strong> and <strong id="cce_10_0047__b1621251402">Replace upgrade</strong> are available. For details, see <a href="cce_10_0397.html">Workload Upgrade Policies</a>.</li><li id="cce_10_0047__li5292111713411"><strong id="cce_10_0047__b289714923012">Scheduling</strong>: Configure affinity and anti-affinity policies for flexible workload scheduling. Load affinity and node affinity are provided.<ul id="cce_10_0047__ul16976133413332"><li id="cce_10_0047__li7687143311331"><strong id="cce_10_0047__b1243811103214">Load Affinity</strong>: Common load affinity policies are offered for quick load affinity deployment.<ul id="cce_10_0047__ul1865517492338"><li id="cce_10_0047__li84431255153310"><strong id="cce_10_0047__b21119711352">Multi-AZ deployment is preferred</strong>: Workload pods are preferentially scheduled to nodes in different AZs through pod anti-affinity (<strong id="cce_10_0047__b156511824123612">podAntiAffinity</strong>). If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ but onto different nodes for high availability. If there are fewer nodes than pods, the extra pods will fail to run.</li><li id="cce_10_0047__li10775194183413"><strong id="cce_10_0047__b1667575214119">Forcible multi-AZ deployment</strong>: Workload pods are forcibly scheduled to nodes in different AZs through pod anti-affinity (<strong id="cce_10_0047__b10853186174217">podAntiAffinity</strong>). If there are fewer AZs than pods, the extra pods will fail to run.</li><li id="cce_10_0047__li177960111349"><strong id="cce_10_0047__b18931103644418">Custom policies</strong>: Affinity and anti-affinity policies can be customized as needed. For details, see <a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a>.</li></ul>
<div class="p" id="cce_10_0047__p310913521612"><strong id="cce_10_0047__b204881212144816">(Optional) Advanced Settings</strong><ul id="cce_10_0047__ul142811417"><li id="cce_10_0047__li0421513417"><strong id="cce_10_0047__b15415314859">Upgrade</strong>: Specify the upgrade mode and parameters of the workload. <strong id="cce_10_0047__b153151558165913">Rolling upgrade</strong> and <strong id="cce_10_0047__b1621251402">Replace upgrade</strong> are available. For details, see <a href="cce_10_0397.html">Configuring Workload Upgrade Policies</a>.</li><li id="cce_10_0047__li5292111713411"><strong id="cce_10_0047__b289714923012">Scheduling</strong>: Configure affinity and anti-affinity policies for flexible workload scheduling. Load affinity and node affinity are provided.<ul id="cce_10_0047__ul16976133413332"><li id="cce_10_0047__li7687143311331"><strong id="cce_10_0047__b1243811103214">Load Affinity</strong>: Common load affinity policies are offered for quick load affinity deployment.<ul id="cce_10_0047__ul1865517492338"><li id="cce_10_0047__li84431255153310"><strong id="cce_10_0047__b21119711352">Multi-AZ deployment is preferred</strong>: Workload pods are preferentially scheduled to nodes in different AZs through pod anti-affinity (<strong id="cce_10_0047__b156511824123612">podAntiAffinity</strong>). If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ but onto different nodes for high availability. If there are fewer nodes than pods, the extra pods will fail to run.</li><li id="cce_10_0047__li10775194183413"><strong id="cce_10_0047__b1667575214119">Forcible multi-AZ deployment</strong>: Workload pods are forcibly scheduled to nodes in different AZs through pod anti-affinity (<strong id="cce_10_0047__b10853186174217">podAntiAffinity</strong>). If there are fewer AZs than pods, the extra pods will fail to run.</li><li id="cce_10_0047__li177960111349"><strong id="cce_10_0047__b18931103644418">Custom policies</strong>: Affinity and anti-affinity policies can be customized as needed. For details, see <a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a>.</li></ul>
</li><li id="cce_10_0047__li136191442193318"><strong id="cce_10_0047__b540915914458">Node Affinity</strong>: Common load affinity policies are offered for quick load affinity deployment.<ul id="cce_10_0047__ul106562113415"><li id="cce_10_0047__li11588172453415"><strong id="cce_10_0047__b1354131044913">Node Affinity</strong>: Workload pods can be deployed on specified nodes through node affinity (<strong id="cce_10_0047__b17387313105016">nodeAffinity</strong>). If no node is specified, the pods will be randomly scheduled based on the default scheduling policy of the cluster.</li><li id="cce_10_0047__li12588142414347"><strong id="cce_10_0047__b1143642735217">Specified node pool scheduling</strong>: Workload pods can be deployed in a specified node pool through node affinity (<strong id="cce_10_0047__b1443715272523">nodeAffinity</strong>). If no node pool is specified, the pods will be randomly scheduled based on the default scheduling policy of the cluster.</li><li id="cce_10_0047__li14588192418347"><strong id="cce_10_0047__b145411819458">Custom policies</strong>: Affinity and anti-affinity policies can be customized as needed. For details, see <a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a>.</li></ul>
</li></ul>
</li><li id="cce_10_0047__li13285132913414"><strong id="cce_10_0047__b15261142101217">Toleration</strong>: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. For details, see <a href="cce_10_0728.html">Taints and Tolerations</a>.</li><li id="cce_10_0047__li179714209414"><a name="cce_10_0047__li179714209414"></a><a name="li179714209414"></a><strong id="cce_10_0047__b562135212518">Labels and Annotations</strong>: Add labels or annotations for pods using key-value pairs. After entering the key and value, click <strong id="cce_10_0047__b1439805716617">Confirm</strong>. For details about how to use and configure labels and annotations, see <a href="cce_10_0386.html">Labels and Annotations</a>.</li><li id="cce_10_0047__li1917237124111"><strong id="cce_10_0047__b1428118321389">DNS</strong>: Configure a separate DNS policy for the workload. For details, see <a href="cce_10_0365.html">DNS Configuration</a>.</li><li id="cce_10_0047__li191696549535"><strong id="cce_10_0047__b563938103113">Network Configuration</strong><ul id="cce_10_0047__ul101792551538"><li id="cce_10_0047__li1985863319162">Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see <a href="cce_10_0382.html">Configuring QoS for a Pod</a>.</li><li id="cce_10_0047__li053620118549">IPv6 shared bandwidth: available only for clusters that support this function. After this function is enabled, you can configure a shared bandwidth for a pod with IPv6 dual-stack ENIs. For details, see <a href="cce_10_0604.html">Configuring Shared Bandwidth for a Pod with IPv6 Dual-Stack ENIs</a>.</li></ul>
</li><li id="cce_10_0047__li13285132913414"><strong id="cce_10_0047__b15261142101217">Toleration</strong>: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. For details, see <a href="cce_10_0728.html">Configuring Tolerance Policies</a>.</li><li id="cce_10_0047__li179714209414"><a name="cce_10_0047__li179714209414"></a><a name="li179714209414"></a><strong id="cce_10_0047__b562135212518">Labels and Annotations</strong>: Add labels or annotations for pods using key-value pairs. After entering the key and value, click <strong id="cce_10_0047__b1439805716617">Confirm</strong>. For details about how to use and configure labels and annotations, see <a href="cce_10_0386.html">Configuring Labels and Annotations</a>.</li><li id="cce_10_0047__li1917237124111"><strong id="cce_10_0047__b1428118321389">DNS</strong>: Configure a separate DNS policy for the workload. For details, see <a href="cce_10_0365.html">DNS Configuration</a>.</li><li id="cce_10_0047__li191696549535"><strong id="cce_10_0047__b563938103113">Network Configuration</strong><ul id="cce_10_0047__ul101792551538"><li id="cce_10_0047__li1985863319162">Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see <a href="cce_10_0382.html">Configuring QoS for a Pod</a>.</li><li id="cce_10_0047__li053620118549">IPv6 shared bandwidth: available only for clusters that support this function. After this function is enabled, you can configure a shared bandwidth for a pod with IPv6 dual-stack ENIs. For details, see <a href="cce_10_0604.html">Configuring Shared Bandwidth for a Pod with IPv6 Dual-Stack ENIs</a>.</li></ul>
</li></ul>
</div>
</p></li><li id="cce_10_0047__li01417411620"><span>Click <strong id="cce_10_0047__b5824103317919">Create Workload</strong> in the lower right corner.</span></li></ol>
</div>
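<p>The <strong>Multi-AZ deployment is preferred</strong> option described in the advanced scheduling settings above corresponds to a preferred pod anti-affinity rule. A minimal sketch of the equivalent pod template snippet (the label value is an assumption for illustration):</p>
<pre class="screen">affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: nginx                              # assumed workload label
        topologyKey: topology.kubernetes.io/zone    # prefer spreading pods across AZs</pre>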
<div class="section" id="cce_10_0047__section155246177178"><a name="cce_10_0047__section155246177178"></a><a name="section155246177178"></a><h4 class="sectiontitle">Using kubectl</h4><p id="cce_10_0047__p13147194016468">The following procedure uses Nginx as an example to describe how to <span class="keyword" id="cce_10_0047__keyword1613307257114737">create a workload using kubectl</span>.</p>
<ol id="cce_10_0047__ol1424992320616"><li id="cce_10_0047__li2338171784610"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0047__li1020013819415"><span>Create and edit the <strong id="cce_10_0047__b27748113122">nginx-deployment.yaml</strong> file. <strong id="cce_10_0047__b630359246113719">nginx-deployment.yaml</strong> is an example file name, and you can rename it as required.</span><p><p id="cce_10_0047__p7581950184318"><strong id="cce_10_0047__b111191541172515">vi nginx-deployment.yaml</strong></p>
<ol id="cce_10_0047__ol1424992320616"><li id="cce_10_0047__li2338171784610"><span>Use kubectl to access the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0047__li1020013819415"><span>Create and edit the <strong id="cce_10_0047__b27748113122">nginx-deployment.yaml</strong> file. <strong id="cce_10_0047__b630359246113719">nginx-deployment.yaml</strong> is an example file name, and you can rename it as required.</span><p><p id="cce_10_0047__p7581950184318"><strong id="cce_10_0047__b111191541172515">vi nginx-deployment.yaml</strong></p>
<p id="cce_10_0047__p5292517598">The following is an example YAML file. For more information about Deployments, see <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" target="_blank" rel="noopener noreferrer">Kubernetes documentation</a>.</p>
<pre class="screen" id="cce_10_0047__screen47761831782">apiVersion: apps/v1
kind: Deployment
@ -114,7 +114,7 @@ spec:
name: nginx
imagePullSecrets:
- name: default-secret</pre>
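<p>As a hedged usage sketch (not shown in this excerpt; the <strong>app=nginx</strong> label is assumed from the example), the Deployment can be created and checked with standard kubectl commands:</p>
<pre class="screen"># Create the Deployment from the file edited above.
kubectl apply -f nginx-deployment.yaml
# Check the rollout and the resulting pods.
kubectl get deployment nginx
kubectl get pod -l app=nginx</pre>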
<p id="cce_10_0047__p2848155215917">For details about these parameters, see <a href="#cce_10_0047__table132326831016">Table 1</a>.</p>
<p id="cce_10_0047__p2848155215917">For details about the parameters, see <a href="#cce_10_0047__table132326831016">Table 1</a>.</p>
<div class="tablenoborder"><a name="cce_10_0047__table132326831016"></a><a name="table132326831016"></a><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0047__table132326831016" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Deployment YAML parameters</caption><thead align="left"><tr id="cce_10_0047__row523318817104"><th align="left" class="cellrowborder" valign="top" width="16%" id="mcps1.3.4.3.2.2.5.2.4.1.1"><p id="cce_10_0047__p162344817100">Parameter</p>
</th>

View File

@ -4,13 +4,13 @@
<div id="body1505966783091"><div class="section" id="cce_10_0048__section530452474212"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0048__p475763119422">StatefulSets are a type of workloads whose data or status is stored while they are running. For example, MySQL is a StatefulSet because it needs to store new data.</p>
<p id="cce_10_0048__p167381126153418">A container can be migrated between different hosts, but data is not stored on the hosts. To store StatefulSet data persistently, attach HA storage volumes provided by CCE to the container.</p>
</div>
<div class="section" id="cce_10_0048__section6329175411713"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0048__ul3611113041018"><li id="cce_10_0048__li28151714171212">When you delete or scale a StatefulSet, the system does not delete the storage volumes associated with the StatefulSet to ensure data security.</li><li id="cce_10_0048__li9611230141012">When you delete a StatefulSet, reduce the number of replicas to <strong id="cce_10_0048__b20407050121312">0</strong> before deleting the StatefulSet so that pods in the StatefulSet can be stopped in order.</li><li id="cce_10_0048__li611418311218">When you create a StatefulSet, a headless Service is required for pod access. For details, see <a href="cce_10_0398.html">Headless Services</a>.</li><li id="cce_10_0048__li093214329312">When a node is unavailable, pods become <strong id="cce_10_0048__b69930313149">Unready</strong>. In this case, manually delete the pods of the StatefulSet so that the pods can be migrated to a normal node.</li></ul>
<div class="section" id="cce_10_0048__section6329175411713"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_10_0048__ul3611113041018"><li id="cce_10_0048__li28151714171212">When you delete or scale a StatefulSet, the system does not delete the storage volumes associated with the StatefulSet to ensure data security.</li><li id="cce_10_0048__li9611230141012">When you delete a StatefulSet, reduce the number of replicas to <strong id="cce_10_0048__b20407050121312">0</strong> before deleting the StatefulSet so that pods in the StatefulSet can be stopped in order.</li><li id="cce_10_0048__li611418311218">When you create a StatefulSet, a headless Service is required for pod access. For details, see <a href="cce_10_0398.html">Headless Services</a>.</li><li id="cce_10_0048__li093214329312">When a node is unavailable, pods become <strong id="cce_10_0048__b69930313149">Unready</strong>. In this case, manually delete the pods of the StatefulSet so that the pods can be migrated to a normal node.</li></ul>
</div>
<div class="section" id="cce_10_0048__section1734962819219"><h4 class="sectiontitle">Prerequisites</h4><ul id="cce_10_0048__ul1685719423426"><li id="cce_10_0048__li612018144437">Before creating a workload, you must have an available cluster. For details on how to create a cluster, see <a href="cce_10_0028.html">Creating a CCE Standard/Turbo Cluster</a>.</li><li id="cce_10_0048__li19160540131415">To enable public access to a workload, ensure that an EIP or load balancer has been bound to at least one node in the cluster.<div class="note" id="cce_10_0048__note991371915511"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0048__p195248425512">If a pod has multiple containers, ensure that the ports used by the containers do not conflict with each other. Otherwise, creating the StatefulSet will fail.</p>
</div></div>
</li></ul>
</div>
<div class="section" id="cce_10_0048__section16385130102112"><h4 class="sectiontitle">Using the CCE Console</h4><ol id="cce_10_0048__ol2012902601117"><li id="cce_10_0048__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0048__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0048__b94442390613">Workloads</strong> in the navigation pane, and click the <strong id="cce_10_0048__b1844413910614">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0048__li67891737151520"><span>Set basic information about the workload. </span><p><div class="p" id="cce_10_0048__p1259466151612"><strong id="cce_10_0048__b64930521915">Basic Info</strong><ul id="cce_10_0048__ul6954101318184"><li id="cce_10_0048__li11514131617185"><strong id="cce_10_0048__b19311135410116">Workload Type</strong>: Select <strong id="cce_10_0048__b0311195410110">StatefulSet</strong>. For details about workload types, see <a href="cce_10_0006.html">Overview</a>.</li><li id="cce_10_0048__li129541213101814"><strong id="cce_10_0048__cce_10_0047_b12465144313510">Workload Name</strong>: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a lowercase letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed.</li><li id="cce_10_0048__li179541813111814"><strong id="cce_10_0048__cce_10_0047_b20501185611511">Namespace</strong>: Select the namespace of the workload. The default value is <strong id="cce_10_0048__cce_10_0047_b1508155615514">default</strong>. You can also click <span class="uicontrol" id="cce_10_0048__cce_10_0047_uicontrol342862818214"><b>Create Namespace</b></span> to create one. For details, see <a href="cce_10_0278.html">Creating a Namespace</a>.</li><li id="cce_10_0048__li18955181315189"><strong id="cce_10_0048__cce_10_0047_b1997313316218">Pods</strong>: Enter the number of pods of the workload.</li><li id="cce_10_0048__li11753142112539"><strong id="cce_10_0048__cce_10_0047_b1111971612">Container Runtime</strong>: A CCE standard cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences, see <a href="cce_10_0463.html">Kata Runtime and Common Runtime</a>.</li><li id="cce_10_0048__li198695115505"><strong id="cce_10_0048__cce_10_0047_b4596419068">Time Zone Synchronization</strong>: Specify whether to enable time zone synchronization. After time zone synchronization is enabled, the container and node use the same time zone. The time zone synchronization function depends on the local disk mounted to the container. Do not modify or delete the time zone. For details, see <a href="cce_10_0354.html">Configuring Time Zone Synchronization</a>.</li></ul>
<div class="section" id="cce_10_0048__section16385130102112"><h4 class="sectiontitle">Using the CCE Console</h4><ol id="cce_10_0048__ol2012902601117"><li id="cce_10_0048__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0048__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0048__b94442390613">Workloads</strong> in the navigation pane, and click the <strong id="cce_10_0048__b1844413910614">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0048__li67891737151520"><span>Set basic information about the workload. </span><p><div class="p" id="cce_10_0048__p1259466151612"><strong id="cce_10_0048__b64930521915">Basic Info</strong><ul id="cce_10_0048__ul6954101318184"><li id="cce_10_0048__li11514131617185"><strong id="cce_10_0048__b19311135410116">Workload Type</strong>: Select <strong id="cce_10_0048__b0311195410110">StatefulSet</strong>. For details about workload types, see <a href="cce_10_0006.html">Overview</a>.</li><li id="cce_10_0048__li129541213101814"><strong id="cce_10_0048__cce_10_0047_b12465144313510">Workload Name</strong>: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a lowercase letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed.</li><li id="cce_10_0048__li179541813111814"><strong id="cce_10_0048__cce_10_0047_b20501185611511">Namespace</strong>: Select the namespace of the workload. The default value is <strong id="cce_10_0048__cce_10_0047_b1508155615514">default</strong>. You can also click <span class="uicontrol" id="cce_10_0048__cce_10_0047_uicontrol342862818214"><b>Create Namespace</b></span> to create one. For details, see <a href="cce_10_0278.html">Creating a Namespace</a>.</li><li id="cce_10_0048__li18955181315189"><strong id="cce_10_0048__cce_10_0047_b1997313316218">Pods</strong>: Enter the number of pods of the workload.</li><li id="cce_10_0048__li11753142112539"><strong id="cce_10_0048__cce_10_0047_b1111971612">Container Runtime</strong>: A CCE standard cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences, see <a href="cce_10_0463.html">Secure Runtime and Common Runtime</a>.</li><li id="cce_10_0048__li198695115505"><strong id="cce_10_0048__cce_10_0047_b4596419068">Time Zone Synchronization</strong>: Specify whether to enable time zone synchronization. After time zone synchronization is enabled, the container and node use the same time zone. The time zone synchronization function depends on the local disk mounted to the container. Do not modify or delete the time zone. For details, see <a href="cce_10_0354.html">Configuring Time Zone Synchronization</a>.</li></ul>
</div>
<div class="p" id="cce_10_0048__p206571518181616"><strong id="cce_10_0048__b163231218124">Container Settings</strong><ul id="cce_10_0048__ul42071022103320"><li id="cce_10_0048__li8770480458">Container Information<div class="p" id="cce_10_0048__p10493941854"><a name="cce_10_0048__li8770480458"></a><a name="li8770480458"></a>Multiple containers can be configured in a pod. You can click <span class="uicontrol" id="cce_10_0048__uicontrol75255211621"><b>Add Container</b></span> on the right to configure multiple containers for the pod.<ul id="cce_10_0048__ul481018470119"><li id="cce_10_0048__li18101047191117"><strong id="cce_10_0048__cce_10_0047_b2309121414294">Basic Info</strong>: Configure basic information about the container.
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0048__cce_10_0047_table128216444815" frame="border" border="1" rules="all"><thead align="left"><tr id="cce_10_0048__cce_10_0047_row0282348486"><th align="left" class="cellrowborder" valign="top" width="23%" id="mcps1.3.4.2.3.2.2.2.1.1.2.1.2.1.3.1.1"><p id="cce_10_0048__cce_10_0047_p3282147483">Parameter</p>
@ -74,7 +74,7 @@
</tbody>
</table>
</div>
</li><li id="cce_10_0048__li4810204715113">(Optional) <strong id="cce_10_0048__cce_10_0047_b6712437288">Lifecycle</strong>: Configure operations to be performed in a specific phase of the container lifecycle, such as Startup Command, Post-Start, and Pre-Stop. For details, see <a href="cce_10_0105.html">Configuring Container Lifecycle Parameters</a>.</li><li id="cce_10_0048__li4810134791115">(Optional) <strong id="cce_10_0048__cce_10_0047_b20675191620295">Health Check</strong>: Set the liveness probe, ready probe, and startup probe as required. For details, see <a href="cce_10_0112.html">Configuring Container Health Check</a>.</li><li id="cce_10_0048__li1810447181110">(Optional) <strong id="cce_10_0048__cce_10_0047_b17656135219292">Environment Variables</strong>: Configure variables for the container running environment using key-value pairs. These variables transfer external information to containers running in pods and can be flexibly modified after application deployment. For details, see <a href="cce_10_0113.html">Configuring Environment Variables</a>.</li><li id="cce_10_0048__li4810124731117">(Optional) <strong id="cce_10_0048__b11209164933310">Data Storage</strong>: Mount local storage or cloud storage to the container. The application scenarios and mounting modes vary with the storage type. For details, see <a href="cce_10_0374.html">Storage</a>.<div class="note" id="cce_10_0048__note101269342356"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0048__ul26865762616"><li id="cce_10_0048__li4956180135815">StatefulSets support dynamic attachment of EVS disks. For details, see <a href="cce_10_0616.html">Dynamically Mounting an EVS Disk to a StatefulSet</a> and <a href="cce_10_0635.html">Dynamically Mounting a Local PV to a StatefulSet</a>.<p id="cce_10_0048__p270761115810">Dynamic mounting is achieved by using the <strong id="cce_10_0048__b5442124241413"><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#volume-claim-templates" target="_blank" rel="noopener noreferrer">volumeClaimTemplates</a></strong> field and depends on the dynamic creation capability of StorageClass. A StatefulSet associates each pod with a PVC using the <strong id="cce_10_0048__b15877917111512">volumeClaimTemplates</strong> field, and the PVC is bound to the corresponding PV. Therefore, after the pod is rescheduled, the original data can still be mounted based on the PVC name.</p>
</li><li id="cce_10_0048__li4810204715113">(Optional) <strong id="cce_10_0048__cce_10_0047_b6712437288">Lifecycle</strong>: Configure operations to be performed in a specific phase of the container lifecycle, such as Startup Command, Post-Start, and Pre-Stop. For details, see <a href="cce_10_0105.html">Configuring Container Lifecycle Parameters</a>.</li><li id="cce_10_0048__li4810134791115">(Optional) <strong id="cce_10_0048__cce_10_0047_b20675191620295">Health Check</strong>: Set the liveness probe, ready probe, and startup probe as required. For details, see <a href="cce_10_0112.html">Configuring Container Health Check</a>.</li><li id="cce_10_0048__li1810447181110">(Optional) <strong id="cce_10_0048__cce_10_0047_b17656135219292">Environment Variables</strong>: Configure variables for the container running environment using key-value pairs. These variables transfer external information to containers running in pods and can be flexibly modified after application deployment. For details, see <a href="cce_10_0113.html">Configuring Environment Variables</a>.</li><li id="cce_10_0048__li4810124731117">(Optional) <strong id="cce_10_0048__b11209164933310">Data Storage</strong>: Mount local storage or cloud storage to the container. The application scenarios and mounting modes vary with the storage type. For details, see <a href="cce_10_0374.html">Storage</a>.<div class="note" id="cce_10_0048__note101269342356"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0048__ul26865762616"><li id="cce_10_0048__li4956180135815">StatefulSets support dynamic attachment of EVS disks. For details, see <a href="cce_10_0616.html">Dynamically Mounting an EVS Disk to a StatefulSet</a> or <a href="cce_10_0635.html">Dynamically Mounting a Local PV to a StatefulSet</a>.<p id="cce_10_0048__p270761115810">Dynamic mounting is achieved by using the <strong id="cce_10_0048__b5442124241413"><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#volume-claim-templates" target="_blank" rel="noopener noreferrer">volumeClaimTemplates</a></strong> field and depends on the dynamic creation capability of StorageClass. A StatefulSet associates each pod with a PVC using the <strong id="cce_10_0048__b15877917111512">volumeClaimTemplates</strong> field, and the PVC is bound to the corresponding PV. Therefore, after the pod is rescheduled, the original data can still be mounted based on the PVC name.</p>
</li><li id="cce_10_0048__li126861777269">After a workload is created, the storage that is dynamically mounted cannot be updated.</li></ul>
</div></div>
</li><li id="cce_10_0048__li1581013477116">(Optional) <strong id="cce_10_0048__cce_10_0047_b347211410339">Security Context</strong>: Assign container permissions to protect the system and other containers from being affected. Enter the user ID to assign container permissions and prevent systems and other containers from being affected.</li><li id="cce_10_0048__li128105471119">(Optional) <strong id="cce_10_0048__cce_10_0047_b4129950193311">Logging</strong>: Report standard container output logs to AOM by default, without requiring manual settings. You can manually configure the log collection path. For details, see <a href="cce_10_0018.html">Collecting Container Logs Using ICAgent</a>.<p id="cce_10_0048__cce_10_0047_p154878397159">To disable the standard output of the current workload, add the annotation <strong id="cce_10_0048__cce_10_0047_b882934924220">kubernetes.AOM.log.stdout: []</strong> in <a href="cce_10_0047.html#cce_10_0047__li179714209414">Labels and Annotations</a>. For details about how to use this annotation, see <a href="cce_10_0386.html#cce_10_0386__table194691458405">Table 1</a>.</p>
@ -87,18 +87,18 @@
<p id="cce_10_0048__p1447162741615"><strong id="cce_10_0048__b4235027843479">(Optional) Service Settings</strong></p>
<p id="cce_10_0048__p102354303348">A Service provides external access for pods. With a static IP address, a Service forwards access traffic to pods and automatically balances load for these pods.</p>
<p id="cce_10_0048__p13343123113612">You can also create a Service after creating a workload. For details about Services of different types, see <a href="cce_10_0249.html">Overview</a>.</p>
<div class="p" id="cce_10_0048__p310913521612"><strong id="cce_10_0048__b21631580735239">(Optional) Advanced Settings</strong><ul id="cce_10_0048__ul142811417"><li id="cce_10_0048__li0421513417"><strong id="cce_10_0048__cce_10_0047_b15415314859">Upgrade</strong>: Specify the upgrade mode and parameters of the workload. <strong id="cce_10_0048__cce_10_0047_b153151558165913">Rolling upgrade</strong> and <strong id="cce_10_0048__cce_10_0047_b1621251402">Replace upgrade</strong> are available. For details, see <a href="cce_10_0397.html">Workload Upgrade Policies</a>.</li><li id="cce_10_0048__li206428507436"><strong id="cce_10_0048__b1840219331836">Pod Management Policies</strong><p id="cce_10_0048__p151323251334">For some distributed systems, the StatefulSet sequence is unnecessary and/or should not occur. These systems require only uniqueness and identifiers.</p>
<div class="p" id="cce_10_0048__p310913521612"><strong id="cce_10_0048__b21631580735239">(Optional) Advanced Settings</strong><ul id="cce_10_0048__ul142811417"><li id="cce_10_0048__li0421513417"><strong id="cce_10_0048__cce_10_0047_b15415314859">Upgrade</strong>: Specify the upgrade mode and parameters of the workload. <strong id="cce_10_0048__cce_10_0047_b153151558165913">Rolling upgrade</strong> and <strong id="cce_10_0048__cce_10_0047_b1621251402">Replace upgrade</strong> are available. For details, see <a href="cce_10_0397.html">Configuring Workload Upgrade Policies</a>.</li><li id="cce_10_0048__li206428507436"><strong id="cce_10_0048__b1840219331836">Pod Management Policies</strong><p id="cce_10_0048__p151323251334">For some distributed systems, the StatefulSet sequence is unnecessary and/or should not occur. These systems require only uniqueness and identifiers.</p>
<ul id="cce_10_0048__ul758812493316"><li id="cce_10_0048__li258832417338"><strong id="cce_10_0048__b13534251116">OrderedReady</strong>: The StatefulSet will deploy, delete, or scale pods in order and one by one. (The StatefulSet continues only after the previous pod is ready or deleted.) This is the default policy.</li><li id="cce_10_0048__li1558862416338"><strong id="cce_10_0048__b112293521039">Parallel</strong>: The StatefulSet will create pods in parallel to match the desired scale without waiting, and will delete all pods at once.</li></ul>
</li><li id="cce_10_0048__li7127180594"><strong id="cce_10_0048__cce_10_0047_b289714923012">Scheduling</strong>: Configure affinity and anti-affinity policies for flexible workload scheduling. Load affinity and node affinity are provided.<ul id="cce_10_0048__cce_10_0047_ul16976133413332"><li id="cce_10_0048__cce_10_0047_li7687143311331"><strong id="cce_10_0048__cce_10_0047_b1243811103214">Load Affinity</strong>: Common load affinity policies are offered for quick load affinity deployment.<ul id="cce_10_0048__cce_10_0047_ul1865517492338"><li id="cce_10_0048__cce_10_0047_li84431255153310"><strong id="cce_10_0048__cce_10_0047_b21119711352">Multi-AZ deployment is preferred</strong>: Workload pods are preferentially scheduled to nodes in different AZs through pod anti-affinity (<strong id="cce_10_0048__cce_10_0047_b156511824123612">podAntiAffinity</strong>). If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ but onto different nodes for high availability. If there are fewer nodes than pods, the extra pods will fail to run.</li><li id="cce_10_0048__cce_10_0047_li10775194183413"><strong id="cce_10_0048__cce_10_0047_b1667575214119">Forcible multi-AZ deployment</strong>: Workload pods are forcibly scheduled to nodes in different AZs through pod anti-affinity (<strong id="cce_10_0048__cce_10_0047_b10853186174217">podAntiAffinity</strong>). If there are fewer AZs than pods, the extra pods will fail to run.</li><li id="cce_10_0048__cce_10_0047_li177960111349"><strong id="cce_10_0048__cce_10_0047_b18931103644418">Custom policies</strong>: Affinity and anti-affinity policies can be customized as needed. For details, see <a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a>.</li></ul>
</li><li id="cce_10_0048__cce_10_0047_li136191442193318"><strong id="cce_10_0048__cce_10_0047_b540915914458">Node Affinity</strong>: Common load affinity policies are offered for quick load affinity deployment.<ul id="cce_10_0048__cce_10_0047_ul106562113415"><li id="cce_10_0048__cce_10_0047_li11588172453415"><strong id="cce_10_0048__cce_10_0047_b1354131044913">Node Affinity</strong>: Workload pods can be deployed on specified nodes through node affinity (<strong id="cce_10_0048__cce_10_0047_b17387313105016">nodeAffinity</strong>). If no node is specified, the pods will be randomly scheduled based on the default scheduling policy of the cluster.</li><li id="cce_10_0048__cce_10_0047_li12588142414347"><strong id="cce_10_0048__cce_10_0047_b1143642735217">Specified node pool scheduling</strong>: Workload pods can be deployed in a specified node pool through node affinity (<strong id="cce_10_0048__cce_10_0047_b1443715272523">nodeAffinity</strong>). If no node pool is specified, the pods will be randomly scheduled based on the default scheduling policy of the cluster.</li><li id="cce_10_0048__cce_10_0047_li14588192418347"><strong id="cce_10_0048__cce_10_0047_b145411819458">Custom policies</strong>: Affinity and anti-affinity policies can be customized as needed. For details, see <a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a>.</li></ul>
</li></ul>
</li><li id="cce_10_0048__li13285132913414"><strong id="cce_10_0048__cce_10_0047_b15261142101217">Toleration</strong>: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. For details, see <a href="cce_10_0728.html">Taints and Tolerations</a>.</li><li id="cce_10_0048__li179714209414"><strong id="cce_10_0048__cce_10_0047_b562135212518">Labels and Annotations</strong>: Add labels or annotations for pods using key-value pairs. After entering the key and value, click <strong id="cce_10_0048__cce_10_0047_b1439805716617">Confirm</strong>. For details about how to use and configure labels and annotations, see <a href="cce_10_0386.html">Labels and Annotations</a>.</li><li id="cce_10_0048__li1917237124111"><strong id="cce_10_0048__cce_10_0047_b1428118321389">DNS</strong>: Configure a separate DNS policy for the workload. For details, see <a href="cce_10_0365.html">DNS Configuration</a>.</li><li id="cce_10_0048__li1985863319162"><strong id="cce_10_0048__b157014128328">Network Configuration</strong><ul id="cce_10_0048__ul9870163414162"><li id="cce_10_0048__li8488616152">Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see <a href="cce_10_0382.html">Configuring QoS for a Pod</a>.</li><li id="cce_10_0048__li246062816567">Whether to enable the static IP address: available only for clusters that support this function. After this function is enabled, you can set the interval for reclaiming expired pod IP addresses. For details, see <a href="cce_10_0603.html">Configuring a Static IP Address for a Pod</a>.</li><li id="cce_10_0048__li6361894173">IPv6 shared bandwidth: available only for clusters that support this function. After this function is enabled, you can configure a shared bandwidth for a pod with IPv6 dual-stack ENIs. For details, see <a href="cce_10_0604.html">Configuring Shared Bandwidth for a Pod with IPv6 Dual-Stack ENIs</a>.</li></ul>
</li><li id="cce_10_0048__li13285132913414"><strong id="cce_10_0048__cce_10_0047_b15261142101217">Toleration</strong>: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. For details, see <a href="cce_10_0728.html">Configuring Tolerance Policies</a>.</li><li id="cce_10_0048__li179714209414"><strong id="cce_10_0048__cce_10_0047_b562135212518">Labels and Annotations</strong>: Add labels or annotations for pods using key-value pairs. After entering the key and value, click <strong id="cce_10_0048__cce_10_0047_b1439805716617">Confirm</strong>. For details about how to use and configure labels and annotations, see <a href="cce_10_0386.html">Configuring Labels and Annotations</a>.</li><li id="cce_10_0048__li1917237124111"><strong id="cce_10_0048__cce_10_0047_b1428118321389">DNS</strong>: Configure a separate DNS policy for the workload. For details, see <a href="cce_10_0365.html">DNS Configuration</a>.</li><li id="cce_10_0048__li1985863319162"><strong id="cce_10_0048__b157014128328">Network Configuration</strong><ul id="cce_10_0048__ul9870163414162"><li id="cce_10_0048__li8488616152">Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see <a href="cce_10_0382.html">Configuring QoS for a Pod</a>.</li><li id="cce_10_0048__li246062816567">Whether to enable the static IP address: available only for clusters that support this function. After this function is enabled, you can set the interval for reclaiming expired pod IP addresses. For details, see <a href="cce_10_0603.html">Configuring a Static IP Address for a Pod</a>.</li><li id="cce_10_0048__li6361894173">IPv6 shared bandwidth: available only for clusters that support this function. After this function is enabled, you can configure a shared bandwidth for a pod with IPv6 dual-stack ENIs. For details, see <a href="cce_10_0604.html">Configuring Shared Bandwidth for a Pod with IPv6 Dual-Stack ENIs</a>.</li></ul>
</li></ul>
</div>
</p></li><li id="cce_10_0048__li01417411620"><span>Click <strong id="cce_10_0048__b2573105264313">Create Workload</strong> in the lower right corner.</span></li></ol>
</div>
<div class="section" id="cce_10_0048__section113441881214"><h4 class="sectiontitle"><span class="keyword" id="cce_10_0048__keyword1096424634155120">Using kubectl</span></h4><p id="cce_10_0048__p829311262556">In this example, a Nginx workload is used and the EVS volume is dynamically mounted to it using the <strong id="cce_10_0048__b890694313189">volumeClaimTemplates</strong> field.</p>
<ol id="cce_10_0048__ol8784163652310"><li id="cce_10_0048__li2338171784610"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0048__li786619612249"><span>Create and edit the <strong id="cce_10_0048__b17333192071415">nginx-statefulset.yaml</strong> file.</span><p><p id="cce_10_0048__li1020013819415p0"><strong id="cce_10_0048__b122558194376">nginx-statefulset.yaml</strong> is an example file name, and you can change it as required.</p>
<ol id="cce_10_0048__ol8784163652310"><li id="cce_10_0048__li2338171784610"><span>Use kubectl to access the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0048__li786619612249"><span>Create and edit the <strong id="cce_10_0048__b17333192071415">nginx-statefulset.yaml</strong> file.</span><p><p id="cce_10_0048__li1020013819415p0"><strong id="cce_10_0048__b122558194376">nginx-statefulset.yaml</strong> is an example file name, and you can change it as required.</p>
<p id="cce_10_0048__p1587215618244"><strong id="cce_10_0048__b28744642413">vi nginx-statefulset.yaml</strong></p>
<p id="cce_10_0048__p211135719251">The following provides an example of the file contents. For more information on StatefulSet, see the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" target="_blank" rel="noopener noreferrer">Kubernetes documentation</a>.</p>
<pre class="screen" id="cce_10_0048__screen188753615243">apiVersion: apps/v1
@ -153,7 +153,7 @@ spec:
resources:
requests:
storage: 10Gi
storageClassName: csi-disk # Storage class name. The value is <strong id="cce_10_0048__b135233111916">csi-disk</strong> for the EVS volume.
storageClassName: csi-disk # StorageClass name. The value is <strong id="cce_10_0048__b135233111916">csi-disk</strong> for the EVS volume.
updateStrategy:
type: RollingUpdate</pre>
<p id="cce_10_0048__p2939196152413"><strong id="cce_10_0048__b1394256172413">vi nginx-headless.yaml</strong></p>

View File

@ -1,6 +1,6 @@
<a name="cce_10_0054"></a><a name="cce_10_0054"></a>
<h1 class="topictitle1">High-Risk Operations and Solutions</h1>
<h1 class="topictitle1">High-Risk Operations</h1>
<div id="body1525923325040"><p id="cce_10_0054__p1390915599514">During service deployment or running, you may trigger high-risk operations at different levels, causing service faults or interruption. To help you better estimate and avoid operation risks, this section introduces the consequences and solutions of high-risk operations from multiple dimensions, such as clusters, nodes, networking, load balancing, logs, and EVS disks.</p>
<div class="section" id="cce_10_0054__section16411195115212"><h4 class="sectiontitle">Clusters and Nodes</h4>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0054__table12476951152111" frame="border" border="1" rules="all"><caption><b>Table 1 </b>High-risk operations and solutions</caption><thead align="left"><tr id="cce_10_0054__row047515519215"><th align="left" class="cellrowborder" valign="top" width="14.42%" id="mcps1.3.2.2.2.5.1.1"><p id="cce_10_0054__p1312082018212">Category</p>
@ -63,7 +63,7 @@
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.2 "><p id="cce_10_0054__p1564494718563">The master node may be unavailable.</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.3 "><p id="cce_10_0054__p164404719561">Restore the parameter settings to the recommended values. For details, see <a href="cce_10_0213.html">Cluster Configuration Management</a>.</p>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.3 "><p id="cce_10_0054__p164404719561">Restore the parameter settings to the recommended values. For details, see <a href="cce_10_0213.html">Modifying Cluster Configurations</a>.</p>
</td>
</tr>
<tr id="cce_10_0054__row1866012145616"><td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.1 "><p id="cce_10_0054__p56441647145612">Replacing the master or etcd certificate</p>
@ -107,7 +107,7 @@
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.3 "><p id="cce_10_0054__p351375485914">Reset the node. For details, see <a href="cce_10_0003.html">Resetting a Node</a>.</p>
</td>
</tr>
<tr id="cce_10_0054__row1673511291596"><td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.1 "><p id="cce_10_0054__p18513185425915">Upgrading the kernel or components on which the container platform depends (such as Open vSwitch, IPvlan, Docker, and containerd)</p>
<tr id="cce_10_0054__row1673511291596"><td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.1 "><p id="cce_10_0054__p18513185425915">Upgrading the kernel or components on which the container platform depends (such as Open vSwitch, IPVLAN, Docker, and containerd)</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.2 "><p id="cce_10_0054__p1251315544595">The node may be unavailable or the network may be abnormal.</p>
<div class="note" id="cce_10_0054__note1791614419108"><span class="notetitle"> NOTE: </span><div class="notebody"><p id="cce_10_0054__p17926741111013">Node running depends on the system kernel version. Do not use the <strong id="cce_10_0054__b513114494419">yum update</strong> command to update or reinstall the operating system kernel of a node unless necessary. (Reinstalling the operating system kernel using the original image or other images is a risky operation.)</p>
@ -127,7 +127,7 @@
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.2 "><p id="cce_10_0054__p125135541599">The node may become unavailable, and components may be insecure if security-related configurations are modified.</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.3 "><p id="cce_10_0054__p851395416595">Restore the parameter settings to the recommended values. For details, see <a href="cce_10_0652.html">Configuring a Node Pool</a>.</p>
<td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.3 "><p id="cce_10_0054__p851395416595">Restore the parameter settings to the recommended values. For details, see <a href="cce_10_0652.html">Modifying Node Pool Configurations</a>.</p>
</td>
</tr>
<tr id="cce_10_0054__row9442141061414"><td class="cellrowborder" valign="top" headers="mcps1.3.2.2.2.5.1.1 "><p id="cce_10_0054__p5513115415911">Modifying OS configuration</p>

View File

@ -1,11 +1,11 @@
<a name="cce_10_0059"></a><a name="cce_10_0059"></a>
<h1 class="topictitle1">Network Policies</h1>
<h1 class="topictitle1">Configuring Network Policies to Restrict Pod Access</h1>
<div id="body1526883582577"><p id="cce_10_0059__p8060118">Network policies are designed by Kubernetes to restrict pod access. It is equivalent to a firewall at the application layer to enhance network security. The capabilities supported by network policies depend on the capabilities of the network add-ons of the cluster.</p>
<p id="cce_10_0059__p1599485013158">By default, if a namespace does not have any policy, pods in the namespace accept traffic from any source and send traffic to any destination.</p>
<p id="cce_10_0059__p1463183603211">Network policies are classified into the following types:</p>
<ul id="cce_10_0059__ul13161939133212"><li id="cce_10_0059__li18351124913218"><strong id="cce_10_0059__b6606312280">namespaceSelector</strong>: selects particular namespaces for which all pods should be allowed as ingress sources or egress destinations.</li><li id="cce_10_0059__li5998840163217"><strong id="cce_10_0059__b36551310112810">podSelector</strong>: selects particular pods in the same namespace as the network policy which should be allowed as ingress sources or egress destinations.</li><li id="cce_10_0059__li08641649183212"><strong id="cce_10_0059__b116215248286">ipBlock</strong>: selects particular IP blocks to allow as ingress sources or egress destinations. (Only egress rules support IP blocks.)</li></ul>
<div class="section" id="cce_10_0059__section332285584912"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0059__ul178821748131512"><li id="cce_10_0059__li1388224818153">Only clusters that use the tunnel network model support network policies. Network policies are classified into the following types:<ul id="cce_10_0059__ul7256209500"><li id="cce_10_0059__li1392310594496">Ingress: All versions support this type.</li><li id="cce_10_0059__en-us_topic_0000001199501178_li570615420397">Egress: Only the following OSs and cluster versions support egress rules.
<div class="section" id="cce_10_0059__section332285584912"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_10_0059__ul178821748131512"><li id="cce_10_0059__li1388224818153">Only clusters that use the tunnel network model support network policies. Network policies are classified into the following types:<ul id="cce_10_0059__ul7256209500"><li id="cce_10_0059__li1392310594496">Ingress: All versions support this type.</li><li id="cce_10_0059__en-us_topic_0000001199501178_li570615420397">Egress: Only the following OSs and cluster versions support egress rules.
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0059__table12813218163117" frame="border" border="1" rules="all"><thead align="left"><tr id="cce_10_0059__row11813218183120"><th align="left" class="cellrowborder" valign="top" width="14.000000000000002%" id="mcps1.3.5.2.1.1.2.1.1.4.1.1"><p id="cce_10_0059__p18132189316">OS</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="22%" id="mcps1.3.5.2.1.1.2.1.1.4.1.2"><p id="cce_10_0059__p294822455516">Cluster Version</p>
@ -23,6 +23,13 @@
<p id="cce_10_0059__p3871581545">4.18.0-147.5.1.6.h998.eulerosv2r9.x86_64</p>
</td>
</tr>
<tr id="cce_10_0059__row13463832135114"><td class="cellrowborder" valign="top" width="14.000000000000002%" headers="mcps1.3.5.2.1.1.2.1.1.4.1.1 "><p id="cce_10_0059__p1246493235112">HCE OS 2.0</p>
</td>
<td class="cellrowborder" valign="top" width="22%" headers="mcps1.3.5.2.1.1.2.1.1.4.1.2 "><p id="cce_10_0059__p134641032135113">v1.25 or later</p>
</td>
<td class="cellrowborder" valign="top" width="64%" headers="mcps1.3.5.2.1.1.2.1.1.4.1.3 "><p id="cce_10_0059__p54649322519">5.10.0-60.18.0.50.r865_35.hce2.x86_64</p>
</td>
</tr>
</tbody>
</table>
</div>
@ -47,7 +54,7 @@ spec:
- protocol: TCP
port: 6379</pre>
<p id="cce_10_0059__en-us_topic_0249851123_p88888434711">The following figure shows how podSelector works.</p>
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig139410543444"><span class="figcap"><b>Figure 1 </b>podSelector</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image185021946194414" src="en-us_image_0000001898025749.png"></span></div>
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig139410543444"><span class="figcap"><b>Figure 1 </b>podSelector</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image185021946194414" src="en-us_image_0000001981276601.png"></span></div>
</li></ul>
<ul id="cce_10_0059__en-us_topic_0249851123_ul68309714213"><li id="cce_10_0059__en-us_topic_0249851123_li1283027192120"><strong id="cce_10_0059__en-us_topic_0249851123_b184891164227">Using namespaceSelector to specify the access scope</strong><pre class="screen" id="cce_10_0059__en-us_topic_0249851123_screen18399134874818">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
@ -66,11 +73,11 @@ spec:
- protocol: TCP
port: 6379</pre>
<p id="cce_10_0059__en-us_topic_0249851123_p3874718155019">The following figure shows how namespaceSelector works.</p>
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig127351855617"><span class="figcap"><b>Figure 2 </b>namespaceSelector</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image141441335560" src="en-us_image_0000001897906237.png"></span></div>
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig127351855617"><span class="figcap"><b>Figure 2 </b>namespaceSelector</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image141441335560" src="en-us_image_0000001950317068.png"></span></div>
</li></ul>
</div>
<div class="section" id="cce_10_0059__section20486817707"><h4 class="sectiontitle">Using Egress Rules</h4><p id="cce_10_0059__en-us_topic_0249851123_p1311606618">Egress supports not only podSelector and namespaceSelector, but also ipBlock.</p>
<div class="note" id="cce_10_0059__en-us_topic_0249851123_note16478276101"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0059__en-us_topic_0249851123_p1547814741018">Only clusters of version 1.23 or later support Egress rules. Only nodes running EulerOS 2.9 are supported.</p>
<div class="note" id="cce_10_0059__en-us_topic_0249851123_note16478276101"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0059__en-us_topic_0249851123_p1547814741018">Only clusters of version 1.23 or later support Egress rules. Only nodes running EulerOS 2.9 or HCE OS 2.0 are supported.</p>
</div></div>
<pre class="screen" id="cce_10_0059__en-us_topic_0249851123_screen14581393131">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
@ -90,7 +97,7 @@ spec:
except:
- 172.16.0.40/32 # This CIDR block cannot be accessed. This value must fall within the range specified by <strong id="cce_10_0059__b52842121410">cidr</strong>.</pre>
<p id="cce_10_0059__en-us_topic_0249851123_p3245846202818">The following figure shows how ipBlock works.</p>
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig15678132552812"><span class="figcap"><b>Figure 3 </b>ipBlock</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image6270134419182" src="en-us_image_0000001851745580.png"></span></div>
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig15678132552812"><span class="figcap"><b>Figure 3 </b>ipBlock</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image6270134419182" src="en-us_image_0000001950317072.png"></span></div>
<p id="cce_10_0059__en-us_topic_0249851123_p1260313810298">You can define ingress and egress in the same rule.</p>
<pre class="screen" id="cce_10_0059__en-us_topic_0249851123_screen235835922918">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
@ -118,10 +125,10 @@ spec:
matchLabels:
role: web</pre>
<p id="cce_10_0059__en-us_topic_0249851123_p17239137193116">The following figure shows how to use ingress and egress together.</p>
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig14112102353618"><span class="figcap"><b>Figure 4 </b>Using both ingress and egress</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image103563915919" src="en-us_image_0000001897906233.png"></span></div>
<div class="fignone" id="cce_10_0059__en-us_topic_0249851123_fig14112102353618"><span class="figcap"><b>Figure 4 </b>Using both ingress and egress</span><br><span><img id="cce_10_0059__en-us_topic_0249851123_image103563915919" src="en-us_image_0000001950317060.png"></span></div>
</div>
<div class="section" id="cce_10_0059__section349662212313"><h4 class="sectiontitle">Creating a Network Policy on the Console</h4><ol id="cce_10_0059__ol10753729162012"><li id="cce_10_0059__li67621546123813"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0059__li275310297205"><span>Choose <strong id="cce_10_0059__b1684093473514"><span id="cce_10_0059__text144061727132711">Policies</span></strong> in the navigation pane, click the <span class="uicontrol" id="cce_10_0059__uicontrol8840143418354"><b>Network Policies</b></span> tab, and click <strong id="cce_10_0059__b1684043412358">Create Network Policy</strong> in the upper right corner.</span><p><ul id="cce_10_0059__ul1275420367216"><li id="cce_10_0059__li207540368218"><strong id="cce_10_0059__b5858127617589">Policy Name</strong>: Specify a network policy name.</li><li id="cce_10_0059__li86551950162110"><strong id="cce_10_0059__b2485142065319">Namespace</strong>: Select a namespace in which the network policy is applied.</li><li id="cce_10_0059__li1811145118419"><strong id="cce_10_0059__b1082493183618">Selector</strong>: Enter a label, select the pod to be associated, and click <strong id="cce_10_0059__b39962039143613">Add</strong>. You can also click <span class="uicontrol" id="cce_10_0059__uicontrol127315410439"><b>Reference Workload Label</b></span> to use the label of an existing workload.</li><li id="cce_10_0059__li20288331248"><strong id="cce_10_0059__b288315258371">Inbound Rule</strong>: Click <span><img id="cce_10_0059__image297081312440" src="en-us_image_0000001851745568.png"></span> to add an inbound rule. For details about parameter settings, see <a href="#cce_10_0059__table166419994515">Table 1</a>.<p id="cce_10_0059__p13464141094517"></p>
<p id="cce_10_0059__p1251071818275"><span><img id="cce_10_0059__image3789195442716" src="en-us_image_0000001897906213.png"></span></p>
<div class="section" id="cce_10_0059__section349662212313"><h4 class="sectiontitle">Creating a Network Policy on the Console</h4><ol id="cce_10_0059__ol10753729162012"><li id="cce_10_0059__li67621546123813"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0059__li275310297205"><span>Choose <strong id="cce_10_0059__b1684093473514"><span id="cce_10_0059__text144061727132711">Policies</span></strong> in the navigation pane, click the <span class="uicontrol" id="cce_10_0059__uicontrol8840143418354"><b>Network Policies</b></span> tab, and click <strong id="cce_10_0059__b1684043412358">Create Network Policy</strong> in the upper right corner.</span><p><ul id="cce_10_0059__ul1275420367216"><li id="cce_10_0059__li207540368218"><strong id="cce_10_0059__b5858127617589">Policy Name</strong>: Specify a network policy name.</li><li id="cce_10_0059__li86551950162110"><strong id="cce_10_0059__b2485142065319">Namespace</strong>: Select a namespace in which the network policy is applied.</li><li id="cce_10_0059__li1811145118419"><strong id="cce_10_0059__b1082493183618">Selector</strong>: Enter a label, select the pod to be associated, and click <strong id="cce_10_0059__b39962039143613">Add</strong>. You can also click <span class="uicontrol" id="cce_10_0059__uicontrol127315410439"><b>Reference Workload Label</b></span> to use the label of an existing workload.</li><li id="cce_10_0059__li20288331248"><strong id="cce_10_0059__b288315258371">Inbound Rule</strong>: Click <span><img id="cce_10_0059__image297081312440" src="en-us_image_0000001981276605.png"></span> to add an inbound rule. For details about parameter settings, see <a href="#cce_10_0059__table166419994515">Table 1</a>.<p id="cce_10_0059__p13464141094517"></p>
<p id="cce_10_0059__p1251071818275"><span><img id="cce_10_0059__image3789195442716" src="en-us_image_0000001981436449.png"></span></p>
<div class="p" id="cce_10_0059__p16644759445">
<div class="tablenoborder"><a name="cce_10_0059__table166419994515"></a><a name="table166419994515"></a><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0059__table166419994515" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Adding an inbound rule</caption><thead align="left"><tr id="cce_10_0059__row186401397458"><th align="left" class="cellrowborder" valign="top" width="15%" id="mcps1.3.8.2.2.2.1.4.6.1.2.3.1.1"><p id="cce_10_0059__p163919913452">Parameter</p>
</th>
@ -148,7 +155,7 @@ spec:
</table>
</div>
</div>
</li><li id="cce_10_0059__li208969565264"><strong id="cce_10_0059__b13933104315451">Outbound Rule</strong>: Click <span><img id="cce_10_0059__image190375162714" src="en-us_image_0000001897906225.png"></span> to add an outbound rule. For details about parameter settings, see <a href="#cce_10_0059__table166419994515">Table 1</a>.<p id="cce_10_0059__p74227561415"><span><img id="cce_10_0059__image203216571849" src="en-us_image_0000001863378970.png"></span></p>
</li><li id="cce_10_0059__li208969565264"><strong id="cce_10_0059__b13933104315451">Outbound Rule</strong>: Click <span><img id="cce_10_0059__image190375162714" src="en-us_image_0000001981436461.png"></span> to add an outbound rule. For details about parameter settings, see <a href="#cce_10_0059__table166419994515">Table 1</a>.<p id="cce_10_0059__p19496171820718"><span><img id="cce_10_0059__image203216571849" src="en-us_image_0000001950317048.png"></span></p>
<div class="p" id="cce_10_0059__p1052121812515">
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0059__table940510264284" frame="border" border="1" rules="all"><caption><b>Table 2 </b>Adding an outbound rule</caption><thead align="left"><tr id="cce_10_0059__row15405182632814"><th align="left" class="cellrowborder" valign="top" width="15%" id="mcps1.3.8.2.2.2.1.5.5.1.2.3.1.1"><p id="cce_10_0059__p34051926152811">Parameter</p>
</th>
@ -181,12 +188,12 @@ spec:
</div>
</div>
</li></ul>
</p></li><li id="cce_10_0059__li1513793212118"><span>After the configuration is complete, click <span class="uicontrol" id="cce_10_0059__uicontrol1498744718284"><b>OK</b></span>.</span></li></ol>
</p></li><li id="cce_10_0059__li1513793212118"><span>Click <span class="uicontrol" id="cce_10_0059__uicontrol1498744718284"><b>OK</b></span>.</span></li></ol>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0677.html">Container Tunnel Network Settings</a></div>
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0677.html">Tunnel Network Settings</a></div>
</div>
</div>

View File

@ -4,13 +4,16 @@
<div id="body8662426"><div class="section" id="cce_10_0063__section127666327248"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0063__p192873216229">After a node scaling policy is created, you can delete, edit, disable, enable, or clone the policy.</p>
</div>
<div class="section" id="cce_10_0063__section102878407207"><h4 class="sectiontitle">Viewing a Node Scaling Policy</h4><p id="cce_10_0063__p713741135215">You can view the associated node pool, rules, and scaling history of a node scaling policy and rectify faults according to the error information displayed.</p>
<ol id="cce_10_0063__ol17409123885219"><li id="cce_10_0063__li148293318248"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li757116188514"><span>In the navigation pane, choose <span class="uicontrol" id="cce_10_0063__uicontrol885043603616"><b>Nodes</b></span>.On the page displayed, click the <strong id="cce_10_0063__b1785019363361">Node Pools</strong> tab and then the name of the node pool for which an auto scaling policy has been created to view the node pool details.</span></li><li id="cce_10_0063__li391162210375"><span>On the node pool details page, click the <strong id="cce_10_0063__b182822310377">Auto Scaling</strong> tab to view the auto scaling configuration and scaling records.</span></li></ol>
<ol id="cce_10_0063__ol17409123885219"><li id="cce_10_0063__li148293318248"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li757116188514"><span>In the navigation pane, choose <span class="uicontrol" id="cce_10_0063__uicontrol885043603616"><b>Nodes</b></span>.On the page displayed, click the <strong id="cce_10_0063__b1785019363361">Node Pools</strong> tab and then the name of the node pool for which an auto scaling policy has been created to view the node pool details.</span></li><li id="cce_10_0063__li391162210375"><span>On the node pool details page, click the <strong id="cce_10_0063__b182822310377">Auto Scaling</strong> tab to view the auto scaling configuration and scaling records.</span><p><div class="note" id="cce_10_0063__note13404926203311"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0063__p1793618441931">You can obtain created auto scaling policies on the <strong id="cce_10_0063__b514212331917"><span id="cce_10_0063__text67571453104013">Policies</span></strong> page.</p>
<ol type="a" id="cce_10_0063__ol1691347738"><li id="cce_10_0063__li5468556932">Log in to the CCE console and click the cluster name to access the cluster console.</li><li id="cce_10_0063__li87313521749">In the navigation pane, choose <strong id="cce_10_0063__b576614533199"><span id="cce_10_0063__text1838374619210">Policies</span></strong>. On the page displayed, click the <strong id="cce_10_0063__b810014379203">Node Scaling Policies</strong> tab.</li><li id="cce_10_0063__li141394161742">Check the configuration of the auto scaling policies. Choose <strong id="cce_10_0063__b10717289212">More</strong> &gt; <strong id="cce_10_0063__b817473111210">Scaling History</strong> for the target policy to check the scaling records of the policy.</li></ol>
</div></div>
</p></li></ol>
</div>
<div class="section" id="cce_10_0063__section128584032017"><h4 class="sectiontitle">Deleting a Node Scaling Policy</h4><ol id="cce_10_0063__ol14644105712488"><li id="cce_10_0063__li41181041153517"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li21181041113517"><span>In the navigation pane, choose <strong id="cce_10_0063__b1214315541372"><span id="cce_10_0063__text82292962415">Policies</span></strong>. On the page displayed, click the <strong id="cce_10_0063__b6742397389">Node Scaling Policies</strong> tab, locate the row containing the target policy and choose <strong id="cce_10_0063__b1770171519392">More</strong> &gt; <strong id="cce_10_0063__b88264165396">Delete</strong> in the <strong id="cce_10_0063__b7342193564112">Operation</strong> column.</span></li><li id="cce_10_0063__li19809141991015"><span>In the <span class="wintitle" id="cce_10_0063__wintitle195460432178"><b>Delete Node Scaling Policy</b></span> dialog box displayed, confirm whether to delete the policy.</span></li><li id="cce_10_0063__li1340513385528"><span>Click <span class="uicontrol" id="cce_10_0063__uicontrol12723105481711"><b>Yes</b></span> to delete the policy.</span></li></ol>
</div>
<div class="section" id="cce_10_0063__section5652756162214"><h4 class="sectiontitle">Editing a Node Scaling Policy</h4><ol id="cce_10_0063__ol067875612225"><li id="cce_10_0063__li1148617913919"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li19486498394"><span>In the navigation pane, choose <strong id="cce_10_0063__b19317105710390"><span id="cce_10_0063__text105014172246">Policies</span></strong>. On the page displayed, click the <strong id="cce_10_0063__b5317185793910">Node Scaling Policies</strong> tab, locate the row containing the target policy and click <strong id="cce_10_0063__b822154212401">Edit</strong> in the <strong id="cce_10_0063__b152854419415">Operation</strong> column.</span></li><li id="cce_10_0063__li56781856152211"><span>On the <span class="uicontrol" id="cce_10_0063__uicontrol7933134119486"><b>Edit Node Scaling Policy</b></span> page displayed, configure policy parameters listed in <a href="cce_10_0209.html#cce_10_0209__table18763092201">Table 2</a>.</span></li><li id="cce_10_0063__li86781756112220"><span>After the configuration is complete, click <span class="uicontrol" id="cce_10_0063__uicontrol07463587480"><b>OK</b></span>.</span></li></ol>
</div>
<div class="section" id="cce_10_0063__section367810565223"><h4 class="sectiontitle">Cloning a Node Scaling Policy</h4><ol id="cce_10_0063__ol1283103252519"><li id="cce_10_0063__li20680159143911"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li1068085914390"><span>In the navigation pane, choose <strong id="cce_10_0063__b182079494212"><span id="cce_10_0063__text15369102114247">Policies</span></strong>. On the page displayed, click the <strong id="cce_10_0063__b1620711418426">Node Scaling Policies</strong> tab, locate the row containing the target policy and choose <strong id="cce_10_0063__b1020734124213">More</strong> &gt; <strong id="cce_10_0063__b620724164218">Clone</strong> in the <strong id="cce_10_0063__b82081045425">Operation</strong> column.</span></li><li id="cce_10_0063__li128363212514"><span>On the <span class="uicontrol" id="cce_10_0063__uicontrol162071440144911"><b>Clone Node Scaling Policy</b></span> page displayed, certain parameters have been cloned. Add or modify other policy parameters based on service requirements.</span></li><li id="cce_10_0063__li383732172512"><span>Click <strong id="cce_10_0063__b76092016183">OK</strong>.</span></li></ol>
<div class="section" id="cce_10_0063__section367810565223"><h4 class="sectiontitle">Cloning a Node Scaling Policy</h4><ol id="cce_10_0063__ol1283103252519"><li id="cce_10_0063__li20680159143911"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li1068085914390"><span>In the navigation pane, choose <strong id="cce_10_0063__b182079494212"><span id="cce_10_0063__text15369102114247">Policies</span></strong>. On the page displayed, click the <strong id="cce_10_0063__b1620711418426">Node Scaling Policies</strong> tab, locate the row containing the target policy and choose <strong id="cce_10_0063__b1020734124213">More</strong> &gt; <strong id="cce_10_0063__b620724164218">Clone</strong> in the <strong id="cce_10_0063__b82081045425">Operation</strong> column.</span></li><li id="cce_10_0063__li128363212514"><span>On the <span class="uicontrol" id="cce_10_0063__uicontrol162071440144911"><b>Create Node Scaling Policy</b></span> page displayed, certain parameters have been cloned. Add or modify other policy parameters based on service requirements.</span></li><li id="cce_10_0063__li383732172512"><span>Click <strong id="cce_10_0063__b76092016183">OK</strong>.</span></li></ol>
</div>
<div class="section" id="cce_10_0063__section4771832152513"><h4 class="sectiontitle">Enabling or Disabling a Node Scaling Policy</h4><ol id="cce_10_0063__ol0843321258"><li id="cce_10_0063__li1221435414019"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li4214105494011"><span>In the navigation pane, choose <strong id="cce_10_0063__b4140849438"><span id="cce_10_0063__text1726717273247">Policies</span></strong>. On the page displayed, click the <strong id="cce_10_0063__b1814110454317">Node Scaling Policies</strong> tab, locate the row containing the target policy click <strong id="cce_10_0063__b0778424161820">Disable</strong> in the <strong id="cce_10_0063__b1977852441812">Operation</strong> column. If the policy is in the disabled state, click <span class="uicontrol" id="cce_10_0063__uicontrol177902431813"><b>Enable</b></span> in the <strong id="cce_10_0063__b47795246181">Operation</strong> column.</span></li><li id="cce_10_0063__li78473252510"><span>In the dialog box displayed, confirm whether to disable or enable the node policy.</span></li></ol>
</div>

View File

@ -6,23 +6,15 @@
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0277.html">Overview</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0129.html">CoreDNS</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0907.html">Scheduling and Elasticity Add-ons</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0066.html">CCE Container Storage (Everest)</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0908.html">Cloud Native Observability Add-ons</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0132.html">CCE Node Problem Detector</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0909.html">Cloud Native Heterogeneous Computing Add-ons</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0154.html">CCE Cluster Autoscaler</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0910.html">Container Network Add-ons</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0205.html">Kubernetes Metrics Server</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0240.html">CCE Advanced HPA</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0141.html">CCE AI Suite (NVIDIA GPU)</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0193.html">Volcano Scheduler</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0127.html">FlexVolume (Discarded)</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0911.html">Container Storage Add-ons</a></strong><br>
</li>
</ul>
</div>

File diff suppressed because it is too large Load Diff

View File

@ -4,6 +4,8 @@
<div id="body8662426"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_bulletin_0089.html">Kubernetes 1.29 Release Notes</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_bulletin_0068.html">Kubernetes 1.28 Release Notes</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_bulletin_0059.html">Kubernetes 1.27 Release Notes</a></strong><br>
@ -12,7 +14,7 @@
</li>
<li class="ulchildlink"><strong><a href="cce_bulletin_0027.html">Kubernetes 1.23 Release Notes</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_bulletin_0026.html">Kubernetes 1.21 Release Notes</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_bulletin_0026.html">Kubernetes 1.21 (EOM) Release Notes</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_whsnew_0010.html">Kubernetes 1.19 (EOM) Release Notes</a></strong><br>
</li>

View File

@ -8,13 +8,13 @@
<p id="cce_10_0081__p19802113820588">This section describes how node pools work in CCE and how to create and manage node pools.</p>
</div>
<div class="section" id="cce_10_0081__section1486732122217"><h4 class="sectiontitle">Node Pool Architecture</h4><p id="cce_10_0081__p2480134412214">Generally, all nodes in a node pool have the following same attributes:</p>
<ul id="cce_10_0081__ul134808449226"><li id="cce_10_0081__li1848004422220">Node OS</li><li id="cce_10_0081__li890631614331">Node flavor</li><li id="cce_10_0081__li730814322334">Node login mode</li><li id="cce_10_0081__li3978937183319">Node container runtime</li><li id="cce_10_0081__li20480184419225">Startup parameters of Kubernetes components on a node</li><li id="cce_10_0081__li17480104411227">User-defined startup script of a node</li><li id="cce_10_0081__li84806446229"><strong id="cce_10_0081__b452816349419">Kubernetes Labels</strong> and <strong id="cce_10_0081__b65284345410">Taints</strong></li></ul>
<ul id="cce_10_0081__ul134808449226"><li id="cce_10_0081__li1848004422220">Node OS</li><li id="cce_10_0081__li730814322334">Node login mode</li><li id="cce_10_0081__li3978937183319">Node container runtime</li><li id="cce_10_0081__li20480184419225">Startup parameters of Kubernetes components on a node</li><li id="cce_10_0081__li17480104411227">Custom startup script of a node</li><li id="cce_10_0081__li84806446229">Kubernetes labels and taints</li></ul>
<p id="cce_10_0081__p1048019444223">CCE provides the following extended attributes for node pools:</p>
<ul id="cce_10_0081__ul84801544162219"><li id="cce_10_0081__li1480184410229">Node pool OS</li><li id="cce_10_0081__li114801944112213">Maximum number of pods on each node in a node pool</li></ul>
</div>
<div class="section" id="cce_10_0081__section16928123042115"><a name="cce_10_0081__section16928123042115"></a><a name="section16928123042115"></a><h4 class="sectiontitle">Description of <span class="keyword" id="cce_10_0081__keyword729863519811">DefaultPool</span></h4><p id="cce_10_0081__p5444184415215"><span class="keyword" id="cce_10_0081__keyword799943811813">DefaultPool</span> is not a real node pool. It only <strong id="cce_10_0081__b1896884414412">classifies</strong> nodes that are not in the user-created node pools. These nodes are directly created on the console or by calling APIs. DefaultPool does not support any user-created node pool functions, including scaling and parameter configuration. DefaultPool cannot be edited, deleted, expanded, or auto scaled, and nodes in it cannot be migrated.</p>
<div class="section" id="cce_10_0081__section16928123042115"><a name="cce_10_0081__section16928123042115"></a><a name="section16928123042115"></a><h4 class="sectiontitle">Description of <span class="keyword" id="cce_10_0081__keyword729863519811">DefaultPool</span></h4><p id="cce_10_0081__p5444184415215"><span class="keyword" id="cce_10_0081__keyword799943811813">DefaultPool</span> is not a real node pool. It only <strong id="cce_10_0081__b1896884414412">classifies</strong> nodes that are not in the custom node pools. These nodes are directly created on the console or by calling APIs. DefaultPool does not support any user-created node pool functions, including scaling and parameter configuration. DefaultPool cannot be edited, deleted, expanded, or auto scaled, and nodes in it cannot be migrated.</p>
</div>
<div class="section" id="cce_10_0081__section32131316256"><h4 class="sectiontitle">Applicable Scenarios</h4><p id="cce_10_0081__p1945803011253">When a large-scale cluster is required, you are advised to use node pools to manage nodes.</p>
<div class="section" id="cce_10_0081__section32131316256"><h4 class="sectiontitle">Application Scenarios</h4><p id="cce_10_0081__p1945803011253">When a large-scale cluster is required, you are advised to use node pools to manage nodes.</p>
<p id="cce_10_0081__p1491578182512">The following table describes multiple scenarios of large-scale cluster management and the functions of node pools in each scenario.</p>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0081__table1736317479258" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Using node pools for different management scenarios</caption><thead align="left"><tr id="cce_10_0081__row336414719256"><th align="left" class="cellrowborder" valign="top" width="39.32%" id="mcps1.3.4.4.2.3.1.1"><p id="cce_10_0081__p5364134792518">Scenario</p>
@ -60,7 +60,7 @@
</tr>
<tr id="cce_10_0081__row1084351717279"><td class="cellrowborder" valign="top" width="21.942194219421943%" headers="mcps1.3.5.2.1.4.1.1 "><p id="cce_10_0081__p105796289273">Deleting a node pool</p>
</td>
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p1916410397318">When a node pool is deleted, the nodes in the node pool are deleted first. Workloads on the original nodes are automatically migrated to available nodes in other node pools.</p>
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p1916410397318">Deleting a node pool will delete nodes in the pool. Pods on these nodes will be automatically migrated to available nodes in other node pools.</p>
</td>
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><p id="cce_10_0081__p75791828182711">If pods in the node pool have a specific node selector and none of the other nodes in the cluster satisfies the node selector, the pods will become unschedulable.</p>
</td>
@ -72,7 +72,7 @@
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><p id="cce_10_0081__p105796284275">Do not store important data on nodes in a node pool because the nodes may be deleted after scale-in. Data on the deleted nodes cannot be restored.</p>
</td>
</tr>
<tr id="cce_10_0081__row5843131718272"><td class="cellrowborder" valign="top" width="21.942194219421943%" headers="mcps1.3.5.2.1.4.1.1 "><p id="cce_10_0081__p18579132802720">Enabling auto scaling for a node pool</p>
<tr id="cce_10_0081__row5843131718272"><td class="cellrowborder" valign="top" width="21.942194219421943%" headers="mcps1.3.5.2.1.4.1.1 "><p id="cce_10_0081__p18579132802720">Disabling auto scaling for a node pool</p>
</td>
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p75791228192718">After auto scaling is disabled, the number of nodes in a node pool will not automatically change with the cluster loads.</p>
</td>
@ -86,9 +86,9 @@
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><p id="cce_10_0081__p25795288273">After auto scaling is enabled, you are not advised to manually adjust the node pool size.</p>
</td>
</tr>
<tr id="cce_10_0081__row18431117142713"><td class="cellrowborder" valign="top" width="21.942194219421943%" headers="mcps1.3.5.2.1.4.1.1 "><p id="cce_10_0081__p1657922832717">Changing node pool configurations</p>
<tr id="cce_10_0081__row18431117142713"><td class="cellrowborder" valign="top" width="21.942194219421943%" headers="mcps1.3.5.2.1.4.1.1 "><p id="cce_10_0081__p1657922832717">Modifying node pool configurations</p>
</td>
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p3579182816279">You can modify the node pool name, node quantity, Kubernetes labels (and their quantity), resource tags, and taints.</p>
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p3579182816279">You can change the node pool name and number of nodes, add or delete Kubernetes labels, resource tags, and taints, and adjust node pool configurations such as the disk, OS, and container engine of the node pool.</p>
</td>
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><p id="cce_10_0081__p857917281274">The deleted or added Kubernetes labels and taints (as well as their quantity) will apply to all nodes in the node pool, which may cause pod re-scheduling. Therefore, exercise caution when performing this operation.</p>
</td>
@ -111,7 +111,7 @@
</td>
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p5387151854714">You can configure core components with fine granularity.</p>
</td>
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><ul id="cce_10_0081__ul131631956486"><li id="cce_10_0081__li16163105164816">This function is supported only in clusters of v1.15 and later. It is not displayed for versions earlier than v1.15.</li><li id="cce_10_0081__li191638515487">The default node pool DefaultPool does not support this type of configuration.</li></ul>
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><ul id="cce_10_0081__ul131631956486"><li id="cce_10_0081__li16163105164816">This function is supported only in clusters of v1.15 and later. It is not displayed for versions earlier than v1.15.</li><li id="cce_10_0081__li191638515487">The default node pool does not support this type of configuration.</li></ul>
</td>
</tr>
</tbody>

File diff suppressed because it is too large Load Diff

View File

@ -1,7 +1,7 @@
<a name="cce_10_0084"></a><a name="cce_10_0084"></a>
<h1 class="topictitle1">Enabling ICMP Security Group Rules</h1>
<div id="body1530866171131"><div class="section" id="cce_10_0084__section106079439418"><h4 class="sectiontitle">Application Scenarios</h4><p id="cce_10_0084__p34679509418">If a workload uses UDP for both load balancing and health check, enable ICMP security group rules for the backend servers.</p>
<div id="body1530866171131"><div class="section" id="cce_10_0084__section106079439418"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0084__p34679509418">If a workload uses UDP for both load balancing and health check, enable ICMP security group rules for the backend servers.</p>
</div>
<div class="section" id="cce_10_0084__section865612352391"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0084__ol1999461164212"><li id="cce_10_0084__li2114123554110"><span>Log in to the CCE console, choose <span class="uicontrol" id="cce_10_0084__uicontrol16903135110235"><b>Service List</b></span> &gt; <span class="uicontrol" id="cce_10_0084__uicontrol8903205152316"><b>Networking</b></span> &gt; <span class="uicontrol" id="cce_10_0084__uicontrol2903851102314"><b>Virtual Private Cloud</b></span>, and choose <span class="uicontrol" id="cce_10_0084__uicontrol13903195119235"><b>Access Control</b></span> &gt; <span class="uicontrol" id="cce_10_0084__uicontrol1903115192316"><b>Security Groups</b></span> in the navigation pane.</span></li><li id="cce_10_0084__li1211191111308"><span>In the security group list, locate the security group of the cluster. Click the <strong id="cce_10_0084__b104332046247">Inbound Rules</strong> tab page and then <strong id="cce_10_0084__b104331541248">Add Rule</strong>. In the <strong id="cce_10_0084__b143384162410">Add Inbound Rule</strong> dialog box, configure inbound parameters.</span><p>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0084__table14257503611" frame="border" border="1" rules="all"><thead align="left"><tr id="cce_10_0084__row02645133615"><th align="left" class="cellrowborder" valign="top" width="16.189999999999998%" id="mcps1.3.2.2.2.2.1.1.6.1.1"><p id="cce_10_0084__p84201847103620">Cluster Type</p>

View File

@ -10,10 +10,10 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0140.html">Connecting to a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0215.html">Upgrading a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0031.html">Managing a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0215.html">Upgrading a Cluster</a></strong><br>
</li>
</ul>
</div>

View File

@ -3,18 +3,18 @@
<h1 class="topictitle1">Overview</h1>
<div id="body0000001159453456"><div class="section" id="cce_10_0094__section17868123416122"><h4 class="sectiontitle">Why We Need Ingresses</h4><p id="cce_10_0094__p19813582419">A Service is generally used to forward access requests based on TCP and UDP and provide layer-4 load balancing for clusters. However, in actual scenarios, if there is a large number of HTTP/HTTPS access requests on the application layer, the Service cannot meet the forwarding requirements. Therefore, the Kubernetes cluster provides an HTTP-based access mode, ingress.</p>
<p id="cce_10_0094__p168757241679">An ingress is an independent resource in the Kubernetes cluster and defines rules for forwarding external access traffic. As shown in <a href="#cce_10_0094__fig18155819416">Figure 1</a>, you can customize forwarding rules based on domain names and URLs to implement fine-grained distribution of access traffic.</p>
<div class="fignone" id="cce_10_0094__fig18155819416"><a name="cce_10_0094__fig18155819416"></a><a name="fig18155819416"></a><span class="figcap"><b>Figure 1 </b>Ingress diagram</span><br><span><img class="eddx" id="cce_10_0094__image98185817414" src="en-us_image_0000001851587340.png"></span></div>
<div class="fignone" id="cce_10_0094__fig18155819416"><a name="cce_10_0094__fig18155819416"></a><a name="fig18155819416"></a><span class="figcap"><b>Figure 1 </b>Ingress diagram</span><br><span><img class="eddx" id="cce_10_0094__image98185817414" src="en-us_image_0000001950317392.png"></span></div>
<p id="cce_10_0094__p128258846">The following describes the ingress-related definitions:</p>
<ul id="cce_10_0094__ul2875811411"><li id="cce_10_0094__li78145815413">Ingress object: a set of access rules that forward requests to specified Services based on domain names or URLs. It can be added, deleted, modified, and queried by calling APIs.</li><li id="cce_10_0094__li148115817417">Ingress Controller: an executor for request forwarding. It monitors the changes of resource objects such as ingresses, Services, endpoints, secrets (mainly TLS certificates and keys), nodes, and ConfigMaps in real time, parses rules defined by ingresses, and forwards requests to the target backend Services.</li></ul>
</div>
<div class="section" id="cce_10_0094__section162271821192312"><h4 class="sectiontitle">Working Rules of LoadBalancer Ingress Controller</h4><p id="cce_10_0094__p172542048121220">LoadBalancer Ingress Controller developed by CCE implements layer-7 network access for the internet and intranet (in the same VPC) based on ELB and distributes access traffic to the corresponding Services using different URLs.</p>
<p id="cce_10_0094__p4254124831218">LoadBalancer Ingress Controller is deployed on the master node and bound to the load balancer in the VPC where the cluster resides. Different domain names, ports, and forwarding policies can be configured for the same load balancer (with the same IP address). <a href="#cce_10_0094__fig122542486129">Figure 2</a> shows the working rules of LoadBalancer Ingress Controller.</p>
<ol id="cce_10_0094__ol525410483123"><li id="cce_10_0094__li8254184813127">A user creates an ingress object and configures a traffic access rule in the ingress, including the load balancer, URL, SSL, and backend service port.</li><li id="cce_10_0094__li1225474817126">When Ingress Controller detects that the ingress object changes, it reconfigures the listener and backend server route on the ELB side according to the traffic access rule.</li><li id="cce_10_0094__li115615167193">When a user accesses a workload, the traffic is forwarded to the corresponding backend service port based on the forwarding policy configured on ELB, and then forwarded to each associated workload through the Service.</li></ol>
<div class="fignone" id="cce_10_0094__fig122542486129"><a name="cce_10_0094__fig122542486129"></a><a name="fig122542486129"></a><span class="figcap"><b>Figure 2 </b>Working rules of shared LoadBalancer ingresses in CCE standard and Turbo clusters</span><br><span><img class="eddx" id="cce_10_0094__image725424815120" src="en-us_image_0000001851587344.png"></span></div>
<div class="fignone" id="cce_10_0094__fig122542486129"><a name="cce_10_0094__fig122542486129"></a><a name="fig122542486129"></a><span class="figcap"><b>Figure 2 </b>Working rules of shared LoadBalancer ingresses in CCE standard and Turbo clusters</span><br><span><img class="eddx" id="cce_10_0094__image725424815120" src="en-us_image_0000001981276941.png"></span></div>
<p id="cce_10_0094__p3662933103112">When you use <strong id="cce_10_0094__b61532583920">a dedicated load balancer in a CCE Turbo cluster</strong>, pod IP addresses are allocated from the VPC and the load balancer can directly access the pods. When creating an ingress for external cluster access, you can use ELB to access a ClusterIP Service and use pods as the backend server of the ELB listener. In this way, external traffic can directly access the pods in the cluster without being forwarded by node ports.</p>
<div class="fignone" id="cce_10_0094__fig44531612193618"><span class="figcap"><b>Figure 3 </b>Working rules of passthrough networking for dedicated LoadBalancer ingresses in CCE Turbo clusters</span><br><span><img class="eddx" id="cce_10_0094__image6906154516408" src="en-us_image_0000001897906717.png"></span></div>
<div class="fignone" id="cce_10_0094__fig44531612193618"><span class="figcap"><b>Figure 3 </b>Working rules of passthrough networking for dedicated LoadBalancer ingresses in CCE Turbo clusters</span><br><span><img class="eddx" id="cce_10_0094__image6906154516408" src="en-us_image_0000001950317380.png"></span></div>
</div>
<div class="section" id="cce_10_0094__section3565202819276"><a name="cce_10_0094__section3565202819276"></a><a name="section3565202819276"></a><h4 class="sectiontitle">Services Supported by Ingresses</h4><div class="p" id="cce_10_0094__p109298589133"><a href="#cce_10_0094__table143264518141">Table 1</a> lists the services supported by LoadBalancer ingresses.
<div class="section" id="cce_10_0094__section3565202819276"><a name="cce_10_0094__section3565202819276"></a><a name="section3565202819276"></a><h4 class="sectiontitle">Services Supported by Ingresses</h4><div class="p" id="cce_10_0094__p109298589133"><a href="#cce_10_0094__table143264518141">Table 1</a> lists the Services supported by LoadBalancer ingresses.
<div class="tablenoborder"><a name="cce_10_0094__table143264518141"></a><a name="table143264518141"></a><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0094__table143264518141" width="100%" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Services supported by LoadBalancer ingresses</caption><thead align="left"><tr id="cce_10_0094__row1132645112145"><th align="left" class="cellrowborder" valign="top" width="15%" id="mcps1.3.3.2.2.2.5.1.1"><p id="cce_10_0094__p33261518148">Cluster Type</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="22%" id="mcps1.3.3.2.2.2.5.1.2"><p id="cce_10_0094__p15326195191413">ELB Type</p>

View File

@ -132,7 +132,7 @@
</thead>
<tbody><tr id="cce_10_0105__row04201302279"><td class="cellrowborder" valign="top" width="23%" headers="mcps1.3.4.2.2.2.1.2.3.1.1 "><p id="cce_10_0105__p6420110192718">CLI</p>
</td>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.4.2.2.2.1.2.3.1.2 "><p id="cce_10_0105__p94204010271">Set commands to be executed in the container for pre-stop processing. The command format is <strong id="cce_10_0105__b1740229594">Command Args[1] Args[2]...</strong>. <strong id="cce_10_0105__b74996436">Command</strong> is a system command or a user-defined executable program. If no path is specified, an executable program in the default path will be selected. If multiple commands need to be executed, write the commands into a script for execution.</p>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.4.2.2.2.1.2.3.1.2 "><p id="cce_10_0105__p94204010271">Set commands to be executed in the container for pre-stop processing. The command format is <strong id="cce_10_0105__b2000314378">Command Args[1] Args[2]...</strong>. <strong id="cce_10_0105__b2097726279">Command</strong> is a system command or a user-defined executable program. If no path is specified, an executable program in the default path will be selected. If multiple commands need to be executed, write the commands into a script for execution.</p>
<p id="cce_10_0105__p94203082712">Example command:</p>
<pre class="screen" id="cce_10_0105__screen6420190132712">exec:
command:
@ -152,7 +152,7 @@
</div>
</p></li></ol>
</div>
<div class="section" id="cce_10_0105__section151181981167"><h4 class="sectiontitle">Example YAML</h4><p id="cce_10_0105__p13147194016468">This section uses Nginx as an example to describe how to set the container lifecycle.</p>
<div class="section" id="cce_10_0105__section151181981167"><h4 class="sectiontitle">YAML Example</h4><p id="cce_10_0105__p13147194016468">This section uses Nginx as an example to describe how to set the container lifecycle.</p>
<p id="cce_10_0105__p15279185820">In the following configuration file, the <strong id="cce_10_0105__b964310018294">postStart</strong> command is defined to run the <strong id="cce_10_0105__b9643602292">install.sh</strong> command in the <strong id="cce_10_0105__b18643180192920">/bin/bash</strong> directory. <strong id="cce_10_0105__b1464330152911">preStop</strong> is defined to run the <strong id="cce_10_0105__b46435072919">uninstall.sh</strong> command.</p>
<pre class="screen" id="cce_10_0105__screen8529181815811">apiVersion: apps/v1
kind: Deployment
@ -191,7 +191,7 @@ spec:
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0130.html">Configuring a Container</a></div>
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0130.html">Configuring a Workload</a></div>
</div>
</div>

View File

@ -6,26 +6,26 @@
<div class="section" id="cce_10_0107__section17352373317"><h4 class="sectiontitle">Permissions</h4><p id="cce_10_0107__p51211251156">When you access a cluster using kubectl, CCE uses <strong id="cce_10_0107__b204601556154217">kubeconfig</strong> generated on the cluster for authentication. This file contains user information, based on which CCE determines which Kubernetes resources can be accessed by kubectl. The permissions recorded in a <strong id="cce_10_0107__b16295666413">kubeconfig</strong> file vary from user to user.</p>
<p id="cce_10_0107__p142391810113">For details about user permissions, see <a href="cce_10_0187.html#cce_10_0187__section1464135853519">Cluster Permissions (IAM-based) and Namespace Permissions (Kubernetes RBAC-based)</a>.</p>
</div>
<div class="section" id="cce_10_0107__section37321625113110"><a name="cce_10_0107__section37321625113110"></a><a name="section37321625113110"></a><h4 class="sectiontitle">Using kubectl</h4><p id="cce_10_0107__p764905418355">To connect to a Kubernetes cluster from a PC, you can use kubectl, a Kubernetes command line tool. You can log in to the CCE console and click the name of the target cluster to access the cluster console. On the <strong id="cce_10_0107__b127302345555"><span id="cce_10_0107__text869825054114"><strong>Overview</strong></span></strong> page, view the access address and kubectl connection procedure.</p>
<div class="section" id="cce_10_0107__section37321625113110"><a name="cce_10_0107__section37321625113110"></a><a name="section37321625113110"></a><h4 class="sectiontitle">Using kubectl</h4><p id="cce_10_0107__p764905418355">To connect to a Kubernetes cluster from a PC, you can use kubectl, a Kubernetes command line tool. You can log in to the CCE console and click the name of the target cluster to access the cluster console. On the <strong id="cce_10_0107__b127302345555"><span id="cce_10_0107__text869825054114">Overview</span></strong> page, view the access address and kubectl connection procedure.</p>
<div class="p" id="cce_10_0107__p7805114919351">CCE allows you to access a cluster through a private network or a public network.<ul id="cce_10_0107__ul126071124175518"><li id="cce_10_0107__li144192116548"><span class="keyword" id="cce_10_0107__keyword13441034142917">Intranet access</span>: The client that accesses the cluster must be in the same VPC as the cluster.</li><li id="cce_10_0107__li1460752419555">Public access: The client that accesses the cluster must be able to access public networks and the cluster has been bound with a public network IP.<div class="notice" id="cce_10_0107__note2967194410365"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_10_0107__p19671244103610">To bind an EIP to the cluster, go to the <strong id="cce_10_0107__b1061217302"><span id="cce_10_0107__text6807412192418">Overview</span></strong> page and click <strong id="cce_10_0107__b021910485396">Bind</strong> next to <strong id="cce_10_0107__b132197480394">EIP</strong> in the <strong id="cce_10_0107__b14219164815396">Connection Information</strong> area. In a cluster with an EIP bound, kube-apiserver will be exposed to the Internet and may be attacked. To solve this problem, you can configure Advanced Anti-DDoS for the EIP of the node on which kube-apiserver runs.</p>
</div></div>
</li></ul>
</div>
<p id="cce_10_0107__p2842139103716">Download kubectl and the configuration file. Copy the file to your client, and configure kubectl. After the configuration is complete, you can access your Kubernetes clusters. Procedure:</p>
<p id="cce_10_0107__p2842139103716">Download kubectl and the configuration file. Copy the file to your client, and configure kubectl. After the configuration is complete, you can access your Kubernetes clusters. The process is as follows:</p>
<ol id="cce_10_0107__ol6469105613170"><li id="cce_10_0107__li194691356201712"><span><strong id="cce_10_0107__b469717424401">Download kubectl.</strong></span><p><p id="cce_10_0107__p53069487256">Prepare a computer that can access the public network and install kubectl in CLI mode. You can run the <strong id="cce_10_0107__b2309195102312">kubectl version</strong> command to check whether kubectl has been installed. If kubectl has been installed, skip this step.</p>
<p id="cce_10_0107__p125851851153510">This section uses the Linux environment as an example to describe how to install and configure kubectl. For details, see <a href="https://kubernetes.io/docs/tasks/tools/#kubectl" target="_blank" rel="noopener noreferrer">Installing kubectl</a>.</p>
<ol type="a" id="cce_10_0107__ol735517018289"><li id="cce_10_0107__li551132463520">Log in to your client and download kubectl.<pre class="screen" id="cce_10_0107__screen8511142418352">cd /home
curl -LO https://dl.k8s.io/release/<em id="cce_10_0107__i13511182443516">{v1.25.0}</em>/bin/linux/amd64/kubectl</pre>
<p id="cce_10_0107__p6511924173518"><em id="cce_10_0107__i719013311241"><strong id="cce_10_0107__b16190153182415">{v1.25.0}</strong></em> specifies the version number. Replace it as required.</p>
<p id="cce_10_0107__p6511924173518"><em id="cce_10_0107__i719013311241">{v1.25.0}</em> specifies the version. Replace it as required.</p>
</li><li id="cce_10_0107__li1216814211286">Install kubectl.<pre class="screen" id="cce_10_0107__screen16892115815271">chmod +x kubectl
mv -f kubectl /usr/local/bin</pre>
</li></ol>
</p></li><li id="cce_10_0107__li34691156151712"><a name="cce_10_0107__li34691156151712"></a><a name="li34691156151712"></a><span><strong id="cce_10_0107__b196211619192411">Obtain the kubectl configuration file (kubeconfig).</strong></span><p><p id="cce_10_0107__p1295818109256">On the <strong id="cce_10_0107__b124401123203115"><span id="cce_10_0107__text10158103924216"><strong>Overview</strong></span></strong> page, locate the <strong id="cce_10_0107__b450013549611">Connection Info</strong> area, click <strong id="cce_10_0107__b136512181078">Configure</strong> next to <strong id="cce_10_0107__b177317221173">kubectl</strong>. On the page displayed, download the configuration file.</p>
</p></li><li id="cce_10_0107__li34691156151712"><a name="cce_10_0107__li34691156151712"></a><a name="li34691156151712"></a><span><strong id="cce_10_0107__b196211619192411">Obtain the kubectl configuration file.</strong></span><p><p id="cce_10_0107__p1295818109256">In the <span class="uicontrol" id="cce_10_0107__uicontrol9472521182416"><b>Connection Info</b></span> pane on the <strong id="cce_10_0107__b182944389444"><span id="cce_10_0107__text10158103924216">Overview</span></strong> page, click <strong id="cce_10_0107__b1547035714410">Configure</strong> next to <strong id="cce_10_0107__b5182125174514">kubectl</strong> to check the kubectl connection. On the displayed page, choose <strong id="cce_10_0107__b171301231185115">Intranet access</strong> or <strong id="cce_10_0107__b1113211495516">Public network access</strong> and download the configuration file.</p>
<div class="note" id="cce_10_0107__note191638104210"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0107__ul795610485546"><li id="cce_10_0107__li495634817549">The kubectl configuration file <strong id="cce_10_0107__b11741123981418">kubeconfig</strong> is used for cluster authentication. If the file is leaked, your clusters may be attacked.</li><li id="cce_10_0107__li16956194817544">The Kubernetes permissions assigned by the configuration file downloaded by IAM users are the same as those assigned to the IAM users on the CCE console.</li><li id="cce_10_0107__li1537643019239">If the KUBECONFIG environment variable is configured in the Linux OS, kubectl preferentially loads the KUBECONFIG environment variable instead of <strong id="cce_10_0107__b5859154717398">$home/.kube/config</strong>.</li></ul>
</div></div>
</p></li><li id="cce_10_0107__li25451059122317"><a name="cce_10_0107__li25451059122317"></a><a name="li25451059122317"></a><span>Configure kubectl.</span><p><div class="p" id="cce_10_0107__p109826082413">Configure kubectl (A Linux OS is used).<ol type="a" id="cce_10_0107__ol2291154772010"><li id="cce_10_0107__li102911547102012">Log in to your client and copy the <strong id="cce_10_0107__b156991854125914">kubeconfig.yaml</strong> file downloaded in <a href="#cce_10_0107__li34691156151712">2</a> to the <strong id="cce_10_0107__b175828331240">/home</strong> directory on your client.</li><li id="cce_10_0107__li114766383477">Configure the kubectl authentication file.<pre class="screen" id="cce_10_0107__screen849155210477">cd /home
</p></li><li id="cce_10_0107__li25451059122317"><a name="cce_10_0107__li25451059122317"></a><a name="li25451059122317"></a><span>Configure kubectl.</span><p><div class="p" id="cce_10_0107__p109826082413">Configure kubectl (A Linux OS is used).<ol type="a" id="cce_10_0107__ol2291154772010"><li id="cce_10_0107__li102911547102012">Log in to your client and copy the configuration file (for example, <strong id="cce_10_0107__b156991854125914">kubeconfig.yaml</strong>) downloaded in <a href="#cce_10_0107__li34691156151712">2</a> to the <strong id="cce_10_0107__b175828331240">/home</strong> directory on your client.</li><li id="cce_10_0107__li114766383477">Configure the kubectl authentication file.<pre class="screen" id="cce_10_0107__screen849155210477">cd /home
mkdir -p $HOME/.kube
mv -f <i><span class="varname" id="cce_10_0107__varname937302110334">kubeconfig.yaml</span></i> $HOME/.kube/config</pre>
</li><li id="cce_10_0107__li1480512253214">Switch the kubectl access mode based on service scenarios.<ul id="cce_10_0107__ul91037595229"><li id="cce_10_0107__li5916145112313">Run this command to enable intra-VPC access:<pre class="screen" id="cce_10_0107__screen279213242247">kubectl config use-context internal</pre>
</li><li id="cce_10_0107__li113114274233">Run this command to enable public access (EIP required):<pre class="screen" id="cce_10_0107__screen965013316242">kubectl config use-context external</pre>
</li><li id="cce_10_0107__li104133512481">Run this command to enable public access and two-way authentication (EIP required):<pre class="screen" id="cce_10_0107__screen61712126498">kubectl config use-context externalTLSVerify</pre>
@ -36,7 +36,7 @@ mv -f kubeconfig.yaml $HOME/.kube/config</pre>
</p></li></ol>
</div>
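<p>After completing the preceding procedure and selecting a context, you can verify that kubectl can reach the cluster. The following is a minimal check, and the exact output depends on your cluster:</p>
<pre class="screen">kubectl config get-contexts    # List the available contexts (internal, external, and externalTLSVerify)
kubectl cluster-info           # Display the API server address of the current context
kubectl get nodes              # List the cluster nodes if the connection works</pre>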
<div class="section" id="cce_10_0107__section1559919152711"><a name="cce_10_0107__section1559919152711"></a><a name="section1559919152711"></a><h4 class="sectiontitle"><span class="keyword" id="cce_10_0107__keyword311020376452">Two-Way Authentication for Domain Names</span></h4><p id="cce_10_0107__p138948491274">CCE supports two-way authentication for domain names.</p>
<ul id="cce_10_0107__ul88981331482"><li id="cce_10_0107__li1705116151915">After an EIP is bound to an API Server, two-way domain name authentication is disabled by default if kubectl is used to access the cluster. You can run <strong id="cce_10_0107__b198732542582">kubectl config use-context externalTLSVerify</strong> to enable the two-way domain name authentication.</li><li id="cce_10_0107__li1807459174818">When an EIP is bound to or unbound from a cluster, or a custom domain name is configured or updated, the cluster server certificate will be added the latest cluster access address (including the EIP bound to the cluster and all custom domain names configured for the cluster).</li><li id="cce_10_0107__li17898153310483">Asynchronous cluster synchronization takes about 5 to 10 minutes. You can view the synchronization result in <strong id="cce_10_0107__b196404619200">Synchronize Certificate</strong> in <strong id="cce_10_0107__b364620682012">Operation Records</strong>.</li><li id="cce_10_0107__li614337712">For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, bind the EIP again and download <strong id="cce_10_0107__b121611451417">kubeconfig.yaml</strong> again.</li><li id="cce_10_0107__li5950658165414">If the two-way domain name authentication is not supported, <strong id="cce_10_0107__b56091346184712">kubeconfig.yaml</strong> contains the <strong id="cce_10_0107__b1961534614476">"insecure-skip-tls-verify": true</strong> field, as shown in <a href="#cce_10_0107__fig1941342411">Figure 1</a>. To use two-way authentication, download the <strong id="cce_10_0107__b549311585216">kubeconfig.yaml</strong> file again and enable two-way authentication for the domain names.<div class="fignone" id="cce_10_0107__fig1941342411"><a name="cce_10_0107__fig1941342411"></a><a name="fig1941342411"></a><span class="figcap"><b>Figure 1 </b>Two-way authentication disabled for domain names</span><br><span><img id="cce_10_0107__image3414621613" src="en-us_image_0000001851587804.png"></span></div>
<ul id="cce_10_0107__ul88981331482"><li id="cce_10_0107__li1705116151915">After an EIP is bound to an API Server, two-way domain name authentication is disabled by default if kubectl is used to access the cluster. You can run <strong id="cce_10_0107__b198732542582">kubectl config use-context externalTLSVerify</strong> to enable the two-way domain name authentication.</li><li id="cce_10_0107__li1807459174818">When an EIP is bound to or unbound from a cluster, or a custom domain name is configured or updated, the cluster server certificate will be added the latest cluster access address (including the EIP bound to the cluster and all custom domain names configured for the cluster).</li><li id="cce_10_0107__li17898153310483">Asynchronous cluster synchronization takes about 5 to 10 minutes. You can view the synchronization result in <strong id="cce_10_0107__b196404619200">Synchronize Certificate</strong> in <strong id="cce_10_0107__b364620682012">Operation Records</strong>.</li><li id="cce_10_0107__li614337712">For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, bind the EIP again and download <strong id="cce_10_0107__b121611451417">kubeconfig.yaml</strong> again.</li><li id="cce_10_0107__li5950658165414">If the two-way domain name authentication is not supported, <strong id="cce_10_0107__b56091346184712">kubeconfig.yaml</strong> contains the <strong id="cce_10_0107__b1961534614476">"insecure-skip-tls-verify": true</strong> field, as shown in <a href="#cce_10_0107__fig1941342411">Figure 1</a>. To use two-way authentication, download the <strong id="cce_10_0107__b549311585216">kubeconfig.yaml</strong> file again and enable two-way authentication for the domain names.<div class="fignone" id="cce_10_0107__fig1941342411"><a name="cce_10_0107__fig1941342411"></a><a name="fig1941342411"></a><span class="figcap"><b>Figure 1 </b>Two-way authentication disabled for domain names</span><br><span><img id="cce_10_0107__image3414621613" src="en-us_image_0000001981436769.png"></span></div>
</li></ul>
</div>
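<p>If you are unsure whether the kubeconfig file you are using skips TLS verification, you can inspect it locally. The following is a minimal sketch, assuming the file is stored in the default location:</p>
<pre class="screen">kubectl config view --minify                             # Show only the configuration of the context in use
grep -n "insecure-skip-tls-verify" $HOME/.kube/config    # Check whether TLS verification is skipped</pre>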
<div class="section" id="cce_10_0107__section1628510591883"><h4 class="sectiontitle">FAQs</h4><ul id="cce_10_0107__ul1374831051115"><li id="cce_10_0107__li4748810121112"><strong id="cce_10_0107__b456677171119"><span class="keyword" id="cce_10_0107__keyword0702458114510">Error from server Forbidden</span></strong><p id="cce_10_0107__p75241832114916">When you use kubectl to create or query Kubernetes resources, the following output is returned:</p>


@ -12,17 +12,17 @@
</li><li id="cce_10_0112__li104061647154310"><strong id="cce_10_0112__b84235270695818"><span class="keyword" id="cce_10_0112__keyword1395397266173145">CLI</span></strong><p id="cce_10_0112__p105811510164113">CLI is an efficient tool for health check. When using the CLI, you must specify an executable command in a container. The cluster periodically runs the command in the container. If the command output is 0, the health check is successful. Otherwise, the health check fails.</p>
<p id="cce_10_0112__p1658131014413">The CLI mode can be used to replace the HTTP request-based and TCP port-based health check.</p>
<ul id="cce_10_0112__ul16409174744313"><li id="cce_10_0112__li7852728174119">For a TCP port, you can use a program script to connect to a container port. If the connection is successful, the script returns <strong id="cce_10_0112__b1610019014247">0</strong>. Otherwise, the script returns <strong id="cce_10_0112__b5100905245">1</strong>.</li><li id="cce_10_0112__li241104715431">For an HTTP request, you can use the script command to run the <strong id="cce_10_0112__b16819134246">wget</strong> command to detect the container.<p id="cce_10_0112__p16488203413413"><strong id="cce_10_0112__b422541134110">wget http://127.0.0.1:80/health-check</strong></p>
<p id="cce_10_0112__p13488133464119">Check the return code of the response. If the return code is within 200399, the script returns <strong id="cce_10_0112__b14498132912217">0</strong>. Otherwise, the script returns <strong id="cce_10_0112__b427293111227">1</strong>. </p>
<p id="cce_10_0112__p13488133464119">Check the return code of the response. If the return code is within 200399, the script returns <strong id="cce_10_0112__b14498132912217">0</strong>. Otherwise, the script returns <strong id="cce_10_0112__b427293111227">1</strong>.</p>
<div class="notice" id="cce_10_0112__note124141947164311"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_10_0112__ul7414047164318"><li id="cce_10_0112__li81561727181416">Put the program to be executed in the container image so that the program can be executed. </li><li id="cce_10_0112__li204153475437">If the command to be executed is a shell script, do not directly specify the script as the command, but add a script parser. For example, if the script is <strong id="cce_10_0112__b9972128102411">/data/scripts/health_check.sh</strong>, you must specify <strong id="cce_10_0112__b11973988247">sh/data/scripts/health_check.sh</strong> for command execution.</li></ul>
</div></div>
</li></ul>
</li><li id="cce_10_0112__li198471623132818"><strong id="cce_10_0112__b51081513324">gRPC Check</strong><div class="p" id="cce_10_0112__p489181312320">gRPC checks can configure startup, liveness, and readiness probes for your gRPC application without exposing any HTTP endpoint, nor do you need an executable. Kubernetes can connect to your workload via gRPC and obtain its status.<div class="notice" id="cce_10_0112__note621111643611"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_10_0112__ul7170123014392"><li id="cce_10_0112__li6171630113911">The gRPC check is supported only in CCE clusters of v1.25 or later.</li><li id="cce_10_0112__li0171193083917">To use gRPC for check, your application must support the <a href="https://github.com/grpc/grpc/blob/master/doc/health-checking.md" target="_blank" rel="noopener noreferrer">gRPC health checking protocol</a>.</li><li id="cce_10_0112__li8171163015392">Similar to HTTP and TCP probes, if the port is incorrect or the application does not support the health checking protocol, the check fails.</li></ul>
</li><li id="cce_10_0112__li198471623132818"><strong id="cce_10_0112__b51081513324">gRPC check</strong><div class="p" id="cce_10_0112__p489181312320">gRPC checks can configure startup, liveness, and readiness probes for your gRPC application without exposing any HTTP endpoint, nor do you need an executable. Kubernetes can connect to your workload via gRPC and obtain its status.<div class="notice" id="cce_10_0112__note621111643611"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_10_0112__ul7170123014392"><li id="cce_10_0112__li6171630113911">The gRPC check is supported only in CCE clusters of v1.25 or later.</li><li id="cce_10_0112__li0171193083917">To use gRPC for check, your application must support the <a href="https://github.com/grpc/grpc/blob/master/doc/health-checking.md" target="_blank" rel="noopener noreferrer">gRPC health checking protocol</a>.</li><li id="cce_10_0112__li8171163015392">Similar to HTTP and TCP probes, if the port is incorrect or the application does not support the health checking protocol, the check fails.</li></ul>
</div></div>
</div>
</li></ul>
</div>
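<p>For reference, the following is a minimal sketch of how a CLI (exec) check and a gRPC check might be declared in a pod. The container name, image, script path, and port are placeholders to be adapted to your workload, and the gRPC probe requires a cluster of v1.25 or later.</p>
<pre class="screen">apiVersion: v1
kind: Pod
metadata:
  name: probe-demo                        # Example name
spec:
  containers:
  - name: container-1                     # Example container name
    image: nginx:latest                   # Example image; replace it with your own image
    livenessProbe:                        # CLI check: the command must exit with code 0 on success
      exec:
        command:
        - sh
        - /data/scripts/health_check.sh   # Example script packaged in the image
      periodSeconds: 5
    readinessProbe:                       # gRPC check: the application must implement the gRPC health checking protocol
      grpc:
        port: 50051                       # Example port; replace it with the port your application listens on
      periodSeconds: 5</pre>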
<div class="section" id="cce_10_0112__section2050653544516"><h4 class="sectiontitle">Common Parameters</h4>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0112__t045a8ee10cb946eaa4c01da4319b7206" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Common parameter description</caption><thead align="left"><tr id="cce_10_0112__re3891f83a0b242b1bf3f178042398166"><th align="left" class="cellrowborder" valign="top" width="19%" id="mcps1.3.3.2.2.3.1.1"><p id="cce_10_0112__afec93a787dcb46788032cfc70a14a22e">Parameter</p>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0112__t045a8ee10cb946eaa4c01da4319b7206" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Common parameters</caption><thead align="left"><tr id="cce_10_0112__re3891f83a0b242b1bf3f178042398166"><th align="left" class="cellrowborder" valign="top" width="19%" id="mcps1.3.3.2.2.3.1.1"><p id="cce_10_0112__afec93a787dcb46788032cfc70a14a22e">Parameter</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="81%" id="mcps1.3.3.2.2.3.1.2"><p id="cce_10_0112__en-us_topic_0052519475_p74835383351">Description</p>
</th>
@ -94,7 +94,7 @@ spec:
periodSeconds: 5
startupProbe: # Startup probe
httpGet: # Checking an HTTP request is used as an example.
path: /healthz # The HTTP check path is <strong id="cce_10_0112__b975836857">/healthz</strong>.
port: 80 # The check port number is <strong id="cce_10_0112__b561594217264">80</strong>.
failureThreshold: 30
periodSeconds: 10</pre>
@ -102,7 +102,7 @@ spec:
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0130.html">Configuring a Container</a></div>
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0130.html">Configuring a Workload</a></div>
</div>
</div>


@ -9,7 +9,7 @@
<p id="cce_10_0113__p78261119155911">Environment variables can be set in the following modes:</p>
<ul id="cce_10_0113__ul1669104610598"><li id="cce_10_0113__li266913468594"><strong id="cce_10_0113__b4564141914250">Custom</strong>: Enter the environment variable name and parameter value.</li><li id="cce_10_0113__li13148164912599"><strong id="cce_10_0113__b31161818143614">Added from ConfigMap key</strong>: Import all keys in a ConfigMap as environment variables.</li><li id="cce_10_0113__li1855315291026"><strong id="cce_10_0113__b5398577535">Added from ConfigMap</strong>: Import a key in a ConfigMap as the value of an environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if you import <strong id="cce_10_0113__b67861335193619">configmap_value</strong> of <strong id="cce_10_0113__b478643513618">configmap_key</strong> in <strong id="cce_10_0113__b14610123945714">configmap-example</strong> as the value of environment variable <strong id="cce_10_0113__b7786133573616">key1</strong>, an environment variable named <strong id="cce_10_0113__b678683518364">key1</strong> whose value is <strong id="cce_10_0113__b1378615359362">configmap_value</strong> is available in the container.</li><li id="cce_10_0113__li1727795616592"><strong id="cce_10_0113__b675162614437">Added from secret</strong>: Import all keys in a secret as environment variables.</li><li id="cce_10_0113__li93353201773"><strong id="cce_10_0113__b0483141614480">Added from secret key</strong>: Import the value of a key in a secret as the value of an environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if you import <strong id="cce_10_0113__b12974122713812">secret_value</strong> of <strong id="cce_10_0113__b197472716385">secret_key</strong> in <strong id="cce_10_0113__b722441953910">secret-example</strong> as the value of environment variable <strong id="cce_10_0113__b8975627173810">key2</strong>, an environment variable named <strong id="cce_10_0113__b29756275384">key2</strong> whose value is <strong id="cce_10_0113__b097552703811">secret_value</strong> is available in the container.</li><li id="cce_10_0113__li1749760535"><strong id="cce_10_0113__b19931701407">Variable value/reference</strong>: Use the field defined by a pod as the value of the environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if the pod name is imported as the value of environment variable <strong id="cce_10_0113__b1939710417283">key3</strong>, an environment variable named <strong id="cce_10_0113__b11252186142914">key3</strong> whose value is the pod name is available in the container.</li><li id="cce_10_0113__li16129071317"><strong id="cce_10_0113__b1625513417292">Resource Reference</strong>: The value of <strong id="cce_10_0113__b176281198307">Request</strong> or <strong id="cce_10_0113__b186221022193017">Limit</strong> defined by the container is used as the value of the environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if you import the CPU limit of container-1 as the value of environment variable <strong id="cce_10_0113__b272674753017">key4</strong>, an environment variable named <strong id="cce_10_0113__b99015318423">key4</strong> whose value is the CPU limit of container-1 is available in the container.</li></ul>
</div>
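<p>The modes listed above map to the following fields in a pod spec. This is a condensed sketch that reuses the example names from this topic (<strong>configmap-example</strong>, <strong>secret-example</strong>, and <strong>key1</strong> to <strong>key4</strong>); adapt the names and values to your workload.</p>
<pre class="screen">apiVersion: v1
kind: Pod
metadata:
  name: env-example                        # Example name
spec:
  containers:
  - name: container-1
    image: nginx:latest                    # Example image
    env:
    - name: key1                           # Imports configmap_key of configmap-example
      valueFrom:
        configMapKeyRef:
          name: configmap-example
          key: configmap_key
    - name: key2                           # Imports secret_key of secret-example
      valueFrom:
        secretKeyRef:
          name: secret-example
          key: secret_key
    - name: key3                           # References the pod name
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: key4                           # References the CPU limit of container-1
      valueFrom:
        resourceFieldRef:
          containerName: container-1
          resource: limits.cpu
    envFrom:                               # Imports all keys of the ConfigMap as environment variables
    - configMapRef:
        name: configmap-example</pre>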
<div class="section" id="cce_10_0113__section13829152011595"><h4 class="sectiontitle">Adding Environment Variables</h4><ol id="cce_10_0113__ol4904646935"><li id="cce_10_0113__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0113__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0113__b1794501219430">Workloads</strong> in the navigation pane, and click the <strong id="cce_10_0113__b11945131216432">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0113__li190412461831"><span>When creating a workload, modify the container information in <strong id="cce_10_0113__b101361766447">Container Settings</strong> and click the <strong id="cce_10_0113__b8169124424315">Environment Variables</strong> tab.</span></li><li id="cce_10_0113__li468251942720"><span>Configure environment variables.</span><p><div class="fignone" id="cce_10_0113__fig164568529317"><a name="cce_10_0113__fig164568529317"></a><a name="fig164568529317"></a><span class="figcap"><b>Figure 1 </b>Configuring environment variables</span><br><span><img id="cce_10_0113__image131385146481" src="en-us_image_0000001867802022.png"></span></div>
<div class="section" id="cce_10_0113__section13829152011595"><h4 class="sectiontitle">Adding Environment Variables</h4><ol id="cce_10_0113__ol4904646935"><li id="cce_10_0113__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0113__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0113__b1794501219430">Workloads</strong> in the navigation pane, and click the <strong id="cce_10_0113__b11945131216432">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0113__li190412461831"><span>When creating a workload, modify the container information in <strong id="cce_10_0113__b101361766447">Container Settings</strong> and click the <strong id="cce_10_0113__b8169124424315">Environment Variables</strong> tab.</span></li><li id="cce_10_0113__li468251942720"><span>Configure environment variables.</span><p><div class="fignone" id="cce_10_0113__fig164568529317"><a name="cce_10_0113__fig164568529317"></a><a name="fig164568529317"></a><span class="figcap"><b>Figure 1 </b>Configuring environment variables</span><br><span><img id="cce_10_0113__image131385146481" src="en-us_image_0000001950317180.png"></span></div>
</p></li></ol>
</div>
<div class="section" id="cce_10_0113__section19591158201313"><h4 class="sectiontitle">YAML Example</h4><pre class="screen" id="cce_10_0113__screen1034117614147">apiVersion: apps/v1
@ -102,7 +102,7 @@ secret_key=secret_value # Added from key. The key value in the ori
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0130.html">Configuring a Container</a></div>
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0130.html">Configuring a Workload</a></div>
</div>
</div>


@ -11,7 +11,9 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0626.html">Configuring SFS Turbo Mount Options</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_bestpractice_00253_0.html">Using StorageClass to Dynamically Create a Subdirectory in an SFS Turbo File System</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0839.html">(Recommended) Creating an SFS Turbo Subdirectory Using a Dynamic PV</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_bestpractice_00253.html">Dynamically Creating an SFS Turbo Subdirectory Using StorageClass</a></strong><br>
</li>
</ul>


@ -4,7 +4,7 @@
<div id="body1541037494110"><div class="section" id="cce_10_0127__section25311744154917"><h4 class="sectiontitle">Introduction</h4><p id="cce_10_0127__p1574910495496">CCE Container Storage (FlexVolume), also called storage-driver, functions as a standard Kubernetes FlexVolume plugin to allow containers to use EVS, SFS, OBS, and SFS Turbo storage resources. By installing and upgrading storage-driver, you can quickly install and update cloud storage capabilities.</p>
<p id="cce_10_0127__p5414123111414"><strong id="cce_10_0127__b2471165511315">FlexVolume is a system resource add-on. It is installed by default when a cluster of Kubernetes v1.13 or earlier is created.</strong></p>
</div>
<div class="section" id="cce_10_0127__section3993231122718"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0127__ul113072911510"><li id="cce_10_0127__li10330171552010">For clusters created in CCE, Kubernetes v1.15.11 is a transitional version in which the FlexVolume add-on is compatible with the CSI add-on (<a href="cce_10_0066.html">Everest</a>). Clusters of v1.17 and later versions do not support FlexVolume anymore. Use the Everest add-on.</li><li id="cce_10_0127__li25582642815">The FlexVolume add-on will be maintained by Kubernetes developers, but new functionality will only be added to <a href="cce_10_0066.html">Everest</a>. Do not create CCE storage that uses the FlexVolume add-on (storage-driver) anymore. Otherwise, storage may malfunction.</li><li id="cce_10_0127__li51302291158">This add-on can be installed only in <strong id="cce_10_0127__b363773505113">clusters of v1.13 or earlier</strong>. By default, the <a href="cce_10_0066.html">Everest</a> add-on is installed when clusters of v1.15 or later are created.<div class="note" id="cce_10_0127__note1531776113"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0127__p8532474112"><strong id="cce_10_0127__b411425325110">In a cluster of v1.13 or earlier</strong>, when an upgrade or bug fix is available for storage functionalities, you only need to install or upgrade the storage-driver add-on. Upgrading the cluster or creating a cluster is not required.</p>
<div class="section" id="cce_10_0127__section3993231122718"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_10_0127__ul113072911510"><li id="cce_10_0127__li10330171552010">For clusters created in CCE, Kubernetes v1.15.11 is a transitional version in which the FlexVolume add-on is compatible with the CSI add-on (<a href="cce_10_0066.html">Everest</a>). Clusters of v1.17 and later versions do not support FlexVolume anymore. Use the Everest add-on.</li><li id="cce_10_0127__li25582642815">The FlexVolume add-on will be maintained by Kubernetes developers, but new functionality will only be added to <a href="cce_10_0066.html">Everest</a>. Do not create CCE storage that uses the FlexVolume add-on (storage-driver) anymore. Otherwise, storage may malfunction.</li><li id="cce_10_0127__li51302291158">This add-on can be installed only in <strong id="cce_10_0127__b363773505113">clusters of v1.13 or earlier</strong>. By default, the <a href="cce_10_0066.html">Everest</a> add-on is installed when clusters of v1.15 or later are created.<div class="note" id="cce_10_0127__note1531776113"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0127__p8532474112"><strong id="cce_10_0127__b411425325110">In a cluster of v1.13 or earlier</strong>, when an upgrade or bug fix is available for storage functionalities, you only need to install or upgrade the storage-driver add-on. Upgrading the cluster or creating a cluster is not required.</p>
</div></div>
</li></ul>
</div>
@ -15,7 +15,7 @@
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0064.html">Add-ons</a></div>
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0911.html">Container Storage Add-ons</a></div>
</div>
</div>

File diff suppressed because it is too large.


@ -1,9 +1,11 @@
<a name="cce_10_0130"></a><a name="cce_10_0130"></a>
<h1 class="topictitle1">Configuring a Container</h1>
<h1 class="topictitle1">Configuring a Workload</h1>
<div id="body1542098364711"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0463.html">Secure Runtime and Common Runtime</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0354.html">Configuring Time Zone Synchronization</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0353.html">Configuring an Image Pull Policy</a></strong><br>
@ -18,13 +20,13 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0113.html">Configuring Environment Variables</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0397.html">Workload Upgrade Policies</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0397.html">Configuring Workload Upgrade Policies</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0232.html">Scheduling Policies (Affinity/Anti-affinity)</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0728.html">Taints and Tolerations</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0728.html">Configuring Tolerance Policies</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0386.html">Labels and Annotations</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0386.html">Configuring Labels and Annotations</a></strong><br>
</li>
</ul>

File diff suppressed because it is too large.


@ -6,10 +6,12 @@
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0175.html">Connecting to a Cluster Using an X.509 Certificate</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0175.html">Accessing a Cluster Using an X.509 Certificate</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0367.html">Accessing a Cluster Using a Custom Domain Name</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0864.html">Configuring a Cluster's API Server for Internet Access</a></strong><br>
</li>
</ul>
<div class="familylinks">

File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.