CCE UMN update -20230818 version

Reviewed-by: Eotvos, Oliver <oliver.eotvos@t-systems.com>
Co-authored-by: Dong, Qiu Jian <qiujiandong1@huawei.com>
Co-committed-by: Dong, Qiu Jian <qiujiandong1@huawei.com>
Dong, Qiu Jian 2023-12-08 10:20:34 +00:00 committed by zuul
parent 1467c5bfc7
commit e11d42fad0
803 changed files with 29293 additions and 15604 deletions

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -10,7 +10,7 @@
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="en-us_topic_0000001550437509.html">Service Overview</a></div>
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_productdesc_0000.html">Service Overview</a></div>
</div>
</div>


@@ -1,20 +0,0 @@
<a name="cce_01_0203"></a><a name="cce_01_0203"></a>
<h1 class="topictitle1">How Do I Troubleshoot Insufficient EIPs When a Node Is Added?</h1>
<div id="body0000001197234817"><div class="section" id="cce_01_0203__en-us_topic_0089615102_s29366dd7fe2e4257bd9481f435155270"><h4 class="sectiontitle">Symptom</h4><p id="cce_01_0203__en-us_topic_0089615102_a0d47456e1aad4d29bb57c7f8a20a9537">When a node is added, <strong id="cce_01_0203__b11155163491511">EIP</strong> is set to <strong id="cce_01_0203__b1515683414156">Automatically assign</strong>. The node cannot be created, and a message indicating that EIPs are insufficient is displayed.</p>
<div class="fignone" id="cce_01_0203__fig716922665112"><span class="figcap"><b>Figure 1 </b>Purchasing an EIP</span><br><span><img id="cce_01_0203__image241314595120" src="en-us_image_0000001223393901.png"></span></div>
</div>
<div class="section" id="cce_01_0203__section2011614514539"><h4 class="sectiontitle">Solution</h4><p id="cce_01_0203__en-us_topic_0089615102_p81731324722">You can use either of the following methods to solve this problem.</p>
<ul id="cce_01_0203__en-us_topic_0089615102_ul952893916210"><li id="cce_01_0203__en-us_topic_0089615102_li155281239622">Method 1: Unbind an EIP from an ECS that does not need it and add the node again.<ol id="cce_01_0203__en-us_topic_0089615102_ol174337174410"><li id="cce_01_0203__en-us_topic_0089615102_li4432141719411">Log in to the management console.</li><li id="cce_01_0203__en-us_topic_0089615102_li5432017741">Choose <strong id="cce_01_0203__b1366971685212">Service List &gt; Computing</strong> &gt; <strong id="cce_01_0203__b2670216115219">Elastic Cloud Server</strong>.</li><li id="cce_01_0203__li6246201521515">In the ECS list, locate the target ECS and click its name.</li><li id="cce_01_0203__en-us_topic_0089615102_li1943391718417">On the ECS details page, click the <strong id="cce_01_0203__b86011942125313">EIPs</strong> tab. In the EIP list, click <strong id="cce_01_0203__b128311118115517">Unbind</strong> in the row of the target EIP and click <strong id="cce_01_0203__b16461020115516">Yes</strong>.<div class="fignone" id="cce_01_0203__fig1725274315571"><span class="figcap"><b>Figure 2 </b>Unbinding an EIP</span><br><span><img id="cce_01_0203__image10687165213578" src="en-us_image_0000001223152423.png"></span></div>
</li><li id="cce_01_0203__li19724172015380">Return to the <strong id="cce_01_0203__b178805214572">Create Node</strong> page on the CCE console and click <strong id="cce_01_0203__b096111012576">Use existing</strong> to add an EIP.<div class="fignone" id="cce_01_0203__fig38458333918"><span class="figcap"><b>Figure 3 </b>Using an unbound EIP</span><br><span><img id="cce_01_0203__image75231056135817" src="en-us_image_0000001223272345.png"></span></div>
</li></ol>
</li><li id="cce_01_0203__li12711947611">Method 2: Increase the EIP quota.<p id="cce_01_0203__p122717472013"><a name="cce_01_0203__li12711947611"></a><a name="li12711947611"></a>Quotas are used to limit the number of resources available to users. If the existing resource quota cannot meet your service requirements, you can increase your quota.</p>
</li></ul>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_faq_0083.html">Reference</a></div>
</div>
</div>


@@ -1,80 +0,0 @@
<a name="cce_01_0204"></a><a name="cce_01_0204"></a>
<h1 class="topictitle1">How Do I Format a Data Disk Using Command Line Injection?</h1>
<div id="body0000001151475056"><p id="cce_01_0204__p48124113534">Before using command line injection, write a script that can format data disks and save it to your OBS bucket. Then, inject a command line that will automatically execute the disk formatting script when the node is up. Use input parameters to specify the size of each Docker data disk (for example, the default Docker disk of 100 GB and the additional disk of 110 GB) and the mount path (<strong id="cce_01_0204__b1382263611457">/data/code</strong>) of the additional disk. In this example, the script is named <strong id="cce_01_0204__b208244369452">formatdisk.sh</strong>.</p>
<p id="cce_01_0204__p1526762411205">Example command line:</p>
<pre class="screen" id="cce_01_0204__screen758171514537">cd /tmp;curl -k -X GET <strong id="cce_01_0204__b914634474513"><em id="cce_01_0204__i15146114434512">OBS bucket address</em></strong>/formatdisk.sh -1 -O;fdisk -l;sleep 30;bash -x formatdisk.sh <strong id="cce_01_0204__b614784494518">100</strong> <strong id="cce_01_0204__b14781326123215">/data/code</strong>;fdisk -l</pre>
<p id="cce_01_0204__p44543285525">Example script (<strong id="cce_01_0204__b619154612459">formatdisk.sh</strong>):</p>
<pre class="screen" id="cce_01_0204__screen669716413522">dockerdisksize=$1
mountdir=$2
systemdisksize=40
i=0
while [ 20 -gt $i ]; do
echo $i;
if [ $(lsblk -o KNAME,TYPE | grep disk | grep -v nvme | awk '{print $1}' | awk '{ print "/dev/"$1}' |wc -l) -ge 3 ]; then
break
else
sleep 5
fi;
i=$[i+1]
done
all_devices=$(lsblk -o KNAME,TYPE | grep disk | grep -v nvme | awk '{print $1}' | awk '{ print "/dev/"$1}')
for device in ${all_devices[@]}; do
isRawDisk=$(sudo lsblk -n $device 2&gt;/dev/null | grep disk | wc -l)
if [[ ${isRawDisk} -gt 0 ]]; then
# is it partitioned ?
match=$(sudo lsblk -n $device 2&gt;/dev/null | grep -v disk | wc -l)
    if [[ ${match} -gt 0 ]]; then
        # already partitioned
        [[ -n "${DOCKER_BLOCK_DEVICES}" ]] &amp;&amp; echo "Raw disk ${device} has been partitioned, will skip this device"
continue
fi
else
isPart=$(sudo lsblk -n $device 2&gt;/dev/null | grep part | wc -l)
if [[ ${isPart} -ne 1 ]]; then
        # not partitioned
        [[ -n "${DOCKER_BLOCK_DEVICES}" ]] &amp;&amp; echo "Disk ${device} has not been partitioned, will skip this device"
continue
fi
# is used ?
match=$(sudo lsblk -n $device 2&gt;/dev/null | grep -v part | wc -l)
    if [[ ${match} -gt 0 ]]; then
# already used
[[ -n "${DOCKER_BLOCK_DEVICES}" ]] &amp;&amp; echo "Disk ${device} has been used, will skip this device"
continue
fi
isMount=$(sudo lsblk -n -o MOUNTPOINT $device 2&gt;/dev/null)
if [[ -n ${isMount} ]]; then
# already used
[[ -n "${DOCKER_BLOCK_DEVICES}" ]] &amp;&amp; echo "Disk ${device} has been used, will skip this device"
continue
fi
isLvm=$(sudo sfdisk -lqL 2&gt;&gt;/dev/null | grep $device | grep "8e.*Linux LVM")
if [[ ! -n ${isLvm} ]]; then
        # partition system type is not Linux LVM
[[ -n "${DOCKER_BLOCK_DEVICES}" ]] &amp;&amp; echo "Disk ${device} system type is not Linux LVM, will skip this device"
continue
fi
fi
block_devices_size=$(sudo lsblk -n -o SIZE $device 2&gt;/dev/null | awk '{ print $1}')
if [[ ${block_devices_size}"x" != "${dockerdisksize}Gx" ]] &amp;&amp; [[ ${block_devices_size}"x" != "${systemdisksize}Gx" ]]; then
echo "n
p
1
w
" | fdisk $device
mkfs -t ext4 ${device}1
mkdir -p $mountdir
echo "${device}1 $mountdir ext4 noatime 0 0" | sudo tee -a /etc/fstab &gt;/dev/null
mount $mountdir
fi
done</pre>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_faq_0083.html">Reference</a></div>
</div>
</div>
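The last step of the script above (partition with <strong>fdisk</strong>, format as ext4, append an <strong>/etc/fstab</strong> entry, and mount) can be hard to follow inside the loop. The sketch below isolates just the construction of the fstab line; the device name and mount path are illustrative values, not state read from a real node.

```shell
#!/bin/sh
# Sketch: build the /etc/fstab line that formatdisk.sh appends after
# formatting partition 1 of a data disk. /dev/vdb and /data/code are
# illustrative, not taken from a real node.
make_fstab_entry() {
  device="$1"    # raw disk, e.g. /dev/vdb; the script formats ${device}1
  mountdir="$2"  # desired mount path, e.g. /data/code
  # Fields: device, mount point, filesystem, options, dump, fsck pass
  printf '%s1 %s ext4 noatime 0 0\n' "$device" "$mountdir"
}

make_fstab_entry /dev/vdb /data/code
```

After this line is appended, `mount $mountdir` resolves the device from fstab, which is why the script mounts by directory rather than by device.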


@@ -8,7 +8,17 @@
</th>
</tr>
</thead>
<tbody><tr id="cce_01_0300__row450749103813"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p195076943820">2023-05-30</p>
<tbody><tr id="cce_01_0300__row115169185311"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p1015129185318">2023-11-06</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul09563134221"><li id="cce_01_0300__li6956131302215">Deleted section "Storage Management: Flexvolume (Deprecated)".</li><li id="cce_01_0300__li974117510255">Updated <a href="cce_10_0020.html">Networking</a>.</li><li id="cce_01_0300__li127038227309">Updated <a href="cce_10_0374.html">Storage</a>.</li><li id="cce_01_0300__li1685474216317">Deleted the description of CentOS 7.7.</li></ul>
</td>
</tr>
<tr id="cce_01_0300__row129431260578"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p149444264572">2023-08-15</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul360263017215"><li id="cce_01_0300__li176025306217">Added <a href="cce_faq_0000.html">FAQs</a>.</li><li id="cce_01_0300__li195968195713">Added <a href="cce_10_0421.html">Differences Between Helm v2 and Helm v3 and Adaptation Solutions</a>.</li><li id="cce_01_0300__li1828865011914">Added <a href="cce_10_0420.html">Deploying an Application Through the Helm v2 Client</a>.</li><li id="cce_01_0300__li75131451291">Added <a href="cce_10_0144.html">Deploying an Application Through the Helm v3 Client</a>.</li><li id="cce_01_0300__li1723812521193">Added <a href="cce_10_0422.html">Converting a Release from Helm v2 to v3</a>.</li><li id="cce_01_0300__li18579532823">Deleted section "Reference".</li></ul>
</td>
</tr>
<tr id="cce_01_0300__row450749103813"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p195076943820">2023-05-30</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><ul id="cce_01_0300__ul1843612311567"><li id="cce_01_0300__li14362312065">Added <a href="cce_10_0652.html">Configuring a Node Pool</a>.</li><li id="cce_01_0300__li48641237869">Added <a href="cce_10_0684.html">Configuring Health Check for Multiple Ports</a>.</li><li id="cce_01_0300__li152057919719">Updated <a href="cce_10_0363.html">Creating a Node</a>.</li><li id="cce_01_0300__li53955101178">Updated <a href="cce_10_0012.html">Creating a Node Pool</a>.</li><li id="cce_01_0300__li16648154715219">Updated <a href="cce_bulletin_0301.html">OS Patch Notes for Cluster Nodes</a>.</li><li id="cce_01_0300__li7404516102217">Updated <a href="cce_productdesc_0005.html">Notes and Constraints</a>.</li></ul>
</td>
@@ -25,7 +35,7 @@
</tr>
<tr id="cce_01_0300__row4557205544117"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p555719556417">2022-11-21</p>
</td>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><p id="cce_01_0300__p185571655114118">Added <a href="cce_bestpractice.html">Best Practice</a>.</p>
<td class="cellrowborder" valign="top" width="80.99%" headers="mcps1.3.1.2.3.1.2 "><p id="cce_01_0300__p185571655114118">Added <a href="cce_bestpractice_0000.html">Best Practice</a>.</p>
</td>
</tr>
<tr id="cce_01_0300__row1722210871314"><td class="cellrowborder" valign="top" width="19.009999999999998%" headers="mcps1.3.1.2.3.1.1 "><p id="cce_01_0300__p32228891316">2022-08-27</p>

File diff suppressed because it is too large


@@ -6,13 +6,9 @@
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0430.html">Basic Cluster Information</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0342.html">CCE Turbo Clusters and CCE Clusters</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0068.html">Kubernetes Release Notes</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0349.html">Comparing iptables and IPVS</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0068.html">Release Notes</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0405.html">Cluster Patch Version Release Notes</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0405.html">Release Notes for CCE Cluster Versions</a></strong><br>
</li>
</ul>

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -4,29 +4,29 @@
<div id="body1522665832344"><p id="cce_10_0006__p11116113204610">CCE provides Kubernetes-native container deployment and management and supports lifecycle management of container workloads, including creation, configuration, monitoring, auto scaling, upgrade, uninstall, service discovery, and load balancing.</p>
<div class="section" id="cce_10_0006__section9645114684816"><h4 class="sectiontitle">Pod</h4><p id="cce_10_0006__en-us_topic_0254767870_p356108173515">A pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod encapsulates one or more containers, storage volumes, a unique network IP address, and options that govern how the containers should run.</p>
<p id="cce_10_0006__en-us_topic_0254767870_p4629172611480">Pods can be used in either of the following ways:</p>
<ul id="cce_10_0006__en-us_topic_0254767870_ul062982617481"><li id="cce_10_0006__en-us_topic_0254767870_li1629172611482">A container is running in a pod. This is the most common usage of pods in Kubernetes. You can view the pod as a single encapsulated container, but Kubernetes directly manages pods instead of containers.</li><li id="cce_10_0006__en-us_topic_0254767870_li1962932615480">Multiple containers that need to be coupled and share resources run in a pod. In this scenario, an application contains a main container and several sidecar containers, as shown in <a href="#cce_10_0006__en-us_topic_0254767870_fig347141918551">Figure 1</a>. For example, the main container is a web server that provides file services from a fixed directory, and a sidecar container periodically downloads files to the directory.<div class="fignone" id="cce_10_0006__en-us_topic_0254767870_fig347141918551"><a name="cce_10_0006__en-us_topic_0254767870_fig347141918551"></a><a name="en-us_topic_0254767870_fig347141918551"></a><span class="figcap"><b>Figure 1 </b>Pod</span><br><span><img id="cce_10_0006__en-us_topic_0254767870_image1835215316361" src="en-us_image_0000001518222716.png"></span></div>
<ul id="cce_10_0006__en-us_topic_0254767870_ul062982617481"><li id="cce_10_0006__en-us_topic_0254767870_li1629172611482">A container is running in a pod. This is the most common usage of pods in Kubernetes. You can view the pod as a single encapsulated container, but Kubernetes directly manages pods instead of containers.</li><li id="cce_10_0006__en-us_topic_0254767870_li1962932615480">Multiple containers that need to be coupled and share resources run in a pod. In this scenario, an application contains a main container and several sidecar containers, as shown in <a href="#cce_10_0006__en-us_topic_0254767870_fig347141918551">Figure 1</a>. For example, the main container is a web server that provides file services from a fixed directory, and a sidecar container periodically downloads files to the directory.<div class="fignone" id="cce_10_0006__en-us_topic_0254767870_fig347141918551"><a name="cce_10_0006__en-us_topic_0254767870_fig347141918551"></a><a name="en-us_topic_0254767870_fig347141918551"></a><span class="figcap"><b>Figure 1 </b>Pod</span><br><span><img id="cce_10_0006__en-us_topic_0254767870_image1835215316361" src="en-us_image_0000001695896725.png"></span></div>
</li></ul>
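The sidecar pattern in Figure 1 can be sketched as a minimal pod manifest. All names here (<strong>nginx-with-sync</strong>, <strong>web</strong>, <strong>content-sync</strong>) and images are illustrative assumptions, not taken from the source; the point is that both containers sit in one pod and share a volume.

```shell
#!/bin/sh
# Sketch: a two-container pod matching Figure 1 (main web server plus a
# sidecar that refreshes the served directory). Names and images are
# illustrative; this only generates the manifest, it does not apply it.
cat <<'EOF' > /tmp/pod-sketch.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-sync
spec:
  containers:
  - name: web             # main container: serves files from a fixed directory
    image: nginx:alpine
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
  - name: content-sync    # sidecar: periodically rewrites the shared directory
    image: busybox
    command: ["sh", "-c", "while true; do date > /content/index.html; sleep 60; done"]
    volumeMounts:
    - name: content
      mountPath: /content
  volumes:
  - name: content         # emptyDir shared by both containers in the pod
    emptyDir: {}
EOF
cat /tmp/pod-sketch.yaml
```

A manifest like this would be submitted with <strong>kubectl apply -f</strong>; Kubernetes then schedules both containers together on one node, which is what makes the shared directory possible.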
<p id="cce_10_0006__en-us_topic_0254767870_p9163143619182">In Kubernetes, pods are rarely created directly. Instead, controllers such as Deployments and jobs, are used to manage pods. Controllers can create and manage multiple pods, and provide replica management, rolling upgrade, and self-healing capabilities. A controller generally uses a pod template to create corresponding pods.</p>
<p id="cce_10_0006__en-us_topic_0254767870_p9163143619182">In Kubernetes, pods are rarely created directly. Instead, controllers such as Deployments and Jobs, are used to manage pods. Controllers can create and manage multiple pods, and provide replica management, rolling upgrade, and self-healing capabilities. A controller typically uses a pod template to create corresponding pods.</p>
</div>
<div class="section" id="cce_10_0006__section1972719357496"><h4 class="sectiontitle">Deployment</h4><p id="cce_10_0006__en-us_topic_0249851113_p13243347131615">A pod is the smallest and simplest unit that you create or deploy in Kubernetes. It is designed to be an ephemeral, one-off entity. A pod can be evicted when node resources are insufficient and disappears along with a cluster node failure. Kubernetes provides controllers to manage pods. Controllers can create and manage pods, and provide replica management, rolling upgrade, and self-healing capabilities. The most commonly used controller is Deployment.</p>
<div class="fignone" id="cce_10_0006__en-us_topic_0249851113_fig12546173933714"><span class="figcap"><b>Figure 2 </b>Relationship between a Deployment and pods</span><br><span><img id="cce_10_0006__en-us_topic_0249851113_image5671529113711" src="en-us_image_0000001569023033.png"></span></div>
<div class="fignone" id="cce_10_0006__en-us_topic_0249851113_fig12546173933714"><span class="figcap"><b>Figure 2 </b>Deployment</span><br><span><img id="cce_10_0006__en-us_topic_0249851113_image5671529113711" src="en-us_image_0000001695896721.png"></span></div>
<p id="cce_10_0006__en-us_topic_0249851113_p35371248184511">A Deployment can contain one or more pods. These pods have the same role. Therefore, the system automatically distributes requests to multiple pods of a Deployment.</p>
<p id="cce_10_0006__en-us_topic_0249851113_p11715188281">A Deployment provides a range of functions, including online deployment, rolling upgrade, replica creation, and restoration of online jobs. To some extent, Deployments enable unattended rollout, which greatly reduces the difficulty and operational risks of the rollout process.</p>
</div>
<div class="section" id="cce_10_0006__section14888155424918"><h4 class="sectiontitle">StatefulSet</h4><p id="cce_10_0006__en-us_topic_0249896621_p12502102192418">All pods under a Deployment have the same characteristics except for the name and IP address. If required, a Deployment can use the pod template to create a new pod. If not required, the Deployment can delete any one of the pods.</p>
<p id="cce_10_0006__en-us_topic_0249896621_p2669143675415">However, Deployments cannot meet the requirements in some distributed scenarios when each pod requires its own status or in a distributed database where each pod requires independent storage.</p>
<p id="cce_10_0006__en-us_topic_0249896621_p97277467269">With detailed analysis, it is found that each part of distributed stateful applications plays a different role. For example, the database nodes are deployed in active/standby mode, and pods are dependent on each other. In this case, you need to meet the following requirements for the pods:</p>
<p id="cce_10_0006__en-us_topic_0249896621_p97277467269">With detailed analysis, it is found that each part of distributed stateful applications plays a different role. For example, the database nodes are deployed in active/standby mode, and pods are dependent on each other. In this case, the pods need to meet the following requirements:</p>
<ul id="cce_10_0006__en-us_topic_0249896621_ul1181724132317"><li id="cce_10_0006__en-us_topic_0249896621_li10181102419231">A pod can be recognized by other pods. Therefore, a pod must have a fixed identifier.</li><li id="cce_10_0006__en-us_topic_0249896621_li81819249237">Each pod has an independent storage device. After a pod is deleted and then restored, the data read from the pod must be the same as the previous one. Otherwise, the pod status is inconsistent.</li></ul>
<p id="cce_10_0006__en-us_topic_0249896621_p929315724313">To address the preceding requirements, Kubernetes provides StatefulSets.</p>
<ol id="cce_10_0006__en-us_topic_0249896621_ol117020203559"><li id="cce_10_0006__en-us_topic_0249896621_li183871501692">A StatefulSet provides a fixed name for each pod following a fixed number ranging from 0 to N. After a pod is rescheduled, the pod name and the host name remain unchanged.</li><li id="cce_10_0006__en-us_topic_0249896621_li1789810518913">A StatefulSet provides a fixed access domain name for each pod through the headless Service (described in following sections).</li><li id="cce_10_0006__en-us_topic_0249896621_li43183204569">The StatefulSet creates PersistentVolumeClaims (PVCs) with fixed identifiers to ensure that pods can access the same persistent data after being rescheduled.<p id="cce_10_0006__en-us_topic_0249896621_p8536185392116"><a name="cce_10_0006__en-us_topic_0249896621_li43183204569"></a><a name="en-us_topic_0249896621_li43183204569"></a><span><img id="cce_10_0006__en-us_topic_0249896621_image9125145402111" src="en-us_image_0000001517743628.png"></span></p>
<ol id="cce_10_0006__en-us_topic_0249896621_ol117020203559"><li id="cce_10_0006__en-us_topic_0249896621_li183871501692">A StatefulSet provides a fixed name for each pod following a fixed number ranging from 0 to N. After a pod is rescheduled, the pod name and the host name remain unchanged.</li><li id="cce_10_0006__en-us_topic_0249896621_li1789810518913">A StatefulSet provides a fixed access domain name for each pod through the headless Service (described in the following sections).</li><li id="cce_10_0006__en-us_topic_0249896621_li43183204569">The StatefulSet creates PersistentVolumeClaims (PVCs) with fixed identifiers to ensure that pods can access the same persistent data after being rescheduled.<p id="cce_10_0006__en-us_topic_0249896621_p8536185392116"><a name="cce_10_0006__en-us_topic_0249896621_li43183204569"></a><a name="en-us_topic_0249896621_li43183204569"></a><span><img id="cce_10_0006__en-us_topic_0249896621_image9125145402111" src="en-us_image_0000001647417792.png"></span></p>
</li></ol>
</div>
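The fixed identifiers described above are deterministic, so they can be illustrated offline: a StatefulSet named <strong>web</strong> with N replicas always produces pods <strong>web-0</strong> through <strong>web-(N-1)</strong>, each reachable through the headless Service at a stable DNS name. The Service name <strong>web-svc</strong> and namespace <strong>default</strong> below are assumptions for the sketch.

```shell
#!/bin/sh
# Sketch: the stable pod DNS names a StatefulSet yields through its
# headless Service. "web", "web-svc", and "default" are illustrative.
pod_dns_names() {
  statefulset="$1"; service="$2"; namespace="$3"; replicas="$4"
  i=0
  while [ "$i" -lt "$replicas" ]; do
    # Pod name is <statefulset>-<ordinal>; the headless Service gives each
    # pod the name <pod>.<service>.<namespace>.svc.cluster.local
    echo "${statefulset}-${i}.${service}.${namespace}.svc.cluster.local"
    i=$((i + 1))
  done
}

pod_dns_names web web-svc default 3
```

Because these names survive rescheduling, peers such as active/standby database nodes can address each other without discovering ephemeral pod IPs.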
<div class="section" id="cce_10_0006__section7846281504"><h4 class="sectiontitle">DaemonSet</h4><p id="cce_10_0006__en-us_topic_0249851114_p441104813815">A DaemonSet runs a pod on each node in a cluster and ensures that there is exactly one such pod on each node. This works well for certain system-level applications, such as log collection and resource monitoring, since they must run on each node and need only a few pods. A good example is kube-proxy.</p>
<p id="cce_10_0006__en-us_topic_0249851114_p5986375820">DaemonSets are closely related to nodes. If a node becomes faulty, the DaemonSet will not create the same pods on other nodes.</p>
<div class="fignone" id="cce_10_0006__en-us_topic_0249851114_fig27588261914"><span class="figcap"><b>Figure 3 </b>DaemonSet</span><br><span><img id="cce_10_0006__en-us_topic_0249851114_image13336133243518" src="en-us_image_0000001518062772.png"></span></div>
<div class="fignone" id="cce_10_0006__en-us_topic_0249851114_fig27588261914"><span class="figcap"><b>Figure 3 </b>DaemonSet</span><br><span><img id="cce_10_0006__en-us_topic_0249851114_image13336133243518" src="en-us_image_0000001647577048.png"></span></div>
</div>
<div class="section" id="cce_10_0006__section153173319578"><h4 class="sectiontitle">Job and Cron Job</h4><p id="cce_10_0006__en-us_topic_0249851115_p10889736123218">Jobs and cron jobs allow you to run short-lived, one-off tasks in batches. They ensure the task pods run to completion.</p>
<ul id="cce_10_0006__en-us_topic_0249851115_ul197714911354"><li id="cce_10_0006__en-us_topic_0249851115_li47711097352">A job is a resource object used by Kubernetes to control batch tasks. Jobs are different from long-term servo tasks (such as Deployments and StatefulSets). The former is started and terminated at specific times, while the latter runs unceasingly unless being terminated. The pods managed by a job will be automatically removed after successfully completing tasks based on user configurations.</li><li id="cce_10_0006__en-us_topic_0249851115_li249061111353">A cron job runs a job periodically on a specified schedule. A cron job object is similar to a line of a crontab file in Linux.</li></ul>
<ul id="cce_10_0006__en-us_topic_0249851115_ul197714911354"><li id="cce_10_0006__en-us_topic_0249851115_li47711097352">A job is a resource object used by Kubernetes to control batch tasks. Jobs are different from long-term servo tasks (such as Deployments and StatefulSets). The former is started and terminated at specific times, while the latter runs unceasingly unless being terminated. The pods managed by a job will be automatically removed after completing tasks based on user configurations.</li><li id="cce_10_0006__en-us_topic_0249851115_li249061111353">A cron job runs a job periodically on a specified schedule. A cron job object is similar to a line of a crontab file in Linux.</li></ul>
<p id="cce_10_0006__en-us_topic_0249851115_p166171774387">This run-to-completion feature of jobs is especially suitable for one-off tasks, such as continuous integration (CI).</p>
</div>
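Since a cron job object mirrors a line of a Linux crontab file, its schedule string uses the same five fields. The sketch below just labels those fields; the schedule <strong>"0 2 * * 6"</strong> (02:00 every Saturday) is an illustrative example, not one from the source.

```shell
#!/bin/sh
# Sketch: reading the five crontab-style fields of a cron job schedule.
# The schedule string "0 2 * * 6" is an illustrative example.
describe_schedule() {
  set -f          # disable globbing so "*" fields stay literal
  set -- $1       # split the schedule string into its five fields
  echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
  set +f
}

describe_schedule "0 2 * * 6"
```

A cron job with this schedule would create a new job pod at 02:00 every Saturday; each job then runs to completion and its pods are cleaned up per the configured policy.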
<div class="section" id="cce_10_0006__section3891192610218"><h4 class="sectiontitle">Workload Lifecycle</h4>
@@ -38,17 +38,17 @@
</thead>
<tbody><tr id="cce_10_0006__row14889152173415"><td class="cellrowborder" valign="top" width="25%" headers="mcps1.4.7.2.2.3.1.1 "><p id="cce_10_0006__p1788905212343">Running</p>
</td>
<td class="cellrowborder" valign="top" width="75%" headers="mcps1.4.7.2.2.3.1.2 "><p id="cce_10_0006__p188914522345">All pods are running.</p>
<td class="cellrowborder" valign="top" width="75%" headers="mcps1.4.7.2.2.3.1.2 "><p id="cce_10_0006__p188914522345">All pods are running or the number of pods is 0.</p>
</td>
</tr>
<tr id="cce_10_0006__row12889195263417"><td class="cellrowborder" valign="top" width="25%" headers="mcps1.4.7.2.2.3.1.1 "><p id="cce_10_0006__p1888915253412">Unready</p>
</td>
<td class="cellrowborder" valign="top" width="75%" headers="mcps1.4.7.2.2.3.1.2 "><p id="cce_10_0006__p12889152113418">A container is abnormal, the number of pods is 0, or the workload is in pending state.</p>
<td class="cellrowborder" valign="top" width="75%" headers="mcps1.4.7.2.2.3.1.2 "><p id="cce_10_0006__p12889152113418">The container malfunctions and the pod under the workload is not working.</p>
</td>
</tr>
<tr id="cce_10_0006__row12889195213419"><td class="cellrowborder" valign="top" width="25%" headers="mcps1.4.7.2.2.3.1.1 "><p id="cce_10_0006__p6889135218347">Upgrading/Rolling back</p>
<tr id="cce_10_0006__row1940155313521"><td class="cellrowborder" valign="top" width="25%" headers="mcps1.4.7.2.2.3.1.1 "><p id="cce_10_0006__p9415165881719">Processing</p>
</td>
<td class="cellrowborder" valign="top" width="75%" headers="mcps1.4.7.2.2.3.1.2 "><p id="cce_10_0006__p18889052203414">The workload is being upgraded or rolled back.</p>
<td class="cellrowborder" valign="top" width="75%" headers="mcps1.4.7.2.2.3.1.2 "><p id="cce_10_0006__p1141511585170">The workload is not running but no error is reported.</p>
</td>
</tr>
<tr id="cce_10_0006__row6241718141019"><td class="cellrowborder" valign="top" width="25%" headers="mcps1.4.7.2.2.3.1.1 "><p id="cce_10_0006__p132017221115">Available</p>
@@ -71,11 +71,6 @@
<td class="cellrowborder" valign="top" width="75%" headers="mcps1.4.7.2.2.3.1.2 "><p id="cce_10_0006__p8805854104812">The workload is being deleted.</p>
</td>
</tr>
<tr id="cce_10_0006__row1280465420481"><td class="cellrowborder" valign="top" width="25%" headers="mcps1.4.7.2.2.3.1.1 "><p id="cce_10_0006__p19354132811102">Pausing</p>
</td>
<td class="cellrowborder" valign="top" width="75%" headers="mcps1.4.7.2.2.3.1.2 "><p id="cce_10_0006__p535418282104">The workload is being paused.</p>
</td>
</tr>
</tbody>
</table>
</div>


@@ -45,7 +45,7 @@
</tr>
<tr id="cce_10_0007__en-us_topic_0107283638_row133611447101912"><td class="cellrowborder" valign="top" width="24.610000000000003%" headers="mcps1.3.1.2.1.2.3.1.1 "><p id="cce_10_0007__en-us_topic_0107283638_p73613476199"><a href="#cce_10_0007__en-us_topic_0107283638_section5931193015488">Manage Label</a></p>
</td>
<td class="cellrowborder" valign="top" width="75.39%" headers="mcps1.3.1.2.1.2.3.1.2 "><p id="cce_10_0007__en-us_topic_0107283638_p136104716195">Labels are key-value pairs and can be attached to workloads for affinity and anti-affinity scheduling. Jobs and Cron Jobs do not support this operation.</p>
<td class="cellrowborder" valign="top" width="75.39%" headers="mcps1.3.1.2.1.2.3.1.2 "><p id="cce_10_0007__en-us_topic_0107283638_p136104716195">Labels are attached to workloads as key-value pairs to manage and select workloads. Jobs and Cron Jobs do not support this operation.</p>
</td>
</tr>
<tr id="cce_10_0007__en-us_topic_0107283638_row123611847141914"><td class="cellrowborder" valign="top" width="24.610000000000003%" headers="mcps1.3.1.2.1.2.3.1.1 "><p id="cce_10_0007__en-us_topic_0107283638_p5361154721910"><a href="#cce_10_0007__en-us_topic_0107283638_section14423721191418">Delete</a></p>
@@ -69,54 +69,53 @@
</div>
</div>
<div class="section" id="cce_10_0007__section7200124254011"><a name="cce_10_0007__section7200124254011"></a><a name="section7200124254011"></a><h4 class="sectiontitle">Monitoring a Workload</h4><p id="cce_10_0007__en-us_topic_0107283638_p785625243110">You can view the CPU and memory usage of Deployments and pods on the CCE console to determine the resource specifications you may need. This section uses a Deployment as an example to describe how to monitor a workload.</p>
<ol id="cce_10_0007__en-us_topic_0107283638_ol121998089396"><li id="cce_10_0007__en-us_topic_0107283638_li9879311402"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b16451125013714">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li2774856895942"><span>Click the <strong id="cce_10_0007__b7166115916267">Deployments</strong> tab and click <span class="uicontrol" id="cce_10_0007__uicontrol791710184219"><b>Monitor</b></span> of the target workload. On the page that is displayed, you can view CPU usage and memory usage of the workload.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li08455571501"><span>Click the workload name. On the <strong id="cce_10_0007__b1288064014552">Pods</strong> tab page, click the <span class="uicontrol" id="cce_10_0007__en-us_topic_0107283638_uicontrol594931162312"><b>Monitor</b></span> of the target pod to view its CPU and memory usage.</span></li></ol>
<ol id="cce_10_0007__en-us_topic_0107283638_ol121998089396"><li id="cce_10_0007__en-us_topic_0107283638_li9879311402"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b163134191278">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li2774856895942"><span>Click the <strong id="cce_10_0007__b1414121112713">Deployments</strong> tab and click <span class="uicontrol" id="cce_10_0007__uicontrol0414821162719"><b>Monitor</b></span> of the target workload. On the page that is displayed, you can view CPU usage and memory usage of the workload.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li08455571501"><span>Click the workload name. On the <strong id="cce_10_0007__b1170422772711">Pods</strong> tab page, click the <span class="uicontrol" id="cce_10_0007__uicontrol9704112716273"><b>Monitor</b></span> of the target pod to view its CPU and memory usage.</span></li></ol>
</div>
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section51511928173817"><a name="cce_10_0007__en-us_topic_0107283638_section51511928173817"></a><a name="en-us_topic_0107283638_section51511928173817"></a><h4 class="sectiontitle">Viewing Logs</h4><p id="cce_10_0007__en-us_topic_0107283638_p7643185724813">You can view logs of Deployments, StatefulSets, DaemonSets, and jobs. This section uses a Deployment as an example to describe how to view logs.</p>
<ol id="cce_10_0007__en-us_topic_0107283638_ol14644105712488"><li id="cce_10_0007__en-us_topic_0107283638_li2619151017014"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b153351729122716">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li1340513385528"><span>Click the <strong id="cce_10_0007__b24101331162716">Deployments</strong> tab and click <span class="uicontrol" id="cce_10_0007__uicontrol741018314276"><b>View Log</b></span> of the target workload.</span><p><p id="cce_10_0007__en-us_topic_0107283638_p17548132715421">In the displayed <strong id="cce_10_0007__b793112517535">View Log</strong> window, you can view logs.</p>
<div class="note" id="cce_10_0007__en-us_topic_0107283638_note216713316213"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0007__en-us_topic_0107283638_p101679316212">The displayed logs are standard output logs of containers and do not have persistence and advanced O&amp;M capabilities. To use more comprehensive log capabilities, see <a href="cce_10_0553.html">Logs</a>. If the function of collecting standard output is enabled for the workload (enabled by default), you can go to AOM to view more workload logs. For details, see <a href="cce_10_0018.html">Using ICAgent to Collect Container Logs</a>.</p>
</div></div>
</p></li></ol>
</div>
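For quick troubleshooting, the same standard output logs can be read with kubectl, assuming command-line access to the cluster; the Deployment name nginx, the pod name, and the namespace below are placeholders.

```shell
# Standard output logs of one pod of the Deployment (kubectl picks a pod):
kubectl logs deploy/nginx -n default

# Follow the logs of a specific pod, keeping only the last 100 lines:
kubectl logs <pod-name> -n default --tail=100 -f
```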
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section17604174417381"><a name="cce_10_0007__en-us_topic_0107283638_section17604174417381"></a><a name="en-us_topic_0107283638_section17604174417381"></a><h4 class="sectiontitle">Upgrading a Workload</h4><p id="cce_10_0007__en-us_topic_0107283638_p96551832133014">You can quickly upgrade Deployments, StatefulSets, and DaemonSets on the CCE console.</p>
<p id="cce_10_0007__en-us_topic_0107283638_p1243174462216">This section uses a Deployment as an example to describe how to upgrade a workload.</p>
<p id="cce_10_0007__en-us_topic_0107283638_p15663124812311">Before replacing an image or image version, upload the new image to the SWR service.</p>
<ol id="cce_10_0007__en-us_topic_0107283638_ol583013911434"><li id="cce_10_0007__li112420494810"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b1068713277289">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li148303911437"><span>Click the <strong id="cce_10_0007__b1023143010284">Deployments</strong> tab and click <span class="uicontrol" id="cce_10_0007__uicontrol162343062817"><b>Upgrade</b></span> of the target workload.</span><p><div class="note" id="cce_10_0007__en-us_topic_0107283638_note104981317262"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0007__en-us_topic_0107283638_ul6585203411317"><li id="cce_10_0007__en-us_topic_0107283638_li658513412313">Workloads cannot be upgraded in batches.</li><li id="cce_10_0007__en-us_topic_0107283638_li175851834193120">Before performing an in-place StatefulSet upgrade, you must manually delete old pods. Otherwise, the upgrade status is always displayed as <strong id="cce_10_0007__en-us_topic_0107283638_b340512519164">Processing</strong>.</li></ul>
</div></div>
</p></li><li id="cce_10_0007__en-us_topic_0107283638_li8831149194314"><span>Upgrade the workload based on service requirements. The method for setting parameters is the same as that for creating a workload.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li3833189134315"><span>After the update is complete, click <span class="uicontrol" id="cce_10_0007__uicontrol5311635122814"><b>Upgrade Workload</b></span>, manually confirm the YAML file, and submit the upgrade.</span></li></ol>
</div>
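As a command-line alternative to the console upgrade, a rolling upgrade can be triggered by changing the pod template, for example its image. The Deployment name nginx, the container name container-1, and the image tag below are placeholders.

```shell
# Replace the image of one container; this starts a rolling upgrade:
kubectl set image deployment/nginx container-1=nginx:1.25 -n default

# Watch the rollout until all pods run the new version:
kubectl rollout status deployment/nginx -n default
```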
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section21669213390"><a name="cce_10_0007__en-us_topic_0107283638_section21669213390"></a><a name="en-us_topic_0107283638_section21669213390"></a><h4 class="sectiontitle">Editing a YAML file</h4><p id="cce_10_0007__en-us_topic_0107283638_p879119319360">You can modify and download the YAML files of Deployments, StatefulSets, DaemonSets, and pods on the CCE console. YAML files of jobs and cron jobs can only be viewed, copied, and downloaded. This section uses a Deployment as an example to describe how to edit the YAML file.</p>
<ol id="cce_10_0007__en-us_topic_0107283638_ol1879112311361"><li id="cce_10_0007__li635115103505"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b95501137142817">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__li1335171017509"><span>Click the <strong id="cce_10_0007__b1413614042816">Deployments</strong> tab and choose <strong id="cce_10_0007__b413716406287">More</strong> &gt; <strong id="cce_10_0007__b18137240202819">Edit YAML</strong> in the <strong id="cce_10_0007__b21377402282">Operation</strong> column of the target workload. In the dialog box that is displayed, modify the YAML file.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li97921133367"><span>Click <strong id="cce_10_0007__b1165164173410">OK</strong>.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li87324268415"><span>(Optional) In the <strong id="cce_10_0007__en-us_topic_0107283638_b8257102371317">Edit YAML</strong> window, click <strong id="cce_10_0007__en-us_topic_0107283638_b13222327121315">Download</strong> to download the YAML file.</span></li></ol>
</div>
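If you prefer working with the manifest directly, kubectl offers rough equivalents of the console's Edit YAML and Download actions; the names below are placeholders.

```shell
# Edit the live Deployment manifest in your local editor:
kubectl edit deployment nginx -n default

# Save the current manifest to a local YAML file:
kubectl get deployment nginx -n default -o yaml > nginx.yaml
```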
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section13324541124815"><a name="cce_10_0007__en-us_topic_0107283638_section13324541124815"></a><a name="en-us_topic_0107283638_section13324541124815"></a><h4 class="sectiontitle">Rolling Back a Workload (Available Only for Deployments)</h4><p id="cce_10_0007__en-us_topic_0107283638_p252119142614">CCE records the release history of all Deployments. You can roll back a Deployment to a specified version.</p>
<ol id="cce_10_0007__en-us_topic_0107283638_ol165211495268"><li id="cce_10_0007__en-us_topic_0107283638_li0901438403"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b1982864212286">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li1254215491914"><span>Click the <strong id="cce_10_0007__b0953744172818">Deployments</strong> tab, choose <span class="uicontrol" id="cce_10_0007__uicontrol1195354418284"><b>More &gt; Roll Back</b></span> in the <strong id="cce_10_0007__b8954204472812">Operation</strong> column of the target workload.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li383212838"><span>Switch to the <strong id="cce_10_0007__b9222047122811">Change History</strong> tab page, click <span class="uicontrol" id="cce_10_0007__uicontrol13223154715289"><b>Roll Back to This Version</b></span> of the target version, manually confirm the YAML file, and click <span class="uicontrol" id="cce_10_0007__uicontrol5223104722812"><b>OK</b></span>.</span><p><p id="cce_10_0007__p119891725195220"></p>
</p></li></ol>
</div>
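The release history is exposed through the Kubernetes rollout mechanism, so a rollback can also be performed with kubectl; the Deployment name and the revision number below are placeholders.

```shell
# List the recorded revisions of the Deployment:
kubectl rollout history deployment/nginx -n default

# Roll back to the previous revision:
kubectl rollout undo deployment/nginx -n default

# Or roll back to a specific revision from the history:
kubectl rollout undo deployment/nginx -n default --to-revision=2
```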
<div class="section" id="cce_10_0007__section132451237607"><a name="cce_10_0007__section132451237607"></a><a name="section132451237607"></a><h4 class="sectiontitle">Redeploying a Workload</h4><p id="cce_10_0007__p15601819195812">After you redeploy a workload, all pods in the workload will be restarted. This section uses Deployments as an example to illustrate how to redeploy a workload.</p>
<ol id="cce_10_0007__ol0529114105916"><li id="cce_10_0007__li152911415912"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b1861155692810">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__li2052917419597"><span>Click the <strong id="cce_10_0007__b13542145872817">Deployments</strong> tab and choose <strong id="cce_10_0007__b1454245820284">More</strong> &gt; <strong id="cce_10_0007__b6543165817284">Redeploy</strong> in the <strong id="cce_10_0007__b18543858112819">Operation</strong> column of the target workload.</span></li><li id="cce_10_0007__li052984175917"><span>In the dialog box that is displayed, click <span class="uicontrol" id="cce_10_0007__uicontrol8574100202910"><b>Yes</b></span> to redeploy the workload.</span></li></ol>
</div>
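On the command line, restarting all pods of a workload corresponds to triggering a fresh rollout (available in kubectl 1.15 and later); the Deployment name below is a placeholder.

```shell
# Restart every pod of the Deployment through a new rolling update:
kubectl rollout restart deployment/nginx -n default
```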
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section12087915401"><a name="cce_10_0007__en-us_topic_0107283638_section12087915401"></a><a name="en-us_topic_0107283638_section12087915401"></a><h4 class="sectiontitle">Disabling/Enabling Upgrade (Available Only for Deployments)</h4><p id="cce_10_0007__p209311112155710">Only Deployments support this operation.</p>
<ul id="cce_10_0007__ul978411517574"><li id="cce_10_0007__li177841115105714">After the upgrade is disabled, the upgrade command can be delivered but will not be applied to the pods.<p id="cce_10_0007__p28775173578"><a name="cce_10_0007__li177841115105714"></a><a name="li177841115105714"></a>If you are performing a rolling upgrade, the rolling upgrade stops after the disabling upgrade command is delivered. In this case, the new and old pods co-exist.</p>
</li><li id="cce_10_0007__li14784141565720">After the upgrade is enabled, the Deployment can be upgraded or rolled back. Its pods will inherit the latest updates of the Deployment. If the pods are inconsistent with the Deployment, they are automatically upgraded according to the latest information of the Deployment.</li></ul>
<div class="notice" id="cce_10_0007__en-us_topic_0107283638_note10276839151110"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_10_0007__en-us_topic_0107283638_p17277163941114">Deployments in the disable upgrade state cannot be rolled back.</p>
</div></div>
<ol id="cce_10_0007__en-us_topic_0107283638_ol1188315418332"><li id="cce_10_0007__en-us_topic_0107283638_li1388334119335"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b11769141672918">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li1588424111338"><span>Click the <strong id="cce_10_0007__b199921814299">Deployments</strong> tab and choose <strong id="cce_10_0007__b1799951820293">More</strong> &gt; <strong id="cce_10_0007__b17031913299">Disable/Enable Upgrade</strong> in the <strong id="cce_10_0007__b180719162911">Operation</strong> column of the workload.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li1288404118334"><span>In the dialog box that is displayed, click <strong id="cce_10_0007__b1688621162914">Yes</strong>.</span></li></ol>
</div>
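In Kubernetes terms, disabling and enabling upgrade corresponds to pausing and resuming a Deployment rollout; treating the console buttons as exact equivalents of these commands is an assumption. The Deployment name below is a placeholder.

```shell
# Pause: upgrade commands are recorded but not applied to the pods.
kubectl rollout pause deployment/nginx -n default

# Resume: all changes queued while paused are rolled out together.
kubectl rollout resume deployment/nginx -n default
```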
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section5931193015488"><a name="cce_10_0007__en-us_topic_0107283638_section5931193015488"></a><a name="en-us_topic_0107283638_section5931193015488"></a><h4 class="sectiontitle">Managing Labels</h4><p id="cce_10_0007__en-us_topic_0107283638_p13735621112611">Labels are key-value pairs and can be attached to workloads. You can manage and select workloads by labels. You can add labels to multiple workloads or a specified workload.</p>
<ol id="cce_10_0007__en-us_topic_0107283638_ol6251112511220"><li id="cce_10_0007__en-us_topic_0107283638_li53548551606"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b1335702382915">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li22871259152611"><span>Click the <strong id="cce_10_0007__b1838219256291">Deployments</strong> tab and choose <strong id="cce_10_0007__b4383162552919">More</strong> &gt; <strong id="cce_10_0007__b2383225142917">Manage Label</strong> in the <strong id="cce_10_0007__b18383182512912">Operation</strong> column of the target workload.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li47616189277"><span>Click <strong id="cce_10_0007__b97761327172916">Add</strong>, enter a key and a value, and click <span class="uicontrol" id="cce_10_0007__uicontrol1277618274294"><b>OK</b></span>.</span><p><div class="note" id="cce_10_0007__en-us_topic_0107283638_note163751811133416"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0007__en-us_topic_0107283638_p03751011133411">A key-value pair must contain 1 to 63 characters starting and ending with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed.</p>
</div></div>
</p></li></ol>
</div>
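Labels can also be managed and queried from the command line; the Deployment name, label key, and value below are placeholders.

```shell
# Add or update a label on the Deployment:
kubectl label deployment nginx env=testing -n default --overwrite

# Remove the label again (note the trailing hyphen):
kubectl label deployment nginx env- -n default

# List Deployments that carry a given label:
kubectl get deployments -n default -l env=testing
```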
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section14423721191418"><a name="cce_10_0007__en-us_topic_0107283638_section14423721191418"></a><a name="en-us_topic_0107283638_section14423721191418"></a><h4 class="sectiontitle">Deleting a Workload/Job</h4><p id="cce_10_0007__en-us_topic_0107283638_p44461328132920">You can delete a workload or job that is no longer needed. Deleted workloads or jobs cannot be recovered. Exercise caution when you perform this operation. This section uses a Deployment as an example to describe how to delete a workload.</p>
<ol id="cce_10_0007__en-us_topic_0107283638_ol16301162312555"><li id="cce_10_0007__li1824612582414"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b4293132919298">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__en-us_topic_0107283638_li23014231555"><span>In the same row as the workload you will delete, choose <strong id="cce_10_0007__en-us_topic_0107283638_b2032918125613">Operation</strong> &gt; <strong id="cce_10_0007__en-us_topic_0107283638_b0329141219611">More</strong> &gt; <strong id="cce_10_0007__en-us_topic_0107283638_b23291912765">Delete</strong>.</span><p><p id="cce_10_0007__en-us_topic_0107283638_p11245223162515">Read the system prompts carefully. A workload cannot be recovered after it is deleted. Exercise caution when performing this operation.</p>
</p></li><li id="cce_10_0007__en-us_topic_0107283638_li1566102365617"><span>Click <strong id="cce_10_0007__en-us_topic_0107283638_b2297164413617">Yes</strong>.</span><p><div class="note" id="cce_10_0007__en-us_topic_0107283638_note1933510551189"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0007__en-us_topic_0107283638_ul204031813191914"><li id="cce_10_0007__en-us_topic_0107283638_li7404151371913">If the node where the pod is located is unavailable or shut down and the workload cannot be deleted, you can forcibly delete the pod from the pod list on the workload details page.</li><li id="cce_10_0007__en-us_topic_0107283638_li10404113191914">Ensure that the storage volumes to be deleted are not used by other workloads. If these volumes are imported or have snapshots, you can only unbind them.</li></ul>
</div></div>
</p></li></ol>
</div>
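The same deletion can be performed with kubectl; as on the console, the operation cannot be undone. The Deployment name below is a placeholder.

```shell
# Delete the Deployment and, with it, all of its pods:
kubectl delete deployment nginx -n default
```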
<div class="section" id="cce_10_0007__en-us_topic_0107283638_section1947616516301"><a name="cce_10_0007__en-us_topic_0107283638_section1947616516301"></a><a name="en-us_topic_0107283638_section1947616516301"></a><h4 class="sectiontitle">Events</h4><p id="cce_10_0007__p16951182315188">This section uses Deployments as an example to illustrate how to view events of a workload. To view the events of a job or cron job, click <span class="uicontrol" id="cce_10_0007__uicontrol5141163802911"><b>View Event</b></span> in the <strong id="cce_10_0007__b5141193842916">Operation</strong> column of the target workload.</p>
<ol id="cce_10_0007__ol114609411810"><li id="cce_10_0007__li146044118811"><span>Log in to the CCE console, go to an existing cluster, and choose <strong id="cce_10_0007__b1144092910">Workloads</strong> in the navigation pane.</span></li><li id="cce_10_0007__li14460104111813"><span>On the <strong id="cce_10_0007__b10635642182913">Deployments</strong> tab page, click the target workload. On the <strong id="cce_10_0007__b1463504252913">Pods</strong> tab page, click <span class="uicontrol" id="cce_10_0007__uicontrol96354422296"><b>View Events</b></span> to view the event name, event type, number of occurrences, Kubernetes event, first occurrence time, and last occurrence time.</span><p><div class="note" id="cce_10_0007__en-us_topic_0107283638_note645916250256"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0007__en-us_topic_0107283638_p2459102542512">Event data will be retained for one hour and then automatically deleted.</p>
</div></div>
</p></li></ol>
</div>
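Because events are retained for only a short time, it can be handy to pull them with kubectl as well; the pod name and namespace below are placeholders.

```shell
# Events of a single pod appear at the end of the describe output:
kubectl describe pod <pod-name> -n default

# All recent events in the namespace, oldest first:
kubectl get events -n default --sort-by=.metadata.creationTimestamp
```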


<a name="cce_10_0009"></a><a name="cce_10_0009"></a>
<h1 class="topictitle1">Using Third-Party Images</h1>
<div id="body1523239642063"><div class="section" id="cce_10_0009__section96721544452"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0009__p106141253452">CCE allows you to create workloads using images pulled from third-party image repositories.</p>
<p id="cce_10_0009__p1261413531252">Generally, a third-party image repository can be accessed only after authentication (using your account and password). CCE uses the secret-based authentication to pull images. Therefore, create a secret for an image repository before pulling images from the repository.</p>
</div>
<div class="section" id="cce_10_0009__section14876601632"><h4 class="sectiontitle">Prerequisites</h4><p id="cce_10_0009__p545510319312">The node where the workload is running is accessible from public networks.</p>
</div>
<div class="section" id="cce_10_0009__section0402183334411"><h4 class="sectiontitle">Using the Console</h4><ol id="cce_10_0009__ol1748117409446"><li id="cce_10_0009__li16481144064414"><a name="cce_10_0009__li16481144064414"></a><a name="li16481144064414"></a><span>Create a secret for accessing a third-party image repository.</span><p><p id="cce_10_0009__p75695254516">Click the cluster name to access the cluster console. In the navigation pane, choose <strong id="cce_10_0009__b361110141418">ConfigMaps and Secrets</strong>. On the <strong id="cce_10_0009__b146111414154116">Secrets</strong> tab, click <strong id="cce_10_0009__b9611714144114">Create Secret</strong> in the upper right corner. Set <strong id="cce_10_0009__b1161113140413">Secret Type</strong> to <strong id="cce_10_0009__b461115144411">kubernetes.io/dockerconfigjson</strong>. For details, see <a href="cce_10_0153.html">Creating a Secret</a>.</p>
<p id="cce_10_0009__p819111064514">Enter the user name and password used to access the third-party image repository.</p>
</p></li><li id="cce_10_0009__li13221161713456"><span>When creating a workload, you can enter a private image path in the format of <em id="cce_10_0009__i138551445252">domainname/namespace/imagename:tag</em> for <span class="uicontrol" id="cce_10_0009__uicontrol252303262917"><b>Image Name</b></span> and select the key created in <a href="#cce_10_0009__li16481144064414">1</a> for <span class="uicontrol" id="cce_10_0009__uicontrol1912713512391"><b>Image Access Credential</b></span>.</span><p><p id="cce_10_0009__p79771915112918"></p>
</p></li><li id="cce_10_0009__li1682113518595"><span>Set other parameters and click <span class="uicontrol" id="cce_10_0009__uicontrol14664142510020"><b>Create Workload</b></span>.</span></li></ol>
</div>
<div class="section" id="cce_10_0009__section18217101117197"><h4 class="sectiontitle">Using kubectl</h4><ol id="cce_10_0009__ol84677271516"><li id="cce_10_0009__li2338171784610"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0009__li54671627213"><span>Use kubectl to create a secret of the kubernetes.io/dockerconfigjson type.</span><p><pre class="screen" id="cce_10_0009__screen1466527017">kubectl create secret docker-registry <strong id="cce_10_0009__b184651127812"><i><span class="varname" id="cce_10_0009__varname20740165882418">myregistrykey</span></i></strong> -n <strong id="cce_10_0009__b1984843918253"><i><span class="varname" id="cce_10_0009__varname846884372519">default</span></i></strong> --docker-server=<strong id="cce_10_0009__b124669278112"><i><span class="varname" id="cce_10_0009__varname153949106259">DOCKER_REGISTRY_SERVER</span></i></strong> --docker-username=<strong id="cce_10_0009__b9466927114"><i><span class="varname" id="cce_10_0009__varname6836161311251">DOCKER_USER</span></i></strong> --docker-password=<strong id="cce_10_0009__b1046662715116"><i><span class="varname" id="cce_10_0009__varname321011555243">DOCKER_PASSWORD</span></i></strong> --docker-email=<strong id="cce_10_0009__b54661627119"><i><span class="varname" id="cce_10_0009__varname17516111722514">DOCKER_EMAIL</span></i></strong></pre>
<p id="cce_10_0009__p164665271714">In the preceding command, <em id="cce_10_0009__i18443812102618">myregistrykey</em> indicates the secret name, <em id="cce_10_0009__i8904529112612">default</em> indicates the namespace where the secret resides, and the other parameters are as follows:</p>
<ul id="cce_10_0009__ul84670278112"><li id="cce_10_0009__li4467142711112"><strong id="cce_10_0009__b640184594119">DOCKER_REGISTRY_SERVER</strong>: address of the third-party image repository, for example, <strong id="cce_10_0009__b240104584114">www.3rdregistry.com</strong> or <strong id="cce_10_0009__b1440215458415">10.10.10.10:443</strong></li><li id="cce_10_0009__li13467127716"><strong id="cce_10_0009__b164021745114117">DOCKER_USER</strong>: account used for logging in to the third-party image repository</li><li id="cce_10_0009__li746782712110"><strong id="cce_10_0009__b1539245574117">DOCKER_PASSWORD</strong>: password used for logging in to the third-party image repository</li><li id="cce_10_0009__li1546712278117"><strong id="cce_10_0009__b10402845154110">DOCKER_EMAIL</strong>: email address associated with the third-party image repository account</li></ul>
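<p>To see what such a secret holds, the sketch below (placeholder credentials, not values from this document) reproduces the <strong>.dockerconfigjson</strong> payload that <strong>kubectl create secret docker-registry</strong> generates:</p>

```shell
# Preview the .dockerconfigjson payload locally (placeholder values only).
DOCKER_REGISTRY_SERVER="www.3rdregistry.com"
DOCKER_USER="myuser"
DOCKER_PASSWORD="mypassword"
DOCKER_EMAIL="user@example.com"
# "auth" is the base64-encoded "user:password" pair.
AUTH=$(printf '%s:%s' "$DOCKER_USER" "$DOCKER_PASSWORD" | base64)
printf '{"auths":{"%s":{"username":"%s","password":"%s","email":"%s","auth":"%s"}}}\n' \
  "$DOCKER_REGISTRY_SERVER" "$DOCKER_USER" "$DOCKER_PASSWORD" "$DOCKER_EMAIL" "$AUTH"
```

<p>After creating the secret in the cluster, you can inspect the stored payload with <strong>kubectl get secret myregistrykey -n default -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d</strong>.</p>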
</p></li><li id="cce_10_0009__li161523518110"><span>Use a third-party image to create a workload.</span><p><div class="p" id="cce_10_0009__p13583471429">A kubernetes.io/dockerconfigjson secret is used for authentication when you obtain a private image. The following is an example of using <strong>myregistrykey</strong> for authentication.<pre class="screen" id="cce_10_0009__screen0583771125">apiVersion: v1
kind: Pod
metadata:
name: foo

<div id="body1522665832344"><p id="cce_10_0010__p13310145119810">You can learn about a cluster network from the following two aspects:</p>
<ul id="cce_10_0010__ul65247121891"><li id="cce_10_0010__li14524161214917">What is a cluster network like? A cluster consists of multiple nodes, and pods (or containers) are running on the nodes. Nodes and containers need to communicate with each other. For details about the cluster network types and their functions, see <a href="#cce_10_0010__section1131733719195">Cluster Network Structure</a>.</li><li id="cce_10_0010__li55241612391">How is pod access implemented in a cluster? Accessing a pod or container is essentially accessing the services deployed by a user. Kubernetes provides <a href="#cce_10_0010__section1860619221134">Service</a> and <a href="#cce_10_0010__section1248852094313">Ingress</a> to address pod access issues. This section summarizes common network access scenarios. You can select the proper scenario based on site requirements. For details about the network access scenarios, see <a href="#cce_10_0010__section1286493159">Access Scenarios</a>.</li></ul>
<div class="section" id="cce_10_0010__section1131733719195"><a name="cce_10_0010__section1131733719195"></a><a name="section1131733719195"></a><h4 class="sectiontitle">Cluster Network Structure</h4><p id="cce_10_0010__p3299181794916">All nodes in the cluster are located in a VPC and use the VPC network. The container network is managed by dedicated network add-ons.</p>
<p id="cce_10_0010__p452843519446"><span><img id="cce_10_0010__image94831936164418" src="en-us_image_0000001647576700.png"></span></p>
<ul id="cce_10_0010__ul1916179122617"><li id="cce_10_0010__li13455145754315"><strong id="cce_10_0010__b19468105563811">Node Network</strong><p id="cce_10_0010__p17682193014812">A node network assigns IP addresses to hosts (nodes in the figure above) in a cluster. Select a VPC subnet as the node network of the CCE cluster. The number of available IP addresses in a subnet determines the maximum number of nodes (including master nodes and worker nodes) that can be created in a cluster. This quantity is also affected by the container network. For details, see the container network model.</p>
</li><li id="cce_10_0010__li16131141644715"><strong id="cce_10_0010__b1975815172433">Container Network</strong><p id="cce_10_0010__p523322010499">A container network assigns IP addresses to containers in a cluster. CCE inherits the IP-Per-Pod-Per-Network network model of Kubernetes. That is, each pod has an independent IP address on a network plane and all containers in a pod share the same network namespace. All pods in a cluster exist in a directly connected flat network. They can access each other through their IP addresses without using NAT. Kubernetes only provides a network mechanism for pods, but does not directly configure pod networks. The configuration of pod networks is implemented by specific container network add-ons. The container network add-ons are responsible for configuring networks for pods and managing container IP addresses.</p>
<p id="cce_10_0010__p3753153443514">Currently, CCE supports the following container network models:</p>
<ul id="cce_10_0010__ul1751111534368"><li id="cce_10_0010__li133611549182410">Container tunnel network: The container tunnel network is constructed on but independent of the node network through tunnel encapsulation. This network model uses VXLAN to encapsulate Ethernet packets into UDP packets and transmits them in tunnels. Open vSwitch serves as the backend virtual switch.</li><li id="cce_10_0010__li285944033514">VPC network: The VPC network uses VPC routing to integrate with the underlying network. This network model applies to performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the route quota in a VPC network. Each node is assigned a CIDR block of a fixed size. This networking model is free from tunnel encapsulation overhead and outperforms the container tunnel network model. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in a cluster can be directly accessed from outside the cluster.</li><li id="cce_10_0010__li5395140132618">Developed by CCE, Cloud Native Network 2.0 deeply integrates Elastic Network Interfaces (ENIs) and Sub Network Interfaces (sub-ENIs) of VPC. Container IP addresses are allocated from the VPC CIDR block. ELB passthrough networking is supported to direct access requests to containers. Security groups and elastic IPs (EIPs) are bound to deliver high performance.</li></ul>
<p id="cce_10_0010__p397482011109">The performance, networking scale, and application scenarios of a container network vary according to the container network model. For details about the functions and features of different container network models, see <a href="cce_10_0281.html">Overview</a>.</p>
</li><li id="cce_10_0010__li9139522183714"><strong id="cce_10_0010__b1885317214113">Service Network</strong><p id="cce_10_0010__p584703114499">Service is also a Kubernetes object. Each Service has a static IP address. When creating a cluster on CCE, you can specify the Service CIDR block. The Service CIDR block cannot overlap with the node or container CIDR block. The Service CIDR block can be used only within a cluster.</p>
</li></ul>
</div>
<div class="section" id="cce_10_0010__section1860619221134"><a name="cce_10_0010__section1860619221134"></a><a name="section1860619221134"></a><h4 class="sectiontitle">Service</h4><p id="cce_10_0010__p314709111318">A Service is used for pod access. With a static IP address, a Service forwards access traffic to pods and performs load balancing for these pods.</p>
<div class="fignone" id="cce_10_0010__en-us_topic_0249851121_fig163156154816"><span class="figcap"><b>Figure 1 </b>Accessing pods through a Service</span><br><span><img id="cce_10_0010__en-us_topic_0249851121_image1926812771312" src="en-us_image_0000001695896373.png"></span></div>
<p id="cce_10_0010__p831948183818">You can configure the following types of Services:</p>
<ul id="cce_10_0010__ul953218444116"><li id="cce_10_0010__li87791418174620">ClusterIP: used to make the Service only reachable from within a cluster.</li><li id="cce_10_0010__li17876227144612">NodePort: used for access from outside a cluster. A NodePort Service is accessed through the port on the node.</li><li id="cce_10_0010__li94953274615">LoadBalancer: used for access from outside a cluster. It is an extension of NodePort: a load balancer routes external traffic to the node port, so external systems only need to access the load balancer.</li></ul>
<p id="cce_10_0010__p1677717174140">For details about the Service, see <a href="cce_10_0249.html">Overview</a>.</p>
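<p>As an illustration only (the names and ports below are assumptions, not taken from this document), a Service manifest declares its type together with the ports it exposes. A minimal NodePort sketch:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx                # assumed Service name
spec:
  type: NodePort             # one of ClusterIP, NodePort, LoadBalancer
  selector:
    app: nginx               # pods carrying this label receive the traffic
  ports:
  - protocol: TCP
    port: 8080               # port exposed by the Service
    targetPort: 80           # port the container listens on
```

<p>With <strong>NodePort</strong>, Kubernetes additionally opens a port on each node; with <strong>LoadBalancer</strong>, a load balancer forwards requests to that node port.</p>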
</div>
<div class="section" id="cce_10_0010__section1248852094313"><a name="cce_10_0010__section1248852094313"></a><a name="section1248852094313"></a><h4 class="sectiontitle">Ingress</h4><p id="cce_10_0010__p96672218193">Services forward requests using layer-4 TCP and UDP protocols. Ingresses forward requests using layer-7 HTTP and HTTPS protocols. Domain names and paths can be used to achieve finer granularities.</p>
<div class="fignone" id="cce_10_0010__fig816719454212"><span class="figcap"><b>Figure 2 </b>Ingress-Service</span><br><span><img id="cce_10_0010__en-us_topic_0249851122_image8371183511310" src="en-us_image_0000001647417440.png"></span></div>
<p id="cce_10_0010__p174691141141410">For details about the ingress, see <a href="cce_10_0094.html">Overview</a>.</p>
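<p>As a sketch (the host, path, and Service name are assumptions for illustration), an ingress rule maps a domain name and path to a backend Service:</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress      # assumed name
spec:
  rules:
  - host: www.example.com    # requests for this domain name...
    http:
      paths:
      - path: /api           # ...whose path matches this prefix...
        pathType: Prefix
        backend:
          service:
            name: api-svc    # ...are forwarded to this Service
            port:
              number: 8080
```

<p>This is the generic Kubernetes Ingress API; a cloud provider's ingress controller may require additional annotations not shown here.</p>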
</div>
<div class="section" id="cce_10_0010__section1286493159"><a name="cce_10_0010__section1286493159"></a><a name="section1286493159"></a><h4 class="sectiontitle">Access Scenarios</h4><p id="cce_10_0010__p1558001514155">Workload access scenarios can be categorized as follows:</p>
<ul id="cce_10_0010__ul125010117542"><li id="cce_10_0010__li1466355519018">Intra-cluster access: A ClusterIP Service is used for workloads in the same cluster to access each other.</li><li id="cce_10_0010__li1014011111110">Access from outside a cluster: A Service (NodePort or LoadBalancer type) or an ingress is recommended for a workload outside a cluster to access workloads in the cluster.<ul id="cce_10_0010__ul101426119117"><li id="cce_10_0010__li8904911447">Access through the public network: An EIP should be bound to the node or load balancer.</li><li id="cce_10_0010__li2501311125411">Access through the private network: The workload can be accessed through the internal IP address of the node or load balancer. If workloads are located in different VPCs, a peering connection is required to enable communication between different VPCs.</li></ul>
</li><li id="cce_10_0010__li1066365520014">The workload can access the external network as follows:<ul id="cce_10_0010__ul17529512239"><li id="cce_10_0010__li26601017165619">Accessing an intranet: The workload accesses the intranet address, but the implementation varies with the container network model. Ensure that the peer security group allows the access requests from the container CIDR block.</li><li id="cce_10_0010__li8257105318237">Accessing a public network: Assign an EIP to the node where the workload runs (when the VPC network or tunnel network model is used), bind an EIP to the pod IP address (when the Cloud Native Network 2.0 model is used), or configure SNAT rules through the NAT gateway. For details, see <a href="cce_10_0400.html">Accessing Public Networks from a Container</a>.</li></ul>
</li></ul>
<div class="fignone" id="cce_10_0010__fig13795829151515"><span class="figcap"><b>Figure 3 </b>Network access diagram</span><br><span><img id="cce_10_0010__image445972519529" src="en-us_image_0000001647576708.png"></span></div>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0020.html">Network</a></div>
</div>
</div>

<a name="cce_10_0011"></a><a name="cce_10_0011"></a>
<h1 class="topictitle1">ClusterIP</h1>
<div id="body1522736584192"><div class="section" id="cce_10_0011__section13559184110492"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0011__p32401248184910">ClusterIP Services allow workloads in the same cluster to use their cluster-internal domain names to access each other.</p>
<p id="cce_10_0011__p653753053815">The cluster-internal domain name format is <em id="cce_10_0011__i8179113533712">&lt;Service name&gt;</em>.<em id="cce_10_0011__i14179133519374">&lt;Namespace of the workload&gt;</em><strong id="cce_10_0011__b164892813716">.svc.cluster.local:</strong><em id="cce_10_0011__i19337102815712">&lt;Port&gt;</em>, for example, <strong id="cce_10_0011__b8115811381">nginx.default.svc.cluster.local:80</strong>.</p>
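<p>For example, assuming a Service named <strong>nginx</strong> in the <strong>default</strong> namespace that uses port 80, the domain name can be composed as follows:</p>

```shell
# Compose the cluster-internal domain name from its parts
# (the Service name, namespace, and port are illustrative).
SERVICE_NAME=nginx
NAMESPACE=default
PORT=80
DOMAIN="${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local:${PORT}"
echo "$DOMAIN"
```

<p>A pod in the same cluster could then reach the workload with, for example, <strong>curl http://nginx.default.svc.cluster.local:80</strong>.</p>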
<p id="cce_10_0011__p1778412445517"><a href="#cce_10_0011__fig192245420557">Figure 1</a> shows the mapping relationships between access channels, container ports, and access ports.</p>
<div class="fignone" id="cce_10_0011__fig192245420557"><a name="cce_10_0011__fig192245420557"></a><a name="fig192245420557"></a><span class="figcap"><b>Figure 1 </b>Intra-cluster access (ClusterIP)</span><br><span><img id="cce_10_0011__image1942163010278" src="en-us_image_0000001647417816.png"></span></div>
</div>
<div class="section" id="cce_10_0011__section51925078171335"><h4 class="sectiontitle">Creating a ClusterIP Service</h4><ol id="cce_10_0011__ol1321170617144"><li id="cce_10_0011__li41731123658"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0011__li836916478329"><span>Choose <strong id="cce_10_0011__b85507206148">Networking</strong> in the navigation pane and click <strong id="cce_10_0011__b1938115214148">Create Service</strong> in the upper right corner.</span></li><li id="cce_10_0011__li3476651017144"><span>Set intra-cluster access parameters.</span><p><ul id="cce_10_0011__ul4446314017144"><li id="cce_10_0011__li6462394317144"><strong id="cce_10_0011__b181470402505">Service Name</strong>: Service name, which can be the same as the workload name.</li><li id="cce_10_0011__li89543531070"><strong id="cce_10_0011__b2091115317145">Service Type</strong>: Select <strong id="cce_10_0011__b291265312145">ClusterIP</strong>.</li><li id="cce_10_0011__li4800017144"><strong id="cce_10_0011__b3997151161512">Namespace</strong>: Namespace to which the workload belongs.</li><li id="cce_10_0011__li43200017144"><strong id="cce_10_0011__b16251723161514">Selector</strong>: Add a label and click <strong id="cce_10_0011__b157041550131611">Add</strong>. A Service selects a pod based on the added label. You can also click <strong id="cce_10_0011__b796831114161">Reference Workload Label</strong> to reference the label of an existing workload. 
In the dialog box that is displayed, select a workload and click <strong id="cce_10_0011__b1117311264160">OK</strong>.</li><li id="cce_10_0011__li388800117144"><strong id="cce_10_0011__b102328496354">Port</strong><ul id="cce_10_0011__ul13757123384316"><li id="cce_10_0011__li475711338435"><strong id="cce_10_0011__b712192113108">Protocol</strong>: protocol used by the Service.</li><li id="cce_10_0011__li353122153610"><strong id="cce_10_0011__b2766425101013">Service Port</strong>: port used by the Service. The port number ranges from 1 to 65535.</li><li id="cce_10_0011__li177581033194316"><strong id="cce_10_0011__b2045852761014">Container Port</strong>: port on which the workload listens. For example, Nginx uses port 80 by default.</li></ul>
</li></ul>
</p></li><li id="cce_10_0011__li5563226917144"><span>Click <strong id="cce_10_0011__b15590122052614">OK</strong>.</span></li></ol>
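<p>The console parameters above map to a Service manifest roughly as follows (a sketch with assumed names and ports):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx                # Service Name
  namespace: default         # Namespace of the workload
spec:
  type: ClusterIP            # Service Type
  selector:
    app: nginx               # Selector: label used to pick the target pods
  ports:
  - protocol: TCP            # Protocol
    port: 8080               # Service Port (1-65535)
    targetPort: 80           # Container Port the workload listens on
```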
</div>
<div class="section" id="cce_10_0011__section9813121512319"><h4 class="sectiontitle">Setting the Access Type Using kubectl</h4><p id="cce_10_0011__p1626583075113">You can run kubectl commands to set the access type (Service). This section uses an Nginx workload as an example to describe how to implement intra-cluster access using kubectl.</p>
<ol id="cce_10_0011__ol19191171513118"><li id="cce_10_0011__li2338171784610"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_0011__li1020013819415"><span>Create and edit the <strong id="cce_10_0011__b1451217585494">nginx-deployment.yaml</strong> and <strong id="cce_10_0011__b0512195818499">nginx-clusterip-svc.yaml</strong> files.</span><p><p id="cce_10_0011__p1527690125210">The file names are user-defined. <strong id="cce_10_0011__b1073117514231">nginx-deployment.yaml</strong> and <strong id="cce_10_0011__b1373115162318">nginx-clusterip-svc.yaml</strong> are merely example file names.</p>
<div class="p" id="cce_10_0011__p7581950184318"><strong id="cce_10_0011__b111191541172515">vi nginx-deployment.yaml</strong><pre class="screen" id="cce_10_0011__screen47713471440">apiVersion: apps/v1
kind: Deployment
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0247.html">Service</a></div>
</div>
</div>

<h1 class="topictitle1">Using ICAgent to Collect Container Logs</h1>
<div id="body1522667123001"><p id="cce_10_0018__p78381781804">CCE works with AOM to collect workload logs. When creating a node, CCE installs the ICAgent for you (the DaemonSet named <strong id="cce_10_0018__b3710330164314">icagent</strong> in the kube-system namespace of the cluster). After the ICAgent collects workload logs and reports them to AOM, you can view workload logs on the CCE or AOM console.</p>
<div class="section" id="cce_10_0018__section17884754413"><h4 class="sectiontitle">Constraints</h4><p id="cce_10_0018__p23831558355">The ICAgent only collects <strong id="cce_10_0018__b39280572146">*.log</strong>, <strong id="cce_10_0018__b1793513574146">*.trace</strong>, and <strong id="cce_10_0018__b29351157191412">*.out</strong> text log files.</p>
</div>
<div class="section" id="cce_10_0018__section1951732710"><h4 class="sectiontitle">Using ICAgent to Collect Logs</h4><ol id="cce_10_0018__ol1253654833013"><li id="cce_10_0018__li19284854163014"><span>When <a href="cce_10_0047.html">creating a workload</a>, set logging for the container.</span></li><li id="cce_10_0018__li2427158104715"><span>Click <span><img id="cce_10_0018__image134281583473" src="en-us_image_0000001695737369.png"></span> to add a log policy.</span><p><div class="p" id="cce_10_0018__p9862125810472">The following uses Nginx as an example. Log policies vary depending on workloads.<div class="fignone" id="cce_10_0018__fig19856172153216"><span class="figcap"><b>Figure 1 </b>Adding a log policy</span><br><span><img id="cce_10_0018__image664110265156" src="en-us_image_0000001691644354.png"></span></div>
</div>
</p></li><li id="cce_10_0018__li1479392315150"><span>Set <strong id="cce_10_0018__b5461630195419">Storage Type</strong> to <span class="uicontrol" id="cce_10_0018__uicontrol105212302547"><b>Host Path</b></span> or <span class="uicontrol" id="cce_10_0018__uicontrol1752103095410"><b>Container Path</b></span>.</span><p>
</p></li><li id="cce_10_0018__li1479392315150"><span>Set <strong id="cce_10_0018__b5461630195419">Volume Type</strong> to <span class="uicontrol" id="cce_10_0018__uicontrol105212302547"><b>Host Path</b></span> or <span class="uicontrol" id="cce_10_0018__uicontrol1752103095410"><b>Container Path</b></span>.</span><p>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0018__table115901715550" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Configuring log policies</caption><thead align="left"><tr id="cce_10_0018__row45851074554"><th align="left" class="cellrowborder" valign="top" width="22.12%" id="mcps1.3.3.2.3.2.1.2.3.1.1"><p id="cce_10_0018__p115843785517">Parameter</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="77.88000000000001%" id="mcps1.3.3.2.3.2.1.2.3.1.2"><p id="cce_10_0018__p12584573550">Description</p>
</th>
</tr>
</thead>
<tbody><tr id="cce_10_0018__row1458511725510"><td class="cellrowborder" valign="top" width="22.12%" headers="mcps1.3.3.2.3.2.1.2.3.1.1 "><p id="cce_10_0018__p115855785514">Storage Type</p>
<tbody><tr id="cce_10_0018__row1458511725510"><td class="cellrowborder" valign="top" width="22.12%" headers="mcps1.3.3.2.3.2.1.2.3.1.1 "><p id="cce_10_0018__p115855785514">Volume Type</p>
</td>
<td class="cellrowborder" valign="top" width="77.88000000000001%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><ul id="cce_10_0018__ul76619114518"><li id="cce_10_0018__li136612110459"><strong id="cce_10_0018__b5948115385420">Host Path</strong> (hostPath): A host path is mounted to the specified container path (mount path). In the node host path, you can view the container logs output into the mount path.</li><li id="cce_10_0018__li6661161104511"><strong id="cce_10_0018__b347751165515">Container Path</strong> (emptyDir): A temporary path of the node is mounted to the specified path (mount path). Log data that exists in the temporary path but is not reported by the collector to AOM will disappear after the pod is deleted.</li></ul>
<td class="cellrowborder" valign="top" width="77.88000000000001%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><ul id="cce_10_0018__ul76619114518"><li id="cce_10_0018__li136612110459"><strong id="cce_10_0018__b5948115385420">Host Path</strong> (hostPath): A host path is mounted to the specified container path (mount path). In the node host path, you can view the container logs output into the mount path.</li><li id="cce_10_0018__li6661161104511"><strong id="cce_10_0018__b10357296196">Container Path</strong> (emptyDir): A temporary path of the node is mounted to the specified path (mount path). Log data that exists in the temporary path but is not reported by the collector to AOM will disappear after the pod is deleted.</li></ul>
</td>
</tr>
<tr id="cce_10_0018__row796372144912"><td class="cellrowborder" valign="top" width="22.12%" headers="mcps1.3.3.2.3.2.1.2.3.1.1 "><p id="cce_10_0018__p161422710493">Host Path</p>
@@ -23,7 +23,7 @@
<td class="cellrowborder" valign="top" width="77.88000000000001%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><p id="cce_10_0018__p171432774911">Enter a host path, for example, <strong id="cce_10_0018__b17669511375">/var/paas/sys/log/nginx</strong>.</p>
</td>
</tr>
<tr id="cce_10_0018__row19587147165512"><td class="cellrowborder" valign="top" width="22.12%" headers="mcps1.3.3.2.3.2.1.2.3.1.1 "><p id="cce_10_0018__p1158647155518">Container Path</p>
<tr id="cce_10_0018__row19587147165512"><td class="cellrowborder" valign="top" width="22.12%" headers="mcps1.3.3.2.3.2.1.2.3.1.1 "><p id="cce_10_0018__p1158647155518">Mount Path</p>
</td>
<td class="cellrowborder" valign="top" width="77.88000000000001%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><div class="p" id="cce_10_0018__p358711715554">Container path (for example, <strong id="cce_10_0018__b8656121314711">/tmp</strong>) to which the storage resources will be mounted.<div class="notice" id="cce_10_0018__note155879745516"><span class="noticetitle"> NOTICE: </span><div class="noticebody"><ul id="cce_10_0018__ul14587570556"><li id="cce_10_0018__li95877735510">Do not mount storage to a system directory such as <strong id="cce_10_0018__b0630176102713">/</strong> or <strong id="cce_10_0018__b063118642719">/var/run</strong>; otherwise, a container error may occur. You are advised to mount the storage to an empty directory. If the directory is not empty, ensure that it contains no files that affect container startup. Otherwise, such files will be replaced, causing the container startup and workload creation to fail.</li><li id="cce_10_0018__li1258777175519">When storage is mounted to a high-risk directory, you are advised to start the container with an account that has the minimum permissions; otherwise, high-risk files on the host may be damaged.</li><li id="cce_10_0018__li1943916477113">AOM collects only the 20 most recently modified log files, and it collects files from 2 levels of subdirectories by default.</li><li id="cce_10_0018__li545718441116">AOM only collects <span class="uicontrol" id="cce_10_0018__uicontrol27371025162017"><b>.log</b></span>, <span class="uicontrol" id="cce_10_0018__uicontrol874242592011"><b>.trace</b></span>, and <span class="uicontrol" id="cce_10_0018__uicontrol1974322522012"><b>.out</b></span> text log files in the mount paths.</li><li id="cce_10_0018__li866676185016">For details about how to set permissions for mount points in a container, see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" target="_blank" rel="noopener noreferrer">Configure a Security Context for a Pod or Container</a>.</li></ul>
</div></div>
@@ -38,10 +38,19 @@
<ul id="cce_10_0018__ul1358877135514"><li id="cce_10_0018__li115872725517"><strong id="cce_10_0018__b67128281231">None</strong>: No extended path is configured. </li><li id="cce_10_0018__li8587474550"><strong id="cce_10_0018__b37109352310">PodUID</strong>: ID of a pod.</li><li id="cce_10_0018__li14587127185513"><strong id="cce_10_0018__b1246417411639">PodName</strong>: name of a pod.</li><li id="cce_10_0018__li10587117175514"><strong id="cce_10_0018__b1232314820315">PodUID/ContainerName</strong>: ID of a pod and name of a container.</li><li id="cce_10_0018__li758816716559"><strong id="cce_10_0018__b15921753534">PodName/ContainerName</strong>: name of a pod and name of a container.</li></ul>
</td>
</tr>
<tr id="cce_10_0018__row1740653212476"><td class="cellrowborder" valign="top" width="22.12%" headers="mcps1.3.3.2.3.2.1.2.3.1.1 "><p id="cce_10_0018__p84061032144714">Collection Path</p>
</td>
<td class="cellrowborder" valign="top" width="77.88000000000001%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><p id="cce_10_0018__p157615551480">A collection path narrows down the scope of collection to specified logs. </p>
<ul id="cce_10_0018__ul1676055194810"><li id="cce_10_0018__li2761555134814">If no collection path is specified, log files in <strong id="cce_10_0018__b471281022817">.log</strong>, <strong id="cce_10_0018__b1171218102289">.trace</strong>, and <strong id="cce_10_0018__b1671221022818">.out</strong> formats will be collected from the specified path.</li><li id="cce_10_0018__li13761955144810"><strong id="cce_10_0018__b71586376261">/Path/**/</strong> indicates that all log files in <strong id="cce_10_0018__b559191242719">.log</strong>, <strong id="cce_10_0018__b1515321718274">.trace</strong>, and <strong id="cce_10_0018__b1766462192711">.out</strong> formats will be recursively collected from the specified path and all subdirectories at 5 levels deep.</li><li id="cce_10_0018__li27745518483">* in log file names indicates a fuzzy match.</li></ul>
<p id="cce_10_0018__p197795574820">Example: The collection path <strong id="cce_10_0018__b591619449318">/tmp/**/test*.log</strong> indicates that all <strong id="cce_10_0018__b4875453173116">.log</strong> files prefixed with <strong id="cce_10_0018__b1651618112234">test</strong> will be collected from <strong id="cce_10_0018__b442040193212">/tmp</strong> and subdirectories at 5 levels deep.</p>
<div class="caution" id="cce_10_0018__note1039671516135"><span class="cautiontitle"> CAUTION: </span><div class="cautionbody"><p id="cce_10_0018__p5396171516138">Ensure that the ICAgent version is 5.12.22 or later.</p>
</div></div>
</td>
</tr>
<tr id="cce_10_0018__row85891275552"><td class="cellrowborder" valign="top" width="22.12%" headers="mcps1.3.3.2.3.2.1.2.3.1.1 "><p id="cce_10_0018__p258847105513">Log Dump</p>
</td>
<td class="cellrowborder" valign="top" width="77.88000000000001%" headers="mcps1.3.3.2.3.2.1.2.3.1.2 "><p id="cce_10_0018__p674918171418">Log dump refers to rotating log files on a local host.</p>
<li id="cce_10_0018__li129311551416">
<ul id="cce_10_0018__ul1493171511410"><li id="cce_10_0018__li129311551416"><strong id="cce_10_0018__b4837638192520">Enabled</strong>: AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped immediately. A new <strong id="cce_10_0018__b98429388254">.zip</strong> file is generated in the directory where the log file is located. For a log file, AOM stores only the latest 20 <strong id="cce_10_0018__b2216332192917">.zip</strong> files. When the number of <strong id="cce_10_0018__b1621653252914">.zip</strong> files exceeds 20, earlier <strong id="cce_10_0018__b1321623212917">.zip</strong> files will be deleted. After the dump is complete, the log file in AOM will be cleared.</li><li id="cce_10_0018__li79301514142"><strong id="cce_10_0018__b1646121113016">Disabled</strong>: AOM does not dump log files.</li></ul>
<ul id="cce_10_0018__ul1493171511410"><li id="cce_10_0018__li129311551416"><strong id="cce_10_0018__b14264156295">Enabled</strong>: AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped. A new <strong id="cce_10_0018__b194262015152911">.zip</strong> file is generated in the directory where the log file is located. For a log file, AOM stores only the latest 20 <strong id="cce_10_0018__b9346139122920">.zip</strong> files. When the number of <strong id="cce_10_0018__b53461239162918">.zip</strong> files exceeds 20, earlier <strong id="cce_10_0018__b1347539132915">.zip</strong> files will be deleted.</li><li id="cce_10_0018__li79301514142"><strong id="cce_10_0018__b1646121113016">Disabled</strong>: AOM does not dump log files.</li></ul>
<div class="note" id="cce_10_0018__note746743620142"><span class="notetitle"> NOTE: </span><div class="notebody"><ul id="cce_10_0018__ul148957434143"><li id="cce_10_0018__li3895243141410">AOM rotates log files using copytruncate. Before enabling log dumping, ensure that log files are written in the append mode. Otherwise, file holes may occur.</li><li id="cce_10_0018__li7895164351412">Currently, mainstream log components such as Log4j and Logback support log file rotation. If you have already set rotation for log files, skip the configuration. Otherwise, conflicts may occur.</li><li id="cce_10_0018__li589554311145">You are advised to configure log file rotation for your own services to flexibly control the size and number of rolled files.</li></ul>
</div></div>
</td>
@@ -124,7 +133,7 @@ spec:
logs:
rotate: Hourly
annotations:
pathPattern: '**'
format: ''
volumes:
- hostPath:
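The fragment above is truncated by the diff view. Pieced together from the parameters this section documents (<strong>policy.logs.rotate</strong>, <strong>policy.logs.annotations.pathPattern</strong>, and <strong>policy.logs.annotations.format</strong>), a hedged sketch of a complete log-policy volume could look as follows. The host path and pattern values are illustrative assumptions, and the exact nesting should be verified against the full YAML example in this guide.

```yaml
# Hypothetical hostPath volume carrying a CCE log policy
# (path and pattern values are assumptions).
volumes:
  - name: vol-log
    hostPath:
      path: /var/paas/sys/log/nginx    # host directory that receives the logs
    policy:
      logs:
        rotate: Hourly                 # local log dump (rotation) interval
        annotations:
          # Recursively collect matching .log files from subdirectories
          # up to 5 levels deep (requires ICAgent 5.12.22 or later).
          pathPattern: '**/test*.log'
          format: ''                   # multi-line matching rule (empty here)
```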
@@ -146,8 +155,8 @@ spec:
<td class="cellrowborder" valign="top" width="19.23%" headers="mcps1.3.4.7.2.4.1.2 "><p id="cce_10_0018__p6329709512">Extended host path</p>
</td>
<td class="cellrowborder" valign="top" width="63.71%" headers="mcps1.3.4.7.2.4.1.3 "><p id="cce_10_0018__p32881805119">Extended host paths contain pod IDs or container names to distinguish different containers into which the host path is mounted.</p>
<p id="cce_10_0018__p1728888115112">A level-3 directory is added to the original volume directory/subdirectory. You can easily obtain the files output by a single <span class="keyword" id="cce_10_0018__keyword1766445251">Pod</span>.</p>
<ul id="cce_10_0018__ul2028828105113"><li id="cce_10_0018__li428815865110"><strong id="cce_10_0018__b466439911">None</strong>: No extended path is configured. </li><li id="cce_10_0018__li62889814517"><strong id="cce_10_0018__b746148577">PodUID</strong>: ID of a pod.</li><li id="cce_10_0018__li528818135113"><strong id="cce_10_0018__b678656736">PodName</strong>: name of a pod.</li><li id="cce_10_0018__li62882084517"><strong id="cce_10_0018__b1079307725">PodUID/ContainerName</strong>: ID of a pod and name of a container.</li><li id="cce_10_0018__li528898175110"><strong id="cce_10_0018__b8818125942116">PodName/ContainerName</strong>: name of a pod and name of a container.</li></ul>
<p id="cce_10_0018__p1728888115112">A level-3 directory is added to the original volume directory/subdirectory. You can easily obtain the files output by a single <span class="keyword" id="cce_10_0018__keyword1954519765">Pod</span>.</p>
<ul id="cce_10_0018__ul2028828105113"><li id="cce_10_0018__li428815865110"><strong id="cce_10_0018__b975065369">None</strong>: No extended path is configured. </li><li id="cce_10_0018__li62889814517"><strong id="cce_10_0018__b1689154494">PodUID</strong>: ID of a pod.</li><li id="cce_10_0018__li528818135113"><strong id="cce_10_0018__b925397375">PodName</strong>: name of a pod.</li><li id="cce_10_0018__li62882084517"><strong id="cce_10_0018__b2107748081">PodUID/ContainerName</strong>: ID of a pod and name of a container.</li><li id="cce_10_0018__li528898175110"><strong id="cce_10_0018__b8818125942116">PodName/ContainerName</strong>: name of a pod and name of a container.</li></ul>
</td>
</tr>
<tr id="cce_10_0018__row732915085118"><td class="cellrowborder" valign="top" width="17.06%" headers="mcps1.3.4.7.2.4.1.1 "><p id="cce_10_0018__p17329004514">policy.logs.rotate</p>
@@ -155,11 +164,22 @@
<td class="cellrowborder" valign="top" width="19.23%" headers="mcps1.3.4.7.2.4.1.2 "><p id="cce_10_0018__p123292055113">Log dump</p>
</td>
<td class="cellrowborder" valign="top" width="63.71%" headers="mcps1.3.4.7.2.4.1.3 "><p id="cce_10_0018__p1017113396539">Log dump refers to rotating log files on a local host.</p>
<ul id="cce_10_0018__ul1617120398533"><li id="cce_10_0018__li71711639105316"><strong id="cce_10_0018__b228801547">Enabled</strong>: AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped immediately. A new <strong id="cce_10_0018__b618877522">.zip</strong> file is generated in the directory where the log file is located. For a log file, AOM stores only the latest 20 <strong id="cce_10_0018__b67462932">.zip</strong> files. When the number of <strong id="cce_10_0018__b478147095">.zip</strong> files exceeds 20, earlier <strong id="cce_10_0018__b1992183573">.zip</strong> files will be deleted. After the dump is complete, the log file in AOM will be cleared.</li><li id="cce_10_0018__li817133985315"><strong id="cce_10_0018__b1231713624">Disabled</strong>: AOM does not dump log files.</li></ul>
<ul id="cce_10_0018__ul1617120398533"><li id="cce_10_0018__li71711639105316"><strong id="cce_10_0018__b4837638192520">Enabled</strong>: AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped immediately. A new <strong id="cce_10_0018__b98429388254">.zip</strong> file is generated in the directory where the log file is located. For a log file, AOM stores only the latest 20 <strong id="cce_10_0018__b2216332192917">.zip</strong> files. When the number of <strong id="cce_10_0018__b1621653252914">.zip</strong> files exceeds 20, earlier <strong id="cce_10_0018__b1321623212917">.zip</strong> files will be deleted. After the dump is complete, the log file in AOM will be cleared.</li><li id="cce_10_0018__li817133985315"><strong id="cce_10_0018__b583150473">Disabled</strong>: AOM does not dump log files.</li></ul>
<div class="note" id="cce_10_0018__note121711639195319"><span class="notetitle"> NOTE: </span><div class="notebody"><ul id="cce_10_0018__ul817183918533"><li id="cce_10_0018__li9171183945310">AOM rotates log files using copytruncate. Before enabling log dumping, ensure that log files are written in the append mode. Otherwise, file holes may occur.</li><li id="cce_10_0018__li1117153914535">Currently, mainstream log components such as Log4j and Logback support log file rotation. If you have set rotation for log files, skip the configuration. Otherwise, conflicts may occur.</li><li id="cce_10_0018__li317113915532">You are advised to configure log file rotation for your own services to flexibly control the size and number of rolled files.</li></ul>
</div></div>
</td>
</tr>
<tr id="cce_10_0018__row14329504511"><td class="cellrowborder" valign="top" width="17.06%" headers="mcps1.3.4.7.2.4.1.1 "><p id="cce_10_0018__p93292045113">policy.logs.annotations.pathPattern</p>
</td>
<td class="cellrowborder" valign="top" width="19.23%" headers="mcps1.3.4.7.2.4.1.2 "><p id="cce_10_0018__p14388112019519">Collection path</p>
</td>
<td class="cellrowborder" valign="top" width="63.71%" headers="mcps1.3.4.7.2.4.1.3 "><p id="cce_10_0018__p63882201153">A collection path narrows down the scope of collection to specified logs. </p>
<ul id="cce_10_0018__ul73883209510"><li id="cce_10_0018__li14388162011513">If no collection path is specified, log files in <strong id="cce_10_0018__b2106395307">.log</strong>, <strong id="cce_10_0018__b553678455">.trace</strong>, and <strong id="cce_10_0018__b1169250673">.out</strong> formats will be collected from the specified path.</li><li id="cce_10_0018__li03886201854"><strong id="cce_10_0018__b378475335">/Path/**/</strong> indicates that all log files in <strong id="cce_10_0018__b1506880273">.log</strong>, <strong id="cce_10_0018__b251849409">.trace</strong>, and <strong id="cce_10_0018__b541073936">.out</strong> formats will be recursively collected from the specified path and all subdirectories at 5 levels deep.</li><li id="cce_10_0018__li1938811201058">* in log file names indicates a fuzzy match.</li></ul>
<p id="cce_10_0018__p17388152013515">Example: The collection path <strong id="cce_10_0018__b19951612237">/tmp/**/test*.log</strong> indicates that all <strong id="cce_10_0018__b49571315239">.log</strong> files prefixed with <strong id="cce_10_0018__b4958101202315">test</strong> will be collected from <strong id="cce_10_0018__b695815172316">/tmp</strong> and subdirectories at 5 levels deep.</p>
<div class="caution" id="cce_10_0018__note153881220751"><span class="cautiontitle"> CAUTION: </span><div class="cautionbody"><p id="cce_10_0018__p938810204516">Ensure that the ICAgent version is 5.12.22 or later.</p>
</div></div>
</td>
</tr>
<tr id="cce_10_0018__row10264639195415"><td class="cellrowborder" valign="top" width="17.06%" headers="mcps1.3.4.7.2.4.1.1 "><p id="cce_10_0018__p17265103911544">policy.logs.annotations.format</p>
</td>
<td class="cellrowborder" valign="top" width="19.23%" headers="mcps1.3.4.7.2.4.1.2 "><p id="cce_10_0018__p17265039125417">Multi-line log matching</p>

View File

@@ -1,6 +1,6 @@
<a name="cce_10_0019"></a><a name="cce_10_0019"></a>
<h1 class="topictitle1">Charts</h1>
<h1 class="topictitle1">Helm Chart</h1>
<div id="body1522665832345"></div>
<div>
<ul class="ullinks">
@@ -8,6 +8,14 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0146.html">Deploying an Application from a Chart</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0421.html">Differences Between Helm v2 and Helm v3 and Adaptation Solutions</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0420.html">Deploying an Application Through the Helm v2 Client</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0144.html">Deploying an Application Through the Helm v3 Client</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0422.html">Converting a Release from Helm v2 to v3</a></strong><br>
</li>
</ul>
</div>

View File

@@ -1,6 +1,6 @@
<a name="cce_10_0020"></a><a name="cce_10_0020"></a>
<h1 class="topictitle1">Networking</h1>
<h1 class="topictitle1">Network</h1>
<div id="body1506570432072"></div>
<div>
<ul class="ullinks">
@@ -8,20 +8,20 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0280.html">Container Network Models</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0247.html">Services</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0247.html">Service</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0248.html">Ingresses</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0359.html">DNS</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0675.html">Container Network Settings</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0679.html">Cluster Network Settings</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0399.html">Configuring Intra-VPC Access</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0400.html">Accessing Public Networks from a Container</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0059.html">Network Policies</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0402.html">Host Network</a></strong><br>
</li>
</ul>
</div>

View File

@@ -1,6 +1,6 @@
<a name="cce_10_0024"></a><a name="cce_10_0024"></a>
<h1 class="topictitle1">Cloud Trace Service (CTS)</h1>
<h1 class="topictitle1">CTS Logs</h1>
<div id="body1525226397666"></div>
<div>
<ul class="ullinks">
@@ -9,5 +9,9 @@
<li class="ulchildlink"><strong><a href="cce_10_0026.html">Querying CTS Logs</a></strong><br>
</li>
</ul>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0705.html">Observability</a></div>
</div>
</div>

View File

@@ -590,7 +590,7 @@
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0024.html">Cloud Trace Service (CTS)</a></div>
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0024.html">CTS Logs</a></div>
</div>
</div>

View File

@@ -3,19 +3,19 @@
<h1 class="topictitle1">Querying CTS Logs</h1>
<div id="body1525226397666"><div class="section" id="cce_10_0026__section19908104613460"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0026__p1349415403233">After you enable CTS, the system starts recording operations on CCE resources. Operation records of the last 7 days can be viewed on the CTS management console.</p>
</div>
<div class="section" id="cce_10_0026__section208814582456"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0026__ol968681862911"><li id="cce_10_0026__li18356228445"><span>Log in to the management console.</span></li><li id="cce_10_0026__li14905725134512"><span>Click <span><img id="cce_10_0026__image1180502423211" src="en-us_image_0000001569182497.gif"></span> in the upper left corner and select a region.</span></li><li id="cce_10_0026__li56856187296"><span>Choose <strong id="cce_10_0026__b161841334316020">Service List</strong> from the main menu. Choose <strong id="cce_10_0026__b14174101155814">Management &amp; Deployment</strong> &gt; <strong id="cce_10_0026__b1917414113585">Cloud Trace Service</strong>.</span></li><li id="cce_10_0026__li6685018122920"><span>In the navigation pane of the CTS console, choose <strong id="cce_10_0026__b091641316584">Cloud Trace Service</strong> &gt; <strong id="cce_10_0026__b6917813165811">Trace List</strong>.</span></li><li id="cce_10_0026__li0686618152911"><span>On the <strong id="cce_10_0026__b156310494616044">Trace List</strong> page, query operation records based on the search criteria. Currently, the trace list supports trace query based on the combination of the following search criteria:</span><p><ul id="cce_10_0026__ul2686318142919"><li id="cce_10_0026__li9685018132914"><strong id="cce_10_0026__b147767585916113">Trace Source</strong>, <strong id="cce_10_0026__b33843206916113">Resource Type</strong>, and <strong id="cce_10_0026__b104136949616113">Search By</strong><p id="cce_10_0026__p068517181297">Select the search criteria from the drop-down lists. Select <strong id="cce_10_0026__b987393825817">CCE</strong> from the <strong id="cce_10_0026__b1287312387583">Trace Source</strong> drop-down list.</p>
<div class="section" id="cce_10_0026__section208814582456"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0026__ol968681862911"><li id="cce_10_0026__li18356228445"><span>Log in to the management console.</span></li><li id="cce_10_0026__li14905725134512"><span>Click <span><img id="cce_10_0026__image1180502423211" src="en-us_image_0000001647417272.gif"></span> in the upper left corner and select a region.</span></li><li id="cce_10_0026__li56856187296"><span>Choose <strong id="cce_10_0026__b161841334316020">Service List</strong> from the main menu. Choose <strong id="cce_10_0026__b14174101155814">Management &amp; Deployment</strong> &gt; <strong id="cce_10_0026__b1917414113585">Cloud Trace Service</strong>.</span></li><li id="cce_10_0026__li6685018122920"><span>In the navigation pane of the CTS console, choose <strong id="cce_10_0026__b091641316584">Cloud Trace Service</strong> &gt; <strong id="cce_10_0026__b6917813165811">Trace List</strong>.</span></li><li id="cce_10_0026__li0686618152911"><span>On the <strong id="cce_10_0026__b156310494616044">Trace List</strong> page, query operation records based on the search criteria. Currently, the trace list supports trace query based on the combination of the following search criteria:</span><p><ul id="cce_10_0026__ul2686318142919"><li id="cce_10_0026__li9685018132914"><strong id="cce_10_0026__b147767585916113">Trace Source</strong>, <strong id="cce_10_0026__b33843206916113">Resource Type</strong>, and <strong id="cce_10_0026__b104136949616113">Search By</strong><p id="cce_10_0026__p068517181297">Select the search criteria from the drop-down lists. Select <strong id="cce_10_0026__b987393825817">CCE</strong> from the <strong id="cce_10_0026__b1287312387583">Trace Source</strong> drop-down list.</p>
<p id="cce_10_0026__p26851618102915">If you select <strong id="cce_10_0026__b23175131216221">Trace name</strong> from the <strong id="cce_10_0026__b172899127516221">Search By</strong> drop-down list, specify the trace name.</p>
<p id="cce_10_0026__p7685191818293">If you select <strong id="cce_10_0026__b33083335616231">Resource ID</strong> from the <strong id="cce_10_0026__b153919820216231">Search By</strong> drop-down list, select or enter a specific resource ID.</p>
<p id="cce_10_0026__p166851718102917">If you select <strong id="cce_10_0026__b50135831116238">Resource name</strong> from the <strong id="cce_10_0026__b186507588316238">Search By</strong> drop-down list, select or enter a specific resource name.</p>
</li><li id="cce_10_0026__li1968671815297"><strong id="cce_10_0026__b168444573616245">Operator</strong>: Select a specific operator (at user level rather than account level).</li><li id="cce_10_0026__li368641832910"><strong id="cce_10_0026__b113712261116258">Trace Status</strong>: Set this parameter to any of the following values: <strong id="cce_10_0026__b135890568716258">All trace statuses</strong>, <strong id="cce_10_0026__b192911413716258">normal</strong>, <strong id="cce_10_0026__b59570413316258">warning</strong>, and <strong id="cce_10_0026__b169117565716258">incident</strong>.</li><li id="cce_10_0026__li12686118112916">Time range: You can query traces generated during any time range in the last seven days.</li></ul>
</p></li><li id="cce_10_0026__li01301836122914"><span>Click <span><img id="cce_10_0026__image07291172331" src="en-us_image_0000001569182505.png"></span> on the left of a trace to expand its details, as shown below.</span><p><div class="fignone" id="cce_10_0026__fig1324117817394"><span class="figcap"><b>Figure 1 </b>Expanding trace details</span><br><span><img id="cce_10_0026__image19242788396" src="en-us_image_0000001569022781.png"></span></div>
</p></li><li id="cce_10_0026__li186863182294"><span>Click <strong id="cce_10_0026__b25871212163720">View Trace</strong> in the <strong id="cce_10_0026__b1597141217374">Operation</strong> column. The trace details are displayed.</span><p><div class="fignone" id="cce_10_0026__fig365411360512"><span class="figcap"><b>Figure 2 </b>Viewing event details</span><br><span><img id="cce_10_0026__image21436386418" src="en-us_image_0000001517743372.png"></span></div>
</p></li><li id="cce_10_0026__li01301836122914"><span>Click <span><img id="cce_10_0026__image07291172331" src="en-us_image_0000001695896213.png"></span> on the left of a trace to expand its details, as shown below.</span><p><div class="fignone" id="cce_10_0026__fig1324117817394"><span class="figcap"><b>Figure 1 </b>Expanding trace details</span><br><span><img id="cce_10_0026__image19242788396" src="en-us_image_0000001695896201.png"></span></div>
</p></li><li id="cce_10_0026__li186863182294"><span>Click <strong id="cce_10_0026__b25871212163720">View Trace</strong> in the <strong id="cce_10_0026__b1597141217374">Operation</strong> column. The trace details are displayed.</span><p><div class="fignone" id="cce_10_0026__fig365411360512"><span class="figcap"><b>Figure 2 </b>Viewing event details</span><br><span><img id="cce_10_0026__image21436386418" src="en-us_image_0000001695736933.png"></span></div>
</p></li></ol>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0024.html">Cloud Trace Service (CTS)</a></div>
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0024.html">CTS Logs</a></div>
</div>
</div>

View File

@@ -1,33 +1,35 @@
<a name="cce_10_0028"></a><a name="cce_10_0028"></a>
<h1 class="topictitle1">Creating a CCE Cluster</h1>
<div id="body1505899032898"><p id="cce_10_0028__p126541913151116">On the CCE console, you can easily create Kubernetes clusters. Kubernetes can manage container clusters at scale. A cluster manages a group of node resources.</p>
<p id="cce_10_0028__p162026117205">In CCE, you can create a CCE cluster to manage VMs. By using high-performance network models, hybrid clusters provide a multi-scenario, secure, and stable runtime environment for containers.</p>
<div class="section" id="cce_10_0028__section1386743114294"><h4 class="sectiontitle">Notes and Constraints</h4><ul id="cce_10_0028__ul686414167496"><li id="cce_10_0028__li190817135320">During the node creation, software packages are downloaded from OBS using the domain name. You need to use a private DNS server to resolve the OBS domain name, and configure the DNS server address of the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name.</li><li id="cce_10_0028__li124606217339">You can create a maximum of 50 clusters in a single region.</li><li id="cce_10_0028__li1186441616491">After a cluster is created, the following items cannot be changed:<ul id="cce_10_0028__ul1386431634910"><li id="cce_10_0028__li6864131614492">Cluster type</li><li id="cce_10_0028__li359558115311">Number of master nodes in the cluster</li><li id="cce_10_0028__li452948112016">AZ of a master node</li><li id="cce_10_0028__li1686412165496">Network configuration of the cluster, such as the VPC, subnet, container CIDR block, Service CIDR block, and kube-proxy (forwarding) settings</li><li id="cce_10_0028__li1686451618494">Network model. For example, change <strong id="cce_10_0028__b16979154810810">Tunnel network</strong> to <strong id="cce_10_0028__b1297916485820">VPC network</strong>.</li></ul>
<h1 class="topictitle1">Creating a Cluster</h1>
<div id="body1505899032898"><p id="cce_10_0028__p126541913151116">On the CCE console, you can easily create Kubernetes clusters. After a cluster is created, the master node is hosted by CCE. You only need to create worker nodes. In this way, you can implement cost-effective O&amp;M and efficient service deployment.</p>
<div class="section" id="cce_10_0028__section1386743114294"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0028__ul686414167496"><li id="cce_10_0028__li190817135320">During the node creation, software packages are downloaded from OBS using the domain name. Use a private DNS server to resolve the OBS domain name, and configure the DNS server address of the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name.</li><li id="cce_10_0028__li124606217339">You can create a maximum of 50 clusters in a single region.</li><li id="cce_10_0028__li1186441616491">After a cluster is created, the following items cannot be changed:<ul id="cce_10_0028__ul1386431634910"><li id="cce_10_0028__li6864131614492">Cluster type</li><li id="cce_10_0028__li359558115311">Number of master nodes in the cluster</li><li id="cce_10_0028__li452948112016">AZ of a master node</li><li id="cce_10_0028__li1686412165496">Network configuration of the cluster, such as the VPC, subnet, container CIDR block, Service CIDR block, and kube-proxy (<a href="#cce_10_0028__li1895772174715">request forwarding</a>) settings.</li><li id="cce_10_0028__li1686451618494">Network model. For example, change <strong id="cce_10_0028__b16979154810810">Tunnel network</strong> to <strong id="cce_10_0028__b1297916485820">VPC network</strong>.</li></ul>
</li></ul>
</div>
<div class="section" id="cce_10_0028__section176228482126"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0028__ol1233331493511"><li id="cce_10_0028__li16411127162211"><span>Log in to the CCE console.</span></li><li id="cce_10_0028__li833491416359"><span>Choose <strong id="cce_10_0028__b1495755015314">Clusters</strong>. On the displayed page, select the type of the cluster to be created and click <strong id="cce_10_0028__b6697519153215">Create</strong>.</span></li><li id="cce_10_0028__li1569162220359"><span>Specify cluster parameters.</span><p><div class="p" id="cce_10_0028__p5653205823718"><strong id="cce_10_0028__b14641318112618">Basic Settings</strong><ul id="cce_10_0028__ul5395195853710"><li id="cce_10_0028__li1739455810379"><strong id="cce_10_0028__b2652101414325">Cluster Name</strong>: indicates the name of the cluster to be created. The cluster name must be unique under the same account.</li><li id="cce_10_0028__li163957587379"><strong id="cce_10_0028__b17378145120437">Cluster Version</strong>: Select the Kubernetes version used by the cluster.</li><li id="cce_10_0028__li5395358163711"><strong id="cce_10_0028__b1770657104313">Cluster Scale</strong>: maximum number of nodes that can be managed by the cluster. </li><li id="cce_10_0028__li467617271013">HA: distribution mode of master nodes. By default, master nodes are randomly distributed in different AZs to improve DR capabilities.<div class="p" id="cce_10_0028__p15811036101"><a name="cce_10_0028__li467617271013"></a><a name="li467617271013"></a>You can also expand advanced settings and customize the master node distribution mode. 
The following two modes are supported:<ul id="cce_10_0028__ul729432918812"><li id="cce_10_0028__li1529418293815"><strong id="cce_10_0028__b1929619586454">Random</strong>: Master nodes are created in different AZs for DR.</li><li id="cce_10_0028__li103958393117"><strong id="cce_10_0028__b5810610331">Custom</strong>: You can determine the location of each master node.<ul id="cce_10_0028__ul1220719413117"><li id="cce_10_0028__li62941529381"><strong id="cce_10_0028__b292085817517">Host</strong>: Master nodes are created on different hosts in the same AZ.</li><li id="cce_10_0028__li32946293815"><strong id="cce_10_0028__b01923920215">Custom</strong>: You can determine the location of each master node.</li></ul>
</li></ul>
</div>
</li></ul>
</div>
<p id="cce_10_0028__p1816113443815"><strong id="cce_10_0028__b156891146112919">Network Settings</strong></p>
<p id="cce_10_0028__p850019415499">The cluster network settings cover nodes, containers, and Services. For details about the cluster networking and container network models, see <a href="cce_10_0010.html">Overview</a>.</p>
<ul id="cce_10_0028__ul1923917171387"><li id="cce_10_0028__li12239017103811">Network Model: CCE clusters support <span class="uicontrol" id="cce_10_0028__uicontrol19729415374"><b>VPC network</b></span> and <span class="uicontrol" id="cce_10_0028__uicontrol10972104133713"><b>Tunnel network</b></span>. CCE Turbo clusters support <span class="uicontrol" id="cce_10_0028__uicontrol7643123519375"><b>Cloud Native Network 2.0</b></span>. For details, see <a href="cce_10_0281.html">Overview</a>.</li><li id="cce_10_0028__li551715378214"><strong id="cce_10_0028__b112892113017">VPC</strong>: Select the VPC to which the cluster belongs. If no VPC is available, click <strong id="cce_10_0028__b4281322305">Create VPC</strong> to create one. The value cannot be changed after creation.</li><li id="cce_10_0028__li270184618382"><strong id="cce_10_0028__b7755955308">Master Node Subnet</strong>: Select the subnet where the master node is deployed. If no subnet is available, click <strong id="cce_10_0028__b15755858308">Create Subnet</strong> to create one. The subnet cannot be changed after creation.</li><li id="cce_10_0028__li17658613185"><strong id="cce_10_0028__b7492195291610">Container CIDR Block</strong> (CCE Cluster): Specify the CIDR block used by containers, which determines the maximum number of containers in the cluster.</li><li id="cce_10_0028__li1915125922118"><strong id="cce_10_0028__b587920114287">Default Pod Subnet</strong> (CCE Turbo Cluster): Select the subnet where the container is located. If no subnet is available, click <span class="uicontrol" id="cce_10_0028__uicontrol1339574710456"><b>Create Subnet</b></span>. The pod subnet determines the maximum number of containers in the cluster. You can add pod subnets after creating the cluster.</li><li id="cce_10_0028__li411411954713"><strong id="cce_10_0028__b1371915515249">Service CIDR Block</strong>: CIDR block for Services used by containers in the same cluster to access each other. The value determines the maximum number of Services you can create. The value cannot be changed after creation.</li></ul>
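Because neither the container CIDR block nor the Service CIDR block can be changed after the cluster is created, it helps to estimate their capacity up front. As a rough sketch (the prefix lengths below are illustrative assumptions, not defaults), the address count of an IPv4 CIDR block follows from its prefix length:

```shell
# Rough sizing sketch: an IPv4 CIDR block with prefix length p holds
# 2^(32 - p) addresses. The prefix values here are assumptions for
# illustration only.
container_prefix=16   # e.g. a container CIDR block such as 172.16.0.0/16
service_prefix=18     # e.g. a Service CIDR block such as 10.247.0.0/18

echo "Container CIDR /$container_prefix holds $(( 1 << (32 - container_prefix) )) addresses"
echo "Service CIDR /$service_prefix holds $(( 1 << (32 - service_prefix) )) addresses"
```

A /16 yields 65536 addresses and a /18 yields 16384, so choose prefixes with enough headroom for the pods and Services the cluster may eventually run.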
<p id="cce_10_0028__p3866175612467"><strong id="cce_10_0028__b0441658114611">Advanced Settings</strong></p>
<ul id="cce_10_0028__ul89571727475"><li id="cce_10_0028__li1895772174715"><a name="cce_10_0028__li1895772174715"></a><a name="li1895772174715"></a><strong id="cce_10_0028__b18677104202412">Request Forwarding</strong>: The IPVS and iptables modes are supported. For details, see <a href="cce_10_0349.html">Comparing iptables and IPVS</a>.</li><li id="cce_10_0028__li1736133045414"><strong id="cce_10_0028__b388665713297">CPU Manager</strong>: When enabled, CPU cores will be exclusively allocated to workload pods. For details, see <a href="cce_10_0351.html">CPU Policy</a>.</li><li id="cce_10_0028__li086416351551">Resource Tag:<p id="cce_10_0028__p1352104214110"><a name="cce_10_0028__li086416351551"></a><a name="li086416351551"></a>You can add resource tags to classify resources.</p>
</li><li id="cce_10_0028__li1045281605310"><strong id="cce_10_0028__b6538204304212">Certificate Authentication</strong>:<ul id="cce_10_0028__ul8453616205317"><li id="cce_10_0028__li104539168533"><strong id="cce_10_0028__b220412510445">Default</strong>: The X509-based authentication mode is enabled by default. X509 is a commonly used certificate format.</li><li id="cce_10_0028__li8453141615535"><strong id="cce_10_0028__b16300163832117">Custom:</strong> The cluster can identify users based on the header in the request body for authentication. <p id="cce_10_0028__p184531416105312">Upload your <strong id="cce_10_0028__b845836182415">CA root certificate</strong>, <strong id="cce_10_0028__b164583622416">client certificate</strong>, and <strong id="cce_10_0028__b194581169243">private key</strong> of the client certificate.</p>
<div class="caution" id="cce_10_0028__note13453101613535"><span class="cautiontitle"><img src="public_sys-resources/caution_3.0-en-us.png"> </span><div class="cautionbody"><ul id="cce_10_0028__ul34531816155312"><li id="cce_10_0028__li114531516195313">Upload a file <strong id="cce_10_0028__b199411240122414">smaller than 1 MiB</strong>. The CA certificate and client certificate can be in <strong id="cce_10_0028__b1195014407247">.crt</strong> or <strong id="cce_10_0028__b15950140182418">.cer</strong> format. The private key of the client certificate can only be uploaded <strong id="cce_10_0028__b199501540102417">unencrypted</strong>.</li><li id="cce_10_0028__li18453516185319">The validity period of the client certificate must be longer than five years.</li><li id="cce_10_0028__li104531916125318">The uploaded CA certificate is used for both the authentication proxy and the kube-apiserver aggregation layer configuration. <strong id="cce_10_0028__b19737142016505">If the certificate is invalid, the cluster cannot be created</strong>.</li><li id="cce_10_0028__li6694716185918">Starting from v1.25, Kubernetes no longer supports certificate authentication generated using the SHA1WithRSA or ECDSAWithSHA1 algorithm. You are advised to use the SHA256 algorithm.</li></ul>
</div></div>
</li></ul>
</li><li id="cce_10_0028__li8833185203815"><strong id="cce_10_0028__b38242164517">Description</strong>: The description cannot exceed 200 characters.</li></ul>
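If you plan to use custom certificate authentication, you can verify the client certificate offline before uploading it. This hedged sketch generates a throwaway certificate (the file names and subject CN are assumptions for demonstration, not values CCE requires) and inspects the two constraints called out above: the signature algorithm must not be SHA1-based, and the validity period must exceed five years:

```shell
# Demonstration only: create a throwaway SHA-256 client certificate valid
# for about 10 years (3650 days), then inspect the fields that matter.
# File names (client.key/client.crt) and the CN are illustrative.
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 3650 \
  -subj "/CN=demo-client" -keyout client.key -out client.crt

# The algorithm must NOT be sha1WithRSAEncryption or ecdsa-with-SHA1:
openssl x509 -in client.crt -noout -text | grep 'Signature Algorithm' | head -n 1

# The expiry date must be more than five years away:
openssl x509 -in client.crt -noout -enddate
```

Running the same two `openssl x509` checks against your real client certificate before uploading it avoids a failed cluster creation caused by an invalid certificate.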
</p></li><li id="cce_10_0028__li9641724418"><span>Click <strong id="cce_10_0028__b347615133319">Next: Add-on Configuration</strong>.</span><p><p id="cce_10_0028__en-us_topic_0000001243981077_p1586101655210"><strong id="cce_10_0028__b166370310332">Domain Name Resolution</strong>:</p>
<ul id="cce_10_0028__en-us_topic_0000001243981077_ul142841929175215"><li id="cce_10_0028__en-us_topic_0000001243981077_li102842291524"><strong id="cce_10_0028__b18354405339">Domain Name Resolution</strong>: The <a href="cce_10_0129.html">coredns</a> add-on is installed by default to resolve domain names and connect to the cloud DNS server.</li></ul>
<p id="cce_10_0028__en-us_topic_0000001243981077_p292215338261"><strong id="cce_10_0028__b182812298345">Container Storage</strong>: The <a href="cce_10_0066.html">everest</a> add-on is installed by default to provide container storage based on CSI and connect to cloud storage services such as EVS.</p>
<p id="cce_10_0028__en-us_topic_0000001243981077_p1588245112019"><strong id="cce_10_0028__b462244505112">Fault Detection</strong>: The <a href="cce_10_0132.html">npd</a> add-on is installed by default to provide node fault detection and isolation for the cluster, helping you identify node problems in a timely manner.</p>
<div class="p" id="cce_10_0028__en-us_topic_0000001243981077_p1042341817336"><strong id="cce_10_0028__b1378210551678">Data Plane Logs</strong><ul id="cce_10_0028__en-us_topic_0000001243981077_ul1532032363417"><li id="cce_10_0028__en-us_topic_0000001243981077_li078322903611">Using ICAgent:<p id="cce_10_0028__en-us_topic_0000001243981077_p5238153093619"><a name="cce_10_0028__en-us_topic_0000001243981077_li078322903611"></a><a name="en-us_topic_0000001243981077_li078322903611"></a>A log collector provided by Application Operations Management (AOM), reporting logs to AOM and Log Tank Service (LTS) according to the log collection rules you configured.</p>
<p id="cce_10_0028__en-us_topic_0000001243981077_p161195033716">You can collect stdout logs as required.</p>
</li></ul>
</div>
<p id="cce_10_0028__en-us_topic_0000001243981077_p357714145121"><strong id="cce_10_0028__b12368124516358">Overload Control</strong>: If enabled, concurrent requests are dynamically controlled based on the resource pressure of master nodes to keep them and the cluster available. For details, see <a href="cce_10_0602.html">Cluster Overload Control</a>.</p>
</p></li><li id="cce_10_0028__li72711456163617"><span>After the parameters are specified, click <span class="uicontrol" id="cce_10_0028__uicontrol16152220165119"><b>Next: Confirm</b></span>. The cluster resource list is displayed. Confirm the information and click <span class="uicontrol" id="cce_10_0028__uicontrol1915242018519"><b>Submit</b></span>.</span><p><p id="cce_10_0028__p1020211168316">It takes about 6 to 10 minutes to create a cluster. You can click <strong id="cce_10_0028__b1712383711547">Back to Cluster List</strong> to perform other operations on the cluster or click <strong id="cce_10_0028__b3123193725416">Go to Cluster Events</strong> to view the cluster details.</p>
</p></li></ol>
</div>
<div class="section" id="cce_10_0028__section125261255139"><h4 class="sectiontitle">Related Operations</h4><ul id="cce_10_0028__ul912451119262"><li id="cce_10_0028__li1030825181117">After creating a cluster, you can use the Kubernetes command line (CLI) tool kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</li><li id="cce_10_0028__li312413114263">Add nodes to the cluster. For details, see <a href="cce_10_0363.html">Creating a Node</a>.</li></ul>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0298.html">Creating a Cluster</a></div>
</div>
</div>

<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0213.html">Cluster Configuration Management</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0602.html">Cluster Overload Control</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0403.html">Changing Cluster Scale</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0212.html">Deleting a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0214.html">Hibernating and Waking Up a Cluster</a></strong><br>
</li>
</ul>
<div class="familylinks">

<a name="cce_10_00356"></a><a name="cce_10_00356"></a>
<h1 class="topictitle1">Accessing a Container</h1>
<div id="body0000001151211236"><div class="section" id="cce_10_00356__section7379040716"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_00356__p1134114511811">If you encounter unexpected problems when using a container, you can log in to the container to debug it.</p>
</div>
<div class="section" id="cce_10_00356__section1293318163114"><h4 class="sectiontitle">Logging In to a Container Using kubectl</h4><ol id="cce_10_00356__ol1392823394416"><li id="cce_10_00356__li1681024195710"><span>Use kubectl to connect to the cluster. For details, see <a href="cce_10_0107.html">Connecting to a Cluster Using kubectl</a>.</span></li><li id="cce_10_00356__li1020013819415"><span>Run the following command to view the created pod:</span><p><pre class="screen" id="cce_10_00356__screen156898195914">kubectl get pod</pre>
<div class="p" id="cce_10_00356__p18257204595920">The example output is as follows:<pre class="screen" id="cce_10_00356__screen7944553592">NAME READY STATUS RESTARTS AGE
nginx-59d89cb66f-mhljr 1/1 Running 0 11m</pre>
</div>
</p></li><li id="cce_10_00356__li356233617436"><span>Query the container name in the pod.</span><p><pre class="screen" id="cce_10_00356__screen5352174217439">kubectl get po <i><span class="varname" id="cce_10_00356__varname373018473433">nginx-59d89cb66f-mhljr</span></i> -o jsonpath='{range .spec.containers[*]}{.name}{end}{"\n"}'</pre>
<div class="p" id="cce_10_00356__p3651112824414">The example output is as follows:<pre class="screen" id="cce_10_00356__screen1965142811442">container-1</pre>
</div>
</p></li><li id="cce_10_00356__li15567184714456"><span>Run the following command to log in to the <strong id="cce_10_00356__b3731665115">container-1</strong> container in the <strong id="cce_10_00356__b36437831215">nginx-59d89cb66f-mhljr</strong> pod:</span><p><pre class="screen" id="cce_10_00356__screen208681724173519">kubectl exec -it <i><span class="varname" id="cce_10_00356__varname42937231455">nginx-59d89cb66f-mhljr</span></i> -c <i><span class="varname" id="cce_10_00356__varname115981226164513">container-1</span></i> -- /bin/sh</pre>
</p></li><li id="cce_10_00356__li1582141517375"><span>To exit the container, run the <strong id="cce_10_00356__b14552222123017">exit</strong> command.</span></li></ol>
</div>
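When scripting this login flow, the pod name from step 2 can be captured instead of copied by hand. `kubectl` itself needs a live cluster, so this sketch only demonstrates the text processing, replaying the sample `kubectl get pod` output shown above through a here-document:

```shell
# Extract the first Running pod's name from `kubectl get pod`-style output.
# The here-doc replays the sample output above; against a real cluster you
# would pipe `kubectl get pod` into awk instead.
pod=$(awk 'NR > 1 && $3 == "Running" { print $1; exit }' <<'EOF'
NAME                     READY   STATUS    RESTARTS   AGE
nginx-59d89cb66f-mhljr   1/1     Running   0          11m
EOF
)
echo "$pod"   # nginx-59d89cb66f-mhljr
```

The captured name can then feed the login command from step 4, for example `kubectl exec -it "$pod" -c container-1 -- /bin/sh`.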
</div>
<div>

<h1 class="topictitle1">Stopping a Node</h1>
<div id="body1564130562761"><div class="section" id="cce_10_0036__section127213017388"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0036__p866311509249">After a node in the cluster is stopped, services on the node are also stopped. Before stopping a node, ensure that the interruption of services running on the node will not cause adverse impacts.</p>
</div>
<div class="section" id="cce_10_0036__section1489437103610"><h4 class="sectiontitle">Constraints</h4><ul id="cce_10_0036__ul0917755162415"><li id="cce_10_0036__li1891719552246">Deleting a node will lead to pod migration, which may affect services. Therefore, delete nodes during off-peak hours.</li><li id="cce_10_0036__li791875552416">Unexpected risks may occur during node deletion. Back up related data in advance.</li><li id="cce_10_0036__li15918105582417">While the node is being deleted, the backend will set the node to the unschedulable state.</li><li id="cce_10_0036__li12918145520241">Only worker nodes can be stopped.</li></ul>
</div>
<div class="section" id="cce_10_0036__section14341135612442"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0036__ol5687174923613"><li id="cce_10_0036__li133915311359"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0036__li6687049203616"><span>In the navigation pane, choose <strong id="cce_10_0036__b0764291362">Nodes</strong>. In the right pane, click the name of the node to be stopped.</span></li><li id="cce_10_0036__li117301253183717"><span>In the upper right corner of the ECS details page, click <strong id="cce_10_0036__b109484372618">Stop</strong>. In the displayed dialog box, click <strong id="cce_10_0036__b29489372612">Yes</strong>.</span><p><div class="fignone" id="cce_10_0036__fig19269101385311"><span class="figcap"><b>Figure 1 </b>ECS details page</span><br><span><img id="cce_10_0036__image6847636155" src="en-us_image_0000001647417648.png"></span></div>
</p></li></ol>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0672.html">Management Nodes</a></div>
</div>
</div>

<a name="cce_10_0044"></a><a name="cce_10_0044"></a>
<h1 class="topictitle1">Elastic Volume Service (EVS)</h1>
<div id="body0000001487281736"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0613.html">Overview</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0614.html">Using an Existing EVS Disk Through a Static PV</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0615.html">Using an EVS Disk Through a Dynamic PV</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0616.html">Dynamically Mounting an EVS Disk to a StatefulSet</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0381.html">Snapshots and Backups</a></strong><br>
</li>
</ul>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0374.html">Storage</a></div>
</div>
</div>


@ -6,31 +6,15 @@
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0006.html">Overview</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0047.html">Creating a Deployment</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0048.html">Creating a StatefulSet</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0216.html">Creating a DaemonSet</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0150.html">Creating a Job</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0151.html">Creating a Cron Job</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0007.html">Managing Workloads and Jobs</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0673.html">Creating a Workload</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0130.html">Configuring a Container</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0345.html">GPU Scheduling</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0551.html">CPU Core Binding</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_00356.html">Accessing a Container</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0386.html">Pod Labels and Annotations</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0007.html">Managing Workloads and Jobs</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0423.html">Volcano Scheduling</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0288.html">Security Group Policies</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0463.html">Kata Runtime and Common Runtime</a></strong><br>
</li>
</ul>
</div>

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -4,18 +4,18 @@
<div id="body8662426"><div class="section" id="cce_10_0063__section127666327248"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0063__p192873216229">After a node scaling policy is created, you can delete, edit, disable, enable, or clone the policy.</p>
</div>
<div class="section" id="cce_10_0063__section102878407207"><h4 class="sectiontitle">Viewing a Node Scaling Policy</h4><p id="cce_10_0063__p713741135215">You can view the associated node pool, rules, and scaling history of a node scaling policy and rectify faults according to the error information displayed.</p>
<ol id="cce_10_0063__ol17409123885219"><li id="cce_10_0063__li148293318248"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0063__li3967519744"><span>Choose <strong id="cce_10_0063__b75474128512">Node Scaling</strong> in the navigation pane and click <span><img id="cce_10_0063__image1254712122518" src="en-us_image_0000001517743464.png"></span> in front of the policy to be viewed.</span></li><li id="cce_10_0063__li641003813527"><span>In the expanded area, the <span class="uicontrol" id="cce_10_0063__uicontrol864413924614"><b>Associated Node Pools</b></span>, <span class="uicontrol" id="cce_10_0063__uicontrol1164419910465"><b>Rules</b></span>, and <span class="uicontrol" id="cce_10_0063__uicontrol1964516974613"><b>Scaling History</b></span> tab pages are displayed. If the policy is abnormal, locate and rectify the fault based on the error information.</span><p><div class="note" id="cce_10_0063__note13404926203311"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0063__p268214718213">You can also disable or enable auto scaling on the <strong id="cce_10_0063__b57750163232">Node Pools</strong> page.</p>
<ol type="a" id="cce_10_0063__ol15169162582120"><li id="cce_10_0063__li13169425162117">Log in to the CCE console and access the cluster console.</li><li id="cce_10_0063__li716942518219">In the navigation pane, choose <strong id="cce_10_0063__b189612560310">Nodes</strong> and switch to the <strong id="cce_10_0063__b19818721244">Node Pools</strong> tab page.</li><li id="cce_10_0063__li2016919259214">Click <span class="uicontrol" id="cce_10_0063__uicontrol1689716319372"><b>Edit</b></span> of the node pool to be operated. In the <span class="uicontrol" id="cce_10_0063__uicontrol3989194019311"><b>Edit Node Pool</b></span> dialog box that is displayed, set the limits of the number of nodes.</li></ol>
<ol id="cce_10_0063__ol17409123885219"><li id="cce_10_0063__li148293318248"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li3967519744"><span>Choose <strong id="cce_10_0063__b75474128512">Node Scaling</strong> in the navigation pane and click <span><img id="cce_10_0063__image1254712122518" src="en-us_image_0000001695896485.png"></span> in front of the policy to be viewed.</span></li><li id="cce_10_0063__li641003813527"><span>In the expanded area, the <span class="uicontrol" id="cce_10_0063__uicontrol864413924614"><b>Associated Node Pools</b></span>, <span class="uicontrol" id="cce_10_0063__uicontrol1164419910465"><b>Rules</b></span>, and <span class="uicontrol" id="cce_10_0063__uicontrol1964516974613"><b>Scaling History</b></span> tab pages are displayed. If the policy is abnormal, locate and rectify the fault based on the error information.</span><p><div class="note" id="cce_10_0063__note13404926203311"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0063__p268214718213">You can also disable or enable auto scaling on the <strong id="cce_10_0063__b57750163232">Node Pools</strong> page.</p>
<ol type="a" id="cce_10_0063__ol15169162582120"><li id="cce_10_0063__li13169425162117">Log in to the CCE console and click the cluster name to access the cluster console.</li><li id="cce_10_0063__li716942518219">In the navigation pane, choose <strong id="cce_10_0063__b189612560310">Nodes</strong> and switch to the <strong id="cce_10_0063__b19818721244">Node Pools</strong> tab.</li><li id="cce_10_0063__li498811231504">Locate the row containing the target node pool and click <span class="uicontrol" id="cce_10_0063__uicontrol95019393011"><b>Update Node Pool</b></span>. In the window that slides out from the right, enable <strong id="cce_10_0063__b66112515390">Auto Scaling</strong>, and configure <strong id="cce_10_0063__b754915540391">Max. Nodes</strong>, <strong id="cce_10_0063__b1444816563399">Min. Nodes</strong>, and <strong id="cce_10_0063__b1724785813397">Cooldown Period</strong>.</li></ol>
</div></div>
</p></li></ol>
</div>
<div class="section" id="cce_10_0063__section128584032017"><h4 class="sectiontitle">Deleting a Node Scaling Policy</h4><ol id="cce_10_0063__ol14644105712488"><li id="cce_10_0063__li41181041153517"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0063__li21181041113517"><span>Choose <strong id="cce_10_0063__b12846115045411">Node Scaling</strong> in the navigation pane and choose <strong id="cce_10_0063__b1011425314546">More</strong> &gt; <strong id="cce_10_0063__b264025517541">Delete</strong> next to the policy to be deleted.</span></li><li id="cce_10_0063__li19809141991015"><span>In the <span class="wintitle" id="cce_10_0063__wintitle195460432178"><b>Delete Node Scaling Policy</b></span> dialog box displayed, confirm whether to delete the policy.</span></li><li id="cce_10_0063__li1340513385528"><span>Click <span class="uicontrol" id="cce_10_0063__uicontrol12723105481711"><b>Yes</b></span> to delete the policy.</span></li></ol>
<div class="section" id="cce_10_0063__section128584032017"><h4 class="sectiontitle">Deleting a Node Scaling Policy</h4><ol id="cce_10_0063__ol14644105712488"><li id="cce_10_0063__li41181041153517"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li21181041113517"><span>Choose <strong id="cce_10_0063__b12846115045411">Node Scaling</strong> in the navigation pane and choose <strong id="cce_10_0063__b1011425314546">More</strong> &gt; <strong id="cce_10_0063__b264025517541">Delete</strong> next to the policy to be deleted.</span></li><li id="cce_10_0063__li19809141991015"><span>In the <span class="wintitle" id="cce_10_0063__wintitle195460432178"><b>Delete Node Scaling Policy</b></span> dialog box displayed, confirm whether to delete the policy.</span></li><li id="cce_10_0063__li1340513385528"><span>Click <span class="uicontrol" id="cce_10_0063__uicontrol12723105481711"><b>Yes</b></span> to delete the policy.</span></li></ol>
</div>
<div class="section" id="cce_10_0063__section5652756162214"><h4 class="sectiontitle">Editing a Node Scaling Policy</h4><ol id="cce_10_0063__ol067875612225"><li id="cce_10_0063__li1148617913919"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0063__li19486498394"><span>Choose <strong id="cce_10_0063__b154762701819">Node Scaling</strong> in the navigation pane and click <span class="uicontrol" id="cce_10_0063__uicontrol14476675189"><b>Edit</b></span> in the <strong id="cce_10_0063__b647707161811">Operation</strong> column of the policy to be edited.</span></li><li id="cce_10_0063__li56781856152211"><span>On the <span class="uicontrol" id="cce_10_0063__uicontrol7933134119486"><b>Edit Node Scaling Policy</b></span> page displayed, modify policy parameter values listed in <a href="cce_10_0209.html#cce_10_0209__table18763092201">Table 1</a>.</span></li><li id="cce_10_0063__li86781756112220"><span>After the configuration is complete, click <span class="uicontrol" id="cce_10_0063__uicontrol07463587480"><b>OK</b></span>.</span></li></ol>
<div class="section" id="cce_10_0063__section5652756162214"><h4 class="sectiontitle">Editing a Node Scaling Policy</h4><ol id="cce_10_0063__ol067875612225"><li id="cce_10_0063__li1148617913919"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li19486498394"><span>Choose <strong id="cce_10_0063__b154762701819">Node Scaling</strong> in the navigation pane and click <span class="uicontrol" id="cce_10_0063__uicontrol14476675189"><b>Edit</b></span> in the <strong id="cce_10_0063__b647707161811">Operation</strong> column of the policy to be edited.</span></li><li id="cce_10_0063__li56781856152211"><span>On the <span class="uicontrol" id="cce_10_0063__uicontrol7933134119486"><b>Edit Node Scaling Policy</b></span> page displayed, modify policy parameter values listed in <a href="cce_10_0209.html#cce_10_0209__table18763092201">Table 1</a>.</span></li><li id="cce_10_0063__li86781756112220"><span>After the configuration is complete, click <span class="uicontrol" id="cce_10_0063__uicontrol07463587480"><b>OK</b></span>.</span></li></ol>
</div>
<div class="section" id="cce_10_0063__section367810565223"><h4 class="sectiontitle">Cloning a Node Scaling Policy</h4><ol id="cce_10_0063__ol1283103252519"><li id="cce_10_0063__li20680159143911"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0063__li1068085914390"><span>Choose <strong id="cce_10_0063__b889784805914">Node Scaling</strong> in the navigation pane and choose <strong id="cce_10_0063__b889764816594">More</strong> &gt; <strong id="cce_10_0063__b1589719489596">Clone</strong> next to the policy to be cloned.</span></li><li id="cce_10_0063__li128363212514"><span>On the <span class="uicontrol" id="cce_10_0063__uicontrol162071440144911"><b>Clone Node Scaling Policy</b></span> page displayed, certain parameters have been cloned. Add or modify other policy parameters based on service requirements.</span></li><li id="cce_10_0063__li383732172512"><span>Click <strong id="cce_10_0063__b76092016183">OK</strong>.</span></li></ol>
<div class="section" id="cce_10_0063__section367810565223"><h4 class="sectiontitle">Cloning a Node Scaling Policy</h4><ol id="cce_10_0063__ol1283103252519"><li id="cce_10_0063__li20680159143911"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li1068085914390"><span>Choose <strong id="cce_10_0063__b889784805914">Node Scaling</strong> in the navigation pane and choose <strong id="cce_10_0063__b889764816594">More</strong> &gt; <strong id="cce_10_0063__b1589719489596">Clone</strong> next to the policy to be cloned.</span></li><li id="cce_10_0063__li128363212514"><span>On the <span class="uicontrol" id="cce_10_0063__uicontrol162071440144911"><b>Clone Node Scaling Policy</b></span> page displayed, certain parameters have been cloned. Add or modify other policy parameters based on service requirements.</span></li><li id="cce_10_0063__li383732172512"><span>Click <strong id="cce_10_0063__b76092016183">OK</strong>.</span></li></ol>
</div>
<div class="section" id="cce_10_0063__section4771832152513"><h4 class="sectiontitle">Enabling or Disabling a Node Scaling Policy</h4><ol id="cce_10_0063__ol0843321258"><li id="cce_10_0063__li1221435414019"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0063__li4214105494011"><span>Choose <strong id="cce_10_0063__b13778122416181">Node Scaling</strong> in the navigation pane and click <strong id="cce_10_0063__b0778424161820">Disable</strong> in the <strong id="cce_10_0063__b1977852441812">Operation</strong> column of the policy to be disabled. If the policy is in the disabled state, click <span class="uicontrol" id="cce_10_0063__uicontrol177902431813"><b>Enable</b></span> in the <strong id="cce_10_0063__b47795246181">Operation</strong> column of the policy.</span></li><li id="cce_10_0063__li78473252510"><span>In the dialog box displayed, confirm whether to disable or enable the node policy.</span></li></ol>
<div class="section" id="cce_10_0063__section4771832152513"><h4 class="sectiontitle">Enabling or Disabling a Node Scaling Policy</h4><ol id="cce_10_0063__ol0843321258"><li id="cce_10_0063__li1221435414019"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0063__li4214105494011"><span>Choose <strong id="cce_10_0063__b13778122416181">Node Scaling</strong> in the navigation pane and click <strong id="cce_10_0063__b0778424161820">Disable</strong> in the <strong id="cce_10_0063__b1977852441812">Operation</strong> column of the policy to be disabled. If the policy is in the disabled state, click <span class="uicontrol" id="cce_10_0063__uicontrol177902431813"><b>Enable</b></span> in the <strong id="cce_10_0063__b47795246181">Operation</strong> column of the policy.</span></li><li id="cce_10_0063__li78473252510"><span>In the dialog box displayed, confirm whether to disable or enable the node policy.</span></li></ol>
</div>
</div>
<div>


@ -6,11 +6,9 @@
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0277.html">Overview</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0129.html">coredns (System Resource Add-On, Mandatory)</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0129.html">coredns (System Resource Add-on, Mandatory)</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0127.html">storage-driver (System Resource Add-On, Discarded)</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0066.html">everest (System Resource Add-On, Mandatory)</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_10_0066.html">everest (System Resource Add-on, Mandatory)</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0132.html">npd</a></strong><br>
</li>
@ -22,6 +20,8 @@
</li>
<li class="ulchildlink"><strong><a href="cce_10_0193.html">volcano</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0127.html">storage-driver(Flexvolume, Deprecated)</a></strong><br>
</li>
</ul>
</div>

File diff suppressed because it is too large


@ -1,18 +1,18 @@
<a name="cce_10_0068"></a><a name="cce_10_0068"></a>
<h1 class="topictitle1">Release Notes</h1>
<h1 class="topictitle1">Kubernetes Release Notes</h1>
<div id="body8662426"></div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0467.html">CCE Kubernetes 1.25 Release Notes</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_bulletin_0058.html">Kubernetes 1.25 Release Notes</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0468.html">CCE Kubernetes 1.23 Release Notes</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_bulletin_0027.html">Kubernetes 1.23 Release Notes</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0469.html">CCE Kubernetes 1.21 Release Notes</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_bulletin_0026.html">Kubernetes 1.21 Release Notes</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0470.html">CCE Kubernetes 1.19 Release Notes</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_whsnew_0010.html">Kubernetes 1.19 Release Notes</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0471.html">CCE Kubernetes 1.17 Release Notes</a></strong><br>
<li class="ulchildlink"><strong><a href="cce_whsnew_0007.html">Kubernetes 1.17 (EOM) Release Notes</a></strong><br>
</li>
</ul>


@ -8,13 +8,13 @@
<p id="cce_10_0081__p19802113820588">This section describes how node pools work in CCE and how to create and manage node pools.</p>
</div>
<div class="section" id="cce_10_0081__section1486732122217"><h4 class="sectiontitle">Node Pool Architecture</h4><p id="cce_10_0081__p2480134412214">Generally, all nodes in a node pool have the following same attributes:</p>
<ul id="cce_10_0081__ul134808449226"><li id="cce_10_0081__li1848004422220">Node OS</li><li id="cce_10_0081__li890631614331">Node specifications</li><li id="cce_10_0081__li730814322334">Node login mode.</li><li id="cce_10_0081__li3978937183319">Node runtime.</li><li id="cce_10_0081__li20480184419225">Startup parameters of Kubernetes components on a node</li><li id="cce_10_0081__li17480104411227">User-defined startup script of a node</li><li id="cce_10_0081__li84806446229"><strong id="cce_10_0081__b156321822191410">K8s Labels</strong> and <strong id="cce_10_0081__b037992416144">Taints</strong></li></ul>
<ul id="cce_10_0081__ul134808449226"><li id="cce_10_0081__li1848004422220">Node OS</li><li id="cce_10_0081__li890631614331">Node specifications</li><li id="cce_10_0081__li730814322334">Node login mode</li><li id="cce_10_0081__li3978937183319">Node container runtime</li><li id="cce_10_0081__li20480184419225">Startup parameters of Kubernetes components on a node</li><li id="cce_10_0081__li17480104411227">User-defined startup script of a node</li><li id="cce_10_0081__li84806446229"><strong id="cce_10_0081__b452816349419">Kubernetes Labels</strong> and <strong id="cce_10_0081__b65284345410">Taints</strong></li></ul>
<p id="cce_10_0081__p1048019444223">CCE provides the following extended attributes for node pools:</p>
<ul id="cce_10_0081__ul84801544162219"><li id="cce_10_0081__li1480184410229">Node pool OS</li><li id="cce_10_0081__li114801944112213">Maximum number of pods on each node in a node pool</li></ul>
</div>
<div class="section" id="cce_10_0081__section16928123042115"><a name="cce_10_0081__section16928123042115"></a><a name="section16928123042115"></a><h4 class="sectiontitle">Description of DefaultPool</h4><p id="cce_10_0081__p5444184415215">DefaultPool is not a real node pool. It only <strong id="cce_10_0081__b186624665612">classifies</strong> nodes that are not in the user-created node pools. These nodes are directly created on the console or by calling APIs. DefaultPool does not support any user-created node pool functions, including scaling and parameter configuration. DefaultPool cannot be edited, deleted, expanded, or auto scaled, and nodes in it cannot be migrated.</p>
<div class="section" id="cce_10_0081__section16928123042115"><a name="cce_10_0081__section16928123042115"></a><a name="section16928123042115"></a><h4 class="sectiontitle">Description of <span class="keyword" id="cce_10_0081__keyword729863519811">DefaultPool</span></h4><p id="cce_10_0081__p5444184415215"><span class="keyword" id="cce_10_0081__keyword799943811813">DefaultPool</span> is not a real node pool. It only <strong id="cce_10_0081__b1896884414412">classifies</strong> nodes that are not in the user-created node pools. These nodes are directly created on the console or by calling APIs. DefaultPool does not support any user-created node pool functions, including scaling and parameter configuration. DefaultPool cannot be edited, deleted, expanded, or auto scaled, and nodes in it cannot be migrated.</p>
</div>
<div class="section" id="cce_10_0081__section32131316256"><h4 class="sectiontitle">Applicable Scenarios</h4><p id="cce_10_0081__p1945803011253">When a large-scale cluster is required, you are advised to use node pools to manage nodes.</p>
<div class="section" id="cce_10_0081__section32131316256"><h4 class="sectiontitle">Application Scenarios</h4><p id="cce_10_0081__p1945803011253">When a large-scale cluster is required, you are advised to use node pools to manage nodes.</p>
<p id="cce_10_0081__p1491578182512">The following table describes multiple scenarios of large-scale cluster management and the functions of node pools in each scenario.</p>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0081__table1736317479258" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Using node pools for different management scenarios</caption><thead align="left"><tr id="cce_10_0081__row336414719256"><th align="left" class="cellrowborder" valign="top" width="39.32%" id="mcps1.3.4.4.2.3.1.1"><p id="cce_10_0081__p5364134792518">Scenario</p>
@ -47,7 +47,7 @@
</th>
<th align="left" class="cellrowborder" valign="top" width="39.603960396039604%" id="mcps1.3.5.2.1.4.1.2"><p id="cce_10_0081__p14843191752714">Description</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="38.453845384538454%" id="mcps1.3.5.2.1.4.1.3"><p id="cce_10_0081__p48430171279">Notes</p>
<th align="left" class="cellrowborder" valign="top" width="38.453845384538454%" id="mcps1.3.5.2.1.4.1.3"><p id="cce_10_0081__p48430171279">Precaution</p>
</th>
</tr>
</thead>
@ -60,7 +60,7 @@
</tr>
<tr id="cce_10_0081__row1084351717279"><td class="cellrowborder" valign="top" width="21.942194219421943%" headers="mcps1.3.5.2.1.4.1.1 "><p id="cce_10_0081__p105796289273">Deleting a node pool</p>
</td>
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p1916410397318">Deleting a node pool will delete nodes in the pool. Pods on these nodes will be automatically migrated to available nodes in other node pools.</p>
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p1916410397318">When a node pool is deleted, the nodes in the node pool are deleted first. Workloads on the original nodes are automatically migrated to available nodes in other node pools.</p>
</td>
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><p id="cce_10_0081__p75791828182711">If pods in the node pool have a specific node selector and none of the other nodes in the cluster satisfies the node selector, the pods will become unschedulable.</p>
</td>
@ -76,7 +76,7 @@
</td>
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p75791228192718">After auto scaling is disabled, the number of nodes in a node pool will not automatically change with the cluster loads.</p>
</td>
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><p id="cce_10_0081__p17590142818271">/</p>
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><p id="cce_10_0081__p17590142818271">None</p>
</td>
</tr>
<tr id="cce_10_0081__row98435171275"><td class="cellrowborder" valign="top" width="21.942194219421943%" headers="mcps1.3.5.2.1.4.1.1 "><p id="cce_10_0081__p18579928102713">Adjusting the size of a node pool</p>
@ -88,7 +88,7 @@
</tr>
<tr id="cce_10_0081__row18431117142713"><td class="cellrowborder" valign="top" width="21.942194219421943%" headers="mcps1.3.5.2.1.4.1.1 "><p id="cce_10_0081__p1657922832717">Changing node pool configurations</p>
</td>
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p3579182816279">You can modify the node pool name, node quantity, Kubernetes labels (and their quantity), and taints.</p>
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p3579182816279">You can modify the node pool name, node quantity, Kubernetes labels (and their quantity), and taints and adjust the disk, OS, and container engine configurations of the node pool.</p>
</td>
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><p id="cce_10_0081__p857917281274">The deleted or added Kubernetes labels and taints (as well as their quantity) will apply to all nodes in the node pool, which may cause pod re-scheduling. Therefore, exercise caution when performing this operation.</p>
</td>
@ -104,7 +104,7 @@
</td>
<td class="cellrowborder" valign="top" width="39.603960396039604%" headers="mcps1.3.5.2.1.4.1.2 "><p id="cce_10_0081__p1025414163462">You can copy the configuration of an existing node pool to create a new node pool.</p>
</td>
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><p id="cce_10_0081__p1425461620464">/</p>
<td class="cellrowborder" valign="top" width="38.453845384538454%" headers="mcps1.3.5.2.1.4.1.3 "><p id="cce_10_0081__p1425461620464">None</p>
</td>
</tr>
<tr id="cce_10_0081__row1238761814711"><td class="cellrowborder" valign="top" width="21.942194219421943%" headers="mcps1.3.5.2.1.4.1.1 "><p id="cce_10_0081__p20387141844717">Setting Kubernetes parameters</p>
@ -119,7 +119,7 @@
</div>
</div>
<div class="section" id="cce_10_0081__section12603142443319"><h4 class="sectiontitle"><span class="keyword" id="cce_10_0081__keyword134411635193118">Deploying a Workload in a Specified Node Pool</span></h4><p id="cce_10_0081__p153911712353">When creating a workload, you can constrain pods to run in a specified node pool.</p>
<p id="cce_10_0081__p554031713358">For example, on the CCE console, you can set the affinity between the workload and the node on the <strong id="cce_10_0081__b65991804713">Scheduling Policies</strong> tab page on the workload details page to forcibly deploy the workload to a specific node pool. In this way, the workload runs only on nodes in the node pool. If you need to better control where the workload is to be scheduled, you can use affinity or anti-affinity policies between workloads and nodes described in <a href="cce_10_0232.html">Scheduling Policy (Affinity/Anti-affinity)</a>.</p>
<p id="cce_10_0081__p554031713358">For example, on the CCE console, you can set the affinity between the workload and the node on the <strong id="cce_10_0081__b65991804713">Scheduling Policies</strong> tab page on the workload details page to forcibly deploy the workload to a specific node pool. In this way, the workload runs only on nodes in the node pool. To better control where the workload is to be scheduled, you can use affinity or anti-affinity policies between workloads and nodes described in <a href="cce_10_0232.html">Scheduling Policy (Affinity/Anti-affinity)</a>.</p>
<p id="cce_10_0081__p614655184910">For example, you can use container's resource request as a nodeSelector so that workloads will run only on the nodes that meet the resource request.</p>
<p id="cce_10_0081__p1854041717353">If the workload definition file defines a container that requires four CPUs, the scheduler will not choose the nodes with two CPUs to run workloads.</p>
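The scheduling behavior described above can be sketched as a Deployment manifest. This is a minimal illustration, not taken from this document: the node pool label key <code>cce.cloud.com/cce-nodepool</code>, the pool name <code>high-cpu-pool</code>, and the workload names are assumptions; verify the actual labels on your nodes before using them.

```yaml
# Sketch: pin a workload to one node pool via nodeSelector and request 4 CPUs.
# The label key "cce.cloud.com/cce-nodepool" and pool name "high-cpu-pool" are
# illustrative assumptions; check the labels on your own nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cpu-heavy-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cpu-heavy-app
  template:
    metadata:
      labels:
        app: cpu-heavy-app
    spec:
      nodeSelector:
        cce.cloud.com/cce-nodepool: high-cpu-pool   # schedule only on this pool's nodes
      containers:
      - name: app
        image: nginx:latest
        resources:
          requests:
            cpu: "4"   # nodes with only 2 allocatable CPUs will not be chosen
```

With this manifest, the scheduler filters out any node that does not carry the pool label or cannot satisfy the 4-CPU request, which is the behavior the paragraph above describes.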
</div>


@ -4,8 +4,8 @@
<div id="body1508729244098"><div class="section" id="cce_10_0083__section11873141710246"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0083__p799618243249">After an HPA policy is created, you can update, clone, edit, and delete the policy, as well as edit the YAML file.</p>
</div>
<div class="section" id="cce_10_0083__section14993443181414"><h4 class="sectiontitle">Checking an HPA Policy</h4><p id="cce_10_0083__p713741135215">You can view the rules, status, and events of an HPA policy and handle exceptions based on the error information displayed.</p>
<ol id="cce_10_0083__ol17409123885219"><li id="cce_10_0083__li754610559213"><span>Log in to the CCE console and access the cluster console.</span></li><li id="cce_10_0083__li4409153817525"><span>In the navigation pane, choose <strong id="cce_10_0083__b9595121512611">Workload Scaling</strong>. On the <span class="uicontrol" id="cce_10_0083__uicontrol124101738135219"><b>HPA Policies</b></span> tab page, click <span><img id="cce_10_0083__image1569143785619" src="en-us_image_0000001568902521.png"></span> next to the target HPA policy.</span></li><li id="cce_10_0083__li641003813527"><span>In the expanded area, you can view the <span class="uicontrol" id="cce_10_0083__uicontrol783043616"><b>Rules</b></span>, <span class="uicontrol" id="cce_10_0083__uicontrol79110193616"><b>Status</b></span>, and <span class="uicontrol" id="cce_10_0083__uicontrol897073610"><b>Events</b></span> tab pages. If the policy is abnormal, locate and rectify the fault based on the error information.</span><p><div class="note" id="cce_10_0083__note13404926203311"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0083__p1793618441931">You can also view the created HPA policy on the workload details page.</p>
<ol type="a" id="cce_10_0083__ol1691347738"><li id="cce_10_0083__li5468556932">Log in to the CCE console and access the cluster console.</li><li id="cce_10_0083__li87313521749">In the navigation pane, choose <strong id="cce_10_0083__b01748420311">Workloads</strong>. Click the workload name to view its details.</li><li id="cce_10_0083__li1769110474318">On the workload details page, switch to the <strong id="cce_10_0083__b3716156354">Auto Scaling</strong> tab page to view the HPA policies. You can also view the scaling policies you configured in <strong id="cce_10_0083__b81591132105417">Workload Scaling</strong>.</li></ol>
<ol id="cce_10_0083__ol17409123885219"><li id="cce_10_0083__li754610559213"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0083__li4409153817525"><span>In the navigation pane, choose <strong id="cce_10_0083__b9595121512611">Policies</strong>. On the <span class="uicontrol" id="cce_10_0083__uicontrol124101738135219"><b>HPA Policies</b></span> tab page, click <span><img id="cce_10_0083__image1569143785619" src="en-us_image_0000001695737185.png"></span> next to the target HPA policy.</span></li><li id="cce_10_0083__li641003813527"><span>In the expanded area, view the <strong id="cce_10_0083__b3109121911720">Rule</strong> and <strong id="cce_10_0083__b1952313227174">Status</strong> tabs. Click <span class="uicontrol" id="cce_10_0083__uicontrol4974153615019"><b>View Events</b></span> in the <strong id="cce_10_0083__b1620814781718">Operation</strong> column. If the policy malfunctions, locate and rectify the fault based on the error message displayed on the page.</span><p><div class="note" id="cce_10_0083__note13404926203311"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="cce_10_0083__p1793618441931">You can also view the created HPA policy on the workload details page.</p>
<ol type="a" id="cce_10_0083__ol1691347738"><li id="cce_10_0083__li5468556932">Log in to the CCE console and click the cluster name to access the cluster console.</li><li id="cce_10_0083__li87313521749">In the navigation pane, choose <strong id="cce_10_0083__b01748420311">Workloads</strong>. Click the workload name to view its details.</li><li id="cce_10_0083__li1769110474318">On the workload details page, click the <strong id="cce_10_0083__b3716156354">Auto Scaling</strong> tab to view the HPA policies. You can also view the scaling policies you configured in the <strong id="cce_10_0083__b81591132105417">Workload Scaling</strong> page.</li></ol>
</div></div>
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0083__table56931825193212" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Event types and names</caption><thead align="left"><tr id="cce_10_0083__row269117254324"><th align="left" class="cellrowborder" valign="top" width="17.531753175317533%" id="mcps1.3.2.3.3.2.2.2.4.1.1"><p id="cce_10_0083__p176911125153211">Event Type</p>
</p></li></ol>
</div>
<div class="section" id="cce_10_0083__section119901143111420"><h4 class="sectiontitle">Updating an HPA Policy</h4><p id="cce_10_0083__p18160164715245">An HPA policy is used as an example.</p>
<ol id="cce_10_0083__ol14644105712488"><li id="cce_10_0083__li584173114516"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0083__li2619151017014"><span>On the cluster console, choose <strong id="cce_10_0083__b147314501477">Workload Scaling</strong> in the navigation pane. Locate the row that contains the target policy and choose <strong id="cce_10_0083__b1859319500414">More</strong> &gt; <strong id="cce_10_0083__b043275516414">Edit</strong> in the <span class="uicontrol" id="cce_10_0083__uicontrol1275919157101"><b>Operation</b></span> column.</span></li><li id="cce_10_0083__li19809141991015"><span>On the <span class="uicontrol" id="cce_10_0083__uicontrol162451231134212"><b>Edit HPA Policy</b></span> page, configure the parameters as listed in <a href="cce_10_0208.html#cce_10_0208__table8638121213265">Table 1</a>.</span></li><li id="cce_10_0083__li1340513385528"><span>Click <strong id="cce_10_0083__b18186175524217">OK</strong>.</span></li></ol>
</div>
<div class="section" id="cce_10_0083__section14894314131710"><h4 class="sectiontitle">Editing the YAML File (HPA Policy)</h4><ol id="cce_10_0083__ol836024781710"><li id="cce_10_0083__li4747132218612"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0083__li197478221868"><span>In the navigation pane, choose <strong id="cce_10_0083__b158614271186">Policies</strong>. Choose <span class="uicontrol" id="cce_10_0083__uicontrol173601547121714"><b>Edit YAML</b></span> in the <span class="uicontrol" id="cce_10_0083__uicontrol15862927288"><b>Operation</b></span> column of the target HPA policy.</span></li><li id="cce_10_0083__li3360104719175"><span>In the <span class="uicontrol" id="cce_10_0083__uicontrol19625193112511"><b>Edit YAML</b></span> dialog box displayed, edit or download the YAML file.</span></li></ol>
</div>
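<p>For reference, the object shown in the <strong>Edit YAML</strong> dialog box is a standard Kubernetes HorizontalPodAutoscaler. The following is a minimal sketch only; the policy name, target workload, and CPU threshold are illustrative values to be replaced with your own:</p>
<pre class="screen">apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-example            # Illustrative policy name
  namespace: default
spec:
  scaleTargetRef:              # Workload associated with the policy
    apiVersion: apps/v1
    kind: Deployment
    name: nginx                # Illustrative workload name
  minReplicas: 1               # Minimum number of pods
  maxReplicas: 10              # Maximum number of pods
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70 # Scale out when average CPU usage exceeds 70%</pre>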
<div class="section" id="cce_10_0083__section13361947161717"><h4 class="sectiontitle">Deleting an HPA Policy</h4><ol id="cce_10_0083__ol86853731815"><li id="cce_10_0083__li15254352473"><span>Log in to the CCE console and click the cluster name to access the cluster console.</span></li><li id="cce_10_0083__li1725419521973"><span>In the navigation pane, choose <strong id="cce_10_0083__b26191366917">Policies</strong>. Choose <span class="uicontrol" id="cce_10_0083__uicontrol156193361795"><b>Delete</b></span> &gt; <strong id="cce_10_0083__b154116114551">Delete</strong> in the <span class="uicontrol" id="cce_10_0083__uicontrol1462014361391"><b>Operation</b></span> column of the target policy.</span></li><li id="cce_10_0083__li96803718182"><span>In the dialog box displayed, click <strong id="cce_10_0083__b135291150994">Yes</strong>.</span></li></ol>
</div>
</div>
<div>

<a name="cce_10_0084"></a><a name="cce_10_0084"></a>
<h1 class="topictitle1">Enabling ICMP Security Group Rules</h1>
<div id="body1530866171131"><div class="section" id="cce_10_0084__section106079439418"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0084__p34679509418">If a workload uses UDP for both load balancing and health check, enable ICMP security group rules for the backend servers.</p>
</div>
<div class="section" id="cce_10_0084__section865612352391"><h4 class="sectiontitle">Procedure</h4><ol id="cce_10_0084__ol1999461164212"><li id="cce_10_0084__li1947817217515"><span>Log in to the ECS console, find the ECS corresponding to any node where the workload runs, and click the ECS name. On the displayed ECS details page, record the security group name.</span></li><li id="cce_10_0084__li2114123554110"><span>Log in to the VPC console. In the navigation pane on the left, choose <span class="uicontrol" id="cce_10_0084__uicontrol1587081612913"><b>Access Control &gt; Security Groups</b></span>. In the security group list on the right, click the security group name obtained in step 1.</span></li><li id="cce_10_0084__li201591224113516"><span>On the page displayed, click the <span class="uicontrol" id="cce_10_0084__uicontrol9411262192"><b>Inbound Rules</b></span> tab and click <span class="uicontrol" id="cce_10_0084__uicontrol0982204218191"><b>Add Rule</b></span> to add an inbound rule for the ECS. Then, click <span class="uicontrol" id="cce_10_0084__uicontrol28571458151915"><b>OK</b></span>.</span><p><div class="note" id="cce_10_0084__note18685241993"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0084__ul885417541386"><li id="cce_10_0084__li188541540813">You only need to add the security group rule to one of the nodes where the workload runs.</li><li id="cce_10_0084__li385515419812">The security group must have rules to allow access from the CIDR block 100.125.0.0/16.</li></ul>
</div></div>
</p></li></ol>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0014.html">LoadBalancer</a></div>
</div>
</div>

<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0002.html">Cluster Overview</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0298.html">Creating a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0028.html">Creating a CCE Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0140.html">Connecting to a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0215.html">Upgrading a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0031.html">Managing a Cluster</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0175.html">Obtaining a Cluster Certificate</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0403.html">Changing Cluster Scale</a></strong><br>
</li>
</ul>
</div>

<a name="cce_10_0094"></a><a name="cce_10_0094"></a>
<h1 class="topictitle1">Overview</h1>
<div id="body0000001159453456"><div class="section" id="cce_10_0094__section17868123416122"><h4 class="sectiontitle">Why We Need Ingresses</h4><p id="cce_10_0094__p19813582419">A Service is generally used to forward access requests based on TCP and UDP and provide layer-4 load balancing for clusters. However, in actual scenarios, if there is a large number of HTTP/HTTPS access requests on the application layer, the Service cannot meet the forwarding requirements. Therefore, the Kubernetes cluster provides an HTTP-based access mode, ingress.</p>
<p id="cce_10_0094__p168757241679">An ingress is an independent resource in the Kubernetes cluster and defines rules for forwarding external access traffic. As shown in <a href="#cce_10_0094__fig18155819416">Figure 1</a>, you can customize forwarding rules based on domain names and URLs to implement fine-grained distribution of access traffic.</p>
<div class="fignone" id="cce_10_0094__fig18155819416"><a name="cce_10_0094__fig18155819416"></a><a name="fig18155819416"></a><span class="figcap"><b>Figure 1 </b>Ingress diagram</span><br><span><img class="eddx" id="cce_10_0094__image98185817414" src="en-us_image_0000001695896861.png"></span></div>
<p id="cce_10_0094__p128258846">The following describes the ingress-related definitions:</p>
<ul id="cce_10_0094__ul2875811411"><li id="cce_10_0094__li78145815413">Ingress object: a set of access rules that forward requests to specified Services based on domain names or URLs. It can be added, deleted, modified, and queried by calling APIs.</li><li id="cce_10_0094__li148115817417">Ingress Controller: an executor for request forwarding. It monitors the changes of resource objects such as ingresses, Services, endpoints, secrets (mainly TLS certificates and keys), nodes, and ConfigMaps in real time, parses rules defined by ingresses, and forwards requests to the corresponding backend Services.</li></ul>
</div>
<div class="section" id="cce_10_0094__section162271821192312"><h4 class="sectiontitle">Working Principle of ELB Ingress Controller</h4><p id="cce_10_0094__p172542048121220">ELB Ingress Controller developed by CCE implements layer-7 network access for the internet and intranet (in the same VPC) based on ELB and distributes access traffic to the corresponding Services using different URLs.</p>
<p id="cce_10_0094__p4254124831218">ELB Ingress Controller is deployed on the master node and bound to the load balancer in the VPC where the cluster resides. Different domain names, ports, and forwarding policies can be configured for the same load balancer (with the same IP address). <a href="#cce_10_0094__fig122542486129">Figure 2</a> shows the working principle of ELB Ingress Controller.</p>
<ol id="cce_10_0094__ol525410483123"><li id="cce_10_0094__li8254184813127">A user creates an ingress object and configures a traffic access rule in the ingress, including the load balancer, URL, SSL, and backend service port.</li><li id="cce_10_0094__li1225474817126">When Ingress Controller detects that the ingress object changes, it reconfigures the listener and backend server route on the ELB side according to the traffic access rule.</li><li id="cce_10_0094__li115615167193">When a user accesses a workload, the traffic is forwarded to the corresponding backend service port based on the forwarding policy configured on ELB, and then forwarded to each associated workload through the Service.</li></ol>
<div class="fignone" id="cce_10_0094__fig122542486129"><a name="cce_10_0094__fig122542486129"></a><a name="fig122542486129"></a><span class="figcap"><b>Figure 2 </b>Working principle of ELB Ingress Controller</span><br><span><img class="eddx" id="cce_10_0094__image725424815120" src="en-us_image_0000001647577184.png"></span></div>
</div>
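<p>To illustrate step 1 above, the following sketches an ingress object that forwards requests for a domain name and URL to a backend Service. The ELB-related annotation is shown as a placeholder only, because the exact annotation names depend on the cluster version; the domain name, Service name, and ports are illustrative:</p>
<pre class="screen">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-example                  # Illustrative ingress name
  annotations:
    kubernetes.io/elb.port: '80'         # Placeholder ELB listener annotation
spec:
  rules:
  - host: example.com                    # Domain name in the forwarding rule
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: nginx                  # Backend Service name (illustrative)
            port:
              number: 8080               # Backend Service port (illustrative)</pre>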
<div class="section" id="cce_10_0094__section3565202819276"><a name="cce_10_0094__section3565202819276"></a><a name="section3565202819276"></a><h4 class="sectiontitle">Services Supported by Ingresses</h4><div class="p" id="cce_10_0094__p109298589133"><a href="#cce_10_0094__table143264518141">Table 1</a> lists the services supported by ELB Ingresses.
<div class="tablenoborder"><a name="cce_10_0094__table143264518141"></a><a name="table143264518141"></a><table cellpadding="4" cellspacing="0" summary="" id="cce_10_0094__table143264518141" width="100%" frame="border" border="1" rules="all"><caption><b>Table 1 </b>Services supported by ELB Ingresses</caption><thead align="left"><tr id="cce_10_0094__row1132645112145"><th align="left" class="cellrowborder" valign="top" width="15%" id="mcps1.3.3.2.2.2.5.1.1"><p id="cce_10_0094__p33261518148">Cluster Type</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="22%" id="mcps1.3.3.2.2.2.5.1.2"><p id="cce_10_0094__p15326195191413">ELB Type</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="32%" id="mcps1.3.3.2.2.2.5.1.3"><p id="cce_10_0094__p203261651101412">ClusterIP</p>
</th>
<th align="left" class="cellrowborder" valign="top" width="31%" id="mcps1.3.3.2.2.2.5.1.4"><p id="cce_10_0094__p5326155161412">NodePort</p>
</th>
</tr>
</thead>
<tbody><tr id="cce_10_0094__row1326185141413"><td class="cellrowborder" rowspan="2" valign="top" width="15%" headers="mcps1.3.3.2.2.2.5.1.1 "><p id="cce_10_0094__p132635111148">CCE cluster</p>
</td>
<td class="cellrowborder" valign="top" width="22%" headers="mcps1.3.3.2.2.2.5.1.2 "><p id="cce_10_0094__p10326351161414">Shared load balancer</p>
</td>
<td class="cellrowborder" valign="top" width="32%" headers="mcps1.3.3.2.2.2.5.1.3 "><p id="cce_10_0094__p19326155131415">Not supported</p>
</td>
<td class="cellrowborder" valign="top" width="31%" headers="mcps1.3.3.2.2.2.5.1.4 "><p id="cce_10_0094__p6326155113145">Supported</p>
</td>
</tr>
<tr id="cce_10_0094__row432645171419"><td class="cellrowborder" valign="top" headers="mcps1.3.3.2.2.2.5.1.1 "><p id="cce_10_0094__p173261451161412">Dedicated load balancer</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.3.2.2.2.5.1.2 "><p id="cce_10_0094__p27617265710">Not supported (<span class="keyword" id="cce_10_0094__keyword36811143589">dedicated load balancers cannot be accessed because no ENI is bound to the pods associated with a ClusterIP Service</span>).</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.3.2.2.2.5.1.3 "><p id="cce_10_0094__p932616517145">Supported</p>
</td>
</tr>
<tr id="cce_10_0094__row210381718312"><td class="cellrowborder" rowspan="2" valign="top" width="15%" headers="mcps1.3.3.2.2.2.5.1.1 "><p id="cce_10_0094__p793917163112">CCE Turbo cluster</p>
</td>
<td class="cellrowborder" valign="top" width="22%" headers="mcps1.3.3.2.2.2.5.1.2 "><p id="cce_10_0094__p79312171318">Shared load balancer</p>
</td>
<td class="cellrowborder" valign="top" width="32%" headers="mcps1.3.3.2.2.2.5.1.3 "><p id="cce_10_0094__p19406115844519">Not supported</p>
</td>
<td class="cellrowborder" valign="top" width="31%" headers="mcps1.3.3.2.2.2.5.1.4 "><p id="cce_10_0094__p340620581455">Supported</p>
</td>
</tr>
<tr id="cce_10_0094__row6103817153115"><td class="cellrowborder" valign="top" headers="mcps1.3.3.2.2.2.5.1.1 "><p id="cce_10_0094__p179301713116">Dedicated load balancer</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.3.2.2.2.5.1.2 "><p id="cce_10_0094__p124061958154512">Supported</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.3.3.2.2.2.5.1.3 "><p id="cce_10_0094__p4406758154518">Not supported (<span class="keyword" id="cce_10_0094__keyword1214411171616">dedicated load balancers cannot be accessed through a NodePort Service because an ENI has already been bound to the associated pods</span>).</p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
<div>

<tbody><tr id="cce_10_0105__row65339348218"><td class="cellrowborder" valign="top" width="28.999999999999996%" headers="mcps1.3.2.6.2.2.1.2.3.1.1 "><p id="cce_10_0105__p353573415215">Command</p>
</td>
<td class="cellrowborder" valign="top" width="71%" headers="mcps1.3.2.6.2.2.1.2.3.1.2 "><p id="cce_10_0105__p853515342215">Enter an executable command, for example, <span class="parmvalue" id="cce_10_0105__parmvalue1840810251351"><b>/run/server</b></span>.</p>
<p id="cce_10_0105__p2595134133217">If there are multiple executable commands, write them in different lines.</p>
<div class="note" id="cce_10_0105__note11952193619513"><span class="notetitle"> NOTE: </span><div class="notebody"><p id="cce_10_0105__p1795213665120">In the case of multiple commands, you are advised to run <strong id="cce_10_0105__b10788153303513">/bin/sh</strong> or other <strong id="cce_10_0105__b67943339350">shell</strong> commands. Other commands are used as parameters.</p>
</div></div>
</td>
<tr id="cce_10_0105__row925519462389"><td class="cellrowborder" valign="top" width="23%" headers="mcps1.3.3.2.2.2.1.2.3.1.1 "><p id="cce_10_0105__p1261104603816">HTTP request</p>
</td>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.3.2.2.2.1.2.3.1.2 "><p id="cce_10_0105__p162625461389">Send an HTTP request for post-start processing. The related parameters are described as follows:</p>
<ul id="cce_10_0105__ul426364613385"><li id="cce_10_0105__li1126311461385"><strong id="cce_10_0105__b8207132175211">Path</strong>: (optional) request URL.</li><li id="cce_10_0105__li12641946173820"><strong id="cce_10_0105__b1099715385217">Port</strong>: (mandatory) request port.</li><li id="cce_10_0105__li726644612387"><strong id="cce_10_0105__b3208629133010">Host</strong>: (optional) requested host IP address. The default value is the IP address of the pod.</li></ul>
</td>
</tr>
</tbody>
</thead>
<tbody><tr id="cce_10_0105__row04201302279"><td class="cellrowborder" valign="top" width="23%" headers="mcps1.3.4.2.2.2.1.2.3.1.1 "><p id="cce_10_0105__p6420110192718">CLI</p>
</td>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.4.2.2.2.1.2.3.1.2 "><p id="cce_10_0105__p94204010271">Set commands to be executed in the container for pre-stop processing. The command format is <strong id="cce_10_0105__b1784169779">Command Args[1] Args[2]...</strong>. <strong id="cce_10_0105__b1109858898">Command</strong> is a system command or a user-defined executable program. If no path is specified, an executable program in the default path will be selected. If multiple commands need to be executed, write the commands into a script for execution.</p>
<p id="cce_10_0105__p94203082712">Example command:</p>
<pre class="screen" id="cce_10_0105__screen6420190132712">exec:
command:
<tr id="cce_10_0105__row4421190152715"><td class="cellrowborder" valign="top" width="23%" headers="mcps1.3.4.2.2.2.1.2.3.1.1 "><p id="cce_10_0105__p154216032719">HTTP request</p>
</td>
<td class="cellrowborder" valign="top" width="77%" headers="mcps1.3.4.2.2.2.1.2.3.1.2 "><p id="cce_10_0105__p15421160122715">Send an HTTP request for pre-stop processing. The related parameters are described as follows:</p>
<ul id="cce_10_0105__ul204215052713"><li id="cce_10_0105__li124214052715"><strong id="cce_10_0105__b198341981321">Path</strong>: (optional) request URL.</li><li id="cce_10_0105__li104211303275"><strong id="cce_10_0105__b684213101329">Port</strong>: (mandatory) request port.</li><li id="cce_10_0105__li164227011277"><strong id="cce_10_0105__b27761596332">Host</strong>: (optional) requested host IP address. The default value is the IP address of the pod.</li></ul>
</td>
</tr>
</tbody>
spec:
containers:
- image: nginx
<strong id="cce_10_0105__b2557141820811"> command:</strong>
<strong id="cce_10_0105__b8559121812811"> - sleep 3600</strong> <strong id="cce_10_0105__b156241814811"> # Startup command</strong>
imagePullPolicy: Always
lifecycle:
<strong id="cce_10_0105__b356919181282"> postStart:</strong>
<strong id="cce_10_0105__b05719181388"> exec:</strong>
<strong id="cce_10_0105__b657619181180"> command:</strong>
<strong id="cce_10_0105__b157817184810"> - /bin/bash</strong>
<strong id="cce_10_0105__b45827187811"> - install.sh</strong> <strong id="cce_10_0105__b19593191810816"> # Post-start command</strong>
<strong id="cce_10_0105__b175988181287"> preStop:</strong>
<strong id="cce_10_0105__b360111181288"> exec:</strong>
<strong id="cce_10_0105__b360516183814"> command:</strong>
<strong id="cce_10_0105__b14607718384"> - /bin/bash</strong>
<strong id="cce_10_0105__b761121810811"> - uninstall.sh</strong> <strong id="cce_10_0105__b76121518482"> # Pre-stop command</strong>
name: nginx
imagePullSecrets:
- name: default-secret</pre>
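<p>The preceding YAML uses CLI commands for both hooks. As a sketch, an HTTP request hook replaces <strong>exec</strong> with <strong>httpGet</strong> in the same <strong>lifecycle</strong> field; the path and port below are illustrative values:</p>
<pre class="screen">        lifecycle:
          postStart:
            httpGet:
              path: /init      # (Optional) Request URL. Illustrative value.
              port: 8080       # (Mandatory) Request port. Illustrative value.
              # host is optional and defaults to the pod IP address</pre>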

<a name="cce_10_0107"></a><a name="cce_10_0107"></a>
<h1 class="topictitle1">Connecting to a Cluster Using kubectl</h1>
<div id="body1512462600292"><div class="section" id="cce_10_0107__section14234115144"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0107__p133539491408">This section uses a CCE cluster as an example to describe how to connect to a CCE cluster using <span class="keyword" id="cce_10_0107__keyword19467121518447">kubectl</span>.</p>
</div>
<div class="section" id="cce_10_0107__section17352373317"><h4 class="sectiontitle">Permissions</h4><p id="cce_10_0107__p51211251156">When you access a cluster using kubectl, CCE uses <strong id="cce_10_0107__b9161182320391"><span class="keyword" id="cce_10_0107__keyword1354319447418">kubeconfig</span>.json</strong> generated on the cluster for authentication. This file contains user information, based on which CCE determines which Kubernetes resources can be accessed by kubectl. The permissions recorded in a <strong id="cce_10_0107__b16295666413">kubeconfig.json</strong> file vary from user to user.</p>
<p id="cce_10_0107__p142391810113">For details about user permissions, see <a href="cce_10_0187.html#cce_10_0187__section1464135853519">Cluster Permissions (IAM-based) and Namespace Permissions (Kubernetes RBAC-based)</a>.</p>
</div>
<div class="section" id="cce_10_0107__section37321625113110"><a name="cce_10_0107__section37321625113110"></a><a name="section37321625113110"></a><h4 class="sectiontitle">Using kubectl</h4><p id="cce_10_0107__p764905418355">To connect to a Kubernetes cluster from a PC, you can use kubectl, a Kubernetes command line tool. You can log in to the CCE console, click the name of the cluster to be connected, and view the access address and kubectl connection procedure on the cluster details page.</p>
<div class="p" id="cce_10_0107__p7805114919351">CCE allows you to access a cluster through a private network or a public network.<ul id="cce_10_0107__ul126071124175518"><li id="cce_10_0107__li144192116548"><span class="keyword" id="cce_10_0107__keyword13441034142917">Intranet access</span>: The client that accesses the cluster must be in the same VPC as the cluster.</li><li id="cce_10_0107__li1460752419555">Public access: The client that accesses the cluster must be able to access public networks and the cluster has been bound with a public network IP.<div class="notice" id="cce_10_0107__note2967194410365"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_10_0107__p19671244103610">To bind a public IP (EIP) to the cluster, go to the cluster details page and click <strong id="cce_10_0107__b021910485396">Bind</strong> next to <strong id="cce_10_0107__b132197480394">EIP</strong> in the <strong id="cce_10_0107__b14219164815396">Connection Information</strong> pane. In a cluster with an EIP bound, kube-apiserver will be exposed to public networks and may be attacked. You are advised to configure Advanced Anti-DDoS (AAD) for the EIP of the node where kube-apiserver resides.</p>
</div></div>
</li></ul>
</div>
<p id="cce_10_0107__p2842139103716">Download kubectl and the configuration file. Copy the file to your client, and configure kubectl. After the configuration is complete, you can access your Kubernetes clusters. Procedure:</p>
<ol id="cce_10_0107__ol6469105613170"><li id="cce_10_0107__li194691356201712"><span><strong id="cce_10_0107__b469717424401">Download kubectl.</strong></span><p><p id="cce_10_0107__p53069487256">Prepare a computer that can access the public network and install kubectl in CLI mode. You can run the <strong id="cce_10_0107__b2309195102312">kubectl version</strong> command to check whether kubectl has been installed. If kubectl has been installed, skip this step.</p>
<p id="cce_10_0107__p125851851153510">This section uses the Linux environment as an example to describe how to install and configure kubectl. For details, see <a href="https://kubernetes.io/docs/tasks/tools/#kubectl" target="_blank" rel="noopener noreferrer">Installing kubectl</a>.</p>
<ol type="a" id="cce_10_0107__ol735517018289"><li id="cce_10_0107__li551132463520">Log in to your client and download kubectl.<pre class="screen" id="cce_10_0107__screen8511142418352">cd /home
curl -LO https://dl.k8s.io/release/<em id="cce_10_0107__i13511182443516">{v1.25.0}</em>/bin/linux/amd64/kubectl</pre>
<p id="cce_10_0107__p6511924173518"><em id="cce_10_0107__i719013311241">{v1.25.0}</em> specifies the version number. Replace it as required.</p>
</li><li id="cce_10_0107__li1216814211286">Install kubectl.<pre class="screen" id="cce_10_0107__screen16892115815271">chmod +x kubectl
mv -f kubectl /usr/local/bin</pre>
</li></ol>
</p></li><li id="cce_10_0107__li34691156151712"><a name="cce_10_0107__li34691156151712"></a><a name="li34691156151712"></a><span><strong id="cce_10_0107__b196211619192411">Obtain the kubectl configuration file (kubeconfig).</strong></span><p><p id="cce_10_0107__p1295818109256">In the <strong id="cce_10_0107__b450013549611">Connection Information</strong> pane on the cluster details page, click <strong id="cce_10_0107__b136512181078">Configure</strong> next to <strong id="cce_10_0107__b177317221173">kubectl</strong>. On the window displayed, download the configuration file.</p>
<div class="note" id="cce_10_0107__note191638104210"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><ul id="cce_10_0107__ul795610485546"><li id="cce_10_0107__li495634817549">The kubectl configuration file <strong id="cce_10_0107__b11741123981418">kubeconfig.json</strong> is used for cluster authentication. If the file is leaked, your clusters may be attacked.</li><li id="cce_10_0107__li62692399615">By default, two-way authentication is disabled for domain names in the current cluster. You can run the <strong id="cce_10_0107__b76312129249">kubectl config use-context externalTLSVerify</strong> command to enable two-way authentication. For details, see <a href="#cce_10_0107__section1559919152711">Two-Way Authentication for Domain Names</a>. For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, bind the EIP again and download <strong id="cce_10_0107__b940713611819">kubeconfig.json</strong> again.</li><li id="cce_10_0107__li16956194817544">The Kubernetes permissions assigned by the configuration file downloaded by IAM users are the same as those assigned to the IAM users on the CCE console.</li><li id="cce_10_0107__li1537643019239">If the KUBECONFIG environment variable is configured in the Linux OS, kubectl preferentially loads the file specified by KUBECONFIG instead of <strong id="cce_10_0107__b5859154717398">$HOME/.kube/config</strong>.</li></ul>
</div></div>
</p></li><li id="cce_10_0107__li25451059122317"><a name="cce_10_0107__li25451059122317"></a><a name="li25451059122317"></a><span>Configure kubectl.</span><p><div class="p" id="cce_10_0107__p109826082413">Configure kubectl (Linux is used as an example).<ol type="a" id="cce_10_0107__ol2291154772010"><li id="cce_10_0107__li102911547102012">Log in to your client and copy the kubeconfig.json configuration file downloaded in <a href="#cce_10_0107__li34691156151712">2</a> to the <strong id="cce_10_0107__b175828331240">/home</strong> directory on your client.</li><li id="cce_10_0107__li114766383477">Configure the kubectl authentication file.<pre class="screen" id="cce_10_0107__screen849155210477">cd /home
mkdir -p $HOME/.kube
mv -f kubeconfig.json $HOME/.kube/config</pre>
</li><li id="cce_10_0107__li1480512253214">Switch the kubectl access mode based on service scenarios.<ul id="cce_10_0107__ul91037595229"><li id="cce_10_0107__li5916145112313">Run this command to enable intra-VPC access:<pre class="screen" id="cce_10_0107__screen279213242247">kubectl config use-context internal</pre>
</div>
</p></li></ol>
</div>
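The file moves in steps 2 and 3 can be wrapped in a small helper. This is a sketch only; `install_kubeconfig` is a hypothetical function name, and the paths shown are the defaults used in this guide.

```shell
#!/bin/sh
# Sketch of steps 2-3: put a downloaded kubeconfig.json where kubectl
# looks for it by default. install_kubeconfig is a hypothetical helper.
install_kubeconfig() {
  src="$1"       # path to the downloaded kubeconfig.json
  kube_dir="$2"  # normally "$HOME/.kube"
  mkdir -p "$kube_dir"
  mv -f "$src" "$kube_dir/config"
}

# Usage (the default paths from this guide):
# install_kubeconfig /home/kubeconfig.json "$HOME/.kube"
```

Note that if the KUBECONFIG environment variable is set, it takes precedence over the file placed here.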
<div class="section" id="cce_10_0107__section1559919152711"><a name="cce_10_0107__section1559919152711"></a><a name="section1559919152711"></a><h4 class="sectiontitle"><span class="keyword" id="cce_10_0107__keyword311020376452">Two-Way Authentication for Domain Names</span></h4><p id="cce_10_0107__p138948491274">CCE supports two-way authentication for domain names.</p>
<ul id="cce_10_0107__ul88981331482"><li id="cce_10_0107__li1705116151915">Two-way authentication is disabled for domain names by default. You can run the <strong id="cce_10_0107__b198732542582">kubectl config use-context externalTLSVerify</strong> command to switch to the externalTLSVerify context to enable it.</li><li id="cce_10_0107__li1807459174818">When an EIP is bound to or unbound from a cluster, or a custom domain name is configured or updated, the latest cluster access address (including the EIP bound to the cluster and all custom domain names configured for the cluster) will be added to the cluster server certificate.</li><li id="cce_10_0107__li17898153310483">Asynchronous cluster synchronization takes about 5 to 10 minutes. You can view the synchronization result in <strong id="cce_10_0107__b196404619200">Synchronize Certificate</strong> in <strong id="cce_10_0107__b364620682012">Operation Records</strong>.</li><li id="cce_10_0107__li614337712">For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, bind the EIP again and download <strong id="cce_10_0107__b121611451417">kubeconfig.json</strong> again.</li><li id="cce_10_0107__li5950658165414">If two-way authentication is not supported for the domain names, <strong id="cce_10_0107__b56091346184712">kubeconfig.json</strong> contains the <strong id="cce_10_0107__b1961534614476">"insecure-skip-tls-verify": true</strong> field, as shown in <a href="#cce_10_0107__fig1941342411">Figure 1</a>. 
To use two-way authentication, you can download the <strong id="cce_10_0107__b549311585216">kubeconfig.json</strong> file again and enable two-way authentication for the domain names.<div class="fignone" id="cce_10_0107__fig1941342411"><a name="cce_10_0107__fig1941342411"></a><a name="fig1941342411"></a><span class="figcap"><b>Figure 1 </b>Two-way authentication disabled for domain names</span><br><span><img id="cce_10_0107__image13288203455018" src="en-us_image_0000001726718109.png"></span></div>
</li></ul>
</div>
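A quick way to tell whether a downloaded kubeconfig still skips TLS verification is to look for the field shown in Figure 1. This is a sketch; `has_insecure_skip` is a hypothetical helper name, and the exact formatting of the field in your file may differ.

```shell
#!/bin/sh
# Sketch: report whether a kubeconfig contains the flag from Figure 1,
# i.e. whether TLS verification is being skipped for a context.
# has_insecure_skip is a hypothetical helper.
has_insecure_skip() {
  grep -q '"insecure-skip-tls-verify": true' "$1"
}

# If the flag is absent, the verified context can be used:
# kubectl config use-context externalTLSVerify
```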
<div class="section" id="cce_10_0107__section1628510591883"><h4 class="sectiontitle">Common Issues</h4><ul id="cce_10_0107__ul1374831051115"><li id="cce_10_0107__li4748810121112"><strong id="cce_10_0107__b456677171119"><span class="keyword" id="cce_10_0107__keyword0702458114510">Error from server Forbidden</span></strong><p id="cce_10_0107__p75241832114916">When you use kubectl to create or query Kubernetes resources, the following output is returned:</p>
# 
<pre class="screen" id="cce_10_0107__screen5530165114117"># kubectl get deploy
Error from server (Forbidden): deployments.apps is forbidden: User "0c97ac3cb280f4d91fa7c0096739e1f8" cannot list resource "deployments" in API group "apps" in the namespace "default"</pre>
<p id="cce_10_0107__p1418636115119">The cause is that the user does not have the permissions to operate the Kubernetes resources. For details about how to assign permissions, see <a href="cce_10_0189.html">Namespace Permissions (Kubernetes RBAC-based)</a>.</p>
</li><li id="cce_10_0107__li0365152110"><strong id="cce_10_0107__b1829619716131"><span class="keyword" id="cce_10_0107__keyword159213536451">The connection to the server localhost:8080 was refused</span></strong><p id="cce_10_0107__p1776396131212">When you use kubectl to create or query Kubernetes resources, the following output is returned:</p>
<pre class="screen" id="cce_10_0107__screen197636617124">The connection to the server localhost:8080 was refused - did you specify the right host or port?</pre>
<p id="cce_10_0107__p87631764129">The cause is that cluster authentication is not configured for the kubectl client. For details, see <a href="#cce_10_0107__li25451059122317">3</a>.</p>
</li></ul>
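For the localhost:8080 error, a minimal preflight check can be sketched as follows. kubectl falls back to localhost:8080 only when it finds no cluster configuration at all; `kubectl_config_present` is a hypothetical helper name.

```shell
#!/bin/sh
# Sketch: check whether kubectl has any cluster configuration to load,
# which is what the "localhost:8080 was refused" error indicates is
# missing. kubectl_config_present is a hypothetical helper.
kubectl_config_present() {
  home_dir="${1:-$HOME}"
  [ -n "$KUBECONFIG" ] || [ -f "$home_dir/.kube/config" ]
}

# kubectl_config_present || echo "configure cluster authentication first"
```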
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0140.html">Connecting to a Cluster</a></div>
</div>
</div>

<a name="cce_10_0110"></a><a name="cce_10_0110"></a>
<h1 class="topictitle1">Monitoring</h1>
<div id="body0000001219165543"><p id="cce_10_0110__p8060118"></p>
</div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0182.html">Monitoring Overview</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0201.html">Monitoring Custom Metrics on AOM</a></strong><br>
</li>
</ul>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0705.html">Observability</a></div>
</div>
</div>

<a name="cce_10_0111"></a><a name="cce_10_0111"></a>
<h1 class="topictitle1">Scalable File Service (SFS)</h1>
<div id="body0000001487121868"><p id="cce_10_0111__p8060118"></p>
</div>
<div>
<ul class="ullinks">
<li class="ulchildlink"><strong><a href="cce_10_0617.html">Overview</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0619.html">Using an Existing SFS File System Through a Static PV</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0620.html">Using an SFS File System Through a Dynamic PV</a></strong><br>
</li>
<li class="ulchildlink"><strong><a href="cce_10_0337.html">Configuring SFS Volume Mount Options</a></strong><br>
</li>
</ul>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_10_0374.html">Storage</a></div>
</div>
</div>

<h1 class="topictitle1">Setting Health Check for a Container</h1>
<div id="body1512535109871"><div class="section" id="cce_10_0112__section1731112174912"><h4 class="sectiontitle">Scenario</h4><p id="cce_10_0112__p8242924192"><span class="keyword" id="cce_10_0112__keyword22817116429">Health check</span> regularly checks the health status of containers during container running. If the health check function is not configured, a pod cannot detect application exceptions or automatically restart the application to restore it. This will result in a situation where the pod status is normal but the application in the pod is abnormal.</p>
<p id="cce_10_0112__a77e71e69afde4757ab0ef6087b2e30de">Kubernetes provides the following health check probes:</p>
<ul id="cce_10_0112__ul1867812287915"><li id="cce_10_0112__li574951765020"><strong id="cce_10_0112__b1644144411235">Liveness probe</strong> (livenessProbe): checks whether a container is still alive. It is similar to the <strong id="cce_10_0112__b5645134422313">ps</strong> command that checks whether a process exists. If the liveness check of a container fails, the cluster restarts the container. If the liveness check is successful, no operation is executed.</li><li id="cce_10_0112__li36781028792"><strong id="cce_10_0112__b1729242134220">Readiness probe</strong> (readinessProbe): checks whether a container is ready to process user requests. If a container is detected as unready, service traffic will not be directed to it. It may take a long time for some applications to start up before they can provide services, because they need to load disk data or wait for an external module to start. In this case, the application process is running, but the application cannot provide services. To address this issue, this health check probe is used. If the container readiness check fails, the cluster masks all requests sent to the container. If the check is successful, the container can be accessed.</li><li id="cce_10_0112__li142001552181016"><strong id="cce_10_0112__b86001053354">Startup probe</strong> (startupProbe): checks when a containerized application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, ensuring that those probes do not interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting terminated by the kubelet before they are started.</li></ul>
</div>
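The three probe types above can be wired into a pod spec and applied with kubectl. A sketch follows; the image, check path, and timing values are illustrative, not from this guide.

```shell
#!/bin/sh
# Sketch: generate a pod manifest that uses all three probe types.
# Image, path, and timing values are examples only.
cat > /tmp/probe-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: nginx:alpine
    startupProbe:          # gate the other probes until startup completes
      httpGet:
        path: /health-check
        port: 80
      failureThreshold: 30
      periodSeconds: 2
    livenessProbe:         # restart the container if this fails
      tcpSocket:
        port: 80
      periodSeconds: 10
    readinessProbe:        # stop routing traffic if this fails
      httpGet:
        path: /health-check
        port: 80
      periodSeconds: 5
EOF
# kubectl apply -f /tmp/probe-demo.yaml
```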
<div class="section" id="cce_10_0112__section476025319384"><h4 class="sectiontitle">Check Method</h4><ul id="cce_10_0112__ul2492162133910"><li id="cce_10_0112__li19505918465"><strong id="cce_10_0112__b84235270695216"><span class="keyword" id="cce_10_0112__keyword122935940517318">HTTP request</span></strong><p id="cce_10_0112__p17738122617398">This health check mode applies to containers that provide HTTP/HTTPS services. The cluster periodically initiates an HTTP/HTTPS GET request to such containers. If the return code of the HTTP/HTTPS response is within 200–399, the probe is successful. Otherwise, the probe fails. In this health check mode, you must specify a container listening port and an HTTP/HTTPS request path.</p>
<p id="cce_10_0112__p051511331505">For example, for a container that provides HTTP services, the HTTP check path is <strong id="cce_10_0112__b2043313277265">/health-check</strong>, the port is 80, and the host address is optional (which defaults to the container IP address). Here, 172.16.0.186 is used as an example, and the resulting request is GET http://172.16.0.186:80/health-check. The cluster periodically initiates this request to the container. You can also add one or more headers to an HTTP request. For example, set the request header name to <strong id="cce_10_0112__b1157115313232">Custom-Header</strong> and the corresponding value to <strong id="cce_10_0112__b195721853152316">example</strong>.</p>
</li><li id="cce_10_0112__li92491637166"><strong id="cce_10_0112__b84235270695641"><span class="keyword" id="cce_10_0112__keyword84450853173134">TCP port</span></strong><p id="cce_10_0112__p14198132922215">For a container that provides TCP communication services, the cluster periodically establishes a TCP connection to the container. If the connection is successful, the probe is successful. Otherwise, the probe fails. In this health check mode, you must specify a container listening port.</p>
<p id="cce_10_0112__p1525113371164">For example, if you have an Nginx container with service port 80, after you specify TCP port 80 for container listening, the cluster will periodically initiate a TCP connection to port 80 of the container. If the connection is successful, the probe is successful. Otherwise, the probe fails.</p>
</li><li id="cce_10_0112__li104061647154310"><strong id="cce_10_0112__b84235270695818"><span class="keyword" id="cce_10_0112__keyword1395397266173145">CLI</span></strong><p id="cce_10_0112__p105811510164113">CLI is an efficient tool for health checks. When using the CLI, you must specify an executable command in a container. The cluster periodically runs the command in the container. If the command exits with code 0, the health check is successful. Otherwise, the health check fails.</p>
<p id="cce_10_0112__p1658131014413">The CLI mode can be used to replace the HTTP request-based and TCP port-based health checks.</p>
<ul id="cce_10_0112__ul16409174744313"><li id="cce_10_0112__li7852728174119">For a TCP port, you can use a program script to connect to a container port. If the connection is successful, the script returns <strong id="cce_10_0112__b1610019014247">0</strong>. Otherwise, the script returns <strong id="cce_10_0112__b5100905245">1</strong>.</li><li id="cce_10_0112__li241104715431">For an HTTP request, you can use a script to run the <strong id="cce_10_0112__b16819134246">wget</strong> command to detect the container.<p id="cce_10_0112__p16488203413413"><strong id="cce_10_0112__b422541134110">wget http://127.0.0.1:80/health-check</strong></p>
<p id="cce_10_0112__p13488133464119">Check the return code of the response. If the return code is within 200–399, the script returns <strong id="cce_10_0112__b14498132912217">0</strong>. Otherwise, the script returns <strong id="cce_10_0112__b427293111227">1</strong>.</p>
<div class="notice" id="cce_10_0112__note124141947164311"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_10_0112__ul7414047164318"><li id="cce_10_0112__li81561727181416">Put the program to be executed in the container image so that it can be run.</li><li id="cce_10_0112__li204153475437">If the command to be executed is a shell script, do not directly specify the script as the command. Instead, add a script interpreter. For example, if the script is <strong id="cce_10_0112__b9972128102411">/data/scripts/health_check.sh</strong>, specify <strong id="cce_10_0112__b11973988247">sh /data/scripts/health_check.sh</strong> for command execution. The reason is that the cluster is not in a terminal environment when executing programs in a container.</li></ul>
</div></div>
</li></ul>
</li><li id="cce_10_0112__li198471623132818"><strong id="cce_10_0112__b51081513324">gRPC Check</strong><div class="p" id="cce_10_0112__p489181312320">With gRPC checks, you can configure startup, liveness, and readiness probes for your gRPC application without exposing any HTTP endpoint or requiring an executable. Kubernetes can connect to your workload via gRPC and obtain its status.<div class="notice" id="cce_10_0112__note621111643611"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><ul id="cce_10_0112__ul7170123014392"><li id="cce_10_0112__li6171630113911">The gRPC check is supported only in CCE clusters of v1.25 or later.</li><li id="cce_10_0112__li0171193083917">To use gRPC for checks, your application must support the <a href="https://github.com/grpc/grpc/blob/master/doc/health-checking.md" target="_blank" rel="noopener noreferrer">gRPC health checking protocol</a>.</li><li id="cce_10_0112__li8171163015392">Similar to HTTP and TCP probes, if the port is incorrect or the application does not support the health checking protocol, the check fails.</li></ul>
</div></div>
</div>
</li></ul>
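The wget-based exec check described above can be sketched as a small POSIX shell script. The URL is the example from the text; the function names (`http_ok`, `health_check`) and the status-line parsing are one possible approach, not a prescribed implementation.

```shell
#!/bin/sh
# Sketch of the exec-probe logic from the text: exit 0 only when the
# HTTP status code is within 200-399, exit nonzero otherwise.
http_ok() {
  [ "$1" -ge 200 ] 2>/dev/null && [ "$1" -le 399 ] 2>/dev/null
}

health_check() {
  # --server-response prints status lines to stderr; keep the last code.
  status=$(wget --server-response -q -O /dev/null "$1" 2>&1 \
    | awk '/HTTP\//{code=$2} END{print code}')
  http_ok "${status:-0}"
}

# Probe command for the container spec (URL from the text's example):
# health_check http://127.0.0.1:80/health-check
```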
</tr>
<tr id="cce_10_0112__row122252672114"><td class="cellrowborder" valign="top" width="27%" headers="mcps1.3.3.2.2.3.1.1 "><p id="cce_10_0112__p22251962213"><strong id="cce_10_0112__b10473150161116">Success Threshold</strong> (successThreshold)</p>
</td>
<td class="cellrowborder" valign="top" width="73%" headers="mcps1.3.3.2.2.3.1.2 "><p id="cce_10_0112__p68862519317">Minimum number of consecutive successes required for the probe to be considered successful after it has failed. For example, if this parameter is set to <strong id="cce_10_0112__b539251217254">1</strong>, the workload is considered normal as soon as one health check succeeds after a failure.</p>
<p id="cce_10_0112__p4789856113112">The default value is <strong id="cce_10_0112__b1813915471751">1</strong>, which is also the minimum value.</p>
<p id="cce_10_0112__p922520614215">The value of this parameter is fixed to <strong id="cce_10_0112__b699525419516">1</strong> in <strong id="cce_10_0112__b599610546519">Liveness Probe</strong> and <strong id="cce_10_0112__b69965541558">Startup Probe</strong>.</p>
</td>
</td>
<td class="cellrowborder" valign="top" width="73%" headers="mcps1.3.3.2.2.3.1.2 "><p id="cce_10_0112__p9644133173213">Number of consecutive failures required for the probe to be considered failed.</p>
<p id="cce_10_0112__p111011316163216">When a liveness probe gives up, the container is restarted. When a readiness probe gives up, the pod is marked as unready.</p>
<p id="cce_10_0112__p446822117214">The default value is <strong id="cce_10_0112__b18801222192519">3</strong>. The minimum value is <strong id="cce_10_0112__b9698122717253">1</strong>.</p>
</td>
</tr>
</tbody>
<p id="cce_10_0113__p26271321192016">Configurations must be imported to a container as arguments. Otherwise, configurations will be lost after the container restarts.</p>
</div></div>
<p id="cce_10_0113__p78261119155911">Environment variables can be set in the following modes:</p>
<ul id="cce_10_0113__ul1669104610598"><li id="cce_10_0113__li266913468594"><strong id="cce_10_0113__b4564141914250">Custom</strong>: Enter the environment variable name and parameter value.</li><li id="cce_10_0113__li13148164912599"><strong id="cce_10_0113__b31161818143614">Added from ConfigMap</strong>: Import all keys in a ConfigMap as environment variables.</li><li id="cce_10_0113__li1855315291026"><strong id="cce_10_0113__b5398577535">Added from ConfigMap key</strong>: Import a key in a ConfigMap as the value of an environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if you import <strong id="cce_10_0113__b67861335193619">configmap_value</strong> of <strong id="cce_10_0113__b478643513618">configmap_key</strong> in a ConfigMap as the value of environment variable <strong id="cce_10_0113__b7786133573616">key1</strong>, an environment variable named <strong id="cce_10_0113__b678683518364">key1</strong> whose value is <strong id="cce_10_0113__b1378615359362">configmap_value</strong> exists in the container.</li><li id="cce_10_0113__li1727795616592"><strong id="cce_10_0113__b675162614437">Added from secret</strong>: Import all keys in a secret as environment variables.</li><li id="cce_10_0113__li93353201773"><strong id="cce_10_0113__b0483141614480">Added from secret key</strong>: Import the value of a key in a secret as the value of an environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if you import <strong id="cce_10_0113__b12974122713812">secret_value</strong> of <strong id="cce_10_0113__b197472716385">secret_key</strong> in secret <strong id="cce_10_0113__b722441953910">secret-example</strong> as the value of environment variable <strong id="cce_10_0113__b8975627173810">key2</strong>, an environment variable named <strong id="cce_10_0113__b29756275384">key2</strong> whose value is <strong id="cce_10_0113__b097552703811">secret_value</strong> exists in the container.</li><li id="cce_10_0113__li1749760535"><strong id="cce_10_0113__b19931701407">Variable value/reference</strong>: Use the field defined by a pod as the value of the environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if the pod name is imported as the value of environment variable <strong id="cce_10_0113__b1939710417283">key3</strong>, an environment variable named <strong id="cce_10_0113__b11252186142914">key3</strong> exists in the container and its value is the pod name.</li><li id="cce_10_0113__li16129071317"><strong id="cce_10_0113__b1625513417292">Resource Reference</strong>: The value of <strong id="cce_10_0113__b176281198307">Request</strong> or <strong id="cce_10_0113__b186221022193017">Limit</strong> defined by the container is used as the value of the environment variable. As shown in <a href="#cce_10_0113__fig164568529317">Figure 1</a>, if you import the CPU limit of container-1 as the value of environment variable <strong id="cce_10_0113__b272674753017">key4</strong>, an environment variable named <strong id="cce_10_0113__b99015318423">key4</strong> exists in the container and its value is the CPU limit of container-1.</li></ul>
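<p>These variable types correspond to the <strong>env</strong> and <strong>envFrom</strong> fields of a container in the pod spec. The following is a minimal sketch using the names from the examples above (<strong>configmap_key</strong>, <strong>secret-example</strong>, <strong>container-1</strong>); the ConfigMap name <strong>configmap-example</strong> is illustrative.</p>
<pre class="screen">    env:
    - name: key1                        # Added from ConfigMap key
      valueFrom:
        configMapKeyRef:
          name: configmap-example
          key: configmap_key
    - name: key2                        # Added from secret key
      valueFrom:
        secretKeyRef:
          name: secret-example
          key: secret_key
    - name: key3                        # Variable value/reference: pod name
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: key4                        # Resource reference: CPU limit of container-1
      valueFrom:
        resourceFieldRef:
          containerName: container-1
          resource: limits.cpu
    envFrom:                            # Added from ConfigMap: import all keys
    - configMapRef:
        name: configmap-example</pre>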
</div>
<div class="section" id="cce_10_0113__section13829152011595"><h4 class="sectiontitle">Adding Environment Variables</h4><ol id="cce_10_0113__ol4904646935"><li id="cce_10_0113__li330462393220"><span>Log in to the CCE console.</span></li><li id="cce_10_0113__li2075471341"><span>Click the cluster name to go to the cluster console, choose <strong id="cce_10_0113__b1794501219430">Workloads</strong> in the navigation pane, and click <strong id="cce_10_0113__b11945131216432">Create Workload</strong> in the upper right corner.</span></li><li id="cce_10_0113__li190412461831"><span>When creating a workload, modify the container information in the <strong id="cce_10_0113__b101361766447">Container Settings</strong> area and click the <strong id="cce_10_0113__b8169124424315">Environment Variables</strong> tab.</span></li><li id="cce_10_0113__li468251942720"><span>Configure environment variables.</span><p><div class="fignone" id="cce_10_0113__fig164568529317"><a name="cce_10_0113__fig164568529317"></a><a name="fig164568529317"></a><span class="figcap"><b>Figure 1 </b>Configuring environment variables</span><br><span><img id="cce_10_0113__image486125516381" src="en-us_image_0000001695896581.png"></span></div>
</p></li></ol>
</div>
<div class="section" id="cce_10_0113__section19591158201313"><h4 class="sectiontitle">YAML Example</h4><pre class="screen" id="cce_10_0113__screen1034117614147">apiVersion: apps/v1