doc-exports/docs/cce/umn/cce_bestpractice_00220.html
Dong, Qiu Jian e11d42fad0 CCE UMN update -20230818 version
Reviewed-by: Eotvos, Oliver <oliver.eotvos@t-systems.com>
Co-authored-by: Dong, Qiu Jian <qiujiandong1@huawei.com>
Co-committed-by: Dong, Qiu Jian <qiujiandong1@huawei.com>
2023-12-08 10:20:34 +00:00

<a name="cce_bestpractice_00220"></a><a name="cce_bestpractice_00220"></a>
<h1 class="topictitle1">Implementing High Availability for Applications in CCE</h1>
<div id="body8662426"><div class="section" id="cce_bestpractice_00220__en-us_topic_0226102193_en-us_topic_0161035939_section1222403512500"><h4 class="sectiontitle">Basic Principles</h4><p id="cce_bestpractice_00220__en-us_topic_0226102193_p153705219139">To achieve high availability for applications running in CCE, take the following measures:</p>
<ol id="cce_bestpractice_00220__en-us_topic_0226102193_ol1288495941210"><li id="cce_bestpractice_00220__en-us_topic_0226102193_li12884059121212">Deploy three master nodes for the cluster.</li><li id="cce_bestpractice_00220__en-us_topic_0226102193_li3884135917125">Create nodes in different AZs. When nodes are deployed across AZs, you can customize scheduling policies based on your requirements to maximize resource utilization.</li><li id="cce_bestpractice_00220__li07770269459">Create multiple node pools in different AZs and use them for node scaling.</li><li id="cce_bestpractice_00220__en-us_topic_0226102193_li178031817171620">Set the number of pods to at least 2 when creating a workload.</li><li id="cce_bestpractice_00220__en-us_topic_0226102193_li158841559121215">Set pod anti-affinity rules to distribute pods to different AZs and nodes.</li></ol>
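<p>As a minimal sketch of a custom AZ-based scheduling policy, node affinity can make pods prefer nodes in a given AZ (the zone name <strong>zone01</strong> below is only an example; replace it with an AZ name from your own cluster):</p>
<pre class="screen">affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: topology.kubernetes.io/zone   # Standard label identifying the AZ of a node.
              operator: In
              values:
                - zone01                         # Example AZ name; adjust to your environment.</pre>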
</div>
<div class="section" id="cce_bestpractice_00220__en-us_topic_0226102193_en-us_topic_0161035939_section1184815509400"><h4 class="sectiontitle">Procedure</h4><p id="cce_bestpractice_00220__en-us_topic_0226102193_p1865915314124">Assume that a cluster has four nodes distributed across three AZs.</p>
<pre class="screen" id="cce_bestpractice_00220__screen11470194417389">$ kubectl get node -L topology.kubernetes.io/zone,kubernetes.io/hostname
NAME            STATUS   ROLES    AGE   VERSION                      ZONE     HOSTNAME
192.168.5.112   Ready    &lt;none&gt;   42m   v1.21.7-r0-CCE21.11.1.B007   zone01   192.168.5.112
192.168.5.179   Ready    &lt;none&gt;   42m   v1.21.7-r0-CCE21.11.1.B007   zone01   192.168.5.179
192.168.5.252   Ready    &lt;none&gt;   37m   v1.21.7-r0-CCE21.11.1.B007   zone02   192.168.5.252
192.168.5.8     Ready    &lt;none&gt;   33h   v1.21.7-r0-CCE21.11.1.B007   zone03   192.168.5.8</pre>
<p id="cce_bestpractice_00220__p886104233818">Create a workload with the following two podAntiAffinity rules:</p>
<ul id="cce_bestpractice_00220__ul1752111682611"><li id="cce_bestpractice_00220__li75291617269">The first rule sets pod anti-affinity within an AZ. Configure the parameters as follows:<ul id="cce_bestpractice_00220__ul31352148254"><li id="cce_bestpractice_00220__li01351814132518"><strong id="cce_bestpractice_00220__b13495615122314">weight</strong>: A larger weight indicates a higher scheduling priority. In this example, set it to <strong id="cce_bestpractice_00220__b3741917112815">50</strong>.</li><li id="cce_bestpractice_00220__li2042781913286"><strong id="cce_bestpractice_00220__b14967113632312">topologyKey</strong>: a default or custom key of the node label that the system uses to denote a topology domain. The topology key determines the scope within which the rule applies. In this example, set it to <strong id="cce_bestpractice_00220__b13632852202618">topology.kubernetes.io/zone</strong>, the label that identifies the AZ where a node is located.</li><li id="cce_bestpractice_00220__li2013561417256"><strong id="cce_bestpractice_00220__b642431013275">labelSelector</strong>: Select the label of the workload whose pods this pod should repel. In this example, the workload selects its own label so that its pods repel each other.</li></ul>
</li><li id="cce_bestpractice_00220__li639824118295">The second rule sets pod anti-affinity by node hostname. Configure the parameters as follows:<ul id="cce_bestpractice_00220__ul1511765542911"><li id="cce_bestpractice_00220__li11738115542910"><strong id="cce_bestpractice_00220__b17461363287">weight</strong>: Set it to <strong id="cce_bestpractice_00220__b4153202512280">50</strong>.</li><li id="cce_bestpractice_00220__li1055413454318"><strong id="cce_bestpractice_00220__b428919513281">topologyKey</strong>: Set it to <strong id="cce_bestpractice_00220__b12890125122912">kubernetes.io/hostname</strong>.</li><li id="cce_bestpractice_00220__li142325363212"><strong id="cce_bestpractice_00220__b1837122732712">labelSelector</strong>: Select the label of the pods that this pod should be anti-affinity with.</li></ul>
</li></ul>
<pre class="screen" id="cce_bestpractice_00220__screen6767131210242">kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: container-0
          image: nginx:alpine
          resources:
            limits:
              cpu: 250m
              memory: 512Mi
            requests:
              cpu: 250m
              memory: 512Mi
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 50
              podAffinityTerm:
                labelSelector:       # Select the label of the workload to implement anti-affinity between this pod and the workload.
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - nginx
                namespaces:
                  - default
                topologyKey: topology.kubernetes.io/zone   # It takes effect in the same AZ.
            - weight: 50
              podAffinityTerm:
                labelSelector:       # Select the label of the workload to implement anti-affinity between this pod and the workload.
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - nginx
                namespaces:
                  - default
                topologyKey: kubernetes.io/hostname        # It takes effect on the same node.
      imagePullSecrets:
        - name: default-secret</pre>
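<p>The preceding rules are soft (preferred) rules: if no suitable node is available, pods can still be scheduled together. If pods must never share a topology domain, a required rule can be used instead, at the cost of pods staying in the Pending state when the rule cannot be met. The following is a minimal sketch of the stricter variant (note that required rules take no <strong>weight</strong>):</p>
<pre class="screen">affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:   # Hard rule: pods stay Pending if it cannot be met.
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - nginx
        namespaces:
          - default
        topologyKey: kubernetes.io/hostname           # At most one matching pod per node.</pre>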
<p id="cce_bestpractice_00220__p19693164315341">Create the workload and check which nodes the pods are scheduled to.</p>
<pre class="screen" id="cce_bestpractice_00220__screen10335113414356">$ kubectl get pod -owide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE
nginx-6fffd8d664-dpwbk   1/1     Running   0          17s   10.0.0.132   192.168.5.112
nginx-6fffd8d664-qhclc   1/1     Running   0          17s   10.0.1.133   192.168.5.252</pre>
<p id="cce_bestpractice_00220__p1553611101018">Increase the number of pods to 3. The new pod is scheduled to a node in the remaining AZ, so the three pods run in three different AZs.</p>
<pre class="screen" id="cce_bestpractice_00220__screen1081114401019">$ kubectl scale --replicas=3 deploy/nginx
deployment.apps/nginx scaled
$ kubectl get pod -owide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE
nginx-6fffd8d664-8t7rv   1/1     Running   0          3s      10.0.0.9     192.168.5.8
nginx-6fffd8d664-dpwbk   1/1     Running   0          2m45s   10.0.0.132   192.168.5.112
nginx-6fffd8d664-qhclc   1/1     Running   0          2m45s   10.0.1.133   192.168.5.252</pre>
<p id="cce_bestpractice_00220__p11531840191016">Increase the number of pods to 4. The new pod is scheduled to the remaining node. With the podAntiAffinity rules, pods are evenly distributed across AZs and nodes.</p>
<pre class="screen" id="cce_bestpractice_00220__screen1180259111318">$ kubectl scale --replicas=4 deploy/nginx
deployment.apps/nginx scaled
$ kubectl get pod -owide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE
nginx-6fffd8d664-8t7rv   1/1     Running   0          2m30s   10.0.0.9     192.168.5.8
nginx-6fffd8d664-dpwbk   1/1     Running   0          5m12s   10.0.0.132   192.168.5.112
nginx-6fffd8d664-h796b   1/1     Running   0          78s     10.0.1.5     192.168.5.179
nginx-6fffd8d664-qhclc   1/1     Running   0          5m12s   10.0.1.133   192.168.5.252</pre>
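<p>On clusters running Kubernetes 1.19 or later, the same even distribution can also be expressed with topology spread constraints, which cap the pod-count difference between topology domains. The following snippet is a minimal sketch of a soft constraint, added under the pod template's <strong>spec</strong> alongside or instead of the anti-affinity rules:</p>
<pre class="screen">topologySpreadConstraints:
  - maxSkew: 1                              # Pod counts of any two AZs may differ by at most 1.
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway       # Soft constraint, comparable to a preferred rule.
    labelSelector:
      matchLabels:
        app: nginx</pre>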
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_bestpractice_0323.html">Disaster Recovery</a></div>
</div>
</div>