diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001160642449.png b/doc/best-practice/source/_static/images/en-us_image_0000001160642449.png new file mode 100644 index 0000000..0e64a89 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001160642449.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001217540332.png b/doc/best-practice/source/_static/images/en-us_image_0000001217540332.png new file mode 100644 index 0000000..8d7de35 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001217540332.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001244128426.png b/doc/best-practice/source/_static/images/en-us_image_0000001244128426.png new file mode 100644 index 0000000..c78c7b1 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001244128426.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001346954904.png b/doc/best-practice/source/_static/images/en-us_image_0000001346954904.png new file mode 100644 index 0000000..eb5f35c Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001346954904.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001346958352.png b/doc/best-practice/source/_static/images/en-us_image_0000001346958352.png new file mode 100644 index 0000000..0ee400d Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001346958352.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001347115504.png b/doc/best-practice/source/_static/images/en-us_image_0000001347115504.png new file mode 100644 index 0000000..e48f8b2 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001347115504.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001348013634.png b/doc/best-practice/source/_static/images/en-us_image_0000001348013634.png new file mode 100644 index 0000000..e35626b Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001348013634.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001348551216.png b/doc/best-practice/source/_static/images/en-us_image_0000001348551216.png new file mode 100644 index 0000000..ceb7add Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001348551216.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001349490578.png b/doc/best-practice/source/_static/images/en-us_image_0000001349490578.png new file mode 100644 index 0000000..0a3347d Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001349490578.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001349649242.png b/doc/best-practice/source/_static/images/en-us_image_0000001349649242.png new file mode 100644 index 0000000..fc2294e Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001349649242.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001349986824.png b/doc/best-practice/source/_static/images/en-us_image_0000001349986824.png new file mode 100644 index 0000000..09409a9 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001349986824.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001350206690.png 
b/doc/best-practice/source/_static/images/en-us_image_0000001350206690.png new file mode 100644 index 0000000..bb4c62d Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001350206690.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001380832974.png b/doc/best-practice/source/_static/images/en-us_image_0000001380832974.png new file mode 100644 index 0000000..603c946 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001380832974.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001380992506.png b/doc/best-practice/source/_static/images/en-us_image_0000001380992506.png new file mode 100644 index 0000000..fe48c09 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001380992506.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001381152106.png b/doc/best-practice/source/_static/images/en-us_image_0000001381152106.png new file mode 100644 index 0000000..111d397 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001381152106.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001394383746.png b/doc/best-practice/source/_static/images/en-us_image_0000001394383746.png new file mode 100644 index 0000000..21e746d Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001394383746.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001398072089.png b/doc/best-practice/source/_static/images/en-us_image_0000001398072089.png new file mode 100644 index 0000000..2fec2fe Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001398072089.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001399151117.png b/doc/best-practice/source/_static/images/en-us_image_0000001399151117.png new file mode 100644 index 0000000..a7e33a5 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001399151117.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001399186601.png b/doc/best-practice/source/_static/images/en-us_image_0000001399186601.png new file mode 100644 index 0000000..70c1e89 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001399186601.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001399198561.png b/doc/best-practice/source/_static/images/en-us_image_0000001399198561.png new file mode 100644 index 0000000..08727f7 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001399198561.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001399673949.png b/doc/best-practice/source/_static/images/en-us_image_0000001399673949.png new file mode 100644 index 0000000..5186a57 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001399673949.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001399744097.png b/doc/best-practice/source/_static/images/en-us_image_0000001399744097.png new file mode 100644 index 0000000..b42d9b3 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001399744097.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001400577445.png b/doc/best-practice/source/_static/images/en-us_image_0000001400577445.png new file mode 100644 
index 0000000..a788099 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001400577445.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001400816961.png b/doc/best-practice/source/_static/images/en-us_image_0000001400816961.png new file mode 100644 index 0000000..824386e Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001400816961.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001400827629.png b/doc/best-practice/source/_static/images/en-us_image_0000001400827629.png new file mode 100644 index 0000000..f7d8404 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001400827629.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001402114285.png b/doc/best-practice/source/_static/images/en-us_image_0000001402114285.png new file mode 100644 index 0000000..68d6fd7 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001402114285.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001416038826.png b/doc/best-practice/source/_static/images/en-us_image_0000001416038826.png new file mode 100644 index 0000000..ea1bf0e Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001416038826.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001416065764.png b/doc/best-practice/source/_static/images/en-us_image_0000001416065764.png new file mode 100644 index 0000000..5aea612 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001416065764.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001416249976.png b/doc/best-practice/source/_static/images/en-us_image_0000001416249976.png new file mode 100644 index 0000000..e098dab Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001416249976.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001416531766.png b/doc/best-practice/source/_static/images/en-us_image_0000001416531766.png new file mode 100644 index 0000000..e894668 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001416531766.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001416537408.png b/doc/best-practice/source/_static/images/en-us_image_0000001416537408.png new file mode 100644 index 0000000..e36c722 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001416537408.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001416735230.png b/doc/best-practice/source/_static/images/en-us_image_0000001416735230.png new file mode 100644 index 0000000..e1b3f88 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001416735230.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001416735446.png b/doc/best-practice/source/_static/images/en-us_image_0000001416735446.png new file mode 100644 index 0000000..31c54cb Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001416735446.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001418569120.png b/doc/best-practice/source/_static/images/en-us_image_0000001418569120.png new file mode 100644 index 0000000..394eb9a Binary files /dev/null and 
b/doc/best-practice/source/_static/images/en-us_image_0000001418569120.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001418569168.png b/doc/best-practice/source/_static/images/en-us_image_0000001418569168.png new file mode 100644 index 0000000..6851d8d Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001418569168.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001418729104.png b/doc/best-practice/source/_static/images/en-us_image_0000001418729104.png new file mode 100644 index 0000000..a132dc5 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001418729104.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001418729128.png b/doc/best-practice/source/_static/images/en-us_image_0000001418729128.png new file mode 100644 index 0000000..4d57a0d Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001418729128.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001431432309.png b/doc/best-practice/source/_static/images/en-us_image_0000001431432309.png new file mode 100644 index 0000000..320c8c9 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001431432309.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001465971797.png b/doc/best-practice/source/_static/images/en-us_image_0000001465971797.png new file mode 100644 index 0000000..e00ea55 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001465971797.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001465972145.png b/doc/best-practice/source/_static/images/en-us_image_0000001465972145.png new file mode 100644 index 0000000..cb7fbdf Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001465972145.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001465973233.png b/doc/best-practice/source/_static/images/en-us_image_0000001465973233.png new file mode 100644 index 0000000..303547f Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001465973233.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001466618025.png b/doc/best-practice/source/_static/images/en-us_image_0000001466618025.png new file mode 100644 index 0000000..1fd5560 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001466618025.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001466646017.png b/doc/best-practice/source/_static/images/en-us_image_0000001466646017.png new file mode 100644 index 0000000..6a2375f Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001466646017.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001468605617.png b/doc/best-practice/source/_static/images/en-us_image_0000001468605617.png new file mode 100644 index 0000000..f5da59a Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001468605617.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001468885853.png b/doc/best-practice/source/_static/images/en-us_image_0000001468885853.png new file mode 100644 index 0000000..d2ea735 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001468885853.png differ diff --git 
a/doc/best-practice/source/_static/images/en-us_image_0000001468885889.png b/doc/best-practice/source/_static/images/en-us_image_0000001468885889.png new file mode 100644 index 0000000..a132dc5 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001468885889.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001469005545.png b/doc/best-practice/source/_static/images/en-us_image_0000001469005545.png new file mode 100644 index 0000000..d23cff7 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001469005545.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001469005601.png b/doc/best-practice/source/_static/images/en-us_image_0000001469005601.png new file mode 100644 index 0000000..d2ea735 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001469005601.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001471311349.png b/doc/best-practice/source/_static/images/en-us_image_0000001471311349.png new file mode 100644 index 0000000..60dda36 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001471311349.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001471430697.png b/doc/best-practice/source/_static/images/en-us_image_0000001471430697.png new file mode 100644 index 0000000..f465002 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001471430697.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001474360185.png b/doc/best-practice/source/_static/images/en-us_image_0000001474360185.png new file mode 100644 index 0000000..809743f Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001474360185.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001480031958.png b/doc/best-practice/source/_static/images/en-us_image_0000001480031958.png new file mode 100644 index 0000000..290b94f Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001480031958.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001480191270.png b/doc/best-practice/source/_static/images/en-us_image_0000001480191270.png new file mode 100644 index 0000000..290b94f Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001480191270.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001606845825.png b/doc/best-practice/source/_static/images/en-us_image_0000001606845825.png new file mode 100644 index 0000000..dc8f6a3 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001606845825.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0000001606847653.png b/doc/best-practice/source/_static/images/en-us_image_0000001606847653.png new file mode 100644 index 0000000..5bcc3a5 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0000001606847653.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0261817733.png b/doc/best-practice/source/_static/images/en-us_image_0261817733.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0261817733.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0261817737.png 
b/doc/best-practice/source/_static/images/en-us_image_0261817737.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0261817737.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0261817740.png b/doc/best-practice/source/_static/images/en-us_image_0261817740.png new file mode 100644 index 0000000..d2e4674 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0261817740.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0261817741.png b/doc/best-practice/source/_static/images/en-us_image_0261817741.png new file mode 100644 index 0000000..b83dc3e Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0261817741.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0264587870.png b/doc/best-practice/source/_static/images/en-us_image_0264587870.png new file mode 100644 index 0000000..a522624 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0264587870.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0264587871.png b/doc/best-practice/source/_static/images/en-us_image_0264587871.png new file mode 100644 index 0000000..fceb742 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0264587871.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0264642164.png b/doc/best-practice/source/_static/images/en-us_image_0264642164.png new file mode 100644 index 0000000..3b60f71 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0264642164.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0266402292.png b/doc/best-practice/source/_static/images/en-us_image_0266402292.png new file mode 100644 index 0000000..efa537a Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0266402292.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0266402293.png b/doc/best-practice/source/_static/images/en-us_image_0266402293.png new file mode 100644 index 0000000..014453c Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0266402293.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0266405132.png b/doc/best-practice/source/_static/images/en-us_image_0266405132.png new file mode 100644 index 0000000..abb9bf2 Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0266405132.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0266405133.png b/doc/best-practice/source/_static/images/en-us_image_0266405133.png new file mode 100644 index 0000000..4ef32bb Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0266405133.png differ diff --git a/doc/best-practice/source/_static/images/en-us_image_0275263250.png b/doc/best-practice/source/_static/images/en-us_image_0275263250.png new file mode 100644 index 0000000..ac2b00d Binary files /dev/null and b/doc/best-practice/source/_static/images/en-us_image_0275263250.png differ diff --git a/doc/best-practice/source/auto_scaling/auto_scaling_based_on_elb_monitoring_metrics.rst b/doc/best-practice/source/auto_scaling/auto_scaling_based_on_elb_monitoring_metrics.rst new file mode 100644 index 0000000..bad5af7 --- /dev/null +++ b/doc/best-practice/source/auto_scaling/auto_scaling_based_on_elb_monitoring_metrics.rst @@ -0,0 +1,522 @@ +:original_name: 
cce_bestpractice_00283.html + +.. _cce_bestpractice_00283: + +Auto Scaling Based on ELB Monitoring Metrics +============================================ + +Issues +------ + +In :ref:`Using HPA and CA for Auto Scaling of Workloads and Nodes `, auto scaling is performed based on the usage of resources such as CPU and memory. + +However, resource usage metrics usually lag behind traffic changes. Such scaling cannot adequately support services such as flash sales and social media that require fast, elastic scaling. + +Solution +-------- + +This section describes an auto scaling solution based on ELB monitoring metrics. Compared with CPU/memory usage-based auto scaling, auto scaling based on ELB QPS data is more targeted and timely. + +The key to this solution is to obtain the ELB metric data, report it to Prometheus, convert the data in Prometheus into metrics that HPA can identify, and then perform auto scaling based on the converted data. + +The implementation scheme is as follows: + +#. Develop a Prometheus exporter to obtain ELB metric data, convert the data into the format required by Prometheus, and report it to Prometheus. This section uses `cloudeye-exporter `__ as an example. +#. Convert the Prometheus data into the Kubernetes metric API for the HPA controller to use. +#. Set an HPA rule to use ELB monitoring data as auto scaling metrics. + + +.. figure:: /_static/images/en-us_image_0000001160642449.png + :alt: **Figure 1** ELB traffic flows and monitoring data + + **Figure 1** ELB traffic flows and monitoring data + +.. note:: + + Other metrics can be collected in a similar way. + +Prerequisites +------------- + +- You must be familiar with Prometheus and be able to write a Prometheus exporter. +- The kube-prometheus-stack add-on has been installed in the cluster. This add-on supports clusters of v1.17 or later. + +Building an Exporter Image +-------------------------- + +This section uses `cloudeye-exporter `__ to monitor load balancer metrics. To develop an exporter, see :ref:`Appendix: Developing an Exporter `. + +#. Log in to a cluster node that can access the public network and create a Dockerfile. + + .. code-block:: + + vi Dockerfile + + Example Dockerfile: + + .. code-block:: + + FROM ubuntu:18.04 + RUN apt-get update \ + && apt-get install -y git ca-certificates curl \ + && update-ca-certificates \ + && curl -O https://dl.google.com/go/go1.14.14.linux-amd64.tar.gz \ + && tar -zxf go1.14.14.linux-amd64.tar.gz -C /usr/local \ + && git clone https://github.com/huaweicloud/cloudeye-exporter \ + && export PATH=$PATH:/usr/local/go/bin \ + && export GO111MODULE=on \ + && export GOPROXY=https://goproxy.cn,direct \ + && export GONOSUMDB=* \ + && cd cloudeye-exporter \ + && go build + CMD ["/cloudeye-exporter/cloudeye-exporter", "-config=/tmp/clouds.yml"] + +#. Build an image. The image name is **cloudeye-exporter** and the image version is 1.0. + + .. code-block:: + + docker build --network host . -t cloudeye-exporter:1.0
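+ + (Optional) Before pushing the image, you can run it locally to check that the exporter starts and serves metrics. The following commands are a minimal sketch. They assume that the **clouds.yml** credentials file described in **Deploying the Exporter** below is present in the current directory and uses the port (**8087**) and metric path (**/metrics**) shown in that section; the **services** parameter matches the params used in the Prometheus job configured later. + + .. code-block:: + + docker run --rm -d --name cloudeye-exporter-test -v $(pwd)/clouds.yml:/tmp/clouds.yml -p 8087:8087 cloudeye-exporter:1.0 + curl "http://127.0.0.1:8087/metrics?services=SYS.ELB" + docker rm -f cloudeye-exporter-test + + If the exporter is working, the curl command returns metric data in the Prometheus text format.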
+ +#. Push the image to SWR. + + a. .. _cce_bestpractice_00283__li18414155617327: + + (Optional) Log in to the SWR console, choose **Organization Management** in the navigation pane, and click **Create Organization** in the upper right corner to create an organization. + + Skip this step if you already have an organization. + + b. .. _cce_bestpractice_00283__li1141405620325: + + In the navigation pane, choose **My Images** and then click **Upload Through Client**. On the page displayed, click **Generate a temporary login command** and click |image1| to copy the command. + + c. Run the login command copied in the previous step on the cluster node. If the login is successful, the message "Login Succeeded" is displayed. + + d. Tag the **cloudeye-exporter** image. + + **docker tag** **[Image name 1:Tag 1]** **[Image repository address]/[Organization name]/[Image name 2:Tag 2]** + + - **[Image name 1:Tag 1]**: name and tag of the local image to be uploaded. + - **[Image repository address]**: The domain name at the end of the login command in :ref:`2 ` is the image repository address, which can be obtained on the SWR console. + - **[Organization name]**: name of the organization created in :ref:`1 `. + - **[Image name 2:Tag 2]**: desired image name and tag to be displayed on the SWR console. + + Example: + + **docker tag** **cloudeye-exporter:1.0 swr.ap-southeast-1.myhuaweicloud.com/cloud-develop/cloudeye-exporter:1.0** + + e. Push the image to the image repository. + + **docker push** **[Image repository address]/[Organization name]/[Image name 2:Tag 2]** + + Example: + + **docker push swr.ap-southeast-1.myhuaweicloud.com/cloud-develop/cloudeye-exporter:1.0** + + The following information will be returned upon a successful push: + + .. code-block:: + + ... + 030***: Pushed + 1.0: digest: sha256:eb7e3bbd*** size: ** + + To view the pushed image, go to the SWR console and refresh the **My Images** page. + +Deploying the Exporter +---------------------- + +Prometheus can dynamically monitor pods if you add Prometheus annotations to the pods (the default path is **/metrics**). This section uses `cloudeye-exporter `__ as an example. + +Common annotations in Prometheus are as follows: + +- **prometheus.io/scrape**: If the value is **true**, the pod will be monitored. +- **prometheus.io/path**: URL from which the data is collected. The default value is **/metrics**. +- **prometheus.io/port**: port number of the endpoint to collect data from. +- **prometheus.io/scheme**: Defaults to **http**. If HTTPS is configured for security purposes, change the value to **https**. + +#. Use kubectl to connect to the cluster. + +#. Create a secret, which will be used by **cloudeye-exporter** for authentication. + + a. Create the **clouds.yml** file with the following content: + + .. code-block:: + + global: + prefix: "huaweicloud" + scrape_batch_size: 10 + port: ":8087" + metric_path: "/metrics" + auth: + auth_url: "https://iam.ap-southeast-1.myhuaweicloud.com/v3" + project_name: "ap-southeast-1" + access_key: "********" + secret_key: "***********" + region: "ap-southeast-1" + + The values of **access_key** and **secret_key** can be obtained from `Access Keys `__. + + b. Obtain the Base64-encoded string of the preceding file. + + .. code-block:: + + cat clouds.yml | base64 -w0 ;echo + + c. Create the **clouds-secret.yaml** file with the following content: + + .. code-block:: + + apiVersion: v1 + kind: Secret + data: + clouds.yml: ICAga***** # Replace it with the Base64-encoded string. + metadata: + annotations: + description: '' + name: 'clouds.yml' + namespace: default # Namespace where the secret is located. + labels: {} + type: Opaque + + d. Create a secret. + + .. code-block:: + + kubectl apply -f clouds-secret.yaml + +#. Create the **cloudeye-exporter-deployment.yaml** file with the following content: + + ..
code-block:: + + kind: Deployment + apiVersion: apps/v1 + metadata: + name: cloudeye-exporter + namespace: default + spec: + replicas: 1 + selector: + matchLabels: + app: cloudeye-exporter + version: v1 + template: + metadata: + labels: + app: cloudeye-exporter + version: v1 + spec: + volumes: + - name: vol-166055064743016314 + secret: + secretName: clouds.yml + defaultMode: 420 + containers: + - name: container-1 + image: swr.ap-southeast-1.myhuaweicloud.com/cloud-develop/cloudeye-exporter:1.0 + command: + - /cloudeye-exporter/cloudeye-exporter + - '-config=/tmp/clouds.yml' + resources: {} + volumeMounts: + - name: vol-166055064743016314 + readOnly: true + mountPath: /tmp + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + imagePullPolicy: IfNotPresent + restartPolicy: Always + terminationGracePeriodSeconds: 30 + dnsPolicy: ClusterFirst + securityContext: {} + imagePullSecrets: + - name: default-secret + schedulerName: default-scheduler + strategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 25% + maxSurge: 25% + revisionHistoryLimit: 10 + progressDeadlineSeconds: 600 + + Create the preceding workload. + + .. code-block:: + + kubectl apply -f cloudeye-exporter-deployment.yaml + +#. Create the **cloudeye-exporter-service.yaml** file. + + .. code-block:: + + apiVersion: v1 + kind: Service + metadata: + name: cloudeye-exporter + namespace: default + labels: + app: cloudeye-exporter + version: v1 + annotations: + prometheus.io/port: '8087' + prometheus.io/scrape: 'true' + prometheus.io/path: "/metrics" + prometheus.io/scheme: "http" + spec: + ports: + - name: cce-service-0 + protocol: TCP + port: 8087 + targetPort: 8087 + selector: + app: cloudeye-exporter + version: v1 + type: ClusterIP + + Create the preceding Service. + + .. code-block:: + + kubectl apply -f cloudeye-exporter-service.yaml + +Interconnecting with Prometheus +------------------------------- + +After collecting monitoring data, Prometheus needs to convert the data into the Kubernetes metric API for the HPA controller to perform auto scaling. + +In this example, the ELB metrics associated with the workload need to be monitored. Therefore, the target workload must use the Service or ingress of the **LoadBalancer** type. + +#. .. _cce_bestpractice_00283__li1638516102712: + + View the access mode of the workload to be monitored and obtain the ELB listener ID. + + a. On the CCE cluster console, choose **Networking**. On the **Services** or **Ingresses** tab page, view the Service or ingress of the **LoadBalancer** type and click the load balancer to access the load balancer page. + + |image2| + + b. On the **Listeners** tab, view the listener corresponding to the workload and copy the listener ID. + + |image3| + +#. Use kubectl to connect to the cluster and add Prometheus configurations. In this example, collect load balancer metrics. For details about advanced usage, see `Configuration `__. + + a. Create the **prometheus-additional.yaml** file, add the following content to the file, and save the file: + + .. 
code-block:: + + - job_name: elb_metric + params: + services: ['SYS.ELB'] + kubernetes_sd_configs: + - role: endpoints + relabel_configs: + - action: keep + regex: '8087' + source_labels: + - __meta_kubernetes_service_annotation_prometheus_io_port + - action: replace + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + source_labels: + - __address__ + - __meta_kubernetes_service_annotation_prometheus_io_port + target_label: __address__ + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) + - action: replace + source_labels: + - __meta_kubernetes_namespace + target_label: kubernetes_namespace + - action: replace + source_labels: + - __meta_kubernetes_service_name + target_label: kubernetes_service + + b. Use the preceding configuration file to create a secret named **additional-scrape-configs**. + + .. code-block:: + + kubectl create secret generic additional-scrape-configs --from-file prometheus-additional.yaml -n monitoring --dry-run=client -o yaml | kubectl apply -f - + + c. Modify the Prometheus object named **server**. + + .. code-block:: + + kubectl edit prometheus server -n monitoring + + Add the following content to the **spec** field and save the file: + + .. code-block:: + + spec: + additionalScrapeConfigs: + key: prometheus-additional.yaml + name: additional-scrape-configs + + d. Check whether the modification has taken effect. + + .. code-block:: + + kubectl get secret prometheus-server -n monitoring -o jsonpath="{.data['prometheus\.yaml\.gz']}" | base64 --decode | gzip -d | grep -A3 elb + + If any command output is displayed, the modification has taken effect. + +#. Add the rule configuration used by **custom-metrics-apiserver** to the **user-adapter-config** ConfigMap. (In earlier versions, this ConfigMap is named **adapter-config**. Edit the ConfigMap that exists in your cluster.) + + .. code-block:: + + kubectl edit configmap user-adapter-config -n monitoring + + Add the following content under the **rules** field and save the file. Set the value of **seriesQuery** to the listener ID obtained in :ref:`1 `. + + .. code-block:: + + apiVersion: v1 + data: + config.yaml: |- + rules: + - metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>) + resources: + overrides: + kubernetes_namespace: + resource: namespace + kubernetes_service: + resource: service + name: + matches: huaweicloud_sys_elb_(.*) + as: "elb01_${1}" + seriesQuery: '{lbaas_listener_id="94424*****"}' # ELB listener ID + ... + +#. Redeploy the **custom-metrics-apiserver** workload in the **monitoring** namespace. + + |image4| + +Creating an HPA Policy +---------------------- + +After the data reported by the exporter to Prometheus is converted into the Kubernetes metric API by using the Prometheus adapter, you can create an HPA policy for auto scaling. + +#. Create an HPA policy. The inbound traffic of the ELB load balancer is used to trigger scale-out. When the value of **m7_in_Bps** (inbound traffic rate) exceeds 1,000 byte/s, the nginx Deployment is scaled out. + + .. code-block:: + + apiVersion: autoscaling/v2 + kind: HorizontalPodAutoscaler + metadata: + name: nginx + namespace: default + spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: nginx + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: Object + object: + metric: + name: elb01_listener_m7_in_Bps + describedObject: + apiVersion: v1 + kind: Service + name: cloudeye-exporter + target: + type: Value + value: 1000 + + + .. figure:: /_static/images/en-us_image_0000001606847653.png + :alt: **Figure 2** Created HPA Policy + + **Figure 2** Created HPA Policy
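+ + Before running a load, you can check that the aggregated API serves the new metric. The following query is a minimal check based on the adapter rule above, which exposes **elb01_listener_m7_in_Bps** on the **cloudeye-exporter** Service in the **default** namespace; adjust the names if your configuration differs. + + .. code-block:: + + kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/cloudeye-exporter/elb01_listener_m7_in_Bps" + + If a **MetricValueList** object is returned, the HPA controller can read the metric. + +#.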
After the HPA policy is created, perform a pressure test on the workload (accessing the pods through ELB). Then, the HPA controller determines whether scaling is required based on the configured value. + + In the **Events** dialog box, obtain scaling records in the **Kubernetes Event** column. + + + .. figure:: /_static/images/en-us_image_0000001606845825.png + :alt: **Figure 3** Scaling events + + **Figure 3** Scaling events + +ELB Listener Metrics +-------------------- + +The following table lists the ELB listener metrics that can be collected using the method described in this section. + +.. table:: **Table 1** ELB listener metrics + + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Metric | Name | Unit | Description | + +===================+====================================+==============+=================================================================================================================================================================+ + | m1_cps | Concurrent Connections | Count | Number of concurrent connections processed by a load balancer. | + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | m1e_server_rps | Reset Packets from Backend Servers | Count/Second | Number of reset packets sent from the backend server to clients. These reset packages are generated by the backend server and then forwarded by load balancers. | + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | m1f_lvs_rps | Reset Packets from Load Balancers | Count/Second | Number of reset packets sent from load balancers. | + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | m21_client_rps | Reset Packets from Clients | Count/Second | Number of reset packets sent from clients to the backend server. These reset packages are generated by the clients and then forwarded by load balancers. | + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | m22_in_bandwidth | Inbound Bandwidth | bit/s | Inbound bandwidth of a load balancer. | + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | m23_out_bandwidth | Outbound Bandwidth | bit/s | Outbound bandwidth of a load balancer. | + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | m2_act_conn | Active Connections | Count | Number of current active connections. 
| + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | m3_inact_conn | Inactive Connections | Count | Number of current inactive connections. | + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | m4_ncps | New Connections | Count | Number of current new connections. | + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | m5_in_pps | Incoming Packets | Count | Number of packets sent to a load balancer. | + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | m6_out_pps | Outgoing Packets | Count | Number of packets sent from a load balancer. | + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | m7_in_Bps | Inbound Rate | byte/s | Number of incoming bytes per second on a load balancer. | + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | m8_out_Bps | Outbound Rate | byte/s | Number of outgoing bytes per second on a load balancer. | + +-------------------+------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. _cce_bestpractice_00283__section4980111417441: + +Appendix: Developing an Exporter +-------------------------------- + +Prometheus periodically calls the **/metrics** API of the exporter to obtain metric data. Applications only need to report monitoring data through **/metrics**. You can select a Prometheus client in a desired language and integrate it into applications to implement the **/metrics** API. For details about the client, see `Prometheus CLIENT LIBRARIES `__. For details about how to write the exporter, see `WRITING EXPORTERS `__. + +The monitoring data must be in the format that Prometheus supports. Each data record provides the ELB ID, listener ID, namespace where the Service is located, Service name, and Service UID as labels, as shown in the following figure. + +|image5| + +To obtain the preceding data, perform the following steps: + +#. Query all Services. + + The **annotations** field in the returned information contains the ELB associated with the Service. + + - kubernetes.io/elb.id + - kubernetes.io/elb.class + +#. Use the `listener query API `__ to query the listener ID based on the ELB instance ID obtained in the previous step. + +#. Obtain the ELB monitoring data. + + The ELB monitoring data is queried using the CES API `used to query monitoring data in batches `__. 
For details about ELB monitoring metrics, see `Monitoring Metrics `__. Example: + + - **m1_cps**: number of concurrent connections + - **m5_in_pps**: number of incoming data packets + - **m6_out_pps**: number of outgoing data packets + - **m7_in_Bps**: incoming rate + - **m8_out_Bps**: outgoing rate + +#. Aggregate data in the format that Prometheus supports and expose the data through the **/metrics** API. + + The Prometheus client can easily call the **/metrics** API. For details, see `CLIENT LIBRARIES `__. For details about how to develop an exporter, see `WRITING EXPORTERS `__. + +.. |image1| image:: /_static/images/en-us_image_0000001380832974.png +.. |image2| image:: /_static/images/en-us_image_0000001431432309.png +.. |image3| image:: /_static/images/en-us_image_0000001380992506.png +.. |image4| image:: /_static/images/en-us_image_0000001394383746.png +.. |image5| image:: /_static/images/en-us_image_0000001381152106.png diff --git a/doc/best-practice/source/auto_scaling/index.rst b/doc/best-practice/source/auto_scaling/index.rst new file mode 100644 index 0000000..787aa3e --- /dev/null +++ b/doc/best-practice/source/auto_scaling/index.rst @@ -0,0 +1,14 @@ +:original_name: cce_bestpractice_0090.html + +.. _cce_bestpractice_0090: + +Auto Scaling +============ + +- :ref:`Auto Scaling Based on ELB Monitoring Metrics ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + auto_scaling_based_on_elb_monitoring_metrics diff --git a/doc/best-practice/source/cluster/creating_an_ipv4_ipv6_dual-stack_cluster_in_cce.rst b/doc/best-practice/source/cluster/creating_an_ipv4_ipv6_dual-stack_cluster_in_cce.rst new file mode 100644 index 0000000..2403ec5 --- /dev/null +++ b/doc/best-practice/source/cluster/creating_an_ipv4_ipv6_dual-stack_cluster_in_cce.rst @@ -0,0 +1,269 @@ +:original_name: cce_bestpractice_00222.html + +.. _cce_bestpractice_00222: + +Creating an IPv4/IPv6 Dual-Stack Cluster in CCE +=============================================== + +This section describes how to set up a VPC with IPv6 CIDR block and create a cluster and nodes with an IPv6 address in the VPC, so that the nodes can access the Internet. + +Overview +-------- + +IPv6 addresses are used to deal with the problem of IPv4 address exhaustion. If a worker node (such as an ECS) in the current cluster uses IPv4, the node can run in dual-stack mode after IPv6 is enabled. Specifically, the node has both IPv4 and IPv6 addresses, which can be used to access the intranet or public network. + +Application Scenarios +--------------------- + +- If your application needs to provide Services for users who use IPv6 clients, you can use IPv6 EIPs or the IPv4 and IPv6 dual-stack function. +- If your application needs to both provide Services for users who use IPv6 clients and analyze the access request data, you can use only the IPv4 and IPv6 dual-stack function. +- If internal communication is required between your application systems or between your application system and another system (such as the database system), you can use only the IPv4 and IPv6 dual-stack function. + +For details about the dual stack, see `IPv4 and IPv6 Dual-Stack Network `__. 
+ +Constraints +----------- + +- Clusters that support IPv4/IPv6 dual stack: + + +-----------------+--------------------------+-----------------+-------------------------------------------------------------------------+ + | Cluster Type | Cluster Network Model | Version | Remarks | + +=================+==========================+=================+=========================================================================+ + | CCE cluster | Container tunnel network | v1.15 or later | IPv4/IPv6 dual stack will be generally available for clusters of v1.23. | + | | | | | + | | | | ELB dual stack is not supported. | + +-----------------+--------------------------+-----------------+-------------------------------------------------------------------------+ + +- Worker nodes and master nodes in Kubernetes clusters use IPv4 addresses to communicate with each other. +- If the Service type is set to **LoadBalancer (DNAT)**, only IPv4 addresses are supported. +- Only one IPv6 address can be bound to each NIC. +- When IPv4/IPv6 dual stack is enabled for the cluster, DHCP unlimited lease cannot be enabled for the selected node subnet. +- If a dual-stack cluster is used, do not change the load balancer protocol version on the ELB console. + +Step 1: Create a VPC +-------------------- + +Before creating your VPCs, determine how many VPCs, the number of subnets, and what IP address ranges you will need. For details, see `Network Planning `__. + +.. note:: + + - The basic operations for IPv4 and IPv6 dual-stack networks are the same as those for IPv4 networks. Only some parameters are different. + - For details about the IPv6 billing policy, supported ECS types, and supported regions, see `IPv4 and IPv6 Dual-Stack Network `__. + +Perform the following operations to create a VPC named **vpc-ipv6** and its default subnet named **subnet-ipv6**. + +#. Log in to the management console. + +#. Click |image1| in the upper left corner of the management console and select a region and a project. + +#. Under **Networking**, click **Virtual Private Cloud**. + +#. Click **Create VPC**. + +#. Set the VPC and subnet parameters. + + When configuring a subnet, select **Enable** for **IPv6 CIDR Block** to automatically allocate an IPv6 CIDR block to the subnet. IPv6 cannot be disabled after the subnet is created. Currently, you are not allowed to specify a custom IPv6 CIDR block. + + .. table:: **Table 1** VPC configuration parameters + + +-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------+ + | Parameter | Description | Example Value | + +=========================+=======================================================================================================================================================================================================================================================================================================================================+==========================+ + | Region | Specifies the desired region. Regions are geographic areas that are physically isolated from each other. The networks inside different regions are not connected to each other, so resources cannot be shared across different regions. 
For lower network latency and faster access to your resources, select the region nearest you. | AP-Singapore | + +-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------+ + | Name | VPC name. | vpc-ipv6 | + +-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------+ + | IPv4 CIDR Block | Specifies the Classless Inter-Domain Routing (CIDR) block of the VPC. The CIDR block of a subnet can be the same as the CIDR block for the VPC (for a single subnet in the VPC) or a subset (for multiple subnets in the VPC). | 192.168.0.0/16 | + | | | | + | | The following CIDR blocks are supported: | | + | | | | + | | 10.0.0.0/8-24 | | + | | | | + | | 172.16.0.0/12-24 | | + | | | | + | | 192.168.0.0/16-24 | | + +-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------+ + | Enterprise Project | When creating a VPC, you can add the VPC to an enabled enterprise project. | default | + | | | | + | | An enterprise project facilitates project-level management and grouping of cloud resources and users. The name of the default project is **default**. | | + | | | | + | | For details about how to create and manage enterprise projects, see `Enterprise Management User Guide `__. | | + +-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------+ + | Tag (Advanced Settings) | Specifies the VPC tag, which consists of a key and value pair. You can add a maximum of ten tags for each VPC. | - **Tag key**: vpc_key1 | + | | | - **Key value**: vpc-01 | + | | The tag key and value must meet the requirements listed in :ref:`Table 3 `. | | + +-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------+ + + .. 
table:: **Table 2** Subnet parameter description + + +------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | Parameter | Description | Example Value | + +========================+=======================================================================================================================================================================================================================================================================================================================================================================================================================================================+=============================+ + | AZ | An AZ is a geographic location with independent power supply and network facilities in a region. AZs are physically isolated, and AZs in the same VPC are interconnected through an internal network. | AZ2 | + +------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | Name | Specifies the subnet name. | subnet-ipv6 | + +------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | IPv4 CIDR Block | Specifies the IPv4 CIDR block for the subnet. This value must be within the VPC CIDR range. | 192.168.0.0/24 | + +------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | IPv6 CIDR Block | Select **Enable** for **IPv6 CIDR Block**. An IPv6 CIDR block will be automatically assigned to the subnet. IPv6 cannot be disabled after the subnet is created. Currently, you are not allowed to specify a custom IPv6 CIDR block. 
| N/A | + +------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | Associated Route Table | Specifies the default route table to which the subnet will be associated. You can change the route table to a custom route table. | Default | + +------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | Advanced Settings | | | + +------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | Gateway | Specifies the gateway address of the subnet. | 192.168.0.1 | + | | | | + | | This IP address is used to communicate with other subnets. | | + +------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | DNS Server Address | By default, two DNS server addresses are configured. You can change them if necessary. When multiple IP addresses are available, separate them with a comma (,). | 100.125.x.x | + +------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | DHCP Lease Time | Specifies the period during which a client can use an IP address automatically assigned by the DHCP server. After the lease time expires, a new IP address will be assigned to the client. If a DHCP lease time is changed, the new lease automatically takes effect when half of the current lease time has passed. To make the change take effect immediately, restart the ECS or log in to the ECS to cause the DHCP lease to automatically renew. 
| 365 days or 300 hours | + | | | | + | | .. caution:: | | + | | | | + | | CAUTION: | | + | | When IPv4/IPv6 dual stack is enabled for the cluster, DHCP unlimited lease cannot be enabled for the selected node subnet. | | + +------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | Tag | Specifies the subnet tag, which consists of a key and value pair. You can add a maximum of ten tags to each subnet. | - **Tag key**: subnet_key1 | + | | | - **Key value**: subnet-01 | + | | The tag key and value must meet the requirements listed in :ref:`Table 4 `. | | + +------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + + .. _cce_bestpractice_00222__en-us_topic_0226102195_en-us_topic_0213478735_en-us_topic_0118066459_table63360804153019: + + .. table:: **Table 3** VPC tag key and value requirements + + +-----------------------+--------------------------------------------------------------------------------+-----------------------+ + | Parameter | Requirement | Example Value | + +=======================+================================================================================+=======================+ + | Tag key | - Cannot be left blank. | vpc_key1 | + | | - Must be unique in a VPC. | | + | | - Can contain a maximum of 36 characters. | | + | | - Can contain letters, digits, underscores (_), and hyphens (-). | | + +-----------------------+--------------------------------------------------------------------------------+-----------------------+ + | Tag value | - Can contain a maximum of 43 characters. | vpc-01 | + | | - Can contain letters, digits, underscores (_), periods (.), and hyphens (-). | | + +-----------------------+--------------------------------------------------------------------------------+-----------------------+ + + .. _cce_bestpractice_00222__en-us_topic_0226102195_en-us_topic_0213478735_en-us_topic_0118066459_table4168255153519: + + .. table:: **Table 4** Subnet tag key and value requirements + + +-----------------------+--------------------------------------------------------------------------------+-----------------------+ + | Parameter | Requirement | Example Value | + +=======================+================================================================================+=======================+ + | Tag key | - Cannot be left blank. | subnet_key1 | + | | - Must be unique for each subnet. | | + | | - Can contain a maximum of 36 characters. | | + | | - Can contain letters, digits, underscores (_), and hyphens (-). | | + +-----------------------+--------------------------------------------------------------------------------+-----------------------+ + | Tag value | - Can contain a maximum of 43 characters. 
| subnet-01 | + | | - Can contain letters, digits, underscores (_), periods (.), and hyphens (-). | | + +-----------------------+--------------------------------------------------------------------------------+-----------------------+ + +#. Click **Create Now**. + +Step 2: Create a CCE Cluster +---------------------------- + +**Creating a CCE cluster** + +#. Log in to the CCE console and create a cluster. + + Complete the network settings as follows. For other configurations, see `Buying a CCE Cluster `__. + + - **Network Model**: Select **Tunnel network**. + - **VPC**: Select the created VPC **vpc-ipv6**. + - **Master Node Subnet**: Select a subnet with IPv6 enabled. + - **IPv4/IPv6 Dual Stack**: Enable this function. After this function is enabled, cluster resources, including nodes and workloads, can be accessed through IPv6 CIDR blocks. + - **Container CIDR Block**: A proper mask must be set for the container CIDR block. The mask determines the number of available nodes in the cluster. If the mask of the container CIDR block in the cluster is set improperly, there will be only a small number of available nodes in the cluster. + + + .. figure:: /_static/images/en-us_image_0000001217540332.png + :alt: **Figure 1** Configuring network settings + + **Figure 1** Configuring network settings + +#. Create a node. + + The CCE console displays the nodes that support IPv6. You can directly select a node. For details, see `Creating a Node `__. + + After the creation is complete, access the cluster details page. Then, click the node name to go to the ECS details page and view the automatically allocated IPv6 address. + +Step 3: Buy a Shared Bandwidth and Adding an IPv6 Address to It +--------------------------------------------------------------- + +By default, the IPv6 address can only be used for private network communication. If you want to use this IPv6 address to access the Internet or be accessed by IPv6 clients on the Internet, buy a shared bandwidth and add the IPv6 address to it. + +If you already have a shared bandwidth, you can add the IPv6 address to the shared bandwidth without purchasing one. + +**Buying a Shared Bandwidth** + +#. Log in to the management console. +#. Click |image2| in the upper left corner of the management console and select a region and a project. +#. Choose **Service List** > **Networking** > **Virtual Private Cloud**. +#. In the navigation pane, choose **Elastic IP and Bandwidth** > **Shared Bandwidths**. +#. In the upper right corner, click **Buy Shared Bandwidth**. On the displayed page, configure parameters as prompted. + + .. table:: **Table 5** Parameter description + + +-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | Parameter | Description | Example Value | + +=======================+=======================================================================================================================================================================================================================================================================================================================================+=======================+ + | Billing Mode | Specifies the billing mode of a shared bandwidth. 
The billing mode can be: | Yearly/Monthly | + | | | | + | | - **Yearly/Monthly**: You pay for the bandwidth by year or month before using it. No charges will be incurred for the bandwidth during its validity period. | | + | | - **Pay-per-use**: You pay for the bandwidth based on the amount of time you use the bandwidth. | | + +-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | Region | Specifies the desired region. Regions are geographic areas that are physically isolated from each other. The networks inside different regions are not connected to each other, so resources cannot be shared across different regions. For lower network latency and faster access to your resources, select the region nearest you. | AP-Singapore | + +-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | Billed By | Specifies the shared bandwidth billing factor. | Select **Bandwidth**. | + +-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | Bandwidth | Specifies the shared bandwidth size in Mbit/s. The minimum bandwidth that can be purchased is 5 Mbit/s. | 10 | + +-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | Bandwidth Name | Specifies the name of the shared bandwidth. | Bandwidth-001 | + +-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | Enterprise Project | When assigning the shared bandwidth, you can add the shared bandwidth to an enabled enterprise project. | default | + | | | | + | | An enterprise project facilitates project-level management and grouping of cloud resources and users. The name of the default project is **default**. | | + | | | | + | | For details about how to create and manage enterprise projects, see `Enterprise Management User Guide `__. 
| | + +-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | Required Duration | Specifies the required duration of the shared bandwidth to be purchased. Configure this parameter only in yearly/monthly billing mode. | 2 months | + +-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + +#. Click **Next** to confirm the configurations and then click **Buy Now**. + +**Adding an IPv6 Address to a Shared Bandwidth** + +#. On the **Shared Bandwidths** page, choose **More** > **Add Public IP Address** in the **Operation** column. + + + .. figure:: /_static/images/en-us_image_0275263250.png + :alt: **Figure 2** Adding an IPv6 address to a shared bandwidth + + **Figure 2** Adding an IPv6 address to a shared bandwidth + +#. Add the IPv6 address to the shared bandwidth. + + + .. figure:: /_static/images/en-us_image_0261817740.png + :alt: **Figure 3** Adding a dual-stack NIC IPv6 address + + **Figure 3** Adding a dual-stack NIC IPv6 address + +#. Click **OK**. + +**Verifying the Result** + +Log in to an ECS and ping an IPv6 address on the Internet to verify the connectivity. **ping6 ipv6.baidu.com** is used as an example here. The execution result is displayed in :ref:`Figure 4 `. + +.. _cce_bestpractice_00222__en-us_topic_0226102195_en-us_topic_0213478735_en-us_topic_0118066459_fig12339172511196: + +.. figure:: /_static/images/en-us_image_0261817741.png + :alt: **Figure 4** Result verification + + **Figure 4** Result verification + +.. |image1| image:: /_static/images/en-us_image_0261817733.png +.. |image2| image:: /_static/images/en-us_image_0261817737.png diff --git a/doc/best-practice/source/cluster/index.rst b/doc/best-practice/source/cluster/index.rst new file mode 100644 index 0000000..2953e73 --- /dev/null +++ b/doc/best-practice/source/cluster/index.rst @@ -0,0 +1,14 @@ +:original_name: cce_bestpractice_0050.html + +.. _cce_bestpractice_0050: + +Cluster +======= + +- :ref:`Creating an IPv4/IPv6 Dual-Stack Cluster in CCE ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + creating_an_ipv4_ipv6_dual-stack_cluster_in_cce diff --git a/doc/best-practice/source/devops/index.rst b/doc/best-practice/source/devops/index.rst new file mode 100644 index 0000000..0a70109 --- /dev/null +++ b/doc/best-practice/source/devops/index.rst @@ -0,0 +1,14 @@ +:original_name: cce_bestpractice_0322.html + +.. _cce_bestpractice_0322: + +DevOps +====== + +- :ref:`Installing, Deploying, and Interconnecting Jenkins with SWR and CCE Clusters ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/index diff --git a/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/index.rst b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/index.rst new file mode 100644 index 0000000..396d155 --- /dev/null +++ b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/index.rst @@ -0,0 +1,18 @@ +:original_name: cce_bestpractice_0046.html + +.. _cce_bestpractice_0046: + +Installing, Deploying, and Interconnecting Jenkins with SWR and CCE Clusters +============================================================================ + +- :ref:`Solution Overview ` +- :ref:`Resource and Cost Planning ` +- :ref:`Procedure ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + solution_overview + resource_and_cost_planning + procedure/index diff --git a/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/configuring_jenkins_agent.rst b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/configuring_jenkins_agent.rst new file mode 100644 index 0000000..c678b83 --- /dev/null +++ b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/configuring_jenkins_agent.rst @@ -0,0 +1,302 @@ +:original_name: cce_bestpractice_0068.html + +.. _cce_bestpractice_0068: + +Configuring Jenkins Agent +========================= + +After Jenkins is installed, the following information may display, indicating that Jenkins uses a Master for local build and Agents are not configured. + +|image1| + +If you install Jenkins using a Master, you can build a pipeline after performing operations in :ref:`Installing and Deploying Jenkins Master `. For details, see :ref:`Using Jenkins to Build a Pipeline `. + +If you install Jenkins using a Master and Agents, you can select either of the following solutions to configure Agents. + +- :ref:`Fixed Agent `: The Agent container keeps running and occupying cluster resources after a job is built. This configuration is simple. +- :ref:`Dynamic Agent `: An Agent container is dynamically created during job build and is killed after the job is built. In this way, resources can be dynamically allocated and the resource utilization is high. This configuration is complex. + +In this section, the Agent is containerized using the **jenkins/inbound-agent:4.13.3-1** image. + +.. _cce_bestpractice_0068__section4389814132313: + +Adding a Fixed Agent to Jenkins +------------------------------- + +#. Log in to the Jenkins dashboard, click **Manage Jenkins** on the left, and choose **System Configuration** > **Manage nodes and clouds**. + +#. .. _cce_bestpractice_0068__li71765242468: + + Click **New Node** on the left, enter the node name **fixed-agent** (which can be customized), and select **Permanent Agent** for **Type**. + + |image2| + +#. .. _cce_bestpractice_0068__li161761824134615: + + Specify the following node information: + + - **Number of executors**: The default value is **1**. Set this parameter as required. + - **Remote root directory**: Enter **/home/jenkins/agent**. + - Launch method: Select **Launch agent by connecting it to the controller**. + + Retain the values for other parameters and click **Save**. + + |image3| + +#. .. 
_cce_bestpractice_0068__li1940395317112: + + On the **Nodes** page, click the new node. The Agent status is displayed as disconnected, and a command for connecting the node to Jenkins is provided. This command applies to VM-based installation. In this example, container-based installation is used. Therefore, you only need to copy the secret, as shown in the following figure. + + |image4| +
+#. Log in to the CCE console and click the target cluster. Choose **Workloads** > **Deployments** and click **Create Workload** on the right. +
+#. Configure basic workload parameters. + + - **Workload Name**: agent (user-defined) + - **Namespace**: Select the namespace where Jenkins will be deployed. You can create a namespace. + - **Pods**: Set it to **1**. + + |image5| +
+#. Configure basic container parameters. + + - **Image Name**: Enter **jenkins/inbound-agent:4.13.3-1**. The image version may change with time. Select an image version as required or use the latest version. + + - **CPU Quota**: In this example, set **Limit** to **2** cores. + + - **Memory Quota**: Set **Limit** to **2048** MiB. + + - .. _cce_bestpractice_0068__li095142718510: + + **Privileged Container**: Must be enabled so that the container can obtain permissions on the host. Otherwise, Docker commands cannot be executed in the container. + + Retain the default values for other parameters. + + |image6| +
+#. Configure the following environment variables for the container: + + - **JENKINS_URL**: Jenkins access address. Enter the IP address and port 8080 set in :ref:`6 ` (ports 8080 and 50000 must be enabled for this IP address), for example, **http://10.247.222.254:8080**. + - **JENKINS_AGENT_NAME**: name of the Agent set in :ref:`2 `. In this example, the value is **fixed-agent**. + - **JENKINS_SECRET**: secret copied from :ref:`4 `. + - **JENKINS_AGENT_WORKDIR**: remote work directory configured in :ref:`3 `, that is, **/home/jenkins/agent**. + + |image7| +
+#. .. _cce_bestpractice_0068__li1063018426368: + + Add permissions to the Agent container so that Docker commands can be executed in it. + + a. Ensure that **Privileged Container** is enabled in :ref:`3 `. + + b. Choose **Data Storage** > **Local Volumes**, add a local volume, and mount the host path to the corresponding container path. + + .. table:: **Table 1** Mounting path + + +--------------+-----------------------------+--------------------------------------------+ + | Storage Type | Host Path | Mounting Path | + +==============+=============================+============================================+ + | hostPath | **/var/run/docker.sock** | **/var/run/docker.sock** | + +--------------+-----------------------------+--------------------------------------------+ + | hostPath | **/usr/bin/docker** | **/usr/bin/docker** | + +--------------+-----------------------------+--------------------------------------------+ + | hostPath | **/usr/lib64/libltdl.so.7** | **/usr/lib/x86_64-linux-gnu/libltdl.so.7** | + +--------------+-----------------------------+--------------------------------------------+ + | hostPath | **/usr/bin/kubectl** | **/usr/local/bin/kubectl** | + +--------------+-----------------------------+--------------------------------------------+ + + After the mounting is complete, the page shown in :ref:`Figure 1 ` is displayed. + + .. _cce_bestpractice_0068__cce_bestpractice_0067_fig12199840155011: + + .. figure:: /_static/images/en-us_image_0000001474360185.png + :alt: **Figure 1** Mounting the host paths to the corresponding container paths + + **Figure 1** Mounting the host paths to the corresponding container paths + + c. In **Security Context**, set **User ID** to **0** (user **root**). + + + .. figure:: /_static/images/en-us_image_0000001399198561.png + :alt: **Figure 2** Configuring the user + + **Figure 2** Configuring the user
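+
+   For reference, the console configuration in the preceding steps corresponds roughly to a Deployment manifest like the one below. This is only a simplified sketch of what the resulting workload might look like: the workload name **agent**, the namespace **cicd**, the label **app: agent**, and the Jenkins address are the example values used in this section, the secret placeholder must be replaced with the secret copied from the node details page, and CCE-specific fields added by the console are omitted.
+
+   .. code-block::
+
+      apiVersion: apps/v1
+      kind: Deployment
+      metadata:
+        name: agent            # example workload name
+        namespace: cicd        # example namespace; use the namespace selected above
+      spec:
+        replicas: 1
+        selector:
+          matchLabels:
+            app: agent
+        template:
+          metadata:
+            labels:
+              app: agent
+          spec:
+            containers:
+              - name: agent
+                image: jenkins/inbound-agent:4.13.3-1
+                env:                                   # environment variables described above
+                  - name: JENKINS_URL
+                    value: http://10.247.222.254:8080  # example address; use your own
+                  - name: JENKINS_AGENT_NAME
+                    value: fixed-agent
+                  - name: JENKINS_SECRET
+                    value: "<secret copied from the node details page>"
+                  - name: JENKINS_AGENT_WORKDIR
+                    value: /home/jenkins/agent
+                resources:
+                  limits:
+                    cpu: "2"
+                    memory: 2048Mi
+                securityContext:                       # privileged container running as root
+                  privileged: true
+                  runAsUser: 0
+                volumeMounts:                          # container paths from Table 1
+                  - name: docker-sock
+                    mountPath: /var/run/docker.sock
+                  - name: docker-bin
+                    mountPath: /usr/bin/docker
+                  - name: libltdl
+                    mountPath: /usr/lib/x86_64-linux-gnu/libltdl.so.7
+                  - name: kubectl-bin
+                    mountPath: /usr/local/bin/kubectl
+            volumes:                                   # host paths from Table 1
+              - name: docker-sock
+                hostPath:
+                  path: /var/run/docker.sock
+              - name: docker-bin
+                hostPath:
+                  path: /usr/bin/docker
+              - name: libltdl
+                hostPath:
+                  path: /usr/lib64/libltdl.so.7
+              - name: kubectl-bin
+                hostPath:
+                  path: /usr/bin/kubectl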
+
+#. Retain the default settings for **Advanced Settings** and click **Create Workload**. +
+#. Go to the Jenkins page and refresh it. The node status changes to **In sync**. + + |image8| + + .. note:: + + After the Agent is configured, you are advised to set the number of concurrent build jobs of the Master to **0**. That is, you use the Agent for build. For details, see :ref:`Modifying the Number of Concurrent Build Jobs `. +
+.. _cce_bestpractice_0068__section178515154347: +
+Setting a Dynamic Agent for Jenkins +----------------------------------- +
+#. **Install the plug-in.** + + On the Jenkins dashboard page, click **Manage Jenkins** on the left and choose **System Configuration** > **Manage Plugins**. On the **Available** tab page, filter and install **Kubernetes Plugin** and **Kubernetes CLI Plugin**. + + |image9| + + The plug-in version may change with time. Select a plug-in version as required. + + - `Kubernetes Plugin `__: 3734.v562b_b_a_627ea_c + + It is used to run dynamic Agents in the Kubernetes cluster, create a Kubernetes pod for each started Agent, and stop the pod after each build is complete. + + - `Kubernetes CLI Plugin `__: 1.10.3 + + It allows kubectl to be configured for jobs so that they can interact with Kubernetes clusters. + + .. note:: + + The Jenkins plug-ins are provided by their maintainers and may be updated to fix security issues. +
+#. .. _cce_bestpractice_0068__li692213493137: + + **Add cluster access credentials to Jenkins.** + + Add cluster access credentials to Jenkins in advance. For details, see :ref:`Setting Cluster Access Credentials `. +
+#. **Specify basic cluster information.** + + On the Jenkins dashboard page, click **Manage Jenkins** on the left and choose **System Configuration** > **Manage nodes and clouds**. Click **Configure Clouds** on the left to configure the cluster. Click **Add a new cloud** and select **Kubernetes**. The cluster name can be customized. +
+#. **Enter Kubernetes Cloud details.** + + Set the following cluster parameters and retain the values for other parameters, as shown in :ref:`Figure 3 `. + + - **Kubernetes URL**: cluster API server address. You can enter **https://kubernetes.default.svc.cluster.local:443**. + - **Credentials**: Select the cluster credential added in :ref:`2 `. You can click **Test Connection** to check whether the cluster is connected. + - **Jenkins URL**: Jenkins access address. Enter the IP address and port 8080 set in :ref:`6 ` **(ports 8080 and 50000 must be enabled for the IP address, that is, the intra-cluster access address)**, for example, **http://10.247.222.254:8080**. + + .. _cce_bestpractice_0068__fig18911427111212: + + .. figure:: /_static/images/en-us_image_0000001349986824.png + :alt: **Figure 3** Example + + **Figure 3** Example +
+#. **Pod Template**: Click **Add Pod Template > Pod Template details** and set pod template parameters. + + - Set the basic parameters of the pod template, as shown in :ref:`Figure 4 `. + + - **Name**: **jenkins-agent** + - **Namespace**: **cicd** + - **Labels**: **jenkins-agent** + - **Usage**: Select **Use this node as much as possible**. + + .. 
_cce_bestpractice_0068__fig9911122712120: + + .. figure:: /_static/images/en-us_image_0000001399744097.png + :alt: **Figure 4** Basic parameters of the pod template + + **Figure 4** Basic parameters of the pod template + + - Add a container. Click **Add Container > Container Template**. :ref:`Figure 5 ` shows the parameters. + + - **Name**: The value must be **jnlp**. + - **Docker image**: **jenkins/inbound-agent:4.13.3-1**. The image version may change with time. Select an image version as required or use the latest version. + - **Working directory**: **/home/jenkins/agent** is selected by default. + - **Command to run**/**Arguments to pass to the command**: Delete the existing default value and leave these two parameters empty. + - **Allocate pseudo-TTY**: Select this parameter. + - Select **Run in privileged mode** and set **Run As User ID** to **0** (**root** user). + + .. _cce_bestpractice_0068__fig61355244198: + + .. figure:: /_static/images/en-us_image_0000001350206690.png + :alt: **Figure 5** Container template parameters + + **Figure 5** Container template parameters + + - Add a volume: Choose **Add Volume > Host Path Volume** to mount the host path in :ref:`Table 2 ` to the corresponding path of the container. + + .. _cce_bestpractice_0068__table113644311271: + + .. table:: **Table 2** Mounting path + + +--------------+-----------------------------+--------------------------------------------+ + | Storage Type | Host Path | Mounting Path | + +==============+=============================+============================================+ + | hostPath | **/var/run/docker.sock** | **/var/run/docker.sock** | + +--------------+-----------------------------+--------------------------------------------+ + | hostPath | **/usr/bin/docker** | **/usr/bin/docker** | + +--------------+-----------------------------+--------------------------------------------+ + | hostPath | **/usr/lib64/libltdl.so.7** | **/usr/lib/x86_64-linux-gnu/libltdl.so.7** | + +--------------+-----------------------------+--------------------------------------------+ + | hostPath | **/usr/bin/kubectl** | **/usr/local/bin/kubectl** | + +--------------+-----------------------------+--------------------------------------------+ + + After the mounting is complete, the page shown in :ref:`Figure 6 ` is displayed. + + .. _cce_bestpractice_0068__fig1365113122713: + + .. figure:: /_static/images/en-us_image_0000001399673949.png + :alt: **Figure 6** Mounting the host paths to the corresponding container paths + + **Figure 6** Mounting the host paths to the corresponding container paths + + - **Run As User ID**: **0** (**root** user) + + - **Workspace Volume**: working directory of the agent. Persistence is recommended. Select **Host Path Workspace Volume** and set **Host path** to **/home/jenkins/agent**. + + |image10| + +#. Click **Save**. + + .. note:: + + After the Agent is configured, you are advised to set the number of concurrent build jobs of the Master to **0**. That is, you use the Agent for build. For details, see :ref:`Modifying the Number of Concurrent Build Jobs `. + +.. _cce_bestpractice_0068__section18661165610151: + +Setting Cluster Access Credentials +---------------------------------- + +The certificate file that can be identified in Jenkins is in PKCS#12 format. Therefore, convert the cluster certificate to a PFX certificate file in PKCS#12 format. + +#. Log in to the CCE console and go to the cluster console. Choose **Cluster Information > Connection Information** to download the cluster certificate. 
The downloaded certificate contains three files: **ca.crt**, **client.crt**, and **client.key**. + + |image11| + +#. .. _cce_bestpractice_0068__li76361310202119: + + Log in to a Linux host, place the three certificate files in the same directory, and use OpenSSL to convert the certificate into a **cert.pfx** certificate. After the certificate is generated, the system prompts you to enter a custom password. + + .. code-block:: + + openssl pkcs12 -export -out cert.pfx -inkey client.key -in client.crt -certfile ca.crt + +#. On the Jenkins console, choose **Manage Jenkins** > **Manage Credentials** and click **Global**. You can also create a domain. + + |image12| + +#. Click **Add Credential**. + + - **Kind**: Select **Certificate**. + - **Scope**: Select **Global**. + - **Certificate**: Select **Upload PKCS#12 certificate** and upload the **cert.pfx** file generated in :ref:`2 `. + - **Password**: The password customized during **cert.pfx** conversion. + - **ID**: Set this parameter to **k8s-test-cert**, which can be customized. + + |image13| + +.. |image1| image:: /_static/images/en-us_image_0000001465972145.png +.. |image2| image:: /_static/images/en-us_image_0000001465973233.png +.. |image3| image:: /_static/images/en-us_image_0000001416537408.png +.. |image4| image:: /_static/images/en-us_image_0000001466618025.png +.. |image5| image:: /_static/images/en-us_image_0000001348551216.png +.. |image6| image:: /_static/images/en-us_image_0000001399151117.png +.. |image7| image:: /_static/images/en-us_image_0000001399186601.png +.. |image8| image:: /_static/images/en-us_image_0000001416065764.png +.. |image9| image:: /_static/images/en-us_image_0000001471430697.png +.. |image10| image:: /_static/images/en-us_image_0000001416735230.png +.. |image11| image:: /_static/images/en-us_image_0000001400816961.png +.. |image12| image:: /_static/images/en-us_image_0000001416735446.png +.. |image13| image:: /_static/images/en-us_image_0000001400577445.png diff --git a/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/index.rst b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/index.rst new file mode 100644 index 0000000..6bfe086 --- /dev/null +++ b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/index.rst @@ -0,0 +1,20 @@ +:original_name: cce_bestpractice_0345.html + +.. _cce_bestpractice_0345: + +Procedure +========= + +- :ref:`Installing and Deploying Jenkins Master ` +- :ref:`Configuring Jenkins Agent ` +- :ref:`Using Jenkins to Build a Pipeline ` +- :ref:`Interconnecting Jenkins with RBAC of Kubernetes Clusters (Example) ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + installing_and_deploying_jenkins_master + configuring_jenkins_agent + using_jenkins_to_build_a_pipeline + interconnecting_jenkins_with_rbac_of_kubernetes_clusters_example diff --git a/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/installing_and_deploying_jenkins_master.rst b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/installing_and_deploying_jenkins_master.rst new file mode 100644 index 0000000..9118af2 --- /dev/null +++ b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/installing_and_deploying_jenkins_master.rst @@ -0,0 +1,196 @@ +:original_name: cce_bestpractice_0067.html + +.. _cce_bestpractice_0067: + +Installing and Deploying Jenkins Master +======================================= + +.. note:: + + On the Jenkins page, the UI strings in Chinese and English are different. The screenshots in this section are for your reference only. + +Selecting an Image +------------------ + +Select an image from Docker Hub. For this test, select **jenkinsci/blueocean:2.346.3**, which is bound with all Blue Ocean add-ons and functions. For details, see `Installing Jenkins `__. + +Preparations +------------ + +- Before creating a containerized workload, buy a cluster (the cluster must contain at least one node with four vCPUs and 8 GB memory). For details, see `Buying a CCE Cluster `__. +- To enable access to a workload from a public network, ensure that an elastic IP address (EIP) has been bound to or a load balancer has been configured for at least one node in the cluster. + +Installing and Deploying Jenkins on CCE +--------------------------------------- + +#. Log in to the CCE console, choose **Workloads** > **Deployments** and click **Create Workload** on the upper right corner. + +#. Configure basic workload parameters. + + - **Workload Name**: jenkins (user-defined) + - **Namespace**: Select the namespace where Jenkins will be deployed. You can create a namespace. + - **Pods**: Set it to **1**. + + |image1| + +#. Configure basic container parameters. + + - **Image Name**: Enter **jenkinsci/blueocean**. Select an image tag as required. In this example, the latest tag is used by default. + + - **CPU Quota**: Set **Limit** to **2** cores. + + - **Memory Quota**: Set **Limit** to **2048** MiB. + + - .. _cce_bestpractice_0067__li095142718510: + + **Privileged Container**: If Jenkins is deployed with a single Master, enable **Privileged Container** so that the container can perform operations on the host. Otherwise, Docker commands cannot be executed in the Jenkins Master container. + + Retain the default values for other parameters. + + + .. figure:: /_static/images/en-us_image_0000001416038826.png + :alt: **Figure 1** Basic container parameters + + **Figure 1** Basic container parameters + +#. Choose **Data Storage** > **PersistentVolumeClaims (PVCs)** and add a persistent volume. + + In the displayed dialog box, select a cloud volume and enter **/var/jenkins_home** in the mount path to mount a cloud volume for Jenkins to store data persistently. + + .. note:: + + The cloud storage type can be **EVS** or **SFS**. If no cloud storage is available, click **Create PVC**. + + If you select **EVS**, the AZ of the EVS disk must be the same as that of the node. + + + .. 
figure:: /_static/images/en-us_image_0000001346958352.png + :alt: **Figure 2** Adding a cloud volume + + **Figure 2** Adding a cloud volume +
+#. Add permissions to the Jenkins container so that Docker commands can be executed in it. + + a. Ensure that **Privileged Container** is enabled in :ref:`3 `. + + b. Choose **Data Storage** > **Local Volumes**, add a local volume, and mount the host path to the corresponding container path. + + .. table:: **Table 1** Mounting path + + +--------------+-----------------------------+--------------------------------------------+ + | Storage Type | Host Path | Mounting Path | + +==============+=============================+============================================+ + | hostPath | **/var/run/docker.sock** | **/var/run/docker.sock** | + +--------------+-----------------------------+--------------------------------------------+ + | hostPath | **/usr/bin/docker** | **/usr/bin/docker** | + +--------------+-----------------------------+--------------------------------------------+ + | hostPath | **/usr/lib64/libltdl.so.7** | **/usr/lib/x86_64-linux-gnu/libltdl.so.7** | + +--------------+-----------------------------+--------------------------------------------+ + | hostPath | **/usr/bin/kubectl** | **/usr/local/bin/kubectl** | + +--------------+-----------------------------+--------------------------------------------+ + + After the mounting is complete, the page shown in :ref:`Figure 3 ` is displayed. + + .. _cce_bestpractice_0067__fig12199840155011: + + .. figure:: /_static/images/en-us_image_0000001474360185.png + :alt: **Figure 3** Mounting the host paths to the corresponding container paths + + **Figure 3** Mounting the host paths to the corresponding container paths + + c. In **Security Context**, set **User ID** to **0** (user **root**). + + + .. figure:: /_static/images/en-us_image_0000001347115504.png + :alt: **Figure 4** Configuring the user + + **Figure 4** Configuring the user +
+#. .. _cce_bestpractice_0067__li46301742113619: + + Specify the access mode in **Service Configuration**. + + The Jenkins container image exposes two ports: 8080 and 50000. Configure them separately. Port 8080 is used for web login, and port 50000 is used for the connection between the Master and Agents. + + In this example, two Services are created: + + - **LoadBalancer**: provides external web access using port 8080. You can also select **NodePort** to provide external access. + + Set the Service name to **jenkins** (customizable), the container port to **8080**, the access port to **8080**, and retain the default values for other parameters. + + - **ClusterIP**: used by Agents to connect to the Master. Agents must reach the web port and the agent port at the same IP address, that is, the addresses used for **jenkins-web** and **jenkins-agent** must be the same. Therefore, this Service includes both port 8080 for web access and port 50000 for agent access. + + Set the Service name to **agent** (customizable), the container port 1 to **8080**, the access port 1 to **8080**, the container port 2 to **50000**, the access port 2 to **50000**, and retain the default values for other parameters. + + .. note:: + + In this example, Agents and the Master are deployed in the same cluster. Therefore, the Agents can use the ClusterIP Service to connect to the Master. + + If Agents need to connect to the Master across clusters or through the public network, select a proper Service type. Note that the IP addresses used for **jenkins-web** and **jenkins-agent** must be the same. Therefore, **ports 8080 and 50000 must be enabled for the IP address connected to jenkins-agent**. For addresses used only for web access, enable only port 8080. + + + .. figure:: /_static/images/en-us_image_0000001349649242.png + :alt: **Figure 5** Adding a Service + + **Figure 5** Adding a Service
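+
+   For reference, the two Services described above correspond roughly to the following manifests. This is only a simplified sketch: the Service names, the namespace **cicd**, and the selector **app: jenkins** are assumptions that must match the labels of your Jenkins workload, and the ELB-related annotations that CCE adds when a LoadBalancer Service is created on the console are omitted.
+
+   .. code-block::
+
+      apiVersion: v1
+      kind: Service
+      metadata:
+        name: jenkins          # external web access (LoadBalancer)
+        namespace: cicd        # example namespace
+      spec:
+        type: LoadBalancer
+        selector:
+          app: jenkins         # must match the labels of the Jenkins workload
+        ports:
+          - name: web
+            port: 8080
+            targetPort: 8080
+      ---
+      apiVersion: v1
+      kind: Service
+      metadata:
+        name: agent            # used by Agents to reach the Master (ClusterIP)
+        namespace: cicd
+      spec:
+        type: ClusterIP
+        selector:
+          app: jenkins
+        ports:
+          - name: web
+            port: 8080
+            targetPort: 8080
+          - name: agent
+            port: 50000
+            targetPort: 50000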
+
+#. Retain the default settings for **Advanced Settings** and click **Create Workload**. +
+#. Click **Back to Deployment List** to view the Deployment status. If the workload is in the **Running** status, the Jenkins application is accessible. + + + .. figure:: /_static/images/en-us_image_0000001398072089.png + :alt: **Figure 6** Viewing the workload status + + **Figure 6** Viewing the workload status +
+Logging In and Initializing Jenkins +----------------------------------- +
+#. On the CCE console, click the target cluster. Choose **Networking** in the navigation pane. On the **Services** tab page, view the Jenkins access mode. + + + .. figure:: /_static/images/en-us_image_0000001349490578.png + :alt: **Figure 7** Access mode corresponding to port 8080 + + **Figure 7** Access mode corresponding to port 8080 +
+#. Enter the EIP of the load balancer and port 8080 (**EIP:8080**) in the browser address bar to visit the Jenkins configuration page. + + When you visit the page for the first time, you are prompted to obtain the initial administrator password. You can obtain the password from the Jenkins pod. Before running the following commands, connect to the cluster using kubectl. For details, see `Connecting to a Cluster Using kubectl `__. + + .. code-block:: + + # kubectl get pod -n cicd + NAME READY STATUS RESTARTS AGE + jenkins-7c69b6947c-5gvlm 1/1 Running 0 17m + # kubectl exec -it jenkins-7c69b6947c-5gvlm -n cicd -- /bin/sh + # cat /var/jenkins_home/secrets/initialAdminPassword + b10eabe29a9f427c9b54c01a9c3383ae +
+#. Upon the first login, the system prompts you to install the recommended add-ons and create an administrator. After the initial configuration is complete, the Jenkins page is displayed. + + |image2| +
+.. _cce_bestpractice_0067__section270062718585: +
+Modifying the Number of Concurrent Build Jobs +--------------------------------------------- +
+#. On the Jenkins dashboard page, click **Manage Jenkins** on the left, choose **System Configuration** > **Manage nodes and clouds**, and select **Configure** from the drop-down list of the target node. + + |image3| + + .. note:: + + - You can modify the number of concurrent build jobs on both Master and Agent. The following uses Master as an example. + - If the :ref:`Master is used with Agents `, you are advised to set the number of concurrent build jobs of the Master to **0**. That is, all build jobs are performed using Agents. If a :ref:`single Master ` is used, you do not need to change the value to **0**. +
+#. Modify the maximum number of concurrent build jobs. In this example, the value is changed to **2**. You can change the value as required. + + |image4| +
+.. |image1| image:: /_static/images/en-us_image_0000001346954904.png +.. |image2| image:: /_static/images/en-us_image_0000001465971797.png +.. |image3| image:: /_static/images/en-us_image_0000001471311349.png +.. 
|image4| image:: /_static/images/en-us_image_0000001416531766.png diff --git a/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/interconnecting_jenkins_with_rbac_of_kubernetes_clusters_example.rst b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/interconnecting_jenkins_with_rbac_of_kubernetes_clusters_example.rst new file mode 100644 index 0000000..ced7ba4 --- /dev/null +++ b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/interconnecting_jenkins_with_rbac_of_kubernetes_clusters_example.rst @@ -0,0 +1,218 @@ +:original_name: cce_bestpractice_0070.html + +.. _cce_bestpractice_0070: + +Interconnecting Jenkins with RBAC of Kubernetes Clusters (Example) +================================================================== + +Prerequisites +------------- + +RBAC must be enabled for the cluster. + +Scenario 1: Namespace-based Permissions Control +----------------------------------------------- + +**Create a service account and a role, and add a RoleBinding.** + +.. code-block:: + + $ kubectl create ns dev + $ kubectl -n dev create sa dev + + $ cat < dev-user-role.yml + kind: Role + apiVersion: rbac.authorization.k8s.io/v1 + metadata: + namespace: dev + name: dev-user-pod + rules: + - apiGroups: ["*"] + resources: ["deployments", "pods", "pods/log"] + verbs: ["get", "watch", "list", "update", "create", "delete"] + EOF + kubectl create -f dev-user-role.yml + + $ kubectl create rolebinding dev-view-pod \ + --role=dev-user-pod \ + --serviceaccount=dev:dev \ + --namespace=dev + +**Generate the kubeconfig file of a specified service account.** + +.. note:: + + - In clusters earlier than v1.21, a token is obtained by mounting the secret of the service account to a pod. Tokens obtained this way are permanent. This approach is no longer recommended starting from version 1.21. Service accounts will stop auto creating secrets in clusters from version 1.25. + + In clusters of version 1.21 or later, you can use the `TokenRequest `__ API to `obtain the token `__ and use the projected volume to mount the token to the pod. Such tokens are valid for a fixed period. When the mounting pod is deleted, the token automatically becomes invalid. For details, see `Service Account Token Security Improvement `__. + + - If you need a token that never expires, you can also `manually manage secrets for service accounts `__. Although a permanent service account token can be manually created, you are advised to use a short-lived token by calling the `TokenRequest `__ API for higher security. + +.. code-block:: + + $ SECRET=$(kubectl -n dev get sa dev -o go-template='{{range .secrets}}{{.name}}{{end}}') + $ API_SERVER="https://172.22.132.51:6443" + $ CA_CERT=$(kubectl -n dev get secret ${SECRET} -o yaml | awk '/ca.crt:/{print $2}') + $ cat < dev.conf + apiVersion: v1 + kind: Config + clusters: + - cluster: + certificate-authority-data: $CA_CERT + server: $API_SERVER + name: cluster + EOF + + $ TOKEN=$(kubectl -n dev get secret ${SECRET} -o go-template='{{.data.token}}') + $ kubectl config set-credentials dev-user \ + --token=`echo ${TOKEN} | base64 -d` \ + --kubeconfig=dev.conf + + $ kubectl config set-context default \ + --cluster=cluster \ + --user=dev-user \ + --kubeconfig=dev.conf + + $ kubectl config use-context default \ + --kubeconfig=dev.conf + +Verification in the CLI + +.. 
code-block:: + + $ kubectl --kubeconfig=dev.conf get po + Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:dev:dev" cannot list pods in the namespace "default" + + $ kubectl -n dev --kubeconfig=dev.conf run nginx --image nginx --port 80 --restart=Never + $ kubectl -n dev --kubeconfig=dev.conf get po + NAME READY STATUS RESTARTS AGE + nginx 1/1 Running 0 39s + +**Verify whether the permissions meet the expectation in Jenkins.** + +#. Add the kubeconfig file with permissions control settings to Jenkins. + +#. Start the Jenkins job. In this example, Jenkins fails to be deployed in namespace **default** but is successfully deployed in namespace **dev**. + + |image1| + + |image2| + +Scenario 2: Resource-based Permissions Control +---------------------------------------------- + +#. Generate the service account, role, and binding. + + .. code-block:: + + kubectl -n dev create sa sa-test0304 + + cat < test0304-role.yml + kind: Role + apiVersion: rbac.authorization.k8s.io/v1 + metadata: + namespace: dev + name: role-test0304 + rules: + - apiGroups: ["*"] + resources: ["deployments"] + resourceNames: ["tomcat03", "tomcat04"] + verbs: ["get", "update", "patch"] + EOF + kubectl create -f test0304-role.yml + + kubectl create rolebinding test0304-bind \ + --role=role-test0304 \ + --serviceaccount=dev:sa-test0304\ + --namespace=dev + +#. Generate the kubeconfig file. + + .. note:: + + - In clusters earlier than v1.21, a token is obtained by mounting the secret of the service account to a pod. Tokens obtained this way are permanent. This approach is no longer recommended starting from version 1.21. Service accounts will stop auto creating secrets in clusters from version 1.25. + + In clusters of version 1.21 or later, you can use the `TokenRequest `__ API to `obtain the token `__ and use the projected volume to mount the token to the pod. Such tokens are valid for a fixed period. When the mounting pod is deleted, the token automatically becomes invalid. For details, see `Service Account Token Security Improvement `__. + + - If you need a token that never expires, you can also `manually manage secrets for service accounts `__. Although a permanent service account token can be manually created, you are advised to use a short-lived token by calling the `TokenRequest `__ API for higher security. + + .. code-block:: + + SECRET=$(kubectl -n dev get sa sa-test0304 -o go-template='{{range .secrets}}{{.name}}{{end}}') + API_SERVER=" https://192.168.0.153:5443" + CA_CERT=$(kubectl -n dev get secret ${SECRET} -o yaml | awk '/ca.crt:/{print $2}') + cat < test0304.conf + apiVersion: v1 + kind: Config + clusters: + - cluster: + certificate-authority-data: $CA_CERT + server: $API_SERVER + name: cluster + EOF + + TOKEN=$(kubectl -n dev get secret ${SECRET} -o go-template='{{.data.token}}') + kubectl config set-credentials test0304-user \ + --token=`echo ${TOKEN} | base64 -d` \ + --kubeconfig=test0304.conf + + kubectl config set-context default \ + --cluster=cluster \ + --user=test0304-user \ + --kubeconfig=test0304.conf + + kubectl config use-context default \ + --kubeconfig=test0304.conf + +#. Verify that Jenkins is running as expected. + + In the pipeline script, update the Deployments of tomcat03, tomcat04, and tomcat05 in sequence. + + .. code-block:: + + try { + kubernetesDeploy( + kubeconfigId: "test0304", + configs: "test03.yaml") + println "hooray, success" + } catch (e) { + println "oh no! Deployment failed! 
" + println e + } + echo "test04" + try { + kubernetesDeploy( + kubeconfigId: "test0304", + configs: "test04.yaml") + println "hooray, success" + } catch (e) { + println "oh no! Deployment failed! " + println e + } + echo "test05" + try { + kubernetesDeploy( + kubeconfigId: "test0304", + configs: "test05.yaml") + println "hooray, success" + } catch (e) { + println "oh no! Deployment failed! " + println e + } + + Viewing the running result: + + + .. figure:: /_static/images/en-us_image_0266405132.png + :alt: **Figure 1** test03 + + **Figure 1** test03 + + + .. figure:: /_static/images/en-us_image_0266405133.png + :alt: **Figure 2** test04 + + **Figure 2** test04 + +.. |image1| image:: /_static/images/en-us_image_0266402292.png +.. |image2| image:: /_static/images/en-us_image_0266402293.png diff --git a/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/using_jenkins_to_build_a_pipeline.rst b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/using_jenkins_to_build_a_pipeline.rst new file mode 100644 index 0000000..85da797 --- /dev/null +++ b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/procedure/using_jenkins_to_build_a_pipeline.rst @@ -0,0 +1,124 @@ +:original_name: cce_bestpractice_0069.html + +.. _cce_bestpractice_0069: + +Using Jenkins to Build a Pipeline +================================= + +.. _cce_bestpractice_0069__section259016810206: + +Obtaining a Long-Term Valid Login Command +----------------------------------------- + +During Jenkins installation and deployment, the Docker commands have been configured in the container (see :ref:`9 `). Therefore, no additional configuration is required for interconnecting Jenkins with SWR. You can directly run the Docker commands. You only need to obtain a long-term valid SWR login command. For details, see `Obtaining a Long-Term Valid Login Command `__. + +For example, the command of this account is as follows: + +.. code-block:: + + docker login -u ap-southeast-1@xxxxx -p xxxxx swr.ap-southeast-1.myhuaweicloud.com + +Creating a Pipeline to Build and Push Images +-------------------------------------------- + +In this example, Jenkins is used to build a pipeline to pull code from the code repository, package the code into an image, and push the image to SWR. + +The pipeline creation procedure is as follows: + +#. Click **New Item** on the Jenkins page. + +#. Enter a task name and select **Create Pipeline**. + + |image1| + +#. Configure only the pipeline script. + + |image2| + + The following pipeline scripts are for reference only. You can customize the script. For details about the syntax, see `Pipeline `__. + + Some parameters in the example need to be modified: + + - **git_url**: Address of your code repository. Replace it with the actual address. + - **swr_login**: The login command obtained in :ref:`Obtaining a Long-Term Valid Login Command `. + - **swr_region**: SWR region. + - **organization**: The actual organization name in SWR. + - **build_name**: Name of the created image. + - **credential**: The cluster credential added to Jenkins. Enter the credential ID. If you want to deploy the service in another cluster, add the access credential of the cluster to Jenkins again. For details, see :ref:`Setting Cluster Access Credentials `. + - **apiserver**: IP address of the API server where the application cluster is deployed. 
Ensure that the IP address can be accessed from the Jenkins cluster. + + .. code-block:: + + //Define the code repository address. + def git_url = 'https://github.com/lookforstar/jenkins-demo.git' + //Define the SWR login command. + def swr_login = 'docker login -u ap-southeast-1@xxxxx -p xxxxx swr.ap-southeast-1.myhuaweicloud.com' + //Define the SWR region. + def swr_region = 'ap-southeast-1' + //Define the name of the SWR organization to be uploaded. + def organization = 'container' + //Define the image name. + def build_name = 'jenkins-demo' + //Certificate ID of the cluster to be deployed + def credential = 'k8s-token' + //API server address of the cluster. Ensure that the address can be accessed from the Jenkins cluster. + def apiserver = 'https://192.168.0.100:6443' + + pipeline { + agent any + stages { + stage('Clone') { + steps{ + echo "1.Clone Stage" + git url: git_url + script { + build_tag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim() + } + } + } + stage('Test') { + steps{ + echo "2.Test Stage" + } + } + stage('Build') { + steps{ + echo "3.Build Docker Image Stage" + sh "docker build -t swr.${swr_region}.myhuaweicloud.com/${organization}/${build_name}:${build_tag} ." + //${build_tag} indicates that the build_tag variable is obtained as the image tag. It is the return value of the git rev-parse --short HEAD command, that is, commit ID. + } + } + stage('Push') { + steps{ + echo "4.Push Docker Image Stage" + sh swr_login + sh "docker push swr.${swr_region}.myhuaweicloud.com/${organization}/${build_name}:${build_tag}" + } + } + stage('Deploy') { + steps{ + echo "5. Deploy Stage" + echo "This is a deploy step to test" + script { + sh "cat k8s.yaml" + echo "begin to config kubenetes" + try { + withKubeConfig([credentialsId: credential, serverUrl: apiserver]) { + sh 'kubectl apply -f k8s.yaml' + //The YAML file is stored in the code repository. The following is only an example. Replace it as required. + } + println "hooray, success" + } catch (e) { + println "oh no! Deployment failed! " + println e + } + } + } + } + } + } + +#. Save the settings and execute the Jenkins job. + +.. |image1| image:: /_static/images/en-us_image_0000001466646017.png +.. |image2| image:: /_static/images/en-us_image_0000001416249976.png diff --git a/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/resource_and_cost_planning.rst b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/resource_and_cost_planning.rst new file mode 100644 index 0000000..d9bcd55 --- /dev/null +++ b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/resource_and_cost_planning.rst @@ -0,0 +1,44 @@ +:original_name: cce_bestpractice_0344.html + +.. _cce_bestpractice_0344: + +Resource and Cost Planning +========================== + +.. important:: + + The fees listed here are estimates. The actual fees will be displayed on the Huawei Cloud console. + +The required resources are as follows: + +.. 
table:: **Table 1** Resource and cost planning + + +------------------------------+-----------------------------------------------+-----------------+----------------------------------------------------------------------------------------+ + | Resource | Description | Quantity | Estimated Fee (USD) | + +==============================+===============================================+=================+========================================================================================+ + | Cloud Container Engine (CCE) | Pay-per-use recommended | 1 | 2.91/hour | + | | | | | + | | - Cluster type: CCE cluster | | | + | | - CCE cluster version: v1.25 | | | + | | - Cluster scale: 50 nodes | | | + | | - HA: Yes | | | + +------------------------------+-----------------------------------------------+-----------------+----------------------------------------------------------------------------------------+ + | VM | Pay-per-use recommended | 1 | 1.00/hour | + | | | | | + | | - VM type: General computing-plus | | | + | | - Specifications: 4 vCPUs \| 8 GiB | | | + | | - OS: EulerOS 2.9 | | | + | | - System disk: 50 GiB \| General-purpose SSD | | | + | | - Data disk: 100 GiB \| General-purpose SSD | | | + +------------------------------+-----------------------------------------------+-----------------+----------------------------------------------------------------------------------------+ + | Elastic Volume Service (EVS) | Pay-per-use recommended | 1 | 0.1/hour | + | | | | | + | | - EVS disk specifications: 100 GB | | | + | | - EVS disk type: General-purpose SSD | | | + +------------------------------+-----------------------------------------------+-----------------+----------------------------------------------------------------------------------------+ + | Load Balancer (ELB) | Pay-per-use recommended | 1 | 0.32/hour + 0.80/GB (The traffic fee is charged based on the actual outbound traffic.) | + | | | | | + | | - Type: Shared | | | + | | - Billed By: Traffic | | | + | | - Bandwidth: 5 Mbit/s | | | + +------------------------------+-----------------------------------------------+-----------------+----------------------------------------------------------------------------------------+ diff --git a/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/solution_overview.rst b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/solution_overview.rst new file mode 100644 index 0000000..dabe2be --- /dev/null +++ b/doc/best-practice/source/devops/installing_deploying_and_interconnecting_jenkins_with_swr_and_cce_clusters/solution_overview.rst @@ -0,0 +1,85 @@ +:original_name: cce_bestpractice_0066.html + +.. _cce_bestpractice_0066: + +Solution Overview +================= + +What Is Jenkins? +---------------- + +Jenkins is an open source continuous integration (CI) tool that provides user-friendly GUIs. It originates from Hudson and is used to automate all sorts of tasks related to building, testing, and delivering or deploying software. + +Jenkins is written in Java and can run in popular servlet containers such as Tomcat, or run independently. It is usually used together with the version control tools (or SCM tools) and build tools. Jenkins supports various languages and is compatible with third-party build tools, such as Maven, Ant, and Gradle. It seamlessly integrates with common version control tools, such as SVN and Git, and can directly connect to source code hosting services, such as GitHub. 
+ +Notes and Constraints +--------------------- + +- This solution can be deployed only in CCE clusters. It is not supported in DeC. + +Solution Architecture +--------------------- + +You can install Jenkins using the following methods: + +- .. _cce_bestpractice_0066__li15367141115319: + + You can use a single Master to install Jenkins. The Master handles jobs and builds and releases services. However, security risks may exist. + +- .. _cce_bestpractice_0066__li18811913234: + + Another one is to use Master+Agents. Master schedules build jobs to Agents for execution, and monitors Agent status. Agents execute build jobs dispatched by the Master and return the job progress and result. + +You can install the Master and Agents on VMs, containers, or combination of the two. For details, see :ref:`Table 1 `. + +.. _cce_bestpractice_0066__table5475121718413: + +.. table:: **Table 1** Jenkins deployment modes + + +-----------------+---------------------------------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Deployment Mode | Master | Agent | Advantages and Disadvantages | + +=================+=================================+=================================+==========================================================================================================================================================================================================================================================================================+ + | Single Master | VMs | ``-`` | - Advantage: Localized construction is easy to operate. | + | | | | - Disadvantage: Job management and execution are performed on the same VM and the security risk is high. | + +-----------------+---------------------------------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Single Master | Containers | ``-`` | - Advantage: Kubernetes containers support self-healing. | + | | | | - Disadvantage: Job management and execution are not isolated. Security risks exist. | + +-----------------+---------------------------------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Master+Agents | VMs | VMs | - Advantage: Job management and execution are isolated and the security risk is low. | + | | | | - Disadvantage: Agents are fixed. Resources cannot be scheduled and the resource utilization is low and the maintenance cost is high. 
| + +-----------------+---------------------------------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | | Containers (Kubernetes cluster) | - Advantage: Containerized Agents can be fixed or dynamic. Kubernetes schedules the dynamic Agents, improving the resource utilization. Jobs can be evenly allocated based on the scheduling policy, which is easy to maintain. | + | | | | - Disadvantage: The Master may break down and the recovery cost is high. | + +-----------------+---------------------------------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Master+Agents | Containers (Kubernetes cluster) | Containers (Kubernetes cluster) | - Advantage: Containerized Agents can be fixed or dynamic. Kubernetes schedules the dynamic Agents, improving the resource utilization. The Master is self-healing and the maintenance cost is low. Agents and the Master can be deployed in the same cluster or in different clusters. | + | | | | - Disadvantage: The system is complex and the environment is difficult to set up. | + +-----------------+---------------------------------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +In this section, Jenkins is installed with the containerized Master and Agents. Kubernetes schedules the dynamic Agents. For details about the architecture, see :ref:`Figure 1 `. + +- The Master handles jobs. Install Kubernetes add-ons on the Master to use the Kubernetes platform resources. +- The Kubernetes platform generates pods for Agents to execute jobs. When a job is scheduled on the Master, the Master sends a request to the Kubernetes platform using the Kubernetes add-on. After receiving the request, Kubernetes builds a pod using the pod template to send requests to the Master. After the Master is successfully connected, you can execute the job on the pod. + +.. _cce_bestpractice_0066__fig8214101615391: + +.. figure:: /_static/images/en-us_image_0000001348013634.png + :alt: **Figure 1** Installing Jenkins on Kubernetes + + **Figure 1** Installing Jenkins on Kubernetes + +Procedure +--------- + +#. :ref:`Installing and Deploying Jenkins Master ` + + Jenkins Master is deployed in the CCE cluster using container images. + +#. :ref:`Configuring Jenkins Agent ` + + Jenkins can fix Agents in the cluster or use the pipeline to interconnect with CCE to provide pods for Agents to execute jobs. The dynamic Agents use Kubernetes add-ons to configure cluster authentication and user permissions. + +#. :ref:`Using Jenkins to Build a Pipeline ` + + The Jenkins pipeline interconnects with SWR and calls **docker build/login/push** commands in Agents to package and push images automatically. 
+ + You can also use pipelines to deploy and upgrade Kubernetes resources (such as Deployments, Services, ingresses, and jobs). diff --git a/doc/best-practice/source/index.rst b/doc/best-practice/source/index.rst index 20c2051..cfb7d0f 100644 --- a/doc/best-practice/source/index.rst +++ b/doc/best-practice/source/index.rst @@ -2,3 +2,10 @@ Cloud Container Engine - Best Practice ====================================== +.. toctree:: + :maxdepth: 1 + + migration/index + devops/index + auto_scaling/index + cluster/index diff --git a/doc/best-practice/source/migration/index.rst b/doc/best-practice/source/migration/index.rst new file mode 100644 index 0000000..66ae6f5 --- /dev/null +++ b/doc/best-practice/source/migration/index.rst @@ -0,0 +1,16 @@ +:original_name: cce_bestpractice_00237.html + +.. _cce_bestpractice_00237: + +Migration +========= + +- :ref:`Migrating Container Images ` +- :ref:`Migrating Clusters from Other Clouds to CCE ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + migrating_container_images/index + migrating_clusters_from_other_clouds_to_cce/index diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/index.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/index.rst new file mode 100644 index 0000000..6958e48 --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/index.rst @@ -0,0 +1,18 @@ +:original_name: cce_bestpractice_0013.html + +.. _cce_bestpractice_0013: + +Migrating Clusters from Other Clouds to CCE +=========================================== + +- :ref:`Solution Overview ` +- :ref:`Resource and Cost Planning ` +- :ref:`Procedure ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + solution_overview + resource_and_cost_planning + procedure/index diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/backing_up_kubernetes_objects_of_the_ack_cluster.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/backing_up_kubernetes_objects_of_the_ack_cluster.rst new file mode 100644 index 0000000..fcc685a --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/backing_up_kubernetes_objects_of_the_ack_cluster.rst @@ -0,0 +1,41 @@ +:original_name: cce_bestpractice_0337.html + +.. _cce_bestpractice_0337: + +Backing Up Kubernetes Objects of the ACK Cluster +================================================ + +#. To back up a WordPress application with PV data, add an annotation to the corresponding pod. If you do not need to back up the PV data, skip this step. + + .. code-block:: console + + # kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,... + + [root@iZbp1cqobeh1iyyf7qgvvzZ ack2cce]# kubectl get pod -n wordpress + NAME READY STATUSRESTARTS AGE + wordpress-67796d86b5-f9bfm 1/1 Running 1 39m + wordpress-mysql-645b796d8d-6k8wh 1/1 Running 0 38m + + [root@iZbp1cqobeh1iyyf7qgvvzZ ack2cce]# kubectl -n wordpress annotate pod/wordpress-67796d86b5-f9bfm backup.velero.io/backup-volumes=wordpress-pvc + pod/wordpress-67796d86b5-f9bfm annotated + [root@iZbp1cqobeh1iyyf7qgvvzZ ack2cce]# kubectl -n wordpress annotate pod/wordpress-mysql-645b796d8d-6k8wh backup.velero.io/backup-volumes=wordpress-mysql-pvc + pod/wordpress-mysql-645b796d8d-6k8wh annotated + +#. Execute the backup task. + + .. 
code-block:: console + + [root@iZbp1cqobeh1iyyf7qgvvzZ ack2cce]# velero backup create wordpress-ack-backup --include-namespaces wordpress + Backup request "wordpress-ack-backup" submitted successfully. + Run `velero backup describe wordpress-ack-backup` or `velero backup logs wordpress-ack-backup` for more details. + +#. Check whether the backup task is successful. + + .. code-block:: console + + [root@iZbp1cqobeh1iyyf7qgvvzZ ack2cce]# velero backup get + NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR + wordpress-ack-backup InProgress 2020-07-07 20:31:19 +0800 CST 29d default + [root@iZbp1cqobeh1iyyf7qgvvzZ ack2cce]# velero backup get + NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR + wordpress-ack-backup Completed 2020-07-07 20:31:19 +0800 CST 29d default diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/debugging_and_starting_the_application.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/debugging_and_starting_the_application.rst new file mode 100644 index 0000000..1e93b31 --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/debugging_and_starting_the_application.rst @@ -0,0 +1,24 @@ +:original_name: cce_bestpractice_0062.html + +.. _cce_bestpractice_0062: + +Debugging and Starting the Application +====================================== + +Debug and access the application to check data. + +#. Log in to the `CCE console `__. In the navigation pane, choose **Resource Management** > **Network**. Click the EIP next to the WordPress service. + + + .. figure:: /_static/images/en-us_image_0264587870.png + :alt: **Figure 1** Obtaining the access address + + **Figure 1** Obtaining the access address + +#. If the access is normal, and the migration is successful. + + + .. figure:: /_static/images/en-us_image_0264587871.png + :alt: **Figure 2** WordPress welcome page + + **Figure 2** WordPress welcome page diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/index.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/index.rst new file mode 100644 index 0000000..0956d7a --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/index.rst @@ -0,0 +1,32 @@ +:original_name: cce_bestpractice_0335.html + +.. _cce_bestpractice_0335: + +Procedure +========= + +- :ref:`Migrating Data ` +- :ref:`Installing the Migration Tool ` +- :ref:`Migrating Resources in a Cluster (Velero) ` +- :ref:`Migrating Resources in a Cluster (e-backup) ` +- :ref:`Preparing Object Storage and Velero ` +- :ref:`Backing Up Kubernetes Objects of the ACK Cluster ` +- :ref:`Restoring Kubernetes Objects in the Created CCE Cluster ` +- :ref:`Updating Resources Accordingly ` +- :ref:`Debugging and Starting the Application ` +- :ref:`Others ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + migrating_data + installing_the_migration_tool + migrating_resources_in_a_cluster_velero + migrating_resources_in_a_cluster_e-backup + preparing_object_storage_and_velero + backing_up_kubernetes_objects_of_the_ack_cluster + restoring_kubernetes_objects_in_the_created_cce_cluster + updating_resources_accordingly + debugging_and_starting_the_application + others diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/installing_the_migration_tool.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/installing_the_migration_tool.rst new file mode 100644 index 0000000..6753cc6 --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/installing_the_migration_tool.rst @@ -0,0 +1,203 @@ +:original_name: cce_bestpractice_0063.html + +.. _cce_bestpractice_0063: + +Installing the Migration Tool +============================= + +Velero is an open-source backup and migration tool for Kubernetes clusters. It integrates the persistent volume (PV) data backup capability of the Restic tool and can be used to back up Kubernetes resource objects (such as Deployments, jobs, Services, and ConfigMaps) in the source cluster. Data in the PV mounted to the pod is backed up and uploaded to the object storage. When a disaster occurs or migration is required, the target cluster can use Velero to obtain the corresponding backup data from OBS and restore cluster resources as required. + +According to :ref:`en-us_topic_0000001217183655.html#section96147345128 `, prepare temporary object storage to store backup files before the migration. Velero supports OSB or `MinIO `__ as the object storage. OBS requires sufficient storage space for storing backup files. You can estimate the storage space based on your cluster scale and data volume. You are advised to use OBS for backup. For details about how to deploy Velero, see :ref:`Installing Velero `. + +CCE supports backup and restore using the e-backup add-on, which is compatible with Velero and uses OBS as the storage backend. You can use Velero in on-premises clusters and use e-backup in CCE. + +- Without e-backup: Install Velero in the source and target clusters by following the instructions described in this topic, and migrate resources by referring to :ref:`Migrating Resources in a Cluster (Velero) `. +- With e-backup: Install Velero in the source cluster and use OBS as the storage backend by following the instructions described in :ref:`Installing Velero `, and install e-backup in the target CCE cluster and migrate resources by referring to :ref:`Migrating Resources in a Cluster (e-backup) `. + +Prerequisites +------------- + +- The Kubernetes version of the source on-premises cluster must be 1.10 or later, and the cluster can use DNS and Internet services properly. +- If you use OBS to store backup files, obtain the AK/SK of a user who has the right to operate OBS. For details, see `Obtaining Access Keys (AK/SK) `__. +- If you use MinIO to store backup files, bind an EIP to the server where MinIO is installed and enable the API and console port of MinIO in the security group. +- The target CCE cluster has been created. +- The source cluster and target cluster must each have at least one idle node. It is recommended that the node specifications be 4 vCPUs and 8 GiB memory or higher. 
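+
+A quick sketch of how these prerequisites can be verified from a host that has kubectl access to the source cluster is shown below. The OBS endpoint used in the DNS check is the ap-southeast-1 example used later in this section; replace it with the endpoint or any public address relevant to your environment.
+
+.. code-block::
+
+   # Check the node status and Kubernetes version of the source cluster (v1.10 or later is required).
+   kubectl get nodes -o wide
+
+   # Verify that DNS resolution and Internet access work from inside the cluster.
+   kubectl run dns-check --image=busybox --restart=Never --rm -it -- nslookup obs.ap-southeast-1.myhuaweicloud.com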
+ +Installing MinIO +---------------- + +MinIO is an open-source, high-performance object storage tool compatible with the S3 API protocol. If MinIO is used to store backup files for cluster migration, you need a temporary server to deploy MinIO and provide services for external systems. If you use OBS to store backup files, skip this section and go to :ref:`Installing Velero `. + +MinIO can be installed in any of the following locations: + +- Temporary ECS outside the cluster + + If the MinIO server is installed outside the cluster, backup files will not be affected when a catastrophic fault occurs in the cluster. + +- Idle nodes in the cluster + + You can remotely log in to a node to install the MinIO server or install MinIO in a container. For details, see the official Velero documentation at https://velero.io/docs/v1.7/contributions/minio/#set-up-server. + + .. important:: + + For example, to install MinIO in a container, run the following command: + + - The storage type in the YAML file provided by Velero is **emptyDir**. You are advised to change the storage type to **HostPath** or **Local**. Otherwise, backup files will be permanently lost after the container is restarted. + - Ensure that the MinIO service is accessible externally. Otherwise, backup files cannot be downloaded outside the cluster. You can change the Service type to NodePort or use other types of public network access Services. + +Regardless of which deployment method is used, the server where MinIO is installed must have sufficient storage space, an EIP must be bound to the server, and the MinIO service port must be enabled in the security group. Otherwise, backup files cannot be uploaded or downloaded. + +In this example, MinIO is installed on a temporary ECS outside the cluster. + +#. Download MinIO. + + .. code-block:: + + mkdir /opt/minio + mkdir /opt/miniodata + cd /opt/minio + wget https://dl.minio.io/server/minio/release/linux-amd64/minio + chmod +x minio + +#. .. _cce_bestpractice_0063__en-us_topic_0000001172022292_li126129251432: + + Set the username and password of MinIO. + + The username and password set using this method are temporary environment variables and must be reset after the service is restarted. Otherwise, the default root credential **minioadmin:minioadmin** will be used to create the service. + + .. code-block:: + + export MINIO_ROOT_USER=minio + export MINIO_ROOT_PASSWORD=minio123 + +#. Create a service. In the command, **/opt/miniodata/** indicates the local disk path for MinIO to store data. + + The default API port of MinIO is 9000, and the console port is randomly generated. You can use the **--console-address** parameter to specify a console port. + + .. code-block:: + + ./minio server /opt/miniodata/ --console-address ":30840" & + + .. note:: + + Enable the API and console ports in the firewall and security group on the server where MinIO is to be installed. Otherwise, access to the object bucket will fail. + +#. Use a browser to access http://{*EIP of the node where MinIO resides*}:30840. The MinIO console page is displayed. + +.. _cce_bestpractice_0063__en-us_topic_0000001172022292_section138392220432: + +Installing Velero +----------------- + +Go to the OBS console or MinIO console and create a bucket named **velero** to store backup files. You can custom the bucket name, which must be used when installing Velero. Otherwise, the bucket cannot be accessed and the backup fails. For details, see :ref:`4 `. + +.. 
important:: + + - Velero instances need to be installed and deployed in both the **source and target clusters**. The installation procedures are the same, which are used for backup and restoration, respectively. + - The master node of a CCE cluster does not provide a port for remote login. You can install Velero using kubectl. + - If there are a large number of resources to back up, you are advised to adjust the CPU and memory resources of Velero and Restic to 1 vCPU and 1 GiB memory or higher. For details, see :ref:`en-us_topic_0000001217423605.html#section321054511332 `. + - The object storage bucket for storing backup files must be **empty**. + +Download the latest, stable binary file from https://github.com/vmware-tanzu/velero/releases. This section uses Velero 1.7.0 as an example. The installation process in the source cluster is the same as that in the target cluster. + +#. Download the binary file of Velero 1.7.0. + + .. code-block:: + + wget https://github.com/vmware-tanzu/velero/releases/download/v1.7.0/velero-v1.7.0-linux-amd64.tar.gz + +#. Install the Velero client. + + .. code-block:: + + tar -xvf velero-v1.7.0-linux-amd64.tar.gz + cp ./velero-v1.7.0-linux-amd64/velero /usr/local/bin + +#. .. _cce_bestpractice_0063__en-us_topic_0000001172022292_li197871715322: + + Create the access key file **credentials-velero** for the backup object storage. + + .. code-block:: + + vim credentials-velero + + Replace the AK/SK in the file based on the site requirements. When you use OBS, you can obtain the AK/SK by referring to `Obtaining Access Keys (AK/SK) `__. If MinIO is used, the AK/SK are the username and password created in :ref:`2 `. + + .. code-block:: + + [default] + aws_access_key_id = {AK} + aws_secret_access_key = {SK} + +#. .. _cce_bestpractice_0063__en-us_topic_0000001172022292_li1722825643415: + + Deploy the Velero server. Change the value of **--bucket** to the name of the created object storage bucket. In this example, the bucket name is **velero**. For more information about custom installation parameters, see `Customize Velero Install `__. + + .. code-block:: + + velero install \ + --provider aws \ + --plugins velero/velero-plugin-for-aws:v1.2.1 \ + --bucket velero \ + --secret-file ./credentials-velero \ + --use-restic \ + --use-volume-snapshots=false \ + --backup-location-config region=ap-southeast-1,s3ForcePathStyle="true",s3Url=http://obs.ap-southeast-1.myhuaweicloud.com + + .. table:: **Table 1** Installation parameters of Velero + + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+========================================================================================================================================================================================================================================================================================+ + | --provider | Vendor who provides the plug-in. 
| + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --plugins | API component compatible with AWS S3. Both OBS and MinIO support the S3 protocol. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --bucket | Name of the object storage bucket for storing backup files. The bucket must be created in advance. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --secret-file | Secret file for accessing the object storage, that is, the **credentials-velero** file created in :ref:`3 `. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --use-restic | Whether to use Restic to support PV data backup. You are advised to enable this function. Otherwise, storage volume resources cannot be backed up. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --use-volume-snapshots | Whether to create the VolumeSnapshotLocation object for PV snapshot, which requires support from the snapshot program. Set this parameter to **false**. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --backup-location-config | OBS bucket configurations, including region, s3ForcePathStyle, and s3Url. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | region | Region to which object storage bucket belongs. | + | | | + | | - If OBS is used, set this parameter according to your region, for example, **ap-southeast-1**. | + | | - If MinIO is used, set this parameter to **minio**. 
| + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | s3ForcePathStyle | The value **true** indicates that the S3 file path format is used. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | s3Url | API access address of the object storage bucket. | + | | | + | | - If OBS is used, set this parameter to **http://obs.{region}.myhuaweicloud.com** (*region* indicates the region where the object storage bucket is located). For example, if the region is Hong Kong (ap-southeast-1), the value is **http://obs.ap-southeast-1.myhuaweicloud.com**. | + | | - If MinIO is used, set this parameter to **http://{EIP of the node where minio is located}:9000**. The value of this parameter is determined based on the IP address and port of the node where MinIO is installed. | + | | | + | | .. note:: | + | | | + | | - The access port in s3Url must be set to the API port of MinIO instead of the console port. The default API port of MinIO is 9000. | + | | - To access MinIO installed outside the cluster, enter the public IP address of MinIO. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. By default, a namespace named **velero** is created for the Velero instance. Run the following command to view the pod status: + + .. code-block:: + + $ kubectl get pod -n velero + NAME READY STATUS RESTARTS AGE + restic-rn29c 1/1 Running 0 16s + velero-c9ddd56-tkzpk 1/1 Running 0 16s + + .. note:: + + To prevent memory insufficiency during backup in the actual production environment, you are advised to change the CPU and memory allocated to Restic and Velero by referring to :ref:`en-us_topic_0000001217423605.html#section321054511332 `. + +#. Check the interconnection between Velero and the object storage and ensure that the status is **Available**. + + .. code-block:: + + $ velero backup-location get + NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT + default aws velero Available 2021-10-22 15:21:12 +0800 CST ReadWrite true diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/migrating_data.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/migrating_data.rst new file mode 100644 index 0000000..f55d7b0 --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/migrating_data.rst @@ -0,0 +1,35 @@ +:original_name: cce_bestpractice_0059.html + +.. _cce_bestpractice_0059: + +Migrating Data +============== + +Migrating Databases and Storage +------------------------------- + +**Database migration** + +O&M personnel or development personnel migrate databases using Data Replication Service (DRS). For details, see `Migrating Databases Across Cloud Platforms `__. 
+ +**Storage migration** + +O&M personnel or development personnel migrate data in object storage using Object Storage Migration Service (OMS). For details on OMS, see `Object Storage Migration Service `__. + +.. note:: + + Currently, you can use OMS to migrate object storage data from Amazon Web Services (AWS), Alibaba Cloud, Microsoft Azure, Baidu Cloud, Kingsoft Cloud, QingCloud, Qiniu Cloud, and Tencent Cloud to Huawei Cloud `OBS `__. + +- Create a bucket on OBS. For details, see `Creating a Bucket `__. +- Create a migration task on OMS. For details, see `Creating an Object Storage Migration Task `__. + +Migrating Container Images +-------------------------- + +#. Export the container images used in ACK clusters. + + Pull the images to the client by referring to the operation guide of Alibaba Cloud Container Registry (ACR). + +#. Upload the image files to Huawei Cloud SWR. + + Run the **docker pull** command to push the image to Huawei Cloud. For details, see `Uploading an Image Through a Container Engine Client `__. diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/migrating_resources_in_a_cluster_e-backup.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/migrating_resources_in_a_cluster_e-backup.rst new file mode 100644 index 0000000..ef7973f --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/migrating_resources_in_a_cluster_e-backup.rst @@ -0,0 +1,241 @@ +:original_name: cce_bestpractice_0020.html + +.. _cce_bestpractice_0020: + +Migrating Resources in a Cluster (e-backup) +=========================================== + +Application Scenarios +--------------------- + +This section describes how to use Velero to back up resources in an on-premises cluster and use e-backup to restore resources in a CCE cluster. + +WordPress is used as an example to describe how to migrate an application from an on-premises Kubernetes cluster to a CCE cluster. The WordPress application consists of the WordPress and MySQL components, which are containerized. The two components are bound to two local storage volumes of the Local type respectively and provide external access through the NodePort Service. + +Before the migration, use a browser to access the WordPress site, create a site named **Migrate to CCE**, and publish an article to verify the integrity of PV data after the migration. The article published in WordPress will be stored in the **wp_posts** table of the MySQL database. If the migration is successful, all contents in the database will be migrated to the new cluster. You can verify the PV data migration based on the migration result. + +Prerequisites +------------- + +- Before the migration, clear the abnormal pod resources in the source cluster. If the pod is in the abnormal state and has a PVC mounted, the PVC is in the pending state after the cluster is migrated. +- Ensure that the cluster on the CCE side does not have the same resources as the cluster to be migrated because Velero does not restore the same resources by default. +- To ensure that container images can be properly pulled after cluster migration, migrate the images to SWR. For details, see `Uploading an Image Through a Container Engine Client `__. +- CCE does not support EVS disks of the **ReadWriteMany** type. If resources of this type exist in the source cluster, change the storage type to **ReadWriteOnce**. 
+- Velero integrates the Restic tool to back up and restore storage volumes. Currently, the storage volumes of the HostPath type are not supported. For details, see `Restic Restrictions `__. To back up storage volumes of this type, replace the hostPath volumes with local volumes by referring to :ref:`en-us_topic_0000001217423605.html#section11197194820367 `. If a backup task involves storage of the HostPath type, the storage volumes of this type will be automatically skipped and a warning message will be generated. This will not cause a backup failure. + +Backing Up Applications in the Source Cluster +--------------------------------------------- + +#. .. _cce_bestpractice_0020__en-us_topic_0000001244128082_li686918502812: + + (Optional) To back up the data of a specified storage volume in the pod, add an annotation to the pod. The annotation template is as follows: + + .. code-block:: + + kubectl -n annotate backup.velero.io/backup-volumes=,,... + + - ****: namespace where the pod is located. + - ****: pod name. + - ****: name of the persistent volume mounted to the pod. You can run the **describe** statement to query the pod information. The **Volume** field indicates the names of all persistent volumes attached to the pod. + + Add annotations to the pods of WordPress and MySQL. The pod names are **wordpress-758fbf6fc7-s7fsr** and **mysql-5ffdfbc498-c45lh**. As the pods are in the default namespace **default**, the **-n ** parameter can be omitted. + + .. code-block:: + + kubectl annotate pod/wordpress-758fbf6fc7-s7fsr backup.velero.io/backup-volumes=wp-storage + kubectl annotate pod/mysql-5ffdfbc498-c45lh backup.velero.io/backup-volumes=mysql-storage + +#. Back up the application. During the backup, you can specify resources based on parameters. If no parameter is added, the entire cluster resources are backed up by default. For details about the parameters, see `Resource filtering `__. + + - **--default-volumes-to-restic**: indicates that the Restic tool is used to back up all storage volumes mounted to the pod. Storage volumes of the HostPath type are not supported. If this parameter is not specified, the storage volume specified by annotation in :ref:`1 ` is backed up by default. This parameter is available only when **--use-restic** is specified during :ref:`Velero installation `. + + .. code-block:: + + velero backup create --default-volumes-to-restic + + - **--include-namespaces**: backs up resources in a specified namespace. + + .. code-block:: + + velero backup create --include-namespaces + + - **--include-resources**: backs up the specified resources. + + .. code-block:: + + velero backup create --include-resources deployments + + - **--selector**: backs up resources that match the selector. + + .. code-block:: + + velero backup create --selector = + + In this section, resources in the namespace **default** are backed up. **wordpress-backup** is the backup name. Specify the same backup name when restoring applications. Example: + + .. code-block:: + + velero backup create wordpress-backup --include-namespaces default --default-volumes-to-restic + + If the following information is displayed, the backup task is successfully created: + + .. code-block:: + + Backup request "wordpress-backup" submitted successfully. Run `velero backup describe wordpress-backup` or `velero backup logs wordpress-backup` for more details. + +#. Check the backup status. + + .. code-block:: + + velero backup get + + Information similar to the following is displayed: + + .. 
code-block:: + + NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR + wordpress-backup Completed 0 0 2021-10-14 15:32:07 +0800 CST 29d default + + In addition, you can go to the object bucket to view the backup files. The backups path is the application resource backup path, and the restic path is the PV data backup path. + + |image1| + +Installing e-backup in the Target Cluster +----------------------------------------- + +CCE provides e-backup for cluster backup and restore. You can install this add-on and create a storage location to restore resources. + +**Installing the e-backup add-on** + +#. Log in to the CCE console. In the navigation pane, choose **Add-ons**. Locate the e-backup add-on and click **Install** under it. + +#. In the **Install Add-on** drawer, select the target cluster, configure parameters, and click **Install**. + + The following parameter can be configured: + + **volumeWorkerNum**: number of concurrent volume backup jobs. The default value is **3**. + +**Creating a secret** + +#. Obtain an access key. + + Log in to the CCE console, move the cursor to the username in the upper right corner, and choose **My Credentials**. In the navigation pane on the left, choose **Access Keys**. On the page displayed, click **Add Access Key**. + +#. .. _cce_bestpractice_0020__en-us_topic_0000001244128082_en-us_topic_0000001252978785_li12643172610310: + + Create a key file and format it into a string using Base64. + + .. code-block:: + + # Create a key file. + $ vi credential-for-huawei-obs + HUAWEI_CLOUD_ACCESS_KEY_ID=your_access_key + HUAWEI_CLOUD_SECRET_ACCESS_KEY=your_secret_key + + # Use Base64 to format the string. + $ base64 -w 0 credential-for-huawei-obs + XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXHWOBS + +#. Create a secret. + + Create a secret using the following YAML content: + + .. code-block:: + + apiVersion: v1 + kind: Secret + metadata: + labels: + secret.everest.io/backup: 'true' #Indicates that the secret is used by e-backup to access the backup storage location. + name: secret-secure-opaque + namespace: velero # The value must be velero. The secret must be in the same namespace as e-backup. + type: cfe/secure-opaque + data: + # String obtained after the credential file is Base64-encoded. + cloud: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXHWOBS + + - The secret must be in the same namespace as e-backup, that is, **velero**. + - **secret.data** stores the secret for accessing OBS. The key must be **cloud**, and the value is the string Base64-encoded in :ref:`2 `. Generally, the displayed Base64-encoded string contains line breaks. Manually delete them when writing the string into **secret.data**. + - The secret must be labeled **secret.everest.io/backup: true**, indicating that the secret is used to manage the backup storage location. + +**Creating a storage location** + +Create a Kubernetes resource object used by e-backup as the backup storage location to obtain and detect information about the backend OBS. + +.. code-block:: + + apiVersion: velero.io/v1 + kind: BackupStorageLocation + metadata: + name: backup-location-001 + namespace: velero # The object must be in the same namespace as e-backup. + spec: + config: + endpoint: obs.ap-southeast-1.myhuaweicloud.com # OBS endpoint + credential: + name: secret-secure-opaque # Name of the created secret + key: cloud # Key in secret.data + objectStorage: + bucket: velero # OBS bucket name + provider: huawei # Uses the OBS service. 
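+
+After the manifest is prepared, it can be saved to a file and created with kubectl. The file name below is only an example.
+
+.. code-block::
+
+   kubectl create -f backup-location-001.yaml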
+ +- The **prefix** field is optional, and other fields are mandatory. The value of **provider** is fixed at **huawei**. +- You can obtain the endpoint from `Regions and Endpoints `__. Ensure that all nodes in the cluster can access the endpoint. If the endpoint does not carry a protocol header (http or https), **https** is used by default. +- Correctly set **name** and **key** in the credential. Otherwise, e-backup cannot access the backend storage location. + +After the creation is complete, wait for 30 seconds for check and synchronization of the backup storage location. Then check whether **PHASE** is **Available**. The backup location is available only when the value is **Available**. + +.. code-block:: + + $ kubectl get backupstoragelocations.velero.io backup-location-001 -n velero + NAME PHASE LAST VALIDATED AGE DEFAULT + backup-location-001 Available 23s 23m + +If the value of **PHASE** does not change to **Available** for a long time, you can view e-backup logs to locate the fault. After e-backup is installed, a workload named **velero** is created in the **velero** namespace, recorded in the logs of velero. + +|image2| + +Restoring Applications in the Target Cluster (e-backup) +------------------------------------------------------- + +Perform the following steps to restore a cluster in CCE using e-backup: + +Use an immediate backup as the data source and restore data to another cluster. This mode applies to all scenarios. + +You can use the Restore manifest below and run the **kubectl create** command to create a backup deletion request. + +.. code-block:: + + apiVersion: velero.io/v1 + kind: Restore + metadata: + name: restore-01 + namespace: velero + spec: + backupName: wordpress-backup + includedNamespaces: + - default + storageClassMapping: + local: csi-disk + imageRepositoryMapping: + quay.io/coreos: swr.ap-southeast-1.myhuaweicloud.com/everest + +- **backupName**: (**mandatory**) immediate backup that is used as the data source. +- **storageClassMapping**: changes the storageClassName used by backup resources such as PVs and PVCs. The storageClass types must be the same. In this example, **local** is changed to **csi-disk** supported by CCE. +- **imageRepositoryMapping**: changes the **images** field of the backup. It is used for repository mapping, excluding the change of the image name and tag (to prevent the migration and upgrade from being coupled). For example, after you migrate **quay.io/coreos/etcd:2.5** to SWR, you can use **swr.ap-southeast-1.myhuaweicloud.com/everest/etcd:2.5** in the local image repository. The configuration format is as follows: **quay.io/coreos: swr.ap-southeast-1.myhuaweicloud.com/everest** + +If **storageClassMapping** and **imageRepositoryMapping** are configured, you can skip their configuration in :ref:`Updating Images ` and :ref:`Updating the Storage Class `. + +For details about other parameters, see `e-backup `__. + +After the restore is performed, run the following command to **check the task status**: + +.. code-block:: + + $ kubectl -n velero get restores restore-01 -o yaml | grep " phase" + phase: Completed + +If the status is **Completed**, the restore is complete. You can view the application restore details on the CCE console. + +.. |image1| image:: /_static/images/en-us_image_0000001480031958.png +.. 
|image2| image:: /_static/images/en-us_image_0000001244128426.png diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/migrating_resources_in_a_cluster_velero.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/migrating_resources_in_a_cluster_velero.rst new file mode 100644 index 0000000..791d38e --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/migrating_resources_in_a_cluster_velero.rst @@ -0,0 +1,157 @@ +:original_name: cce_bestpractice_0024.html + +.. _cce_bestpractice_0024: + +Migrating Resources in a Cluster (Velero) +========================================= + +Application Scenarios +--------------------- + +WordPress is used as an example to describe how to migrate an application from an on-premises Kubernetes cluster to a CCE cluster. The WordPress application consists of the WordPress and MySQL components, which are containerized. The two components are bound to two local storage volumes of the Local type respectively and provide external access through the NodePort Service. + +Before the migration, use a browser to access the WordPress site, create a site named **Migrate to CCE**, and publish an article to verify the integrity of PV data after the migration. The article published in WordPress will be stored in the **wp_posts** table of the MySQL database. If the migration is successful, all contents in the database will be migrated to the new cluster. You can verify the PV data migration based on the migration result. + +Prerequisites +------------- + +- Before the migration, clear the abnormal pod resources in the source cluster. If the pod is in the abnormal state and has a PVC mounted, the PVC is in the pending state after the cluster is migrated. +- Ensure that the cluster on the CCE side does not have the same resources as the cluster to be migrated because Velero does not restore the same resources by default. +- To ensure that container image images can be properly pulled after cluster migration, migrate the images to SWR. +- CCE does not support EVS disks of the **ReadWriteMany** type. If resources of this type exist in the source cluster, change the storage type to **ReadWriteOnce**. +- Velero integrates the Restic tool to back up and restore storage volumes. Currently, the storage volumes of the HostPath type are not supported. For details, see `Restic Restrictions `__. To back up storage volumes of this type, replace the hostPath volumes with local volumes by referring to :ref:`en-us_topic_0000001217423605.html#section11197194820367 `. If a backup task involves storage of the HostPath type, the storage volumes of this type will be automatically skipped and a warning message will be generated. This will not cause a backup failure. + +Backing Up Applications in the Source Cluster +--------------------------------------------- + +#. .. _cce_bestpractice_0024__en-us_topic_0000001171703796_li686918502812: + + (Optional) To back up the data of a specified storage volume in the pod, add an annotation to the pod. The annotation template is as follows: + + .. code-block:: + + kubectl -n annotate backup.velero.io/backup-volumes=,,... + + - ****: namespace where the pod is located. + - ****: pod name. + - ****: name of the persistent volume mounted to the pod. You can run the **describe** statement to query the pod information. The **Volume** field indicates the names of all persistent volumes attached to the pod. 
+ + Add annotations to the pods of WordPress and MySQL. The pod names are **wordpress-758fbf6fc7-s7fsr** and **mysql-5ffdfbc498-c45lh**. As the pods are in the default namespace **default**, the **-n ** parameter can be omitted. + + .. code-block:: + + kubectl annotate pod/wordpress-758fbf6fc7-s7fsr backup.velero.io/backup-volumes=wp-storage + kubectl annotate pod/mysql-5ffdfbc498-c45lh backup.velero.io/backup-volumes=mysql-storage + +#. Back up the application. During the backup, you can specify resources based on parameters. If no parameter is added, the entire cluster resources are backed up by default. For details about the parameters, see `Resource filtering `__. + + - **--default-volumes-to-restic**: indicates that the Restic tool is used to back up all storage volumes mounted to the pod. Storage volumes of the HostPath type are not supported. If this parameter is not specified, the storage volume specified by annotation in :ref:`1 ` is backed up by default. This parameter is available only when **--use-restic** is specified during :ref:`Velero installation `. + + .. code-block:: + + velero backup create --default-volumes-to-restic + + - **--include-namespaces**: backs up resources in a specified namespace. + + .. code-block:: + + velero backup create --include-namespaces + + - **--include-resources**: backs up the specified resources. + + .. code-block:: + + velero backup create --include-resources deployments + + - **--selector**: backs up resources that match the selector. + + .. code-block:: + + velero backup create --selector = + + In this section, resources in the namespace **default** are backed up. **wordpress-backup** is the backup name. Specify the same backup name when restoring applications. An example is as follows: + + .. code-block:: + + velero backup create wordpress-backup --include-namespaces default --default-volumes-to-restic + + If the following information is displayed, the backup task is successfully created: + + .. code-block:: + + Backup request "wordpress-backup" submitted successfully. Run `velero backup describe wordpress-backup` or `velero backup logs wordpress-backup` for more details. + +#. Check the backup status. + + .. code-block:: + + velero backup get + + Information similar to the following is displayed: + + .. code-block:: + + NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR + wordpress-backup Completed 0 0 2021-10-14 15:32:07 +0800 CST 29d default + + In addition, you can go to the object bucket to view the backup files. The backups path is the application resource backup path, and the restic path is the PV data backup path. + + |image1| + +.. _cce_bestpractice_0024__en-us_topic_0000001171703796_section482103142819: + +Restoring Applications in the Target Cluster +-------------------------------------------- + +The storage infrastructure of an on-premises cluster is different from that of a cloud cluster. After the cluster is migrated, PVs cannot be mounted to pods. Therefore, during the migration, update the storage class of the target cluster to shield the differences of underlying storage interfaces between the two clusters when creating a workload and request storage resources of the corresponding type. For details, see :ref:`Updating the Storage Class `. + +#. Use kubectl to connect to the CCE cluster. Create a storage class with the same name as that of the source cluster. + + In this example, the storage class name of the source cluster is **local** and the storage type is local disk. 
Local disks completely depend on the node availability. The data DR performance is poor. When the node is unavailable, the existing storage data is affected. Therefore, EVS volumes are used as storage resources in CCE clusters, and SAS disks are used as backend storage media. + + .. note:: + + - When an application containing PV data is restored in a CCE cluster, the defined storage class dynamically creates and mounts storage resources (such as EVS volumes) based on the PVC. + - The storage resources of the cluster can be changed as required, not limited to EVS volumes. To mount other types of storage, such as file storage and object storage, see :ref:`Updating the Storage Class `. + + YAML file of the migrated cluster: + + .. code-block:: + + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + name: local + provisioner: kubernetes.io/no-provisioner + volumeBindingMode: WaitForFirstConsumer + + The following is an example of the YAML file of the migration cluster: + + .. code-block:: + + allowVolumeExpansion: true + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + name: local + selfLink: /apis/storage.k8s.io/v1/storageclasses/csi-disk + parameters: + csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io + csi.storage.k8s.io/fstype: ext4 + everest.io/disk-volume-type: SAS + everest.io/passthrough: "true" + provisioner: everest-csi-provisioner + reclaimPolicy: Delete + volumeBindingMode: Immediate + +#. Use the Velero tool to create a restore and specify a backup named **wordpress-backup** to restore the WordPress application to the CCE cluster. + + .. code-block:: + + velero restore create --from-backup wordpress-backup + + You can run the **velero restore get** statement to view the application restoration status. + +#. After the restoration is complete, check whether the application is running properly. If other adaptation problems may occur, rectify the fault by following the procedure described in :ref:`Updating Resources Accordingly `. + +.. |image1| image:: /_static/images/en-us_image_0000001480191270.png diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/others.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/others.rst new file mode 100644 index 0000000..ef191f7 --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/others.rst @@ -0,0 +1,32 @@ +:original_name: cce_bestpractice_0021.html + +.. _cce_bestpractice_0021: + +Others +====== + +Service Verification +-------------------- + +Testing personnel check the functions of the new cluster without interrupting the live traffic. + +- Configure a test domain name. +- Test service functions. +- Check O&M functions, such as log monitoring and alarm reporting. + +Switching Live Traffic to the CCE Cluster +----------------------------------------- + +O&M switch DNS to direct live traffic to the CCE cluster. + +- DNS traffic switching: Adjust the DNS configuration to switch traffic. +- Client traffic switching: Upgrade the client code or update the configuration to switch traffic. + +Bringing the ACK Cluster Offline +-------------------------------- + +After confirming that the service on the CCE cluster is normal, bring the ACK cluster offline and delete the backup files. + +- Verify that the service on the CCE cluster is running properly. +- Bring the ACK cluster offline. +- Delete backup files. 
diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/preparing_object_storage_and_velero.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/preparing_object_storage_and_velero.rst new file mode 100644 index 0000000..4f7949a --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/preparing_object_storage_and_velero.rst @@ -0,0 +1,91 @@ +:original_name: cce_bestpractice_0336.html + +.. _cce_bestpractice_0336: + +Preparing Object Storage and Velero +=================================== + +O&M or development personnel migrate Kubernetes objects using the Velero tool. + +Preparing Object Storage MinIO +------------------------------ + +MinIO official website: https://docs.min.io/ + +Prepare the object storage and save its AK/SK. + +#. Install the MinIO. + + MinIO is a high performance,distributed,Kubernetes Native Object Storage. + + .. code-block:: + + # Binary installation + mkdir /opt/minio + mkdir /opt/miniodata + cd /opt/minio + wget https://dl.minio.io/server/minio/release/linux-amd64/minio + chmod +x minio + export MINIO_ACCESS_KEY=minio + export MINIO_SECRET_KEY=minio123 + ./minio server /opt/miniodata/ & + Enter http://{EIP of the node where MinIO is deployed}:9000 in the address box of a browser. Note that the corresponding ports on the firewall and security group must be enabled. + + # Installing kubectl in containers + # To release the MinIO service as a service that can be accessed from outside the cluster, change the service type in 00-minio-deployment.yaml to NodePort or LoadBalancer. + kubectl apply -f ./velero-v1.4.0-linux-amd64/examples/minio/00-minio-deployment.yaml + +#. Create a bucket, which will be used in the migration. + + .. code-block:: + + Open the web page of the MinIO service. + Use MINIO_ACCESS_KEY/MINIO_SECRET_KEY to log in to the MinIO service. In this example, use minio/minio123. + Click Create bucket above +. In this example, create a bucket named velero. + +Preparing Velero +---------------- + +Velero official website: https://velero.io/docs/v1.4/contributions/minio/ + +Velero is an open source tool to safely back up, restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes. + +Perform the following operations on the ACK and CCE nodes that can run kubectl commands: + +#. Download the migration tool Velero. + + .. code-block:: + + Download the latest stable version from https://github.com/heptio/velero/releases. + This document uses velero-v1.4.0-linux-amd64.tar.gz as an example. + +#. Install the Velero client. + + .. code-block:: + + mkdir /opt/ack2cce + cd /opt/ack2cce + tar -xvf velero-v1.4.0-linux-amd64.tar.gz -C /opt/ack2cce + cp /opt/ack2cce/velero-v1.4.0-linux-amd64/velero /usr/local/bin + +#. Install the Velero server. + + .. code-block:: + + cd /opt/ack2cce + # Prepare the MinIO authentication file. The AK/SK must be correct. + vi credentials-velero + + [default] + aws_access_key_id = minio + aws_secret_access_key = minio123 + + # Install the Velero server. Note that s3Url must be set to the correct MinIO address. 
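+      # The port in s3Url must be the MinIO API port (9000 by default), not the console port,
+      # and the bucket name must match the bucket created on the MinIO console (velero in this example).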
+ velero install \ + --provider aws \ + --plugins velero/velero-plugin-for-aws:v1.0.0 \ + --bucket velero \ + --secret-file ./credentials-velero \ + --use-restic \ + --use-volume-snapshots=false \ + --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://{EIP of the node where minio runs}:9000 diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/restoring_kubernetes_objects_in_the_created_cce_cluster.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/restoring_kubernetes_objects_in_the_created_cce_cluster.rst new file mode 100644 index 0000000..f0ace86 --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/restoring_kubernetes_objects_in_the_created_cce_cluster.rst @@ -0,0 +1,48 @@ +:original_name: cce_bestpractice_0338.html + +.. _cce_bestpractice_0338: + +Restoring Kubernetes Objects in the Created CCE Cluster +======================================================= + +Creating a StorageClass +----------------------- + +In this example, the WordPress application uses Alibaba Cloud SSD persistent data volumes, which need to be replaced with HUAWEI CLOUD SSDs. + +The StorageClass used in this example is alicloud-disk-ssd. Create a StorageClass with the same name that uses HUAWEI CLOUD SSDs as the backend storage media. Adjust this configuration based on the application to be migrated. + +.. code-block:: console + + [root@ccenode-roprr hujun]# cat cce-sc-csidisk-ack.yaml + allowVolumeExpansion: true + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + name: alicloud-disk-ssd + selfLink: /apis/storage.k8s.io/v1/storageclasses/csi-disk + parameters: + csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io + csi.storage.k8s.io/fstype: ext4 + everest.io/disk-volume-type: SSD + everest.io/passthrough: "true" + provisioner: everest-csi-provisioner + reclaimPolicy: Delete + volumeBindingMode: Immediate + + [root@ccenode-roprr hujun]# kubectl create -f cce-sc-csidisk-ack.yaml + +Restoring the Application +------------------------- + +.. code-block:: console + + [root@ccenode-roprr hujun]# velero restore create --from-backup wordpress-ack-backup + Restore request "wordpress-ack-backup-20200707212519" submitted successfully. + Run `velero restore describe wordpress-ack-backup-20200707212519` or `velero restore logs wordpress-ack-backup-20200707212519` for more details + + [root@ccenode-roprr hujun]# velero restore get + NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR + wordpress-ack-backup-20200708112940 wordpress-ack-backup Completed 0 0 2020-07-08 11:29:42 +0800 CST + +Check the running status of the WordPress application. If issues such as image pull failures or service access failures occur, make the corresponding adaptations. diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/updating_resources_accordingly.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/updating_resources_accordingly.rst new file mode 100644 index 0000000..e04832d --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/procedure/updating_resources_accordingly.rst @@ -0,0 +1,199 @@ +:original_name: cce_bestpractice_0061.html + +.. _cce_bestpractice_0061: + +Updating Resources Accordingly +============================== + +.. 
_cce_bestpractice_0061__en-us_topic_0000001217102135_section7125750134820: + +Updating Images +--------------- + +The WordPress and MySQL images used in this example can be pulled from SWR. Therefore, the image pull failure (ErrImagePull) will not occur. If the application to be migrated is created from a private image, perform the following steps to update the image: + +#. Migrate the image resources to SWR. For details, see `Uploading an Image Through a Container Engine Client `__. + +#. Log in to the SWR console and obtain the image path used after the migration. + + The image path is in the following format: + + .. code-block:: + + 'swr.{Region}.myhuaweicloud.com/{Organization name}/{Image name}:{Tag name}' + +#. Run the following command to modify the workload and replace the **image** field in the YAML file with the image path: + + .. code-block:: + + kubectl edit deploy wordpress + +#. Check the running status of the workload. + +Updating Services +----------------- + +After the cluster is migrated, the Service of the source cluster may fail to take effect. You can perform the following steps to update the Service. If ingresses are configured in the source cluster, connect the new cluster to ELB again after the migration. For details, see `Using kubectl to Create an ELB Ingress `__. + +#. Connect to the cluster using kubectl. + +#. Edit the YAML file of the corresponding Service to change the Service type and port number. + + .. code-block:: + + kubectl edit svc wordpress + + To update load balancer resources, connect to ELB again. Add the annotations by following the procedure described in `Creating an Ingress - Interconnecting with an Existing Load Balancer `__. + + .. code-block:: + + annotations: + kubernetes.io/elb.class: union # Shared load balancer + kubernetes.io/elb.id: 9d06a39d-xxxx-xxxx-xxxx-c204397498a3 # Load balancer ID, which can be queried on the ELB console. + kubernetes.io/elb.subnet-id: f86ba71c-xxxx-xxxx-xxxx-39c8a7d4bb36 # ID of the subnet where the cluster resides + kubernetes.io/session-affinity-mode: SOURCE_IP # Enable the sticky session based on the source IP address. + +#. Use a browser to check whether the Service is available. + +.. _cce_bestpractice_0061__en-us_topic_0000001217102135_section746195321414: + +Updating the Storage Class +-------------------------- + +Because the storage infrastructure of the source and target clusters may differ, the original storage volumes cannot be mounted in the target cluster. You can use either of the following methods to update the volumes: + +.. important:: + + Both update methods can be performed only before the application is restored in the target cluster. Otherwise, PV data resources may fail to be restored. In this case, use Velero to restore applications after the storage class update is complete. For details, see :ref:`Restoring Applications in the Target Cluster `. + +**Method 1: Creating a ConfigMap mapping** + +#. Create a ConfigMap in the CCE cluster and map the storage class used by the source cluster to the default storage class of the CCE cluster. A filled-in example is provided after the following template. + + .. code-block:: + + apiVersion: v1 + kind: ConfigMap + metadata: + name: change-storageclass-plugin-config + namespace: velero + labels: + app.kubernetes.io/name: velero + velero.io/plugin-config: "true" + velero.io/change-storage-class: RestoreItemAction + data: + {Storage class name01 in the source cluster}: {Storage class name01 in the target cluster} + {Storage class name02 in the source cluster}: {Storage class name02 in the target cluster}
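+ + For example, if the source cluster uses the **alicloud-disk-ssd** storage class from this example and the target CCE cluster should use the default EVS storage class **csi-disk**, the **data** section would look as follows (adjust the names to match your own clusters): + + .. code-block:: + + data: + alicloud-disk-ssd: csi-disk + +#. 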
Run the following command to apply the ConfigMap configuration: + + .. code-block:: + + $ kubectl create -f change-storage-class.yaml + configmap/change-storageclass-plugin-config created + +**Method 2: Creating a storage class with the same name** + +#. Run the following command to query the default storage class supported by CCE: + + .. code-block:: + + kubectl get sc + + Information similar to the following is displayed: + + .. code-block:: + + NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE + csi-disk everest-csi-provisioner Delete Immediate true 3d23h + csi-disk-topology everest-csi-provisioner Delete WaitForFirstConsumer true 3d23h + csi-nas everest-csi-provisioner Delete Immediate true 3d23h + csi-obs everest-csi-provisioner Delete Immediate false 3d23h + csi-sfsturbo everest-csi-provisioner Delete Immediate true 3d23h + + .. table:: **Table 1** Storage classes + + ================= ======================== + Storage Class Storage Resource + ================= ======================== + csi-disk EVS + csi-disk-topology EVS with delayed binding + csi-nas SFS + csi-obs OBS + csi-sfsturbo SFS Turbo + ================= ======================== + +#. Run the following command to export the required storage class details in YAML format: + + .. code-block:: + + kubectl get sc -o=yaml + +#. Copy the YAML file and create a new storage class. + + Change the storage class name to the name used in the source cluster to call basic storage resources of the cloud. + + The YAML file of csi-obs is used as an example. Delete the unnecessary information in italic under the **metadata** field and modify the information in bold. You are advised not to modify other parameters. + + .. code-block:: + + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + creationTimestamp: "2021-10-18T06:41:36Z" + name: # Use the name of the storage class used in the source cluster. + resourceVersion: "747" + selfLink: /apis/storage.k8s.io/v1/storageclasses/csi-obs + uid: 4dbbe557-ddd1-4ce8-bb7b-7fa15459aac7 + parameters: + csi.storage.k8s.io/csi-driver-name: obs.csi.everest.io + csi.storage.k8s.io/fstype: obsfs + everest.io/obs-volume-type: STANDARD + provisioner: everest-csi-provisioner + reclaimPolicy: Delete + volumeBindingMode: Immediate + + .. note:: + + - SFS Turbo file systems cannot be directly created using StorageClass. Go to the SFS Turbo console to create SFS Turbo file systems that belong to the same VPC subnet and have inbound ports (111, 445, 2049, 2051, 2052, and 20048) enabled in the security group. + - CCE does not support EVS disks of the ReadWriteMany type. If resources of this type exist in the source cluster, change the storage type to **ReadWriteOnce**. + +#. Restore the cluster application by referring to :ref:`Restoring Applications in the Target Cluster ` and check whether the PVC is successfully created. + + .. code-block:: + + kubectl get pvc + + In the command output, the **VOLUME** column indicates the name of the PV automatically created using the storage class. + + .. code-block:: + + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE + pvc Bound pvc-4c8e655a-1dbc-4897-ae6c-446b502f5e77 5Gi RWX local 13s + +Updating Databases +------------------ + +In this example, the database is a local MySQL database and does not need to be reconfigured after the migration. If you use **DRS** to migrate a local database to RDS, configure database access based on site requirements after the migration. + +.. 
note:: + + - If the RDS instance is in the same VPC as the CCE cluster, it can be accessed using the private IP address. Otherwise, it can be accessed only through public networks by binding an EIP. You are advised to use the private network access mode for high security and good RDS performance. + - Ensure that the security group to which the RDS instance belongs has an inbound rule that allows access from the cluster. Otherwise, the connection will fail. + +#. Log in to the RDS console and obtain the private IP address and port number of the DB instance on the **Basic Information** page. + +#. Run the following command to modify the WordPress workload: + + .. code-block:: + + kubectl edit deploy wordpress + + Set the environment variables in the **env** field. + + - **WORDPRESS_DB_HOST**: address and port number used for accessing the database, that is, the internal network address and port number obtained in the previous step. + - **WORDPRESS_DB_USER**: username for accessing the database. + - **WORDPRESS_DB_PASSWORD**: password for accessing the database. + - **WORDPRESS_DB_NAME**: name of the database to be connected. + +#. Check whether the RDS database is properly connected. diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/resource_and_cost_planning.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/resource_and_cost_planning.rst new file mode 100644 index 0000000..562be27 --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/resource_and_cost_planning.rst @@ -0,0 +1,22 @@ +:original_name: cce_bestpractice_0016.html + +.. _cce_bestpractice_0016: + +Resource and Cost Planning +========================== + +.. important:: + + The fees listed here are estimates. The actual fees will be displayed on the Huawei Cloud console. + +The required resources are as follows: + +.. table:: **Table 1** Resource and cost planning + + +------------------------------+-------------------------------------------------------------------+-----------------+-------------------+ + | Resource | Description | Quantity | Monthly Fee (CNY) | + +==============================+===================================================================+=================+===================+ + | Cloud Container Engine (CCE) | - CCE cluster version: v1.21 | 1 | 0.55/hour | + | | - Minimum node specifications: 4 vCPUs, 8 GB memory, EulerOS 2.9 | | | + | | - Pay-per-use recommended | | | +------------------------------+-------------------------------------------------------------------+-----------------+-------------------+ diff --git a/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/solution_overview.rst b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/solution_overview.rst new file mode 100644 index 0000000..e7c8d73 --- /dev/null +++ b/doc/best-practice/source/migration/migrating_clusters_from_other_clouds_to_cce/solution_overview.rst @@ -0,0 +1,23 @@ +:original_name: cce_bestpractice_0014.html + +.. _cce_bestpractice_0014: + +Solution Overview +================= + +This section takes the WordPress application as an example to describe how to migrate an application from Alibaba Cloud ACK to Huawei Cloud CCE. Assume that you have deployed the WordPress application on Alibaba Cloud and created your own blog. 
+ +This document briefly describes how to smoothly migrate an application from an Alibaba Cloud ACK cluster to a Huawei Cloud CCE cluster in six steps without interrupting the service. + +Migration Scheme +---------------- + +|image1| + +Procedure +--------- + +|image2| + +.. |image1| image:: /_static/images/en-us_image_0000001402114285.png +.. |image2| image:: /_static/images/en-us_image_0264642164.png diff --git a/doc/best-practice/source/migration/migrating_container_images/index.rst b/doc/best-practice/source/migration/migrating_container_images/index.rst new file mode 100644 index 0000000..586618b --- /dev/null +++ b/doc/best-practice/source/migration/migrating_container_images/index.rst @@ -0,0 +1,20 @@ +:original_name: cce_bestpractice_0328.html + +.. _cce_bestpractice_0328: + +Migrating Container Images +========================== + +- :ref:`Overview ` +- :ref:`Migrating Images to SWR Using Docker Commands ` +- :ref:`Migrating Images to SWR Using image-syncer ` +- :ref:`Synchronizing Images Across Clouds from Harbor to SWR ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + migrating_images_to_swr_using_docker_commands + migrating_images_to_swr_using_image-syncer + synchronizing_images_across_clouds_from_harbor_to_swr diff --git a/doc/best-practice/source/migration/migrating_container_images/migrating_images_to_swr_using_docker_commands.rst b/doc/best-practice/source/migration/migrating_container_images/migrating_images_to_swr_using_docker_commands.rst new file mode 100644 index 0000000..3f9a675 --- /dev/null +++ b/doc/best-practice/source/migration/migrating_container_images/migrating_images_to_swr_using_docker_commands.rst @@ -0,0 +1,65 @@ +:original_name: cce_bestpractice_0330.html + +.. _cce_bestpractice_0330: + +Migrating Images to SWR Using Docker Commands +============================================= + +Scenarios +--------- + +SWR provides easy-to-use image hosting and efficient distribution services. If small quantity of images need to be migrated, enterprises can use the **docker pull/push** command to migrate images to SWR. + +Procedure +--------- + +#. .. _cce_bestpractice_0330__en-us_topic_0000001262401516_li13905204219362: + + Pull images from the source repository. + + Run the **docker pull** command to pull the images. + + Example: **docker pull nginx:latest** + + Run the **docker images** command to check whether the images are successfully pulled. + + .. code-block:: + + # docker images + REPOSITORY TAG IMAGE ID CREATED SIZE + nginx latest 22f2bf2e2b4f 5 hours ago 22.8MB + +#. Push the images pulled in :ref:`1 ` to SWR. + + a. Log in to the VM where the target container is located and log in to SWR. For details, see `Uploading an Image Through a Container Engine Client `__. + + b. Tag the images. + + **docker tag** **[Image name:Tag name] [Image repository address]/[Organization name]/[Image name:Tag name]** + + Example: + + **docker tag nginx:v1 swr.ap-southeast-1.myhuaweicloud.com/cloud-develop/nginx:v1** + + c. Run the following command to push the images to the target image repository. + + **docker push** **[Image repository address]/[Organization name]/[Image name:Tag name]** + + Example: + + **docker push swr.ap-southeast-1.myhuaweicloud.com/cloud-develop/nginx:v1** + + d. Check whether the following information is returned. If yes, the push is successful. + + .. 
code-block:: + + fbce26647e70: Pushed + fb04ab8effa8: Pushed + 8f736d52032f: Pushed + 009f1d338b57: Pushed + 678bbd796838: Pushed + d1279c519351: Pushed + f68ef921efae: Pushed + v1: digest: sha256:0cdfc7910db531bfa7726de4c19ec556bc9190aad9bd3de93787e8bce3385f8d size: 1780 + + To view the pushed image, refresh the **My Images** page. diff --git a/doc/best-practice/source/migration/migrating_container_images/migrating_images_to_swr_using_image-syncer.rst b/doc/best-practice/source/migration/migrating_container_images/migrating_images_to_swr_using_image-syncer.rst new file mode 100644 index 0000000..043e372 --- /dev/null +++ b/doc/best-practice/source/migration/migrating_container_images/migrating_images_to_swr_using_image-syncer.rst @@ -0,0 +1,100 @@ +:original_name: cce_bestpractice_0331.html + +.. _cce_bestpractice_0331: + +Migrating Images to SWR Using image-syncer +========================================== + +Scenarios +--------- + +If only a small number of images need to be migrated, you can use Docker commands. However, migrating thousands of images and several terabytes of image repository data this way takes a long time, and data may even be lost. In this case, you can use the open-source image migration tool `image-syncer `__. + +Procedure +--------- + +#. Download, decompress, and run image-syncer. + + The following uses image-syncer v1.3.1 as an example. + + **wget https://github.com/AliyunContainerService/image-syncer/releases/download/v1.3.1/image-syncer-v1.3.1-linux-amd64.tar.gz** + + **tar -zvxf image-syncer-v1.3.1-linux-amd64.tar.gz** + +#. Create **auth.json**, the authentication information file of the image repositories. + + image-syncer supports Docker image repositories based on Docker Registry V2. Enter the authentication information as required. In the following example, the image repository of AP-Singapore is migrated to CN-Hong Kong. + + The following describes how to write the authentication information of the source and target repositories. + + .. code-block:: + + { + "swr.ap-southeast-3.myhuaweicloud.com": { + "username": "ap-southeast-3@F1I3Q......", + "password": "2fd4c869ea0......" + }, + "swr.ap-southeast-1.myhuaweicloud.com": { + "username": "ap-southeast-1@4N3FA......", + "password": "f1c82b57855f9d35......" + } + } + + In the preceding example, **swr.ap-southeast-1.myhuaweicloud.com** indicates the image repository address. You can obtain the username and password from the login command as follows: + + Log in to the SWR console, and click **Generate Login Command** in the upper right corner to obtain the login command in the dialog box displayed, as shown in the following figure. + + .. _cce_bestpractice_0331__en-us_topic_0000001262561396_fig27182115592: + + .. figure:: /_static/images/en-us_image_0000001400827629.png + :alt: **Figure 1** Generating a login command + + **Figure 1** Generating a login command + + In :ref:`the above figure `, **ap-southeast-1@9LASB......** is the username; + + **e3d65a4c7a57624264c......** is the password; + + **swr.ap-southeast-1.myhuaweicloud.com** is the image repository address. + + .. caution:: + + For security, the example username and password are not complete. You should use the actual username and password obtained from the console. + +#. Create **images.json**, the image synchronization description file. + + In the following example, the source repository address is on the left, and the target repository address is on the right. image-syncer also supports other description modes. For details, see `README.md `__. + + .. 
code-block:: + + { + "swr.ap-southeast-3.myhuaweicloud.com/org-ss/canary-consumer": "swr.ap-southeast-1.myhuaweicloud.com/dev-container/canary-consumer" + } + +#. Run the following command to migrate the images to SWR: + + **./image-syncer --auth=./auth.json --images=./images.json --namespace=dev-container --registry=swr.ap-southeast-1.myhuaweicloud.com --retries=3 --log=./log** + + .. table:: **Table 1** Command parameter description + + +-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +=============+=================================================================================================================================================================================================================================================================================+ + | --config | Sets the path of config file. This file needs to be created before starting synchronization. Default config file is at "current/working/directory/config.json". (This flag can be replaced with flag **--auth** and **--images** which for better organization.) | + +-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --images | Sets the path of image rules file. This file needs to be created before starting synchronization. Default config file is at "current/working/directory/images.json". | + +-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --auth | Sets the path of authentication file. This file needs to be created before starting synchronization. Default config file is at "current/working/directory/auth.json". | + +-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --log | Sets the path of log file. Logs will be printed to Stderr by default. | + +-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --namespace | Sets default-namespace. default-namespace can also be set by environment variable **DEFAULT_NAMESPACE**. If they are both set at the same time, **DEFAULT_NAMESPACE** will not work at this synchronization. default-namespace will work only if default-registry is not empty. 
| + +-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --proc | Number of goroutines. Default value is 5. | + +-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --retries | Number of retries. Default value is 2. The retries of failed sync tasks will start after all sync tasks are executed once. Reties of failed sync tasks will resolve most occasional network problems during synchronization. | + +-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | --registry | Sets default-registry. Default-registry can also be set by environment variable **DEFAULT_REGISTRY**. If they are both set at the same time, **DEFAULT_REGISTRY** will not work at this synchronization. default-registry will work only if default-namespace is not empty. | + +-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + After the migration command is executed, you can log in to the target image repository to view the migrated images. diff --git a/doc/best-practice/source/migration/migrating_container_images/overview.rst b/doc/best-practice/source/migration/migrating_container_images/overview.rst new file mode 100644 index 0000000..96f6346 --- /dev/null +++ b/doc/best-practice/source/migration/migrating_container_images/overview.rst @@ -0,0 +1,38 @@ +:original_name: cce_bestpractice_0329.html + +.. _cce_bestpractice_0329: + +Overview +======== + +Challenges +---------- + +Containers are growing in popularity. Many enterprises choose to build their own Kubernetes clusters. However, the O&M workload of on-premises clusters is heavy, and O&M personnel need to configure the management systems and monitoring solutions by themselves. For enterprises, managing a large number of images requires high O&M, labor, and management costs, and the efficiency is low. + +SoftWare Repository for Container (SWR) manages container images that function on multiple architectures, such as Linux, Windows, and Arm. Enterprises can migrate their image repositories to SWR to reduce costs. + +This section describes three ways for migrating image repositories to SWR smoothly. You can select one as required. + +Migration Solutions +------------------- + +.. 
table:: **Table 1** Comparison of migration solutions and application scenarios + + +-------------------------------------------------------+-------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+ + | Solution | Application Scenario | Precautions | + +=======================================================+===========================================================================================+=============================================================================================================================================+ + | Migrating images to SWR using Docker commands | Small quantity of images | - Disk storage leads to the timely deletion of local images and time-cost flushing. | + | | | - Docker daemon strictly restricts the number of concurrent pull/push operations, so high-concurrency synchronization cannot be performed. | + | | | - Scripts are complex because HTTP APIs are needed for some operations which cannot be implemented only through Docker CLI. | + +-------------------------------------------------------+-------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+ + | Migrating images to SWR using image-syncer | A large number of images | - Many-to-many image repository synchronization is supported. | + | | | - Docker image repository services (such as Docker Hub, Quay, and Harbor) based on Docker Registry V2 are supported. | + | | | - Memory- and network-dependent synchronization is fast. | + | | | - Flushing the Blob information of synchronized images avoids repetition. | + | | | - Concurrent synchronization can be achieved by adjusting the number of concurrent tasks in the configuration files. | + | | | - Automatically retrying failed synchronization tasks can resolve most network jitter during image synchronization. | + | | | - Docker or other programs are not required. | + +-------------------------------------------------------+-------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+ + | Synchronizing images across clouds from Harbor to SWR | A customer deploys services in multiple clouds and uses Harbor as their image repository. | Only Harbor v1.10.5 and later versions are supported. | + +-------------------------------------------------------+-------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/doc/best-practice/source/migration/migrating_container_images/synchronizing_images_across_clouds_from_harbor_to_swr.rst b/doc/best-practice/source/migration/migrating_container_images/synchronizing_images_across_clouds_from_harbor_to_swr.rst new file mode 100644 index 0000000..bbf60e8 --- /dev/null +++ b/doc/best-practice/source/migration/migrating_container_images/synchronizing_images_across_clouds_from_harbor_to_swr.rst @@ -0,0 +1,151 @@ +:original_name: cce_bestpractice_0332.html + +.. 
_cce_bestpractice_0332: + +Synchronizing Images Across Clouds from Harbor to SWR +===================================================== + +Scenarios +--------- + +A customer deploys services in multiple clouds and uses Harbor as their image repository. There are two scenarios for synchronizing images from Harbor to SWR: + +#. Harbor accesses SWR through a public network. For details, see :ref:`Accessing SWR Through a Public Network `. +#. Harbor accesses SWR through a VPC endpoint by using a private line. For details, see :ref:`Accessing SWR Through a VPC Endpoint by Using a Private Line `. + +Background +---------- + +Harbor is an open-source enterprise-class Docker Registry server developed by VMware. It extends the Docker Distribution by adding the functionalities such as role-based access control (RBAC), image scanning, and image replication. Harbor has been widely used to store and distribute container images. + +.. _cce_bestpractice_0332__section126841459202520: + +Accessing SWR Through a Public Network +-------------------------------------- + +#. .. _cce_bestpractice_0332__li137017409395: + + Configure a registry endpoint on Harbor. + + .. note:: + + Huawei Cloud SWR has integrated with Harbor 1.10.5 and later versions. You only need to set **Provider** to **Huawei SWR** when configuring your endpoint. This document uses Harbor 2.4.1 as an example. + + a. Add an endpoint. + + |image1| + + b. Configure the following parameters. + + |image2| + + - **Provider**: Select **Huawei SWR**. + - **Name**: Enter a customized name. + - **Endpoint URL**: Enter the public network domain name of SWR in the format of **https://{SWR image repository address}**. To obtain the image repository address, log in to the SWR console, choose **My Images**, and click **Upload Through Client**. You can view the image repository address of the current region on the page that is displayed. + - **Access ID**: Enter an access ID in the format of **Regional project name@[AK]**. + - Access Secret: Enter an AK/SK. To obtain an AK/SK, see `Obtaining a Long-Term Valid Login Command `__. + - **Verify Remote Cert**: **Deselect** the option. + +#. Configure a replication rule. + + a. Create a replication rule. + + |image3| + + b. Configure the following parameters. + + - **Name**: Enter a customized name. + + - **Replication mode**: Select **Push-based**, indicating that images are pushed from the local Harbor to the remote repository. + + - **Source resource filter**: Filters images on Harbor based on the configured rules. + + - **Destination registry**: Select the endpoint created in :ref:`1 `. + + - **Destination** + + **Namespace**: Enter the organization name on SWR. + + **Flattening**: Select **Flatten All Levels**, indicating that the hierarchy of the registry is reduced when copying images. If the directory of Harbor registry is **library/nginx** and the directory of the endpoint namespace is **dev-container**, after you flatten all levels, the directory of the endpoint namespace is **library/nginx -> dev-container/nginx**. + + - **Trigger Mode**: Select **Manual**. + + - **Bandwidth**: Set the maximum network bandwidth when executing the replication rule. The value **-1** indicates no limitation. + +#. After creating the replication rule, select it and click **REPLICATE** to complete the replication. + + |image4| + +.. _cce_bestpractice_0332__section13685165982520: + +Accessing SWR Through a VPC Endpoint by Using a Private Line +------------------------------------------------------------ + +#. 
Configure a VPC endpoint. + +#. Obtain the private network IP address and domain name of the VPC. (By default, the domain name resolution rule is automatically added to Huawei Cloud VPCs, so you only need to configure hosts for non-Huawei Cloud endpoints.) You can query the IP address and domain name in **Private Domain Name** on the VPC endpoint details page. + + |image5| + +#. .. _cce_bestpractice_0332__li34036320421: + + Configure a registry endpoint on Harbor. + + .. note:: + + Huawei Cloud SWR has integrated with Harbor 1.10.5 and later versions. You only need to set **Provider** to **Huawei SWR** when configuring your endpoint. This document uses Harbor 2.4.1 as an example. + + a. Add an endpoint. + + b. Configure the following parameters. + + |image6| + + - **Provider**: Select **Huawei SWR**. + - **Name**: Enter a customized name. + - **Endpoint URL**: Enter **the private network domain name of the VPC endpoint**, which must start with **https**. In addition, the domain name mapping must be configured in the container where Harbor is located. + - **Access ID**: Enter an access ID in the format of **Regional project name@[AK]**. + - **Access Secret**: Enter an AK/SK. To obtain an AK/SK, see `Obtaining a Long-Term Valid Login Command `__. + - **Verify Remote Cert**: **Deselect** the option. + +#. Configure a replication rule. + + a. Create a replication rule. + + |image7| + + b. Configure the following parameters. + + |image8| + + - **Name**: Enter a customized name. + + - **Replication mode**: Select **Push-based**, indicating that images are pushed from the local Harbor to the remote repository. + + - **Source resource filter**: Filters images on Harbor based on the configured rules. + + - **Destination registry**: Select the endpoint created in :ref:`3 `. + + - Destination + + **Namespace**: Enter the organization name on SWR. + + **Flattening**: Select **Flatten All Levels**, indicating that the hierarchy of the registry is reduced when copying images. If the directory of Harbor registry is **library/nginx** and the directory of the endpoint namespace is **dev-container**, after you flatten all levels, the directory of the endpoint namespace is **library/nginx -> dev-container/nginx**. + + - **Trigger Mode**: Select **Manual**. + + - **Bandwidth**: Set the maximum network bandwidth when executing the replication rule. The value **-1** indicates no limitation. + +#. After creating the replication rule, select it and click **REPLICATE** to complete the replication. + + |image9| + +.. |image1| image:: /_static/images/en-us_image_0000001469005545.png +.. |image2| image:: /_static/images/en-us_image_0000001418569120.png +.. |image3| image:: /_static/images/en-us_image_0000001468885853.png +.. |image4| image:: /_static/images/en-us_image_0000001418729104.png +.. |image5| image:: /_static/images/en-us_image_0000001418569168.png +.. |image6| image:: /_static/images/en-us_image_0000001418729128.png +.. |image7| image:: /_static/images/en-us_image_0000001469005601.png +.. |image8| image:: /_static/images/en-us_image_0000001468605617.png +.. |image9| image:: /_static/images/en-us_image_0000001468885889.png