diff --git a/umn/source/_static/images/en-us_image_0000001620873737.png b/umn/source/_static/images/en-us_image_0000001620873737.png new file mode 100644 index 0000000..1186647 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001620873737.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147446.png b/umn/source/_static/images/en-us_image_0000001685147446.png new file mode 100644 index 0000000..ba47a09 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147446.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147450.png b/umn/source/_static/images/en-us_image_0000001685147450.png new file mode 100644 index 0000000..70a7eda Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147450.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147478.png b/umn/source/_static/images/en-us_image_0000001685147478.png new file mode 100644 index 0000000..96eba12 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147478.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147494.png b/umn/source/_static/images/en-us_image_0000001685147494.png new file mode 100644 index 0000000..2d70845 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147494.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147562.png b/umn/source/_static/images/en-us_image_0000001685147562.png new file mode 100644 index 0000000..4b159e1 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147562.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147566.png b/umn/source/_static/images/en-us_image_0000001685147566.png new file mode 100644 index 0000000..cb92361 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147566.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147570.png 
b/umn/source/_static/images/en-us_image_0000001685147570.png new file mode 100644 index 0000000..d57ae90 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147570.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147590.png b/umn/source/_static/images/en-us_image_0000001685147590.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147590.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147602.png b/umn/source/_static/images/en-us_image_0000001685147602.png new file mode 100644 index 0000000..f119df0 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147602.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147610.png b/umn/source/_static/images/en-us_image_0000001685147610.png new file mode 100644 index 0000000..f6ebadc Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147610.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147638.png b/umn/source/_static/images/en-us_image_0000001685147638.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147638.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147654.png b/umn/source/_static/images/en-us_image_0000001685147654.png new file mode 100644 index 0000000..fac45f5 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147654.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147662.png b/umn/source/_static/images/en-us_image_0000001685147662.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147662.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147678.png b/umn/source/_static/images/en-us_image_0000001685147678.png new file mode 100644 index 
0000000..8106e7a Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147678.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685147682.png b/umn/source/_static/images/en-us_image_0000001685147682.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685147682.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307194.png b/umn/source/_static/images/en-us_image_0000001685307194.png new file mode 100644 index 0000000..28d08d9 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307194.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307202.png b/umn/source/_static/images/en-us_image_0000001685307202.png new file mode 100644 index 0000000..1267615 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307202.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307210.png b/umn/source/_static/images/en-us_image_0000001685307210.png new file mode 100644 index 0000000..dd36d9b Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307210.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307214.png b/umn/source/_static/images/en-us_image_0000001685307214.png new file mode 100644 index 0000000..4b159e1 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307214.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307262.png b/umn/source/_static/images/en-us_image_0000001685307262.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307262.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307302.png b/umn/source/_static/images/en-us_image_0000001685307302.png new file mode 100644 index 0000000..62ddb8e Binary files /dev/null and 
b/umn/source/_static/images/en-us_image_0000001685307302.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307306.jpg b/umn/source/_static/images/en-us_image_0000001685307306.jpg new file mode 100644 index 0000000..5d08e15 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307306.jpg differ diff --git a/umn/source/_static/images/en-us_image_0000001685307310.png b/umn/source/_static/images/en-us_image_0000001685307310.png new file mode 100644 index 0000000..168c349 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307310.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307318.png b/umn/source/_static/images/en-us_image_0000001685307318.png new file mode 100644 index 0000000..168c349 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307318.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307326.png b/umn/source/_static/images/en-us_image_0000001685307326.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307326.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307342.png b/umn/source/_static/images/en-us_image_0000001685307342.png new file mode 100644 index 0000000..65cb357 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307342.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307354.png b/umn/source/_static/images/en-us_image_0000001685307354.png new file mode 100644 index 0000000..c3676d6 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307354.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307362.png b/umn/source/_static/images/en-us_image_0000001685307362.png new file mode 100644 index 0000000..c9b040a Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307362.png differ diff --git 
a/umn/source/_static/images/en-us_image_0000001685307386.png b/umn/source/_static/images/en-us_image_0000001685307386.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307386.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307394.png b/umn/source/_static/images/en-us_image_0000001685307394.png new file mode 100644 index 0000000..72f9241 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307394.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307398.png b/umn/source/_static/images/en-us_image_0000001685307398.png new file mode 100644 index 0000000..2dc8c35 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307398.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307406.png b/umn/source/_static/images/en-us_image_0000001685307406.png new file mode 100644 index 0000000..c8d49ac Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307406.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307410.png b/umn/source/_static/images/en-us_image_0000001685307410.png new file mode 100644 index 0000000..168c349 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307410.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307426.png b/umn/source/_static/images/en-us_image_0000001685307426.png new file mode 100644 index 0000000..fc7bf2d Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307426.png differ diff --git a/umn/source/_static/images/en-us_image_0000001685307430.png b/umn/source/_static/images/en-us_image_0000001685307430.png new file mode 100644 index 0000000..17774fb Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001685307430.png differ diff --git a/umn/source/_static/images/en-us_image_0000001700277302.png 
b/umn/source/_static/images/en-us_image_0000001700277302.png new file mode 100644 index 0000000..f287a55 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001700277302.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146257.png b/umn/source/_static/images/en-us_image_0000001733146257.png new file mode 100644 index 0000000..8b00a59 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146257.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146261.png b/umn/source/_static/images/en-us_image_0000001733146261.png new file mode 100644 index 0000000..168c349 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146261.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146273.png b/umn/source/_static/images/en-us_image_0000001733146273.png new file mode 100644 index 0000000..4b159e1 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146273.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146277.png b/umn/source/_static/images/en-us_image_0000001733146277.png new file mode 100644 index 0000000..ecd22fc Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146277.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146301.png b/umn/source/_static/images/en-us_image_0000001733146301.png new file mode 100644 index 0000000..1679a67 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146301.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146317.png b/umn/source/_static/images/en-us_image_0000001733146317.png new file mode 100644 index 0000000..168c349 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146317.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146325.png b/umn/source/_static/images/en-us_image_0000001733146325.png new file mode 100644 index 
0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146325.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146333.png b/umn/source/_static/images/en-us_image_0000001733146333.png new file mode 100644 index 0000000..7cabd61 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146333.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146365.png b/umn/source/_static/images/en-us_image_0000001733146365.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146365.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146369.png b/umn/source/_static/images/en-us_image_0000001733146369.png new file mode 100644 index 0000000..1267615 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146369.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146373.png b/umn/source/_static/images/en-us_image_0000001733146373.png new file mode 100644 index 0000000..533f604 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146373.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146381.png b/umn/source/_static/images/en-us_image_0000001733146381.png new file mode 100644 index 0000000..e431718 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146381.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146397.png b/umn/source/_static/images/en-us_image_0000001733146397.png new file mode 100644 index 0000000..168c349 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146397.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146405.png b/umn/source/_static/images/en-us_image_0000001733146405.png new file mode 100644 index 0000000..168c349 Binary files /dev/null and 
b/umn/source/_static/images/en-us_image_0000001733146405.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146413.png b/umn/source/_static/images/en-us_image_0000001733146413.png new file mode 100644 index 0000000..6184cfc Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146413.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146449.png b/umn/source/_static/images/en-us_image_0000001733146449.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146449.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146461.png b/umn/source/_static/images/en-us_image_0000001733146461.png new file mode 100644 index 0000000..72f9241 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146461.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733146485.png b/umn/source/_static/images/en-us_image_0000001733146485.png new file mode 100644 index 0000000..b0aa425 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733146485.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266389.png b/umn/source/_static/images/en-us_image_0000001733266389.png new file mode 100644 index 0000000..05725b8 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266389.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266393.png b/umn/source/_static/images/en-us_image_0000001733266393.png new file mode 100644 index 0000000..62ddb8e Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266393.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266397.png b/umn/source/_static/images/en-us_image_0000001733266397.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266397.png differ diff --git 
a/umn/source/_static/images/en-us_image_0000001733266413.png b/umn/source/_static/images/en-us_image_0000001733266413.png new file mode 100644 index 0000000..f05928a Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266413.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266417.png b/umn/source/_static/images/en-us_image_0000001733266417.png new file mode 100644 index 0000000..80c4990 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266417.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266429.png b/umn/source/_static/images/en-us_image_0000001733266429.png new file mode 100644 index 0000000..851c244 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266429.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266445.png b/umn/source/_static/images/en-us_image_0000001733266445.png new file mode 100644 index 0000000..168c349 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266445.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266489.png b/umn/source/_static/images/en-us_image_0000001733266489.png new file mode 100644 index 0000000..168c349 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266489.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266493.png b/umn/source/_static/images/en-us_image_0000001733266493.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266493.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266501.png b/umn/source/_static/images/en-us_image_0000001733266501.png new file mode 100644 index 0000000..1909444 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266501.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266529.png 
b/umn/source/_static/images/en-us_image_0000001733266529.png new file mode 100644 index 0000000..6ad01fc Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266529.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266537.png b/umn/source/_static/images/en-us_image_0000001733266537.png new file mode 100644 index 0000000..37c8275 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266537.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266553.png b/umn/source/_static/images/en-us_image_0000001733266553.png new file mode 100644 index 0000000..168c349 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266553.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266557.png b/umn/source/_static/images/en-us_image_0000001733266557.png new file mode 100644 index 0000000..440b60b Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266557.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266565.png b/umn/source/_static/images/en-us_image_0000001733266565.png new file mode 100644 index 0000000..168c349 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266565.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266569.png b/umn/source/_static/images/en-us_image_0000001733266569.png new file mode 100644 index 0000000..168c349 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266569.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266613.png b/umn/source/_static/images/en-us_image_0000001733266613.png new file mode 100644 index 0000000..a82c89e Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266613.png differ diff --git a/umn/source/_static/images/en-us_image_0000001733266617.png b/umn/source/_static/images/en-us_image_0000001733266617.png new file mode 100644 index 
0000000..6916379 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001733266617.png differ diff --git a/umn/source/_static/images/en-us_image_0000001749511672.png b/umn/source/_static/images/en-us_image_0000001749511672.png new file mode 100644 index 0000000..4597538 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001749511672.png differ diff --git a/umn/source/account_management/creating_an_account.rst b/umn/source/account_management/creating_an_account.rst new file mode 100644 index 0000000..aaadd16 --- /dev/null +++ b/umn/source/account_management/creating_an_account.rst @@ -0,0 +1,48 @@ +:original_name: ddm_05_0002.html + +.. _ddm_05_0002: + +Creating an Account +=================== + +Prerequisites +------------- + +- You have logged in to the DDM console. +- There are schemas available in the DDM instance that you want to create an account for. + +Procedure +--------- + +#. In the instance list, locate the required DDM instance and click its name. +#. In the navigation pane, choose **Accounts**. +#. On the displayed page, click **Create Account** and configure the required parameters. + + .. table:: **Table 1** Required parameters + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=======================================================================================================================================================+ + | Username | Username of the account. | + | | | + | | The username can consist of 1 to 32 characters and must start with a letter. Only letters, digits, and underscores (_) are allowed. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Password | Password of the account. The password: | + | | | + | | - Must be case-sensitive. | + | | - Can include 8 to 32 characters. | + | | - Must contain at least three of the following character types: letters, digits, and special characters ``~!@#%^*-_=+?`` | + | | - Do not use weak or easy-to-guess passwords. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Confirm Password | The confirm password must be the same as the entered password. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Schema | Schema to be associated with the account. You can select an existing schema from the drop-down list. | + | | | + | | The account can be used to access only the associated schemas. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permissions | Options: **CREATE**, **DROP**, **ALTER**, **INDEX**, **INSERT**, **DELETE**, **UPDATE**, and **SELECT**. You can select any or a combination of them. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Description | Description of the account, which cannot exceed 256 characters. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. Confirm the settings and click **OK**. diff --git a/umn/source/account_management/deleting_an_account.rst b/umn/source/account_management/deleting_an_account.rst new file mode 100644 index 0000000..9711bae --- /dev/null +++ b/umn/source/account_management/deleting_an_account.rst @@ -0,0 +1,23 @@ +:original_name: ddm_05_0004.html + +.. _ddm_05_0004: + +Deleting an Account +=================== + +Prerequisites +------------- + +You have logged in to the DDM console. + +.. note:: + + Deleted accounts cannot be recovered. Exercise caution when performing this operation. + +Procedure +--------- + +#. In the instance list, locate the DDM instance with the account that you want to delete and click its name. +#. In the navigation pane, choose **Accounts**. +#. In the account list, locate the account that you want to delete and choose **More** > **Delete** in the **Operation** column. +#. In the displayed dialog box, click **Yes**. diff --git a/umn/source/account_management/index.rst b/umn/source/account_management/index.rst new file mode 100644 index 0000000..8b868c2 --- /dev/null +++ b/umn/source/account_management/index.rst @@ -0,0 +1,20 @@ +:original_name: ddm_05_0001.html + +.. _ddm_05_0001: + +Account Management +================== + +- :ref:`Creating an Account ` +- :ref:`Modifying an Account ` +- :ref:`Deleting an Account ` +- :ref:`Resetting the Password of an Account ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + creating_an_account + modifying_an_account + deleting_an_account + resetting_the_password_of_an_account diff --git a/umn/source/account_management/modifying_an_account.rst b/umn/source/account_management/modifying_an_account.rst new file mode 100644 index 0000000..e08f1de --- /dev/null +++ b/umn/source/account_management/modifying_an_account.rst @@ -0,0 +1,20 @@ +:original_name: ddm_05_0003.html + +.. _ddm_05_0003: + +Modifying an Account +==================== + +Prerequisites +------------- + +You have logged in to the DDM console. + +Procedure +--------- + +#. In the instance list, locate the DDM instance whose account you want to modify and click its name. +#. In the navigation pane, choose **Accounts**. +#. In the account list, locate the required account and click **Modify** in the **Operation** column. +#. In the displayed dialog box, modify the associated schemas, permissions, and description. +#. Click **OK**. diff --git a/umn/source/account_management/resetting_the_password_of_an_account.rst b/umn/source/account_management/resetting_the_password_of_an_account.rst new file mode 100644 index 0000000..df149c5 --- /dev/null +++ b/umn/source/account_management/resetting_the_password_of_an_account.rst @@ -0,0 +1,20 @@ +:original_name: ddm_05_0008.html + +.. _ddm_05_0008: + +Resetting the Password of an Account +==================================== + +Prerequisites +------------- + +- You have logged in to the DDM console. +- Resetting the DDM account password is a high-risk operation. Ensure that you have the IAM permission to modify DDM accounts. + +Procedure +--------- + +#. In the instance list, locate the DDM instance with the account whose password you want to reset and click its name. +#. In the navigation pane, choose **Accounts**. +#. In the account list, locate the required account and choose **More** > **Reset Password** in the **Operation** column. +#.
In the displayed dialog box, enter the new password, confirm the new password, and click **OK**. diff --git a/umn/source/auditing/index.rst b/umn/source/auditing/index.rst new file mode 100644 index 0000000..bf79322 --- /dev/null +++ b/umn/source/auditing/index.rst @@ -0,0 +1,16 @@ +:original_name: ddm_11_0001.html + +.. _ddm_11_0001: + +Auditing +======== + +- :ref:`Key Operations Recorded by CTS <ddm_11_0002>` +- :ref:`Querying Traces <ddm_11_0003>` + +.. toctree:: + :maxdepth: 1 + :hidden: + + key_operations_recorded_by_cts + querying_traces diff --git a/umn/source/auditing/key_operations_recorded_by_cts.rst b/umn/source/auditing/key_operations_recorded_by_cts.rst new file mode 100644 index 0000000..7b7d03a --- /dev/null +++ b/umn/source/auditing/key_operations_recorded_by_cts.rst @@ -0,0 +1,82 @@ +:original_name: ddm_11_0002.html + +.. _ddm_11_0002: + +Key Operations Recorded by CTS +============================== + +Cloud Trace Service (CTS) records operations related to DDM for later query, audit, and backtracking. + +.. 
table:: **Table 1** DDM operations that can be recorded by CTS + + +----------------------------------------------------+----------------+-------------------------+ + | Operation | Resource Type | Trace Name | + +====================================================+================+=========================+ + | Applying a parameter template | parameterGroup | applyParameterGroup | + +----------------------------------------------------+----------------+-------------------------+ + | Clearing metadata after a schema is scaled out | logicDB | cleanMigrateLogicDB | + +----------------------------------------------------+----------------+-------------------------+ + | Clearing user resources | all | cleanupUserAllResources | + +----------------------------------------------------+----------------+-------------------------+ + | Replicating a parameter template | parameterGroup | copyParameterGroup | + +----------------------------------------------------+----------------+-------------------------+ + | Creating a DDM instance | instance | createInstance | + +----------------------------------------------------+----------------+-------------------------+ + | Creating a schema | logicDB | createLogicDB | + +----------------------------------------------------+----------------+-------------------------+ + | Creating a parameter template | parameterGroup | createParameterGroup | + +----------------------------------------------------+----------------+-------------------------+ + | Creating an account | user | createUser | + +----------------------------------------------------+----------------+-------------------------+ + | Deleting a DDM instance | instance | deleteInstance | + +----------------------------------------------------+----------------+-------------------------+ + | Deleting a schema | logicDB | deleteLogicDB | + +----------------------------------------------------+----------------+-------------------------+ + | Deleting a parameter template | parameterGroup | 
deleteParameterGroup | + +----------------------------------------------------+----------------+-------------------------+ + | Deleting an account | user | deleteUser | + +----------------------------------------------------+----------------+-------------------------+ + | Scaling out a DDM instance | instance | enlargeNode | + +----------------------------------------------------+----------------+-------------------------+ + | Restarting a DDM instance | instance | instanceRestart | + +----------------------------------------------------+----------------+-------------------------+ + | Importing schema information | instance | loadMetadata | + +----------------------------------------------------+----------------+-------------------------+ + | Switching the route during scaling | logicDB | manualSwitchRoute | + +----------------------------------------------------+----------------+-------------------------+ + | Scaling out a schema | logicDB | migrateLogicDB | + +----------------------------------------------------+----------------+-------------------------+ + | Modifying a parameter template | parameterGroup | modifyParameterGroup | + +----------------------------------------------------+----------------+-------------------------+ + | Changing the route switching time | logicDB | modifyRouteSwitchTime | + +----------------------------------------------------+----------------+-------------------------+ + | Modifying an account | user | modifyUser | + +----------------------------------------------------+----------------+-------------------------+ + | Scaling in a DDM instance | instance | reduceNode | + +----------------------------------------------------+----------------+-------------------------+ + | Resetting a parameter template | parameterGroup | resetParameterGroup | + +----------------------------------------------------+----------------+-------------------------+ + | Resetting the password of an account | user | resetUserPassword | + 
+----------------------------------------------------+----------------+-------------------------+ + | Changing node class | instance | resizeFlavor | + +----------------------------------------------------+----------------+-------------------------+ + | Restoring DB instance data | instance | restoreInstance | + +----------------------------------------------------+----------------+-------------------------+ + | Retrying to scale out a schema | logicDB | retryMigrateLogicDB | + +----------------------------------------------------+----------------+-------------------------+ + | Rolling back the version upgrade of a DDM instance | instance | rollback | + +----------------------------------------------------+----------------+-------------------------+ + | Rolling back a schema scaling task | logicDB | rollbackMigrateLogicDB | + +----------------------------------------------------+----------------+-------------------------+ + | Configuring access control | instance | switchIpGroup | + +----------------------------------------------------+----------------+-------------------------+ + | Synchronizing data node information | instance | synRdsinfo | + +----------------------------------------------------+----------------+-------------------------+ + | Upgrading the version of a DDM Instance | instance | upgrade | + +----------------------------------------------------+----------------+-------------------------+ + | Creating a node group | group | createGroup | + +----------------------------------------------------+----------------+-------------------------+ + | Modifying the floating IP address | instance | modifyIp | + +----------------------------------------------------+----------------+-------------------------+ + | Modifying the name of an instance | instance | modifyName | + +----------------------------------------------------+----------------+-------------------------+ diff --git a/umn/source/auditing/querying_traces.rst b/umn/source/auditing/querying_traces.rst 
new file mode 100644 index 0000000..6474dd8 --- /dev/null +++ b/umn/source/auditing/querying_traces.rst @@ -0,0 +1,46 @@ +:original_name: ddm_11_0003.html + +.. _ddm_11_0003: + +Querying Traces +=============== + +Scenarios +--------- + +After CTS is enabled, the tracker starts recording operations on cloud resources. Operation records for the last 7 days are stored on the CTS console. + +This section describes how to query operation records for the last 7 days on the CTS console. + +Procedure +--------- + +#. Log in to the management console. + +#. Under **Management & Governance**, click **Cloud Trace Service**. + +#. Choose **Trace List** in the navigation pane on the left. + +#. Specify filter criteria to search for the required traces. The following filters are available: + + - **Trace Source**, **Resource Type**, and **Search By**: Select a filter from the drop-down list. + + When you select **Resource ID** for **Search By**, you also need to select or enter a resource ID. + + - **Operator**: Select a specific operator from the drop-down list. + + - **Trace Status**: Available options include **All trace statuses**, **Normal**, **Warning**, and **Incident**. You can select only one of them. + + - In the upper right corner of the page, you can specify a time range for querying traces. + +#. Click **Query**. + +#. Locate the required trace and click |image1| on the left of the trace to view its details. + +#. Click **View Trace** in the **Operation** column. In the displayed dialog box, view the trace structure details. + +#. Click **Export** on the right. CTS exports traces collected in the past seven days to a CSV file. The CSV file contains all information related to traces on the management console. + + For details about key fields in the trace structure, see sections "Trace Structure" and "Trace Examples" in the *Cloud Trace Service User Guide*. + +..
|image1| image:: /_static/images/en-us_image_0000001733146333.png diff --git a/umn/source/backups_and_restorations/index.rst b/umn/source/backups_and_restorations/index.rst new file mode 100644 index 0000000..6315e6c --- /dev/null +++ b/umn/source/backups_and_restorations/index.rst @@ -0,0 +1,16 @@ +:original_name: ddm_03_0067.html + +.. _ddm_03_0067: + +Backups and Restorations +======================== + +- :ref:`Restoring Data to a New Instance ` +- :ref:`Restoring Metadata ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + restoring_data_to_a_new_instance + restoring_metadata diff --git a/umn/source/backups_and_restorations/restoring_data_to_a_new_instance.rst b/umn/source/backups_and_restorations/restoring_data_to_a_new_instance.rst new file mode 100644 index 0000000..71f4cc1 --- /dev/null +++ b/umn/source/backups_and_restorations/restoring_data_to_a_new_instance.rst @@ -0,0 +1,69 @@ +:original_name: ddm_0600016.html + +.. _ddm_0600016: + +Restoring Data to a New Instance +================================ + +DDM allows you to restore data from the current instance to any point in time using an existing backup. This is a good choice for routine service backup and restoration. + +This section uses an RDS for MySQL instance as an example to describe how to restore data to a new DDM instance. + +Precautions +----------- + +- Restoring data to a new instance restores your DDM instance and its data nodes (RDS for MySQL instances). Before the restoration, you need to prepare a new DDM instance and as many new RDS for MySQL instances as there are data nodes. + +- Restoring data to a new DDM instance will overwrite data on it and cause the instance to be unavailable during restoration. +- The new RDS for MySQL instances must have the same or later versions and the same or larger storage space than the original ones.
+- Restoration is not supported if the destination DDM instance is in the primary network and its associated RDS for MySQL instance is in the extended network. +- The source DDM instance must be version 2.3.2.11 or later, and the destination DDM instance must be version 3.0.8 or later. +- The time points that data can be restored to depend on the backup policy set on the original data nodes. + +Procedure +--------- + +#. Log in to the DDM console. + +#. .. _ddm_0600016__li4793191882712: + + Create a new DDM instance in the region where the source DDM instance is located or select an existing DDM instance that meets the requirements. + + .. note:: + + Ensure that the new DDM instance or the selected existing DDM instance is not associated with any RDS for MySQL instance and has no schemas or accounts. + +#. .. _ddm_0600016__li1017501443616: + + On the RDS console, create as many RDS for MySQL instances as there are data nodes in the source DDM instance. + + .. note:: + + - Ensure that the new RDS instances have the same or later versions than the RDS instances associated with the source DDM instance. + - Ensure that each new RDS for MySQL instance has the same or larger storage space than each source RDS instance. + +#. Switch back to the DDM console. In the instance list, locate the DDM instance whose data you want to restore and click its name. + +#. In the navigation pane on the left, choose **Backups & Restorations**. + +#. Click **Restore to New Instance**. + +#. On the displayed **Restore to New Instance** page, specify a time range and a point in time, and select a destination DDM instance and associated data nodes. + + ..
table:: **Table 1** Parameter description + + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+======================================================================================================================+ + | Time Range | Select a time range. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------+ + | Time Point | Select a time point. | + | | | + | | DDM checks whether the associated data nodes have available backups at the selected point in time. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------+ + | Destination DDM Instance | Select the DDM instance created in :ref:`2 ` as the destination instance. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------+ + | Associated Data Nodes | Select the RDS for MySQL instances created in :ref:`3 ` as the destination data nodes. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------+ + +#. Confirm the information and click **OK**. Wait for 1 to 3 minutes for the data restoration to complete. diff --git a/umn/source/backups_and_restorations/restoring_metadata.rst b/umn/source/backups_and_restorations/restoring_metadata.rst new file mode 100644 index 0000000..9583d37 --- /dev/null +++ b/umn/source/backups_and_restorations/restoring_metadata.rst @@ -0,0 +1,91 @@ +:original_name: ddm_0600017.html + +.. 
_ddm_0600017: + +Restoring Metadata +================== + +DDM automatically backs up DDM instance metadata at 02:00 UTC+00:00 every day and retains the backup data for 30 days. Metadata backup is also triggered by key operations that affect metadata, such as deleting a schema, deleting data after shard configuration, and deleting instances. + +If you delete a schema by mistake or your RDS for MySQL instances become abnormal, metadata restoration allows you to restore your DDM instance metadata and match it with the RDS instances that have completed PITR, re-establishing the relationship between your DDM instance and RDS instances. Metadata restoration supports only RDS for MySQL. + +To restore metadata of a DDM instance, you can specify a point in time by referring to :ref:`Restoring Metadata to a Point in Time `, or use an available backup by referring to :ref:`Restoring Metadata Using an Available Backup `. + +Precautions +----------- + +- Metadata restoration mainly restores the metadata of your DDM instance to a new DDM instance. It starts after a point-in-time recovery (PITR) for the associated data nodes is complete. + + .. note:: + + PITR indicates that a data node has been restored to a specified point in time. + +- The destination DDM instance must not be associated with any RDS for MySQL instance and must have no schemas or accounts. +- Ensure that the selected RDS for MySQL instances have completed PITR. +- Restoration is not supported if the destination DDM instance is in the primary network and its associated RDS for MySQL instance is in the extended network. +- The source DDM instance must be version 2.3.2.11 or later, and the destination DDM instance must be version 3.0.8 or later. +- The time points that data can be restored to depend on the backup policy set on the original data nodes. + +.. _ddm_0600017__section128315484524: + +Restoring Metadata to a Point in Time +------------------------------------- + +#. Log in to the DDM console. + +#.
.. _ddm_0600017__li4793191882712: + + :ref:`Create a new DDM instance `. + +#. In the DDM instance list, locate the newly-created instance and click its name. + +#. In the navigation pane on the left, choose **Backups & Restorations**. + +#. Click **Restore Metadata**. + +#. On the displayed page, specify a time point. DDM will select an appropriate DDM metadata backup closest to the time point. + + .. table:: **Table 1** Parameter description + + +--------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +==========================+=============================================================================================================================================================+ + | Restore To | Specify a point in time. DDM will restore metadata to this point in time using the most recent backup. | + +--------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Destination DDM Instance | Select the DDM instance created in :ref:`2 ` as the destination instance. | + +--------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Destination Data Nodes | Select the RDS for MySQL instances that have completed PITR. DDM will match the selected data nodes with shard information in the selected metadata backup. | + +--------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. Click **OK**. If a message is displayed indicating that the metadata is restored successfully, the restoration is complete. + +.. 
_ddm_0600017__section132806221525: + +Restoring Metadata Using an Available Backup +-------------------------------------------- + +#. Log in to the DDM console. + +#. .. _ddm_0600017__li881143742017: + + :ref:`Create a new DDM instance `. + +#. In the navigation pane on the left, choose **Backups**. + +#. Locate the required backup based on the instance name and backup time and click **Restore** in the **Operation** column. + +#. On the displayed page, configure required parameters. + + .. table:: **Table 2** Parameter description + + +--------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +==========================+=============================================================================================================================================================+ + | Backup Name | Name of the backup to be restored. | + +--------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Destination DDM Instance | Select the DDM instance created in :ref:`2 ` as the destination instance. | + +--------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Destination Data Nodes | Select the RDS for MySQL instances that have completed PITR. DDM will match the selected data nodes with shard information in the selected metadata backup. | + +--------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. Click **OK**. If a message is displayed indicating that the metadata is restored successfully, the restoration is complete. 
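The version constraints listed in Precautions (source instance 2.3.2.11 or later, destination instance 3.0.8 or later) can be verified before starting a restoration. The sketch below is illustrative only; the helper function and example version strings are assumptions, not part of any DDM API:

```python
# Illustrative sketch: checking the DDM version constraints from the
# Precautions section (source >= 2.3.2.11, destination >= 3.0.8).
# parse_version and restoration_supported are hypothetical helpers,
# not part of any DDM API.

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '2.3.2.11' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def restoration_supported(source_version: str, destination_version: str) -> bool:
    """Return True if both instances meet the minimum versions for metadata restoration."""
    return (parse_version(source_version) >= parse_version("2.3.2.11")
            and parse_version(destination_version) >= parse_version("3.0.8"))

print(restoration_supported("2.3.2.11", "3.0.8"))   # True
print(restoration_supported("2.3.2.10", "3.0.8"))   # False: source too old
```

Tuple comparison makes this robust to multi-digit components (for example, 2.3.2.11 correctly compares as newer than 2.3.2.9), which naive string comparison would get wrong.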
diff --git a/umn/source/change_history.rst b/umn/source/change_history.rst new file mode 100644 index 0000000..dfb1303 --- /dev/null +++ b/umn/source/change_history.rst @@ -0,0 +1,24 @@ +:original_name: ddm_histroy_0003.html + +.. _ddm_histroy_0003: + +Change History +============== + ++-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Released On | Description | ++===================================+=======================================================================================================================================================+ +| 2023-12-15 | Optimized the description of tags in :ref:`Tags `. | ++-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ +| 2023-10-20 | Modified the following content: | +| | | +| | Optimized the directory for monitoring management and the procedure for viewing monitoring information in :ref:`Monitoring Management `. | +| | | +| | Added parameter description in :ref:`Restoring Metadata `. | +| | | +| | Added the description of tags in :ref:`Creating a DDM Instance `. | +| | | +| | Added :ref:`Tags `. | ++-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ +| 2023-01-30 | This is the first official release. 
| ++-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/connection_management/changing_a_database_port.rst b/umn/source/connection_management/changing_a_database_port.rst new file mode 100644 index 0000000..ef33626 --- /dev/null +++ b/umn/source/connection_management/changing_a_database_port.rst @@ -0,0 +1,32 @@ +:original_name: ddm_06_0036.html + +.. _ddm_06_0036: + +Changing a Database Port +======================== + +Scenarios +--------- + +DDM allows you to change the database port of a DDM instance. After the port is changed, the instance will restart. + +Procedure +--------- + +#. Log in to the DDM console, choose **Instances** in the navigation pane, locate the instance whose database port you want to change, and click its name. + +#. In the **Connection Information** area on the **Basic Information** page, click |image1| besides **Database Port**. + + For DDM instances, the database port number ranges from 1025 to 65534 except for ports 1033, 7009, 8888, and 12017 because they are in use by DDM. The default value is **5066**. + + - Click |image2|. + + Changing the database port requires a restart of the DDM instance. To continue the change, click **Yes** in the displayed dialog box. To cancel the change, click **No**. + + - To cancel the change, click |image3|. + +#. View the results on the **Basic Information** page. + +.. |image1| image:: /_static/images/en-us_image_0000001685307214.png +.. |image2| image:: /_static/images/en-us_image_0000001733266389.png +.. 
|image3| image:: /_static/images/en-us_image_0000001733266393.png diff --git a/umn/source/connection_management/changing_the_security_group_of_a_ddm_instance.rst b/umn/source/connection_management/changing_the_security_group_of_a_ddm_instance.rst new file mode 100644 index 0000000..30388e8 --- /dev/null +++ b/umn/source/connection_management/changing_the_security_group_of_a_ddm_instance.rst @@ -0,0 +1,30 @@ +:original_name: ddm_06_0039.html + +.. _ddm_06_0039: + +Changing the Security Group of a DDM Instance +============================================= + +Scenarios +--------- + +DDM allows you to change the security group of a DDM instance. + +.. note:: + + Changing the security group may disconnect the DDM instance from its associated data nodes. + +Procedure +--------- + +#. Log in to the DDM console, choose **Instances** in the navigation pane, locate the instance whose security group you want to change, and click its name. +#. In the **Network Information** area on the **Basic Information** page, click |image1| beside the **Security Group** field. + + - Specify a new security group and click |image2|. + - To cancel the change, click |image3|. + +#. View the results on the **Basic Information** page. + +.. |image1| image:: /_static/images/en-us_image_0000001685147562.png +.. |image2| image:: /_static/images/en-us_image_0000001733146369.png +.. |image3| image:: /_static/images/en-us_image_0000001685307302.png diff --git a/umn/source/connection_management/configuring_access_control.rst b/umn/source/connection_management/configuring_access_control.rst new file mode 100644 index 0000000..05d694a --- /dev/null +++ b/umn/source/connection_management/configuring_access_control.rst @@ -0,0 +1,39 @@ +:original_name: ddm_06_0035.html + +.. _ddm_06_0035: + +Configuring Access Control +========================== + +Scenarios +--------- + +DDM supports load balancing by default, but some regions may not support it.
If an application accesses DDM using a private IP address, there are no traffic restrictions. To control access, you need to configure access control for your DDM instance. The security group is still valid for access requests directly sent to DDM nodes. + +Procedure +--------- + +#. Log in to the DDM console. +#. On the **Instances** page, locate the required instance and click its name. +#. On the displayed page, toggle on **Access Control**. + + - If the DDM instance has only one group, in the **Network Information** area, click |image1| to the right of **Access Control**. + + + .. figure:: /_static/images/en-us_image_0000001685307398.png + :alt: **Figure 1** Enabling access control for a single group + + **Figure 1** Enabling access control for a single group + + - If the DDM instance has multiple groups, the access control switch is displayed in the group list. On the **Basic Information** page, in the group list, click |image2| in the **Access Control** column. + + + .. figure:: /_static/images/en-us_image_0000001685147654.png + :alt: **Figure 2** Enabling access control for multiple groups + + **Figure 2** Enabling access control for multiple groups + +#. Click **Configure** to the right of **Access Control**. In the **Configure Access Control** dialog box, specify **Access Policy**, enter the required IP addresses, and click **OK**. + +.. |image1| image:: /_static/images/en-us_image_0000001733146461.png +.. |image2| image:: /_static/images/en-us_image_0000001685307394.png diff --git a/umn/source/connection_management/index.rst b/umn/source/connection_management/index.rst new file mode 100644 index 0000000..d89578f --- /dev/null +++ b/umn/source/connection_management/index.rst @@ -0,0 +1,18 @@ +:original_name: ddm_06_0040.html + +.. _ddm_06_0040: + +Connection Management +===================== + +- :ref:`Configuring Access Control ` +- :ref:`Changing a Database Port ` +- :ref:`Changing the Security Group of a DDM Instance ` + +..
toctree:: + :maxdepth: 1 + :hidden: + + configuring_access_control + changing_a_database_port + changing_the_security_group_of_a_ddm_instance diff --git a/umn/source/data_node_management/configuring_read_weights.rst b/umn/source/data_node_management/configuring_read_weights.rst new file mode 100644 index 0000000..c50e586 --- /dev/null +++ b/umn/source/data_node_management/configuring_read_weights.rst @@ -0,0 +1,39 @@ +:original_name: ddm_10_1002.html + +.. _ddm_10_1002: + +Configuring Read Weights +======================== + +If one DDM instance is associated with multiple data nodes, you can synchronize the read weight settings of the first data node to the other data nodes. + +Prerequisites +------------- + +You have logged in to the DDM console. + +Precautions +----------- + +A read weight ranges from 0 to 100. + +Procedure +--------- + +#. In the instance list, locate the DDM instance whose data nodes you want to configure read weights for. +#. Click the instance name to go to the **Basic Information** page. +#. In the navigation pane, choose **Data Nodes**. +#. Set read weights for associated instances. + + - Set read weights for multiple instances. + + To set read weights for multiple instances at a time, click **Configure Read Weight** on the **Data Nodes** page. + + In the displayed dialog box, click **Synchronize** to apply the read weight of the first instance to the other instances. Use **Synchronize** only when all instances should have the same read weight. Otherwise, manually configure a read weight for each instance. + + - Set a read weight for a single instance. + + To set the read weight of a single instance, locate the target instance and click **Configure Read Weight** in the **Operation** column. + +#. Click **Yes**. +#. After the read weights are configured, you can view the updated read weights on the **Data Nodes** page.
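The effect of read weights can be illustrated with a small calculation. The sketch below assumes the weight-to-traffic mapping described in Splitting Read and Write Requests (each node receives its weight divided by the sum of all weights); the function and node names are assumptions for illustration, not a DDM API:

```python
# Illustrative sketch: how read weights translate into a read-traffic split.
# Each node receives (its weight) / (sum of all weights) of the read requests.
# read_traffic_split and the node names are hypothetical, not a DDM API.

def read_traffic_split(weights: dict) -> dict:
    """Map each node's read weight (0-100) to its fraction of read traffic."""
    total = sum(weights.values())
    if total == 0:
        raise ValueError("at least one node must have a non-zero read weight")
    return {node: weight / total for node, weight in weights.items()}

# A primary instance with weight 20 and one read replica with weight 80:
split = read_traffic_split({"primary": 20, "replica-1": 80})
print(split)  # {'primary': 0.2, 'replica-1': 0.8}
```

Setting the primary's weight to 0 and the replica's to 100 would route all read traffic to the replica, matching the latency-tolerant scenario described in Splitting Read and Write Requests.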
diff --git a/umn/source/data_node_management/index.rst b/umn/source/data_node_management/index.rst new file mode 100644 index 0000000..a261568 --- /dev/null +++ b/umn/source/data_node_management/index.rst @@ -0,0 +1,22 @@ +:original_name: ddm_10_1000.html + +.. _ddm_10_1000: + +Data Node Management +==================== + +- :ref:`Overview ` +- :ref:`Synchronizing Data Node Information ` +- :ref:`Configuring Read Weights ` +- :ref:`Splitting Read and Write Requests ` +- :ref:`Reloading Table Data ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + synchronizing_data_node_information + configuring_read_weights + splitting_read_and_write_requests + reloading_table_data diff --git a/umn/source/data_node_management/overview.rst b/umn/source/data_node_management/overview.rst new file mode 100644 index 0000000..d0859da --- /dev/null +++ b/umn/source/data_node_management/overview.rst @@ -0,0 +1,12 @@ +:original_name: ddm_10_1001.html + +.. _ddm_10_1001: + +Overview +======== + +Managing data nodes means managing the RDS for MySQL or GaussDB(for MySQL) instances that are associated with your DDM instance. On the data node management page, you can view the instance status, storage, class, and read weight, configure read weights, and create read replicas. + +You can set read weights for multiple data nodes in the list at the same time. If a data node has no read replicas, you cannot set a read weight for its primary RDS instance. + +Synchronize data node changes to your DDM instance after you make changes such as adding or deleting a read replica, or changing the connection address, port number, or security group. diff --git a/umn/source/data_node_management/reloading_table_data.rst b/umn/source/data_node_management/reloading_table_data.rst new file mode 100644 index 0000000..1495cca --- /dev/null +++ b/umn/source/data_node_management/reloading_table_data.rst @@ -0,0 +1,22 @@ +:original_name: ddm_03_0059.html + +..
_ddm_03_0059: + +Reloading Table Data +==================== + +Prerequisites +------------- + +You have logged in to the DDM console. + +Scenarios +--------- + +If you want to deploy a DDM instance across regions for disaster recovery (DR), use DRS to migrate service data and then reload table data after the migration is complete so that DDM can detect where logical table information is stored. + +Procedure +--------- + +#. Choose **Instances** in the left navigation pane. In the instance list, locate the instance whose information has changed and click its name. +#. Choose **More** > **Reload Table Data** in the **Operation** column. diff --git a/umn/source/data_node_management/splitting_read_and_write_requests.rst b/umn/source/data_node_management/splitting_read_and_write_requests.rst new file mode 100644 index 0000000..794a895 --- /dev/null +++ b/umn/source/data_node_management/splitting_read_and_write_requests.rst @@ -0,0 +1,40 @@ +:original_name: ddm_06_0012.html + +.. _ddm_06_0012: + +Splitting Read and Write Requests +================================= + +Read/write splitting offloads read requests from primary instances to read replicas on a data node at a preset ratio, improving the processing of read/write transactions. This function is transparent to applications, so you do not need to modify service code. Configure read weights of primary instances and their read replicas on the DDM console: read traffic will be distributed at the preset ratio, and write traffic will be forwarded to the primary instances by default. The ratio is generally based on service requirements and the loads of associated data nodes. + +Data is asynchronously replicated from the primary instance to read replicas, with a delay between them in milliseconds.
If read requests can tolerate sub-second latency and involve costly queries that may impact read/write transactions, set the weights of the primary instance and its read replicas to 0 and 100, respectively, so that all read requests are distributed to the read replicas. In other scenarios, adjust the ratio based on service requirements. + +Precautions +----------- + +- SELECT statements that contain hints or that modify data in transactions are all executed by the primary instances. +- If the associated primary instance becomes faulty and parameter **Seconds_Behind_Master** on its read replicas is set to **NULL**, read-only requests are still forwarded to the primary instance. Recover the faulty instance as soon as possible. + +Prerequisites +------------- + +- You have created a DDM instance and a data node with read replicas. +- You have created a schema. + +Procedure +--------- + +#. On the **Instances** page, locate the required DDM instance and click its name. +#. In the navigation pane, choose **Data Nodes**. +#. On the displayed page, locate the target instance and click **Configure Read Weight** in the **Operation** column. The read weight can be 0 to 100. + + - If you create a read replica for the associated instance, the read replica will handle all separated read requests by default. To re-assign read/write requests, you can configure read weights of the associated instance and its read replica.
+ + - After the read weights are configured, the primary instance and its read replica will handle read requests according to the following formulas: + + - Primary instance: Read weight of primary instance/Total read weights of primary instance and read replica + - Read replica: Read weight of read replica/Total read weights of primary instance and read replica + + For example, if an RDS for MySQL instance contains one primary instance and one read replica, and the read weights of the primary instance and its read replica are 20 and 80 respectively, they will process read requests in the ratio of 1:4. In other words, the primary instance processes 1/5 of the read requests and the read replica processes 4/5. Write requests are automatically routed to the primary instance. + +#. After the read weights are configured successfully, you can view the weights on the **Data Nodes** page. diff --git a/umn/source/data_node_management/synchronizing_data_node_information.rst b/umn/source/data_node_management/synchronizing_data_node_information.rst new file mode 100644 index 0000000..a084237 --- /dev/null +++ b/umn/source/data_node_management/synchronizing_data_node_information.rst @@ -0,0 +1,23 @@ +:original_name: ddm_10_1003.html + +.. _ddm_10_1003: + +Synchronizing Data Node Information +=================================== + +Prerequisites +------------- + +You have logged in to the DDM console. + +Scenarios +--------- + +After you change your data nodes, for example, by adding or deleting a read replica, or changing the connection address, port number, or security group, you need to click **Synchronize Data Node Information** to synchronize the change to your DDM instance. + +Procedure +--------- + +#. In the instance list, locate the DDM instance whose data node changes you want to synchronize. +#. Choose **Data Nodes** in the left navigation pane and click **Synchronize Data Node Information**. +#.
Wait until a message is displayed, indicating that the request to synchronize data node information has been submitted. diff --git a/umn/source/faqs/connection_management/how_can_i_handle_garbled_characters_generated_when_i_connect_a_mysql_instance_to_a_ddm_instance.rst b/umn/source/faqs/connection_management/how_can_i_handle_garbled_characters_generated_when_i_connect_a_mysql_instance_to_a_ddm_instance.rst new file mode 100644 index 0000000..6722795 --- /dev/null +++ b/umn/source/faqs/connection_management/how_can_i_handle_garbled_characters_generated_when_i_connect_a_mysql_instance_to_a_ddm_instance.rst @@ -0,0 +1,16 @@ +:original_name: ddm_04_0006.html + +.. _ddm_04_0006: + +How Can I Handle Garbled Characters Generated When I Connect a MySQL Instance to a DDM Instance? +================================================================================================ + +If the character set used for the MySQL connection is inconsistent with the actual data encoding, garbled characters may be displayed when DDM parses the data. + +In this case, configure **default-character-set=utf8** to specify the character set for the connection. + +Example: + +.. code-block:: + + mysql -h 127.0.0.1 -P 5066 -D database --default-character-set=utf8 -u ddmuser diff --git a/umn/source/faqs/connection_management/index.rst b/umn/source/faqs/connection_management/index.rst new file mode 100644 index 0000000..2fb1700 --- /dev/null +++ b/umn/source/faqs/connection_management/index.rst @@ -0,0 +1,14 @@ +:original_name: ddm_04_0097.html + +.. _ddm_04_0097: + +Connection Management +===================== + +- :ref:`How Can I Handle Garbled Characters Generated When I Connect a MySQL Instance to a DDM Instance? ` + +..
toctree:: + :maxdepth: 1 + :hidden: + + how_can_i_handle_garbled_characters_generated_when_i_connect_a_mysql_instance_to_a_ddm_instance diff --git a/umn/source/faqs/ddm_usage/can_i_manually_delete_databases_and_accounts_remained_in_data_nodes_after_a_schema_is_deleted.rst b/umn/source/faqs/ddm_usage/can_i_manually_delete_databases_and_accounts_remained_in_data_nodes_after_a_schema_is_deleted.rst new file mode 100644 index 0000000..85dbd20 --- /dev/null +++ b/umn/source/faqs/ddm_usage/can_i_manually_delete_databases_and_accounts_remained_in_data_nodes_after_a_schema_is_deleted.rst @@ -0,0 +1,8 @@ +:original_name: ddm_04_0051.html + +.. _ddm_04_0051: + +Can I Manually Delete Databases and Accounts Remained in Data Nodes After a Schema Is Deleted? +============================================================================================== + +Yes. If you no longer need the databases or accounts, you can manually delete them to free up space. diff --git a/umn/source/faqs/ddm_usage/index.rst b/umn/source/faqs/ddm_usage/index.rst new file mode 100644 index 0000000..d7f4d75 --- /dev/null +++ b/umn/source/faqs/ddm_usage/index.rst @@ -0,0 +1,28 @@ +:original_name: ddm_04_0004.html + +.. _ddm_04_0004: + +DDM Usage +========= + +- :ref:`What Do I Do If I Fail to Connect to a DDM Instance Using the JDBC Driver? ` +- :ref:`What Version and Parameters Should I Select? ` +- :ref:`Why It Takes So Long Time to Export Data from MySQL Using mysqldump? ` +- :ref:`What Do I Do If a Duplicate Primary Key Error Occurs When Data Is Imported into DDM? ` +- :ref:`What Should I Do If an Error Message Is Returned When I Specify an Auto-Increment Primary Key During Migration? ` +- :ref:`What Do I Do If an Error Is Reported When Parameter Configuration Does Not Time Out? ` +- :ref:`Which Should I Delete First, a Schema or its Associated RDS Instances? ` +- :ref:`Can I Manually Delete Databases and Accounts Remained in Data Nodes After a Schema Is Deleted? ` + +..
toctree:: + :maxdepth: 1 + :hidden: + + what_do_i_do_if_i_fail_to_connect_to_a_ddm_instance_using_the_jdbc_driver + what_version_and_parameters_should_i_select + why_it_takes_so_long_time_to_export_data_from_mysql_using_mysqldump + what_do_i_do_if_a_duplicate_primary_key_error_occurs_when_data_is_imported_into_ddm + what_should_i_do_if_an_error_message_is_returned_when_i_specify_an_auto-increment_primary_key_during_migration + what_do_i_do_if_an_error_is_reported_when_parameter_configuration_does_not_time_out + which_should_i_delete_first_a_schema_or_its_associated_rds_instances + can_i_manually_delete_databases_and_accounts_remained_in_data_nodes_after_a_schema_is_deleted diff --git a/umn/source/faqs/ddm_usage/what_do_i_do_if_a_duplicate_primary_key_error_occurs_when_data_is_imported_into_ddm.rst b/umn/source/faqs/ddm_usage/what_do_i_do_if_a_duplicate_primary_key_error_occurs_when_data_is_imported_into_ddm.rst new file mode 100644 index 0000000..f25bb75 --- /dev/null +++ b/umn/source/faqs/ddm_usage/what_do_i_do_if_a_duplicate_primary_key_error_occurs_when_data_is_imported_into_ddm.rst @@ -0,0 +1,8 @@ +:original_name: ddm_04_0014.html + +.. _ddm_04_0014: + +What Do I Do If a Duplicate Primary Key Error Occurs When Data Is Imported into DDM? +==================================================================================== + +When you create a table in DDM, set the start value for automatic increment and ensure that the start value is greater than the maximum auto-increment value of imported data. 
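The guidance above — pick an AUTO_INCREMENT start value strictly greater than the largest imported key — can be sketched as a tiny helper. This is a minimal Python illustration, not DDM syntax; the function name, table name, and `headroom` default are hypothetical:

```python
def auto_increment_start(max_imported_key: int, headroom: int = 1) -> int:
    """Return a start value strictly greater than the largest imported key."""
    return max_imported_key + headroom

# If the imported data uses auto-increment keys up to 100000, start new keys
# at 100001 so generated values can never collide with imported ones.
start = auto_increment_start(100000)
ddl = f"CREATE TABLE t (id BIGINT PRIMARY KEY AUTO_INCREMENT) AUTO_INCREMENT = {start}"
print(ddl)
```

Leaving extra headroom (a larger `headroom`) is a simple safeguard when more rows may be imported before the table goes live.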
diff --git a/umn/source/faqs/ddm_usage/what_do_i_do_if_an_error_is_reported_when_parameter_configuration_does_not_time_out.rst b/umn/source/faqs/ddm_usage/what_do_i_do_if_an_error_is_reported_when_parameter_configuration_does_not_time_out.rst new file mode 100644 index 0000000..5ac6e90 --- /dev/null +++ b/umn/source/faqs/ddm_usage/what_do_i_do_if_an_error_is_reported_when_parameter_configuration_does_not_time_out.rst @@ -0,0 +1,8 @@ +:original_name: ddm_04_0031.html + +.. _ddm_04_0031: + +What Do I Do If an Error Is Reported When Parameter Configuration Does Not Time Out? +==================================================================================== + +Adjust the **SocketTimeOut** value or leave this parameter blank. The default value is **0**, indicating that the client never disconnects due to a timeout. diff --git a/umn/source/faqs/ddm_usage/what_do_i_do_if_i_fail_to_connect_to_a_ddm_instance_using_the_jdbc_driver.rst b/umn/source/faqs/ddm_usage/what_do_i_do_if_i_fail_to_connect_to_a_ddm_instance_using_the_jdbc_driver.rst new file mode 100644 index 0000000..ba4684e --- /dev/null +++ b/umn/source/faqs/ddm_usage/what_do_i_do_if_i_fail_to_connect_to_a_ddm_instance_using_the_jdbc_driver.rst @@ -0,0 +1,65 @@ +:original_name: ddm_04_0008.html + +.. _ddm_04_0008: + +What Do I Do If I Fail to Connect to a DDM Instance Using the JDBC Driver? +========================================================================== + +When you access a DDM instance using the MySQL driver (JDBC) in load balancing mode, an infinite loop may occur during connection switchover, resulting in stack overflow. + +Fault Locating +-------------- + +#. Query the application logs and locate the fault cause. + + For example, the following logs show that the fault is caused by stack overflow. + + .. 
code-block:: + + Caused by: java.lang.StackOverflowError + at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) + at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) + at java.nio.charset.CharsetEncoder.encode(CharsetEncoder.java:795) + at java.nio.charset.Charset.encode(Charset.java:843) + at com.mysql.jdbc.StringUtils.getBytes(StringUtils.java:2362) + at com.mysql.jdbc.StringUtils.getBytes(StringUtils.java:2344) + at com.mysql.jdbc.StringUtils.getBytes(StringUtils.java:568) + at com.mysql.jdbc.StringUtils.getBytes(StringUtils.java:626) + at com.mysql.jdbc.Buffer.writeStringNoNull(Buffer.java:670) + at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2636) + +#. Analyze the overflow source. + + For example, the following logs show that the overflow results from an infinite loop inside the driver. + + .. code-block:: + + at com.mysql.jdbc.LoadBalancedConnectionProxy.pickNewConnection(LoadBalancedConnectionProxy.java:344) + at com.mysql.jdbc.LoadBalancedAutoCommitInterceptor.postProcess(LoadBalancedAutoCommitInterceptor.java:104) + at com.mysql.jdbc.MysqlIO.invokeStatementInterceptorsPost(MysqlIO.java:2885) + at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2808) + at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2483) + at com.mysql.jdbc.ConnectionImpl.setReadOnlyInternal(ConnectionImpl.java:4961) + at com.mysql.jdbc.ConnectionImpl.setReadOnly(ConnectionImpl.java:4954) + at com.mysql.jdbc.MultiHostConnectionProxy.syncSessionState(MultiHostConnectionProxy.java:381) + at com.mysql.jdbc.MultiHostConnectionProxy.syncSessionState(MultiHostConnectionProxy.java:366) + at com.mysql.jdbc.LoadBalancedConnectionProxy.pickNewConnection(LoadBalancedConnectionProxy.java:344) + +#. Check the version of the MySQL JDBC driver, which is 5.1.44 in this example. + + According to the source code of this version, when a connection is obtained, **LoadBalance** updates the connection based on the load balancing policy and copies the configurations of the old connection to the new connection. 
If **AutoCommit** is **true** for the new connection, parameters of the new connection are inconsistent with those of the old connection, and **loadBalanceAutoCommitStatementThreshold** is not configured, an infinite loop occurs. The connection update function calls the parameter synchronization function, and the parameter synchronization function calls the connection update function at the same time, resulting in stack overflow. + +Solution +-------- + +Add the **loadBalanceAutoCommitStatementThreshold=5&retriesAllDown=10** parameter to the URL for connecting to the DDM instance. + +.. code-block:: + + //Connection example when load balancing is used + //jdbc:mysql:loadbalance://ip1:port1,ip2:port2..ipN:portN/{db_name} + String url = "jdbc:mysql:loadbalance://192.168.0.200:5066,192.168.0.201:5066/db_5133?loadBalanceAutoCommitStatementThreshold=5&retriesAllDown=10"; + +- **loadBalanceAutoCommitStatementThreshold** indicates the number of statements executed before a reconnection. + + If **loadBalanceAutoCommitStatementThreshold** is set to **5**, a reconnection is initiated after five SQL statements (queries or updates) are executed. A value of **0** indicates a sticky connection, and no reconnection is required. When automatic submission is disabled (**autocommit** is set to **false**), the system waits for the transaction to complete and then determines whether to initiate a reconnection. diff --git a/umn/source/faqs/ddm_usage/what_should_i_do_if_an_error_message_is_returned_when_i_specify_an_auto-increment_primary_key_during_migration.rst b/umn/source/faqs/ddm_usage/what_should_i_do_if_an_error_message_is_returned_when_i_specify_an_auto-increment_primary_key_during_migration.rst new file mode 100644 index 0000000..4f1476d --- /dev/null +++ b/umn/source/faqs/ddm_usage/what_should_i_do_if_an_error_message_is_returned_when_i_specify_an_auto-increment_primary_key_during_migration.rst @@ -0,0 +1,12 @@ +:original_name: ddm_04_0035.html + +.. 
_ddm_04_0035: + +What Should I Do If an Error Message Is Returned When I Specify an Auto-Increment Primary Key During Migration? +=============================================================================================================== + +Execute the following SQL statement to modify the start value of the auto-increment primary key so that the value is greater than the maximum value of primary keys in existing tables: + +.. code-block:: + + ALTER SEQUENCE <schema_name>.<sequence_name> START WITH <new_start_value> diff --git a/umn/source/faqs/ddm_usage/what_version_and_parameters_should_i_select.rst b/umn/source/faqs/ddm_usage/what_version_and_parameters_should_i_select.rst new file mode 100644 index 0000000..edb74df --- /dev/null +++ b/umn/source/faqs/ddm_usage/what_version_and_parameters_should_i_select.rst @@ -0,0 +1,49 @@ +:original_name: ddm_04_0009.html + +.. _ddm_04_0009: + +What Version and Parameters Should I Select? +============================================ + +Currently, you cannot connect to DDM using JDBC driver 5.1.46. Versions 5.1.35 to 5.1.45 are recommended. + +JDBC driver download address: `https://dev.mysql.com/doc/index-connectors.html <https://dev.mysql.com/doc/index-connectors.html>`__ + +:ref:`Table 1 <ddm_04_0009__table127264441235>` describes the recommended parameters for the JDBC URL. + +.. _ddm_04_0009__table127264441235: + +.. 
table:: **Table 1** Parameters + + +-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------+ + | Parameter | Description | Recommended Value | + +=========================================+======================================================================================================================================================================================+=================================================================================+ + | ip:port | Indicates the connection address and port number for connecting to the DDM instance. | Query the connection address on the DDM instance details page. | + +-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------+ + | db_name | Indicates the name of a schema. | Query the schema name on the **Schemas** page of the DDM instance details page. | + +-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------+ + | loadBalanceAutoCommitStatementThreshold | Indicates the number of statements executed before a reconnection. | 5 | + | | | | + | | - If the parameter value is set to **5**, after five SQL statements (queries or updates) are executed, a reconnection is initiated. | | + | | - A value of **0** indicates a sticky connection, and no reconnection is required. 
| | + | | | | + | | When automatic submission is disabled (**autocommit** is set to **false**), the system waits for the transaction to complete and then determines whether to initiate a reconnection. | | + +-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------+ + | loadBalanceHostRemovalGracePeriod | Sets the grace period for removing a host from the load balancing connection. | 15000 | + +-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------+ + | loadBalanceBlacklistTimeout | Sets the time for retaining a service in the global blacklist. | 60000 | + +-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------+ + | loadBalancePingTimeout | Indicates the time (unit: ms) for waiting for the ping response of each load balancing connection. | 5000 | + +-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------+ + | retriesAllDown | Indicates the maximum number of polling retries when all connection addresses fail. 
| 10 | + | | | | + | | If the threshold for retries has been reached but no valid address can be obtained, "SQLException" will be displayed. | | + +-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------+ + | connectTimeout | Specifies the timeout interval for establishing a socket connection with a database server. | 10000 | + | | | | + | | Unit: ms. A value of **0** indicates that connection establishment never times out. This parameter setting is used for JDK 1.4 or later versions. | | + +-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------+ + | socketTimeout | Specifies the timeout interval for a socket operation (read and write). | Set this parameter based on your service requirements. | + | | | | + | | Unit: ms. A value of **0** indicates that a socket operation never times out. 
| | + +-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------+ diff --git a/umn/source/faqs/ddm_usage/which_should_i_delete_first_a_schema_or_its_associated_rds_instances.rst b/umn/source/faqs/ddm_usage/which_should_i_delete_first_a_schema_or_its_associated_rds_instances.rst new file mode 100644 index 0000000..b59760d --- /dev/null +++ b/umn/source/faqs/ddm_usage/which_should_i_delete_first_a_schema_or_its_associated_rds_instances.rst @@ -0,0 +1,8 @@ +:original_name: ddm_04_0050.html + +.. _ddm_04_0050: + +Which Should I Delete First, a Schema or its Associated RDS Instances? +====================================================================== + +After an RDS instance is associated with your schema, you cannot delete the instance directly. To delete it, you have to delete the schema first and then delete the instance. diff --git a/umn/source/faqs/ddm_usage/why_it_takes_so_long_time_to_export_data_from_mysql_using_mysqldump.rst b/umn/source/faqs/ddm_usage/why_it_takes_so_long_time_to_export_data_from_mysql_using_mysqldump.rst new file mode 100644 index 0000000..cb04b9f --- /dev/null +++ b/umn/source/faqs/ddm_usage/why_it_takes_so_long_time_to_export_data_from_mysql_using_mysqldump.rst @@ -0,0 +1,10 @@ +:original_name: ddm_04_0013.html + +.. _ddm_04_0013: + +Why It Takes So Long Time to Export Data from MySQL Using mysqldump? +==================================================================== + +The version of the mysqldump client may be inconsistent with that of the supported MySQL server, so exporting data from MySQL is slow. + +Using the same version of the mysqldump client and MySQL server is recommended. 
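The recommended values in Table 1 above can be combined into a single load-balanced JDBC URL. The following is a minimal Python sketch of that assembly; the host addresses and schema name are placeholders, not real endpoints:

```python
# Recommended parameter values from Table 1.
PARAMS = {
    "loadBalanceAutoCommitStatementThreshold": 5,
    "loadBalanceHostRemovalGracePeriod": 15000,
    "loadBalanceBlacklistTimeout": 60000,
    "loadBalancePingTimeout": 5000,
    "retriesAllDown": 10,
    "connectTimeout": 10000,
}

def build_loadbalance_url(hosts, db_name, options):
    """Assemble a jdbc:mysql:loadbalance URL from "ip:port" strings and options."""
    query = "&".join("{}={}".format(k, v) for k, v in options.items())
    return "jdbc:mysql:loadbalance://{}/{}?{}".format(",".join(hosts), db_name, query)

# Placeholder addresses; replace them with your DDM instance's connection addresses.
url = build_loadbalance_url(["192.168.0.200:5066", "192.168.0.201:5066"], "db_5133", PARAMS)
```

As the table notes, setting **loadBalanceAutoCommitStatementThreshold** to **0** instead would make the connection sticky, with no periodic reconnection.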
diff --git a/umn/source/faqs/general_questions/can_data_nodes_associated_with_a_ddm_instance_share_data.rst b/umn/source/faqs/general_questions/can_data_nodes_associated_with_a_ddm_instance_share_data.rst new file mode 100644 index 0000000..26289de --- /dev/null +++ b/umn/source/faqs/general_questions/can_data_nodes_associated_with_a_ddm_instance_share_data.rst @@ -0,0 +1,8 @@ +:original_name: ddm_12_0011.html + +.. _ddm_12_0011: + +Can Data Nodes Associated with a DDM Instance Share Data? +========================================================= + +No. Different data nodes associated with a DDM instance are independent of each other and cannot share data. diff --git a/umn/source/faqs/general_questions/how_do_i_select_and_configure_a_security_group.rst b/umn/source/faqs/general_questions/how_do_i_select_and_configure_a_security_group.rst new file mode 100644 index 0000000..ef72e74 --- /dev/null +++ b/umn/source/faqs/general_questions/how_do_i_select_and_configure_a_security_group.rst @@ -0,0 +1,59 @@ +:original_name: ddm_04_0067.html + +.. _ddm_04_0067: + +How Do I Select and Configure a Security Group? +=============================================== + +DDM uses VPCs and security groups to ensure security of your instances. The following provides guidance for you on how to correctly configure a security group. + +Intra-VPC Access to DDM Instances +--------------------------------- + +Access to a DDM instance includes access to the DDM instance from the ECS where a client is located and access to its associated data nodes. + +The ECS, DDM instance, and data nodes must be in the same VPC. In addition, correct rules should be configured for their security groups to allow network access. + +#. Using the same security group is recommended for the ECS, DDM instance, and data nodes. After a security group is created, network access in the group is not restricted by default. + +#. 
If different security groups are configured, refer to the following configurations: + + .. note:: + + - Assume that the ECS, DDM instance, and RDS for MySQL instance are configured with security groups **sg-ECS**, **sg-DDM**, and **sg-RDS**, respectively. + - Assume that the service port of the DDM instance is **5066** and that of the RDS for MySQL instance is **3306**. + - The remote end should be a security group or an IP address. + + Add the rules described in :ref:`Figure 1 <ddm_04_0067__fig153211250183316>` to the security group of the ECS to ensure that your client can access the DDM instance. + + .. _ddm_04_0067__fig153211250183316: + + .. figure:: /_static/images/en-us_image_0000001685147478.png + :alt: **Figure 1** ECS security group rules + + **Figure 1** ECS security group rules + + Add the rules in :ref:`Figure 2 <ddm_04_0067__fig09669136435>` and :ref:`Figure 3 <ddm_04_0067__fig14207437194314>` to the security group of your DDM instance so that the instance can access associated data nodes and can be accessed by your client. + + .. _ddm_04_0067__fig09669136435: + + .. figure:: /_static/images/en-us_image_0000001733266413.png + :alt: **Figure 2** Configuring security group inbound rules for your DDM instance + + **Figure 2** Configuring security group inbound rules for your DDM instance + + .. _ddm_04_0067__fig14207437194314: + + .. figure:: /_static/images/en-us_image_0000001733146301.png + :alt: **Figure 3** Configuring security group outbound rules for your DDM instance + + **Figure 3** Configuring security group outbound rules for your DDM instance + + Add the rules in :ref:`Figure 4 <ddm_04_0067__fig11248191010442>` to the security group of the data node so that your DDM instance can access the node. + + .. _ddm_04_0067__fig11248191010442: + + .. 
figure:: /_static/images/en-us_image_0000001733266417.png + :alt: **Figure 4** Configuring security group rules of the RDS instance + + **Figure 4** Configuring security group rules of the RDS instance diff --git a/umn/source/faqs/general_questions/index.rst b/umn/source/faqs/general_questions/index.rst new file mode 100644 index 0000000..f3f954b --- /dev/null +++ b/umn/source/faqs/general_questions/index.rst @@ -0,0 +1,20 @@ +:original_name: ddm_04_0066.html + +.. _ddm_04_0066: + +General Questions +================= + +- :ref:`What High-Reliability Mechanisms Does DDM Provide? ` +- :ref:`How Do I Select and Configure a Security Group? ` +- :ref:`Can Data Nodes Associated with a DDM Instance Share Data? ` +- :ref:`What Data Nodes Can Be Associated with a DDM Instance? ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + what_high-reliability_mechanisms_does_ddm_provide + how_do_i_select_and_configure_a_security_group + can_data_nodes_associated_with_a_ddm_instance_share_data + what_data_nodes_can_be_associated_with_a_ddm_instance diff --git a/umn/source/faqs/general_questions/what_data_nodes_can_be_associated_with_a_ddm_instance.rst b/umn/source/faqs/general_questions/what_data_nodes_can_be_associated_with_a_ddm_instance.rst new file mode 100644 index 0000000..697e2d0 --- /dev/null +++ b/umn/source/faqs/general_questions/what_data_nodes_can_be_associated_with_a_ddm_instance.rst @@ -0,0 +1,12 @@ +:original_name: ddm_12_0012.html + +.. _ddm_12_0012: + +What Data Nodes Can Be Associated with a DDM Instance? 
+====================================================== + +Any data nodes can be associated with a DDM instance as long as they are: + +- Running normally +- In the same VPC as the DDM instance +- Not in use by other DDM instances diff --git a/umn/source/faqs/general_questions/what_high-reliability_mechanisms_does_ddm_provide.rst b/umn/source/faqs/general_questions/what_high-reliability_mechanisms_does_ddm_provide.rst new file mode 100644 index 0000000..f0cb407 --- /dev/null +++ b/umn/source/faqs/general_questions/what_high-reliability_mechanisms_does_ddm_provide.rst @@ -0,0 +1,22 @@ +:original_name: ddm_04_0068.html + +.. _ddm_04_0068: + +What High-Reliability Mechanisms Does DDM Provide? +================================================== + +Protection of Data Integrity +---------------------------- + +DDM instance faults do not affect data integrity. + +- Service data is stored in shards of data nodes, but not on DDM. +- Configuration information of schemas and logical tables is stored in DDM databases. Primary and standby DDM databases are highly available. + +High Availability +----------------- + +DDM is deployed using multiple stateless nodes in cluster mode and provides services through the IP address bound to your load balancer. + +- If one DDM node becomes faulty, an error is returned for connections established on the node, without affecting the DDM cluster. The faulty node is generally deleted from the cluster within 5 seconds. +- If a data node becomes faulty, services can be restored within 30 seconds after the data node is recovered. diff --git a/umn/source/faqs/index.rst b/umn/source/faqs/index.rst new file mode 100644 index 0000000..453b077 --- /dev/null +++ b/umn/source/faqs/index.rst @@ -0,0 +1,22 @@ +:original_name: ddm_04_0001.html + +.. _ddm_04_0001: + +FAQs +==== + +- :ref:`General Questions ` +- :ref:`DDM Usage ` +- :ref:`SQL Syntax ` +- :ref:`RDS-related Questions ` +- :ref:`Connection Management ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + general_questions/index + ddm_usage/index + sql_syntax/index + rds-related_questions/index + connection_management/index diff --git a/umn/source/faqs/rds-related_questions/how_can_i_query_rds_for_mysql_information_by_running_command_show_full_innodb_status.rst b/umn/source/faqs/rds-related_questions/how_can_i_query_rds_for_mysql_information_by_running_command_show_full_innodb_status.rst new file mode 100644 index 0000000..65c74a6 --- /dev/null +++ b/umn/source/faqs/rds-related_questions/how_can_i_query_rds_for_mysql_information_by_running_command_show_full_innodb_status.rst @@ -0,0 +1,12 @@ +:original_name: ddm_04_0029.html + +.. _ddm_04_0029: + +How Can I Query RDS for MySQL Information by Running Command **show full innodb status**? +========================================================================================= + +After you connect to a DDM instance through the MySQL client, you can run command **show full innodb status** to query information about the associated RDS for MySQL instances. The following information can be queried: + +- Current time and duration since the last output. +- Status of the master thread. +- SEMAPHORES including event counts and available waiting threads when there is high-concurrency workload. You can use the information to locate performance bottlenecks if any. diff --git a/umn/source/faqs/rds-related_questions/how_do_i_handle_data_with_duplicate_primary_keys_in_a_table.rst b/umn/source/faqs/rds-related_questions/how_do_i_handle_data_with_duplicate_primary_keys_in_a_table.rst new file mode 100644 index 0000000..60ba15e --- /dev/null +++ b/umn/source/faqs/rds-related_questions/how_do_i_handle_data_with_duplicate_primary_keys_in_a_table.rst @@ -0,0 +1,35 @@ +:original_name: ddm_04_0028.html + +.. _ddm_04_0028: + +How Do I Handle Data with Duplicate Primary Keys in a Table? 
+============================================================ + +Scenario +-------- + +If a primary key in your DDM instance has already reached the boundary value of its data type, a duplicate primary key error will be reported when you insert a record whose key exceeds that data range, because the out-of-range value is truncated to the same boundary value. + +Procedure +--------- + +#. Log in to the RDS console. +#. On the **Instances** page, locate the RDS for MySQL instance associated with your DDM instance and click the name of the RDS instance. +#. On the **Basic Information** page, choose **Parameters** in the left pane. +#. Click the **Parameters** tab and enter **sql_mode** in the text box. Then click the expand button in the **Value** column, select **STRICT_ALL_TABLES** or **STRICT_TRANS_TABLES**, and click **Save**. + + .. note:: + + **STRICT_ALL_TABLES** and **STRICT_TRANS_TABLES** are both strict modes. The strict mode controls how MySQL handles invalid or missing values. + + - An invalid value might have the wrong data type for the column, or might be out of range. + + - A value is missing when a new row to be inserted does not contain a value for a non-NULL column that has no explicit DEFAULT clause in its definition. + + - If the DDM instance version is earlier than 2.4.1.3, do not set **sql_mode** to **ANSI_QUOTES**. If you set it to **ANSI_QUOTES**, double quotation marks around a string will be treated as identifier quotes during SQL statement execution, making the string invalid. + + For example, **logic** in **select \* from test where tb = "logic"** cannot be parsed correctly. + + For more information about SQL modes, see `Server SQL Modes `__. + +#. On the **Instances** page, restart the DDM instance. diff --git a/umn/source/faqs/rds-related_questions/index.rst b/umn/source/faqs/rds-related_questions/index.rst new file mode 100644 index 0000000..efbb170 --- /dev/null +++ b/umn/source/faqs/rds-related_questions/index.rst @@ -0,0 +1,20 @@ +:original_name: ddm_04_0047.html + +.. 
_ddm_04_0047: + +RDS-related Questions +===================== + +- :ref:`Is the Name of a Database Table Case-Sensitive? ` +- :ref:`What Risky Operations on RDS for MySQL Will Affect DDM? ` +- :ref:`How Do I Handle Data with Duplicate Primary Keys in a Table? ` +- :ref:`How Can I Query RDS for MySQL Information by Running Command show full innodb status? ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + is_the_name_of_a_database_table_case-sensitive + what_risky_operations_on_rds_for_mysql_will_affect_ddm + how_do_i_handle_data_with_duplicate_primary_keys_in_a_table + how_can_i_query_rds_for_mysql_information_by_running_command_show_full_innodb_status diff --git a/umn/source/faqs/rds-related_questions/is_the_name_of_a_database_table_case-sensitive.rst b/umn/source/faqs/rds-related_questions/is_the_name_of_a_database_table_case-sensitive.rst new file mode 100644 index 0000000..5b5bc37 --- /dev/null +++ b/umn/source/faqs/rds-related_questions/is_the_name_of_a_database_table_case-sensitive.rst @@ -0,0 +1,8 @@ +:original_name: ddm_04_0048.html + +.. _ddm_04_0048: + +Is the Name of a Database Table Case-Sensitive? +=============================================== + +DDM is case-insensitive to database names, table names, and column names by default. diff --git a/umn/source/faqs/rds-related_questions/what_risky_operations_on_rds_for_mysql_will_affect_ddm.rst b/umn/source/faqs/rds-related_questions/what_risky_operations_on_rds_for_mysql_will_affect_ddm.rst new file mode 100644 index 0000000..3a0c1fa --- /dev/null +++ b/umn/source/faqs/rds-related_questions/what_risky_operations_on_rds_for_mysql_will_affect_ddm.rst @@ -0,0 +1,52 @@ +:original_name: ddm_04_0025.html + +.. _ddm_04_0025: + +What Risky Operations on RDS for MySQL Will Affect DDM? +======================================================= + +:ref:`Table 1 ` lists risky operations on RDS for MySQL. + +.. _ddm_04_0025__table601177597: + +.. 
table:: **Table 1** Risky operations on RDS for MySQL + + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation Type | Operation | Impact of the Operation | + +============================================+========================================================================+==================================================================================================================================================================================================+ + | Operations on the RDS for MySQL console | Deleting an RDS for MySQL instance | After an RDS for MySQL instance is deleted, all schemas and logical tables of the DDM instance associated with the RDS instance become unavailable. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Stopping an RDS for MySQL instance | After an RDS for MySQL instance is stopped, all schemas and logical tables of the DDM instance associated with the RDS instance become unavailable. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Performing the primary/standby switchover of an RDS for MySQL instance | RDS for MySQL may be intermittently interrupted during the primary/standby switchover. 
In addition, a small amount of data may be lost in case of long delay in primary/standby synchronization. | + | | | | + | | | - Creating schemas or logical tables is not allowed on DDM during the primary/standby switchover of the RDS for MySQL instance. | + | | | - After a primary/standby switchover of an RDS for MySQL instance, the RDS instance ID remains unchanged in DDM. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Restarting an RDS for MySQL instance | The restart of an RDS for MySQL instance makes itself unavailable and will also affect the associated DDM instance. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Resetting a password | After the password of an RDS for MySQL instance is reset, enter the new password on the **DB Instance Connection** page when creating a DDM schema. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Modifying a parameter template | The following parameters are set to fixed values. If their values are modified, DDM will not function properly. | + | | | | + | | | - **lower_case_table_names**: Set this parameter to **1**, indicating that data table names and sequence names are case-insensitive. 
| + | | | - **local_infile**: Set this parameter to **ON** in scale-out scenarios. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Modifying a security group | Your DDM instance cannot connect to associated RDS for MySQL instances. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Modifying a VPC | The DDM instance and RDS for MySQL instance cannot communicate with each other if they are in different VPCs. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Restoring data | Restoring data may damage data integrity. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operations through an RDS for MySQL client | Deleting a physical database created on DDM | After a physical database is deleted, the original data will be lost and new data cannot be written into the database. 
| + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Deleting an account created on DDM | After an account is deleted, logical tables cannot be created on DDM. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Deleting a physical table created on DDM | After a physical table is deleted, data stored on DDM will be lost. The corresponding logical table becomes unavailable on DDM. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Modifying the name of a physical table created on DDM | DDM cannot obtain data of the corresponding logical table, and the logical table becomes unavailable on DDM. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Changing a record | Changing a record in a broadcast table will affect the data consistency of shards. 
| + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Modifying a whitelist | A DDM instance cannot access the RDS for MySQL instance if it is not in the RDS instance whitelist. | + +--------------------------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/faqs/sql_syntax/does_ddm_support_distributed_joins.rst b/umn/source/faqs/sql_syntax/does_ddm_support_distributed_joins.rst new file mode 100644 index 0000000..f937e8d --- /dev/null +++ b/umn/source/faqs/sql_syntax/does_ddm_support_distributed_joins.rst @@ -0,0 +1,12 @@ +:original_name: ddm_04_0015.html + +.. _ddm_04_0015: + +Does DDM Support Distributed JOINs? +=================================== + +Yes. DDM supports distributed JOINs. + +- Redundant fields are added during table design. +- Cross-shard JOIN is implemented by using broadcast tables, ER shards, and ShareJoin. +- Currently, DDM does not allow cross-schema update or deletion of multiple tables. diff --git a/umn/source/faqs/sql_syntax/does_ddm_support_forced_conversion_of_data_types.rst b/umn/source/faqs/sql_syntax/does_ddm_support_forced_conversion_of_data_types.rst new file mode 100644 index 0000000..dd17fdb --- /dev/null +++ b/umn/source/faqs/sql_syntax/does_ddm_support_forced_conversion_of_data_types.rst @@ -0,0 +1,8 @@ +:original_name: ddm_04_0019.html + +.. _ddm_04_0019: + +Does DDM Support Forced Conversion of Data Types? +================================================= + +Data type conversion is an advanced function. 
DDM will be gradually upgraded to be compatible with more SQL syntax. If you need this function, submit a service ticket. diff --git a/umn/source/faqs/sql_syntax/how_do_i_optimize_sql_statements.rst b/umn/source/faqs/sql_syntax/how_do_i_optimize_sql_statements.rst new file mode 100644 index 0000000..2bc5ee9 --- /dev/null +++ b/umn/source/faqs/sql_syntax/how_do_i_optimize_sql_statements.rst @@ -0,0 +1,10 @@ +:original_name: ddm_04_0016.html + +.. _ddm_04_0016: + +How Do I Optimize SQL Statements? +================================= + +- You are advised to use INNER JOIN instead of LEFT JOIN or RIGHT JOIN. +- When LEFT JOIN or RIGHT JOIN is used, the ON conditions are evaluated first and the WHERE conditions last. Therefore, when using LEFT JOIN or RIGHT JOIN, place filter conditions in the ON clause whenever possible so that fewer rows need to be processed by the WHERE clause. +- When possible, use JOIN instead of subqueries to avoid full table scans on large tables. diff --git a/umn/source/faqs/sql_syntax/index.rst b/umn/source/faqs/sql_syntax/index.rst new file mode 100644 index 0000000..e9207eb --- /dev/null +++ b/umn/source/faqs/sql_syntax/index.rst @@ -0,0 +1,20 @@ +:original_name: ddm_04_0046.html + +.. _ddm_04_0046: + +SQL Syntax +========== + +- :ref:`Does DDM Support Distributed JOINs? ` +- :ref:`How Do I Optimize SQL Statements? ` +- :ref:`Does DDM Support Forced Conversion of Data Types? ` +- :ref:`What Should I Do If an Error Is Reported When Multiple Data Records Are Inserted into Batches Using the INSERT Statement? ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + does_ddm_support_distributed_joins + how_do_i_optimize_sql_statements + does_ddm_support_forced_conversion_of_data_types + what_should_i_do_if_an_error_is_reported_when_multiple_data_records_are_inserted_into_batches_using_the_insert_statement diff --git a/umn/source/faqs/sql_syntax/what_should_i_do_if_an_error_is_reported_when_multiple_data_records_are_inserted_into_batches_using_the_insert_statement.rst b/umn/source/faqs/sql_syntax/what_should_i_do_if_an_error_is_reported_when_multiple_data_records_are_inserted_into_batches_using_the_insert_statement.rst new file mode 100644 index 0000000..609fc1e --- /dev/null +++ b/umn/source/faqs/sql_syntax/what_should_i_do_if_an_error_is_reported_when_multiple_data_records_are_inserted_into_batches_using_the_insert_statement.rst @@ -0,0 +1,11 @@ +:original_name: ddm_04_0036.html + +.. _ddm_04_0036: + +What Should I Do If an Error Is Reported When Multiple Data Records Are Inserted into Batches Using the INSERT Statement? +========================================================================================================================= + +Solution +-------- + +Split an INSERT statement into multiple small statements. diff --git a/umn/source/function_overview.rst b/umn/source/function_overview.rst new file mode 100644 index 0000000..6188194 --- /dev/null +++ b/umn/source/function_overview.rst @@ -0,0 +1,36 @@ +:original_name: ddm_03_0053.html + +.. _ddm_03_0053: + +Function Overview +================= + +Distributed Database Middleware (DDM) is a MySQL-compatible, distributed middleware service designed for relational databases. It can resolve distributed scaling issues to break through capacity and performance bottlenecks of MySQL databases, helping handle highly concurrent access to massive volumes of data. + +:ref:`Table 1 ` lists the functions supported by DDM. + +.. _ddm_03_0053__table297216124517: + +.. 
table:: **Table 1** DDM functions + + +--------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Category | Function | + +============================================+===================================================================================================================================================================================+ + | Instances | Creating, deleting, renewing, unsubscribing from, and restarting a DDM instance, and changing class of a DDM instance. For details, see :ref:`Instance Management `. | + +--------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Backups | Restoring data to a new DDM instance and restoring metadata. For details, see :ref:`Backups and Restorations `. | + +--------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter templates | Creating, editing, replicating, and applying a parameter template, and comparing two parameter templates. For details, see :ref:`Parameter Template Management `. | + +--------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Task center | Enabling you to view progress and statuses of asynchronous tasks submitted on the console. For details, see :ref:`Task Center `. 
| + +--------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Schemas | Creating, exporting, importing, and deleting schemas. For details, see :ref:`Schema Management `. | + +--------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Flexible shards configuration for a schema | You can increase shards or data nodes to scale out storage. For details, see :ref:`Shard Configuration `. | + +--------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Accounts | Creating, modifying, and deleting a DDM account, and resetting its password. For details, see :ref:`Account Management `. | + +--------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Monitoring | Providing metrics and methods of viewing metrics. For details, see :ref:`Monitoring Management `. | + +--------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | SQL syntax | Describing DDL, DML, global sequence, SQL statements, and sharding algorithms. For details, see :ref:`SQL Syntax `. 
| + +--------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/getting_started/index.rst b/umn/source/getting_started/index.rst new file mode 100644 index 0000000..3403b04 --- /dev/null +++ b/umn/source/getting_started/index.rst @@ -0,0 +1,22 @@ +:original_name: ddm_02_0001.html + +.. _ddm_02_0001: + +Getting Started +=============== + +- :ref:`Overview ` +- :ref:`Step 1: Create a DDM Instance and an RDS for MySQL Instance ` +- :ref:`Step 2: Create a Schema and Associate It with an RDS for MySQL Instance ` +- :ref:`Step 3: Create a DDM Account ` +- :ref:`Step 4: Log In to the DDM Schema ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + step_1_create_a_ddm_instance_and_an_rds_for_mysql_instance + step_2_create_a_schema_and_associate_it_with_an_rds_for_mysql_instance + step_3_create_a_ddm_account + step_4_log_in_to_the_ddm_schema diff --git a/umn/source/getting_started/overview.rst b/umn/source/getting_started/overview.rst new file mode 100644 index 0000000..3ea57fe --- /dev/null +++ b/umn/source/getting_started/overview.rst @@ -0,0 +1,28 @@ +:original_name: ddm_01_0020.html + +.. _ddm_01_0020: + +Overview +======== + +Scenarios +--------- + +This section describes how to associate a DDM instance with a data node (RDS for MySQL instance). + +Process of Using DDM +-------------------- + +:ref:`Step 1: Create a DDM Instance and an RDS for MySQL Instance ` + +:ref:`Step 2: Create a Schema and Associate It with an RDS for MySQL Instance ` + +:ref:`Step 3: Create a DDM Account ` + +:ref:`Step 4: Log In to the DDM Schema ` + + +.. 
figure:: /_static/images/en-us_image_0000001733146485.png + :alt: **Figure 1** Flowchart for using DDM + + **Figure 1** Flowchart for using DDM diff --git a/umn/source/getting_started/step_1_create_a_ddm_instance_and_an_rds_for_mysql_instance.rst b/umn/source/getting_started/step_1_create_a_ddm_instance_and_an_rds_for_mysql_instance.rst new file mode 100644 index 0000000..074efff --- /dev/null +++ b/umn/source/getting_started/step_1_create_a_ddm_instance_and_an_rds_for_mysql_instance.rst @@ -0,0 +1,115 @@ +:original_name: ddm_06_0002.html + +.. _ddm_06_0002: + +Step 1: Create a DDM Instance and an RDS for MySQL Instance +=========================================================== + +Procedure +--------- + +#. Log in to the management console. + +#. Click |image1| in the upper left corner and select the required region. + +#. Click **Service List** and choose **Databases** > **Distributed Database Middleware**. + +#. On the **Instances** page, in the upper right corner, click **Create** **DDM Instance**. + +#. On the displayed page, configure the required parameters. + + .. table:: **Table 1** Parameter description + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+===============================================================================================================================================================================================================================================================================+ + | Region | Region where the DDM instance is located. Select the required region. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | AZ | Availability zone where the DDM instance is deployed. | + | | | + | | Nodes in a DDM instance can be deployed on different physical servers in the same AZ to keep services always available even if one physical server becomes faulty. | + | | | + | | A DDM instance can be deployed across AZs to provide cross-AZ DR. | + | | | + | | If necessary, you can select multiple AZs when you create a DDM instance. Then nodes of the instance will be deployed in multiple different AZs. | + | | | + | | .. note:: | + | | | + | | Deploy your application, DDM instance, and required RDS instances in the same AZ to reduce network latency. Cross-AZ deployment may increase network latency. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Instance Name | Name of the DDM instance, which: | + | | | + | | - Cannot be left blank. | + | | - Must start with a letter. | + | | - Must be 4 to 64 characters long. | + | | - Can contain only letters, digits, and hyphens (-). | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Node Class | Class of the DDM instance node. 
You can select **General-enhanced** or **Kunpeng general computing-plus** and then specify a node class. | + | | | + | | .. note:: | + | | | + | | Estimate compute and storage requirements of your applications based on your service type and scale before you create a DDM instance, and then select an appropriate node class so that the CPU and memory specifications of your DDM instance can better meet your needs. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Instance Nodes | Number of nodes in a DDM instance. Up to 32 nodes are supported. | + | | | + | | .. note:: | + | | | + | | At least 2 nodes are recommended because using a single node cannot guarantee high availability. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | VPC | VPC that the DDM instance belongs to. This VPC isolates networks for different services. It allows you to manage and configure private networks, simplifying network management. | + | | | + | | Click **View VPC** to show more details and security group rules. | + | | | + | | .. note:: | + | | | + | | The DDM instance should be in the same VPC as the required RDS for MySQL instance. | + | | | + | | To ensure network connectivity, the DDM instance you purchased must be in the same VPC as your applications and RDS for MySQL instances. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subnet | Name and IP address range of the subnet | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Security Group | Select an existing security group. | + | | | + | | You are advised to select the same security group for your DDM instance, application, and RDS for MySQL instances so that they can communicate with each other. If different security groups are selected, add security group rules to enable network access. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter Template | Select an existing parameter template. You can also click **View Parameter Template** to set parameters on the displayed page. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Tags | (Optional) Adding tags helps you better identify and manage your DDM resources. | + | | | + | | You can add tags to your instance. Each instance can have a maximum of 20 tags. 
| + | | | + | | **Tag key: This parameter is mandatory and cannot be null.** | + | | | + | | - Must be unique for each instance. | + | | - Can only consist of digits, letters, underscores (_), hyphens (-), and at sign (@). | + | | - Can include 1 to 36 characters. | + | | - Cannot be an empty string or start with **\_sys\_**. | + | | | + | | **Tag value: This parameter is mandatory.** | + | | | + | | - Is an empty string by default. | + | | - Can only consist of digits, letters, underscores (_), hyphens (-), and at sign (@). | + | | - Can contain 0 to 43 characters. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. After the configuration is complete, click **Next** at the bottom of the page. + +#. Confirm the configurations and click **Submit**. + +#. To view and manage the instance, go to the **Instances** page. + + The default database port is **5066** and can be changed after a DDM instance is created. + + For details, see :ref:`Changing a Database Port `. + +#. Switch to the RDS console, click **Create** **DB Instance** in the upper right corner, specify the required information, and click **Next**. + + For details about how to create an RDS for MySQL instance, see `Create a DB Instance `__. + + .. caution:: + + The RDS for MySQL instance must be in the same VPC and subnet as your DDM instance. If they are not in the same subnet, configure routes to ensure network connectivity. + +#. After confirming the settings, click **Submit**. Wait 1 to 3 minutes till the instance is created. + +.. 
|image1| image:: /_static/images/en-us_image_0000001685147682.png diff --git a/umn/source/getting_started/step_2_create_a_schema_and_associate_it_with_an_rds_for_mysql_instance.rst b/umn/source/getting_started/step_2_create_a_schema_and_associate_it_with_an_rds_for_mysql_instance.rst new file mode 100644 index 0000000..fbc30ec --- /dev/null +++ b/umn/source/getting_started/step_2_create_a_schema_and_associate_it_with_an_rds_for_mysql_instance.rst @@ -0,0 +1,39 @@ +:original_name: ddm_02_0013.html + +.. _ddm_02_0013: + +Step 2: Create a Schema and Associate It with an RDS for MySQL Instance +======================================================================= + +Procedure +--------- + +#. Log in to the DDM console, and in the navigation pane, choose **Instances**. In the instance list, locate the required DDM instance and click **Create Schema** in the **Operation** column. +#. On the displayed page, specify a sharding mode, enter a schema name, set the number of shards, select the required DDM accounts, and click **Next**. + + .. table:: **Table 1** Parameter description + + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+==========================================================================================================================================================================================================================+ + | Sharding | - **Sharded**: indicates that one schema can be associated with multiple data nodes, and all shards will be evenly distributed across the nodes. | + | | - **Unsharded**: indicates that one schema can be associated with only one data node, and only one shard can be created on the data node. 
| + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Schema | The name contains 2 to 48 characters and must start with a lowercase letter. Only lowercase letters, digits, and underscores (_) are allowed. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Account | The DDM account that needs to be associated with the schema. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Data Nodes | Select only the data nodes that are in the same VPC as your DDM instance and not in use by other data nodes. DDM will create databases on the selected data nodes without affecting their existing databases and tables. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Shards | The total shards are the shards on all data nodes. There cannot be more data nodes than there are shards in the schema. Each data node must have at least one shard assigned. Recommended shards per data node: 8 to 64. 
| + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. On the **DB Instance Connection** page, enter a database account with the required permissions and click **Test Connection**. + + .. note:: + + Required permissions: SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER WITH GRANT OPTION + + You can create a database account for the RDS for MySQL instance and assign it the above permissions in advance. + +#. After the test becomes successful, click **Finish**. diff --git a/umn/source/getting_started/step_3_create_a_ddm_account.rst b/umn/source/getting_started/step_3_create_a_ddm_account.rst new file mode 100644 index 0000000..307a234 --- /dev/null +++ b/umn/source/getting_started/step_3_create_a_ddm_account.rst @@ -0,0 +1,41 @@ +:original_name: ddm_02_0000.html + +.. _ddm_02_0000: + +Step 3: Create a DDM Account +============================ + +Procedure +--------- + +#. Log in to the DDM console, in the instance list, locate the required DDM instance and click its name. +#. In the navigation pane, choose **Accounts**. +#. On the displayed page, click **Create Account** and configure required parameters. + + .. 
table:: **Table 1** Required parameters + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=======================================================================================================================================================+ + | Username | Username of the account. | + | | | + | | The username can include 1 to 32 characters and must start with a letter. Only letters, digits, and underscores (_) are allowed. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Password | Password of the account. The password: | + | | | + | | - Can include 8 to 32 characters. | + | | - Must contain at least three of the following character types: letters, digits, and special characters ``~!@#%^*-_=+?`` | + | | - Cannot be a weak password. It cannot be overly simple and easily guessed. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Confirm Password | The confirm password must be the same as the entered password. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Schema | Schema to be associated with the DDM account. You can select an existing schema from the drop-down list. | + | | | + | | Only the associated schemas can be accessed using the account. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permissions | Options: **CREATE**, **DROP**, **ALTER**, **INDEX**, **INSERT**, **DELETE**, **UPDATE**, and **SELECT**. You can select any or a combination of them. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Description | Description of the account, which cannot exceed 256 characters. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. Click **OK**. diff --git a/umn/source/getting_started/step_4_log_in_to_the_ddm_schema.rst b/umn/source/getting_started/step_4_log_in_to_the_ddm_schema.rst new file mode 100644 index 0000000..c38b3b8 --- /dev/null +++ b/umn/source/getting_started/step_4_log_in_to_the_ddm_schema.rst @@ -0,0 +1,206 @@ +:original_name: ddm_02_0005.html + +.. _ddm_02_0005: + +Step 4: Log In to the DDM Schema +================================ + +After you create a DDM instance, you can log in to it using a client such as Navicat, or connect to the required schema in the instance using the CLI or JDBC driver. + +This section describes how to log in to a DDM instance or a schema. + +Preparations +------------ + +Before you log in to your DDM instance or schema, you have to obtain its connection address. + +Obtaining the Schema Connection Address +--------------------------------------- + +#. Log in to the DDM console. +#. Hover on the left menu to display **Service List** and choose **Databases** > **Distributed Database Middleware**. +#. In the navigation pane, choose **Instances**. In the instance list, locate the required DDM instance and click its name. +#. 
In the navigation pane, choose **Schemas**. +#. In the schema list, locate the required schema and click its name. +#. In the **Connection Address** area, view CLI and JDBC connection addresses. + + .. note:: + + - If load balancing is enabled, one floating IP address will be assigned to a DDM instance even if it has multiple nodes. You can use this address to connect to the DDM instance for load balancing. + - There are some historical instances that do not support load balancing, so they have multiple IP addresses. For load balancing, you can use JDBC connection strings to connect to them. + - If read-only groups are created, each group will be assigned a load balancing address for service isolation. + +Connection Methods +------------------ + +For details about method 1, see :ref:`Using Navicat to Log In to a DDM Instance `. + +For details about method 2, see :ref:`Using the MySQL CLI to Log In to a Schema `. + +For details about method 3, see :ref:`Using a JDBC Driver to Log In to a Schema `. + +For details about method 4, see :ref:`Logging In to a DDM Instance on the DDM Console `. + +.. note:: + + #. For security purposes, select an ECS in the same VPC as your DDM instance. + #. Ensure that a MySQL client has been installed on the required ECS or the MySQL connection driver has been configured. + #. Before you log in to a DDM instance, configure its information on the client or connection driver. + +.. _ddm_02_0005__section19691512121812: + +Using Navicat to Log In to a DDM Instance +----------------------------------------- + +#. Log in to the DDM console, locate the required DDM instance, and click its name. +#. Ask technical support to add an EIP to the feature whitelist. In the **Instance Information** area, click **Bind**. In the displayed dialog box, select the EIP and click **OK**. Bind the EIP with your DDM instance. +#. In the navigation pane on the left, click the VPC icon and choose **Access Control** > **Security Groups**. +#. 
On the **Security Groups** page, locate the required security group and click **Manage Rule** in the **Operation** column. On the displayed page, click **Add Rule**. Configure the security group rule as needed and click **OK**. + + .. note:: + + After binding an EIP to your DDM instance, set strict inbound and outbound rules for the security group to enhance database security. + +#. Open Navicat and click **Connection**. In the displayed dialog box, enter the host IP address (EIP), username, and password (DDM account). + + .. note:: + + Navicat 12 is recommended. + +#. Click **Test Connection**. If a message is returned indicating that the connection is successful, click **OK**. The connection takes 1 to 2 minutes to be established. If the connection fails, the failure cause is displayed. Modify the required information and try again. + +.. note:: + + Using Navicat to access a DDM instance is similar to using other visualized MySQL tools such as MySQL Workbench. Therefore, the procedure for using other visualized MySQL tools to connect to a DDM instance is omitted. + +.. _ddm_02_0005__section1621624581510: + +Using the MySQL CLI to Log In to a Schema +----------------------------------------- + +#. Log in to the required ECS, open the CLI, and run the following command: + + .. code-block:: + + mysql -h ${DDM_SERVER_ADDRESS} -P ${DDM_SERVER_PORT} -u ${DDM_USER} -p [-D ${DDM_DBNAME}] [--default-character-set=utf8] [--default_auth=mysql_native_password] + + ..
table:: **Table 1** Parameter description + + +------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | Parameter | Description | Example Value | + +====================================+========================================================================================================================================================+=======================+ + | DDM_SERVER_ADDRESS | IP address of the DDM instance | 192.168.0.200 | + +------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | DDM_SERVER_PORT | Connection port of the DDM instance | 5066 | + +------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | DDM_USER | Account of the DDM instance | dbuser01 | + +------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | DDM_DBNAME | (Optional) Name of the target schema in the DDM instance | ``-`` | + +------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | default-character-set=utf8 | (Optional) Select character set UTF-8 for encoding. | ``-`` | + | | | | + | | Configure this parameter if garbled characters are displayed during parsing due to inconsistency between MySQL connection code and actually used code. 
| | +------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + | default_auth=mysql_native_password | (Optional) The password authentication plug-in is used by default. | ``-`` | + | | | | + | | If you use the MySQL 8.0 client, this parameter is required. | | +------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ + +#. View the command output. The following is an example output of running a MySQL command in the Windows CLI. + + .. code-block:: + + C:\Users\testDDM>mysql -h 192.168.0.200 -P 5066 -D db_5133 -u dbuser01 -p + Enter password: + Reading table information for completion of table and column names + You can turn off this feature to get a quicker startup with -A + + Welcome to the MySQL monitor. Commands end with ; or \g. + Your MySQL connection id is 5 + Server version: 5.6.29 + + Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved. + + Oracle is a registered trademark of Oracle Corporation and/or its + affiliates. Other names may be trademarks of their respective + owners. + + Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + + mysql> + +.. _ddm_02_0005__section1690417388176: + +Using a JDBC Driver to Log In to a Schema +----------------------------------------- + +#. Load the required JDBC driver. + + .. code-block:: + + Class.forName("com.mysql.jdbc.Driver"); + + .. note:: + + JDBC driver 5.1.49 or later is recommended. + +#. Create a database connection. + + .. code-block:: + + String username = "dbuser01"; + String password = "xxxxxx"; + String url = "jdbc:mysql://192.168.0.200:5066/db_5133"; + Connection con = DriverManager.getConnection(url, username, password); + +#.
Create a Statement object. + + .. code-block:: + + Statement stmt = con.createStatement(); + +#. Execute the required SQL statement. + + .. code-block:: + + ResultSet rs = stmt.executeQuery("select now() as Systemtime"); + con.close(); + +#. .. _ddm_02_0005__li139111931387: + + (Optional) Optimize code as needed. + + .. code-block:: + + loadBalanceAutoCommitStatementThreshold=5&loadBalanceHostRemovalGracePeriod=15000&loadBalanceBlacklistTimeout=60000&loadBalancePingTimeout=5000&retriesAllDown=10&connectTimeout=10000 + + .. note:: + + - Parameters **loadBalanceAutoCommitStatementThreshold** and **retriesAllDown** must be configured based on the example in :ref:`5 `. Otherwise, an infinite loop may occur during the connection switchover, resulting in stack overflow. + - **loadBalanceAutoCommitStatementThreshold**: defines the number of matching statements which will trigger the driver to potentially swap physical server connections. + - **loadBalanceHostRemovalGracePeriod**: indicates the grace period to wait for a host being removed from a load-balanced connection, to be released when it is the active host. + - **loadBalanceBlacklistTimeout**: indicates the time in milliseconds between checks of servers which are unavailable, by controlling how long a server lives in the global blacklist. + - **loadBalancePingTimeout**: indicates the time in milliseconds that the connection will wait for a response to a ping operation when you set **loadBalanceValidateConnectionOnSwapServer** to **true**. + - **retriesAllDown**: indicates the maximum number of connection attempts before an exception is thrown when a valid host is searched. SQLException will be returned if the threshold of retries is reached with no valid connections obtained. + - **connectTimeout**: indicates the maximum amount of time in milliseconds that the JDBC driver is willing to wait to set up a socket connection. **0** indicates that the connection does not time out. 
This parameter is available in JDK 1.4 or later versions. The default value is **0**. + +.. _ddm_02_0005__section144072633313: + +Logging In to a DDM Instance on the DDM Console +----------------------------------------------- + +#. Log in to the DDM console. + +#. In the navigation pane, choose **Instances**. + +#. In the instance list, locate the required instance and click **Log In** in the **Operation** column. + +#. On the displayed page, enter the username and password of the DDM account. + +#. Click **Test Connection**. + +#. (Optional) Enable **Collect Metadata Periodically** and **Show Executed SQL Statements**. + +#. Ensure that all settings are correct and click **Log In**. diff --git a/umn/source/index.rst b/umn/source/index.rst index e939631..63c23a7 100644 --- a/umn/source/index.rst +++ b/umn/source/index.rst @@ -2,3 +2,26 @@ Distributed Database Middleware - User Guide ============================================ +.. toctree:: + :maxdepth: 1 + + service_overview/index + getting_started/index + function_overview + permissions_management/index + instance_management/index + connection_management/index + parameter_template_management/index + task_center + schema_management/index + shard_configuration/index + data_node_management/index + account_management/index + backups_and_restorations/index + slow_queries + monitoring_management/index + tags + auditing/index + sql_syntax/index + faqs/index + change_history diff --git a/umn/source/instance_management/administrator_account.rst b/umn/source/instance_management/administrator_account.rst new file mode 100644 index 0000000..61890f0 --- /dev/null +++ b/umn/source/instance_management/administrator_account.rst @@ -0,0 +1,60 @@ +:original_name: ddm_06_0021.html + +.. _ddm_06_0021: + +Administrator Account +===================== + +Overview +-------- + +DDM allows you to create an administrator account for your instance.
This account has superuser permissions and can modify the permissions of accounts displayed on the **Accounts** page. The administrator account has read/write permissions for all schemas and tables by default, including schemas being created. Once an administrator account is created, it cannot be deleted. + +You can configure an administrator account when you create an instance, or create one on the instance details page after your instance has been created. + +Prerequisites +------------- + +The kernel version of DDM instances must be 3.0.9 or later. + +Precautions +----------- + +- After an administrator account is created, its username cannot be modified. +- The administrator account cannot have the same username as any DDM account on the **Accounts** page. +- If the administrator account is modified on the management control plane, all its original permissions are cleared, and the newly assigned permissions take effect. + +Scenarios +--------- + +- If you forget the password of the administrator account, reset it by referring to :ref:`Resetting the Administrator Password `. +- If you select **Skip** when you create a DDM instance, you can create an administrator account by referring to :ref:`Creating an Administrator Account ` on the instance basic information page. + +.. _ddm_06_0021__section1545145125010: + +Resetting the Administrator Password +------------------------------------ + +#. Log in to the management console. +#. Click |image1| in the upper left corner and select a region and a project. +#. Click |image2| in the upper left corner of the page and choose **Databases** > **Distributed Database Middleware**. +#. In the instance list, locate the DDM instance whose administrator password you want to reset and click its name. Click **Reset Password**. +#. In the displayed dialog box, enter a new password and confirm the password. Click **Yes**. +#. Wait until the request is submitted. + +..
_ddm_06_0021__section74841851143710: + +Creating an Administrator Account +--------------------------------- + +#. Log in to the management console. +#. Click |image3| in the upper left corner and select a region and a project. +#. Click |image4| in the upper left corner of the page and choose **Databases** > **Distributed Database Middleware**. +#. In the instance list, locate the DDM instance that you want to create an administrator account for and click its name. Click **Create Administrator**. +#. In the displayed dialog box, enter the administrator username and password, confirm the password, and click **Yes**. +#. Wait until the request is submitted. + +.. |image1| image:: /_static/images/en-us_image_0000001733146449.png +.. |image2| image:: /_static/images/en-us_image_0000001733266565.png +.. |image3| image:: /_static/images/en-us_image_0000001685147638.png +.. |image4| image:: /_static/images/en-us_image_0000001733266553.png diff --git a/umn/source/instance_management/changing_a_parameter_template.rst b/umn/source/instance_management/changing_a_parameter_template.rst new file mode 100644 index 0000000..a72d255 --- /dev/null +++ b/umn/source/instance_management/changing_a_parameter_template.rst @@ -0,0 +1,20 @@ +:original_name: ddm_06_0020.html + +.. _ddm_06_0020: + +Changing a Parameter Template +============================= + +Prerequisites +------------- + +You have logged in to the DDM console. + +Procedure +--------- + +#. In the instance list, locate the DDM instance that you want to change a parameter template for and choose **More** > **Change Parameter Template** in the **Operation** column. + + The **Change Parameter Template** dialog box is displayed. + +#. Select the required parameter template and click **OK**.
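The administrator and DDM account passwords described earlier must satisfy the documented policy: 8 to 32 characters, at least three character types, and only the special characters ``~!@#%^*-_=+?``. A client-side pre-check can catch obvious violations before a console round-trip. The sketch below is illustrative only; it assumes uppercase and lowercase letters count as separate character types, which the documentation does not state explicitly, and it cannot detect "weak" passwords, which only the service itself rejects.

```java
public class DdmPasswordCheck {
    // Special characters permitted by the documented DDM password policy.
    static final String SPECIALS = "~!@#%^*-_=+?";

    // Returns true if the password is 8-32 characters long, uses only
    // allowed characters, and contains at least three character types.
    // Assumption: uppercase and lowercase letters are separate types.
    static boolean isAcceptable(String pwd) {
        if (pwd == null || pwd.length() < 8 || pwd.length() > 32) {
            return false;
        }
        boolean upper = false, lower = false, digit = false, special = false;
        for (char c : pwd.toCharArray()) {
            if (Character.isUpperCase(c)) upper = true;
            else if (Character.isLowerCase(c)) lower = true;
            else if (Character.isDigit(c)) digit = true;
            else if (SPECIALS.indexOf(c) >= 0) special = true;
            else return false; // character outside the allowed set
        }
        int types = (upper ? 1 : 0) + (lower ? 1 : 0)
                  + (digit ? 1 : 0) + (special ? 1 : 0);
        return types >= 3;
    }

    public static void main(String[] args) {
        System.out.println(isAcceptable("Dbuser01#"));     // true: 4 types
        System.out.println(isAcceptable("short1A"));       // false: 7 chars
        System.out.println(isAcceptable("alllowercase1")); // false: 2 types
    }
}
```

Even when this check passes, the console may still reject a password it considers too easy to guess, so treat the service response as authoritative.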
diff --git a/umn/source/instance_management/changing_class_of_a_ddm_node.rst b/umn/source/instance_management/changing_class_of_a_ddm_node.rst new file mode 100644 index 0000000..1f4e6a8 --- /dev/null +++ b/umn/source/instance_management/changing_class_of_a_ddm_node.rst @@ -0,0 +1,33 @@ +:original_name: ddm_06_0003.html + +.. _ddm_06_0003: + +Changing Class of a DDM Node +============================ + +Prerequisites +------------- + +- You have logged in to the DDM console. +- The DDM instance is in the **Running** state. + +.. important:: + + Change the node class during off-peak hours because services will be briefly interrupted while the class is being changed. + +Procedure +--------- + +.. important:: + + After a read-only group is created, the entry for changing the node class is moved to the **Operation** column of the group. + +#. In the instance list, locate the DDM instance whose node class you want to change and click its name. Click **Change**. +#. On the displayed page, select the required class. +#. Confirm the configurations and click **Submit**. +#. Switch back to the instance list and check whether the status of the instance changes to **Changing class**. You can also view the change task in **Task Center**. + + .. note:: + + - Once the change operation is performed, it cannot be undone. To change the class again, submit another request after the class change is complete. + - Node class can be upgraded or downgraded. diff --git a/umn/source/instance_management/creating_a_ddm_instance.rst b/umn/source/instance_management/creating_a_ddm_instance.rst new file mode 100644 index 0000000..6f1907a --- /dev/null +++ b/umn/source/instance_management/creating_a_ddm_instance.rst @@ -0,0 +1,94 @@ +:original_name: ddm_06_00017.html + +.. _ddm_06_00017: + +Creating a DDM Instance +======================= + +Prerequisites +------------- + +You have logged in to the DDM console. + +Procedure +--------- + +#.
On the displayed page, in the upper right corner, click **Create** **DDM Instance**. +#. On the displayed page, configure the required parameters. + + .. table:: **Table 1** Parameter description + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+===============================================================================================================================================================================================================================================================================+ + | Region | Region where the DDM instance is located. Select the required region. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | AZ | Availability zone where the DDM instance is deployed. | + | | | + | | Nodes in a DDM instance can be deployed on different physical servers in the same AZ to keep services always available even if one physical server becomes faulty. | + | | | + | | A DDM instance can be deployed across AZs to provide cross-AZ DR. | + | | | + | | If necessary, you can select multiple AZs when you create a DDM instance. Then nodes of the instance will be deployed in multiple different AZs. | + | | | + | | .. note:: | + | | | + | | Deploy your application, DDM instance, and required RDS instances in the same AZ to reduce network latency. Cross-AZ deployment may increase network latency. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Instance Name | Name of the DDM instance, which: | + | | | + | | - Cannot be left blank. | + | | - Must start with a letter. | + | | - Must be 4 to 64 characters long. | + | | - Can contain only letters, digits, and hyphens (-). | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Node Class | Class of the DDM instance node. You can select **General-enhanced** or **Kunpeng general computing-plus** and then specify a node class. | + | | | + | | .. note:: | + | | | + | | Estimate compute and storage requirements of your applications based on your service type and scale before you create a DDM instance, and then select an appropriate node class so that the CPU and memory specifications of your DDM instance can better meet your needs. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Instance Nodes | Number of nodes in a DDM instance. Up to 32 nodes are supported. | + | | | + | | .. note:: | + | | | + | | At least 2 nodes are recommended because using a single node cannot guarantee high availability. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | VPC | VPC that the DDM instance belongs to. This VPC isolates networks for different services. It allows you to manage and configure private networks, simplifying network management. | + | | | + | | Click **View VPC** to show more details and security group rules. | + | | | + | | .. note:: | + | | | + | | The DDM instance should be in the same VPC as the required RDS for MySQL instance. | + | | | + | | To ensure network connectivity, the DDM instance you purchased must be in the same VPC as your applications and RDS for MySQL instances. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subnet | Name and IP address range of the subnet | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Security Group | Select an existing security group. | + | | | + | | You are advised to select the same security group for your DDM instance, application, and RDS for MySQL instances so that they can communicate with each other. If different security groups are selected, add security group rules to enable network access. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter Template | Select an existing parameter template. You can also click **View Parameter Template** to set parameters on the displayed page. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Tags | (Optional) Adding tags helps you better identify and manage your DDM resources. | + | | | + | | You can add tags to your instance. Each instance can have a maximum of 20 tags. | + | | | + | | **Tag key: This parameter is mandatory and cannot be null.** | + | | | + | | - Must be unique for each instance. | + | | - Can only consist of digits, letters, underscores (_), hyphens (-), and at sign (@). | + | | - Can include 1 to 36 characters. | + | | - Cannot be an empty string or start with **\_sys\_**. | + | | | + | | **Tag value: This parameter is mandatory.** | + | | | + | | - Is an empty string by default. | + | | - Can only consist of digits, letters, underscores (_), hyphens (-), and at sign (@). | + | | - Can contain 0 to 43 characters. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. After the configuration is complete, click Create Now at the bottom of the page. +#. Confirm the configurations and click **Submit**. 
diff --git a/umn/source/instance_management/deleting_a_ddm_instance.rst b/umn/source/instance_management/deleting_a_ddm_instance.rst new file mode 100644 index 0000000..b09a80f --- /dev/null +++ b/umn/source/instance_management/deleting_a_ddm_instance.rst @@ -0,0 +1,25 @@ +:original_name: ddm_06_0005.html + +.. _ddm_06_0005: + +Deleting a DDM Instance +======================= + +You can delete instances that are no longer needed. + +Precautions +----------- + +- Deleted instances cannot be recovered. Exercise caution when performing this operation. +- Deleting a DDM instance will not affect its associated RDS instances. +- Deleting a DDM instance involves deleting its associated schemas and DDM accounts. +- If you need to delete data stored on the associated data nodes when deleting a DDM instance, select **Delete data on data nodes**. + +Procedure +--------- + +#. In the instance list, locate the DDM instance that you want to delete and choose **More** > **Delete** in the **Operation** column. + +#. In the displayed dialog box, click **Yes**. + + To delete data stored on the associated data nodes, select **Delete data on data nodes**. diff --git a/umn/source/instance_management/index.rst b/umn/source/instance_management/index.rst new file mode 100644 index 0000000..a4e685a --- /dev/null +++ b/umn/source/instance_management/index.rst @@ -0,0 +1,32 @@ +:original_name: ddm_06_0001.html + +.. _ddm_06_0001: + +Instance Management +=================== + +- :ref:`Creating a DDM Instance ` +- :ref:`Splitting Read-only and Read-Write Services ` +- :ref:`Changing Class of a DDM Node ` +- :ref:`Scaling Out a DDM Instance ` +- :ref:`Scaling In a DDM Instance ` +- :ref:`Restarting a DDM Instance ` +- :ref:`Deleting a DDM Instance ` +- :ref:`Modifying Parameters of a DDM Instance ` +- :ref:`Changing a Parameter Template ` +- :ref:`Administrator Account ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + creating_a_ddm_instance + splitting_read-only_and_read-write_services/index + changing_class_of_a_ddm_node + scaling_out_a_ddm_instance + scaling_in_a_ddm_instance + restarting_a_ddm_instance/index + deleting_a_ddm_instance + modifying_parameters_of_a_ddm_instance + changing_a_parameter_template + administrator_account diff --git a/umn/source/instance_management/modifying_parameters_of_a_ddm_instance.rst b/umn/source/instance_management/modifying_parameters_of_a_ddm_instance.rst new file mode 100644 index 0000000..7221aed --- /dev/null +++ b/umn/source/instance_management/modifying_parameters_of_a_ddm_instance.rst @@ -0,0 +1,124 @@ +:original_name: ddm_03_0058.html + +.. _ddm_03_0058: + +Modifying Parameters of a DDM Instance +====================================== + +Scenarios +--------- + +Configure parameters of a DDM instance based on your needs to keep the instance running well. + +Prerequisites +------------- + +There is a DDM instance available and running normally. + +Procedure +--------- + +#. Log in to the DDM console. + +#. In the navigation pane, choose **Instances**. + +#. In the instance list, locate the DDM instance whose parameters you want to configure and click its name. + +#. In the left pane, click **Parameters** and modify parameter values as needed. + + .. 
table:: **Table 1** Parameters of a DDM instance + + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | Parameter | Description | Value Range | Default Value | + +==================================+=======================================================================================================================================================================================================================================================================================================================================================================================================+=====================================================================================================================================================================================================================================================+====================+ + | bind_table | Data association among multiple sharded tables. The optimizer processes JOIN operations at the MySQL layer based on these associations. For details about parameter examples, see the description below the table. | The value should be in format **[{.,.},{.,.},...]**. *.,.* indicates a table name.column name pair, and the value may contain multiple pairs. | ``-`` | + | | | | | + | | | The version should be: | | + | | | | | + | | | DDM 2.3.2.7 or later. 
| | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | character_set_server | DDM server's character set. To store emoticons, set both this parameter and the character set on RDS to **utf8mb4**. | gbk, utf8, utf8mb4 | utf8mb4 | + | | | | | + | | For a DDM instance 3.0.9 or later, you can execute **show variables like '%char%'** to query its character set. You will find that **character_set_client**, **character_set_results**, and **character_set_connection** in the command output all have a fixed value, **utf8mb4**. | | | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | collation_server | Collation on the DDM server. 
| utf8mb4_unicode_ci, utf8mb4_bin, utf8mb4_general_ci | utf8mb4_unicode_ci | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | concurrent_execution_level | Concurrency level of scanning table shards in a logical table. **DATA_NODE**: indicates that database shards are scanned in parallel and table shards in each database shard are scanned in serial. **RDS_INSTANCE**: indicates that RDS instances are scanned in parallel and shards in each instance are scanned in serial. **PHY_TABLE**: indicates that all table shards are scanned in parallel. 
| RDS_INSTANCE, DATA_NODE, PHY_TABLE | DATA_NODE | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | connection_idle_timeout | Number of seconds the server waits for activity on a connection before closing it. The default value is **28800**, indicating that the server waits for 28,800 seconds before closing a connection. | 60-86400 | 28800 | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | contains_shard_key | Whether the SELECT, UPDATE, and DELETE statements must contain sharding keys in filter conditions. 
| OFF or ON | OFF | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | ddl_precheck_mdl_threshold_time | Threshold of the MDL duration in DDL pre-check. The unit is second. The default value is **120**. | 1-3600 | 120 | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | enable_table_recycle | **ON**: indicates that the table recycle bin is enabled. | OFF or ON | OFF | + | | | | | + | | **OFF**: indicates that the table recycle bin is disabled. | | | + | | | | | + | | After the table recycle bin is enabled, deleted tables are moved to the recycle bin and can be recovered by running the RESTORE command within seven days. 
| | | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | long_query_time | Minimum duration of a query to be logged as slow, in seconds. The default value is **1**, indicating that the query is considered as a slow query if its execution duration is greater than or equal to 1 second. | 0.01-10 | 1 | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | max_allowed_packet | Maximum size of one packet or any generated intermediate string. The packet message buffer is initialized to **net_buffer_length** bytes, but can grow up to **max_allowed_packet** bytes when needed. This value is small by default, to catch large (and possibly incorrect) packets. The value must be a multiple of **1024**. 
| 1024-1073741824 | 1073741824 | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | max_backend_connections | Maximum number of concurrent client connections allowed per DDM instance. | 0-10000000 | 0 | + | | | | | + | | The default value is **0**. | | | + | | | | | + | | Actual value: (Maximum connections of RDS - 20)/DDM nodes | | | + | | | | | + | | This parameter takes effect only after maximum connections are set on RDS. | | | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | max_connections | Maximum concurrent connections from the client to a DDM instance node. 
| 10-40000 | 20000 | + | | | | | + | | This value depends on specifications and processing capabilities of the target data node. Too many connections may cause connection waiting, affecting performance. The consumption of DDM connections varies with the number of shards and SQL design. | | | + | | | | | + | | For example, if a SQL statement contains a sharding key, each DDM connection consumes one data node connection. If the SQL statement contains no sharding keys and the number of shards is N, N data node connections are consumed. | | | + | | | | | + | | If SQL design is appropriate and processing capabilities of DDM and its data nodes are good enough, you can set this parameter to a value slightly smaller than the product of backend data nodes x maximum connections supported by each data node. | | | + | | | | | + | | Carry out pressure tests on your services and then select a proper value. | | | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | min_backend_connections | Minimum concurrent connections from a DDM node to an RDS instance. The default value is **10**. 
| 0-10000000 | 10 | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | seconds_behind_master | Threshold in seconds of the replication lag between a primary RDS instance and its read replicas. The default value is **30**, indicating that the time for data replication between the primary RDS instance and its read replicas cannot exceed 30 seconds. If the time exceeds 30 seconds, read requests are no longer forwarded to the read replicas. | 0-7200 | 30 | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | sql_execute_timeout | Number of seconds to wait for a SQL statement to execute before it times out. 
The default value is **28800**, indicating that the SQL statement times out if its execution time is greater than or equal to 28800 seconds. | 100-28800 | 28800 | + | | | | | + | | For data nodes, ensure that **net_write_timeout** has a greater value than **sql_execute_timeout**. | | | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | temp_table_size_limit | Size of a temporary table. | 500000-2000000000 | 1000000 | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | transaction_policy | Transactions supported by DDM. XA transaction, which attempts to ensure atomicity and isolation. 
FREE transaction, which is a best-effort commit transaction that allows data to be written to multiple shards, without impacting performance. FREE transactions do not ensure atomicity. NO_DTX transaction, which is a single-shard transaction. | XA, FREE, NO_DTX | XA | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | transfer_hash_to_mod_hash | Whether the hash algorithm must be converted into mod_hash during table creation. | OFF or ON | OFF | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | ultimate_optimize | Whether the SQL execution plan is optimized based on parameter values. 
| OFF or ON | ON | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + | force_read_master_in_transaction | Whether SQL statements in each transaction read data from the master node. | OFF or ON | OFF | + | | | | | + | | Note: This parameter is available in version 3.0.9 or later. If this feature is enabled in version 3.0.9 and the instance is then downgraded to a version earlier than 3.0.9, the feature remains enabled once the instance returns to 3.0.9 or later. | | | + +----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+ + + By default, DDM allows you to modify only the preceding instance parameters. If you need to modify other parameters in some special scenarios such as data migration, contact technical support. 
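The connection-related guidance in the table above can be made concrete with a short sketch. The function names below are my own for illustration; the formulas ((Maximum connections of RDS - 20)/DDM nodes, and one data node connection per DDM connection with a sharding key versus N connections for N shards without one) come from the **max_backend_connections** and **max_connections** rows:

```python
def max_backend_connections_per_node(rds_max_connections: int, ddm_nodes: int) -> int:
    # Actual value per the table: (Maximum connections of RDS - 20)/DDM nodes
    return (rds_max_connections - 20) // ddm_nodes


def data_node_connections_consumed(has_sharding_key: bool, shard_count: int) -> int:
    # With a sharding key, one DDM connection consumes one data node connection;
    # without one, a query across N shards consumes N data node connections.
    return 1 if has_sharding_key else shard_count


# Hypothetical example values (not recommendations):
print(max_backend_connections_per_node(rds_max_connections=2000, ddm_nodes=2))  # 990
print(data_node_connections_consumed(has_sharding_key=False, shard_count=8))    # 8
```

As the table notes, treat any computed value only as a starting point and confirm it with pressure tests on your own workload.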
+ + Parameter configuration examples: + + + .. figure:: /_static/images/en-us_image_0000001685147610.png + :alt: **Figure 1** Result if **bind_table** is not used + + **Figure 1** Result if **bind_table** is not used + + + .. figure:: /_static/images/en-us_image_0000001685147602.png + :alt: **Figure 2** Result if **bind_table** is used + + **Figure 2** Result if **bind_table** is used + +#. Click **Save** in the upper left corner and then **Yes** in the displayed dialog box. + + .. note:: + + - Modifying parameters may affect access to the DDM instance. Exercise caution when performing this operation. + - It takes 20s to 60s for the modifications to take effect. diff --git a/umn/source/instance_management/restarting_a_ddm_instance/index.rst b/umn/source/instance_management/restarting_a_ddm_instance/index.rst new file mode 100644 index 0000000..7fa3e3a --- /dev/null +++ b/umn/source/instance_management/restarting_a_ddm_instance/index.rst @@ -0,0 +1,16 @@ +:original_name: ddm_06_0004.html + +.. _ddm_06_0004: + +Restarting a DDM Instance +========================= + +- :ref:`Restarting a DDM Instance ` +- :ref:`Restarting a Node ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + restarting_a_ddm_instance + restarting_a_node diff --git a/umn/source/instance_management/restarting_a_ddm_instance/restarting_a_ddm_instance.rst b/umn/source/instance_management/restarting_a_ddm_instance/restarting_a_ddm_instance.rst new file mode 100644 index 0000000..7e3117d --- /dev/null +++ b/umn/source/instance_management/restarting_a_ddm_instance/restarting_a_ddm_instance.rst @@ -0,0 +1,23 @@ +:original_name: ddm_06_0028.html + +.. _ddm_06_0028: + +Restarting a DDM Instance +========================= + +You may need to restart an instance to perform maintenance. + +The DDM instance is not available during restart, and the restart operation cannot be undone. Exercise caution when performing this operation. + +Prerequisites +------------- + +- You have logged in to the DDM console. 
+- The instance is in the **Available** status. + +Procedure +--------- + +#. In the instance list, locate the DDM instance that you want to restart and choose **More** > **Restart** in the **Operation** column. +#. In the displayed dialog box, click **Yes**. +#. Wait until the instance is restarted. diff --git a/umn/source/instance_management/restarting_a_ddm_instance/restarting_a_node.rst b/umn/source/instance_management/restarting_a_ddm_instance/restarting_a_node.rst new file mode 100644 index 0000000..8efbe97 --- /dev/null +++ b/umn/source/instance_management/restarting_a_ddm_instance/restarting_a_node.rst @@ -0,0 +1,24 @@ +:original_name: ddm_06_0029.html + +.. _ddm_06_0029: + +Restarting a Node +================= + +You can restart a single node of your DDM instance. + +An instance is not available when one of its nodes is being restarted. The restart operation cannot be undone. Exercise caution when you restart an instance node. + +Prerequisites +------------- + +- You have logged in to the DDM console. +- There is a DDM instance available, and its nodes are normal. + +Procedure +--------- + +#. In the instance list, locate the DDM instance whose node you want to restart and click its name. +#. In the **Node Information** area, locate the target node and click **Restart** in the **Operation** column. +#. In the displayed dialog box, click **Yes**. +#. Wait until the node is restarted. diff --git a/umn/source/instance_management/scaling_in_a_ddm_instance.rst b/umn/source/instance_management/scaling_in_a_ddm_instance.rst new file mode 100644 index 0000000..a24ad31 --- /dev/null +++ b/umn/source/instance_management/scaling_in_a_ddm_instance.rst @@ -0,0 +1,25 @@ +:original_name: ddm_06_0014.html + +.. _ddm_06_0014: + +Scaling In a DDM Instance +========================= + +Scenarios +--------- + +This section describes how to scale in a DDM instance as service data volume decreases. + +.. note:: + + - Scale in your DDM instance during off-peak hours. 
+ - Make sure that the associated data nodes are normal and not undergoing other operations. + - A DDM instance must retain at least one node. + +Procedure +--------- + +#. In the instance list, locate the DDM instance that you want to scale in and click its name. Click **Scale In**. +#. On the displayed page, view the current instance configuration and specify the number of nodes to be removed. +#. Click **Next**. +#. On the displayed page, click **Submit** if all configurations are correct. diff --git a/umn/source/instance_management/scaling_out_a_ddm_instance.rst b/umn/source/instance_management/scaling_out_a_ddm_instance.rst new file mode 100644 index 0000000..0faf0d1 --- /dev/null +++ b/umn/source/instance_management/scaling_out_a_ddm_instance.rst @@ -0,0 +1,26 @@ +:original_name: ddm_06_0011.html + +.. _ddm_06_0011: + +Scaling Out a DDM Instance +========================== + +Scenarios +--------- + +As service data increases, you can scale out a DDM instance by adding nodes to improve service stability. + +.. note:: + + - Scale out your DDM instance during off-peak hours. + - Make sure that the associated data nodes are normal and not undergoing other operations. + - Each DDM instance supports up to 32 nodes. + - After a read-only group is created, the entry for adding nodes will be moved to the **Operation** column of the group. + +Procedure +--------- + +#. In the instance list, locate the DDM instance that you want to scale out and click its name. Click **Scale Out**. +#. On the displayed page, view the current instance configuration, select the required AZ, and specify the number of new nodes. +#. Click **Next**. +#. On the displayed page, click **Submit** if all configurations are correct. 
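Scaling in and scaling out are asynchronous: after you click **Submit**, the instance remains busy until the operation completes. A generic polling sketch for waiting on such an operation is shown below; `get_status` is a hypothetical placeholder for however you query the instance status (console, CLI, or API), not a DDM function:

```python
import time


def wait_until_available(get_status, timeout_s: float = 1800, interval_s: float = 10) -> bool:
    """Poll a status callable until it reports 'Available' or the timeout expires.

    get_status is a caller-supplied placeholder; DDM itself does not define it.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status() == "Available":
            return True
        time.sleep(interval_s)  # avoid hammering the status endpoint
    return False


# Example with a stub status source that becomes Available on the third poll:
states = iter(["Scaling", "Scaling", "Available"])
print(wait_until_available(lambda: next(states), interval_s=0))  # True
```

A fixed polling interval is enough here because scaling takes minutes; for shorter operations an exponential backoff would be a reasonable refinement.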
diff --git a/umn/source/instance_management/splitting_read-only_and_read-write_services/how_are_read-only_services_split_from_read-write_services.rst b/umn/source/instance_management/splitting_read-only_and_read-write_services/how_are_read-only_services_split_from_read-write_services.rst new file mode 100644 index 0000000..851002f --- /dev/null +++ b/umn/source/instance_management/splitting_read-only_and_read-write_services/how_are_read-only_services_split_from_read-write_services.rst @@ -0,0 +1,28 @@ +:original_name: ddm_06_0027.html + +.. _ddm_06_0027: + +How Are Read-only Services Split from Read-Write Services +========================================================= + +Procedure +--------- + +#. Log in to the DDM console and choose **Instances** in the navigation pane. In the instance list, locate the required instance and click its name. + +#. Choose **Basic Information** in the navigation pane to view node information. + +#. In the **Node Information** area, click **Create Group**. After a group is created, existing nodes are included in a read/write group by default, which handles read/write requests from core services. + + .. note:: + + - One DDM instance supports multiple read-only groups. Each group contains at least 2 nodes, and each instance contains up to 32 nodes. + - One node belongs to only one group, and its group cannot be changed once determined. Nodes in the same group must be of the same node class. + +#. On the **Create Group** page, select the required role, VPC, and node class, specify the number of new nodes, and click **Next**. + +#. Confirm the information and click **Next** and then **Submit**. + +#. After the creation is complete, check whether the original **Node Information** area becomes the **Group Information** area. Then you can manage nodes in the group. + + To delete a group of a DDM instance, locate the group that you want to delete and click **Delete**. 
The corresponding floating IP address becomes invalid once the group is deleted. This may affect your services. Retain at least one read/write group. diff --git a/umn/source/instance_management/splitting_read-only_and_read-write_services/index.rst b/umn/source/instance_management/splitting_read-only_and_read-write_services/index.rst new file mode 100644 index 0000000..236dfbb --- /dev/null +++ b/umn/source/instance_management/splitting_read-only_and_read-write_services/index.rst @@ -0,0 +1,16 @@ +:original_name: ddm_06_0025.html + +.. _ddm_06_0025: + +Splitting Read-only and Read-Write Services +=========================================== + +- :ref:`What Is Read-only Service Isolation? ` +- :ref:`How Are Read-only Services Split from Read-Write Services ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + what_is_read-only_service_isolation + how_are_read-only_services_split_from_read-write_services diff --git a/umn/source/instance_management/splitting_read-only_and_read-write_services/what_is_read-only_service_isolation.rst b/umn/source/instance_management/splitting_read-only_and_read-write_services/what_is_read-only_service_isolation.rst new file mode 100644 index 0000000..12d7ab3 --- /dev/null +++ b/umn/source/instance_management/splitting_read-only_and_read-write_services/what_is_read-only_service_isolation.rst @@ -0,0 +1,21 @@ +:original_name: ddm_06_0026.html + +.. _ddm_06_0026: + +What Is Read-only Service Isolation? +==================================== + +Overview +-------- + +DDM provides read-only service isolation by grouping nodes of a DDM instance to provide physically separated compute and storage resources. + +DDM provides two types of node groups, read-only and read/write, which handle read-only and read/write requests, respectively. By default, read-only groups handle read requests sent to read replicas at the storage layer, relieving the read pressure of core workloads in the DDM cluster. Read-only and read/write groups use the same data. 
When there are a large number of concurrent requests, read-only groups handle complex queries or extract data offline from read replicas of data nodes to reduce query response time and provide faster access. Node groups are easy to use, with no need to establish complex links or synchronize data. + +.. note:: + + If you want read-only groups to handle SQL queries, make sure that the associated data node has available read replicas. If there are no available read replicas, the following error messages may be returned: + + - backend database connection error + - query has been canceled + - execute error: No read-only node diff --git a/umn/source/monitoring_management/index.rst b/umn/source/monitoring_management/index.rst new file mode 100644 index 0000000..2886055 --- /dev/null +++ b/umn/source/monitoring_management/index.rst @@ -0,0 +1,16 @@ +:original_name: ddm_03_0050.html + +.. _ddm_03_0050: + +Monitoring Management +===================== + +- :ref:`Supported Metrics ` +- :ref:`Viewing Metrics ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + supported_metrics/index + viewing_metrics/index diff --git a/umn/source/monitoring_management/supported_metrics/ddm_instance_metrics.rst b/umn/source/monitoring_management/supported_metrics/ddm_instance_metrics.rst new file mode 100644 index 0000000..7042de3 --- /dev/null +++ b/umn/source/monitoring_management/supported_metrics/ddm_instance_metrics.rst @@ -0,0 +1,74 @@ +:original_name: ddm_03_0051.html + +.. _ddm_03_0051: + +DDM Instance Metrics +==================== + +Description +----------- + +This section describes metrics reported by DDM to Cloud Eye, metric namespaces, and dimensions. You can use APIs provided by Cloud Eye to query the metric information generated for DDM. + +Namespace +--------- + +SYS.DDMS + +.. note:: + + SYS.DDM is the namespace of DDM 1.0. + + SYS.DDMS is the namespace of DDM 2.0. + + DDM has been upgraded to version 2.0. The namespace is still SYS.DDM for existing users of DDM 1.0. 
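The namespace rule in the note above (SYS.DDMS for DDM 2.0, SYS.DDM retained for existing DDM 1.0 users) is easy to get wrong when building Cloud Eye queries. A minimal sketch that encodes the rule; the helper name is my own, not part of any SDK:

```python
def ddm_metric_namespace(ddm_major_version: int) -> str:
    # Per the note above: existing DDM 1.0 users keep SYS.DDM,
    # while DDM 2.0 metrics are reported under SYS.DDMS.
    return "SYS.DDM" if ddm_major_version == 1 else "SYS.DDMS"


print(ddm_metric_namespace(1))  # SYS.DDM
print(ddm_metric_namespace(2))  # SYS.DDMS
```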
+ +Metrics +------- + +.. table:: **Table 1** DDM metrics + + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | Metric ID | Metric Name | Description | Value Range | Monitored Object | Monitoring Interval (Raw Data) | + +==============================+==================================+================================================================================================================================================================================+=============+==================+================================+ + | ddm_cpu_util | CPU Usage | CPU usage of the DDM instance node | 0—100 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | ddm_mem_util | Memory Usage | Memory usage of the DDM instance node. 
| 0—100 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | ddm_bytes_in | Network Input Throughput | Incoming traffic per second of the DDM instance node | >= 0 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | ddm_bytes_out | Network Output Throughput | Outgoing traffic per second of the DDM instance node | >= 0 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | ddm_qps | QPS | Requests per second of the DDM instance node | >= 0 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | ddm_read_count | Reads | Read operations of the DDM instance node within each monitoring period | >= 0 | DDM nodes | 1 minute | + 
+------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | ddm_write_count | Writes | Write operations of the DDM instance node within a monitoring period | >= 0 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | ddm_slow_log | Slow SQL Logs | Slow SQL logs of DDM-Core | >= 0 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | ddm_rt_avg | Average Response Latency | Average response latency of DDM-Core | >= 0 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | ddm_connections | Connections | Connections of DDM-Core | >= 0 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | 
ddm_backend_connection_ratio | Percentage of Active Connections | Percentage of active connections (from a DDM node to the target RDS instance) | 0—100 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | active_connections | Active connections | Active connections of each DDM instance node | >= 0 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | ddm_connection_util | Connection Usage | Percentage of active connections to each DDM instance node | 0—100 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + | ddm_node_status_alarm_code | DDM Node Connectivity | Whether each DDM instance node is unavailable. The value can be **0** and **1**. **0** indicates that the node is available, and **1** indicates that the node is unavailable. 
| 0 or 1 | DDM nodes | 1 minute | + +------------------------------+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------------------+--------------------------------+ + +Dimensions +---------- + +======= ========= +Key Value +======= ========= +node_id DDM nodes +======= ========= + +.. note:: + + DDM supports dimension **node_id**, but not **instance_id**. You can obtain the ID of a node by the corresponding instance ID. diff --git a/umn/source/monitoring_management/supported_metrics/index.rst b/umn/source/monitoring_management/supported_metrics/index.rst new file mode 100644 index 0000000..0386081 --- /dev/null +++ b/umn/source/monitoring_management/supported_metrics/index.rst @@ -0,0 +1,16 @@ +:original_name: ddm_16_0001.html + +.. _ddm_16_0001: + +Supported Metrics +================= + +- :ref:`DDM Instance Metrics ` +- :ref:`Network Metrics ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + ddm_instance_metrics + network_metrics diff --git a/umn/source/monitoring_management/supported_metrics/network_metrics.rst b/umn/source/monitoring_management/supported_metrics/network_metrics.rst new file mode 100644 index 0000000..dfd083b --- /dev/null +++ b/umn/source/monitoring_management/supported_metrics/network_metrics.rst @@ -0,0 +1,34 @@ +:original_name: ddm_03_0054.html + +.. _ddm_03_0054: + +Network Metrics +=============== + +If load balancing is enabled for your DDM instance, you can view network metrics in the following table. If load balancing is not enabled, you do not have the permissions to view them. + +.. 
table:: **Table 1** Load balancing metrics + + +---------------------+----------------------------------+----------------------------------------------------------------------------------+-------------+-------------------------+--------------------------------+ + | Metric ID | Metric Name | Description | Value Range | Monitored Object | Monitoring Interval (Raw Data) | + +=====================+==================================+==================================================================================+=============+=========================+================================+ + | m7_in_Bps | Inbound Rate | Traffic used for accessing the monitored object from the Internet per second | >= 0 | Dedicated load balancer | 1 minute | + | | | | | | | + | | | Unit: byte/s | | | | + +---------------------+----------------------------------+----------------------------------------------------------------------------------+-------------+-------------------------+--------------------------------+ + | m8_out_Bps | Outbound Rate | Traffic used by the monitored object to access the Internet per second | >= 0 | Dedicated load balancer | 1 minute | + | | | | | | | + | | | Unit: byte/s | | | | + +---------------------+----------------------------------+----------------------------------------------------------------------------------+-------------+-------------------------+--------------------------------+ + | m9_abnormal_servers | Unhealthy Servers | Number of unhealthy backend servers associated with the monitored object | >= 0 | Dedicated load balancer | 1 minute | + | | | | | | | + | | | Unit: count | | | | + +---------------------+----------------------------------+----------------------------------------------------------------------------------+-------------+-------------------------+--------------------------------+ + | ma_normal_servers | Healthy Servers | Number of healthy backend servers associated with the monitored object | >= 0 | Dedicated load balancer | 1 minute 
| + | | | | | | | + | | | Unit: count | | | | + +---------------------+----------------------------------+----------------------------------------------------------------------------------+-------------+-------------------------+--------------------------------+ + | l4_in_bps_usage | Layer-4 Inbound Bandwidth Usage | Percentage of inbound TCP/UDP bandwidth from the monitored object to the client | 0-100 | Dedicated load balancer | 1 minute | + +---------------------+----------------------------------+----------------------------------------------------------------------------------+-------------+-------------------------+--------------------------------+ + | l4_out_bps_usage | Layer-4 Outbound Bandwidth Usage | Percentage of outbound TCP/UDP bandwidth from the monitored object to the client | 0-100 | Dedicated load balancer | 1 minute | + +---------------------+----------------------------------+----------------------------------------------------------------------------------+-------------+-------------------------+--------------------------------+ diff --git a/umn/source/monitoring_management/viewing_metrics/index.rst b/umn/source/monitoring_management/viewing_metrics/index.rst new file mode 100644 index 0000000..5f56aa7 --- /dev/null +++ b/umn/source/monitoring_management/viewing_metrics/index.rst @@ -0,0 +1,16 @@ +:original_name: ddm_16_0002.html + +.. _ddm_16_0002: + +Viewing Metrics +=============== + +- :ref:`Viewing Instance Metrics ` +- :ref:`Viewing Network Metrics ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + viewing_instance_metrics + viewing_network_metrics diff --git a/umn/source/monitoring_management/viewing_metrics/viewing_instance_metrics.rst b/umn/source/monitoring_management/viewing_metrics/viewing_instance_metrics.rst new file mode 100644 index 0000000..93da784 --- /dev/null +++ b/umn/source/monitoring_management/viewing_metrics/viewing_instance_metrics.rst @@ -0,0 +1,36 @@ +:original_name: ddm_03_0052.html + +.. 
_ddm_03_0052: + +Viewing Instance Metrics +======================== + +Cloud Eye monitors the running status of DDM instances. You can view instance monitoring metrics on the DDM console. + +Monitored data takes some time to be transmitted and displayed, so the status shown on the Cloud Eye page reflects the status of the monitored object 5 to 10 minutes earlier. After creating a DDM instance, wait 5 to 10 minutes before viewing its monitored data on Cloud Eye. + +Prerequisites +------------- + +- The DDM instance is running normally. + + Monitored data of faulty or deleted DDM instances is not displayed on Cloud Eye. + +- The DDM instance has been running normally for about 10 minutes. + + It takes a while before monitoring data and graphs of a newly created DDM instance become available. + +Procedure +--------- + +#. Log in to the DDM console. + +#. On the **Instances** page, locate the required instance and click **More** > **View Metric** in the **Operation** column. + + Alternatively, click the instance name and, on the displayed page, click **View Metric** in the upper right corner. + +#. In the instance list, click |image1| in front of the target instance. Locate a node and click **View Metric** in the **Operation** column. + + You can view instance metrics, including CPU usage, memory usage, network input throughput, network output throughput, QPS, and slow query logs. For details, see :ref:`DDM Instance Metrics `. + +.. |image1| image:: /_static/images/en-us_image_0000001620873737.png diff --git a/umn/source/monitoring_management/viewing_metrics/viewing_network_metrics.rst b/umn/source/monitoring_management/viewing_metrics/viewing_network_metrics.rst new file mode 100644 index 0000000..b79295a --- /dev/null +++ b/umn/source/monitoring_management/viewing_metrics/viewing_network_metrics.rst @@ -0,0 +1,26 @@ +:original_name: ddm_16_0003.html + +..
_ddm_16_0003: + +Viewing Network Metrics +======================= + +The DDM console supports monitoring and management of network metrics. + +Prerequisites +------------- + +If load balancing is enabled for your DDM instance, you can view network metrics. If load balancing is not enabled, you do not have permission to view them. + +Procedure +--------- + +#. Log in to the DDM console. + +#. In the instance list, locate the required DDM instance and click its name. + +#. In the navigation pane on the left, choose **Monitoring**. + +#. Click **Network**. + + You can select a time range and view metrics such as inbound rate, outbound rate, unhealthy servers, and healthy servers. For details, see :ref:`Network Metrics `. diff --git a/umn/source/parameter_template_management/applying_a_parameter_template.rst b/umn/source/parameter_template_management/applying_a_parameter_template.rst new file mode 100644 index 0000000..411f734 --- /dev/null +++ b/umn/source/parameter_template_management/applying_a_parameter_template.rst @@ -0,0 +1,34 @@ +:original_name: ddm_05_0013.html + +.. _ddm_05_0013: + +Applying a Parameter Template +============================= + +Scenarios +--------- + +After you create a parameter template and modify parameters in it based on service requirements, you can apply it to your DDM instances. + +Procedure +--------- + +#. Log in to the management console. + +#. Click |image1| in the upper left corner and select a region and a project. + +#. Click |image2| in the upper left corner of the page and choose **Databases** > **Distributed Database Middleware**. + +#. Choose **Parameter Templates** in the left navigation pane and proceed with subsequent operations based on the type of the required parameter template. + + - To apply a default template, click the **Default Templates** tab, locate the required parameter template, and click **Apply** in the **Operation** column.
+ - To apply a custom template, click the **Custom Templates** tab, locate the required parameter template, and choose **More** > **Apply** in the **Operation** column. + + A parameter template can be applied to one or more DDM instances. + +#. In the displayed dialog box, select one or more DDM instances that you want to apply the parameter template to and click **OK**. + + After the parameter template is applied to DDM instances successfully, you can view its application history by referring to :ref:`Viewing Application Records of a Parameter Template `. + +.. |image1| image:: /_static/images/en-us_image_0000001685147590.png +.. |image2| image:: /_static/images/en-us_image_0000001733146405.png diff --git a/umn/source/parameter_template_management/comparing_two_parameter_templates.rst b/umn/source/parameter_template_management/comparing_two_parameter_templates.rst new file mode 100644 index 0000000..cd8f014 --- /dev/null +++ b/umn/source/parameter_template_management/comparing_two_parameter_templates.rst @@ -0,0 +1,32 @@ +:original_name: ddm_05_0009.html + +.. _ddm_05_0009: + +Comparing Two Parameter Templates +================================= + +Scenarios +--------- + +You can apply different parameter templates to the same DDM instance to view impacts on parameter settings of the instance. + +Procedure +--------- + +#. Log in to the management console. + +#. Click |image1| in the upper left corner and select a region and a project. + +#. Click |image2| in the upper left corner of the page and choose **Databases** > **Distributed Database Middleware**. + +#. On the **Parameter Templates** page, locate the required parameter template and click **Compare** in the **Operation** column. + +#. In the displayed dialog box, select a parameter template and click **OK**. + + You can compare different custom parameter templates, or a default parameter template with a custom parameter template. 
+ + - If their settings are different, the parameter names and values of both parameter templates are displayed. + - If their settings are the same, no data is displayed. + +.. |image1| image:: /_static/images/en-us_image_0000001685307262.png +.. |image2| image:: /_static/images/en-us_image_0000001733266445.png diff --git a/umn/source/parameter_template_management/creating_a_parameter_template.rst b/umn/source/parameter_template_management/creating_a_parameter_template.rst new file mode 100644 index 0000000..b85b794 --- /dev/null +++ b/umn/source/parameter_template_management/creating_a_parameter_template.rst @@ -0,0 +1,40 @@ +:original_name: ddm_05_0006.html + +.. _ddm_05_0006: + +Creating a Parameter Template +============================= + +A database parameter template acts as a container for parameter configurations that can be applied to one or more DDM instances. You can manage configurations of a DDM instance by managing parameters in the parameter template applied to the instance. + +If you do not specify a parameter template when creating a DDM instance, the system uses the default parameter template for your instance. The default parameter template contains multiple default values, which are determined based on the computing level and the storage space allocated to the instance. You cannot modify parameter settings of a default parameter template. You must create your own parameter template to change parameter settings. + +If you want to use your custom parameter template, you simply create a parameter template and select it when you create a DDM instance or apply it to an existing DDM instance following the instructions provided in :ref:`Applying a Parameter Template `. + +When you have already created a parameter template and want to provide most of its custom parameters and values in a new parameter template, you can replicate the template you created following the instructions provided in :ref:`Replicating a Parameter Template `. 
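A minimal sketch may help make these semantics concrete: a template is a reusable container of parameter values, replicating it copies those values into a new template, and applying it copies the values onto an instance, so later edits to the template do not affect instances it was already applied to. All class and method names below are hypothetical illustrations, not a DDM API.

```python
from copy import deepcopy

class ParameterTemplate:
    """Toy model of parameter template behavior (hypothetical names)."""

    def __init__(self, name, params=None):
        self.name = name
        self.params = dict(params or {})

    def replicate(self, new_name):
        # A replica starts with the same custom parameters and values.
        return ParameterTemplate(new_name, deepcopy(self.params))

    def apply_to(self, instance_params):
        # Applying copies the template's current values onto the instance;
        # later edits to the template leave the instance unchanged until
        # the template is applied again.
        instance_params.update(deepcopy(self.params))
        return instance_params

tpl = ParameterTemplate("tpl-1", {"max_connections": "2000"})
instance = tpl.apply_to({})
copy_tpl = tpl.replicate("tpl-2")
tpl.params["max_connections"] = "5000"  # edit after applying/replicating

print(instance["max_connections"])          # still "2000"
print(copy_tpl.params["max_connections"])   # still "2000"
```

The copy-on-apply behavior shown here mirrors the key point that changing a template value never retroactively changes an instance the template was applied to earlier.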
+ +The following are the key points you should know when using parameters from a parameter template: + +- Changing a parameter value in a parameter template does not change any parameter in a DDM instance where it has been applied before. +- When you change a parameter value in a parameter template and save the change, the change will take effect only after you apply the parameter template to a DDM instance and manually restart the instance. +- Improper parameter settings may have unintended adverse effects, including degraded performance and system instability. Exercise caution when modifying parameters, and back up data before modifying parameters in a parameter template. Before applying parameter template changes to a production DDM instance, you should try out these changes on a test DDM instance. + +Procedure +--------- + +#. Log in to the management console. +#. Click |image1| in the upper left corner and select a region and a project. +#. Click |image2| in the upper left corner of the page and choose **Databases** > **Distributed Database Middleware**. +#. Choose **Parameter Templates** and click **Create Parameter Template**. +#. In the displayed dialog box, enter a template name and description and click **OK**. + + - The template name is case-sensitive and consists of 1 to 64 characters. It can contain only letters, digits, hyphens (-), underscores (_), and periods (.). + - The template description consists of a maximum of 256 characters and cannot include carriage return characters and the following special characters: >!<"&'= + + .. note:: + + - Each user can create up to 100 parameter templates. + - The parameter template quota is shared by all DDM instances in a project. + +.. |image1| image:: /_static/images/en-us_image_0000001733146325.png +..
|image2| image:: /_static/images/en-us_image_0000001733146317.png diff --git a/umn/source/parameter_template_management/deleting_a_parameter_template.rst b/umn/source/parameter_template_management/deleting_a_parameter_template.rst new file mode 100644 index 0000000..43dcac5 --- /dev/null +++ b/umn/source/parameter_template_management/deleting_a_parameter_template.rst @@ -0,0 +1,28 @@ +:original_name: ddm_05_0016.html + +.. _ddm_05_0016: + +Deleting a Parameter Template +============================= + +Scenarios +--------- + +You can delete custom parameter templates that will not be used any more. + +.. important:: + + - Deleted parameter templates cannot be recovered. Exercise caution when performing this operation. + - Default parameter templates cannot be deleted. + +Procedure +--------- + +#. Log in to the management console. +#. Click |image1| in the upper left corner and select a region and a project. +#. Click |image2| in the upper left corner of the page and choose **Databases** > **Distributed Database Middleware**. +#. Choose **Parameter Templates**, click the **Custom Templates** tab, locate the template that you want to delete, and click **Delete** in the **Operation** column. +#. In the displayed dialog box, click **Yes**. + +.. |image1| image:: /_static/images/en-us_image_0000001733266501.png +.. |image2| image:: /_static/images/en-us_image_0000001685307318.png diff --git a/umn/source/parameter_template_management/editing_a_parameter_template.rst b/umn/source/parameter_template_management/editing_a_parameter_template.rst new file mode 100644 index 0000000..2e58c7f --- /dev/null +++ b/umn/source/parameter_template_management/editing_a_parameter_template.rst @@ -0,0 +1,44 @@ +:original_name: ddm_05_0007.html + +.. _ddm_05_0007: + +Editing a Parameter Template +============================ + +To improve performance of a DDM instance, you can modify parameters in custom parameter templates based on service requirements. 
+ +You cannot change parameter values in default parameter templates. + +The following are the key points you should know when using parameters from a parameter template: + +- When you modify a custom parameter template, the modifications take effect only after you apply the parameter template to DDM instances. For details, see :ref:`Applying a Parameter Template `. +- The time when the modification takes effect is determined by the type of the parameter. +- Parameters in default parameter templates cannot be modified. You can view these parameters by clicking template names. If a custom parameter template is set incorrectly and causes an instance restart to fail, you can re-configure the custom parameter template according to configurations of the default parameter template. + +Procedure +--------- + +#. Log in to the management console. + +#. Click |image1| in the upper left corner and select a region and a project. + +#. Click |image2| in the upper left corner of the page and choose **Databases** > **Distributed Database Middleware**. + +#. Choose **Parameter Templates**, click the **Custom Templates** tab, locate the required parameter template, and click its name. + +#. On the **Parameter Details** page, modify parameters as needed. + + Available operations are as follows: + + - To save the modifications, click **Save**. + - To cancel the modifications, click **Cancel**. + +#. After the parameter values are modified, click **Template History** to view details. + + .. important:: + + - The modifications take effect only after you apply the parameter template to DDM instances. For details, see :ref:`Applying a Parameter Template `. + - The instance restart caused by node class changes will not put parameter modifications into effect. + +.. |image1| image:: /_static/images/en-us_image_0000001685307326.png +.. 
|image2| image:: /_static/images/en-us_image_0000001733146397.png diff --git a/umn/source/parameter_template_management/index.rst b/umn/source/parameter_template_management/index.rst new file mode 100644 index 0000000..985ccb4 --- /dev/null +++ b/umn/source/parameter_template_management/index.rst @@ -0,0 +1,30 @@ +:original_name: ddm_05_0005.html + +.. _ddm_05_0005: + +Parameter Template Management +============================= + +- :ref:`Creating a Parameter Template ` +- :ref:`Editing a Parameter Template ` +- :ref:`Comparing Two Parameter Templates ` +- :ref:`Viewing Parameter Change History ` +- :ref:`Replicating a Parameter Template ` +- :ref:`Applying a Parameter Template ` +- :ref:`Viewing Application Records of a Parameter Template ` +- :ref:`Modifying the Description of a Parameter Template ` +- :ref:`Deleting a Parameter Template ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + creating_a_parameter_template + editing_a_parameter_template + comparing_two_parameter_templates + viewing_parameter_change_history + replicating_a_parameter_template + applying_a_parameter_template + viewing_application_records_of_a_parameter_template + modifying_the_description_of_a_parameter_template + deleting_a_parameter_template diff --git a/umn/source/parameter_template_management/modifying_the_description_of_a_parameter_template.rst b/umn/source/parameter_template_management/modifying_the_description_of_a_parameter_template.rst new file mode 100644 index 0000000..d0e50c7 --- /dev/null +++ b/umn/source/parameter_template_management/modifying_the_description_of_a_parameter_template.rst @@ -0,0 +1,33 @@ +:original_name: ddm_05_0015.html + +.. _ddm_05_0015: + +Modifying the Description of a Parameter Template +================================================= + +Scenarios +--------- + +You can modify the description of a parameter template that you have created. + +.. note:: + + You cannot modify the description of any default parameter template. + +Procedure +--------- + +#. 
Log in to the management console. +#. Click |image1| in the upper left corner and select a region and a project. +#. Click |image2| in the upper left corner of the page and choose **Databases** > **Distributed Database Middleware**. +#. Choose **Parameter Templates**, click the **Custom Templates** tab, locate the parameter template whose description you want to modify, and click |image3| in the **Description** column. +#. Enter a new description. You can click |image4| to submit or |image5| to cancel the modification. + + - The description contains up to 256 characters but cannot contain special characters >!<"&'= + - After the modification is successful, you can view the new description in the **Description** column. + +.. |image1| image:: /_static/images/en-us_image_0000001733266397.png +.. |image2| image:: /_static/images/en-us_image_0000001733146261.png +.. |image3| image:: /_static/images/en-us_image_0000001733146273.png +.. |image4| image:: /_static/images/en-us_image_0000001685307202.png +.. |image5| image:: /_static/images/en-us_image_0000001685147450.png diff --git a/umn/source/parameter_template_management/replicating_a_parameter_template.rst b/umn/source/parameter_template_management/replicating_a_parameter_template.rst new file mode 100644 index 0000000..eb86228 --- /dev/null +++ b/umn/source/parameter_template_management/replicating_a_parameter_template.rst @@ -0,0 +1,34 @@ +:original_name: ddm_05_0011.html + +.. _ddm_05_0011: + +Replicating a Parameter Template +================================ + +Scenarios +--------- + +You can replicate a parameter template you have created. When you have already created a parameter template and want to provide most of its custom parameters and values in a new parameter template, you can replicate the template you created. + +Default parameter templates cannot be replicated. You can create parameter templates based on the default ones. + +Procedure +--------- + +#. Log in to the management console. + +#. 
Click |image1| in the upper left corner and select a region and a project. + +#. Click |image2| in the upper left corner of the page and choose **Databases** > **Distributed Database Middleware**. + +#. Choose **Parameter Templates**, click the **Custom Templates** tab, locate the required parameter template, and click **Replicate** in the **Operation** column. + +#. In the displayed dialog box, configure required details and click **OK**. + + - The template name is case-sensitive and consists of 1 to 64 characters. It can contain only letters, digits, hyphens (-), underscores (_), and periods (.). + - The template description consists of a maximum of 256 characters and cannot include carriage return characters and special characters >!<"&'= + + After the parameter template is replicated, a new template is generated in the list. + +.. |image1| image:: /_static/images/en-us_image_0000001733146365.png +.. |image2| image:: /_static/images/en-us_image_0000001733266489.png diff --git a/umn/source/parameter_template_management/viewing_application_records_of_a_parameter_template.rst b/umn/source/parameter_template_management/viewing_application_records_of_a_parameter_template.rst new file mode 100644 index 0000000..eb7369b --- /dev/null +++ b/umn/source/parameter_template_management/viewing_application_records_of_a_parameter_template.rst @@ -0,0 +1,29 @@ +:original_name: ddm_05_0014.html + +.. _ddm_05_0014: + +Viewing Application Records of a Parameter Template +=================================================== + +Scenarios +--------- + +After a parameter template is applied to DDM instances, you can view its application records. + +Procedure +--------- + +#. Log in to the management console. + +#. Click |image1| in the upper left corner and select a region and a project. + +#. Click |image2| in the upper left corner of the page and choose **Databases** > **Distributed Database Middleware**. + +#. Choose **Parameter Templates** in the navigation pane on the left. + +#. 
On the **Default Templates** page, locate the target parameter template and click **View Application Record** in the **Operation** column. Alternatively, on the **Custom Templates** page, choose **More** > **View Application Record** in the **Operation** column. + + You can view the name or ID of the DDM instance to which the parameter template is applied, as well as the application status, application time, and failure cause. + +.. |image1| image:: /_static/images/en-us_image_0000001685307386.png +.. |image2| image:: /_static/images/en-us_image_0000001733266569.png diff --git a/umn/source/parameter_template_management/viewing_parameter_change_history.rst b/umn/source/parameter_template_management/viewing_parameter_change_history.rst new file mode 100644 index 0000000..068d581 --- /dev/null +++ b/umn/source/parameter_template_management/viewing_parameter_change_history.rst @@ -0,0 +1,31 @@ +:original_name: ddm_05_0010.html + +.. _ddm_05_0010: + +Viewing Parameter Change History +================================ + +Scenarios +--------- + +You can view the parameters of a DDM instance and the change history of custom templates. + +.. note:: + + The change history of an exported or custom parameter template is initially blank. + +Procedure +--------- + +#. Log in to the management console. + +#. Click |image1| in the upper left corner and select a region and a project. + +#. Click |image2| in the upper left corner of the page and choose **Databases** > **Distributed Database Middleware**. + +#. Choose **Parameter Templates**, click the **Custom Templates** tab, locate the required parameter template, and choose **More** > **View Change History**. + + You can view the name, original parameter value, new parameter value, modification status, and modification time of each parameter. + +.. |image1| image:: /_static/images/en-us_image_0000001733266493.png +..
|image2| image:: /_static/images/en-us_image_0000001685307310.png diff --git a/umn/source/permissions_management/database_accounts_and_permissions.rst b/umn/source/permissions_management/database_accounts_and_permissions.rst new file mode 100644 index 0000000..2b77352 --- /dev/null +++ b/umn/source/permissions_management/database_accounts_and_permissions.rst @@ -0,0 +1,10 @@ +:original_name: ddm_05_0021.html + +.. _ddm_05_0021: + +Database Accounts and Permissions +================================= + +To create a schema, import schema information, or configure shards, you can use the administrator account of data nodes or create a database account with the following permissions: + +SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, and TRIGGER WITH GRANT OPTION diff --git a/umn/source/permissions_management/index.rst b/umn/source/permissions_management/index.rst new file mode 100644 index 0000000..f97db3e --- /dev/null +++ b/umn/source/permissions_management/index.rst @@ -0,0 +1,14 @@ +:original_name: ddm_05_0000.html + +.. _ddm_05_0000: + +Permissions Management +====================== + +- :ref:`Database Accounts and Permissions ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + database_accounts_and_permissions diff --git a/umn/source/schema_management/configuring_the_sql_blacklist.rst b/umn/source/schema_management/configuring_the_sql_blacklist.rst new file mode 100644 index 0000000..86c7011 --- /dev/null +++ b/umn/source/schema_management/configuring_the_sql_blacklist.rst @@ -0,0 +1,33 @@ +:original_name: ddm_03_0100.html + +.. _ddm_03_0100: + +Configuring the SQL Blacklist +============================= + +Overview +-------- + +To prevent the system from executing certain SQL statements, configure a blacklist and add those statements to it.
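As a rough illustration of the blacklist idea, the following Python sketch models the three match types this section describes (prefix match, full-text match, and regular expression match). The function name, rule format, and normalization choices are hypothetical; this is not DDM's actual matching engine.

```python
import re

def is_blacklisted(sql, prefixes=(), full_texts=(), patterns=()):
    """Return True if a statement matches any blacklist rule.

    Hypothetical helper illustrating the three match types described
    in this section; not DDM's actual matching logic.
    """
    statement = sql.strip()
    # Prefix match: block statements that start with a listed keyword.
    if any(statement.upper().startswith(p.upper()) for p in prefixes):
        return True
    # Full-text match: block statements identical to a listed statement.
    if any(statement == t.strip() for t in full_texts):
        return True
    # Regular expression match: block statements matching a pattern.
    return any(re.search(p, statement, re.IGNORECASE) for p in patterns)

rules = {"prefixes": ["DROP"], "patterns": [r"delete\s+from\s+\w+"]}
print(is_blacklisted("DROP TABLE t1;", **rules))     # True
print(is_blacklisted("SELECT * FROM t1;", **rules))  # False
```

The case-insensitive comparison and whitespace stripping here are assumptions for readability; the exact normalization DDM applies may differ.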
+ +Prerequisites +------------- + +- You have logged in to the DDM console. +- A DDM instance is running properly and has available schemas. + +Procedure +--------- + +#. In the instance list, locate the instance that contains the schemas you require and click the instance name. +#. On the displayed page, choose **Schemas**. +#. In the schema list, locate the schema that you want to configure a blacklist for and click **Configure SQL Blacklist** in the **Operation** column. +#. In the displayed dialog box, click **Edit**, enter the required SQL statements or regular expressions in the prefix match, full-text match, and regular expression match boxes, and click **OK**. + + .. note:: + + - **Prefix Match**: Enter SQL statements that contain keywords such as DROP or DELETE and are not allowed by the current schema. + - **Full-text Match**: Enter full-text SQL statements that are not allowed by the current schema. Multiple spaces and line breaks are matched as entered and are not collapsed into a single space. + - **Regular Expression Match**: Enter specific regular expressions that are not allowed by the current schema. + - Separate SQL statements in the blacklist with semicolons (;). The SQL statements for prefix match, full-text match, and regular expression match cannot exceed 1 KB each. + - If you want to clear all the SQL statements in the prefix match and full-text match areas, clear them separately and click **OK**. diff --git a/umn/source/schema_management/creating_a_schema.rst b/umn/source/schema_management/creating_a_schema.rst new file mode 100644 index 0000000..c69c4e3 --- /dev/null +++ b/umn/source/schema_management/creating_a_schema.rst @@ -0,0 +1,59 @@ +:original_name: ddm_06_0006.html + +.. _ddm_06_0006: + +Creating a Schema +================= + +Prerequisites +------------- + +- You have logged in to the DDM console. +- The DDM instance is in the **Running** state.
+- Do not modify or delete the internal accounts (DDMRW*, DDMR*, and DDMREP*) created on data nodes. Otherwise, services will be affected. + + .. note:: + + - The internal account name is in the format: Fixed prefix (such as DDMRW, DDMR, or DDMREP) + Hash value of the data node ID. + - A random password is generated, which contains 16 to 32 characters. + - All instances associated with one schema must have the same major MySQL version. + - Multiple schemas can be created in a DDM instance and associated with the same data node. One DDM instance can be associated with either RDS for MySQL or GaussDB(for MySQL) instances, but not both. + - One data node cannot be associated with schemas in different DDM instances. + - If you create a sharded schema, more than one shard will be generated in the schema. Shard names follow the rule **Schema name_No.**, where **No.** is a four-digit number starting from 0000 and incremented by one for each shard. For example, if a schema name is **db_cbb5** and there are 2 shards, the shard names are **db_cbb5_0000** and **db_cbb5_0001**. + - Read-only instances cannot be associated with the schema as data nodes. + +Procedure +--------- + +#. In the navigation pane, choose **Instances**. In the instance list, locate the DDM instance that you want to create a schema for and click **Create Schema** in the **Operation** column. + +#. On the **Create Schema** page, set required parameters by referring to :ref:`Table 1 `, and click **Next**. + + .. _ddm_06_0006__table5532135017574: + + .. 
table:: **Table 1** Parameter description + + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================================+ + | Sharding | - **Sharded**: indicates that one schema can be associated with multiple data nodes, and all shards will be evenly distributed across the nodes. | + | | - **Unsharded**: indicates that one schema can be associated with only one data node, and only one shard can be created on the RDS instance. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Schema | The name contains 2 to 48 characters and must start with a lowercase letter. Only lowercase letters, digits, and underscores (_) are allowed. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Account | The DDM account that needs to be associated with the schema. 
| + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Data Nodes | Select only the data nodes that are in the same VPC as your DDM instance and not in use by other DDM instances. DDM will create databases on the selected data nodes without affecting their existing databases and tables. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Shards | The total shards are the shards on all data nodes. There cannot be more data nodes than there are shards in the schema. Each data node has to have at least one shard assigned. Recommended shards per data node: 8 to 64. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. Enter a database account with the required permissions and click **Test Connection**. + + .. note:: + + Required permissions: SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER WITH GRANT OPTION + + You can create a database account for the RDS for MySQL instance and assign it the above permissions in advance. + +#. After the test becomes successful, click **Finish**. 
diff --git a/umn/source/schema_management/deleting_a_schema.rst b/umn/source/schema_management/deleting_a_schema.rst new file mode 100644 index 0000000..629c828 --- /dev/null +++ b/umn/source/schema_management/deleting_a_schema.rst @@ -0,0 +1,29 @@ +:original_name: ddm_03_0008.html + +.. _ddm_03_0008: + +Deleting a Schema +================= + +Prerequisites +------------- + +- You have logged in to the DDM console. +- You have created a schema. + + .. important:: + + Deleted schemas cannot be recovered. Exercise caution when performing this operation. + +Procedure +--------- + +#. In the instance list, locate the DDM instance that contains the schema you want to delete and click the instance name. +#. On the displayed page, in the navigation pane, choose **Schemas**. +#. In the schema list, locate the schema that you want to delete and click **Delete** in the **Operation** column. +#. In the displayed dialog box, click **Yes**. + + .. note:: + + - Your schema will become faulty if you delete its associated data nodes by clicking the **Delete** button in the schema list. + - To delete data stored on the associated data nodes, select **Delete data on data nodes** in the displayed dialog box. diff --git a/umn/source/schema_management/exporting_schema_information.rst b/umn/source/schema_management/exporting_schema_information.rst new file mode 100644 index 0000000..de1f91e --- /dev/null +++ b/umn/source/schema_management/exporting_schema_information.rst @@ -0,0 +1,23 @@ +:original_name: ddm_06_0015.html + +.. _ddm_06_0015: + +Exporting Schema Information +============================ + +Scenarios +--------- + +When you deploy DR or migrate data across regions, you can export schema information from source DDM instances. The exported information includes schema information and shard information, excluding service data and index data. + +Prerequisites +------------- + +There are schemas available in the DDM instance that you want to export schema information from. + +Procedure +--------- + +#. 
Log in to the DDM console; in the instance list, locate the required DDM instance and click its name. +#. On the displayed page, in the navigation pane, choose **Schemas**. +#. On the displayed page, click **Export Schema Information**. All schema information of the current DDM instance is exported as a JSON file. diff --git a/umn/source/schema_management/importing_schema_information.rst b/umn/source/schema_management/importing_schema_information.rst new file mode 100644 index 0000000..2b817e7 --- /dev/null +++ b/umn/source/schema_management/importing_schema_information.rst @@ -0,0 +1,33 @@ +:original_name: ddm_06_0007.html + +.. _ddm_06_0007: + +Importing Schema Information +============================ + +Scenarios +--------- + +When you deploy DR or migrate data across regions, you can import schema information into destination DDM instances. The imported information includes schema information and shard information, excluding service data and index data. + +Precautions +----------- + +The destination DDM instance must have no schemas with the same names as those to be imported. + +Procedure +--------- + +#. Log in to the DDM console. In the instance list, locate the DDM instance that you want to import schema information into and click its name. +#. On the displayed page, in the navigation pane, choose **Schemas**. +#. On the displayed page, click **Import Schema Information**. +#. On the displayed page, click **Select File** to select the required JSON file which has been exported in :ref:`Exporting Schema Information `. +#. Select the required data nodes, enter a database account with the required permissions, and click **Finish**. + + .. note:: + + - The number of selected data nodes must be the same as the number of data nodes in the imported schema information. 
+ + - Required permissions: SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER WITH GRANT OPTION + + You can create a database account for the RDS for MySQL instance and assign it the above permissions in advance. diff --git a/umn/source/schema_management/index.rst b/umn/source/schema_management/index.rst new file mode 100644 index 0000000..aa7d0a8 --- /dev/null +++ b/umn/source/schema_management/index.rst @@ -0,0 +1,22 @@ +:original_name: ddm_03_0006.html + +.. _ddm_03_0006: + +Schema Management +================= + +- :ref:`Creating a Schema ` +- :ref:`Exporting Schema Information ` +- :ref:`Importing Schema Information ` +- :ref:`Deleting a Schema ` +- :ref:`Configuring the SQL Blacklist ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + creating_a_schema + exporting_schema_information + importing_schema_information + deleting_a_schema + configuring_the_sql_blacklist diff --git a/umn/source/service_overview/application_scenarios.rst b/umn/source/service_overview/application_scenarios.rst new file mode 100644 index 0000000..0bf5dfe --- /dev/null +++ b/umn/source/service_overview/application_scenarios.rst @@ -0,0 +1,20 @@ +:original_name: ddm-01-0002.html + +.. _ddm-01-0002: + +Application Scenarios +===================== + +DDM is especially suitable for applications requiring high-concurrency access to large volumes of data. Typical application scenarios are as follows: + +- **Internet** + + E-commerce, finance, O2O, retail, and social networking applications usually face challenges such as a large user base, frequent marketing events, and slow response of core transactional systems. DDM can scale compute and storage resources to improve database processing of high-concurrency transactions and ensure fast access to data. 
+ +- **IoT** + + In industrial monitoring, remote control, smart city extension, smart home, and Internet of Vehicles (IoV) scenarios, a large number of sensors and monitoring devices frequently collect data and generate huge amounts of data, which may exceed the storage capability of single-node databases. DDM provides horizontal expansion to help you store massive data at low costs. + +- **Traditional sectors** + + Government agencies, large-sized enterprises, banks, and the like usually use commercial solutions to support high-concurrency access to large volumes of data. These solutions are expensive because they need to rely on mid-range computers and high-end storage devices. DDM, deployed in clusters with common ECSs, provides cost-efficient database solutions with the same or even higher performance than traditional commercial database solutions. diff --git a/umn/source/service_overview/basic_concepts.rst b/umn/source/service_overview/basic_concepts.rst new file mode 100644 index 0000000..32c732c --- /dev/null +++ b/umn/source/service_overview/basic_concepts.rst @@ -0,0 +1,47 @@ +:original_name: ddm_01_0018.html + +.. _ddm_01_0018: + +Basic Concepts +============== + +Data Node +--------- + +A data node is the minimum management unit of DDM. Each data node represents an independently running database, and it may be an RDS for MySQL or GaussDB(for MySQL) instance that is associated with your DDM instance. You can create multiple schemas in a DDM instance to manage data nodes and access each data node independently. + +.. note:: + + DDM instances do not store service-related data, which is stored in shards of data nodes. + +VPC +--- + +A Virtual Private Cloud (VPC) is a private and isolated virtual network. You can configure IP address ranges, subnets, and security groups, assign EIPs, and allocate bandwidth for DDM instances. + +Subnet +------ + +A subnet is a range of IP addresses, a logical subdivision of an IP network. 
Subnets are created for a VPC where you will place your DDM instances. Every subnet is defined by a unique CIDR block which cannot be modified once the subnet is created. + +Security Group +-------------- + +A security group is a collection of rules for ECSs that have the same security protection requirements and are mutually trusted. After a security group is created, you can add different access rules to the security group, and these rules will apply to all ECSs added to this security group. + +Your account automatically comes with a security group by default. The default security group allows all outbound traffic and denies all inbound traffic. Your ECSs in this security group can communicate with each other without the need to add rules. + +Parameter Template +------------------ + +A parameter template acts as a container for configuration values that can be applied to one or more DDM instances. If you want to use your own parameter template, you only need to create a custom parameter template and select it when creating a DDM instance. You can also apply the parameter template to an existing DDM instance. + +EIP +--- + +The Elastic IP (EIP) service provides independent public IP addresses and bandwidth for Internet access. EIPs can be bound to and unbound from DDM instances. + +Region and Endpoint +------------------- + +Before using an API to call resources, you need to specify its region and endpoint. For more details, see `Regions and Endpoints `__. diff --git a/umn/source/service_overview/core_functions.rst b/umn/source/service_overview/core_functions.rst new file mode 100644 index 0000000..a8bc762 --- /dev/null +++ b/umn/source/service_overview/core_functions.rst @@ -0,0 +1,63 @@ +:original_name: ddm_01_0016.html + +.. _ddm_01_0016: + +Core Functions +============== + +.. 
table:: **Table 1** DDM main functions + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Function | Description | + +===================================+=========================================================================================================================================================================================================================================================================================================================================================================+ + | Horizontal sharding | Select a sharding key when creating a logical table. DDM will generate a sharding rule and horizontally shard data. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Flexible shard configuration | DDM supports both compute and storage scaling. You can add nodes to a DDM instance or scale up its node class. Alternatively, increase shards or data nodes to distribute data from one large table to multiple tables or scale out storage resources. Compute scaling is undetectable to your applications. Storage scaling minimizes service interruption to seconds. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Distributed transactions | DDM processes three types of transactions, including single-shard, FREE, and Extended Architecture (XA). | + | | | + | | - Single-shard: Transactions cannot be committed across shards. | + | | - FREE: Transactions are committed across shards. A transaction is not rolled back when it fails to be executed by any shard, causing data inconsistency. | + | | - XA: Transactions are committed in two phases. If a transaction fails to be executed by any shard, all work done will be rolled back to ensure data consistency. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Highly compatible SQL syntax | DDM is highly compatible with the MySQL licenses and syntax. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Read and write splitting | Read and write requests can be split without modifying the application code, and this is totally transparent to applications. You only need to create read replicas for a MySQL instance associated with your DDM instance and configure a read policy, and a large number of concurrent requests can read data from those read replicas. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Global sequence | DDM allows you to use globally unique, distributed, and ascending SNs as primary or unique keys or to meet your requirements in specific scenarios. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | DDM console | The DDM console enables you to manage and maintain DDM instances, schemas, and accounts. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +Related Services +---------------- + +- VPC + + DDM instances are deployed in an isolated VPC and you can configure IP addresses and bandwidth for accessing these DDM instances and use a security group to control access to them. + +- ECS + + You can access your DDM instance through an ECS. + +- Relational Database Service (RDS) + + After you create a DDM instance, you can associate it with RDS for MySQL instances in the same VPC to obtain separated storage resources. + +- GaussDB(for MySQL) + + After you create a DDM instance, you can associate it with GaussDB(for MySQL) instances in the same VPC to obtain separated storage resources. + +- Cloud Trace Service (CTS) + + CTS records operations on your DDM resources for later query, audit, and backtrack. + +- Elastic Load Balance (ELB) + + ELB distributes incoming traffic to multiple backend servers based on the forwarding policy to balance workloads. So, it can expand external service capabilities of DDM and eliminate single points of failure (SPOFs) to improve service availability. + + +.. figure:: /_static/images/en-us_image_0000001700277302.png + :alt: **Figure 1** Relationship among DDM, VPC, ECS, and data nodes + + **Figure 1** Relationship among DDM, VPC, ECS, and data nodes diff --git a/umn/source/service_overview/index.rst b/umn/source/service_overview/index.rst new file mode 100644 index 0000000..1566003 --- /dev/null +++ b/umn/source/service_overview/index.rst @@ -0,0 +1,26 @@ +:original_name: ddm_01_0000.html + +.. 
_ddm_01_0000: + +Service Overview +================ + +- :ref:`Overview ` +- :ref:`Basic Concepts ` +- :ref:`Core Functions ` +- :ref:`Product Specifications ` +- :ref:`Usage Constraints ` +- :ref:`Regions and AZs ` +- :ref:`Application Scenarios ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + basic_concepts + core_functions + product_specifications + usage_constraints/index + regions_and_azs + application_scenarios diff --git a/umn/source/service_overview/overview.rst b/umn/source/service_overview/overview.rst new file mode 100644 index 0000000..62701c5 --- /dev/null +++ b/umn/source/service_overview/overview.rst @@ -0,0 +1,50 @@ +:original_name: ddm_01_0001.html + +.. _ddm_01_0001: + +Overview +======== + +Definition +---------- + +Distributed Database Middleware (DDM) is a MySQL-compatible, distributed middleware service designed for relational databases. It can resolve distributed scaling issues to break through capacity and performance bottlenecks of databases, helping handle highly concurrent access to massive volumes of data. + +DDM uses a decoupled compute and storage architecture and provides functions such as database and table sharding, read/write splitting, elastic scaling, and sustainable O&M. Management of instance nodes has no impact on your workloads. You can perform O&M on your databases and read and write data from and to them on the DDM console, just as if you were operating a single-node MySQL database. + +Advantages +---------- + +- Automatic Database and Table Sharding + + MySQL databases are usually deployed on single nodes. Once a fault occurs, all data may be lost, and your workloads are 100% affected. + + DDM supports automatic database and table sharding to distribute data across multiple data nodes, so impacts on your services are greatly reduced once a fault occurs. It also supports explosive growth of services. + +- Read/Write Splitting + + DDM can leverage read replicas of data nodes. 
If there is still great query pressure after horizontal sharding, you can enable read/write splitting to speed up database processing and access, without the need to reconstruct your service system. + +- Elastic Scaling + + MySQL databases can support only medium- and small-scale service systems because their CPU, memory, and network processing are limited by server configurations and their storage depends on the size of SSD or EVS disks. + + DDM supports both compute and storage scaling. You can add nodes to a DDM instance or scale up its node class. Alternatively, increase shards or data nodes to distribute data from one large table to multiple tables or scale out storage resources as services grow, without worrying about O&M. + +Service Architecture +-------------------- + + +.. figure:: /_static/images/en-us_image_0000001733266537.png + :alt: **Figure 1** DDM service architecture + + **Figure 1** DDM service architecture + +How DDM Works +------------- + + +.. figure:: /_static/images/en-us_image_0000001685307354.png + :alt: **Figure 2** DDM working diagram + + **Figure 2** DDM working diagram diff --git a/umn/source/service_overview/product_specifications.rst b/umn/source/service_overview/product_specifications.rst new file mode 100644 index 0000000..c25e7af --- /dev/null +++ b/umn/source/service_overview/product_specifications.rst @@ -0,0 +1,18 @@ +:original_name: ddm_01_0017.html + +.. _ddm_01_0017: + +Product Specifications +====================== + +General-enhanced DDM instances use Intel® Xeon® Scalable processors. Working in high-performance networks, these DDM instances can offer high and stable computing performance, meeting enterprise-class application requirements. + +.. 
table:: **Table 1** Supported specifications + + ================ ============ ===== =========== + Specification Architecture vCPUs Memory (GB) + ================ ============ ===== =========== + General-enhanced x86 8 16 + \ 16 32 + \ 32 64 + ================ ============ ===== =========== diff --git a/umn/source/service_overview/regions_and_azs.rst b/umn/source/service_overview/regions_and_azs.rst new file mode 100644 index 0000000..48df024 --- /dev/null +++ b/umn/source/service_overview/regions_and_azs.rst @@ -0,0 +1,36 @@ +:original_name: ddm_01_0007.html + +.. _ddm_01_0007: + +Regions and AZs +=============== + +Concepts +-------- + +The combination of a region and an availability zone (AZ) identifies the location of a data center. You can create resources in a specific AZ in a region. + +- A region is a geographic area where physical data centers are located. Each region is completely independent, improving fault tolerance and stability. After a resource is created, its region cannot be changed. +- An AZ is a physical location using independent power supplies and networks. Faults in an AZ do not affect other AZs. A region can contain multiple AZs, which are physically isolated but interconnected through internal networks. This ensures the independence of AZs and provides low-cost and low-latency network connections. + +:ref:`Figure 1 ` shows the relationship between regions and AZs. + +.. _ddm_01_0007__fig18764197715: + +.. figure:: /_static/images/en-us_image_0000001733266557.png + :alt: **Figure 1** Regions and AZs + + **Figure 1** Regions and AZs + +Selecting a Region +------------------ + +You are advised to select a region close to you or your target users. This reduces network latency and improves access rate. + +Selecting an AZ +--------------- + +When determining whether to deploy resources in the same AZ, consider your applications' requirements on disaster recovery (DR) and network latency. 
+ +- For high DR capability, deploy resources in different AZs in the same region. +- For low network latency, deploy resources in the same AZ. diff --git a/umn/source/service_overview/usage_constraints/data_nodes.rst b/umn/source/service_overview/usage_constraints/data_nodes.rst new file mode 100644 index 0000000..3fd3294 --- /dev/null +++ b/umn/source/service_overview/usage_constraints/data_nodes.rst @@ -0,0 +1,20 @@ +:original_name: ddm_01_0005.html + +.. _ddm_01_0005: + +Data Nodes +========== + +Constraints on data nodes are as follows: + +- Data nodes can be only RDS for MySQL and GaussDB(for MySQL) instances of versions 5.7 and 8.0. +- DDM cannot connect to MySQL instances using SSL connections. +- Case sensitivity support cannot be enabled for MySQL instances. + + .. note:: + + - If you are using MySQL 5.7, select **Case insensitive** for **Table Name** when you create a MySQL instance, or set **lower_case_table_names** to **1** on the **Parameters** page after you complete the creation. + - If you are using MySQL 8.0, select **Case insensitive** for **Table Name** when you create a MySQL instance. + +- Modifying configurations of a data node may result in an exception in using your DDM instance. After the modification, click **Synchronize Data Node Information** on the **Data Nodes** page to synchronize changes from the data node to DDM. +- Character set GBK is not allowed for data nodes. diff --git a/umn/source/service_overview/usage_constraints/high-risk_operations.rst b/umn/source/service_overview/usage_constraints/high-risk_operations.rst new file mode 100644 index 0000000..8fdb3f7 --- /dev/null +++ b/umn/source/service_overview/usage_constraints/high-risk_operations.rst @@ -0,0 +1,11 @@ +:original_name: ddm_01_0175.html + +.. _ddm_01_0175: + +High-risk Operations +==================== + +Pay attention to the following when you use DDM: + +- Do not connect to any data node for data operations to avoid deleting by mistake system catalogs or metadata. 
+- Do not clear system tables **TBL_DRDS_TABLE** and **MYCAT_SEQUENCE** to prevent metadata loss. diff --git a/umn/source/service_overview/usage_constraints/index.rst b/umn/source/service_overview/usage_constraints/index.rst new file mode 100644 index 0000000..57dcf53 --- /dev/null +++ b/umn/source/service_overview/usage_constraints/index.rst @@ -0,0 +1,20 @@ +:original_name: ddm-01-0003.html + +.. _ddm-01-0003: + +Usage Constraints +================= + +- :ref:`Network Access ` +- :ref:`Data Nodes ` +- :ref:`Unsupported Features and Limitations ` +- :ref:`High-risk Operations ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + network_access + data_nodes + unsupported_features_and_limitations + high-risk_operations diff --git a/umn/source/service_overview/usage_constraints/network_access.rst b/umn/source/service_overview/usage_constraints/network_access.rst new file mode 100644 index 0000000..d0432fb --- /dev/null +++ b/umn/source/service_overview/usage_constraints/network_access.rst @@ -0,0 +1,11 @@ +:original_name: ddm-01-0004.html + +.. _ddm-01-0004: + +Network Access +============== + +Restrictions on network access are as follows: + +- The data nodes and ECSs running your applications must be in the same VPC as your DDM instance. +- To access DDM from your computer, you need to bind an EIP to your DDM instance and then use the EIP to access the DDM instance. diff --git a/umn/source/service_overview/usage_constraints/unsupported_features_and_limitations.rst b/umn/source/service_overview/usage_constraints/unsupported_features_and_limitations.rst new file mode 100644 index 0000000..25395e3 --- /dev/null +++ b/umn/source/service_overview/usage_constraints/unsupported_features_and_limitations.rst @@ -0,0 +1,305 @@ +:original_name: ddm_01_0174.html + +.. 
_ddm_01_0174: + +Unsupported Features and Limitations +==================================== + +Unsupported Features +-------------------- + +- Stored procedures +- Triggers +- Views +- Events +- User-defined functions +- Foreign key reference and association + +- Full-text indexes and SPACE functions +- Temporary tables +- Compound statements such as BEGIN...END, LOOP...END LOOP, REPEAT...UNTIL...END REPEAT, and WHILE...DO...END WHILE + +- Process control statements such as IF and WHILE +- RESET and FLUSH statements + +- BINLOG statement +- HANDLER statement + +- INSTALL and UNINSTALL PLUGIN statements +- Character sets other than ASCII, Latin1, binary, utf8, and utf8mb4 + +- SYS schema + +- Optimizer Trace +- X-Protocol + +- CHECKSUM TABLE syntax +- Table maintenance statements, including ANALYZE, CHECK, CHECKSUM, OPTIMIZE, and REPAIR TABLE + +- Statements for assigning a value to or querying **session** variables + + For example: + + .. code-block:: + + set @rowid=0;select @rowid:=@rowid+1,id from user; + +- SQL statements that use ``--`` or ``/*...*/`` to comment out a single line or multiple lines of code + +- DDM provides incomplete support for system variable queries. The returned values are variable values of RDS instances, instead of DDM kernel variable values. For example, the returned values of SELECT @@autocommit do not indicate the current transaction status. +- Executing SET syntax to modify global variables + +- PARTITION syntax. Partitioned tables are not recommended. +- LOAD XML statement + +Unsupported Operators +--------------------- + +- Assignment operator (:=) is not supported. +- Operator (->) is not supported. This operator can be executed successfully in a single table. An error is reported when this operator is executed in other types of tables. +- Operator (->>) is not supported. This operator can be executed successfully in a single table. An error is reported when this operator is executed in other types of tables. 
+- Expression IS UNKNOWN + +Unsupported Functions +--------------------- + +The compute layer of DDM does not support the following functions: + +- XML functions +- Function **ANY_VALUE()** +- Function **ROW_COUNT()** +- Function **COMPRESS()** +- Function **SHA()** +- Function **SHA1()** +- Function **MD5()** +- Function **AES_ENCRYPT()** +- Function **AES_DECRYPT()** +- Aggregate function **JSON_OBJECTAGG()** +- Aggregate function **JSON_ARRAYAGG()** +- Aggregate function **STD()** +- Aggregate function **STDDEV()** +- Aggregate function **STDDEV_POP()** +- Aggregate function **STDDEV_SAMP()** +- Aggregate function **VAR_POP()** +- Aggregate function **VAR_SAMP()** +- Aggregate function **VARIANCE()** + +SQL Syntax +---------- + +**SELECT** + +- DISTINCTROW + +- Configuring options [HIGH_PRIORITY], [STRAIGHT_JOIN], [SQL_SMALL_RESULT], [SQL_BIG_RESULT], [SQL_BUFFER_RESULT], [SQL_NO_CACHE], and [SQL_CALC_FOUND_ROWS] in SELECT statements on DDM instances + +- SELECT ... GROUP BY ... WITH ROLLUP + +- SELECT ... ORDER BY ... WITH ROLLUP + +- WITH + +- Window functions + +- SELECT FOR UPDATE supports only simple queries and does not support clauses such as JOIN, GROUP BY, ORDER BY, and LIMIT. Option [NOWAIT \| SKIP LOCKED] for modifying FOR UPDATE is invalid for DDM. + +- DDM does not support multiple columns with the same name in each SELECT statement in UNION. For example, the following SELECT statement uses duplicate column names: + + .. code-block:: + + SELECT id, id, name FROM t1 UNION SELECT pk, pk, name FROM t2; + +**SORT and LIMIT** + +- LIMIT/OFFSET, value range: 0-2147483647 + +**Aggregation** + +Keyword **asc** or **desc** cannot be used in the GROUP BY clause to sort results. + +.. note:: + + - DDM automatically ignores keyword **asc** or **desc** after GROUP BY. + - In MySQL versions earlier than 8.0.13, keyword **asc** or **desc** can be used in the GROUP BY clause to sort results.
In MySQL 8.0.13 or later, a syntax error is reported if you use keyword **asc** or **desc** this way. ORDER BY is recommended for sorting. + +**Subqueries** + +- Subqueries that are correlated with grandparent queries are not supported. +- Using subqueries in the HAVING clause and the JOIN ON condition is not supported. +- Each derived table must have an alias. +- A derived table cannot be a correlated subquery. + +**LOAD DATA** + +- LOW_PRIORITY is not supported. +- CONCURRENT is not supported. +- PARTITION (partition_name [, partition_name] ...) is not supported. +- LINES STARTING BY 'string' is not supported. +- User-defined variables are not supported. +- ESCAPED BY supports only '\\\\'. +- If you have not specified a value for your auto-increment key when you insert a data record, DDM will not fill in a value for the key. The auto-increment keys on all data nodes of a DDM instance take effect, so duplicate auto-increment key values may be generated. +- If the primary key or unique index is not routed to the same physical table, REPLACE does not take effect. +- If the primary key or unique index is not routed to the same physical table, IGNORE does not take effect. + +**INSERT and REPLACE** + +- INSERT DELAYED is not supported. + +- Only INSERT statements that contain sharding fields are supported. + +- PARTITION syntax is not supported. Partitioned tables are not recommended. + +- Setting **YYYY** of **datetime** (in the format of **YYYY-MM-DD HH:MM:SS**) to **1582** or any smaller value in INSERT statements is not supported. + +- Nesting a subquery in ON DUPLICATE KEY UPDATE of an INSERT statement is not supported. The following is an example: + + .. code-block:: + + INSERT INTO t1(a, b) + SELECT * FROM(SELECT c, d FROM t2 UNION SELECT e, f FROM t3) AS dt + ON DUPLICATE KEY UPDATE b = b + c; + + Column **c** of the subquery is referenced in the ON DUPLICATE KEY UPDATE clause. + +- The sharding key values in INSERT and REPLACE statements cannot be **DEFAULT**.
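+
+The INSERT rules above can be illustrated with a minimal sketch. The table and column names below are hypothetical; **id** is assumed to be the sharding key of a sharded table **t_order**:
+
+.. code-block::
+
+   -- Supported: the statement contains the sharding field and supplies a concrete value.
+   INSERT INTO t_order(id, buyer) VALUES (1001, 'alice');
+
+   -- Not supported: the sharding key value is DEFAULT.
+   INSERT INTO t_order(id, buyer) VALUES (DEFAULT, 'alice');
+
+   -- Not supported: the statement does not contain the sharding field.
+   INSERT INTO t_order(buyer) VALUES ('alice');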
+ +**UPDATE and DELETE** + +- Updating a sharding key value to **DEFAULT** is not supported. + +- Repeatedly updating the same field in one SQL statement is not supported. + +- Updating a sharding key using UPDATE JOIN is not supported. The following is an example: + + .. code-block:: + + UPDATE tbl_1 a, tbl_2 b set a.name=b.name where a.id=b.id; + + **name** indicates the sharding key of table **tbl_1**. + +- Updating a sharding key by executing INSERT ON DUPLICATE KEY UPDATE is not supported. + +- Updating self-joins is not supported. The following is an example: + + .. code-block:: + + UPDATE tbl_1 a, tbl_1 b set a.tinyblob_col=concat(b.tinyblob_col, 'aaabbb'); + +- UPDATE JOIN supports only joins with WHERE conditions. The following statement is not supported because it contains no WHERE condition: + + .. code-block:: + + UPDATE tbl_3, tbl_4 SET tbl_3.varchar_col='dsgfdg'; + +- Referencing other object columns in assignment statements or expressions is not supported when UPDATE JOIN syntax is used. The following is an example: + + .. code-block:: + + UPDATE tbl_1 a, tbl_2 b SET a.name=concat(b.name, 'aaaa'),b.name=concat(a.name, 'bbbb') WHERE a.id=b.id; + +- You can update a sharding field in two steps: delete the record with the original sharding field value and then insert a record with the new value. During this process, queries on the target table that involve the sharding field may return inconsistent results. + +**DDL** + +- SQL statements for modifying database names and sharding field names and types +- SQL statements for creating and deleting schemas +- FULLTEXT indexes +- AS SELECT clause of the CREATE TABLE statement +- CREATE TABLE ... LIKE statement +- Dropping multiple tables with one SQL statement +- Executing multiple SQL statements at the same time +- Creating foreign keys for broadcast and sharded tables +- Creating tables whose names are prefixed by **\_ddm** +- Creating temporary sharded or broadcast tables +- Specifying globally unique keys in the CREATE TABLE statement + +Indexes +------- + +- Global secondary indexes +- Global unique indexes.
Unique keys and primary keys may not be globally unique. + +Table Recycle Bins +------------------ + +- Hints +- Deleting tables by schema +- Deleting tables by logical table +- After a table is recovered, its globally unique sequence increases automatically but may not follow the last sequence value. +- Shard configuration +- Retaining copies with no time limit +- Recovering data to a table with any name +- Unlimited copies + +Transactions +------------ + +- Savepoints +- XA syntax. DDM implements distributed transactions through XA, so applications do not need to use this syntax directly. +- Customizing the isolation level of a transaction. Currently, DDM supports only the READ COMMITTED isolation level. For compatibility, DDM does not report errors for SQL statements (such as SET GLOBAL TRANSACTION ISOLATION LEVEL REPEATABLE READ) that set the database isolation level, but it ignores the modifications to the transaction isolation level. +- Setting a transaction to read-only (START TRANSACTION READ ONLY). For compatibility, DDM accepts the statement but runs the transaction as read/write instead of read-only. + +Permissions +----------- + +- Column-level permissions +- Subprogram-level permissions + +Database Management Statements +------------------------------ + +- SHOW TRIGGERS +- Most SHOW statements, such as SHOW PROFILES, SHOW ERRORS, and SHOW WARNINGS +- The following SHOW statements are randomly sent to a database shard. If database shards are on different RDS for MySQL instances, the returned variables or table information may be different. + + - SHOW TABLE STATUS + - SHOW VARIABLES Syntax + - SHOW WARNINGS Syntax does not support the combination of LIMIT and COUNT. + - SHOW ERRORS Syntax does not support the combination of LIMIT and COUNT. + +INFORMATION_SCHEMA +------------------ + +- Only simple queries of SCHEMATA, TABLES, COLUMNS, STATISTICS, and PARTITIONS are supported.
No subqueries, JOINs, aggregate functions, ORDER BY, or LIMIT clauses are allowed. + +Broadcast Tables +---------------- + +If a broadcast table is used, do not use any function that returns different results each time it is executed. Otherwise, data inconsistency will occur between different shards. If such functions are indeed required, calculate their results first, write the results into your SQL statements, and then execute the SQL statements on the broadcast table. Functions of this type include but are not limited to the following: + +- CONNECTION_ID() +- CURDATE() +- CURRENT_DATE() +- CURRENT_TIME() +- CURRENT_TIMESTAMP() +- CURTIME() +- LAST_INSERT_ID() +- LOCALTIME() +- LOCALTIMESTAMP() +- NOW() +- UNIX_TIMESTAMP() +- UTC_DATE() +- UTC_TIME() +- UTC_TIMESTAMP() +- CURRENT_ROLE() +- CURRENT_USER() +- FOUND_ROWS() +- GET_LOCK() +- IS_FREE_LOCK() +- IS_USED_LOCK() +- JSON_TABLE() +- LOAD_FILE() +- MASTER_POS_WAIT() +- RAND() +- RELEASE_ALL_LOCKS() +- RELEASE_LOCK() +- ROW_COUNT() +- SESSION_USER() +- SLEEP() +- SYSDATE() +- SYSTEM_USER() +- USER() +- UUID() +- UUID_SHORT() diff --git a/umn/source/shard_configuration/assessment.rst b/umn/source/shard_configuration/assessment.rst new file mode 100644 index 0000000..0e1ae2b --- /dev/null +++ b/umn/source/shard_configuration/assessment.rst @@ -0,0 +1,13 @@ +:original_name: ddm_03_0069.html + +.. _ddm_03_0069: + +Assessment +========== + +Before changing shards, you need to carry out a preliminary evaluation and determine the number of new shards, whether to scale up the current DDM node class, and the number and specifications of required data nodes. + +- Data volume: Run **show db status** to query the volume of data involved. +- DDM node class: Determine the number of nodes of the DDM instance and the vCPUs and memory size of each node. +- Data node class: Determine the number of data nodes and the vCPUs and memory size of each node. +- Business scale: Analyze the current service scale and its growth trend.
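+
+As a minimal illustration of the assessment above, the data volume check can be run from any SQL client connected to the DDM instance (the exact output columns depend on the DDM kernel version):
+
+.. code-block::
+
+   -- Estimate the data volume of each schema before changing shards.
+   show db status;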
diff --git a/umn/source/shard_configuration/index.rst b/umn/source/shard_configuration/index.rst new file mode 100644 index 0000000..c5dbaf4 --- /dev/null +++ b/umn/source/shard_configuration/index.rst @@ -0,0 +1,20 @@ +:original_name: ddm_03_0064.html + +.. _ddm_03_0064: + +Shard Configuration +=================== + +- :ref:`Overview and Application Scenarios ` +- :ref:`Assessment ` +- :ref:`Pre-check ` +- :ref:`Operation Guide ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview_and_application_scenarios + assessment + pre-check + operation_guide diff --git a/umn/source/shard_configuration/operation_guide.rst b/umn/source/shard_configuration/operation_guide.rst new file mode 100644 index 0000000..8cee722 --- /dev/null +++ b/umn/source/shard_configuration/operation_guide.rst @@ -0,0 +1,103 @@ +:original_name: ddm_03_0071.html + +.. _ddm_03_0071: + +Operation Guide +=============== + +This section uses an RDS for MySQL instance as an example to describe how to configure shards for a schema. + +Prerequisites +------------- + +- There is a DDM instance with available schemas. +- There is an RDS for MySQL instance in the same VPC as the DDM instance, and it is not associated with any other DDM instance. If adding data nodes is required, ensure that the new data nodes are in the same VPC as the DDM instance. +- The kernel version of the DDM instance must be 3.0.8.3 or later. The latest kernel version is recommended. +- Ensure that the instances to be associated with your schema are not read-only. + +Procedure +--------- + +#. Log in to the DDM console. In the instance list, locate the instance that you want to configure shards for and click its name. + +#. On the displayed page, choose **Schemas** to view schemas of the DDM instance. + +#. In the schema list, locate the schema that you want to configure shards for and click **Configure Shards** in the **Operation** column. + +#. 
On the **Configure Shards** page, configure the required parameters and click **Test Availability**. + + .. note:: + + - Tables without primary keys do not support shard configuration. + - **Total Shards After Configuration** defaults to the total number of existing shards in the schema. If you want to increase shards, change the default value to the new total number of shards, and DDM will distribute all shards evenly across all data nodes. + - You can increase data nodes or shards. Data will be redistributed across all shards if one or more shards are added. + - Existing instances are selected by default in the data node list, but you still need to enter the password to test connections. + - The number of physical shards per data node in the schema cannot exceed 64. If more than 64 shards are required, contact DDM technical support. + - Required permissions: SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER WITH GRANT OPTION + +#. After the test is successful, click **Next** to go to the **Precheck** page. + + .. note:: + + - Precheck is not the start of shard configuration. The configuration task does not start until you click **OK**. + - Handle risks first if any are found. You can also ignore the risks if you are sure that they do not affect your services. + +#. After all check items are complete, click **Configure shards**. + +#. View the progress at Task Center or by running **show migrate status** on your SQL client. A shard configuration task consists of two phases: full migration and incremental migration. + + + .. figure:: /_static/images/en-us_image_0000001685307342.png + :alt: **Figure 1** Run the required command to view task progress + + **Figure 1** Run the required command to view task progress + + .. 
note:: + + The number of returned records corresponds to the number of source RDS instances. + + **SOURCE_RDS**: indicates the source RDS instance. + + **MIGRATE_ID**: indicates the scale-out task ID. + + **SUCCEED_TABLE_STRUCTURE**: indicates the number of physical tables whose structure data has been migrated. + + **TOTAL_TABLE_STRUCTURE**: indicates the total number of physical tables whose structure data is to be migrated. + + **SUCCEED_TABLE_DATA**: indicates the number of physical tables whose data records have been migrated. + + **TOTAL_TABLE_DATA**: indicates the total number of physical tables whose data records are to be migrated. + + **SUCCEED_INDEX_DATA**: indicates the number of physical tables whose indexes have been migrated. + + **TOTAL_INDEX_DATA**: indicates the total number of physical tables whose indexes are to be migrated. + + **FULL_SUCCEED_COUNT**: indicates the number of objects that have finished the full migration in the current scale-out subtask. + + **FULL_TOTAL_COUNT**: indicates the total number of objects to be migrated in the full migration in the current scale-out subtask. + + **FULL_PERCENTAGE**: indicates the percentage of migrated objects in the full migration in the current scale-out subtask. + + The numbers of objects to be migrated and of migrated objects in the full migration are aggregated across all scale-out subtasks and displayed in the progress bar at Task Center. + +#. At Task Center, click **View Log** to view task logs. + +#. If you select **Manual** for route switchover, click **Switch Route** at Task Center after data is completely migrated. If you select **Automatic**, the route is automatically switched over within the specified time. + + .. note:: + + - Switching the route is critical for a shard configuration task. Before the route is switched, you can cancel a shard configuration task, and data in the original databases is not affected.
+ - If new RDS for MySQL instances are added, write operations will be disabled during route switchover. If the number of shards is increased, both read and write operations are disabled during route switchover. + - Switching the route during off-peak hours is recommended because data validation during this process increases the switchover time. The switchover duration depends on the volume of data involved. + +#. Click **Clear** in the **Operation** column to delete the data migrated from the original RDS for MySQL instances. + +#. Carefully read the information in the dialog box, confirm that the task is correct, and click **Yes**. + +#. Wait until the source data is cleared. + +#. Run the following commands after the shard configuration is complete: + + **show data node**: used to view the relationship between new data nodes and shards + + **show db status**: used to view the estimated disk usage of the schema. diff --git a/umn/source/shard_configuration/overview_and_application_scenarios.rst b/umn/source/shard_configuration/overview_and_application_scenarios.rst new file mode 100644 index 0000000..c8f5647 --- /dev/null +++ b/umn/source/shard_configuration/overview_and_application_scenarios.rst @@ -0,0 +1,52 @@ +:original_name: ddm_03_0068.html + +.. _ddm_03_0068: + +Overview and Application Scenarios +================================== + +Overview +-------- + +Shard configuration is a core function of DDM. With this function, you can increase data nodes or shards to improve database storage and concurrency as services grow. Shard configuration has little impact on your services, so you do not need to worry about database scaling and subsequent O&M during service bursts. + +Application Scenarios +--------------------- + +DDM provides the following methods of configuring shards to meet different service needs.
+ +**Method 1: Keep shards unchanged and increase data nodes** + +This method does not change the number of shards and only increases the number of data nodes. Some shards are migrated from original data nodes to new data nodes. The shard data is not redistributed, so this method is the fastest of the three and is recommended. + +This method supports rapid service growth after horizontal sharding and can reduce costs in the early stage of services. It is also suitable if RDS for MySQL instances cannot meet storage space and read/write performance requirements. + + +.. figure:: /_static/images/en-us_image_0000001685147678.png + :alt: **Figure 1** Adding RDS for MySQL instances with shards unchanged + + **Figure 1** Adding RDS for MySQL instances with shards unchanged + +**Method 2: Add shards with data nodes unchanged** + +This method adds shards, but not data nodes. It changes the total shards, total table shards, and table sharding rules. Data is redistributed to all shards. Old tables in original shards will be deleted, and broadcast tables are increased. + +This method is suitable if the associated RDS for MySQL instance has sufficient storage space but one of its tables contains a large amount of data, limiting query performance. + + +.. figure:: /_static/images/en-us_image_0000001685307426.png + :alt: **Figure 2** Adding shards with RDS for MySQL instances unchanged + + **Figure 2** Adding shards with RDS for MySQL instances unchanged + +**Method 3: Add both shards and data nodes** + +This method increases both shards and data nodes. It changes the total shards, total table shards, and table sharding rules. Data is redistributed to all shards. Old tables in original shards will be deleted, and broadcast tables are increased. + +This method is suitable if RDS for MySQL instances cannot meet storage space and read/write requirements and there is a physical table that contains a large amount of data, limiting query performance. + + +.. 
figure:: /_static/images/en-us_image_0000001733266613.png + :alt: **Figure 3** Adding shards and RDS for MySQL instances + + **Figure 3** Adding shards and RDS for MySQL instances diff --git a/umn/source/shard_configuration/pre-check.rst b/umn/source/shard_configuration/pre-check.rst new file mode 100644 index 0000000..921c91b --- /dev/null +++ b/umn/source/shard_configuration/pre-check.rst @@ -0,0 +1,63 @@ +:original_name: ddm_03_0070.html + +.. _ddm_03_0070: + +Pre-check +========= + +Check items in the table below one day before performing a shard configuration task. + +Pre-check Items +--------------- + +.. table:: **Table 1** Pre-check items involved + + +-----------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Item | Purpose | Solution to Check Failure | + +===============================================+==============================================================================================================================================================+==============================================================================================================================================================================================+ + | Binlog backup time of the DB instance | Whether your full backups are retained for a time period long enough | Increase the retention period for full backups on the data node console. 
| + +-----------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Binlog enabled on data nodes | Whether binlog is enabled to support online shard configuration | If your data node is an RDS instance, no further action is required. If your data node is a GaussDB(for MySQL) instance, set **log_bin** to **true** on the GaussDB(for MySQL) console. | + +-----------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Retention period of binlogs on data nodes | The retention period of binlogs on data nodes must be long enough. | If your data node is an RDS instance, no further action is required. If your data node is a GaussDB(for MySQL) instance, set **binlog_expire_logs_seconds** to **604800** or a larger value. | + +-----------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Broadcast table consistency | Ensure broadcast table consistency before performing a shard configuration task. | Contact DDM O&M personnel. 
| + +-----------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Character set and collation of source shards | Ensure that character set and collation are consistent before and after the shard configuration. | Contact DDM O&M personnel. | + +-----------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | SQL statements for creating physical stables. | Ensure that table structure on physical shards is consistent. | Execute CHECK TABLE to check for table structure inconsistencies and execute ALTER to rectify the inconsistencies. | + +-----------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Primary keys | All tables in the source database have primary keys, and the sharding key is a part of the primary keys to ensure data consistency after shards are changed. | Add primary keys for tables using ALTER if the tables have no primary keys. 
| + +-----------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Access to DB instances | Check whether data nodes can be connected. | Check security group configurations. | + +-----------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | DB instance parameters | The source data nodes have the same DB parameter settings as the destination data nodes. | Modify parameter configurations on the data node console. | + +-----------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | DB instance storage space | The disk space of data nodes is sufficient during shard configuration. | Scale up storage space of data nodes. | + | | | | + | | | .. caution:: | + | | | | + | | | CAUTION: | + | | | This check item is based on the estimated value that may be different from the actual value. 
| + +-----------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | DB instance time zone | The source data nodes have the same time zone requirements as the destination data nodes. | Modify the time zone on the **Parameters** page of the data node console. | + +-----------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +Common Issues and Solutions +--------------------------- + +- The shard configuration fails due to table structure inconsistency. + + Solution: Execute CHECK TABLE to query table structure inconsistencies and execute ALTER to rectify the inconsistencies. Contact O&M personnel if the inconsistencies cannot be rectified using DDL, for example, the primary or unique keys cannot be modified for data reasons. + +- Tables without primary keys cannot be migrated. If a table has no primary keys, it cannot be correctly located and recorded. After a retry is performed during shard configuration, duplicate data may be generated. + + Solution: Add keys to the tables. + +- If the sharding key is not part of a primary key, there may be data records (in different physical tables) with duplicate primary key values in a logical table. When these data records are redistributed, they will be routed to the same physical table, and only one record is retained because they have the same primary keys. 
As a result, data becomes inconsistent before and after the migration, causing the shard configuration to fail. + + .. note:: + + - This error does not occur when the primary key is a globally unique sequence and the number of shards does not change. + + Solution: Rectify the data and check again. diff --git a/umn/source/slow_queries.rst b/umn/source/slow_queries.rst new file mode 100644 index 0000000..6b0a8e3 --- /dev/null +++ b/umn/source/slow_queries.rst @@ -0,0 +1,18 @@ +:original_name: ddm_13_0001.html + +.. _ddm_13_0001: + +Slow Queries +============ + +Scenarios +--------- + +DDM provides a Slow Queries function that aggregates slow SQL statements of the same type within a specified period by SQL template. You can specify a time range, search for all types of slow SQL statements within that range, and then optimize them. + +Procedure +--------- + +#. In the instance list, locate the DDM instance whose slow queries you want to view and click its name. +#. In the navigation pane, choose **Slow Queries**. +#. On the **Slow Queries** page, specify a time range and view SQL statements executed within this time range. diff --git a/umn/source/sql_syntax/advanced_sql_functions.rst b/umn/source/sql_syntax/advanced_sql_functions.rst new file mode 100644 index 0000000..0c760ca --- /dev/null +++ b/umn/source/sql_syntax/advanced_sql_functions.rst @@ -0,0 +1,30 @@ +:original_name: ddm_03_0035.html + +.. _ddm_03_0035: + +Advanced SQL Functions +====================== + +.. 
table:: **Table 1** Restrictions on advanced SQL functions + + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ + | Item | Restriction | + +===================================+=========================================================================================================================================+ + | SQL functions | - PREPARE and EXECUTE syntax is not supported. | + | | | + | | - Customized data types and functions are not supported. | + | | | + | | - Views, stored procedures, triggers, and cursors are not supported. | + | | | + | | - Compound statements such as BEGIN...END, LOOP...END LOOP, REPEAT...UNTIL...END REPEAT, and WHILE...DO...END WHILE are not supported. | + | | | + | | - Process control statements such as IF and WHILE are not supported. | + | | | + | | - The following prepared statements are not supported: | + | | | + | | **PREPARE**\ Syntax | + | | | + | | **EXECUTE**\ Syntax | + | | | + | | - Comments for indexes are not supported in table creation statements. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/sql_syntax/database_management_syntax.rst b/umn/source/sql_syntax/database_management_syntax.rst new file mode 100644 index 0000000..f74cc9d --- /dev/null +++ b/umn/source/sql_syntax/database_management_syntax.rst @@ -0,0 +1,58 @@ +:original_name: ddm_03_0032.html + +.. _ddm_03_0032: + +Database Management Syntax +========================== + +Supported Database Management Syntax +------------------------------------ + +- SHOW Syntax + +- SHOW COLUMNS Syntax + +- SHOW CREATE TABLE Syntax + +- SHOW TABLE STATUS Syntax + +- SHOW TABLES Syntax + +- SHOW DATABASES + + If the required database is not found, check fine-grained permissions of your account. 
+ +- SHOW INDEX FROM + +- SHOW VARIABLES Syntax + +Supported Database Tool Commands +-------------------------------- + +- DESC Syntax + +- USE Syntax + +- EXPLAIN Syntax + + Unlike EXPLAIN in MySQL, the output of DDM EXPLAIN describes the nodes that the current SQL statement is routed to. + +Unsupported Database Management Syntax +-------------------------------------- + +.. table:: **Table 1** Restrictions on database management statements + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Item | Restriction | + +===================================+=================================================================================================================================================================================================+ + | Database management statements | - Executing SET Syntax to modify global variables is not supported. | + | | - SHOW TRIGGERS is not supported. | + | | | + | | The following SHOW statements are randomly sent to a database shard. If database shards are on different RDS for MySQL instances, the returned variables or table information may be different. | + | | | + | | - SHOW TABLE STATUS | + | | - SHOW VARIABLES Syntax | + | | - CHECK TABLE does not support sharding tables by hash or sharding key. | + | | - SHOW WARNINGS Syntax does not support the combination of LIMIT and COUNT. | + | | - SHOW ERRORS Syntax does not support the combination of LIMIT and COUNT. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/sql_syntax/ddl/creating_a_table.rst b/umn/source/sql_syntax/ddl/creating_a_table.rst new file mode 100644 index 0000000..216bd23 --- /dev/null +++ b/umn/source/sql_syntax/ddl/creating_a_table.rst @@ -0,0 +1,70 @@ +:original_name: ddm_08_0029.html + +.. _ddm_08_0029: + +Creating a Table +================ + +.. note:: + + - Do not create tables whose names start with **\_ddm**. DDM manages such tables as internal tables by default + - Sharded tables do not support globally unique indexes. If the unique key is different from the sharding key, data uniqueness cannot be ensured. + - The auto-increment key should be a BIGINT value. To avoid duplicate values, do not use TINYINT, SMALLINT, MEDIUMINT, INTEGER, or INT as the auto-increment key. + +Database and Table Sharding +--------------------------- + +The following is an example statement when HASH is used for database sharding and MOD_HASH for table sharding: + +.. code-block:: + + CREATE TABLE tbpartition_tb1 ( + id bigint NOT NULL AUTO_INCREMENT COMMENT 'Primary key id', + name varchar(128), + PRIMARY KEY(id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci + DBPARTITION BY HASH(id) + TBPARTITION BY MOD_HASH(name) tbpartitions 8; + +Database Sharding +----------------- + +The following is an example statement when HASH is used: + +.. code-block:: + + CREATE TABLE dbpartition_tb1 ( + id bigint NOT NULL AUTO_INCREMENT COMMENT 'Primary key id', + name varchar(128), + PRIMARY KEY(id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci + DBPARTITION BY HASH(id); + +Creating a Broadcast Table +-------------------------- + +The following is an example statement: + +.. 
code-block:: + + CREATE TABLE broadcast_tb1 ( + id bigint NOT NULL AUTO_INCREMENT COMMENT 'Primary key id', + name varchar(128), + PRIMARY KEY(id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci + BROADCAST; + +Creating a Table When Sharding Is Not Used +------------------------------------------ + +A global sequence can also be specified for an unsharded table, but it is always ignored. An unsharded table provides auto-increment using the auto-increment values of its corresponding physical table. + +The following is an example statement: + +.. code-block:: + + CREATE TABLE single_tb1 ( + id bigint NOT NULL AUTO_INCREMENT COMMENT 'Primary key id', + name varchar(128), + PRIMARY KEY(id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; diff --git a/umn/source/sql_syntax/ddl/index.rst b/umn/source/sql_syntax/ddl/index.rst new file mode 100644 index 0000000..0667739 --- /dev/null +++ b/umn/source/sql_syntax/ddl/index.rst @@ -0,0 +1,20 @@ +:original_name: ddm-08-0003.html + +.. _ddm-08-0003: + +DDL +=== + +- :ref:`Overview ` +- :ref:`Creating a Table ` +- :ref:`Sharding Algorithm Overview ` +- :ref:`Sharding Algorithms ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + creating_a_table + sharding_algorithm_overview + sharding_algorithms/index diff --git a/umn/source/sql_syntax/ddl/overview.rst b/umn/source/sql_syntax/ddl/overview.rst new file mode 100644 index 0000000..3a9a8c5 --- /dev/null +++ b/umn/source/sql_syntax/ddl/overview.rst @@ -0,0 +1,57 @@ +:original_name: ddm_12_0006.html + +.. _ddm_12_0006: + +Overview +======== + +DDM supports common DDL operations, such as creating databases, creating tables, and modifying table structures, but the implementation method is different from that in common MySQL databases. + +DDL Statements that Can Be Executed on a MySQL Client +----------------------------------------------------- + +- TRUNCATE + + Example: + + ..
code-block:: text + + TRUNCATE TABLE t1 + + Deletes all data from table t1. + + TRUNCATE TABLE deletes all data from a table and requires the DROP privilege. Logically, TRUNCATE TABLE is similar to a DELETE statement that deletes all rows from a table. + +- ALTER TABLE + + Example: + + .. code-block:: text + + ALTER TABLE t2 DROP COLUMN c, DROP COLUMN d; + + Deletes columns c and d from table t2. + + ALTER can add or delete a column, create or drop an index, change the type of an existing column, rename columns or tables, or change the storage engine or comments of a table. + +- DROP INDEX + + Example: + + .. code-block:: text + + DROP INDEX `PRIMARY` ON t; + + Deletes the primary key from table t. + +- CREATE INDEX + + Example: + + .. code-block:: text + + CREATE INDEX part_of_name ON customer (name(10)); + + Creates an index using the first 10 characters of column name (assuming that column name contains non-binary character strings). + + CREATE INDEX can add an index to an existing table. diff --git a/umn/source/sql_syntax/ddl/sharding_algorithm_overview.rst b/umn/source/sql_syntax/ddl/sharding_algorithm_overview.rst new file mode 100644 index 0000000..ebb26c6 --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithm_overview.rst @@ -0,0 +1,105 @@ +:original_name: ddm_03_0038.html + +.. _ddm_03_0038: + +Sharding Algorithm Overview +=========================== + +Supported Sharding Algorithms +----------------------------- + +DDM supports database sharding, table sharding, and a variety of sharding algorithms. + +..
table:: **Table 1** Sharding algorithms + + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + | Algorithm | Description | Database Sharding Supported | Table Sharding Supported | + +=============+==========================================================================================+=============================+==========================+ + | MOD_HASH | Performing a simple modulo operation | Yes | Yes | + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + | MOD_HASH_CI | Performing a simple modulo operation (case-insensitive) | Yes | Yes | + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + | HASH | Performing a simple modulo operation | Yes | Yes | + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + | RANGE | Performing a RANGE-based operation | Yes | No | + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + | RIGHT_SHIFT | Arithmetic right shifting of a sharding key value and then performing a modulo operation | Yes | Yes | + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + | YYYYMM | Getting a hash code for a YearMonth object and then performing a modulo operation | Yes | Yes | + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + | YYYYDD | Getting a hash code for 
a YearDay object and then performing a modulo operation | Yes | Yes | + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + | YYYYWEEK | Getting a hash code for a YearWeek object and then performing a modulo operation | Yes | Yes | + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + | MM | Getting a hash code for a MONTH object and then performing a modulo operation | No | Yes | + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + | DD | Getting a hash code for a DAY object and then performing a modulo operation | No | Yes | + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + | MMDD | Getting a hash code for a MonthDay object and then performing a modulo operation | No | Yes | + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + | WEEK | Getting a hash code for a WEEK object and then performing a modulo operation | No | Yes | + +-------------+------------------------------------------------------------------------------------------+-----------------------------+--------------------------+ + +.. note:: + + - Database and table sharding keys cannot be left blank. + - In DDM, sharding of a logical table is defined by the sharding function (number of shards and routing algorithm) and the sharding key (MySQL data type). 
+ - If a logical table uses different database and table sharding algorithms, DDM will perform full-shard or full-table scanning when you do not specify database and table conditions in SQL queries. + +Data Type of Sharding Algorithms +-------------------------------- + +Different sharding algorithms support different data types. The following table lists supported data types. + +.. table:: **Table 2** Supported data types + + +--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + | Sharding Algorithm | TINYINT | SMALLINT | MEDIUMINT | INTEGER | INT | BIGINT | CHAR | VARCHAR | DATE | DATETIME | TIMESTAMP | Others | + +====================+=========+==========+===========+=========+=====+========+======+=========+======+==========+===========+========+ + | MOD_HASH | Y | Y | Y | Y | Y | Y | Y | Y | N | N | N | N | + +--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + | MOD_HASH_CI | Y | Y | Y | Y | Y | Y | Y | Y | N | N | N | N | + +--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + | HASH | Y | Y | Y | Y | Y | Y | Y | Y | N | N | N | N | + +--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + | RANGE | Y | Y | Y | Y | Y | Y | N | N | N | N | N | N | + +--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + | RIGHT_SHIFT | Y | Y | Y | Y | Y | Y | N | N | N | N | N | N | + +--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + | YYYYMM | N | N | N | N | N | N | N | N | Y | Y | Y | N | + 
+--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + | YYYYDD | N | N | N | N | N | N | N | N | Y | Y | Y | N | + +--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + | YYYYWEEK | N | N | N | N | N | N | N | N | Y | Y | Y | N | + +--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + | MM | N | N | N | N | N | N | N | N | Y | Y | Y | N | + +--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + | DD | N | N | N | N | N | N | N | N | Y | Y | Y | N | + +--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + | MMDD | N | N | N | N | N | N | N | N | Y | Y | Y | N | + +--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + | WEEK | N | N | N | N | N | N | N | N | Y | Y | Y | N | + +--------------------+---------+----------+-----------+---------+-----+--------+------+---------+------+----------+-----------+--------+ + +.. note:: + + **Y** indicates that the data type is supported, and **N** indicates that the data type is not supported. + +Table Creation Syntax of Sharding Algorithms +-------------------------------------------- + +DDM is compatible with table creation syntax of MySQL databases and adds keyword **partition_options** for databases and tables sharding. + +.. code-block:: + + CREATE TABLE [IF NOT EXISTS] tbl_name + (create_definition,...) 
+ [table_options] + [partition_options] + partition_options: + DBPARTITION BY + {{RANGE|HASH|MOD_HASH|RIGHT_SHIFT|YYYYMM|YYYYWEEK|YYYYDD}([column])} + [TBPARTITION BY + {{HASH|MOD_HASH|UNI_HASH|RIGHT_SHIFT|YYYYMM|YYYYWEEK|YYYYDD}(column)} + [TBPARTITIONS num] + ] diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/dd.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/dd.rst new file mode 100644 index 0000000..e1710e1 --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/dd.rst @@ -0,0 +1,56 @@ +:original_name: ddm_10_0015.html + +.. _ddm_10_0015: + +DD +== + +Application Scenarios +--------------------- + +This algorithm applies if you want to shard data by date. One table shard for one day is recommended, and its name is the day number. + +Instructions +------------ + +- The sharding key must be DATE, DATETIME, or TIMESTAMP. +- This algorithm can be used only for table sharding. It cannot be used for database sharding. + +Data Routing +------------ + +Use the day number in the sharding key value to find the remainder. This remainder determines which table shard your data is routed to and serves as the name suffix of the table shard. + +For example, if the sharding key value is **2019-01-15**, the calculation of the table shard is: Day number in a month mod Table shards, that is, 15 mod 31 = 15. + +Calculation Method +------------------ + +.. 
table:: **Table 1** Required calculation methods + + +-----------------------+----------------------------------------------------------------+--------------------------------+ + | Condition | Calculation Method | Example | + +=======================+================================================================+================================+ + | None | Table routing result = Table sharding key value % Table shards | Sharding key value: 2019-01-15 | + | | | | + | | | Table shard: 15 mod 31 = 15 | + +-----------------------+----------------------------------------------------------------+--------------------------------+ + +Syntax for Creating Tables +-------------------------- + +.. code-block:: + + create table test_dd_tb ( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8 + dbpartition by MOD_HASH(id) + tbpartition by DD(create_time) tbpartitions 31; + +Precautions +----------- + +Table shards in each database shard cannot exceed 31 because there are at most 31 days in a month. diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/hash.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/hash.rst new file mode 100644 index 0000000..655fb34 --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/hash.rst @@ -0,0 +1,98 @@ +:original_name: ddm_10_0012.html + +.. _ddm_10_0012: + +HASH +==== + +Application Scenarios +--------------------- + +This algorithm features even distribution of data and sharding tables. Arithmetic operators such as equality (=) and IN operators are often used in SQL queries. + +Instructions +------------ + +The sharding key must be CHAR, VARCHAR, INT, INTEGER, BIGINT, MEDIUMINT, SMALLINT, TINYINT, or DECIMAL (the precision can be 0). The sharding key must be DATE, DATETIME, or TIMESTAMP if you use HASH together with date functions. + +Data Routing +------------ + +Determine the range of each database or table shard using 102400. 
+ +For example, if there are 8 shards in each schema, use formula 102400/8 = 12800 to calculate the range of each shard as follows: 0=[0,12799], 1=[12800,25599], 2=[25600,38399], 3=[38400,51199], 4=[51200,63999], 5=[64000,76799], 6=[76800,89599], and 7=[89600,102399]. + +To determine the route, calculate CRC32 value based on the sharding key value and divide the CRC value by 102400. Then check which range the remainder belongs to. + +Calculation Method +------------------ + +**Method 1: Use a Non-date Sharding Key** + +.. table:: **Table 1** Required calculation methods when the sharding key is not the DATE type + + +-----------------------+-----------------------------------------------------------------+---------------------------------------------------------------------+ + | Condition | Calculation Method | Example | + +=======================+=================================================================+=====================================================================+ + | Non-date sharding key | Database routing result = crc32(Database sharding key) % 102400 | Database/Table shard: crc32(16) % 102400 = 49364; | + | | | | + | | Table routing result = crc32(Table sharding key) % 102400 | 49364 belongs to range 3=38400-51199, so data is routed to shard 3. | + +-----------------------+-----------------------------------------------------------------+---------------------------------------------------------------------+ + +**Method 2: Use a Date Sharding Key** + +.. 
table:: **Table 2** Supported date functions + + +---------------+--------------------------------------------------------+------------------------------+ + | Date Function | Calculation Method | Example | + +===============+========================================================+==============================+ + | year() | year(yyyy-MM-dd)=yyyy | year('2019-10-11')=2019 | + +---------------+--------------------------------------------------------+------------------------------+ + | month() | month(yyyy-MM-dd)=MM | month('2019-10-11')=10 | + +---------------+--------------------------------------------------------+------------------------------+ + | weekofyear() | weekofyear(yyyy-MM-dd)=Week number of the current year | weekofyear ('2019-10-11')=41 | + +---------------+--------------------------------------------------------+------------------------------+ + | day() | day(yyyy-MM-dd)=dd | day ('2019-10-11')=11 | + +---------------+--------------------------------------------------------+------------------------------+ + +.. table:: **Table 3** Required calculation methods when the sharding key is the DATE type + + +-----------------------+--------------------------------------------------------------------------------+------------------------------------------------------------------+ + | Condition | Calculation Method | Example | + +=======================+================================================================================+==================================================================+ + | Date sharding key | Database routing result = crc32(Date function(Database sharding key)) % 102400 | Database/Table shard: crc32(year('2019-10-11')) % 102400 = 5404; | + | | | | + | | Table routing result = crc32(Date function(Database sharding key)) % 102400 | 5404 belongs to range 0=0-12799, so data is routed to shard 0. 
| + +-----------------------+--------------------------------------------------------------------------------+------------------------------------------------------------------+ + +Syntax for Creating Tables +-------------------------- + +- Assume that you use field ID as the sharding key and the HASH algorithm to shard databases: + + .. code-block:: + + create table hash_tb ( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8 dbpartition by hash (ID); + +- Assume that you use field ID as the sharding key and the hash algorithm to shard databases and tables: + + .. code-block:: + + create table mod_hash_tb ( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8 + dbpartition by hash (ID) + tbpartition by hash (ID) tbpartitions 4; + +Precautions +----------- + +None diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/index.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/index.rst new file mode 100644 index 0000000..ef51f00 --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/index.rst @@ -0,0 +1,36 @@ +:original_name: ddm_10_0018.html + +.. _ddm_10_0018: + +Sharding Algorithms +=================== + +- :ref:`MOD_HASH ` +- :ref:`MOD_HASH_CI ` +- :ref:`RIGHT_SHIFT ` +- :ref:`MM ` +- :ref:`DD ` +- :ref:`WEEK ` +- :ref:`MMDD ` +- :ref:`YYYYMM ` +- :ref:`YYYYDD ` +- :ref:`YYYYWEEK ` +- :ref:`HASH ` +- :ref:`Range ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + mod_hash + mod_hash_ci + right_shift + mm + dd + week + mmdd + yyyymm + yyyydd + yyyyweek + hash + range diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/mm.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/mm.rst new file mode 100644 index 0000000..25b9477 --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/mm.rst @@ -0,0 +1,56 @@ +:original_name: ddm_10_0014.html + +.. 
_ddm_10_0014: + +MM +== + +Application Scenarios +--------------------- + +This algorithm applies if you want to shard data by month. One table shard for one month is recommended, and its name is the month number. + +Instructions +------------ + +- The sharding key must be DATE, DATETIME, or TIMESTAMP. +- This algorithm can be used only for table sharding. It cannot be used for database sharding. + +Data Routing +------------ + +Use the month number in the sharding key value to find the remainder. This remainder determines which table shard your data is routed to and serves as the name suffix of each table shard. + +For example, if the sharding key value is **2019-01-15**, the calculation of the table shard is: Month number mod Table shards, that is, 1 mod 12 = 1. + +Calculation Method +------------------ + +.. table:: **Table 1** Required calculation methods + + +-----------------------+----------------------------------------------------------------+--------------------------------+ + | Condition | Calculation Method | Example | + +=======================+================================================================+================================+ + | None | Table routing result = Table sharding key value % Table shards | Sharding key value: 2019-01-15 | + | | | | + | | | Table shard: 1 mod 12 = 1 | + +-----------------------+----------------------------------------------------------------+--------------------------------+ + +Syntax for Creating Tables +-------------------------- + +.. code-block:: + + create table test_mm_tb ( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8 + dbpartition by MOD_HASH(id) + tbpartition by MM(create_time) tbpartitions 12; + +Precautions +----------- + +Table shards in each database shard cannot exceed 12 because there are only 12 months in a year.
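The month-based routing described above can be sketched in a few lines of Python (an illustrative model only, not DDM source code; the helper name ``mm_table_shard`` is made up for this example):

```python
# Illustrative sketch of MM routing (not DDM code): the table-shard
# suffix is the month number of the sharding key modulo tbpartitions.
from datetime import date


def mm_table_shard(sharding_key: date, tbpartitions: int = 12) -> int:
    """Return the table-shard suffix for a DATE sharding key."""
    return sharding_key.month % tbpartitions


# Example from the text: sharding key 2019-01-15 -> 1 mod 12 = 1
print(mm_table_shard(date(2019, 1, 15)))  # 1
```

Note that with 12 table shards the suffixes range from 0 to 11, so December (month 12) maps to shard 0 in this model.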
diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/mmdd.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/mmdd.rst new file mode 100644 index 0000000..1c65c4c --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/mmdd.rst @@ -0,0 +1,56 @@ +:original_name: ddm_10_0016.html + +.. _ddm_10_0016: + +MMDD +==== + +Application Scenarios +--------------------- + +This algorithm applies when you want to shard data by day in a year. One table shard for one day (at most 366 days in a year) is recommended. + +Instructions +------------ + +- The sharding key must be DATE, DATETIME, or TIMESTAMP. +- This algorithm can be used only for table sharding. It cannot be used for database sharding. + +Data Routing +------------ + +Use the day number of a year in the sharding key value to find the remainder. This remainder determines which table shard your data is routed to and serves as the name suffix of each table shard. + +For example, if the sharding key value is **2019-01-15**, the calculation of the table shard is: Day number in a year mod Table shards, that is, 15 mod 366 = 15. + +Calculation Method +------------------ + +.. table:: **Table 1** Required calculation methods + + +-----------------------+----------------------------------------------------------------+--------------------------------+ + | Condition | Calculation Method | Example | + +=======================+================================================================+================================+ + | None | Table routing result = Table sharding key value % Table shards | Sharding key value: 2019-01-15 | + | | | | + | | | Table shard: 15 % 366= 15 | + +-----------------------+----------------------------------------------------------------+--------------------------------+ + +Syntax for Creating Tables +-------------------------- + +.. 
code-block:: + + create table test_mmdd_tb ( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8 + dbpartition by MOD_HASH(name) + tbpartition by MMDD(create_time) tbpartitions 366; + +Precautions +----------- + +Table shards in each database shard cannot exceed 366 because there are at most 366 days in a year. diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/mod_hash.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/mod_hash.rst new file mode 100644 index 0000000..48806ca --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/mod_hash.rst @@ -0,0 +1,99 @@ +:original_name: ddm_10_0002.html + +.. _ddm_10_0002: + +MOD_HASH +======== + +Application Scenarios +--------------------- + +This algorithm applies if you want to route data to different database shards by user ID or order ID. + +Instructions +------------ + +The sharding key must be CHAR, VARCHAR, INT, INTEGER, BIGINT, MEDIUMINT, SMALLINT, TINYINT, or DECIMAL (the precision can be 0). + +Data Routing +------------ + +The data route depends on the remainder of the sharding key value divided by database or table shards. If the value is a string, convert the string into a hashed value and calculate the data route based on the value. + +For example, if the sharding key value is **8**, MOD_HASH('8') is equivalent to 8 % D. D is the number of database or table shards. + +Calculation Method +------------------ + +**Method 1: Use an Integer as the Sharding Key** + +.. 
table:: **Table 1** Required calculation methods when the sharding key is the integer data type + + +--------------------------------------------+------------------------------------------------------------------------------+--------------------------------+ + | Condition | Calculation Method | Example | + +============================================+==============================================================================+================================+ + | Database sharding key ≠ Table sharding key | Database routing result = Database sharding key value % Database shards | Database shard: 16 % 8 = 0 | + | | | | + | | Table routing result = Table sharding key value % Table shards | Table shard: 16 % 3 = 1 | + +--------------------------------------------+------------------------------------------------------------------------------+--------------------------------+ + | Database sharding key = Table sharding key | Table routing result = Sharding key value % (Database shards x Table shards) | Table shard: 16 % (8 x 3) = 16 | + | | | | + | | Database routing result = Table routing result/Table shards | Database shard: 16/3 = 5 | + | | | | + | | .. note:: | | + | | | | + | | Database routing result is rounded off to the nearest integer. | | + +--------------------------------------------+------------------------------------------------------------------------------+--------------------------------+ + +**Method 2: Use a String as the Sharding Key** + +.. 
table:: **Table 2** Required calculation methods when the sharding key is the string data type + + +--------------------------------------------+------------------------------------------------------------------------------------+--------------------------------------+ + | Condition | Calculation Method | Example | + +============================================+====================================================================================+======================================+ + | Database sharding key ≠ Table sharding key | Database routing result = hash(Database sharding key value) % Database shards | hash('abc') = 'abc'.hashCode()=96354 | + | | | | + | | Table routing result = hash(Table sharding key value) % Table shards | Database shard: 96354 % 8 = 2; | + | | | | + | | | Table shard: 96354 % 3 = 0; | + +--------------------------------------------+------------------------------------------------------------------------------------+--------------------------------------+ + | Database sharding key = Table sharding key | Table routing result = hash(Sharding key value) % (Database shards x Table shards) | hash('abc') = 'abc'.hashCode()=96354 | + | | | | + | | Database routing result = Table routing result/Table shards | Table shard: 96354 % (8 x 3) = 18 | + | | | | + | | .. note:: | Database shard: 18/3 = 6 | + | | | | + | | Database routing result is rounded off to the nearest integer. | | + +--------------------------------------------+------------------------------------------------------------------------------------+--------------------------------------+ + +Syntax for Creating Tables +-------------------------- + +- Assume that you use field **ID** as the sharding key to shard databases based on MOD_HASH: + + .. 
code-block:: + + create table mod_hash_tb( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE = InnoDB DEFAULT CHARSET = utf8 dbpartition by mod_hash(ID); + +- Assume that you use field **ID** as the sharding key to shard databases and tables based on MOD_HASH: + + .. code-block:: + + create table mod_hash_tb( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE = InnoDB DEFAULT CHARSET = utf8 + dbpartition by mod_hash(ID) tbpartition by mod_hash(ID) tbpartitions 4; + +Precautions +----------- + +The MOD_HASH algorithm is a simple way to find the remainder of the sharding key value divided by shards. This algorithm features even distribution of sharding key values to ensure even results. diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/mod_hash_ci.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/mod_hash_ci.rst new file mode 100644 index 0000000..f0f0ed3 --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/mod_hash_ci.rst @@ -0,0 +1,98 @@ +:original_name: ddm_12_0007.html + +.. _ddm_12_0007: + +MOD_HASH_CI +=========== + +Application Scenarios +--------------------- + +This algorithm applies if you want to route data to different database shards by user ID or order ID. + +Instructions +------------ + +The sharding key must be CHAR, VARCHAR, INT, INTEGER, BIGINT, MEDIUMINT, SMALLINT, TINYINT, or DECIMAL (the precision can be 0). + +Data Routing +------------ + +The data route depends on the remainder of the sharding key value divided by database or table shards. MOD_HASH is case-sensitive, but MOD_HASH_CI is not. + +Calculation Method +------------------ + +**Method 1: Use an Integer as the Sharding Key** + +.. 
table:: **Table 1** Required calculation methods when the sharding key is the integer data type + + +--------------------------------------------+------------------------------------------------------------------------------+--------------------------------+ + | Condition | Calculation Method | Example | + +============================================+==============================================================================+================================+ + | Database sharding key ≠ Table sharding key | Database routing result = Database sharding key value % Database shards | Database shard: 16 % 8 = 0 | + | | | | + | | Table routing result = Table sharding key value % Table shards | Table shard: 16 % 3 = 1 | + +--------------------------------------------+------------------------------------------------------------------------------+--------------------------------+ + | Database sharding key = Table sharding key | Table routing result = Sharding key value % (Database shards x Table shards) | Table shard: 16 % (8 x 3) = 16 | + | | | | + | | Database routing result = Table routing result/Table shards | Database shard: 16/3 = 5 | + | | | | + | | .. note:: | | + | | | | + | | Database routing result is rounded off to the nearest integer. | | + +--------------------------------------------+------------------------------------------------------------------------------+--------------------------------+ + +**Method 2: Use a String as the Sharding Key** + +.. 
table:: **Table 2** Required calculation methods when the sharding key is the string data type + + +--------------------------------------------+------------------------------------------------------------------------------------+----------------------------------------------------+ + | Condition | Calculation Method | Example | + +============================================+====================================================================================+====================================================+ + | Database sharding key ≠ Table sharding key | Database routing result = hash(Database sharding key value) % Database shards | hash('abc') = 'abc'.toUpperCase().hashCode()=64578 | + | | | | + | | Table routing result = hash(Table sharding key value) % Table shards | Database shard: 64578 % 8 = 2; | + | | | | + | | | Table shard: 64578 % 3 = 0; | + +--------------------------------------------+------------------------------------------------------------------------------------+----------------------------------------------------+ + | Database sharding key = Table sharding key | Table routing result = hash(Sharding key value) % (Database shards x Table shards) | hash('abc') = 'abc'.toUpperCase().hashCode()=64578 | + | | | | + | | Database routing result = Table routing result/Table shards | Table shard: 64578 % (8 x 3) = 18 | + | | | | + | | .. note:: | Database shard: 18/3 = 6 | + | | | | + | | Database routing result is rounded off to the nearest integer. | | + +--------------------------------------------+------------------------------------------------------------------------------------+----------------------------------------------------+ + +Syntax for Creating Tables +-------------------------- + +- Assume that you use field **ID** as the sharding key to shard databases based on MOD_HASH_CI: + + .. 
code-block:: + + create table mod_hash_ci_tb( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE = InnoDB DEFAULT CHARSET = utf8 dbpartition by mod_hash_ci(id); + +- Assume that you use field **ID** as the sharding key to shard databases and tables based on MOD_HASH_CI: + + .. code-block:: + + create table mod_hash_ci_tb( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE = InnoDB DEFAULT CHARSET = utf8 + dbpartition by mod_hash_ci(id) + tbpartition by mod_hash_ci(id) tbpartitions 4; + +Precautions +----------- + +The MOD_HASH_CI algorithm is a simple way to find the remainder of the sharding key value divided by shards. This algorithm features even distribution of sharding key values to ensure even results. diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/range.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/range.rst new file mode 100644 index 0000000..06d51fc --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/range.rst @@ -0,0 +1,92 @@ +:original_name: ddm_10_0013.html + +.. _ddm_10_0013: + +Range +===== + +Application Scenarios +--------------------- + +This algorithm applies to routing data in different ranges to different shards. Less-than signs (<), greater-than signs (>), and BETWEEN ... AND ... are frequently used in SQL queries. + +Instructions +------------ + +The sharding key can only be an integer, a date, or used in combination with a date function. If a date function is used, the sharding key must be DATE, DATETIME, or TIMESTAMP. + +Data Routing +------------ + +Data is routed to different shards by the sharding key value based on algorithm metadata rules. + +Metadata needs to be set when a table is created. For example, if there are eight shards in one schema, the metadata range can be [1-2]=0, [3-4]=1, [5-6]=2, [7-8]=3, [9-10]=4, [11-12]=5, [13-14]=6, and default=7. 
Data is routed to shards by the sharding key value based on the range. + +Calculation Method +------------------ + +**Method 1: Use an Integer as the Sharding Key** + +.. table:: **Table 1** Required calculation methods when the sharding key is the integer data type + + +-----------------------+----------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------+ + | Condition | Calculation Method | Example | + +=======================+======================================================================================================================+=================================================================================================+ + | Integer sharding keys | Database routing result: Data is routed to different shards based on the sharding key and the preset metadata range. | Data is routed to shard1 if the sharding key value is 3 and the preset metadata range is [3-4]. | + +-----------------------+----------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------+ + +**Method 2: Use a Date as the Sharding Key** + +.. 
table:: **Table 2** Supported date functions + + +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+ + | Date Function | Calculation Method | Example | + +=======================+================================================================================================================================================================================================================================================+==============================+ + | year() | year(yyyy-MM-dd)=yyyy | year('2019-10-11')=2019 | + +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+ + | month() | month(yyyy-MM-dd)=MM | month('2019-10-11')=10 | + +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+ + | weekofyear() | weekofyear(yyyy-MM-dd)=Week number of the current year | weekofyear ('2019-10-11')=41 | + | | | | + | | .. note:: | | + | | | | + | | The Weekofyear() function is used to return the week number of a specific date represented by the date parameter in a year. For details, see `WEEK `__. 
| | + +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+ + | day() | day(yyyy-MM-dd)=dd | day ('2019-10-11')=11 | + +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+ + +.. table:: **Table 3** Calculation methods + + +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ + | Condition | Calculation Method | Example | + +===================+=======================================================================================================================================================+=========================================================================================================================================+ + | Date sharding key | Database routing: Data is routed to different database shards based on the date function (database sharding key value) and the preset metadata range. | Data is routed to shard 4 based on the metadata range 9-10 when the sharding key value is 10: month(2019-10-11)=10 belongs to [9-10]=4. 
| + +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ + +Syntax for Creating Tables +-------------------------- + +.. code-block:: text + + create table range_tb( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) + dbpartition by range(id) + { + 1-2=0, + 3-4=1, + 5-6=2, + 7-8=3, + 9-10=4, + 11-12=5, + 13-14=6, + default=7 + }; + +Precautions +----------- + +None diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/right_shift.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/right_shift.rst new file mode 100644 index 0000000..ec5e91b --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/right_shift.rst @@ -0,0 +1,57 @@ +:original_name: ddm_10_0004.html + +.. _ddm_10_0004: + +RIGHT_SHIFT +=========== + +Application Scenarios +--------------------- + +This algorithm applies if a large difference appears in high-digit part but a small difference in low-digit part of sharding key values. Using this algorithm ensures uniform distribution of remainders calculated from sharding key values. Therefore, data is evenly routed to different shards. + +Instructions +------------ + +The sharding key value is an integer. + +Data Routing +------------ + +The data route depends on the remainder of the new sharding key value divided by the number of database or table shards. To change the sharding key value, you need to convert the value into a binary number and right shift its bits to gain a new binary number. The number of moved bits is specified in DDL statements. Then, convert the new binary number into a decimal number. This decimal number is the changed sharding key value. + +Calculation Method +------------------ + +.. 
table:: **Table 1** Required calculation methods + + +--------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+ + | Condition                                  | Calculation Method                                                                                                                   | Example                                                             | + +============================================+======================================================================================================================================+=====================================================================+ + | Database sharding key ≠ Table sharding key | Database routing result = Database sharding key value % Database shards                                                              | Database shard: (123456 >> 4) % 8 = 4                               | + |                                            |                                                                                                                                      |                                                                     | + |                                            | Table routing result = Table sharding key value % Table shards                                                                       | Table shard: (123456 >> 4) % 3 = 0                                  | + +--------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+ + | Database sharding key = Table sharding key | Database routing result = Sharding key value % Database shards                                                                       | Database shard: (123456 >> 4) % 8 = 4                               | + |                                            |                                                                                                                                      |                                                                     | + |                                            | Table routing result = (Sharding key value % Database shards) x Table shards + (Sharding key value / Database shards) % Table shards | Table shard: ((123456 >> 4) % 8) x 3 + ((123456 >> 4) / 8) % 3 = 13 | + +--------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+ + +Syntax for Creating Tables +-------------------------- + +..
code-block:: + + create table RIGHT_SHIFT( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE = InnoDB DEFAULT CHARSET = utf8 + dbpartition by RIGHT_SHIFT(id, 4) + tbpartition by RIGHT_SHIFT(id, 4) tbpartitions 2; + +Precautions +----------- + +- The number of shifts cannot exceed the number of bits occupied by the integer type. diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/week.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/week.rst new file mode 100644 index 0000000..b504773 --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/week.rst @@ -0,0 +1,69 @@ +:original_name: ddm_10_0017.html + +.. _ddm_10_0017: + +WEEK +==== + +Application Scenarios +--------------------- + +This algorithm applies when you want to shard data by day in a week. One table shard for one weekday is recommended. + +Instructions +------------ + +- The sharding key must be DATE, DATETIME, or TIMESTAMP. +- This algorithm can be used only for table sharding. It cannot be used for database sharding. + +Data Routing +------------ + +Use the day number of a week in the sharding key value to find the remainder. This remainder determines which table shard your data is routed to and serves as the name suffix of each table shard. + +For example, if the sharding key value is **2019-01-15**, the calculation of the table shard is: Day number in a week mod Table shards, that is, 3 mod 7 = 3. + +.. note:: + + For details on how to calculate a weekday for any particular date, see `WEEKDAY(date) `__. + + Run the following SQL statement to query the WEEKDAY value for a specified date: + + .. code-block:: + + mysql> SELECT WEEKDAY('2019-01-15'); + -> 1 + + If the value returned from the above SQL statement is **1**, the weekday for date 2019-01-15 is Tuesday. Sunday is the first day of the week, so Tuesday is the third day of the week. + +Calculation Method +------------------ + +.. 
table:: **Table 1** Required calculation methods + + +-----------------------+----------------------------------------------------------------+--------------------------------+ + | Condition | Calculation Method | Example | + +=======================+================================================================+================================+ + | None | Table routing result = Table sharding key value % Table shards | Sharding key value: 2019-01-15 | + | | | | + | | | Table shard: 3 mod 7= 3 | + +-----------------------+----------------------------------------------------------------+--------------------------------+ + +Syntax for Creating Tables +-------------------------- + +.. code-block:: + + create table test_week_tb ( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8 + dbpartition by HASH(name) + tbpartition by WEEK(create_time) tbpartitions 7; + +Precautions +----------- + +Table shards in each database shard cannot exceed 7 because there are 7 days in a week. diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/yyyydd.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/yyyydd.rst new file mode 100644 index 0000000..d78f514 --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/yyyydd.rst @@ -0,0 +1,88 @@ +:original_name: ddm_10_0007.html + +.. _ddm_10_0007: + +YYYYDD +====== + +Application Scenarios +--------------------- + +This algorithm applies when data is routed to shards by year and day. You are advised to use this algorithm together with tbpartition YYYYDD(ShardKey). + +Instructions +------------ + +The sharding key must be DATE, DATETIME, or TIMESTAMP. + +Data Routing +------------ + +Use the hash function and enter the year and the day of the year specified in the sharding key value to calculate the hash value. The data route depends on the remainder of the hash value divided by the number of database or table shards. 
+ +For example, YYYYDD('2012-12-31 12:12:12') is equivalent to (2012 x 366 + 366) % D. D is the number of database or table shards. + +.. note:: + + 2012-12-31 is the 366th day of 2012, so the calculation is 2012 x 366 + 366. + +Calculation Method +------------------ + +.. table:: **Table 1** Required calculation methods + + +--------------------------------------------+--------------------------------------------------+--------------------------------------------------+ + | Condition                                  | Calculation Method                               | Example                                          | + +============================================+==================================================+==================================================+ + | Database sharding key ≠ Table sharding key | Sharding key: yyyy-MM-dd                         | Sharding key: 2012-12-31                         | + |                                            |                                                  |                                                  | + |                                            | Database routing result = (yyyy x 366 + Day of the current year) % Database shards | Database shard: (2012 x 366 + 366) % 8 = 6       | + |                                            |                                                  |                                                  | + |                                            | Table routing result = (yyyy x 366 + Day of the current year) % Table shards | Table shard: (2012 x 366 + 366) % 3 = 0          | + +--------------------------------------------+--------------------------------------------------+--------------------------------------------------+ + | Database sharding key = Table sharding key | Sharding key: yyyy-MM-dd                         | Sharding key: 2012-12-31                         | + |                                            |                                                  |                                                  | + |                                            | Table routing result = (yyyy x 366 + Day of the current year) % (Database shards x Table shards) | Table shard: (2012 x 366 + 366) % (8 x 3) = 6    | + |                                            |                                                  |                                                  | + |                                            | Database routing result = Table routing result/Table shards | Database shard: 6/3 = 2                          | + |                                            |                                                  |                                                  | + |                                            | .. note::                                        |                                                  | + |                                            |                                                  |                                                  | + |                                            |    Database routing result is rounded off to the nearest integer. 
| | + +--------------------------------------------+--------------------------------------------------------------------------------------------------+--------------------------------------------------+ + +Syntax for Creating Tables +-------------------------- + +Assume that there are already 8 physical databases in your database instance. Now you want to shard data by year and day and require that data of the same day be stored in one table and each day within two years should correspond to an independent table, so that you can query data from a physical table in a physical database by the sharding key. + +In this scenario, you can select the YYYYDD algorithm. Then create at least 732 physical tables for 732 days of the two years (366 days for one year), each day corresponding to one table. Since you already have 8 physical databases, 92 (732/8 = 91.5, rounded up to 92) physical tables should be created in each of them. The number of tables should be an integral multiple of databases. The following is an example SQL statement for creating a table: + +.. code-block:: + + create table test_yyyydd_tb ( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8 + dbpartition by YYYYDD(create_time) + tbpartition by YYYYDD(create_time) tbpartitions 92; + +Syntax for creating tables when only database sharding is required: + +.. code-block:: + + create table YYYYDD( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE = InnoDB DEFAULT CHARSET = utf8 + dbpartition by YYYYDD(create_time); + +Precautions +----------- + +- This YYYYDD algorithm does not apply if each day of a year corresponds to one database shard. The number of tables must be fixed if database and table sharding is both required. +- Data of the same day in different years may be routed to the same shard. The result depends on the number of tables. 
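The YYYYDD arithmetic above is easy to verify in a few lines of code. Below is a minimal sketch (the helper name ``yyyydd_route`` is illustrative, not a DDM API) that reproduces the worked example for the case where the database and table sharding keys are the same column:

```python
from datetime import datetime

def yyyydd_route(value: datetime, db_shards: int, tb_shards: int):
    """Sketch of YYYYDD routing when the database and table sharding
    keys are the same column: hash = year * 366 + day-of-year."""
    key = value.year * 366 + value.timetuple().tm_yday
    table_shard = key % (db_shards * tb_shards)   # table routing result
    database_shard = table_shard // tb_shards     # rounded down
    return database_shard, table_shard

# 2012-12-31 is day 366 of 2012: (2012 x 366 + 366) % 24 = 6, and 6/3 = 2.
print(yyyydd_route(datetime(2012, 12, 31, 12, 12, 12), 8, 3))  # (2, 6)
```

Because the routing result is a plain modulo of ``year * 366 + day``, the same day in different years can land in the same shard, which is why the precaution above says the result depends on the number of tables.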
diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/yyyymm.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/yyyymm.rst new file mode 100644 index 0000000..773e248 --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/yyyymm.rst @@ -0,0 +1,84 @@ +:original_name: ddm_10_0006.html + +.. _ddm_10_0006: + +YYYYMM +====== + +Application Scenarios +--------------------- + +This algorithm applies when data is routed to shards by year and month. You are advised to use this algorithm together with tbpartition YYYYMM(ShardKey). + +Instructions +------------ + +The sharding key must be DATE, DATETIME, or TIMESTAMP. + +Data Routing +------------ + +The data route depends on the remainder of the sharding key hash value divided by database shards. Enter the year and month into the hash function to obtain the hash value. + +For example, YYYYMM ('2012-12-31 12:12:12') is equivalent to (2012 x 12 + 12) % D. D is the number of database or table shards. + +Calculation Method +------------------ + +.. 
table:: **Table 1** Required calculation methods + + +--------------------------------------------+----------------------------------------------------------------------------+----------------------------------------------+ + | Condition                                  | Calculation Method                                                         | Example                                      | + +============================================+============================================================================+==============================================+ + | Database sharding key ≠ Table sharding key | Sharding key: yyyy-MM-dd                                                   | Sharding key: 2012-11-20                     | + |                                            |                                                                            |                                              | + |                                            | Database routing result = (yyyy x 12 + MM) % Database shards               | Database shard: (2012 x 12 + 11) % 8 = 3     | + |                                            |                                                                            |                                              | + |                                            | Table routing result = (yyyy x 12 + MM) % Table shards                     | Table shard: (2012 x 12 + 11) % 3 = 2        | + +--------------------------------------------+----------------------------------------------------------------------------+----------------------------------------------+ + | Database sharding key = Table sharding key | Sharding key: yyyy-MM-dd                                                   | Sharding key: 2012-11-20                     | + |                                            |                                                                            |                                              | + |                                            | Table routing result = (yyyy x 12 + MM) % (Database shards x Table shards) | Table shard: (2012 x 12 + 11) % (8 x 3) = 11 | + |                                            |                                                                            |                                              | + |                                            | Database routing result = Table routing result/Table shards                | Database shard: 11/3 = 3                     | + |                                            |                                                                            |                                              | + |                                            | .. note::                                                                  |                                              | + |                                            |                                                                            |                                              | + |                                            |    Database routing result is rounded down to the nearest integer.         | + +--------------------------------------------+----------------------------------------------------------------------------+----------------------------------------------+ + +Syntax for Creating Tables +-------------------------- + +Assume that there are already 8 physical databases in your database instance. 
Now you want to shard data by year and month and require that data of the same month be stored in one table and each month within two years should correspond to an independent table, so that you can query data from a physical table in a physical database by the sharding key. + +In this scenario, you can select the YYYYMM algorithm. Then create 24 physical tables for 24 months of the two years, each month corresponding to one table. Since you already have 8 physical databases, three physical tables should be created in each of them. The following is an example SQL statement for creating a table: + +.. code-block:: + + create table test_yyyymm_tb( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE = InnoDB DEFAULT CHARSET = utf8 + dbpartition by YYYYMM(create_time) + tbpartition by YYYYMM(create_time) tbpartitions 3; + +Syntax for creating tables when only database sharding is required: + +.. code-block:: + + create table YYYYMM( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE = InnoDB DEFAULT CHARSET = utf8 + dbpartition by YYYYMM(create_time); + +Precautions +----------- + +- This YYYYMM algorithm does not apply if each month of a year corresponds to one database shard. The number of tables must be fixed if database and table sharding is both required. +- Data of the same month in different years may be routed to the same database or table. The result depends on the number of tables. diff --git a/umn/source/sql_syntax/ddl/sharding_algorithms/yyyyweek.rst b/umn/source/sql_syntax/ddl/sharding_algorithms/yyyyweek.rst new file mode 100644 index 0000000..b3bb00c --- /dev/null +++ b/umn/source/sql_syntax/ddl/sharding_algorithms/yyyyweek.rst @@ -0,0 +1,89 @@ +:original_name: ddm_10_0008.html + +.. _ddm_10_0008: + +YYYYWEEK +======== + +Application Scenarios +--------------------- + +This algorithm applies when data is routed to shards by week. 
You are advised to use this algorithm together with tbpartition YYYYWEEK(ShardKey). + +Instructions +------------ + +The sharding key must be DATE, DATETIME, or TIMESTAMP. + +Data Routing +------------ + +Use the hash function and enter the year and the week of the year specified in the sharding key value to calculate the hash value. The data route depends on the remainder of the hash value divided by the number of database or table shards. + +For example, YYYYWEEK('2012-12-31 12:12:12') is equivalent to (2013 x 54 + 1) % D. D is the number of database or table shards. + +.. note:: + + - 2012-12-31 is the first week of 2013, so the calculation is 2013 x 54 + 1. + - For details on how to use YYYYWEEK, see `YEARWEEK Function `__. + +Calculation Method +------------------ + +.. table:: **Table 1** Required calculation methods + + +--------------------------------------------+--------------------------------------------------------------------------------------------------+-----------------------------------------------+ + | Condition | Calculation Method | Example | + +============================================+==================================================================================================+===============================================+ + | Database sharding key ≠ Table sharding key | Sharding key: yyyy-MM-dd | Sharding key: 2012-12-31 | + | | | | + | | Database routing result = (yyyy x 54 + Week of the current year) % Database shards | Database shard: (2013 x 54 + 1) % 8 = 7 | + | | | | + | | Table routing result = (yyyy x 54 + Week of the current year) % Table shards | Table shard: (2013 x 54 + 1) % 3 = 1 | + +--------------------------------------------+--------------------------------------------------------------------------------------------------+-----------------------------------------------+ + | Database sharding key = Table sharding key | Sharding key: yyyy-MM-dd | Sharding key: 2012-12-31 | + | | | | + | | Table routing result = (yyyy 
x 54 + Week of the current year) % (Database shards x Table shards) | Table shard: (2013 x 54 + 1) % (8 x 3) = 7       | + |                                            |                                                                                                  |                                               | + |                                            | Database routing result = Table routing result/Table shards                                      | Database shard: 7/3 = 2                       | + |                                            |                                                                                                  |                                               | + |                                            | .. note::                                                                                        |                                               | + |                                            |                                                                                                  |                                               | + |                                            |    Database routing result is rounded off to the nearest integer.                                | + +--------------------------------------------+--------------------------------------------------------------------------------------------------+-----------------------------------------------+ + +Syntax for Creating Tables +-------------------------- + +Assume that there are already 8 physical databases in your database instance. Now you want to shard data by week and require that data of the same week be stored in one table and each week within two years should correspond to an independent table, so that you can query data from a physical table in a physical database by the sharding key. + +In this scenario, you can select the YYYYWEEK algorithm. Then create at least 106 physical tables, one for each of the 53 (rounded up) weeks in each of the two years. Since you already have 8 physical databases, 14 (14 x 8 = 112 > 106) physical tables should be created in each of them, because the total number of tables should be an integral multiple of the number of databases. The following is an example SQL statement for creating a table: + +.. code-block:: + + create table test_yyyyweek_tb( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE = InnoDB DEFAULT CHARSET = utf8 + dbpartition by YYYYWEEK(create_time) + tbpartition by YYYYWEEK(create_time) tbpartitions 14; + +Syntax for creating tables when only database sharding is required: + +..
code-block:: + + create table YYYYWEEK( + id int, + name varchar(30) DEFAULT NULL, + create_time datetime DEFAULT NULL, + primary key(id) + ) ENGINE = InnoDB DEFAULT CHARSET = utf8 + dbpartition by YYYYWEEK(create_time); + +Precautions +----------- + +- This YYYYWEEK algorithm does not apply if each week of a year corresponds to one database shard. The number of tables must be fixed if database and table sharding is both required. +- Data of the same week in different years may be routed to the same shard. diff --git a/umn/source/sql_syntax/dml/delete.rst b/umn/source/sql_syntax/dml/delete.rst new file mode 100644 index 0000000..4cd086b --- /dev/null +++ b/umn/source/sql_syntax/dml/delete.rst @@ -0,0 +1,23 @@ +:original_name: ddm-08-0008.html + +.. _ddm-08-0008: + +DELETE +====== + +DELETE is used to delete rows that meet conditions from a table. + +Common Syntax +------------- + +.. code-block:: text + + DELETE [IGNORE] + FROM tbl_name [WHERE where_condition] + +Syntax Restrictions +------------------- + +- The WHERE clause does not support subqueries, including correlated and non-correlated subqueries. +- Data in reference tables cannot be deleted when multiple tables are deleted at a time. +- PARTITION clauses are not supported. diff --git a/umn/source/sql_syntax/dml/index.rst b/umn/source/sql_syntax/dml/index.rst new file mode 100644 index 0000000..50bec52 --- /dev/null +++ b/umn/source/sql_syntax/dml/index.rst @@ -0,0 +1,30 @@ +:original_name: ddm-08-0004.html + +.. _ddm-08-0004: + +DML +=== + +- :ref:`INSERT ` +- :ref:`REPLACE ` +- :ref:`DELETE ` +- :ref:`UPDATE ` +- :ref:`SELECT ` +- :ref:`SELECT JOIN Syntax ` +- :ref:`SELECT UNION Syntax ` +- :ref:`SELECT Subquery Syntax ` +- :ref:`Supported System Schema Queries ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + insert + replace + delete + update + select + select_join_syntax + select_union_syntax + select_subquery_syntax + supported_system_schema_queries diff --git a/umn/source/sql_syntax/dml/insert.rst b/umn/source/sql_syntax/dml/insert.rst new file mode 100644 index 0000000..eda65f8 --- /dev/null +++ b/umn/source/sql_syntax/dml/insert.rst @@ -0,0 +1,42 @@ +:original_name: ddm-08-0005.html + +.. _ddm-08-0005: + +INSERT +====== + +INSERT is used to insert data into database objects. + +Common Syntax +------------- + +.. code-block:: + + INSERT [INTO] tbl_name + [(col_name,...)] + {VALUES | VALUE} ({expr },...),(...),... + [ ON DUPLICATE KEY UPDATE + col_name=expr + [, col_name=expr] ... ] + OR + INSERT [INTO] tbl_name + SET col_name={expr | DEFAULT}, ... + [ ON DUPLICATE KEY UPDATE + col_name=expr [, col_name=expr] ... ] + +Syntax Restrictions +------------------- + +- INSERT DELAYED is not supported. +- Only INSERT statements that contain sharding fields are supported. +- PARTITION syntax is not supported. Partitioned tables are not recommended. +- Setting **datetime** to **1582** or any value smaller in INSERT statements is not supported. +- INSERT cannot be used to insert sharding key value **DEFAULT**. +- If you specify an auto-increment key value in an INSERT statement and execute it on a sharded table, the auto-increment key value of the inserted data entry changes. Auto-increment key values of data entries inserted subsequently will increase based on the first inserted data entry unless you specify a new auto-increment key value. +- Referencing a table column in function REPEAT of the VALUES statement is not supported, for example, INSERT INTO T(NAME) VALUES(REPEAT(ID,3)). + +Use Constraints +--------------- + +- If the sharding key value in the INSERT statement is invalid, data is routed to database shard 0 or table shard 0 by default. +- Do not use functions VERSION, DATABASE, or USER in the INSERT statement. 
When you execute such functions, you may not obtain the expected results because the results depend on whether the functions are pushed down to data nodes for execution. diff --git a/umn/source/sql_syntax/dml/replace.rst b/umn/source/sql_syntax/dml/replace.rst new file mode 100644 index 0000000..463b547 --- /dev/null +++ b/umn/source/sql_syntax/dml/replace.rst @@ -0,0 +1,28 @@ +:original_name: ddm-08-0007.html + +.. _ddm-08-0007: + +REPLACE +======= + +REPLACE is used to insert rows into or replace rows in a table. + +Common Syntax +------------- + +.. code-block:: + + replace into table(col1,col2,col3) + values(value1,value2,value3) + +Syntax Constraints +------------------ + +- PARTITION syntax is not supported. +- If an auto-increment table has no ID, you can insert a data record with a specified ID using REPLACE, but no ID is generated. + +Use Constraints +--------------- + +- If the sharding key value in the REPLACE statement is invalid, data is routed to database shard 0 or table shard 0 by default. +- Do not use functions VERSION, DATABASE, or USER in the REPLACE statement. When you execute such functions, you may not obtain the expected results because the results depend on whether the functions are pushed down to data nodes for execution. diff --git a/umn/source/sql_syntax/dml/select.rst b/umn/source/sql_syntax/dml/select.rst new file mode 100644 index 0000000..3e694cc --- /dev/null +++ b/umn/source/sql_syntax/dml/select.rst @@ -0,0 +1,57 @@ +:original_name: ddm-08-0006.html + +.. _ddm-08-0006: + +SELECT +====== + +SELECT is generally used to query data in one or more tables. + +Common Syntax +------------- + +.. code-block:: + + SELECT + [ALL | DISTINCT | DISTINCTROW ] + select_expr + [, select_expr ...] + [FROM table_references [WHERE where_condition] + [GROUP BY {col_name | expr | position} [ASC | DESC], ...] + [HAVING where_condition] [ORDER BY {col_name | expr | position} [ASC | DESC], ...] + [LIMIT {[offset,] row_count | row_count OFFSET offset}] + +..
table:: **Table 1** Supported syntax + + +-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Syntax | Description | + +=======================+=============================================================================================================================================================================================================================+ + | select_expr | Indicates a column that you want to query. | + +-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | FROM table_references | Indicates the tables that you want to query. | + +-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | WHERE | Followed by an expression to filter for rows that meet certain criteria. | + +-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | GROUP BY | Groups the clauses used in SQL in sequence. GROUP BY indicates relationships between statements and supports column names. For example, the HAVING clause must be after the GROUP BY clause and before the ORDER BY clause. 
| + +-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | ORDER BY | Indicates relationships between statements. Sorting by column name or by a specified order such as ASC and DESC is supported. | + +-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | LIMIT/OFFSET | Restrains the offset and size of output result sets, for example, one or two values can be input after LIMIT. | + +-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +Syntax Description +------------------ + +- An empty string cannot be used as an alias. + +- SELECT ... GROUP BY ... WITH ROLLUP is not supported. + +- Neither STRAIGHT_JOIN nor NATURAL JOIN is supported. + +- The SELECT FOR UPDATE statement supports only simple queries and does not support JOIN, GROUP BY, ORDER BY, or LIMIT. + +- Each SELECT statement in UNION does not support multiple columns with the same name, for example, + + SELECT id, id, name FROM t1 UNION SELECT pk, pk, name FROM t2 is not supported because this statement has duplicate column names. + +- User-defined sequencing similar to **ORDER BY FIELD(id,1,2,3)** is not supported. diff --git a/umn/source/sql_syntax/dml/select_join_syntax.rst b/umn/source/sql_syntax/dml/select_join_syntax.rst new file mode 100644 index 0000000..5ecfed4 --- /dev/null +++ b/umn/source/sql_syntax/dml/select_join_syntax.rst @@ -0,0 +1,62 @@ +:original_name: ddm-08-0010.html + +.. 
_ddm-08-0010: + +SELECT JOIN Syntax +================== + +Common Syntax +------------- + +table_references: + +.. code-block:: + + table_reference [, table_reference] ... + +table_reference: + +.. code-block:: + + table_factor | join_table + +table_factor: + +.. code-block:: + + tbl_name [[AS] alias] + | table_subquery [AS] alias + | ( table_references ) + +join_table: + +.. code-block:: + + table_reference [INNER | CROSS] JOIN table_factor [join_condition] + | table_reference {LEFT|RIGHT} [OUTER] JOIN table_reference join_condition + | table_reference [{LEFT|RIGHT} [OUTER]] JOIN table_factor + +join_condition: + +.. code-block:: + + ON conditional_expr + | USING (column_list) + +Syntax Restrictions +------------------- + +SELECT STRAIGHT_JOIN and NATURAL JOIN are not supported. + +Example +------- + +.. code-block:: + + select id,name from test1 where id=1; + select distinct id,name from test1 where id>=1; + select id,name from test1 order by id limit 2 offset 2; + select id,name from test1 order by id limit 2,2; + select 1+1,'test',id,id*1.1,now() from test1 limit 3; + select current_date,current_timestamp; + select abs(sum(id)) from test1; diff --git a/umn/source/sql_syntax/dml/select_subquery_syntax.rst b/umn/source/sql_syntax/dml/select_subquery_syntax.rst new file mode 100644 index 0000000..7d31536 --- /dev/null +++ b/umn/source/sql_syntax/dml/select_subquery_syntax.rst @@ -0,0 +1,87 @@ +:original_name: ddm-08-0012.html + +.. _ddm-08-0012: + +SELECT Subquery Syntax +====================== + +Subquery as Scalar Operand +-------------------------- + +Example + +.. code-block:: + + SELECT (SELECT id FROM test1 where id=1); + SELECT (SELECT id FROM test2 where id=1)FROM test1; + SELECT UPPER((SELECT name FROM test1 limit 1)) FROM test2; + +Comparisons Using Subqueries +---------------------------- + +Syntax + +.. code-block:: + + non_subquery_operand comparison_operator (subquery) + comparison_operator: = > < >= <= <> != <=> like + +Example + +.. 
code-block:: + + select name from test1 where id > (select id from test2 where id=1); + select name from test1 where id = (select id from test2 where id=1); + select id from test1 where name like (select name from test2 where id=1); + +Subqueries with ANY, SOME, ALL, IN, NOT IN, EXISTS, NOT EXISTS +----------------------------------------------------------------- + +Syntax + +.. code-block:: + + operand comparison_operator SOME (subquery) + operand comparison_operator ALL (subquery) + operand comparison_operator ANY (subquery) + operand IN (subquery) + operand NOT IN (subquery) + EXISTS (subquery) + NOT EXISTS (subquery) + +Example + +.. code-block:: + + select id from test1 where id > any (select id from test2); + select id from test1 where id > some (select id from test2); + select id from test1 where id > all (select id from test2); + select id from test1 where id in (select id from test2); + select id from test1 where id not in (select id from test2); + select id from test1 where exists (select id from test2 where id=1); + select id from test1 where not exists (select id from test2 where id=1); + +Derived Tables (Subqueries in the FROM Clause) +---------------------------------------------- + +Syntax + +.. code-block:: + + SELECT ... FROM (subquery) [AS] tbl_name ... + +Example + +.. code-block:: + + select id from (select id,name from test2 where id>1) a order by a.id; + +Syntax Restrictions +------------------- + +- Each derived table must have an alias. +- A derived table cannot be a correlated subquery. +- In some cases, correct results cannot be obtained using a scalar subquery. Using JOIN instead is recommended to improve query performance. +- Using subqueries in the HAVING clause and the JOIN ON condition is not supported. +- Row subqueries are not supported.
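The restrictions above recommend JOIN over scalar subqueries. As an illustrative sketch (reusing tables **test1** and **test2** from the examples in this section, and assuming **id** is unique in **test2**), a scalar subquery can often be rewritten as an equivalent join:

.. code-block::

   -- scalar subquery form
   select name from test1 where id = (select id from test2 where id=1);

   -- equivalent join form, recommended for better query performance
   select t1.name from test1 t1 join test2 t2 on t1.id = t2.id where t2.id = 1;

When **test2.id** is unique, both statements return the same rows, and the join form avoids the scalar-subquery limitations listed above.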
diff --git a/umn/source/sql_syntax/dml/select_union_syntax.rst b/umn/source/sql_syntax/dml/select_union_syntax.rst new file mode 100644 index 0000000..a37d1bd --- /dev/null +++ b/umn/source/sql_syntax/dml/select_union_syntax.rst @@ -0,0 +1,27 @@ +:original_name: ddm-08-0011.html + +.. _ddm-08-0011: + +SELECT UNION Syntax +=================== + +Common Syntax +------------- + +.. code-block:: + + SELECT ...UNION [ALL | DISTINCT] + SELECT ...[UNION [ALL | DISTINCT] SELECT ...] + +Example +------- + +.. code-block:: + + select userid from user union select orderid from ordertbl order by userid; + select userid from user union (select orderid from ordertbl group by orderid) order by userid; + +Syntax Restrictions +------------------- + +SELECT statements in UNION do not support duplicate column names. diff --git a/umn/source/sql_syntax/dml/supported_system_schema_queries.rst b/umn/source/sql_syntax/dml/supported_system_schema_queries.rst new file mode 100644 index 0000000..673d317 --- /dev/null +++ b/umn/source/sql_syntax/dml/supported_system_schema_queries.rst @@ -0,0 +1,31 @@ +:original_name: ddm_12_0100.html + +.. _ddm_12_0100: + +Supported System Schema Queries +=============================== + +.. 
table:: **Table 1** Supported system schema queries + + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------+ + | DML Syntax | Restriction | + +===================================+=================================================================================================================+ + | System schema queries | The following system schema queries are supported: | + | | | + | | - Version query: **SELECT version()** | + | | | + | | - information_schema.SCHEMA_PRIVILEGES | + | | - information_schema.TABLE_PRIVILEGES | + | | - information_schema.USER_PRIVILEGES | + | | - information_schema.SCHEMATA | + | | - information_schema.tables | + | | - information_schema.columns | + | | | + | | - Index query: **SHOW KEYS FROM FROM ** | + | | | + | | .. note:: | + | | | + | | - Supported operators include **=**, **IN**, and **LIKE**. These operators can be associated using **AND**. | + | | - Complex queries, such as subquery, JOIN, sorting, aggregate query, and LIMIT, are not supported. | + | | - **information_schema.tables** and **information_schema.columns** support operators **<** and **>**. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/sql_syntax/dml/update.rst b/umn/source/sql_syntax/dml/update.rst new file mode 100644 index 0000000..c3daebb --- /dev/null +++ b/umn/source/sql_syntax/dml/update.rst @@ -0,0 +1,44 @@ +:original_name: ddm-08-0009.html + +.. _ddm-08-0009: + +UPDATE +====== + +Common Syntax +------------- + +.. code-block:: + + UPDATE table_reference + SET col_name1={expr1} [, col_name2={expr2}] ... + [WHERE where_condition] + +Syntax Restrictions +------------------- + +- Subqueries are not supported, including correlated and non-correlated subqueries. + +- Cross-shard subquery is not supported. 
+ +- The WHERE condition in the UPDATE statement does not support arithmetic expressions and their subqueries. + +- Modifying broadcast tables is not supported during an update of multiple tables. (Data in columns of a broadcast table cannot be on the left of SET assignment statements.) + +- Updating the sharding key field of a logical table is not supported because this operation may cause data redistribution. + +- Setting **datetime** to **1582** or any value smaller in UPDATE statements is not supported. + +- UPDATE cannot be used to update sharding key value **DEFAULT**. + +- Repeatedly updating the same field in an UPDATE statement is not supported. + +- Updating a sharding key using UPDATE JOIN syntax is not supported. + +- UPDATE cannot be used to update self-joins. + +- Referencing other object columns in assignment statements or expressions may cause unexpected update results. Example: + + update tbl_1 a,tbl_2 b set a.name=concat(b.name,'aaaa'),b.name=concat(a.name,'bbbb') where a.id=b.id + +- UPDATE JOIN supports only joins with WHERE conditions. diff --git a/umn/source/sql_syntax/functions.rst b/umn/source/sql_syntax/functions.rst new file mode 100644 index 0000000..4977beb --- /dev/null +++ b/umn/source/sql_syntax/functions.rst @@ -0,0 +1,111 @@ +:original_name: ddm_03_0063.html + +.. _ddm_03_0063: + +Functions +========= + +Supported Functions +------------------- + +..
table:: **Table 1** Operator functions + + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Expression | Example | + +===============+========================================================================================================================================================================+ + | IN | SELECT \* FROM Products WHERE vendor_id IN ( 'V000001', 'V000010' ) ORDER BY product_price | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | NOT IN | SELECT product_id, product_name FROM Products WHERE vendor_id NOT IN ('V000001', 'V000002') ORDER BY product_id | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | BETWEEN | SELECT id, product_id, product_name, product_price FROM Products WHERE id BETWEEN 000005 AND 000034 ORDER BY id | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | NOT...BETWEEN | SELECT product_id, product_name FROM Products WHERE NOT vendor_id BETWEEN 'V000002' and 'V000005' ORDER BY product_id | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | IS NULL | SELECT product_name FROM Products WHERE product_price IS NULL | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | IS NOT NULL | SELECT id, 
product_name FROM Products WHERE product_price IS NOT NULL ORDER BY id | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | AND | SELECT \* FROM Products WHERE vendor_id = 'V000001' AND product_price <= 4000 ORDER BY product_price | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | OR | SELECT \* FROM Products WHERE vendor_id = 'V000001' OR vendor_id = 'V000009' | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | NOT | SELECT product_id, product_name FROM Products WHERE NOT vendor_id = 'V000002' | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | LIKE | SELECT \* FROM Products WHERE product_name LIKE 'NAME%' ORDER BY product_name | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | NOT LIKE | SELECT \* FROM Products WHERE product_name NOT LIKE 'NAME%' ORDER BY product_name | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | CONCAT | SELECT product_id, product_name, CONCAT( product_id , '(', product_name ,')' ) AS product_test FROM Products ORDER BY product_id | + 
+---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | |image2| | SELECT 3 \* 2+5-100/50 | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | ``-`` | SELECT 3 \* 2+5-100/50 | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | \* | SELECT order_num, product_id, quantity, item_price, quantity*item_price AS expanded_price FROM OrderItems WHERE order_num BETWEEN 000009 AND 000028 ORDER BY order_num | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | / | SELECT 3 \* 2+5-100/50 | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | UPPER | SELECT id, product_id, UPPER(product_name) FROM Products WHERE id > 10 ORDER BY product_id | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | LOWER | SELECT id, product_id, LOWER(product_name) FROM Products WHERE id <= 10 ORDER BY product_id | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | SOUNDEX | SELECT \* FROM Vendors WHERE SOUNDEX(vendor_name) = SOUNDEX('test') ORDER BY vendor_name | + 
+---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | IFNULL | SELECT IFNULL(product_id, 0) FROM Products; | + +---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. table:: **Table 2** Time and date functions + + +-----------------------------------+------------------------------------------------------+ + | Expression | Example | + +===================================+======================================================+ + | DAY() | SELECT \* FROM TAB_DATE WHERE DAY(date)=21 | + | | | + | | SELECT \* FROM TAB_DATE WHERE date='2018-12-21' | + | | | + | | INSERT INTO TAB_DATE(id,date) VALUES(1,'2018-05-22') | + +-----------------------------------+------------------------------------------------------+ + | MONTH() | SELECT \* FROM TAB_DATE WHERE MONTH(date)=12 | + | | | + | | SELECT \* FROM TAB_DATE WHERE date='2018-12-21' | + | | | + | | INSERT INTO TAB_DATE(id,date) VALUES(1,'2018-05-22') | + +-----------------------------------+------------------------------------------------------+ + | YEAR() | SELECT \* FROM TAB_DATE WHERE YEAR(date)=2018 | + | | | + | | SELECT \* FROM TAB_DATE WHERE date='2018-12-21' | + | | | + | | INSERT INTO TAB_DATE(id,date) VALUES(1,'2018-05-22') | + +-----------------------------------+------------------------------------------------------+ + +.. 
table:: **Table 3** Mathematical functions + + +------------+-----------------------------------------------------------------------------------------------------------------------------+ + | Expression | Example | + +============+=============================================================================================================================+ + | SQRT() | SELECT id, product_price, SQRT(product_price) AS price_sqrt FROM Products WHERE product_price < 4000 ORDER BY product_price | + +------------+-----------------------------------------------------------------------------------------------------------------------------+ + | AVG() | SELECT AVG(product_price) AS avg_product FROM Products | + +------------+-----------------------------------------------------------------------------------------------------------------------------+ + | COUNT() | SELECT COUNT(``*``) AS num_product FROM Products | + +------------+-----------------------------------------------------------------------------------------------------------------------------+ + | MAX() | SELECT id, product_id, product_name, MAX(product_price) AS max_price FROM Products ORDER BY id | + +------------+-----------------------------------------------------------------------------------------------------------------------------+ + | MIN() | SELECT id, product_id, product_name, MIN(product_price) AS min_price FROM Products ORDER BY id | + +------------+-----------------------------------------------------------------------------------------------------------------------------+ + | SUM() | SELECT SUM(product_price) AS sum_product FROM Products | + +------------+-----------------------------------------------------------------------------------------------------------------------------+ + +Unsupported Functions +--------------------- + +.. 
table:: **Table 4** Function restrictions + + =========== ========================================== + Item Restriction + =========== ========================================== + ROW_COUNT() Function **ROW_COUNT()** is not supported. + =========== ========================================== + +.. |image1| image:: /_static/images/en-us_image_0000001749511672.png +.. |image2| image:: /_static/images/en-us_image_0000001749511672.png diff --git a/umn/source/sql_syntax/global_sequence/index.rst b/umn/source/sql_syntax/global_sequence/index.rst new file mode 100644 index 0000000..96068bc --- /dev/null +++ b/umn/source/sql_syntax/global_sequence/index.rst @@ -0,0 +1,18 @@ +:original_name: ddm_03_0030.html + +.. _ddm_03_0030: + +Global Sequence +=============== + +- :ref:`Overview ` +- :ref:`Using NEXTVAL or CURRVAL to Query Global Sequence Numbers ` +- :ref:`Using Global Sequences in INSERT or REPLACE Statements ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + using_nextval_or_currval_to_query_global_sequence_numbers + using_global_sequences_in_insert_or_replace_statements diff --git a/umn/source/sql_syntax/global_sequence/overview.rst b/umn/source/sql_syntax/global_sequence/overview.rst new file mode 100644 index 0000000..6d91b57 --- /dev/null +++ b/umn/source/sql_syntax/global_sequence/overview.rst @@ -0,0 +1,113 @@ +:original_name: ddm_12_0010.html + +.. _ddm_12_0010: + +Overview +======== + +Global sequences are implemented mainly as database-based sequences. + +.. note:: + + - The start value (sequence number) of an auto-increment sequence can be modified. + - A global sequence provides sequence numbers that are globally unique but may not be consecutive. + +..
table:: **Table 1** Table types supported by global sequence + + ========== ========= ========= ============= + Table Type Sharded Broadcast Unsharded + ========== ========= ========= ============= + DB-based Supported Supported Not supported + ========== ========= ========= ============= + +Creating an Auto-Increment Sequence +----------------------------------- + +#. Log in to the required DDM instance using a client. + +#. Open the required schema. + +#. Run the following command to create an auto-increment sequence: + + **create sequence** *seq_name*; + + .. note:: + + - *seq_name* indicates the sequence name. + - The auto-increment key should be a BIGINT value. To avoid duplicate values, do not use TINYINT, SMALLINT, MEDIUMINT, INTEGER, or INT as the auto-increment key. + - Run **show sequences** to view the usage of the auto-increment sequence. If the usage reaches 100%, do not insert data anymore. + +Dropping an Auto-Increment Sequence +----------------------------------- + +#. Log in to the required DDM instance using a client. + +#. Open the required schema. + +#. Run **show sequences** to view all global sequences. + +#. Run the following command to drop an auto-increment sequence: + + **drop sequence** *seq_name*; + + **drop sequence** *DB.seq_name*; + + .. note:: + + - The sequence name is case-insensitive. + - If an auto-increment sequence belongs to a table, the sequence cannot be deleted. + +Modifying the Start Value of an Auto-Increment Sequence +------------------------------------------------------- + +#. Log in to the required DDM instance using a client. + +#. Open the required schema. + +#. Run **show sequences** to view all global sequences. + +#. Run the following command to change the start value: + + **alter sequence** *seq_name* **START WITH** *start_value*; + + .. note:: + + - *seq_name* indicates the sequence name. + - *start_value* indicates the start value of the target sequence. + +Querying an Auto-Increment Sequence +----------------------------------- + +#. Log in to the required DDM instance using a client.
+ +#. Open the required schema. + +#. Run **show sequences** to view all global sequences. + + **show sequences**; + + |image1| + +Modifying the Auto-Increment Cache Value +---------------------------------------- + +.. important:: + + This feature is only available in kernel 3.0.3 and later versions. + +#. Log in to the required DDM instance using a client. +#. Open the required schema. +#. Run command **alter sequence test cache 5000** to modify the global sequence cache value of table **test**. +#. Run command **show sequences** to view the cache value (**INCREMENT** value) of table **test**. + +Updating Auto-Increment Sequences of All Tables +----------------------------------------------- + +.. important:: + + This feature is available only in kernel 3.0.4.1 or later. + +#. Log in to the required DDM instance using a client. +#. Run command **fresh all sequence start value** to change sequences of all schemas. + +.. |image1| image:: /_static/images/en-us_image_0000001685307306.jpg diff --git a/umn/source/sql_syntax/global_sequence/using_global_sequences_in_insert_or_replace_statements.rst b/umn/source/sql_syntax/global_sequence/using_global_sequences_in_insert_or_replace_statements.rst new file mode 100644 index 0000000..5714559 --- /dev/null +++ b/umn/source/sql_syntax/global_sequence/using_global_sequences_in_insert_or_replace_statements.rst @@ -0,0 +1,60 @@ +:original_name: ddm_03_0037.html + +.. _ddm_03_0037: + +Using Global Sequences in INSERT or REPLACE Statements +====================================================== + +You can use global sequences in INSERT or REPLACE statements to provide unique global sequence across schemas in a DDM instance. Generating sequence numbers with NEXTVAL and CURRVAL is supported in INSERT or REPLACE statements. For example, you can execute schema.seq.nextval and schema.seq.currval to obtain global sequence numbers. CURRVAL returns the current sequence number, and NEXTVAL returns the next one. 
If no schema is specified, the global sequence of the currently connected schema is used. + +Concurrently executing schema.seq.nextval in multiple sessions is supported to obtain unique global sequence numbers. + +Prerequisites +------------- + +- There are two schemas **dml_test_1** and **dml_test_2**. + +- Both of them have table **test_seq**. + + Run the following command to create a table: + + **create table test_seq(col1 bigint,col2 bigint) dbpartition by hash(col1);** + +Procedure +--------- + +#. Log in to the required DDM instance using a client. + +#. Click the **dml_test_1** schema and run the following commands to create a global sequence: + + **use dml_test_1**; + + **create sequence seq_test**; + + |image1| + +#. Run the following command to use the global sequence in an INSERT or REPLACE statement: + + **insert into test_seq(col1,col2)values(seq_test.nextval,seq_test.currval)**; + + |image2| + +#. Click the **dml_test_2** schema and run the following commands to use the global sequence in an INSERT or REPLACE statement: + + **use dml_test_2**; + + **insert into test_seq(col1,col2)values(dml_test_1.seq_test.nextval,dml_test_1.seq_test.currval)**; + + |image3| + + The global sequence is created in schema **dml_test_1**. To use the global sequence in schema **dml_test_2**, you need to specify a schema name, for example, **dml_test_1.seq_test.nextval** or **dml_test_1.seq_test.currval**. + + .. note:: + + - Using global sequences in INSERT and REPLACE statements is supported only in sharded tables, but not in broadcast or unsharded tables. + - NEXTVAL and CURRVAL are executed from left to right in INSERT and REPLACE statements. If NEXTVAL is referenced more than once in a single statement, the sequence number is incremented for each reference. + - Each global sequence belongs to a schema. When you delete a schema, the global sequence of the schema is also deleted. + +.. |image1| image:: /_static/images/en-us_image_0000001733146257.png +..
|image2| image:: /_static/images/en-us_image_0000001685147446.png +.. |image3| image:: /_static/images/en-us_image_0000001685307194.png diff --git a/umn/source/sql_syntax/global_sequence/using_nextval_or_currval_to_query_global_sequence_numbers.rst b/umn/source/sql_syntax/global_sequence/using_nextval_or_currval_to_query_global_sequence_numbers.rst new file mode 100644 index 0000000..c472f2e --- /dev/null +++ b/umn/source/sql_syntax/global_sequence/using_nextval_or_currval_to_query_global_sequence_numbers.rst @@ -0,0 +1,51 @@ +:original_name: ddm_03_0036.html + +.. _ddm_03_0036: + +Using NEXTVAL or CURRVAL to Query Global Sequence Numbers +========================================================= + +- NEXTVAL returns the next sequence number, and CURRVAL returns the current sequence number. nextval(n) returns *n* unique sequence numbers. +- nextval(n) can be used only in **select sequence.nextval(n)** and does not support cross-schema operations. +- currval(n) is not supported. + +Procedure +--------- + +#. Log in to the required DDM instance using a client. + +#. Open the required schema. + +#. Run the following command to create a global sequence: + + **create sequence seq_test**; + + |image1| + +#. Run the following command to obtain the next sequence number: + + **select seq_test.nextval;** + + |image2| + +#. Run the following command to obtain the current sequence number: + + **select seq_test.currval**; + + |image3| + +#. Run the following command to obtain sequence numbers in batches: + + **select seq_test.nextval(n)**; + + |image4| + + .. note:: + + - Cross-schema operations are not supported when sequence numbers are obtained in batches. + - If no global sequence is used, CURRVAL returns **0**. + +.. |image1| image:: /_static/images/en-us_image_0000001685147566.png +.. |image2| image:: /_static/images/en-us_image_0000001733146373.png +.. |image3| image:: /_static/images/en-us_image_0000001685147570.png +.. 
|image4| image:: /_static/images/en-us_image_0000001733146381.png diff --git a/umn/source/sql_syntax/index.rst b/umn/source/sql_syntax/index.rst new file mode 100644 index 0000000..0ab9cb8 --- /dev/null +++ b/umn/source/sql_syntax/index.rst @@ -0,0 +1,30 @@ +:original_name: ddm-08-0001.html + +.. _ddm-08-0001: + +SQL Syntax +========== + +- :ref:`Introduction ` +- :ref:`DDL ` +- :ref:`DML ` +- :ref:`Functions ` +- :ref:`Use Constraints ` +- :ref:`Supported SQL Statements ` +- :ref:`Global Sequence ` +- :ref:`Database Management Syntax ` +- :ref:`Advanced SQL Functions ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + introduction + ddl/index + dml/index + functions + use_constraints + supported_sql_statements/index + global_sequence/index + database_management_syntax + advanced_sql_functions diff --git a/umn/source/sql_syntax/introduction.rst b/umn/source/sql_syntax/introduction.rst new file mode 100644 index 0000000..eb4c81b --- /dev/null +++ b/umn/source/sql_syntax/introduction.rst @@ -0,0 +1,115 @@ +:original_name: ddm_03_0062.html + +.. _ddm_03_0062: + +Introduction +============ + +DDM is compatible with the MySQL license and syntax, but the use of SQL statements is limited due to differences between distributed databases and single-node databases. + +Before selecting a DDM solution, evaluate the SQL syntax compatibility between your application and DDM. + +MySQL EXPLAIN +------------- + +If you add **EXPLAIN** before a SQL statement, you will see a specific execution plan when you execute the statement. You can analyze the time required based on the plan and modify the SQL statement for optimization. + +.. 
table:: **Table 1** Description of the **EXPLAIN** column + + +---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Column Name | Description | + +===============+==============================================================================================================================================================================================================================================================================================+ + | table | Table that the row of data belongs to | + +---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | type | Type of the connection. Connection types from the best to the worst are **const**, **eq_reg**, **ref**, **range**, **index**, and **ALL**. | + +---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | possible_keys | Index that may be applied to the table | + +---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | key | Index that is actually used. If the value is **NULL**, no index is used. 
In some cases, MySQL may choose to optimize indexes, for example, force MySQL to use an index by adding **USE INDEX(indexname)** to a SELECT statement or to ignore an index by adding **IGNORE INDEX(indexname)**. | + +---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | key_len | Length of the used index. The shorter the length is, the better the index is if accuracy is not affected. | + +---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | ref | Column where the index is used. The value is generally a constant. 
| + +---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | rows | Rows of the data returned by MySQL | + +---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Extra | Additional information about how MySQL parses queries | + +---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +SQL Restrictions +---------------- + +- Temporary tables are not supported. + +- Foreign keys, views, cursors, triggers, and stored procedures are not supported. +- Customized data types and functions are not supported. +- Process control statements such as IF and WHILE are not supported. +- Compound statements such as BEGIN...END, LOOP...END LOOP, REPEAT...UNTIL...END REPEAT, and WHILE...DO...END WHILE are not supported. + +DDL Syntax +---------- + +- Sharded and broadcast tables do not support foreign keys. +- Modifying sharding keys is not supported. +- ALTER DATABASE Syntax is not supported. +- Creating sharded or broadcast tables from another table is not supported. +- The CREATE TABLE statement does not support GENERATED COLUMN. +- Modifying sharding keys or global sequence fields using the **ALTER** command is not supported. +- Creating TEMPORARY sharded or broadcast tables is not supported. 
+- The logical table name contains only letters, digits, and underscores (_).
+- CREATE TABLE tbl_name LIKE old_tbl_name is not supported.
+- The CREATE TABLE tbl_name SELECT statement is not supported.
+- Updating the sharding key by executing INSERT INTO ON DUPLICATE KEY UPDATE is not supported.
+- Cross-schema DDL is not supported, for example, CREATE TABLE db_name.tbl_name (... ).
+- Backquotes are required to quote identifiers such as table names, column names, and index names that are MySQL keywords or reserved words.
+
+DML Syntax
+----------
+
+- PARTITION clauses are not supported.
+- Nesting a subquery in an UPDATE statement is not supported.
+- The INSERT DELAYED syntax is not supported.
+- STRAIGHT_JOIN and NATURAL JOIN are not supported.
+- Multiple-table UPDATE is supported if all tables joined across shards have primary keys.
+- Multiple-table DELETE is supported if all tables joined across shards have primary keys.
+
+- Using or manipulating variables in SQL statements is not supported, for example, SET @c=1, @d=@c+1; SELECT @c, @d.
+
+- Inserting the keyword DEFAULT or updating a sharding key value to DEFAULT is not supported.
+
+- Repeatedly updating the same field in an UPDATE statement is not supported.
+
+- Updating a sharding key using UPDATE JOIN syntax is not supported.
+
+- UPDATE cannot be used to update self-joins.
+
+- Referencing columns of other objects in assignment statements or expressions may cause unexpected update results. Example:
+
+  update tbl_1 a,tbl_2 b set a.name=concat(b.name,'aaaa'),b.name=concat(a.name,'bbbb') where a.id=b.id
+
+- If a text protocol is used, BINARY, VARBINARY, TINYBLOB, BLOB, MEDIUMBLOB, and LONGBLOB data must be converted into hexadecimal data.
+
+- DDM processes invalid data based on **sql_mode** settings of associated MySQL instances.
+
+- UPDATE JOIN supports only joins with WHERE conditions.
+
+- An expression in a SQL statement can contain a maximum of 1,000 factors.
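+
+As an illustration of the text-protocol restriction above, binary data can be written as a hexadecimal literal. The table and column names below are hypothetical:
+
+.. code-block:: text
+
+   -- 0x48656C6C6F is the hexadecimal encoding of the string 'Hello'.
+   insert into file_store(id, payload) values (1, 0x48656C6C6F);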
+ +Unsupported Functions +--------------------- + +- XML functions +- GTID functions +- Full-text search functions +- Enterprise encryption functions +- Function **row_count()** + +Subqueries +---------- + +Using subqueries in the HAVING clause and the JOIN ON condition is not supported. + +Data Types +---------- + +Spatial data types are not supported. diff --git a/umn/source/sql_syntax/supported_sql_statements/check_table/checking_ddl_consistency_of_physical_tables_in_all_logical_tables.rst b/umn/source/sql_syntax/supported_sql_statements/check_table/checking_ddl_consistency_of_physical_tables_in_all_logical_tables.rst new file mode 100644 index 0000000..7bf2b62 --- /dev/null +++ b/umn/source/sql_syntax/supported_sql_statements/check_table/checking_ddl_consistency_of_physical_tables_in_all_logical_tables.rst @@ -0,0 +1,44 @@ +:original_name: ddm-08-0021.html + +.. _ddm-08-0021: + +Checking DDL Consistency of Physical Tables in All Logical Tables +================================================================= + +**Purpose:** To check DDL consistency of all logical tables in one schema + +**Command Format:** + +.. code-block:: text + + check table + +**Command Output:** + +The following output is returned if DDL check results of all logical tables are consistent. + +|image1| + +The following output is returned if there are logical tables with inconsistent DDL check results. + +|image2| + +**Output Details:** + +Each row contains the check result of a logical table. + +- **DATABASE_NAME**: indicates the schema name. +- **TABLE_NAME**: indicates the logical table name. +- **TABLE_TYPE**: indicates the logical table type. + + - **SINGLE**: indicates that the logical table is unsharded. + - **BROADCAST**: indicates that the table is a broadcast table. + - **SHARDING**: indicates that the table is sharded. + +- **DDL_CONSISTENCY**: indicates whether DDL results of all physical tables corresponding to the logical table are consistent. 
+- **TOTAL_COUNT**: indicates the number of physical tables in the logical table.
+- **INCONSISTENT_COUNT**: indicates the number of physical tables with inconsistent DDL results.
+- **DETAILS**: indicates names of the physical tables with inconsistent DDL check results.
+
+.. |image1| image:: /_static/images/en-us_image_0000001685307210.png
+.. |image2| image:: /_static/images/en-us_image_0000001733146277.png
diff --git a/umn/source/sql_syntax/supported_sql_statements/check_table/checking_ddl_consistency_of_physical_tables_in_one_logical_table.rst b/umn/source/sql_syntax/supported_sql_statements/check_table/checking_ddl_consistency_of_physical_tables_in_one_logical_table.rst
new file mode 100644
index 0000000..4dcdc77
--- /dev/null
+++ b/umn/source/sql_syntax/supported_sql_statements/check_table/checking_ddl_consistency_of_physical_tables_in_one_logical_table.rst
@@ -0,0 +1,46 @@
+:original_name: ddm-08-0022.html
+
+.. _ddm-08-0022:
+
+Checking DDL Consistency of Physical Tables in One Logical Table
+================================================================
+
+**Purpose:** To check DDL consistency of physical tables in a specific logical table
+
+**Command Format:**
+
+.. code-block:: text
+
+   check table <table_name>
+
+**Command Output:**
+
+If the returned result set is empty, DDL results of physical tables in this logical table are consistent.
+
+|image1|
+
+If the returned result set is not empty, there are physical tables with inconsistent DDL results.
+
+|image2|
+
+**Output Details:**
+
+Each row displays details of a physical table with inconsistent DDL results.
+
+- **DATABASE_NAME**: indicates the database shard containing the physical table.
+- **TABLE_NAME**: indicates the name of the physical table.
+- **TABLE_TYPE**: indicates the type of the logical table that the physical table belongs to.
+- **EXTRA_COLUMNS**: indicates extra columns in the physical table.
+- **MISSING_COLUMNS**: indicates missing columns in the physical table. 
+- **DIFFERENT_COLUMNS**: indicates names and types of columns whose attributes are inconsistent in the physical table.
+- **KEY_DIFF**: indicates inconsistent indexes in the physical table.
+- **ENGINE_DIFF**: indicates inconsistent engines in the physical table.
+- **CHARSET_DIFF**: indicates inconsistent character sets in the physical table.
+- **COLLATE_DIFF**: indicates inconsistent collations in the physical table.
+- **EXTRA_PARTITIONS**: indicates extra partitions in the physical table. This field is only available to partitioned tables.
+- **MISSING_PARTITIONS**: indicates missing partitions in the physical table. This field is only available to partitioned tables.
+- **DIFFERENT_PARTITIONS**: indicates partitions with inconsistent attributes in the physical table. This field is only available to partitioned tables.
+- **EXTRA_INFO**: indicates other information such as missing physical tables.
+
+.. |image1| image:: /_static/images/en-us_image_0000001733266429.png
+.. |image2| image:: /_static/images/en-us_image_0000001685147494.png
diff --git a/umn/source/sql_syntax/supported_sql_statements/check_table/index.rst b/umn/source/sql_syntax/supported_sql_statements/check_table/index.rst
new file mode 100644
index 0000000..8f64993
--- /dev/null
+++ b/umn/source/sql_syntax/supported_sql_statements/check_table/index.rst
@@ -0,0 +1,16 @@
+:original_name: ddm-08-0014.html
+
+.. _ddm-08-0014:
+
+CHECK TABLE
+===========
+
+- :ref:`Checking DDL Consistency of Physical Tables in All Logical Tables <ddm-08-0021>`
+- :ref:`Checking DDL Consistency of Physical Tables in One Logical Table <ddm-08-0022>`
+
+.. 
toctree:: + :maxdepth: 1 + :hidden: + + checking_ddl_consistency_of_physical_tables_in_all_logical_tables + checking_ddl_consistency_of_physical_tables_in_one_logical_table diff --git a/umn/source/sql_syntax/supported_sql_statements/customized_hints_for_read_write_splitting.rst b/umn/source/sql_syntax/supported_sql_statements/customized_hints_for_read_write_splitting.rst new file mode 100644 index 0000000..0afe4cc --- /dev/null +++ b/umn/source/sql_syntax/supported_sql_statements/customized_hints_for_read_write_splitting.rst @@ -0,0 +1,30 @@ +:original_name: ddm_03_0034.html + +.. _ddm_03_0034: + +Customized Hints for Read/Write Splitting +========================================= + +DDM allows you to customize a hint to specify whether SQL statements are executed on the primary instance or its read replicas. + +The following hint formats are supported: + +Format 1 + +.. code-block:: text + + /*!mycat:db_type=host*/ + +Format 2 + +.. code-block:: text + + /*+ db_type=host */ + +**host** can be **master** or **slave**. **master** indicates a primary instance, and **slave** indicates a read replica. + +Currently, this function only applies to SELECT statements. + +.. note:: + + After read/write splitting is enabled, write operations are performed only on the primary instance, and read operations are performed only on its read replicas. To read from the primary instance, you can customize a hint to forcibly perform read operations on the primary instance. This method is only suitable for queries. diff --git a/umn/source/sql_syntax/supported_sql_statements/hint-_allow_alter_rerun.rst b/umn/source/sql_syntax/supported_sql_statements/hint-_allow_alter_rerun.rst new file mode 100644 index 0000000..7f64466 --- /dev/null +++ b/umn/source/sql_syntax/supported_sql_statements/hint-_allow_alter_rerun.rst @@ -0,0 +1,18 @@ +:original_name: ddm-08-0027.html + +.. 
_ddm-08-0027: + +HINT- ALLOW_ALTER_RERUN +======================= + +**Command Format:** + +**/*+ allow_alter_rerun=true*/**\ ** + +**Description:** + +Using this hint ensures that commands can be repeatedly executed, and no error is reported. This hint supports the following ALTER TABLE statements: ADD COLUMN, MODIFY COLUMN, DROP COLUMN, ADD INDEX, DROP INDEX, CHANGE COLUMN, ADD PARTITION, and DROP PARTITION. + +Example: + +**/*+ allow_alter_rerun=true*/ALTER TABLE aaa_tb ADD schoolroll varchar(128) not null comment 'Enrollment data'** diff --git a/umn/source/sql_syntax/supported_sql_statements/index.rst b/umn/source/sql_syntax/supported_sql_statements/index.rst new file mode 100644 index 0000000..ea0f419 --- /dev/null +++ b/umn/source/sql_syntax/supported_sql_statements/index.rst @@ -0,0 +1,34 @@ +:original_name: ddm-08-0002.html + +.. _ddm-08-0002: + +Supported SQL Statements +======================== + +- :ref:`CHECK TABLE ` +- :ref:`SHOW RULE ` +- :ref:`SHOW TOPOLOGY ` +- :ref:`SHOW DATA NODE ` +- :ref:`TRUNCATE TABLE ` +- :ref:`HINT- ALLOW_ALTER_RERUN ` +- :ref:`LOAD DATA ` +- :ref:`SHOW PHYSICAL PROCESSLIST ` +- :ref:`Customized Hints for Read/Write Splitting ` +- :ref:`Setting a Hint to Skip the Cached Execution Plan ` +- :ref:`Specifying a Shard Using a Hint When Executing a SQL Statement ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + check_table/index + show_rule + show_topology + show_data_node + truncate_table/index + hint-_allow_alter_rerun + load_data + show_physical_processlist + customized_hints_for_read_write_splitting + setting_a_hint_to_skip_the_cached_execution_plan + specifying_a_shard_using_a_hint_when_executing_a_sql_statement diff --git a/umn/source/sql_syntax/supported_sql_statements/load_data.rst b/umn/source/sql_syntax/supported_sql_statements/load_data.rst new file mode 100644 index 0000000..1c1a70b --- /dev/null +++ b/umn/source/sql_syntax/supported_sql_statements/load_data.rst @@ -0,0 +1,68 @@ +:original_name: ddm_03_0031.html + +.. 
_ddm_03_0031:
+
+LOAD DATA
+=========
+
+Standard Example
+----------------
+
+LOAD DATA LOCAL INFILE '/data/data.txt' IGNORE INTO TABLE test CHARACTER SET 'utf8' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\\n' (id, sid, asf);
+
+.. note::
+
+   If a data field contains special characters like separators and escapes, use OPTIONALLY ENCLOSED BY '"' to enclose the field with double quotation marks ("").
+
+   Example:
+
+   The following data field contains separators (,) and is enclosed with quotation marks:
+
+   **"aab,,,bba,ddd"**
+
+   If a data field contains quotation marks, the preceding method may not work. You can add a backslash (\\) before each quotation mark (") in the field, for example, **"aab,,,bba,ddd\\"ddd\\"bb,ae"**.
+
+- If keyword **LOCAL** is specified, the file is read from the client host. If keyword **LOCAL** is not specified, the statement is not supported for security purposes.
+- You can use **FIELDS TERMINATED BY** to specify a field separator. The default value is **\\t**.
+- You can use **OPTIONALLY ENCLOSED BY** to specify the character that encloses fields in the data source.
+- You can use **LINES TERMINATED BY** to specify a newline character between lines. The default value is **\\n**.
+
+  .. note::
+
+     On some hosts running the Windows OS, the newline character of text files may be **\\r\\n**. The newline character is invisible, so you may need to check whether it is there.
+
+- You can use **CHARACTER SET** to specify a file encoding that should be the same as the encoding used by physical databases in the target RDS for MySQL instance, to avoid garbled characters. The character set name must be enclosed in quotation marks to avoid parsing errors.
+- You can use **IGNORE** or **REPLACE** to specify whether repeated records are replaced or ignored.
+- Currently, column names must be specified, and the sharding field must be included. Otherwise, the route cannot be determined. 
+- For other parameters, see the `LOAD DATA INFILE Syntax `__ on the MySQL official website. Other parameters must appear in the correct order. For more information, visit `the MySQL official website `__.
+
+.. important::
+
+   #. Importing data affects performance of DDM instances and RDS for MySQL instances. Import data during off-peak hours.
+
+   #. Do not send multiple LOAD DATA requests at the same time. If you do so, SQL transactions may time out due to highly concurrent data write operations, table locking, and system I/O occupation, resulting in failure of all LOAD DATA requests.
+
+   #. Manually commit transactions when using LOAD DATA to import data so that data records are modified correctly.
+
+      For example, configure your client as follows:
+
+      **mysql> set autocommit=0;**
+
+      **mysql>** **LOAD DATA LOCAL INFILE** '/data/data.txt' **IGNORE INTO TABLE** **test CHARACTER SET** **'utf8' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\\n' (id, sid, asf);**
+
+      **mysql> commit;**
+
+Use Constraints
+---------------
+
+The LOAD DATA syntax has the following constraints:
+
+- LOW_PRIORITY is not supported.
+- CONCURRENT is not supported.
+- PARTITION (partition_name [, partition_name] ...) is not supported.
+- LINES STARTING BY 'string' is not supported.
+- User-defined variables are not supported.
+- ESCAPED BY supports only '\\\\'.
+- If you have not specified a value for your auto-increment key when you insert a data record, DDM will not fill a value for the key. Auto-increment keys on the data nodes of a DDM instance take effect independently, so generated key values may be duplicated.
+- If the primary key or unique index is not routed to the same physical table, REPLACE does not take effect.
+- If the primary key or unique index is not routed to the same physical table, IGNORE does not take effect. 
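+
+Combining the options described above, a hypothetical import of a comma-separated file with quoted fields might look as follows (the file path, table, and columns reuse the standard example):
+
+.. code-block:: text
+
+   -- /data/data.txt contains, for example:
+   --   1,100,"aab,,,bba,ddd"
+   --   2,200,"plain"
+
+   LOAD DATA LOCAL INFILE '/data/data.txt' IGNORE INTO TABLE test
+   CHARACTER SET 'utf8'
+   FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
+   LINES TERMINATED BY '\n'
+   (id, sid, asf);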
diff --git a/umn/source/sql_syntax/supported_sql_statements/setting_a_hint_to_skip_the_cached_execution_plan.rst b/umn/source/sql_syntax/supported_sql_statements/setting_a_hint_to_skip_the_cached_execution_plan.rst new file mode 100644 index 0000000..a6a4743 --- /dev/null +++ b/umn/source/sql_syntax/supported_sql_statements/setting_a_hint_to_skip_the_cached_execution_plan.rst @@ -0,0 +1,14 @@ +:original_name: ddm_03_0039.html + +.. _ddm_03_0039: + +Setting a Hint to Skip the Cached Execution Plan +================================================ + +DDM allows you to configure a hint to control whether each SELECT statement skips the cached execution plan. + +The hint is in the format of **/*!GAUSS:skip_plancache=**\ *flag*\ **\*/**. + +*flag* can be set to **true** or **false**. **true** indicates that the statement skips the cached execution plan. **false** indicates that the statement does not skip the cached execution plan. + +Currently, this function only applies to SELECT statements. diff --git a/umn/source/sql_syntax/supported_sql_statements/show_data_node.rst b/umn/source/sql_syntax/supported_sql_statements/show_data_node.rst new file mode 100644 index 0000000..1b1ead5 --- /dev/null +++ b/umn/source/sql_syntax/supported_sql_statements/show_data_node.rst @@ -0,0 +1,22 @@ +:original_name: ddm-08-0029.html + +.. _ddm-08-0029: + +SHOW DATA NODE +============== + +**Command Format:** + +**show data node**; + +It is used to view data about database shards in the RDS instance. + +**Output Details:** + +**RDS_instance_id**: indicates the ID of the RDS instance. + +**PHYSICAL_NODE**: used to view physical databases in the RDS instance. + +**HOST**: indicates the IP address of the RDS instance. + +**PORT**: indicates the port number of the RDS instance. 
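+
+**Example:**
+
+A sketch of how the command is typically used (the session below is illustrative only):
+
+.. code-block:: text
+
+   mysql> show data node;
+
+Each row of the result maps one database shard (**PHYSICAL_NODE**) to the RDS instance (**RDS_instance_id**) and endpoint (**HOST**:**PORT**) that stores it.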
diff --git a/umn/source/sql_syntax/supported_sql_statements/show_physical_processlist.rst b/umn/source/sql_syntax/supported_sql_statements/show_physical_processlist.rst
new file mode 100644
index 0000000..d209c6b
--- /dev/null
+++ b/umn/source/sql_syntax/supported_sql_statements/show_physical_processlist.rst
@@ -0,0 +1,55 @@
+:original_name: ddm_08_0032.html
+
+.. _ddm_08_0032:
+
+SHOW PHYSICAL PROCESSLIST
+=========================
+
+**Command Format 1:**
+
+**show physical processlist**;
+
+This command returns the processes that run on the associated RDS instance.
+
+**Command Format 2:**
+
+**show physical processlist with info**;
+
+This command filters out the data records whose **info** is empty from the result set of command 1 and returns only the data records whose **info** is not empty.
+
+**Command Output:**
+
+
+.. figure:: /_static/images/en-us_image_0000001685307362.png
+   :alt: **Figure 1** Command execution effect
+
+   **Figure 1** Command execution effect
+
+**Output Details:**
+
+**Ip**: indicates the IP address of the associated RDS instance.
+
+**Port**: indicates the port number of the associated RDS instance.
+
+**Instance_id**: indicates the ID of the associated RDS instance.
+
+**Type**: **master** indicates that the associated instance is a primary instance, and **readreplica** indicates that the associated instance is a read replica.
+
+Columns after column **Type** indicate the information about processes running on the associated RDS instance. Such information is the same as the output of command **show processlist** executed on the associated RDS instance.
+
+**Command Format 3:**
+
+Run the following statement to kill the execution thread on the associated RDS instance:
+
+**kill physical** *physical_thread_id*\ **@**\ *rds_ip*\ **:**\ *rds_port*\ **;**
+
+**physical_thread_id**: indicates the ID of the execution thread on the associated RDS instance. You can obtain it from the result set in command 2.
+
+**rds_ip**: indicates the IP address of the associated RDS instance. 
You can obtain it from the result set in command 2.
+
+**rds_port**: indicates the port number of the associated RDS instance. You can obtain it from the result set in command 2.
+
+.. important::
+
+   - SHOW PHYSICAL PROCESSLIST is available only in kernel 3.0.1 or later.
+   - You need to log in to the target DDM instance and execute the preceding commands on it.
diff --git a/umn/source/sql_syntax/supported_sql_statements/show_rule.rst b/umn/source/sql_syntax/supported_sql_statements/show_rule.rst
new file mode 100644
index 0000000..64537b6
--- /dev/null
+++ b/umn/source/sql_syntax/supported_sql_statements/show_rule.rst
@@ -0,0 +1,53 @@
+:original_name: ddm-08-0015.html
+
+.. _ddm-08-0015:
+
+SHOW RULE
+=========
+
+**Command Format 1:**
+
+**show rule**;
+
+It is used to view the sharding rule of each logical table in a certain schema.
+
+Command output:
+
+|image1|
+
+**Command Format 2:**
+
+**show rule from** *table_name*\ **;**
+
+It is used to view the sharding rule of a specific logical table in a certain schema.
+
+Command output:
+
+|image2|
+
+**Output Details:**
+
+**TABLE_NAME**: indicates the name of the logical table.
+
+**BROADCAST**: specifies whether the table is a broadcast table. **0** indicates that the table is not a broadcast table. **1** indicates that the table is a broadcast table.
+
+**DB_PARTITION_KEY**: indicates the database sharding key. Leave this field blank if database sharding is not required.
+
+**DB_PARTITION_POLICY**: indicates the database sharding algorithm. The value can be **HASH**, **YYYYMM**, **YYYYDD**, or **YYYYWEEK**.
+
+**DB_PARTITION_COUNT**: indicates the number of database shards.
+
+**DB_PARTITION_OFFSET**: indicates where a new database shard starts from.
+
+**PARTITION_RANGE**: indicates the sharding range when the database sharding algorithm is range.
+
+**TB_PARTITION_KEY**: indicates the table sharding key. Leave this field blank if table sharding is not required.
+
+**TB_PARTITION_POLICY**: indicates the table sharding algorithm. 
The value can be **HASH**, **MM**, **DD**, **MMDD**, or **WEEK**. + +**TB_PARTITION_COUNT**: indicates the number of physical tables in each database shard. + +**TB_PARTITION_OFFSET**: indicates where a new physical table starts from. + +.. |image1| image:: /_static/images/en-us_image_0000001733146413.png +.. |image2| image:: /_static/images/en-us_image_0000001733266529.png diff --git a/umn/source/sql_syntax/supported_sql_statements/show_topology.rst b/umn/source/sql_syntax/supported_sql_statements/show_topology.rst new file mode 100644 index 0000000..5501024 --- /dev/null +++ b/umn/source/sql_syntax/supported_sql_statements/show_topology.rst @@ -0,0 +1,26 @@ +:original_name: ddm-08-0017.html + +.. _ddm-08-0017: + +SHOW TOPOLOGY +============= + +**Command Format:** + +**show topology from** **; + +It is used to view physical tables corresponding to a specified logical table. + +**Output Details:** + +**RDS_instance_id**: indicates the ID of the RDS instance. + +**HOST**: indicates the IP address of the RDS instance. + +**PORT**: indicates the port number of the RDS instance. + +**DATABASE**: indicates the physical database in the RDS instance. + +**TABLE**: indicates the physical table. + +**ROW_COUNT**: indicates the estimated number of data entries in each physical table. The value is obtained from information_schema.TABLES. diff --git a/umn/source/sql_syntax/supported_sql_statements/specifying_a_shard_using_a_hint_when_executing_a_sql_statement.rst b/umn/source/sql_syntax/supported_sql_statements/specifying_a_shard_using_a_hint_when_executing_a_sql_statement.rst new file mode 100644 index 0000000..b40c0f2 --- /dev/null +++ b/umn/source/sql_syntax/supported_sql_statements/specifying_a_shard_using_a_hint_when_executing_a_sql_statement.rst @@ -0,0 +1,27 @@ +:original_name: ddm_03_0040.html + +.. 
_ddm_03_0040:
+
+Specifying a Shard Using a Hint When Executing a SQL Statement
+==============================================================
+
+**Command Format:**
+
+.. code-block:: text
+
+   /*+db=<shard_name>*/ <SQL statement>;
+
+**Description:**
+
+Specify a shard by configuring *shard_name* and execute a SQL statement on the shard.
+
+**Example:**
+
+.. code-block:: text
+
+   /*+db=test_0000*/ select * from t1;
+
+**Restrictions:**
+
+- The hint is valid only for SELECT, DML, and TRUNCATE statements.
+- The hint works only under the text protocol, not under the Prepare protocol.
diff --git a/umn/source/sql_syntax/supported_sql_statements/truncate_table/hint-db.rst b/umn/source/sql_syntax/supported_sql_statements/truncate_table/hint-db.rst
new file mode 100644
index 0000000..c247172
--- /dev/null
+++ b/umn/source/sql_syntax/supported_sql_statements/truncate_table/hint-db.rst
@@ -0,0 +1,18 @@
+:original_name: ddm-08-0024.html
+
+.. _ddm-08-0024:
+
+HINT-DB
+=======
+
+**Command Format:**
+
+**/*+db=**\ *shard_name*\ **\*/ TRUNCATE TABLE** *logical_table_name*
+
+**Description:**
+
+Deleting data in the physical tables corresponding to *logical_table_name* in database shard *shard_name* does not affect physical tables in other database shards.
+
+.. note::
+
+   HINTs are instructions within a SQL statement that tell the optimizer to execute the statement in a flexible way.
diff --git a/umn/source/sql_syntax/supported_sql_statements/truncate_table/hint-db_table.rst b/umn/source/sql_syntax/supported_sql_statements/truncate_table/hint-db_table.rst
new file mode 100644
index 0000000..ad196da
--- /dev/null
+++ b/umn/source/sql_syntax/supported_sql_statements/truncate_table/hint-db_table.rst
@@ -0,0 +1,18 @@
+:original_name: ddm-08-0026.html
+
+.. _ddm-08-0026:
+
+HINT-DB/TABLE
+=============
+
+**Command Format:**
+
+**/*+db=**\ *shard_name*\ **,table=**\ *physical_table_name*\ **\*/ TRUNCATE TABLE** *logical_table_name*
+
+**Description:**
+
+Deleting data in physical table *physical_table_name* in database shard *shard_name* does not affect other physical tables.
+
+.. note::
+
+   Hints are valid only for sharded tables. 
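+
+For example, assuming a database shard **test_0000** holds a physical table **t1_0001** of logical table **t1** (all names are illustrative), only that physical table would be truncated by:
+
+.. code-block:: text
+
+   /*+db=test_0000,table=t1_0001*/ TRUNCATE TABLE t1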
diff --git a/umn/source/sql_syntax/supported_sql_statements/truncate_table/hint-table.rst b/umn/source/sql_syntax/supported_sql_statements/truncate_table/hint-table.rst
new file mode 100644
index 0000000..0606a3f
--- /dev/null
+++ b/umn/source/sql_syntax/supported_sql_statements/truncate_table/hint-table.rst
@@ -0,0 +1,31 @@
+:original_name: ddm-08-0025.html
+
+.. _ddm-08-0025:
+
+HINT-TABLE
+==========
+
+HINTs are instructions within a SQL statement that tell the optimizer to execute the statement in a flexible way. This section describes how to use HINT syntax to delete data from a table.
+
+**Command Format:**
+
+**/*+table=**\ *physical_table_name*\ **\*/ TRUNCATE TABLE** *logical_table_name*
+
+**Description:**
+
+Deleting data in physical table *physical_table_name* in the current database shard does not affect other physical tables.
+
+**Example output before the table data is deleted:**
+
+|image1|
+
+**Example output after the table data is deleted:**
+
+|image2|
+
+.. note::
+
+   Hints are valid only for sharded tables.
+
+.. |image1| image:: /_static/images/en-us_image_0000001685307430.png
+.. |image2| image:: /_static/images/en-us_image_0000001733266617.png
diff --git a/umn/source/sql_syntax/supported_sql_statements/truncate_table/index.rst b/umn/source/sql_syntax/supported_sql_statements/truncate_table/index.rst
new file mode 100644
index 0000000..690e290
--- /dev/null
+++ b/umn/source/sql_syntax/supported_sql_statements/truncate_table/index.rst
@@ -0,0 +1,18 @@
+:original_name: ddm-08-0023.html
+
+.. _ddm-08-0023:
+
+TRUNCATE TABLE
+==============
+
+- :ref:`HINT-DB <ddm-08-0024>`
+- :ref:`HINT-TABLE <ddm-08-0025>`
+- :ref:`HINT-DB/TABLE <ddm-08-0026>`
+
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   hint-db
+   hint-table
+   hint-db_table
diff --git a/umn/source/sql_syntax/use_constraints.rst b/umn/source/sql_syntax/use_constraints.rst
new file mode 100644
index 0000000..67a480f
--- /dev/null
+++ b/umn/source/sql_syntax/use_constraints.rst
@@ -0,0 +1,44 @@
+:original_name: ddm-08-0013.html
+
+.. 
_ddm-08-0013: + +Use Constraints +=============== + +DDM is compatible with the MySQL protocol and syntax, but the use of SQL statements is limited due to differences between distributed databases and single-node databases. + +Unsupported SQL Statements +-------------------------- + +- Triggers +- Temporary tables +- DO statement +- Association with foreign keys +- RESET statement +- FLUSH statement +- BINLOG statement +- HANDLER statement +- SHOW WARNINGS statement +- Assignment operator := +- The NULL-safe equal operator (<=>) +- Expression IS UNKNOWN +- INSTALL and UNINSTALL PLUGIN statements +- Cross-shard stored procedures and custom functions +- Statements for modifying database names, table names, and sharding field names and types +- Most SHOW statements, such as SHOW PROFILES and SHOW ERRORS +- Table maintenance statements, including ANALYZE, CHECK, CHECKSUM, OPTIMIZE, and REPAIR TABLE +- Statements that assign a value to or query a session variable, for example, ``set @rowid=0;select @rowid:=@rowid+1,id from user`` +- SQL statements that use -- or ``/*...*/`` to comment out a single line or multiple lines of code +- The result of the REPEAT function contains a maximum of 1,000,000 characters (in version 3.0.9 or later). + +Permission Levels +----------------- + +Permission levels supported by DDM are as follows: + +- Global level (not supported) +- Database level (supported) +- Table level (supported) +- Column level (not supported) +- Subprogram level (not supported) +- User level (supported) diff --git a/umn/source/tags.rst b/umn/source/tags.rst new file mode 100644 index 0000000..e225e32 --- /dev/null +++ b/umn/source/tags.rst @@ -0,0 +1,81 @@ +:original_name: ddm_06_1000.html + +.. _ddm_06_1000: + +Tags +==== + +Tag Management Service (TMS) enables you to use tags on the console to manage resources. TMS works with other cloud services to manage tags. TMS manages tags globally.
Other cloud services manage only their own tags. + +Precautions +----------- + +- A tag consists of a key and a value. You can add only one value for each key. +- Each instance can have up to 20 tags. + +Adding a Tag +------------ + +#. Log in to the DDM console. + +#. On the **Instances** page, locate the required instance and click its name. + +#. In the navigation pane on the left, click **Tags**. + +#. Click **Add Tag**. + +#. In the displayed dialog box, enter a tag key and value and click **OK**. + + The tag key and value must comply with the following rules. + + .. table:: **Table 1** Parameter description + + +-----------------------------------+----------------------------------------------------------------------------------------+ + | Item | Description | + +===================================+========================================================================================+ + | Tag key | This parameter is mandatory and cannot be null. The key: | + | | | + | | - Must be unique for each instance. | + | | - Can only consist of digits, letters, underscores (_), hyphens (-), and at sign (@). | + | | - Can include 1 to 36 characters. | + | | - Cannot be an empty string or start with **\_sys\_**. | + +-----------------------------------+----------------------------------------------------------------------------------------+ + | Tag value | This parameter is mandatory. The value: | + | | | + | | - Is an empty string by default. | + | | - Can only consist of digits, letters, underscores (_), hyphens (-), and at sign (@). | + | | - Can contain 0 to 43 characters. | + +-----------------------------------+----------------------------------------------------------------------------------------+ + +#. View and manage the tag on the **Tags** page. + +Editing a Tag +------------- + +#. Log in to the DDM console. + +#. On the **Instances** page, locate the required instance and click its name. + +#. 
On the **Tags** page, locate the tag that you want to edit and click **Edit** in the **Operation** column. In the displayed dialog box, change the tag value and click **OK**. + + Only the tag value can be edited. + +#. View and manage the tag on the **Tags** page. + +Deleting Tags +------------- + +#. Log in to the DDM console. +#. On the **Instances** page, locate the required instance and click its name. +#. In the navigation pane, choose **Tags**. On the displayed page, locate the tag that you want to delete and click **Delete** in the **Operation** column. In the displayed dialog box, click **Yes**. +#. Check that the tag is no longer displayed on the **Tags** page. + +Searching for Instances by Tag +------------------------------ + +After tags are added, you can search for instances by tag to quickly find specific types of instances. + +#. Log in to the DDM console. +#. On the **Instances** page, click **Search by Tag** in the upper right corner of the instance list. +#. Enter a tag key and a tag value and click **Search**. +#. View the instances that are found. diff --git a/umn/source/task_center.rst b/umn/source/task_center.rst new file mode 100644 index 0000000..00d6db4 --- /dev/null +++ b/umn/source/task_center.rst @@ -0,0 +1,55 @@ +:original_name: ddm_09_0002.html + +.. _ddm_09_0002: + +Task Center +=========== + +You can view the progress and results of asynchronous tasks on the **Task Center** page. + +.. note:: + + The following tasks can be viewed: + + - Creating a DDM instance + - Deleting a DDM instance + - Changing node class + - Scaling out a DDM instance + - Scaling in a DDM instance + - Restarting a DDM instance + - Binding an EIP + - Unbinding an EIP + - Restoring a DDM instance + - Importing schema information + - Configuring shards + - Retrying shard configuration + - Deleting a backup + - Creating a group + - Deleting a group + - Restarting a node + +Procedure +--------- + +#. Log in to the management console. +#. 
Click |image1| in the upper left corner and select a region and a project. +#. Click |image2| in the upper left corner of the page and choose **Databases** > **Distributed Database Middleware**. +#. Choose **Task Center** in the left navigation pane, locate the required task, and view its details. + + - You can locate a task by name, order ID, or instance name/ID, or enter a task name in the search box in the upper right corner to search for it. + + - You can click |image3| in the upper right corner to search for tasks executed within a specific period. The default time range is seven days. + + Tasks are retained in the list for up to one month. + + - You can view tasks in the following statuses: + + - Running + - Completed + - Failed + + - You can view the task creation and completion time. + +.. |image1| image:: /_static/images/en-us_image_0000001685147662.png +.. |image2| image:: /_static/images/en-us_image_0000001685307410.png +.. |image3| image:: /_static/images/en-us_image_0000001685307406.png