diff --git a/docs/mrs/component-operation-guide/ALL_META.TXT.json b/docs/mrs/component-operation-guide/ALL_META.TXT.json
index 8613608c..eb46183f 100644
--- a/docs/mrs/component-operation-guide/ALL_META.TXT.json
+++ b/docs/mrs/component-operation-guide/ALL_META.TXT.json
@@ -1,8051 +1,14476 @@
[
+ {
+ "dockw":"Component Operation Guide (Normal)"
+ },
{
"uri":"mrs_01_0756.html",
+ "node_id":"mrs_01_0756.xml",
"product_code":"mrs",
"code":"1",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Alluxio",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Alluxio",
"githuburl":""
},
{
"uri":"mrs_01_0759.html",
+ "node_id":"mrs_01_0759.xml",
"product_code":"mrs",
"code":"2",
"des":"If you want to use a unified client API and a global namespace to access persistent storage systems including HDFS and OBS to separate computing from storage, you can con",
"doc_type":"cmpntguide",
"kw":"Configuring an Underlying Storage System,Using Alluxio,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring an Underlying Storage System",
"githuburl":""
},
{
"uri":"mrs_01_0760.html",
+ "node_id":"mrs_01_0760.xml",
"product_code":"mrs",
"code":"3",
"des":"The port number used for accessing the Alluxio file system is 19998, and the access address is alluxio://:19998/. This section us",
"doc_type":"cmpntguide",
"kw":"Accessing Alluxio Using a Data Application,Using Alluxio,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Accessing Alluxio Using a Data Application",
"githuburl":""
},
{
"uri":"mrs_01_0757.html",
+ "node_id":"mrs_01_0757.xml",
"product_code":"mrs",
"code":"4",
"des":"Create a cluster with Alluxio installed.Log in to the active Master node in a cluster as user root using the password set during cluster creation.Run the following comman",
"doc_type":"cmpntguide",
"kw":"Common Operations of Alluxio,Using Alluxio,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Operations of Alluxio",
"githuburl":""
},
{
"uri":"mrs_01_0385.html",
+ "node_id":"mrs_01_0385.xml",
"product_code":"mrs",
"code":"5",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using CarbonData (for Versions Earlier Than MRS 3.x)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using CarbonData (for Versions Earlier Than MRS 3.x)",
"githuburl":""
},
{
"uri":"mrs_01_0386.html",
+ "node_id":"mrs_01_0386.xml",
"product_code":"mrs",
"code":"6",
"des":"This section is for MRS 3.x or earlier. For MRS 3.x or later, see Using CarbonData (for MRS 3.x or Later).This section describes the procedure of using Spark CarbonData. ",
"doc_type":"cmpntguide",
"kw":"Using CarbonData from Scratch,Using CarbonData (for Versions Earlier Than MRS 3.x),Component Operati",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using CarbonData from Scratch",
"githuburl":""
},
{
"uri":"mrs_01_0387.html",
+ "node_id":"mrs_01_0387.xml",
"product_code":"mrs",
"code":"7",
"des":"CarbonData tables are similar to tables in the relational database management system (RDBMS). RDBMS tables consist of rows and columns to store data. CarbonData tables ha",
"doc_type":"cmpntguide",
"kw":"About CarbonData Table,Using CarbonData (for Versions Earlier Than MRS 3.x),Component Operation Guid",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"About CarbonData Table",
"githuburl":""
},
{
"uri":"mrs_01_0388.html",
+ "node_id":"mrs_01_0388.xml",
"product_code":"mrs",
"code":"8",
"des":"A CarbonData table must be created to load and query data.Users can create a table by specifying its columns and data types. For analysis clusters with Kerberos authentic",
"doc_type":"cmpntguide",
"kw":"Creating a CarbonData Table,Using CarbonData (for Versions Earlier Than MRS 3.x),Component Operation",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating a CarbonData Table",
"githuburl":""
},
{
"uri":"mrs_01_0389.html",
+ "node_id":"mrs_01_0389.xml",
"product_code":"mrs",
"code":"9",
"des":"Unused CarbonData tables can be deleted. After a CarbonData table is deleted, its metadata and loaded data are deleted together.DROP TABLE [IF EXISTS] [db_name.]table_nam",
"doc_type":"cmpntguide",
"kw":"Deleting a CarbonData Table,Using CarbonData (for Versions Earlier Than MRS 3.x),Component Operation",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Deleting a CarbonData Table",
"githuburl":""
},
{
"uri":"mrs_01_1400.html",
+ "node_id":"mrs_01_1400.xml",
"product_code":"mrs",
"code":"10",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using CarbonData (for MRS 3.x or Later)",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using CarbonData (for MRS 3.x or Later)",
"githuburl":""
},
{
"uri":"mrs_01_1401.html",
+ "node_id":"mrs_01_1401.xml",
"product_code":"mrs",
"code":"11",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Overview",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Overview",
"githuburl":""
},
{
"uri":"mrs_01_1402.html",
+ "node_id":"mrs_01_1402.xml",
"product_code":"mrs",
"code":"12",
"des":"CarbonData is a new Apache Hadoop native data-store format. CarbonData allows faster interactive queries over PetaBytes of data using advanced columnar storage, index, co",
"doc_type":"cmpntguide",
"kw":"CarbonData Overview,Overview,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CarbonData Overview",
"githuburl":""
},
{
"uri":"mrs_01_1403.html",
+ "node_id":"mrs_01_1403.xml",
"product_code":"mrs",
"code":"13",
"des":"The memory required for data loading depends on the following factors:Number of columnsColumn valuesConcurrency (configured using carbon.number.of.cores.while.loading)Sor",
"doc_type":"cmpntguide",
"kw":"Main Specifications of CarbonData,Overview,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Main Specifications of CarbonData",
"githuburl":""
},
{
"uri":"mrs_01_1404.html",
+ "node_id":"mrs_01_1404.xml",
"product_code":"mrs",
"code":"14",
"des":"This section provides the details of all the configurations required for the CarbonData System.Configure the following parameters in the spark-defaults.conf file on the S",
"doc_type":"cmpntguide",
"kw":"limit,limit,Configuration Reference,Using CarbonData (for MRS 3.x or Later),Component Operation Guid",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuration Reference",
"githuburl":""
},
{
"uri":"mrs_01_1405.html",
+ "node_id":"mrs_01_1405.xml",
"product_code":"mrs",
"code":"15",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"CarbonData Operation Guide",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CarbonData Operation Guide",
"githuburl":""
},
{
"uri":"mrs_01_1406.html",
+ "node_id":"mrs_01_1406.xml",
"product_code":"mrs",
"code":"16",
"des":"This section describes how to create CarbonData tables, load data, and query data. This quick start provides operations based on the Spark Beeline client. If you want to ",
"doc_type":"cmpntguide",
"kw":"CarbonData Quick Start,CarbonData Operation Guide,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CarbonData Quick Start",
"githuburl":""
},
{
"uri":"mrs_01_1407.html",
+ "node_id":"mrs_01_1407.xml",
"product_code":"mrs",
"code":"17",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"CarbonData Table Management",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CarbonData Table Management",
"githuburl":""
},
{
"uri":"mrs_01_1408.html",
+ "node_id":"mrs_01_1408.xml",
"product_code":"mrs",
"code":"18",
"des":"In CarbonData, data is stored in entities called tables. CarbonData tables are similar to RDBMS tables. RDBMS data is stored in a table consisting of rows and columns. Ca",
"doc_type":"cmpntguide",
"kw":"About CarbonData Table,CarbonData Table Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"About CarbonData Table",
"githuburl":""
},
{
"uri":"mrs_01_1409.html",
+ "node_id":"mrs_01_1409.xml",
"product_code":"mrs",
"code":"19",
"des":"A CarbonData table must be created to load and query data. You can run the Create Table command to create a table. This command is used to create a table using custom col",
"doc_type":"cmpntguide",
"kw":"Creating a CarbonData Table,CarbonData Table Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating a CarbonData Table",
"githuburl":""
},
{
"uri":"mrs_01_1410.html",
+ "node_id":"mrs_01_1410.xml",
"product_code":"mrs",
"code":"20",
"des":"You can run the DROP TABLE command to delete a table. After a CarbonData table is deleted, its metadata and loaded data are deleted together.Run the following command to ",
"doc_type":"cmpntguide",
"kw":"Deleting a CarbonData Table,CarbonData Table Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Deleting a CarbonData Table",
"githuburl":""
},
{
"uri":"mrs_01_1411.html",
+ "node_id":"mrs_01_1411.xml",
"product_code":"mrs",
"code":"21",
"des":"When the SET command is executed, the new properties overwrite the existing ones.SORT SCOPEThe following is an example of the SET SORT SCOPE command:ALTER TABLE tablename",
"doc_type":"cmpntguide",
"kw":"Modify the CarbonData Table,CarbonData Table Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Modify the CarbonData Table",
"githuburl":""
},
{
"uri":"mrs_01_1412.html",
+ "node_id":"mrs_01_1412.xml",
"product_code":"mrs",
"code":"22",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"CarbonData Table Data Management",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CarbonData Table Data Management",
"githuburl":""
},
{
"uri":"mrs_01_1413.html",
+ "node_id":"mrs_01_1413.xml",
"product_code":"mrs",
"code":"23",
"des":"After a CarbonData table is created, you can run the LOAD DATA command to load data to the table for query. Once data loading is triggered, data is encoded in CarbonData ",
"doc_type":"cmpntguide",
"kw":"Loading Data,CarbonData Table Data Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Loading Data",
"githuburl":""
},
{
"uri":"mrs_01_1414.html",
+ "node_id":"mrs_01_1414.xml",
"product_code":"mrs",
"code":"24",
"des":"If you want to modify and reload the data because you have loaded wrong data into a table, or there are too many bad records, you can delete specific segments by segment ",
"doc_type":"cmpntguide",
"kw":"Deleting Segments,CarbonData Table Data Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Deleting Segments",
"githuburl":""
},
{
"uri":"mrs_01_1415.html",
+ "node_id":"mrs_01_1415.xml",
"product_code":"mrs",
"code":"25",
"des":"Frequent data access results in a large number of fragmented CarbonData files in the storage directory. In each data loading, data is sorted and indexing is performed. Th",
"doc_type":"cmpntguide",
"kw":"Combining Segments,CarbonData Table Data Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Combining Segments",
"githuburl":""
},
{
"uri":"mrs_01_1416.html",
+ "node_id":"mrs_01_1416.xml",
"product_code":"mrs",
"code":"26",
"des":"If you want to rapidly migrate CarbonData data from a cluster to another one, you can use the CarbonData backup and restoration commands. This method does not require dat",
"doc_type":"cmpntguide",
"kw":"CarbonData Data Migration,CarbonData Operation Guide,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CarbonData Data Migration",
"githuburl":""
},
{
"uri":"mrs_01_2301.html",
+ "node_id":"mrs_01_2301.xml",
"product_code":"mrs",
"code":"27",
"des":"This migration guides you to migrate the CarbonData table data of Spark 1.5 to that of Spark2x.Before performing this operation, you need to stop the data import service ",
"doc_type":"cmpntguide",
"kw":"Migrating Data on CarbonData from Spark 1.5 to Spark2x,CarbonData Operation Guide,Component Operatio",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Migrating Data on CarbonData from Spark 1.5 to Spark2x",
"githuburl":""
},
{
"uri":"mrs_01_1417.html",
+ "node_id":"mrs_01_1417.xml",
"product_code":"mrs",
"code":"28",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"CarbonData Performance Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CarbonData Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_1418.html",
+ "node_id":"mrs_01_1418.xml",
"product_code":"mrs",
"code":"29",
"des":"There are various parameters that can be tuned to improve the query performance in CarbonData. Most of the parameters focus on increasing the parallelism in processing an",
"doc_type":"cmpntguide",
"kw":"Tuning Guidelines,CarbonData Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Tuning Guidelines",
"githuburl":""
},
{
"uri":"mrs_01_1419.html",
+ "node_id":"mrs_01_1419.xml",
"product_code":"mrs",
"code":"30",
"des":"This section provides suggestions based on more than 50 test cases to help you create CarbonData tables with higher query performance.If the to-be-created table contains ",
"doc_type":"cmpntguide",
"kw":"Suggestions for Creating CarbonData Tables,CarbonData Performance Tuning,Component Operation Guide (",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Suggestions for Creating CarbonData Tables",
"githuburl":""
},
{
"uri":"mrs_01_1421.html",
+ "node_id":"mrs_01_1421.xml",
"product_code":"mrs",
"code":"31",
"des":"This section describes the configurations that can improve CarbonData performance.Table 1 and Table 2 describe the configurations about query of CarbonData.Table 3, Table",
"doc_type":"cmpntguide",
"kw":"Configurations for Performance Tuning,CarbonData Performance Tuning,Component Operation Guide (Norma",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configurations for Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_1422.html",
+ "node_id":"mrs_01_1422.xml",
"product_code":"mrs",
"code":"32",
"des":"The following table provides details about Hive ACL permissions required for performing operations on CarbonData tables.Parameters listed in Table 5 or Table 6 have been ",
"doc_type":"cmpntguide",
"kw":"CarbonData Access Control,Using CarbonData (for MRS 3.x or Later),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CarbonData Access Control",
"githuburl":""
},
{
"uri":"mrs_01_1423.html",
+ "node_id":"mrs_01_1423.xml",
"product_code":"mrs",
"code":"33",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"CarbonData Syntax Reference",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CarbonData Syntax Reference",
"githuburl":""
},
{
"uri":"mrs_01_1424.html",
+ "node_id":"mrs_01_1424.xml",
"product_code":"mrs",
"code":"34",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"DDL",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"DDL",
"githuburl":""
},
{
"uri":"mrs_01_1425.html",
+ "node_id":"mrs_01_1425.xml",
"product_code":"mrs",
"code":"35",
"des":"This command is used to create a CarbonData table by specifying the list of fields along with the table properties.CREATE TABLE [IF NOT EXISTS] [db_name.]table_name[(col_",
"doc_type":"cmpntguide",
"kw":"CREATE TABLE,DDL,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CREATE TABLE",
"githuburl":""
},
{
"uri":"mrs_01_1426.html",
+ "node_id":"mrs_01_1426.xml",
"product_code":"mrs",
"code":"36",
"des":"This command is used to create a CarbonData table by specifying the list of fields along with the table properties.CREATE TABLE[IF NOT EXISTS] [db_name.]table_name STORED",
"doc_type":"cmpntguide",
"kw":"CREATE TABLE As SELECT,DDL,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CREATE TABLE As SELECT",
"githuburl":""
},
{
"uri":"mrs_01_1427.html",
+ "node_id":"mrs_01_1427.xml",
"product_code":"mrs",
"code":"37",
"des":"This command is used to delete an existing table.DROP TABLE [IF EXISTS] [db_name.]table_name;In this command, IF EXISTS and db_name are optional.DROP TABLE IF EXISTS prod",
"doc_type":"cmpntguide",
"kw":"DROP TABLE,DDL,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"DROP TABLE",
"githuburl":""
},
{
"uri":"mrs_01_1428.html",
+ "node_id":"mrs_01_1428.xml",
"product_code":"mrs",
"code":"38",
"des":"SHOW TABLES command is used to list all tables in the current or a specific database.SHOW TABLES [IN db_name];IN db_Name is optional.SHOW TABLES IN ProductDatabase;All ta",
"doc_type":"cmpntguide",
"kw":"SHOW TABLES,DDL,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"SHOW TABLES",
"githuburl":""
},
{
"uri":"mrs_01_1429.html",
+ "node_id":"mrs_01_1429.xml",
"product_code":"mrs",
"code":"39",
"des":"The ALTER TABLE COMPACTION command is used to merge a specified number of segments into a single segment. This improves the query performance of a table.ALTER TABLE[db_na",
"doc_type":"cmpntguide",
"kw":"ALTER TABLE COMPACTION,DDL,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"ALTER TABLE COMPACTION",
"githuburl":""
},
{
"uri":"mrs_01_1430.html",
+ "node_id":"mrs_01_1430.xml",
"product_code":"mrs",
"code":"40",
"des":"This command is used to rename an existing table.ALTER TABLE [db_name.]table_name RENAME TO new_table_name;Parallel queries (using table names to obtain paths for reading",
"doc_type":"cmpntguide",
"kw":"TABLE RENAME,DDL,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"TABLE RENAME",
"githuburl":""
},
{
"uri":"mrs_01_1431.html",
+ "node_id":"mrs_01_1431.xml",
"product_code":"mrs",
"code":"41",
"des":"This command is used to add a column to an existing table.ALTER TABLE [db_name.]table_name ADD COLUMNS (col_name data_type,...) TBLPROPERTIES(''COLUMNPROPERTIES.columnNam",
"doc_type":"cmpntguide",
"kw":"ADD COLUMNS,DDL,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"ADD COLUMNS",
"githuburl":""
},
{
"uri":"mrs_01_1432.html",
+ "node_id":"mrs_01_1432.xml",
"product_code":"mrs",
"code":"42",
"des":"This command is used to delete one or more columns from a table.ALTER TABLE [db_name.]table_name DROP COLUMNS (col_name, ...);After a column is deleted, at least one key ",
"doc_type":"cmpntguide",
"kw":"DROP COLUMNS,DDL,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"DROP COLUMNS",
"githuburl":""
},
{
"uri":"mrs_01_1433.html",
+ "node_id":"mrs_01_1433.xml",
"product_code":"mrs",
"code":"43",
"des":"This command is used to change the data type from INT to BIGINT or decimal precision from lower to higher.ALTER TABLE [db_name.]table_name CHANGE col_name col_name change",
"doc_type":"cmpntguide",
"kw":"CHANGE DATA TYPE,DDL,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CHANGE DATA TYPE",
"githuburl":""
},
{
"uri":"mrs_01_1434.html",
+ "node_id":"mrs_01_1434.xml",
"product_code":"mrs",
"code":"44",
"des":"This command is used to register Carbon table to Hive meta store catalogue from exisiting Carbon table data.REFRESH TABLE db_name.table_name;The new database name and the",
"doc_type":"cmpntguide",
"kw":"REFRESH TABLE,DDL,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"REFRESH TABLE",
"githuburl":""
},
{
"uri":"mrs_01_1435.html",
+ "node_id":"mrs_01_1435.xml",
"product_code":"mrs",
"code":"45",
"des":"This command is used to register an index table with the primary table.REGISTER INDEX TABLE indextable_name ON db_name.maintable_name;Before running this command, run REF",
"doc_type":"cmpntguide",
"kw":"REGISTER INDEX TABLE,DDL,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"REGISTER INDEX TABLE",
"githuburl":""
},
{
"uri":"mrs_01_1437.html",
+ "node_id":"mrs_01_1437.xml",
"product_code":"mrs",
"code":"46",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"DML",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"DML",
"githuburl":""
},
{
"uri":"mrs_01_1438.html",
+ "node_id":"mrs_01_1438.xml",
"product_code":"mrs",
"code":"47",
"des":"This command is used to load user data of a particular type, so that CarbonData can provide good query performance.Only the raw data on HDFS can be loaded.LOAD DATA INPAT",
"doc_type":"cmpntguide",
"kw":"LOAD DATA,DML,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"LOAD DATA",
"githuburl":""
},
{
"uri":"mrs_01_1439.html",
+ "node_id":"mrs_01_1439.xml",
"product_code":"mrs",
"code":"48",
"des":"This command is used to update the CarbonData table based on the column expression and optional filtering conditions.Syntax 1:UPDATE SET (column_name1, col",
"doc_type":"cmpntguide",
"kw":"UPDATE CARBON TABLE,DML,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"UPDATE CARBON TABLE",
"githuburl":""
},
{
"uri":"mrs_01_1440.html",
+ "node_id":"mrs_01_1440.xml",
"product_code":"mrs",
"code":"49",
"des":"This command is used to delete records from a CarbonData table.DELETE FROM CARBON_TABLE [WHERE expression];If a segment is deleted, all secondary indexes associated with ",
"doc_type":"cmpntguide",
"kw":"DELETE RECORDS from CARBON TABLE,DML,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"DELETE RECORDS from CARBON TABLE",
"githuburl":""
},
{
"uri":"mrs_01_1441.html",
+ "node_id":"mrs_01_1441.xml",
"product_code":"mrs",
"code":"50",
"des":"This command is used to add the output of the SELECT command to a Carbon table.INSERT INTO [CARBON TABLE] [select query];A table has been created.You must belong to the d",
"doc_type":"cmpntguide",
"kw":"INSERT INTO CARBON TABLE,DML,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"INSERT INTO CARBON TABLE",
"githuburl":""
},
{
"uri":"mrs_01_1442.html",
+ "node_id":"mrs_01_1442.xml",
"product_code":"mrs",
"code":"51",
"des":"This command is used to delete segments by the ID.DELETE FROM TABLE db_name.table_name WHERE SEGMENT.ID IN (segment_id1,segment_id2);Segments cannot be deleted from the s",
"doc_type":"cmpntguide",
"kw":"DELETE SEGMENT by ID,DML,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"DELETE SEGMENT by ID",
"githuburl":""
},
{
"uri":"mrs_01_1443.html",
+ "node_id":"mrs_01_1443.xml",
"product_code":"mrs",
"code":"52",
"des":"This command is used to delete segments by loading date. Segments created before a specific date will be deleted.DELETE FROM TABLE db_name.table_name WHERE SEGMENT.STARTT",
"doc_type":"cmpntguide",
"kw":"DELETE SEGMENT by DATE,DML,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"DELETE SEGMENT by DATE",
"githuburl":""
},
{
"uri":"mrs_01_1444.html",
+ "node_id":"mrs_01_1444.xml",
"product_code":"mrs",
"code":"53",
"des":"This command is used to list the segments of a CarbonData table.SHOW SEGMENTS FOR TABLE [db_name.]table_name LIMIT number_of_loads;Nonecreate tablecarbon01(a int,b string",
"doc_type":"cmpntguide",
"kw":"SHOW SEGMENTS,DML,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"SHOW SEGMENTS",
"githuburl":""
},
{
"uri":"mrs_01_1445.html",
+ "node_id":"mrs_01_1445.xml",
"product_code":"mrs",
"code":"54",
"des":"This command is used to create secondary indexes in the CarbonData tables.CREATE INDEX index_nameON TABLE [db_name.]table_name (col_name1, col_name2)AS 'carbondata'PROPER",
"doc_type":"cmpntguide",
"kw":"CREATE SECONDARY INDEX,DML,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CREATE SECONDARY INDEX",
"githuburl":""
},
{
"uri":"mrs_01_1446.html",
+ "node_id":"mrs_01_1446.xml",
"product_code":"mrs",
"code":"55",
"des":"This command is used to list all secondary index tables in the CarbonData table.SHOW INDEXES ON db_name.table_name;db_name is optional.create table productdb.productSales",
"doc_type":"cmpntguide",
"kw":"SHOW SECONDARY INDEXES,DML,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"SHOW SECONDARY INDEXES",
"githuburl":""
},
{
"uri":"mrs_01_1447.html",
+ "node_id":"mrs_01_1447.xml",
"product_code":"mrs",
"code":"56",
"des":"This command is used to delete the existing secondary index table in a specific table.DROP INDEX [IF EXISTS] index_nameON [db_name.]table_name;In this command, IF EXISTS ",
"doc_type":"cmpntguide",
"kw":"DROP SECONDARY INDEX,DML,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"DROP SECONDARY INDEX",
"githuburl":""
},
{
"uri":"mrs_01_1448.html",
+ "node_id":"mrs_01_1448.xml",
"product_code":"mrs",
"code":"57",
"des":"After the DELETE SEGMENT command is executed, the deleted segments are marked as the delete state. After the segments are merged, the status of the original segments chan",
"doc_type":"cmpntguide",
"kw":"CLEAN FILES,DML,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CLEAN FILES",
"githuburl":""
},
{
"uri":"mrs_01_1449.html",
+ "node_id":"mrs_01_1449.xml",
"product_code":"mrs",
"code":"58",
"des":"This command is used to dynamically add, update, display, or reset the CarbonData properties without restarting the driver.Add or Update parameter value:SET parameter_nam",
"doc_type":"cmpntguide",
"kw":"SET/RESET,DML,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"SET/RESET",
"githuburl":""
},
{
"uri":"mrs_01_24046.html",
+ "node_id":"mrs_01_24046.xml",
"product_code":"mrs",
"code":"59",
"des":"Before performing DDL and DML operations, you need to obtain the corresponding locks. See Table 1 for details about the locks that need to be obtained for each operation.",
"doc_type":"cmpntguide",
"kw":"Operation Concurrent Execution,CarbonData Syntax Reference,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Operation Concurrent Execution",
"githuburl":""
},
{
"uri":"mrs_01_1450.html",
+ "node_id":"mrs_01_1450.xml",
"product_code":"mrs",
"code":"60",
"des":"This section describes the APIs and usage methods of Segment. All methods are in the org.apache.spark.util.CarbonSegmentUtil class.The following methods have been abandon",
"doc_type":"cmpntguide",
"kw":"API,CarbonData Syntax Reference,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"API",
"githuburl":""
},
{
"uri":"mrs_01_1451.html",
+ "node_id":"mrs_01_1451.xml",
"product_code":"mrs",
"code":"61",
"des":"Spatial data includes multidimensional points, lines, rectangles, cubes, polygons, and other geometric objects. A spatial data object occupies a certain region of space, ",
"doc_type":"cmpntguide",
"kw":"Spatial Indexes,CarbonData Syntax Reference,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Spatial Indexes",
"githuburl":""
},
{
"uri":"mrs_01_1454.html",
+ "node_id":"mrs_01_1454.xml",
"product_code":"mrs",
"code":"62",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"CarbonData Troubleshooting",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"CarbonData Troubleshooting",
"githuburl":""
},
{
"uri":"mrs_01_1455.html",
+ "node_id":"mrs_01_1455.xml",
"product_code":"mrs",
"code":"63",
"des":"When double data type values with higher precision are used in filters, incorrect values are returned by filtering results.When double data type values with higher precis",
"doc_type":"cmpntguide",
"kw":"Filter Result Is not Consistent with Hive when a Big Double Type Value Is Used in Filter,CarbonData ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Filter Result Is not Consistent with Hive when a Big Double Type Value Is Used in Filter",
"githuburl":""
},
{
"uri":"mrs_01_1456.html",
+ "node_id":"mrs_01_1456.xml",
"product_code":"mrs",
"code":"64",
"des":"The query performance fluctuates when the query is executed in different query periods.During data loading, the memory configured for each executor program instance may b",
"doc_type":"cmpntguide",
"kw":"Query Performance Deterioration,CarbonData Troubleshooting,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Query Performance Deterioration",
"githuburl":""
},
{
"uri":"mrs_01_1457.html",
+ "node_id":"mrs_01_1457.xml",
"product_code":"mrs",
"code":"65",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"CarbonData FAQ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CarbonData FAQ",
"githuburl":""
},
{
"uri":"mrs_01_1458.html",
+ "node_id":"mrs_01_1458.xml",
"product_code":"mrs",
"code":"66",
"des":"Why is incorrect output displayed when I perform query with filter on decimal data type values?For example:select * from carbon_table where num = 1234567890123456.22;Outp",
"doc_type":"cmpntguide",
"kw":"Why Is Incorrect Output Displayed When I Perform Query with Filter on Decimal Data Type Values?,Carb",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Is Incorrect Output Displayed When I Perform Query with Filter on Decimal Data Type Values?",
"githuburl":""
},
{
"uri":"mrs_01_1459.html",
+ "node_id":"mrs_01_1459.xml",
"product_code":"mrs",
"code":"67",
"des":"How to avoid minor compaction for historical data?If you want to load historical data first and then the incremental data, perform following steps to avoid minor compacti",
"doc_type":"cmpntguide",
"kw":"How to Avoid Minor Compaction for Historical Data?,CarbonData FAQ,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How to Avoid Minor Compaction for Historical Data?",
"githuburl":""
},
{
"uri":"mrs_01_1460.html",
+ "node_id":"mrs_01_1460.xml",
"product_code":"mrs",
"code":"68",
"des":"How to change the default group name for CarbonData data loading?By default, the group name for CarbonData data loading is ficommon. You can perform the following operati",
"doc_type":"cmpntguide",
"kw":"How to Change the Default Group Name for CarbonData Data Loading?,CarbonData FAQ,Component Operation",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How to Change the Default Group Name for CarbonData Data Loading?",
"githuburl":""
},
{
"uri":"mrs_01_1461.html",
+ "node_id":"mrs_01_1461.xml",
"product_code":"mrs",
"code":"69",
"des":"Why does the INSERT INTO CARBON TABLE command fail and the following error message is displayed?The INSERT INTO CARBON TABLE command fails in the following scenarios:If t",
"doc_type":"cmpntguide",
"kw":"Why Does INSERT INTO CARBON TABLE Command Fail?,CarbonData FAQ,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does INSERT INTO CARBON TABLE Command Fail?",
"githuburl":""
},
{
"uri":"mrs_01_1462.html",
+ "node_id":"mrs_01_1462.xml",
"product_code":"mrs",
"code":"70",
"des":"Why is the data logged in bad records different from the original input data with escaped characters?An escape character is a backslash (\\) followed by one or more charac",
"doc_type":"cmpntguide",
"kw":"Why Is the Data Logged in Bad Records Different from the Original Input Data with Escape Characters?",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Is the Data Logged in Bad Records Different from the Original Input Data with Escape Characters?",
"githuburl":""
},
{
"uri":"mrs_01_1463.html",
+ "node_id":"mrs_01_1463.xml",
"product_code":"mrs",
"code":"71",
"des":"Why data load performance decreases due to bad records?If bad records are present in the data and BAD_RECORDS_LOGGER_ENABLE is true or BAD_RECORDS_ACTION is redirect then",
"doc_type":"cmpntguide",
"kw":"Why Data Load Performance Decreases due to Bad Records?,CarbonData FAQ,Component Operation Guide (No",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Data Load Performance Decreases due to Bad Records?",
"githuburl":""
},
{
"uri":"mrs_01_1464.html",
+ "node_id":"mrs_01_1464.xml",
"product_code":"mrs",
"code":"72",
"des":"Why INSERT INTO or LOAD DATA task distribution is incorrect, and the openedtasks are less than the available executors when the number of initial executors is zero?In ca",
"doc_type":"cmpntguide",
"kw":"Why INSERT INTO/LOAD DATA Task Distribution Is Incorrect and the Opened Tasks Are Less Than the Avai",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why INSERT INTO/LOAD DATA Task Distribution Is Incorrect and the Opened Tasks Are Less Than the Available Executors when the Number of Initial ExecutorsIs Zero?",
"githuburl":""
},
{
"uri":"mrs_01_1465.html",
+ "node_id":"mrs_01_1465.xml",
"product_code":"mrs",
"code":"73",
"des":"Why does CarbonData require additional executors even though the parallelism is greater than the number of blocks to be processed?CarbonData block distribution optimizes ",
"doc_type":"cmpntguide",
"kw":"Why Does CarbonData Require Additional Executors Even Though the Parallelism Is Greater Than the Num",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does CarbonData Require Additional Executors Even Though the Parallelism Is Greater Than the Number of Blocks to Be Processed?",
"githuburl":""
},
{
"uri":"mrs_01_1466.html",
+ "node_id":"mrs_01_1466.xml",
"product_code":"mrs",
"code":"74",
"des":"Why Data Loading fails during off heap?YARN Resource Manager will consider (Java heap memory + spark.yarn.am.memoryOverhead) as memory limit, so during the off heap, the ",
"doc_type":"cmpntguide",
"kw":"Why Data loading Fails During off heap?,CarbonData FAQ,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Data loading Fails During off heap?",
"githuburl":""
},
{
"uri":"mrs_01_1467.html",
+ "node_id":"mrs_01_1467.xml",
"product_code":"mrs",
"code":"75",
"des":"Why do I fail to create a hive table?Creating a Hive table fails, when source table or sub query has more number of partitions. The implementation of the query requires a",
"doc_type":"cmpntguide",
"kw":"Why Do I Fail to Create a Hive Table?,CarbonData FAQ,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Do I Fail to Create a Hive Table?",
"githuburl":""
},
{
"uri":"mrs_01_1468.html",
+ "node_id":"mrs_01_1468.xml",
"product_code":"mrs",
"code":"76",
"des":"Why CarbonData tables created in V100R002C50RC1 not reflecting the privileges provided in Hive Privileges for non-owner?The Hive ACL is implemented after the version V100",
"doc_type":"cmpntguide",
"kw":"Why CarbonData tables created in V100R002C50RC1 not reflecting the privileges provided in Hive Privi",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why CarbonData tables created in V100R002C50RC1 not reflecting the privileges provided in Hive Privileges for non-owner?",
"githuburl":""
},
{
"uri":"mrs_01_1469.html",
+ "node_id":"mrs_01_1469.xml",
"product_code":"mrs",
"code":"77",
"des":"How do I logically split data across different namespaces?Configuration:To logically split data across different namespaces, you must update the following configuration i",
"doc_type":"cmpntguide",
"kw":"How Do I Logically Split Data Across Different Namespaces?,CarbonData FAQ,Component Operation Guide ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Logically Split Data Across Different Namespaces?",
"githuburl":""
},
{
"uri":"mrs_01_1470.html",
+ "node_id":"mrs_01_1470.xml",
"product_code":"mrs",
"code":"78",
"des":"Why drop database cascade is throwing the following exception?This error is thrown when the owner of the database performs drop database cascade which con",
"doc_type":"cmpntguide",
"kw":"Why Missing Privileges Exception is Reported When I Perform Drop Operation on Databases?,CarbonData ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Missing Privileges Exception is Reported When I Perform Drop Operation on Databases?",
"githuburl":""
},
{
"uri":"mrs_01_1471.html",
+ "node_id":"mrs_01_1471.xml",
"product_code":"mrs",
"code":"79",
"des":"Why the UPDATE command cannot be executed in Spark Shell?The syntax and examples provided in this document are about Beeline commands instead of Spark Shell commands.To r",
"doc_type":"cmpntguide",
"kw":"Why the UPDATE Command Cannot Be Executed in Spark Shell?,CarbonData FAQ,Component Operation Guide (",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why the UPDATE Command Cannot Be Executed in Spark Shell?",
"githuburl":""
},
{
"uri":"mrs_01_1472.html",
+ "node_id":"mrs_01_1472.xml",
"product_code":"mrs",
"code":"80",
"des":"How do I configure unsafe memory in CarbonData?In the Spark configuration, the value of spark.yarn.executor.memoryOverhead must be greater than the sum of (sort.inmemory.",
"doc_type":"cmpntguide",
"kw":"How Do I Configure Unsafe Memory in CarbonData?,CarbonData FAQ,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Configure Unsafe Memory in CarbonData?",
"githuburl":""
},
{
"uri":"mrs_01_1473.html",
+ "node_id":"mrs_01_1473.xml",
"product_code":"mrs",
"code":"81",
"des":"Why exception occurs in CarbonData when Disk Space Quota is set for the storage directory in HDFS?The data will be written to HDFS when you during create table, load tabl",
"doc_type":"cmpntguide",
"kw":"Why Exception Occurs in CarbonData When Disk Space Quota is Set for Storage Directory in HDFS?,Carbo",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Exception Occurs in CarbonData When Disk Space Quota is Set for Storage Directory in HDFS?",
"githuburl":""
},
{
"uri":"mrs_01_1474.html",
+ "node_id":"mrs_01_1474.xml",
"product_code":"mrs",
"code":"82",
"des":"Why does data query or loading fail and \"org.apache.carbondata.core.memory.MemoryException: Not enough memory\" is displayed?This exception is thrown when the out-of-heap ",
"doc_type":"cmpntguide",
"kw":"Why Does Data Query or Loading Fail and \"org.apache.carbondata.core.memory.MemoryException: Not enou",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does Data Query or Loading Fail and \"org.apache.carbondata.core.memory.MemoryException: Not enough memory\" Is Displayed?",
"githuburl":""
},
{
"uri":"mrs_01_24537.html",
+ "node_id":"mrs_01_24537.xml",
"product_code":"",
"code":"83",
"des":"Why do files of a Carbon table exist in the recycle bin even if the drop table command is not executed when mis-deletion prevention is enabled?After the the mis-deletion ",
"doc_type":"",
"kw":"Why Do Files of a Carbon Table Exist in the Recycle Bin Even If the drop table Command Is Not Execut",
+ "search_title":"",
+ "metedata":[
+ {
+
+ }
+ ],
"title":"Why Do Files of a Carbon Table Exist in the Recycle Bin Even If the drop table Command Is Not Executed When Mis-deletion Prevention Is Enabled?",
"githuburl":""
},
{
"uri":"mrs_01_2344.html",
+ "node_id":"mrs_01_2344.xml",
"product_code":"mrs",
"code":"84",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using ClickHouse",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using ClickHouse",
"githuburl":""
},
{
"uri":"mrs_01_2345.html",
+ "node_id":"mrs_01_2345.xml",
"product_code":"mrs",
"code":"85",
"des":"ClickHouse is a column-based database oriented to online analysis and processing. It supports SQL query and provides good query performance. The aggregation analysis and ",
"doc_type":"cmpntguide",
"kw":"Using ClickHouse from Scratch,Using ClickHouse,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using ClickHouse from Scratch",
"githuburl":""
},
{
"uri":"mrs_01_24105.html",
+ "node_id":"mrs_01_24105.xml",
"product_code":"mrs",
"code":"86",
"des":"Table engines play a key role in ClickHouse to determine:Where to write and read dataSupported query modesWhether concurrent data access is supportedWhether indexes can b",
"doc_type":"cmpntguide",
"kw":"ClickHouse Table Engine Overview,Using ClickHouse,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"ClickHouse Table Engine Overview",
"githuburl":""
},
{
"uri":"mrs_01_2398.html",
+ "node_id":"mrs_01_2398.xml",
"product_code":"mrs",
"code":"87",
"des":"ClickHouse implements the replicated table mechanism based on the ReplicatedMergeTree engine and ZooKeeper. When creating a table, you can specify an engine to determine ",
"doc_type":"cmpntguide",
"kw":"Creating a ClickHouse Table,Using ClickHouse,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating a ClickHouse Table",
"githuburl":""
},
{
"uri":"mrs_01_24199.html",
+ "node_id":"mrs_01_24199.xml",
"product_code":"mrs",
"code":"88",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common ClickHouse SQL Syntax",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common ClickHouse SQL Syntax",
"githuburl":""
},
{
"uri":"mrs_01_24200.html",
+ "node_id":"mrs_01_24200.xml",
"product_code":"mrs",
"code":"89",
"des":"This section describes the basic syntax and usage of the SQL statement for creating a ClickHouse database.CREATE DATABASE [IF NOT EXISTS] Database_name [ON CLUSTERClickHo",
"doc_type":"cmpntguide",
"kw":"CREATE DATABASE: Creating a Database,Common ClickHouse SQL Syntax,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CREATE DATABASE: Creating a Database",
"githuburl":""
},
{
"uri":"mrs_01_24201.html",
+ "node_id":"mrs_01_24201.xml",
"product_code":"mrs",
"code":"90",
"des":"This section describes the basic syntax and usage of the SQL statement for creating a ClickHouse table.Method 1: Creating a table named table_name in the specified databa",
"doc_type":"cmpntguide",
"kw":"CREATE TABLE: Creating a Table,Common ClickHouse SQL Syntax,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"CREATE TABLE: Creating a Table",
"githuburl":""
},
{
"uri":"mrs_01_24202.html",
+ "node_id":"mrs_01_24202.xml",
"product_code":"mrs",
"code":"91",
"des":"This section describes the basic syntax and usage of the SQL statement for inserting data to a table in ClickHouse.Method 1: Inserting data in standard formatINSERT INTO ",
"doc_type":"cmpntguide",
"kw":"INSERT INTO: Inserting Data into a Table,Common ClickHouse SQL Syntax,Component Operation Guide (Nor",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"INSERT INTO: Inserting Data into a Table",
"githuburl":""
},
{
"uri":"mrs_01_24203.html",
+ "node_id":"mrs_01_24203.xml",
"product_code":"mrs",
"code":"92",
"des":"This section describes the basic syntax and usage of the SQL statement for querying table data in ClickHouse.SELECT [DISTINCT] expr_list[FROM[database_name.]table| (subqu",
"doc_type":"cmpntguide",
"kw":"SELECT: Querying Table Data,Common ClickHouse SQL Syntax,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"SELECT: Querying Table Data",
"githuburl":""
},
{
"uri":"mrs_01_24204.html",
+ "node_id":"mrs_01_24204.xml",
"product_code":"mrs",
"code":"93",
"des":"This section describes the basic syntax and usage of the SQL statement for modifying a table structure in ClickHouse.ALTER TABLE [database_name].name[ON CLUSTER cluster] ",
"doc_type":"cmpntguide",
"kw":"ALTER TABLE: Modifying a Table Structure,Common ClickHouse SQL Syntax,Component Operation Guide (Nor",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"ALTER TABLE: Modifying a Table Structure",
"githuburl":""
},
{
"uri":"mrs_01_24205.html",
+ "node_id":"mrs_01_24205.xml",
"product_code":"mrs",
"code":"94",
"des":"This section describes the basic syntax and usage of the SQL statement for querying a table structure in ClickHouse.DESC|DESCRIBETABLE[database_name.]table[INTOOUTFILE fi",
"doc_type":"cmpntguide",
"kw":"DESC: Querying a Table Structure,Common ClickHouse SQL Syntax,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"DESC: Querying a Table Structure",
"githuburl":""
},
{
"uri":"mrs_01_24208.html",
+ "node_id":"mrs_01_24208.xml",
"product_code":"mrs",
"code":"95",
"des":"This section describes the basic syntax and usage of the SQL statement for deleting a ClickHouse table.DROP[TEMPORARY] TABLE[IF EXISTS] [database_name.]name[ON CLUSTER cl",
"doc_type":"cmpntguide",
"kw":"DROP: Deleting a Table,Common ClickHouse SQL Syntax,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"DROP: Deleting a Table",
"githuburl":""
},
{
"uri":"mrs_01_24207.html",
+ "node_id":"mrs_01_24207.xml",
"product_code":"mrs",
"code":"96",
"des":"This section describes the basic syntax and usage of the SQL statement for displaying information about databases and tables in ClickHouse.show databasesshow tables",
"doc_type":"cmpntguide",
"kw":"SHOW: Displaying Information About Databases and Tables,Common ClickHouse SQL Syntax,Component Opera",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"SHOW: Displaying Information About Databases and Tables",
"githuburl":""
},
{
"uri":"mrs_01_24250.html",
+ "node_id":"mrs_01_24250.xml",
"product_code":"mrs",
"code":"97",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Migrating ClickHouse Data",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"Migrating ClickHouse Data",
"githuburl":""
},
{
"uri":"mrs_01_24206.html",
+ "node_id":"mrs_01_24206.xml",
"product_code":"mrs",
"code":"98",
"des":"This section describes the basic syntax and usage of the SQL statements for importing and exporting file data using ClickHouse.Importing data in CSV formatclickhouse clie",
"doc_type":"cmpntguide",
"kw":"Using ClickHouse to Import and Export Data,Migrating ClickHouse Data,Component Operation Guide (Norm",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using ClickHouse to Import and Export Data",
"githuburl":""
},
{
"uri":"mrs_01_24377.html",
+ "node_id":"mrs_01_24377.xml",
"product_code":"",
"code":"99",
"des":"This section describes how to create a Kafka table to automatically synchronize Kafka data to the ClickHouse cluster.You have created a Kafka cluster. The Kafka client ha",
"doc_type":"",
"kw":"Synchronizing Kafka Data to ClickHouse,Migrating ClickHouse Data,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+
+ }
+ ],
"title":"Synchronizing Kafka Data to ClickHouse",
"githuburl":""
},
{
"uri":"mrs_01_24198.html",
+ "node_id":"mrs_01_24198.xml",
"product_code":"mrs",
"code":"100",
"des":"The ClickHouse data migration tool can migrate some partitions of one or more partitioned MergeTree tables on several ClickHouseServer nodes to the same tables on other C",
"doc_type":"cmpntguide",
"kw":"Using the ClickHouse Data Migration Tool,Migrating ClickHouse Data,Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the ClickHouse Data Migration Tool",
"githuburl":""
},
{
"uri":"mrs_01_24251.html",
+ "node_id":"mrs_01_24251.xml",
"product_code":"mrs",
"code":"101",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"User Management and Authentication",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"User Management and Authentication",
"githuburl":""
},
{
"uri":"mrs_01_24057.html",
+ "node_id":"mrs_01_24057.xml",
"product_code":"mrs",
"code":"102",
"des":"ClickHouse user permission management enables unified management of users, roles, and permissions on each ClickHouse instance in the cluster. You can use the permission m",
"doc_type":"cmpntguide",
"kw":"ClickHouse User and Permission Management,User Management and Authentication,Component Operation Gui",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"ClickHouse User and Permission Management",
"githuburl":""
},
{
"uri":"mrs_01_24109.html",
+ "node_id":"mrs_01_24109.xml",
"product_code":"mrs",
"code":"103",
"des":"ClickHouse can be interconnected with OpenLDAP. You can manage accounts and permissions in a centralized manner by adding the OpenLDAP server configuration and creating u",
"doc_type":"cmpntguide",
"kw":"Interconnecting ClickHouse With OpenLDAP for Authentication,User Management and Authentication,Compo",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Interconnecting ClickHouse With OpenLDAP for Authentication",
"githuburl":""
},
{
"uri":"mrs_01_24292.html",
+ "node_id":"mrs_01_24292.xml",
"product_code":"",
"code":"104",
"des":"This section describes how to back up data by exporting ClickHouse data to a CSV file and restore data using the CSV file.You have installed the ClickHouse client.You hav",
"doc_type":"",
"kw":"Backing Up and Restoring ClickHouse Data Using a Data File,Using ClickHouse,Component Operation Guid",
+ "search_title":"",
+ "metedata":[
+ {
+
+ }
+ ],
"title":"Backing Up and Restoring ClickHouse Data Using a Data File",
"githuburl":""
},
{
"uri":"mrs_01_2399.html",
+ "node_id":"mrs_01_2399.xml",
"product_code":"mrs",
"code":"105",
"des":"Log path: The default storage path of ClickHouse log files is as follows: ${BIGDATA_LOG_HOME}/clickhouseLog archive rule: The automatic ClickHouse log compression functio",
"doc_type":"cmpntguide",
"kw":"ClickHouse Log Overview,Using ClickHouse,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"ClickHouse Log Overview",
"githuburl":""
},
{
"uri":"mrs_01_2356.html",
+ "node_id":"mrs_01_2356.xml",
"product_code":"mrs",
"code":"106",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using DBService",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using DBService",
"githuburl":""
},
{
"uri":"mrs_01_0789.html",
+ "node_id":"mrs_01_0789.xml",
"product_code":"mrs",
"code":"107",
"des":"Log path: The default storage path of DBService log files is /var/log/Bigdata/dbservice.GaussDB: /var/log/Bigdata/dbservice/DB (GaussDB run log directory), /var/log/Bigda",
"doc_type":"cmpntguide",
"kw":"DBService Log Overview,Using DBService,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"DBService Log Overview",
"githuburl":""
},
{
"uri":"mrs_01_0591.html",
+ "node_id":"mrs_01_0591.xml",
"product_code":"mrs",
"code":"108",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Flink",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Flink",
"githuburl":""
},
{
"uri":"mrs_01_0473.html",
+ "node_id":"mrs_01_0473.xml",
"product_code":"mrs",
"code":"109",
"des":"This section describes how to use Flink to run wordcount jobs.Flink has been installed in an MRS cluster.The cluster runs properly and the client has been correctly insta",
"doc_type":"cmpntguide",
"kw":"Using Flink from Scratch,Using Flink,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Flink from Scratch",
"githuburl":""
},
{
"uri":"mrs_01_0784.html",
+ "node_id":"mrs_01_0784.xml",
"product_code":"mrs",
"code":"110",
"des":"You can view Flink job information on the Yarn web UI.The Flink service has been installed in a cluster.For versions earlier than MRS 1.9.2, log in to MRS Manager and cho",
"doc_type":"cmpntguide",
"kw":"Viewing Flink Job Information,Using Flink,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Viewing Flink Job Information",
"githuburl":""
},
{
"uri":"mrs_01_0592.html",
+ "node_id":"mrs_01_0592.xml",
"product_code":"mrs",
"code":"111",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Flink Configuration Management",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Flink Configuration Management",
"githuburl":""
},
{
"uri":"mrs_01_1565.html",
+ "node_id":"mrs_01_1565.xml",
"product_code":"mrs",
"code":"112",
"des":"All parameters of Flink must be set on a client. The path of a configuration file is as follows: Client installation path/Flink/flink/conf/flink-conf.yaml.You are advised",
"doc_type":"cmpntguide",
"kw":"Configuring Parameter Paths,Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Parameter Paths",
"githuburl":""
},
{
"uri":"mrs_01_1566.html",
+ "node_id":"mrs_01_1566.xml",
"product_code":"mrs",
"code":"113",
"des":"JobManager and TaskManager are main components of Flink. You can configure the parameters for different security and performance scenarios on the client.Main configuratio",
"doc_type":"cmpntguide",
"kw":"JobManager & TaskManager,Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"JobManager & TaskManager",
"githuburl":""
},
{
"uri":"mrs_01_1567.html",
+ "node_id":"mrs_01_1567.xml",
"product_code":"mrs",
"code":"114",
"des":"The Blob server on the JobManager node is used to receive JAR files uploaded by users on the client, send JAR files to TaskManager, and transfer log files. Flink provides",
"doc_type":"cmpntguide",
"kw":"Blob,Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Blob",
"githuburl":""
},
{
"uri":"mrs_01_1568.html",
+ "node_id":"mrs_01_1568.xml",
"product_code":"mrs",
"code":"115",
"des":"The Akka actor model is the basis of communications between the Flink client and JobManager, JobManager and TaskManager, as well as TaskManager and TaskManager. Flink ena",
"doc_type":"cmpntguide",
"kw":"Distributed Coordination (via Akka),Flink Configuration Management,Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Distributed Coordination (via Akka)",
"githuburl":""
},
{
"uri":"mrs_01_1569.html",
+ "node_id":"mrs_01_1569.xml",
"product_code":"mrs",
"code":"116",
"des":"When the secure Flink cluster is required, SSL-related configuration items must be set.Configuration items include the SSL switch, certificate, password, and encryption a",
"doc_type":"cmpntguide",
"kw":"SSL,Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"SSL",
"githuburl":""
},
{
"uri":"mrs_01_1570.html",
+ "node_id":"mrs_01_1570.xml",
"product_code":"mrs",
"code":"117",
"des":"When Flink runs a job, data transmission and reverse pressure detection between tasks depend on Netty. In certain environments, Netty parameters should be configured.For ",
"doc_type":"cmpntguide",
"kw":"Network communication (via Netty),Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Network communication (via Netty)",
"githuburl":""
},
{
"uri":"mrs_01_1571.html",
+ "node_id":"mrs_01_1571.xml",
"product_code":"mrs",
"code":"118",
"des":"When JobManager is started, the web server in the same process is also started.You can access the web server to obtain information about the current Flink cluster, includ",
"doc_type":"cmpntguide",
"kw":"JobManager Web Frontend,Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"JobManager Web Frontend",
"githuburl":""
},
{
"uri":"mrs_01_1572.html",
+ "node_id":"mrs_01_1572.xml",
"product_code":"mrs",
"code":"119",
"des":"Result files are created when tasks are running. Flink enables you to configure parameters for file creation.Configuration items include overwriting policy and directory ",
"doc_type":"cmpntguide",
"kw":"File Systems,Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"File Systems",
"githuburl":""
},
{
"uri":"mrs_01_1573.html",
+ "node_id":"mrs_01_1573.xml",
"product_code":"mrs",
"code":"120",
"des":"Flink enables HA and job exception, as well as job pause and recovery during version upgrade. Flink depends on state backend to store job states and on the restart strate",
"doc_type":"cmpntguide",
"kw":"State Backend,Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"State Backend",
"githuburl":""
},
{
"uri":"mrs_01_1574.html",
+ "node_id":"mrs_01_1574.xml",
"product_code":"mrs",
"code":"121",
"des":"Flink Kerberos configuration items must be configured in security mode.The configuration items include keytab and principal of Kerberos.",
"doc_type":"cmpntguide",
"kw":"Kerberos-based Security,Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Kerberos-based Security",
"githuburl":""
},
{
"uri":"mrs_01_1575.html",
+ "node_id":"mrs_01_1575.xml",
"product_code":"mrs",
"code":"122",
"des":"The Flink HA mode depends on ZooKeeper. Therefore, ZooKeeper-related configuration items must be set.Configuration items include the ZooKeeper address, path, and security",
"doc_type":"cmpntguide",
"kw":"HA,Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"HA",
"githuburl":""
},
{
"uri":"mrs_01_1576.html",
+ "node_id":"mrs_01_1576.xml",
"product_code":"mrs",
"code":"123",
"des":"In scenarios raising special requirements on JVM configuration, users can use configuration items to transfer JVM parameters to the client, JobManager, and TaskManager.Co",
"doc_type":"cmpntguide",
"kw":"Environment,Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Environment",
"githuburl":""
},
{
"uri":"mrs_01_1577.html",
+ "node_id":"mrs_01_1577.xml",
"product_code":"mrs",
"code":"124",
"des":"Flink runs on a Yarn cluster and JobManager runs on ApplicationMaster. Certain configuration parameters of JobManager depend on Yarn. By setting Yarn-related configuratio",
"doc_type":"cmpntguide",
"kw":"Yarn,Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Yarn",
"githuburl":""
},
{
"uri":"mrs_01_1578.html",
+ "node_id":"mrs_01_1578.xml",
"product_code":"mrs",
"code":"125",
"des":"The Netty connection is used among multiple jobs to reduce latency. In this case, NettySink is used on the server and NettySource is used on the client for data transmiss",
"doc_type":"cmpntguide",
"kw":"Pipeline,Flink Configuration Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Pipeline",
"githuburl":""
},
{
"uri":"mrs_01_0593.html",
+ "node_id":"mrs_01_0593.xml",
"product_code":"mrs",
"code":"126",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Security Configuration",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Security Configuration",
"githuburl":""
},
{
"uri":"mrs_01_1579.html",
+ "node_id":"mrs_01_1579.xml",
"product_code":"mrs",
"code":"127",
"des":"All Flink cluster components support authentication.The Kerberos authentication is supported between Flink cluster components and external components, such as Yarn, HDFS,",
"doc_type":"cmpntguide",
"kw":"Security Features,Security Configuration,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Security Features",
"githuburl":""
},
{
"uri":"mrs_01_1580.html",
+ "node_id":"mrs_01_1580.xml",
"product_code":"mrs",
"code":"128",
"des":"Sample project data of Flink is stored in Kafka. A user with Kafka permission can send data to Kafka and receive data from it.Run Linux command line to create a topic. Be",
"doc_type":"cmpntguide",
"kw":"Configuring Kafka,Security Configuration,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Kafka",
"githuburl":""
},
{
"uri":"mrs_01_1581.html",
+ "node_id":"mrs_01_1581.xml",
"product_code":"mrs",
"code":"129",
"des":"This section applies to MRS 3.x or later clusters.Configure files.nettyconnector.registerserver.topic.storage: (Mandatory) Configures the path (on a third-party server) t",
"doc_type":"cmpntguide",
"kw":"Configuring Pipeline,Security Configuration,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Pipeline",
"githuburl":""
},
{
"uri":"mrs_01_0594.html",
+ "node_id":"mrs_01_0594.xml",
"product_code":"mrs",
"code":"130",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Security Hardening",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Security Hardening",
"githuburl":""
},
{
"uri":"mrs_01_1583.html",
+ "node_id":"mrs_01_1583.xml",
"product_code":"mrs",
"code":"131",
"des":"Flink uses the following three authentication modes:Kerberos authentication: It is used between the Flink Yarn client and Yarn ResourceManager, JobManager and ZooKeeper, ",
"doc_type":"cmpntguide",
"kw":"Authentication and Encryption,Security Hardening,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Authentication and Encryption",
"githuburl":""
},
{
"uri":"mrs_01_1584.html",
+ "node_id":"mrs_01_1584.xml",
"product_code":"mrs",
"code":"132",
"des":"In HA mode of Flink, ZooKeeper can be used to manage clusters and discover services. Zookeeper supports SASL ACL control. Only users who have passed the SASL (Kerberos) a",
"doc_type":"cmpntguide",
"kw":"ACL Control,Security Hardening,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"ACL Control",
"githuburl":""
},
{
"uri":"mrs_01_1585.html",
+ "node_id":"mrs_01_1585.xml",
"product_code":"mrs",
"code":"133",
"des":"Note: The same coding mode is used on the web service client and server to prevent garbled characters and to enable input verification.Security hardening: apply UTF-8 to ",
"doc_type":"cmpntguide",
"kw":"Web Security,Security Hardening,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Web Security",
"githuburl":""
},
{
"uri":"mrs_01_1586.html",
+ "node_id":"mrs_01_1586.xml",
"product_code":"mrs",
"code":"134",
"des":"All security functions of Flink are provided by the open source community or self-developed. Security features that need to be configured by users, such as authentication",
"doc_type":"cmpntguide",
"kw":"Security Statement,Using Flink,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Security Statement",
"githuburl":""
},
{
"uri":"mrs_01_24014.html",
+ "node_id":"mrs_01_24014.xml",
"product_code":"mrs",
"code":"135",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using the Flink Web UI",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the Flink Web UI",
"githuburl":""
},
{
"uri":"mrs_01_24015.html",
+ "node_id":"mrs_01_24015.xml",
"product_code":"mrs",
"code":"136",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Overview",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Overview",
"githuburl":""
},
{
"uri":"mrs_01_24016.html",
+ "node_id":"mrs_01_24016.xml",
"product_code":"mrs",
"code":"137",
"des":"Flink web UI provides a web-based visual development platform. You only need to compile SQL statements to develop jobs, slashing the job development threshold. In additio",
"doc_type":"cmpntguide",
"kw":"Introduction to Flink Web UI,Overview,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Introduction to Flink Web UI",
"githuburl":""
},
{
"uri":"mrs_01_24017.html",
+ "node_id":"mrs_01_24017.xml",
"product_code":"mrs",
"code":"138",
"des":"The Flink web UI application process is shown as follows:",
"doc_type":"cmpntguide",
"kw":"Flink Web UI Application Process,Overview,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Flink Web UI Application Process",
"githuburl":""
},
{
"uri":"mrs_01_24047.html",
+ "node_id":"mrs_01_24047.xml",
"product_code":"mrs",
"code":"139",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"FlinkServer Permissions Management",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"FlinkServer Permissions Management",
"githuburl":""
},
{
"uri":"mrs_01_24048.html",
+ "node_id":"mrs_01_24048.xml",
"product_code":"mrs",
"code":"140",
"des":"User admin of Manager does not have the FlinkServer service operation permission. To perform FlinkServer service operations, you need to grant related permission to the u",
"doc_type":"cmpntguide",
"kw":"Overview,FlinkServer Permissions Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Overview",
"githuburl":""
},
{
"uri":"mrs_01_24049.html",
+ "node_id":"mrs_01_24049.xml",
"product_code":"mrs",
"code":"141",
"des":"This section describes how to create and configure a FlinkServer role on Manager as the system administrator. A FlinkServer role can be configured with FlinkServer admini",
"doc_type":"cmpntguide",
"kw":"Authentication Based on Users and Roles,FlinkServer Permissions Management,Component Operation Guide",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Authentication Based on Users and Roles",
"githuburl":""
},
{
"uri":"mrs_01_24019.html",
+ "node_id":"mrs_01_24019.xml",
"product_code":"mrs",
"code":"142",
"des":"After Flink is installed in an MRS cluster, you can connect to clusters and data as well as manage stream tables and jobs using the Flink web UI.This section describes ho",
"doc_type":"cmpntguide",
"kw":"Accessing the Flink Web UI,Using the Flink Web UI,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Accessing the Flink Web UI",
"githuburl":""
},
{
"uri":"mrs_01_24020.html",
+ "node_id":"mrs_01_24020.xml",
"product_code":"mrs",
"code":"143",
"des":"Applications can be used to isolate different upper-layer services.After the application is created, you can switch to the application to be operated in the upper left co",
"doc_type":"cmpntguide",
"kw":"Creating an Application on the Flink Web UI,Using the Flink Web UI,Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating an Application on the Flink Web UI",
"githuburl":""
},
{
"uri":"mrs_01_24021.html",
+ "node_id":"mrs_01_24021.xml",
"product_code":"mrs",
"code":"144",
"des":"Different clusters can be accessed by configuring the cluster connection.To obtain the cluster client configuration files, perform the following steps:Log in to FusionIns",
"doc_type":"cmpntguide",
"kw":"Creating a Cluster Connection on the Flink Web UI,Using the Flink Web UI,Component Operation Guide (",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating a Cluster Connection on the Flink Web UI",
"githuburl":""
},
{
"uri":"mrs_01_24022.html",
+ "node_id":"mrs_01_24022.xml",
"product_code":"mrs",
"code":"145",
"des":"You can use data connections to access different data services. Currently, FlinkServer supports HDFS and Kafka data connections.",
"doc_type":"cmpntguide",
"kw":"Creating a Data Connection on the Flink Web UI,Using the Flink Web UI,Component Operation Guide (Nor",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating a Data Connection on the Flink Web UI",
"githuburl":""
},
{
"uri":"mrs_01_24023.html",
+ "node_id":"mrs_01_24023.xml",
"product_code":"mrs",
"code":"146",
"des":"Data tables can be used to define basic attributes and parameters of source tables, dimension tables, and output tables.",
"doc_type":"cmpntguide",
"kw":"Managing Tables on the Flink Web UI,Using the Flink Web UI,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Managing Tables on the Flink Web UI",
"githuburl":""
},
{
"uri":"mrs_01_24024.html",
+ "node_id":"mrs_01_24024.xml",
"product_code":"mrs",
"code":"147",
"des":"Define Flink jobs, including Flink SQL and Flink JAR jobs.Creating a Flink SQL jobDevelop the job on the job development page.Click Check Semantic to check the input cont",
"doc_type":"cmpntguide",
"kw":"Managing Jobs on the Flink Web UI,Using the Flink Web UI,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Managing Jobs on the Flink Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0596.html",
+ "node_id":"mrs_01_0596.xml",
"product_code":"mrs",
"code":"148",
"des":"Log path:Run logs of a Flink job: ${BIGDATA_DATA_HOME}/hadoop/data${i}/nm/containerlogs/application_${appid}/container_{$contid}The logs of executing tasks are stored in ",
"doc_type":"cmpntguide",
"kw":"Flink Log Overview,Using Flink,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Flink Log Overview",
"githuburl":""
},
{
"uri":"mrs_01_0597.html",
+ "node_id":"mrs_01_0597.xml",
"product_code":"mrs",
"code":"149",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Flink Performance Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Flink Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_1587.html",
+ "node_id":"mrs_01_1587.xml",
"product_code":"mrs",
"code":"150",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Optimization DataStream",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimization DataStream",
"githuburl":""
},
{
"uri":"mrs_01_1588.html",
+ "node_id":"mrs_01_1588.xml",
"product_code":"mrs",
"code":"151",
"des":"The computing of Flink depends on memory. If the memory is insufficient, the performance of Flink will be greatly deteriorated. One solution is to monitor garbage collect",
"doc_type":"cmpntguide",
"kw":"Memory Configuration Optimization,Optimization DataStream,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Memory Configuration Optimization",
"githuburl":""
},
{
"uri":"mrs_01_1589.html",
+ "node_id":"mrs_01_1589.xml",
"product_code":"mrs",
"code":"152",
"des":"The degree of parallelism (DOP) indicates the number of tasks to be executed concurrently. It determines the number of data blocks after the operation. Configuring the DO",
"doc_type":"cmpntguide",
"kw":"Configuring DOP,Optimization DataStream,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring DOP",
"githuburl":""
},
{
"uri":"mrs_01_1590.html",
+ "node_id":"mrs_01_1590.xml",
"product_code":"mrs",
"code":"153",
"des":"In Flink on Yarn mode, there are JobManagers and TaskManagers. JobManagers and TaskManagers schedule and run tasks.Therefore, configuring parameters of JobManagers and Ta",
"doc_type":"cmpntguide",
"kw":"Configuring Process Parameters,Optimization DataStream,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Process Parameters",
"githuburl":""
},
{
"uri":"mrs_01_1591.html",
+ "node_id":"mrs_01_1591.xml",
"product_code":"mrs",
"code":"154",
"des":"The divide of tasks can be optimized by optimizing the partitioning method. If data skew occurs in a certain task, the whole execution process is delayed. Therefore, when",
"doc_type":"cmpntguide",
"kw":"Optimizing the Design of Partitioning Method,Optimization DataStream,Component Operation Guide (Norm",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing the Design of Partitioning Method",
"githuburl":""
},
{
"uri":"mrs_01_1592.html",
+ "node_id":"mrs_01_1592.xml",
"product_code":"mrs",
"code":"155",
"des":"The communication of Flink is based on Netty network. The network performance determines the data switching speed and task execution efficiency. Therefore, the performanc",
"doc_type":"cmpntguide",
"kw":"Configuring the Netty Network Communication,Optimization DataStream,Component Operation Guide (Norma",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Netty Network Communication",
"githuburl":""
},
{
"uri":"mrs_01_1593.html",
+ "node_id":"mrs_01_1593.xml",
"product_code":"mrs",
"code":"156",
"des":"If data skew occurs (certain data volume is extremely large), the execution time of tasks is inconsistent even though no GC is performed.Redefine keys. Use keys of smalle",
"doc_type":"cmpntguide",
"kw":"Experience Summary,Optimization DataStream,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Experience Summary",
"githuburl":""
},
{
"uri":"mrs_01_0598.html",
+ "node_id":"mrs_01_0598.xml",
"product_code":"mrs",
"code":"157",
"des":"This section applies to MRS 3.x or later clusters.Before running the Flink shell commands, perform the following steps:source /opt/client/bigdata_envkinit Service user",
"doc_type":"cmpntguide",
"kw":"Common Flink Shell Commands,Using Flink,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Flink Shell Commands",
"githuburl":""
},
{
"uri":"mrs_01_0390.html",
+ "node_id":"mrs_01_0390.xml",
"product_code":"mrs",
"code":"158",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Flume",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Flume",
"githuburl":""
},
{
"uri":"mrs_01_0397.html",
+ "node_id":"mrs_01_0397.xml",
"product_code":"mrs",
"code":"159",
"des":"You can use Flume to import collected log information to Kafka.A streaming cluster that contains components such as Flume and Kafka and has Kerberos authentication enable",
"doc_type":"cmpntguide",
"kw":"Using Flume from Scratch,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Flume from Scratch",
"githuburl":""
},
{
"uri":"mrs_01_0391.html",
+ "node_id":"mrs_01_0391.xml",
"product_code":"mrs",
"code":"160",
"des":"Flume is a distributed, reliable, and highly available system for aggregating massive logs, which can efficiently collect, aggregate, and move massive log data from diffe",
"doc_type":"cmpntguide",
"kw":"Overview,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Overview",
"githuburl":""
},
{
"uri":"mrs_01_0392.html",
+ "node_id":"mrs_01_0392.xml",
"product_code":"mrs",
"code":"161",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Installing the Flume Client",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Installing the Flume Client",
"githuburl":""
},
{
"uri":"mrs_01_1594.html",
+ "node_id":"mrs_01_1594.xml",
"product_code":"mrs",
"code":"162",
"des":"To use Flume to collect logs, you must install the Flume client on a log host. You can create an ECS and install the Flume client on it.This section applies to MRS 3.x or",
"doc_type":"cmpntguide",
"kw":"Installing the Flume Client on Clusters of Versions Earlier Than MRS 3.x,Installing the Flume Client",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Installing the Flume Client on Clusters of Versions Earlier Than MRS 3.x",
"githuburl":""
},
{
"uri":"mrs_01_1595.html",
+ "node_id":"mrs_01_1595.xml",
"product_code":"mrs",
"code":"163",
"des":"To use Flume to collect logs, you must install the Flume client on a log host. You can create an ECS and install the Flume client on it.This section applies to MRS 3.x or",
"doc_type":"cmpntguide",
"kw":"Installing the Flume Client on MRS 3.x or Later Clusters,Installing the Flume Client,Component Opera",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Installing the Flume Client on MRS 3.x or Later Clusters",
"githuburl":""
},
{
"uri":"mrs_01_0393.html",
+ "node_id":"mrs_01_0393.xml",
"product_code":"mrs",
"code":"164",
"des":"You can view logs to locate faults.The Flume client has been installed.ls -lR flume-client-*A log file is shown as follows:In the log file, FlumeClient.log is the run log",
"doc_type":"cmpntguide",
"kw":"Viewing Flume Client Logs,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Viewing Flume Client Logs",
"githuburl":""
},
{
"uri":"mrs_01_0394.html",
+ "node_id":"mrs_01_0394.xml",
"product_code":"mrs",
"code":"165",
"des":"You can stop and start the Flume client or uninstall the Flume client when the Flume data ingestion channel is not required.Stop the Flume client of the Flume role.Assume",
"doc_type":"cmpntguide",
"kw":"Stopping or Uninstalling the Flume Client,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Stopping or Uninstalling the Flume Client",
"githuburl":""
},
{
"uri":"mrs_01_0395.html",
+ "node_id":"mrs_01_0395.xml",
"product_code":"mrs",
"code":"166",
"des":"You can use the encryption tool provided by the Flume client to encrypt some parameter values in the configuration file.The Flume client has been installed.cd fusioninsig",
"doc_type":"cmpntguide",
"kw":"Using the Encryption Tool of the Flume Client,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the Encryption Tool of the Flume Client",
"githuburl":""
},
{
"uri":"mrs_01_1057.html",
+ "node_id":"mrs_01_1057.xml",
"product_code":"mrs",
"code":"167",
"des":"This section applies to MRS 3.x or later clusters.This configuration guide describes how to configure common Flume services. For non-common Source, Channel, and Sink conf",
"doc_type":"cmpntguide",
"kw":"Flume Service Configuration Guide,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Flume Service Configuration Guide",
"githuburl":""
},
{
"uri":"mrs_01_0396.html",
+ "node_id":"mrs_01_0396.xml",
"product_code":"mrs",
"code":"168",
"des":"For versions earlier than MRS 3.x, configure Flume parameters in the properties.properties file.For MRS 3.x or later, some parameters can be configured on Manager.This se",
"doc_type":"cmpntguide",
"kw":"Flume Configuration Parameter Description,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Flume Configuration Parameter Description",
"githuburl":""
},
{
"uri":"mrs_01_1058.html",
+ "node_id":"mrs_01_1058.xml",
"product_code":"mrs",
"code":"169",
"des":"This section describes how to use environment variables in the properties.properties configuration file.This section applies to MRS 3.x or later clusters.The Flume servic",
"doc_type":"cmpntguide",
"kw":"Using Environment Variables in the properties.properties File,Using Flume,Component Operation Guide ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Environment Variables in the properties.properties File",
"githuburl":""
},
{
"uri":"mrs_01_1059.html",
+ "node_id":"mrs_01_1059.xml",
"product_code":"mrs",
"code":"170",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Non-Encrypted Transmission",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Non-Encrypted Transmission",
"githuburl":""
},
{
"uri":"mrs_01_1060.html",
+ "node_id":"mrs_01_1060.xml",
"product_code":"mrs",
"code":"171",
"des":"This section describes how to configure Flume server and client parameters after the cluster and the Flume service are installed to ensure proper running of the service.T",
"doc_type":"cmpntguide",
"kw":"Configuring Non-encrypted Transmission,Non-Encrypted Transmission,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Non-encrypted Transmission",
"githuburl":""
},
{
"uri":"mrs_01_1061.html",
+ "node_id":"mrs_01_1061.xml",
"product_code":"mrs",
"code":"172",
"des":"This section describes how to use the Flume client to collect static logs from a local host and save them to the topic list (test1) of Kafka.This section applies to MRS 3",
"doc_type":"cmpntguide",
"kw":"Typical Scenario: Collecting Local Static Logs and Uploading Them to Kafka,Non-Encrypted Transmissio",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Typical Scenario: Collecting Local Static Logs and Uploading Them to Kafka",
"githuburl":""
},
{
"uri":"mrs_01_1063.html",
+ "node_id":"mrs_01_1063.xml",
"product_code":"mrs",
"code":"173",
"des":"This section describes how to use the Flume client to collect static logs from a local host and save them to the /flume/test directory on HDFS.This section applies to MRS",
"doc_type":"cmpntguide",
"kw":"Typical Scenario: Collecting Local Static Logs and Uploading Them to HDFS,Non-Encrypted Transmission",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Typical Scenario: Collecting Local Static Logs and Uploading Them to HDFS",
"githuburl":""
},
{
"uri":"mrs_01_1064.html",
+ "node_id":"mrs_01_1064.xml",
"product_code":"mrs",
"code":"174",
"des":"This section describes how to use the Flume client to collect dynamic logs from a local host and save them to the /flume/test directory on HDFS.This section applies to MR",
"doc_type":"cmpntguide",
"kw":"Typical Scenario: Collecting Local Dynamic Logs and Uploading Them to HDFS,Non-Encrypted Transmissio",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Typical Scenario: Collecting Local Dynamic Logs and Uploading Them to HDFS",
"githuburl":""
},
{
"uri":"mrs_01_1065.html",
+ "node_id":"mrs_01_1065.xml",
"product_code":"mrs",
"code":"175",
"des":"This section describes how to use the Flume client to collect logs from the topic list (test1) of Kafka and save them to the /flume/test directory on HDFS.This section ap",
"doc_type":"cmpntguide",
"kw":"Typical Scenario: Collecting Logs from Kafka and Uploading Them to HDFS,Non-Encrypted Transmission,C",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Typical Scenario: Collecting Logs from Kafka and Uploading Them to HDFS",
"githuburl":""
},
{
"uri":"mrs_01_1066.html",
+ "node_id":"mrs_01_1066.xml",
"product_code":"mrs",
"code":"176",
"des":"This section describes how to use the Flume client to collect logs from the topic list (test1) of the Kafka client and save them to the /flume/test directory on HDFS.This",
"doc_type":"cmpntguide",
"kw":"Typical Scenario: Collecting Logs from Kafka and Uploading Them to HDFS Through the Flume Client,Non",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Typical Scenario: Collecting Logs from Kafka and Uploading Them to HDFS Through the Flume Client",
"githuburl":""
},
{
"uri":"mrs_01_1067.html",
+ "node_id":"mrs_01_1067.xml",
"product_code":"mrs",
"code":"177",
"des":"This section describes how to use the Flume client to collect static logs from a local host and save them to the flume_test HBase table. In this scenario, multi-level age",
"doc_type":"cmpntguide",
"kw":"Typical Scenario: Collecting Local Static Logs and Uploading Them to HBase,Non-Encrypted Transmissio",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Typical Scenario: Collecting Local Static Logs and Uploading Them to HBase",
"githuburl":""
},
{
"uri":"mrs_01_1068.html",
+ "node_id":"mrs_01_1068.xml",
"product_code":"mrs",
"code":"178",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Encrypted Transmission",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Encrypted Transmission",
"githuburl":""
},
{
"uri":"mrs_01_1069.html",
+ "node_id":"mrs_01_1069.xml",
"product_code":"mrs",
"code":"179",
"des":"This section describes how to configure the server and client parameters of the Flume service (including the Flume and MonitorServer roles) after the cluster is installed",
"doc_type":"cmpntguide",
"kw":"Configuring the Encrypted Transmission,Encrypted Transmission,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Encrypted Transmission",
"githuburl":""
},
{
"uri":"mrs_01_1070.html",
+ "node_id":"mrs_01_1070.xml",
"product_code":"mrs",
"code":"180",
"des":"This section describes how to use Flume to collect static logs from a local host and save them to the /flume/test directory on HDFS.This section applies to MRS 3.x or lat",
"doc_type":"cmpntguide",
"kw":"Typical Scenario: Collecting Local Static Logs and Uploading Them to HDFS,Encrypted Transmission,Com",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Typical Scenario: Collecting Local Static Logs and Uploading Them to HDFS",
"githuburl":""
},
{
"uri":"mrs_01_1596.html",
+ "node_id":"mrs_01_1596.xml",
"product_code":"mrs",
"code":"181",
"des":"The Flume client outside the FusionInsight cluster is a part of the end-to-end data collection. Both the Flume client outside the cluster and the Flume server in the clus",
"doc_type":"cmpntguide",
"kw":"Viewing Flume Client Monitoring Information,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Viewing Flume Client Monitoring Information",
"githuburl":""
},
{
"uri":"mrs_01_1071.html",
+ "node_id":"mrs_01_1071.xml",
"product_code":"mrs",
"code":"182",
"des":"This section describes how to connect to Kafka using the Flume client in security mode.This section applies to MRS 3.x or later.Set keyTab and principal based on site req",
"doc_type":"cmpntguide",
"kw":"Connecting Flume to Kafka in Security Mode,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Connecting Flume to Kafka in Security Mode",
"githuburl":""
},
{
"uri":"mrs_01_1072.html",
+ "node_id":"mrs_01_1072.xml",
"product_code":"mrs",
"code":"183",
"des":"This section describes how to use Flume to connect to Hive (version 3.1.0) in the cluster.This section applies to MRS 3.x or later.Flume and Hive have been correctly inst",
"doc_type":"cmpntguide",
"kw":"Connecting Flume with Hive in Security Mode,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Connecting Flume with Hive in Security Mode",
"githuburl":""
},
{
"uri":"mrs_01_1073.html",
+ "node_id":"mrs_01_1073.xml",
"product_code":"mrs",
"code":"184",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Configuring the Flume Service Model",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Flume Service Model",
"githuburl":""
},
{
"uri":"mrs_01_1074.html",
+ "node_id":"mrs_01_1074.xml",
"product_code":"mrs",
"code":"185",
"des":"This section applies to MRS 3.x or later.Guide a reasonable Flume service configuration by providing performance differences between Flume common modules, to avoid a nons",
"doc_type":"cmpntguide",
"kw":"Overview,Configuring the Flume Service Model,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Overview",
"githuburl":""
},
{
"uri":"mrs_01_1075.html",
+ "node_id":"mrs_01_1075.xml",
"product_code":"mrs",
"code":"186",
"des":"This section applies to MRS 3.x or later.During Flume service configuration and module selection, the ultimate throughput of a sink must be greater than the maximum throu",
"doc_type":"cmpntguide",
"kw":"Service Model Configuration Guide,Configuring the Flume Service Model,Component Operation Guide (Nor",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Service Model Configuration Guide",
"githuburl":""
},
{
"uri":"mrs_01_1081.html",
+ "node_id":"mrs_01_1081.xml",
"product_code":"mrs",
"code":"187",
"des":"Log path: The default path of Flume log files is /var/log/Bigdata/Role name.FlumeServer: /var/log/Bigdata/flume/flumeFlumeClient: /var/log/Bigdata/flume-client-n/flumeMon",
"doc_type":"cmpntguide",
"kw":"Introduction to Flume Logs,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Introduction to Flume Logs",
"githuburl":""
},
{
"uri":"mrs_01_1082.html",
+ "node_id":"mrs_01_1082.xml",
"product_code":"mrs",
"code":"188",
"des":"This section describes how to join and log out of a cgroup, query the cgroup status, and change the cgroup CPU threshold.This section applies to MRS 3.x or later.Join Cgr",
"doc_type":"cmpntguide",
"kw":"Flume Client Cgroup Usage Guide,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Flume Client Cgroup Usage Guide",
"githuburl":""
},
{
"uri":"mrs_01_1083.html",
+ "node_id":"mrs_01_1083.xml",
"product_code":"mrs",
"code":"189",
"des":"This section describes how to perform secondary development for third-party plug-ins.This section applies to MRS 3.x or later.You have obtained the third-party JAR packag",
"doc_type":"cmpntguide",
"kw":"Secondary Development Guide for Flume Third-Party Plug-ins,Using Flume,Component Operation Guide (No",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Secondary Development Guide for Flume Third-Party Plug-ins",
"githuburl":""
},
{
"uri":"mrs_01_1598.html",
+ "node_id":"mrs_01_1598.xml",
"product_code":"mrs",
"code":"190",
"des":"Flume logs are stored in /var/log/Bigdata/flume/flume/flumeServer.log. Most data transmission exceptions and data transmission failures are recorded in logs. You can run ",
"doc_type":"cmpntguide",
"kw":"Common Issues About Flume,Using Flume,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About Flume",
"githuburl":""
},
{
"uri":"mrs_01_0500.html",
+ "node_id":"mrs_01_0500.xml",
"product_code":"mrs",
"code":"191",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using HBase",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using HBase",
"githuburl":""
},
{
"uri":"mrs_01_0368.html",
+ "node_id":"mrs_01_0368.xml",
"product_code":"mrs",
"code":"192",
"des":"HBase is a column-based distributed storage system that features high reliability, performance, and scalability. This section describes how to use HBase from scratch, inc",
"doc_type":"cmpntguide",
"kw":"Using HBase from Scratch,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using HBase from Scratch",
"githuburl":""
},
{
"uri":"bakmrs_01_0368.html",
+ "node_id":"bakmrs_01_0368.xml",
"product_code":"mrs",
"code":"193",
"des":"This section describes how to use the HBase client in an O&M scenario or a service scenario.The client has been installed. For example, the installation directory is /opt",
"doc_type":"cmpntguide",
"kw":"Using an HBase Client,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using an HBase Client",
"githuburl":""
},
{
"uri":"mrs_01_1608.html",
+ "node_id":"mrs_01_1608.xml",
"product_code":"mrs",
"code":"194",
"des":"This section guides the system administrator to create and configure an HBase role on Manager. The HBase role can set HBase administrator permissions and read (R), write ",
"doc_type":"cmpntguide",
"kw":"Creating HBase Roles,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating HBase Roles",
"githuburl":""
},
{
"uri":"mrs_01_0501.html",
+ "node_id":"mrs_01_0501.xml",
"product_code":"mrs",
"code":"195",
"des":"As a key feature to ensure high availability of the HBase cluster system, HBase cluster replication provides HBase with remote data replication in real time. It provides ",
"doc_type":"cmpntguide",
"kw":"Configuring HBase Replication,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring HBase Replication",
"githuburl":""
},
{
"uri":"mrs_01_0443.html",
+ "node_id":"mrs_01_0443.xml",
"product_code":"mrs",
"code":"196",
"des":"The operations described in this section apply only to clusters of versions earlier than MRS 3.x.If the default parameter settings of the MRS service cannot meet your req",
"doc_type":"cmpntguide",
"kw":"Configuring HBase Parameters,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring HBase Parameters",
"githuburl":""
},
{
"uri":"mrs_01_0502.html",
+ "node_id":"mrs_01_0502.xml",
"product_code":"mrs",
"code":"197",
"des":"DistCp is used to copy the data stored on HDFS from a cluster to another cluster. DistCp depends on the cross-cluster copy function, which is disabled by default. This fu",
"doc_type":"cmpntguide",
"kw":"Enabling Cross-Cluster Copy,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Enabling Cross-Cluster Copy",
"githuburl":""
},
{
"uri":"mrs_01_0510.html",
+ "node_id":"mrs_01_0510.xml",
"product_code":"mrs",
"code":"198",
"des":"Active and standby clusters have been installed and started.Time is consistent between the active and standby clusters and the NTP service on the active and standby clust",
"doc_type":"cmpntguide",
"kw":"Using the ReplicationSyncUp Tool,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the ReplicationSyncUp Tool",
"githuburl":""
},
{
- "uri":"mrs_01_24119.html",
+ "uri":"mrs_01_1609.html",
+ "node_id":"mrs_01_1609.xml",
"product_code":"mrs",
"code":"199",
- "des":"This section applies only to MRS 3.1.0 or later.This section describes common GeoMesa commands. For more GeoMesa commands, visit https://www.geomesa.org/documentation/use",
- "doc_type":"cmpntguide",
- "kw":"GeoMesa Command Line,Using HBase,Component Operation Guide (Normal)",
- "title":"GeoMesa Command Line",
- "githuburl":""
- },
- {
- "uri":"mrs_01_1609.html",
- "product_code":"mrs",
- "code":"200",
"des":"HBase disaster recovery (DR), a key feature that is used to ensure high availability (HA) of the HBase cluster system, provides the real-time remote DR function for HBase",
"doc_type":"cmpntguide",
"kw":"Configuring HBase DR,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring HBase DR",
"githuburl":""
},
{
"uri":"mrs_01_24112.html",
+ "node_id":"mrs_01_24112.xml",
"product_code":"mrs",
- "code":"201",
+ "code":"200",
"des":"HBase encodes data blocks in HFiles to reduce duplicate keys in KeyValues, reducing used space. Currently, the following data block encoding modes are supported: NONE, PR",
"doc_type":"cmpntguide",
"kw":"Configuring HBase Data Compression and Encoding,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring HBase Data Compression and Encoding",
"githuburl":""
},
{
"uri":"mrs_01_1610.html",
+ "node_id":"mrs_01_1610.xml",
"product_code":"mrs",
- "code":"202",
+ "code":"201",
"des":"The system administrator can configure HBase cluster DR to improve system availability. If the active cluster in the DR environment is faulty and the connection to the HB",
"doc_type":"cmpntguide",
"kw":"Performing an HBase DR Service Switchover,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Performing an HBase DR Service Switchover",
"githuburl":""
},
{
"uri":"mrs_01_1611.html",
+ "node_id":"mrs_01_1611.xml",
"product_code":"mrs",
- "code":"203",
+ "code":"202",
"des":"The HBase cluster in the current environment is a DR cluster. Due to some reasons, the active and standby clusters need to be switched over. That is, the standby cluster ",
"doc_type":"cmpntguide",
"kw":"Performing an HBase DR Active/Standby Cluster Switchover,Using HBase,Component Operation Guide (Norm",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Performing an HBase DR Active/Standby Cluster Switchover",
"githuburl":""
},
{
"uri":"mrs_01_1612.html",
+ "node_id":"mrs_01_1612.xml",
"product_code":"mrs",
- "code":"204",
+ "code":"203",
"des":"The Apache HBase official website provides the function of importing data in batches. For details, see the description of the Import and ImportTsv tools at http://hbase.a",
"doc_type":"cmpntguide",
"kw":"Community BulkLoad Tool,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Community BulkLoad Tool",
"githuburl":""
},
{
"uri":"mrs_01_1631.html",
+ "node_id":"mrs_01_1631.xml",
"product_code":"mrs",
- "code":"205",
+ "code":"204",
"des":"In the actual application scenario, data in various sizes needs to be stored, for example, image data and documents. Data whose size is smaller than 10 MB can be stored i",
"doc_type":"cmpntguide",
"kw":"Configuring the MOB,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the MOB",
"githuburl":""
},
{
"uri":"mrs_01_1009.html",
+ "node_id":"mrs_01_1009.xml",
"product_code":"mrs",
- "code":"206",
+ "code":"205",
"des":"This topic provides the procedure to configure the secure HBase replication during cross-realm Kerberos setup in security mode.Mapping for all the FQDNs to their realms s",
"doc_type":"cmpntguide",
"kw":"Configuring Secure HBase Replication,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Secure HBase Replication",
"githuburl":""
},
{
"uri":"mrs_01_1010.html",
+ "node_id":"mrs_01_1010.xml",
"product_code":"mrs",
- "code":"207",
+ "code":"206",
"des":"In a faulty environment, there are possibilities that a region may be stuck in transition for longer duration due to various reasons like slow region server response, uns",
"doc_type":"cmpntguide",
"kw":"Configuring Region In Transition Recovery Chore Service,Using HBase,Component Operation Guide (Norma",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Region In Transition Recovery Chore Service",
"githuburl":""
},
{
"uri":"mrs_01_1056.html",
+ "node_id":"mrs_01_1056.xml",
"product_code":"mrs",
- "code":"208",
+ "code":"207",
"des":"Log path: The default storage path of HBase logs is /var/log/Bigdata/hbase/Role name.HMaster: /var/log/Bigdata/hbase/hm (run logs) and /var/log/Bigdata/audit/hbase/hm (au",
"doc_type":"cmpntguide",
"kw":"HBase Log Overview,Using HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"HBase Log Overview",
"githuburl":""
},
{
"uri":"mrs_01_1013.html",
+ "node_id":"mrs_01_1013.xml",
"product_code":"mrs",
- "code":"209",
+ "code":"208",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"HBase Performance Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"HBase Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_1636.html",
+ "node_id":"mrs_01_1636.xml",
"product_code":"mrs",
- "code":"210",
+ "code":"209",
"des":"BulkLoad uses MapReduce jobs to directly generate files that comply with the internal data format of HBase, and then loads the generated StoreFiles to a running cluster. ",
"doc_type":"cmpntguide",
"kw":"Improving the BulkLoad Efficiency,HBase Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Improving the BulkLoad Efficiency",
"githuburl":""
},
{
"uri":"mrs_01_1637.html",
+ "node_id":"mrs_01_1637.xml",
"product_code":"mrs",
- "code":"211",
+ "code":"210",
"des":"In the scenario where a large number of requests are continuously put, setting the following two parameters to false can greatly improve the Put performance.hbase.regions",
"doc_type":"cmpntguide",
"kw":"Improving Put Performance,HBase Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Improving Put Performance",
"githuburl":""
},
{
"uri":"mrs_01_1016.html",
+ "node_id":"mrs_01_1016.xml",
"product_code":"mrs",
- "code":"212",
+ "code":"211",
"des":"HBase has many configuration parameters related to read and write performance. The configuration parameters need to be adjusted based on the read/write request loads. Thi",
"doc_type":"cmpntguide",
"kw":"Optimizing Put and Scan Performance,HBase Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing Put and Scan Performance",
"githuburl":""
},
{
"uri":"mrs_01_1017.html",
+ "node_id":"mrs_01_1017.xml",
"product_code":"mrs",
- "code":"213",
+ "code":"212",
"des":"Scenarios where data needs to be written to HBase in real time, or large-scale and consecutive put scenariosThis section applies to MRS 3.x and later versions.The HBase p",
"doc_type":"cmpntguide",
"kw":"Improving Real-time Data Write Performance,HBase Performance Tuning,Component Operation Guide (Norma",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Improving Real-time Data Write Performance",
"githuburl":""
},
{
"uri":"mrs_01_1018.html",
+ "node_id":"mrs_01_1018.xml",
"product_code":"mrs",
- "code":"214",
+ "code":"213",
"des":"HBase data needs to be read.The get or scan interface of HBase has been invoked and data is read in real time from HBase.Data reading server tuningParameter portal:Go to ",
"doc_type":"cmpntguide",
"kw":"Improving Real-time Data Read Performance,HBase Performance Tuning,Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Improving Real-time Data Read Performance",
"githuburl":""
},
{
"uri":"mrs_01_1019.html",
+ "node_id":"mrs_01_1019.xml",
"product_code":"mrs",
- "code":"215",
+ "code":"214",
"des":"When the number of clusters reaches a certain scale, the default settings of the Java virtual machine (JVM) cannot meet the cluster requirements. In this case, the cluste",
"doc_type":"cmpntguide",
"kw":"Optimizing JVM Parameters,HBase Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing JVM Parameters",
"githuburl":""
},
{
"uri":"mrs_01_1638.html",
+ "node_id":"mrs_01_1638.xml",
"product_code":"mrs",
- "code":"216",
+ "code":"215",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues About HBase",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About HBase",
"githuburl":""
},
{
"uri":"mrs_01_1639.html",
+ "node_id":"mrs_01_1639.xml",
"product_code":"mrs",
- "code":"217",
+ "code":"216",
"des":"A HBase server is faulty and cannot provide services. In this case, when a table operation is performed on the HBase client, why is the operation suspended and no respons",
"doc_type":"cmpntguide",
"kw":"Why Does a Client Keep Failing to Connect to a Server for a Long Time?,Common Issues About HBase,Com",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does a Client Keep Failing to Connect to a Server for a Long Time?",
"githuburl":""
},
{
"uri":"mrs_01_1640.html",
+ "node_id":"mrs_01_1640.xml",
"product_code":"mrs",
- "code":"218",
+ "code":"217",
"des":"Why submitted operations fail by stopping BulkLoad on the client during BulkLoad data importing?When BulkLoad is enabled on the client, a partitioner file is generated an",
"doc_type":"cmpntguide",
"kw":"Operation Failures Occur in Stopping BulkLoad On the Client,Common Issues About HBase,Component Oper",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Operation Failures Occur in Stopping BulkLoad On the Client",
"githuburl":""
},
{
"uri":"mrs_01_1641.html",
+ "node_id":"mrs_01_1641.xml",
"product_code":"mrs",
- "code":"219",
+ "code":"218",
"des":"When HBase consecutively deletes and creates the same table, why may a table creation exception occur?Execution process: Disable Table > Drop Table > Create Table > Disab",
"doc_type":"cmpntguide",
"kw":"Why May a Table Creation Exception Occur When HBase Deletes or Creates the Same Table Consecutively?",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why May a Table Creation Exception Occur When HBase Deletes or Creates the Same Table Consecutively?",
"githuburl":""
},
{
"uri":"mrs_01_1642.html",
+ "node_id":"mrs_01_1642.xml",
"product_code":"mrs",
- "code":"220",
+ "code":"219",
"des":"Why other services become unstable if HBase sets up a large number of connections over the network port?When the OS command lsof or netstat is run, it is found that many ",
"doc_type":"cmpntguide",
"kw":"Why Other Services Become Unstable If HBase Sets up A Large Number of Connections over the Network P",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Other Services Become Unstable If HBase Sets up A Large Number of Connections over the Network Port?",
"githuburl":""
},
{
"uri":"mrs_01_1643.html",
+ "node_id":"mrs_01_1643.xml",
"product_code":"mrs",
- "code":"221",
+ "code":"220",
"des":"The HBase bulkLoad task (a single table contains 26 TB data) has 210,000 maps and 10,000 reduce tasks (in MRS 3.x or later), and the task fails.ZooKeeper I/O bottleneck o",
"doc_type":"cmpntguide",
"kw":"Why Does the HBase BulkLoad Task (One Table Has 26 TB Data) Consisting of 210,000 Map Tasks and 10,0",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does the HBase BulkLoad Task (One Table Has 26 TB Data) Consisting of 210,000 Map Tasks and 10,000 Reduce Tasks Fail?",
"githuburl":""
},
{
"uri":"mrs_01_1644.html",
+ "node_id":"mrs_01_1644.xml",
"product_code":"mrs",
- "code":"222",
+ "code":"221",
"des":"How do I restore a region in the RIT state for a long time?Log in to the HMaster Web UI, choose Procedure & Locks in the navigation tree, and check whether any process ID",
"doc_type":"cmpntguide",
"kw":"How Do I Restore a Region in the RIT State for a Long Time?,Common Issues About HBase,Component Oper",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Restore a Region in the RIT State for a Long Time?",
"githuburl":""
},
{
"uri":"mrs_01_1645.html",
+ "node_id":"mrs_01_1645.xml",
"product_code":"mrs",
- "code":"223",
+ "code":"222",
"des":"Why does HMaster exit due to timeout when waiting for the namespace table to go online?During the HMaster active/standby switchover or startup, HMaster performs WAL split",
"doc_type":"cmpntguide",
"kw":"Why Does HMaster Exits Due to Timeout When Waiting for the Namespace Table to Go Online?,Common Issu",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does HMaster Exits Due to Timeout When Waiting for the Namespace Table to Go Online?",
"githuburl":""
},
{
"uri":"mrs_01_1646.html",
+ "node_id":"mrs_01_1646.xml",
"product_code":"mrs",
- "code":"224",
+ "code":"223",
"des":"Why does the following exception occur on the client when I use the HBase client to operate table data?At the same time, the following log is displayed on RegionServer:Th",
"doc_type":"cmpntguide",
"kw":"Why Does SocketTimeoutException Occur When a Client Queries HBase?,Common Issues About HBase,Compone",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does SocketTimeoutException Occur When a Client Queries HBase?",
"githuburl":""
},
{
"uri":"mrs_01_1647.html",
+ "node_id":"mrs_01_1647.xml",
"product_code":"mrs",
- "code":"225",
+ "code":"224",
"des":"Why modified and deleted data can still be queried by using the scan command?Because of the scalability of HBase, all values specific to the versions in the queried colum",
"doc_type":"cmpntguide",
"kw":"Why Modified and Deleted Data Can Still Be Queried by Using the Scan Command?,Common Issues About HB",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Modified and Deleted Data Can Still Be Queried by Using the Scan Command?",
"githuburl":""
},
{
"uri":"mrs_01_1648.html",
+ "node_id":"mrs_01_1648.xml",
"product_code":"mrs",
- "code":"226",
+ "code":"225",
"des":"Why \"java.lang.UnsatisfiedLinkError: Permission denied\" exception thrown while starting HBase shell?During HBase shell execution JRuby create temporary files under java.i",
"doc_type":"cmpntguide",
"kw":"Why \"java.lang.UnsatisfiedLinkError: Permission denied\" exception thrown while starting HBase shell?",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why \"java.lang.UnsatisfiedLinkError: Permission denied\" exception thrown while starting HBase shell?",
"githuburl":""
},
{
"uri":"mrs_01_1649.html",
+ "node_id":"mrs_01_1649.xml",
"product_code":"mrs",
- "code":"227",
+ "code":"226",
"des":"When does the RegionServers listed under \"Dead Region Servers\" on HMaster WebUI gets cleared?When an online RegionServer goes down abruptly, it is displayed under \"Dead R",
"doc_type":"cmpntguide",
"kw":"When does the RegionServers listed under \"Dead Region Servers\" on HMaster WebUI gets cleared?,Common",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"When does the RegionServers listed under \"Dead Region Servers\" on HMaster WebUI gets cleared?",
"githuburl":""
},
{
"uri":"mrs_01_1650.html",
+ "node_id":"mrs_01_1650.xml",
"product_code":"mrs",
- "code":"228",
+ "code":"227",
"des":"If the data to be imported by HBase bulkload has identical rowkeys, the data import is successful but identical query criteria produce different query results.Data with a",
"doc_type":"cmpntguide",
"kw":"Why Are Different Query Results Returned After I Use Same Query Criteria to Query Data Successfully ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Are Different Query Results Returned After I Use Same Query Criteria to Query Data Successfully Imported by HBase bulkload?",
"githuburl":""
},
{
"uri":"mrs_01_1651.html",
+ "node_id":"mrs_01_1651.xml",
"product_code":"mrs",
- "code":"229",
+ "code":"228",
"des":"What should I do if I fail to create tables due to the FAILED_OPEN state of Regions?If a network, HDFS, or Active HMaster fault occurs during the creation of tables, some",
"doc_type":"cmpntguide",
"kw":"What Should I Do If I Fail to Create Tables Due to the FAILED_OPEN State of Regions?,Common Issues A",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Should I Do If I Fail to Create Tables Due to the FAILED_OPEN State of Regions?",
"githuburl":""
},
{
"uri":"mrs_01_1652.html",
+ "node_id":"mrs_01_1652.xml",
"product_code":"mrs",
- "code":"230",
+ "code":"229",
"des":"In security mode, names of tables that failed to be created are unnecessarily retained in the table-lock node (default directory is /hbase/table-lock) of ZooKeeper. How d",
"doc_type":"cmpntguide",
"kw":"How Do I Delete Residual Table Names in the /hbase/table-lock Directory of ZooKeeper?,Common Issues ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Delete Residual Table Names in the /hbase/table-lock Directory of ZooKeeper?",
"githuburl":""
},
{
"uri":"mrs_01_1653.html",
+ "node_id":"mrs_01_1653.xml",
"product_code":"mrs",
- "code":"231",
+ "code":"230",
"des":"Why does HBase become faulty when I set quota for the directory used by HBase in HDFS?The flush operation of a table is to write memstore data to HDFS.If the HDFS directo",
"doc_type":"cmpntguide",
"kw":"Why Does HBase Become Faulty When I Set a Quota for the Directory Used by HBase in HDFS?,Common Issu",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does HBase Become Faulty When I Set a Quota for the Directory Used by HBase in HDFS?",
"githuburl":""
},
{
"uri":"mrs_01_1654.html",
+ "node_id":"mrs_01_1654.xml",
"product_code":"mrs",
- "code":"232",
+ "code":"231",
"des":"Why HMaster times out while waiting for namespace table to be assigned after rebuilding meta using OfflineMetaRepair tool and startups failed?HMaster abort with following",
"doc_type":"cmpntguide",
"kw":"Why HMaster Times Out While Waiting for Namespace Table to be Assigned After Rebuilding Meta Using O",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why HMaster Times Out While Waiting for Namespace Table to be Assigned After Rebuilding Meta Using OfflineMetaRepair Tool and Startups Failed",
"githuburl":""
},
{
"uri":"mrs_01_1655.html",
+ "node_id":"mrs_01_1655.xml",
"product_code":"mrs",
- "code":"233",
+ "code":"232",
"des":"Why messages containing FileNotFoundException and no lease are frequently displayed in the HMaster logs during the WAL splitting process?During the WAL splitting process,",
"doc_type":"cmpntguide",
"kw":"Why Messages Containing FileNotFoundException and no lease Are Frequently Displayed in the HMaster L",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Messages Containing FileNotFoundException and no lease Are Frequently Displayed in the HMaster Logs During the WAL Splitting Process?",
"githuburl":""
},
{
"uri":"mrs_01_1657.html",
+ "node_id":"mrs_01_1657.xml",
"product_code":"mrs",
- "code":"234",
+ "code":"233",
"des":"When a tenant accesses Phoenix, a message is displayed indicating that the tenant has insufficient rights.You need to associate the HBase service and Yarn queues when cre",
"doc_type":"cmpntguide",
"kw":"Insufficient Rights When a Tenant Accesses Phoenix,Common Issues About HBase,Component Operation Gui",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Insufficient Rights When a Tenant Accesses Phoenix",
"githuburl":""
},
{
"uri":"mrs_01_1659.html",
+ "node_id":"mrs_01_1659.xml",
"product_code":"mrs",
- "code":"235",
+ "code":"234",
"des":"The system automatically rolls back data after an HBase recovery task fails. If \"Rollback recovery failed\" is displayed, the rollback fails. After the rollback fails, dat",
"doc_type":"cmpntguide",
"kw":"What Can I Do When HBase Fails to Recover a Task and a Message Is Displayed Stating \"Rollback recove",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Can I Do When HBase Fails to Recover a Task and a Message Is Displayed Stating \"Rollback recovery failed\"?",
"githuburl":""
},
{
"uri":"mrs_01_1660.html",
+ "node_id":"mrs_01_1660.xml",
"product_code":"mrs",
- "code":"236",
+ "code":"235",
"des":"When the HBaseFsck tool is used to check the region status in MRS 3.x and later versions, if the log contains ERROR: (regions region1 and region2) There is an overlap in ",
"doc_type":"cmpntguide",
"kw":"How Do I Fix Region Overlapping?,Common Issues About HBase,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Fix Region Overlapping?",
"githuburl":""
},
{
"uri":"mrs_01_1661.html",
+ "node_id":"mrs_01_1661.xml",
"product_code":"mrs",
- "code":"237",
+ "code":"236",
"des":"(MRS 3.x and later versions) Check the hbase-omm-*.out log of the node where RegionServer fails to be started. It is found that the log contains An error report file with",
"doc_type":"cmpntguide",
"kw":"Why Does RegionServer Fail to Be Started When GC Parameters Xms and Xmx of HBase RegionServer Are Se",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does RegionServer Fail to Be Started When GC Parameters Xms and Xmx of HBase RegionServer Are Set to 31 GB?",
"githuburl":""
},
{
"uri":"mrs_01_0625.html",
+ "node_id":"mrs_01_0625.xml",
"product_code":"mrs",
- "code":"238",
+ "code":"237",
"des":"Why does the LoadIncrementalHFiles tool fail to be executed and \"Permission denied\" is displayed when a Linux user is manually created in a normal cluster and DataNode in",
"doc_type":"cmpntguide",
"kw":"Why Does the LoadIncrementalHFiles Tool Fail to Be Executed and \"Permission denied\" Is Displayed Whe",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does the LoadIncrementalHFiles Tool Fail to Be Executed and \"Permission denied\" Is Displayed When Nodes in a Cluster Are Used to Import Data in Batches?",
"githuburl":""
},
{
"uri":"mrs_01_2210.html",
+ "node_id":"mrs_01_2210.xml",
"product_code":"mrs",
- "code":"239",
+ "code":"238",
"des":"When the sqlline script is used on the client, the error message \"import argparse\" is displayed.",
"doc_type":"cmpntguide",
"kw":"Why Is the Error Message \"import argparse\" Displayed When the Phoenix sqlline Script Is Used?,Common",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Is the Error Message \"import argparse\" Displayed When the Phoenix sqlline Script Is Used?",
"githuburl":""
},
{
"uri":"mrs_01_2211.html",
+ "node_id":"mrs_01_2211.xml",
"product_code":"mrs",
- "code":"240",
+ "code":"239",
"des":"When the indexed field data is updated, if a batch of data exists in the user table, the BulkLoad tool cannot update the global and partial mutable indexes.Problem Analys",
"doc_type":"cmpntguide",
"kw":"How Do I Deal with the Restrictions of the Phoenix BulkLoad Tool?,Common Issues About HBase,Componen",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Deal with the Restrictions of the Phoenix BulkLoad Tool?",
"githuburl":""
},
{
"uri":"mrs_01_2212.html",
+ "node_id":"mrs_01_2212.xml",
"product_code":"mrs",
- "code":"241",
+ "code":"240",
"des":"When CTBase accesses the HBase service with the Ranger plug-ins enabled and you are creating a cluster table, a message is displayed indicating that the permission is ins",
"doc_type":"cmpntguide",
"kw":"Why a Message Is Displayed Indicating that the Permission is Insufficient When CTBase Connects to th",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why a Message Is Displayed Indicating that the Permission is Insufficient When CTBase Connects to the Ranger Plug-ins?",
"githuburl":""
},
{
"uri":"mrs_01_0790.html",
+ "node_id":"mrs_01_0790.xml",
"product_code":"mrs",
- "code":"242",
+ "code":"241",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using HDFS",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using HDFS",
"githuburl":""
},
{
"uri":"mrs_01_0791.html",
+ "node_id":"mrs_01_0791.xml",
"product_code":"mrs",
- "code":"243",
+ "code":"242",
"des":"In HDFS, each file object needs to register corresponding information in the NameNode and occupies certain storage space. As the number of files increases, if the origina",
"doc_type":"cmpntguide",
"kw":"Configuring Memory Management,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Memory Management",
"githuburl":""
},
{
"uri":"mrs_01_1662.html",
+ "node_id":"mrs_01_1662.xml",
"product_code":"mrs",
- "code":"244",
+ "code":"243",
"des":"This section describes how to create and configure an HDFS role on FusionInsight Manager. The HDFS role is granted the rights to read, write, and execute HDFS directories",
"doc_type":"cmpntguide",
"kw":"Creating an HDFS Role,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating an HDFS Role",
"githuburl":""
},
{
"uri":"mrs_01_1663.html",
+ "node_id":"mrs_01_1663.xml",
"product_code":"mrs",
- "code":"245",
+ "code":"244",
"des":"This section describes how to use the HDFS client in an O&M scenario or service scenario.The client has been installed.For example, the installation directory is /opt/had",
"doc_type":"cmpntguide",
"kw":"Using the HDFS Client,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the HDFS Client",
"githuburl":""
},
{
"uri":"mrs_01_0794.html",
+ "node_id":"mrs_01_0794.xml",
"product_code":"mrs",
- "code":"246",
+ "code":"245",
"des":"DistCp is a tool used to perform large-amount data replication between clusters or in a cluster. It uses MapReduce tasks to implement distributed copy of a large amount o",
"doc_type":"cmpntguide",
"kw":"Running the DistCp Command,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Running the DistCp Command",
"githuburl":""
},
{
"uri":"mrs_01_0795.html",
+ "node_id":"mrs_01_0795.xml",
"product_code":"mrs",
- "code":"247",
+ "code":"246",
"des":"This section describes the directory structure in HDFS, as shown in the following table.",
"doc_type":"cmpntguide",
"kw":"Overview of HDFS File System Directories,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Overview of HDFS File System Directories",
"githuburl":""
},
{
"uri":"mrs_01_1664.html",
+ "node_id":"mrs_01_1664.xml",
"product_code":"mrs",
- "code":"248",
+ "code":"247",
"des":"This section applies to MRS 3.x or later clusters.If the storage directory defined by the HDFS DataNode is incorrect or the HDFS storage plan changes, the system administ",
"doc_type":"cmpntguide",
"kw":"Changing the DataNode Storage Directory,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Changing the DataNode Storage Directory",
"githuburl":""
},
{
"uri":"mrs_01_0797.html",
+ "node_id":"mrs_01_0797.xml",
"product_code":"mrs",
- "code":"249",
+ "code":"248",
"des":"The permission for some HDFS directories is 777 or 750 by default, which brings potential security risks. You are advised to modify the permission for the HDFS directorie",
"doc_type":"cmpntguide",
"kw":"Configuring HDFS Directory Permission,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring HDFS Directory Permission",
"githuburl":""
},
{
"uri":"mrs_01_1665.html",
+ "node_id":"mrs_01_1665.xml",
"product_code":"mrs",
- "code":"250",
+ "code":"249",
"des":"This section applies to MRS 3.x or later.Before deploying a cluster, you can deploy a Network File System (NFS) server based on requirements to store NameNode metadata to",
"doc_type":"cmpntguide",
"kw":"Configuring NFS,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring NFS",
"githuburl":""
},
{
"uri":"mrs_01_0799.html",
+ "node_id":"mrs_01_0799.xml",
"product_code":"mrs",
- "code":"251",
+ "code":"250",
"des":"In HDFS, DataNode stores user files and directories as blocks, and file objects are generated on the NameNode to map each file, directory, and block on the DataNode.The f",
"doc_type":"cmpntguide",
"kw":"Planning HDFS Capacity,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Planning HDFS Capacity",
"githuburl":""
},
{
"uri":"mrs_01_0801.html",
+ "node_id":"mrs_01_0801.xml",
"product_code":"mrs",
- "code":"252",
+ "code":"251",
"des":"When you open an HDFS file, an error occurs due to the limit on the number of file handles. Information similar to the following is displayed.You can contact the systemad",
"doc_type":"cmpntguide",
"kw":"Configuring ulimit for HBase and HDFS,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring ulimit for HBase and HDFS",
"githuburl":""
},
{
"uri":"mrs_01_1667.html",
+ "node_id":"mrs_01_1667.xml",
"product_code":"mrs",
- "code":"253",
+ "code":"252",
"des":"This section applies to MRS 3.x or later clusters.In the HDFS cluster, unbalanced disk usage among DataNodes may occur, for example, when new DataNodes are added to the c",
"doc_type":"cmpntguide",
"kw":"Balancing DataNode Capacity,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Balancing DataNode Capacity",
"githuburl":""
},
{
"uri":"mrs_01_0804.html",
+ "node_id":"mrs_01_0804.xml",
"product_code":"mrs",
- "code":"254",
+ "code":"253",
"des":"By default, NameNode randomly selects a DataNode to write files. If the disk capacity of some DataNodes in a cluster is inconsistent (the total disk capacity of some node",
"doc_type":"cmpntguide",
"kw":"Configuring Replica Replacement Policy for Heterogeneous Capacity Among DataNodes,Using HDFS,Compone",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Replica Replacement Policy for Heterogeneous Capacity Among DataNodes",
"githuburl":""
},
{
"uri":"mrs_01_0805.html",
+ "node_id":"mrs_01_0805.xml",
"product_code":"mrs",
- "code":"255",
+ "code":"254",
"des":"Generally, multiple services are deployed in a cluster, and the storage of most services depends on the HDFS file system. Different components such as Spark and Yarn or c",
"doc_type":"cmpntguide",
"kw":"Configuring the Number of Files in a Single HDFS Directory,Using HDFS,Component Operation Guide (Nor",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Number of Files in a Single HDFS Directory",
"githuburl":""
},
{
"uri":"mrs_01_0806.html",
+ "node_id":"mrs_01_0806.xml",
"product_code":"mrs",
- "code":"256",
+ "code":"255",
"des":"On HDFS, deleted files are moved to the recycle bin (trash can) so that the data deleted by mistake can be restored.You can set the time threshold for storing files in th",
"doc_type":"cmpntguide",
"kw":"Configuring the Recycle Bin Mechanism,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Recycle Bin Mechanism",
"githuburl":""
},
{
"uri":"mrs_01_0807.html",
+ "node_id":"mrs_01_0807.xml",
"product_code":"mrs",
- "code":"257",
+ "code":"256",
"des":"HDFS allows users to modify the default permissions of files and directories. The default mask provided by the HDFS for creating file and directory permissions is 022. If",
"doc_type":"cmpntguide",
"kw":"Setting Permissions on Files and Directories,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Setting Permissions on Files and Directories",
"githuburl":""
},
{
"uri":"mrs_01_0808.html",
+ "node_id":"mrs_01_0808.xml",
"product_code":"mrs",
- "code":"258",
+ "code":"257",
"des":"In security mode, users can flexibly set the maximum token lifetime and token renewal interval in HDFS based on cluster requirements.Navigation path for setting parameter",
"doc_type":"cmpntguide",
"kw":"Setting the Maximum Lifetime and Renewal Interval of a Token,Using HDFS,Component Operation Guide (N",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Setting the Maximum Lifetime and Renewal Interval of a Token",
"githuburl":""
},
{
"uri":"mrs_01_1669.html",
+ "node_id":"mrs_01_1669.xml",
"product_code":"mrs",
- "code":"259",
+ "code":"258",
"des":"In the open source version, if multiple data storage volumes are configured for a DataNode, the DataNode stops providing services by default if one of the volumes is dama",
"doc_type":"cmpntguide",
"kw":"Configuring the Damaged Disk Volume,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Damaged Disk Volume",
"githuburl":""
},
{
"uri":"mrs_01_0810.html",
+ "node_id":"mrs_01_0810.xml",
"product_code":"mrs",
- "code":"260",
+ "code":"259",
"des":"Encrypted channel is an encryption protocol of remote procedure call (RPC) in HDFS. When a user invokes RPC, the user's login name will be transmitted to RPC through RPC ",
"doc_type":"cmpntguide",
"kw":"Configuring Encrypted Channels,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Encrypted Channels",
"githuburl":""
},
{
"uri":"mrs_01_0811.html",
+ "node_id":"mrs_01_0811.xml",
"product_code":"mrs",
- "code":"261",
+ "code":"260",
"des":"Clients probably encounter running errors when the network is not stable. Users can adjust the following parameter values to improve the running efficiency.Go to the All ",
"doc_type":"cmpntguide",
"kw":"Reducing the Probability of Abnormal Client Application Operation When the Network Is Not Stable,Usi",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Reducing the Probability of Abnormal Client Application Operation When the Network Is Not Stable",
"githuburl":""
},
{
"uri":"mrs_01_1670.html",
+ "node_id":"mrs_01_1670.xml",
"product_code":"mrs",
- "code":"262",
+ "code":"261",
"des":"This section applies to MRS 3.x or later.In the existing default DFSclient failover proxy provider, if a NameNode in a process is faulty, all HDFS client instances in the",
"doc_type":"cmpntguide",
"kw":"Configuring the NameNode Blacklist,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the NameNode Blacklist",
"githuburl":""
},
{
"uri":"mrs_01_1672.html",
+ "node_id":"mrs_01_1672.xml",
"product_code":"mrs",
- "code":"263",
+ "code":"262",
"des":"This section applies to MRS 3.x or later.Several finished Hadoop clusters are faulty because the NameNode is overloaded and unresponsive.Such problem is caused by the ini",
"doc_type":"cmpntguide",
"kw":"Optimizing HDFS NameNode RPC QoS,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing HDFS NameNode RPC QoS",
"githuburl":""
},
{
"uri":"mrs_01_1673.html",
+ "node_id":"mrs_01_1673.xml",
"product_code":"mrs",
- "code":"264",
+ "code":"263",
"des":"When the speed at which the client writes data to the HDFS is greater than the disk bandwidth of the DataNode, the disk bandwidth is fully occupied. As a result, the Data",
"doc_type":"cmpntguide",
"kw":"Optimizing HDFS DataNode RPC QoS,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing HDFS DataNode RPC QoS",
"githuburl":""
},
{
"uri":"mrs_01_1675.html",
+ "node_id":"mrs_01_1675.xml",
"product_code":"mrs",
- "code":"265",
+ "code":"264",
"des":"When the Yarn local directory and DataNode directory are on the same disk, the disk with larger capacity can run more tasks. Therefore, more intermediate data is stored i",
"doc_type":"cmpntguide",
"kw":"Configuring Reserved Percentage of Disk Usage on DataNodes,Using HDFS,Component Operation Guide (Nor",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Reserved Percentage of Disk Usage on DataNodes",
"githuburl":""
},
{
"uri":"mrs_01_1676.html",
+ "node_id":"mrs_01_1676.xml",
"product_code":"mrs",
- "code":"266",
+ "code":"265",
"des":"You need to configure the nodes for storing HDFS file data blocks based on data features. You can configure a label expression to an HDFS directory or file and assign one",
"doc_type":"cmpntguide",
"kw":"Configuring HDFS NodeLabel,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring HDFS NodeLabel",
"githuburl":""
},
{
"uri":"mrs_01_2360.html",
+ "node_id":"mrs_01_2360.xml",
"product_code":"mrs",
- "code":"267",
+ "code":"266",
"des":"AZ Mover is a copy migration tool used to move copies to meet the new AZ policies set on the directory. It can be used to migrate copies from one AZ policy to another. AZ",
"doc_type":"cmpntguide",
"kw":"Using HDFS AZ Mover,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using HDFS AZ Mover",
"githuburl":""
},
{
"uri":"mrs_01_1681.html",
+ "node_id":"mrs_01_1681.xml",
"product_code":"mrs",
- "code":"268",
+ "code":"267",
"des":"In an HDFS cluster configured with HA, the active NameNode processes all client requests, and the standby NameNode reserves the latest metadata and block location informa",
"doc_type":"cmpntguide",
"kw":"Configuring the Observer NameNode to Process Read Requests,Using HDFS,Component Operation Guide (Nor",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Observer NameNode to Process Read Requests",
"githuburl":""
},
{
"uri":"mrs_01_1684.html",
+ "node_id":"mrs_01_1684.xml",
"product_code":"mrs",
- "code":"269",
+ "code":"268",
"des":"Performing this operation can concurrently modify file and directory permissions and access control tools in a cluster.This section applies to MRS 3.x or later clusters.P",
"doc_type":"cmpntguide",
"kw":"Performing Concurrent Operations on HDFS Files,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Performing Concurrent Operations on HDFS Files",
"githuburl":""
},
{
"uri":"mrs_01_0828.html",
+ "node_id":"mrs_01_0828.xml",
"product_code":"mrs",
- "code":"270",
+ "code":"269",
"des":"Log path: The default path of HDFS logs is /var/log/Bigdata/hdfs/Role name.NameNode: /var/log/Bigdata/hdfs/nn (run logs) and /var/log/Bigdata/audit/hdfs/nn (audit logs)Da",
"doc_type":"cmpntguide",
"kw":"Introduction to HDFS Logs,Using HDFS,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Introduction to HDFS Logs",
"githuburl":""
},
{
"uri":"mrs_01_0829.html",
+ "node_id":"mrs_01_0829.xml",
"product_code":"mrs",
- "code":"271",
+ "code":"270",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"HDFS Performance Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"HDFS Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_1687.html",
+ "node_id":"mrs_01_1687.xml",
"product_code":"mrs",
- "code":"272",
+ "code":"271",
"des":"Improve the HDFS write performance by modifying the HDFS attributes.This section applies to MRS 3.x or later.Navigation path for setting parameters:On FusionInsight Manag",
"doc_type":"cmpntguide",
"kw":"Improving Write Performance,HDFS Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Improving Write Performance",
"githuburl":""
},
{
"uri":"mrs_01_1688.html",
+ "node_id":"mrs_01_1688.xml",
"product_code":"mrs",
- "code":"273",
+ "code":"272",
"des":"Improve the HDFS read performance by using the client to cache the metadata for block locations.This function is recommended only for reading files that are not modified ",
"doc_type":"cmpntguide",
"kw":"Improving Read Performance Using Client Metadata Cache,HDFS Performance Tuning,Component Operation G",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Improving Read Performance Using Client Metadata Cache",
"githuburl":""
},
{
"uri":"mrs_01_1689.html",
+ "node_id":"mrs_01_1689.xml",
"product_code":"mrs",
- "code":"274",
+ "code":"273",
"des":"When HDFS is deployed in high availability (HA) mode with multiple NameNode instances, the HDFS client needs to connect to each NameNode in sequence to determine which is",
"doc_type":"cmpntguide",
"kw":"Improving the Connection Between the Client and NameNode Using Current Active Cache,HDFS Performance",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Improving the Connection Between the Client and NameNode Using Current Active Cache",
"githuburl":""
},
{
"uri":"mrs_01_1690.html",
+ "node_id":"mrs_01_1690.xml",
"product_code":"mrs",
- "code":"275",
+ "code":"274",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"FAQ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"FAQ",
"githuburl":""
},
{
"uri":"mrs_01_1691.html",
+ "node_id":"mrs_01_1691.xml",
"product_code":"mrs",
- "code":"276",
+ "code":"275",
"des":"The NameNode startup is slow when it is restarted immediately after a large number of files (for example, 1 million files) are deleted.It takes time for the DataNode to d",
"doc_type":"cmpntguide",
"kw":"NameNode Startup Is Slow,FAQ,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"NameNode Startup Is Slow",
"githuburl":""
},
{
"uri":"mrs_01_1693.html",
+ "node_id":"mrs_01_1693.xml",
"product_code":"mrs",
- "code":"277",
+ "code":"276",
"des":"The DataNode is normal, but cannot report data blocks. As a result, the existing data blocks cannot be used.This error may occur when the number of data blocks in a data ",
"doc_type":"cmpntguide",
"kw":"DataNode Is Normal but Cannot Report Data Blocks,FAQ,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"DataNode Is Normal but Cannot Report Data Blocks",
"githuburl":""
},
{
"uri":"mrs_01_1694.html",
+ "node_id":"mrs_01_1694.xml",
"product_code":"mrs",
- "code":"278",
+ "code":"277",
"des":"When errors occur in the dfs.datanode.data.dir directory of DataNode due to the permission or disk damage, HDFS WebUI does not display information about damaged data.Afte",
"doc_type":"cmpntguide",
"kw":"HDFS WebUI Cannot Properly Update Information About Damaged Data,FAQ,Component Operation Guide (Norm",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"HDFS WebUI Cannot Properly Update Information About Damaged Data",
"githuburl":""
},
{
"uri":"mrs_01_1695.html",
+ "node_id":"mrs_01_1695.xml",
"product_code":"mrs",
- "code":"279",
+ "code":"278",
"des":"Why distcp command fails in the secure cluster with the following error displayed?Client side exceptionServer side exceptionThe preceding error may occur if webhdfs:// is",
"doc_type":"cmpntguide",
"kw":"Why Does the Distcp Command Fail in the Secure Cluster, Causing an Exception?,FAQ,Component Operatio",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does the Distcp Command Fail in the Secure Cluster, Causing an Exception?",
"githuburl":""
},
{
"uri":"mrs_01_1696.html",
+ "node_id":"mrs_01_1696.xml",
"product_code":"mrs",
- "code":"280",
+ "code":"279",
"des":"If the number of disks specified by dfs.datanode.data.dir is equal to the value of dfs.datanode.failed.volumes.tolerated, DataNode startup will fail.By default, the failu",
"doc_type":"cmpntguide",
"kw":"Why Does DataNode Fail to Start When the Number of Disks Specified by dfs.datanode.data.dir Equals d",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does DataNode Fail to Start When the Number of Disks Specified by dfs.datanode.data.dir Equals dfs.datanode.failed.volumes.tolerated?",
"githuburl":""
},
{
"uri":"mrs_01_1697.html",
+ "node_id":"mrs_01_1697.xml",
"product_code":"mrs",
- "code":"281",
+ "code":"280",
"des":"The capacity of a DataNode fails to calculate when multiple data.dir directories are configured in a disk partition.Currently, the capacity is calculated based on disks, ",
"doc_type":"cmpntguide",
"kw":"Failed to Calculate the Capacity of a DataNode when Multiple data.dir Directories Are Configured in ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Failed to Calculate the Capacity of a DataNode when Multiple data.dir Directories Are Configured in a Disk Partition",
"githuburl":""
},
{
"uri":"mrs_01_1698.html",
+ "node_id":"mrs_01_1698.xml",
"product_code":"mrs",
- "code":"282",
+ "code":"281",
"des":"When the standby NameNode is powered off during metadata (namespace) storage, it fails to be started and the following error information is displayed.When the standby Nam",
"doc_type":"cmpntguide",
"kw":"Standby NameNode Fails to Be Restarted When the System Is Powered off During Metadata (Namespace) St",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Standby NameNode Fails to Be Restarted When the System Is Powered off During Metadata (Namespace) Storage",
"githuburl":""
},
{
"uri":"mrs_01_1699.html",
+ "node_id":"mrs_01_1699.xml",
"product_code":"mrs",
- "code":"283",
+ "code":"282",
"des":"Why data in the buffer is lost if a power outage occurs during storage of small files?Because of a power outage, the blocks in the buffer are not written to the disk imme",
"doc_type":"cmpntguide",
"kw":"Why Data in the Buffer Is Lost If a Power Outage Occurs During Storage of Small Files,FAQ,Component ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Data in the Buffer Is Lost If a Power Outage Occurs During Storage of Small Files",
"githuburl":""
},
{
"uri":"mrs_01_1700.html",
+ "node_id":"mrs_01_1700.xml",
"product_code":"mrs",
- "code":"284",
+ "code":"283",
"des":"When HDFS calls the FileInputFormat getSplit method, the ArrayIndexOutOfBoundsException: 0 appears in the following log:The elements of each block correspondent frame are",
"doc_type":"cmpntguide",
"kw":"Why Does Array Border-crossing Occur During FileInputFormat Split?,FAQ,Component Operation Guide (No",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does Array Border-crossing Occur During FileInputFormat Split?",
"githuburl":""
},
{
"uri":"mrs_01_1701.html",
+ "node_id":"mrs_01_1701.xml",
"product_code":"mrs",
- "code":"285",
+ "code":"284",
"des":"When the storage policy of the file is set to LAZY_PERSIST, the storage type of the first replica should be RAM_DISK, and the storage type of other replicas should be DIS",
"doc_type":"cmpntguide",
"kw":"Why Is the Storage Type of File Copies DISK When the Tiered Storage Policy Is LAZY_PERSIST?,FAQ,Comp",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Is the Storage Type of File Copies DISK When the Tiered Storage Policy Is LAZY_PERSIST?",
"githuburl":""
},
{
"uri":"mrs_01_1702.html",
+ "node_id":"mrs_01_1702.xml",
"product_code":"mrs",
- "code":"286",
+ "code":"285",
"des":"When the NameNode node is overloaded (100% of the CPU is occupied), the NameNode is unresponsive. The HDFS clients that are connected to the overloaded NameNode fail to r",
"doc_type":"cmpntguide",
"kw":"The HDFS Client Is Unresponsive When the NameNode Is Overloaded for a Long Time,FAQ,Component Operat",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"The HDFS Client Is Unresponsive When the NameNode Is Overloaded for a Long Time",
"githuburl":""
},
{
"uri":"mrs_01_1703.html",
+ "node_id":"mrs_01_1703.xml",
"product_code":"mrs",
- "code":"287",
+ "code":"286",
"des":"In DataNode, the storage directory of data blocks is specified by dfs.datanode.data.dir.Can I modify dfs.datanode.data.dir tomodify the data storage directory?Can I modif",
"doc_type":"cmpntguide",
"kw":"Can I Delete or Modify the Data Storage Directory in DataNode?,FAQ,Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Can I Delete or Modify the Data Storage Directory in DataNode?",
"githuburl":""
},
{
"uri":"mrs_01_1704.html",
+ "node_id":"mrs_01_1704.xml",
"product_code":"mrs",
- "code":"288",
+ "code":"287",
"des":"Why are some blocks missing on the NameNode UI after the rollback is successful?This problem occurs because blocks with new IDs or genstamps may exist on the DataNode. Th",
"doc_type":"cmpntguide",
"kw":"Blocks Miss on the NameNode UI After the Successful Rollback,FAQ,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Blocks Miss on the NameNode UI After the Successful Rollback",
"githuburl":""
},
{
"uri":"mrs_01_1705.html",
+ "node_id":"mrs_01_1705.xml",
"product_code":"mrs",
- "code":"289",
+ "code":"288",
"des":"Why is an \"java.net.SocketException: No buffer space available\" exception reported when data is written to HDFS?This problem occurs when files are written to the HDFS. Ch",
"doc_type":"cmpntguide",
"kw":"Why Is \"java.net.SocketException: No buffer space available\" Reported When Data Is Written to HDFS,F",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Is \"java.net.SocketException: No buffer space available\" Reported When Data Is Written to HDFS",
"githuburl":""
},
{
"uri":"mrs_01_1706.html",
+ "node_id":"mrs_01_1706.xml",
"product_code":"mrs",
- "code":"290",
+ "code":"289",
"des":"Why are there two standby NameNodes after the active NameNode is restarted?When this problem occurs, check the ZooKeeper and ZooKeeper FC logs. You can find that the sess",
"doc_type":"cmpntguide",
"kw":"Why are There Two Standby NameNodes After the active NameNode Is Restarted?,FAQ,Component Operation ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why are There Two Standby NameNodes After the active NameNode Is Restarted?",
"githuburl":""
},
{
"uri":"mrs_01_1707.html",
+ "node_id":"mrs_01_1707.xml",
"product_code":"mrs",
- "code":"291",
+ "code":"290",
"des":"After I start a Balance process in HDFS, the process is shut down abnormally. If I attempt to execute the Balance process again, it fails again.After a Balance process is",
"doc_type":"cmpntguide",
"kw":"When Does a Balance Process in HDFS, Shut Down and Fail to be Executed Again?,FAQ,Component Operatio",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"When Does a Balance Process in HDFS, Shut Down and Fail to be Executed Again?",
"githuburl":""
},
{
"uri":"mrs_01_1708.html",
+ "node_id":"mrs_01_1708.xml",
"product_code":"mrs",
- "code":"292",
+ "code":"291",
"des":"Occasionally, nternet Explorer 9, Explorer 10, or Explorer 11 fails to access the native HDFS UI.Internet Explorer 9, Explorer 10, or Explorer 11 fails to access the nati",
"doc_type":"cmpntguide",
"kw":"\"This page can't be displayed\" Is Displayed When Internet Explorer Fails to Access the Native HDFS U",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"\"This page can't be displayed\" Is Displayed When Internet Explorer Fails to Access the Native HDFS UI",
"githuburl":""
},
{
"uri":"mrs_01_1709.html",
+ "node_id":"mrs_01_1709.xml",
"product_code":"mrs",
- "code":"293",
+ "code":"292",
"des":"If a JournalNode server is powered off, the data directory disk is fully occupied, and the network is abnormal, the EditLog sequence number on the JournalNode is inconsec",
"doc_type":"cmpntguide",
"kw":"NameNode Fails to Be Restarted Due to EditLog Discontinuity,FAQ,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"NameNode Fails to Be Restarted Due to EditLog Discontinuity",
"githuburl":""
},
{
"uri":"mrs_01_0581.html",
+ "node_id":"mrs_01_0581.xml",
"product_code":"mrs",
- "code":"294",
+ "code":"293",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Hive",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Hive",
"githuburl":""
},
{
"uri":"mrs_01_0442.html",
+ "node_id":"mrs_01_0442.xml",
"product_code":"mrs",
- "code":"295",
+ "code":"294",
"des":"Hive is a data warehouse framework built on Hadoop. It maps structured data files to a database table and provides SQL-like functions to analyze and process data. It also",
"doc_type":"cmpntguide",
"kw":"Using Hive from Scratch,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Hive from Scratch",
"githuburl":""
},
{
"uri":"mrs_01_0582.html",
+ "node_id":"mrs_01_0582.xml",
"product_code":"mrs",
- "code":"296",
+ "code":"295",
"des":"Go to the Hive configurations page by referring to Modifying Cluster Service Configuration Parameters.",
"doc_type":"cmpntguide",
"kw":"Configuring Hive Parameters,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Hive Parameters",
"githuburl":""
},
{
"uri":"mrs_01_2330.html",
+ "node_id":"mrs_01_2330.xml",
"product_code":"mrs",
- "code":"297",
+ "code":"296",
"des":"Hive SQL supports all features of Hive-3.1.0. For details, see https://cwiki.apache.org/confluence/display/hive/languagemanual.Table 1 describes the extended Hive stateme",
"doc_type":"cmpntguide",
"kw":"Hive SQL,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hive SQL",
"githuburl":""
},
{
"uri":"mrs_01_0947.html",
+ "node_id":"mrs_01_0947.xml",
"product_code":"mrs",
- "code":"298",
+ "code":"297",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Permission Management",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Permission Management",
"githuburl":""
},
{
"uri":"mrs_01_0948.html",
+ "node_id":"mrs_01_0948.xml",
"product_code":"mrs",
- "code":"299",
+ "code":"298",
"des":"Hive is a data warehouse framework built on Hadoop. It provides basic data analysis services using the Hive query language (HQL), a language like the structured query lan",
"doc_type":"cmpntguide",
"kw":"Hive Permission,Permission Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hive Permission",
"githuburl":""
},
{
"uri":"mrs_01_0949.html",
+ "node_id":"mrs_01_0949.xml",
"product_code":"mrs",
- "code":"300",
+ "code":"299",
"des":"This section describes how to create and configure a Hive role on Manager as the system administrator. The Hive role can be granted the permissions of the Hive administra",
"doc_type":"cmpntguide",
"kw":"Creating a Hive Role,Permission Management,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating a Hive Role",
"githuburl":""
},
{
"uri":"mrs_01_0950.html",
+ "node_id":"mrs_01_0950.xml",
"product_code":"mrs",
- "code":"301",
+ "code":"300",
"des":"You can configure related permissions if you need to access tables or databases created by other users. Hive supports column-based permission control. If a user needs to ",
"doc_type":"cmpntguide",
"kw":"Configuring Permissions for Hive Tables, Columns, or Databases,Permission Management,Component Opera",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Permissions for Hive Tables, Columns, or Databases",
"githuburl":""
},
{
"uri":"mrs_01_0951.html",
+ "node_id":"mrs_01_0951.xml",
"product_code":"mrs",
- "code":"302",
+ "code":"301",
"des":"Hive may need to be associated with other components. For example, Yarn permissions are required in the scenario of using HQL statements to trigger MapReduce jobs, and HB",
"doc_type":"cmpntguide",
"kw":"Configuring Permissions to Use Other Components for Hive,Permission Management,Component Operation G",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Permissions to Use Other Components for Hive",
"githuburl":""
},
{
"uri":"mrs_01_0952.html",
+ "node_id":"mrs_01_0952.xml",
"product_code":"mrs",
- "code":"303",
+ "code":"302",
"des":"This section guides users to use a Hive client in an O&M or service scenario.The client has been installed. For example, the client is installed in the /opt/hadoopclient ",
"doc_type":"cmpntguide",
"kw":"Using a Hive Client,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using a Hive Client",
"githuburl":""
},
{
"uri":"mrs_01_0953.html",
+ "node_id":"mrs_01_0953.xml",
"product_code":"mrs",
- "code":"304",
+ "code":"303",
"des":"HDFS Colocation is the data location control function provided by HDFS. The HDFS Colocation API stores associated data or data on which associated operations are performe",
"doc_type":"cmpntguide",
"kw":"Using HDFS Colocation to Store Hive Tables,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using HDFS Colocation to Store Hive Tables",
"githuburl":""
},
{
"uri":"mrs_01_0954.html",
+ "node_id":"mrs_01_0954.xml",
"product_code":"mrs",
- "code":"305",
+ "code":"304",
"des":"Hive supports encryption of one or multiple columns in a table. When creating a Hive table, you can specify the column to be encrypted and encryption algorithm. When data",
"doc_type":"cmpntguide",
"kw":"Using the Hive Column Encryption Function,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the Hive Column Encryption Function",
"githuburl":""
},
{
"uri":"mrs_01_0955.html",
+ "node_id":"mrs_01_0955.xml",
"product_code":"mrs",
- "code":"306",
+ "code":"305",
"des":"In most cases, a carriage return character is used as the row delimiter in Hive tables stored in text files, that is, the carriage return character is used as the termina",
"doc_type":"cmpntguide",
"kw":"Customizing Row Separators,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Customizing Row Separators",
"githuburl":""
},
{
"uri":"mrs_01_24293.html",
+ "node_id":"mrs_01_24293.xml",
"product_code":"",
- "code":"307",
+ "code":"306",
"des":"For mutually trusted Hive and HBase clusters with Kerberos authentication enabled, you can access the HBase cluster and synchronize its key configurations to HiveServer o",
"doc_type":"",
"kw":"Configuring Hive on HBase in Across Clusters with Mutual Trust Enabled,Using Hive,Component Operatio",
+ "search_title":"",
+ "metedata":[
+ {
+
+ }
+ ],
"title":"Configuring Hive on HBase in Across Clusters with Mutual Trust Enabled",
"githuburl":""
},
{
"uri":"mrs_01_0956.html",
+ "node_id":"mrs_01_0956.xml",
"product_code":"mrs",
- "code":"308",
+ "code":"307",
"des":"Due to the limitations of underlying storage systems, Hive does not support the ability to delete a single piece of table data. In Hive on HBase, MRS Hive supports the ab",
"doc_type":"cmpntguide",
"kw":"Deleting Single-Row Records from Hive on HBase,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Deleting Single-Row Records from Hive on HBase",
"githuburl":""
},
{
"uri":"mrs_01_0957.html",
+ "node_id":"mrs_01_0957.xml",
"product_code":"mrs",
- "code":"309",
+ "code":"308",
"des":"WebHCat provides external REST APIs for Hive. By default, the open-source community version uses the HTTP protocol.MRS Hive supports the HTTPS protocol that is more secur",
"doc_type":"cmpntguide",
"kw":"Configuring HTTPS/HTTP-based REST APIs,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring HTTPS/HTTP-based REST APIs",
"githuburl":""
},
{
"uri":"mrs_01_0958.html",
+ "node_id":"mrs_01_0958.xml",
"product_code":"mrs",
- "code":"310",
+ "code":"309",
"des":"The Transform function is not allowed by Hive of the open source version.MRS Hive supports the configuration of the Transform function. The function is disabled by defaul",
"doc_type":"cmpntguide",
"kw":"Enabling or Disabling the Transform Function,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Enabling or Disabling the Transform Function",
"githuburl":""
},
{
"uri":"mrs_01_0959.html",
+ "node_id":"mrs_01_0959.xml",
"product_code":"mrs",
- "code":"311",
+ "code":"310",
"des":"This section describes how to create a view on Hive when MRS is configured in security mode, authorize access permissions to different users, and specify that different u",
"doc_type":"cmpntguide",
"kw":"Access Control of a Dynamic Table View on Hive,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Access Control of a Dynamic Table View on Hive",
"githuburl":""
},
{
"uri":"mrs_01_0960.html",
+ "node_id":"mrs_01_0960.xml",
"product_code":"mrs",
- "code":"312",
+ "code":"311",
"des":"You must have ADMIN permission when creating temporary functions on Hive of the open source community version.MRS Hive supports the configuration of the function for crea",
"doc_type":"cmpntguide",
"kw":"Specifying Whether the ADMIN Permissions Is Required for Creating Temporary Functions,Using Hive,Com",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Specifying Whether the ADMIN Permissions Is Required for Creating Temporary Functions",
"githuburl":""
},
{
"uri":"mrs_01_0961.html",
+ "node_id":"mrs_01_0961.xml",
"product_code":"mrs",
- "code":"313",
+ "code":"312",
"des":"Hive allows users to create external tables to associate with other relational databases. External tables read data from associated relational databases and support Join ",
"doc_type":"cmpntguide",
"kw":"Using Hive to Read Data in a Relational Database,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Hive to Read Data in a Relational Database",
"githuburl":""
},
{
"uri":"mrs_01_0962.html",
+ "node_id":"mrs_01_0962.xml",
"product_code":"mrs",
- "code":"314",
+ "code":"313",
"des":"Hive supports the following types of traditional relational database syntax:GroupingEXCEPT and INTERSECTSyntax description:Grouping takes effect only when the Group by st",
"doc_type":"cmpntguide",
"kw":"Supporting Traditional Relational Database Syntax in Hive,Using Hive,Component Operation Guide (Norm",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Supporting Traditional Relational Database Syntax in Hive",
"githuburl":""
},
{
"uri":"mrs_01_0966.html",
+ "node_id":"mrs_01_0966.xml",
"product_code":"mrs",
- "code":"315",
+ "code":"314",
"des":"This function is applicable to Hive and Spark2x in MRS 3.x and later.With this function enabled, if the select permission is granted to a user during Hive table creation,",
"doc_type":"cmpntguide",
"kw":"Viewing Table Structures Using the show create Statement as Users with the select Permission,Using H",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Viewing Table Structures Using the show create Statement as Users with the select Permission",
"githuburl":""
},
{
"uri":"mrs_01_0967.html",
+ "node_id":"mrs_01_0967.xml",
"product_code":"mrs",
- "code":"316",
+ "code":"315",
"des":"This function applies to Hive.After this function is enabled, run the following command to write a directory into Hive: insert overwrite directory \"/path1\".... After the ",
"doc_type":"cmpntguide",
"kw":"Writing a Directory into Hive with the Old Data Removed to the Recycle Bin,Using Hive,Component Oper",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Writing a Directory into Hive with the Old Data Removed to the Recycle Bin",
"githuburl":""
},
{
"uri":"mrs_01_0968.html",
+ "node_id":"mrs_01_0968.xml",
"product_code":"mrs",
- "code":"317",
+ "code":"316",
"des":"This function applies to Hive.With this function enabled, run the insert overwrite directory/path1/path2/path3... command to write a subdirectory. The permission of the /",
"doc_type":"cmpntguide",
"kw":"Inserting Data to a Directory That Does Not Exist,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Inserting Data to a Directory That Does Not Exist",
"githuburl":""
},
{
"uri":"mrs_01_0969.html",
+ "node_id":"mrs_01_0969.xml",
"product_code":"mrs",
- "code":"318",
+ "code":"317",
"des":"This function is applicable to Hive and Spark2x for MRS 3.x or later, or Hive and Spark for versions earlier than MRS 3.x.After this function is enabled, only the Hive ad",
"doc_type":"cmpntguide",
"kw":"Creating Databases and Creating Tables in the Default Database Only as the Hive Administrator,Using ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating Databases and Creating Tables in the Default Database Only as the Hive Administrator",
"githuburl":""
},
{
"uri":"mrs_01_0970.html",
+ "node_id":"mrs_01_0970.xml",
"product_code":"mrs",
- "code":"319",
+ "code":"318",
"des":"This function is applicable to Hive and Spark2x for MRS 3.x or later, or Hive and Spark for versions earlier than MRS 3.x.After this function is enabled, the location key",
"doc_type":"cmpntguide",
"kw":"Disabling of Specifying the location Keyword When Creating an Internal Hive Table,Using Hive,Compone",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Disabling of Specifying the location Keyword When Creating an Internal Hive Table",
"githuburl":""
},
{
"uri":"mrs_01_0971.html",
+ "node_id":"mrs_01_0971.xml",
"product_code":"mrs",
- "code":"320",
+ "code":"319",
"des":"This function is applicable to Hive and Spark2x for MRS 3.x or later, or Hive and Spark for versions earlier than MRS 3.x.After this function is enabled, the user or user",
"doc_type":"cmpntguide",
"kw":"Enabling the Function of Creating a Foreign Table in a Directory That Can Only Be Read,Using Hive,Co",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Enabling the Function of Creating a Foreign Table in a Directory That Can Only Be Read",
"githuburl":""
},
{
"uri":"mrs_01_0972.html",
+ "node_id":"mrs_01_0972.xml",
"product_code":"mrs",
- "code":"321",
+ "code":"320",
"des":"This function applies to Hive.The number of OS user groups is limited, and the number of roles that can be created in Hive cannot exceed 32. After this function is enable",
"doc_type":"cmpntguide",
"kw":"Authorizing Over 32 Roles in Hive,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Authorizing Over 32 Roles in Hive",
"githuburl":""
},
{
"uri":"mrs_01_0973.html",
+ "node_id":"mrs_01_0973.xml",
"product_code":"mrs",
- "code":"322",
+ "code":"321",
"des":"This function applies to Hive.This function is used to limit the maximum number of maps for Hive tasks on the server to avoid performance deterioration caused by overload",
"doc_type":"cmpntguide",
"kw":"Restricting the Maximum Number of Maps for Hive Tasks,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Restricting the Maximum Number of Maps for Hive Tasks",
"githuburl":""
},
{
"uri":"mrs_01_0974.html",
+ "node_id":"mrs_01_0974.xml",
"product_code":"mrs",
- "code":"323",
+ "code":"322",
"des":"This function applies to Hive.This function can be enabled to specify specific users to access HiveServer services on specific nodes, achieving HiveServer resource isolat",
"doc_type":"cmpntguide",
"kw":"HiveServer Lease Isolation,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"HiveServer Lease Isolation",
"githuburl":""
},
{
"uri":"mrs_01_0975.html",
+ "node_id":"mrs_01_0975.xml",
"product_code":"mrs",
- "code":"324",
+ "code":"323",
"des":"Hive supports transactions at the table and partition levels. When the transaction mode is enabled, transaction tables can be incrementally updated, deleted, and read, im",
"doc_type":"cmpntguide",
"kw":"Hive Supporting Transactions,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hive Supporting Transactions",
"githuburl":""
},
{
"uri":"mrs_01_1750.html",
+ "node_id":"mrs_01_1750.xml",
"product_code":"mrs",
- "code":"325",
+ "code":"324",
"des":"Hive can use the Tez engine to process data computing tasks. Before executing a task, you can manually switch the execution engine to Tez.The TimelineServer role of the Y",
"doc_type":"cmpntguide",
"kw":"Switching the Hive Execution Engine to Tez,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Switching the Hive Execution Engine to Tez",
"githuburl":""
},
- {
- "uri":"mrs_01_2311.html",
- "product_code":"mrs",
- "code":"326",
- "des":"A Hive materialized view is a special table obtained based on the query results of Hive internal tables. A materialized view can be considered as an intermediate table th",
- "doc_type":"cmpntguide",
- "kw":"Hive Materialized View,Using Hive,Component Operation Guide (Normal)",
- "title":"Hive Materialized View",
- "githuburl":""
- },
{
"uri":"mrs_01_0976.html",
+ "node_id":"mrs_01_0976.xml",
"product_code":"mrs",
- "code":"327",
+ "code":"325",
"des":"Log path: The default save path of Hive logs is /var/log/Bigdata/hive/role name, the default save path of Hive1 logs is /var/log/Bigdata/hive1/role name, and the others f",
"doc_type":"cmpntguide",
"kw":"Hive Log Overview,Using Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hive Log Overview",
"githuburl":""
},
{
"uri":"mrs_01_0977.html",
+ "node_id":"mrs_01_0977.xml",
"product_code":"mrs",
- "code":"328",
+ "code":"326",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Hive Performance Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hive Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_0978.html",
+ "node_id":"mrs_01_0978.xml",
"product_code":"mrs",
- "code":"329",
+ "code":"327",
"des":"During the Select query, Hive generally scans the entire table, which is time-consuming. To improve query efficiency, create table partitions based on service requirement",
"doc_type":"cmpntguide",
"kw":"Creating Table Partitions,Hive Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating Table Partitions",
"githuburl":""
},
{
"uri":"mrs_01_0979.html",
+ "node_id":"mrs_01_0979.xml",
"product_code":"mrs",
- "code":"330",
+ "code":"328",
"des":"When the Join statement is used, the command execution speed and query speed may be slow in case of large data volume. To resolve this problem, you can optimize Join.Join",
"doc_type":"cmpntguide",
"kw":"Optimizing Join,Hive Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing Join",
"githuburl":""
},
{
"uri":"mrs_01_0980.html",
+ "node_id":"mrs_01_0980.xml",
"product_code":"mrs",
- "code":"331",
+ "code":"329",
"des":"Optimize the Group by statement to accelerate the command execution and query speed.During the Group by operation, Map performs grouping and distributes the groups to Red",
"doc_type":"cmpntguide",
"kw":"Optimizing Group By,Hive Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing Group By",
"githuburl":""
},
{
"uri":"mrs_01_0981.html",
+ "node_id":"mrs_01_0981.xml",
"product_code":"mrs",
- "code":"332",
+ "code":"330",
"des":"ORC is an efficient column storage format and has higher compression ratio and reading efficiency than other file formats.You are advised to use ORC as the default Hive t",
"doc_type":"cmpntguide",
"kw":"Optimizing Data Storage,Hive Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing Data Storage",
"githuburl":""
},
{
"uri":"mrs_01_0982.html",
+ "node_id":"mrs_01_0982.xml",
"product_code":"mrs",
- "code":"333",
+ "code":"331",
"des":"When SQL statements are executed on Hive, if the (a&b) or (a&c) logic exists in the statements, you are advised to change the logic to a & (b or c).If condition a is p_pa",
"doc_type":"cmpntguide",
"kw":"Optimizing SQL Statements,Hive Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing SQL Statements",
"githuburl":""
},
{
"uri":"mrs_01_0983.html",
+ "node_id":"mrs_01_0983.xml",
"product_code":"mrs",
- "code":"334",
+ "code":"332",
"des":"When joining multiple tables in Hive, Hive supports Cost-Based Optimization (CBO). The system automatically selects the optimal plan based on the table statistics, such a",
"doc_type":"cmpntguide",
"kw":"Optimizing the Query Function Using Hive CBO,Hive Performance Tuning,Component Operation Guide (Norm",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing the Query Function Using Hive CBO",
"githuburl":""
},
{
"uri":"mrs_01_1752.html",
+ "node_id":"mrs_01_1752.xml",
"product_code":"mrs",
- "code":"335",
+ "code":"333",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues About Hive",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About Hive",
"githuburl":""
},
{
"uri":"mrs_01_1753.html",
+ "node_id":"mrs_01_1753.xml",
"product_code":"mrs",
- "code":"336",
+ "code":"334",
"des":"How can I delete permanent user-defined functions (UDFs) on multiple HiveServers at the same time?Multiple HiveServers share one MetaStore database. Therefore, there is a",
"doc_type":"cmpntguide",
"kw":"How Do I Delete UDFs on Multiple HiveServers at the Same Time?,Common Issues About Hive,Component Op",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Delete UDFs on Multiple HiveServers at the Same Time?",
"githuburl":""
},
{
"uri":"mrs_01_1754.html",
+ "node_id":"mrs_01_1754.xml",
"product_code":"mrs",
- "code":"337",
+ "code":"335",
"des":"Why cannot the DROP operation be performed for a backed up Hive table?Snapshots have been created for an HDFS directory mapping to the backed up Hive table, so the HDFS d",
"doc_type":"cmpntguide",
"kw":"Why Cannot the DROP operation Be Performed on a Backed-up Hive Table?,Common Issues About Hive,Compo",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Cannot the DROP operation Be Performed on a Backed-up Hive Table?",
"githuburl":""
},
{
"uri":"mrs_01_1755.html",
+ "node_id":"mrs_01_1755.xml",
"product_code":"mrs",
- "code":"338",
+ "code":"336",
"des":"How to perform operations on local files (such as reading the content of a file) with Hive user-defined functions?By default, you can perform operations on local files wi",
"doc_type":"cmpntguide",
"kw":"How to Perform Operations on Local Files with Hive User-Defined Functions,Common Issues About Hive,C",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How to Perform Operations on Local Files with Hive User-Defined Functions",
"githuburl":""
},
{
"uri":"mrs_01_1756.html",
+ "node_id":"mrs_01_1756.xml",
"product_code":"mrs",
- "code":"339",
+ "code":"337",
"des":"How do I stop a MapReduce task manually if the task is suspended for a long time?",
"doc_type":"cmpntguide",
"kw":"How Do I Forcibly Stop MapReduce Jobs Executed by Hive?,Common Issues About Hive,Component Operation",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Forcibly Stop MapReduce Jobs Executed by Hive?",
"githuburl":""
},
{
"uri":"mrs_01_1758.html",
+ "node_id":"mrs_01_1758.xml",
"product_code":"mrs",
- "code":"340",
+ "code":"338",
"des":"How do I monitor the Hive table size?The HDFS refined monitoring function allows you to monitor the size of a specified table directory.The Hive and HDFS components are r",
"doc_type":"cmpntguide",
"kw":"How Do I Monitor the Hive Table Size?,Common Issues About Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Monitor the Hive Table Size?",
"githuburl":""
},
{
"uri":"mrs_01_1759.html",
+ "node_id":"mrs_01_1759.xml",
"product_code":"mrs",
- "code":"341",
+ "code":"339",
"des":"How do I prevent key directories from data loss caused by misoperations of the insert overwrite statement?During monitoring of key Hive databases, tables, or directories,",
"doc_type":"cmpntguide",
"kw":"How Do I Prevent Key Directories from Data Loss Caused by Misoperations of the insert overwrite Stat",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Prevent Key Directories from Data Loss Caused by Misoperations of the insert overwrite Statement?",
"githuburl":""
},
{
"uri":"mrs_01_1760.html",
+ "node_id":"mrs_01_1760.xml",
"product_code":"mrs",
- "code":"342",
+ "code":"340",
"des":"This function applies to Hive.Perform the following operations to configure parameters. When Hive on Spark tasks are executed in the environment where the HBase is not in",
"doc_type":"cmpntguide",
"kw":"Why Is Hive on Spark Task Freezing When HBase Is Not Installed?,Common Issues About Hive,Component O",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Is Hive on Spark Task Freezing When HBase Is Not Installed?",
"githuburl":""
},
{
"uri":"mrs_01_1761.html",
+ "node_id":"mrs_01_1761.xml",
"product_code":"mrs",
- "code":"343",
+ "code":"341",
"des":"When a table with more than 32,000 partitions is created in Hive, an exception occurs during the query with the WHERE partition. In addition, the exception information pr",
"doc_type":"cmpntguide",
"kw":"Error Reported When the WHERE Condition Is Used to Query Tables with Excessive Partitions in FusionI",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Error Reported When the WHERE Condition Is Used to Query Tables with Excessive Partitions in FusionInsight Hive",
"githuburl":""
},
{
"uri":"mrs_01_1762.html",
+ "node_id":"mrs_01_1762.xml",
"product_code":"mrs",
- "code":"344",
+ "code":"342",
"des":"When users check the JDK version used by the client, if the JDK version is IBM JDK, the Beeline client needs to be reconstructed. Otherwise, the client will fail to conne",
"doc_type":"cmpntguide",
"kw":"Why Cannot I Connect to HiveServer When I Use IBM JDK to Access the Beeline Client?,Common Issues Ab",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Cannot I Connect to HiveServer When I Use IBM JDK to Access the Beeline Client?",
"githuburl":""
},
{
"uri":"mrs_01_1763.html",
+ "node_id":"mrs_01_1763.xml",
"product_code":"mrs",
- "code":"345",
+ "code":"343",
"des":"Can Hive tables be stored in OBS or HDFS?The location of a common Hive table stored on OBS can be set to an HDFS path.In the same Hive service, you can create tables stor",
"doc_type":"cmpntguide",
"kw":"Description of Hive Table Location (Either Be an OBS or HDFS Path),Common Issues About Hive,Componen",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Description of Hive Table Location (Either Be an OBS or HDFS Path)",
"githuburl":""
},
{
"uri":"mrs_01_2309.html",
+ "node_id":"mrs_01_2309.xml",
"product_code":"mrs",
- "code":"346",
+ "code":"344",
"des":"Hive uses the Tez engine to execute union-related statements to write data. After Hive is switched to the MapReduce engine for query, no data is found.When Hive uses the ",
"doc_type":"cmpntguide",
"kw":"Why Cannot Data Be Queried After the MapReduce Engine Is Switched After the Tez Engine Is Used to Ex",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Cannot Data Be Queried After the MapReduce Engine Is Switched After the Tez Engine Is Used to Execute Union-related Statements?",
"githuburl":""
},
{
"uri":"mrs_01_2310.html",
+ "node_id":"mrs_01_2310.xml",
"product_code":"mrs",
- "code":"347",
+ "code":"345",
"des":"Why Does Data Inconsistency Occur When Data Is Concurrently Written to a Hive Table Through an API?Hive does not support concurrent data insertion for the same table or p",
"doc_type":"cmpntguide",
"kw":"Why Does Hive Not Support Concurrent Data Writing to the Same Table or Partition?,Common Issues Abou",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does Hive Not Support Concurrent Data Writing to the Same Table or Partition?",
"githuburl":""
},
{
"uri":"mrs_01_2325.html",
+ "node_id":"mrs_01_2325.xml",
"product_code":"mrs",
- "code":"348",
+ "code":"346",
"des":"When the vectorized parameterhive.vectorized.execution.enabled is set to true, why do some null pointers or type conversion exceptions occur occasionally when Hive on Tez",
"doc_type":"cmpntguide",
"kw":"Why Does Hive Not Support Vectorized Query?,Common Issues About Hive,Component Operation Guide (Norm",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does Hive Not Support Vectorized Query?",
"githuburl":""
},
{
"uri":"mrs_01_2343.html",
+ "node_id":"mrs_01_2343.xml",
"product_code":"mrs",
- "code":"349",
+ "code":"347",
"des":"The HDFS data directory of the Hive table is deleted by mistake, but the metadata still exists. As a result, an error is reported during task execution.This is a exceptio",
"doc_type":"cmpntguide",
"kw":"Why Does Metadata Still Exist When the HDFS Data Directory of the Hive Table Is Deleted by Mistake?,",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does Metadata Still Exist When the HDFS Data Directory of the Hive Table Is Deleted by Mistake?",
"githuburl":""
},
{
"uri":"mrs_01_24482.html",
+ "node_id":"mrs_01_24482.xml",
"product_code":"",
- "code":"350",
+ "code":"348",
"des":"How do I disable the logging function of Hive?cd/opt/Bigdata/clientsource bigdata_envIn security mode, run the following command to complete user authentication and log i",
"doc_type":"",
"kw":"How Do I Disable the Logging Function of Hive?,Common Issues About Hive,Component Operation Guide (N",
+ "search_title":"",
+ "metedata":[
+ {
+
+ }
+ ],
"title":"How Do I Disable the Logging Function of Hive?",
"githuburl":""
},
{
"uri":"mrs_01_24486.html",
+ "node_id":"mrs_01_24486.xml",
"product_code":"",
- "code":"351",
+ "code":"349",
"des":"In the scenario where the fine-grained permission is configured for multiple MRS users to access OBS, after the permission for deleting Hive tables in the OBS directory i",
"doc_type":"",
"kw":"Why Hive Tables in the OBS Directory Fail to Be Deleted?,Common Issues About Hive,Component Operatio",
+ "search_title":"",
+ "metedata":[
+ {
+
+ }
+ ],
"title":"Why Hive Tables in the OBS Directory Fail to Be Deleted?",
"githuburl":""
},
{
"uri":"mrs_01_24117.html",
+ "node_id":"mrs_01_24117.xml",
"product_code":"mrs",
- "code":"352",
+ "code":"350",
"des":"The error message \"java.lang.OutOfMemoryError: Java heap space.\" is displayed during Hive SQL execution.Solution:For MapReduce tasks, increase the values of the following",
"doc_type":"cmpntguide",
"kw":"Hive Configuration Problems,Common Issues About Hive,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hive Configuration Problems",
"githuburl":""
},
{
"uri":"mrs_01_24025.html",
+ "node_id":"mrs_01_24025.xml",
"product_code":"mrs",
- "code":"353",
+ "code":"351",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Hudi",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Hudi",
"githuburl":""
},
{
"uri":"mrs_01_24033.html",
+ "node_id":"mrs_01_24033.xml",
"product_code":"mrs",
- "code":"354",
+ "code":"352",
"des":"This section describes capabilities of Hudi using spark-shell. Using the Spark data source, this section describes how to insert and update a Hudi dataset of the default ",
"doc_type":"cmpntguide",
"kw":"Getting Started,Using Hudi,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Getting Started",
"githuburl":""
},
{
"uri":"mrs_01_24062.html",
+ "node_id":"mrs_01_24062.xml",
"product_code":"mrs",
- "code":"355",
+ "code":"353",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Basic Operations",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Basic Operations",
"githuburl":""
},
{
"uri":"mrs_01_24103.html",
+ "node_id":"mrs_01_24103.xml",
"product_code":"mrs",
- "code":"356",
+ "code":"354",
"des":"When writing data, Hudi generates a Hudi table based on attributes such as the storage path, table name, and partition structure.Hudi table data files can be stored in th",
"doc_type":"cmpntguide",
"kw":"Hudi Table Schema,Basic Operations,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hudi Table Schema",
"githuburl":""
},
{
"uri":"mrs_01_24034.html",
+ "node_id":"mrs_01_24034.xml",
"product_code":"mrs",
- "code":"357",
+ "code":"355",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Write",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Write",
"githuburl":""
},
{
"uri":"mrs_01_24035.html",
+ "node_id":"mrs_01_24035.xml",
"product_code":"mrs",
- "code":"358",
+ "code":"356",
"des":"Hudi provides multiple write modes. For details, see the configuration item hoodie.datasource.write.operation. This section describes upsert, insert, and bulk_insert.inse",
"doc_type":"cmpntguide",
"kw":"Batch Write,Write,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Batch Write",
"githuburl":""
},
{
"uri":"mrs_01_24064.html",
+ "node_id":"mrs_01_24064.xml",
"product_code":"mrs",
- "code":"359",
+ "code":"357",
"des":"You can run run_hive_sync_tool.sh to synchronize data in the Hudi table to Hive.For example, run the following command to synchronize the Hudi table in the hdfs://haclust",
"doc_type":"cmpntguide",
"kw":"Synchronizing Hudi Table Data to Hive,Write,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Synchronizing Hudi Table Data to Hive",
"githuburl":""
},
{
"uri":"mrs_01_24037.html",
+ "node_id":"mrs_01_24037.xml",
"product_code":"mrs",
- "code":"360",
+ "code":"358",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Read",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Read",
"githuburl":""
},
{
"uri":"mrs_01_24098.html",
+ "node_id":"mrs_01_24098.xml",
"product_code":"mrs",
- "code":"361",
+ "code":"359",
"des":"Reading the real-time view (using Hive and SparkSQL as an example): Directly read the Hudi table stored in Hive.select count(*) from test;Reading the real-time view (usin",
"doc_type":"cmpntguide",
"kw":"Reading COW Table Views,Read,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Reading COW Table Views",
"githuburl":""
},
{
"uri":"mrs_01_24099.html",
+ "node_id":"mrs_01_24099.xml",
"product_code":"mrs",
- "code":"362",
+ "code":"360",
"des":"After the MOR table is synchronized to Hive, the following two tables are synchronized to Hive: Table name_rt and Table name_ro. The table suffixed with rt indicates the ",
"doc_type":"cmpntguide",
"kw":"Reading MOR Table Views,Read,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Reading MOR Table Views",
"githuburl":""
},
{
"uri":"mrs_01_24038.html",
+ "node_id":"mrs_01_24038.xml",
"product_code":"mrs",
- "code":"363",
+ "code":"361",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Data Management and Maintenance",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Data Management and Maintenance",
"githuburl":""
},
{
"uri":"mrs_01_24088.html",
+ "node_id":"mrs_01_24088.xml",
"product_code":"mrs",
- "code":"364",
+ "code":"362",
"des":"Clustering reorganizes data layout to improve query performance without affecting the ingestion speed.Hudi provides different operations, such as insert, upsert, and bulk",
"doc_type":"cmpntguide",
"kw":"Clustering,Data Management and Maintenance,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Clustering",
"githuburl":""
},
{
"uri":"mrs_01_24089.html",
+ "node_id":"mrs_01_24089.xml",
"product_code":"mrs",
- "code":"365",
+ "code":"363",
"des":"Cleaning is used to delete data of versions that are no longer required.Hudi uses the cleaner working in the background to continuously delete unnecessary data of old ver",
"doc_type":"cmpntguide",
"kw":"Cleaning,Data Management and Maintenance,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Cleaning",
"githuburl":""
},
{
"uri":"mrs_01_24090.html",
+ "node_id":"mrs_01_24090.xml",
"product_code":"mrs",
- "code":"366",
+ "code":"364",
"des":"A compaction merges base and log files of MOR tables.For MOR tables, data is stored in columnar Parquet files and row-based Avro files, updates are recorded in incrementa",
"doc_type":"cmpntguide",
"kw":"Compaction,Data Management and Maintenance,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Compaction",
"githuburl":""
},
{
"uri":"mrs_01_24091.html",
+ "node_id":"mrs_01_24091.xml",
"product_code":"mrs",
- "code":"367",
+ "code":"365",
"des":"Savepoints are used to save and restore data of the customized version.Savepoints provided by Hudi can save different commits so that the cleaner program does not delete ",
"doc_type":"cmpntguide",
"kw":"Savepoint,Data Management and Maintenance,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Savepoint",
"githuburl":""
},
{
"uri":"mrs_01_24165.html",
+ "node_id":"mrs_01_24165.xml",
"product_code":"mrs",
- "code":"368",
+ "code":"366",
"des":"Uses an external service (ZooKeeper or Hive MetaStore) as the distributed mutex lock service.Files can be concurrently written, but commits cannot be concurrent. The comm",
"doc_type":"cmpntguide",
"kw":"Single-Table Concurrent Write,Data Management and Maintenance,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Single-Table Concurrent Write",
"githuburl":""
},
{
"uri":"mrs_01_24100.html",
+ "node_id":"mrs_01_24100.xml",
"product_code":"mrs",
- "code":"369",
+ "code":"367",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using the Hudi Client",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the Hudi Client",
"githuburl":""
},
{
"uri":"mrs_01_24063.html",
+ "node_id":"mrs_01_24063.xml",
"product_code":"mrs",
- "code":"370",
+ "code":"368",
"des":"For a cluster with Kerberos authentication enabled, a user has been created on FusionInsight Manager of the cluster and associated with user groups hadoop and hive.The Hu",
"doc_type":"cmpntguide",
"kw":"Operating a Hudi Table Using hudi-cli.sh,Using the Hudi Client,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Operating a Hudi Table Using hudi-cli.sh",
"githuburl":""
},
{
"uri":"mrs_01_24032.html",
+ "node_id":"mrs_01_24032.xml",
"product_code":"mrs",
- "code":"371",
+ "code":"369",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Configuration Reference",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuration Reference",
"githuburl":""
},
{
"uri":"mrs_01_24093.html",
+ "node_id":"mrs_01_24093.xml",
"product_code":"mrs",
- "code":"372",
+ "code":"370",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Write Configuration,Configuration Reference,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Write Configuration",
"githuburl":""
},
{
"uri":"mrs_01_24094.html",
+ "node_id":"mrs_01_24094.xml",
"product_code":"mrs",
- "code":"373",
+ "code":"371",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Configuration of Hive Table Synchronization,Configuration Reference,Component Operation Guide (Norma",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuration of Hive Table Synchronization",
"githuburl":""
},
{
"uri":"mrs_01_24095.html",
+ "node_id":"mrs_01_24095.xml",
"product_code":"mrs",
- "code":"374",
+ "code":"372",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Index Configuration,Configuration Reference,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Index Configuration",
"githuburl":""
},
{
"uri":"mrs_01_24096.html",
+ "node_id":"mrs_01_24096.xml",
"product_code":"mrs",
- "code":"375",
+ "code":"373",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Storage Configuration,Configuration Reference,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Storage Configuration",
"githuburl":""
},
{
"uri":"mrs_01_24097.html",
+ "node_id":"mrs_01_24097.xml",
"product_code":"mrs",
- "code":"376",
+ "code":"374",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Compaction and Cleaning Configurations,Configuration Reference,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Compaction and Cleaning Configurations",
"githuburl":""
},
{
"uri":"mrs_01_24167.html",
+ "node_id":"mrs_01_24167.xml",
"product_code":"mrs",
- "code":"377",
+ "code":"375",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Single-Table Concurrent Write Configuration,Configuration Reference,Component Operation Guide (Norma",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Single-Table Concurrent Write Configuration",
"githuburl":""
},
{
"uri":"mrs_01_24039.html",
+ "node_id":"mrs_01_24039.xml",
"product_code":"mrs",
- "code":"378",
+ "code":"376",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Hudi Performance Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hudi Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_24101.html",
+ "node_id":"mrs_01_24101.xml",
"product_code":"mrs",
- "code":"379",
+ "code":"377",
"des":"In the current version, Spark is recommended for Hudi write operations. Therefore, the tuning methods of Hudi are similar to those of Spark. For details, see Spark2x Perf",
"doc_type":"cmpntguide",
"kw":"Performance Tuning Methods,Hudi Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Performance Tuning Methods",
"githuburl":""
},
{
"uri":"mrs_01_24102.html",
+ "node_id":"mrs_01_24102.xml",
"product_code":"mrs",
- "code":"380",
+ "code":"378",
"des":"For MOR tables:The essence of MOR tables is to write incremental files, so the tuning is based on the data size (dataSize) of Hudi.If dataSize is only several GBs, you ar",
"doc_type":"cmpntguide",
"kw":"Recommended Resource Configuration,Hudi Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Recommended Resource Configuration",
"githuburl":""
},
{
"uri":"mrs_01_24065.html",
+ "node_id":"mrs_01_24065.xml",
"product_code":"mrs",
- "code":"381",
+ "code":"379",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues About Hudi",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About Hudi",
"githuburl":""
},
{
"uri":"mrs_01_24070.html",
+ "node_id":"mrs_01_24070.xml",
"product_code":"mrs",
- "code":"382",
+ "code":"380",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Data Write",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Data Write",
"githuburl":""
},
{
"uri":"mrs_01_24071.html",
+ "node_id":"mrs_01_24071.xml",
"product_code":"mrs",
- "code":"383",
+ "code":"381",
"des":"The following error is reported when data is written:You are advised to evolve schemas in backward compatible mode while using Hudi. This error usually occurs when you de",
"doc_type":"cmpntguide",
"kw":"Parquet/Avro schema Is Reported When Updated Data Is Written,Data Write,Component Operation Guide (N",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Parquet/Avro schema Is Reported When Updated Data Is Written",
"githuburl":""
},
{
"uri":"mrs_01_24072.html",
+ "node_id":"mrs_01_24072.xml",
"product_code":"mrs",
- "code":"384",
+ "code":"382",
"des":"The following error is reported when data is written:This error will occur again because schema evolutions are in non-backwards compatible mode. Basically, there is some ",
"doc_type":"cmpntguide",
"kw":"UnsupportedOperationException Is Reported When Updated Data Is Written,Data Write,Component Operatio",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"UnsupportedOperationException Is Reported When Updated Data Is Written",
"githuburl":""
},
{
"uri":"mrs_01_24073.html",
+ "node_id":"mrs_01_24073.xml",
"product_code":"mrs",
- "code":"385",
+ "code":"383",
"des":"The following error is reported when data is written:This error may occur if a schema contains some non-nullable field whose value is not present or is null.You are advis",
"doc_type":"cmpntguide",
"kw":"SchemaCompatabilityException Is Reported When Updated Data Is Written,Data Write,Component Operation",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"SchemaCompatabilityException Is Reported When Updated Data Is Written",
"githuburl":""
},
{
"uri":"mrs_01_24074.html",
+ "node_id":"mrs_01_24074.xml",
"product_code":"mrs",
- "code":"386",
+ "code":"384",
"des":"Hudi consumes much space in a temporary folder during upsert.Hudi will spill part of input data to disk if the maximum memory for merge is reached when much input data is",
"doc_type":"cmpntguide",
"kw":"What Should I Do If Hudi Consumes Much Space in a Temporary Folder During Upsert?,Data Write,Compone",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Should I Do If Hudi Consumes Much Space in a Temporary Folder During Upsert?",
"githuburl":""
},
{
"uri":"mrs_01_24504.html",
+ "node_id":"mrs_01_24504.xml",
"product_code":"",
- "code":"387",
+ "code":"385",
"des":"Decimal data is initially written to a Hudi table using the BULK_INSERT command. Then when data is subsequently written using UPSERT, the following error is reported:Caus",
"doc_type":"",
"kw":"Hudi Fails to Write Decimal Data with Lower Precision,Data Write,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+
+ }
+ ],
"title":"Hudi Fails to Write Decimal Data with Lower Precision",
"githuburl":""
},
{
"uri":"mrs_01_24075.html",
+ "node_id":"mrs_01_24075.xml",
"product_code":"mrs",
- "code":"388",
+ "code":"386",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Data Collection",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Data Collection",
"githuburl":""
},
{
"uri":"mrs_01_24077.html",
+ "node_id":"mrs_01_24077.xml",
"product_code":"mrs",
- "code":"389",
+ "code":"387",
"des":"The error \"org.apache.kafka.common.KafkaException: Failed to construct kafka consumer\" is reported in the main thread, and the following error is reported.This error may ",
"doc_type":"cmpntguide",
"kw":"IllegalArgumentException Is Reported When Kafka Is Used to Collect Data,Data Collection,Component Op",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"IllegalArgumentException Is Reported When Kafka Is Used to Collect Data",
"githuburl":""
},
{
"uri":"mrs_01_24078.html",
+ "node_id":"mrs_01_24078.xml",
"product_code":"mrs",
- "code":"390",
+ "code":"388",
"des":"The following error is reported when data is collected:This error usually occurs when a field marked as recordKey or partitionKey is not present in the input record. Cros",
"doc_type":"cmpntguide",
"kw":"HoodieException Is Reported When Data Is Collected,Data Collection,Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"HoodieException Is Reported When Data Is Collected",
"githuburl":""
},
{
"uri":"mrs_01_24079.html",
+ "node_id":"mrs_01_24079.xml",
"product_code":"mrs",
- "code":"391",
+ "code":"389",
"des":"Is it possible to use a nullable field that contains null records as a primary key when creating a Hudi table?No. HoodieKeyException will be thrown.",
"doc_type":"cmpntguide",
"kw":"HoodieKeyException Is Reported When Data Is Collected,Data Collection,Component Operation Guide (Nor",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"HoodieKeyException Is Reported When Data Is Collected",
"githuburl":""
},
{
"uri":"mrs_01_24080.html",
+ "node_id":"mrs_01_24080.xml",
"product_code":"mrs",
- "code":"392",
+ "code":"390",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Hive Synchronization",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hive Synchronization",
"githuburl":""
},
{
"uri":"mrs_01_24081.html",
+ "node_id":"mrs_01_24081.xml",
"product_code":"mrs",
- "code":"393",
+ "code":"391",
"des":"The following error is reported during Hive data synchronization:This error usually occurs when you try to add a new column to an existing Hive table using the HiveSyncTo",
"doc_type":"cmpntguide",
"kw":"SQLException Is Reported During Hive Data Synchronization,Hive Synchronization,Component Operation G",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"SQLException Is Reported During Hive Data Synchronization",
"githuburl":""
},
{
"uri":"mrs_01_24082.html",
+ "node_id":"mrs_01_24082.xml",
"product_code":"mrs",
- "code":"394",
+ "code":"392",
"des":"The following error is reported during Hive data synchronization:This error occurs because HiveSyncTool currently supports only few compatible data type conversions. The ",
"doc_type":"cmpntguide",
"kw":"HoodieHiveSyncException Is Reported During Hive Data Synchronization,Hive Synchronization,Component ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"HoodieHiveSyncException Is Reported During Hive Data Synchronization",
"githuburl":""
},
{
"uri":"mrs_01_24083.html",
+ "node_id":"mrs_01_24083.xml",
"product_code":"mrs",
- "code":"395",
+ "code":"393",
"des":"The following error is reported during Hive data synchronization:This error usually occurs when Hive synchronization is performed on the Hudi dataset but the configured h",
"doc_type":"cmpntguide",
"kw":"SemanticException Is Reported During Hive Data Synchronization,Hive Synchronization,Component Operat",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"SemanticException Is Reported During Hive Data Synchronization",
"githuburl":""
},
{
"uri":"mrs_01_0369.html",
+ "node_id":"mrs_01_0369.xml",
"product_code":"mrs",
- "code":"396",
+ "code":"394",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Hue (Versions Earlier Than MRS 3.x)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Hue (Versions Earlier Than MRS 3.x)",
"githuburl":""
},
{
"uri":"mrs_01_1020.html",
+ "node_id":"mrs_01_1020.xml",
"product_code":"mrs",
- "code":"397",
+ "code":"395",
"des":"Hue provides the file browser function using a graphical user interface (GUI) so that you can view files and directories on Hive.You have installed Hive and Hue, and the ",
"doc_type":"cmpntguide",
"kw":"Using Hue from Scratch,Using Hue (Versions Earlier Than MRS 3.x),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Hue from Scratch",
"githuburl":""
},
{
"uri":"mrs_01_0370.html",
+ "node_id":"mrs_01_0370.xml",
"product_code":"mrs",
- "code":"398",
+ "code":"396",
"des":"After Hue is installed in an MRS cluster, users can use Hadoop and Hive on the Hue web UI.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication e",
"doc_type":"cmpntguide",
"kw":"Accessing the Hue Web UI,Using Hue (Versions Earlier Than MRS 3.x),Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Accessing the Hue Web UI",
"githuburl":""
},
{
"uri":"mrs_01_1021.html",
+ "node_id":"mrs_01_1021.xml",
"product_code":"mrs",
- "code":"399",
+ "code":"397",
"des":"For details about how to set parameters, see Modifying Cluster Service Configuration Parameters.",
"doc_type":"cmpntguide",
"kw":"Hue Common Parameters,Using Hue (Versions Earlier Than MRS 3.x),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hue Common Parameters",
"githuburl":""
},
{
"uri":"mrs_01_0371.html",
+ "node_id":"mrs_01_0371.xml",
"product_code":"mrs",
- "code":"400",
+ "code":"398",
"des":"Users can use the Hue web UI to execute HiveQL statements in a cluster.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this",
"doc_type":"cmpntguide",
"kw":"Using HiveQL Editor on the Hue Web UI,Using Hue (Versions Earlier Than MRS 3.x),Component Operation ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using HiveQL Editor on the Hue Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0372.html",
+ "node_id":"mrs_01_0372.xml",
"product_code":"mrs",
- "code":"401",
+ "code":"399",
"des":"Users can use the Hue web UI to manage Hive metadata in an MRS cluster.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this",
"doc_type":"cmpntguide",
"kw":"Using the Metadata Browser on the Hue Web UI,Using Hue (Versions Earlier Than MRS 3.x),Component Ope",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the Metadata Browser on the Hue Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0373.html",
+ "node_id":"mrs_01_0373.xml",
"product_code":"mrs",
- "code":"402",
+ "code":"400",
"des":"Users can use the Hue web UI to manage files in HDFS in a cluster.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this func",
"doc_type":"cmpntguide",
"kw":"Using File Browser on the Hue Web UI,Using Hue (Versions Earlier Than MRS 3.x),Component Operation G",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using File Browser on the Hue Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0374.html",
+ "node_id":"mrs_01_0374.xml",
"product_code":"mrs",
- "code":"403",
+ "code":"401",
"des":"You can use the Hue web UI to query all jobs in the cluster.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this function.V",
"doc_type":"cmpntguide",
"kw":"Using Job Browser on the Hue Web UI,Using Hue (Versions Earlier Than MRS 3.x),Component Operation Gu",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Job Browser on the Hue Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0130.html",
+ "node_id":"mrs_01_0130.xml",
"product_code":"mrs",
- "code":"404",
+ "code":"402",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Hue (MRS 3.x or Later)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Hue (MRS 3.x or Later)",
"githuburl":""
},
{
"uri":"mrs_01_0131.html",
+ "node_id":"mrs_01_0131.xml",
"product_code":"mrs",
- "code":"405",
+ "code":"403",
"des":"Hue aggregates interfaces which interact with most Apache Hadoop components and enables you to use Hadoop components with ease on a web UI. You can operate components suc",
"doc_type":"cmpntguide",
"kw":"Using Hue from Scratch,Using Hue (MRS 3.x or Later),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Hue from Scratch",
"githuburl":""
},
{
"uri":"mrs_01_0132.html",
+ "node_id":"mrs_01_0132.xml",
"product_code":"mrs",
- "code":"406",
+ "code":"404",
"des":"After Hue is installed in an MRS cluster, users can use Hadoop-related components on the Hue web UI.This section describes how to open the Hue web UI on the MRS cluster.T",
"doc_type":"cmpntguide",
"kw":"Accessing the Hue Web UI,Using Hue (MRS 3.x or Later),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Accessing the Hue Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0133.html",
+ "node_id":"mrs_01_0133.xml",
"product_code":"mrs",
- "code":"407",
+ "code":"405",
"des":"Go to the All Configurations page of the Hue service by referring to Modifying Cluster Service Configuration Parameters.For details about Hue common parameters, see Table",
"doc_type":"cmpntguide",
"kw":"Hue Common Parameters,Using Hue (MRS 3.x or Later),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hue Common Parameters",
"githuburl":""
},
{
"uri":"mrs_01_0134.html",
+ "node_id":"mrs_01_0134.xml",
"product_code":"mrs",
- "code":"408",
+ "code":"406",
"des":"Users can use the Hue web UI to execute HiveQL statements in an MRS cluster.Hive supports the following functions:Executes and manages HiveQL statements.Views the HiveQL ",
"doc_type":"cmpntguide",
"kw":"Using HiveQL Editor on the Hue Web UI,Using Hue (MRS 3.x or Later),Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using HiveQL Editor on the Hue Web UI",
"githuburl":""
},
{
"uri":"mrs_01_2370.html",
+ "node_id":"mrs_01_2370.xml",
"product_code":"mrs",
- "code":"409",
+ "code":"407",
"des":"You can use Hue to execute SparkSql statements in a cluster on a graphical user interface (GUI).Before using the SparkSql editor, you need to modify the Spark2x configura",
"doc_type":"cmpntguide",
"kw":"Using the SparkSql Editor on the Hue Web UI,Using Hue (MRS 3.x or Later),Component Operation Guide (",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the SparkSql Editor on the Hue Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0135.html",
+ "node_id":"mrs_01_0135.xml",
"product_code":"mrs",
- "code":"410",
+ "code":"408",
"des":"Users can use the Hue web UI to manage Hive metadata in an MRS cluster.Access the Hue web UI. For details, see Accessing the Hue Web UI.Viewing metadata of Hive tablesCli",
"doc_type":"cmpntguide",
"kw":"Using the Metadata Browser on the Hue Web UI,Using Hue (MRS 3.x or Later),Component Operation Guide ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the Metadata Browser on the Hue Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0136.html",
+ "node_id":"mrs_01_0136.xml",
"product_code":"mrs",
- "code":"411",
+ "code":"409",
"des":"Users can use the Hue web UI to manage files in HDFS.The Hue page is used to view and analyze data such as files and tables. Do not perform high-risk management operation",
"doc_type":"cmpntguide",
"kw":"Using File Browser on the Hue Web UI,Using Hue (MRS 3.x or Later),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using File Browser on the Hue Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0137.html",
+ "node_id":"mrs_01_0137.xml",
"product_code":"mrs",
- "code":"412",
+ "code":"410",
"des":"Users can use the Hue web UI to query all jobs in an MRS cluster.View the jobs in the current cluster.The number on Job Browser indicates the total number of jobs in the ",
"doc_type":"cmpntguide",
"kw":"Using Job Browser on the Hue Web UI,Using Hue (MRS 3.x or Later),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Job Browser on the Hue Web UI",
"githuburl":""
},
{
"uri":"mrs_01_2371.html",
+ "node_id":"mrs_01_2371.xml",
"product_code":"mrs",
- "code":"413",
+ "code":"411",
"des":"You can use Hue to create or query HBase tables in a cluster and run tasks on the Hue web UI.Make sure that the HBase component has been installed in the MRS cluster and ",
"doc_type":"cmpntguide",
"kw":"Using HBase on the Hue Web UI,Using Hue (MRS 3.x or Later),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using HBase on the Hue Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0138.html",
+ "node_id":"mrs_01_0138.xml",
"product_code":"mrs",
- "code":"414",
+ "code":"412",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Typical Scenarios",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Typical Scenarios",
"githuburl":""
},
{
"uri":"mrs_01_0139.html",
+ "node_id":"mrs_01_0139.xml",
"product_code":"mrs",
- "code":"415",
+ "code":"413",
"des":"Hue provides the file browser function for users to use HDFS in GUI mode.The Hue page is used to view and analyze data such as files and tables. Do not perform high-risk ",
"doc_type":"cmpntguide",
"kw":"HDFS on Hue,Typical Scenarios,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"HDFS on Hue",
"githuburl":""
},
{
"uri":"mrs_01_0141.html",
+ "node_id":"mrs_01_0141.xml",
"product_code":"mrs",
- "code":"416",
+ "code":"414",
"des":"Hue provides the Hive GUI management function so that users can query Hive data in GUI mode.Access the Hue web UI. For details, see Accessing the Hue Web UI.In the naviga",
"doc_type":"cmpntguide",
"kw":"Hive on Hue,Typical Scenarios,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hive on Hue",
"githuburl":""
},
{
"uri":"mrs_01_0144.html",
+ "node_id":"mrs_01_0144.xml",
"product_code":"mrs",
- "code":"417",
+ "code":"415",
"des":"Hue provides the Oozie job manager function, in this case, you can use Oozie in GUI mode.The Hue page is used to view and analyze data such as files and tables. Do not pe",
"doc_type":"cmpntguide",
"kw":"Oozie on Hue,Typical Scenarios,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Oozie on Hue",
"githuburl":""
},
{
"uri":"mrs_01_0147.html",
+ "node_id":"mrs_01_0147.xml",
"product_code":"mrs",
- "code":"418",
+ "code":"416",
"des":"Log paths: The default paths of Hue logs are /var/log/Bigdata/hue (for storing run logs) and /var/log/Bigdata/audit/hue (for storing audit logs).Log archive rules: The au",
"doc_type":"cmpntguide",
"kw":"Hue Log Overview,Using Hue (MRS 3.x or Later),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hue Log Overview",
"githuburl":""
},
{
"uri":"mrs_01_1764.html",
+ "node_id":"mrs_01_1764.xml",
"product_code":"mrs",
- "code":"419",
+ "code":"417",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues About Hue",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About Hue",
"githuburl":""
},
{
"uri":"mrs_01_1765.html",
+ "node_id":"mrs_01_1765.xml",
"product_code":"mrs",
- "code":"420",
+ "code":"418",
"des":"What do I do if all HQL statements fail to be executed when I use Internet Explorer to access Hive Editor in Hue and the message \"There was an error with your query\" is d",
"doc_type":"cmpntguide",
"kw":"How Do I Solve the Problem that HQL Fails to Be Executed in Hue Using Internet Explorer?,Common Issu",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Solve the Problem that HQL Fails to Be Executed in Hue Using Internet Explorer?",
"githuburl":""
},
{
"uri":"mrs_01_1766.html",
+ "node_id":"mrs_01_1766.xml",
"product_code":"mrs",
- "code":"421",
+ "code":"419",
"des":"When Hive is used, the use database statement is entered in the text box to switch the database, and other statements are also entered, why does the database fail to be s",
"doc_type":"cmpntguide",
"kw":"Why Does the use database Statement Become Invalid When Hive Is Used?,Common Issues About Hue,Compon",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does the use database Statement Become Invalid When Hive Is Used?",
"githuburl":""
},
{
"uri":"mrs_01_0156.html",
+ "node_id":"mrs_01_0156.xml",
"product_code":"mrs",
- "code":"422",
+ "code":"420",
"des":"What can I do if an error message shown in the following figure is displayed, indicating that the HDFS file cannot be accessed when I use Hue web UI to access the HDFS fi",
"doc_type":"cmpntguide",
"kw":"What Can I Do If HDFS Files Fail to Be Accessed Using Hue WebUI?,Common Issues About Hue,Component O",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Can I Do If HDFS Files Fail to Be Accessed Using Hue WebUI?",
"githuburl":""
},
{
"uri":"mrs_01_2367.html",
+ "node_id":"mrs_01_2367.xml",
"product_code":"mrs",
- "code":"423",
+ "code":"421",
"des":"What can I do when a large file fails to be uploaded on the Hue page?You are advised to run commands on the client to upload large files instead of using the Hue file bro",
"doc_type":"cmpntguide",
"kw":"How Do I Do If a Large File Fails to Upload on the Hue Page?,Common Issues About Hue,Component Opera",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Do If a Large File Fails to Upload on the Hue Page?",
"githuburl":""
},
{
"uri":"mrs_01_2368.html",
+ "node_id":"mrs_01_2368.xml",
"product_code":"mrs",
- "code":"424",
+ "code":"422",
"des":"Why is the native Hue page blank if the Hive service is not installed in a cluster?In MRS 3.x, Hue depends on Hive. If this problem occurs, check whether the Hive compone",
"doc_type":"cmpntguide",
"kw":"Why Is the Hue Native Page Cannot Be Properly Displayed If the Hive Service Is Not Installed in a Cl",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Is the Hue Native Page Cannot Be Properly Displayed If the Hive Service Is Not Installed in a Cluster?",
"githuburl":""
},
{
"uri":"mrs_01_0375.html",
+ "node_id":"mrs_01_0375.xml",
"product_code":"mrs",
- "code":"425",
+ "code":"423",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Kafka",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Kafka",
"githuburl":""
},
{
"uri":"mrs_01_1031.html",
+ "node_id":"mrs_01_1031.xml",
"product_code":"mrs",
- "code":"426",
+ "code":"424",
"des":"You can create, query, and delete topics on a cluster client.The client has been installed. For example, the client is installed in the /opt/hadoopclient directory. The c",
"doc_type":"cmpntguide",
"kw":"Using Kafka from Scratch,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Kafka from Scratch",
"githuburl":""
},
{
"uri":"mrs_01_0376.html",
+ "node_id":"mrs_01_0376.xml",
"product_code":"mrs",
- "code":"427",
+ "code":"425",
"des":"You can manage Kafka topics on a cluster client based on service requirements. Management permission is required for clusters with Kerberos authentication enabled.You hav",
"doc_type":"cmpntguide",
"kw":"Managing Kafka Topics,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Managing Kafka Topics",
"githuburl":""
},
{
"uri":"mrs_01_0377.html",
+ "node_id":"mrs_01_0377.xml",
"product_code":"mrs",
- "code":"428",
+ "code":"426",
"des":"You can query existing Kafka topics on MRS.For versions earlier than MRS 1.9.2, log in to MRS Manager and choose Services > Kafka.For MRS 1.9.2 or later, click the cluste",
"doc_type":"cmpntguide",
"kw":"Querying Kafka Topics,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Querying Kafka Topics",
"githuburl":""
},
{
"uri":"mrs_01_0378.html",
+ "node_id":"mrs_01_0378.xml",
"product_code":"mrs",
- "code":"429",
+ "code":"427",
"des":"For clusters with Kerberos authentication enabled, using Kafka requires relevant permissions. MRS clusters can grant the use permission of Kafka to different users.Table ",
"doc_type":"cmpntguide",
"kw":"Managing Kafka User Permissions,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Managing Kafka User Permissions",
"githuburl":""
},
{
"uri":"mrs_01_0379.html",
+ "node_id":"mrs_01_0379.xml",
"product_code":"mrs",
- "code":"430",
+ "code":"428",
"des":"You can produce or consume messages in Kafka topics using the MRS cluster client. For clusters with Kerberos authentication enabled, you must have the permission to perfo",
"doc_type":"cmpntguide",
"kw":"Managing Messages in Kafka Topics,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Managing Messages in Kafka Topics",
"githuburl":""
},
{
"uri":"mrs_01_0441.html",
+ "node_id":"mrs_01_0441.xml",
"product_code":"mrs",
- "code":"431",
+ "code":"429",
"des":"This section describes how to use the Maxwell data synchronization tool to migrate offline binlog-based data to an MRS Kafka cluster.Maxwell is an open source application",
"doc_type":"cmpntguide",
"kw":"Synchronizing Binlog-based MySQL Data to the MRS Cluster,Using Kafka,Component Operation Guide (Norm",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Synchronizing Binlog-based MySQL Data to the MRS Cluster",
"githuburl":""
},
{
"uri":"mrs_01_1032.html",
+ "node_id":"mrs_01_1032.xml",
"product_code":"mrs",
- "code":"432",
+ "code":"430",
"des":"This section describes how to create and configure a Kafka role.This section applies to MRS 3.x or later.Users can create Kafka roles only in security mode.If the current",
"doc_type":"cmpntguide",
"kw":"Creating a Kafka Role,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating a Kafka Role",
"githuburl":""
},
{
"uri":"mrs_01_1033.html",
+ "node_id":"mrs_01_1033.xml",
"product_code":"mrs",
- "code":"433",
+ "code":"431",
"des":"This section applies to MRS 3.x or later.For details about how to set parameters, see Modifying Cluster Service Configuration Parameters.",
"doc_type":"cmpntguide",
"kw":"Kafka Common Parameters,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Kafka Common Parameters",
"githuburl":""
},
{
"uri":"mrs_01_1035.html",
+ "node_id":"mrs_01_1035.xml",
"product_code":"mrs",
- "code":"434",
+ "code":"432",
"des":"This section applies to MRS 3.x or later.Producer APIIndicates the API defined in org.apache.kafka.clients.producer.KafkaProducer. When kafka-console-producer.sh is used,",
"doc_type":"cmpntguide",
"kw":"Safety Instructions on Using Kafka,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Safety Instructions on Using Kafka",
"githuburl":""
},
{
"uri":"mrs_01_1036.html",
+ "node_id":"mrs_01_1036.xml",
"product_code":"mrs",
- "code":"435",
+ "code":"433",
"des":"This section applies to MRS 3.x or later.The maximum number of topics depends on the number of file handles (mainly used by data and index files on site) opened in the pr",
"doc_type":"cmpntguide",
"kw":"Kafka Specifications,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Kafka Specifications",
"githuburl":""
},
{
"uri":"mrs_01_1767.html",
+ "node_id":"mrs_01_1767.xml",
"product_code":"mrs",
- "code":"436",
+ "code":"434",
"des":"This section guides users to use a Kafka client in an O&M or service scenario.This section applies to MRS 3.x or later clusters.The client has been installed. For example",
"doc_type":"cmpntguide",
"kw":"Using the Kafka Client,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the Kafka Client",
"githuburl":""
},
{
"uri":"mrs_01_1037.html",
+ "node_id":"mrs_01_1037.xml",
"product_code":"mrs",
- "code":"437",
+ "code":"435",
"des":"For the Kafka message transmission assurance mechanism, different parameters are available for meeting different performance and reliability requirements. This section de",
"doc_type":"cmpntguide",
"kw":"Configuring Kafka HA and High Reliability Parameters,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Kafka HA and High Reliability Parameters",
"githuburl":""
},
{
"uri":"mrs_01_1038.html",
+ "node_id":"mrs_01_1038.xml",
"product_code":"mrs",
- "code":"438",
+ "code":"436",
"des":"This section applies to MRS 3.x or later.When a broker storage directory is added, the system administrator needs to change the broker storage directory on FusionInsight ",
"doc_type":"cmpntguide",
"kw":"Changing the Broker Storage Directory,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Changing the Broker Storage Directory",
"githuburl":""
},
{
"uri":"mrs_01_1039.html",
+ "node_id":"mrs_01_1039.xml",
"product_code":"mrs",
- "code":"439",
+ "code":"437",
"des":"This section describes how to view the current expenditure on the client based on service requirements.This section applies to MRS 3.x or later.The system administrator h",
"doc_type":"cmpntguide",
"kw":"Checking the Consumption Status of Consumer Group,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Checking the Consumption Status of Consumer Group",
"githuburl":""
},
{
"uri":"mrs_01_1040.html",
+ "node_id":"mrs_01_1040.xml",
"product_code":"mrs",
- "code":"440",
+ "code":"438",
"des":"This section describes how to use the Kafka balancing tool on a client to balance the load of the Kafka cluster based on service requirements in scenarios such as node de",
"doc_type":"cmpntguide",
"kw":"Kafka Balancing Tool Instructions,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Kafka Balancing Tool Instructions",
"githuburl":""
},
{
"uri":"mrs_01_24299.html",
+ "node_id":"mrs_01_24299.xml",
"product_code":"",
- "code":"441",
+ "code":"439",
"des":"This section describes how to use the Kafka balancing tool on the client to balance the load of the Kafka cluster after Kafka nodes are scaled out.This section applies to",
"doc_type":"",
"kw":"Balancing Data After Kafka Node Scale-Out,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+
+ }
+ ],
"title":"Balancing Data After Kafka Node Scale-Out",
"githuburl":""
},
{
"uri":"mrs_01_1041.html",
+ "node_id":"mrs_01_1041.xml",
"product_code":"mrs",
- "code":"442",
+ "code":"440",
"des":"Operations need to be performed on tokens when the token authentication mechanism is used.This section applies to security clusters of MRS 3.x or later.The system adminis",
"doc_type":"cmpntguide",
"kw":"Kafka Token Authentication Mechanism Tool Usage,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Kafka Token Authentication Mechanism Tool Usage",
"githuburl":""
},
{
"uri":"mrs_01_1042.html",
+ "node_id":"mrs_01_1042.xml",
"product_code":"mrs",
- "code":"443",
+ "code":"441",
"des":"This section applies to MRS 3.x or later.Log paths: The default storage path of Kafka logs is /var/log/Bigdata/kafka. The default storage path of audit logs is /var/log/B",
"doc_type":"cmpntguide",
"kw":"Introduction to Kafka Logs,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Introduction to Kafka Logs",
"githuburl":""
},
{
"uri":"mrs_01_1043.html",
+ "node_id":"mrs_01_1043.xml",
"product_code":"mrs",
- "code":"444",
+ "code":"442",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Performance Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_1044.html",
+ "node_id":"mrs_01_1044.xml",
"product_code":"mrs",
- "code":"445",
+ "code":"443",
"des":"You can modify Kafka server parameters to improve Kafka processing capabilities in specific service scenarios.Modify the service configuration parameters. For details, se",
"doc_type":"cmpntguide",
"kw":"Kafka Performance Tuning,Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Kafka Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_2312.html",
+ "node_id":"mrs_01_2312.xml",
"product_code":"mrs",
- "code":"446",
+ "code":"444",
"des":"Feature description: The function of creating idempotent producers is introduced in Kafka 0.11.0.0. After this function is enabled, producers are automatically upgraded t",
"doc_type":"cmpntguide",
"kw":"Kafka Feature Description,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Kafka Feature Description",
"githuburl":""
},
{
"uri":"mrs_01_24534.html",
+ "node_id":"mrs_01_24534.xml",
"product_code":"",
- "code":"447",
+ "code":"445",
"des":"This section describes how to use Kafka client commands to migrate partition data between disks on a node without stopping the Kafka service.The system administrator has ",
"doc_type":"",
"kw":"Migrating Data Between Kafka Nodes,Using Kafka,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+
+ }
+ ],
"title":"Migrating Data Between Kafka Nodes",
"githuburl":""
},
{
"uri":"mrs_01_1768.html",
+ "node_id":"mrs_01_1768.xml",
"product_code":"mrs",
- "code":"448",
+ "code":"446",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues About Kafka",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About Kafka",
"githuburl":""
},
{
"uri":"mrs_01_1769.html",
+ "node_id":"mrs_01_1769.xml",
"product_code":"mrs",
- "code":"449",
+ "code":"447",
"des":"How do I delete a Kafka topic if it fails to be deleted?Possible cause 1: The delete.topic.enable configuration item is not set to true. The deletion can be performed onl",
"doc_type":"cmpntguide",
"kw":"How Do I Solve the Problem that Kafka Topics Cannot Be Deleted?,Common Issues About Kafka,Component ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Solve the Problem that Kafka Topics Cannot Be Deleted?",
"githuburl":""
},
{
"uri":"mrs_01_0435.html",
+ "node_id":"mrs_01_0435.xml",
"product_code":"mrs",
- "code":"450",
+ "code":"448",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using KafkaManager",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using KafkaManager",
"githuburl":""
},
{
"uri":"mrs_01_0436.html",
+ "node_id":"mrs_01_0436.xml",
"product_code":"mrs",
- "code":"451",
+ "code":"449",
"des":"KafkaManager is a tool for managing Apache Kafka and provides GUI-based metric monitoring and management of Kafka clusters. This section applies to MRS 1.9.2 clusters.Kaf",
"doc_type":"cmpntguide",
"kw":"Introduction to KafkaManager,Using KafkaManager,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Introduction to KafkaManager",
"githuburl":""
},
{
"uri":"mrs_01_0437.html",
+ "node_id":"mrs_01_0437.xml",
"product_code":"mrs",
- "code":"452",
+ "code":"450",
"des":"You can monitor and manage Kafka clusters on the graphical KafkaManager web UI.This section applies to MRS 1.9.2 clusters.KafkaManager has been installed in a cluster.The",
"doc_type":"cmpntguide",
"kw":"Accessing the KafkaManager Web UI,Using KafkaManager,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Accessing the KafkaManager Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0438.html",
+ "node_id":"mrs_01_0438.xml",
"product_code":"mrs",
- "code":"453",
+ "code":"451",
"des":"This section applies to MRS 1.9.2 clusters.Kafka cluster management includes the following operations:Adding a Cluster on the KafkaManager Web UIUpdating Cluster Paramete",
"doc_type":"cmpntguide",
"kw":"Managing Kafka Clusters,Using KafkaManager,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Managing Kafka Clusters",
"githuburl":""
},
{
"uri":"mrs_01_0439.html",
+ "node_id":"mrs_01_0439.xml",
"product_code":"mrs",
- "code":"454",
+ "code":"452",
"des":"This section applies to MRS 1.9.2 clusters.The Kafka cluster monitoring management includes the following operations:Viewing Broker InformationViewing Topic InformationVi",
"doc_type":"cmpntguide",
"kw":"Kafka Cluster Monitoring Management,Using KafkaManager,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Kafka Cluster Monitoring Management",
"githuburl":""
},
{
"uri":"mrs_01_0400.html",
+ "node_id":"mrs_01_0400.xml",
"product_code":"mrs",
- "code":"455",
+ "code":"453",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Loader",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Loader",
"githuburl":""
},
{
"uri":"mrs_01_1084.html",
+ "node_id":"mrs_01_1084.xml",
"product_code":"mrs",
- "code":"456",
+ "code":"454",
"des":"You can use Loader to import data from the SFTP server to HDFS.This section applies to MRS clusters earlier than 3.x.You have prepared service data.You have created an an",
"doc_type":"cmpntguide",
"kw":"Using Loader from Scratch,Using Loader,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Loader from Scratch",
"githuburl":""
},
{
"uri":"mrs_01_0401.html",
+ "node_id":"mrs_01_0401.xml",
"product_code":"mrs",
- "code":"457",
+ "code":"455",
"des":"This section applies to MRS clusters earlier than 3.x.The process for migrating user data with Loader is as follows:Access the Loader page of the Hue web UI.Manage Loader",
"doc_type":"cmpntguide",
"kw":"How to Use Loader,Using Loader,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How to Use Loader",
"githuburl":""
},
{
"uri":"mrs_01_0402.html",
+ "node_id":"mrs_01_0402.xml",
"product_code":"mrs",
- "code":"458",
+ "code":"456",
"des":"This section applies to versions earlier than MRS 3.x.Loader supports the following links. This section describes configurations of each link.obs-connectorgeneric-jdbc-co",
"doc_type":"cmpntguide",
"kw":"Loader Link Configuration,Using Loader,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Loader Link Configuration",
"githuburl":""
},
{
"uri":"mrs_01_0403.html",
+ "node_id":"mrs_01_0403.xml",
"product_code":"mrs",
- "code":"459",
+ "code":"457",
"des":"You can create, view, edit, and delete links on the Loader page.This section applies to versions earlier than MRS 3.x.You have accessed the Loader page. For details, see ",
"doc_type":"cmpntguide",
"kw":"Managing Loader Links (Versions Earlier Than MRS 3.x),Using Loader,Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Managing Loader Links (Versions Earlier Than MRS 3.x)",
"githuburl":""
},
{
"uri":"mrs_01_0404.html",
+ "node_id":"mrs_01_0404.xml",
"product_code":"mrs",
- "code":"460",
+ "code":"458",
"des":"When Loader jobs obtain data from different data sources, a link corresponding to a data source type needs to be selected and the link properties need to be configured.Th",
"doc_type":"cmpntguide",
"kw":"Source Link Configurations of Loader Jobs,Using Loader,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Source Link Configurations of Loader Jobs",
"githuburl":""
},
{
"uri":"mrs_01_0405.html",
+ "node_id":"mrs_01_0405.xml",
"product_code":"mrs",
- "code":"461",
+ "code":"459",
"des":"When Loader jobs save data to different storage locations, a destination link needs to be selected and the link properties need to be configured.",
"doc_type":"cmpntguide",
"kw":"Destination Link Configurations of Loader Jobs,Using Loader,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Destination Link Configurations of Loader Jobs",
"githuburl":""
},
{
"uri":"mrs_01_0406.html",
+ "node_id":"mrs_01_0406.xml",
"product_code":"mrs",
- "code":"462",
+ "code":"460",
"des":"You can create, view, edit, and delete jobs on the Loader page.This section applies to versions earlier than MRS 3.x.You have accessed the Loader page. For details, see L",
"doc_type":"cmpntguide",
"kw":"Managing Loader Jobs,Using Loader,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Managing Loader Jobs",
"githuburl":""
},
{
"uri":"mrs_01_0407.html",
+ "node_id":"mrs_01_0407.xml",
"product_code":"mrs",
- "code":"463",
+ "code":"461",
"des":"As a component for batch data export, Loader can import and export data using a relational database.You have prepared service data.Procedure for MRS clusters earlier than",
"doc_type":"cmpntguide",
"kw":"Preparing a Driver for MySQL Database Link,Using Loader,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Preparing a Driver for MySQL Database Link",
"githuburl":""
},
{
"uri":"mrs_01_1165.html",
+ "node_id":"mrs_01_1165.xml",
"product_code":"mrs",
- "code":"464",
+ "code":"462",
"des":"Log path: The default storage path of Loader log files is /var/log/Bigdata/loader/Log category.runlog: /var/log/Bigdata/loader/runlog (run logs)scriptlog: /var/log/Bigdat",
"doc_type":"cmpntguide",
"kw":"Loader Log Overview,Using Loader,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Loader Log Overview",
"githuburl":""
},
{
"uri":"mrs_01_0408.html",
+ "node_id":"mrs_01_0408.xml",
"product_code":"mrs",
- "code":"465",
+ "code":"463",
"des":"If you need to import a large volume of data from the external cluster to the internal cluster, import it from OBS to HDFS.You have prepared service data.You have created",
"doc_type":"cmpntguide",
"kw":"Example: Using Loader to Import Data from OBS to HDFS,Using Loader,Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Example: Using Loader to Import Data from OBS to HDFS",
"githuburl":""
},
{
"uri":"mrs_01_1785.html",
+ "node_id":"mrs_01_1785.xml",
"product_code":"mrs",
- "code":"466",
+ "code":"464",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues About Loader",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About Loader",
"githuburl":""
},
{
"uri":"mrs_01_1786.html",
+ "node_id":"mrs_01_1786.xml",
"product_code":"mrs",
- "code":"467",
+ "code":"465",
"des":"Internet Explorer 11 or Internet Explorer 10 is used to access the web UI of Loader. After data is submitted, an error occurs.SymptomWhen the submitted data is saved, a s",
"doc_type":"cmpntguide",
"kw":"How to Resolve the Problem that Failed to Save Data When Using Internet Explorer 10 or Internet Expl",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How to Resolve the Problem that Failed to Save Data When Using Internet Explorer 10 or Internet Explorer 11 ?",
"githuburl":""
},
{
"uri":"mrs_01_1787.html",
+ "node_id":"mrs_01_1787.xml",
"product_code":"mrs",
- "code":"468",
+ "code":"466",
"des":"Three types of connectors are available for importing data from the Oracle database to HDFS using Loader. That is, generic-jdbc-connector, oracle-connector, and oracle-pa",
"doc_type":"cmpntguide",
"kw":"Differences Among Connectors Used During the Process of Importing Data from the Oracle Database to H",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Differences Among Connectors Used During the Process of Importing Data from the Oracle Database to HDFS",
"githuburl":""
},
{
"uri":"mrs_01_0834.html",
+ "node_id":"mrs_01_0834.xml",
"product_code":"mrs",
- "code":"469",
+ "code":"467",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using MapReduce",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using MapReduce",
"githuburl":""
},
{
"uri":"mrs_01_0836.html",
+ "node_id":"mrs_01_0836.xml",
"product_code":"mrs",
- "code":"470",
+ "code":"468",
"des":"Job and task logs are generated during execution of a MapReduce application.Job logs are generated by the MRApplicationMaster, which record details about the start and ru",
"doc_type":"cmpntguide",
"kw":"Configuring the Log Archiving and Clearing Mechanism,Using MapReduce,Component Operation Guide (Norm",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Log Archiving and Clearing Mechanism",
"githuburl":""
},
{
"uri":"mrs_01_0837.html",
+ "node_id":"mrs_01_0837.xml",
"product_code":"mrs",
- "code":"471",
+ "code":"469",
"des":"When the network is unstable or the cluster I/O and CPU are overloaded, client applications might encounter running failures.Adjust the following parameters in the mapred",
"doc_type":"cmpntguide",
"kw":"Reducing Client Application Failure Rate,Using MapReduce,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Reducing Client Application Failure Rate",
"githuburl":""
},
{
"uri":"mrs_01_0838.html",
+ "node_id":"mrs_01_0838.xml",
"product_code":"mrs",
- "code":"472",
+ "code":"470",
"des":"If you want to transmit a job from Windows to Linux, set mapreduce.app-submission.cross-platform to true. If this parameter is unavailable for a cluster or its value is f",
"doc_type":"cmpntguide",
"kw":"Transmitting MapReduce Tasks from Windows to Linux,Using MapReduce,Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Transmitting MapReduce Tasks from Windows to Linux",
"githuburl":""
},
{
"uri":"mrs_01_0839.html",
+ "node_id":"mrs_01_0839.xml",
"product_code":"mrs",
- "code":"473",
+ "code":"471",
"des":"This section applies to MRS 3.x or later.Distributed caching is useful in the following scenarios:Rolling UpgradeDuring the upgrade, applications must keep the text conte",
"doc_type":"cmpntguide",
"kw":"Configuring the Distributed Cache,Using MapReduce,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Distributed Cache",
"githuburl":""
},
{
"uri":"mrs_01_0840.html",
+ "node_id":"mrs_01_0840.xml",
"product_code":"mrs",
- "code":"474",
+ "code":"472",
"des":"When the MapReduce shuffle service is started, it attempts to bind an IP address based on local host. If the MapReduce shuffle service is required to connect to a specifi",
"doc_type":"cmpntguide",
"kw":"Configuring the MapReduce Shuffle Address,Using MapReduce,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the MapReduce Shuffle Address",
"githuburl":""
},
{
"uri":"mrs_01_0841.html",
+ "node_id":"mrs_01_0841.xml",
"product_code":"mrs",
- "code":"475",
+ "code":"473",
"des":"This function is used to specify the MapReduce cluster administrator.The systemadministrator list is specified by mapreduce.cluster.administrators. The cluster administra",
"doc_type":"cmpntguide",
"kw":"Configuring the Cluster Administrator List,Using MapReduce,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Cluster Administrator List",
"githuburl":""
},
{
"uri":"mrs_01_0842.html",
+ "node_id":"mrs_01_0842.xml",
"product_code":"mrs",
- "code":"476",
+ "code":"474",
"des":"Log paths:JobhistoryServer: /var/log/Bigdata/mapreduce/jobhistory (run log) and /var/log/Bigdata/audit/mapreduce/jobhistory (audit log)Container: /srv/BigData/hadoop/data",
"doc_type":"cmpntguide",
"kw":"Introduction to MapReduce Logs,Using MapReduce,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Introduction to MapReduce Logs",
"githuburl":""
},
{
"uri":"mrs_01_0843.html",
+ "node_id":"mrs_01_0843.xml",
"product_code":"mrs",
- "code":"477",
+ "code":"475",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"MapReduce Performance Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"MapReduce Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_0844.html",
+ "node_id":"mrs_01_0844.xml",
"product_code":"mrs",
- "code":"478",
+ "code":"476",
"des":"Optimization can be performed when the number of CPU cores is large, for example, the number of CPU cores is three times the number of disks.You can set the following par",
"doc_type":"cmpntguide",
"kw":"Optimization Configuration for Multiple CPU Cores,MapReduce Performance Tuning,Component Operation G",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimization Configuration for Multiple CPU Cores",
"githuburl":""
},
{
"uri":"mrs_01_0845.html",
+ "node_id":"mrs_01_0845.xml",
"product_code":"mrs",
- "code":"479",
+ "code":"477",
"des":"The performance optimization effect is verified by comparing actual values with the baseline data. Therefore, determining optimal job baseline is critical to performance ",
"doc_type":"cmpntguide",
"kw":"Determining the Job Baseline,MapReduce Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Determining the Job Baseline",
"githuburl":""
},
{
"uri":"mrs_01_0846.html",
+ "node_id":"mrs_01_0846.xml",
"product_code":"mrs",
- "code":"480",
+ "code":"478",
"des":"During the shuffle procedure of MapReduce, the Map task writes intermediate data into disks, and the Reduce task copies and adds the data to the reduce function. Hadoop p",
"doc_type":"cmpntguide",
"kw":"Streamlining Shuffle,MapReduce Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Streamlining Shuffle",
"githuburl":""
},
{
"uri":"mrs_01_0847.html",
+ "node_id":"mrs_01_0847.xml",
"product_code":"mrs",
- "code":"481",
+ "code":"479",
"des":"A big job containing 100,000 Map tasks fails. It is found that the failure is triggered by the slow response of ApplicationMaster (AM).When the number of tasks increases,",
"doc_type":"cmpntguide",
"kw":"AM Optimization for Big Tasks,MapReduce Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"AM Optimization for Big Tasks",
"githuburl":""
},
{
"uri":"mrs_01_0848.html",
+ "node_id":"mrs_01_0848.xml",
"product_code":"mrs",
- "code":"482",
+ "code":"480",
"des":"If a cluster has hundreds or thousands of nodes, the hardware or software fault of a node may prolong the execution time of the entire task (as most tasks are already com",
"doc_type":"cmpntguide",
"kw":"Speculative Execution,MapReduce Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Speculative Execution",
"githuburl":""
},
{
"uri":"mrs_01_0849.html",
+ "node_id":"mrs_01_0849.xml",
"product_code":"mrs",
- "code":"483",
+ "code":"481",
"des":"The Slow Start feature specifies the proportion of Map tasks to be completed before Reduce tasks are started. If the Reduce tasks are started too early, resources will be",
"doc_type":"cmpntguide",
"kw":"Using Slow Start,MapReduce Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Slow Start",
"githuburl":""
},
{
"uri":"mrs_01_0850.html",
+ "node_id":"mrs_01_0850.xml",
"product_code":"mrs",
- "code":"484",
+ "code":"482",
"des":"By default, if an MR job generates a large number of output files, it takes a long time for the job to commit the temporary outputs of a task to the final output director",
"doc_type":"cmpntguide",
"kw":"Optimizing Performance for Committing MR Jobs,MapReduce Performance Tuning,Component Operation Guide",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing Performance for Committing MR Jobs",
"githuburl":""
},
{
"uri":"mrs_01_1788.html",
+ "node_id":"mrs_01_1788.xml",
"product_code":"mrs",
- "code":"485",
+ "code":"483",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues About MapReduce",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About MapReduce",
"githuburl":""
},
{
"uri":"mrs_01_1789.html",
+ "node_id":"mrs_01_1789.xml",
"product_code":"mrs",
- "code":"486",
+ "code":"484",
"des":"MapReduce job takes a very long time (more than 10minutes) when the ResourceManager switch while the job is running.This is because, ResorceManager HA is enabled but the ",
"doc_type":"cmpntguide",
"kw":"Why Does It Take a Long Time to Run a Task Upon ResourceManager Active/Standby Switchover?,Common Is",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does It Take a Long Time to Run a Task Upon ResourceManager Active/Standby Switchover?",
"githuburl":""
},
{
"uri":"mrs_01_1790.html",
+ "node_id":"mrs_01_1790.xml",
"product_code":"mrs",
- "code":"487",
+ "code":"485",
"des":"MapReduce job is not progressing for long timeThis is because of less memory. When the memory is less, the time taken by the job to copy the map output increases signific",
"doc_type":"cmpntguide",
"kw":"Why Does a MapReduce Task Stay Unchanged for a Long Time?,Common Issues About MapReduce,Component Op",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does a MapReduce Task Stay Unchanged for a Long Time?",
"githuburl":""
},
{
"uri":"mrs_01_1791.html",
+ "node_id":"mrs_01_1791.xml",
"product_code":"mrs",
- "code":"488",
+ "code":"486",
"des":"Why is the client unavailable when the MR ApplicationMaster or ResourceManager is moved to the D state during job running?When a task is running, the MR ApplicationMaster",
"doc_type":"cmpntguide",
"kw":"Why the Client Hangs During Job Running?,Common Issues About MapReduce,Component Operation Guide (No",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why the Client Hangs During Job Running?",
"githuburl":""
},
{
"uri":"mrs_01_1792.html",
+ "node_id":"mrs_01_1792.xml",
"product_code":"mrs",
- "code":"489",
+ "code":"487",
"des":"In security mode, why delegation token HDFS_DELEGATION_TOKEN is not found in the cache?In MapReduce, by default HDFS_DELEGATION_TOKEN will be canceled after the job compl",
"doc_type":"cmpntguide",
"kw":"Why Cannot HDFS_DELEGATION_TOKEN Be Found in the Cache?,Common Issues About MapReduce,Component Oper",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Cannot HDFS_DELEGATION_TOKEN Be Found in the Cache?",
"githuburl":""
},
{
"uri":"mrs_01_1793.html",
+ "node_id":"mrs_01_1793.xml",
"product_code":"mrs",
- "code":"490",
+ "code":"488",
"des":"How do I set the job priority when submitting a MapReduce task?You can add the parameter -Dmapreduce.job.priority= in the command to set task priority when subm",
"doc_type":"cmpntguide",
"kw":"How Do I Set the Task Priority When Submitting a MapReduce Task?,Common Issues About MapReduce,Compo",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Set the Task Priority When Submitting a MapReduce Task?",
"githuburl":""
},
{
"uri":"mrs_01_1797.html",
+ "node_id":"mrs_01_1797.xml",
"product_code":"mrs",
- "code":"491",
+ "code":"489",
"des":"After the address of MapReduce JobHistoryServer is changed, why the wrong page is displayed when I click the tracking URL on the ResourceManager WebUI?JobHistoryServer ad",
"doc_type":"cmpntguide",
"kw":"After the Address of MapReduce JobHistoryServer Is Changed, Why the Wrong Page is Displayed When I C",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"After the Address of MapReduce JobHistoryServer Is Changed, Why the Wrong Page is Displayed When I Click the Tracking URL on the ResourceManager WebUI?",
"githuburl":""
},
{
"uri":"mrs_01_1799.html",
+ "node_id":"mrs_01_1799.xml",
"product_code":"mrs",
- "code":"492",
+ "code":"490",
"des":"MapReduce or Yarn job fails in multiple nameService environment using viewFS.When using viewFS only the mount directories are accessible, so the most possible cause is th",
"doc_type":"cmpntguide",
"kw":"MapReduce Job Failed in Multiple NameService Environment,Common Issues About MapReduce,Component Ope",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"MapReduce Job Failed in Multiple NameService Environment",
"githuburl":""
},
{
"uri":"mrs_01_1800.html",
+ "node_id":"mrs_01_1800.xml",
"product_code":"mrs",
- "code":"493",
+ "code":"491",
"des":"MapReduce task fails and the ratio of fault nodes to all nodes is smaller than the blacklist threshold configured by yarn.resourcemanager.am-scheduling.node-blacklisting-",
"doc_type":"cmpntguide",
"kw":"Why a Fault MapReduce Node Is Not Blacklisted?,Common Issues About MapReduce,Component Operation Gui",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why a Fault MapReduce Node Is Not Blacklisted?",
"githuburl":""
},
{
"uri":"mrs_01_1807.html",
+ "node_id":"mrs_01_1807.xml",
"product_code":"mrs",
- "code":"494",
+ "code":"492",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Oozie",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Oozie",
"githuburl":""
},
{
"uri":"mrs_01_1808.html",
+ "node_id":"mrs_01_1808.xml",
"product_code":"mrs",
- "code":"495",
+ "code":"493",
"des":"Oozie is an open-source workflow engine that is used to schedule and coordinate Hadoop jobs.Oozie can be used to submit a wide array of jobs, such as Hive, Spark2x, Loade",
"doc_type":"cmpntguide",
"kw":"Using Oozie from Scratch,Using Oozie,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Using Oozie from Scratch",
"githuburl":""
},
{
"uri":"mrs_01_1810.html",
+ "node_id":"mrs_01_1810.xml",
"product_code":"mrs",
- "code":"496",
+ "code":"494",
"des":"This section describes how to use the Oozie client in an O&M scenario or service scenario.The client has been installed. For example, the installation directory is /opt/c",
"doc_type":"cmpntguide",
"kw":"Using the Oozie Client,Using Oozie,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Using the Oozie Client",
"githuburl":""
},
{
"uri":"mrs_01_1812.html",
+ "node_id":"mrs_01_1812.xml",
"product_code":"mrs",
- "code":"497",
+ "code":"495",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Oozie Client to Submit an Oozie Job",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Oozie Client to Submit an Oozie Job",
"githuburl":""
},
{
"uri":"mrs_01_1813.html",
+ "node_id":"mrs_01_1813.xml",
"product_code":"mrs",
- "code":"498",
+ "code":"496",
"des":"This section describes how to use the Oozie client to submit a Hive job.Hive jobs are divided into the following types:Hive jobHive job that is connected in JDBC modeHive",
"doc_type":"cmpntguide",
"kw":"Submitting a Hive Job,Using Oozie Client to Submit an Oozie Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting a Hive Job",
"githuburl":""
},
{
"uri":"mrs_01_1814.html",
+ "node_id":"mrs_01_1814.xml",
"product_code":"mrs",
- "code":"499",
+ "code":"497",
"des":"This section describes how to submit a Spark2x job using the Oozie client.You are advised to download the latest client.The Spark2x and Oozie components and clients have ",
"doc_type":"cmpntguide",
"kw":"Submitting a Spark2x Job,Using Oozie Client to Submit an Oozie Job,Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting a Spark2x Job",
"githuburl":""
},
{
"uri":"mrs_01_1815.html",
+ "node_id":"mrs_01_1815.xml",
"product_code":"mrs",
- "code":"500",
+ "code":"498",
"des":"This section describes how to submit a Loader job using the Oozie client.You are advised to download the latest client.The Hive and Oozie components and clients have been",
"doc_type":"cmpntguide",
"kw":"Submitting a Loader Job,Using Oozie Client to Submit an Oozie Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting a Loader Job",
"githuburl":""
},
{
"uri":"mrs_01_2392.html",
+ "node_id":"mrs_01_2392.xml",
"product_code":"mrs",
- "code":"501",
+ "code":"499",
"des":"This section describes how to submit a DistCp job using the Oozie client.You are advised to download the latest client.The HDFS and Oozie components and clients have been",
"doc_type":"cmpntguide",
"kw":"Submitting a DistCp Job,Using Oozie Client to Submit an Oozie Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting a DistCp Job",
"githuburl":""
},
{
"uri":"mrs_01_1816.html",
+ "node_id":"mrs_01_1816.xml",
"product_code":"mrs",
- "code":"502",
+ "code":"500",
"des":"In addition to Hive, Spark2x, and Loader jobs, MapReduce, Java, Shell, HDFS, SSH, SubWorkflow, Streaming, and scheduled jobs can be submitted using the Oozie client.You a",
"doc_type":"cmpntguide",
"kw":"Submitting Other Jobs,Using Oozie Client to Submit an Oozie Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting Other Jobs",
"githuburl":""
},
{
"uri":"mrs_01_1817.html",
+ "node_id":"mrs_01_1817.xml",
"product_code":"mrs",
- "code":"503",
+ "code":"501",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Hue to Submit an Oozie Job",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Hue to Submit an Oozie Job",
"githuburl":""
},
{
"uri":"mrs_01_1818.html",
+ "node_id":"mrs_01_1818.xml",
"product_code":"mrs",
- "code":"504",
+ "code":"502",
"des":"You can submit an Oozie job on the Hue management page, but a workflow must be created before the job is submitted.Before using Hue to submit an Oozie job, configure the ",
"doc_type":"cmpntguide",
"kw":"Creating a Workflow,Using Hue to Submit an Oozie Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Creating a Workflow",
"githuburl":""
},
{
"uri":"mrs_01_1819.html",
+ "node_id":"mrs_01_1819.xml",
"product_code":"mrs",
- "code":"505",
+ "code":"503",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Submitting a Workflow Job",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Submitting a Workflow Job",
"githuburl":""
},
{
"uri":"mrs_01_1820.html",
+ "node_id":"mrs_01_1820.xml",
"product_code":"mrs",
- "code":"506",
+ "code":"504",
"des":"This section describes how to submit an Oozie job of the Hive2 type on the Hue web UI.For example, if the input parameter is INPUT=/user/admin/examples/input-data/table, ",
"doc_type":"cmpntguide",
"kw":"Submitting a Hive2 Job,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting a Hive2 Job",
"githuburl":""
},
{
"uri":"mrs_01_1821.html",
+ "node_id":"mrs_01_1821.xml",
"product_code":"mrs",
- "code":"507",
+ "code":"505",
"des":"This section describes how to submit an Oozie job of the Spark2x type on Hue.For example, add the following parameters:hdfs://hacluster/user/admin/examples/input-data/tex",
"doc_type":"cmpntguide",
"kw":"Submitting a Spark2x Job,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Submitting a Spark2x Job",
"githuburl":""
},
{
"uri":"mrs_01_1822.html",
+ "node_id":"mrs_01_1822.xml",
"product_code":"mrs",
- "code":"508",
+ "code":"506",
"des":"This section describes how to submit an Oozie job of the Java type on the Hue web UI.If you need to modify the job name before saving the job (default value: My Workflow)",
"doc_type":"cmpntguide",
"kw":"Submitting a Java Job,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Submitting a Java Job",
"githuburl":""
},
{
"uri":"mrs_01_1823.html",
+ "node_id":"mrs_01_1823.xml",
"product_code":"mrs",
- "code":"509",
+ "code":"507",
"des":"This section describes how to submit an Oozie job of the Loader type on the Hue web UI.Job id is the ID of the Loader job to be orchestrated and can be obtained from the ",
"doc_type":"cmpntguide",
"kw":"Submitting a Loader Job,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting a Loader Job",
"githuburl":""
},
{
"uri":"mrs_01_1824.html",
+ "node_id":"mrs_01_1824.xml",
"product_code":"mrs",
- "code":"510",
+ "code":"508",
"des":"This section describes how to submit an Oozie job of the MapReduce type on the Hue web UI.For example, set the value of mapred.input.dir to /user/admin/examples/input-dat",
"doc_type":"cmpntguide",
"kw":"Submitting a MapReduce Job,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting a MapReduce Job",
"githuburl":""
},
{
"uri":"mrs_01_1825.html",
+ "node_id":"mrs_01_1825.xml",
"product_code":"mrs",
- "code":"511",
+ "code":"509",
"des":"This section describes how to submit an Oozie job of the Sub-workflow type on the Hue web UI.If you need to modify the job name before saving the job (default value: My W",
"doc_type":"cmpntguide",
"kw":"Submitting a Sub-workflow Job,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting a Sub-workflow Job",
"githuburl":""
},
{
"uri":"mrs_01_1826.html",
+ "node_id":"mrs_01_1826.xml",
"product_code":"mrs",
- "code":"512",
+ "code":"510",
"des":"This section describes how to submit an Oozie job of the Shell type on the Hue web UI.If the file is stored in HDFS, select the path of the .sh file, for example, user/hu",
"doc_type":"cmpntguide",
"kw":"Submitting a Shell Job,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting a Shell Job",
"githuburl":""
},
{
"uri":"mrs_01_1827.html",
+ "node_id":"mrs_01_1827.xml",
"product_code":"mrs",
- "code":"513",
+ "code":"511",
"des":"This section describes how to submit an Oozie job of the HDFS type on the Hue web UI.If you need to modify the job name before saving the job (default value: My Workflow)",
"doc_type":"cmpntguide",
"kw":"Submitting an HDFS Job,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Submitting an HDFS Job",
"githuburl":""
},
{
"uri":"mrs_01_1828.html",
+ "node_id":"mrs_01_1828.xml",
"product_code":"mrs",
- "code":"514",
+ "code":"512",
"des":"This section describes how to submit an Oozie job of the Streaming type on the Hue web UI.for example, /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming-3.1.1.ja",
"doc_type":"cmpntguide",
"kw":"Submitting a Streaming Job,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Submitting a Streaming Job",
"githuburl":""
},
{
"uri":"mrs_01_1829.html",
+ "node_id":"mrs_01_1829.xml",
"product_code":"mrs",
- "code":"515",
+ "code":"513",
"des":"This section describes how to submit an Oozie job of the DistCp type on the Hue web UI.If yes, go to 4.If no, go to 7.source_ip: service address of the HDFS NameNode in t",
"doc_type":"cmpntguide",
"kw":"Submitting a DistCp Job,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting a DistCp Job",
"githuburl":""
},
{
"uri":"mrs_01_1830.html",
+ "node_id":"mrs_01_1830.xml",
"product_code":"mrs",
- "code":"516",
+ "code":"514",
"des":"This section guides you to enable unidirectional password-free mutual trust when Oozie nodes are used to execute shell scripts of external nodes through SSH jobs.You have",
"doc_type":"cmpntguide",
"kw":"Example of Mutual Trust Operations,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Example of Mutual Trust Operations",
"githuburl":""
},
{
"uri":"mrs_01_1831.html",
+ "node_id":"mrs_01_1831.xml",
"product_code":"mrs",
- "code":"517",
+ "code":"515",
"des":"This section guides you to submit an Oozie job of the SSH type on the Hue web UI.Due to security risks, SSH jobs cannot be submitted by default. To use the SSH function, ",
"doc_type":"cmpntguide",
"kw":"Submitting an SSH Job,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting an SSH Job",
"githuburl":""
},
{
"uri":"mrs_01_2372.html",
+ "node_id":"mrs_01_2372.xml",
"product_code":"mrs",
- "code":"518",
+ "code":"516",
"des":"This section describes how to submit a Hive job on the Hue web UI.After the job is submitted, you can view the related contents of the job, such as the detailed informati",
"doc_type":"cmpntguide",
"kw":"Submitting a Hive Script,Submitting a Workflow Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Submitting a Hive Script",
"githuburl":""
},
{
"uri":"mrs_01_1840.html",
+ "node_id":"mrs_01_1840.xml",
"product_code":"mrs",
- "code":"519",
+ "code":"517",
"des":"This section describes how to submit a job of the periodic scheduling type on the Hue web UI.Required workflow jobs have been configured before the coordinator task is su",
"doc_type":"cmpntguide",
"kw":"Submitting a Coordinator Periodic Scheduling Job,Using Hue to Submit an Oozie Job,Component Operatio",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting a Coordinator Periodic Scheduling Job",
"githuburl":""
},
{
"uri":"mrs_01_1841.html",
+ "node_id":"mrs_01_1841.xml",
"product_code":"mrs",
- "code":"520",
+ "code":"518",
"des":"In the case that multiple scheduled jobs exist at the same time, you can manage the jobs in batches over the Bundle task. This section describes how to submit a job of th",
"doc_type":"cmpntguide",
"kw":"Submitting a Bundle Batch Processing Job,Using Hue to Submit an Oozie Job,Component Operation Guide ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Submitting a Bundle Batch Processing Job",
"githuburl":""
},
{
"uri":"mrs_01_1842.html",
+ "node_id":"mrs_01_1842.xml",
"product_code":"mrs",
- "code":"521",
+ "code":"519",
"des":"After the jobs are submitted, you can view the execution status of a specific job on Hue.",
"doc_type":"cmpntguide",
"kw":"Querying the Operation Results,Using Hue to Submit an Oozie Job,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Querying the Operation Results",
"githuburl":""
},
{
"uri":"mrs_01_1843.html",
+ "node_id":"mrs_01_1843.xml",
"product_code":"mrs",
- "code":"522",
+ "code":"520",
"des":"Log path: The default storage paths of Oozie log files are as follows:Run log: /var/log/Bigdata/oozieAudit log: /var/log/Bigdata/audit/oozieLog archiving rule: Oozie logs",
"doc_type":"cmpntguide",
"kw":"Oozie Log Overview,Using Oozie,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Oozie Log Overview",
"githuburl":""
},
{
"uri":"mrs_01_1844.html",
+ "node_id":"mrs_01_1844.xml",
"product_code":"mrs",
- "code":"523",
+ "code":"521",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues About Oozie",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About Oozie",
"githuburl":""
},
{
"uri":"mrs_01_1846.html",
+ "node_id":"mrs_01_1846.xml",
"product_code":"mrs",
- "code":"524",
+ "code":"522",
"des":"Why are not Coordinator scheduled jobs executed on time on the Hue or Oozie client?Use UTC time. For example, set start=2016-12-20T09:00Z in job.properties file.",
"doc_type":"cmpntguide",
"kw":"Oozie Scheduled Tasks Are Not Executed on Time,Common Issues About Oozie,Component Operation Guide (",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Oozie Scheduled Tasks Are Not Executed on Time",
"githuburl":""
},
{
"uri":"mrs_01_1847.html",
+ "node_id":"mrs_01_1847.xml",
"product_code":"mrs",
- "code":"525",
+ "code":"523",
"des":"A new JAR package is uploaded to the /user/oozie/share/lib directory on HDFS. However, an error indicating that the class cannot be found is reported during task executio",
"doc_type":"cmpntguide",
"kw":"Why Update of the share lib Directory of Oozie on HDFS Does Not Take Effect?,Common Issues About Ooz",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Why Update of the share lib Directory of Oozie on HDFS Does Not Take Effect?",
"githuburl":""
},
{
"uri":"mrs_01_24479.html",
+ "node_id":"mrs_01_24479.xml",
"product_code":"mrs",
- "code":"526",
+ "code":"524",
"des":"Check the job logs on Yarn. Run the command executed through Hive SQL using beeline to ensure that Hive is running properly.If error information such as \"classnotfoundExc",
"doc_type":"cmpntguide",
"kw":"Common Oozie Troubleshooting Methods,Common Issues About Oozie,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide;productdesc;usermanual",
+ "prodname":"mrs",
+ "IsBot":"Yes"
+ }
+ ],
"title":"Common Oozie Troubleshooting Methods",
"githuburl":""
},
{
"uri":"mrs_01_0599.html",
+ "node_id":"mrs_01_0599.xml",
"product_code":"mrs",
- "code":"527",
+ "code":"525",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using OpenTSDB",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using OpenTSDB",
"githuburl":""
},
{
"uri":"mrs_01_0471.html",
+ "node_id":"mrs_01_0471.xml",
"product_code":"mrs",
- "code":"528",
+ "code":"526",
"des":"You can perform an interactive operation on an MRS cluster client. For a cluster with Kerberos authentication enabled, the user must belong to the opentsdb, hbase, opents",
"doc_type":"cmpntguide",
"kw":"Using an MRS Client to Operate OpenTSDB Metric Data,Using OpenTSDB,Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using an MRS Client to Operate OpenTSDB Metric Data",
"githuburl":""
},
{
"uri":"mrs_01_0472.html",
+ "node_id":"mrs_01_0472.xml",
"product_code":"mrs",
- "code":"529",
+ "code":"527",
"des":"For example, to write data of a metric named testdata, whose timestamp is 1524900185, value is true, tag is key and value, run the following command:: indicates t",
"doc_type":"cmpntguide",
"kw":"Running the curl Command to Operate OpenTSDB,Using OpenTSDB,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Running the curl Command to Operate OpenTSDB",
"githuburl":""
},
{
"uri":"mrs_01_0432.html",
+ "node_id":"mrs_01_0432.xml",
"product_code":"mrs",
- "code":"530",
+ "code":"528",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Presto",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Presto",
"githuburl":""
},
{
"uri":"mrs_01_0433.html",
+ "node_id":"mrs_01_0433.xml",
"product_code":"mrs",
- "code":"531",
+ "code":"529",
"des":"You can view the Presto statistics on the graphical Presto web UI. You are advised to use Google Chrome to access the Presto web UI because it cannot be accessed using In",
"doc_type":"cmpntguide",
"kw":"Accessing the Presto Web UI,Using Presto,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Accessing the Presto Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0434.html",
+ "node_id":"mrs_01_0434.xml",
"product_code":"mrs",
- "code":"532",
+ "code":"530",
"des":"You can perform an interactive query on an MRS cluster client. For clusters with Kerberos authentication enabled, users who submit topologies must belong to the presto gr",
"doc_type":"cmpntguide",
"kw":"Using a Client to Execute Query Statements,Using Presto,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using a Client to Execute Query Statements",
"githuburl":""
},
{
"uri":"mrs_01_0635.html",
+ "node_id":"mrs_01_0635.xml",
"product_code":"mrs",
- "code":"533",
+ "code":"531",
"des":"The Presto component has been installed in an MRS cluster.You have synchronized IAM users. (On the Dashboard page, click Synchronize on the right side of IAM User Sync to",
"doc_type":"cmpntguide",
"kw":"Using Presto to Dump Data in DLF,Using Presto,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Presto to Dump Data in DLF",
"githuburl":""
},
{
"uri":"mrs_01_0636.html",
+ "node_id":"mrs_01_0636.xml",
"product_code":"mrs",
- "code":"534",
+ "code":"532",
"des":"MRS 3.x does not enable you to configure Presto permissions.By default, the Hive Catalog authorization of the Presto component is enabled in a security cluster. The Prest",
"doc_type":"cmpntguide",
"kw":"Configuring Presto Permissions,Using Presto,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Presto Permissions",
"githuburl":""
},
{
"uri":"mrs_01_0761.html",
+ "node_id":"mrs_01_0761.xml",
"product_code":"mrs",
- "code":"535",
+ "code":"533",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Ranger (MRS 1.9.2)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Ranger (MRS 1.9.2)",
"githuburl":""
},
{
"uri":"mrs_01_0763.html",
+ "node_id":"mrs_01_0763.xml",
"product_code":"mrs",
- "code":"536",
+ "code":"534",
"des":"Currently, only normal MRS 1.9.2 clusters support Ranger. Security clusters with Kerberos authentication enabled do not support Ranger.After the cluster is created, Range",
"doc_type":"cmpntguide",
"kw":"Creating a Ranger Cluster,Using Ranger (MRS 1.9.2),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating a Ranger Cluster",
"githuburl":""
},
{
"uri":"mrs_01_0764.html",
+ "node_id":"mrs_01_0764.xml",
"product_code":"mrs",
- "code":"537",
+ "code":"535",
"des":"You can manage Ranger on the Ranger web UI.After logging in to the Ranger Web UI for the first time, change the password and keep it secure.Ranger UserSync is an importan",
"doc_type":"cmpntguide",
"kw":"Accessing the Ranger Web UI and Synchronizing Unix Users to the Ranger Web UI,Using Ranger (MRS 1.9.",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Accessing the Ranger Web UI and Synchronizing Unix Users to the Ranger Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0765.html",
+ "node_id":"mrs_01_0765.xml",
"product_code":"mrs",
- "code":"538",
+ "code":"536",
"des":"After an MRS cluster with Ranger installed is created, Hive and Impala access control is not integrated into Ranger. This section describes how to integrate Hive into Ran",
"doc_type":"cmpntguide",
"kw":"Configuring Hive/Impala Access Permissions in Ranger,Using Ranger (MRS 1.9.2),Component Operation Gu",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Hive/Impala Access Permissions in Ranger",
"githuburl":""
},
{
"uri":"mrs_01_0766.html",
+ "node_id":"mrs_01_0766.xml",
"product_code":"mrs",
- "code":"539",
+ "code":"537",
"des":"After an MRS cluster with Ranger installed is created, HBase access control is not integrated into Ranger. This section describes how to integrate HBase into Ranger.Addin",
"doc_type":"cmpntguide",
"kw":"Configuring HBase Access Permissions in Ranger,Using Ranger (MRS 1.9.2),Component Operation Guide (N",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring HBase Access Permissions in Ranger",
"githuburl":""
},
{
"uri":"mrs_01_1849.html",
+ "node_id":"mrs_01_1849.xml",
"product_code":"mrs",
- "code":"540",
+ "code":"538",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Ranger (MRS 3.x)",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Ranger (MRS 3.x)",
"githuburl":""
},
{
"uri":"mrs_01_1850.html",
+ "node_id":"mrs_01_1850.xml",
"product_code":"mrs",
- "code":"541",
+ "code":"539",
"des":"Ranger provides a centralized permission management framework to implement fine-grained permission control on components such as HDFS, HBase, Hive, and Yarn. In addition,",
"doc_type":"cmpntguide",
"kw":"Logging In to the Ranger Web UI,Using Ranger (MRS 3.x),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Logging In to the Ranger Web UI",
"githuburl":""
},
{
"uri":"mrs_01_2393.html",
+ "node_id":"mrs_01_2393.xml",
"product_code":"mrs",
- "code":"542",
+ "code":"540",
"des":"This section guides you how to enable Ranger authentication. Ranger authentication is enabled by default in security mode and disabled by default in normal mode.If Enable",
"doc_type":"cmpntguide",
"kw":"Enabling Ranger Authentication,Using Ranger (MRS 3.x),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Enabling Ranger Authentication",
"githuburl":""
},
{
"uri":"mrs_01_1851.html",
+ "node_id":"mrs_01_1851.xml",
"product_code":"mrs",
- "code":"543",
+ "code":"541",
"des":"In the newly installed MRS cluster, Ranger is installed by default, with the Ranger authentication model enabled. The systemadministrator can set fine-grained security po",
"doc_type":"cmpntguide",
"kw":"Configuring Component Permission Policies,Using Ranger (MRS 3.x),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Component Permission Policies",
"githuburl":""
},
{
"uri":"mrs_01_1852.html",
+ "node_id":"mrs_01_1852.xml",
"product_code":"mrs",
- "code":"544",
+ "code":"542",
"des":"The systemadministrator can view audit logs of the Ranger running and the permission control after Ranger authentication is enabled on the Ranger web UI.",
"doc_type":"cmpntguide",
"kw":"Viewing Ranger Audit Information,Using Ranger (MRS 3.x),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Viewing Ranger Audit Information",
"githuburl":""
},
{
"uri":"mrs_01_1853.html",
+ "node_id":"mrs_01_1853.xml",
"product_code":"mrs",
- "code":"545",
+ "code":"543",
"des":"Security zone can be configured using Ranger. Rangeradministrators can divide resources of each component into multiple security zones where administrators set security p",
"doc_type":"cmpntguide",
"kw":"Configuring a Security Zone,Using Ranger (MRS 3.x),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring a Security Zone",
"githuburl":""
},
{
"uri":"mrs_01_2394.html",
+ "node_id":"mrs_01_2394.xml",
"product_code":"mrs",
- "code":"546",
+ "code":"544",
"des":"By default, the Ranger data source of the security cluster can be accessed by FusionInsight Manager LDAP users. By default, the Ranger data source of a common cluster can",
"doc_type":"cmpntguide",
"kw":"Changing the Ranger Data Source to LDAP for a Normal Cluster,Using Ranger (MRS 3.x),Component Operat",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Changing the Ranger Data Source to LDAP for a Normal Cluster",
"githuburl":""
},
{
"uri":"mrs_01_1854.html",
+ "node_id":"mrs_01_1854.xml",
"product_code":"mrs",
- "code":"547",
+ "code":"545",
"des":"You can view Ranger permission settings, such as users, user groups, and roles.Users: displays all user information synchronized from LDAP or OS to Ranger.Groups: display",
"doc_type":"cmpntguide",
"kw":"Viewing Ranger Permission Information,Using Ranger (MRS 3.x),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Viewing Ranger Permission Information",
"githuburl":""
},
{
"uri":"mrs_01_1856.html",
+ "node_id":"mrs_01_1856.xml",
"product_code":"mrs",
- "code":"548",
+ "code":"546",
"des":"The Rangeradministrator can use Ranger to configure the read, write, and execution permissions on HDFS directories or files for HDFS users.The Ranger service has been ins",
"doc_type":"cmpntguide",
"kw":"Adding a Ranger Access Permission Policy for HDFS,Using Ranger (MRS 3.x),Component Operation Guide (",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Adding a Ranger Access Permission Policy for HDFS",
"githuburl":""
},
{
"uri":"mrs_01_1857.html",
+ "node_id":"mrs_01_1857.xml",
"product_code":"mrs",
- "code":"549",
+ "code":"547",
"des":"Rangeradministrators can use Ranger to configure permissions on HBase tables, column families, and columns for HBase users.The Ranger service has been installed and is ru",
"doc_type":"cmpntguide",
"kw":"Adding a Ranger Access Permission Policy for HBase,Using Ranger (MRS 3.x),Component Operation Guide ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Adding a Ranger Access Permission Policy for HBase",
"githuburl":""
},
{
"uri":"mrs_01_1858.html",
+ "node_id":"mrs_01_1858.xml",
"product_code":"mrs",
- "code":"550",
+ "code":"548",
"des":"The Rangeradministrator can use Ranger to set permissions for Hive users. The default administrator account of Hive is hive and the initial password is Hive@123.The Range",
"doc_type":"cmpntguide",
"kw":"Adding a Ranger Access Permission Policy for Hive,Using Ranger (MRS 3.x),Component Operation Guide (",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Adding a Ranger Access Permission Policy for Hive",
"githuburl":""
},
{
"uri":"mrs_01_1859.html",
+ "node_id":"mrs_01_1859.xml",
"product_code":"mrs",
- "code":"551",
+ "code":"549",
"des":"The Rangeradministrator can use Ranger to configure Yarn administrator permissions for Yarn users, allowing them to manage Yarn queue resources.The Ranger service has bee",
"doc_type":"cmpntguide",
"kw":"Adding a Ranger Access Permission Policy for Yarn,Using Ranger (MRS 3.x),Component Operation Guide (",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Adding a Ranger Access Permission Policy for Yarn",
"githuburl":""
},
{
"uri":"mrs_01_1860.html",
+ "node_id":"mrs_01_1860.xml",
"product_code":"mrs",
- "code":"552",
+ "code":"550",
"des":"The Rangeradministrator can use Ranger to set permissions for Spark2x users.After Ranger authentication is enabled or disabled on Spark2x, you need to restart Spark2x.Dow",
"doc_type":"cmpntguide",
"kw":"Adding a Ranger Access Permission Policy for Spark2x,Using Ranger (MRS 3.x),Component Operation Guid",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Adding a Ranger Access Permission Policy for Spark2x",
"githuburl":""
},
{
"uri":"mrs_01_1861.html",
+ "node_id":"mrs_01_1861.xml",
"product_code":"mrs",
- "code":"553",
+ "code":"551",
"des":"The Rangeradministrator can use Ranger to configure the read, write, and management permissions of the Kafka topic and the management permission of the cluster for the Ka",
"doc_type":"cmpntguide",
"kw":"Adding a Ranger Access Permission Policy for Kafka,Using Ranger (MRS 3.x),Component Operation Guide ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Adding a Ranger Access Permission Policy for Kafka",
"githuburl":""
},
{
"uri":"mrs_01_1863.html",
+ "node_id":"mrs_01_1863.xml",
"product_code":"mrs",
- "code":"554",
+ "code":"552",
"des":"The Rangeradministrator can use Ranger to set permissions for Storm users.The Ranger service has been installed and is running properly.You have created users, user group",
"doc_type":"cmpntguide",
"kw":"Adding a Ranger Access Permission Policy for Storm,Using Ranger (MRS 3.x),Component Operation Guide ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Adding a Ranger Access Permission Policy for Storm",
"githuburl":""
},
{
"uri":"mrs_01_1865.html",
+ "node_id":"mrs_01_1865.xml",
"product_code":"mrs",
- "code":"555",
+ "code":"553",
"des":"Log path: The default storage path of Ranger logs is /var/log/Bigdata/ranger/Role name.RangerAdmin: /var/log/Bigdata/ranger/rangeradmin (run logs)TagSync: /var/log/Bigdat",
"doc_type":"cmpntguide",
"kw":"Ranger Log Overview,Using Ranger (MRS 3.x),Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Ranger Log Overview",
"githuburl":""
},
{
"uri":"mrs_01_1866.html",
+ "node_id":"mrs_01_1866.xml",
"product_code":"mrs",
- "code":"556",
+ "code":"554",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues About Ranger",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About Ranger",
"githuburl":""
},
{
"uri":"mrs_01_1867.html",
+ "node_id":"mrs_01_1867.xml",
"product_code":"mrs",
- "code":"557",
+ "code":"555",
"des":"During cluster installation, Ranger fails to be started, and the error message \"ERROR: cannot drop sequence X_POLICY_REF_ACCESS_TYPE_SEQ \" is displayed in the task list o",
"doc_type":"cmpntguide",
"kw":"Why Ranger Startup Fails During the Cluster Installation?,Common Issues About Ranger,Component Opera",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Ranger Startup Fails During the Cluster Installation?",
"githuburl":""
},
{
"uri":"mrs_01_1868.html",
+ "node_id":"mrs_01_1868.xml",
"product_code":"mrs",
- "code":"558",
+ "code":"556",
"des":"How do I determine whether the Ranger authentication is enabled for a service that supports the authentication?Log in to FusionInsight Manager and choose Cluster > Servic",
"doc_type":"cmpntguide",
"kw":"How Do I Determine Whether the Ranger Authentication Is Used for a Service?,Common Issues About Rang",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Determine Whether the Ranger Authentication Is Used for a Service?",
"githuburl":""
},
{
"uri":"mrs_01_2300.html",
+ "node_id":"mrs_01_2300.xml",
"product_code":"mrs",
- "code":"559",
+ "code":"557",
"des":"When a new user logs in to Ranger, why is the 401 error reported after the password is changed?The UserSync synchronizes user data at an interval of 5 minutes by default.",
"doc_type":"cmpntguide",
"kw":"Why Cannot a New User Log In to Ranger After Changing the Password?,Common Issues About Ranger,Compo",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Cannot a New User Log In to Ranger After Changing the Password?",
"githuburl":""
},
{
"uri":"mrs_01_2355.html",
+ "node_id":"mrs_01_2355.xml",
"product_code":"mrs",
- "code":"560",
+ "code":"558",
"des":"When a Ranger access permission policy is added for HBase and wildcard characters are used to search for an existing HBase table in the policy, the table cannot be found.",
"doc_type":"cmpntguide",
"kw":"When an HBase Policy Is Added or Modified on Ranger, Wildcard Characters Cannot Be Used to Search fo",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"When an HBase Policy Is Added or Modified on Ranger, Wildcard Characters Cannot Be Used to Search for Existing HBase Tables",
"githuburl":""
},
{
"uri":"mrs_01_0589.html",
+ "node_id":"mrs_01_0589.xml",
"product_code":"mrs",
- "code":"561",
+ "code":"559",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Spark",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Spark",
"githuburl":""
},
{
"uri":"mrs_01_1925.html",
+ "node_id":"mrs_01_1925.xml",
"product_code":"mrs",
- "code":"562",
+ "code":"560",
"des":"This section applies to versions earlier than MRS 3.x.",
"doc_type":"cmpntguide",
"kw":"Precautions,Using Spark,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Precautions",
"githuburl":""
},
{
"uri":"mrs_01_0366.html",
+ "node_id":"mrs_01_0366.xml",
"product_code":"mrs",
- "code":"563",
+ "code":"561",
"des":"This section describes how to use Spark to submit a SparkPi job. SparkPi, a typical Spark job, is used to calculate the value of Pi (π).Multiple open-source Spark sample ",
"doc_type":"cmpntguide",
"kw":"Getting Started with Spark,Using Spark,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Getting Started with Spark",
"githuburl":""
},
{
"uri":"mrs_01_0367.html",
+ "node_id":"mrs_01_0367.xml",
"product_code":"mrs",
- "code":"564",
+ "code":"562",
"des":"Spark provides the Spark SQL language that is similar to SQL to perform operations on structured data. This section describes how to use Spark SQL from scratch. Create a ",
"doc_type":"cmpntguide",
"kw":"Getting Started with Spark SQL,Using Spark,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Getting Started with Spark SQL",
"githuburl":""
},
{
"uri":"mrs_01_1183.html",
+ "node_id":"mrs_01_1183.xml",
"product_code":"mrs",
- "code":"565",
+ "code":"563",
"des":"After an MRS cluster is created, you can create and submit jobs on the client. The client can be installed on nodes inside or outside the cluster.Nodes inside the cluster",
"doc_type":"cmpntguide",
"kw":"Using the Spark Client,Using Spark,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the Spark Client",
"githuburl":""
},
{
"uri":"mrs_01_0767.html",
+ "node_id":"mrs_01_0767.xml",
"product_code":"mrs",
- "code":"566",
+ "code":"564",
"des":"The Spark web UI is used to view the running status of Spark applications. Google Chrome is recommended for better user experience.Spark has two web UIs.Spark UI: used to",
"doc_type":"cmpntguide",
"kw":"Accessing the Spark Web UI,Using Spark,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Accessing the Spark Web UI",
"githuburl":""
},
{
"uri":"mrs_01_0584.html",
+ "node_id":"mrs_01_0584.xml",
"product_code":"mrs",
- "code":"567",
+ "code":"565",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Interconnecting Spark with OpenTSDB",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Interconnecting Spark with OpenTSDB",
"githuburl":""
},
{
"uri":"mrs_01_0585.html",
+ "node_id":"mrs_01_0585.xml",
"product_code":"mrs",
- "code":"568",
+ "code":"566",
"des":"MRS Spark can be used to access the data source of OpenTSDB, create and associate tables in the Spark, and query and insert the OpenTSDB data.Use the CREATE TABLE command",
"doc_type":"cmpntguide",
"kw":"Creating a Table and Associating It with OpenTSDB,Interconnecting Spark with OpenTSDB,Component Oper",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating a Table and Associating It with OpenTSDB",
"githuburl":""
},
{
"uri":"mrs_01_0586.html",
+ "node_id":"mrs_01_0586.xml",
"product_code":"mrs",
- "code":"569",
+ "code":"567",
"des":"Run the INSERT INTO statement to insert the data in the table to the associated OpenTSDB metric.The inserted data cannot be null. If the inserted data is the same as the ",
"doc_type":"cmpntguide",
"kw":"Inserting Data to the OpenTSDB Table,Interconnecting Spark with OpenTSDB,Component Operation Guide (",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Inserting Data to the OpenTSDB Table",
"githuburl":""
},
{
"uri":"mrs_01_0587.html",
+ "node_id":"mrs_01_0587.xml",
"product_code":"mrs",
- "code":"570",
+ "code":"568",
"des":"This SELECT command is used to query data in an OpenTSDB table.The to-be-queried table must exist. Otherwise, an error is reported.The value of tagv must exist. Otherwise",
"doc_type":"cmpntguide",
"kw":"Querying an OpenTSDB Table,Interconnecting Spark with OpenTSDB,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Querying an OpenTSDB Table",
"githuburl":""
},
{
"uri":"mrs_01_0588.html",
+ "node_id":"mrs_01_0588.xml",
"product_code":"mrs",
- "code":"571",
+ "code":"569",
"des":"By default, OpenTSDB connects to the local TSD process of the node where the Spark executor resides. In MRS, use the default configuration.Run the set statement in spark-",
"doc_type":"cmpntguide",
"kw":"Modifying the Default Configuration Data,Interconnecting Spark with OpenTSDB,Component Operation Gui",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Modifying the Default Configuration Data",
"githuburl":""
},
{
"uri":"mrs_01_1926.html",
+ "node_id":"mrs_01_1926.xml",
"product_code":"mrs",
- "code":"572",
+ "code":"570",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Spark2x",
+ "search_title":"",
+ "metedata":[
+ {
+ "IsBot":"No",
+ "documenttype":"cmpntguide;usermanual;productdesc",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Spark2x",
"githuburl":""
},
{
"uri":"mrs_01_1927.html",
+ "node_id":"mrs_01_1927.xml",
"product_code":"mrs",
- "code":"573",
+ "code":"571",
"des":"This section applies to MRS 3.x or later clusters.",
"doc_type":"cmpntguide",
"kw":"Precautions,Using Spark2x,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Precautions",
"githuburl":""
},
{
"uri":"mrs_01_1928.html",
+ "node_id":"mrs_01_1928.xml",
"product_code":"mrs",
- "code":"574",
+ "code":"572",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Basic Operation",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Basic Operation",
"githuburl":""
},
{
"uri":"mrs_01_1929.html",
+ "node_id":"mrs_01_1929.xml",
"product_code":"mrs",
- "code":"575",
+ "code":"573",
"des":"This section describes how to use Spark2x to submit Spark applications, including Spark Core and Spark SQL. Spark Core is the kernel module of Spark. It executes tasks an",
"doc_type":"cmpntguide",
"kw":"Getting Started,Basic Operation,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Getting Started",
"githuburl":""
},
{
"uri":"mrs_01_1930.html",
+ "node_id":"mrs_01_1930.xml",
"product_code":"mrs",
- "code":"576",
+ "code":"574",
"des":"This section describes how to quickly configure common parameters and lists parameters that are not recommended to be modified when Spark2x is used.Some parameters have b",
"doc_type":"cmpntguide",
"kw":"Configuring Parameters Rapidly,Basic Operation,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Parameters Rapidly",
"githuburl":""
},
{
"uri":"mrs_01_1931.html",
+ "node_id":"mrs_01_1931.xml",
"product_code":"mrs",
- "code":"577",
+ "code":"575",
"des":"This section describes common configuration items used in Spark. Subsections are divided by feature so that you can quickly find required configuration items. If you use ",
"doc_type":"cmpntguide",
"kw":"Common Parameters,Basic Operation,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Parameters",
"githuburl":""
},
{
"uri":"mrs_01_1933.html",
+ "node_id":"mrs_01_1933.xml",
"product_code":"mrs",
- "code":"578",
+ "code":"576",
"des":"Spark on HBase allows users to query HBase tables in Spark SQL and to store data for HBase tables by using the Beeline tool. You can use HBase APIs to create, read data f",
"doc_type":"cmpntguide",
"kw":"Spark on HBase Overview and Basic Applications,Basic Operation,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Spark on HBase Overview and Basic Applications",
"githuburl":""
},
{
"uri":"mrs_01_1934.html",
+ "node_id":"mrs_01_1934.xml",
"product_code":"mrs",
- "code":"579",
+ "code":"577",
"des":"Spark on HBase V2 allows users to query HBase tables in Spark SQL and to store data for HBase tables by using the Beeline tool. You can use HBase APIs to create, read dat",
"doc_type":"cmpntguide",
"kw":"Spark on HBase V2 Overview and Basic Applications,Basic Operation,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Spark on HBase V2 Overview and Basic Applications",
"githuburl":""
},
{
"uri":"mrs_01_1935.html",
+ "node_id":"mrs_01_1935.xml",
"product_code":"mrs",
- "code":"580",
+ "code":"578",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"SparkSQL Permission Management(Security Mode)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"SparkSQL Permission Management(Security Mode)",
"githuburl":""
},
{
"uri":"mrs_01_1936.html",
+ "node_id":"mrs_01_1936.xml",
"product_code":"mrs",
- "code":"581",
+ "code":"579",
"des":"Similar to Hive, Spark SQL is a data warehouse framework built on Hadoop, providing storage of structured data like structured query language (SQL).MRS supports users, us",
"doc_type":"cmpntguide",
"kw":"Spark SQL Permissions,SparkSQL Permission Management(Security Mode),Component Operation Guide (Norma",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Spark SQL Permissions",
"githuburl":""
},
{
"uri":"mrs_01_1937.html",
+ "node_id":"mrs_01_1937.xml",
"product_code":"mrs",
- "code":"582",
+ "code":"580",
"des":"This section describes how to create and configure a SparkSQL role on Manager as the system administrator. The Spark SQL role can be configured with the Sparkadministrato",
"doc_type":"cmpntguide",
"kw":"Creating a Spark SQL Role,SparkSQL Permission Management(Security Mode),Component Operation Guide (N",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating a Spark SQL Role",
"githuburl":""
},
{
"uri":"mrs_01_1938.html",
+ "node_id":"mrs_01_1938.xml",
"product_code":"mrs",
- "code":"583",
+ "code":"581",
"des":"You can configure related permissions if you need to access tables or databases created by other users. SparkSQL supports column-based permission control. If a user needs",
"doc_type":"cmpntguide",
"kw":"Configuring Permissions for SparkSQL Tables, Columns, and Databases,SparkSQL Permission Management(S",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Permissions for SparkSQL Tables, Columns, and Databases",
"githuburl":""
},
{
"uri":"mrs_01_1939.html",
+ "node_id":"mrs_01_1939.xml",
"product_code":"mrs",
- "code":"584",
+ "code":"582",
"des":"SparkSQL may need to be associated with other components. For example, Spark on HBase requires HBase permissions. The following describes how to associate SparkSQL with H",
"doc_type":"cmpntguide",
"kw":"Configuring Permissions for SparkSQL to Use Other Components,SparkSQL Permission Management(Security",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Permissions for SparkSQL to Use Other Components",
"githuburl":""
},
{
"uri":"mrs_01_1940.html",
+ "node_id":"mrs_01_1940.xml",
"product_code":"mrs",
- "code":"585",
+ "code":"583",
"des":"This section describes how to configure SparkSQL permission management functions (client configuration is similar to server configuration). To enable table permission, ad",
"doc_type":"cmpntguide",
"kw":"Configuring the Client and Server,SparkSQL Permission Management(Security Mode),Component Operation ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Client and Server",
"githuburl":""
},
{
"uri":"mrs_01_1941.html",
+ "node_id":"mrs_01_1941.xml",
"product_code":"mrs",
- "code":"586",
+ "code":"584",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Scenario-Specific Configuration",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Scenario-Specific Configuration",
"githuburl":""
},
{
"uri":"mrs_01_1942.html",
+ "node_id":"mrs_01_1942.xml",
"product_code":"mrs",
- "code":"587",
+ "code":"585",
"des":"In this mode, multiple ThriftServers coexist in the cluster and the client can randomly connect any ThriftServer to perform service operations. When one or multiple Thrif",
"doc_type":"cmpntguide",
"kw":"Configuring Multi-active Instance Mode,Scenario-Specific Configuration,Component Operation Guide (No",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Multi-active Instance Mode",
"githuburl":""
},
{
"uri":"mrs_01_1943.html",
+ "node_id":"mrs_01_1943.xml",
"product_code":"mrs",
- "code":"588",
+ "code":"586",
"des":"In multi-tenant mode, JDBCServers are bound with tenants. Each tenant corresponds to one or more JDBCServers, and a JDBCServer provides services for only one tenant. Diff",
"doc_type":"cmpntguide",
"kw":"Configuring the Multi-tenant Mode,Scenario-Specific Configuration,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Multi-tenant Mode",
"githuburl":""
},
{
"uri":"mrs_01_1944.html",
+ "node_id":"mrs_01_1944.xml",
"product_code":"mrs",
- "code":"589",
+ "code":"587",
"des":"When using a cluster, if you want to switch between multi-active instance mode and multi-tenant mode, the following configurations are required.Switch from multi-tenant m",
"doc_type":"cmpntguide",
"kw":"Configuring the Switchover Between the Multi-active Instance Mode and the Multi-tenant Mode,Scenario",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Switchover Between the Multi-active Instance Mode and the Multi-tenant Mode",
"githuburl":""
},
{
"uri":"mrs_01_1945.html",
+ "node_id":"mrs_01_1945.xml",
"product_code":"mrs",
- "code":"590",
+ "code":"588",
"des":"Functions such as UI, EventLog, and dynamic resource scheduling in Spark are implemented through event transfer. Events include SparkListenerJobStart and SparkListenerJob",
"doc_type":"cmpntguide",
"kw":"Configuring the Size of the Event Queue,Scenario-Specific Configuration,Component Operation Guide (N",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Size of the Event Queue",
"githuburl":""
},
{
"uri":"mrs_01_1947.html",
+ "node_id":"mrs_01_1947.xml",
"product_code":"mrs",
- "code":"591",
+ "code":"589",
"des":"When the executor off-heap memory is too small, or processes with higher priority preempt resources, the physical memory usage will exceed the maximal value. To prevent t",
"doc_type":"cmpntguide",
"kw":"Configuring Executor Off-Heap Memory,Scenario-Specific Configuration,Component Operation Guide (Norm",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Executor Off-Heap Memory",
"githuburl":""
},
{
"uri":"mrs_01_1948.html",
+ "node_id":"mrs_01_1948.xml",
"product_code":"mrs",
- "code":"592",
+ "code":"590",
"des":"A large amount of memory is required when Spark SQL executes a query, especially during Aggregate and Join operations. If the memory is limited, OutOfMemoryError may occu",
"doc_type":"cmpntguide",
"kw":"Enhancing Stability in a Limited Memory Condition,Scenario-Specific Configuration,Component Operatio",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Enhancing Stability in a Limited Memory Condition",
"githuburl":""
},
{
"uri":"mrs_01_1949.html",
+ "node_id":"mrs_01_1949.xml",
"product_code":"mrs",
- "code":"593",
+ "code":"591",
"des":"When yarn.log-aggregation-enable of Yarn is set to true, the container log aggregation function is enabled. Log aggregation indicates that after applications are run on Y",
"doc_type":"cmpntguide",
"kw":"Viewing Aggregated Container Logs on the Web UI,Scenario-Specific Configuration,Component Operation ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Viewing Aggregated Container Logs on the Web UI",
"githuburl":""
},
{
"uri":"mrs_01_1951.html",
+ "node_id":"mrs_01_1951.xml",
"product_code":"mrs",
- "code":"594",
+ "code":"592",
"des":"Values of some configuration parameters of Spark client vary depending on its work mode (YARN-Client or YARN-Cluster). If you switch Spark client between different modes ",
"doc_type":"cmpntguide",
"kw":"Configuring Environment Variables in Yarn-Client and Yarn-Cluster Modes,Scenario-Specific Configurat",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Environment Variables in Yarn-Client and Yarn-Cluster Modes",
"githuburl":""
},
{
"uri":"mrs_01_1952.html",
+ "node_id":"mrs_01_1952.xml",
"product_code":"mrs",
- "code":"595",
+ "code":"593",
"des":"By default, SparkSQL divides data into 200 data blocks during shuffle. In data-intensive scenarios, each data block may have excessive size. If a single data block of a t",
"doc_type":"cmpntguide",
"kw":"Configuring the Default Number of Data Blocks Divided by SparkSQL,Scenario-Specific Configuration,Co",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Default Number of Data Blocks Divided by SparkSQL",
"githuburl":""
},
{
"uri":"mrs_01_1953.html",
+ "node_id":"mrs_01_1953.xml",
"product_code":"mrs",
- "code":"596",
+ "code":"594",
"des":"The compression format of a Parquet table can be configured as follows:If the Parquet table is a partitioned one, set the parquet.compression parameter of the Parquet tab",
"doc_type":"cmpntguide",
"kw":"Configuring the Compression Format of a Parquet Table,Scenario-Specific Configuration,Component Oper",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Compression Format of a Parquet Table",
"githuburl":""
},
{
"uri":"mrs_01_1954.html",
+ "node_id":"mrs_01_1954.xml",
"product_code":"mrs",
- "code":"597",
+ "code":"595",
"des":"In Spark WebUI, the Executor page can display information about Lost Executor. Executors are dynamically recycled. If the JDBCServer tasks are large, there may be too man",
"doc_type":"cmpntguide",
"kw":"Configuring the Number of Lost Executors Displayed in WebUI,Scenario-Specific Configuration,Componen",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Number of Lost Executors Displayed in WebUI",
"githuburl":""
},
{
"uri":"mrs_01_1957.html",
+ "node_id":"mrs_01_1957.xml",
"product_code":"mrs",
- "code":"598",
+ "code":"596",
"des":"In some scenarios, to locate problems or check information by changing the log level,you can add the -Dlog4j.configuration.watch=true parameter to the JVM parameter of a ",
"doc_type":"cmpntguide",
"kw":"Setting the Log Level Dynamically,Scenario-Specific Configuration,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Setting the Log Level Dynamically",
"githuburl":""
},
{
"uri":"mrs_01_1958.html",
+ "node_id":"mrs_01_1958.xml",
"product_code":"mrs",
- "code":"599",
+ "code":"597",
"des":"When Spark is used to submit tasks, the driver obtains tokens from HBase by default. To access HBase, you need to configure the jaas.conf file for security authentication",
"doc_type":"cmpntguide",
"kw":"Configuring Whether Spark Obtains HBase Tokens,Scenario-Specific Configuration,Component Operation G",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Whether Spark Obtains HBase Tokens",
"githuburl":""
},
{
"uri":"mrs_01_1959.html",
+ "node_id":"mrs_01_1959.xml",
"product_code":"mrs",
- "code":"600",
+ "code":"598",
"des":"If the Spark Streaming application is connected to Kafka, after the Spark Streaming application is terminated abnormally and restarted from the checkpoint, the system pre",
"doc_type":"cmpntguide",
"kw":"Configuring LIFO for Kafka,Scenario-Specific Configuration,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring LIFO for Kafka",
"githuburl":""
},
{
"uri":"mrs_01_1960.html",
+ "node_id":"mrs_01_1960.xml",
"product_code":"mrs",
- "code":"601",
+ "code":"599",
"des":"When the Spark Streaming application is connected to Kafka and the application is restarted, the application reads data from Kafka based on the last read topic offset and",
"doc_type":"cmpntguide",
"kw":"Configuring Reliability for Connected Kafka,Scenario-Specific Configuration,Component Operation Guid",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Reliability for Connected Kafka",
"githuburl":""
},
{
"uri":"mrs_01_1961.html",
+ "node_id":"mrs_01_1961.xml",
"product_code":"mrs",
- "code":"602",
+ "code":"600",
"des":"When a query statement is executed, the returned result may be large (containing more than 100,000 records). In this case, JDBCServer out of memory (OOM) may occur. There",
"doc_type":"cmpntguide",
"kw":"Configuring Streaming Reading of Driver Execution Results,Scenario-Specific Configuration,Component ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Streaming Reading of Driver Execution Results",
"githuburl":""
},
{
"uri":"mrs_01_1962.html",
+ "node_id":"mrs_01_1962.xml",
"product_code":"mrs",
- "code":"603",
+ "code":"601",
"des":"When you perform the select query in Hive partitioned tables, the FileNotFoundException exception is displayed if a specified partition path does not exist in HDFS. To av",
"doc_type":"cmpntguide",
"kw":"Filtering Partitions without Paths in Partitioned Tables,Scenario-Specific Configuration,Component O",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Filtering Partitions without Paths in Partitioned Tables",
"githuburl":""
},
{
"uri":"mrs_01_1963.html",
+ "node_id":"mrs_01_1963.xml",
"product_code":"mrs",
- "code":"604",
+ "code":"602",
"des":"Users need to implement security protection for Spark2x web UI when some data on the UI cannot be viewed by other users. Once a user attempts to log in to the UI, Spark2x",
"doc_type":"cmpntguide",
"kw":"Configuring Spark2x Web UI ACLs,Scenario-Specific Configuration,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Spark2x Web UI ACLs",
"githuburl":""
},
{
"uri":"mrs_01_1964.html",
+ "node_id":"mrs_01_1964.xml",
"product_code":"mrs",
- "code":"605",
+ "code":"603",
"des":"ORC is a column-based storage format in the Hadoop ecosystem. It originates from Apache Hive and is used to reduce the Hadoop data storage space and accelerate the Hive q",
"doc_type":"cmpntguide",
"kw":"Configuring Vector-based ORC Data Reading,Scenario-Specific Configuration,Component Operation Guide ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Vector-based ORC Data Reading",
"githuburl":""
},
{
"uri":"mrs_01_1965.html",
+ "node_id":"mrs_01_1965.xml",
"product_code":"mrs",
- "code":"606",
+ "code":"604",
"des":"In earlier versions, the predicate for pruning Hive table partitions is pushed down. Only comparison expressions between column names and integers or character strings ca",
"doc_type":"cmpntguide",
"kw":"Broaden Support for Hive Partition Pruning Predicate Pushdown,Scenario-Specific Configuration,Compon",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Broaden Support for Hive Partition Pruning Predicate Pushdown",
"githuburl":""
},
{
"uri":"mrs_01_1966.html",
+ "node_id":"mrs_01_1966.xml",
"product_code":"mrs",
- "code":"607",
+ "code":"605",
"des":"In earlier versions, when the insert overwrite syntax is used to overwrite partition tables, only partitions with specified expressions are matched, and partitions withou",
"doc_type":"cmpntguide",
"kw":"Hive Dynamic Partition Overwriting Syntax,Scenario-Specific Configuration,Component Operation Guide ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Hive Dynamic Partition Overwriting Syntax",
"githuburl":""
},
{
"uri":"mrs_01_1967.html",
+ "node_id":"mrs_01_1967.xml",
"product_code":"mrs",
- "code":"608",
+ "code":"606",
"des":"The execution plan for SQL statements is optimized in Spark. Common optimization rules are heuristic optimization rules. Heuristic optimization rules are provided based o",
"doc_type":"cmpntguide",
"kw":"Configuring the Column Statistics Histogram to Enhance the CBO Accuracy,Scenario-Specific Configurat",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Column Statistics Histogram to Enhance the CBO Accuracy",
"githuburl":""
},
{
"uri":"mrs_01_1969.html",
+ "node_id":"mrs_01_1969.xml",
"product_code":"mrs",
- "code":"609",
+ "code":"607",
"des":"JobHistory can use local disks to cache the historical data of Spark applications to prevent the JobHistory memory from loading a large amount of application data, reduci",
"doc_type":"cmpntguide",
"kw":"Configuring Local Disk Cache for JobHistory,Scenario-Specific Configuration,Component Operation Guid",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Local Disk Cache for JobHistory",
"githuburl":""
},
{
"uri":"mrs_01_1970.html",
+ "node_id":"mrs_01_1970.xml",
"product_code":"mrs",
- "code":"610",
+ "code":"608",
"des":"The Spark SQL adaptive execution feature enables Spark SQL to optimize subsequent execution processes based on intermediate results to improve overall execution efficienc",
"doc_type":"cmpntguide",
"kw":"Configuring Spark SQL to Enable the Adaptive Execution Feature,Scenario-Specific Configuration,Compo",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Spark SQL to Enable the Adaptive Execution Feature",
"githuburl":""
},
{
"uri":"mrs_01_24170.html",
+ "node_id":"mrs_01_24170.xml",
"product_code":"mrs",
- "code":"611",
+ "code":"609",
"des":"When the event log mode is enabled for Spark, that is, spark.eventLog.enabled is set to true, events are written to a configured log file to record the program running pr",
"doc_type":"cmpntguide",
"kw":"Configuring Event Log Rollover,Scenario-Specific Configuration,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Event Log Rollover",
"githuburl":""
},
{
"uri":"mrs_01_2317.html",
+ "node_id":"mrs_01_2317.xml",
"product_code":"mrs",
- "code":"612",
+ "code":"610",
"des":"When Ranger is used as the permission management service of Spark SQL, the certificate in the cluster is required for accessing RangerAdmin. If you use a third-party JDK ",
"doc_type":"cmpntguide",
"kw":"Adapting to the Third-party JDK When Ranger Is Used,Basic Operation,Component Operation Guide (Norma",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Adapting to the Third-party JDK When Ranger Is Used",
"githuburl":""
},
{
"uri":"mrs_01_1971.html",
+ "node_id":"mrs_01_1971.xml",
"product_code":"mrs",
- "code":"613",
+ "code":"611",
"des":"Log paths:Executor run log: ${BIGDATA_DATA_HOME}/hadoop/data${i}/nm/containerlogs/application_${appid}/container_{$contid}The logs of running tasks are stored in the prec",
"doc_type":"cmpntguide",
"kw":"Spark2x Logs,Using Spark2x,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Spark2x Logs",
"githuburl":""
},
{
"uri":"mrs_01_1972.html",
+ "node_id":"mrs_01_1972.xml",
"product_code":"mrs",
- "code":"614",
+ "code":"612",
"des":"Container logs of running Spark applications are distributed on multiple nodes. This section describes how to quickly obtain container logs.You can run the yarn logs comm",
"doc_type":"cmpntguide",
"kw":"Obtaining Container Logs of a Running Spark Application,Using Spark2x,Component Operation Guide (Nor",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Obtaining Container Logs of a Running Spark Application",
"githuburl":""
},
{
"uri":"mrs_01_1973.html",
+ "node_id":"mrs_01_1973.xml",
"product_code":"mrs",
- "code":"615",
+ "code":"613",
"des":"In a large-scale Hadoop production cluster, HDFS metadata is stored in the NameNode memory, and the cluster scale is restricted by the memory limitation of each NameNode.",
"doc_type":"cmpntguide",
"kw":"Small File Combination Tools,Using Spark2x,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Small File Combination Tools",
"githuburl":""
},
{
"uri":"mrs_01_2362.html",
+ "node_id":"mrs_01_2362.xml",
"product_code":"mrs",
- "code":"616",
+ "code":"614",
"des":"The first query of CarbonData is slow, which may cause a delay for nodes that have high requirements on real-time performance.The tool provides the following functions:Pr",
"doc_type":"cmpntguide",
"kw":"Using CarbonData for First Query,Using Spark2x,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using CarbonData for First Query",
"githuburl":""
},
{
"uri":"mrs_01_1974.html",
+ "node_id":"mrs_01_1974.xml",
"product_code":"mrs",
- "code":"617",
+ "code":"615",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Spark2x Performance Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Spark2x Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_1975.html",
+ "node_id":"mrs_01_1975.xml",
"product_code":"mrs",
- "code":"618",
+ "code":"616",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Spark Core Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Spark Core Tuning",
"githuburl":""
},
{
"uri":"mrs_01_1976.html",
+ "node_id":"mrs_01_1976.xml",
"product_code":"mrs",
- "code":"619",
+ "code":"617",
"des":"Spark supports the following types of serialization:JavaSerializerKryoSerializerData serialization affects the Spark application performance. In specific data format, Kry",
"doc_type":"cmpntguide",
"kw":"Data Serialization,Spark Core Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Data Serialization",
"githuburl":""
},
{
"uri":"mrs_01_1977.html",
+ "node_id":"mrs_01_1977.xml",
"product_code":"mrs",
- "code":"620",
+ "code":"618",
"des":"Spark is a memory-based computing frame. If the memory is insufficient during computing, the Spark execution efficiency will be adversely affected. You can determine whet",
"doc_type":"cmpntguide",
"kw":"Optimizing Memory Configuration,Spark Core Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing Memory Configuration",
"githuburl":""
},
{
"uri":"mrs_01_1978.html",
+ "node_id":"mrs_01_1978.xml",
"product_code":"mrs",
- "code":"621",
+ "code":"619",
"des":"The degree of parallelism (DOP) specifies the number of tasks to be executed concurrently. It determines the number of data blocks after the shuffle operation. Configure ",
"doc_type":"cmpntguide",
"kw":"Setting the DOP,Spark Core Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Setting the DOP",
"githuburl":""
},
{
"uri":"mrs_01_1979.html",
+ "node_id":"mrs_01_1979.xml",
"product_code":"mrs",
- "code":"622",
+ "code":"620",
"des":"Broadcast distributes data sets to each node. It allows data to be obtained locally when a data set is needed during a Spark task. If broadcast is not used, data serializ",
"doc_type":"cmpntguide",
"kw":"Using Broadcast Variables,Spark Core Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Broadcast Variables",
"githuburl":""
},
{
"uri":"mrs_01_1980.html",
+ "node_id":"mrs_01_1980.xml",
"product_code":"mrs",
- "code":"623",
+ "code":"621",
"des":"When the Spark system runs applications that contain a shuffle process, an executor process also writes shuffle data and provides shuffle data for other executors in addi",
"doc_type":"cmpntguide",
"kw":"Using the external shuffle service to improve performance,Spark Core Tuning,Component Operation Guid",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the external shuffle service to improve performance",
"githuburl":""
},
{
"uri":"mrs_01_1981.html",
+ "node_id":"mrs_01_1981.xml",
"product_code":"mrs",
- "code":"624",
+ "code":"622",
"des":"Resources are a key factor that affects Spark execution efficiency. When a long-running service (such as the JDBCServer) is allocated with multiple executors without task",
"doc_type":"cmpntguide",
"kw":"Configuring Dynamic Resource Scheduling in Yarn Mode,Spark Core Tuning,Component Operation Guide (No",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Dynamic Resource Scheduling in Yarn Mode",
"githuburl":""
},
{
"uri":"mrs_01_1982.html",
+ "node_id":"mrs_01_1982.xml",
"product_code":"mrs",
- "code":"625",
+ "code":"623",
"des":"There are three processes in Spark on Yarn mode: driver, ApplicationMaster, and executor. The Driver and Executor handle the scheduling and running of the task. The Appli",
"doc_type":"cmpntguide",
"kw":"Configuring Process Parameters,Spark Core Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Process Parameters",
"githuburl":""
},
{
"uri":"mrs_01_1983.html",
+ "node_id":"mrs_01_1983.xml",
"product_code":"mrs",
- "code":"626",
+ "code":"624",
"des":"Optimal program structure helps increase execution efficiency. During application programming, avoid shuffle operations and combine narrow-dependency operations.This topi",
"doc_type":"cmpntguide",
"kw":"Designing the Direction Acyclic Graph (DAG),Spark Core Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Designing the Direction Acyclic Graph (DAG)",
"githuburl":""
},
{
"uri":"mrs_01_1984.html",
+ "node_id":"mrs_01_1984.xml",
"product_code":"mrs",
- "code":"627",
+ "code":"625",
"des":"If the overhead of each record is high, for example:Use mapPartitions to calculate data by partition.Use mapPartitions to flexibly operate data. For example, to calculate",
"doc_type":"cmpntguide",
"kw":"Experience,Spark Core Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Experience",
"githuburl":""
},
{
"uri":"mrs_01_1985.html",
+ "node_id":"mrs_01_1985.xml",
"product_code":"mrs",
- "code":"628",
+ "code":"626",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Spark SQL and DataFrame Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Spark SQL and DataFrame Tuning",
"githuburl":""
},
{
"uri":"mrs_01_1986.html",
+ "node_id":"mrs_01_1986.xml",
"product_code":"mrs",
- "code":"629",
+ "code":"627",
"des":"When two tables are joined in Spark SQL, the broadcast function (see section \"Using Broadcast Variables\") can be used to broadcast tables to each node. This minimizes shu",
"doc_type":"cmpntguide",
"kw":"Optimizing the Spark SQL Join Operation,Spark SQL and DataFrame Tuning,Component Operation Guide (No",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing the Spark SQL Join Operation",
"githuburl":""
},
{
"uri":"mrs_01_1987.html",
+ "node_id":"mrs_01_1987.xml",
"product_code":"mrs",
- "code":"630",
+ "code":"628",
"des":"When multiple tables are joined in Spark SQL, skew occurs in join keys and the data volume in some Hash buckets is much higher than that in other buckets. As a result, so",
"doc_type":"cmpntguide",
"kw":"Improving Spark SQL Calculation Performance Under Data Skew,Spark SQL and DataFrame Tuning,Component",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Improving Spark SQL Calculation Performance Under Data Skew",
"githuburl":""
},
{
"uri":"mrs_01_1988.html",
+ "node_id":"mrs_01_1988.xml",
"product_code":"mrs",
- "code":"631",
+ "code":"629",
"des":"A Spark SQL table may have many small files (far smaller than an HDFS block), each of which maps to a partition on the Spark by default. In other words, each small file i",
"doc_type":"cmpntguide",
"kw":"Optimizing Spark SQL Performance in the Small File Scenario,Spark SQL and DataFrame Tuning,Component",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing Spark SQL Performance in the Small File Scenario",
"githuburl":""
},
{
"uri":"mrs_01_1989.html",
+ "node_id":"mrs_01_1989.xml",
"product_code":"mrs",
- "code":"632",
+ "code":"630",
"des":"The INSERT...SELECT operation needs to be optimized if any of the following conditions is true:Many small files need to be queried.A few large files need to be queried.Th",
"doc_type":"cmpntguide",
"kw":"Optimizing the INSERT...SELECT Operation,Spark SQL and DataFrame Tuning,Component Operation Guide (N",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing the INSERT...SELECT Operation",
"githuburl":""
},
{
"uri":"mrs_01_1990.html",
+ "node_id":"mrs_01_1990.xml",
"product_code":"mrs",
- "code":"633",
+ "code":"631",
"des":"Multiple clients can be connected to JDBCServer at the same time. However, if the number of concurrent tasks is too large, the default configuration of JDBCServer must be",
"doc_type":"cmpntguide",
"kw":"Multiple JDBC Clients Concurrently Connecting to JDBCServer,Spark SQL and DataFrame Tuning,Component",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Multiple JDBC Clients Concurrently Connecting to JDBCServer",
"githuburl":""
},
{
"uri":"mrs_01_1992.html",
+ "node_id":"mrs_01_1992.xml",
"product_code":"mrs",
- "code":"634",
+ "code":"632",
"des":"When SparkSQL inserts data to dynamic partitioned tables, the more partitions there are, the more HDFS files a single task generates and the more memory metadata occupies",
"doc_type":"cmpntguide",
"kw":"Optimizing Memory when Data Is Inserted into Dynamic Partitioned Tables,Spark SQL and DataFrame Tuni",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing Memory when Data Is Inserted into Dynamic Partitioned Tables",
"githuburl":""
},
{
"uri":"mrs_01_1995.html",
+ "node_id":"mrs_01_1995.xml",
"product_code":"mrs",
- "code":"635",
+ "code":"633",
"des":"A Spark SQL table may have many small files (far smaller than an HDFS block), each of which maps to a partition on the Spark by default. In other words, each small file i",
"doc_type":"cmpntguide",
"kw":"Optimizing Small Files,Spark SQL and DataFrame Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing Small Files",
"githuburl":""
},
{
"uri":"mrs_01_1996.html",
+ "node_id":"mrs_01_1996.xml",
"product_code":"mrs",
- "code":"636",
+ "code":"634",
"des":"Spark SQL supports hash aggregate algorithm. Namely, use fast aggregate hashmap as cache to improve aggregate performance. The hashmap replaces the previous ColumnarBatch",
"doc_type":"cmpntguide",
"kw":"Optimizing the Aggregate Algorithms,Spark SQL and DataFrame Tuning,Component Operation Guide (Normal",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing the Aggregate Algorithms",
"githuburl":""
},
{
"uri":"mrs_01_1997.html",
+ "node_id":"mrs_01_1997.xml",
"product_code":"mrs",
- "code":"637",
+ "code":"635",
"des":"Save the partition information about the datasource table to the Metastore and process partition information in the Metastore.Optimize the datasource tables, support synt",
"doc_type":"cmpntguide",
"kw":"Optimizing Datasource Tables,Spark SQL and DataFrame Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing Datasource Tables",
"githuburl":""
},
{
"uri":"mrs_01_1998.html",
+ "node_id":"mrs_01_1998.xml",
"product_code":"mrs",
- "code":"638",
+ "code":"636",
"des":"Spark SQL supports rule-based optimization by default. However, the rule-based optimization cannot ensure that Spark selects the optimal query plan. Cost-Based Optimizer ",
"doc_type":"cmpntguide",
"kw":"Merging CBO,Spark SQL and DataFrame Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Merging CBO",
"githuburl":""
},
{
"uri":"mrs_01_1999.html",
+ "node_id":"mrs_01_1999.xml",
"product_code":"mrs",
- "code":"639",
+ "code":"637",
"des":"This section describes how to enable or disable the query optimization for inter-source complex SQL.(Optional) Prepare for connecting to the MPPDB data source.If the data",
"doc_type":"cmpntguide",
"kw":"Optimizing SQL Query of Data of Multiple Sources,Spark SQL and DataFrame Tuning,Component Operation ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing SQL Query of Data of Multiple Sources",
"githuburl":""
},
{
"uri":"mrs_01_2000.html",
+ "node_id":"mrs_01_2000.xml",
"product_code":"mrs",
- "code":"640",
+ "code":"638",
"des":"This section describes the optimization suggestions for SQL statements in multi-level nesting and hybrid join scenarios.The following provides an example of complex query",
"doc_type":"cmpntguide",
"kw":"SQL Optimization for Multi-level Nesting and Hybrid Join,Spark SQL and DataFrame Tuning,Component Op",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"SQL Optimization for Multi-level Nesting and Hybrid Join",
"githuburl":""
},
{
"uri":"mrs_01_2001.html",
+ "node_id":"mrs_01_2001.xml",
"product_code":"mrs",
- "code":"641",
+ "code":"639",
"des":"Streaming is a mini-batch streaming processing framework that features second-level delay and high throughput. To optimize Streaming is to improve its throughput while ma",
"doc_type":"cmpntguide",
"kw":"Spark Streaming Tuning,Spark2x Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Spark Streaming Tuning",
"githuburl":""
},
{
"uri":"mrs_01_2002.html",
+ "node_id":"mrs_01_2002.xml",
"product_code":"mrs",
- "code":"642",
+ "code":"640",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues About Spark2x",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About Spark2x",
"githuburl":""
},
{
"uri":"mrs_01_2003.html",
+ "node_id":"mrs_01_2003.xml",
"product_code":"mrs",
- "code":"643",
+ "code":"641",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Spark Core",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Spark Core",
"githuburl":""
},
{
"uri":"mrs_01_2004.html",
+ "node_id":"mrs_01_2004.xml",
"product_code":"mrs",
- "code":"644",
+ "code":"642",
"des":"How do I view the aggregated container logs on the page when the log aggregation function is enabled on YARN?For details, see Viewing Aggregated Container Logs on the Web",
"doc_type":"cmpntguide",
"kw":"How Do I View Aggregated Spark Application Logs?,Spark Core,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I View Aggregated Spark Application Logs?",
"githuburl":""
},
{
"uri":"mrs_01_2005.html",
+ "node_id":"mrs_01_2005.xml",
"product_code":"mrs",
- "code":"645",
+ "code":"643",
"des":"Communication between ApplicationMaster and ResourceManager remains abnormal for a long time. Why is the driver return code inconsistent with application status on Resour",
"doc_type":"cmpntguide",
"kw":"Why Is the Return Code of Driver Inconsistent with Application State Displayed on ResourceManager We",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Is the Return Code of Driver Inconsistent with Application State Displayed on ResourceManager WebUI?",
"githuburl":""
},
{
"uri":"mrs_01_2006.html",
+ "node_id":"mrs_01_2006.xml",
"product_code":"mrs",
- "code":"646",
+ "code":"644",
"des":"Why cannot exit the Driver process after running the yarn application -kill applicationID command to stop the Spark Streaming application?Running the yarn application -ki",
"doc_type":"cmpntguide",
"kw":"Why Cannot Exit the Driver Process?,Spark Core,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Cannot Exit the Driver Process?",
"githuburl":""
},
{
"uri":"mrs_01_2007.html",
+ "node_id":"mrs_01_2007.xml",
"product_code":"mrs",
- "code":"647",
+ "code":"645",
"des":"On a large cluster of 380 nodes, run the ScalaSort test case in the HiBench test that runs the 29T data, and configure Executor as --executor-cores 4. The following abnor",
"doc_type":"cmpntguide",
"kw":"Why Does FetchFailedException Occur When the Network Connection Is Timed out,Spark Core,Component Op",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does FetchFailedException Occur When the Network Connection Is Timed out",
"githuburl":""
},
{
"uri":"mrs_01_2008.html",
+ "node_id":"mrs_01_2008.xml",
"product_code":"mrs",
- "code":"648",
+ "code":"646",
"des":"How to configure the event queue size if the following Driver log information is displayed indicating that the event queue overflows?Common applicationsDropping SparkList",
"doc_type":"cmpntguide",
"kw":"How to Configure Event Queue Size If Event Queue Overflows?,Spark Core,Component Operation Guide (No",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How to Configure Event Queue Size If Event Queue Overflows?",
"githuburl":""
},
{
"uri":"mrs_01_2009.html",
+ "node_id":"mrs_01_2009.xml",
"product_code":"mrs",
- "code":"649",
+ "code":"647",
"des":"During Spark application execution, if the driver fails to connect to ResourceManager, the following error is reported and it does not exit for a long time. What can I do",
"doc_type":"cmpntguide",
"kw":"What Can I Do If the getApplicationReport Exception Is Recorded in Logs During Spark Application Exe",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Can I Do If the getApplicationReport Exception Is Recorded in Logs During Spark Application Execution and the Application Does Not Exit for a Long Time?",
"githuburl":""
},
{
"uri":"mrs_01_2010.html",
+ "node_id":"mrs_01_2010.xml",
"product_code":"mrs",
- "code":"650",
+ "code":"648",
"des":"When Spark executes an application, an error similar to the following is reported and the application ends. What can I do?Symptom: The value of spark.rpc.io.connectionTim",
"doc_type":"cmpntguide",
"kw":"What Can I Do If \"Connection to ip:port has been quiet for xxx ms while there are outstanding reques",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Can I Do If \"Connection to ip:port has been quiet for xxx ms while there are outstanding requests\" Is Reported When Spark Executes an Application and the Application Ends?",
"githuburl":""
},
{
"uri":"mrs_01_2011.html",
+ "node_id":"mrs_01_2011.xml",
"product_code":"mrs",
- "code":"651",
+ "code":"649",
"des":"If the NodeManager is shut down with the Executor dynamic allocation enabled, the Executors on the node where the NodeManeger is shut down fail to be removed from the dri",
"doc_type":"cmpntguide",
"kw":"Why Do Executors Fail to be Removed After the NodeManeger Is Shut Down?,Spark Core,Component Operati",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Do Executors Fail to be Removed After the NodeManeger Is Shut Down?",
"githuburl":""
},
{
"uri":"mrs_01_2012.html",
+ "node_id":"mrs_01_2012.xml",
"product_code":"mrs",
- "code":"652",
+ "code":"650",
"des":"ExternalShuffle is enabled for the application that runs Spark. Task loss occurs in the application because the message \"java.lang.NullPointerException: Password cannot b",
"doc_type":"cmpntguide",
"kw":"What Can I Do If the Message \"Password cannot be null if SASL is enabled\" Is Displayed?,Spark Core,C",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Can I Do If the Message \"Password cannot be null if SASL is enabled\" Is Displayed?",
"githuburl":""
},
{
"uri":"mrs_01_2013.html",
+ "node_id":"mrs_01_2013.xml",
"product_code":"mrs",
- "code":"653",
+ "code":"651",
"des":"When inserting data into the dynamic partition table, a large number of shuffle files are damaged due to the disk disconnection, node error, and the like. In this case, w",
"doc_type":"cmpntguide",
"kw":"What Should I Do If the Message \"Failed to CREATE_FILE\" Is Displayed in the Restarted Tasks When Dat",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Should I Do If the Message \"Failed to CREATE_FILE\" Is Displayed in the Restarted Tasks When Data Is Inserted Into the Dynamic Partition Table?",
"githuburl":""
},
{
"uri":"mrs_01_2014.html",
+ "node_id":"mrs_01_2014.xml",
"product_code":"mrs",
- "code":"654",
+ "code":"652",
"des":"When Hash shuffle is used to run a job that consists of 1000000 map tasks x 100000 reduce tasks, run logs report many message failures and Executor heartbeat timeout, lea",
"doc_type":"cmpntguide",
"kw":"Why Tasks Fail When Hash Shuffle Is Used?,Spark Core,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Tasks Fail When Hash Shuffle Is Used?",
"githuburl":""
},
{
"uri":"mrs_01_2015.html",
+ "node_id":"mrs_01_2015.xml",
"product_code":"mrs",
- "code":"655",
+ "code":"653",
"des":"When the http(s)://: mode is used to access the Spark JobHistory page, if the displayed Spark JobHistory page is not the page of FusionInsight Manag",
"doc_type":"cmpntguide",
"kw":"What Can I Do If the Error Message \"DNS query failed\" Is Displayed When I Access the Aggregated Logs",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Can I Do If the Error Message \"DNS query failed\" Is Displayed When I Access the Aggregated Logs Page of Spark Applications?",
"githuburl":""
},
{
"uri":"mrs_01_2016.html",
+ "node_id":"mrs_01_2016.xml",
"product_code":"mrs",
- "code":"656",
+ "code":"654",
"des":"When I execute a 100 TB TPC-DS test suite in the JDBCServer mode, the \"Timeout waiting for task\" is displayed. As a result, shuffle fetch fails, the stage keeps retrying,",
"doc_type":"cmpntguide",
"kw":"What Can I Do If Shuffle Fetch Fails Due to the \"Timeout Waiting for Task\" Exception?,Spark Core,Com",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Can I Do If Shuffle Fetch Fails Due to the \"Timeout Waiting for Task\" Exception?",
"githuburl":""
},
{
"uri":"mrs_01_2017.html",
+ "node_id":"mrs_01_2017.xml",
"product_code":"mrs",
- "code":"657",
+ "code":"655",
"des":"When I run Spark tasks with a large data volume, for example, 100 TB TPCDS test suite, why does the Stage retry due to Executor loss sometimes? The message \"Executor 532 ",
"doc_type":"cmpntguide",
"kw":"Why Does the Stage Retry due to the Crash of the Executor?,Spark Core,Component Operation Guide (Nor",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does the Stage Retry due to the Crash of the Executor?",
"githuburl":""
},
{
"uri":"mrs_01_2018.html",
+ "node_id":"mrs_01_2018.xml",
"product_code":"mrs",
- "code":"658",
+ "code":"656",
"des":"When more than 50 terabytes of data is shuffled, some executors fail to register shuffle services due to timeout. The shuffle tasks then fail. Why? The error log is as fo",
"doc_type":"cmpntguide",
"kw":"Why Do the Executors Fail to Register Shuffle Services During the Shuffle of a Large Amount of Data?",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Do the Executors Fail to Register Shuffle Services During the Shuffle of a Large Amount of Data?",
"githuburl":""
},
{
"uri":"mrs_01_2019.html",
+ "node_id":"mrs_01_2019.xml",
"product_code":"mrs",
- "code":"659",
+ "code":"657",
"des":"During the execution of Spark applications, if the YARN External Shuffle service is enabled and there are too many shuffle tasks, the java.lang.OutofMemoryError: Direct b",
"doc_type":"cmpntguide",
"kw":"Why Does the Out of Memory Error Occur in NodeManager During the Execution of Spark Applications,Spa",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does the Out of Memory Error Occur in NodeManager During the Execution of Spark Applications",
"githuburl":""
},
{
"uri":"mrs_01_2021.html",
+ "node_id":"mrs_01_2021.xml",
"product_code":"mrs",
- "code":"660",
+ "code":"658",
"des":"Execution of the sparkbench task (for example, Wordcount) of HiBench6 fails. The bench.log indicates that the Yarn task fails to be executed. The failure information disp",
"doc_type":"cmpntguide",
"kw":"Why Does the Realm Information Fail to Be Obtained When SparkBench is Run on HiBench for the Cluster",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does the Realm Information Fail to Be Obtained When SparkBench is Run on HiBench for the Cluster in Security Mode?",
"githuburl":""
},
{
"uri":"mrs_01_2022.html",
+ "node_id":"mrs_01_2022.xml",
"product_code":"mrs",
- "code":"661",
+ "code":"659",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Spark SQL and DataFrame",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Spark SQL and DataFrame",
"githuburl":""
},
{
"uri":"mrs_01_2023.html",
+ "node_id":"mrs_01_2023.xml",
"product_code":"mrs",
- "code":"662",
+ "code":"660",
"des":"Suppose that there is a table src(d1, d2, m) with the following data:The results for statement \"select d1, sum(d1) from src group by d1, d2 with rollup\" are shown as belo",
"doc_type":"cmpntguide",
"kw":"What Do I have to Note When Using Spark SQL ROLLUP and CUBE?,Spark SQL and DataFrame,Component Opera",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Do I have to Note When Using Spark SQL ROLLUP and CUBE?",
"githuburl":""
},
{
"uri":"mrs_01_2024.html",
+ "node_id":"mrs_01_2024.xml",
"product_code":"mrs",
- "code":"663",
+ "code":"661",
"des":"Why temporary tables of the previous database are displayed after the database is switched?Create a temporary DataSource table, for example:create temporary table ds_parq",
"doc_type":"cmpntguide",
"kw":"Why Spark SQL Is Displayed as a Temporary Table in Different Databases?,Spark SQL and DataFrame,Comp",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Spark SQL Is Displayed as a Temporary Table in Different Databases?",
"githuburl":""
},
{
"uri":"mrs_01_2025.html",
+ "node_id":"mrs_01_2025.xml",
"product_code":"mrs",
- "code":"664",
+ "code":"662",
"des":"Is it possible to assign parameter values through Spark commands, in addition to through a user interface or a configuration file?Spark configuration options can be defin",
"doc_type":"cmpntguide",
"kw":"How to Assign a Parameter Value in a Spark Command?,Spark SQL and DataFrame,Component Operation Guid",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How to Assign a Parameter Value in a Spark Command?",
"githuburl":""
},
{
"uri":"mrs_01_2026.html",
+ "node_id":"mrs_01_2026.xml",
"product_code":"mrs",
- "code":"665",
+ "code":"663",
"des":"The following error information is displayed when a new user creates a table using SparkSQL:When you create a table using Spark SQL, the interface of Hive is called by th",
"doc_type":"cmpntguide",
"kw":"What Directory Permissions Do I Need to Create a Table Using SparkSQL?,Spark SQL and DataFrame,Compo",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Directory Permissions Do I Need to Create a Table Using SparkSQL?",
"githuburl":""
},
{
"uri":"mrs_01_2027.html",
+ "node_id":"mrs_01_2027.xml",
"product_code":"mrs",
- "code":"666",
+ "code":"664",
"des":"Why do I fail to delete the UDF using another service, for example, delete the UDF created by Hive using Spark SQL.The UDF can be created using any of the following servi",
"doc_type":"cmpntguide",
"kw":"Why Do I Fail to Delete the UDF Using Another Service?,Spark SQL and DataFrame,Component Operation G",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Do I Fail to Delete the UDF Using Another Service?",
"githuburl":""
},
{
"uri":"mrs_01_2028.html",
+ "node_id":"mrs_01_2028.xml",
"product_code":"mrs",
- "code":"667",
+ "code":"665",
"des":"Why cannot I query newly inserted data in a parquet Hive table using SparkSQL? This problem occurs in the following scenarios:For partitioned tables and non-partitioned t",
"doc_type":"cmpntguide",
"kw":"Why Cannot I Query Newly Inserted Data in a Parquet Hive Table Using SparkSQL?,Spark SQL and DataFra",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Cannot I Query Newly Inserted Data in a Parquet Hive Table Using SparkSQL?",
"githuburl":""
},
{
"uri":"mrs_01_2029.html",
+ "node_id":"mrs_01_2029.xml",
"product_code":"mrs",
- "code":"668",
+ "code":"666",
"des":"What is cache table used for? Which point should I pay attention to while using cache table?Spark SQL caches tables into memory so that data can be directly read from mem",
"doc_type":"cmpntguide",
"kw":"How to Use Cache Table?,Spark SQL and DataFrame,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How to Use Cache Table?",
"githuburl":""
},
{
"uri":"mrs_01_2030.html",
+ "node_id":"mrs_01_2030.xml",
"product_code":"mrs",
- "code":"669",
+ "code":"667",
"des":"During the repartition operation, the number of blocks (spark.sql.shuffle.partitions) is set to 4,500, and the number of keys used by repartition exceeds 4,000. It is exp",
"doc_type":"cmpntguide",
"kw":"Why Are Some Partitions Empty During Repartition?,Spark SQL and DataFrame,Component Operation Guide ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Are Some Partitions Empty During Repartition?",
"githuburl":""
},
{
"uri":"mrs_01_2031.html",
+ "node_id":"mrs_01_2031.xml",
"product_code":"mrs",
- "code":"670",
+ "code":"668",
"des":"When the default configuration is used, 16 terabytes of text data fails to be converted into 4 terabytes of parquet data, and the error information below is displayed. Wh",
"doc_type":"cmpntguide",
"kw":"Why Does 16 Terabytes of Text Data Fails to Be Converted into 4 Terabytes of Parquet Data?,Spark SQL",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does 16 Terabytes of Text Data Fails to Be Converted into 4 Terabytes of Parquet Data?",
"githuburl":""
},
{
"uri":"mrs_01_2033.html",
+ "node_id":"mrs_01_2033.xml",
"product_code":"mrs",
- "code":"671",
+ "code":"669",
"des":"When the table name is set to table, why the error information similar to the following is displayed after the drop table table command or other command is run?The word t",
"doc_type":"cmpntguide",
"kw":"Why the Operation Fails When the Table Name Is TABLE?,Spark SQL and DataFrame,Component Operation Gu",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why the Operation Fails When the Table Name Is TABLE?",
"githuburl":""
},
{
"uri":"mrs_01_2034.html",
+ "node_id":"mrs_01_2034.xml",
"product_code":"mrs",
- "code":"672",
+ "code":"670",
"des":"When the analyze table statement is executed using spark-sql, the task is suspended and the information below is displayed. Why?When the statement is executed, the SQL st",
"doc_type":"cmpntguide",
"kw":"Why Is a Task Suspended When the ANALYZE TABLE Statement Is Executed and Resources Are Insufficient?",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Is a Task Suspended When the ANALYZE TABLE Statement Is Executed and Resources Are Insufficient?",
"githuburl":""
},
{
"uri":"mrs_01_2035.html",
+ "node_id":"mrs_01_2035.xml",
"product_code":"mrs",
- "code":"673",
+ "code":"671",
"des":"If I access a parquet table on which I do not have permission, why a job is run before \"Missing Privileges\" is displayed?The execution sequence of Spark SQL statement par",
"doc_type":"cmpntguide",
"kw":"If I Access a parquet Table on Which I Do not Have Permission, Why a Job Is Run Before \"Missing Priv",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"If I Access a parquet Table on Which I Do not Have Permission, Why a Job Is Run Before \"Missing Privileges\" Is Displayed?",
"githuburl":""
},
{
"uri":"mrs_01_2036.html",
+ "node_id":"mrs_01_2036.xml",
"product_code":"mrs",
- "code":"674",
+ "code":"672",
"des":"When do I fail to modify the metadata in the datasource and Spark on HBase table by running the Hive command?The current Spark version does not support modifying the meta",
"doc_type":"cmpntguide",
"kw":"Why Do I Fail to Modify MetaData by Running the Hive Command?,Spark SQL and DataFrame,Component Oper",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Do I Fail to Modify MetaData by Running the Hive Command?",
"githuburl":""
},
{
"uri":"mrs_01_2037.html",
+ "node_id":"mrs_01_2037.xml",
"product_code":"mrs",
- "code":"675",
+ "code":"673",
"des":"After successfully running Spark tasks with large data volume, for example, 2-TB TPCDS test suite, why is the abnormal stack information \"RejectedExecutionException\" disp",
"doc_type":"cmpntguide",
"kw":"Why Is \"RejectedExecutionException\" Displayed When I Exit Spark SQL?,Spark SQL and DataFrame,Compone",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Is \"RejectedExecutionException\" Displayed When I Exit Spark SQL?",
"githuburl":""
},
{
"uri":"mrs_01_2038.html",
+ "node_id":"mrs_01_2038.xml",
"product_code":"mrs",
- "code":"676",
+ "code":"674",
"des":"During a health check, if the concurrent statements exceed the threshold of the thread pool, the health check statements fail to be executed, the health check program tim",
"doc_type":"cmpntguide",
"kw":"What Should I Do If the JDBCServer Process is Mistakenly Killed During a Health Check?,Spark SQL and",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Should I Do If the JDBCServer Process is Mistakenly Killed During a Health Check?",
"githuburl":""
},
{
"uri":"mrs_01_2039.html",
+ "node_id":"mrs_01_2039.xml",
"product_code":"mrs",
- "code":"677",
+ "code":"675",
"des":"Why no result is found when 2016-6-30 is set in the date field as the filter condition?As shown in the following figure, trx_dte_par in the select count (*) from trxfintr",
"doc_type":"cmpntguide",
"kw":"Why No Result Is found When 2016-6-30 Is Set in the Date Field as the Filter Condition?,Spark SQL an",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why No Result Is found When 2016-6-30 Is Set in the Date Field as the Filter Condition?",
"githuburl":""
},
{
"uri":"mrs_01_2040.html",
+ "node_id":"mrs_01_2040.xml",
"product_code":"mrs",
- "code":"678",
+ "code":"676",
"des":"Why does the --hivevaroption I specified in the command for starting spark-beeline fail to take effect?In the V100R002C60 version, if I use the --hivevar =\n org.apache.flink\n fli",
"doc_type":"cmpntguide",
"kw":"Completely Migrating Storm Services,Migrating Storm Services to Flink,Component Operation Guide (Nor",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Completely Migrating Storm Services",
"githuburl":""
},
{
"uri":"mrs_01_1051.html",
+ "node_id":"mrs_01_1051.xml",
"product_code":"mrs",
- "code":"728",
+ "code":"726",
"des":"This section describes how to embed Storm code in DataStream of Flink in embedded migration mode. For example, the code of Spout or Bolt compiled using Storm API is embed",
"doc_type":"cmpntguide",
"kw":"Performing Embedded Service Migration,Migrating Storm Services to Flink,Component Operation Guide (N",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Performing Embedded Service Migration",
"githuburl":""
},
{
"uri":"mrs_01_1052.html",
+ "node_id":"mrs_01_1052.xml",
"product_code":"mrs",
- "code":"729",
+ "code":"727",
"des":"If the Storm services use the storm-hdfs or storm-hbase plug-in package for interconnection, you need to specify the following security parameters when migrating Storm se",
"doc_type":"cmpntguide",
"kw":"Migrating Services of External Security Components Interconnected with Storm,Migrating Storm Service",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Migrating Services of External Security Components Interconnected with Storm",
"githuburl":""
},
{
"uri":"mrs_01_1053.html",
+ "node_id":"mrs_01_1053.xml",
"product_code":"mrs",
- "code":"730",
+ "code":"728",
"des":"This section applies to MRS 3.x or later.Log paths: The default paths of Storm log files are /var/log/Bigdata/storm/Role name (run logs) and /var/log/Bigdata/audit/storm/",
"doc_type":"cmpntguide",
"kw":"Storm Log Introduction,Using Storm,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Storm Log Introduction",
"githuburl":""
},
{
"uri":"mrs_01_1054.html",
+ "node_id":"mrs_01_1054.xml",
"product_code":"mrs",
- "code":"731",
+ "code":"729",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Performance Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_1055.html",
+ "node_id":"mrs_01_1055.xml",
"product_code":"mrs",
- "code":"732",
+ "code":"730",
"des":"You can modify Storm parameters to improve Storm performance in specific service scenarios.This section applies to MRS 3.x or later.Modify the service configuration param",
"doc_type":"cmpntguide",
"kw":"Storm Performance Tuning,Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Storm Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_2067.html",
+ "node_id":"mrs_01_2067.xml",
"product_code":"mrs",
- "code":"733",
+ "code":"731",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Tez",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Tez",
"githuburl":""
},
{
"uri":"mrs_01_2068.html",
+ "node_id":"mrs_01_2068.xml",
"product_code":"mrs",
- "code":"734",
+ "code":"732",
"des":"This section applies to MRS 3.x or later clusters.",
"doc_type":"cmpntguide",
"kw":"Precautions,Using Tez,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Precautions",
"githuburl":""
},
{
"uri":"mrs_01_2069.html",
+ "node_id":"mrs_01_2069.xml",
"product_code":"mrs",
- "code":"735",
+ "code":"733",
"des":"On Manager, choose Cluster > Service > Tez > Configuration > All Configurations. Enter a parameter name in the search box.",
"doc_type":"cmpntguide",
"kw":"Common Tez Parameters,Using Tez,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Tez Parameters",
"githuburl":""
},
{
"uri":"mrs_01_2070.html",
+ "node_id":"mrs_01_2070.xml",
"product_code":"mrs",
- "code":"736",
+ "code":"734",
"des":"Tez displays the Tez task execution process on a GUI. You can view the task execution details on the GUI.The TimelineServer instance of the Yarn service has been installe",
"doc_type":"cmpntguide",
"kw":"Accessing TezUI,Using Tez,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Accessing TezUI",
"githuburl":""
},
{
"uri":"mrs_01_2071.html",
+ "node_id":"mrs_01_2071.xml",
"product_code":"mrs",
- "code":"737",
+ "code":"735",
"des":"Log path: The default save path of Tez logs is /var/log/Bigdata/tez/role name.TezUI: /var/log/Bigdata/tez/tezui (run logs) and /var/log/Bigdata/audit/tez/tezui (audit log",
"doc_type":"cmpntguide",
"kw":"Log Overview,Using Tez,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Log Overview",
"githuburl":""
},
{
"uri":"mrs_01_2072.html",
+ "node_id":"mrs_01_2072.xml",
"product_code":"mrs",
- "code":"738",
+ "code":"736",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues",
"githuburl":""
},
{
"uri":"mrs_01_2073.html",
+ "node_id":"mrs_01_2073.xml",
"product_code":"mrs",
- "code":"739",
+ "code":"737",
"des":"After a user logs in to Manager and switches to the Tez web UI, the submitted Tez tasks are not displayed.The Tez task data displayed on the Tez WebUI requires the suppor",
"doc_type":"cmpntguide",
"kw":"TezUI Cannot Display Tez Task Execution Details,Common Issues,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"TezUI Cannot Display Tez Task Execution Details",
"githuburl":""
},
{
"uri":"mrs_01_2074.html",
+ "node_id":"mrs_01_2074.xml",
"product_code":"mrs",
- "code":"740",
+ "code":"738",
"des":"When a user logs in to Manager and switches to the Tez web UI, error 404 or 503 is displayed.The Tez web UI depends on the TimelineServer instance of Yarn. Therefore, Tim",
"doc_type":"cmpntguide",
"kw":"Error Occurs When a User Switches to the Tez Web UI,Common Issues,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Error Occurs When a User Switches to the Tez Web UI",
"githuburl":""
},
{
"uri":"mrs_01_2075.html",
+ "node_id":"mrs_01_2075.xml",
"product_code":"mrs",
- "code":"741",
+ "code":"739",
"des":"A user logs in to the Tez web UI and clicks Logs, but the Yarn log page fails to be displayed and data cannot be loaded.Currently, the hostname is used for the access to ",
"doc_type":"cmpntguide",
"kw":"Yarn Logs Cannot Be Viewed on the TezUI Page,Common Issues,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Yarn Logs Cannot Be Viewed on the TezUI Page",
"githuburl":""
},
{
"uri":"mrs_01_2076.html",
+ "node_id":"mrs_01_2076.xml",
"product_code":"mrs",
- "code":"742",
+ "code":"740",
"des":"A user logs in to Manager and switches to the Tez web UI page, but no data for the submitted task is displayed on the Hive Queries page.To display task data on the Hive Q",
"doc_type":"cmpntguide",
"kw":"Table Data Is Empty on the TezUI HiveQueries Page,Common Issues,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Table Data Is Empty on the TezUI HiveQueries Page",
"githuburl":""
},
{
"uri":"mrs_01_0851.html",
+ "node_id":"mrs_01_0851.xml",
"product_code":"mrs",
- "code":"743",
+ "code":"741",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using Yarn",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using Yarn",
"githuburl":""
},
{
"uri":"mrs_01_0852.html",
+ "node_id":"mrs_01_0852.xml",
"product_code":"mrs",
- "code":"744",
+ "code":"742",
"des":"The Yarn service provides queues for users. Users allocate system resources to each queue. After the configuration is complete, you can click Refresh Queue or restart the",
"doc_type":"cmpntguide",
"kw":"Common YARN Parameters,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common YARN Parameters",
"githuburl":""
},
{
"uri":"mrs_01_0853.html",
+ "node_id":"mrs_01_0853.xml",
"product_code":"mrs",
- "code":"745",
+ "code":"743",
"des":"This section describes how to create and configure a Yarn role. The Yarn role can be assigned with Yarn administrator permission and manage Yarn queue resources.If the cu",
"doc_type":"cmpntguide",
"kw":"Creating Yarn Roles,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Creating Yarn Roles",
"githuburl":""
},
{
"uri":"mrs_01_0854.html",
+ "node_id":"mrs_01_0854.xml",
"product_code":"mrs",
- "code":"746",
+ "code":"744",
"des":"This section guides users to use a Yarn client in an O&M or service scenario.The client has been installed.For example, the installation directory is /opt/hadoopclient. T",
"doc_type":"cmpntguide",
"kw":"Using the YARN Client,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using the YARN Client",
"githuburl":""
},
{
"uri":"mrs_01_0855.html",
+ "node_id":"mrs_01_0855.xml",
"product_code":"mrs",
- "code":"747",
+ "code":"745",
"des":"If the hardware resources (such as the number of CPU cores and memory size) of the nodes for deploying NodeManagers are different but the NodeManager available hardware r",
"doc_type":"cmpntguide",
"kw":"Configuring Resources for a NodeManager Role Instance,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Resources for a NodeManager Role Instance",
"githuburl":""
},
{
"uri":"mrs_01_0856.html",
+ "node_id":"mrs_01_0856.xml",
"product_code":"mrs",
- "code":"748",
+ "code":"746",
"des":"If the storage directories defined by the Yarn NodeManager are incorrect or the Yarn storage plan changes, the system administrator needs to modify the NodeManager storag",
"doc_type":"cmpntguide",
"kw":"Changing NodeManager Storage Directories,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Changing NodeManager Storage Directories",
"githuburl":""
},
{
"uri":"mrs_01_0857.html",
+ "node_id":"mrs_01_0857.xml",
"product_code":"mrs",
- "code":"749",
+ "code":"747",
"des":"In the multi-tenant scenario in security mode, a cluster can be used by multiple users, and tasks of multiple users can be submitted and executed. Users are invisible to ",
"doc_type":"cmpntguide",
"kw":"Configuring Strict Permission Control for Yarn,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Strict Permission Control for Yarn",
"githuburl":""
},
{
"uri":"mrs_01_0858.html",
+ "node_id":"mrs_01_0858.xml",
"product_code":"mrs",
- "code":"750",
+ "code":"748",
"des":"Yarn provides the container log aggregation function to collect logs generated by containers on each node to HDFS to release local disk space. You can collect logs in eit",
"doc_type":"cmpntguide",
"kw":"Configuring Container Log Aggregation,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Container Log Aggregation",
"githuburl":""
},
{
"uri":"mrs_01_0859.html",
+ "node_id":"mrs_01_0859.xml",
"product_code":"mrs",
- "code":"751",
+ "code":"749",
"des":"This section applies to MRS 3.x or later clusters.CGroups is a Linux kernel feature. In YARN this feature allows containers to be limited in their resource usage (example",
"doc_type":"cmpntguide",
"kw":"Using CGroups with YARN,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using CGroups with YARN",
"githuburl":""
},
{
"uri":"mrs_01_0860.html",
+ "node_id":"mrs_01_0860.xml",
"product_code":"mrs",
- "code":"752",
+ "code":"750",
"des":"When resources are insufficient or ApplicationMaster fails to start, a client probably encounters running errors.Go to the All Configurations page of Yarn and enter a par",
"doc_type":"cmpntguide",
"kw":"Configuring the Number of ApplicationMaster Retries,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Number of ApplicationMaster Retries",
"githuburl":""
},
{
"uri":"mrs_01_0861.html",
+ "node_id":"mrs_01_0861.xml",
"product_code":"mrs",
- "code":"753",
+ "code":"751",
"des":"This section applies to clusters of MRS 3.x or later.During the process of starting the configuration, when the ApplicationMaster creates a container, the allocated memor",
"doc_type":"cmpntguide",
"kw":"Configure the ApplicationMaster to Automatically Adjust the Allocated Memory,Using Yarn,Component Op",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configure the ApplicationMaster to Automatically Adjust the Allocated Memory",
"githuburl":""
},
{
"uri":"mrs_01_0862.html",
+ "node_id":"mrs_01_0862.xml",
"product_code":"mrs",
- "code":"754",
+ "code":"752",
"des":"The value of the yarn.http.policy parameter must be consistent on both the server and clients. Web UIs on clients will be garbled if an inconsistency exists, for example,",
"doc_type":"cmpntguide",
"kw":"Configuring the Access Channel Protocol,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Access Channel Protocol",
"githuburl":""
},
{
"uri":"mrs_01_0863.html",
+ "node_id":"mrs_01_0863.xml",
"product_code":"mrs",
- "code":"755",
+ "code":"753",
"des":"If memory usage of the submitted application cannot be estimated, you can modify the configuration on the server to determine whether to check the memory usage.If the mem",
"doc_type":"cmpntguide",
"kw":"Configuring Memory Usage Detection,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Memory Usage Detection",
"githuburl":""
},
{
"uri":"mrs_01_0864.html",
+ "node_id":"mrs_01_0864.xml",
"product_code":"mrs",
- "code":"756",
+ "code":"754",
"des":"If the custom scheduler is set in ResourceManager, you can set the corresponding web page and other Web applications for the custom scheduler.Go to the All Configurations",
"doc_type":"cmpntguide",
"kw":"Configuring the Additional Scheduler WebUI,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Additional Scheduler WebUI",
"githuburl":""
},
{
"uri":"mrs_01_0865.html",
+ "node_id":"mrs_01_0865.xml",
"product_code":"mrs",
- "code":"757",
+ "code":"755",
"des":"The Yarn Restart feature includes ResourceManager Restart and NodeManager Restart.When ResourceManager Restart is enabled, the new active ResourceManager node loads the i",
"doc_type":"cmpntguide",
"kw":"Configuring Yarn Restart,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Yarn Restart",
"githuburl":""
},
{
"uri":"mrs_01_0866.html",
+ "node_id":"mrs_01_0866.xml",
"product_code":"mrs",
- "code":"758",
+ "code":"756",
"des":"This section applies to clusters of MRS 3.x or later.In YARN, ApplicationMasters run on NodeManagers just like every other container (ignoring unmanaged ApplicationMaster",
"doc_type":"cmpntguide",
"kw":"Configuring ApplicationMaster Work Preserving,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring ApplicationMaster Work Preserving",
"githuburl":""
},
{
"uri":"mrs_01_0867.html",
+ "node_id":"mrs_01_0867.xml",
"product_code":"mrs",
- "code":"759",
+ "code":"757",
"des":"This section applies to clusters of MRS 3.x or later.The default log level of localized container is INFO. You can change the log level by configuring yarn.nodemanager.co",
"doc_type":"cmpntguide",
"kw":"Configuring the Localized Log Levels,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the Localized Log Levels",
"githuburl":""
},
{
"uri":"mrs_01_0868.html",
+ "node_id":"mrs_01_0868.xml",
"product_code":"mrs",
- "code":"760",
+ "code":"758",
"des":"This section applies to clusters of MRS 3.x or later.Currently, YARN allows the user that starts the NodeManager to run the task submitted by all other users, or the user",
"doc_type":"cmpntguide",
"kw":"Configuring Users That Run Tasks,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring Users That Run Tasks",
"githuburl":""
},
{
"uri":"mrs_01_0870.html",
+ "node_id":"mrs_01_0870.xml",
"product_code":"mrs",
- "code":"761",
+ "code":"759",
"des":"The default paths for saving Yarn logs are as follows:ResourceManager: /var/log/Bigdata/yarn/rm (run logs) and /var/log/Bigdata/audit/yarn/rm (audit logs)NodeManager: /va",
"doc_type":"cmpntguide",
"kw":"Yarn Log Overview,Using Yarn,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Yarn Log Overview",
"githuburl":""
},
{
"uri":"mrs_01_0871.html",
+ "node_id":"mrs_01_0871.xml",
"product_code":"mrs",
- "code":"762",
+ "code":"760",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Yarn Performance Tuning",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Yarn Performance Tuning",
"githuburl":""
},
{
"uri":"mrs_01_0872.html",
+ "node_id":"mrs_01_0872.xml",
"product_code":"mrs",
- "code":"763",
+ "code":"761",
"des":"The capacity scheduler of ResourceManager implements job preemption to simplify job running in queues and improve resource utilization. The process is as follows:Assume t",
"doc_type":"cmpntguide",
"kw":"Preempting a Task,Yarn Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Preempting a Task",
"githuburl":""
},
{
"uri":"mrs_01_0873.html",
+ "node_id":"mrs_01_0873.xml",
"product_code":"mrs",
- "code":"764",
+ "code":"762",
"des":"The resource contention scenarios of a cluster are as follows:Submit two jobs (Job 1 and Job 2) with lower priorities.Some tasks of running Job 1 and Job 2 are in the run",
"doc_type":"cmpntguide",
"kw":"Setting the Task Priority,Yarn Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Setting the Task Priority",
"githuburl":""
},
{
"uri":"mrs_01_0874.html",
+ "node_id":"mrs_01_0874.xml",
"product_code":"mrs",
- "code":"765",
+ "code":"763",
"des":"After the scheduler of a big data cluster is properly configured, you can adjust the available memory, CPU resources, and local disk of each node to optimize the performa",
"doc_type":"cmpntguide",
"kw":"Optimizing Node Configuration,Yarn Performance Tuning,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Optimizing Node Configuration",
"githuburl":""
},
{
"uri":"mrs_01_2077.html",
+ "node_id":"mrs_01_2077.xml",
"product_code":"mrs",
- "code":"766",
+ "code":"764",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues About Yarn",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About Yarn",
"githuburl":""
},
{
"uri":"mrs_01_2078.html",
+ "node_id":"mrs_01_2078.xml",
"product_code":"mrs",
- "code":"767",
+ "code":"765",
"des":"Why mounted directory for Container is not cleared after the completion of the job while using CGroups?The mounted path for the Container should be cleared even if job is",
"doc_type":"cmpntguide",
"kw":"Why Mounted Directory for Container is Not Cleared After the Completion of the Job While Using CGrou",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Mounted Directory for Container is Not Cleared After the Completion of the Job While Using CGroups?",
"githuburl":""
},
{
"uri":"mrs_01_2079.html",
+ "node_id":"mrs_01_2079.xml",
"product_code":"mrs",
- "code":"768",
+ "code":"766",
"des":"Why is the HDFS_DELEGATION_TOKEN expired exception reported when a job fails in security mode?HDFS_DELEGATION_TOKEN expires because the token is not updated or it is acce",
"doc_type":"cmpntguide",
"kw":"Why the Job Fails with HDFS_DELEGATION_TOKEN Expired Exception?,Common Issues About Yarn,Component O",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why the Job Fails with HDFS_DELEGATION_TOKEN Expired Exception?",
"githuburl":""
},
{
"uri":"mrs_01_2080.html",
+ "node_id":"mrs_01_2080.xml",
"product_code":"mrs",
- "code":"769",
+ "code":"767",
"des":"If Yarn is restarted in either of the following scenarios, local logs will not be deleted as scheduled and will be retained permanently:When Yarn is restarted during task",
"doc_type":"cmpntguide",
"kw":"Why Are Local Logs Not Deleted After YARN Is Restarted?,Common Issues About Yarn,Component Operation",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Are Local Logs Not Deleted After YARN Is Restarted?",
"githuburl":""
},
{
"uri":"mrs_01_2081.html",
+ "node_id":"mrs_01_2081.xml",
"product_code":"mrs",
- "code":"770",
+ "code":"768",
"des":"Why the task does not fail even though AppAttempts restarts due to failure for more than two times?During the task execution process, if the ContainerExitStatus returns v",
"doc_type":"cmpntguide",
"kw":"Why the Task Does Not Fail Even Though AppAttempts Restarts for More Than Two Times?,Common Issues A",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why the Task Does Not Fail Even Though AppAttempts Restarts for More Than Two Times?",
"githuburl":""
},
{
"uri":"mrs_01_2082.html",
+ "node_id":"mrs_01_2082.xml",
"product_code":"mrs",
- "code":"771",
+ "code":"769",
"des":"After I moved an application from one queue to another, why is it moved back to the original queue after ResourceManager restarts?This problem is caused by the constraint",
"doc_type":"cmpntguide",
"kw":"Why Is an Application Moved Back to the Original Queue After ResourceManager Restarts?,Common Issues",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Is an Application Moved Back to the Original Queue After ResourceManager Restarts?",
"githuburl":""
},
{
"uri":"mrs_01_2083.html",
+ "node_id":"mrs_01_2083.xml",
"product_code":"mrs",
- "code":"772",
+ "code":"770",
"des":"Why does Yarn not release the blacklist even all nodes are added to the blacklist?In Yarn, when the number of application nodes added to the blacklist by ApplicationMaste",
"doc_type":"cmpntguide",
"kw":"Why Does Yarn Not Release the Blacklist Even All Nodes Are Added to the Blacklist?,Common Issues Abo",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does Yarn Not Release the Blacklist Even All Nodes Are Added to the Blacklist?",
"githuburl":""
},
{
"uri":"mrs_01_2084.html",
+ "node_id":"mrs_01_2084.xml",
"product_code":"mrs",
- "code":"773",
+ "code":"771",
"des":"The switchover of ResourceManager occurs continuously when multiple, for example 2,000, tasks are running concurrently, causing the Yarn service unavailable.The cause is ",
"doc_type":"cmpntguide",
"kw":"Why Does the Switchover of ResourceManager Occur Continuously?,Common Issues About Yarn,Component Op",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does the Switchover of ResourceManager Occur Continuously?",
"githuburl":""
},
{
"uri":"mrs_01_2085.html",
+ "node_id":"mrs_01_2085.xml",
"product_code":"mrs",
- "code":"774",
+ "code":"772",
"des":"Why does a new application fail if a NodeManager has been in unhealthy status for 10 minutes?When nodeSelectPolicy is set to SEQUENCE and the first NodeManager connected ",
"doc_type":"cmpntguide",
"kw":"Why Does a New Application Fail If a NodeManager Has Been in Unhealthy Status for 10 Minutes?,Common",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does a New Application Fail If a NodeManager Has Been in Unhealthy Status for 10 Minutes?",
"githuburl":""
},
{
"uri":"mrs_01_2087.html",
+ "node_id":"mrs_01_2087.xml",
"product_code":"mrs",
- "code":"775",
+ "code":"773",
"des":"Why does an error occur when I query the applicationID of a completed or non-existing application using the RESTful APIs?The Superior scheduler only stores the applicatio",
"doc_type":"cmpntguide",
"kw":"Why Does an Error Occur When I Query the ApplicationID of a Completed or Non-existing Application Us",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does an Error Occur When I Query the ApplicationID of a Completed or Non-existing Application Using the RESTful APIs?",
"githuburl":""
},
{
"uri":"mrs_01_2088.html",
+ "node_id":"mrs_01_2088.xml",
"product_code":"mrs",
- "code":"776",
+ "code":"774",
"des":"In Superior scheduling mode, if a single NodeManager is faulty, why may the MapReduce tasks fail?In normal cases, when the attempt of a single task of an application fail",
"doc_type":"cmpntguide",
"kw":"Why May A Single NodeManager Fault Cause MapReduce Task Failures in the Superior Scheduling Mode?,Co",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why May A Single NodeManager Fault Cause MapReduce Task Failures in the Superior Scheduling Mode?",
"githuburl":""
},
{
"uri":"mrs_01_2089.html",
+ "node_id":"mrs_01_2089.xml",
"product_code":"mrs",
- "code":"777",
+ "code":"775",
"des":"When a queue is deleted when there are applications running in it, these applications are moved to the \"lost_and_found\" queue. When these applications are moved back to a",
"doc_type":"cmpntguide",
"kw":"Why Are Applications Suspended After They Are Moved From Lost_and_Found Queue to Another Queue?,Comm",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Are Applications Suspended After They Are Moved From Lost_and_Found Queue to Another Queue?",
"githuburl":""
},
{
"uri":"mrs_01_2090.html",
+ "node_id":"mrs_01_2090.xml",
"product_code":"mrs",
- "code":"778",
+ "code":"776",
"des":"How do I limit the size of application diagnostic messages stored in the ZKstore?In some cases, it has been observed that diagnostic messages may grow infinitely. Because",
"doc_type":"cmpntguide",
"kw":"How Do I Limit the Size of Application Diagnostic Messages Stored in the ZKstore?,Common Issues Abou",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Limit the Size of Application Diagnostic Messages Stored in the ZKstore?",
"githuburl":""
},
{
"uri":"mrs_01_2091.html",
+ "node_id":"mrs_01_2091.xml",
"product_code":"mrs",
- "code":"779",
+ "code":"777",
"des":"Why does a MapReduce job fail to run when a non-ViewFS file system is configured as ViewFS?When a non-ViewFS file system is configured as a ViewFS using cluster, the user",
"doc_type":"cmpntguide",
"kw":"Why Does a MapReduce Job Fail to Run When a Non-ViewFS File System Is Configured as ViewFS?,Common I",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does a MapReduce Job Fail to Run When a Non-ViewFS File System Is Configured as ViewFS?",
"githuburl":""
},
{
"uri":"mrs_01_24051.html",
+ "node_id":"mrs_01_24051.xml",
"product_code":"mrs",
- "code":"780",
+ "code":"778",
"des":"After the Native Task feature is enabled, Reduce tasks fail to run in some OSs.When -Dmapreduce.job.map.output.collector.class=org.apache.hadoop.mapred.nativetask.NativeM",
"doc_type":"cmpntguide",
"kw":"Why Do Reduce Tasks Fail to Run in Some OSs After the Native Task Feature is Enabled?,Common Issues ",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Do Reduce Tasks Fail to Run in Some OSs After the Native Task Feature is Enabled?",
"githuburl":""
},
{
"uri":"mrs_01_2092.html",
+ "node_id":"mrs_01_2092.xml",
"product_code":"mrs",
- "code":"781",
+ "code":"779",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using ZooKeeper",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using ZooKeeper",
"githuburl":""
},
{
"uri":"mrs_01_2093.html",
+ "node_id":"mrs_01_2093.xml",
"product_code":"mrs",
- "code":"782",
+ "code":"780",
"des":"ZooKeeper is an open-source, highly reliable, and distributed consistency coordination service. ZooKeeper is designed to solve the problem that data consistency cannot be",
"doc_type":"cmpntguide",
"kw":"Using ZooKeeper from Scratch,Using ZooKeeper,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using ZooKeeper from Scratch",
"githuburl":""
},
{
"uri":"mrs_01_2094.html",
+ "node_id":"mrs_01_2094.xml",
"product_code":"mrs",
- "code":"783",
+ "code":"781",
"des":"Navigation path for setting parameters:Go to the All Configurations page of ZooKeeper by referring to Modifying Cluster Service Configuration Parameters. Enter a paramete",
"doc_type":"cmpntguide",
"kw":"Common ZooKeeper Parameters,Using ZooKeeper,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common ZooKeeper Parameters",
"githuburl":""
},
{
"uri":"mrs_01_2095.html",
+ "node_id":"mrs_01_2095.xml",
"product_code":"mrs",
- "code":"784",
+ "code":"782",
"des":"Use a ZooKeeper client in an O&M scenario or service scenario.You have installed the client. For example, the installation directory is /opt/client. The client directory ",
"doc_type":"cmpntguide",
"kw":"Using a ZooKeeper Client,Using ZooKeeper,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using a ZooKeeper Client",
"githuburl":""
},
{
"uri":"mrs_01_2097.html",
+ "node_id":"mrs_01_2097.xml",
"product_code":"mrs",
- "code":"785",
+ "code":"783",
"des":"Configure znode permission of ZooKeeper.ZooKeeper uses an access control list (ACL) to implement znode access control. The ZooKeeper client specifies a znode ACL, and the",
"doc_type":"cmpntguide",
"kw":"Configuring the ZooKeeper Permissions,Using ZooKeeper,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Configuring the ZooKeeper Permissions",
"githuburl":""
},
{
"uri":"mrs_01_2106.html",
+ "node_id":"mrs_01_2106.xml",
"product_code":"mrs",
- "code":"786",
+ "code":"784",
"des":"Log path: /var/log/Bigdata/zookeeper/quorumpeer (Run log), /var/log/Bigdata/audit/zookeeper/quorumpeer (Audit log)Log archive rule: The automatic ZooKeeper log compressio",
"doc_type":"cmpntguide",
"kw":"ZooKeeper Log Overview,Using ZooKeeper,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"ZooKeeper Log Overview",
"githuburl":""
},
{
"uri":"mrs_01_2107.html",
+ "node_id":"mrs_01_2107.xml",
"product_code":"mrs",
- "code":"787",
+ "code":"785",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Common Issues About ZooKeeper",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Common Issues About ZooKeeper",
"githuburl":""
},
{
"uri":"mrs_01_2108.html",
+ "node_id":"mrs_01_2108.xml",
"product_code":"mrs",
- "code":"788",
+ "code":"786",
"des":"After a large number of znodes are created, ZooKeeper servers in the ZooKeeper cluster become faulty and cannot be automatically recovered or restarted.Logs of followers:",
"doc_type":"cmpntguide",
"kw":"Why Do ZooKeeper Servers Fail to Start After Many znodes Are Created?,Common Issues About ZooKeeper,",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Do ZooKeeper Servers Fail to Start After Many znodes Are Created?",
"githuburl":""
},
{
"uri":"mrs_01_2109.html",
+ "node_id":"mrs_01_2109.xml",
"product_code":"mrs",
- "code":"789",
+ "code":"787",
"des":"After a large number of znodes are created in a parent directory, the ZooKeeper client will fail to fetch all child nodes of this parent directory in a single request.Log",
"doc_type":"cmpntguide",
"kw":"Why Does the ZooKeeper Server Display the java.io.IOException: Len Error Log?,Common Issues About Zo",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Does the ZooKeeper Server Display the java.io.IOException: Len Error Log?",
"githuburl":""
},
{
"uri":"mrs_01_2110.html",
+ "node_id":"mrs_01_2110.xml",
"product_code":"mrs",
- "code":"790",
+ "code":"788",
"des":"Why four letter commands do not work with linux netcat command when secure netty configurations are enabled at Zookeeper server?For example,echo stat |netcat host portLin",
"doc_type":"cmpntguide",
"kw":"Why Four Letter Commands Don't Work With Linux netcat Command When Secure Netty Configurations Are E",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Four Letter Commands Don't Work With Linux netcat Command When Secure Netty Configurations Are Enabled at Zookeeper Server?",
"githuburl":""
},
{
"uri":"mrs_01_2111.html",
+ "node_id":"mrs_01_2111.xml",
"product_code":"mrs",
- "code":"791",
+ "code":"789",
"des":"How to check whether the role of a ZooKeeper instance is a leader or follower.Log in to Manager and choose Cluster > Name of the desired cluster > Service > ZooKeeper > I",
"doc_type":"cmpntguide",
"kw":"How Do I Check Which ZooKeeper Instance Is a Leader?,Common Issues About ZooKeeper,Component Operati",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"How Do I Check Which ZooKeeper Instance Is a Leader?",
"githuburl":""
},
{
"uri":"mrs_01_2112.html",
+ "node_id":"mrs_01_2112.xml",
"product_code":"mrs",
- "code":"792",
+ "code":"790",
"des":"When the IBM JDK is used, the client fails to connect to ZooKeeper.The possible cause is that the jaas.conf file format of the IBM JDK is different from that of the commo",
"doc_type":"cmpntguide",
"kw":"Why Cannot the Client Connect to ZooKeeper using the IBM JDK?,Common Issues About ZooKeeper,Componen",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Cannot the Client Connect to ZooKeeper using the IBM JDK?",
"githuburl":""
},
{
"uri":"mrs_01_2113.html",
+ "node_id":"mrs_01_2113.xml",
"product_code":"mrs",
- "code":"793",
+ "code":"791",
"des":"The ZooKeeper client fails to refresh a TGT and therefore ZooKeeper cannot be accessed. The error message is as follows:ZooKeeper uses the system command kinit – R to ref",
"doc_type":"cmpntguide",
"kw":"What Should I Do When the ZooKeeper Client Fails to Refresh a TGT?,Common Issues About ZooKeeper,Com",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"What Should I Do When the ZooKeeper Client Fails to Refresh a TGT?",
"githuburl":""
},
{
"uri":"mrs_01_2114.html",
+ "node_id":"mrs_01_2114.xml",
"product_code":"mrs",
- "code":"794",
+ "code":"792",
"des":"When the client connects to a non-leader instance, run the deleteall command to delete a large number of znodes, the error message \"Node does not exist\" is displayed, but",
"doc_type":"cmpntguide",
"kw":"Why Is Message \"Node does not exist\" Displayed when A Large Number of Znodes Are Deleted Using the d",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Why Is Message \"Node does not exist\" Displayed when A Large Number of Znodes Are Deleted Using the deleteallCommand",
"githuburl":""
},
{
"uri":"mrs_01_2122.html",
+ "node_id":"mrs_01_2122.xml",
"product_code":"mrs",
- "code":"795",
+ "code":"793",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Appendix",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Appendix",
"githuburl":""
},
{
"uri":"mrs_01_2125.html",
+ "node_id":"mrs_01_2125.xml",
"product_code":"mrs",
- "code":"796",
+ "code":"794",
"des":"For MRS 1.9.2 or later: You can modify service configuration parameters on the cluster management page of the MRS management console.Log in to the MRS console. In the lef",
"doc_type":"cmpntguide",
"kw":"Modifying Cluster Service Configuration Parameters,Appendix,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Modifying Cluster Service Configuration Parameters",
"githuburl":""
},
{
"uri":"mrs_01_2123.html",
+ "node_id":"mrs_01_2123.xml",
"product_code":"mrs",
- "code":"797",
+ "code":"795",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Accessing Manager",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Accessing Manager",
"githuburl":""
},
{
"uri":"mrs_01_0102.html",
+ "node_id":"mrs_01_0102.xml",
"product_code":"mrs",
- "code":"798",
+ "code":"796",
"des":"Clusters of versions earlier than MRS 3.x use MRS Manager to monitor, configure, and manage clusters. You can open the MRS Manager page on the MRS console.If you have bou",
"doc_type":"cmpntguide",
"kw":"Accessing MRS Manager (Versions Earlier Than MRS 3.x),Accessing Manager,Component Operation Guide (N",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Accessing MRS Manager (Versions Earlier Than MRS 3.x)",
"githuburl":""
},
{
"uri":"mrs_01_2124.html",
+ "node_id":"mrs_01_2124.xml",
"product_code":"mrs",
- "code":"799",
+ "code":"797",
"des":"In MRS 3.x or later, FusionInsight Manager is used to monitor, configure, and manage clusters. After the cluster is installed, you can use the account to log in to Fusion",
"doc_type":"cmpntguide",
"kw":"Accessing FusionInsight Manager (MRS 3.x or Later),Accessing Manager,Component Operation Guide (Norm",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Accessing FusionInsight Manager (MRS 3.x or Later)",
"githuburl":""
},
{
"uri":"mrs_01_2126.html",
+ "node_id":"mrs_01_2126.xml",
"product_code":"mrs",
- "code":"800",
+ "code":"798",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"cmpntguide",
"kw":"Using an MRS Client",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Using an MRS Client",
"githuburl":""
},
{
"uri":"mrs_01_2127.html",
+ "node_id":"mrs_01_2127.xml",
"product_code":"mrs",
- "code":"801",
+ "code":"799",
"des":"This section describes how to install clients of all services (excluding Flume) in an MRS cluster. For details about how to install the Flume client, see Installing the F",
"doc_type":"cmpntguide",
"kw":"Installing a Client (Version 3.x or Later),Using an MRS Client,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Installing a Client (Version 3.x or Later)",
"githuburl":""
},
{
"uri":"mrs_01_2128.html",
+ "node_id":"mrs_01_2128.xml",
"product_code":"mrs",
- "code":"802",
+ "code":"800",
"des":"An MRS client is required. The MRS cluster client can be installed on the Master or Core node in the cluster or on a node outside the cluster.After a cluster of versions ",
"doc_type":"cmpntguide",
"kw":"Installing a Client (Versions Earlier Than 3.x),Using an MRS Client,Component Operation Guide (Norma",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Installing a Client (Versions Earlier Than 3.x)",
"githuburl":""
},
{
"uri":"mrs_01_2129.html",
+ "node_id":"mrs_01_2129.xml",
"product_code":"mrs",
- "code":"803",
+ "code":"801",
"des":"A cluster provides a client for you to connect to a server, view task results, or manage data. If you modify service configuration parameters on Manager and restart the s",
"doc_type":"cmpntguide",
"kw":"Updating a Client (Version 3.x or Later),Using an MRS Client,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Updating a Client (Version 3.x or Later)",
"githuburl":""
},
{
"uri":"mrs_01_2130.html",
+ "node_id":"mrs_01_2130.xml",
"product_code":"mrs",
- "code":"804",
+ "code":"802",
"des":"This section applies to clusters of versions earlier than MRS 3.x. For MRS 3.x or later, see Updating a Client (Version 3.x or Later).ScenarioAn MRS cluster provides a cl",
"doc_type":"cmpntguide",
"kw":"Updating a Client (Versions Earlier Than 3.x),Using an MRS Client,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+ "documenttype":"cmpntguide",
+ "prodname":"mrs"
+ }
+ ],
"title":"Updating a Client (Versions Earlier Than 3.x)",
"githuburl":""
},
{
"uri":"en-us_topic_0000001351362309.html",
+ "node_id":"en-us_topic_0000001351362309.xml",
"product_code":"",
- "code":"805",
+ "code":"803",
"des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"doc_type":"",
"kw":"Change History,Component Operation Guide (Normal)",
+ "search_title":"",
+ "metedata":[
+ {
+
+ }
+ ],
"title":"Change History",
"githuburl":""
}
diff --git a/docs/mrs/component-operation-guide/CLASS.TXT.json b/docs/mrs/component-operation-guide/CLASS.TXT.json
index c7c2967c..17a0624f 100644
--- a/docs/mrs/component-operation-guide/CLASS.TXT.json
+++ b/docs/mrs/component-operation-guide/CLASS.TXT.json
@@ -1781,15 +1781,6 @@
"p_code":"191",
"code":"198"
},
- {
- "desc":"This section applies only to MRS 3.1.0 or later.This section describes common GeoMesa commands. For more GeoMesa commands, visit https://www.geomesa.org/documentation/use",
- "product_code":"mrs",
- "title":"GeoMesa Command Line",
- "uri":"mrs_01_24119.html",
- "doc_type":"cmpntguide",
- "p_code":"191",
- "code":"199"
- },
{
"desc":"HBase disaster recovery (DR), a key feature that is used to ensure high availability (HA) of the HBase cluster system, provides the real-time remote DR function for HBase",
"product_code":"mrs",
@@ -1797,7 +1788,7 @@
"uri":"mrs_01_1609.html",
"doc_type":"cmpntguide",
"p_code":"191",
- "code":"200"
+ "code":"199"
},
{
"desc":"HBase encodes data blocks in HFiles to reduce duplicate keys in KeyValues, reducing used space. Currently, the following data block encoding modes are supported: NONE, PR",
@@ -1806,7 +1797,7 @@
"uri":"mrs_01_24112.html",
"doc_type":"cmpntguide",
"p_code":"191",
- "code":"201"
+ "code":"200"
},
{
"desc":"The system administrator can configure HBase cluster DR to improve system availability. If the active cluster in the DR environment is faulty and the connection to the HB",
@@ -1815,7 +1806,7 @@
"uri":"mrs_01_1610.html",
"doc_type":"cmpntguide",
"p_code":"191",
- "code":"202"
+ "code":"201"
},
{
"desc":"The HBase cluster in the current environment is a DR cluster. Due to some reasons, the active and standby clusters need to be switched over. That is, the standby cluster ",
@@ -1824,7 +1815,7 @@
"uri":"mrs_01_1611.html",
"doc_type":"cmpntguide",
"p_code":"191",
- "code":"203"
+ "code":"202"
},
{
"desc":"The Apache HBase official website provides the function of importing data in batches. For details, see the description of the Import and ImportTsv tools at http://hbase.a",
@@ -1833,7 +1824,7 @@
"uri":"mrs_01_1612.html",
"doc_type":"cmpntguide",
"p_code":"191",
- "code":"204"
+ "code":"203"
},
{
"desc":"In the actual application scenario, data in various sizes needs to be stored, for example, image data and documents. Data whose size is smaller than 10 MB can be stored i",
@@ -1842,7 +1833,7 @@
"uri":"mrs_01_1631.html",
"doc_type":"cmpntguide",
"p_code":"191",
- "code":"205"
+ "code":"204"
},
{
"desc":"This topic provides the procedure to configure the secure HBase replication during cross-realm Kerberos setup in security mode.Mapping for all the FQDNs to their realms s",
@@ -1851,7 +1842,7 @@
"uri":"mrs_01_1009.html",
"doc_type":"cmpntguide",
"p_code":"191",
- "code":"206"
+ "code":"205"
},
{
"desc":"In a faulty environment, there are possibilities that a region may be stuck in transition for longer duration due to various reasons like slow region server response, uns",
@@ -1860,7 +1851,7 @@
"uri":"mrs_01_1010.html",
"doc_type":"cmpntguide",
"p_code":"191",
- "code":"207"
+ "code":"206"
},
{
"desc":"Log path: The default storage path of HBase logs is /var/log/Bigdata/hbase/Role name.HMaster: /var/log/Bigdata/hbase/hm (run logs) and /var/log/Bigdata/audit/hbase/hm (au",
@@ -1869,7 +1860,7 @@
"uri":"mrs_01_1056.html",
"doc_type":"cmpntguide",
"p_code":"191",
- "code":"208"
+ "code":"207"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -1878,7 +1869,7 @@
"uri":"mrs_01_1013.html",
"doc_type":"cmpntguide",
"p_code":"191",
- "code":"209"
+ "code":"208"
},
{
"desc":"BulkLoad uses MapReduce jobs to directly generate files that comply with the internal data format of HBase, and then loads the generated StoreFiles to a running cluster. ",
@@ -1886,8 +1877,8 @@
"title":"Improving the BulkLoad Efficiency",
"uri":"mrs_01_1636.html",
"doc_type":"cmpntguide",
- "p_code":"209",
- "code":"210"
+ "p_code":"208",
+ "code":"209"
},
{
"desc":"In the scenario where a large number of requests are continuously put, setting the following two parameters to false can greatly improve the Put performance.hbase.regions",
@@ -1895,8 +1886,8 @@
"title":"Improving Put Performance",
"uri":"mrs_01_1637.html",
"doc_type":"cmpntguide",
- "p_code":"209",
- "code":"211"
+ "p_code":"208",
+ "code":"210"
},
{
"desc":"HBase has many configuration parameters related to read and write performance. The configuration parameters need to be adjusted based on the read/write request loads. Thi",
@@ -1904,8 +1895,8 @@
"title":"Optimizing Put and Scan Performance",
"uri":"mrs_01_1016.html",
"doc_type":"cmpntguide",
- "p_code":"209",
- "code":"212"
+ "p_code":"208",
+ "code":"211"
},
{
"desc":"Scenarios where data needs to be written to HBase in real time, or large-scale and consecutive put scenariosThis section applies to MRS 3.x and later versions.The HBase p",
@@ -1913,8 +1904,8 @@
"title":"Improving Real-time Data Write Performance",
"uri":"mrs_01_1017.html",
"doc_type":"cmpntguide",
- "p_code":"209",
- "code":"213"
+ "p_code":"208",
+ "code":"212"
},
{
"desc":"HBase data needs to be read.The get or scan interface of HBase has been invoked and data is read in real time from HBase.Data reading server tuningParameter portal:Go to ",
@@ -1922,8 +1913,8 @@
"title":"Improving Real-time Data Read Performance",
"uri":"mrs_01_1018.html",
"doc_type":"cmpntguide",
- "p_code":"209",
- "code":"214"
+ "p_code":"208",
+ "code":"213"
},
{
"desc":"When the number of clusters reaches a certain scale, the default settings of the Java virtual machine (JVM) cannot meet the cluster requirements. In this case, the cluste",
@@ -1931,8 +1922,8 @@
"title":"Optimizing JVM Parameters",
"uri":"mrs_01_1019.html",
"doc_type":"cmpntguide",
- "p_code":"209",
- "code":"215"
+ "p_code":"208",
+ "code":"214"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -1941,7 +1932,7 @@
"uri":"mrs_01_1638.html",
"doc_type":"cmpntguide",
"p_code":"191",
- "code":"216"
+ "code":"215"
},
{
"desc":"A HBase server is faulty and cannot provide services. In this case, when a table operation is performed on the HBase client, why is the operation suspended and no respons",
@@ -1949,8 +1940,8 @@
"title":"Why Does a Client Keep Failing to Connect to a Server for a Long Time?",
"uri":"mrs_01_1639.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"217"
+ "p_code":"215",
+ "code":"216"
},
{
"desc":"Why submitted operations fail by stopping BulkLoad on the client during BulkLoad data importing?When BulkLoad is enabled on the client, a partitioner file is generated an",
@@ -1958,8 +1949,8 @@
"title":"Operation Failures Occur in Stopping BulkLoad On the Client",
"uri":"mrs_01_1640.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"218"
+ "p_code":"215",
+ "code":"217"
},
{
"desc":"When HBase consecutively deletes and creates the same table, why may a table creation exception occur?Execution process: Disable Table > Drop Table > Create Table > Disab",
@@ -1967,8 +1958,8 @@
"title":"Why May a Table Creation Exception Occur When HBase Deletes or Creates the Same Table Consecutively?",
"uri":"mrs_01_1641.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"219"
+ "p_code":"215",
+ "code":"218"
},
{
"desc":"Why other services become unstable if HBase sets up a large number of connections over the network port?When the OS command lsof or netstat is run, it is found that many ",
@@ -1976,8 +1967,8 @@
"title":"Why Other Services Become Unstable If HBase Sets up A Large Number of Connections over the Network Port?",
"uri":"mrs_01_1642.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"220"
+ "p_code":"215",
+ "code":"219"
},
{
"desc":"The HBase bulkLoad task (a single table contains 26 TB data) has 210,000 maps and 10,000 reduce tasks (in MRS 3.x or later), and the task fails.ZooKeeper I/O bottleneck o",
@@ -1985,8 +1976,8 @@
"title":"Why Does the HBase BulkLoad Task (One Table Has 26 TB Data) Consisting of 210,000 Map Tasks and 10,000 Reduce Tasks Fail?",
"uri":"mrs_01_1643.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"221"
+ "p_code":"215",
+ "code":"220"
},
{
"desc":"How do I restore a region in the RIT state for a long time?Log in to the HMaster Web UI, choose Procedure & Locks in the navigation tree, and check whether any process ID",
@@ -1994,8 +1985,8 @@
"title":"How Do I Restore a Region in the RIT State for a Long Time?",
"uri":"mrs_01_1644.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"222"
+ "p_code":"215",
+ "code":"221"
},
{
"desc":"Why does HMaster exit due to timeout when waiting for the namespace table to go online?During the HMaster active/standby switchover or startup, HMaster performs WAL split",
@@ -2003,8 +1994,8 @@
"title":"Why Does HMaster Exits Due to Timeout When Waiting for the Namespace Table to Go Online?",
"uri":"mrs_01_1645.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"223"
+ "p_code":"215",
+ "code":"222"
},
{
"desc":"Why does the following exception occur on the client when I use the HBase client to operate table data?At the same time, the following log is displayed on RegionServer:Th",
@@ -2012,8 +2003,8 @@
"title":"Why Does SocketTimeoutException Occur When a Client Queries HBase?",
"uri":"mrs_01_1646.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"224"
+ "p_code":"215",
+ "code":"223"
},
{
"desc":"Why modified and deleted data can still be queried by using the scan command?Because of the scalability of HBase, all values specific to the versions in the queried colum",
@@ -2021,8 +2012,8 @@
"title":"Why Modified and Deleted Data Can Still Be Queried by Using the Scan Command?",
"uri":"mrs_01_1647.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"225"
+ "p_code":"215",
+ "code":"224"
},
{
"desc":"Why \"java.lang.UnsatisfiedLinkError: Permission denied\" exception thrown while starting HBase shell?During HBase shell execution JRuby create temporary files under java.i",
@@ -2030,8 +2021,8 @@
"title":"Why \"java.lang.UnsatisfiedLinkError: Permission denied\" exception thrown while starting HBase shell?",
"uri":"mrs_01_1648.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"226"
+ "p_code":"215",
+ "code":"225"
},
{
"desc":"When does the RegionServers listed under \"Dead Region Servers\" on HMaster WebUI gets cleared?When an online RegionServer goes down abruptly, it is displayed under \"Dead R",
@@ -2039,8 +2030,8 @@
"title":"When does the RegionServers listed under \"Dead Region Servers\" on HMaster WebUI gets cleared?",
"uri":"mrs_01_1649.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"227"
+ "p_code":"215",
+ "code":"226"
},
{
"desc":"If the data to be imported by HBase bulkload has identical rowkeys, the data import is successful but identical query criteria produce different query results.Data with a",
@@ -2048,8 +2039,8 @@
"title":"Why Are Different Query Results Returned After I Use Same Query Criteria to Query Data Successfully Imported by HBase bulkload?",
"uri":"mrs_01_1650.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"228"
+ "p_code":"215",
+ "code":"227"
},
{
"desc":"What should I do if I fail to create tables due to the FAILED_OPEN state of Regions?If a network, HDFS, or Active HMaster fault occurs during the creation of tables, some",
@@ -2057,8 +2048,8 @@
"title":"What Should I Do If I Fail to Create Tables Due to the FAILED_OPEN State of Regions?",
"uri":"mrs_01_1651.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"229"
+ "p_code":"215",
+ "code":"228"
},
{
"desc":"In security mode, names of tables that failed to be created are unnecessarily retained in the table-lock node (default directory is /hbase/table-lock) of ZooKeeper. How d",
@@ -2066,8 +2057,8 @@
"title":"How Do I Delete Residual Table Names in the /hbase/table-lock Directory of ZooKeeper?",
"uri":"mrs_01_1652.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"230"
+ "p_code":"215",
+ "code":"229"
},
{
"desc":"Why does HBase become faulty when I set quota for the directory used by HBase in HDFS?The flush operation of a table is to write memstore data to HDFS.If the HDFS directo",
@@ -2075,8 +2066,8 @@
"title":"Why Does HBase Become Faulty When I Set a Quota for the Directory Used by HBase in HDFS?",
"uri":"mrs_01_1653.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"231"
+ "p_code":"215",
+ "code":"230"
},
{
"desc":"Why HMaster times out while waiting for namespace table to be assigned after rebuilding meta using OfflineMetaRepair tool and startups failed?HMaster abort with following",
@@ -2084,8 +2075,8 @@
"title":"Why HMaster Times Out While Waiting for Namespace Table to be Assigned After Rebuilding Meta Using OfflineMetaRepair Tool and Startups Failed",
"uri":"mrs_01_1654.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"232"
+ "p_code":"215",
+ "code":"231"
},
{
"desc":"Why messages containing FileNotFoundException and no lease are frequently displayed in the HMaster logs during the WAL splitting process?During the WAL splitting process,",
@@ -2093,8 +2084,8 @@
"title":"Why Messages Containing FileNotFoundException and no lease Are Frequently Displayed in the HMaster Logs During the WAL Splitting Process?",
"uri":"mrs_01_1655.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"233"
+ "p_code":"215",
+ "code":"232"
},
{
"desc":"When a tenant accesses Phoenix, a message is displayed indicating that the tenant has insufficient rights.You need to associate the HBase service and Yarn queues when cre",
@@ -2102,8 +2093,8 @@
"title":"Insufficient Rights When a Tenant Accesses Phoenix",
"uri":"mrs_01_1657.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"234"
+ "p_code":"215",
+ "code":"233"
},
{
"desc":"The system automatically rolls back data after an HBase recovery task fails. If \"Rollback recovery failed\" is displayed, the rollback fails. After the rollback fails, dat",
@@ -2111,8 +2102,8 @@
"title":"What Can I Do When HBase Fails to Recover a Task and a Message Is Displayed Stating \"Rollback recovery failed\"?",
"uri":"mrs_01_1659.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"235"
+ "p_code":"215",
+ "code":"234"
},
{
"desc":"When the HBaseFsck tool is used to check the region status in MRS 3.x and later versions, if the log contains ERROR: (regions region1 and region2) There is an overlap in ",
@@ -2120,8 +2111,8 @@
"title":"How Do I Fix Region Overlapping?",
"uri":"mrs_01_1660.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"236"
+ "p_code":"215",
+ "code":"235"
},
{
"desc":"(MRS 3.x and later versions) Check the hbase-omm-*.out log of the node where RegionServer fails to be started. It is found that the log contains An error report file with",
@@ -2129,8 +2120,8 @@
"title":"Why Does RegionServer Fail to Be Started When GC Parameters Xms and Xmx of HBase RegionServer Are Set to 31 GB?",
"uri":"mrs_01_1661.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"237"
+ "p_code":"215",
+ "code":"236"
},
{
"desc":"Why does the LoadIncrementalHFiles tool fail to be executed and \"Permission denied\" is displayed when a Linux user is manually created in a normal cluster and DataNode in",
@@ -2138,8 +2129,8 @@
"title":"Why Does the LoadIncrementalHFiles Tool Fail to Be Executed and \"Permission denied\" Is Displayed When Nodes in a Cluster Are Used to Import Data in Batches?",
"uri":"mrs_01_0625.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"238"
+ "p_code":"215",
+ "code":"237"
},
{
"desc":"When the sqlline script is used on the client, the error message \"import argparse\" is displayed.",
@@ -2147,8 +2138,8 @@
"title":"Why Is the Error Message \"import argparse\" Displayed When the Phoenix sqlline Script Is Used?",
"uri":"mrs_01_2210.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"239"
+ "p_code":"215",
+ "code":"238"
},
{
"desc":"When the indexed field data is updated, if a batch of data exists in the user table, the BulkLoad tool cannot update the global and partial mutable indexes.Problem Analys",
@@ -2156,8 +2147,8 @@
"title":"How Do I Deal with the Restrictions of the Phoenix BulkLoad Tool?",
"uri":"mrs_01_2211.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"240"
+ "p_code":"215",
+ "code":"239"
},
{
"desc":"When CTBase accesses the HBase service with the Ranger plug-ins enabled and you are creating a cluster table, a message is displayed indicating that the permission is ins",
@@ -2165,8 +2156,8 @@
"title":"Why a Message Is Displayed Indicating that the Permission is Insufficient When CTBase Connects to the Ranger Plug-ins?",
"uri":"mrs_01_2212.html",
"doc_type":"cmpntguide",
- "p_code":"216",
- "code":"241"
+ "p_code":"215",
+ "code":"240"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -2175,7 +2166,7 @@
"uri":"mrs_01_0790.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"242"
+ "code":"241"
},
{
"desc":"In HDFS, each file object needs to register corresponding information in the NameNode and occupies certain storage space. As the number of files increases, if the origina",
@@ -2183,8 +2174,8 @@
"title":"Configuring Memory Management",
"uri":"mrs_01_0791.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"243"
+ "p_code":"241",
+ "code":"242"
},
{
"desc":"This section describes how to create and configure an HDFS role on FusionInsight Manager. The HDFS role is granted the rights to read, write, and execute HDFS directories",
@@ -2192,8 +2183,8 @@
"title":"Creating an HDFS Role",
"uri":"mrs_01_1662.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"244"
+ "p_code":"241",
+ "code":"243"
},
{
"desc":"This section describes how to use the HDFS client in an O&M scenario or service scenario.The client has been installed.For example, the installation directory is /opt/had",
@@ -2201,8 +2192,8 @@
"title":"Using the HDFS Client",
"uri":"mrs_01_1663.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"245"
+ "p_code":"241",
+ "code":"244"
},
{
"desc":"DistCp is a tool used to perform large-amount data replication between clusters or in a cluster. It uses MapReduce tasks to implement distributed copy of a large amount o",
@@ -2210,8 +2201,8 @@
"title":"Running the DistCp Command",
"uri":"mrs_01_0794.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"246"
+ "p_code":"241",
+ "code":"245"
},
{
"desc":"This section describes the directory structure in HDFS, as shown in the following table.",
@@ -2219,8 +2210,8 @@
"title":"Overview of HDFS File System Directories",
"uri":"mrs_01_0795.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"247"
+ "p_code":"241",
+ "code":"246"
},
{
"desc":"This section applies to MRS 3.x or later clusters.If the storage directory defined by the HDFS DataNode is incorrect or the HDFS storage plan changes, the system administ",
@@ -2228,8 +2219,8 @@
"title":"Changing the DataNode Storage Directory",
"uri":"mrs_01_1664.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"248"
+ "p_code":"241",
+ "code":"247"
},
{
"desc":"The permission for some HDFS directories is 777 or 750 by default, which brings potential security risks. You are advised to modify the permission for the HDFS directorie",
@@ -2237,8 +2228,8 @@
"title":"Configuring HDFS Directory Permission",
"uri":"mrs_01_0797.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"249"
+ "p_code":"241",
+ "code":"248"
},
{
"desc":"This section applies to MRS 3.x or later.Before deploying a cluster, you can deploy a Network File System (NFS) server based on requirements to store NameNode metadata to",
@@ -2246,8 +2237,8 @@
"title":"Configuring NFS",
"uri":"mrs_01_1665.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"250"
+ "p_code":"241",
+ "code":"249"
},
{
"desc":"In HDFS, DataNode stores user files and directories as blocks, and file objects are generated on the NameNode to map each file, directory, and block on the DataNode.The f",
@@ -2255,8 +2246,8 @@
"title":"Planning HDFS Capacity",
"uri":"mrs_01_0799.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"251"
+ "p_code":"241",
+ "code":"250"
},
{
"desc":"When you open an HDFS file, an error occurs due to the limit on the number of file handles. Information similar to the following is displayed.You can contact the systemad",
@@ -2264,8 +2255,8 @@
"title":"Configuring ulimit for HBase and HDFS",
"uri":"mrs_01_0801.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"252"
+ "p_code":"241",
+ "code":"251"
},
{
"desc":"This section applies to MRS 3.x or later clusters.In the HDFS cluster, unbalanced disk usage among DataNodes may occur, for example, when new DataNodes are added to the c",
@@ -2273,8 +2264,8 @@
"title":"Balancing DataNode Capacity",
"uri":"mrs_01_1667.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"253"
+ "p_code":"241",
+ "code":"252"
},
{
"desc":"By default, NameNode randomly selects a DataNode to write files. If the disk capacity of some DataNodes in a cluster is inconsistent (the total disk capacity of some node",
@@ -2282,8 +2273,8 @@
"title":"Configuring Replica Replacement Policy for Heterogeneous Capacity Among DataNodes",
"uri":"mrs_01_0804.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"254"
+ "p_code":"241",
+ "code":"253"
},
{
"desc":"Generally, multiple services are deployed in a cluster, and the storage of most services depends on the HDFS file system. Different components such as Spark and Yarn or c",
@@ -2291,8 +2282,8 @@
"title":"Configuring the Number of Files in a Single HDFS Directory",
"uri":"mrs_01_0805.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"255"
+ "p_code":"241",
+ "code":"254"
},
{
"desc":"On HDFS, deleted files are moved to the recycle bin (trash can) so that the data deleted by mistake can be restored.You can set the time threshold for storing files in th",
@@ -2300,8 +2291,8 @@
"title":"Configuring the Recycle Bin Mechanism",
"uri":"mrs_01_0806.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"256"
+ "p_code":"241",
+ "code":"255"
},
{
"desc":"HDFS allows users to modify the default permissions of files and directories. The default mask provided by the HDFS for creating file and directory permissions is 022. If",
@@ -2309,8 +2300,8 @@
"title":"Setting Permissions on Files and Directories",
"uri":"mrs_01_0807.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"257"
+ "p_code":"241",
+ "code":"256"
},
{
"desc":"In security mode, users can flexibly set the maximum token lifetime and token renewal interval in HDFS based on cluster requirements.Navigation path for setting parameter",
@@ -2318,8 +2309,8 @@
"title":"Setting the Maximum Lifetime and Renewal Interval of a Token",
"uri":"mrs_01_0808.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"258"
+ "p_code":"241",
+ "code":"257"
},
{
"desc":"In the open source version, if multiple data storage volumes are configured for a DataNode, the DataNode stops providing services by default if one of the volumes is dama",
@@ -2327,8 +2318,8 @@
"title":"Configuring the Damaged Disk Volume",
"uri":"mrs_01_1669.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"259"
+ "p_code":"241",
+ "code":"258"
},
{
"desc":"Encrypted channel is an encryption protocol of remote procedure call (RPC) in HDFS. When a user invokes RPC, the user's login name will be transmitted to RPC through RPC ",
@@ -2336,8 +2327,8 @@
"title":"Configuring Encrypted Channels",
"uri":"mrs_01_0810.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"260"
+ "p_code":"241",
+ "code":"259"
},
{
"desc":"Clients probably encounter running errors when the network is not stable. Users can adjust the following parameter values to improve the running efficiency.Go to the All ",
@@ -2345,8 +2336,8 @@
"title":"Reducing the Probability of Abnormal Client Application Operation When the Network Is Not Stable",
"uri":"mrs_01_0811.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"261"
+ "p_code":"241",
+ "code":"260"
},
{
"desc":"This section applies to MRS 3.x or later.In the existing default DFSclient failover proxy provider, if a NameNode in a process is faulty, all HDFS client instances in the",
@@ -2354,8 +2345,8 @@
"title":"Configuring the NameNode Blacklist",
"uri":"mrs_01_1670.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"262"
+ "p_code":"241",
+ "code":"261"
},
{
"desc":"This section applies to MRS 3.x or later.Several finished Hadoop clusters are faulty because the NameNode is overloaded and unresponsive.Such problem is caused by the ini",
@@ -2363,8 +2354,8 @@
"title":"Optimizing HDFS NameNode RPC QoS",
"uri":"mrs_01_1672.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"263"
+ "p_code":"241",
+ "code":"262"
},
{
"desc":"When the speed at which the client writes data to the HDFS is greater than the disk bandwidth of the DataNode, the disk bandwidth is fully occupied. As a result, the Data",
@@ -2372,8 +2363,8 @@
"title":"Optimizing HDFS DataNode RPC QoS",
"uri":"mrs_01_1673.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"264"
+ "p_code":"241",
+ "code":"263"
},
{
"desc":"When the Yarn local directory and DataNode directory are on the same disk, the disk with larger capacity can run more tasks. Therefore, more intermediate data is stored i",
@@ -2381,8 +2372,8 @@
"title":"Configuring Reserved Percentage of Disk Usage on DataNodes",
"uri":"mrs_01_1675.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"265"
+ "p_code":"241",
+ "code":"264"
},
{
"desc":"You need to configure the nodes for storing HDFS file data blocks based on data features. You can configure a label expression to an HDFS directory or file and assign one",
@@ -2390,8 +2381,8 @@
"title":"Configuring HDFS NodeLabel",
"uri":"mrs_01_1676.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"266"
+ "p_code":"241",
+ "code":"265"
},
{
"desc":"AZ Mover is a copy migration tool used to move copies to meet the new AZ policies set on the directory. It can be used to migrate copies from one AZ policy to another. AZ",
@@ -2399,8 +2390,8 @@
"title":"Using HDFS AZ Mover",
"uri":"mrs_01_2360.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"267"
+ "p_code":"241",
+ "code":"266"
},
{
"desc":"In an HDFS cluster configured with HA, the active NameNode processes all client requests, and the standby NameNode reserves the latest metadata and block location informa",
@@ -2408,8 +2399,8 @@
"title":"Configuring the Observer NameNode to Process Read Requests",
"uri":"mrs_01_1681.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"268"
+ "p_code":"241",
+ "code":"267"
},
{
"desc":"Performing this operation can concurrently modify file and directory permissions and access control tools in a cluster.This section applies to MRS 3.x or later clusters.P",
@@ -2417,8 +2408,8 @@
"title":"Performing Concurrent Operations on HDFS Files",
"uri":"mrs_01_1684.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"269"
+ "p_code":"241",
+ "code":"268"
},
{
"desc":"Log path: The default path of HDFS logs is /var/log/Bigdata/hdfs/Role name.NameNode: /var/log/Bigdata/hdfs/nn (run logs) and /var/log/Bigdata/audit/hdfs/nn (audit logs)Da",
@@ -2426,8 +2417,8 @@
"title":"Introduction to HDFS Logs",
"uri":"mrs_01_0828.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"270"
+ "p_code":"241",
+ "code":"269"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -2435,8 +2426,8 @@
"title":"HDFS Performance Tuning",
"uri":"mrs_01_0829.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"271"
+ "p_code":"241",
+ "code":"270"
},
{
"desc":"Improve the HDFS write performance by modifying the HDFS attributes.This section applies to MRS 3.x or later.Navigation path for setting parameters:On FusionInsight Manag",
@@ -2444,8 +2435,8 @@
"title":"Improving Write Performance",
"uri":"mrs_01_1687.html",
"doc_type":"cmpntguide",
- "p_code":"271",
- "code":"272"
+ "p_code":"270",
+ "code":"271"
},
{
"desc":"Improve the HDFS read performance by using the client to cache the metadata for block locations.This function is recommended only for reading files that are not modified ",
@@ -2453,8 +2444,8 @@
"title":"Improving Read Performance Using Client Metadata Cache",
"uri":"mrs_01_1688.html",
"doc_type":"cmpntguide",
- "p_code":"271",
- "code":"273"
+ "p_code":"270",
+ "code":"272"
},
{
"desc":"When HDFS is deployed in high availability (HA) mode with multiple NameNode instances, the HDFS client needs to connect to each NameNode in sequence to determine which is",
@@ -2462,8 +2453,8 @@
"title":"Improving the Connection Between the Client and NameNode Using Current Active Cache",
"uri":"mrs_01_1689.html",
"doc_type":"cmpntguide",
- "p_code":"271",
- "code":"274"
+ "p_code":"270",
+ "code":"273"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -2471,8 +2462,8 @@
"title":"FAQ",
"uri":"mrs_01_1690.html",
"doc_type":"cmpntguide",
- "p_code":"242",
- "code":"275"
+ "p_code":"241",
+ "code":"274"
},
{
"desc":"The NameNode startup is slow when it is restarted immediately after a large number of files (for example, 1 million files) are deleted.It takes time for the DataNode to d",
@@ -2480,8 +2471,8 @@
"title":"NameNode Startup Is Slow",
"uri":"mrs_01_1691.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"276"
+ "p_code":"274",
+ "code":"275"
},
{
"desc":"The DataNode is normal, but cannot report data blocks. As a result, the existing data blocks cannot be used.This error may occur when the number of data blocks in a data ",
@@ -2489,8 +2480,8 @@
"title":"DataNode Is Normal but Cannot Report Data Blocks",
"uri":"mrs_01_1693.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"277"
+ "p_code":"274",
+ "code":"276"
},
{
"desc":"When errors occur in the dfs.datanode.data.dir directory of DataNode due to the permission or disk damage, HDFS WebUI does not display information about damaged data.Afte",
@@ -2498,8 +2489,8 @@
"title":"HDFS WebUI Cannot Properly Update Information About Damaged Data",
"uri":"mrs_01_1694.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"278"
+ "p_code":"274",
+ "code":"277"
},
{
"desc":"Why distcp command fails in the secure cluster with the following error displayed?Client side exceptionServer side exceptionThe preceding error may occur if webhdfs:// is",
@@ -2507,8 +2498,8 @@
"title":"Why Does the Distcp Command Fail in the Secure Cluster, Causing an Exception?",
"uri":"mrs_01_1695.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"279"
+ "p_code":"274",
+ "code":"278"
},
{
"desc":"If the number of disks specified by dfs.datanode.data.dir is equal to the value of dfs.datanode.failed.volumes.tolerated, DataNode startup will fail.By default, the failu",
@@ -2516,8 +2507,8 @@
"title":"Why Does DataNode Fail to Start When the Number of Disks Specified by dfs.datanode.data.dir Equals dfs.datanode.failed.volumes.tolerated?",
"uri":"mrs_01_1696.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"280"
+ "p_code":"274",
+ "code":"279"
},
{
"desc":"The capacity of a DataNode fails to calculate when multiple data.dir directories are configured in a disk partition.Currently, the capacity is calculated based on disks, ",
@@ -2525,8 +2516,8 @@
"title":"Failed to Calculate the Capacity of a DataNode when Multiple data.dir Directories Are Configured in a Disk Partition",
"uri":"mrs_01_1697.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"281"
+ "p_code":"274",
+ "code":"280"
},
{
"desc":"When the standby NameNode is powered off during metadata (namespace) storage, it fails to be started and the following error information is displayed.When the standby Nam",
@@ -2534,8 +2525,8 @@
"title":"Standby NameNode Fails to Be Restarted When the System Is Powered off During Metadata (Namespace) Storage",
"uri":"mrs_01_1698.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"282"
+ "p_code":"274",
+ "code":"281"
},
{
"desc":"Why data in the buffer is lost if a power outage occurs during storage of small files?Because of a power outage, the blocks in the buffer are not written to the disk imme",
@@ -2543,8 +2534,8 @@
"title":"Why Data in the Buffer Is Lost If a Power Outage Occurs During Storage of Small Files",
"uri":"mrs_01_1699.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"283"
+ "p_code":"274",
+ "code":"282"
},
{
"desc":"When HDFS calls the FileInputFormat getSplit method, the ArrayIndexOutOfBoundsException: 0 appears in the following log:The elements of each block correspondent frame are",
@@ -2552,8 +2543,8 @@
"title":"Why Does Array Border-crossing Occur During FileInputFormat Split?",
"uri":"mrs_01_1700.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"284"
+ "p_code":"274",
+ "code":"283"
},
{
"desc":"When the storage policy of the file is set to LAZY_PERSIST, the storage type of the first replica should be RAM_DISK, and the storage type of other replicas should be DIS",
@@ -2561,8 +2552,8 @@
"title":"Why Is the Storage Type of File Copies DISK When the Tiered Storage Policy Is LAZY_PERSIST?",
"uri":"mrs_01_1701.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"285"
+ "p_code":"274",
+ "code":"284"
},
{
"desc":"When the NameNode node is overloaded (100% of the CPU is occupied), the NameNode is unresponsive. The HDFS clients that are connected to the overloaded NameNode fail to r",
@@ -2570,8 +2561,8 @@
"title":"The HDFS Client Is Unresponsive When the NameNode Is Overloaded for a Long Time",
"uri":"mrs_01_1702.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"286"
+ "p_code":"274",
+ "code":"285"
},
{
"desc":"In DataNode, the storage directory of data blocks is specified by dfs.datanode.data.dir.Can I modify dfs.datanode.data.dir tomodify the data storage directory?Can I modif",
@@ -2579,8 +2570,8 @@
"title":"Can I Delete or Modify the Data Storage Directory in DataNode?",
"uri":"mrs_01_1703.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"287"
+ "p_code":"274",
+ "code":"286"
},
{
"desc":"Why are some blocks missing on the NameNode UI after the rollback is successful?This problem occurs because blocks with new IDs or genstamps may exist on the DataNode. Th",
@@ -2588,8 +2579,8 @@
"title":"Blocks Miss on the NameNode UI After the Successful Rollback",
"uri":"mrs_01_1704.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"288"
+ "p_code":"274",
+ "code":"287"
},
{
"desc":"Why is an \"java.net.SocketException: No buffer space available\" exception reported when data is written to HDFS?This problem occurs when files are written to the HDFS. Ch",
@@ -2597,8 +2588,8 @@
"title":"Why Is \"java.net.SocketException: No buffer space available\" Reported When Data Is Written to HDFS",
"uri":"mrs_01_1705.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"289"
+ "p_code":"274",
+ "code":"288"
},
{
"desc":"Why are there two standby NameNodes after the active NameNode is restarted?When this problem occurs, check the ZooKeeper and ZooKeeper FC logs. You can find that the sess",
@@ -2606,8 +2597,8 @@
"title":"Why are There Two Standby NameNodes After the active NameNode Is Restarted?",
"uri":"mrs_01_1706.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"290"
+ "p_code":"274",
+ "code":"289"
},
{
"desc":"After I start a Balance process in HDFS, the process is shut down abnormally. If I attempt to execute the Balance process again, it fails again.After a Balance process is",
@@ -2615,8 +2606,8 @@
"title":"When Does a Balance Process in HDFS, Shut Down and Fail to be Executed Again?",
"uri":"mrs_01_1707.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"291"
+ "p_code":"274",
+ "code":"290"
},
{
"desc":"Occasionally, nternet Explorer 9, Explorer 10, or Explorer 11 fails to access the native HDFS UI.Internet Explorer 9, Explorer 10, or Explorer 11 fails to access the nati",
@@ -2624,8 +2615,8 @@
"title":"\"This page can't be displayed\" Is Displayed When Internet Explorer Fails to Access the Native HDFS UI",
"uri":"mrs_01_1708.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"292"
+ "p_code":"274",
+ "code":"291"
},
{
"desc":"If a JournalNode server is powered off, the data directory disk is fully occupied, and the network is abnormal, the EditLog sequence number on the JournalNode is inconsec",
@@ -2633,8 +2624,8 @@
"title":"NameNode Fails to Be Restarted Due to EditLog Discontinuity",
"uri":"mrs_01_1709.html",
"doc_type":"cmpntguide",
- "p_code":"275",
- "code":"293"
+ "p_code":"274",
+ "code":"292"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -2643,7 +2634,7 @@
"uri":"mrs_01_0581.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"294"
+ "code":"293"
},
{
"desc":"Hive is a data warehouse framework built on Hadoop. It maps structured data files to a database table and provides SQL-like functions to analyze and process data. It also",
@@ -2651,8 +2642,8 @@
"title":"Using Hive from Scratch",
"uri":"mrs_01_0442.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"295"
+ "p_code":"293",
+ "code":"294"
},
{
"desc":"Go to the Hive configurations page by referring to Modifying Cluster Service Configuration Parameters.",
@@ -2660,8 +2651,8 @@
"title":"Configuring Hive Parameters",
"uri":"mrs_01_0582.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"296"
+ "p_code":"293",
+ "code":"295"
},
{
"desc":"Hive SQL supports all features of Hive-3.1.0. For details, see https://cwiki.apache.org/confluence/display/hive/languagemanual.Table 1 describes the extended Hive stateme",
@@ -2669,8 +2660,8 @@
"title":"Hive SQL",
"uri":"mrs_01_2330.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"297"
+ "p_code":"293",
+ "code":"296"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -2678,8 +2669,8 @@
"title":"Permission Management",
"uri":"mrs_01_0947.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"298"
+ "p_code":"293",
+ "code":"297"
},
{
"desc":"Hive is a data warehouse framework built on Hadoop. It provides basic data analysis services using the Hive query language (HQL), a language like the structured query lan",
@@ -2687,8 +2678,8 @@
"title":"Hive Permission",
"uri":"mrs_01_0948.html",
"doc_type":"cmpntguide",
- "p_code":"298",
- "code":"299"
+ "p_code":"297",
+ "code":"298"
},
{
"desc":"This section describes how to create and configure a Hive role on Manager as the system administrator. The Hive role can be granted the permissions of the Hive administra",
@@ -2696,8 +2687,8 @@
"title":"Creating a Hive Role",
"uri":"mrs_01_0949.html",
"doc_type":"cmpntguide",
- "p_code":"298",
- "code":"300"
+ "p_code":"297",
+ "code":"299"
},
{
"desc":"You can configure related permissions if you need to access tables or databases created by other users. Hive supports column-based permission control. If a user needs to ",
@@ -2705,8 +2696,8 @@
"title":"Configuring Permissions for Hive Tables, Columns, or Databases",
"uri":"mrs_01_0950.html",
"doc_type":"cmpntguide",
- "p_code":"298",
- "code":"301"
+ "p_code":"297",
+ "code":"300"
},
{
"desc":"Hive may need to be associated with other components. For example, Yarn permissions are required in the scenario of using HQL statements to trigger MapReduce jobs, and HB",
@@ -2714,8 +2705,8 @@
"title":"Configuring Permissions to Use Other Components for Hive",
"uri":"mrs_01_0951.html",
"doc_type":"cmpntguide",
- "p_code":"298",
- "code":"302"
+ "p_code":"297",
+ "code":"301"
},
{
"desc":"This section guides users to use a Hive client in an O&M or service scenario.The client has been installed. For example, the client is installed in the /opt/hadoopclient ",
@@ -2723,8 +2714,8 @@
"title":"Using a Hive Client",
"uri":"mrs_01_0952.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"303"
+ "p_code":"293",
+ "code":"302"
},
{
"desc":"HDFS Colocation is the data location control function provided by HDFS. The HDFS Colocation API stores associated data or data on which associated operations are performe",
@@ -2732,8 +2723,8 @@
"title":"Using HDFS Colocation to Store Hive Tables",
"uri":"mrs_01_0953.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"304"
+ "p_code":"293",
+ "code":"303"
},
{
"desc":"Hive supports encryption of one or multiple columns in a table. When creating a Hive table, you can specify the column to be encrypted and encryption algorithm. When data",
@@ -2741,8 +2732,8 @@
"title":"Using the Hive Column Encryption Function",
"uri":"mrs_01_0954.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"305"
+ "p_code":"293",
+ "code":"304"
},
{
"desc":"In most cases, a carriage return character is used as the row delimiter in Hive tables stored in text files, that is, the carriage return character is used as the termina",
@@ -2750,8 +2741,8 @@
"title":"Customizing Row Separators",
"uri":"mrs_01_0955.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"306"
+ "p_code":"293",
+ "code":"305"
},
{
"desc":"For mutually trusted Hive and HBase clusters with Kerberos authentication enabled, you can access the HBase cluster and synchronize its key configurations to HiveServer o",
@@ -2759,8 +2750,8 @@
"title":"Configuring Hive on HBase in Across Clusters with Mutual Trust Enabled",
"uri":"mrs_01_24293.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"307"
+ "p_code":"293",
+ "code":"306"
},
{
"desc":"Due to the limitations of underlying storage systems, Hive does not support the ability to delete a single piece of table data. In Hive on HBase, MRS Hive supports the ab",
@@ -2768,8 +2759,8 @@
"title":"Deleting Single-Row Records from Hive on HBase",
"uri":"mrs_01_0956.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"308"
+ "p_code":"293",
+ "code":"307"
},
{
"desc":"WebHCat provides external REST APIs for Hive. By default, the open-source community version uses the HTTP protocol.MRS Hive supports the HTTPS protocol that is more secur",
@@ -2777,8 +2768,8 @@
"title":"Configuring HTTPS/HTTP-based REST APIs",
"uri":"mrs_01_0957.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"309"
+ "p_code":"293",
+ "code":"308"
},
{
"desc":"The Transform function is not allowed by Hive of the open source version.MRS Hive supports the configuration of the Transform function. The function is disabled by defaul",
@@ -2786,8 +2777,8 @@
"title":"Enabling or Disabling the Transform Function",
"uri":"mrs_01_0958.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"310"
+ "p_code":"293",
+ "code":"309"
},
{
"desc":"This section describes how to create a view on Hive when MRS is configured in security mode, authorize access permissions to different users, and specify that different u",
@@ -2795,8 +2786,8 @@
"title":"Access Control of a Dynamic Table View on Hive",
"uri":"mrs_01_0959.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"311"
+ "p_code":"293",
+ "code":"310"
},
{
"desc":"You must have ADMIN permission when creating temporary functions on Hive of the open source community version.MRS Hive supports the configuration of the function for crea",
@@ -2804,8 +2795,8 @@
"title":"Specifying Whether the ADMIN Permissions Is Required for Creating Temporary Functions",
"uri":"mrs_01_0960.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"312"
+ "p_code":"293",
+ "code":"311"
},
{
"desc":"Hive allows users to create external tables to associate with other relational databases. External tables read data from associated relational databases and support Join ",
@@ -2813,8 +2804,8 @@
"title":"Using Hive to Read Data in a Relational Database",
"uri":"mrs_01_0961.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"313"
+ "p_code":"293",
+ "code":"312"
},
{
"desc":"Hive supports the following types of traditional relational database syntax:GroupingEXCEPT and INTERSECTSyntax description:Grouping takes effect only when the Group by st",
@@ -2822,8 +2813,8 @@
"title":"Supporting Traditional Relational Database Syntax in Hive",
"uri":"mrs_01_0962.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"314"
+ "p_code":"293",
+ "code":"313"
},
{
"desc":"This function is applicable to Hive and Spark2x in MRS 3.x and later.With this function enabled, if the select permission is granted to a user during Hive table creation,",
@@ -2831,8 +2822,8 @@
"title":"Viewing Table Structures Using the show create Statement as Users with the select Permission",
"uri":"mrs_01_0966.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"315"
+ "p_code":"293",
+ "code":"314"
},
{
"desc":"This function applies to Hive.After this function is enabled, run the following command to write a directory into Hive: insert overwrite directory \"/path1\".... After the ",
@@ -2840,8 +2831,8 @@
"title":"Writing a Directory into Hive with the Old Data Removed to the Recycle Bin",
"uri":"mrs_01_0967.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"316"
+ "p_code":"293",
+ "code":"315"
},
{
"desc":"This function applies to Hive.With this function enabled, run the insert overwrite directory/path1/path2/path3... command to write a subdirectory. The permission of the /",
@@ -2849,8 +2840,8 @@
"title":"Inserting Data to a Directory That Does Not Exist",
"uri":"mrs_01_0968.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"317"
+ "p_code":"293",
+ "code":"316"
},
{
"desc":"This function is applicable to Hive and Spark2x for MRS 3.x or later, or Hive and Spark for versions earlier than MRS 3.x.After this function is enabled, only the Hive ad",
@@ -2858,8 +2849,8 @@
"title":"Creating Databases and Creating Tables in the Default Database Only as the Hive Administrator",
"uri":"mrs_01_0969.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"318"
+ "p_code":"293",
+ "code":"317"
},
{
"desc":"This function is applicable to Hive and Spark2x for MRS 3.x or later, or Hive and Spark for versions earlier than MRS 3.x.After this function is enabled, the location key",
@@ -2867,8 +2858,8 @@
"title":"Disabling of Specifying the location Keyword When Creating an Internal Hive Table",
"uri":"mrs_01_0970.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"319"
+ "p_code":"293",
+ "code":"318"
},
{
"desc":"This function is applicable to Hive and Spark2x for MRS 3.x or later, or Hive and Spark for versions earlier than MRS 3.x.After this function is enabled, the user or user",
@@ -2876,8 +2867,8 @@
"title":"Enabling the Function of Creating a Foreign Table in a Directory That Can Only Be Read",
"uri":"mrs_01_0971.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"320"
+ "p_code":"293",
+ "code":"319"
},
{
"desc":"This function applies to Hive.The number of OS user groups is limited, and the number of roles that can be created in Hive cannot exceed 32. After this function is enable",
@@ -2885,8 +2876,8 @@
"title":"Authorizing Over 32 Roles in Hive",
"uri":"mrs_01_0972.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"321"
+ "p_code":"293",
+ "code":"320"
},
{
"desc":"This function applies to Hive.This function is used to limit the maximum number of maps for Hive tasks on the server to avoid performance deterioration caused by overload",
@@ -2894,8 +2885,8 @@
"title":"Restricting the Maximum Number of Maps for Hive Tasks",
"uri":"mrs_01_0973.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"322"
+ "p_code":"293",
+ "code":"321"
},
{
"desc":"This function applies to Hive.This function can be enabled to specify specific users to access HiveServer services on specific nodes, achieving HiveServer resource isolat",
@@ -2903,8 +2894,8 @@
"title":"HiveServer Lease Isolation",
"uri":"mrs_01_0974.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"323"
+ "p_code":"293",
+ "code":"322"
},
{
"desc":"Hive supports transactions at the table and partition levels. When the transaction mode is enabled, transaction tables can be incrementally updated, deleted, and read, im",
@@ -2912,8 +2903,8 @@
"title":"Hive Supporting Transactions",
"uri":"mrs_01_0975.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"324"
+ "p_code":"293",
+ "code":"323"
},
{
"desc":"Hive can use the Tez engine to process data computing tasks. Before executing a task, you can manually switch the execution engine to Tez.The TimelineServer role of the Y",
@@ -2921,17 +2912,8 @@
"title":"Switching the Hive Execution Engine to Tez",
"uri":"mrs_01_1750.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"325"
- },
- {
- "desc":"A Hive materialized view is a special table obtained based on the query results of Hive internal tables. A materialized view can be considered as an intermediate table th",
- "product_code":"mrs",
- "title":"Hive Materialized View",
- "uri":"mrs_01_2311.html",
- "doc_type":"cmpntguide",
- "p_code":"294",
- "code":"326"
+ "p_code":"293",
+ "code":"324"
},
{
"desc":"Log path: The default save path of Hive logs is /var/log/Bigdata/hive/role name, the default save path of Hive1 logs is /var/log/Bigdata/hive1/role name, and the others f",
@@ -2939,8 +2921,8 @@
"title":"Hive Log Overview",
"uri":"mrs_01_0976.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"327"
+ "p_code":"293",
+ "code":"325"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -2948,8 +2930,8 @@
"title":"Hive Performance Tuning",
"uri":"mrs_01_0977.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"328"
+ "p_code":"293",
+ "code":"326"
},
{
"desc":"During the Select query, Hive generally scans the entire table, which is time-consuming. To improve query efficiency, create table partitions based on service requirement",
@@ -2957,8 +2939,8 @@
"title":"Creating Table Partitions",
"uri":"mrs_01_0978.html",
"doc_type":"cmpntguide",
- "p_code":"328",
- "code":"329"
+ "p_code":"326",
+ "code":"327"
},
{
"desc":"When the Join statement is used, the command execution speed and query speed may be slow in case of large data volume. To resolve this problem, you can optimize Join.Join",
@@ -2966,8 +2948,8 @@
"title":"Optimizing Join",
"uri":"mrs_01_0979.html",
"doc_type":"cmpntguide",
- "p_code":"328",
- "code":"330"
+ "p_code":"326",
+ "code":"328"
},
{
"desc":"Optimize the Group by statement to accelerate the command execution and query speed.During the Group by operation, Map performs grouping and distributes the groups to Red",
@@ -2975,8 +2957,8 @@
"title":"Optimizing Group By",
"uri":"mrs_01_0980.html",
"doc_type":"cmpntguide",
- "p_code":"328",
- "code":"331"
+ "p_code":"326",
+ "code":"329"
},
{
"desc":"ORC is an efficient column storage format and has higher compression ratio and reading efficiency than other file formats.You are advised to use ORC as the default Hive t",
@@ -2984,8 +2966,8 @@
"title":"Optimizing Data Storage",
"uri":"mrs_01_0981.html",
"doc_type":"cmpntguide",
- "p_code":"328",
- "code":"332"
+ "p_code":"326",
+ "code":"330"
},
{
"desc":"When SQL statements are executed on Hive, if the (a&b) or (a&c) logic exists in the statements, you are advised to change the logic to a & (b or c).If condition a is p_pa",
@@ -2993,8 +2975,8 @@
"title":"Optimizing SQL Statements",
"uri":"mrs_01_0982.html",
"doc_type":"cmpntguide",
- "p_code":"328",
- "code":"333"
+ "p_code":"326",
+ "code":"331"
},
{
"desc":"When joining multiple tables in Hive, Hive supports Cost-Based Optimization (CBO). The system automatically selects the optimal plan based on the table statistics, such a",
@@ -3002,8 +2984,8 @@
"title":"Optimizing the Query Function Using Hive CBO",
"uri":"mrs_01_0983.html",
"doc_type":"cmpntguide",
- "p_code":"328",
- "code":"334"
+ "p_code":"326",
+ "code":"332"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3011,8 +2993,8 @@
"title":"Common Issues About Hive",
"uri":"mrs_01_1752.html",
"doc_type":"cmpntguide",
- "p_code":"294",
- "code":"335"
+ "p_code":"293",
+ "code":"333"
},
{
"desc":"How can I delete permanent user-defined functions (UDFs) on multiple HiveServers at the same time?Multiple HiveServers share one MetaStore database. Therefore, there is a",
@@ -3020,8 +3002,8 @@
"title":"How Do I Delete UDFs on Multiple HiveServers at the Same Time?",
"uri":"mrs_01_1753.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"336"
+ "p_code":"333",
+ "code":"334"
},
{
"desc":"Why cannot the DROP operation be performed for a backed up Hive table?Snapshots have been created for an HDFS directory mapping to the backed up Hive table, so the HDFS d",
@@ -3029,8 +3011,8 @@
"title":"Why Cannot the DROP operation Be Performed on a Backed-up Hive Table?",
"uri":"mrs_01_1754.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"337"
+ "p_code":"333",
+ "code":"335"
},
{
"desc":"How to perform operations on local files (such as reading the content of a file) with Hive user-defined functions?By default, you can perform operations on local files wi",
@@ -3038,8 +3020,8 @@
"title":"How to Perform Operations on Local Files with Hive User-Defined Functions",
"uri":"mrs_01_1755.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"338"
+ "p_code":"333",
+ "code":"336"
},
{
"desc":"How do I stop a MapReduce task manually if the task is suspended for a long time?",
@@ -3047,8 +3029,8 @@
"title":"How Do I Forcibly Stop MapReduce Jobs Executed by Hive?",
"uri":"mrs_01_1756.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"339"
+ "p_code":"333",
+ "code":"337"
},
{
"desc":"How do I monitor the Hive table size?The HDFS refined monitoring function allows you to monitor the size of a specified table directory.The Hive and HDFS components are r",
@@ -3056,8 +3038,8 @@
"title":"How Do I Monitor the Hive Table Size?",
"uri":"mrs_01_1758.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"340"
+ "p_code":"333",
+ "code":"338"
},
{
"desc":"How do I prevent key directories from data loss caused by misoperations of the insert overwrite statement?During monitoring of key Hive databases, tables, or directories,",
@@ -3065,8 +3047,8 @@
"title":"How Do I Prevent Key Directories from Data Loss Caused by Misoperations of the insert overwrite Statement?",
"uri":"mrs_01_1759.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"341"
+ "p_code":"333",
+ "code":"339"
},
{
"desc":"This function applies to Hive.Perform the following operations to configure parameters. When Hive on Spark tasks are executed in the environment where the HBase is not in",
@@ -3074,8 +3056,8 @@
"title":"Why Is Hive on Spark Task Freezing When HBase Is Not Installed?",
"uri":"mrs_01_1760.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"342"
+ "p_code":"333",
+ "code":"340"
},
{
"desc":"When a table with more than 32,000 partitions is created in Hive, an exception occurs during the query with the WHERE partition. In addition, the exception information pr",
@@ -3083,8 +3065,8 @@
"title":"Error Reported When the WHERE Condition Is Used to Query Tables with Excessive Partitions in FusionInsight Hive",
"uri":"mrs_01_1761.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"343"
+ "p_code":"333",
+ "code":"341"
},
{
"desc":"When users check the JDK version used by the client, if the JDK version is IBM JDK, the Beeline client needs to be reconstructed. Otherwise, the client will fail to conne",
@@ -3092,8 +3074,8 @@
"title":"Why Cannot I Connect to HiveServer When I Use IBM JDK to Access the Beeline Client?",
"uri":"mrs_01_1762.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"344"
+ "p_code":"333",
+ "code":"342"
},
{
"desc":"Can Hive tables be stored in OBS or HDFS?The location of a common Hive table stored on OBS can be set to an HDFS path.In the same Hive service, you can create tables stor",
@@ -3101,8 +3083,8 @@
"title":"Description of Hive Table Location (Either Be an OBS or HDFS Path)",
"uri":"mrs_01_1763.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"345"
+ "p_code":"333",
+ "code":"343"
},
{
"desc":"Hive uses the Tez engine to execute union-related statements to write data. After Hive is switched to the MapReduce engine for query, no data is found.When Hive uses the ",
@@ -3110,8 +3092,8 @@
"title":"Why Cannot Data Be Queried After the MapReduce Engine Is Switched After the Tez Engine Is Used to Execute Union-related Statements?",
"uri":"mrs_01_2309.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"346"
+ "p_code":"333",
+ "code":"344"
},
{
"desc":"Why Does Data Inconsistency Occur When Data Is Concurrently Written to a Hive Table Through an API?Hive does not support concurrent data insertion for the same table or p",
@@ -3119,8 +3101,8 @@
"title":"Why Does Hive Not Support Concurrent Data Writing to the Same Table or Partition?",
"uri":"mrs_01_2310.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"347"
+ "p_code":"333",
+ "code":"345"
},
{
"desc":"When the vectorized parameterhive.vectorized.execution.enabled is set to true, why do some null pointers or type conversion exceptions occur occasionally when Hive on Tez",
@@ -3128,8 +3110,8 @@
"title":"Why Does Hive Not Support Vectorized Query?",
"uri":"mrs_01_2325.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"348"
+ "p_code":"333",
+ "code":"346"
},
{
"desc":"The HDFS data directory of the Hive table is deleted by mistake, but the metadata still exists. As a result, an error is reported during task execution.This is a exceptio",
@@ -3137,8 +3119,8 @@
"title":"Why Does Metadata Still Exist When the HDFS Data Directory of the Hive Table Is Deleted by Mistake?",
"uri":"mrs_01_2343.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"349"
+ "p_code":"333",
+ "code":"347"
},
{
"desc":"How do I disable the logging function of Hive?cd/opt/Bigdata/clientsource bigdata_envIn security mode, run the following command to complete user authentication and log i",
@@ -3146,8 +3128,8 @@
"title":"How Do I Disable the Logging Function of Hive?",
"uri":"mrs_01_24482.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"350"
+ "p_code":"333",
+ "code":"348"
},
{
"desc":"In the scenario where the fine-grained permission is configured for multiple MRS users to access OBS, after the permission for deleting Hive tables in the OBS directory i",
@@ -3155,8 +3137,8 @@
"title":"Why Hive Tables in the OBS Directory Fail to Be Deleted?",
"uri":"mrs_01_24486.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"351"
+ "p_code":"333",
+ "code":"349"
},
{
"desc":"The error message \"java.lang.OutOfMemoryError: Java heap space.\" is displayed during Hive SQL execution.Solution:For MapReduce tasks, increase the values of the following",
@@ -3164,8 +3146,8 @@
"title":"Hive Configuration Problems",
"uri":"mrs_01_24117.html",
"doc_type":"cmpntguide",
- "p_code":"335",
- "code":"352"
+ "p_code":"333",
+ "code":"350"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3174,7 +3156,7 @@
"uri":"mrs_01_24025.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"353"
+ "code":"351"
},
{
"desc":"This section describes capabilities of Hudi using spark-shell. Using the Spark data source, this section describes how to insert and update a Hudi dataset of the default ",
@@ -3182,8 +3164,8 @@
"title":"Getting Started",
"uri":"mrs_01_24033.html",
"doc_type":"cmpntguide",
- "p_code":"353",
- "code":"354"
+ "p_code":"351",
+ "code":"352"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3191,8 +3173,8 @@
"title":"Basic Operations",
"uri":"mrs_01_24062.html",
"doc_type":"cmpntguide",
- "p_code":"353",
- "code":"355"
+ "p_code":"351",
+ "code":"353"
},
{
"desc":"When writing data, Hudi generates a Hudi table based on attributes such as the storage path, table name, and partition structure.Hudi table data files can be stored in th",
@@ -3200,8 +3182,8 @@
"title":"Hudi Table Schema",
"uri":"mrs_01_24103.html",
"doc_type":"cmpntguide",
- "p_code":"355",
- "code":"356"
+ "p_code":"353",
+ "code":"354"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3209,8 +3191,8 @@
"title":"Write",
"uri":"mrs_01_24034.html",
"doc_type":"cmpntguide",
- "p_code":"355",
- "code":"357"
+ "p_code":"353",
+ "code":"355"
},
{
"desc":"Hudi provides multiple write modes. For details, see the configuration item hoodie.datasource.write.operation. This section describes upsert, insert, and bulk_insert.inse",
@@ -3218,8 +3200,8 @@
"title":"Batch Write",
"uri":"mrs_01_24035.html",
"doc_type":"cmpntguide",
- "p_code":"357",
- "code":"358"
+ "p_code":"355",
+ "code":"356"
},
{
"desc":"You can run run_hive_sync_tool.sh to synchronize data in the Hudi table to Hive.For example, run the following command to synchronize the Hudi table in the hdfs://haclust",
@@ -3227,8 +3209,8 @@
"title":"Synchronizing Hudi Table Data to Hive",
"uri":"mrs_01_24064.html",
"doc_type":"cmpntguide",
- "p_code":"357",
- "code":"359"
+ "p_code":"355",
+ "code":"357"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3236,8 +3218,8 @@
"title":"Read",
"uri":"mrs_01_24037.html",
"doc_type":"cmpntguide",
- "p_code":"355",
- "code":"360"
+ "p_code":"353",
+ "code":"358"
},
{
"desc":"Reading the real-time view (using Hive and SparkSQL as an example): Directly read the Hudi table stored in Hive.select count(*) from test;Reading the real-time view (usin",
@@ -3245,8 +3227,8 @@
"title":"Reading COW Table Views",
"uri":"mrs_01_24098.html",
"doc_type":"cmpntguide",
- "p_code":"360",
- "code":"361"
+ "p_code":"358",
+ "code":"359"
},
{
"desc":"After the MOR table is synchronized to Hive, the following two tables are synchronized to Hive: Table name_rt and Table name_ro. The table suffixed with rt indicates the ",
@@ -3254,8 +3236,8 @@
"title":"Reading MOR Table Views",
"uri":"mrs_01_24099.html",
"doc_type":"cmpntguide",
- "p_code":"360",
- "code":"362"
+ "p_code":"358",
+ "code":"360"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3263,8 +3245,8 @@
"title":"Data Management and Maintenance",
"uri":"mrs_01_24038.html",
"doc_type":"cmpntguide",
- "p_code":"355",
- "code":"363"
+ "p_code":"353",
+ "code":"361"
},
{
"desc":"Clustering reorganizes data layout to improve query performance without affecting the ingestion speed.Hudi provides different operations, such as insert, upsert, and bulk",
@@ -3272,8 +3254,8 @@
"title":"Clustering",
"uri":"mrs_01_24088.html",
"doc_type":"cmpntguide",
- "p_code":"363",
- "code":"364"
+ "p_code":"361",
+ "code":"362"
},
{
"desc":"Cleaning is used to delete data of versions that are no longer required.Hudi uses the cleaner working in the background to continuously delete unnecessary data of old ver",
@@ -3281,8 +3263,8 @@
"title":"Cleaning",
"uri":"mrs_01_24089.html",
"doc_type":"cmpntguide",
- "p_code":"363",
- "code":"365"
+ "p_code":"361",
+ "code":"363"
},
{
"desc":"A compaction merges base and log files of MOR tables.For MOR tables, data is stored in columnar Parquet files and row-based Avro files, updates are recorded in incrementa",
@@ -3290,8 +3272,8 @@
"title":"Compaction",
"uri":"mrs_01_24090.html",
"doc_type":"cmpntguide",
- "p_code":"363",
- "code":"366"
+ "p_code":"361",
+ "code":"364"
},
{
"desc":"Savepoints are used to save and restore data of the customized version.Savepoints provided by Hudi can save different commits so that the cleaner program does not delete ",
@@ -3299,8 +3281,8 @@
"title":"Savepoint",
"uri":"mrs_01_24091.html",
"doc_type":"cmpntguide",
- "p_code":"363",
- "code":"367"
+ "p_code":"361",
+ "code":"365"
},
{
"desc":"Uses an external service (ZooKeeper or Hive MetaStore) as the distributed mutex lock service.Files can be concurrently written, but commits cannot be concurrent. The comm",
@@ -3308,8 +3290,8 @@
"title":"Single-Table Concurrent Write",
"uri":"mrs_01_24165.html",
"doc_type":"cmpntguide",
- "p_code":"363",
- "code":"368"
+ "p_code":"361",
+ "code":"366"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3317,8 +3299,8 @@
"title":"Using the Hudi Client",
"uri":"mrs_01_24100.html",
"doc_type":"cmpntguide",
- "p_code":"355",
- "code":"369"
+ "p_code":"353",
+ "code":"367"
},
{
"desc":"For a cluster with Kerberos authentication enabled, a user has been created on FusionInsight Manager of the cluster and associated with user groups hadoop and hive.The Hu",
@@ -3326,8 +3308,8 @@
"title":"Operating a Hudi Table Using hudi-cli.sh",
"uri":"mrs_01_24063.html",
"doc_type":"cmpntguide",
- "p_code":"369",
- "code":"370"
+ "p_code":"367",
+ "code":"368"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3335,8 +3317,8 @@
"title":"Configuration Reference",
"uri":"mrs_01_24032.html",
"doc_type":"cmpntguide",
- "p_code":"355",
- "code":"371"
+ "p_code":"353",
+ "code":"369"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3344,8 +3326,8 @@
"title":"Write Configuration",
"uri":"mrs_01_24093.html",
"doc_type":"cmpntguide",
- "p_code":"371",
- "code":"372"
+ "p_code":"369",
+ "code":"370"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3353,8 +3335,8 @@
"title":"Configuration of Hive Table Synchronization",
"uri":"mrs_01_24094.html",
"doc_type":"cmpntguide",
- "p_code":"371",
- "code":"373"
+ "p_code":"369",
+ "code":"371"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3362,8 +3344,8 @@
"title":"Index Configuration",
"uri":"mrs_01_24095.html",
"doc_type":"cmpntguide",
- "p_code":"371",
- "code":"374"
+ "p_code":"369",
+ "code":"372"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3371,8 +3353,8 @@
"title":"Storage Configuration",
"uri":"mrs_01_24096.html",
"doc_type":"cmpntguide",
- "p_code":"371",
- "code":"375"
+ "p_code":"369",
+ "code":"373"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3380,8 +3362,8 @@
"title":"Compaction and Cleaning Configurations",
"uri":"mrs_01_24097.html",
"doc_type":"cmpntguide",
- "p_code":"371",
- "code":"376"
+ "p_code":"369",
+ "code":"374"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3389,8 +3371,8 @@
"title":"Single-Table Concurrent Write Configuration",
"uri":"mrs_01_24167.html",
"doc_type":"cmpntguide",
- "p_code":"371",
- "code":"377"
+ "p_code":"369",
+ "code":"375"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3398,8 +3380,8 @@
"title":"Hudi Performance Tuning",
"uri":"mrs_01_24039.html",
"doc_type":"cmpntguide",
- "p_code":"353",
- "code":"378"
+ "p_code":"351",
+ "code":"376"
},
{
"desc":"In the current version, Spark is recommended for Hudi write operations. Therefore, the tuning methods of Hudi are similar to those of Spark. For details, see Spark2x Perf",
@@ -3407,8 +3389,8 @@
"title":"Performance Tuning Methods",
"uri":"mrs_01_24101.html",
"doc_type":"cmpntguide",
- "p_code":"378",
- "code":"379"
+ "p_code":"376",
+ "code":"377"
},
{
"desc":"For MOR tables:The essence of MOR tables is to write incremental files, so the tuning is based on the data size (dataSize) of Hudi.If dataSize is only several GBs, you ar",
@@ -3416,8 +3398,8 @@
"title":"Recommended Resource Configuration",
"uri":"mrs_01_24102.html",
"doc_type":"cmpntguide",
- "p_code":"378",
- "code":"380"
+ "p_code":"376",
+ "code":"378"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3425,8 +3407,8 @@
"title":"Common Issues About Hudi",
"uri":"mrs_01_24065.html",
"doc_type":"cmpntguide",
- "p_code":"353",
- "code":"381"
+ "p_code":"351",
+ "code":"379"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3434,8 +3416,8 @@
"title":"Data Write",
"uri":"mrs_01_24070.html",
"doc_type":"cmpntguide",
- "p_code":"381",
- "code":"382"
+ "p_code":"379",
+ "code":"380"
},
{
"desc":"The following error is reported when data is written:You are advised to evolve schemas in backward compatible mode while using Hudi. This error usually occurs when you de",
@@ -3443,8 +3425,8 @@
"title":"Parquet/Avro schema Is Reported When Updated Data Is Written",
"uri":"mrs_01_24071.html",
"doc_type":"cmpntguide",
- "p_code":"382",
- "code":"383"
+ "p_code":"380",
+ "code":"381"
},
{
"desc":"The following error is reported when data is written:This error will occur again because schema evolutions are in non-backwards compatible mode. Basically, there is some ",
@@ -3452,8 +3434,8 @@
"title":"UnsupportedOperationException Is Reported When Updated Data Is Written",
"uri":"mrs_01_24072.html",
"doc_type":"cmpntguide",
- "p_code":"382",
- "code":"384"
+ "p_code":"380",
+ "code":"382"
},
{
"desc":"The following error is reported when data is written:This error may occur if a schema contains some non-nullable field whose value is not present or is null.You are advis",
@@ -3461,8 +3443,8 @@
"title":"SchemaCompatabilityException Is Reported When Updated Data Is Written",
"uri":"mrs_01_24073.html",
"doc_type":"cmpntguide",
- "p_code":"382",
- "code":"385"
+ "p_code":"380",
+ "code":"383"
},
{
"desc":"Hudi consumes much space in a temporary folder during upsert.Hudi will spill part of input data to disk if the maximum memory for merge is reached when much input data is",
@@ -3470,8 +3452,8 @@
"title":"What Should I Do If Hudi Consumes Much Space in a Temporary Folder During Upsert?",
"uri":"mrs_01_24074.html",
"doc_type":"cmpntguide",
- "p_code":"382",
- "code":"386"
+ "p_code":"380",
+ "code":"384"
},
{
"desc":"Decimal data is initially written to a Hudi table using the BULK_INSERT command. Then when data is subsequently written using UPSERT, the following error is reported:Caus",
@@ -3479,8 +3461,8 @@
"title":"Hudi Fails to Write Decimal Data with Lower Precision",
"uri":"mrs_01_24504.html",
"doc_type":"cmpntguide",
- "p_code":"382",
- "code":"387"
+ "p_code":"380",
+ "code":"385"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3488,8 +3470,8 @@
"title":"Data Collection",
"uri":"mrs_01_24075.html",
"doc_type":"cmpntguide",
- "p_code":"381",
- "code":"388"
+ "p_code":"379",
+ "code":"386"
},
{
"desc":"The error \"org.apache.kafka.common.KafkaException: Failed to construct kafka consumer\" is reported in the main thread, and the following error is reported.This error may ",
@@ -3497,8 +3479,8 @@
"title":"IllegalArgumentException Is Reported When Kafka Is Used to Collect Data",
"uri":"mrs_01_24077.html",
"doc_type":"cmpntguide",
- "p_code":"388",
- "code":"389"
+ "p_code":"386",
+ "code":"387"
},
{
"desc":"The following error is reported when data is collected:This error usually occurs when a field marked as recordKey or partitionKey is not present in the input record. Cros",
@@ -3506,8 +3488,8 @@
"title":"HoodieException Is Reported When Data Is Collected",
"uri":"mrs_01_24078.html",
"doc_type":"cmpntguide",
- "p_code":"388",
- "code":"390"
+ "p_code":"386",
+ "code":"388"
},
{
"desc":"Is it possible to use a nullable field that contains null records as a primary key when creating a Hudi table?No. HoodieKeyException will be thrown.",
@@ -3515,8 +3497,8 @@
"title":"HoodieKeyException Is Reported When Data Is Collected",
"uri":"mrs_01_24079.html",
"doc_type":"cmpntguide",
- "p_code":"388",
- "code":"391"
+ "p_code":"386",
+ "code":"389"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3524,8 +3506,8 @@
"title":"Hive Synchronization",
"uri":"mrs_01_24080.html",
"doc_type":"cmpntguide",
- "p_code":"381",
- "code":"392"
+ "p_code":"379",
+ "code":"390"
},
{
"desc":"The following error is reported during Hive data synchronization:This error usually occurs when you try to add a new column to an existing Hive table using the HiveSyncTo",
@@ -3533,8 +3515,8 @@
"title":"SQLException Is Reported During Hive Data Synchronization",
"uri":"mrs_01_24081.html",
"doc_type":"cmpntguide",
- "p_code":"392",
- "code":"393"
+ "p_code":"390",
+ "code":"391"
},
{
"desc":"The following error is reported during Hive data synchronization:This error occurs because HiveSyncTool currently supports only few compatible data type conversions. The ",
@@ -3542,8 +3524,8 @@
"title":"HoodieHiveSyncException Is Reported During Hive Data Synchronization",
"uri":"mrs_01_24082.html",
"doc_type":"cmpntguide",
- "p_code":"392",
- "code":"394"
+ "p_code":"390",
+ "code":"392"
},
{
"desc":"The following error is reported during Hive data synchronization:This error usually occurs when Hive synchronization is performed on the Hudi dataset but the configured h",
@@ -3551,8 +3533,8 @@
"title":"SemanticException Is Reported During Hive Data Synchronization",
"uri":"mrs_01_24083.html",
"doc_type":"cmpntguide",
- "p_code":"392",
- "code":"395"
+ "p_code":"390",
+ "code":"393"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3561,7 +3543,7 @@
"uri":"mrs_01_0369.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"396"
+ "code":"394"
},
{
"desc":"Hue provides the file browser function using a graphical user interface (GUI) so that you can view files and directories on Hive.You have installed Hive and Hue, and the ",
@@ -3569,8 +3551,8 @@
"title":"Using Hue from Scratch",
"uri":"mrs_01_1020.html",
"doc_type":"cmpntguide",
- "p_code":"396",
- "code":"397"
+ "p_code":"394",
+ "code":"395"
},
{
"desc":"After Hue is installed in an MRS cluster, users can use Hadoop and Hive on the Hue web UI.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication e",
@@ -3578,8 +3560,8 @@
"title":"Accessing the Hue Web UI",
"uri":"mrs_01_0370.html",
"doc_type":"cmpntguide",
- "p_code":"396",
- "code":"398"
+ "p_code":"394",
+ "code":"396"
},
{
"desc":"For details about how to set parameters, see Modifying Cluster Service Configuration Parameters.",
@@ -3587,8 +3569,8 @@
"title":"Hue Common Parameters",
"uri":"mrs_01_1021.html",
"doc_type":"cmpntguide",
- "p_code":"396",
- "code":"399"
+ "p_code":"394",
+ "code":"397"
},
{
"desc":"Users can use the Hue web UI to execute HiveQL statements in a cluster.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this",
@@ -3596,8 +3578,8 @@
"title":"Using HiveQL Editor on the Hue Web UI",
"uri":"mrs_01_0371.html",
"doc_type":"cmpntguide",
- "p_code":"396",
- "code":"400"
+ "p_code":"394",
+ "code":"398"
},
{
"desc":"Users can use the Hue web UI to manage Hive metadata in an MRS cluster.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this",
@@ -3605,8 +3587,8 @@
"title":"Using the Metadata Browser on the Hue Web UI",
"uri":"mrs_01_0372.html",
"doc_type":"cmpntguide",
- "p_code":"396",
- "code":"401"
+ "p_code":"394",
+ "code":"399"
},
{
"desc":"Users can use the Hue web UI to manage files in HDFS in a cluster.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this func",
@@ -3614,8 +3596,8 @@
"title":"Using File Browser on the Hue Web UI",
"uri":"mrs_01_0373.html",
"doc_type":"cmpntguide",
- "p_code":"396",
- "code":"402"
+ "p_code":"394",
+ "code":"400"
},
{
"desc":"You can use the Hue web UI to query all jobs in the cluster.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this function.V",
@@ -3623,8 +3605,8 @@
"title":"Using Job Browser on the Hue Web UI",
"uri":"mrs_01_0374.html",
"doc_type":"cmpntguide",
- "p_code":"396",
- "code":"403"
+ "p_code":"394",
+ "code":"401"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3633,7 +3615,7 @@
"uri":"mrs_01_0130.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"404"
+ "code":"402"
},
{
"desc":"Hue aggregates interfaces which interact with most Apache Hadoop components and enables you to use Hadoop components with ease on a web UI. You can operate components suc",
@@ -3641,8 +3623,8 @@
"title":"Using Hue from Scratch",
"uri":"mrs_01_0131.html",
"doc_type":"cmpntguide",
- "p_code":"404",
- "code":"405"
+ "p_code":"402",
+ "code":"403"
},
{
"desc":"After Hue is installed in an MRS cluster, users can use Hadoop-related components on the Hue web UI.This section describes how to open the Hue web UI on the MRS cluster.T",
@@ -3650,8 +3632,8 @@
"title":"Accessing the Hue Web UI",
"uri":"mrs_01_0132.html",
"doc_type":"cmpntguide",
- "p_code":"404",
- "code":"406"
+ "p_code":"402",
+ "code":"404"
},
{
"desc":"Go to the All Configurations page of the Hue service by referring to Modifying Cluster Service Configuration Parameters.For details about Hue common parameters, see Table",
@@ -3659,8 +3641,8 @@
"title":"Hue Common Parameters",
"uri":"mrs_01_0133.html",
"doc_type":"cmpntguide",
- "p_code":"404",
- "code":"407"
+ "p_code":"402",
+ "code":"405"
},
{
"desc":"Users can use the Hue web UI to execute HiveQL statements in an MRS cluster.Hive supports the following functions:Executes and manages HiveQL statements.Views the HiveQL ",
@@ -3668,8 +3650,8 @@
"title":"Using HiveQL Editor on the Hue Web UI",
"uri":"mrs_01_0134.html",
"doc_type":"cmpntguide",
- "p_code":"404",
- "code":"408"
+ "p_code":"402",
+ "code":"406"
},
{
"desc":"You can use Hue to execute SparkSql statements in a cluster on a graphical user interface (GUI).Before using the SparkSql editor, you need to modify the Spark2x configura",
@@ -3677,8 +3659,8 @@
"title":"Using the SparkSql Editor on the Hue Web UI",
"uri":"mrs_01_2370.html",
"doc_type":"cmpntguide",
- "p_code":"404",
- "code":"409"
+ "p_code":"402",
+ "code":"407"
},
{
"desc":"Users can use the Hue web UI to manage Hive metadata in an MRS cluster.Access the Hue web UI. For details, see Accessing the Hue Web UI.Viewing metadata of Hive tablesCli",
@@ -3686,8 +3668,8 @@
"title":"Using the Metadata Browser on the Hue Web UI",
"uri":"mrs_01_0135.html",
"doc_type":"cmpntguide",
- "p_code":"404",
- "code":"410"
+ "p_code":"402",
+ "code":"408"
},
{
"desc":"Users can use the Hue web UI to manage files in HDFS.The Hue page is used to view and analyze data such as files and tables. Do not perform high-risk management operation",
@@ -3695,8 +3677,8 @@
"title":"Using File Browser on the Hue Web UI",
"uri":"mrs_01_0136.html",
"doc_type":"cmpntguide",
- "p_code":"404",
- "code":"411"
+ "p_code":"402",
+ "code":"409"
},
{
"desc":"Users can use the Hue web UI to query all jobs in an MRS cluster.View the jobs in the current cluster.The number on Job Browser indicates the total number of jobs in the ",
@@ -3704,8 +3686,8 @@
"title":"Using Job Browser on the Hue Web UI",
"uri":"mrs_01_0137.html",
"doc_type":"cmpntguide",
- "p_code":"404",
- "code":"412"
+ "p_code":"402",
+ "code":"410"
},
{
"desc":"You can use Hue to create or query HBase tables in a cluster and run tasks on the Hue web UI.Make sure that the HBase component has been installed in the MRS cluster and ",
@@ -3713,8 +3695,8 @@
"title":"Using HBase on the Hue Web UI",
"uri":"mrs_01_2371.html",
"doc_type":"cmpntguide",
- "p_code":"404",
- "code":"413"
+ "p_code":"402",
+ "code":"411"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3722,8 +3704,8 @@
"title":"Typical Scenarios",
"uri":"mrs_01_0138.html",
"doc_type":"cmpntguide",
- "p_code":"404",
- "code":"414"
+ "p_code":"402",
+ "code":"412"
},
{
"desc":"Hue provides the file browser function for users to use HDFS in GUI mode.The Hue page is used to view and analyze data such as files and tables. Do not perform high-risk ",
@@ -3731,8 +3713,8 @@
"title":"HDFS on Hue",
"uri":"mrs_01_0139.html",
"doc_type":"cmpntguide",
- "p_code":"414",
- "code":"415"
+ "p_code":"412",
+ "code":"413"
},
{
"desc":"Hue provides the Hive GUI management function so that users can query Hive data in GUI mode.Access the Hue web UI. For details, see Accessing the Hue Web UI.In the naviga",
@@ -3740,8 +3722,8 @@
"title":"Hive on Hue",
"uri":"mrs_01_0141.html",
"doc_type":"cmpntguide",
- "p_code":"414",
- "code":"416"
+ "p_code":"412",
+ "code":"414"
},
{
"desc":"Hue provides the Oozie job manager function, in this case, you can use Oozie in GUI mode.The Hue page is used to view and analyze data such as files and tables. Do not pe",
@@ -3749,8 +3731,8 @@
"title":"Oozie on Hue",
"uri":"mrs_01_0144.html",
"doc_type":"cmpntguide",
- "p_code":"414",
- "code":"417"
+ "p_code":"412",
+ "code":"415"
},
{
"desc":"Log paths: The default paths of Hue logs are /var/log/Bigdata/hue (for storing run logs) and /var/log/Bigdata/audit/hue (for storing audit logs).Log archive rules: The au",
@@ -3758,8 +3740,8 @@
"title":"Hue Log Overview",
"uri":"mrs_01_0147.html",
"doc_type":"cmpntguide",
- "p_code":"404",
- "code":"418"
+ "p_code":"402",
+ "code":"416"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3767,8 +3749,8 @@
"title":"Common Issues About Hue",
"uri":"mrs_01_1764.html",
"doc_type":"cmpntguide",
- "p_code":"404",
- "code":"419"
+ "p_code":"402",
+ "code":"417"
},
{
"desc":"What do I do if all HQL statements fail to be executed when I use Internet Explorer to access Hive Editor in Hue and the message \"There was an error with your query\" is d",
@@ -3776,8 +3758,8 @@
"title":"How Do I Solve the Problem that HQL Fails to Be Executed in Hue Using Internet Explorer?",
"uri":"mrs_01_1765.html",
"doc_type":"cmpntguide",
- "p_code":"419",
- "code":"420"
+ "p_code":"417",
+ "code":"418"
},
{
"desc":"When Hive is used, the use database statement is entered in the text box to switch the database, and other statements are also entered, why does the database fail to be s",
@@ -3785,8 +3767,8 @@
"title":"Why Does the use database Statement Become Invalid When Hive Is Used?",
"uri":"mrs_01_1766.html",
"doc_type":"cmpntguide",
- "p_code":"419",
- "code":"421"
+ "p_code":"417",
+ "code":"419"
},
{
"desc":"What can I do if an error message shown in the following figure is displayed, indicating that the HDFS file cannot be accessed when I use Hue web UI to access the HDFS fi",
@@ -3794,8 +3776,8 @@
"title":"What Can I Do If HDFS Files Fail to Be Accessed Using Hue WebUI?",
"uri":"mrs_01_0156.html",
"doc_type":"cmpntguide",
- "p_code":"419",
- "code":"422"
+ "p_code":"417",
+ "code":"420"
},
{
"desc":"What can I do when a large file fails to be uploaded on the Hue page?You are advised to run commands on the client to upload large files instead of using the Hue file bro",
@@ -3803,8 +3785,8 @@
"title":"How Do I Do If a Large File Fails to Upload on the Hue Page?",
"uri":"mrs_01_2367.html",
"doc_type":"cmpntguide",
- "p_code":"419",
- "code":"423"
+ "p_code":"417",
+ "code":"421"
},
{
"desc":"Why is the native Hue page blank if the Hive service is not installed in a cluster?In MRS 3.x, Hue depends on Hive. If this problem occurs, check whether the Hive compone",
@@ -3812,8 +3794,8 @@
"title":"Why Is the Hue Native Page Cannot Be Properly Displayed If the Hive Service Is Not Installed in a Cluster?",
"uri":"mrs_01_2368.html",
"doc_type":"cmpntguide",
- "p_code":"419",
- "code":"424"
+ "p_code":"417",
+ "code":"422"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3822,7 +3804,7 @@
"uri":"mrs_01_0375.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"425"
+ "code":"423"
},
{
"desc":"You can create, query, and delete topics on a cluster client.The client has been installed. For example, the client is installed in the /opt/hadoopclient directory. The c",
@@ -3830,8 +3812,8 @@
"title":"Using Kafka from Scratch",
"uri":"mrs_01_1031.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"426"
+ "p_code":"423",
+ "code":"424"
},
{
"desc":"You can manage Kafka topics on a cluster client based on service requirements. Management permission is required for clusters with Kerberos authentication enabled.You hav",
@@ -3839,8 +3821,8 @@
"title":"Managing Kafka Topics",
"uri":"mrs_01_0376.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"427"
+ "p_code":"423",
+ "code":"425"
},
{
"desc":"You can query existing Kafka topics on MRS.For versions earlier than MRS 1.9.2, log in to MRS Manager and choose Services > Kafka.For MRS 1.9.2 or later, click the cluste",
@@ -3848,8 +3830,8 @@
"title":"Querying Kafka Topics",
"uri":"mrs_01_0377.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"428"
+ "p_code":"423",
+ "code":"426"
},
{
"desc":"For clusters with Kerberos authentication enabled, using Kafka requires relevant permissions. MRS clusters can grant the use permission of Kafka to different users.Table ",
@@ -3857,8 +3839,8 @@
"title":"Managing Kafka User Permissions",
"uri":"mrs_01_0378.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"429"
+ "p_code":"423",
+ "code":"427"
},
{
"desc":"You can produce or consume messages in Kafka topics using the MRS cluster client. For clusters with Kerberos authentication enabled, you must have the permission to perfo",
@@ -3866,8 +3848,8 @@
"title":"Managing Messages in Kafka Topics",
"uri":"mrs_01_0379.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"430"
+ "p_code":"423",
+ "code":"428"
},
{
"desc":"This section describes how to use the Maxwell data synchronization tool to migrate offline binlog-based data to an MRS Kafka cluster.Maxwell is an open source application",
@@ -3875,8 +3857,8 @@
"title":"Synchronizing Binlog-based MySQL Data to the MRS Cluster",
"uri":"mrs_01_0441.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"431"
+ "p_code":"423",
+ "code":"429"
},
{
"desc":"This section describes how to create and configure a Kafka role.This section applies to MRS 3.x or later.Users can create Kafka roles only in security mode.If the current",
@@ -3884,8 +3866,8 @@
"title":"Creating a Kafka Role",
"uri":"mrs_01_1032.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"432"
+ "p_code":"423",
+ "code":"430"
},
{
"desc":"This section applies to MRS 3.x or later.For details about how to set parameters, see Modifying Cluster Service Configuration Parameters.",
@@ -3893,8 +3875,8 @@
"title":"Kafka Common Parameters",
"uri":"mrs_01_1033.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"433"
+ "p_code":"423",
+ "code":"431"
},
{
"desc":"This section applies to MRS 3.x or later.Producer APIIndicates the API defined in org.apache.kafka.clients.producer.KafkaProducer. When kafka-console-producer.sh is used,",
@@ -3902,8 +3884,8 @@
"title":"Safety Instructions on Using Kafka",
"uri":"mrs_01_1035.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"434"
+ "p_code":"423",
+ "code":"432"
},
{
"desc":"This section applies to MRS 3.x or later.The maximum number of topics depends on the number of file handles (mainly used by data and index files on site) opened in the pr",
@@ -3911,8 +3893,8 @@
"title":"Kafka Specifications",
"uri":"mrs_01_1036.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"435"
+ "p_code":"423",
+ "code":"433"
},
{
"desc":"This section guides users to use a Kafka client in an O&M or service scenario.This section applies to MRS 3.x or later clusters.The client has been installed. For example",
@@ -3920,8 +3902,8 @@
"title":"Using the Kafka Client",
"uri":"mrs_01_1767.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"436"
+ "p_code":"423",
+ "code":"434"
},
{
"desc":"For the Kafka message transmission assurance mechanism, different parameters are available for meeting different performance and reliability requirements. This section de",
@@ -3929,8 +3911,8 @@
"title":"Configuring Kafka HA and High Reliability Parameters",
"uri":"mrs_01_1037.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"437"
+ "p_code":"423",
+ "code":"435"
},
{
"desc":"This section applies to MRS 3.x or later.When a broker storage directory is added, the system administrator needs to change the broker storage directory on FusionInsight ",
@@ -3938,8 +3920,8 @@
"title":"Changing the Broker Storage Directory",
"uri":"mrs_01_1038.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"438"
+ "p_code":"423",
+ "code":"436"
},
{
"desc":"This section describes how to view the current expenditure on the client based on service requirements.This section applies to MRS 3.x or later.The system administrator h",
@@ -3947,8 +3929,8 @@
"title":"Checking the Consumption Status of Consumer Group",
"uri":"mrs_01_1039.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"439"
+ "p_code":"423",
+ "code":"437"
},
{
"desc":"This section describes how to use the Kafka balancing tool on a client to balance the load of the Kafka cluster based on service requirements in scenarios such as node de",
@@ -3956,8 +3938,8 @@
"title":"Kafka Balancing Tool Instructions",
"uri":"mrs_01_1040.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"440"
+ "p_code":"423",
+ "code":"438"
},
{
"desc":"This section describes how to use the Kafka balancing tool on the client to balance the load of the Kafka cluster after Kafka nodes are scaled out.This section applies to",
@@ -3965,8 +3947,8 @@
"title":"Balancing Data After Kafka Node Scale-Out",
"uri":"mrs_01_24299.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"441"
+ "p_code":"423",
+ "code":"439"
},
{
"desc":"Operations need to be performed on tokens when the token authentication mechanism is used.This section applies to security clusters of MRS 3.x or later.The system adminis",
@@ -3974,8 +3956,8 @@
"title":"Kafka Token Authentication Mechanism Tool Usage",
"uri":"mrs_01_1041.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"442"
+ "p_code":"423",
+ "code":"440"
},
{
"desc":"This section applies to MRS 3.x or later.Log paths: The default storage path of Kafka logs is /var/log/Bigdata/kafka. The default storage path of audit logs is /var/log/B",
@@ -3983,8 +3965,8 @@
"title":"Introduction to Kafka Logs",
"uri":"mrs_01_1042.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"443"
+ "p_code":"423",
+ "code":"441"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -3992,8 +3974,8 @@
"title":"Performance Tuning",
"uri":"mrs_01_1043.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"444"
+ "p_code":"423",
+ "code":"442"
},
{
"desc":"You can modify Kafka server parameters to improve Kafka processing capabilities in specific service scenarios.Modify the service configuration parameters. For details, se",
@@ -4001,8 +3983,8 @@
"title":"Kafka Performance Tuning",
"uri":"mrs_01_1044.html",
"doc_type":"cmpntguide",
- "p_code":"444",
- "code":"445"
+ "p_code":"442",
+ "code":"443"
},
{
"desc":"Feature description: The function of creating idempotent producers is introduced in Kafka 0.11.0.0. After this function is enabled, producers are automatically upgraded t",
@@ -4010,8 +3992,8 @@
"title":"Kafka Feature Description",
"uri":"mrs_01_2312.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"446"
+ "p_code":"423",
+ "code":"444"
},
{
"desc":"This section describes how to use Kafka client commands to migrate partition data between disks on a node without stopping the Kafka service.The system administrator has ",
@@ -4019,8 +4001,8 @@
"title":"Migrating Data Between Kafka Nodes",
"uri":"mrs_01_24534.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"447"
+ "p_code":"423",
+ "code":"445"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4028,8 +4010,8 @@
"title":"Common Issues About Kafka",
"uri":"mrs_01_1768.html",
"doc_type":"cmpntguide",
- "p_code":"425",
- "code":"448"
+ "p_code":"423",
+ "code":"446"
},
{
"desc":"How do I delete a Kafka topic if it fails to be deleted?Possible cause 1: The delete.topic.enable configuration item is not set to true. The deletion can be performed onl",
@@ -4037,8 +4019,8 @@
"title":"How Do I Solve the Problem that Kafka Topics Cannot Be Deleted?",
"uri":"mrs_01_1769.html",
"doc_type":"cmpntguide",
- "p_code":"448",
- "code":"449"
+ "p_code":"446",
+ "code":"447"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4047,7 +4029,7 @@
"uri":"mrs_01_0435.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"450"
+ "code":"448"
},
{
"desc":"KafkaManager is a tool for managing Apache Kafka and provides GUI-based metric monitoring and management of Kafka clusters. This section applies to MRS 1.9.2 clusters.Kaf",
@@ -4055,8 +4037,8 @@
"title":"Introduction to KafkaManager",
"uri":"mrs_01_0436.html",
"doc_type":"cmpntguide",
- "p_code":"450",
- "code":"451"
+ "p_code":"448",
+ "code":"449"
},
{
"desc":"You can monitor and manage Kafka clusters on the graphical KafkaManager web UI.This section applies to MRS 1.9.2 clusters.KafkaManager has been installed in a cluster.The",
@@ -4064,8 +4046,8 @@
"title":"Accessing the KafkaManager Web UI",
"uri":"mrs_01_0437.html",
"doc_type":"cmpntguide",
- "p_code":"450",
- "code":"452"
+ "p_code":"448",
+ "code":"450"
},
{
"desc":"This section applies to MRS 1.9.2 clusters.Kafka cluster management includes the following operations:Adding a Cluster on the KafkaManager Web UIUpdating Cluster Paramete",
@@ -4073,8 +4055,8 @@
"title":"Managing Kafka Clusters",
"uri":"mrs_01_0438.html",
"doc_type":"cmpntguide",
- "p_code":"450",
- "code":"453"
+ "p_code":"448",
+ "code":"451"
},
{
"desc":"This section applies to MRS 1.9.2 clusters.The Kafka cluster monitoring management includes the following operations:Viewing Broker InformationViewing Topic InformationVi",
@@ -4082,8 +4064,8 @@
"title":"Kafka Cluster Monitoring Management",
"uri":"mrs_01_0439.html",
"doc_type":"cmpntguide",
- "p_code":"450",
- "code":"454"
+ "p_code":"448",
+ "code":"452"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4092,7 +4074,7 @@
"uri":"mrs_01_0400.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"455"
+ "code":"453"
},
{
"desc":"You can use Loader to import data from the SFTP server to HDFS.This section applies to MRS clusters earlier than 3.x.You have prepared service data.You have created an an",
@@ -4100,8 +4082,8 @@
"title":"Using Loader from Scratch",
"uri":"mrs_01_1084.html",
"doc_type":"cmpntguide",
- "p_code":"455",
- "code":"456"
+ "p_code":"453",
+ "code":"454"
},
{
"desc":"This section applies to MRS clusters earlier than 3.x.The process for migrating user data with Loader is as follows:Access the Loader page of the Hue web UI.Manage Loader",
@@ -4109,8 +4091,8 @@
"title":"How to Use Loader",
"uri":"mrs_01_0401.html",
"doc_type":"cmpntguide",
- "p_code":"455",
- "code":"457"
+ "p_code":"453",
+ "code":"455"
},
{
"desc":"This section applies to versions earlier than MRS 3.x.Loader supports the following links. This section describes configurations of each link.obs-connectorgeneric-jdbc-co",
@@ -4118,8 +4100,8 @@
"title":"Loader Link Configuration",
"uri":"mrs_01_0402.html",
"doc_type":"cmpntguide",
- "p_code":"455",
- "code":"458"
+ "p_code":"453",
+ "code":"456"
},
{
"desc":"You can create, view, edit, and delete links on the Loader page.This section applies to versions earlier than MRS 3.x.You have accessed the Loader page. For details, see ",
@@ -4127,8 +4109,8 @@
"title":"Managing Loader Links (Versions Earlier Than MRS 3.x)",
"uri":"mrs_01_0403.html",
"doc_type":"cmpntguide",
- "p_code":"455",
- "code":"459"
+ "p_code":"453",
+ "code":"457"
},
{
"desc":"When Loader jobs obtain data from different data sources, a link corresponding to a data source type needs to be selected and the link properties need to be configured.Th",
@@ -4136,8 +4118,8 @@
"title":"Source Link Configurations of Loader Jobs",
"uri":"mrs_01_0404.html",
"doc_type":"cmpntguide",
- "p_code":"455",
- "code":"460"
+ "p_code":"453",
+ "code":"458"
},
{
"desc":"When Loader jobs save data to different storage locations, a destination link needs to be selected and the link properties need to be configured.",
@@ -4145,8 +4127,8 @@
"title":"Destination Link Configurations of Loader Jobs",
"uri":"mrs_01_0405.html",
"doc_type":"cmpntguide",
- "p_code":"455",
- "code":"461"
+ "p_code":"453",
+ "code":"459"
},
{
"desc":"You can create, view, edit, and delete jobs on the Loader page.This section applies to versions earlier than MRS 3.x.You have accessed the Loader page. For details, see L",
@@ -4154,8 +4136,8 @@
"title":"Managing Loader Jobs",
"uri":"mrs_01_0406.html",
"doc_type":"cmpntguide",
- "p_code":"455",
- "code":"462"
+ "p_code":"453",
+ "code":"460"
},
{
"desc":"As a component for batch data export, Loader can import and export data using a relational database.You have prepared service data.Procedure for MRS clusters earlier than",
@@ -4163,8 +4145,8 @@
"title":"Preparing a Driver for MySQL Database Link",
"uri":"mrs_01_0407.html",
"doc_type":"cmpntguide",
- "p_code":"455",
- "code":"463"
+ "p_code":"453",
+ "code":"461"
},
{
"desc":"Log path: The default storage path of Loader log files is /var/log/Bigdata/loader/Log category.runlog: /var/log/Bigdata/loader/runlog (run logs)scriptlog: /var/log/Bigdat",
@@ -4172,8 +4154,8 @@
"title":"Loader Log Overview",
"uri":"mrs_01_1165.html",
"doc_type":"cmpntguide",
- "p_code":"455",
- "code":"464"
+ "p_code":"453",
+ "code":"462"
},
{
"desc":"If you need to import a large volume of data from the external cluster to the internal cluster, import it from OBS to HDFS.You have prepared service data.You have created",
@@ -4181,8 +4163,8 @@
"title":"Example: Using Loader to Import Data from OBS to HDFS",
"uri":"mrs_01_0408.html",
"doc_type":"cmpntguide",
- "p_code":"455",
- "code":"465"
+ "p_code":"453",
+ "code":"463"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4190,8 +4172,8 @@
"title":"Common Issues About Loader",
"uri":"mrs_01_1785.html",
"doc_type":"cmpntguide",
- "p_code":"455",
- "code":"466"
+ "p_code":"453",
+ "code":"464"
},
{
"desc":"Internet Explorer 11 or Internet Explorer 10 is used to access the web UI of Loader. After data is submitted, an error occurs.SymptomWhen the submitted data is saved, a s",
@@ -4199,8 +4181,8 @@
"title":"How to Resolve the Problem that Failed to Save Data When Using Internet Explorer 10 or Internet Explorer 11 ?",
"uri":"mrs_01_1786.html",
"doc_type":"cmpntguide",
- "p_code":"466",
- "code":"467"
+ "p_code":"464",
+ "code":"465"
},
{
"desc":"Three types of connectors are available for importing data from the Oracle database to HDFS using Loader. That is, generic-jdbc-connector, oracle-connector, and oracle-pa",
@@ -4208,8 +4190,8 @@
"title":"Differences Among Connectors Used During the Process of Importing Data from the Oracle Database to HDFS",
"uri":"mrs_01_1787.html",
"doc_type":"cmpntguide",
- "p_code":"466",
- "code":"468"
+ "p_code":"464",
+ "code":"466"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4218,7 +4200,7 @@
"uri":"mrs_01_0834.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"469"
+ "code":"467"
},
{
"desc":"Job and task logs are generated during execution of a MapReduce application.Job logs are generated by the MRApplicationMaster, which record details about the start and ru",
@@ -4226,8 +4208,8 @@
"title":"Configuring the Log Archiving and Clearing Mechanism",
"uri":"mrs_01_0836.html",
"doc_type":"cmpntguide",
- "p_code":"469",
- "code":"470"
+ "p_code":"467",
+ "code":"468"
},
{
"desc":"When the network is unstable or the cluster I/O and CPU are overloaded, client applications might encounter running failures.Adjust the following parameters in the mapred",
@@ -4235,8 +4217,8 @@
"title":"Reducing Client Application Failure Rate",
"uri":"mrs_01_0837.html",
"doc_type":"cmpntguide",
- "p_code":"469",
- "code":"471"
+ "p_code":"467",
+ "code":"469"
},
{
"desc":"If you want to transmit a job from Windows to Linux, set mapreduce.app-submission.cross-platform to true. If this parameter is unavailable for a cluster or its value is f",
@@ -4244,8 +4226,8 @@
"title":"Transmitting MapReduce Tasks from Windows to Linux",
"uri":"mrs_01_0838.html",
"doc_type":"cmpntguide",
- "p_code":"469",
- "code":"472"
+ "p_code":"467",
+ "code":"470"
},
{
"desc":"This section applies to MRS 3.x or later.Distributed caching is useful in the following scenarios:Rolling UpgradeDuring the upgrade, applications must keep the text conte",
@@ -4253,8 +4235,8 @@
"title":"Configuring the Distributed Cache",
"uri":"mrs_01_0839.html",
"doc_type":"cmpntguide",
- "p_code":"469",
- "code":"473"
+ "p_code":"467",
+ "code":"471"
},
{
"desc":"When the MapReduce shuffle service is started, it attempts to bind an IP address based on local host. If the MapReduce shuffle service is required to connect to a specifi",
@@ -4262,8 +4244,8 @@
"title":"Configuring the MapReduce Shuffle Address",
"uri":"mrs_01_0840.html",
"doc_type":"cmpntguide",
- "p_code":"469",
- "code":"474"
+ "p_code":"467",
+ "code":"472"
},
{
"desc":"This function is used to specify the MapReduce cluster administrator.The systemadministrator list is specified by mapreduce.cluster.administrators. The cluster administra",
@@ -4271,8 +4253,8 @@
"title":"Configuring the Cluster Administrator List",
"uri":"mrs_01_0841.html",
"doc_type":"cmpntguide",
- "p_code":"469",
- "code":"475"
+ "p_code":"467",
+ "code":"473"
},
{
"desc":"Log paths:JobhistoryServer: /var/log/Bigdata/mapreduce/jobhistory (run log) and /var/log/Bigdata/audit/mapreduce/jobhistory (audit log)Container: /srv/BigData/hadoop/data",
@@ -4280,8 +4262,8 @@
"title":"Introduction to MapReduce Logs",
"uri":"mrs_01_0842.html",
"doc_type":"cmpntguide",
- "p_code":"469",
- "code":"476"
+ "p_code":"467",
+ "code":"474"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4289,8 +4271,8 @@
"title":"MapReduce Performance Tuning",
"uri":"mrs_01_0843.html",
"doc_type":"cmpntguide",
- "p_code":"469",
- "code":"477"
+ "p_code":"467",
+ "code":"475"
},
{
"desc":"Optimization can be performed when the number of CPU cores is large, for example, the number of CPU cores is three times the number of disks.You can set the following par",
@@ -4298,8 +4280,8 @@
"title":"Optimization Configuration for Multiple CPU Cores",
"uri":"mrs_01_0844.html",
"doc_type":"cmpntguide",
- "p_code":"477",
- "code":"478"
+ "p_code":"475",
+ "code":"476"
},
{
"desc":"The performance optimization effect is verified by comparing actual values with the baseline data. Therefore, determining optimal job baseline is critical to performance ",
@@ -4307,8 +4289,8 @@
"title":"Determining the Job Baseline",
"uri":"mrs_01_0845.html",
"doc_type":"cmpntguide",
- "p_code":"477",
- "code":"479"
+ "p_code":"475",
+ "code":"477"
},
{
"desc":"During the shuffle procedure of MapReduce, the Map task writes intermediate data into disks, and the Reduce task copies and adds the data to the reduce function. Hadoop p",
@@ -4316,8 +4298,8 @@
"title":"Streamlining Shuffle",
"uri":"mrs_01_0846.html",
"doc_type":"cmpntguide",
- "p_code":"477",
- "code":"480"
+ "p_code":"475",
+ "code":"478"
},
{
"desc":"A big job containing 100,000 Map tasks fails. It is found that the failure is triggered by the slow response of ApplicationMaster (AM).When the number of tasks increases,",
@@ -4325,8 +4307,8 @@
"title":"AM Optimization for Big Tasks",
"uri":"mrs_01_0847.html",
"doc_type":"cmpntguide",
- "p_code":"477",
- "code":"481"
+ "p_code":"475",
+ "code":"479"
},
{
"desc":"If a cluster has hundreds or thousands of nodes, the hardware or software fault of a node may prolong the execution time of the entire task (as most tasks are already com",
@@ -4334,8 +4316,8 @@
"title":"Speculative Execution",
"uri":"mrs_01_0848.html",
"doc_type":"cmpntguide",
- "p_code":"477",
- "code":"482"
+ "p_code":"475",
+ "code":"480"
},
{
"desc":"The Slow Start feature specifies the proportion of Map tasks to be completed before Reduce tasks are started. If the Reduce tasks are started too early, resources will be",
@@ -4343,8 +4325,8 @@
"title":"Using Slow Start",
"uri":"mrs_01_0849.html",
"doc_type":"cmpntguide",
- "p_code":"477",
- "code":"483"
+ "p_code":"475",
+ "code":"481"
},
{
"desc":"By default, if an MR job generates a large number of output files, it takes a long time for the job to commit the temporary outputs of a task to the final output director",
@@ -4352,8 +4334,8 @@
"title":"Optimizing Performance for Committing MR Jobs",
"uri":"mrs_01_0850.html",
"doc_type":"cmpntguide",
- "p_code":"477",
- "code":"484"
+ "p_code":"475",
+ "code":"482"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4361,8 +4343,8 @@
"title":"Common Issues About MapReduce",
"uri":"mrs_01_1788.html",
"doc_type":"cmpntguide",
- "p_code":"469",
- "code":"485"
+ "p_code":"467",
+ "code":"483"
},
{
"desc":"MapReduce job takes a very long time (more than 10minutes) when the ResourceManager switch while the job is running.This is because, ResorceManager HA is enabled but the ",
@@ -4370,8 +4352,8 @@
"title":"Why Does It Take a Long Time to Run a Task Upon ResourceManager Active/Standby Switchover?",
"uri":"mrs_01_1789.html",
"doc_type":"cmpntguide",
- "p_code":"485",
- "code":"486"
+ "p_code":"483",
+ "code":"484"
},
{
"desc":"MapReduce job is not progressing for long timeThis is because of less memory. When the memory is less, the time taken by the job to copy the map output increases signific",
@@ -4379,8 +4361,8 @@
"title":"Why Does a MapReduce Task Stay Unchanged for a Long Time?",
"uri":"mrs_01_1790.html",
"doc_type":"cmpntguide",
- "p_code":"485",
- "code":"487"
+ "p_code":"483",
+ "code":"485"
},
{
"desc":"Why is the client unavailable when the MR ApplicationMaster or ResourceManager is moved to the D state during job running?When a task is running, the MR ApplicationMaster",
@@ -4388,8 +4370,8 @@
"title":"Why the Client Hangs During Job Running?",
"uri":"mrs_01_1791.html",
"doc_type":"cmpntguide",
- "p_code":"485",
- "code":"488"
+ "p_code":"483",
+ "code":"486"
},
{
"desc":"In security mode, why delegation token HDFS_DELEGATION_TOKEN is not found in the cache?In MapReduce, by default HDFS_DELEGATION_TOKEN will be canceled after the job compl",
@@ -4397,8 +4379,8 @@
"title":"Why Cannot HDFS_DELEGATION_TOKEN Be Found in the Cache?",
"uri":"mrs_01_1792.html",
"doc_type":"cmpntguide",
- "p_code":"485",
- "code":"489"
+ "p_code":"483",
+ "code":"487"
},
{
"desc":"How do I set the job priority when submitting a MapReduce task?You can add the parameter -Dmapreduce.job.priority= in the command to set task priority when subm",
@@ -4406,8 +4388,8 @@
"title":"How Do I Set the Task Priority When Submitting a MapReduce Task?",
"uri":"mrs_01_1793.html",
"doc_type":"cmpntguide",
- "p_code":"485",
- "code":"490"
+ "p_code":"483",
+ "code":"488"
},
{
"desc":"After the address of MapReduce JobHistoryServer is changed, why the wrong page is displayed when I click the tracking URL on the ResourceManager WebUI?JobHistoryServer ad",
@@ -4415,8 +4397,8 @@
"title":"After the Address of MapReduce JobHistoryServer Is Changed, Why the Wrong Page is Displayed When I Click the Tracking URL on the ResourceManager WebUI?",
"uri":"mrs_01_1797.html",
"doc_type":"cmpntguide",
- "p_code":"485",
- "code":"491"
+ "p_code":"483",
+ "code":"489"
},
{
"desc":"MapReduce or Yarn job fails in multiple nameService environment using viewFS.When using viewFS only the mount directories are accessible, so the most possible cause is th",
@@ -4424,8 +4406,8 @@
"title":"MapReduce Job Failed in Multiple NameService Environment",
"uri":"mrs_01_1799.html",
"doc_type":"cmpntguide",
- "p_code":"485",
- "code":"492"
+ "p_code":"483",
+ "code":"490"
},
{
"desc":"MapReduce task fails and the ratio of fault nodes to all nodes is smaller than the blacklist threshold configured by yarn.resourcemanager.am-scheduling.node-blacklisting-",
@@ -4433,8 +4415,8 @@
"title":"Why a Fault MapReduce Node Is Not Blacklisted?",
"uri":"mrs_01_1800.html",
"doc_type":"cmpntguide",
- "p_code":"485",
- "code":"493"
+ "p_code":"483",
+ "code":"491"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4443,7 +4425,7 @@
"uri":"mrs_01_1807.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"494"
+ "code":"492"
},
{
"desc":"Oozie is an open-source workflow engine that is used to schedule and coordinate Hadoop jobs.Oozie can be used to submit a wide array of jobs, such as Hive, Spark2x, Loade",
@@ -4451,8 +4433,8 @@
"title":"Using Oozie from Scratch",
"uri":"mrs_01_1808.html",
"doc_type":"cmpntguide",
- "p_code":"494",
- "code":"495"
+ "p_code":"492",
+ "code":"493"
},
{
"desc":"This section describes how to use the Oozie client in an O&M scenario or service scenario.The client has been installed. For example, the installation directory is /opt/c",
@@ -4460,8 +4442,8 @@
"title":"Using the Oozie Client",
"uri":"mrs_01_1810.html",
"doc_type":"cmpntguide",
- "p_code":"494",
- "code":"496"
+ "p_code":"492",
+ "code":"494"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4469,8 +4451,8 @@
"title":"Using Oozie Client to Submit an Oozie Job",
"uri":"mrs_01_1812.html",
"doc_type":"cmpntguide",
- "p_code":"494",
- "code":"497"
+ "p_code":"492",
+ "code":"495"
},
{
"desc":"This section describes how to use the Oozie client to submit a Hive job.Hive jobs are divided into the following types:Hive jobHive job that is connected in JDBC modeHive",
@@ -4478,8 +4460,8 @@
"title":"Submitting a Hive Job",
"uri":"mrs_01_1813.html",
"doc_type":"cmpntguide",
- "p_code":"497",
- "code":"498"
+ "p_code":"495",
+ "code":"496"
},
{
"desc":"This section describes how to submit a Spark2x job using the Oozie client.You are advised to download the latest client.The Spark2x and Oozie components and clients have ",
@@ -4487,8 +4469,8 @@
"title":"Submitting a Spark2x Job",
"uri":"mrs_01_1814.html",
"doc_type":"cmpntguide",
- "p_code":"497",
- "code":"499"
+ "p_code":"495",
+ "code":"497"
},
{
"desc":"This section describes how to submit a Loader job using the Oozie client.You are advised to download the latest client.The Hive and Oozie components and clients have been",
@@ -4496,8 +4478,8 @@
"title":"Submitting a Loader Job",
"uri":"mrs_01_1815.html",
"doc_type":"cmpntguide",
- "p_code":"497",
- "code":"500"
+ "p_code":"495",
+ "code":"498"
},
{
"desc":"This section describes how to submit a DistCp job using the Oozie client.You are advised to download the latest client.The HDFS and Oozie components and clients have been",
@@ -4505,8 +4487,8 @@
"title":"Submitting a DistCp Job",
"uri":"mrs_01_2392.html",
"doc_type":"cmpntguide",
- "p_code":"497",
- "code":"501"
+ "p_code":"495",
+ "code":"499"
},
{
"desc":"In addition to Hive, Spark2x, and Loader jobs, MapReduce, Java, Shell, HDFS, SSH, SubWorkflow, Streaming, and scheduled jobs can be submitted using the Oozie client.You a",
@@ -4514,8 +4496,8 @@
"title":"Submitting Other Jobs",
"uri":"mrs_01_1816.html",
"doc_type":"cmpntguide",
- "p_code":"497",
- "code":"502"
+ "p_code":"495",
+ "code":"500"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4523,8 +4505,8 @@
"title":"Using Hue to Submit an Oozie Job",
"uri":"mrs_01_1817.html",
"doc_type":"cmpntguide",
- "p_code":"494",
- "code":"503"
+ "p_code":"492",
+ "code":"501"
},
{
"desc":"You can submit an Oozie job on the Hue management page, but a workflow must be created before the job is submitted.Before using Hue to submit an Oozie job, configure the ",
@@ -4532,8 +4514,8 @@
"title":"Creating a Workflow",
"uri":"mrs_01_1818.html",
"doc_type":"cmpntguide",
- "p_code":"503",
- "code":"504"
+ "p_code":"501",
+ "code":"502"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4541,8 +4523,8 @@
"title":"Submitting a Workflow Job",
"uri":"mrs_01_1819.html",
"doc_type":"cmpntguide",
- "p_code":"503",
- "code":"505"
+ "p_code":"501",
+ "code":"503"
},
{
"desc":"This section describes how to submit an Oozie job of the Hive2 type on the Hue web UI.For example, if the input parameter is INPUT=/user/admin/examples/input-data/table, ",
@@ -4550,8 +4532,8 @@
"title":"Submitting a Hive2 Job",
"uri":"mrs_01_1820.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"506"
+ "p_code":"503",
+ "code":"504"
},
{
"desc":"This section describes how to submit an Oozie job of the Spark2x type on Hue.For example, add the following parameters:hdfs://hacluster/user/admin/examples/input-data/tex",
@@ -4559,8 +4541,8 @@
"title":"Submitting a Spark2x Job",
"uri":"mrs_01_1821.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"507"
+ "p_code":"503",
+ "code":"505"
},
{
"desc":"This section describes how to submit an Oozie job of the Java type on the Hue web UI.If you need to modify the job name before saving the job (default value: My Workflow)",
@@ -4568,8 +4550,8 @@
"title":"Submitting a Java Job",
"uri":"mrs_01_1822.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"508"
+ "p_code":"503",
+ "code":"506"
},
{
"desc":"This section describes how to submit an Oozie job of the Loader type on the Hue web UI.Job id is the ID of the Loader job to be orchestrated and can be obtained from the ",
@@ -4577,8 +4559,8 @@
"title":"Submitting a Loader Job",
"uri":"mrs_01_1823.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"509"
+ "p_code":"503",
+ "code":"507"
},
{
"desc":"This section describes how to submit an Oozie job of the MapReduce type on the Hue web UI.For example, set the value of mapred.input.dir to /user/admin/examples/input-dat",
@@ -4586,8 +4568,8 @@
"title":"Submitting a MapReduce Job",
"uri":"mrs_01_1824.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"510"
+ "p_code":"503",
+ "code":"508"
},
{
"desc":"This section describes how to submit an Oozie job of the Sub-workflow type on the Hue web UI.If you need to modify the job name before saving the job (default value: My W",
@@ -4595,8 +4577,8 @@
"title":"Submitting a Sub-workflow Job",
"uri":"mrs_01_1825.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"511"
+ "p_code":"503",
+ "code":"509"
},
{
"desc":"This section describes how to submit an Oozie job of the Shell type on the Hue web UI.If the file is stored in HDFS, select the path of the .sh file, for example, user/hu",
@@ -4604,8 +4586,8 @@
"title":"Submitting a Shell Job",
"uri":"mrs_01_1826.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"512"
+ "p_code":"503",
+ "code":"510"
},
{
"desc":"This section describes how to submit an Oozie job of the HDFS type on the Hue web UI.If you need to modify the job name before saving the job (default value: My Workflow)",
@@ -4613,8 +4595,8 @@
"title":"Submitting an HDFS Job",
"uri":"mrs_01_1827.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"513"
+ "p_code":"503",
+ "code":"511"
},
{
"desc":"This section describes how to submit an Oozie job of the Streaming type on the Hue web UI.for example, /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming-3.1.1.ja",
@@ -4622,8 +4604,8 @@
"title":"Submitting a Streaming Job",
"uri":"mrs_01_1828.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"514"
+ "p_code":"503",
+ "code":"512"
},
{
"desc":"This section describes how to submit an Oozie job of the DistCp type on the Hue web UI.If yes, go to 4.If no, go to 7.source_ip: service address of the HDFS NameNode in t",
@@ -4631,8 +4613,8 @@
"title":"Submitting a DistCp Job",
"uri":"mrs_01_1829.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"515"
+ "p_code":"503",
+ "code":"513"
},
{
"desc":"This section guides you to enable unidirectional password-free mutual trust when Oozie nodes are used to execute shell scripts of external nodes through SSH jobs.You have",
@@ -4640,8 +4622,8 @@
"title":"Example of Mutual Trust Operations",
"uri":"mrs_01_1830.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"516"
+ "p_code":"503",
+ "code":"514"
},
{
"desc":"This section guides you to submit an Oozie job of the SSH type on the Hue web UI.Due to security risks, SSH jobs cannot be submitted by default. To use the SSH function, ",
@@ -4649,8 +4631,8 @@
"title":"Submitting an SSH Job",
"uri":"mrs_01_1831.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"517"
+ "p_code":"503",
+ "code":"515"
},
{
"desc":"This section describes how to submit a Hive job on the Hue web UI.After the job is submitted, you can view the related contents of the job, such as the detailed informati",
@@ -4658,8 +4640,8 @@
"title":"Submitting a Hive Script",
"uri":"mrs_01_2372.html",
"doc_type":"cmpntguide",
- "p_code":"505",
- "code":"518"
+ "p_code":"503",
+ "code":"516"
},
{
"desc":"This section describes how to submit a job of the periodic scheduling type on the Hue web UI.Required workflow jobs have been configured before the coordinator task is su",
@@ -4667,8 +4649,8 @@
"title":"Submitting a Coordinator Periodic Scheduling Job",
"uri":"mrs_01_1840.html",
"doc_type":"cmpntguide",
- "p_code":"503",
- "code":"519"
+ "p_code":"501",
+ "code":"517"
},
{
"desc":"In the case that multiple scheduled jobs exist at the same time, you can manage the jobs in batches over the Bundle task. This section describes how to submit a job of th",
@@ -4676,8 +4658,8 @@
"title":"Submitting a Bundle Batch Processing Job",
"uri":"mrs_01_1841.html",
"doc_type":"cmpntguide",
- "p_code":"503",
- "code":"520"
+ "p_code":"501",
+ "code":"518"
},
{
"desc":"After the jobs are submitted, you can view the execution status of a specific job on Hue.",
@@ -4685,8 +4667,8 @@
"title":"Querying the Operation Results",
"uri":"mrs_01_1842.html",
"doc_type":"cmpntguide",
- "p_code":"503",
- "code":"521"
+ "p_code":"501",
+ "code":"519"
},
{
"desc":"Log path: The default storage paths of Oozie log files are as follows:Run log: /var/log/Bigdata/oozieAudit log: /var/log/Bigdata/audit/oozieLog archiving rule: Oozie logs",
@@ -4694,8 +4676,8 @@
"title":"Oozie Log Overview",
"uri":"mrs_01_1843.html",
"doc_type":"cmpntguide",
- "p_code":"494",
- "code":"522"
+ "p_code":"492",
+ "code":"520"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4703,8 +4685,8 @@
"title":"Common Issues About Oozie",
"uri":"mrs_01_1844.html",
"doc_type":"cmpntguide",
- "p_code":"494",
- "code":"523"
+ "p_code":"492",
+ "code":"521"
},
{
"desc":"Why are not Coordinator scheduled jobs executed on time on the Hue or Oozie client?Use UTC time. For example, set start=2016-12-20T09:00Z in job.properties file.",
@@ -4712,8 +4694,8 @@
"title":"Oozie Scheduled Tasks Are Not Executed on Time",
"uri":"mrs_01_1846.html",
"doc_type":"cmpntguide",
- "p_code":"523",
- "code":"524"
+ "p_code":"521",
+ "code":"522"
},
{
"desc":"A new JAR package is uploaded to the /user/oozie/share/lib directory on HDFS. However, an error indicating that the class cannot be found is reported during task executio",
@@ -4721,8 +4703,8 @@
"title":"Why Update of the share lib Directory of Oozie on HDFS Does Not Take Effect?",
"uri":"mrs_01_1847.html",
"doc_type":"cmpntguide",
- "p_code":"523",
- "code":"525"
+ "p_code":"521",
+ "code":"523"
},
{
"desc":"Check the job logs on Yarn. Run the command executed through Hive SQL using beeline to ensure that Hive is running properly.If error information such as \"classnotfoundExc",
@@ -4730,8 +4712,8 @@
"title":"Common Oozie Troubleshooting Methods",
"uri":"mrs_01_24479.html",
"doc_type":"cmpntguide",
- "p_code":"523",
- "code":"526"
+ "p_code":"521",
+ "code":"524"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4740,7 +4722,7 @@
"uri":"mrs_01_0599.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"527"
+ "code":"525"
},
{
"desc":"You can perform an interactive operation on an MRS cluster client. For a cluster with Kerberos authentication enabled, the user must belong to the opentsdb, hbase, opents",
@@ -4748,8 +4730,8 @@
"title":"Using an MRS Client to Operate OpenTSDB Metric Data",
"uri":"mrs_01_0471.html",
"doc_type":"cmpntguide",
- "p_code":"527",
- "code":"528"
+ "p_code":"525",
+ "code":"526"
},
{
"desc":"For example, to write data of a metric named testdata, whose timestamp is 1524900185, value is true, tag is key and value, run the following command:: indicates t",
@@ -4757,8 +4739,8 @@
"title":"Running the curl Command to Operate OpenTSDB",
"uri":"mrs_01_0472.html",
"doc_type":"cmpntguide",
- "p_code":"527",
- "code":"529"
+ "p_code":"525",
+ "code":"527"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4767,7 +4749,7 @@
"uri":"mrs_01_0432.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"530"
+ "code":"528"
},
{
"desc":"You can view the Presto statistics on the graphical Presto web UI. You are advised to use Google Chrome to access the Presto web UI because it cannot be accessed using In",
@@ -4775,8 +4757,8 @@
"title":"Accessing the Presto Web UI",
"uri":"mrs_01_0433.html",
"doc_type":"cmpntguide",
- "p_code":"530",
- "code":"531"
+ "p_code":"528",
+ "code":"529"
},
{
"desc":"You can perform an interactive query on an MRS cluster client. For clusters with Kerberos authentication enabled, users who submit topologies must belong to the presto gr",
@@ -4784,8 +4766,8 @@
"title":"Using a Client to Execute Query Statements",
"uri":"mrs_01_0434.html",
"doc_type":"cmpntguide",
- "p_code":"530",
- "code":"532"
+ "p_code":"528",
+ "code":"530"
},
{
"desc":"The Presto component has been installed in an MRS cluster.You have synchronized IAM users. (On the Dashboard page, click Synchronize on the right side of IAM User Sync to",
@@ -4793,8 +4775,8 @@
"title":"Using Presto to Dump Data in DLF",
"uri":"mrs_01_0635.html",
"doc_type":"cmpntguide",
- "p_code":"530",
- "code":"533"
+ "p_code":"528",
+ "code":"531"
},
{
"desc":"MRS 3.x does not enable you to configure Presto permissions.By default, the Hive Catalog authorization of the Presto component is enabled in a security cluster. The Prest",
@@ -4802,8 +4784,8 @@
"title":"Configuring Presto Permissions",
"uri":"mrs_01_0636.html",
"doc_type":"cmpntguide",
- "p_code":"530",
- "code":"534"
+ "p_code":"528",
+ "code":"532"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4812,7 +4794,7 @@
"uri":"mrs_01_0761.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"535"
+ "code":"533"
},
{
"desc":"Currently, only normal MRS 1.9.2 clusters support Ranger. Security clusters with Kerberos authentication enabled do not support Ranger.After the cluster is created, Range",
@@ -4820,8 +4802,8 @@
"title":"Creating a Ranger Cluster",
"uri":"mrs_01_0763.html",
"doc_type":"cmpntguide",
- "p_code":"535",
- "code":"536"
+ "p_code":"533",
+ "code":"534"
},
{
"desc":"You can manage Ranger on the Ranger web UI.After logging in to the Ranger Web UI for the first time, change the password and keep it secure.Ranger UserSync is an importan",
@@ -4829,8 +4811,8 @@
"title":"Accessing the Ranger Web UI and Synchronizing Unix Users to the Ranger Web UI",
"uri":"mrs_01_0764.html",
"doc_type":"cmpntguide",
- "p_code":"535",
- "code":"537"
+ "p_code":"533",
+ "code":"535"
},
{
"desc":"After an MRS cluster with Ranger installed is created, Hive and Impala access control is not integrated into Ranger. This section describes how to integrate Hive into Ran",
@@ -4838,8 +4820,8 @@
"title":"Configuring Hive/Impala Access Permissions in Ranger",
"uri":"mrs_01_0765.html",
"doc_type":"cmpntguide",
- "p_code":"535",
- "code":"538"
+ "p_code":"533",
+ "code":"536"
},
{
"desc":"After an MRS cluster with Ranger installed is created, HBase access control is not integrated into Ranger. This section describes how to integrate HBase into Ranger.Addin",
@@ -4847,8 +4829,8 @@
"title":"Configuring HBase Access Permissions in Ranger",
"uri":"mrs_01_0766.html",
"doc_type":"cmpntguide",
- "p_code":"535",
- "code":"539"
+ "p_code":"533",
+ "code":"537"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -4857,7 +4839,7 @@
"uri":"mrs_01_1849.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"540"
+ "code":"538"
},
{
"desc":"Ranger provides a centralized permission management framework to implement fine-grained permission control on components such as HDFS, HBase, Hive, and Yarn. In addition,",
@@ -4865,8 +4847,8 @@
"title":"Logging In to the Ranger Web UI",
"uri":"mrs_01_1850.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"541"
+ "p_code":"538",
+ "code":"539"
},
{
"desc":"This section guides you how to enable Ranger authentication. Ranger authentication is enabled by default in security mode and disabled by default in normal mode.If Enable",
@@ -4874,8 +4856,8 @@
"title":"Enabling Ranger Authentication",
"uri":"mrs_01_2393.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"542"
+ "p_code":"538",
+ "code":"540"
},
{
"desc":"In the newly installed MRS cluster, Ranger is installed by default, with the Ranger authentication model enabled. The systemadministrator can set fine-grained security po",
@@ -4883,8 +4865,8 @@
"title":"Configuring Component Permission Policies",
"uri":"mrs_01_1851.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"543"
+ "p_code":"538",
+ "code":"541"
},
{
"desc":"The systemadministrator can view audit logs of the Ranger running and the permission control after Ranger authentication is enabled on the Ranger web UI.",
@@ -4892,8 +4874,8 @@
"title":"Viewing Ranger Audit Information",
"uri":"mrs_01_1852.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"544"
+ "p_code":"538",
+ "code":"542"
},
{
"desc":"Security zone can be configured using Ranger. Rangeradministrators can divide resources of each component into multiple security zones where administrators set security p",
@@ -4901,8 +4883,8 @@
"title":"Configuring a Security Zone",
"uri":"mrs_01_1853.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"545"
+ "p_code":"538",
+ "code":"543"
},
{
"desc":"By default, the Ranger data source of the security cluster can be accessed by FusionInsight Manager LDAP users. By default, the Ranger data source of a common cluster can",
@@ -4910,8 +4892,8 @@
"title":"Changing the Ranger Data Source to LDAP for a Normal Cluster",
"uri":"mrs_01_2394.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"546"
+ "p_code":"538",
+ "code":"544"
},
{
"desc":"You can view Ranger permission settings, such as users, user groups, and roles.Users: displays all user information synchronized from LDAP or OS to Ranger.Groups: display",
@@ -4919,8 +4901,8 @@
"title":"Viewing Ranger Permission Information",
"uri":"mrs_01_1854.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"547"
+ "p_code":"538",
+ "code":"545"
},
{
"desc":"The Rangeradministrator can use Ranger to configure the read, write, and execution permissions on HDFS directories or files for HDFS users.The Ranger service has been ins",
@@ -4928,8 +4910,8 @@
"title":"Adding a Ranger Access Permission Policy for HDFS",
"uri":"mrs_01_1856.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"548"
+ "p_code":"538",
+ "code":"546"
},
{
"desc":"Rangeradministrators can use Ranger to configure permissions on HBase tables, column families, and columns for HBase users.The Ranger service has been installed and is ru",
@@ -4937,8 +4919,8 @@
"title":"Adding a Ranger Access Permission Policy for HBase",
"uri":"mrs_01_1857.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"549"
+ "p_code":"538",
+ "code":"547"
},
{
"desc":"The Rangeradministrator can use Ranger to set permissions for Hive users. The default administrator account of Hive is hive and the initial password is Hive@123.The Range",
@@ -4946,8 +4928,8 @@
"title":"Adding a Ranger Access Permission Policy for Hive",
"uri":"mrs_01_1858.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"550"
+ "p_code":"538",
+ "code":"548"
},
{
"desc":"The Rangeradministrator can use Ranger to configure Yarn administrator permissions for Yarn users, allowing them to manage Yarn queue resources.The Ranger service has bee",
@@ -4955,8 +4937,8 @@
"title":"Adding a Ranger Access Permission Policy for Yarn",
"uri":"mrs_01_1859.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"551"
+ "p_code":"538",
+ "code":"549"
},
{
"desc":"The Rangeradministrator can use Ranger to set permissions for Spark2x users.After Ranger authentication is enabled or disabled on Spark2x, you need to restart Spark2x.Dow",
@@ -4964,8 +4946,8 @@
"title":"Adding a Ranger Access Permission Policy for Spark2x",
"uri":"mrs_01_1860.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"552"
+ "p_code":"538",
+ "code":"550"
},
{
"desc":"The Rangeradministrator can use Ranger to configure the read, write, and management permissions of the Kafka topic and the management permission of the cluster for the Ka",
@@ -4973,8 +4955,8 @@
"title":"Adding a Ranger Access Permission Policy for Kafka",
"uri":"mrs_01_1861.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"553"
+ "p_code":"538",
+ "code":"551"
},
{
"desc":"The Rangeradministrator can use Ranger to set permissions for Storm users.The Ranger service has been installed and is running properly.You have created users, user group",
@@ -4982,8 +4964,8 @@
"title":"Adding a Ranger Access Permission Policy for Storm",
"uri":"mrs_01_1863.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"554"
+ "p_code":"538",
+ "code":"552"
},
{
"desc":"Log path: The default storage path of Ranger logs is /var/log/Bigdata/ranger/Role name.RangerAdmin: /var/log/Bigdata/ranger/rangeradmin (run logs)TagSync: /var/log/Bigdat",
@@ -4991,8 +4973,8 @@
"title":"Ranger Log Overview",
"uri":"mrs_01_1865.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"555"
+ "p_code":"538",
+ "code":"553"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5000,8 +4982,8 @@
"title":"Common Issues About Ranger",
"uri":"mrs_01_1866.html",
"doc_type":"cmpntguide",
- "p_code":"540",
- "code":"556"
+ "p_code":"538",
+ "code":"554"
},
{
"desc":"During cluster installation, Ranger fails to be started, and the error message \"ERROR: cannot drop sequence X_POLICY_REF_ACCESS_TYPE_SEQ \" is displayed in the task list o",
@@ -5009,8 +4991,8 @@
"title":"Why Ranger Startup Fails During the Cluster Installation?",
"uri":"mrs_01_1867.html",
"doc_type":"cmpntguide",
- "p_code":"556",
- "code":"557"
+ "p_code":"554",
+ "code":"555"
},
{
"desc":"How do I determine whether the Ranger authentication is enabled for a service that supports the authentication?Log in to FusionInsight Manager and choose Cluster > Servic",
@@ -5018,8 +5000,8 @@
"title":"How Do I Determine Whether the Ranger Authentication Is Used for a Service?",
"uri":"mrs_01_1868.html",
"doc_type":"cmpntguide",
- "p_code":"556",
- "code":"558"
+ "p_code":"554",
+ "code":"556"
},
{
"desc":"When a new user logs in to Ranger, why is the 401 error reported after the password is changed?The UserSync synchronizes user data at an interval of 5 minutes by default.",
@@ -5027,8 +5009,8 @@
"title":"Why Cannot a New User Log In to Ranger After Changing the Password?",
"uri":"mrs_01_2300.html",
"doc_type":"cmpntguide",
- "p_code":"556",
- "code":"559"
+ "p_code":"554",
+ "code":"557"
},
{
"desc":"When a Ranger access permission policy is added for HBase and wildcard characters are used to search for an existing HBase table in the policy, the table cannot be found.",
@@ -5036,8 +5018,8 @@
"title":"When an HBase Policy Is Added or Modified on Ranger, Wildcard Characters Cannot Be Used to Search for Existing HBase Tables",
"uri":"mrs_01_2355.html",
"doc_type":"cmpntguide",
- "p_code":"556",
- "code":"560"
+ "p_code":"554",
+ "code":"558"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5046,7 +5028,7 @@
"uri":"mrs_01_0589.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"561"
+ "code":"559"
},
{
"desc":"This section applies to versions earlier than MRS 3.x.",
@@ -5054,8 +5036,8 @@
"title":"Precautions",
"uri":"mrs_01_1925.html",
"doc_type":"cmpntguide",
- "p_code":"561",
- "code":"562"
+ "p_code":"559",
+ "code":"560"
},
{
"desc":"This section describes how to use Spark to submit a SparkPi job. SparkPi, a typical Spark job, is used to calculate the value of Pi (π).Multiple open-source Spark sample ",
@@ -5063,8 +5045,8 @@
"title":"Getting Started with Spark",
"uri":"mrs_01_0366.html",
"doc_type":"cmpntguide",
- "p_code":"561",
- "code":"563"
+ "p_code":"559",
+ "code":"561"
},
{
"desc":"Spark provides the Spark SQL language that is similar to SQL to perform operations on structured data. This section describes how to use Spark SQL from scratch. Create a ",
@@ -5072,8 +5054,8 @@
"title":"Getting Started with Spark SQL",
"uri":"mrs_01_0367.html",
"doc_type":"cmpntguide",
- "p_code":"561",
- "code":"564"
+ "p_code":"559",
+ "code":"562"
},
{
"desc":"After an MRS cluster is created, you can create and submit jobs on the client. The client can be installed on nodes inside or outside the cluster.Nodes inside the cluster",
@@ -5081,8 +5063,8 @@
"title":"Using the Spark Client",
"uri":"mrs_01_1183.html",
"doc_type":"cmpntguide",
- "p_code":"561",
- "code":"565"
+ "p_code":"559",
+ "code":"563"
},
{
"desc":"The Spark web UI is used to view the running status of Spark applications. Google Chrome is recommended for better user experience.Spark has two web UIs.Spark UI: used to",
@@ -5090,8 +5072,8 @@
"title":"Accessing the Spark Web UI",
"uri":"mrs_01_0767.html",
"doc_type":"cmpntguide",
- "p_code":"561",
- "code":"566"
+ "p_code":"559",
+ "code":"564"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5099,8 +5081,8 @@
"title":"Interconnecting Spark with OpenTSDB",
"uri":"mrs_01_0584.html",
"doc_type":"cmpntguide",
- "p_code":"561",
- "code":"567"
+ "p_code":"559",
+ "code":"565"
},
{
"desc":"MRS Spark can be used to access the data source of OpenTSDB, create and associate tables in the Spark, and query and insert the OpenTSDB data.Use the CREATE TABLE command",
@@ -5108,8 +5090,8 @@
"title":"Creating a Table and Associating It with OpenTSDB",
"uri":"mrs_01_0585.html",
"doc_type":"cmpntguide",
- "p_code":"567",
- "code":"568"
+ "p_code":"565",
+ "code":"566"
},
{
"desc":"Run the INSERT INTO statement to insert the data in the table to the associated OpenTSDB metric.The inserted data cannot be null. If the inserted data is the same as the ",
@@ -5117,8 +5099,8 @@
"title":"Inserting Data to the OpenTSDB Table",
"uri":"mrs_01_0586.html",
"doc_type":"cmpntguide",
- "p_code":"567",
- "code":"569"
+ "p_code":"565",
+ "code":"567"
},
{
"desc":"This SELECT command is used to query data in an OpenTSDB table.The to-be-queried table must exist. Otherwise, an error is reported.The value of tagv must exist. Otherwise",
@@ -5126,8 +5108,8 @@
"title":"Querying an OpenTSDB Table",
"uri":"mrs_01_0587.html",
"doc_type":"cmpntguide",
- "p_code":"567",
- "code":"570"
+ "p_code":"565",
+ "code":"568"
},
{
"desc":"By default, OpenTSDB connects to the local TSD process of the node where the Spark executor resides. In MRS, use the default configuration.Run the set statement in spark-",
@@ -5135,8 +5117,8 @@
"title":"Modifying the Default Configuration Data",
"uri":"mrs_01_0588.html",
"doc_type":"cmpntguide",
- "p_code":"567",
- "code":"571"
+ "p_code":"565",
+ "code":"569"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5145,7 +5127,7 @@
"uri":"mrs_01_1926.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"572"
+ "code":"570"
},
{
"desc":"This section applies to MRS 3.x or later clusters.",
@@ -5153,8 +5135,8 @@
"title":"Precautions",
"uri":"mrs_01_1927.html",
"doc_type":"cmpntguide",
- "p_code":"572",
- "code":"573"
+ "p_code":"570",
+ "code":"571"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5162,8 +5144,8 @@
"title":"Basic Operation",
"uri":"mrs_01_1928.html",
"doc_type":"cmpntguide",
- "p_code":"572",
- "code":"574"
+ "p_code":"570",
+ "code":"572"
},
{
"desc":"This section describes how to use Spark2x to submit Spark applications, including Spark Core and Spark SQL. Spark Core is the kernel module of Spark. It executes tasks an",
@@ -5171,8 +5153,8 @@
"title":"Getting Started",
"uri":"mrs_01_1929.html",
"doc_type":"cmpntguide",
- "p_code":"574",
- "code":"575"
+ "p_code":"572",
+ "code":"573"
},
{
"desc":"This section describes how to quickly configure common parameters and lists parameters that are not recommended to be modified when Spark2x is used.Some parameters have b",
@@ -5180,8 +5162,8 @@
"title":"Configuring Parameters Rapidly",
"uri":"mrs_01_1930.html",
"doc_type":"cmpntguide",
- "p_code":"574",
- "code":"576"
+ "p_code":"572",
+ "code":"574"
},
{
"desc":"This section describes common configuration items used in Spark. Subsections are divided by feature so that you can quickly find required configuration items. If you use ",
@@ -5189,8 +5171,8 @@
"title":"Common Parameters",
"uri":"mrs_01_1931.html",
"doc_type":"cmpntguide",
- "p_code":"574",
- "code":"577"
+ "p_code":"572",
+ "code":"575"
},
{
"desc":"Spark on HBase allows users to query HBase tables in Spark SQL and to store data for HBase tables by using the Beeline tool. You can use HBase APIs to create, read data f",
@@ -5198,8 +5180,8 @@
"title":"Spark on HBase Overview and Basic Applications",
"uri":"mrs_01_1933.html",
"doc_type":"cmpntguide",
- "p_code":"574",
- "code":"578"
+ "p_code":"572",
+ "code":"576"
},
{
"desc":"Spark on HBase V2 allows users to query HBase tables in Spark SQL and to store data for HBase tables by using the Beeline tool. You can use HBase APIs to create, read dat",
@@ -5207,8 +5189,8 @@
"title":"Spark on HBase V2 Overview and Basic Applications",
"uri":"mrs_01_1934.html",
"doc_type":"cmpntguide",
- "p_code":"574",
- "code":"579"
+ "p_code":"572",
+ "code":"577"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5216,8 +5198,8 @@
"title":"SparkSQL Permission Management(Security Mode)",
"uri":"mrs_01_1935.html",
"doc_type":"cmpntguide",
- "p_code":"574",
- "code":"580"
+ "p_code":"572",
+ "code":"578"
},
{
"desc":"Similar to Hive, Spark SQL is a data warehouse framework built on Hadoop, providing storage of structured data like structured query language (SQL).MRS supports users, us",
@@ -5225,8 +5207,8 @@
"title":"Spark SQL Permissions",
"uri":"mrs_01_1936.html",
"doc_type":"cmpntguide",
- "p_code":"580",
- "code":"581"
+ "p_code":"578",
+ "code":"579"
},
{
"desc":"This section describes how to create and configure a SparkSQL role on Manager as the system administrator. The Spark SQL role can be configured with the Sparkadministrato",
@@ -5234,8 +5216,8 @@
"title":"Creating a Spark SQL Role",
"uri":"mrs_01_1937.html",
"doc_type":"cmpntguide",
- "p_code":"580",
- "code":"582"
+ "p_code":"578",
+ "code":"580"
},
{
"desc":"You can configure related permissions if you need to access tables or databases created by other users. SparkSQL supports column-based permission control. If a user needs",
@@ -5243,8 +5225,8 @@
"title":"Configuring Permissions for SparkSQL Tables, Columns, and Databases",
"uri":"mrs_01_1938.html",
"doc_type":"cmpntguide",
- "p_code":"580",
- "code":"583"
+ "p_code":"578",
+ "code":"581"
},
{
"desc":"SparkSQL may need to be associated with other components. For example, Spark on HBase requires HBase permissions. The following describes how to associate SparkSQL with H",
@@ -5252,8 +5234,8 @@
"title":"Configuring Permissions for SparkSQL to Use Other Components",
"uri":"mrs_01_1939.html",
"doc_type":"cmpntguide",
- "p_code":"580",
- "code":"584"
+ "p_code":"578",
+ "code":"582"
},
{
"desc":"This section describes how to configure SparkSQL permission management functions (client configuration is similar to server configuration). To enable table permission, ad",
@@ -5261,8 +5243,8 @@
"title":"Configuring the Client and Server",
"uri":"mrs_01_1940.html",
"doc_type":"cmpntguide",
- "p_code":"580",
- "code":"585"
+ "p_code":"578",
+ "code":"583"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5270,8 +5252,8 @@
"title":"Scenario-Specific Configuration",
"uri":"mrs_01_1941.html",
"doc_type":"cmpntguide",
- "p_code":"574",
- "code":"586"
+ "p_code":"572",
+ "code":"584"
},
{
"desc":"In this mode, multiple ThriftServers coexist in the cluster and the client can randomly connect any ThriftServer to perform service operations. When one or multiple Thrif",
@@ -5279,8 +5261,8 @@
"title":"Configuring Multi-active Instance Mode",
"uri":"mrs_01_1942.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"587"
+ "p_code":"584",
+ "code":"585"
},
{
"desc":"In multi-tenant mode, JDBCServers are bound with tenants. Each tenant corresponds to one or more JDBCServers, and a JDBCServer provides services for only one tenant. Diff",
@@ -5288,8 +5270,8 @@
"title":"Configuring the Multi-tenant Mode",
"uri":"mrs_01_1943.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"588"
+ "p_code":"584",
+ "code":"586"
},
{
"desc":"When using a cluster, if you want to switch between multi-active instance mode and multi-tenant mode, the following configurations are required.Switch from multi-tenant m",
@@ -5297,8 +5279,8 @@
"title":"Configuring the Switchover Between the Multi-active Instance Mode and the Multi-tenant Mode",
"uri":"mrs_01_1944.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"589"
+ "p_code":"584",
+ "code":"587"
},
{
"desc":"Functions such as UI, EventLog, and dynamic resource scheduling in Spark are implemented through event transfer. Events include SparkListenerJobStart and SparkListenerJob",
@@ -5306,8 +5288,8 @@
"title":"Configuring the Size of the Event Queue",
"uri":"mrs_01_1945.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"590"
+ "p_code":"584",
+ "code":"588"
},
{
"desc":"When the executor off-heap memory is too small, or processes with higher priority preempt resources, the physical memory usage will exceed the maximal value. To prevent t",
@@ -5315,8 +5297,8 @@
"title":"Configuring Executor Off-Heap Memory",
"uri":"mrs_01_1947.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"591"
+ "p_code":"584",
+ "code":"589"
},
{
"desc":"A large amount of memory is required when Spark SQL executes a query, especially during Aggregate and Join operations. If the memory is limited, OutOfMemoryError may occu",
@@ -5324,8 +5306,8 @@
"title":"Enhancing Stability in a Limited Memory Condition",
"uri":"mrs_01_1948.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"592"
+ "p_code":"584",
+ "code":"590"
},
{
"desc":"When yarn.log-aggregation-enable of Yarn is set to true, the container log aggregation function is enabled. Log aggregation indicates that after applications are run on Y",
@@ -5333,8 +5315,8 @@
"title":"Viewing Aggregated Container Logs on the Web UI",
"uri":"mrs_01_1949.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"593"
+ "p_code":"584",
+ "code":"591"
},
{
"desc":"Values of some configuration parameters of Spark client vary depending on its work mode (YARN-Client or YARN-Cluster). If you switch Spark client between different modes ",
@@ -5342,8 +5324,8 @@
"title":"Configuring Environment Variables in Yarn-Client and Yarn-Cluster Modes",
"uri":"mrs_01_1951.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"594"
+ "p_code":"584",
+ "code":"592"
},
{
"desc":"By default, SparkSQL divides data into 200 data blocks during shuffle. In data-intensive scenarios, each data block may have excessive size. If a single data block of a t",
@@ -5351,8 +5333,8 @@
"title":"Configuring the Default Number of Data Blocks Divided by SparkSQL",
"uri":"mrs_01_1952.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"595"
+ "p_code":"584",
+ "code":"593"
},
{
"desc":"The compression format of a Parquet table can be configured as follows:If the Parquet table is a partitioned one, set the parquet.compression parameter of the Parquet tab",
@@ -5360,8 +5342,8 @@
"title":"Configuring the Compression Format of a Parquet Table",
"uri":"mrs_01_1953.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"596"
+ "p_code":"584",
+ "code":"594"
},
{
"desc":"In Spark WebUI, the Executor page can display information about Lost Executor. Executors are dynamically recycled. If the JDBCServer tasks are large, there may be too man",
@@ -5369,8 +5351,8 @@
"title":"Configuring the Number of Lost Executors Displayed in WebUI",
"uri":"mrs_01_1954.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"597"
+ "p_code":"584",
+ "code":"595"
},
{
"desc":"In some scenarios, to locate problems or check information by changing the log level,you can add the -Dlog4j.configuration.watch=true parameter to the JVM parameter of a ",
@@ -5378,8 +5360,8 @@
"title":"Setting the Log Level Dynamically",
"uri":"mrs_01_1957.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"598"
+ "p_code":"584",
+ "code":"596"
},
{
"desc":"When Spark is used to submit tasks, the driver obtains tokens from HBase by default. To access HBase, you need to configure the jaas.conf file for security authentication",
@@ -5387,8 +5369,8 @@
"title":"Configuring Whether Spark Obtains HBase Tokens",
"uri":"mrs_01_1958.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"599"
+ "p_code":"584",
+ "code":"597"
},
{
"desc":"If the Spark Streaming application is connected to Kafka, after the Spark Streaming application is terminated abnormally and restarted from the checkpoint, the system pre",
@@ -5396,8 +5378,8 @@
"title":"Configuring LIFO for Kafka",
"uri":"mrs_01_1959.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"600"
+ "p_code":"584",
+ "code":"598"
},
{
"desc":"When the Spark Streaming application is connected to Kafka and the application is restarted, the application reads data from Kafka based on the last read topic offset and",
@@ -5405,8 +5387,8 @@
"title":"Configuring Reliability for Connected Kafka",
"uri":"mrs_01_1960.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"601"
+ "p_code":"584",
+ "code":"599"
},
{
"desc":"When a query statement is executed, the returned result may be large (containing more than 100,000 records). In this case, JDBCServer out of memory (OOM) may occur. There",
@@ -5414,8 +5396,8 @@
"title":"Configuring Streaming Reading of Driver Execution Results",
"uri":"mrs_01_1961.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"602"
+ "p_code":"584",
+ "code":"600"
},
{
"desc":"When you perform the select query in Hive partitioned tables, the FileNotFoundException exception is displayed if a specified partition path does not exist in HDFS. To av",
@@ -5423,8 +5405,8 @@
"title":"Filtering Partitions without Paths in Partitioned Tables",
"uri":"mrs_01_1962.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"603"
+ "p_code":"584",
+ "code":"601"
},
{
"desc":"Users need to implement security protection for Spark2x web UI when some data on the UI cannot be viewed by other users. Once a user attempts to log in to the UI, Spark2x",
@@ -5432,8 +5414,8 @@
"title":"Configuring Spark2x Web UI ACLs",
"uri":"mrs_01_1963.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"604"
+ "p_code":"584",
+ "code":"602"
},
{
"desc":"ORC is a column-based storage format in the Hadoop ecosystem. It originates from Apache Hive and is used to reduce the Hadoop data storage space and accelerate the Hive q",
@@ -5441,8 +5423,8 @@
"title":"Configuring Vector-based ORC Data Reading",
"uri":"mrs_01_1964.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"605"
+ "p_code":"584",
+ "code":"603"
},
{
"desc":"In earlier versions, the predicate for pruning Hive table partitions is pushed down. Only comparison expressions between column names and integers or character strings ca",
@@ -5450,8 +5432,8 @@
"title":"Broaden Support for Hive Partition Pruning Predicate Pushdown",
"uri":"mrs_01_1965.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"606"
+ "p_code":"584",
+ "code":"604"
},
{
"desc":"In earlier versions, when the insert overwrite syntax is used to overwrite partition tables, only partitions with specified expressions are matched, and partitions withou",
@@ -5459,8 +5441,8 @@
"title":"Hive Dynamic Partition Overwriting Syntax",
"uri":"mrs_01_1966.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"607"
+ "p_code":"584",
+ "code":"605"
},
{
"desc":"The execution plan for SQL statements is optimized in Spark. Common optimization rules are heuristic optimization rules. Heuristic optimization rules are provided based o",
@@ -5468,8 +5450,8 @@
"title":"Configuring the Column Statistics Histogram to Enhance the CBO Accuracy",
"uri":"mrs_01_1967.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"608"
+ "p_code":"584",
+ "code":"606"
},
{
"desc":"JobHistory can use local disks to cache the historical data of Spark applications to prevent the JobHistory memory from loading a large amount of application data, reduci",
@@ -5477,8 +5459,8 @@
"title":"Configuring Local Disk Cache for JobHistory",
"uri":"mrs_01_1969.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"609"
+ "p_code":"584",
+ "code":"607"
},
{
"desc":"The Spark SQL adaptive execution feature enables Spark SQL to optimize subsequent execution processes based on intermediate results to improve overall execution efficienc",
@@ -5486,8 +5468,8 @@
"title":"Configuring Spark SQL to Enable the Adaptive Execution Feature",
"uri":"mrs_01_1970.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"610"
+ "p_code":"584",
+ "code":"608"
},
{
"desc":"When the event log mode is enabled for Spark, that is, spark.eventLog.enabled is set to true, events are written to a configured log file to record the program running pr",
@@ -5495,8 +5477,8 @@
"title":"Configuring Event Log Rollover",
"uri":"mrs_01_24170.html",
"doc_type":"cmpntguide",
- "p_code":"586",
- "code":"611"
+ "p_code":"584",
+ "code":"609"
},
{
"desc":"When Ranger is used as the permission management service of Spark SQL, the certificate in the cluster is required for accessing RangerAdmin. If you use a third-party JDK ",
@@ -5504,8 +5486,8 @@
"title":"Adapting to the Third-party JDK When Ranger Is Used",
"uri":"mrs_01_2317.html",
"doc_type":"cmpntguide",
- "p_code":"574",
- "code":"612"
+ "p_code":"572",
+ "code":"610"
},
{
"desc":"Log paths:Executor run log: ${BIGDATA_DATA_HOME}/hadoop/data${i}/nm/containerlogs/application_${appid}/container_{$contid}The logs of running tasks are stored in the prec",
@@ -5513,8 +5495,8 @@
"title":"Spark2x Logs",
"uri":"mrs_01_1971.html",
"doc_type":"cmpntguide",
- "p_code":"572",
- "code":"613"
+ "p_code":"570",
+ "code":"611"
},
{
"desc":"Container logs of running Spark applications are distributed on multiple nodes. This section describes how to quickly obtain container logs.You can run the yarn logs comm",
@@ -5522,8 +5504,8 @@
"title":"Obtaining Container Logs of a Running Spark Application",
"uri":"mrs_01_1972.html",
"doc_type":"cmpntguide",
- "p_code":"572",
- "code":"614"
+ "p_code":"570",
+ "code":"612"
},
{
"desc":"In a large-scale Hadoop production cluster, HDFS metadata is stored in the NameNode memory, and the cluster scale is restricted by the memory limitation of each NameNode.",
@@ -5531,8 +5513,8 @@
"title":"Small File Combination Tools",
"uri":"mrs_01_1973.html",
"doc_type":"cmpntguide",
- "p_code":"572",
- "code":"615"
+ "p_code":"570",
+ "code":"613"
},
{
"desc":"The first query of CarbonData is slow, which may cause a delay for nodes that have high requirements on real-time performance.The tool provides the following functions:Pr",
@@ -5540,8 +5522,8 @@
"title":"Using CarbonData for First Query",
"uri":"mrs_01_2362.html",
"doc_type":"cmpntguide",
- "p_code":"572",
- "code":"616"
+ "p_code":"570",
+ "code":"614"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5549,8 +5531,8 @@
"title":"Spark2x Performance Tuning",
"uri":"mrs_01_1974.html",
"doc_type":"cmpntguide",
- "p_code":"572",
- "code":"617"
+ "p_code":"570",
+ "code":"615"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5558,8 +5540,8 @@
"title":"Spark Core Tuning",
"uri":"mrs_01_1975.html",
"doc_type":"cmpntguide",
- "p_code":"617",
- "code":"618"
+ "p_code":"615",
+ "code":"616"
},
{
"desc":"Spark supports the following types of serialization:JavaSerializerKryoSerializerData serialization affects the Spark application performance. In specific data format, Kry",
@@ -5567,8 +5549,8 @@
"title":"Data Serialization",
"uri":"mrs_01_1976.html",
"doc_type":"cmpntguide",
- "p_code":"618",
- "code":"619"
+ "p_code":"616",
+ "code":"617"
},
{
"desc":"Spark is a memory-based computing frame. If the memory is insufficient during computing, the Spark execution efficiency will be adversely affected. You can determine whet",
@@ -5576,8 +5558,8 @@
"title":"Optimizing Memory Configuration",
"uri":"mrs_01_1977.html",
"doc_type":"cmpntguide",
- "p_code":"618",
- "code":"620"
+ "p_code":"616",
+ "code":"618"
},
{
"desc":"The degree of parallelism (DOP) specifies the number of tasks to be executed concurrently. It determines the number of data blocks after the shuffle operation. Configure ",
@@ -5585,8 +5567,8 @@
"title":"Setting the DOP",
"uri":"mrs_01_1978.html",
"doc_type":"cmpntguide",
- "p_code":"618",
- "code":"621"
+ "p_code":"616",
+ "code":"619"
},
{
"desc":"Broadcast distributes data sets to each node. It allows data to be obtained locally when a data set is needed during a Spark task. If broadcast is not used, data serializ",
@@ -5594,8 +5576,8 @@
"title":"Using Broadcast Variables",
"uri":"mrs_01_1979.html",
"doc_type":"cmpntguide",
- "p_code":"618",
- "code":"622"
+ "p_code":"616",
+ "code":"620"
},
{
"desc":"When the Spark system runs applications that contain a shuffle process, an executor process also writes shuffle data and provides shuffle data for other executors in addi",
@@ -5603,8 +5585,8 @@
"title":"Using the external shuffle service to improve performance",
"uri":"mrs_01_1980.html",
"doc_type":"cmpntguide",
- "p_code":"618",
- "code":"623"
+ "p_code":"616",
+ "code":"621"
},
{
"desc":"Resources are a key factor that affects Spark execution efficiency. When a long-running service (such as the JDBCServer) is allocated with multiple executors without task",
@@ -5612,8 +5594,8 @@
"title":"Configuring Dynamic Resource Scheduling in Yarn Mode",
"uri":"mrs_01_1981.html",
"doc_type":"cmpntguide",
- "p_code":"618",
- "code":"624"
+ "p_code":"616",
+ "code":"622"
},
{
"desc":"There are three processes in Spark on Yarn mode: driver, ApplicationMaster, and executor. The Driver and Executor handle the scheduling and running of the task. The Appli",
@@ -5621,8 +5603,8 @@
"title":"Configuring Process Parameters",
"uri":"mrs_01_1982.html",
"doc_type":"cmpntguide",
- "p_code":"618",
- "code":"625"
+ "p_code":"616",
+ "code":"623"
},
{
"desc":"Optimal program structure helps increase execution efficiency. During application programming, avoid shuffle operations and combine narrow-dependency operations.This topi",
@@ -5630,8 +5612,8 @@
"title":"Designing the Direction Acyclic Graph (DAG)",
"uri":"mrs_01_1983.html",
"doc_type":"cmpntguide",
- "p_code":"618",
- "code":"626"
+ "p_code":"616",
+ "code":"624"
},
{
"desc":"If the overhead of each record is high, for example:Use mapPartitions to calculate data by partition.Use mapPartitions to flexibly operate data. For example, to calculate",
@@ -5639,8 +5621,8 @@
"title":"Experience",
"uri":"mrs_01_1984.html",
"doc_type":"cmpntguide",
- "p_code":"618",
- "code":"627"
+ "p_code":"616",
+ "code":"625"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5648,8 +5630,8 @@
"title":"Spark SQL and DataFrame Tuning",
"uri":"mrs_01_1985.html",
"doc_type":"cmpntguide",
- "p_code":"617",
- "code":"628"
+ "p_code":"615",
+ "code":"626"
},
{
"desc":"When two tables are joined in Spark SQL, the broadcast function (see section \"Using Broadcast Variables\") can be used to broadcast tables to each node. This minimizes shu",
@@ -5657,8 +5639,8 @@
"title":"Optimizing the Spark SQL Join Operation",
"uri":"mrs_01_1986.html",
"doc_type":"cmpntguide",
- "p_code":"628",
- "code":"629"
+ "p_code":"626",
+ "code":"627"
},
{
"desc":"When multiple tables are joined in Spark SQL, skew occurs in join keys and the data volume in some Hash buckets is much higher than that in other buckets. As a result, so",
@@ -5666,8 +5648,8 @@
"title":"Improving Spark SQL Calculation Performance Under Data Skew",
"uri":"mrs_01_1987.html",
"doc_type":"cmpntguide",
- "p_code":"628",
- "code":"630"
+ "p_code":"626",
+ "code":"628"
},
{
"desc":"A Spark SQL table may have many small files (far smaller than an HDFS block), each of which maps to a partition on the Spark by default. In other words, each small file i",
@@ -5675,8 +5657,8 @@
"title":"Optimizing Spark SQL Performance in the Small File Scenario",
"uri":"mrs_01_1988.html",
"doc_type":"cmpntguide",
- "p_code":"628",
- "code":"631"
+ "p_code":"626",
+ "code":"629"
},
{
"desc":"The INSERT...SELECT operation needs to be optimized if any of the following conditions is true:Many small files need to be queried.A few large files need to be queried.Th",
@@ -5684,8 +5666,8 @@
"title":"Optimizing the INSERT...SELECT Operation",
"uri":"mrs_01_1989.html",
"doc_type":"cmpntguide",
- "p_code":"628",
- "code":"632"
+ "p_code":"626",
+ "code":"630"
},
{
"desc":"Multiple clients can be connected to JDBCServer at the same time. However, if the number of concurrent tasks is too large, the default configuration of JDBCServer must be",
@@ -5693,8 +5675,8 @@
"title":"Multiple JDBC Clients Concurrently Connecting to JDBCServer",
"uri":"mrs_01_1990.html",
"doc_type":"cmpntguide",
- "p_code":"628",
- "code":"633"
+ "p_code":"626",
+ "code":"631"
},
{
"desc":"When SparkSQL inserts data to dynamic partitioned tables, the more partitions there are, the more HDFS files a single task generates and the more memory metadata occupies",
@@ -5702,8 +5684,8 @@
"title":"Optimizing Memory when Data Is Inserted into Dynamic Partitioned Tables",
"uri":"mrs_01_1992.html",
"doc_type":"cmpntguide",
- "p_code":"628",
- "code":"634"
+ "p_code":"626",
+ "code":"632"
},
{
"desc":"A Spark SQL table may have many small files (far smaller than an HDFS block), each of which maps to a partition on the Spark by default. In other words, each small file i",
@@ -5711,8 +5693,8 @@
"title":"Optimizing Small Files",
"uri":"mrs_01_1995.html",
"doc_type":"cmpntguide",
- "p_code":"628",
- "code":"635"
+ "p_code":"626",
+ "code":"633"
},
{
"desc":"Spark SQL supports hash aggregate algorithm. Namely, use fast aggregate hashmap as cache to improve aggregate performance. The hashmap replaces the previous ColumnarBatch",
@@ -5720,8 +5702,8 @@
"title":"Optimizing the Aggregate Algorithms",
"uri":"mrs_01_1996.html",
"doc_type":"cmpntguide",
- "p_code":"628",
- "code":"636"
+ "p_code":"626",
+ "code":"634"
},
{
"desc":"Save the partition information about the datasource table to the Metastore and process partition information in the Metastore.Optimize the datasource tables, support synt",
@@ -5729,8 +5711,8 @@
"title":"Optimizing Datasource Tables",
"uri":"mrs_01_1997.html",
"doc_type":"cmpntguide",
- "p_code":"628",
- "code":"637"
+ "p_code":"626",
+ "code":"635"
},
{
"desc":"Spark SQL supports rule-based optimization by default. However, the rule-based optimization cannot ensure that Spark selects the optimal query plan. Cost-Based Optimizer ",
@@ -5738,8 +5720,8 @@
"title":"Merging CBO",
"uri":"mrs_01_1998.html",
"doc_type":"cmpntguide",
- "p_code":"628",
- "code":"638"
+ "p_code":"626",
+ "code":"636"
},
{
"desc":"This section describes how to enable or disable the query optimization for inter-source complex SQL.(Optional) Prepare for connecting to the MPPDB data source.If the data",
@@ -5747,8 +5729,8 @@
"title":"Optimizing SQL Query of Data of Multiple Sources",
"uri":"mrs_01_1999.html",
"doc_type":"cmpntguide",
- "p_code":"628",
- "code":"639"
+ "p_code":"626",
+ "code":"637"
},
{
"desc":"This section describes the optimization suggestions for SQL statements in multi-level nesting and hybrid join scenarios.The following provides an example of complex query",
@@ -5756,8 +5738,8 @@
"title":"SQL Optimization for Multi-level Nesting and Hybrid Join",
"uri":"mrs_01_2000.html",
"doc_type":"cmpntguide",
- "p_code":"628",
- "code":"640"
+ "p_code":"626",
+ "code":"638"
},
{
"desc":"Streaming is a mini-batch streaming processing framework that features second-level delay and high throughput. To optimize Streaming is to improve its throughput while ma",
@@ -5765,8 +5747,8 @@
"title":"Spark Streaming Tuning",
"uri":"mrs_01_2001.html",
"doc_type":"cmpntguide",
- "p_code":"617",
- "code":"641"
+ "p_code":"615",
+ "code":"639"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5774,8 +5756,8 @@
"title":"Common Issues About Spark2x",
"uri":"mrs_01_2002.html",
"doc_type":"cmpntguide",
- "p_code":"572",
- "code":"642"
+ "p_code":"570",
+ "code":"640"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5783,8 +5765,8 @@
"title":"Spark Core",
"uri":"mrs_01_2003.html",
"doc_type":"cmpntguide",
- "p_code":"642",
- "code":"643"
+ "p_code":"640",
+ "code":"641"
},
{
"desc":"How do I view the aggregated container logs on the page when the log aggregation function is enabled on YARN?For details, see Viewing Aggregated Container Logs on the Web",
@@ -5792,8 +5774,8 @@
"title":"How Do I View Aggregated Spark Application Logs?",
"uri":"mrs_01_2004.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"644"
+ "p_code":"641",
+ "code":"642"
},
{
"desc":"Communication between ApplicationMaster and ResourceManager remains abnormal for a long time. Why is the driver return code inconsistent with application status on Resour",
@@ -5801,8 +5783,8 @@
"title":"Why Is the Return Code of Driver Inconsistent with Application State Displayed on ResourceManager WebUI?",
"uri":"mrs_01_2005.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"645"
+ "p_code":"641",
+ "code":"643"
},
{
"desc":"Why cannot exit the Driver process after running the yarn application -kill applicationID command to stop the Spark Streaming application?Running the yarn application -ki",
@@ -5810,8 +5792,8 @@
"title":"Why Cannot Exit the Driver Process?",
"uri":"mrs_01_2006.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"646"
+ "p_code":"641",
+ "code":"644"
},
{
"desc":"On a large cluster of 380 nodes, run the ScalaSort test case in the HiBench test that runs the 29T data, and configure Executor as --executor-cores 4. The following abnor",
@@ -5819,8 +5801,8 @@
"title":"Why Does FetchFailedException Occur When the Network Connection Is Timed out",
"uri":"mrs_01_2007.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"647"
+ "p_code":"641",
+ "code":"645"
},
{
"desc":"How to configure the event queue size if the following Driver log information is displayed indicating that the event queue overflows?Common applicationsDropping SparkList",
@@ -5828,8 +5810,8 @@
"title":"How to Configure Event Queue Size If Event Queue Overflows?",
"uri":"mrs_01_2008.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"648"
+ "p_code":"641",
+ "code":"646"
},
{
"desc":"During Spark application execution, if the driver fails to connect to ResourceManager, the following error is reported and it does not exit for a long time. What can I do",
@@ -5837,8 +5819,8 @@
"title":"What Can I Do If the getApplicationReport Exception Is Recorded in Logs During Spark Application Execution and the Application Does Not Exit for a Long Time?",
"uri":"mrs_01_2009.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"649"
+ "p_code":"641",
+ "code":"647"
},
{
"desc":"When Spark executes an application, an error similar to the following is reported and the application ends. What can I do?Symptom: The value of spark.rpc.io.connectionTim",
@@ -5846,8 +5828,8 @@
"title":"What Can I Do If \"Connection to ip:port has been quiet for xxx ms while there are outstanding requests\" Is Reported When Spark Executes an Application and the Application Ends?",
"uri":"mrs_01_2010.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"650"
+ "p_code":"641",
+ "code":"648"
},
{
"desc":"If the NodeManager is shut down with the Executor dynamic allocation enabled, the Executors on the node where the NodeManeger is shut down fail to be removed from the dri",
@@ -5855,8 +5837,8 @@
"title":"Why Do Executors Fail to be Removed After the NodeManeger Is Shut Down?",
"uri":"mrs_01_2011.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"651"
+ "p_code":"641",
+ "code":"649"
},
{
"desc":"ExternalShuffle is enabled for the application that runs Spark. Task loss occurs in the application because the message \"java.lang.NullPointerException: Password cannot b",
@@ -5864,8 +5846,8 @@
"title":"What Can I Do If the Message \"Password cannot be null if SASL is enabled\" Is Displayed?",
"uri":"mrs_01_2012.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"652"
+ "p_code":"641",
+ "code":"650"
},
{
"desc":"When inserting data into the dynamic partition table, a large number of shuffle files are damaged due to the disk disconnection, node error, and the like. In this case, w",
@@ -5873,8 +5855,8 @@
"title":"What Should I Do If the Message \"Failed to CREATE_FILE\" Is Displayed in the Restarted Tasks When Data Is Inserted Into the Dynamic Partition Table?",
"uri":"mrs_01_2013.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"653"
+ "p_code":"641",
+ "code":"651"
},
{
"desc":"When Hash shuffle is used to run a job that consists of 1000000 map tasks x 100000 reduce tasks, run logs report many message failures and Executor heartbeat timeout, lea",
@@ -5882,8 +5864,8 @@
"title":"Why Tasks Fail When Hash Shuffle Is Used?",
"uri":"mrs_01_2014.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"654"
+ "p_code":"641",
+ "code":"652"
},
{
"desc":"When the http(s)://: mode is used to access the Spark JobHistory page, if the displayed Spark JobHistory page is not the page of FusionInsight Manag",
@@ -5891,8 +5873,8 @@
"title":"What Can I Do If the Error Message \"DNS query failed\" Is Displayed When I Access the Aggregated Logs Page of Spark Applications?",
"uri":"mrs_01_2015.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"655"
+ "p_code":"641",
+ "code":"653"
},
{
"desc":"When I execute a 100 TB TPC-DS test suite in the JDBCServer mode, the \"Timeout waiting for task\" is displayed. As a result, shuffle fetch fails, the stage keeps retrying,",
@@ -5900,8 +5882,8 @@
"title":"What Can I Do If Shuffle Fetch Fails Due to the \"Timeout Waiting for Task\" Exception?",
"uri":"mrs_01_2016.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"656"
+ "p_code":"641",
+ "code":"654"
},
{
"desc":"When I run Spark tasks with a large data volume, for example, 100 TB TPCDS test suite, why does the Stage retry due to Executor loss sometimes? The message \"Executor 532 ",
@@ -5909,8 +5891,8 @@
"title":"Why Does the Stage Retry due to the Crash of the Executor?",
"uri":"mrs_01_2017.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"657"
+ "p_code":"641",
+ "code":"655"
},
{
"desc":"When more than 50 terabytes of data is shuffled, some executors fail to register shuffle services due to timeout. The shuffle tasks then fail. Why? The error log is as fo",
@@ -5918,8 +5900,8 @@
"title":"Why Do the Executors Fail to Register Shuffle Services During the Shuffle of a Large Amount of Data?",
"uri":"mrs_01_2018.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"658"
+ "p_code":"641",
+ "code":"656"
},
{
"desc":"During the execution of Spark applications, if the YARN External Shuffle service is enabled and there are too many shuffle tasks, the java.lang.OutofMemoryError: Direct b",
@@ -5927,8 +5909,8 @@
"title":"Why Does the Out of Memory Error Occur in NodeManager During the Execution of Spark Applications",
"uri":"mrs_01_2019.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"659"
+ "p_code":"641",
+ "code":"657"
},
{
"desc":"Execution of the sparkbench task (for example, Wordcount) of HiBench6 fails. The bench.log indicates that the Yarn task fails to be executed. The failure information disp",
@@ -5936,8 +5918,8 @@
"title":"Why Does the Realm Information Fail to Be Obtained When SparkBench is Run on HiBench for the Cluster in Security Mode?",
"uri":"mrs_01_2021.html",
"doc_type":"cmpntguide",
- "p_code":"643",
- "code":"660"
+ "p_code":"641",
+ "code":"658"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -5945,8 +5927,8 @@
"title":"Spark SQL and DataFrame",
"uri":"mrs_01_2022.html",
"doc_type":"cmpntguide",
- "p_code":"642",
- "code":"661"
+ "p_code":"640",
+ "code":"659"
},
{
"desc":"Suppose that there is a table src(d1, d2, m) with the following data:The results for statement \"select d1, sum(d1) from src group by d1, d2 with rollup\" are shown as belo",
@@ -5954,8 +5936,8 @@
"title":"What Do I have to Note When Using Spark SQL ROLLUP and CUBE?",
"uri":"mrs_01_2023.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"662"
+ "p_code":"659",
+ "code":"660"
},
{
"desc":"Why temporary tables of the previous database are displayed after the database is switched?Create a temporary DataSource table, for example:create temporary table ds_parq",
@@ -5963,8 +5945,8 @@
"title":"Why Spark SQL Is Displayed as a Temporary Table in Different Databases?",
"uri":"mrs_01_2024.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"663"
+ "p_code":"659",
+ "code":"661"
},
{
"desc":"Is it possible to assign parameter values through Spark commands, in addition to through a user interface or a configuration file?Spark configuration options can be defin",
@@ -5972,8 +5954,8 @@
"title":"How to Assign a Parameter Value in a Spark Command?",
"uri":"mrs_01_2025.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"664"
+ "p_code":"659",
+ "code":"662"
},
{
"desc":"The following error information is displayed when a new user creates a table using SparkSQL:When you create a table using Spark SQL, the interface of Hive is called by th",
@@ -5981,8 +5963,8 @@
"title":"What Directory Permissions Do I Need to Create a Table Using SparkSQL?",
"uri":"mrs_01_2026.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"665"
+ "p_code":"659",
+ "code":"663"
},
{
"desc":"Why do I fail to delete the UDF using another service, for example, delete the UDF created by Hive using Spark SQL.The UDF can be created using any of the following servi",
@@ -5990,8 +5972,8 @@
"title":"Why Do I Fail to Delete the UDF Using Another Service?",
"uri":"mrs_01_2027.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"666"
+ "p_code":"659",
+ "code":"664"
},
{
"desc":"Why cannot I query newly inserted data in a parquet Hive table using SparkSQL? This problem occurs in the following scenarios:For partitioned tables and non-partitioned t",
@@ -5999,8 +5981,8 @@
"title":"Why Cannot I Query Newly Inserted Data in a Parquet Hive Table Using SparkSQL?",
"uri":"mrs_01_2028.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"667"
+ "p_code":"659",
+ "code":"665"
},
{
"desc":"What is cache table used for? Which point should I pay attention to while using cache table?Spark SQL caches tables into memory so that data can be directly read from mem",
@@ -6008,8 +5990,8 @@
"title":"How to Use Cache Table?",
"uri":"mrs_01_2029.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"668"
+ "p_code":"659",
+ "code":"666"
},
{
"desc":"During the repartition operation, the number of blocks (spark.sql.shuffle.partitions) is set to 4,500, and the number of keys used by repartition exceeds 4,000. It is exp",
@@ -6017,8 +5999,8 @@
"title":"Why Are Some Partitions Empty During Repartition?",
"uri":"mrs_01_2030.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"669"
+ "p_code":"659",
+ "code":"667"
},
{
"desc":"When the default configuration is used, 16 terabytes of text data fails to be converted into 4 terabytes of parquet data, and the error information below is displayed. Wh",
@@ -6026,8 +6008,8 @@
"title":"Why Does 16 Terabytes of Text Data Fails to Be Converted into 4 Terabytes of Parquet Data?",
"uri":"mrs_01_2031.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"670"
+ "p_code":"659",
+ "code":"668"
},
{
"desc":"When the table name is set to table, why the error information similar to the following is displayed after the drop table table command or other command is run?The word t",
@@ -6035,8 +6017,8 @@
"title":"Why the Operation Fails When the Table Name Is TABLE?",
"uri":"mrs_01_2033.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"671"
+ "p_code":"659",
+ "code":"669"
},
{
"desc":"When the analyze table statement is executed using spark-sql, the task is suspended and the information below is displayed. Why?When the statement is executed, the SQL st",
@@ -6044,8 +6026,8 @@
"title":"Why Is a Task Suspended When the ANALYZE TABLE Statement Is Executed and Resources Are Insufficient?",
"uri":"mrs_01_2034.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"672"
+ "p_code":"659",
+ "code":"670"
},
{
"desc":"If I access a parquet table on which I do not have permission, why a job is run before \"Missing Privileges\" is displayed?The execution sequence of Spark SQL statement par",
@@ -6053,8 +6035,8 @@
"title":"If I Access a parquet Table on Which I Do not Have Permission, Why a Job Is Run Before \"Missing Privileges\" Is Displayed?",
"uri":"mrs_01_2035.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"673"
+ "p_code":"659",
+ "code":"671"
},
{
"desc":"When do I fail to modify the metadata in the datasource and Spark on HBase table by running the Hive command?The current Spark version does not support modifying the meta",
@@ -6062,8 +6044,8 @@
"title":"Why Do I Fail to Modify MetaData by Running the Hive Command?",
"uri":"mrs_01_2036.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"674"
+ "p_code":"659",
+ "code":"672"
},
{
"desc":"After successfully running Spark tasks with large data volume, for example, 2-TB TPCDS test suite, why is the abnormal stack information \"RejectedExecutionException\" disp",
@@ -6071,8 +6053,8 @@
"title":"Why Is \"RejectedExecutionException\" Displayed When I Exit Spark SQL?",
"uri":"mrs_01_2037.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"675"
+ "p_code":"659",
+ "code":"673"
},
{
"desc":"During a health check, if the concurrent statements exceed the threshold of the thread pool, the health check statements fail to be executed, the health check program tim",
@@ -6080,8 +6062,8 @@
"title":"What Should I Do If the JDBCServer Process is Mistakenly Killed During a Health Check?",
"uri":"mrs_01_2038.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"676"
+ "p_code":"659",
+ "code":"674"
},
{
"desc":"Why no result is found when 2016-6-30 is set in the date field as the filter condition?As shown in the following figure, trx_dte_par in the select count (*) from trxfintr",
@@ -6089,8 +6071,8 @@
"title":"Why No Result Is found When 2016-6-30 Is Set in the Date Field as the Filter Condition?",
"uri":"mrs_01_2039.html",
"doc_type":"cmpntguide",
- "p_code":"661",
- "code":"677"
+ "p_code":"659",
+ "code":"675"
},
{
"desc":"Why does the --hivevaroption I specified in the command for starting spark-beeline fail to take effect?In the V100R002C60 version, if I use the --hivevar =\n org.apache.flink\n fli",
@@ -6539,8 +6521,8 @@
"title":"Completely Migrating Storm Services",
"uri":"mrs_01_1050.html",
"doc_type":"cmpntguide",
- "p_code":"725",
- "code":"727"
+ "p_code":"723",
+ "code":"725"
},
{
"desc":"This section describes how to embed Storm code in DataStream of Flink in embedded migration mode. For example, the code of Spout or Bolt compiled using Storm API is embed",
@@ -6548,8 +6530,8 @@
"title":"Performing Embedded Service Migration",
"uri":"mrs_01_1051.html",
"doc_type":"cmpntguide",
- "p_code":"725",
- "code":"728"
+ "p_code":"723",
+ "code":"726"
},
{
"desc":"If the Storm services use the storm-hdfs or storm-hbase plug-in package for interconnection, you need to specify the following security parameters when migrating Storm se",
@@ -6557,8 +6539,8 @@
"title":"Migrating Services of External Security Components Interconnected with Storm",
"uri":"mrs_01_1052.html",
"doc_type":"cmpntguide",
- "p_code":"725",
- "code":"729"
+ "p_code":"723",
+ "code":"727"
},
{
"desc":"This section applies to MRS 3.x or later.Log paths: The default paths of Storm log files are /var/log/Bigdata/storm/Role name (run logs) and /var/log/Bigdata/audit/storm/",
@@ -6566,8 +6548,8 @@
"title":"Storm Log Introduction",
"uri":"mrs_01_1053.html",
"doc_type":"cmpntguide",
- "p_code":"716",
- "code":"730"
+ "p_code":"714",
+ "code":"728"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -6575,8 +6557,8 @@
"title":"Performance Tuning",
"uri":"mrs_01_1054.html",
"doc_type":"cmpntguide",
- "p_code":"716",
- "code":"731"
+ "p_code":"714",
+ "code":"729"
},
{
"desc":"You can modify Storm parameters to improve Storm performance in specific service scenarios.This section applies to MRS 3.x or later.Modify the service configuration param",
@@ -6584,8 +6566,8 @@
"title":"Storm Performance Tuning",
"uri":"mrs_01_1055.html",
"doc_type":"cmpntguide",
- "p_code":"731",
- "code":"732"
+ "p_code":"729",
+ "code":"730"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -6594,7 +6576,7 @@
"uri":"mrs_01_2067.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"733"
+ "code":"731"
},
{
"desc":"This section applies to MRS 3.x or later clusters.",
@@ -6602,8 +6584,8 @@
"title":"Precautions",
"uri":"mrs_01_2068.html",
"doc_type":"cmpntguide",
- "p_code":"733",
- "code":"734"
+ "p_code":"731",
+ "code":"732"
},
{
"desc":"On Manager, choose Cluster > Service > Tez > Configuration > All Configurations. Enter a parameter name in the search box.",
@@ -6611,8 +6593,8 @@
"title":"Common Tez Parameters",
"uri":"mrs_01_2069.html",
"doc_type":"cmpntguide",
- "p_code":"733",
- "code":"735"
+ "p_code":"731",
+ "code":"733"
},
{
"desc":"Tez displays the Tez task execution process on a GUI. You can view the task execution details on the GUI.The TimelineServer instance of the Yarn service has been installe",
@@ -6620,8 +6602,8 @@
"title":"Accessing TezUI",
"uri":"mrs_01_2070.html",
"doc_type":"cmpntguide",
- "p_code":"733",
- "code":"736"
+ "p_code":"731",
+ "code":"734"
},
{
"desc":"Log path: The default save path of Tez logs is /var/log/Bigdata/tez/role name.TezUI: /var/log/Bigdata/tez/tezui (run logs) and /var/log/Bigdata/audit/tez/tezui (audit log",
@@ -6629,8 +6611,8 @@
"title":"Log Overview",
"uri":"mrs_01_2071.html",
"doc_type":"cmpntguide",
- "p_code":"733",
- "code":"737"
+ "p_code":"731",
+ "code":"735"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -6638,8 +6620,8 @@
"title":"Common Issues",
"uri":"mrs_01_2072.html",
"doc_type":"cmpntguide",
- "p_code":"733",
- "code":"738"
+ "p_code":"731",
+ "code":"736"
},
{
"desc":"After a user logs in to Manager and switches to the Tez web UI, the submitted Tez tasks are not displayed.The Tez task data displayed on the Tez WebUI requires the suppor",
@@ -6647,8 +6629,8 @@
"title":"TezUI Cannot Display Tez Task Execution Details",
"uri":"mrs_01_2073.html",
"doc_type":"cmpntguide",
- "p_code":"738",
- "code":"739"
+ "p_code":"736",
+ "code":"737"
},
{
"desc":"When a user logs in to Manager and switches to the Tez web UI, error 404 or 503 is displayed.The Tez web UI depends on the TimelineServer instance of Yarn. Therefore, Tim",
@@ -6656,8 +6638,8 @@
"title":"Error Occurs When a User Switches to the Tez Web UI",
"uri":"mrs_01_2074.html",
"doc_type":"cmpntguide",
- "p_code":"738",
- "code":"740"
+ "p_code":"736",
+ "code":"738"
},
{
"desc":"A user logs in to the Tez web UI and clicks Logs, but the Yarn log page fails to be displayed and data cannot be loaded.Currently, the hostname is used for the access to ",
@@ -6665,8 +6647,8 @@
"title":"Yarn Logs Cannot Be Viewed on the TezUI Page",
"uri":"mrs_01_2075.html",
"doc_type":"cmpntguide",
- "p_code":"738",
- "code":"741"
+ "p_code":"736",
+ "code":"739"
},
{
"desc":"A user logs in to Manager and switches to the Tez web UI page, but no data for the submitted task is displayed on the Hive Queries page.To display task data on the Hive Q",
@@ -6674,8 +6656,8 @@
"title":"Table Data Is Empty on the TezUI HiveQueries Page",
"uri":"mrs_01_2076.html",
"doc_type":"cmpntguide",
- "p_code":"738",
- "code":"742"
+ "p_code":"736",
+ "code":"740"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -6684,7 +6666,7 @@
"uri":"mrs_01_0851.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"743"
+ "code":"741"
},
{
"desc":"The Yarn service provides queues for users. Users allocate system resources to each queue. After the configuration is complete, you can click Refresh Queue or restart the",
@@ -6692,8 +6674,8 @@
"title":"Common YARN Parameters",
"uri":"mrs_01_0852.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"744"
+ "p_code":"741",
+ "code":"742"
},
{
"desc":"This section describes how to create and configure a Yarn role. The Yarn role can be assigned with Yarn administrator permission and manage Yarn queue resources.If the cu",
@@ -6701,8 +6683,8 @@
"title":"Creating Yarn Roles",
"uri":"mrs_01_0853.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"745"
+ "p_code":"741",
+ "code":"743"
},
{
"desc":"This section guides users to use a Yarn client in an O&M or service scenario.The client has been installed.For example, the installation directory is /opt/hadoopclient. T",
@@ -6710,8 +6692,8 @@
"title":"Using the YARN Client",
"uri":"mrs_01_0854.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"746"
+ "p_code":"741",
+ "code":"744"
},
{
"desc":"If the hardware resources (such as the number of CPU cores and memory size) of the nodes for deploying NodeManagers are different but the NodeManager available hardware r",
@@ -6719,8 +6701,8 @@
"title":"Configuring Resources for a NodeManager Role Instance",
"uri":"mrs_01_0855.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"747"
+ "p_code":"741",
+ "code":"745"
},
{
"desc":"If the storage directories defined by the Yarn NodeManager are incorrect or the Yarn storage plan changes, the system administrator needs to modify the NodeManager storag",
@@ -6728,8 +6710,8 @@
"title":"Changing NodeManager Storage Directories",
"uri":"mrs_01_0856.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"748"
+ "p_code":"741",
+ "code":"746"
},
{
"desc":"In the multi-tenant scenario in security mode, a cluster can be used by multiple users, and tasks of multiple users can be submitted and executed. Users are invisible to ",
@@ -6737,8 +6719,8 @@
"title":"Configuring Strict Permission Control for Yarn",
"uri":"mrs_01_0857.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"749"
+ "p_code":"741",
+ "code":"747"
},
{
"desc":"Yarn provides the container log aggregation function to collect logs generated by containers on each node to HDFS to release local disk space. You can collect logs in eit",
@@ -6746,8 +6728,8 @@
"title":"Configuring Container Log Aggregation",
"uri":"mrs_01_0858.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"750"
+ "p_code":"741",
+ "code":"748"
},
{
"desc":"This section applies to MRS 3.x or later clusters.CGroups is a Linux kernel feature. In YARN this feature allows containers to be limited in their resource usage (example",
@@ -6755,8 +6737,8 @@
"title":"Using CGroups with YARN",
"uri":"mrs_01_0859.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"751"
+ "p_code":"741",
+ "code":"749"
},
{
"desc":"When resources are insufficient or ApplicationMaster fails to start, a client probably encounters running errors.Go to the All Configurations page of Yarn and enter a par",
@@ -6764,8 +6746,8 @@
"title":"Configuring the Number of ApplicationMaster Retries",
"uri":"mrs_01_0860.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"752"
+ "p_code":"741",
+ "code":"750"
},
{
"desc":"This section applies to clusters of MRS 3.x or later.During the process of starting the configuration, when the ApplicationMaster creates a container, the allocated memor",
@@ -6773,8 +6755,8 @@
"title":"Configure the ApplicationMaster to Automatically Adjust the Allocated Memory",
"uri":"mrs_01_0861.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"753"
+ "p_code":"741",
+ "code":"751"
},
{
"desc":"The value of the yarn.http.policy parameter must be consistent on both the server and clients. Web UIs on clients will be garbled if an inconsistency exists, for example,",
@@ -6782,8 +6764,8 @@
"title":"Configuring the Access Channel Protocol",
"uri":"mrs_01_0862.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"754"
+ "p_code":"741",
+ "code":"752"
},
{
"desc":"If memory usage of the submitted application cannot be estimated, you can modify the configuration on the server to determine whether to check the memory usage.If the mem",
@@ -6791,8 +6773,8 @@
"title":"Configuring Memory Usage Detection",
"uri":"mrs_01_0863.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"755"
+ "p_code":"741",
+ "code":"753"
},
{
"desc":"If the custom scheduler is set in ResourceManager, you can set the corresponding web page and other Web applications for the custom scheduler.Go to the All Configurations",
@@ -6800,8 +6782,8 @@
"title":"Configuring the Additional Scheduler WebUI",
"uri":"mrs_01_0864.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"756"
+ "p_code":"741",
+ "code":"754"
},
{
"desc":"The Yarn Restart feature includes ResourceManager Restart and NodeManager Restart.When ResourceManager Restart is enabled, the new active ResourceManager node loads the i",
@@ -6809,8 +6791,8 @@
"title":"Configuring Yarn Restart",
"uri":"mrs_01_0865.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"757"
+ "p_code":"741",
+ "code":"755"
},
{
"desc":"This section applies to clusters of MRS 3.x or later.In YARN, ApplicationMasters run on NodeManagers just like every other container (ignoring unmanaged ApplicationMaster",
@@ -6818,8 +6800,8 @@
"title":"Configuring ApplicationMaster Work Preserving",
"uri":"mrs_01_0866.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"758"
+ "p_code":"741",
+ "code":"756"
},
{
"desc":"This section applies to clusters of MRS 3.x or later.The default log level of localized container is INFO. You can change the log level by configuring yarn.nodemanager.co",
@@ -6827,8 +6809,8 @@
"title":"Configuring the Localized Log Levels",
"uri":"mrs_01_0867.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"759"
+ "p_code":"741",
+ "code":"757"
},
{
"desc":"This section applies to clusters of MRS 3.x or later.Currently, YARN allows the user that starts the NodeManager to run the task submitted by all other users, or the user",
@@ -6836,8 +6818,8 @@
"title":"Configuring Users That Run Tasks",
"uri":"mrs_01_0868.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"760"
+ "p_code":"741",
+ "code":"758"
},
{
"desc":"The default paths for saving Yarn logs are as follows:ResourceManager: /var/log/Bigdata/yarn/rm (run logs) and /var/log/Bigdata/audit/yarn/rm (audit logs)NodeManager: /va",
@@ -6845,8 +6827,8 @@
"title":"Yarn Log Overview",
"uri":"mrs_01_0870.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"761"
+ "p_code":"741",
+ "code":"759"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -6854,8 +6836,8 @@
"title":"Yarn Performance Tuning",
"uri":"mrs_01_0871.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"762"
+ "p_code":"741",
+ "code":"760"
},
{
"desc":"The capacity scheduler of ResourceManager implements job preemption to simplify job running in queues and improve resource utilization. The process is as follows:Assume t",
@@ -6863,8 +6845,8 @@
"title":"Preempting a Task",
"uri":"mrs_01_0872.html",
"doc_type":"cmpntguide",
- "p_code":"762",
- "code":"763"
+ "p_code":"760",
+ "code":"761"
},
{
"desc":"The resource contention scenarios of a cluster are as follows:Submit two jobs (Job 1 and Job 2) with lower priorities.Some tasks of running Job 1 and Job 2 are in the run",
@@ -6872,8 +6854,8 @@
"title":"Setting the Task Priority",
"uri":"mrs_01_0873.html",
"doc_type":"cmpntguide",
- "p_code":"762",
- "code":"764"
+ "p_code":"760",
+ "code":"762"
},
{
"desc":"After the scheduler of a big data cluster is properly configured, you can adjust the available memory, CPU resources, and local disk of each node to optimize the performa",
@@ -6881,8 +6863,8 @@
"title":"Optimizing Node Configuration",
"uri":"mrs_01_0874.html",
"doc_type":"cmpntguide",
- "p_code":"762",
- "code":"765"
+ "p_code":"760",
+ "code":"763"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -6890,8 +6872,8 @@
"title":"Common Issues About Yarn",
"uri":"mrs_01_2077.html",
"doc_type":"cmpntguide",
- "p_code":"743",
- "code":"766"
+ "p_code":"741",
+ "code":"764"
},
{
"desc":"Why mounted directory for Container is not cleared after the completion of the job while using CGroups?The mounted path for the Container should be cleared even if job is",
@@ -6899,8 +6881,8 @@
"title":"Why Mounted Directory for Container is Not Cleared After the Completion of the Job While Using CGroups?",
"uri":"mrs_01_2078.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"767"
+ "p_code":"764",
+ "code":"765"
},
{
"desc":"Why is the HDFS_DELEGATION_TOKEN expired exception reported when a job fails in security mode?HDFS_DELEGATION_TOKEN expires because the token is not updated or it is acce",
@@ -6908,8 +6890,8 @@
"title":"Why the Job Fails with HDFS_DELEGATION_TOKEN Expired Exception?",
"uri":"mrs_01_2079.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"768"
+ "p_code":"764",
+ "code":"766"
},
{
"desc":"If Yarn is restarted in either of the following scenarios, local logs will not be deleted as scheduled and will be retained permanently:When Yarn is restarted during task",
@@ -6917,8 +6899,8 @@
"title":"Why Are Local Logs Not Deleted After YARN Is Restarted?",
"uri":"mrs_01_2080.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"769"
+ "p_code":"764",
+ "code":"767"
},
{
"desc":"Why the task does not fail even though AppAttempts restarts due to failure for more than two times?During the task execution process, if the ContainerExitStatus returns v",
@@ -6926,8 +6908,8 @@
"title":"Why the Task Does Not Fail Even Though AppAttempts Restarts for More Than Two Times?",
"uri":"mrs_01_2081.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"770"
+ "p_code":"764",
+ "code":"768"
},
{
"desc":"After I moved an application from one queue to another, why is it moved back to the original queue after ResourceManager restarts?This problem is caused by the constraint",
@@ -6935,8 +6917,8 @@
"title":"Why Is an Application Moved Back to the Original Queue After ResourceManager Restarts?",
"uri":"mrs_01_2082.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"771"
+ "p_code":"764",
+ "code":"769"
},
{
"desc":"Why does Yarn not release the blacklist even all nodes are added to the blacklist?In Yarn, when the number of application nodes added to the blacklist by ApplicationMaste",
@@ -6944,8 +6926,8 @@
"title":"Why Does Yarn Not Release the Blacklist Even All Nodes Are Added to the Blacklist?",
"uri":"mrs_01_2083.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"772"
+ "p_code":"764",
+ "code":"770"
},
{
"desc":"The switchover of ResourceManager occurs continuously when multiple, for example 2,000, tasks are running concurrently, causing the Yarn service unavailable.The cause is ",
@@ -6953,8 +6935,8 @@
"title":"Why Does the Switchover of ResourceManager Occur Continuously?",
"uri":"mrs_01_2084.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"773"
+ "p_code":"764",
+ "code":"771"
},
{
"desc":"Why does a new application fail if a NodeManager has been in unhealthy status for 10 minutes?When nodeSelectPolicy is set to SEQUENCE and the first NodeManager connected ",
@@ -6962,8 +6944,8 @@
"title":"Why Does a New Application Fail If a NodeManager Has Been in Unhealthy Status for 10 Minutes?",
"uri":"mrs_01_2085.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"774"
+ "p_code":"764",
+ "code":"772"
},
{
"desc":"Why does an error occur when I query the applicationID of a completed or non-existing application using the RESTful APIs?The Superior scheduler only stores the applicatio",
@@ -6971,8 +6953,8 @@
"title":"Why Does an Error Occur When I Query the ApplicationID of a Completed or Non-existing Application Using the RESTful APIs?",
"uri":"mrs_01_2087.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"775"
+ "p_code":"764",
+ "code":"773"
},
{
"desc":"In Superior scheduling mode, if a single NodeManager is faulty, why may the MapReduce tasks fail?In normal cases, when the attempt of a single task of an application fail",
@@ -6980,8 +6962,8 @@
"title":"Why May A Single NodeManager Fault Cause MapReduce Task Failures in the Superior Scheduling Mode?",
"uri":"mrs_01_2088.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"776"
+ "p_code":"764",
+ "code":"774"
},
{
"desc":"When a queue is deleted when there are applications running in it, these applications are moved to the \"lost_and_found\" queue. When these applications are moved back to a",
@@ -6989,8 +6971,8 @@
"title":"Why Are Applications Suspended After They Are Moved From Lost_and_Found Queue to Another Queue?",
"uri":"mrs_01_2089.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"777"
+ "p_code":"764",
+ "code":"775"
},
{
"desc":"How do I limit the size of application diagnostic messages stored in the ZKstore?In some cases, it has been observed that diagnostic messages may grow infinitely. Because",
@@ -6998,8 +6980,8 @@
"title":"How Do I Limit the Size of Application Diagnostic Messages Stored in the ZKstore?",
"uri":"mrs_01_2090.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"778"
+ "p_code":"764",
+ "code":"776"
},
{
"desc":"Why does a MapReduce job fail to run when a non-ViewFS file system is configured as ViewFS?When a non-ViewFS file system is configured as a ViewFS using cluster, the user",
@@ -7007,8 +6989,8 @@
"title":"Why Does a MapReduce Job Fail to Run When a Non-ViewFS File System Is Configured as ViewFS?",
"uri":"mrs_01_2091.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"779"
+ "p_code":"764",
+ "code":"777"
},
{
"desc":"After the Native Task feature is enabled, Reduce tasks fail to run in some OSs.When -Dmapreduce.job.map.output.collector.class=org.apache.hadoop.mapred.nativetask.NativeM",
@@ -7016,8 +6998,8 @@
"title":"Why Do Reduce Tasks Fail to Run in Some OSs After the Native Task Feature is Enabled?",
"uri":"mrs_01_24051.html",
"doc_type":"cmpntguide",
- "p_code":"766",
- "code":"780"
+ "p_code":"764",
+ "code":"778"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -7026,7 +7008,7 @@
"uri":"mrs_01_2092.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"781"
+ "code":"779"
},
{
"desc":"ZooKeeper is an open-source, highly reliable, and distributed consistency coordination service. ZooKeeper is designed to solve the problem that data consistency cannot be",
@@ -7034,8 +7016,8 @@
"title":"Using ZooKeeper from Scratch",
"uri":"mrs_01_2093.html",
"doc_type":"cmpntguide",
- "p_code":"781",
- "code":"782"
+ "p_code":"779",
+ "code":"780"
},
{
"desc":"Navigation path for setting parameters:Go to the All Configurations page of ZooKeeper by referring to Modifying Cluster Service Configuration Parameters. Enter a paramete",
@@ -7043,8 +7025,8 @@
"title":"Common ZooKeeper Parameters",
"uri":"mrs_01_2094.html",
"doc_type":"cmpntguide",
- "p_code":"781",
- "code":"783"
+ "p_code":"779",
+ "code":"781"
},
{
"desc":"Use a ZooKeeper client in an O&M scenario or service scenario.You have installed the client. For example, the installation directory is /opt/client. The client directory ",
@@ -7052,8 +7034,8 @@
"title":"Using a ZooKeeper Client",
"uri":"mrs_01_2095.html",
"doc_type":"cmpntguide",
- "p_code":"781",
- "code":"784"
+ "p_code":"779",
+ "code":"782"
},
{
"desc":"Configure znode permission of ZooKeeper.ZooKeeper uses an access control list (ACL) to implement znode access control. The ZooKeeper client specifies a znode ACL, and the",
@@ -7061,8 +7043,8 @@
"title":"Configuring the ZooKeeper Permissions",
"uri":"mrs_01_2097.html",
"doc_type":"cmpntguide",
- "p_code":"781",
- "code":"785"
+ "p_code":"779",
+ "code":"783"
},
{
"desc":"Log path: /var/log/Bigdata/zookeeper/quorumpeer (Run log), /var/log/Bigdata/audit/zookeeper/quorumpeer (Audit log)Log archive rule: The automatic ZooKeeper log compressio",
@@ -7070,8 +7052,8 @@
"title":"ZooKeeper Log Overview",
"uri":"mrs_01_2106.html",
"doc_type":"cmpntguide",
- "p_code":"781",
- "code":"786"
+ "p_code":"779",
+ "code":"784"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -7079,8 +7061,8 @@
"title":"Common Issues About ZooKeeper",
"uri":"mrs_01_2107.html",
"doc_type":"cmpntguide",
- "p_code":"781",
- "code":"787"
+ "p_code":"779",
+ "code":"785"
},
{
"desc":"After a large number of znodes are created, ZooKeeper servers in the ZooKeeper cluster become faulty and cannot be automatically recovered or restarted.Logs of followers:",
@@ -7088,8 +7070,8 @@
"title":"Why Do ZooKeeper Servers Fail to Start After Many znodes Are Created?",
"uri":"mrs_01_2108.html",
"doc_type":"cmpntguide",
- "p_code":"787",
- "code":"788"
+ "p_code":"785",
+ "code":"786"
},
{
"desc":"After a large number of znodes are created in a parent directory, the ZooKeeper client will fail to fetch all child nodes of this parent directory in a single request.Log",
@@ -7097,8 +7079,8 @@
"title":"Why Does the ZooKeeper Server Display the java.io.IOException: Len Error Log?",
"uri":"mrs_01_2109.html",
"doc_type":"cmpntguide",
- "p_code":"787",
- "code":"789"
+ "p_code":"785",
+ "code":"787"
},
{
"desc":"Why four letter commands do not work with linux netcat command when secure netty configurations are enabled at Zookeeper server?For example,echo stat |netcat host portLin",
@@ -7106,8 +7088,8 @@
"title":"Why Four Letter Commands Don't Work With Linux netcat Command When Secure Netty Configurations Are Enabled at Zookeeper Server?",
"uri":"mrs_01_2110.html",
"doc_type":"cmpntguide",
- "p_code":"787",
- "code":"790"
+ "p_code":"785",
+ "code":"788"
},
{
"desc":"How to check whether the role of a ZooKeeper instance is a leader or follower.Log in to Manager and choose Cluster > Name of the desired cluster > Service > ZooKeeper > I",
@@ -7115,8 +7097,8 @@
"title":"How Do I Check Which ZooKeeper Instance Is a Leader?",
"uri":"mrs_01_2111.html",
"doc_type":"cmpntguide",
- "p_code":"787",
- "code":"791"
+ "p_code":"785",
+ "code":"789"
},
{
"desc":"When the IBM JDK is used, the client fails to connect to ZooKeeper.The possible cause is that the jaas.conf file format of the IBM JDK is different from that of the commo",
@@ -7124,8 +7106,8 @@
"title":"Why Cannot the Client Connect to ZooKeeper using the IBM JDK?",
"uri":"mrs_01_2112.html",
"doc_type":"cmpntguide",
- "p_code":"787",
- "code":"792"
+ "p_code":"785",
+ "code":"790"
},
{
"desc":"The ZooKeeper client fails to refresh a TGT and therefore ZooKeeper cannot be accessed. The error message is as follows:ZooKeeper uses the system command kinit – R to ref",
@@ -7133,8 +7115,8 @@
"title":"What Should I Do When the ZooKeeper Client Fails to Refresh a TGT?",
"uri":"mrs_01_2113.html",
"doc_type":"cmpntguide",
- "p_code":"787",
- "code":"793"
+ "p_code":"785",
+ "code":"791"
},
{
"desc":"When the client connects to a non-leader instance, run the deleteall command to delete a large number of znodes, the error message \"Node does not exist\" is displayed, but",
@@ -7142,8 +7124,8 @@
"title":"Why Is Message \"Node does not exist\" Displayed when A Large Number of Znodes Are Deleted Using the deleteallCommand",
"uri":"mrs_01_2114.html",
"doc_type":"cmpntguide",
- "p_code":"787",
- "code":"794"
+ "p_code":"785",
+ "code":"792"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -7152,7 +7134,7 @@
"uri":"mrs_01_2122.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"795"
+ "code":"793"
},
{
"desc":"For MRS 1.9.2 or later: You can modify service configuration parameters on the cluster management page of the MRS management console.Log in to the MRS console. In the lef",
@@ -7160,8 +7142,8 @@
"title":"Modifying Cluster Service Configuration Parameters",
"uri":"mrs_01_2125.html",
"doc_type":"cmpntguide",
- "p_code":"795",
- "code":"796"
+ "p_code":"793",
+ "code":"794"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -7169,8 +7151,8 @@
"title":"Accessing Manager",
"uri":"mrs_01_2123.html",
"doc_type":"cmpntguide",
- "p_code":"795",
- "code":"797"
+ "p_code":"793",
+ "code":"795"
},
{
"desc":"Clusters of versions earlier than MRS 3.x use MRS Manager to monitor, configure, and manage clusters. You can open the MRS Manager page on the MRS console.If you have bou",
@@ -7178,8 +7160,8 @@
"title":"Accessing MRS Manager (Versions Earlier Than MRS 3.x)",
"uri":"mrs_01_0102.html",
"doc_type":"cmpntguide",
- "p_code":"797",
- "code":"798"
+ "p_code":"795",
+ "code":"796"
},
{
"desc":"In MRS 3.x or later, FusionInsight Manager is used to monitor, configure, and manage clusters. After the cluster is installed, you can use the account to log in to Fusion",
@@ -7187,8 +7169,8 @@
"title":"Accessing FusionInsight Manager (MRS 3.x or Later)",
"uri":"mrs_01_2124.html",
"doc_type":"cmpntguide",
- "p_code":"797",
- "code":"799"
+ "p_code":"795",
+ "code":"797"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -7196,8 +7178,8 @@
"title":"Using an MRS Client",
"uri":"mrs_01_2126.html",
"doc_type":"cmpntguide",
- "p_code":"795",
- "code":"800"
+ "p_code":"793",
+ "code":"798"
},
{
"desc":"This section describes how to install clients of all services (excluding Flume) in an MRS cluster. For details about how to install the Flume client, see Installing the F",
@@ -7205,8 +7187,8 @@
"title":"Installing a Client (Version 3.x or Later)",
"uri":"mrs_01_2127.html",
"doc_type":"cmpntguide",
- "p_code":"800",
- "code":"801"
+ "p_code":"798",
+ "code":"799"
},
{
"desc":"An MRS client is required. The MRS cluster client can be installed on the Master or Core node in the cluster or on a node outside the cluster.After a cluster of versions ",
@@ -7214,8 +7196,8 @@
"title":"Installing a Client (Versions Earlier Than 3.x)",
"uri":"mrs_01_2128.html",
"doc_type":"cmpntguide",
- "p_code":"800",
- "code":"802"
+ "p_code":"798",
+ "code":"800"
},
{
"desc":"A cluster provides a client for you to connect to a server, view task results, or manage data. If you modify service configuration parameters on Manager and restart the s",
@@ -7223,8 +7205,8 @@
"title":"Updating a Client (Version 3.x or Later)",
"uri":"mrs_01_2129.html",
"doc_type":"cmpntguide",
- "p_code":"800",
- "code":"803"
+ "p_code":"798",
+ "code":"801"
},
{
"desc":"This section applies to clusters of versions earlier than MRS 3.x. For MRS 3.x or later, see Updating a Client (Version 3.x or Later).ScenarioAn MRS cluster provides a cl",
@@ -7232,8 +7214,8 @@
"title":"Updating a Client (Versions Earlier Than 3.x)",
"uri":"mrs_01_2130.html",
"doc_type":"cmpntguide",
- "p_code":"800",
- "code":"804"
+ "p_code":"798",
+ "code":"802"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
@@ -7242,6 +7224,6 @@
"uri":"en-us_topic_0000001351362309.html",
"doc_type":"cmpntguide",
"p_code":"",
- "code":"805"
+ "code":"803"
}
]
\ No newline at end of file
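Most of the ALL_META.TXT.json change above is mechanical renumbering: two sections were removed from the guide, so the sequential "code" value of every later entry shifts down by two, and each child entry's "p_code" parent reference has to shift with it. One way to confirm that the renumbered file is still internally consistent is to look for entries whose p_code no longer matches any existing code. The following is a minimal sketch, assuming jq is available on the machine holding the repository; the file path is the one from the diff header, and the command should print nothing if every parent reference resolves:

# list entries whose p_code does not resolve to an existing code
jq -r '[.[] | .code] as $codes
  | .[]
  | select((.p_code // "") != "" and (($codes | index(.p_code)) == null))
  | "orphaned entry: code=\(.code) p_code=\(.p_code) title=\(.title)"' \
  docs/mrs/component-operation-guide/ALL_META.TXT.json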
diff --git a/docs/mrs/component-operation-guide/en-us_image_0000001533052357.png b/docs/mrs/component-operation-guide/en-us_image_0000001533052357.png
new file mode 100644
index 00000000..3a540db9
Binary files /dev/null and b/docs/mrs/component-operation-guide/en-us_image_0000001533052357.png differ
diff --git a/docs/mrs/component-operation-guide/en-us_topic_0000001351362309.html b/docs/mrs/component-operation-guide/en-us_topic_0000001351362309.html
index 5a294125..53197eab 100644
--- a/docs/mrs/component-operation-guide/en-us_topic_0000001351362309.html
+++ b/docs/mrs/component-operation-guide/en-us_topic_0000001351362309.html
@@ -8,7 +8,12 @@
-2022-11-01
+ |
2023-08-02
+ |
+- Removed section "GeoMesa Command Line" from HBase.
- Removed section "Hive Materialized View" from Hive.
- Fixed link errors in some sections.
+ |
+
+2022-11-01
|
Modified the following content:
Updated the screenshots in the operation guides for ClickHouse, Ranger, Spark2x, Tez, and Yarn.
diff --git a/docs/mrs/component-operation-guide/mrs_01_0132.html b/docs/mrs/component-operation-guide/mrs_01_0132.html
index d12ca30c..0b9c1dc1 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0132.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0132.html
@@ -3,19 +3,19 @@
Accessing the Hue Web UI
ScenarioAfter Hue is installed in an MRS cluster, users can use Hadoop-related components on the Hue web UI.
This section describes how to open the Hue web UI on the MRS cluster.
- To access the Hue web UI, you are advised to use a browser that is compatible with the Hue WebUI, for example, Google Chrome 50. The Internet Explorer may be incompatible with the Hue web UI.
+ To access the Hue web UI, you are advised to use a browser that is compatible with the Hue WebUI, for example, Google Chrome 50. Internet Explorer may be incompatible with the Hue web UI.
Impact on the SystemSite trust must be added to the browser when you access Manager and Hue web UI for the first time. Otherwise, the Hue web UI cannot be accessed.
- PrerequisitesWhen Kerberos authentication is enabled, the MRS cluster administrator has assigned the permission for using Hive to the user. For details, see Creating a User. For example, create a human-machine user named hueuser, add the user to user groups hive (the primary group), hadoop, supergroup, and System_administrator, and assign the System_administrator role.
+ PrerequisitesWhen Kerberos authentication is enabled, the MRS cluster administrator has assigned the permission for using Hive to the user. For example, create a human-machine user named hueuser, add the user to user groups hive (the primary group), hadoop, supergroup, and System_administrator, and assign the System_administrator role.
This user is used to log in to Manager.
Procedure- Log in to the service page.
For versions earlier than MRS 3.x, click the cluster name on the MRS console and choose Components > Hue.
For MRS 3.x or later, log in to FusionInsight Manager (for details, see Accessing FusionInsight Manager (MRS 3.x or Later)) and choose Cluster > Services > Hue.
- - On the right of Hue WebUI, click the link to open the Hue web UI.
Hue WebUI provides the following functions:
+ - On the right of Hue WebUI, click the link to open the Hue web UI.
Hue WebUI provides the following functions:
- Click
to execute query statements of Hive and SparkSQL as well as Notebook code. Make sure that Hive and Spark2x have been installed in the MRS cluster before this operation. - Click
to submit workflow tasks, scheduled tasks, and bundle tasks. - Click
to view, import, and export tasks on the Hue web UI, such as workflow tasks, scheduled tasks, and bundle tasks. - Click
to manage metadata in Hive and SparkSQL. Make sure that Hive and Spark2x have been installed in the MRS cluster before this operation. - Click
to view the directories and files in HDFS. Make sure that HDFS has been installed in the MRS cluster before this operation. - Click
to view all jobs in the MRS cluster. Make sure that Yarn has been installed in the MRS cluster before this operation. - Use
to create or query HBase tables. Make sure that the HBase component has been installed in the MRS cluster and the Thrift1Server instance has been added before this operation. - Use
to import data that is in the CSV or TXT format.
- - When you log in to the Hue web UI as user hueuser for the first time, you need to change the password.
- After obtaining the URL for accessing the Hue web UI, you can give the URL to other users who cannot access MRS Manager for accessing the Hue web UI.
- If you perform operations on the Hue WebUI only but not on Manager, you must enter the password of the current login user when accessing Manager again.
+ - When you log in to the Hue web UI as user hueuser for the first time, you need to change the password.
- After obtaining the URL for accessing the Hue web UI, you can give the URL to other users who cannot access MRS Manager for accessing the Hue web UI.
- If you perform operations on the Hue WebUI only but not on Manager, you must enter the password of the current login user when accessing Manager again.
diff --git a/docs/mrs/component-operation-guide/mrs_01_0368.html b/docs/mrs/component-operation-guide/mrs_01_0368.html
index 38e1d327..a27e1afe 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0368.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0368.html
@@ -134,7 +134,7 @@
PrerequisitesThe client has been installed. For example, the client is installed in the /opt/client directory. The client directory in the following operations is only an example. Change it to the actual installation directory. Before using the client, download and update the client configuration file, and ensure that the active management node of Manager is available.
ProcedureFor versions earlier than MRS 3.x, perform the following operations:
- - Download the client configuration file.
- Log in to MRS Manager. For details, see Accessing Manager. Then, choose Services.
- Click Download Client.
Set Client Type to Only configuration files, Download To to Server, and click OK to generate the client configuration file. The generated file is saved in the /tmp/MRS-client directory on the active management node by default. You can customize the file path.
+- Download the client configuration file.
- Log in to MRS Manager. For details, see Accessing Manager. Then, choose Services.
- Click Download Client.
Set Client Type to Only configuration files, Download To to Server, and click OK to generate the client configuration file. The generated file is saved in the /tmp/MRS-client directory on the active management node by default. You can customize the file path.
- Log in to the active management node of MRS Manager.
- On the Node tab page, view the Name parameter. The node that contains master1 in its name is the Master1 node. The node that contains master2 in its name is the Master2 node.
The active and standby management nodes of MRS Manager are installed on Master nodes by default. Because Master1 and Master2 are switched over in active and standby mode, Master1 is not always the active management node of MRS Manager. Run a command in Master1 to check whether Master1 is active management node of MRS Manager. For details about the command, see 2.d.
- Log in to the Master1 node using the password as user root. For details, see Logging In to an ECS.
- Run the following commands to switch to user omm:
sudo su - root
@@ -149,14 +149,12 @@ NodeName HostName HAVersion StartTime
- Log in to the active management node, for example, 192-168-0-30 of MRS Manager as user root, and run the following command to switch to user omm:
sudo su - omm
- Run the following command to switch to the client installation directory, for example, /opt/client:
cd /opt/client
- - Run the following command to update the client configuration for the active management node.
sh refreshConfig.sh /opt/client Full path of the client configuration file package
+ - Run the following command to update the client configuration for the active management node.
sh refreshConfig.sh /opt/client Full path of the client configuration file package
For example, run the following command:
sh refreshConfig.sh /opt/client /tmp/MRS-client/MRS_Services_Client.tar
If the following information is displayed, the configurations have been updated successfully.
ReFresh components client config is complete.
Succeed to refresh components client config.
-
- Use the client on a Master node.
- On the active management node where the client is updated, for example, node 192-168-0-30, run the following command to go to the client directory:
cd /opt/client
- Run the following command to configure environment variables:
source bigdata_env
- If Kerberos authentication is enabled for the current cluster, run the following command to authenticate the current user. The current user must have the permission to create HBase tables. If Kerberos authentication is disabled for the current cluster, skip this step.
kinit MRS cluster user
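Taken together, the client-update steps in this procedure amount to a short terminal session on the active management node. The following is a minimal sketch, run interactively and in order, assuming the client is installed in /opt/client and the configuration package was generated at the default /tmp/MRS-client path shown above; the Kerberos user name is a hypothetical placeholder, so substitute a user with permission to create HBase tables:

sudo su - omm        # switch to user omm on the active management node
cd /opt/client       # client installation directory used throughout this procedure
sh refreshConfig.sh /opt/client /tmp/MRS-client/MRS_Services_Client.tar   # refresh the client configuration
source bigdata_env   # load the client environment variables
kinit hbaseuser      # hypothetical user; only needed when Kerberos authentication is enabled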
diff --git a/docs/mrs/component-operation-guide/mrs_01_0370.html b/docs/mrs/component-operation-guide/mrs_01_0370.html
index bcb1f43c..bce8c6d4 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0370.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0370.html
@@ -4,19 +4,19 @@
ScenarioAfter Hue is installed in an MRS cluster, users can use Hadoop and Hive on the Hue web UI.
For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this function.
This section describes how to open the Hue web UI on the MRS cluster.
- To access the Hue web UI, you are advised to use a browser that is compatible with the Hue WebUI, for example, Google Chrome 50. The Internet Explorer may be incompatible with the Hue web UI.
+ To access the Hue web UI, you are advised to use a browser that is compatible with the Hue WebUI, for example, Google Chrome 50. Internet Explorer may be incompatible with the Hue web UI.
For versions earlier than MRS 1.9.2, the Kerberos authentication is disabled for an MRS cluster, access the Hue web UI by referring to Web UIs of Open Source Components.
Impact on the SystemSite trust must be added to the browser when you access Manager and Hue web UI for the first time. Otherwise, the Hue web UI cannot be accessed.
- PrerequisitesWhen Kerberos authentication is enabled, the MRS cluster administrator has assigned the permission for using Hive to the user. For details, see Creating a User. For example, create a human-machine user named hueuser, add the user to user groups hive (the primary group), hadoop, and supergroup, and role System_administrator.
+ PrerequisitesWhen Kerberos authentication is enabled, the MRS cluster administrator has assigned the permission for using Hive to the user. For example, create a human-machine user named hueuser, add the user to user groups hive (the primary group), hadoop, and supergroup, and assign the System_administrator role to the user.
This user is used to log in to the Hue WebUI.
Procedure- Log in to the service page.
- For versions earlier than MRS 1.9.2, log in to MRS Manager and choose Services.
- For MRS 1.9.2 or later, click the cluster name on the MRS console and choose Components.
- - Select . On the right side of Hue WebUI, click the link to log in to the Hue web UI as user hueuser.
Hue WebUI provides the following functions:
+ - Select . On the right side of Hue WebUI, click the link to log in to the Hue web UI as user hueuser.
Hue WebUI provides the following functions:
- If Hive is installed in the MRS cluster, you can use Query Editors to execute query statements of Hive. Hive has been installed in the MRS cluster.
- If Hive is installed in the MRS cluster, you can use Data Browsers to manage Hive tables.
- If HDFS is installed in the MRS cluster, you can use
to view directories and files in HDFS. - If Yarn is installed in the MRS cluster, you can use
to view all jobs in the MRS cluster.
- - When you log in to the Hue web UI as user hueuser for the first time, you need to change the password.
- After obtaining the URL for accessing the Hue web UI, you can give the URL to other users who cannot access MRS Manager for accessing the Hue web UI.
- If you perform operations on the Hue WebUI only but not on Manager, you must enter the password of the current login user when accessing Manager again.
+ - When you log in to the Hue web UI as user hueuser for the first time, you need to change the password.
- After obtaining the URL for accessing the Hue web UI, you can give the URL to other users who cannot access MRS Manager for accessing the Hue web UI.
- If you perform operations on the Hue WebUI only but not on Manager, you must enter the password of the current login user when accessing Manager again.
diff --git a/docs/mrs/component-operation-guide/mrs_01_0397.html b/docs/mrs/component-operation-guide/mrs_01_0397.html
index 380fe9ba..ff429981 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0397.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0397.html
@@ -69,7 +69,7 @@ client.sinks.kafka_sink.kafka.bootstrap.servers = Kafka domain name. This parameter is mandatory for a security cluster, for example, hadoop.xxx.com.
+client.sinks.kafka_sink.kafka.kerberos.domain.name = Kafka domain name. This parameter is mandatory for a security cluster.
client.sinks.kafka_sink.requiredAcks = 0
client.sources.static_log_source.channels = static_log_channel
@@ -142,7 +142,7 @@ client.sinks.kafka_sink.kafka.bootstrap.servers = Kafka domain name. This parameter is mandatory for a security cluster, for example, hadoop.xxx.com.
+client.sinks.kafka_sink.kafka.kerberos.domain.name = Kafka domain name. This parameter is mandatory for a security cluster.
client.sinks.kafka_sink.requiredAcks = 0
client.sources.static_log_source.channels = static_log_channel
diff --git a/docs/mrs/component-operation-guide/mrs_01_0434.html b/docs/mrs/component-operation-guide/mrs_01_0434.html
index 40e17aeb..c48e1cc0 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0434.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0434.html
@@ -5,9 +5,9 @@
The Presto component of MRS 3.x does not support Kerberos authentication.
Prerequisites- The password of user admin has been obtained. The password of user admin is specified by the user during MRS cluster creation.
- The client has been updated.
- The Presto client has been manually installed for MRS 3.x clusters.
- Procedure- For clusters with Kerberos authentication enabled, log in to MRS Manager and create a role with the Hive Admin Privilege permission. For details about how to create a role, see Creating a Role.
- Create a user that belongs to the Presto and Hive groups, bind the role created in 1 to the user, and download the user authentication file. For details, see Creating a User and Downloading a User Authentication File.
- Upload the downloaded user.keytab and krb5.conf files to the node where the MRS client resides.
For clusters with Kerberos authentication enabled, 2 to 3 must be performed. For normal clusters, start from 4.
+ Procedure- For clusters with Kerberos authentication enabled, log in to MRS Manager and create a role with the Hive Admin Privilege permission.
- Create a user that belongs to the Presto and Hive groups, bind the role created in 1 to the user, and download the user authentication file.
- Upload the downloaded user.keytab and krb5.conf files to the node where the MRS client resides.
For clusters with Kerberos authentication enabled, steps 2 to 3 must be performed. For normal clusters, start from step 4.
- - Prepare a client based on service conditions and log in to the node where the client is installed.
For example, if you have updated the client on the Master2 node, log in to the Master2 node to use the client. For details, see Updating a Client.
+ - Prepare a client based on service conditions and log in to the node where the client is installed.
For example, if you have updated the client on the Master2 node, log in to the Master2 node to use the client.
- Run the following command to switch the user:
sudo su - omm
- Run the following command to switch to the client directory, for example, /opt/client.
cd /opt/client
- Run the following command to configure environment variables:
source bigdata_env
diff --git a/docs/mrs/component-operation-guide/mrs_01_0442.html b/docs/mrs/component-operation-guide/mrs_01_0442.html
index 966ca044..00b9e601 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0442.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0442.html
@@ -2,9 +2,9 @@
Using Hive from Scratch
Hive is a data warehouse framework built on Hadoop. It maps structured data files to a database table and provides SQL-like functions to analyze and process data. It also allows you to quickly perform simple MapReduce statistics using SQL-like statements without the need of developing a specific MapReduce application. It is suitable for statistical analysis of data warehouses.
- BackgroundSuppose a user develops an application to manage users who use service A in an enterprise. The procedure of operating service A on the Hive client is as follows:
+ BackgroundSuppose a user develops an application to manage users who use service A in an enterprise. The procedure of operating service A on the Hive client is as follows:
Operations on common tables:
- - Create the user_info table.
- Add users' educational backgrounds and professional titles to the table.
- Query user names and addresses by user ID.
- Delete the user information table after service A ends.
+ - Create the user_info table.
- Add users' educational backgrounds and professional titles to the table.
- Query user names and addresses by user ID.
- Delete the user information table after service A ends.
Table 1 User informationID
|
@@ -132,13 +132,13 @@
---|
- Procedure- Download the client configuration file.
- For versions earlier than MRS 3.x, perform the following operations:
- Log in to MRS Manager. For details, see Accessing Manager. Then, choose Services.
- Click Download Client.
Set Client Type to Only configuration files, Download to to Server, and click OK to generate the client configuration file. The generated file is saved in the /tmp/MRS-client directory on the active management node by default.
+Procedure- Download the client configuration file.
- For versions earlier than MRS 3.x, perform the following operations:
- Log in to MRS Manager. For details, see Accessing Manager. Then, choose Services.
- Click Download Client.
Set Client Type to Only configuration files, Download to to Server, and click OK to generate the client configuration file. The generated file is saved in the /tmp/MRS-client directory on the active management node by default.
- For MRS 3.x or later, perform the following operations:
- Log in to FusionInsight Manager. For details, see Accessing FusionInsight Manager (MRS 3.x or Later).
- Choose Cluster > Name of the desired cluster > Dashboard > More > Download Client.
- Download the cluster client.
Set Select Client Type to Configuration Files Only , select a platform type, and click OK to generate the client configuration file which is then saved in the /tmp/FusionInsight-Client/ directory on the active management node by default.
- Log in to the active management node of Manager.
- For versions earlier than MRS 3.x, perform the following operations:
- On the MRS console, click Clusters, choose Active Clusters, and click a cluster name. On the Nodes tab, view the node names. The node whose name contains master1 is the Master1 node, and the node whose name contains master2 is the Master2 node.
The active and standby management nodes of MRS Manager are installed on Master nodes by default. Because Master1 and Master2 are switched over in active and standby mode, Master1 is not always the active management node of MRS Manager. Run a command in Master1 to check whether Master1 is active management node of MRS Manager. For details about the command, see 2.d.
- - Log in to the Master1 node using the password as user root. For details, see Logging In to a Cluster.
- Run the following commands to switch to user omm:
sudo su - root
+ - Log in to the Master1 node using the password as user root.
- Run the following commands to switch to user omm:
sudo su - root
su - omm
- Run the following command to check the active management node of MRS Manager:
sh ${BIGDATA_HOME}/om-0.0.1/sbin/status-oms.sh
In the command output, the node whose HAActive is active is the active management node, and the node whose HAActive is standby is the standby management node. In the following example, mgtomsdat-sh-3-01-1 is the active management node, and mgtomsdat-sh-3-01-2 is the standby management node.
@@ -160,19 +160,17 @@ NodeName HostName HAVersion StartTime
- Run the following command to go to the client installation directory:
cd /opt/client
The cluster client has been installed in advance. The following client installation directory is used as an example. Change it based on the site requirements.
- - Run the following command to update the client configuration for the active management node.
sh refreshConfig.sh /opt/client Full path of the client configuration file package
+ - Run the following command to update the client configuration for the active management node.
sh refreshConfig.sh /opt/client Full path of the client configuration file package
For example, run the following command:
sh refreshConfig.sh /opt/client /tmp/FusionInsight-Client/FusionInsight_Cluster_1_Services_Client.tar
If the following information is displayed, the configurations have been updated successfully.
ReFresh components client config is complete.
Succeed to refresh components client config.
-
- Use the client on a Master node.
- On the active management node, for example, 192-168-0-30, run the following command to switch to the client directory, for example, /opt/client.
cd /opt/client
- Run the following command to configure environment variables:
source bigdata_env
- If Kerberos authentication is enabled for the current cluster, run the following command to authenticate the current user:
kinit MRS cluster user
Example: user kinit hiveuser
-The current user must have the permission to create Hive tables. To create a role with the permission, refer to Creating a Role. To bind the role to the current user, refer to Creating a User.If Kerberos authentication is disabled, skip this step.
+The current user must have the permission to create Hive tables. If Kerberos authentication is disabled, skip this step.
- Run the client command of the Hive component directly.
beeline
- Run the Hive client command to implement service A.
Operations on internal tables:
@@ -181,9 +179,9 @@ NodeName HostName HAVersion StartTime
insert into table user_info(id,name,gender,age,addr) values("12005000201","A","Male",19,"City A");
For MRS 2.x, perform the following operations:
insert into table user_info values("12005000201","A","Male",19,"City A");
- - Add users' educational backgrounds and professional titles to the user_info table.
For example, to add educational background and title information about user 12005000201, run the following command:
+ - Add users' educational backgrounds and professional titles to the user_info table.
For example, to add educational background and title information about user 12005000201, run the following command:
alter table user_info add columns(education string,technical string);
- - Query user names and addresses by user ID.
For example, to query the name and address of user 12005000201, run the following command:
+ - Query user names and addresses by user ID.
For example, to query the name and address of user 12005000201, run the following command:
select name,addr from user_info where id='12005000201';
- Delete the user information table.
drop table user_info;
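For reference, a minimal sketch of the user_info table definition assumed by the statements above (column names and types are inferred from the insert example and may differ from the actual service table):
create table user_info(id string, name string, gender string, age int, addr string);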
diff --git a/docs/mrs/component-operation-guide/mrs_01_0443.html b/docs/mrs/component-operation-guide/mrs_01_0443.html
index 3460541a..d55d7ea4 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0443.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0443.html
@@ -4,7 +4,7 @@
The operations described in this section apply only to clusters of versions earlier than MRS 3.x.
If the default parameter settings of the MRS service cannot meet your requirements, you can modify the parameter settings as required.
- - Log in to the service page.
For versions earlier than MRS 1.9.2: Log in to MRS Manager, and choose Services.
+- Log in to the service page.
For versions earlier than MRS 1.9.2: Log in to MRS Manager, and choose Services.
For MRS 1.9.2 or later: Click the cluster name on the MRS console and choose Components.
- Choose HBase > Service Configuration and switch Basic to All. On the displayed HBase configuration page, modify parameter settings.
Table 1 HBase parameters: Parameter
diff --git a/docs/mrs/component-operation-guide/mrs_01_0500.html b/docs/mrs/component-operation-guide/mrs_01_0500.html
index 4e7a80dd..fc6ec1d6 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0500.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0500.html
@@ -18,8 +18,6 @@
- Using the ReplicationSyncUp Tool
-- GeoMesa Command Line
-
- Configuring HBase DR
- Configuring HBase Data Compression and Encoding
diff --git a/docs/mrs/component-operation-guide/mrs_01_0501.html b/docs/mrs/component-operation-guide/mrs_01_0501.html
index 9341a6ba..0b7e979a 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0501.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0501.html
@@ -5,13 +5,13 @@
Prerequisites- The active and standby clusters have been successfully installed and started (the cluster status is Running on the Active Clusters page), and you have the administrator rights of the clusters.
-- The network between the active and standby clusters is normal and ports can be used properly.
- Cross-cluster mutual trust has been configured. For details, see Configuring Cross-Cluster Mutual Trust Relationships.
- If historical data exists in the active cluster and needs to be synchronized to the standby cluster, cross-cluster replication must be configured for the active and standby clusters. For details, see Enabling Cross-Cluster Copy.
- Time is consistent between the active and standby clusters and the Network Time Protocol (NTP) service on the active and standby clusters uses the same time source.
- Mapping relationships between the names of all hosts in the active and standby clusters and service IP addresses have been configured in the /etc/hosts file by appending 192.***.***.*** host1 to the hosts file.
- The network bandwidth between the active and standby clusters is determined based on service volume, which cannot be less than the possible maximum service volume.
+- The network between the active and standby clusters is normal and ports can be used properly.
- Cross-cluster mutual trust has been configured.
- If historical data exists in the active cluster and needs to be synchronized to the standby cluster, cross-cluster replication must be configured for the active and standby clusters. For details, see Enabling Cross-Cluster Copy.
- Time is consistent between the active and standby clusters and the Network Time Protocol (NTP) service on the active and standby clusters uses the same time source.
- Mapping relationships between the names of all hosts in the active and standby clusters and service IP addresses have been configured in the /etc/hosts file by appending 192.***.***.*** host1 to the hosts file.
- The network bandwidth between the active and standby clusters is determined based on service volume, which cannot be less than the possible maximum service volume.
Constraints- Although HBase cluster replication provides a real-time data replication function, the data synchronization progress is determined by several factors, such as the service loads in the active cluster and the health status of processes in the standby cluster. In normal cases, the standby cluster should not take over services. In extreme cases, system maintenance personnel and other decision makers determine whether the standby cluster takes over services according to the current data synchronization indicators.
- Currently, the replication function supports only one active cluster and one standby cluster in HBase.
- Typically, do not perform operations on data synchronization tables in the standby cluster, such as modifying table properties or deleting tables. If any misoperation on the standby cluster occurs, data synchronization between the active and standby clusters will fail and data of the corresponding table in the standby cluster will be lost.
- If the replication function of HBase tables in the active cluster is enabled for data synchronization, after modifying the structure of a table in the active cluster, you need to manually modify the structure of the corresponding table in the standby cluster to ensure table structure consistency.
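For example, if a column family is added to a replicated table in the active cluster, apply the same change manually in the standby cluster. A hedged HBase shell sketch (the table and column family names are assumptions):
# Run in the HBase shell of the active cluster
alter 'user_table', {NAME => 'new_cf'}
# Run the same statement in the HBase shell of the standby cluster
alter 'user_table', {NAME => 'new_cf'}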
ProcedureEnable the replication function for the active cluster to synchronize data written by Put.
-- Log in to the service page.
For versions earlier than MRS 1.9.2: Log in to MRS Manager, and choose Services.
+- Log in to the service page.
For versions earlier than MRS 1.9.2: Log in to MRS Manager, and choose Services.
For MRS 1.9.2 or later: Click the cluster name on the MRS console and choose Components.
- Go to the All Configurations page of the HBase service. For details, see Modifying Cluster Service Configuration Parameters.
For clusters of MRS 1.9.2 or later:
If the Components tab is unavailable, complete IAM user synchronization first. (On the Dashboard page, click Synchronize on the right side of IAM User Sync to synchronize IAM users.)
@@ -91,11 +91,11 @@
If yes, go to 6.
If no, go to 10.
- - Go to the All Configurations page of the HBase service parameters by referring to Modifying Cluster Service Configuration Parameters.
- On the HBase configuration interface of the active and standby clusters, search for hbase.replication.cluster.id and modify it. It specifies the HBase ID of the active and standby clusters. For example, the HBase ID of the active cluster is set to replication1 and the HBase ID of the standby cluster is set to replication2 for connecting the active cluster to the standby cluster. To save data overhead, the parameter value length is not recommended to exceed 30.
- On the HBase configuration interface of the standby cluster, search for hbase.replication.conf.dir and modify it. It specifies the HBase configurations of the active cluster client used by the standby cluster and is used for data replication when the bulkload data replication function is enabled. The parameter value is a path name, for example, /home.
- In versions earlier than MRS 3.x, you do not need to set this parameter. Skip 8.
- When bulkload replication is enabled, you need to manually place the HBase client configuration files (core-site.xml, hdfs-site.xml, and hbase-site.xml) in the active cluster on all RegionServer nodes in the standby cluster. The actual path for placing the configuration file is ${hbase.replication.conf.dir}/${hbase.replication.cluster.id}. For example, if hbase.replication.conf.dir of the standby cluster is set to /home and hbase.replication.cluster.id of the active cluster is set to replication1, the actual path for placing the configuration files in the standby cluster is /home/replication1. You also need to change the corresponding directory and file permissions by running the chown -R omm:wheel /home/replication1 command.
- You can obtain the client configuration files from the client in the active cluster, for example, the /opt/client/HBase/hbase/conf path. For details about how to update the configuration file, see Updating a Client.
+ - Go to the All Configurations page of the HBase service parameters by referring to Modifying Cluster Service Configuration Parameters.
- On the HBase configuration interface of the active and standby clusters, search for hbase.replication.cluster.id and modify it. It specifies the HBase ID of the active and standby clusters. For example, the HBase ID of the active cluster is set to replication1 and the HBase ID of the standby cluster is set to replication2 for connecting the active cluster to the standby cluster. To reduce data overhead, it is recommended that the parameter value contain no more than 30 characters.
- On the HBase configuration interface of the standby cluster, search for hbase.replication.conf.dir and modify it. It specifies the HBase configurations of the active cluster client used by the standby cluster and is used for data replication when the bulkload data replication function is enabled. The parameter value is a path name, for example, /home.
- In versions earlier than MRS 3.x, you do not need to set this parameter. Skip 8.
- When bulkload replication is enabled, you need to manually place the HBase client configuration files (core-site.xml, hdfs-site.xml, and hbase-site.xml) in the active cluster on all RegionServer nodes in the standby cluster. The actual path for placing the configuration file is ${hbase.replication.conf.dir}/${hbase.replication.cluster.id}. For example, if hbase.replication.conf.dir of the standby cluster is set to /home and hbase.replication.cluster.id of the active cluster is set to replication1, the actual path for placing the configuration files in the standby cluster is /home/replication1. You also need to change the corresponding directory and file permissions by running the chown -R omm:wheel /home/replication1 command.
- You can obtain the client configuration files from the client in the active cluster, for example, the /opt/client/HBase/hbase/conf path.
- On the HBase configuration page of the active cluster, search for and change the value of hbase.replication.bulkload.enabled to true to enable bulkload replication.
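For the configuration file placement and permission change described above, a minimal shell sketch that could be run on each RegionServer node in the standby cluster (the source host name is a placeholder; the target path follows the /home/replication1 example):
mkdir -p /home/replication1
scp active-cluster-node:/opt/client/HBase/hbase/conf/core-site.xml /home/replication1/
scp active-cluster-node:/opt/client/HBase/hbase/conf/hdfs-site.xml /home/replication1/
scp active-cluster-node:/opt/client/HBase/hbase/conf/hbase-site.xml /home/replication1/
chown -R omm:wheel /home/replication1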
Restarting the HBase service and installing the client
- - Save the configurations and restart HBase.
- In the active and standby clusters of MRS 1.9.2 or earlier, choose Cluster > Dashboard > More > Download Client of MRS 1.9.2 or later, choose Cluster > Dashboard > More > Download Client. For details about how to update the client configuration file, see Updating a Client.
+ - Save the configurations and restart HBase.
- In the active and standby clusters of MRS 1.9.2 or earlier, choose Cluster > Dashboard > More > Download Client; for MRS 1.9.2 or later, choose Cluster > Dashboard > More > Download Client.
Synchronize table data of the active cluster. (Skip this step if the active cluster has no data.)
- Access the HBase shell of the active cluster as user hbase.
- On the active management node where the client has been updated, run the following command to go to the client directory:
cd /opt/client
- Run the following command to configure environment variables:
source bigdata_env
@@ -116,8 +116,6 @@
- Copy the data that has been exported to the standby cluster.
hadoop distcp Directory for storing source data in the active cluster hdfs://ActiveNameNodeIP:9820/ Directory for storing source data in the standby cluster
ActiveNameNodeIP indicates the IP address of the active NameNode in the standby cluster.
Example: hadoop distcp /user/hbase/t1 hdfs://192.168.40.2:9820/user/hbase/t1
-
- Import data to the standby cluster as the HBase table user of the standby cluster.
hbase org.apache.hadoop.hbase.mapreduce.Import -Dimport.bulk.output=Directory where the output data is stored in the standby cluster Table name Directory where the source data is stored in the standby cluster
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles Directory where the output data is stored in the standby cluster Table name
For example, hbase org.apache.hadoop.hbase.mapreduce.Import -Dimport.bulk.output=/user/hbase/output_t1 t1 /user/hbase/t1 and
@@ -238,7 +236,7 @@ replication2 192.168.0.13,192.168.0.177,192.168.0.25:2181:/hbase ENABLED
- Start time: If start time is not specified, the default value 0 will be used.
- End time: If end time is not specified, the time when the current operation is submitted will be used by default.
- Table name: If a table name is not entered, all user tables for which the real-time synchronization function is enabled will be verified by default.
|
-Switch the data writing status.
+ | Switch the data writing status.
|
set_clusterState_active
set_clusterState_standby
diff --git a/docs/mrs/component-operation-guide/mrs_01_0502.html b/docs/mrs/component-operation-guide/mrs_01_0502.html
index 9a3fbbea..ce8da489 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0502.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0502.html
@@ -11,17 +11,13 @@
For versions earlier than MRS 3.x, choose Components > HDFS > Service Configuration on the cluster details page. Switch Basic to All, and search for hadoop.rpc.protection.
-Procedure- Log in to the service page.
For versions earlier than MRS 1.9.2: Log in to MRS Manager, and choose Services.
-For MRS 1.9.2 or later: Click the cluster name on the MRS console and choose Components.
- - Go to the All Configurations page of the Yarn service. For details, see Modifying Cluster Service Configuration Parameters.
If the Components tab is unavailable, complete IAM user synchronization first. (On the Dashboard page, click Synchronize on the right side of IAM User Sync to synchronize IAM users.)
+ Procedure- Log in to the service page.
- Go to the All Configurations page of the Yarn service. For details, see Modifying Cluster Service Configuration Parameters.
If the Components tab is unavailable, complete IAM user synchronization first. (On the Dashboard page, click Synchronize on the right side of IAM User Sync to synchronize IAM users.)
- In the navigation pane, choose Yarn > Distcp.
- Set haclusterX.remotenn1 of dfs.namenode.rpc-address to the service IP address and RPC port number of one NameNode instance of the peer cluster, and set haclusterX.remotenn2 to the service IP address and RPC port number of the other NameNode instance of the peer cluster. Enter a value in the IP address:port format.
For MRS 1.9.2 or later, log in to the MRS console, click the cluster name, and choose Components > HDFS > Instances to obtain the service IP address of the NameNode instance.
You can also log in to FusionInsight Manager in MRS 3.x clusters, and choose Cluster > Name of the desired cluster > Services > HDFS > Instance to obtain the service IP address of the NameNode instance.
dfs.namenode.rpc-address.haclusterX.remotenn1 and dfs.namenode.rpc-address.haclusterX.remotenn2 do not distinguish active and standby NameNode instances. The default NameNode RPC port is 9820 and cannot be modified on MRS Manager.
For example, 10.1.1.1:9820 and 10.1.1.2:9820.
-
- Save the configuration. On the Dashboard tab page, choose More > Restart Service to restart the Yarn service.
Operation succeeded is displayed. Click Finish. The Yarn service is started successfully.
- Log in to the other cluster and repeat the preceding operations.
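After both clusters are configured and the Yarn service is restarted, the remote cluster can be referenced by the configured nameservice in DistCp commands. A hedged usage sketch (the haclusterX alias and the paths are assumptions):
hadoop distcp hdfs://haclusterX/user/hbase/t1 /user/hbase/t1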
diff --git a/docs/mrs/component-operation-guide/mrs_01_0581.html b/docs/mrs/component-operation-guide/mrs_01_0581.html
index ba52a182..d0add489 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0581.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0581.html
@@ -58,8 +58,6 @@
- Switching the Hive Execution Engine to Tez
- - Hive Materialized View
-
- Hive Log Overview
- Hive Performance Tuning
diff --git a/docs/mrs/component-operation-guide/mrs_01_0810.html b/docs/mrs/component-operation-guide/mrs_01_0810.html
index 48c30df7..60e4b250 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0810.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0810.html
@@ -22,7 +22,7 @@
NOTE: You can set this parameter on the HDFS component configuration page. The parameter setting takes effect globally, that is, the setting of whether the RPC channel is encrypted takes effect on all modules in Hadoop.
There are three encryption modes.
-- authentication: This is the default value in normal mode. In this mode, data is directly transmitted without encryption after being authenticated. This mode ensures performance but has security risks.
- integrity: Data is transmitted without encryption or authentication. To ensure data security, exercise caution when using this mode.
- privacy: This is the default value in security mode, indicating that data is transmitted after authentication and encryption. This mode reduces the performance.
+- authentication: This is the default value in normal mode. In this mode, data is directly transmitted without encryption after being authenticated. This mode ensures performance but has security risks.
- integrity: Data is transmitted with integrity verification after authentication, but without encryption. To ensure data security, exercise caution when using this mode.
- privacy: This is the default value in security mode, indicating that data is transmitted after authentication and encryption. This mode reduces the performance.
|
- Security mode: privacy
- Normal mode: authentication
|
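For reference, this RPC protection setting corresponds to the Hadoop property hadoop.rpc.protection mentioned elsewhere in this guide. A sketch of the equivalent core-site.xml entry (the value shown is the security-mode default listed above):
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>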
diff --git a/docs/mrs/component-operation-guide/mrs_01_0949.html b/docs/mrs/component-operation-guide/mrs_01_0949.html
index f3fee2ae..9f8d4124 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0949.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0949.html
@@ -23,7 +23,7 @@
In the Permission table, click Hive and select Hive Admin Privilege.
NOTE: After the user is bound to the Hive administrator role, perform the following operations each time a maintenance operation is carried out:
- - Log in to the node where the client is installed. For details, see Installing a Client.
- Run the following command to configure environment variables:
For example, if the Hive client installation directory is /opt/hiveclient, run source /opt/hiveclient/bigdata_env.
+- Log in to the node where the client is installed.
- Run the following command to configure environment variables:
For example, if the Hive client installation directory is /opt/hiveclient, run source /opt/hiveclient/bigdata_env.
- Run the following command to authenticate the user:
kinit Hive service user
- Run the following command to log in to the client tool:
beeline
- Run the following command to update the Hive administrator permissions:
set role admin;
diff --git a/docs/mrs/component-operation-guide/mrs_01_0951.html b/docs/mrs/component-operation-guide/mrs_01_0951.html
index c21e0486..c1246e98 100644
--- a/docs/mrs/component-operation-guide/mrs_01_0951.html
+++ b/docs/mrs/component-operation-guide/mrs_01_0951.html
@@ -16,7 +16,7 @@
Hive over HBase Authorization in MRS Earlier than 3.x
After the permissions are assigned, you can use HQL statements that are similar to SQL statements to access HBase tables from Hive. The following uses the procedure for assigning a user the rights to query HBase tables as an example.
- On the role management page of MRS Manager, create an HBase role, for example, hive_hbase_create, and grant the permission to create HBase tables.
In the Permission table, choose HBase > HBase Scope > global, select create of the namespace default, and click OK.
- - On MRS Manager, create a human-machine user, for example, hbase_creates_user, add the user to the hive group, and bind the hive_hbase_create role to the user so that the user can create Hive and HBase tables.
- Log in to the node where the client is installed. For details, see Installing a Client.
- Run the following command to configure environment variables:
source /opt/client/bigdata_env
+ - On MRS Manager, create a human-machine user, for example, hbase_creates_user, add the user to the hive group, and bind the hive_hbase_create role to the user so that the user can create Hive and HBase tables.
- Log in to the node where the client is installed.
- Run the following command to configure environment variables:
source /opt/client/bigdata_env
- Run the following command to authenticate the user:
kinit hbase_creates_user
- Run the following command to go to the shell environment of the Hive client:
beeline
- Run the following command to create a table in Hive and HBase, for example, the thh table.
CREATE TABLE thh(id int, name string, country string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES("hbase.columns.mapping" = "cf1:id,cf1:name,:key") TBLPROPERTIES ("hbase.table.name" = "thh");
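After the table is created, the bound user can read and write it through Hive. A hedged usage sketch (the values are examples only):
insert into table thh values(1, 'Tom', 'CN');
select id, name, country from thh where id = 1;
The same rows can also be checked from the HBase shell, for example with scan 'thh'.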
diff --git a/docs/mrs/component-operation-guide/mrs_01_1183.html b/docs/mrs/component-operation-guide/mrs_01_1183.html
index ef64a389..bd8e21ac 100644
--- a/docs/mrs/component-operation-guide/mrs_01_1183.html
+++ b/docs/mrs/component-operation-guide/mrs_01_1183.html
@@ -2,8 +2,8 @@
Using the Spark Client
After an MRS cluster is created, you can create and submit jobs on the client. The client can be installed on nodes inside or outside the cluster.
- - Nodes inside the cluster: After an MRS cluster is created, the client has been installed on the master and core nodes in the cluster by default. For details, see Using an MRS Client on Nodes Inside a Cluster. Then, log in to the node where the MRS client is installed..
- Nodes outside the cluster: You can install the client on nodes outside a cluster. For details about how to install a client, see Using an MRS Client on Nodes Outside a Cluster, and log in to the node where the MRS client is installed..
- Using the Spark Client- Based on the client location, log in to the node where the client is installed. For details, see Using an MRS Client on Nodes Inside a Cluster, or Using an MRS Client on Nodes Outside a Cluster.
- Run the following command to go to the client installation directory:
cd /opt/client
+- Nodes inside the cluster: After an MRS cluster is created, the client has been installed on the master and core nodes in the cluster by default.
- Nodes outside the cluster: You can install the client on nodes outside a cluster.
+Using the Spark Client- Based on the client location, log in to the node where the client is installed.
- Run the following command to go to the client installation directory:
cd /opt/client
- Run the following command to configure environment variables:
source bigdata_env
- If the cluster is in security mode, run the following command to authenticate the user. In normal mode, user authentication is not required.
kinit Component service user
- Run the Spark shell command. The following provides an example:
spark-beeline
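Once spark-beeline is connected, SQL statements can be run directly. A minimal sketch (the table name and values are assumptions):
create table test_spark(id int, name string);
insert into test_spark values(1, 'a');
select * from test_spark;
drop table test_spark;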
diff --git a/docs/mrs/component-operation-guide/mrs_01_1986.html b/docs/mrs/component-operation-guide/mrs_01_1986.html
index e61ea15d..a9e4050b 100644
--- a/docs/mrs/component-operation-guide/mrs_01_1986.html
+++ b/docs/mrs/component-operation-guide/mrs_01_1986.html
@@ -34,7 +34,7 @@
|
Indicates the maximum value for the broadcast configuration when two tables are joined.
- When the size of a table involved in the SQL statement is less than the value of this parameter, the system broadcasts that table.
- If the value is set to -1, broadcast is not performed.
-For details, visit https://spark.apache.org/docs/3.1.1/sql-programming-guide.html.
+For details, visit https://archive.apache.org/dist/spark/docs/3.1.1/sql-programming-guide.html.
|
diff --git a/docs/mrs/component-operation-guide/mrs_01_1990.html b/docs/mrs/component-operation-guide/mrs_01_1990.html
index 696f81f9..9604b844 100644
--- a/docs/mrs/component-operation-guide/mrs_01_1990.html
+++ b/docs/mrs/component-operation-guide/mrs_01_1990.html
@@ -3,7 +3,7 @@
Multiple JDBC Clients Concurrently Connecting to JDBCServer
ScenarioMultiple clients can be connected to JDBCServer at the same time. However, if the number of concurrent tasks is too large, the default configuration of JDBCServer must be optimized to adapt to the scenario.
- Procedure- Set the fair scheduling policy of JDBCServer.
The default scheduling policy of Spark is FIFO, which may cause a failure of short tasks in multi-task scenarios. Therefore, the fair scheduling policy must be used in multi-task scenarios to prevent task failure. - For details about how to configure Fair Scheduler in Spark, visit http://spark.apache.org/docs/3.1.1/job-scheduling.html#scheduling-within-an-application.
- Configure Fair Scheduler on the JDBC client.
- In the Beeline command line client or the code defined by JDBC, run the following statement:
PoolName is a scheduling pool for Fair Scheduler.
+Procedure- Set the fair scheduling policy of JDBCServer.
The default scheduling policy of Spark is FIFO, which may cause short tasks to fail in multi-task scenarios. Therefore, the fair scheduling policy must be used in multi-task scenarios to prevent task failures. - For details about how to configure Fair Scheduler in Spark, visit https://archive.apache.org/dist/spark/docs/3.1.1/job-scheduling.html#scheduling-within-an-application.
- Configure Fair Scheduler on the JDBC client.
- In the Beeline command line client or the code defined by JDBC, run the following statement:
PoolName is a scheduling pool for Fair Scheduler.
SET spark.sql.thriftserver.scheduler.pool=PoolName;
- Run the SQL command. The Spark task will be executed in the preceding scheduling pool.
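A minimal fairscheduler.xml sketch for the pool referenced above (the pool name PoolName and the weight/minShare values are assumptions; the file path is specified by the spark.scheduler.allocation.file configuration):
<?xml version="1.0"?>
<allocations>
  <pool name="PoolName">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>2</minShare>
  </pool>
</allocations>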
diff --git a/docs/mrs/component-operation-guide/mrs_01_2019.html b/docs/mrs/component-operation-guide/mrs_01_2019.html
index 7d77e100..bdbdf828 100644
--- a/docs/mrs/component-operation-guide/mrs_01_2019.html
+++ b/docs/mrs/component-operation-guide/mrs_01_2019.html
@@ -48,7 +48,7 @@ Caused by: java.lang.OutOfMemoryError: Direct buffer memory
GC_OPTS
|
-The GC parameter of YARN NodeManger.
+ | The GC parameter of YARN NodeManager.
|
128M
|
diff --git a/docs/mrs/component-operation-guide/mrs_01_2028.html b/docs/mrs/component-operation-guide/mrs_01_2028.html
index fdaedbe5..f84428c6 100644
--- a/docs/mrs/component-operation-guide/mrs_01_2028.html
+++ b/docs/mrs/component-operation-guide/mrs_01_2028.html
@@ -10,7 +10,7 @@
REFRESH TABLE table_name;
table_name indicates the name of the table to be updated. The table must exist. Otherwise, an error is reported.
When the query statement is executed, the latest inserted data can be obtained.
-For details, visit https://spark.apache.org/docs/3.1.1/sql-programming-guide.html#metadata-refreshing.
+For details, visit https://archive.apache.org/dist/spark/docs/3.1.1/sql-programming-guide.html#metadata-refreshing.
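A minimal usage sketch (the table name is an assumption): after another task writes new files into the table's storage path, refresh the metadata and then query.
REFRESH TABLE test_table;
SELECT count(*) FROM test_table;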
diff --git a/docs/mrs/component-operation-guide/mrs_01_2127.html b/docs/mrs/component-operation-guide/mrs_01_2127.html
index f7f15090..e38b5d3c 100644
--- a/docs/mrs/component-operation-guide/mrs_01_2127.html
+++ b/docs/mrs/component-operation-guide/mrs_01_2127.html
@@ -35,18 +35,6 @@
CentOS 7.6
|
-Kunpeng computing (Arm)
- |
-Euler
- |
-EulerOS 2.8
- |
-
-CentOS
- |
-CentOS 7.6
- |
-
diff --git a/docs/mrs/component-operation-guide/mrs_01_2311.html b/docs/mrs/component-operation-guide/mrs_01_2311.html
deleted file mode 100644
index 282cc686..00000000
--- a/docs/mrs/component-operation-guide/mrs_01_2311.html
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
-Hive Materialized View
-IntroductionA Hive materialized view is a special table obtained based on the query results of Hive internal tables. A materialized view can be considered as an intermediate table that stores actual data and occupies physical space. The tables on which a materialized view depends are called the base tables of the materialized view.
- Materialized views are used to pre-compute and save the results of time-consuming operations such as table joining or aggregation. When executing a query, you can rewrite the query statement based on the base tables to the query statement based on materialized views. In this way, you do not need to perform time-consuming operations such as join and group by, thereby quickly obtaining the query result.
-
- - A materialized view is a special table that stores actual data and occupies physical space.
- Before deleting a base table, you must delete the materialized view created based on the base table.
- The materialized view creation statement is atomic, which means that other users cannot see the materialized view until all query results are populated.
- A materialized view cannot be created based on the query results of another materialized view.
- A materialized view cannot be created based on the results of a tableless query.
- You cannot insert, update, delete, load, or merge materialized views.
- You can perform complex query operations on materialized views, because they are special tables in nature.
- When the data of a base table is updated, you need to manually update the materialized view. Otherwise, the materialized view will retain the old data. That is, the materialized view expires.
- You can use the describe syntax to check whether the materialized view created based on ACID tables has expired.
- The describe statement cannot be used to check whether a materialized view created based on non-ACID tables has expired.
- A materialized view can store only ORC files. You can use TBLPROPERTIES ('transactional'='true') to create a transactional Hive internal table.
-
- Creating a Materialized ViewSyntax
- CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name
- [COMMENT materialized_view_comment]
- DISABLE REWRITE
- [ROW FORMAT row_format]
- [STORED AS file_format]
- | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]
- ]
- [LOCATION hdfs_path]
- [TBLPROPERTIES (property_name=property_value, ...)]
-AS
-<query>;
- - Currently, the following materialized view file formats are supported: PARQUET, TextFile, SequenceFile, RCfile, and ORC. If STORED AS is not specified in the creation statement, the default file format is ORC.
- Names of materialized views must be unique in the same database. Otherwise, you cannot create a new materialized view, and data files of the original materialized view will be overwritten by the data files queried based on the base table in the new one. As a result, data may be tampered with. (After being tampered with, the materialized view can be restored by re-creating the materialized view.).
-
- Cases
- - Log in to the Hive client and run the following command to enable the following parameters. For details, see Using a Hive Client.
set hive.support.concurrency=true;
-set hive.exec.dynamic.partition.mode=nonstrict;
-set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
- - Create a base table and insert data.
create table tb_emp(
-empno int,ename string,job string,mgr int,hiredate TIMESTAMP,sal float,comm float,deptno int
-)stored as orc
-tblproperties('transactional'='true');
-
-insert into tb_emp values(7369, 'SMITH', 'CLERK',7902, '1980-12-17 08:30:09',800.00,NULL,20),
-(7499, 'ALLEN', 'SALESMAN',7698, '1981-02-20 17:12:00',1600.00,300.00,30),
-(7521, 'WARD', 'SALESMAN',7698, '1981-02-22 09:05:34',1250.00,500.00,30),
-(7566, 'JONES', 'MANAGER', 7839, '1981-04-02 10:14:13',2975.00,NULL,20),
-(7654, 'MARTIN', 'SALESMAN',7698, '1981-09-28 08:36:17',1250.00,1400.00,30),
-(7698, 'BLAKE', 'MANAGER',7839, '1981-05-01 11:12:55',2850.00,NULL,30),
-(7782, 'CLARK', 'MANAGER',7839, '1981-06-09 15:45:28',2450.00,NULL,10),
-(7788, 'SCOTT', 'ANALYST',7566, '1987-04-19 14:05:34',3000.00,NULL,20),
-(7839, 'KING', 'PRESIDENT',NULL, '1981-11-17 10:18:25',5000.00,NULL,10),
-(7844, 'TURNER', 'SALESMAN',7698, '1981-09-08 09:05:34',1500.00,0.00,30),
-(7876, 'ADAMS', 'CLERK',7788, '1987-05-23 15:07:44',1100.00,NULL,20),
-(7900, 'JAMES', 'CLERK',7698, '1981-12-03 16:23:56',950.00,NULL,30),
-(7902, 'FORD', 'ANALYST',7566, '1981-12-03 08:48:17',3000.00,NULL,20),
-(7934, 'MILLER', 'CLERK',7782, '1982-01-23 11:45:29',1300.00,NULL,10);
- - Create a materialized view based on the results of the tb_emp query.
create materialized view group_mv disable rewrite
-row format serde 'org.apache.hadoop.hive.serde2.JsonSerDe'
-stored as textfile
-tblproperties('mv_content'='Total compensation of each department')
-as select deptno,sum(sal) sum_sal from tb_emp group by deptno;
-
-
- Applying a Materialized ViewRewrite the query statement based on base tables to the query statement based on materialized views to improve the query efficiency.
- Cases
- Execute the following query statement:
- select deptno,sum(sal) from tb_emp group by deptno having sum(sal)>10000;
- Based on the created materialized view, rewrite the query statement:
- select deptno, sum_sal from group_mv where sum_sal>10000;
-
- Checking a Materialized ViewSyntax
- SHOW MATERIALIZED VIEWS [IN database_name] ['identifier_with_wildcards'];
- DESCRIBE [EXTENDED | FORMATTED] [db_name.]materialized_view_name;
- Cases
- show materialized views;
- describe formatted group_mv;
-
- Deleting a Materialized ViewSyntax
- DROP MATERIALIZED VIEW [db_name.]materialized_view_name;
- Cases
- drop materialized view group_mv;
-
- Rebuilding a Materialized ViewWhen a materialized view is created, the base table data is filled in the materialized view. However, the data that is added, deleted, or modified in the base table is not automatically synchronized to the materialized view. Therefore, you need to manually rebuild the view after updating the data.
- Syntax
- ALTER MATERIALIZED VIEW [db_name.]materialized_view_name REBUILD;
- Cases
- alter materialized view group_mv rebuild;
- When the base table data is updated but the materialized view data is not updated, the materialized view is in the expired state by default.
- The describe statement can be used to check whether a materialized view created based on transaction tables has expired. If the value of Outdated for Rewriting is Yes, the license has expired. If the value of Outdated for Rewriting is No, the license has not expired.
-
-
-
-
-
diff --git a/docs/mrs/component-operation-guide/mrs_01_2398.html b/docs/mrs/component-operation-guide/mrs_01_2398.html
index 55bd2758..4c614491 100644
--- a/docs/mrs/component-operation-guide/mrs_01_2398.html
+++ b/docs/mrs/component-operation-guide/mrs_01_2398.html
@@ -3,7 +3,6 @@
Creating a ClickHouse Table
ClickHouse implements the replicated table mechanism based on the ReplicatedMergeTree engine and ZooKeeper. When creating a table, you can specify an engine to determine whether the table is highly available. Shards and replicas of each table are independent of each other.
ClickHouse also implements the distributed table mechanism based on the Distributed engine. Views are created on all shards (local tables) for distributed query, which is easy to use. ClickHouse has the concept of data sharding, which is one of the features of distributed storage. That is, parallel read and write are used to improve efficiency.
- The ClickHouse cluster table engine that uses Kunpeng as the CPU architecture does not support HDFS and Kafka.
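To make the two mechanisms concrete, a hedged sketch of a replicated local table and a distributed table built on it (the cluster identifier default_cluster, the database, the table names, and the {shard}/{replica} macros are assumptions; adapt them to the identifiers returned by the query below):
CREATE TABLE test_local ON CLUSTER default_cluster
(id UInt32, name String)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/test_local', '{replica}')
ORDER BY id;
CREATE TABLE test_dist ON CLUSTER default_cluster AS test_local
ENGINE = Distributed(default_cluster, default, test_local, rand());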
Viewing cluster and Other Environment Parameters of ClickHouse- Use the ClickHouse client to connect to the ClickHouse server by referring to Using ClickHouse from Scratch.
- Query the cluster identifier and other information about the environment parameters.
select cluster,shard_num,replica_num,host_name from system.clusters;SELECT
cluster,
shard_num,
diff --git a/docs/mrs/component-operation-guide/mrs_01_24049.html b/docs/mrs/component-operation-guide/mrs_01_24049.html
index da9abe0a..0674dfaf 100644
--- a/docs/mrs/component-operation-guide/mrs_01_24049.html
+++ b/docs/mrs/component-operation-guide/mrs_01_24049.html
@@ -27,7 +27,7 @@
|
-
Click OK. Return to role management page.
After the FlinkServer role is created, create a FlinkServer user and bind the user to the role and user group. For details, see Creating a User.
+
Click OK. Return to the role management page.
After the FlinkServer role is created, create a FlinkServer user and bind the user to the role and user group.
diff --git a/docs/mrs/component-operation-guide/mrs_01_24119.html b/docs/mrs/component-operation-guide/mrs_01_24119.html
deleted file mode 100644
index bd9b2c11..00000000
--- a/docs/mrs/component-operation-guide/mrs_01_24119.html
+++ /dev/null
@@ -1,75 +0,0 @@
-
-
-
GeoMesa Command Line
-
This section applies only to MRS 3.1.0 or later.
-
-
This section describes common GeoMesa commands. For more GeoMesa commands, visit https://www.geomesa.org/documentation/user/accumulo/commandline.html.
-
After installing the HBase client and loading environment variables, you can use the geomesa-hbase command line.
-
- Viewing classpath
After you run the classpath command, all classpath information of the current command line tool will be returned.
-bin/geomesa-hbase classpath
- - Creating a table
Run the create-schema command to create a table. When creating a table, you need to specify the directory name, table name, and table specifications at least.
-bin/geomesa-hbase create-schema -c geomesa -f test -s Who:String,What:java.lang.Long,When:Date,*Where:Point:srid=4326,Why:String
-
-
- Describing a table
Run the describe-schema command to obtain table descriptions. When describing a table, you need to specify the directory name and table name.
-bin/geomesa-hbase describe-schema -c geomesa -f test
- - Importing data in batches
Run the ingest command to import data in batches. When importing data, you need to specify the directory name, table name, table specifications, and the related data converter.
-The data in the data.csv file contains license plate number, vehicle color, longitude, latitude, and time. Save the data table to the folder.
-AAA,red,113.918417,22.505892,2017-04-09 18:03:46
-BBB,white,113.960719,22.556511,2017-04-24 07:38:47
-CCC,blue,114.088333,22.637222,2017-04-23 15:07:54
-DDD,yellow,114.195456,22.596103,2017-04-21 21:27:06
-EEE,black,113.897614,22.551331,2017-04-09 09:34:48
-Table structure definition: myschema.sft. Save myschema.sft to the conf folder of the GeoMesa command line tool.
-geomesa.sfts.cars = {
- attributes = [
- { name = "carid", type = "String", index = true }
- { name = "color", type = "String", index = false }
- { name = "time", type = "Date", index = false }
- { name = "geom", type = "Point", index = true,srid = 4326,default = true }
- ]
-}
-Converter definition: myconvertor.convert Save myconvertor.convert to the conf folder of the GeoMesa command line tool.
-geomesa.converters.cars= {
- type = "delimited-text",
- format = "CSV",
- id-field = "$fid",
- fields = [
- { name = "fid", transform = "concat($1,$5)" }
- { name = "carid", transform = "$1::string" }
- { name = "color", transform = "$2::string" }
- { name = "lon", transform = "$3::double" }
- { name = "lat", transform = "$4::double" }
- { name = "geom", transform = "point($lon,$lat)" }
- { name = "time", transform = "date('YYYY-MM-dd HH:mm:ss',$5)" }
- ]
-}
-Run the following command to import data:
-bin/geomesa-hbase ingest -c geomesa -C conf/myconvertor.convert -s conf/myschema.sft data/data.csv
-For details about other parameters for importing data, visit https://www.geomesa.org/documentation/user/accumulo/examples.html#ingesting-data.
- - Querying explanations
Run the explain command to obtain execution plan explanations of the specified query statement. You need to specify the directory name, table name, and query statement.
-bin/geomesa-hbase explain -c geomesa -f cars -q "carid = 'BBB'"
- - Analyzing statistics
Run the stats-analyze command to conduct statistical analysis on the data table. In addition, you can run the stats-bounds, stats-count, stats-histogram, and stats-top-k commands to collect more detailed statistics on the data table.
-bin/geomesa-hbase stats-analyze -c geomesa -f cars
-bin/geomesa-hbase stats-bounds -c geomesa -f cars
-bin/geomesa-hbase stats-count -c geomesa -f cars
-bin/geomesa-hbase stats-histogram -c geomesa -f cars
-bin/geomesa-hbase stats-top-k -c geomesa -f cars
- - Exporting a feature
Run the export command to export a feature. When exporting the feature, you must specify the directory name and table name. In addition, you can specify a query statement to export the feature.
-bin/geomesa-hbase export -c geomesa -f cars -q "carid = 'BBB'"
- - Deleting a feature
Run the delete-features command to delete a feature. When deleting the feature, you must specify the directory name and table name. In addition, you can specify a query statement to delete the feature.
-bin/geomesa-hbase delete-features -c geomesa -f cars -q "carid = 'BBB'"
- - Obtain the names of all tables in the directory.
Run the get-type-names command to obtain the names of tables in the specified directory.
-bin/geomesa-hbase get-type-names -c geomesa
- - Deleting a table
Run the remove-schema command to delete a table. You need to specify the directory name and table name at least.
-bin/geomesa-hbase remove-schema -c geomesa -f test
-bin/geomesa-hbase remove-schema -c geomesa -f cars
- - Deleting a catalog
Run the delete-catalog command to delete the specified catalog.
-bin/geomesa-hbase delete-catalog -c geomesa
-
-
-
-
diff --git a/docs/mrs/component-operation-guide/mrs_01_24198.html b/docs/mrs/component-operation-guide/mrs_01_24198.html
index a2b0c5a5..6f5ac045 100644
--- a/docs/mrs/component-operation-guide/mrs_01_24198.html
+++ b/docs/mrs/component-operation-guide/mrs_01_24198.html
@@ -4,7 +4,7 @@
The ClickHouse data migration tool can migrate some partitions of one or more partitioned MergeTree tables on several ClickHouseServer nodes to the same tables on other ClickHouseServer nodes. In the capacity expansion scenario, you can use this tool to migrate data from an original node to a new node to balance data after capacity expansion.
Prerequisites
- The ClickHouse and ZooKeeper services are running properly. The ClickHouseServer instances on the source and destination nodes are normal.
- The destination node has the data table to be migrated and the table is a partitioned MergeTree table.
- Before creating a migration task, ensure that all tasks for writing data to a table to be migrated have been stopped. After the task is started, you can only query the table to be migrated and cannot write data to or delete data from the table. Otherwise, data may be inconsistent before and after the migration.
- The ClickHouse data directory on the destination node has sufficient space.
-
Procedure
- Log in to Manager and choose Cluster > Services > ClickHouse. On the ClickHouse service page, click the Data Migration tab.
+
Procedure
- Log in to Manager and choose Cluster > Services > ClickHouse. On the ClickHouse service page, click the Data Migration tab.

- Click Add Task.
- On the page for creating a migration task, set the migration task parameters. For details, see Table 1.