[
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Alluxio",
"uri":"mrs_01_0756.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"1"
},
{
"desc":"If you want to use a unified client API and a global namespace to access persistent storage systems including HDFS and OBS to separate computing from storage, you can con",
"product_code":"mrs",
"title":"Configuring an Underlying Storage System",
"uri":"mrs_01_0759.html",
"doc_type":"cmpntguide",
"p_code":"1",
"code":"2"
},
{
"desc":"The port number used for accessing the Alluxio file system is 19998, and the access address is alluxio://<Master node IP address of Alluxio>:19998/<PATH>. This section us",
"product_code":"mrs",
"title":"Accessing Alluxio Using a Data Application",
"uri":"mrs_01_0760.html",
"doc_type":"cmpntguide",
"p_code":"1",
"code":"3"
},
{
"desc":"Create a cluster with Alluxio installed.Log in to the active Master node in a cluster as user root using the password set during cluster creation.Run the following comman",
"product_code":"mrs",
"title":"Common Operations of Alluxio",
"uri":"mrs_01_0757.html",
"doc_type":"cmpntguide",
"p_code":"1",
"code":"4"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using CarbonData (for Versions Earlier Than MRS 3.x)",
"uri":"mrs_01_0385.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"5"
},
{
"desc":"This section is for MRS 3.x or earlier. For MRS 3.x or later, see Using CarbonData (for MRS 3.x or Later).This section describes the procedure of using Spark CarbonData. ",
"product_code":"mrs",
"title":"Using CarbonData from Scratch",
"uri":"mrs_01_0386.html",
"doc_type":"cmpntguide",
"p_code":"5",
"code":"6"
},
{
"desc":"CarbonData tables are similar to tables in the relational database management system (RDBMS). RDBMS tables consist of rows and columns to store data. CarbonData tables ha",
"product_code":"mrs",
"title":"About CarbonData Table",
"uri":"mrs_01_0387.html",
"doc_type":"cmpntguide",
"p_code":"5",
"code":"7"
},
{
"desc":"A CarbonData table must be created to load and query data.Users can create a table by specifying its columns and data types. For analysis clusters with Kerberos authentic",
"product_code":"mrs",
"title":"Creating a CarbonData Table",
"uri":"mrs_01_0388.html",
"doc_type":"cmpntguide",
"p_code":"5",
"code":"8"
},
{
"desc":"Unused CarbonData tables can be deleted. After a CarbonData table is deleted, its metadata and loaded data are deleted together.DROP TABLE [IF EXISTS] [db_name.]table_nam",
"product_code":"mrs",
"title":"Deleting a CarbonData Table",
"uri":"mrs_01_0389.html",
"doc_type":"cmpntguide",
"p_code":"5",
"code":"9"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using CarbonData (for MRS 3.x or Later)",
"uri":"mrs_01_1400.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"10"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Overview",
"uri":"mrs_01_1401.html",
"doc_type":"cmpntguide",
"p_code":"10",
"code":"11"
},
{
"desc":"CarbonData is a new Apache Hadoop native data-store format. CarbonData allows faster interactive queries over PetaBytes of data using advanced columnar storage, index, co",
"product_code":"mrs",
"title":"CarbonData Overview",
"uri":"mrs_01_1402.html",
"doc_type":"cmpntguide",
"p_code":"11",
"code":"12"
},
{
"desc":"The memory required for data loading depends on the following factors:Number of columnsColumn valuesConcurrency (configured using carbon.number.of.cores.while.loading)Sor",
"product_code":"mrs",
"title":"Main Specifications of CarbonData",
"uri":"mrs_01_1403.html",
"doc_type":"cmpntguide",
"p_code":"11",
"code":"13"
},
{
"desc":"This section provides the details of all the configurations required for the CarbonData System.Configure the following parameters in the spark-defaults.conf file on the S",
"product_code":"mrs",
"title":"Configuration Reference",
"uri":"mrs_01_1404.html",
"doc_type":"cmpntguide",
"p_code":"10",
"code":"14"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"CarbonData Operation Guide",
"uri":"mrs_01_1405.html",
"doc_type":"cmpntguide",
"p_code":"10",
"code":"15"
},
{
"desc":"This section describes how to create CarbonData tables, load data, and query data. This quick start provides operations based on the Spark Beeline client. If you want to ",
"product_code":"mrs",
"title":"CarbonData Quick Start",
"uri":"mrs_01_1406.html",
"doc_type":"cmpntguide",
"p_code":"15",
"code":"16"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"CarbonData Table Management",
"uri":"mrs_01_1407.html",
"doc_type":"cmpntguide",
"p_code":"15",
"code":"17"
},
{
"desc":"In CarbonData, data is stored in entities called tables. CarbonData tables are similar to RDBMS tables. RDBMS data is stored in a table consisting of rows and columns. Ca",
"product_code":"mrs",
"title":"About CarbonData Table",
"uri":"mrs_01_1408.html",
"doc_type":"cmpntguide",
"p_code":"17",
"code":"18"
},
{
"desc":"A CarbonData table must be created to load and query data. You can run the Create Table command to create a table. This command is used to create a table using custom col",
"product_code":"mrs",
"title":"Creating a CarbonData Table",
"uri":"mrs_01_1409.html",
"doc_type":"cmpntguide",
"p_code":"17",
"code":"19"
},
{
"desc":"You can run the DROP TABLE command to delete a table. After a CarbonData table is deleted, its metadata and loaded data are deleted together.Run the following command to ",
"product_code":"mrs",
"title":"Deleting a CarbonData Table",
"uri":"mrs_01_1410.html",
"doc_type":"cmpntguide",
"p_code":"17",
"code":"20"
},
{
"desc":"When the SET command is executed, the new properties overwrite the existing ones.SORT SCOPEThe following is an example of the SET SORT SCOPE command:ALTER TABLE tablename",
"product_code":"mrs",
"title":"Modify the CarbonData Table",
"uri":"mrs_01_1411.html",
"doc_type":"cmpntguide",
"p_code":"17",
"code":"21"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"CarbonData Table Data Management",
"uri":"mrs_01_1412.html",
"doc_type":"cmpntguide",
"p_code":"15",
"code":"22"
},
{
"desc":"After a CarbonData table is created, you can run the LOAD DATA command to load data to the table for query. Once data loading is triggered, data is encoded in CarbonData ",
"product_code":"mrs",
"title":"Loading Data",
"uri":"mrs_01_1413.html",
"doc_type":"cmpntguide",
"p_code":"22",
"code":"23"
},
{
"desc":"If you want to modify and reload the data because you have loaded wrong data into a table, or there are too many bad records, you can delete specific segments by segment ",
"product_code":"mrs",
"title":"Deleting Segments",
"uri":"mrs_01_1414.html",
"doc_type":"cmpntguide",
"p_code":"22",
"code":"24"
},
{
"desc":"Frequent data access results in a large number of fragmented CarbonData files in the storage directory. In each data loading, data is sorted and indexing is performed. Th",
"product_code":"mrs",
"title":"Combining Segments",
"uri":"mrs_01_1415.html",
"doc_type":"cmpntguide",
"p_code":"22",
"code":"25"
},
{
"desc":"If you want to rapidly migrate CarbonData data from a cluster to another one, you can use the CarbonData backup and restoration commands. This method does not require dat",
"product_code":"mrs",
"title":"CarbonData Data Migration",
"uri":"mrs_01_1416.html",
"doc_type":"cmpntguide",
"p_code":"15",
"code":"26"
},
{
"desc":"This migration guides you to migrate the CarbonData table data of Spark 1.5 to that of Spark2x.Before performing this operation, you need to stop the data import service ",
"product_code":"mrs",
"title":"Migrating Data on CarbonData from Spark 1.5 to Spark2x",
"uri":"mrs_01_2301.html",
"doc_type":"cmpntguide",
"p_code":"15",
"code":"27"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"CarbonData Performance Tuning",
"uri":"mrs_01_1417.html",
"doc_type":"cmpntguide",
"p_code":"10",
"code":"28"
},
{
"desc":"There are various parameters that can be tuned to improve the query performance in CarbonData. Most of the parameters focus on increasing the parallelism in processing an",
"product_code":"mrs",
"title":"Tuning Guidelines",
"uri":"mrs_01_1418.html",
"doc_type":"cmpntguide",
"p_code":"28",
"code":"29"
},
{
"desc":"This section provides suggestions based on more than 50 test cases to help you create CarbonData tables with higher query performance.If the to-be-created table contains ",
"product_code":"mrs",
"title":"Suggestions for Creating CarbonData Tables",
"uri":"mrs_01_1419.html",
"doc_type":"cmpntguide",
"p_code":"28",
"code":"30"
},
{
"desc":"This section describes the configurations that can improve CarbonData performance.Table 1 and Table 2 describe the configurations about query of CarbonData.Table 3, Table",
"product_code":"mrs",
"title":"Configurations for Performance Tuning",
"uri":"mrs_01_1421.html",
"doc_type":"cmpntguide",
"p_code":"28",
"code":"31"
},
{
"desc":"The following table provides details about Hive ACL permissions required for performing operations on CarbonData tables.Parameters listed in Table 5 or Table 6 have been ",
"product_code":"mrs",
"title":"CarbonData Access Control",
"uri":"mrs_01_1422.html",
"doc_type":"cmpntguide",
"p_code":"10",
"code":"32"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"CarbonData Syntax Reference",
"uri":"mrs_01_1423.html",
"doc_type":"cmpntguide",
"p_code":"10",
"code":"33"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"DDL",
"uri":"mrs_01_1424.html",
"doc_type":"cmpntguide",
"p_code":"33",
"code":"34"
},
{
"desc":"This command is used to create a CarbonData table by specifying the list of fields along with the table properties.CREATE TABLE [IF NOT EXISTS] [db_name.]table_name[(col_",
"product_code":"mrs",
"title":"CREATE TABLE",
"uri":"mrs_01_1425.html",
"doc_type":"cmpntguide",
"p_code":"34",
"code":"35"
},
{
"desc":"This command is used to create a CarbonData table by specifying the list of fields along with the table properties.CREATE TABLE[IF NOT EXISTS] [db_name.]table_name STORED",
"product_code":"mrs",
"title":"CREATE TABLE As SELECT",
"uri":"mrs_01_1426.html",
"doc_type":"cmpntguide",
"p_code":"34",
"code":"36"
},
{
"desc":"This command is used to delete an existing table.DROP TABLE [IF EXISTS] [db_name.]table_name;In this command, IF EXISTS and db_name are optional.DROP TABLE IF EXISTS prod",
"product_code":"mrs",
"title":"DROP TABLE",
"uri":"mrs_01_1427.html",
"doc_type":"cmpntguide",
"p_code":"34",
"code":"37"
},
{
"desc":"SHOW TABLES command is used to list all tables in the current or a specific database.SHOW TABLES [IN db_name];IN db_Name is optional.SHOW TABLES IN ProductDatabase;All ta",
"product_code":"mrs",
"title":"SHOW TABLES",
"uri":"mrs_01_1428.html",
"doc_type":"cmpntguide",
"p_code":"34",
"code":"38"
},
{
"desc":"The ALTER TABLE COMPACTION command is used to merge a specified number of segments into a single segment. This improves the query performance of a table.ALTER TABLE[db_na",
"product_code":"mrs",
"title":"ALTER TABLE COMPACTION",
"uri":"mrs_01_1429.html",
"doc_type":"cmpntguide",
"p_code":"34",
"code":"39"
},
{
"desc":"This command is used to rename an existing table.ALTER TABLE [db_name.]table_name RENAME TO new_table_name;Parallel queries (using table names to obtain paths for reading",
"product_code":"mrs",
"title":"TABLE RENAME",
"uri":"mrs_01_1430.html",
"doc_type":"cmpntguide",
"p_code":"34",
"code":"40"
},
{
"desc":"This command is used to add a column to an existing table.ALTER TABLE [db_name.]table_name ADD COLUMNS (col_name data_type,...) TBLPROPERTIES(''COLUMNPROPERTIES.columnNam",
"product_code":"mrs",
"title":"ADD COLUMNS",
"uri":"mrs_01_1431.html",
"doc_type":"cmpntguide",
"p_code":"34",
"code":"41"
},
{
"desc":"This command is used to delete one or more columns from a table.ALTER TABLE [db_name.]table_name DROP COLUMNS (col_name, ...);After a column is deleted, at least one key ",
"product_code":"mrs",
"title":"DROP COLUMNS",
"uri":"mrs_01_1432.html",
"doc_type":"cmpntguide",
"p_code":"34",
"code":"42"
},
{
"desc":"This command is used to change the data type from INT to BIGINT or decimal precision from lower to higher.ALTER TABLE [db_name.]table_name CHANGE col_name col_name change",
"product_code":"mrs",
"title":"CHANGE DATA TYPE",
"uri":"mrs_01_1433.html",
"doc_type":"cmpntguide",
"p_code":"34",
"code":"43"
},
{
"desc":"This command is used to register Carbon table to Hive meta store catalogue from exisiting Carbon table data.REFRESH TABLE db_name.table_name;The new database name and the",
"product_code":"mrs",
"title":"REFRESH TABLE",
"uri":"mrs_01_1434.html",
"doc_type":"cmpntguide",
"p_code":"34",
"code":"44"
},
{
"desc":"This command is used to register an index table with the primary table.REGISTER INDEX TABLE indextable_name ON db_name.maintable_name;Before running this command, run REF",
"product_code":"mrs",
"title":"REGISTER INDEX TABLE",
"uri":"mrs_01_1435.html",
"doc_type":"cmpntguide",
"p_code":"34",
"code":"45"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"DML",
"uri":"mrs_01_1437.html",
"doc_type":"cmpntguide",
"p_code":"33",
"code":"46"
},
{
"desc":"This command is used to load user data of a particular type, so that CarbonData can provide good query performance.Only the raw data on HDFS can be loaded.LOAD DATA INPAT",
"product_code":"mrs",
"title":"LOAD DATA",
"uri":"mrs_01_1438.html",
"doc_type":"cmpntguide",
"p_code":"46",
"code":"47"
},
{
"desc":"This command is used to update the CarbonData table based on the column expression and optional filtering conditions.Syntax 1:UPDATE <CARBON TABLE> SET (column_name1, col",
"product_code":"mrs",
"title":"UPDATE CARBON TABLE",
"uri":"mrs_01_1439.html",
"doc_type":"cmpntguide",
"p_code":"46",
"code":"48"
},
{
"desc":"This command is used to delete records from a CarbonData table.DELETE FROM CARBON_TABLE [WHERE expression];If a segment is deleted, all secondary indexes associated with ",
"product_code":"mrs",
"title":"DELETE RECORDS from CARBON TABLE",
"uri":"mrs_01_1440.html",
"doc_type":"cmpntguide",
"p_code":"46",
"code":"49"
},
{
"desc":"This command is used to add the output of the SELECT command to a Carbon table.INSERT INTO [CARBON TABLE] [select query];A table has been created.You must belong to the d",
"product_code":"mrs",
"title":"INSERT INTO CARBON TABLE",
"uri":"mrs_01_1441.html",
"doc_type":"cmpntguide",
"p_code":"46",
"code":"50"
},
{
"desc":"This command is used to delete segments by the ID.DELETE FROM TABLE db_name.table_name WHERE SEGMENT.ID IN (segment_id1,segment_id2);Segments cannot be deleted from the s",
"product_code":"mrs",
"title":"DELETE SEGMENT by ID",
"uri":"mrs_01_1442.html",
"doc_type":"cmpntguide",
"p_code":"46",
"code":"51"
},
{
"desc":"This command is used to delete segments by loading date. Segments created before a specific date will be deleted.DELETE FROM TABLE db_name.table_name WHERE SEGMENT.STARTT",
"product_code":"mrs",
"title":"DELETE SEGMENT by DATE",
"uri":"mrs_01_1443.html",
"doc_type":"cmpntguide",
"p_code":"46",
"code":"52"
},
{
"desc":"This command is used to list the segments of a CarbonData table.SHOW SEGMENTS FOR TABLE [db_name.]table_name LIMIT number_of_loads;Nonecreate tablecarbon01(a int,b string",
"product_code":"mrs",
"title":"SHOW SEGMENTS",
"uri":"mrs_01_1444.html",
"doc_type":"cmpntguide",
"p_code":"46",
"code":"53"
},
{
"desc":"This command is used to create secondary indexes in the CarbonData tables.CREATE INDEX index_nameON TABLE [db_name.]table_name (col_name1, col_name2)AS 'carbondata'PROPER",
"product_code":"mrs",
"title":"CREATE SECONDARY INDEX",
"uri":"mrs_01_1445.html",
"doc_type":"cmpntguide",
"p_code":"46",
"code":"54"
},
{
"desc":"This command is used to list all secondary index tables in the CarbonData table.SHOW INDEXES ON db_name.table_name;db_name is optional.create table productdb.productSales",
"product_code":"mrs",
"title":"SHOW SECONDARY INDEXES",
"uri":"mrs_01_1446.html",
"doc_type":"cmpntguide",
"p_code":"46",
"code":"55"
},
{
"desc":"This command is used to delete the existing secondary index table in a specific table.DROP INDEX [IF EXISTS] index_nameON [db_name.]table_name;In this command, IF EXISTS ",
"product_code":"mrs",
"title":"DROP SECONDARY INDEX",
"uri":"mrs_01_1447.html",
"doc_type":"cmpntguide",
"p_code":"46",
"code":"56"
},
{
"desc":"After the DELETE SEGMENT command is executed, the deleted segments are marked as the delete state. After the segments are merged, the status of the original segments chan",
"product_code":"mrs",
"title":"CLEAN FILES",
"uri":"mrs_01_1448.html",
"doc_type":"cmpntguide",
"p_code":"46",
"code":"57"
},
{
"desc":"This command is used to dynamically add, update, display, or reset the CarbonData properties without restarting the driver.Add or Update parameter value:SET parameter_nam",
"product_code":"mrs",
"title":"SET/RESET",
"uri":"mrs_01_1449.html",
"doc_type":"cmpntguide",
"p_code":"46",
"code":"58"
},
{
"desc":"Before performing DDL and DML operations, you need to obtain the corresponding locks. See Table 1 for details about the locks that need to be obtained for each operation.",
"product_code":"mrs",
"title":"Operation Concurrent Execution",
"uri":"mrs_01_24046.html",
"doc_type":"cmpntguide",
"p_code":"33",
"code":"59"
},
{
"desc":"This section describes the APIs and usage methods of Segment. All methods are in the org.apache.spark.util.CarbonSegmentUtil class.The following methods have been abandon",
"product_code":"mrs",
"title":"API",
"uri":"mrs_01_1450.html",
"doc_type":"cmpntguide",
"p_code":"33",
"code":"60"
},
{
"desc":"Spatial data includes multidimensional points, lines, rectangles, cubes, polygons, and other geometric objects. A spatial data object occupies a certain region of space, ",
"product_code":"mrs",
"title":"Spatial Indexes",
"uri":"mrs_01_1451.html",
"doc_type":"cmpntguide",
"p_code":"33",
"code":"61"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"CarbonData Troubleshooting",
"uri":"mrs_01_1454.html",
"doc_type":"cmpntguide",
"p_code":"10",
"code":"62"
},
{
"desc":"When double data type values with higher precision are used in filters, incorrect values are returned by filtering results.When double data type values with higher precis",
"product_code":"mrs",
"title":"Filter Result Is not Consistent with Hive when a Big Double Type Value Is Used in Filter",
"uri":"mrs_01_1455.html",
"doc_type":"cmpntguide",
"p_code":"62",
"code":"63"
},
{
"desc":"The query performance fluctuates when the query is executed in different query periods.During data loading, the memory configured for each executor program instance may b",
"product_code":"mrs",
"title":"Query Performance Deterioration",
"uri":"mrs_01_1456.html",
"doc_type":"cmpntguide",
"p_code":"62",
"code":"64"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"CarbonData FAQ",
"uri":"mrs_01_1457.html",
"doc_type":"cmpntguide",
"p_code":"10",
"code":"65"
},
{
"desc":"Why is incorrect output displayed when I perform query with filter on decimal data type values?For example:select * from carbon_table where num = 1234567890123456.22;Outp",
"product_code":"mrs",
"title":"Why Is Incorrect Output Displayed When I Perform Query with Filter on Decimal Data Type Values?",
"uri":"mrs_01_1458.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"66"
},
{
"desc":"How to avoid minor compaction for historical data?If you want to load historical data first and then the incremental data, perform following steps to avoid minor compacti",
"product_code":"mrs",
"title":"How to Avoid Minor Compaction for Historical Data?",
"uri":"mrs_01_1459.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"67"
},
{
"desc":"How to change the default group name for CarbonData data loading?By default, the group name for CarbonData data loading is ficommon. You can perform the following operati",
"product_code":"mrs",
"title":"How to Change the Default Group Name for CarbonData Data Loading?",
"uri":"mrs_01_1460.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"68"
},
{
"desc":"Why does the INSERT INTO CARBON TABLE command fail and the following error message is displayed?The INSERT INTO CARBON TABLE command fails in the following scenarios:If t",
"product_code":"mrs",
"title":"Why Does INSERT INTO CARBON TABLE Command Fail?",
"uri":"mrs_01_1461.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"69"
},
{
"desc":"Why is the data logged in bad records different from the original input data with escaped characters?An escape character is a backslash (\\) followed by one or more charac",
"product_code":"mrs",
"title":"Why Is the Data Logged in Bad Records Different from the Original Input Data with Escape Characters?",
"uri":"mrs_01_1462.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"70"
},
{
"desc":"Why data load performance decreases due to bad records?If bad records are present in the data and BAD_RECORDS_LOGGER_ENABLE is true or BAD_RECORDS_ACTION is redirect then",
"product_code":"mrs",
"title":"Why Data Load Performance Decreases due to Bad Records?",
"uri":"mrs_01_1463.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"71"
},
{
"desc":"Why INSERT INTO or LOAD DATA task distribution is incorrect, and the openedtasks are less than the available executors when the number of initial executors is zero?In ca",
"product_code":"mrs",
"title":"Why INSERT INTO/LOAD DATA Task Distribution Is Incorrect and the Opened Tasks Are Less Than the Available Executors when the Number of Initial ExecutorsIs Zero?",
"uri":"mrs_01_1464.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"72"
},
{
"desc":"Why does CarbonData require additional executors even though the parallelism is greater than the number of blocks to be processed?CarbonData block distribution optimizes ",
"product_code":"mrs",
"title":"Why Does CarbonData Require Additional Executors Even Though the Parallelism Is Greater Than the Number of Blocks to Be Processed?",
"uri":"mrs_01_1465.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"73"
},
{
"desc":"Why Data Loading fails during off heap?YARN Resource Manager will consider (Java heap memory + spark.yarn.am.memoryOverhead) as memory limit, so during the off heap, the ",
"product_code":"mrs",
"title":"Why Data loading Fails During off heap?",
"uri":"mrs_01_1466.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"74"
},
{
"desc":"Why do I fail to create a hive table?Creating a Hive table fails, when source table or sub query has more number of partitions. The implementation of the query requires a",
"product_code":"mrs",
"title":"Why Do I Fail to Create a Hive Table?",
"uri":"mrs_01_1467.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"75"
},
{
"desc":"Why CarbonData tables created in V100R002C50RC1 not reflecting the privileges provided in Hive Privileges for non-owner?The Hive ACL is implemented after the version V100",
"product_code":"mrs",
"title":"Why CarbonData tables created in V100R002C50RC1 not reflecting the privileges provided in Hive Privileges for non-owner?",
"uri":"mrs_01_1468.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"76"
},
{
"desc":"How do I logically split data across different namespaces?Configuration:To logically split data across different namespaces, you must update the following configuration i",
"product_code":"mrs",
"title":"How Do I Logically Split Data Across Different Namespaces?",
"uri":"mrs_01_1469.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"77"
},
{
"desc":"Why drop database cascade is throwing the following exception?This error is thrown when the owner of the database performs drop database <database_name> cascade which con",
"product_code":"mrs",
"title":"Why Missing Privileges Exception is Reported When I Perform Drop Operation on Databases?",
"uri":"mrs_01_1470.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"78"
},
{
"desc":"Why the UPDATE command cannot be executed in Spark Shell?The syntax and examples provided in this document are about Beeline commands instead of Spark Shell commands.To r",
"product_code":"mrs",
"title":"Why the UPDATE Command Cannot Be Executed in Spark Shell?",
"uri":"mrs_01_1471.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"79"
},
{
"desc":"How do I configure unsafe memory in CarbonData?In the Spark configuration, the value of spark.yarn.executor.memoryOverhead must be greater than the sum of (sort.inmemory.",
"product_code":"mrs",
"title":"How Do I Configure Unsafe Memory in CarbonData?",
"uri":"mrs_01_1472.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"80"
},
{
"desc":"Why exception occurs in CarbonData when Disk Space Quota is set for the storage directory in HDFS?The data will be written to HDFS when you during create table, load tabl",
"product_code":"mrs",
"title":"Why Exception Occurs in CarbonData When Disk Space Quota is Set for Storage Directory in HDFS?",
"uri":"mrs_01_1473.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"81"
},
{
"desc":"Why does data query or loading fail and \"org.apache.carbondata.core.memory.MemoryException: Not enough memory\" is displayed?This exception is thrown when the out-of-heap ",
"product_code":"mrs",
"title":"Why Does Data Query or Loading Fail and \"org.apache.carbondata.core.memory.MemoryException: Not enough memory\" Is Displayed?",
"uri":"mrs_01_1474.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"82"
},
{
"desc":"Why do files of a Carbon table exist in the recycle bin even if the drop table command is not executed when mis-deletion prevention is enabled?After the the mis-deletion ",
"product_code":"mrs",
"title":"Why Do Files of a Carbon Table Exist in the Recycle Bin Even If the drop table Command Is Not Executed When Mis-deletion Prevention Is Enabled?",
"uri":"mrs_01_24537.html",
"doc_type":"cmpntguide",
"p_code":"65",
"code":"83"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using ClickHouse",
"uri":"mrs_01_2344.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"84"
},
{
"desc":"ClickHouse is a column-based database oriented to online analysis and processing. It supports SQL query and provides good query performance. The aggregation analysis and ",
"product_code":"mrs",
"title":"Using ClickHouse from Scratch",
"uri":"mrs_01_2345.html",
"doc_type":"cmpntguide",
"p_code":"84",
"code":"85"
},
{
"desc":"Table engines play a key role in ClickHouse to determine:Where to write and read dataSupported query modesWhether concurrent data access is supportedWhether indexes can b",
"product_code":"mrs",
"title":"ClickHouse Table Engine Overview",
"uri":"mrs_01_24105.html",
"doc_type":"cmpntguide",
"p_code":"84",
"code":"86"
},
{
"desc":"ClickHouse implements the replicated table mechanism based on the ReplicatedMergeTree engine and ZooKeeper. When creating a table, you can specify an engine to determine ",
"product_code":"mrs",
"title":"Creating a ClickHouse Table",
"uri":"mrs_01_2398.html",
"doc_type":"cmpntguide",
"p_code":"84",
"code":"87"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common ClickHouse SQL Syntax",
"uri":"mrs_01_24199.html",
"doc_type":"cmpntguide",
"p_code":"84",
"code":"88"
},
{
"desc":"This section describes the basic syntax and usage of the SQL statement for creating a ClickHouse database.CREATE DATABASE [IF NOT EXISTS] Database_name [ON CLUSTERClickHo",
"product_code":"mrs",
"title":"CREATE DATABASE: Creating a Database",
"uri":"mrs_01_24200.html",
"doc_type":"cmpntguide",
"p_code":"88",
"code":"89"
},
{
"desc":"This section describes the basic syntax and usage of the SQL statement for creating a ClickHouse table.Method 1: Creating a table named table_name in the specified databa",
"product_code":"mrs",
"title":"CREATE TABLE: Creating a Table",
"uri":"mrs_01_24201.html",
"doc_type":"cmpntguide",
"p_code":"88",
"code":"90"
},
{
"desc":"This section describes the basic syntax and usage of the SQL statement for inserting data to a table in ClickHouse.Method 1: Inserting data in standard formatINSERT INTO ",
"product_code":"mrs",
"title":"INSERT INTO: Inserting Data into a Table",
"uri":"mrs_01_24202.html",
"doc_type":"cmpntguide",
"p_code":"88",
"code":"91"
},
{
"desc":"This section describes the basic syntax and usage of the SQL statement for querying table data in ClickHouse.SELECT [DISTINCT] expr_list[FROM[database_name.]table| (subqu",
"product_code":"mrs",
"title":"SELECT: Querying Table Data",
"uri":"mrs_01_24203.html",
"doc_type":"cmpntguide",
"p_code":"88",
"code":"92"
},
{
"desc":"This section describes the basic syntax and usage of the SQL statement for modifying a table structure in ClickHouse.ALTER TABLE [database_name].name[ON CLUSTER cluster] ",
"product_code":"mrs",
"title":"ALTER TABLE: Modifying a Table Structure",
"uri":"mrs_01_24204.html",
"doc_type":"cmpntguide",
"p_code":"88",
"code":"93"
},
{
"desc":"This section describes the basic syntax and usage of the SQL statement for querying a table structure in ClickHouse.DESC|DESCRIBETABLE[database_name.]table[INTOOUTFILE fi",
"product_code":"mrs",
"title":"DESC: Querying a Table Structure",
"uri":"mrs_01_24205.html",
"doc_type":"cmpntguide",
"p_code":"88",
"code":"94"
},
{
"desc":"This section describes the basic syntax and usage of the SQL statement for deleting a ClickHouse table.DROP[TEMPORARY] TABLE[IF EXISTS] [database_name.]name[ON CLUSTER cl",
"product_code":"mrs",
"title":"DROP: Deleting a Table",
"uri":"mrs_01_24208.html",
"doc_type":"cmpntguide",
"p_code":"88",
"code":"95"
},
{
"desc":"This section describes the basic syntax and usage of the SQL statement for displaying information about databases and tables in ClickHouse.show databasesshow tables",
"product_code":"mrs",
"title":"SHOW: Displaying Information About Databases and Tables",
"uri":"mrs_01_24207.html",
"doc_type":"cmpntguide",
"p_code":"88",
"code":"96"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Migrating ClickHouse Data",
"uri":"mrs_01_24250.html",
"doc_type":"cmpntguide",
"p_code":"84",
"code":"97"
},
{
"desc":"This section describes the basic syntax and usage of the SQL statements for importing and exporting file data using ClickHouse.Importing data in CSV formatclickhouse clie",
"product_code":"mrs",
"title":"Using ClickHouse to Import and Export Data",
"uri":"mrs_01_24206.html",
"doc_type":"cmpntguide",
"p_code":"97",
"code":"98"
},
{
"desc":"This section describes how to create a Kafka table to automatically synchronize Kafka data to the ClickHouse cluster.You have created a Kafka cluster. The Kafka client ha",
"product_code":"mrs",
"title":"Synchronizing Kafka Data to ClickHouse",
"uri":"mrs_01_24377.html",
"doc_type":"cmpntguide",
"p_code":"97",
"code":"99"
},
{
"desc":"The ClickHouse data migration tool can migrate some partitions of one or more partitioned MergeTree tables on several ClickHouseServer nodes to the same tables on other C",
"product_code":"mrs",
"title":"Using the ClickHouse Data Migration Tool",
"uri":"mrs_01_24198.html",
"doc_type":"cmpntguide",
"p_code":"97",
"code":"100"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"User Management and Authentication",
"uri":"mrs_01_24251.html",
"doc_type":"cmpntguide",
"p_code":"84",
"code":"101"
},
{
"desc":"ClickHouse user permission management enables unified management of users, roles, and permissions on each ClickHouse instance in the cluster. You can use the permission m",
"product_code":"mrs",
"title":"ClickHouse User and Permission Management",
"uri":"mrs_01_24057.html",
"doc_type":"cmpntguide",
"p_code":"101",
"code":"102"
},
{
"desc":"ClickHouse can be interconnected with OpenLDAP. You can manage accounts and permissions in a centralized manner by adding the OpenLDAP server configuration and creating u",
"product_code":"mrs",
"title":"Interconnecting ClickHouse With OpenLDAP for Authentication",
"uri":"mrs_01_24109.html",
"doc_type":"cmpntguide",
"p_code":"101",
"code":"103"
},
{
"desc":"This section describes how to back up data by exporting ClickHouse data to a CSV file and restore data using the CSV file.You have installed the ClickHouse client.You hav",
"product_code":"mrs",
"title":"Backing Up and Restoring ClickHouse Data Using a Data File",
"uri":"mrs_01_24292.html",
"doc_type":"cmpntguide",
"p_code":"84",
"code":"104"
},
{
"desc":"Log path: The default storage path of ClickHouse log files is as follows: ${BIGDATA_LOG_HOME}/clickhouseLog archive rule: The automatic ClickHouse log compression functio",
"product_code":"mrs",
"title":"ClickHouse Log Overview",
"uri":"mrs_01_2399.html",
"doc_type":"cmpntguide",
"p_code":"84",
"code":"105"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using DBService",
"uri":"mrs_01_2356.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"106"
},
{
"desc":"Log path: The default storage path of DBService log files is /var/log/Bigdata/dbservice.GaussDB: /var/log/Bigdata/dbservice/DB (GaussDB run log directory), /var/log/Bigda",
"product_code":"mrs",
"title":"DBService Log Overview",
"uri":"mrs_01_0789.html",
"doc_type":"cmpntguide",
"p_code":"106",
"code":"107"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Flink",
"uri":"mrs_01_0591.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"108"
},
{
"desc":"This section describes how to use Flink to run wordcount jobs.Flink has been installed in an MRS cluster.The cluster runs properly and the client has been correctly insta",
"product_code":"mrs",
"title":"Using Flink from Scratch",
"uri":"mrs_01_0473.html",
"doc_type":"cmpntguide",
"p_code":"108",
"code":"109"
},
{
"desc":"You can view Flink job information on the Yarn web UI.The Flink service has been installed in a cluster.For versions earlier than MRS 1.9.2, log in to MRS Manager and cho",
"product_code":"mrs",
"title":"Viewing Flink Job Information",
"uri":"mrs_01_0784.html",
"doc_type":"cmpntguide",
"p_code":"108",
"code":"110"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Flink Configuration Management",
"uri":"mrs_01_0592.html",
"doc_type":"cmpntguide",
"p_code":"108",
"code":"111"
},
{
"desc":"All parameters of Flink must be set on a client. The path of a configuration file is as follows: Client installation path/Flink/flink/conf/flink-conf.yaml.You are advised",
"product_code":"mrs",
"title":"Configuring Parameter Paths",
"uri":"mrs_01_1565.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"112"
},
{
"desc":"JobManager and TaskManager are main components of Flink. You can configure the parameters for different security and performance scenarios on the client.Main configuratio",
"product_code":"mrs",
"title":"JobManager & TaskManager",
"uri":"mrs_01_1566.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"113"
},
{
"desc":"The Blob server on the JobManager node is used to receive JAR files uploaded by users on the client, send JAR files to TaskManager, and transfer log files. Flink provides",
"product_code":"mrs",
"title":"Blob",
"uri":"mrs_01_1567.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"114"
},
{
"desc":"The Akka actor model is the basis of communications between the Flink client and JobManager, JobManager and TaskManager, as well as TaskManager and TaskManager. Flink ena",
"product_code":"mrs",
"title":"Distributed Coordination (via Akka)",
"uri":"mrs_01_1568.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"115"
},
{
"desc":"When the secure Flink cluster is required, SSL-related configuration items must be set.Configuration items include the SSL switch, certificate, password, and encryption a",
"product_code":"mrs",
"title":"SSL",
"uri":"mrs_01_1569.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"116"
},
{
"desc":"When Flink runs a job, data transmission and reverse pressure detection between tasks depend on Netty. In certain environments, Netty parameters should be configured.For ",
"product_code":"mrs",
"title":"Network communication (via Netty)",
"uri":"mrs_01_1570.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"117"
},
{
"desc":"When JobManager is started, the web server in the same process is also started.You can access the web server to obtain information about the current Flink cluster, includ",
"product_code":"mrs",
"title":"JobManager Web Frontend",
"uri":"mrs_01_1571.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"118"
},
{
"desc":"Result files are created when tasks are running. Flink enables you to configure parameters for file creation.Configuration items include overwriting policy and directory ",
"product_code":"mrs",
"title":"File Systems",
"uri":"mrs_01_1572.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"119"
},
{
"desc":"Flink enables HA and job exception, as well as job pause and recovery during version upgrade. Flink depends on state backend to store job states and on the restart strate",
"product_code":"mrs",
"title":"State Backend",
"uri":"mrs_01_1573.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"120"
},
{
"desc":"Flink Kerberos configuration items must be configured in security mode.The configuration items include keytab and principal of Kerberos.",
"product_code":"mrs",
"title":"Kerberos-based Security",
"uri":"mrs_01_1574.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"121"
},
{
"desc":"The Flink HA mode depends on ZooKeeper. Therefore, ZooKeeper-related configuration items must be set.Configuration items include the ZooKeeper address, path, and security",
"product_code":"mrs",
"title":"HA",
"uri":"mrs_01_1575.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"122"
},
{
"desc":"In scenarios raising special requirements on JVM configuration, users can use configuration items to transfer JVM parameters to the client, JobManager, and TaskManager.Co",
"product_code":"mrs",
"title":"Environment",
"uri":"mrs_01_1576.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"123"
},
{
"desc":"Flink runs on a Yarn cluster and JobManager runs on ApplicationMaster. Certain configuration parameters of JobManager depend on Yarn. By setting Yarn-related configuratio",
"product_code":"mrs",
"title":"Yarn",
"uri":"mrs_01_1577.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"124"
},
{
"desc":"The Netty connection is used among multiple jobs to reduce latency. In this case, NettySink is used on the server and NettySource is used on the client for data transmiss",
"product_code":"mrs",
"title":"Pipeline",
"uri":"mrs_01_1578.html",
"doc_type":"cmpntguide",
"p_code":"111",
"code":"125"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Security Configuration",
"uri":"mrs_01_0593.html",
"doc_type":"cmpntguide",
"p_code":"108",
"code":"126"
},
{
"desc":"All Flink cluster components support authentication.The Kerberos authentication is supported between Flink cluster components and external components, such as Yarn, HDFS,",
"product_code":"mrs",
"title":"Security Features",
"uri":"mrs_01_1579.html",
"doc_type":"cmpntguide",
"p_code":"126",
"code":"127"
},
{
"desc":"Sample project data of Flink is stored in Kafka. A user with Kafka permission can send data to Kafka and receive data from it.Run Linux command line to create a topic. Be",
"product_code":"mrs",
"title":"Configuring Kafka",
"uri":"mrs_01_1580.html",
"doc_type":"cmpntguide",
"p_code":"126",
"code":"128"
},
{
"desc":"This section applies to MRS 3.x or later clusters.Configure files.nettyconnector.registerserver.topic.storage: (Mandatory) Configures the path (on a third-party server) t",
"product_code":"mrs",
"title":"Configuring Pipeline",
"uri":"mrs_01_1581.html",
"doc_type":"cmpntguide",
"p_code":"126",
"code":"129"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Security Hardening",
"uri":"mrs_01_0594.html",
"doc_type":"cmpntguide",
"p_code":"108",
"code":"130"
},
{
"desc":"Flink uses the following three authentication modes:Kerberos authentication: It is used between the Flink Yarn client and Yarn ResourceManager, JobManager and ZooKeeper, ",
"product_code":"mrs",
"title":"Authentication and Encryption",
"uri":"mrs_01_1583.html",
"doc_type":"cmpntguide",
"p_code":"130",
"code":"131"
},
{
"desc":"In HA mode of Flink, ZooKeeper can be used to manage clusters and discover services. Zookeeper supports SASL ACL control. Only users who have passed the SASL (Kerberos) a",
"product_code":"mrs",
"title":"ACL Control",
"uri":"mrs_01_1584.html",
"doc_type":"cmpntguide",
"p_code":"130",
"code":"132"
},
{
"desc":"Note: The same coding mode is used on the web service client and server to prevent garbled characters and to enable input verification.Security hardening: apply UTF-8 to ",
"product_code":"mrs",
"title":"Web Security",
"uri":"mrs_01_1585.html",
"doc_type":"cmpntguide",
"p_code":"130",
"code":"133"
},
{
"desc":"All security functions of Flink are provided by the open source community or self-developed. Security features that need to be configured by users, such as authentication",
"product_code":"mrs",
"title":"Security Statement",
"uri":"mrs_01_1586.html",
"doc_type":"cmpntguide",
"p_code":"108",
"code":"134"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using the Flink Web UI",
"uri":"mrs_01_24014.html",
"doc_type":"cmpntguide",
"p_code":"108",
"code":"135"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Overview",
"uri":"mrs_01_24015.html",
"doc_type":"cmpntguide",
"p_code":"135",
"code":"136"
},
{
"desc":"Flink web UI provides a web-based visual development platform. You only need to compile SQL statements to develop jobs, slashing the job development threshold. In additio",
"product_code":"mrs",
"title":"Introduction to Flink Web UI",
"uri":"mrs_01_24016.html",
"doc_type":"cmpntguide",
"p_code":"136",
"code":"137"
},
{
"desc":"The Flink web UI application process is shown as follows:",
"product_code":"mrs",
"title":"Flink Web UI Application Process",
"uri":"mrs_01_24017.html",
"doc_type":"cmpntguide",
"p_code":"136",
"code":"138"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"FlinkServer Permissions Management",
"uri":"mrs_01_24047.html",
"doc_type":"cmpntguide",
"p_code":"135",
"code":"139"
},
{
"desc":"User admin of Manager does not have the FlinkServer service operation permission. To perform FlinkServer service operations, you need to grant related permission to the u",
"product_code":"mrs",
"title":"Overview",
"uri":"mrs_01_24048.html",
"doc_type":"cmpntguide",
"p_code":"139",
"code":"140"
},
{
"desc":"This section describes how to create and configure a FlinkServer role on Manager as the system administrator. A FlinkServer role can be configured with FlinkServer admini",
"product_code":"mrs",
"title":"Authentication Based on Users and Roles",
"uri":"mrs_01_24049.html",
"doc_type":"cmpntguide",
"p_code":"139",
"code":"141"
},
{
"desc":"After Flink is installed in an MRS cluster, you can connect to clusters and data as well as manage stream tables and jobs using the Flink web UI.This section describes ho",
"product_code":"mrs",
"title":"Accessing the Flink Web UI",
"uri":"mrs_01_24019.html",
"doc_type":"cmpntguide",
"p_code":"135",
"code":"142"
},
{
"desc":"Applications can be used to isolate different upper-layer services.After the application is created, you can switch to the application to be operated in the upper left co",
"product_code":"mrs",
"title":"Creating an Application on the Flink Web UI",
"uri":"mrs_01_24020.html",
"doc_type":"cmpntguide",
"p_code":"135",
"code":"143"
},
{
"desc":"Different clusters can be accessed by configuring the cluster connection.To obtain the cluster client configuration files, perform the following steps:Log in to FusionIns",
"product_code":"mrs",
"title":"Creating a Cluster Connection on the Flink Web UI",
"uri":"mrs_01_24021.html",
"doc_type":"cmpntguide",
"p_code":"135",
"code":"144"
},
{
"desc":"You can use data connections to access different data services. Currently, FlinkServer supports HDFS and Kafka data connections.",
"product_code":"mrs",
"title":"Creating a Data Connection on the Flink Web UI",
"uri":"mrs_01_24022.html",
"doc_type":"cmpntguide",
"p_code":"135",
"code":"145"
},
{
"desc":"Data tables can be used to define basic attributes and parameters of source tables, dimension tables, and output tables.",
"product_code":"mrs",
"title":"Managing Tables on the Flink Web UI",
"uri":"mrs_01_24023.html",
"doc_type":"cmpntguide",
"p_code":"135",
"code":"146"
},
{
"desc":"Define Flink jobs, including Flink SQL and Flink JAR jobs.Creating a Flink SQL jobDevelop the job on the job development page.Click Check Semantic to check the input cont",
"product_code":"mrs",
"title":"Managing Jobs on the Flink Web UI",
"uri":"mrs_01_24024.html",
"doc_type":"cmpntguide",
"p_code":"135",
"code":"147"
},
{
"desc":"Log path:Run logs of a Flink job: ${BIGDATA_DATA_HOME}/hadoop/data${i}/nm/containerlogs/application_${appid}/container_{$contid}The logs of executing tasks are stored in ",
"product_code":"mrs",
"title":"Flink Log Overview",
"uri":"mrs_01_0596.html",
"doc_type":"cmpntguide",
"p_code":"108",
"code":"148"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Flink Performance Tuning",
"uri":"mrs_01_0597.html",
"doc_type":"cmpntguide",
"p_code":"108",
"code":"149"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Optimization DataStream",
"uri":"mrs_01_1587.html",
"doc_type":"cmpntguide",
"p_code":"149",
"code":"150"
},
{
"desc":"The computing of Flink depends on memory. If the memory is insufficient, the performance of Flink will be greatly deteriorated. One solution is to monitor garbage collect",
"product_code":"mrs",
"title":"Memory Configuration Optimization",
"uri":"mrs_01_1588.html",
"doc_type":"cmpntguide",
"p_code":"150",
"code":"151"
},
{
"desc":"The degree of parallelism (DOP) indicates the number of tasks to be executed concurrently. It determines the number of data blocks after the operation. Configuring the DO",
"product_code":"mrs",
"title":"Configuring DOP",
"uri":"mrs_01_1589.html",
"doc_type":"cmpntguide",
"p_code":"150",
"code":"152"
},
{
"desc":"In Flink on Yarn mode, there are JobManagers and TaskManagers. JobManagers and TaskManagers schedule and run tasks.Therefore, configuring parameters of JobManagers and Ta",
"product_code":"mrs",
"title":"Configuring Process Parameters",
"uri":"mrs_01_1590.html",
"doc_type":"cmpntguide",
"p_code":"150",
"code":"153"
},
{
"desc":"The divide of tasks can be optimized by optimizing the partitioning method. If data skew occurs in a certain task, the whole execution process is delayed. Therefore, when",
"product_code":"mrs",
"title":"Optimizing the Design of Partitioning Method",
"uri":"mrs_01_1591.html",
"doc_type":"cmpntguide",
"p_code":"150",
"code":"154"
},
{
"desc":"The communication of Flink is based on Netty network. The network performance determines the data switching speed and task execution efficiency. Therefore, the performanc",
"product_code":"mrs",
"title":"Configuring the Netty Network Communication",
"uri":"mrs_01_1592.html",
"doc_type":"cmpntguide",
"p_code":"150",
"code":"155"
},
{
"desc":"If data skew occurs (certain data volume is extremely large), the execution time of tasks is inconsistent even though no GC is performed.Redefine keys. Use keys of smalle",
"product_code":"mrs",
"title":"Experience Summary",
"uri":"mrs_01_1593.html",
"doc_type":"cmpntguide",
"p_code":"150",
"code":"156"
},
{
"desc":"This section applies to MRS 3.x or later clusters.Before running the Flink shell commands, perform the following steps:source /opt/client/bigdata_envkinit Service user",
"product_code":"mrs",
"title":"Common Flink Shell Commands",
"uri":"mrs_01_0598.html",
"doc_type":"cmpntguide",
"p_code":"108",
"code":"157"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Flume",
"uri":"mrs_01_0390.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"158"
},
{
"desc":"You can use Flume to import collected log information to Kafka.A streaming cluster that contains components such as Flume and Kafka and has Kerberos authentication enable",
"product_code":"mrs",
"title":"Using Flume from Scratch",
"uri":"mrs_01_0397.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"159"
},
{
"desc":"Flume is a distributed, reliable, and highly available system for aggregating massive logs, which can efficiently collect, aggregate, and move massive log data from diffe",
"product_code":"mrs",
"title":"Overview",
"uri":"mrs_01_0391.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"160"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Installing the Flume Client",
"uri":"mrs_01_0392.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"161"
},
{
"desc":"To use Flume to collect logs, you must install the Flume client on a log host. You can create an ECS and install the Flume client on it.This section applies to MRS 3.x or",
"product_code":"mrs",
"title":"Installing the Flume Client on Clusters of Versions Earlier Than MRS 3.x",
"uri":"mrs_01_1594.html",
"doc_type":"cmpntguide",
"p_code":"161",
"code":"162"
},
{
"desc":"To use Flume to collect logs, you must install the Flume client on a log host. You can create an ECS and install the Flume client on it.This section applies to MRS 3.x or",
"product_code":"mrs",
"title":"Installing the Flume Client on MRS 3.x or Later Clusters",
"uri":"mrs_01_1595.html",
"doc_type":"cmpntguide",
"p_code":"161",
"code":"163"
},
{
"desc":"You can view logs to locate faults.The Flume client has been installed.ls -lR flume-client-*A log file is shown as follows:In the log file, FlumeClient.log is the run log",
"product_code":"mrs",
"title":"Viewing Flume Client Logs",
"uri":"mrs_01_0393.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"164"
},
{
"desc":"You can stop and start the Flume client or uninstall the Flume client when the Flume data ingestion channel is not required.Stop the Flume client of the Flume role.Assume",
"product_code":"mrs",
"title":"Stopping or Uninstalling the Flume Client",
"uri":"mrs_01_0394.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"165"
},
{
"desc":"You can use the encryption tool provided by the Flume client to encrypt some parameter values in the configuration file.The Flume client has been installed.cd fusioninsig",
"product_code":"mrs",
"title":"Using the Encryption Tool of the Flume Client",
"uri":"mrs_01_0395.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"166"
},
{
"desc":"This section applies to MRS 3.x or later clusters.This configuration guide describes how to configure common Flume services. For non-common Source, Channel, and Sink conf",
"product_code":"mrs",
"title":"Flume Service Configuration Guide",
"uri":"mrs_01_1057.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"167"
},
{
"desc":"For versions earlier than MRS 3.x, configure Flume parameters in the properties.properties file.For MRS 3.x or later, some parameters can be configured on Manager.This se",
"product_code":"mrs",
"title":"Flume Configuration Parameter Description",
"uri":"mrs_01_0396.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"168"
},
{
"desc":"This section describes how to use environment variables in the properties.properties configuration file.This section applies to MRS 3.x or later clusters.The Flume servic",
"product_code":"mrs",
"title":"Using Environment Variables in the properties.properties File",
"uri":"mrs_01_1058.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"169"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Non-Encrypted Transmission",
"uri":"mrs_01_1059.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"170"
},
{
"desc":"This section describes how to configure Flume server and client parameters after the cluster and the Flume service are installed to ensure proper running of the service.T",
"product_code":"mrs",
"title":"Configuring Non-encrypted Transmission",
"uri":"mrs_01_1060.html",
"doc_type":"cmpntguide",
"p_code":"170",
"code":"171"
},
{
"desc":"This section describes how to use the Flume client to collect static logs from a local host and save them to the topic list (test1) of Kafka.This section applies to MRS 3",
"product_code":"mrs",
"title":"Typical Scenario: Collecting Local Static Logs and Uploading Them to Kafka",
"uri":"mrs_01_1061.html",
"doc_type":"cmpntguide",
"p_code":"170",
"code":"172"
},
{
"desc":"This section describes how to use the Flume client to collect static logs from a local host and save them to the /flume/test directory on HDFS.This section applies to MRS",
"product_code":"mrs",
"title":"Typical Scenario: Collecting Local Static Logs and Uploading Them to HDFS",
"uri":"mrs_01_1063.html",
"doc_type":"cmpntguide",
"p_code":"170",
"code":"173"
},
{
"desc":"This section describes how to use the Flume client to collect dynamic logs from a local host and save them to the /flume/test directory on HDFS.This section applies to MR",
"product_code":"mrs",
"title":"Typical Scenario: Collecting Local Dynamic Logs and Uploading Them to HDFS",
"uri":"mrs_01_1064.html",
"doc_type":"cmpntguide",
"p_code":"170",
"code":"174"
},
{
"desc":"This section describes how to use the Flume client to collect logs from the topic list (test1) of Kafka and save them to the /flume/test directory on HDFS.This section ap",
"product_code":"mrs",
"title":"Typical Scenario: Collecting Logs from Kafka and Uploading Them to HDFS",
"uri":"mrs_01_1065.html",
"doc_type":"cmpntguide",
"p_code":"170",
"code":"175"
},
{
"desc":"This section describes how to use the Flume client to collect logs from the topic list (test1) of the Kafka client and save them to the /flume/test directory on HDFS.This",
"product_code":"mrs",
"title":"Typical Scenario: Collecting Logs from Kafka and Uploading Them to HDFS Through the Flume Client",
"uri":"mrs_01_1066.html",
"doc_type":"cmpntguide",
"p_code":"170",
"code":"176"
},
{
"desc":"This section describes how to use the Flume client to collect static logs from a local host and save them to the flume_test HBase table. In this scenario, multi-level age",
"product_code":"mrs",
"title":"Typical Scenario: Collecting Local Static Logs and Uploading Them to HBase",
"uri":"mrs_01_1067.html",
"doc_type":"cmpntguide",
"p_code":"170",
"code":"177"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Encrypted Transmission",
"uri":"mrs_01_1068.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"178"
},
{
"desc":"This section describes how to configure the server and client parameters of the Flume service (including the Flume and MonitorServer roles) after the cluster is installed",
"product_code":"mrs",
"title":"Configuring the Encrypted Transmission",
"uri":"mrs_01_1069.html",
"doc_type":"cmpntguide",
"p_code":"178",
"code":"179"
},
{
"desc":"This section describes how to use Flume to collect static logs from a local host and save them to the /flume/test directory on HDFS.This section applies to MRS 3.x or lat",
"product_code":"mrs",
"title":"Typical Scenario: Collecting Local Static Logs and Uploading Them to HDFS",
"uri":"mrs_01_1070.html",
"doc_type":"cmpntguide",
"p_code":"178",
"code":"180"
},
{
"desc":"The Flume client outside the FusionInsight cluster is a part of the end-to-end data collection. Both the Flume client outside the cluster and the Flume server in the clus",
"product_code":"mrs",
"title":"Viewing Flume Client Monitoring Information",
"uri":"mrs_01_1596.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"181"
},
{
"desc":"This section describes how to connect to Kafka using the Flume client in security mode.This section applies to MRS 3.x or later.Set keyTab and principal based on site req",
"product_code":"mrs",
"title":"Connecting Flume to Kafka in Security Mode",
"uri":"mrs_01_1071.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"182"
},
{
"desc":"This section describes how to use Flume to connect to Hive (version 3.1.0) in the cluster.This section applies to MRS 3.x or later.Flume and Hive have been correctly inst",
"product_code":"mrs",
"title":"Connecting Flume with Hive in Security Mode",
"uri":"mrs_01_1072.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"183"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Configuring the Flume Service Model",
"uri":"mrs_01_1073.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"184"
},
{
"desc":"This section applies to MRS 3.x or later.Guide a reasonable Flume service configuration by providing performance differences between Flume common modules, to avoid a nons",
"product_code":"mrs",
"title":"Overview",
"uri":"mrs_01_1074.html",
"doc_type":"cmpntguide",
"p_code":"184",
"code":"185"
},
{
"desc":"This section applies to MRS 3.x or later.During Flume service configuration and module selection, the ultimate throughput of a sink must be greater than the maximum throu",
"product_code":"mrs",
"title":"Service Model Configuration Guide",
"uri":"mrs_01_1075.html",
"doc_type":"cmpntguide",
"p_code":"184",
"code":"186"
},
{
"desc":"Log path: The default path of Flume log files is /var/log/Bigdata/Role name.FlumeServer: /var/log/Bigdata/flume/flumeFlumeClient: /var/log/Bigdata/flume-client-n/flumeMon",
"product_code":"mrs",
"title":"Introduction to Flume Logs",
"uri":"mrs_01_1081.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"187"
},
{
"desc":"This section describes how to join and log out of a cgroup, query the cgroup status, and change the cgroup CPU threshold.This section applies to MRS 3.x or later.Join Cgr",
"product_code":"mrs",
"title":"Flume Client Cgroup Usage Guide",
"uri":"mrs_01_1082.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"188"
},
{
"desc":"This section describes how to perform secondary development for third-party plug-ins.This section applies to MRS 3.x or later.You have obtained the third-party JAR packag",
"product_code":"mrs",
"title":"Secondary Development Guide for Flume Third-Party Plug-ins",
"uri":"mrs_01_1083.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"189"
},
{
"desc":"Flume logs are stored in /var/log/Bigdata/flume/flume/flumeServer.log. Most data transmission exceptions and data transmission failures are recorded in logs. You can run ",
"product_code":"mrs",
"title":"Common Issues About Flume",
"uri":"mrs_01_1598.html",
"doc_type":"cmpntguide",
"p_code":"158",
"code":"190"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using HBase",
"uri":"mrs_01_0500.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"191"
},
{
"desc":"HBase is a column-based distributed storage system that features high reliability, performance, and scalability. This section describes how to use HBase from scratch, inc",
"product_code":"mrs",
"title":"Using HBase from Scratch",
"uri":"mrs_01_0368.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"192"
},
{
"desc":"This section describes how to use the HBase client in an O&M scenario or a service scenario.The client has been installed. For example, the installation directory is /opt",
"product_code":"mrs",
"title":"Using an HBase Client",
"uri":"bakmrs_01_0368.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"193"
},
{
"desc":"This section guides the system administrator to create and configure an HBase role on Manager. The HBase role can set HBase administrator permissions and read (R), write ",
"product_code":"mrs",
"title":"Creating HBase Roles",
"uri":"mrs_01_1608.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"194"
},
{
"desc":"As a key feature to ensure high availability of the HBase cluster system, HBase cluster replication provides HBase with remote data replication in real time. It provides ",
"product_code":"mrs",
"title":"Configuring HBase Replication",
"uri":"mrs_01_0501.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"195"
},
{
"desc":"The operations described in this section apply only to clusters of versions earlier than MRS 3.x.If the default parameter settings of the MRS service cannot meet your req",
"product_code":"mrs",
"title":"Configuring HBase Parameters",
"uri":"mrs_01_0443.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"196"
},
{
"desc":"DistCp is used to copy the data stored on HDFS from a cluster to another cluster. DistCp depends on the cross-cluster copy function, which is disabled by default. This fu",
"product_code":"mrs",
"title":"Enabling Cross-Cluster Copy",
"uri":"mrs_01_0502.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"197"
},
{
"desc":"Active and standby clusters have been installed and started.Time is consistent between the active and standby clusters and the NTP service on the active and standby clust",
"product_code":"mrs",
"title":"Using the ReplicationSyncUp Tool",
"uri":"mrs_01_0510.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"198"
},
{
"desc":"This section applies only to MRS 3.1.0 or later.This section describes common GeoMesa commands. For more GeoMesa commands, visit https://www.geomesa.org/documentation/use",
"product_code":"mrs",
"title":"GeoMesa Command Line",
"uri":"mrs_01_24119.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"199"
},
{
"desc":"HBase disaster recovery (DR), a key feature that is used to ensure high availability (HA) of the HBase cluster system, provides the real-time remote DR function for HBase",
"product_code":"mrs",
"title":"Configuring HBase DR",
"uri":"mrs_01_1609.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"200"
},
{
"desc":"HBase encodes data blocks in HFiles to reduce duplicate keys in KeyValues, reducing used space. Currently, the following data block encoding modes are supported: NONE, PR",
"product_code":"mrs",
"title":"Configuring HBase Data Compression and Encoding",
"uri":"mrs_01_24112.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"201"
},
{
"desc":"The system administrator can configure HBase cluster DR to improve system availability. If the active cluster in the DR environment is faulty and the connection to the HB",
"product_code":"mrs",
"title":"Performing an HBase DR Service Switchover",
"uri":"mrs_01_1610.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"202"
},
{
"desc":"The HBase cluster in the current environment is a DR cluster. Due to some reasons, the active and standby clusters need to be switched over. That is, the standby cluster ",
"product_code":"mrs",
"title":"Performing an HBase DR Active/Standby Cluster Switchover",
"uri":"mrs_01_1611.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"203"
},
{
"desc":"The Apache HBase official website provides the function of importing data in batches. For details, see the description of the Import and ImportTsv tools at http://hbase.a",
"product_code":"mrs",
"title":"Community BulkLoad Tool",
"uri":"mrs_01_1612.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"204"
},
{
"desc":"In the actual application scenario, data in various sizes needs to be stored, for example, image data and documents. Data whose size is smaller than 10 MB can be stored i",
"product_code":"mrs",
"title":"Configuring the MOB",
"uri":"mrs_01_1631.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"205"
},
{
"desc":"This topic provides the procedure to configure the secure HBase replication during cross-realm Kerberos setup in security mode.Mapping for all the FQDNs to their realms s",
"product_code":"mrs",
"title":"Configuring Secure HBase Replication",
"uri":"mrs_01_1009.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"206"
},
{
"desc":"In a faulty environment, there are possibilities that a region may be stuck in transition for longer duration due to various reasons like slow region server response, uns",
"product_code":"mrs",
"title":"Configuring Region In Transition Recovery Chore Service",
"uri":"mrs_01_1010.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"207"
},
{
"desc":"Log path: The default storage path of HBase logs is /var/log/Bigdata/hbase/Role name.HMaster: /var/log/Bigdata/hbase/hm (run logs) and /var/log/Bigdata/audit/hbase/hm (au",
"product_code":"mrs",
"title":"HBase Log Overview",
"uri":"mrs_01_1056.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"208"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"HBase Performance Tuning",
"uri":"mrs_01_1013.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"209"
},
{
"desc":"BulkLoad uses MapReduce jobs to directly generate files that comply with the internal data format of HBase, and then loads the generated StoreFiles to a running cluster. ",
"product_code":"mrs",
"title":"Improving the BulkLoad Efficiency",
"uri":"mrs_01_1636.html",
"doc_type":"cmpntguide",
"p_code":"209",
"code":"210"
},
{
"desc":"In the scenario where a large number of requests are continuously put, setting the following two parameters to false can greatly improve the Put performance.hbase.regions",
"product_code":"mrs",
"title":"Improving Put Performance",
"uri":"mrs_01_1637.html",
"doc_type":"cmpntguide",
"p_code":"209",
"code":"211"
},
{
"desc":"HBase has many configuration parameters related to read and write performance. The configuration parameters need to be adjusted based on the read/write request loads. Thi",
"product_code":"mrs",
"title":"Optimizing Put and Scan Performance",
"uri":"mrs_01_1016.html",
"doc_type":"cmpntguide",
"p_code":"209",
"code":"212"
},
{
"desc":"Scenarios where data needs to be written to HBase in real time, or large-scale and consecutive put scenariosThis section applies to MRS 3.x and later versions.The HBase p",
"product_code":"mrs",
"title":"Improving Real-time Data Write Performance",
"uri":"mrs_01_1017.html",
"doc_type":"cmpntguide",
"p_code":"209",
"code":"213"
},
{
"desc":"HBase data needs to be read.The get or scan interface of HBase has been invoked and data is read in real time from HBase.Data reading server tuningParameter portal:Go to ",
"product_code":"mrs",
"title":"Improving Real-time Data Read Performance",
"uri":"mrs_01_1018.html",
"doc_type":"cmpntguide",
"p_code":"209",
"code":"214"
},
{
"desc":"When the number of clusters reaches a certain scale, the default settings of the Java virtual machine (JVM) cannot meet the cluster requirements. In this case, the cluste",
"product_code":"mrs",
"title":"Optimizing JVM Parameters",
"uri":"mrs_01_1019.html",
"doc_type":"cmpntguide",
"p_code":"209",
"code":"215"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About HBase",
"uri":"mrs_01_1638.html",
"doc_type":"cmpntguide",
"p_code":"191",
"code":"216"
},
{
"desc":"A HBase server is faulty and cannot provide services. In this case, when a table operation is performed on the HBase client, why is the operation suspended and no respons",
"product_code":"mrs",
"title":"Why Does a Client Keep Failing to Connect to a Server for a Long Time?",
"uri":"mrs_01_1639.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"217"
},
{
"desc":"Why submitted operations fail by stopping BulkLoad on the client during BulkLoad data importing?When BulkLoad is enabled on the client, a partitioner file is generated an",
"product_code":"mrs",
"title":"Operation Failures Occur in Stopping BulkLoad On the Client",
"uri":"mrs_01_1640.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"218"
},
{
"desc":"When HBase consecutively deletes and creates the same table, why may a table creation exception occur?Execution process: Disable Table > Drop Table > Create Table > Disab",
"product_code":"mrs",
"title":"Why May a Table Creation Exception Occur When HBase Deletes or Creates the Same Table Consecutively?",
"uri":"mrs_01_1641.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"219"
},
{
"desc":"Why other services become unstable if HBase sets up a large number of connections over the network port?When the OS command lsof or netstat is run, it is found that many ",
"product_code":"mrs",
"title":"Why Other Services Become Unstable If HBase Sets up A Large Number of Connections over the Network Port?",
"uri":"mrs_01_1642.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"220"
},
{
"desc":"The HBase bulkLoad task (a single table contains 26 TB data) has 210,000 maps and 10,000 reduce tasks (in MRS 3.x or later), and the task fails.ZooKeeper I/O bottleneck o",
"product_code":"mrs",
"title":"Why Does the HBase BulkLoad Task (One Table Has 26 TB Data) Consisting of 210,000 Map Tasks and 10,000 Reduce Tasks Fail?",
"uri":"mrs_01_1643.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"221"
},
{
"desc":"How do I restore a region in the RIT state for a long time?Log in to the HMaster Web UI, choose Procedure & Locks in the navigation tree, and check whether any process ID",
"product_code":"mrs",
"title":"How Do I Restore a Region in the RIT State for a Long Time?",
"uri":"mrs_01_1644.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"222"
},
{
"desc":"Why does HMaster exit due to timeout when waiting for the namespace table to go online?During the HMaster active/standby switchover or startup, HMaster performs WAL split",
"product_code":"mrs",
"title":"Why Does HMaster Exits Due to Timeout When Waiting for the Namespace Table to Go Online?",
"uri":"mrs_01_1645.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"223"
},
{
"desc":"Why does the following exception occur on the client when I use the HBase client to operate table data?At the same time, the following log is displayed on RegionServer:Th",
"product_code":"mrs",
"title":"Why Does SocketTimeoutException Occur When a Client Queries HBase?",
"uri":"mrs_01_1646.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"224"
},
{
"desc":"Why modified and deleted data can still be queried by using the scan command?Because of the scalability of HBase, all values specific to the versions in the queried colum",
"product_code":"mrs",
"title":"Why Modified and Deleted Data Can Still Be Queried by Using the Scan Command?",
"uri":"mrs_01_1647.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"225"
},
{
"desc":"Why \"java.lang.UnsatisfiedLinkError: Permission denied\" exception thrown while starting HBase shell?During HBase shell execution JRuby create temporary files under java.i",
"product_code":"mrs",
"title":"Why \"java.lang.UnsatisfiedLinkError: Permission denied\" exception thrown while starting HBase shell?",
"uri":"mrs_01_1648.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"226"
},
{
"desc":"When does the RegionServers listed under \"Dead Region Servers\" on HMaster WebUI gets cleared?When an online RegionServer goes down abruptly, it is displayed under \"Dead R",
"product_code":"mrs",
"title":"When does the RegionServers listed under \"Dead Region Servers\" on HMaster WebUI gets cleared?",
"uri":"mrs_01_1649.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"227"
},
{
"desc":"If the data to be imported by HBase bulkload has identical rowkeys, the data import is successful but identical query criteria produce different query results.Data with a",
"product_code":"mrs",
"title":"Why Are Different Query Results Returned After I Use Same Query Criteria to Query Data Successfully Imported by HBase bulkload?",
"uri":"mrs_01_1650.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"228"
},
{
"desc":"What should I do if I fail to create tables due to the FAILED_OPEN state of Regions?If a network, HDFS, or Active HMaster fault occurs during the creation of tables, some",
"product_code":"mrs",
"title":"What Should I Do If I Fail to Create Tables Due to the FAILED_OPEN State of Regions?",
"uri":"mrs_01_1651.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"229"
},
{
"desc":"In security mode, names of tables that failed to be created are unnecessarily retained in the table-lock node (default directory is /hbase/table-lock) of ZooKeeper. How d",
"product_code":"mrs",
"title":"How Do I Delete Residual Table Names in the /hbase/table-lock Directory of ZooKeeper?",
"uri":"mrs_01_1652.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"230"
},
{
"desc":"Why does HBase become faulty when I set quota for the directory used by HBase in HDFS?The flush operation of a table is to write memstore data to HDFS.If the HDFS directo",
"product_code":"mrs",
"title":"Why Does HBase Become Faulty When I Set a Quota for the Directory Used by HBase in HDFS?",
"uri":"mrs_01_1653.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"231"
},
{
"desc":"Why HMaster times out while waiting for namespace table to be assigned after rebuilding meta using OfflineMetaRepair tool and startups failed?HMaster abort with following",
"product_code":"mrs",
"title":"Why HMaster Times Out While Waiting for Namespace Table to be Assigned After Rebuilding Meta Using OfflineMetaRepair Tool and Startups Failed",
"uri":"mrs_01_1654.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"232"
},
{
"desc":"Why messages containing FileNotFoundException and no lease are frequently displayed in the HMaster logs during the WAL splitting process?During the WAL splitting process,",
"product_code":"mrs",
"title":"Why Messages Containing FileNotFoundException and no lease Are Frequently Displayed in the HMaster Logs During the WAL Splitting Process?",
"uri":"mrs_01_1655.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"233"
},
{
"desc":"When a tenant accesses Phoenix, a message is displayed indicating that the tenant has insufficient rights.You need to associate the HBase service and Yarn queues when cre",
"product_code":"mrs",
"title":"Insufficient Rights When a Tenant Accesses Phoenix",
"uri":"mrs_01_1657.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"234"
},
{
"desc":"The system automatically rolls back data after an HBase recovery task fails. If \"Rollback recovery failed\" is displayed, the rollback fails. After the rollback fails, dat",
"product_code":"mrs",
"title":"What Can I Do When HBase Fails to Recover a Task and a Message Is Displayed Stating \"Rollback recovery failed\"?",
"uri":"mrs_01_1659.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"235"
},
{
"desc":"When the HBaseFsck tool is used to check the region status in MRS 3.x and later versions, if the log contains ERROR: (regions region1 and region2) There is an overlap in ",
"product_code":"mrs",
"title":"How Do I Fix Region Overlapping?",
"uri":"mrs_01_1660.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"236"
},
{
"desc":"(MRS 3.x and later versions) Check the hbase-omm-*.out log of the node where RegionServer fails to be started. It is found that the log contains An error report file with",
"product_code":"mrs",
"title":"Why Does RegionServer Fail to Be Started When GC Parameters Xms and Xmx of HBase RegionServer Are Set to 31 GB?",
"uri":"mrs_01_1661.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"237"
},
{
"desc":"Why does the LoadIncrementalHFiles tool fail to be executed and \"Permission denied\" is displayed when a Linux user is manually created in a normal cluster and DataNode in",
"product_code":"mrs",
"title":"Why Does the LoadIncrementalHFiles Tool Fail to Be Executed and \"Permission denied\" Is Displayed When Nodes in a Cluster Are Used to Import Data in Batches?",
"uri":"mrs_01_0625.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"238"
},
{
"desc":"When the sqlline script is used on the client, the error message \"import argparse\" is displayed.",
"product_code":"mrs",
"title":"Why Is the Error Message \"import argparse\" Displayed When the Phoenix sqlline Script Is Used?",
"uri":"mrs_01_2210.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"239"
},
{
"desc":"When the indexed field data is updated, if a batch of data exists in the user table, the BulkLoad tool cannot update the global and partial mutable indexes.Problem Analys",
"product_code":"mrs",
"title":"How Do I Deal with the Restrictions of the Phoenix BulkLoad Tool?",
"uri":"mrs_01_2211.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"240"
},
{
"desc":"When CTBase accesses the HBase service with the Ranger plug-ins enabled and you are creating a cluster table, a message is displayed indicating that the permission is ins",
"product_code":"mrs",
"title":"Why a Message Is Displayed Indicating that the Permission is Insufficient When CTBase Connects to the Ranger Plug-ins?",
"uri":"mrs_01_2212.html",
"doc_type":"cmpntguide",
"p_code":"216",
"code":"241"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using HDFS",
"uri":"mrs_01_0790.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"242"
},
{
"desc":"In HDFS, each file object needs to register corresponding information in the NameNode and occupies certain storage space. As the number of files increases, if the origina",
"product_code":"mrs",
"title":"Configuring Memory Management",
"uri":"mrs_01_0791.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"243"
},
{
"desc":"This section describes how to create and configure an HDFS role on FusionInsight Manager. The HDFS role is granted the rights to read, write, and execute HDFS directories",
"product_code":"mrs",
"title":"Creating an HDFS Role",
"uri":"mrs_01_1662.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"244"
},
{
"desc":"This section describes how to use the HDFS client in an O&M scenario or service scenario.The client has been installed.For example, the installation directory is /opt/had",
"product_code":"mrs",
"title":"Using the HDFS Client",
"uri":"mrs_01_1663.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"245"
},
{
"desc":"DistCp is a tool used to perform large-amount data replication between clusters or in a cluster. It uses MapReduce tasks to implement distributed copy of a large amount o",
"product_code":"mrs",
"title":"Running the DistCp Command",
"uri":"mrs_01_0794.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"246"
},
{
"desc":"This section describes the directory structure in HDFS, as shown in the following table.",
"product_code":"mrs",
"title":"Overview of HDFS File System Directories",
"uri":"mrs_01_0795.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"247"
},
{
"desc":"This section applies to MRS 3.x or later clusters.If the storage directory defined by the HDFS DataNode is incorrect or the HDFS storage plan changes, the system administ",
"product_code":"mrs",
"title":"Changing the DataNode Storage Directory",
"uri":"mrs_01_1664.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"248"
},
{
"desc":"The permission for some HDFS directories is 777 or 750 by default, which brings potential security risks. You are advised to modify the permission for the HDFS directorie",
"product_code":"mrs",
"title":"Configuring HDFS Directory Permission",
"uri":"mrs_01_0797.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"249"
},
{
"desc":"This section applies to MRS 3.x or later.Before deploying a cluster, you can deploy a Network File System (NFS) server based on requirements to store NameNode metadata to",
"product_code":"mrs",
"title":"Configuring NFS",
"uri":"mrs_01_1665.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"250"
},
{
"desc":"In HDFS, DataNode stores user files and directories as blocks, and file objects are generated on the NameNode to map each file, directory, and block on the DataNode.The f",
"product_code":"mrs",
"title":"Planning HDFS Capacity",
"uri":"mrs_01_0799.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"251"
},
{
"desc":"When you open an HDFS file, an error occurs due to the limit on the number of file handles. Information similar to the following is displayed.You can contact the systemad",
"product_code":"mrs",
"title":"Configuring ulimit for HBase and HDFS",
"uri":"mrs_01_0801.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"252"
},
{
"desc":"This section applies to MRS 3.x or later clusters.In the HDFS cluster, unbalanced disk usage among DataNodes may occur, for example, when new DataNodes are added to the c",
"product_code":"mrs",
"title":"Balancing DataNode Capacity",
"uri":"mrs_01_1667.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"253"
},
{
"desc":"By default, NameNode randomly selects a DataNode to write files. If the disk capacity of some DataNodes in a cluster is inconsistent (the total disk capacity of some node",
"product_code":"mrs",
"title":"Configuring Replica Replacement Policy for Heterogeneous Capacity Among DataNodes",
"uri":"mrs_01_0804.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"254"
},
{
"desc":"Generally, multiple services are deployed in a cluster, and the storage of most services depends on the HDFS file system. Different components such as Spark and Yarn or c",
"product_code":"mrs",
"title":"Configuring the Number of Files in a Single HDFS Directory",
"uri":"mrs_01_0805.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"255"
},
{
"desc":"On HDFS, deleted files are moved to the recycle bin (trash can) so that the data deleted by mistake can be restored.You can set the time threshold for storing files in th",
"product_code":"mrs",
"title":"Configuring the Recycle Bin Mechanism",
"uri":"mrs_01_0806.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"256"
},
{
"desc":"HDFS allows users to modify the default permissions of files and directories. The default mask provided by the HDFS for creating file and directory permissions is 022. If",
"product_code":"mrs",
"title":"Setting Permissions on Files and Directories",
"uri":"mrs_01_0807.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"257"
},
{
"desc":"In security mode, users can flexibly set the maximum token lifetime and token renewal interval in HDFS based on cluster requirements.Navigation path for setting parameter",
"product_code":"mrs",
"title":"Setting the Maximum Lifetime and Renewal Interval of a Token",
"uri":"mrs_01_0808.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"258"
},
{
"desc":"In the open source version, if multiple data storage volumes are configured for a DataNode, the DataNode stops providing services by default if one of the volumes is dama",
"product_code":"mrs",
"title":"Configuring the Damaged Disk Volume",
"uri":"mrs_01_1669.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"259"
},
{
"desc":"Encrypted channel is an encryption protocol of remote procedure call (RPC) in HDFS. When a user invokes RPC, the user's login name will be transmitted to RPC through RPC ",
"product_code":"mrs",
"title":"Configuring Encrypted Channels",
"uri":"mrs_01_0810.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"260"
},
{
"desc":"Clients probably encounter running errors when the network is not stable. Users can adjust the following parameter values to improve the running efficiency.Go to the All ",
"product_code":"mrs",
"title":"Reducing the Probability of Abnormal Client Application Operation When the Network Is Not Stable",
"uri":"mrs_01_0811.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"261"
},
{
"desc":"This section applies to MRS 3.x or later.In the existing default DFSclient failover proxy provider, if a NameNode in a process is faulty, all HDFS client instances in the",
"product_code":"mrs",
"title":"Configuring the NameNode Blacklist",
"uri":"mrs_01_1670.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"262"
},
{
"desc":"This section applies to MRS 3.x or later.Several finished Hadoop clusters are faulty because the NameNode is overloaded and unresponsive.Such problem is caused by the ini",
"product_code":"mrs",
"title":"Optimizing HDFS NameNode RPC QoS",
"uri":"mrs_01_1672.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"263"
},
{
"desc":"When the speed at which the client writes data to the HDFS is greater than the disk bandwidth of the DataNode, the disk bandwidth is fully occupied. As a result, the Data",
"product_code":"mrs",
"title":"Optimizing HDFS DataNode RPC QoS",
"uri":"mrs_01_1673.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"264"
},
{
"desc":"When the Yarn local directory and DataNode directory are on the same disk, the disk with larger capacity can run more tasks. Therefore, more intermediate data is stored i",
"product_code":"mrs",
"title":"Configuring Reserved Percentage of Disk Usage on DataNodes",
"uri":"mrs_01_1675.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"265"
},
{
"desc":"You need to configure the nodes for storing HDFS file data blocks based on data features. You can configure a label expression to an HDFS directory or file and assign one",
"product_code":"mrs",
"title":"Configuring HDFS NodeLabel",
"uri":"mrs_01_1676.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"266"
},
{
"desc":"AZ Mover is a copy migration tool used to move copies to meet the new AZ policies set on the directory. It can be used to migrate copies from one AZ policy to another. AZ",
"product_code":"mrs",
"title":"Using HDFS AZ Mover",
"uri":"mrs_01_2360.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"267"
},
{
"desc":"In an HDFS cluster configured with HA, the active NameNode processes all client requests, and the standby NameNode reserves the latest metadata and block location informa",
"product_code":"mrs",
"title":"Configuring the Observer NameNode to Process Read Requests",
"uri":"mrs_01_1681.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"268"
},
{
"desc":"Performing this operation can concurrently modify file and directory permissions and access control tools in a cluster.This section applies to MRS 3.x or later clusters.P",
"product_code":"mrs",
"title":"Performing Concurrent Operations on HDFS Files",
"uri":"mrs_01_1684.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"269"
},
{
"desc":"Log path: The default path of HDFS logs is /var/log/Bigdata/hdfs/Role name.NameNode: /var/log/Bigdata/hdfs/nn (run logs) and /var/log/Bigdata/audit/hdfs/nn (audit logs)Da",
"product_code":"mrs",
"title":"Introduction to HDFS Logs",
"uri":"mrs_01_0828.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"270"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"HDFS Performance Tuning",
"uri":"mrs_01_0829.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"271"
},
{
"desc":"Improve the HDFS write performance by modifying the HDFS attributes.This section applies to MRS 3.x or later.Navigation path for setting parameters:On FusionInsight Manag",
"product_code":"mrs",
"title":"Improving Write Performance",
"uri":"mrs_01_1687.html",
"doc_type":"cmpntguide",
"p_code":"271",
"code":"272"
},
{
"desc":"Improve the HDFS read performance by using the client to cache the metadata for block locations.This function is recommended only for reading files that are not modified ",
"product_code":"mrs",
"title":"Improving Read Performance Using Client Metadata Cache",
"uri":"mrs_01_1688.html",
"doc_type":"cmpntguide",
"p_code":"271",
"code":"273"
},
{
"desc":"When HDFS is deployed in high availability (HA) mode with multiple NameNode instances, the HDFS client needs to connect to each NameNode in sequence to determine which is",
"product_code":"mrs",
"title":"Improving the Connection Between the Client and NameNode Using Current Active Cache",
"uri":"mrs_01_1689.html",
"doc_type":"cmpntguide",
"p_code":"271",
"code":"274"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"FAQ",
"uri":"mrs_01_1690.html",
"doc_type":"cmpntguide",
"p_code":"242",
"code":"275"
},
{
"desc":"The NameNode startup is slow when it is restarted immediately after a large number of files (for example, 1 million files) are deleted.It takes time for the DataNode to d",
"product_code":"mrs",
"title":"NameNode Startup Is Slow",
"uri":"mrs_01_1691.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"276"
},
{
"desc":"The DataNode is normal, but cannot report data blocks. As a result, the existing data blocks cannot be used.This error may occur when the number of data blocks in a data ",
"product_code":"mrs",
"title":"DataNode Is Normal but Cannot Report Data Blocks",
"uri":"mrs_01_1693.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"277"
},
{
"desc":"When errors occur in the dfs.datanode.data.dir directory of DataNode due to the permission or disk damage, HDFS WebUI does not display information about damaged data.Afte",
"product_code":"mrs",
"title":"HDFS WebUI Cannot Properly Update Information About Damaged Data",
"uri":"mrs_01_1694.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"278"
},
{
"desc":"Why distcp command fails in the secure cluster with the following error displayed?Client side exceptionServer side exceptionThe preceding error may occur if webhdfs:// is",
"product_code":"mrs",
"title":"Why Does the Distcp Command Fail in the Secure Cluster, Causing an Exception?",
"uri":"mrs_01_1695.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"279"
},
{
"desc":"If the number of disks specified by dfs.datanode.data.dir is equal to the value of dfs.datanode.failed.volumes.tolerated, DataNode startup will fail.By default, the failu",
"product_code":"mrs",
"title":"Why Does DataNode Fail to Start When the Number of Disks Specified by dfs.datanode.data.dir Equals dfs.datanode.failed.volumes.tolerated?",
"uri":"mrs_01_1696.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"280"
},
{
"desc":"The capacity of a DataNode fails to calculate when multiple data.dir directories are configured in a disk partition.Currently, the capacity is calculated based on disks, ",
"product_code":"mrs",
"title":"Failed to Calculate the Capacity of a DataNode when Multiple data.dir Directories Are Configured in a Disk Partition",
"uri":"mrs_01_1697.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"281"
},
{
"desc":"When the standby NameNode is powered off during metadata (namespace) storage, it fails to be started and the following error information is displayed.When the standby Nam",
"product_code":"mrs",
"title":"Standby NameNode Fails to Be Restarted When the System Is Powered off During Metadata (Namespace) Storage",
"uri":"mrs_01_1698.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"282"
},
{
"desc":"Why data in the buffer is lost if a power outage occurs during storage of small files?Because of a power outage, the blocks in the buffer are not written to the disk imme",
"product_code":"mrs",
"title":"Why Data in the Buffer Is Lost If a Power Outage Occurs During Storage of Small Files",
"uri":"mrs_01_1699.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"283"
},
{
"desc":"When HDFS calls the FileInputFormat getSplit method, the ArrayIndexOutOfBoundsException: 0 appears in the following log:The elements of each block correspondent frame are",
"product_code":"mrs",
"title":"Why Does Array Border-crossing Occur During FileInputFormat Split?",
"uri":"mrs_01_1700.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"284"
},
{
"desc":"When the storage policy of the file is set to LAZY_PERSIST, the storage type of the first replica should be RAM_DISK, and the storage type of other replicas should be DIS",
"product_code":"mrs",
"title":"Why Is the Storage Type of File Copies DISK When the Tiered Storage Policy Is LAZY_PERSIST?",
"uri":"mrs_01_1701.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"285"
},
{
"desc":"When the NameNode node is overloaded (100% of the CPU is occupied), the NameNode is unresponsive. The HDFS clients that are connected to the overloaded NameNode fail to r",
"product_code":"mrs",
"title":"The HDFS Client Is Unresponsive When the NameNode Is Overloaded for a Long Time",
"uri":"mrs_01_1702.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"286"
},
{
"desc":"In DataNode, the storage directory of data blocks is specified by dfs.datanode.data.dir.Can I modify dfs.datanode.data.dir tomodify the data storage directory?Can I modif",
"product_code":"mrs",
"title":"Can I Delete or Modify the Data Storage Directory in DataNode?",
"uri":"mrs_01_1703.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"287"
},
{
"desc":"Why are some blocks missing on the NameNode UI after the rollback is successful?This problem occurs because blocks with new IDs or genstamps may exist on the DataNode. Th",
"product_code":"mrs",
"title":"Blocks Miss on the NameNode UI After the Successful Rollback",
"uri":"mrs_01_1704.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"288"
},
{
"desc":"Why is an \"java.net.SocketException: No buffer space available\" exception reported when data is written to HDFS?This problem occurs when files are written to the HDFS. Ch",
"product_code":"mrs",
"title":"Why Is \"java.net.SocketException: No buffer space available\" Reported When Data Is Written to HDFS",
"uri":"mrs_01_1705.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"289"
},
{
"desc":"Why are there two standby NameNodes after the active NameNode is restarted?When this problem occurs, check the ZooKeeper and ZooKeeper FC logs. You can find that the sess",
"product_code":"mrs",
"title":"Why are There Two Standby NameNodes After the active NameNode Is Restarted?",
"uri":"mrs_01_1706.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"290"
},
{
"desc":"After I start a Balance process in HDFS, the process is shut down abnormally. If I attempt to execute the Balance process again, it fails again.After a Balance process is",
"product_code":"mrs",
"title":"When Does a Balance Process in HDFS, Shut Down and Fail to be Executed Again?",
"uri":"mrs_01_1707.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"291"
},
{
"desc":"Occasionally, nternet Explorer 9, Explorer 10, or Explorer 11 fails to access the native HDFS UI.Internet Explorer 9, Explorer 10, or Explorer 11 fails to access the nati",
"product_code":"mrs",
"title":"\"This page can't be displayed\" Is Displayed When Internet Explorer Fails to Access the Native HDFS UI",
"uri":"mrs_01_1708.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"292"
},
{
"desc":"If a JournalNode server is powered off, the data directory disk is fully occupied, and the network is abnormal, the EditLog sequence number on the JournalNode is inconsec",
"product_code":"mrs",
"title":"NameNode Fails to Be Restarted Due to EditLog Discontinuity",
"uri":"mrs_01_1709.html",
"doc_type":"cmpntguide",
"p_code":"275",
"code":"293"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Hive",
"uri":"mrs_01_0581.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"294"
},
{
"desc":"Hive is a data warehouse framework built on Hadoop. It maps structured data files to a database table and provides SQL-like functions to analyze and process data. It also",
"product_code":"mrs",
"title":"Using Hive from Scratch",
"uri":"mrs_01_0442.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"295"
},
{
"desc":"Go to the Hive configurations page by referring to Modifying Cluster Service Configuration Parameters.",
"product_code":"mrs",
"title":"Configuring Hive Parameters",
"uri":"mrs_01_0582.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"296"
},
{
"desc":"Hive SQL supports all features of Hive-3.1.0. For details, see https://cwiki.apache.org/confluence/display/hive/languagemanual.Table 1 describes the extended Hive stateme",
"product_code":"mrs",
"title":"Hive SQL",
"uri":"mrs_01_2330.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"297"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Permission Management",
"uri":"mrs_01_0947.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"298"
},
{
"desc":"Hive is a data warehouse framework built on Hadoop. It provides basic data analysis services using the Hive query language (HQL), a language like the structured query lan",
"product_code":"mrs",
"title":"Hive Permission",
"uri":"mrs_01_0948.html",
"doc_type":"cmpntguide",
"p_code":"298",
"code":"299"
},
{
"desc":"This section describes how to create and configure a Hive role on Manager as the system administrator. The Hive role can be granted the permissions of the Hive administra",
"product_code":"mrs",
"title":"Creating a Hive Role",
"uri":"mrs_01_0949.html",
"doc_type":"cmpntguide",
"p_code":"298",
"code":"300"
},
{
"desc":"You can configure related permissions if you need to access tables or databases created by other users. Hive supports column-based permission control. If a user needs to ",
"product_code":"mrs",
"title":"Configuring Permissions for Hive Tables, Columns, or Databases",
"uri":"mrs_01_0950.html",
"doc_type":"cmpntguide",
"p_code":"298",
"code":"301"
},
{
"desc":"Hive may need to be associated with other components. For example, Yarn permissions are required in the scenario of using HQL statements to trigger MapReduce jobs, and HB",
"product_code":"mrs",
"title":"Configuring Permissions to Use Other Components for Hive",
"uri":"mrs_01_0951.html",
"doc_type":"cmpntguide",
"p_code":"298",
"code":"302"
},
{
"desc":"This section guides users to use a Hive client in an O&M or service scenario.The client has been installed. For example, the client is installed in the /opt/hadoopclient ",
"product_code":"mrs",
"title":"Using a Hive Client",
"uri":"mrs_01_0952.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"303"
},
{
"desc":"HDFS Colocation is the data location control function provided by HDFS. The HDFS Colocation API stores associated data or data on which associated operations are performe",
"product_code":"mrs",
"title":"Using HDFS Colocation to Store Hive Tables",
"uri":"mrs_01_0953.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"304"
},
{
"desc":"Hive supports encryption of one or multiple columns in a table. When creating a Hive table, you can specify the column to be encrypted and encryption algorithm. When data",
"product_code":"mrs",
"title":"Using the Hive Column Encryption Function",
"uri":"mrs_01_0954.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"305"
},
{
"desc":"In most cases, a carriage return character is used as the row delimiter in Hive tables stored in text files, that is, the carriage return character is used as the termina",
"product_code":"mrs",
"title":"Customizing Row Separators",
"uri":"mrs_01_0955.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"306"
},
{
"desc":"For mutually trusted Hive and HBase clusters with Kerberos authentication enabled, you can access the HBase cluster and synchronize its key configurations to HiveServer o",
"product_code":"mrs",
"title":"Configuring Hive on HBase in Across Clusters with Mutual Trust Enabled",
"uri":"mrs_01_24293.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"307"
},
{
"desc":"Due to the limitations of underlying storage systems, Hive does not support the ability to delete a single piece of table data. In Hive on HBase, MRS Hive supports the ab",
"product_code":"mrs",
"title":"Deleting Single-Row Records from Hive on HBase",
"uri":"mrs_01_0956.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"308"
},
{
"desc":"WebHCat provides external REST APIs for Hive. By default, the open-source community version uses the HTTP protocol.MRS Hive supports the HTTPS protocol that is more secur",
"product_code":"mrs",
"title":"Configuring HTTPS/HTTP-based REST APIs",
"uri":"mrs_01_0957.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"309"
},
{
"desc":"The Transform function is not allowed by Hive of the open source version.MRS Hive supports the configuration of the Transform function. The function is disabled by defaul",
"product_code":"mrs",
"title":"Enabling or Disabling the Transform Function",
"uri":"mrs_01_0958.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"310"
},
{
"desc":"This section describes how to create a view on Hive when MRS is configured in security mode, authorize access permissions to different users, and specify that different u",
"product_code":"mrs",
"title":"Access Control of a Dynamic Table View on Hive",
"uri":"mrs_01_0959.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"311"
},
{
"desc":"You must have ADMIN permission when creating temporary functions on Hive of the open source community version.MRS Hive supports the configuration of the function for crea",
"product_code":"mrs",
"title":"Specifying Whether the ADMIN Permissions Is Required for Creating Temporary Functions",
"uri":"mrs_01_0960.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"312"
},
{
"desc":"Hive allows users to create external tables to associate with other relational databases. External tables read data from associated relational databases and support Join ",
"product_code":"mrs",
"title":"Using Hive to Read Data in a Relational Database",
"uri":"mrs_01_0961.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"313"
},
{
"desc":"Hive supports the following types of traditional relational database syntax:GroupingEXCEPT and INTERSECTSyntax description:Grouping takes effect only when the Group by st",
"product_code":"mrs",
"title":"Supporting Traditional Relational Database Syntax in Hive",
"uri":"mrs_01_0962.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"314"
},
{
"desc":"This function is applicable to Hive and Spark2x in MRS 3.x and later.With this function enabled, if the select permission is granted to a user during Hive table creation,",
"product_code":"mrs",
"title":"Viewing Table Structures Using the show create Statement as Users with the select Permission",
"uri":"mrs_01_0966.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"315"
},
{
"desc":"This function applies to Hive.After this function is enabled, run the following command to write a directory into Hive: insert overwrite directory \"/path1\".... After the ",
"product_code":"mrs",
"title":"Writing a Directory into Hive with the Old Data Removed to the Recycle Bin",
"uri":"mrs_01_0967.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"316"
},
{
"desc":"This function applies to Hive.With this function enabled, run the insert overwrite directory/path1/path2/path3... command to write a subdirectory. The permission of the /",
"product_code":"mrs",
"title":"Inserting Data to a Directory That Does Not Exist",
"uri":"mrs_01_0968.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"317"
},
{
"desc":"This function is applicable to Hive and Spark2x for MRS 3.x or later, or Hive and Spark for versions earlier than MRS 3.x.After this function is enabled, only the Hive ad",
"product_code":"mrs",
"title":"Creating Databases and Creating Tables in the Default Database Only as the Hive Administrator",
"uri":"mrs_01_0969.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"318"
},
{
"desc":"This function is applicable to Hive and Spark2x for MRS 3.x or later, or Hive and Spark for versions earlier than MRS 3.x.After this function is enabled, the location key",
"product_code":"mrs",
"title":"Disabling of Specifying the location Keyword When Creating an Internal Hive Table",
"uri":"mrs_01_0970.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"319"
},
{
"desc":"This function is applicable to Hive and Spark2x for MRS 3.x or later, or Hive and Spark for versions earlier than MRS 3.x.After this function is enabled, the user or user",
"product_code":"mrs",
"title":"Enabling the Function of Creating a Foreign Table in a Directory That Can Only Be Read",
"uri":"mrs_01_0971.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"320"
},
{
"desc":"This function applies to Hive.The number of OS user groups is limited, and the number of roles that can be created in Hive cannot exceed 32. After this function is enable",
"product_code":"mrs",
"title":"Authorizing Over 32 Roles in Hive",
"uri":"mrs_01_0972.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"321"
},
{
"desc":"This function applies to Hive.This function is used to limit the maximum number of maps for Hive tasks on the server to avoid performance deterioration caused by overload",
"product_code":"mrs",
"title":"Restricting the Maximum Number of Maps for Hive Tasks",
"uri":"mrs_01_0973.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"322"
},
{
"desc":"This function applies to Hive.This function can be enabled to specify specific users to access HiveServer services on specific nodes, achieving HiveServer resource isolat",
"product_code":"mrs",
"title":"HiveServer Lease Isolation",
"uri":"mrs_01_0974.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"323"
},
{
"desc":"Hive supports transactions at the table and partition levels. When the transaction mode is enabled, transaction tables can be incrementally updated, deleted, and read, im",
"product_code":"mrs",
"title":"Hive Supporting Transactions",
"uri":"mrs_01_0975.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"324"
},
{
"desc":"Hive can use the Tez engine to process data computing tasks. Before executing a task, you can manually switch the execution engine to Tez.The TimelineServer role of the Y",
"product_code":"mrs",
"title":"Switching the Hive Execution Engine to Tez",
"uri":"mrs_01_1750.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"325"
},
{
"desc":"A Hive materialized view is a special table obtained based on the query results of Hive internal tables. A materialized view can be considered as an intermediate table th",
"product_code":"mrs",
"title":"Hive Materialized View",
"uri":"mrs_01_2311.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"326"
},
{
"desc":"Log path: The default save path of Hive logs is /var/log/Bigdata/hive/role name, the default save path of Hive1 logs is /var/log/Bigdata/hive1/role name, and the others f",
"product_code":"mrs",
"title":"Hive Log Overview",
"uri":"mrs_01_0976.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"327"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Hive Performance Tuning",
"uri":"mrs_01_0977.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"328"
},
{
"desc":"During the Select query, Hive generally scans the entire table, which is time-consuming. To improve query efficiency, create table partitions based on service requirement",
"product_code":"mrs",
"title":"Creating Table Partitions",
"uri":"mrs_01_0978.html",
"doc_type":"cmpntguide",
"p_code":"328",
"code":"329"
},
{
"desc":"When the Join statement is used, the command execution speed and query speed may be slow in case of large data volume. To resolve this problem, you can optimize Join.Join",
"product_code":"mrs",
"title":"Optimizing Join",
"uri":"mrs_01_0979.html",
"doc_type":"cmpntguide",
"p_code":"328",
"code":"330"
},
{
"desc":"Optimize the Group by statement to accelerate the command execution and query speed.During the Group by operation, Map performs grouping and distributes the groups to Red",
"product_code":"mrs",
"title":"Optimizing Group By",
"uri":"mrs_01_0980.html",
"doc_type":"cmpntguide",
"p_code":"328",
"code":"331"
},
{
"desc":"ORC is an efficient column storage format and has higher compression ratio and reading efficiency than other file formats.You are advised to use ORC as the default Hive t",
"product_code":"mrs",
"title":"Optimizing Data Storage",
"uri":"mrs_01_0981.html",
"doc_type":"cmpntguide",
"p_code":"328",
"code":"332"
},
{
"desc":"When SQL statements are executed on Hive, if the (a&b) or (a&c) logic exists in the statements, you are advised to change the logic to a & (b or c).If condition a is p_pa",
"product_code":"mrs",
"title":"Optimizing SQL Statements",
"uri":"mrs_01_0982.html",
"doc_type":"cmpntguide",
"p_code":"328",
"code":"333"
},
{
"desc":"When joining multiple tables in Hive, Hive supports Cost-Based Optimization (CBO). The system automatically selects the optimal plan based on the table statistics, such a",
"product_code":"mrs",
"title":"Optimizing the Query Function Using Hive CBO",
"uri":"mrs_01_0983.html",
"doc_type":"cmpntguide",
"p_code":"328",
"code":"334"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About Hive",
"uri":"mrs_01_1752.html",
"doc_type":"cmpntguide",
"p_code":"294",
"code":"335"
},
{
"desc":"How can I delete permanent user-defined functions (UDFs) on multiple HiveServers at the same time?Multiple HiveServers share one MetaStore database. Therefore, there is a",
"product_code":"mrs",
"title":"How Do I Delete UDFs on Multiple HiveServers at the Same Time?",
"uri":"mrs_01_1753.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"336"
},
{
"desc":"Why cannot the DROP operation be performed for a backed up Hive table?Snapshots have been created for an HDFS directory mapping to the backed up Hive table, so the HDFS d",
"product_code":"mrs",
"title":"Why Cannot the DROP operation Be Performed on a Backed-up Hive Table?",
"uri":"mrs_01_1754.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"337"
},
{
"desc":"How to perform operations on local files (such as reading the content of a file) with Hive user-defined functions?By default, you can perform operations on local files wi",
"product_code":"mrs",
"title":"How to Perform Operations on Local Files with Hive User-Defined Functions",
"uri":"mrs_01_1755.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"338"
},
{
"desc":"How do I stop a MapReduce task manually if the task is suspended for a long time?",
"product_code":"mrs",
"title":"How Do I Forcibly Stop MapReduce Jobs Executed by Hive?",
"uri":"mrs_01_1756.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"339"
},
{
"desc":"How do I monitor the Hive table size?The HDFS refined monitoring function allows you to monitor the size of a specified table directory.The Hive and HDFS components are r",
"product_code":"mrs",
"title":"How Do I Monitor the Hive Table Size?",
"uri":"mrs_01_1758.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"340"
},
{
"desc":"How do I prevent key directories from data loss caused by misoperations of the insert overwrite statement?During monitoring of key Hive databases, tables, or directories,",
"product_code":"mrs",
"title":"How Do I Prevent Key Directories from Data Loss Caused by Misoperations of the insert overwrite Statement?",
"uri":"mrs_01_1759.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"341"
},
{
"desc":"This function applies to Hive.Perform the following operations to configure parameters. When Hive on Spark tasks are executed in the environment where the HBase is not in",
"product_code":"mrs",
"title":"Why Is Hive on Spark Task Freezing When HBase Is Not Installed?",
"uri":"mrs_01_1760.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"342"
},
{
"desc":"When a table with more than 32,000 partitions is created in Hive, an exception occurs during the query with the WHERE partition. In addition, the exception information pr",
"product_code":"mrs",
"title":"Error Reported When the WHERE Condition Is Used to Query Tables with Excessive Partitions in FusionInsight Hive",
"uri":"mrs_01_1761.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"343"
},
{
"desc":"When users check the JDK version used by the client, if the JDK version is IBM JDK, the Beeline client needs to be reconstructed. Otherwise, the client will fail to conne",
"product_code":"mrs",
"title":"Why Cannot I Connect to HiveServer When I Use IBM JDK to Access the Beeline Client?",
"uri":"mrs_01_1762.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"344"
},
{
"desc":"Can Hive tables be stored in OBS or HDFS?The location of a common Hive table stored on OBS can be set to an HDFS path.In the same Hive service, you can create tables stor",
"product_code":"mrs",
"title":"Description of Hive Table Location (Either Be an OBS or HDFS Path)",
"uri":"mrs_01_1763.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"345"
},
{
"desc":"Hive uses the Tez engine to execute union-related statements to write data. After Hive is switched to the MapReduce engine for query, no data is found.When Hive uses the ",
"product_code":"mrs",
"title":"Why Cannot Data Be Queried After the MapReduce Engine Is Switched After the Tez Engine Is Used to Execute Union-related Statements?",
"uri":"mrs_01_2309.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"346"
},
{
"desc":"Why Does Data Inconsistency Occur When Data Is Concurrently Written to a Hive Table Through an API?Hive does not support concurrent data insertion for the same table or p",
"product_code":"mrs",
"title":"Why Does Hive Not Support Concurrent Data Writing to the Same Table or Partition?",
"uri":"mrs_01_2310.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"347"
},
{
"desc":"When the vectorized parameterhive.vectorized.execution.enabled is set to true, why do some null pointers or type conversion exceptions occur occasionally when Hive on Tez",
"product_code":"mrs",
"title":"Why Does Hive Not Support Vectorized Query?",
"uri":"mrs_01_2325.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"348"
},
{
"desc":"The HDFS data directory of the Hive table is deleted by mistake, but the metadata still exists. As a result, an error is reported during task execution.This is a exceptio",
"product_code":"mrs",
"title":"Why Does Metadata Still Exist When the HDFS Data Directory of the Hive Table Is Deleted by Mistake?",
"uri":"mrs_01_2343.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"349"
},
{
"desc":"How do I disable the logging function of Hive?cd/opt/Bigdata/clientsource bigdata_envIn security mode, run the following command to complete user authentication and log i",
"product_code":"mrs",
"title":"How Do I Disable the Logging Function of Hive?",
"uri":"mrs_01_24482.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"350"
},
{
"desc":"In the scenario where the fine-grained permission is configured for multiple MRS users to access OBS, after the permission for deleting Hive tables in the OBS directory i",
"product_code":"mrs",
"title":"Why Hive Tables in the OBS Directory Fail to Be Deleted?",
"uri":"mrs_01_24486.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"351"
},
{
"desc":"The error message \"java.lang.OutOfMemoryError: Java heap space.\" is displayed during Hive SQL execution.Solution:For MapReduce tasks, increase the values of the following",
"product_code":"mrs",
"title":"Hive Configuration Problems",
"uri":"mrs_01_24117.html",
"doc_type":"cmpntguide",
"p_code":"335",
"code":"352"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Hudi",
"uri":"mrs_01_24025.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"353"
},
{
"desc":"This section describes capabilities of Hudi using spark-shell. Using the Spark data source, this section describes how to insert and update a Hudi dataset of the default ",
"product_code":"mrs",
"title":"Getting Started",
"uri":"mrs_01_24033.html",
"doc_type":"cmpntguide",
"p_code":"353",
"code":"354"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Basic Operations",
"uri":"mrs_01_24062.html",
"doc_type":"cmpntguide",
"p_code":"353",
"code":"355"
},
{
"desc":"When writing data, Hudi generates a Hudi table based on attributes such as the storage path, table name, and partition structure.Hudi table data files can be stored in th",
"product_code":"mrs",
"title":"Hudi Table Schema",
"uri":"mrs_01_24103.html",
"doc_type":"cmpntguide",
"p_code":"355",
"code":"356"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Write",
"uri":"mrs_01_24034.html",
"doc_type":"cmpntguide",
"p_code":"355",
"code":"357"
},
{
"desc":"Hudi provides multiple write modes. For details, see the configuration item hoodie.datasource.write.operation. This section describes upsert, insert, and bulk_insert.inse",
"product_code":"mrs",
"title":"Batch Write",
"uri":"mrs_01_24035.html",
"doc_type":"cmpntguide",
"p_code":"357",
"code":"358"
},
{
"desc":"You can run run_hive_sync_tool.sh to synchronize data in the Hudi table to Hive.For example, run the following command to synchronize the Hudi table in the hdfs://haclust",
"product_code":"mrs",
"title":"Synchronizing Hudi Table Data to Hive",
"uri":"mrs_01_24064.html",
"doc_type":"cmpntguide",
"p_code":"357",
"code":"359"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Read",
"uri":"mrs_01_24037.html",
"doc_type":"cmpntguide",
"p_code":"355",
"code":"360"
},
{
"desc":"Reading the real-time view (using Hive and SparkSQL as an example): Directly read the Hudi table stored in Hive.select count(*) from test;Reading the real-time view (usin",
"product_code":"mrs",
"title":"Reading COW Table Views",
"uri":"mrs_01_24098.html",
"doc_type":"cmpntguide",
"p_code":"360",
"code":"361"
},
{
"desc":"After the MOR table is synchronized to Hive, the following two tables are synchronized to Hive: Table name_rt and Table name_ro. The table suffixed with rt indicates the ",
"product_code":"mrs",
"title":"Reading MOR Table Views",
"uri":"mrs_01_24099.html",
"doc_type":"cmpntguide",
"p_code":"360",
"code":"362"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Data Management and Maintenance",
"uri":"mrs_01_24038.html",
"doc_type":"cmpntguide",
"p_code":"355",
"code":"363"
},
{
"desc":"Clustering reorganizes data layout to improve query performance without affecting the ingestion speed.Hudi provides different operations, such as insert, upsert, and bulk",
"product_code":"mrs",
"title":"Clustering",
"uri":"mrs_01_24088.html",
"doc_type":"cmpntguide",
"p_code":"363",
"code":"364"
},
{
"desc":"Cleaning is used to delete data of versions that are no longer required.Hudi uses the cleaner working in the background to continuously delete unnecessary data of old ver",
"product_code":"mrs",
"title":"Cleaning",
"uri":"mrs_01_24089.html",
"doc_type":"cmpntguide",
"p_code":"363",
"code":"365"
},
{
"desc":"A compaction merges base and log files of MOR tables.For MOR tables, data is stored in columnar Parquet files and row-based Avro files, updates are recorded in incrementa",
"product_code":"mrs",
"title":"Compaction",
"uri":"mrs_01_24090.html",
"doc_type":"cmpntguide",
"p_code":"363",
"code":"366"
},
{
"desc":"Savepoints are used to save and restore data of the customized version.Savepoints provided by Hudi can save different commits so that the cleaner program does not delete ",
"product_code":"mrs",
"title":"Savepoint",
"uri":"mrs_01_24091.html",
"doc_type":"cmpntguide",
"p_code":"363",
"code":"367"
},
{
"desc":"Uses an external service (ZooKeeper or Hive MetaStore) as the distributed mutex lock service.Files can be concurrently written, but commits cannot be concurrent. The comm",
"product_code":"mrs",
"title":"Single-Table Concurrent Write",
"uri":"mrs_01_24165.html",
"doc_type":"cmpntguide",
"p_code":"363",
"code":"368"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using the Hudi Client",
"uri":"mrs_01_24100.html",
"doc_type":"cmpntguide",
"p_code":"355",
"code":"369"
},
{
"desc":"For a cluster with Kerberos authentication enabled, a user has been created on FusionInsight Manager of the cluster and associated with user groups hadoop and hive.The Hu",
"product_code":"mrs",
"title":"Operating a Hudi Table Using hudi-cli.sh",
"uri":"mrs_01_24063.html",
"doc_type":"cmpntguide",
"p_code":"369",
"code":"370"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Configuration Reference",
"uri":"mrs_01_24032.html",
"doc_type":"cmpntguide",
"p_code":"355",
"code":"371"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Write Configuration",
"uri":"mrs_01_24093.html",
"doc_type":"cmpntguide",
"p_code":"371",
"code":"372"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Configuration of Hive Table Synchronization",
"uri":"mrs_01_24094.html",
"doc_type":"cmpntguide",
"p_code":"371",
"code":"373"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Index Configuration",
"uri":"mrs_01_24095.html",
"doc_type":"cmpntguide",
"p_code":"371",
"code":"374"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Storage Configuration",
"uri":"mrs_01_24096.html",
"doc_type":"cmpntguide",
"p_code":"371",
"code":"375"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Compaction and Cleaning Configurations",
"uri":"mrs_01_24097.html",
"doc_type":"cmpntguide",
"p_code":"371",
"code":"376"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Single-Table Concurrent Write Configuration",
"uri":"mrs_01_24167.html",
"doc_type":"cmpntguide",
"p_code":"371",
"code":"377"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Hudi Performance Tuning",
"uri":"mrs_01_24039.html",
"doc_type":"cmpntguide",
"p_code":"353",
"code":"378"
},
{
"desc":"In the current version, Spark is recommended for Hudi write operations. Therefore, the tuning methods of Hudi are similar to those of Spark. For details, see Spark2x Perf",
"product_code":"mrs",
"title":"Performance Tuning Methods",
"uri":"mrs_01_24101.html",
"doc_type":"cmpntguide",
"p_code":"378",
"code":"379"
},
{
"desc":"For MOR tables:The essence of MOR tables is to write incremental files, so the tuning is based on the data size (dataSize) of Hudi.If dataSize is only several GBs, you ar",
"product_code":"mrs",
"title":"Recommended Resource Configuration",
"uri":"mrs_01_24102.html",
"doc_type":"cmpntguide",
"p_code":"378",
"code":"380"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About Hudi",
"uri":"mrs_01_24065.html",
"doc_type":"cmpntguide",
"p_code":"353",
"code":"381"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Data Write",
"uri":"mrs_01_24070.html",
"doc_type":"cmpntguide",
"p_code":"381",
"code":"382"
},
{
"desc":"The following error is reported when data is written:You are advised to evolve schemas in backward compatible mode while using Hudi. This error usually occurs when you de",
"product_code":"mrs",
"title":"Parquet/Avro schema Is Reported When Updated Data Is Written",
"uri":"mrs_01_24071.html",
"doc_type":"cmpntguide",
"p_code":"382",
"code":"383"
},
{
"desc":"The following error is reported when data is written:This error will occur again because schema evolutions are in non-backwards compatible mode. Basically, there is some ",
"product_code":"mrs",
"title":"UnsupportedOperationException Is Reported When Updated Data Is Written",
"uri":"mrs_01_24072.html",
"doc_type":"cmpntguide",
"p_code":"382",
"code":"384"
},
{
"desc":"The following error is reported when data is written:This error may occur if a schema contains some non-nullable field whose value is not present or is null.You are advis",
"product_code":"mrs",
"title":"SchemaCompatabilityException Is Reported When Updated Data Is Written",
"uri":"mrs_01_24073.html",
"doc_type":"cmpntguide",
"p_code":"382",
"code":"385"
},
{
"desc":"Hudi consumes much space in a temporary folder during upsert.Hudi will spill part of input data to disk if the maximum memory for merge is reached when much input data is",
"product_code":"mrs",
"title":"What Should I Do If Hudi Consumes Much Space in a Temporary Folder During Upsert?",
"uri":"mrs_01_24074.html",
"doc_type":"cmpntguide",
"p_code":"382",
"code":"386"
},
{
"desc":"Decimal data is initially written to a Hudi table using the BULK_INSERT command. Then when data is subsequently written using UPSERT, the following error is reported:Caus",
"product_code":"mrs",
"title":"Hudi Fails to Write Decimal Data with Lower Precision",
"uri":"mrs_01_24504.html",
"doc_type":"cmpntguide",
"p_code":"382",
"code":"387"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Data Collection",
"uri":"mrs_01_24075.html",
"doc_type":"cmpntguide",
"p_code":"381",
"code":"388"
},
{
"desc":"The error \"org.apache.kafka.common.KafkaException: Failed to construct kafka consumer\" is reported in the main thread, and the following error is reported.This error may ",
"product_code":"mrs",
"title":"IllegalArgumentException Is Reported When Kafka Is Used to Collect Data",
"uri":"mrs_01_24077.html",
"doc_type":"cmpntguide",
"p_code":"388",
"code":"389"
},
{
"desc":"The following error is reported when data is collected:This error usually occurs when a field marked as recordKey or partitionKey is not present in the input record. Cros",
"product_code":"mrs",
"title":"HoodieException Is Reported When Data Is Collected",
"uri":"mrs_01_24078.html",
"doc_type":"cmpntguide",
"p_code":"388",
"code":"390"
},
{
"desc":"Is it possible to use a nullable field that contains null records as a primary key when creating a Hudi table?No. HoodieKeyException will be thrown.",
"product_code":"mrs",
"title":"HoodieKeyException Is Reported When Data Is Collected",
"uri":"mrs_01_24079.html",
"doc_type":"cmpntguide",
"p_code":"388",
"code":"391"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Hive Synchronization",
"uri":"mrs_01_24080.html",
"doc_type":"cmpntguide",
"p_code":"381",
"code":"392"
},
{
"desc":"The following error is reported during Hive data synchronization:This error usually occurs when you try to add a new column to an existing Hive table using the HiveSyncTo",
"product_code":"mrs",
"title":"SQLException Is Reported During Hive Data Synchronization",
"uri":"mrs_01_24081.html",
"doc_type":"cmpntguide",
"p_code":"392",
"code":"393"
},
{
"desc":"The following error is reported during Hive data synchronization:This error occurs because HiveSyncTool currently supports only few compatible data type conversions. The ",
"product_code":"mrs",
"title":"HoodieHiveSyncException Is Reported During Hive Data Synchronization",
"uri":"mrs_01_24082.html",
"doc_type":"cmpntguide",
"p_code":"392",
"code":"394"
},
{
"desc":"The following error is reported during Hive data synchronization:This error usually occurs when Hive synchronization is performed on the Hudi dataset but the configured h",
"product_code":"mrs",
"title":"SemanticException Is Reported During Hive Data Synchronization",
"uri":"mrs_01_24083.html",
"doc_type":"cmpntguide",
"p_code":"392",
"code":"395"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Hue (Versions Earlier Than MRS 3.x)",
"uri":"mrs_01_0369.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"396"
},
{
"desc":"Hue provides the file browser function using a graphical user interface (GUI) so that you can view files and directories on Hive.You have installed Hive and Hue, and the ",
"product_code":"mrs",
"title":"Using Hue from Scratch",
"uri":"mrs_01_1020.html",
"doc_type":"cmpntguide",
"p_code":"396",
"code":"397"
},
{
"desc":"After Hue is installed in an MRS cluster, users can use Hadoop and Hive on the Hue web UI.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication e",
"product_code":"mrs",
"title":"Accessing the Hue Web UI",
"uri":"mrs_01_0370.html",
"doc_type":"cmpntguide",
"p_code":"396",
"code":"398"
},
{
"desc":"For details about how to set parameters, see Modifying Cluster Service Configuration Parameters.",
"product_code":"mrs",
"title":"Hue Common Parameters",
"uri":"mrs_01_1021.html",
"doc_type":"cmpntguide",
"p_code":"396",
"code":"399"
},
{
"desc":"Users can use the Hue web UI to execute HiveQL statements in a cluster.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this",
"product_code":"mrs",
"title":"Using HiveQL Editor on the Hue Web UI",
"uri":"mrs_01_0371.html",
"doc_type":"cmpntguide",
"p_code":"396",
"code":"400"
},
{
"desc":"Users can use the Hue web UI to manage Hive metadata in an MRS cluster.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this",
"product_code":"mrs",
"title":"Using the Metadata Browser on the Hue Web UI",
"uri":"mrs_01_0372.html",
"doc_type":"cmpntguide",
"p_code":"396",
"code":"401"
},
{
"desc":"Users can use the Hue web UI to manage files in HDFS in a cluster.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this func",
"product_code":"mrs",
"title":"Using File Browser on the Hue Web UI",
"uri":"mrs_01_0373.html",
"doc_type":"cmpntguide",
"p_code":"396",
"code":"402"
},
{
"desc":"You can use the Hue web UI to query all jobs in the cluster.For versions earlier than MRS 1.9.2, MRS clusters with Kerberos authentication enabled support this function.V",
"product_code":"mrs",
"title":"Using Job Browser on the Hue Web UI",
"uri":"mrs_01_0374.html",
"doc_type":"cmpntguide",
"p_code":"396",
"code":"403"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Hue (MRS 3.x or Later)",
"uri":"mrs_01_0130.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"404"
},
{
"desc":"Hue aggregates interfaces which interact with most Apache Hadoop components and enables you to use Hadoop components with ease on a web UI. You can operate components suc",
"product_code":"mrs",
"title":"Using Hue from Scratch",
"uri":"mrs_01_0131.html",
"doc_type":"cmpntguide",
"p_code":"404",
"code":"405"
},
{
"desc":"After Hue is installed in an MRS cluster, users can use Hadoop-related components on the Hue web UI.This section describes how to open the Hue web UI on the MRS cluster.T",
"product_code":"mrs",
"title":"Accessing the Hue Web UI",
"uri":"mrs_01_0132.html",
"doc_type":"cmpntguide",
"p_code":"404",
"code":"406"
},
{
"desc":"Go to the All Configurations page of the Hue service by referring to Modifying Cluster Service Configuration Parameters.For details about Hue common parameters, see Table",
"product_code":"mrs",
"title":"Hue Common Parameters",
"uri":"mrs_01_0133.html",
"doc_type":"cmpntguide",
"p_code":"404",
"code":"407"
},
{
"desc":"Users can use the Hue web UI to execute HiveQL statements in an MRS cluster.Hive supports the following functions:Executes and manages HiveQL statements.Views the HiveQL ",
"product_code":"mrs",
"title":"Using HiveQL Editor on the Hue Web UI",
"uri":"mrs_01_0134.html",
"doc_type":"cmpntguide",
"p_code":"404",
"code":"408"
},
{
"desc":"You can use Hue to execute SparkSql statements in a cluster on a graphical user interface (GUI).Before using the SparkSql editor, you need to modify the Spark2x configura",
"product_code":"mrs",
"title":"Using the SparkSql Editor on the Hue Web UI",
"uri":"mrs_01_2370.html",
"doc_type":"cmpntguide",
"p_code":"404",
"code":"409"
},
{
"desc":"Users can use the Hue web UI to manage Hive metadata in an MRS cluster.Access the Hue web UI. For details, see Accessing the Hue Web UI.Viewing metadata of Hive tablesCli",
"product_code":"mrs",
"title":"Using the Metadata Browser on the Hue Web UI",
"uri":"mrs_01_0135.html",
"doc_type":"cmpntguide",
"p_code":"404",
"code":"410"
},
{
"desc":"Users can use the Hue web UI to manage files in HDFS.The Hue page is used to view and analyze data such as files and tables. Do not perform high-risk management operation",
"product_code":"mrs",
"title":"Using File Browser on the Hue Web UI",
"uri":"mrs_01_0136.html",
"doc_type":"cmpntguide",
"p_code":"404",
"code":"411"
},
{
"desc":"Users can use the Hue web UI to query all jobs in an MRS cluster.View the jobs in the current cluster.The number on Job Browser indicates the total number of jobs in the ",
"product_code":"mrs",
"title":"Using Job Browser on the Hue Web UI",
"uri":"mrs_01_0137.html",
"doc_type":"cmpntguide",
"p_code":"404",
"code":"412"
},
{
"desc":"You can use Hue to create or query HBase tables in a cluster and run tasks on the Hue web UI.Make sure that the HBase component has been installed in the MRS cluster and ",
"product_code":"mrs",
"title":"Using HBase on the Hue Web UI",
"uri":"mrs_01_2371.html",
"doc_type":"cmpntguide",
"p_code":"404",
"code":"413"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Typical Scenarios",
"uri":"mrs_01_0138.html",
"doc_type":"cmpntguide",
"p_code":"404",
"code":"414"
},
{
"desc":"Hue provides the file browser function for users to use HDFS in GUI mode.The Hue page is used to view and analyze data such as files and tables. Do not perform high-risk ",
"product_code":"mrs",
"title":"HDFS on Hue",
"uri":"mrs_01_0139.html",
"doc_type":"cmpntguide",
"p_code":"414",
"code":"415"
},
{
"desc":"Hue provides the Hive GUI management function so that users can query Hive data in GUI mode.Access the Hue web UI. For details, see Accessing the Hue Web UI.In the naviga",
"product_code":"mrs",
"title":"Hive on Hue",
"uri":"mrs_01_0141.html",
"doc_type":"cmpntguide",
"p_code":"414",
"code":"416"
},
{
"desc":"Hue provides the Oozie job manager function, in this case, you can use Oozie in GUI mode.The Hue page is used to view and analyze data such as files and tables. Do not pe",
"product_code":"mrs",
"title":"Oozie on Hue",
"uri":"mrs_01_0144.html",
"doc_type":"cmpntguide",
"p_code":"414",
"code":"417"
},
{
"desc":"Log paths: The default paths of Hue logs are /var/log/Bigdata/hue (for storing run logs) and /var/log/Bigdata/audit/hue (for storing audit logs).Log archive rules: The au",
"product_code":"mrs",
"title":"Hue Log Overview",
"uri":"mrs_01_0147.html",
"doc_type":"cmpntguide",
"p_code":"404",
"code":"418"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About Hue",
"uri":"mrs_01_1764.html",
"doc_type":"cmpntguide",
"p_code":"404",
"code":"419"
},
{
"desc":"What do I do if all HQL statements fail to be executed when I use Internet Explorer to access Hive Editor in Hue and the message \"There was an error with your query\" is d",
"product_code":"mrs",
"title":"How Do I Solve the Problem that HQL Fails to Be Executed in Hue Using Internet Explorer?",
"uri":"mrs_01_1765.html",
"doc_type":"cmpntguide",
"p_code":"419",
"code":"420"
},
{
"desc":"When Hive is used, the use database statement is entered in the text box to switch the database, and other statements are also entered, why does the database fail to be s",
"product_code":"mrs",
"title":"Why Does the use database Statement Become Invalid When Hive Is Used?",
"uri":"mrs_01_1766.html",
"doc_type":"cmpntguide",
"p_code":"419",
"code":"421"
},
{
"desc":"What can I do if an error message shown in the following figure is displayed, indicating that the HDFS file cannot be accessed when I use Hue web UI to access the HDFS fi",
"product_code":"mrs",
"title":"What Can I Do If HDFS Files Fail to Be Accessed Using Hue WebUI?",
"uri":"mrs_01_0156.html",
"doc_type":"cmpntguide",
"p_code":"419",
"code":"422"
},
{
"desc":"What can I do when a large file fails to be uploaded on the Hue page?You are advised to run commands on the client to upload large files instead of using the Hue file bro",
"product_code":"mrs",
"title":"How Do I Do If a Large File Fails to Upload on the Hue Page?",
"uri":"mrs_01_2367.html",
"doc_type":"cmpntguide",
"p_code":"419",
"code":"423"
},
{
"desc":"Why is the native Hue page blank if the Hive service is not installed in a cluster?In MRS 3.x, Hue depends on Hive. If this problem occurs, check whether the Hive compone",
"product_code":"mrs",
"title":"Why Is the Hue Native Page Cannot Be Properly Displayed If the Hive Service Is Not Installed in a Cluster?",
"uri":"mrs_01_2368.html",
"doc_type":"cmpntguide",
"p_code":"419",
"code":"424"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Kafka",
"uri":"mrs_01_0375.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"425"
},
{
"desc":"You can create, query, and delete topics on a cluster client.The client has been installed. For example, the client is installed in the /opt/hadoopclient directory. The c",
"product_code":"mrs",
"title":"Using Kafka from Scratch",
"uri":"mrs_01_1031.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"426"
},
{
"desc":"You can manage Kafka topics on a cluster client based on service requirements. Management permission is required for clusters with Kerberos authentication enabled.You hav",
"product_code":"mrs",
"title":"Managing Kafka Topics",
"uri":"mrs_01_0376.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"427"
},
{
"desc":"You can query existing Kafka topics on MRS.For versions earlier than MRS 1.9.2, log in to MRS Manager and choose Services > Kafka.For MRS 1.9.2 or later, click the cluste",
"product_code":"mrs",
"title":"Querying Kafka Topics",
"uri":"mrs_01_0377.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"428"
},
{
"desc":"For clusters with Kerberos authentication enabled, using Kafka requires relevant permissions. MRS clusters can grant the use permission of Kafka to different users.Table ",
"product_code":"mrs",
"title":"Managing Kafka User Permissions",
"uri":"mrs_01_0378.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"429"
},
{
"desc":"You can produce or consume messages in Kafka topics using the MRS cluster client. For clusters with Kerberos authentication enabled, you must have the permission to perfo",
"product_code":"mrs",
"title":"Managing Messages in Kafka Topics",
"uri":"mrs_01_0379.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"430"
},
{
"desc":"This section describes how to use the Maxwell data synchronization tool to migrate offline binlog-based data to an MRS Kafka cluster.Maxwell is an open source application",
"product_code":"mrs",
"title":"Synchronizing Binlog-based MySQL Data to the MRS Cluster",
"uri":"mrs_01_0441.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"431"
},
{
"desc":"This section describes how to create and configure a Kafka role.This section applies to MRS 3.x or later.Users can create Kafka roles only in security mode.If the current",
"product_code":"mrs",
"title":"Creating a Kafka Role",
"uri":"mrs_01_1032.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"432"
},
{
"desc":"This section applies to MRS 3.x or later.For details about how to set parameters, see Modifying Cluster Service Configuration Parameters.",
"product_code":"mrs",
"title":"Kafka Common Parameters",
"uri":"mrs_01_1033.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"433"
},
{
"desc":"This section applies to MRS 3.x or later.Producer APIIndicates the API defined in org.apache.kafka.clients.producer.KafkaProducer. When kafka-console-producer.sh is used,",
"product_code":"mrs",
"title":"Safety Instructions on Using Kafka",
"uri":"mrs_01_1035.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"434"
},
{
"desc":"This section applies to MRS 3.x or later.The maximum number of topics depends on the number of file handles (mainly used by data and index files on site) opened in the pr",
"product_code":"mrs",
"title":"Kafka Specifications",
"uri":"mrs_01_1036.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"435"
},
{
"desc":"This section guides users to use a Kafka client in an O&M or service scenario.This section applies to MRS 3.x or later clusters.The client has been installed. For example",
"product_code":"mrs",
"title":"Using the Kafka Client",
"uri":"mrs_01_1767.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"436"
},
{
"desc":"For the Kafka message transmission assurance mechanism, different parameters are available for meeting different performance and reliability requirements. This section de",
"product_code":"mrs",
"title":"Configuring Kafka HA and High Reliability Parameters",
"uri":"mrs_01_1037.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"437"
},
{
"desc":"This section applies to MRS 3.x or later.When a broker storage directory is added, the system administrator needs to change the broker storage directory on FusionInsight ",
"product_code":"mrs",
"title":"Changing the Broker Storage Directory",
"uri":"mrs_01_1038.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"438"
},
{
"desc":"This section describes how to view the current expenditure on the client based on service requirements.This section applies to MRS 3.x or later.The system administrator h",
"product_code":"mrs",
"title":"Checking the Consumption Status of Consumer Group",
"uri":"mrs_01_1039.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"439"
},
{
"desc":"This section describes how to use the Kafka balancing tool on a client to balance the load of the Kafka cluster based on service requirements in scenarios such as node de",
"product_code":"mrs",
"title":"Kafka Balancing Tool Instructions",
"uri":"mrs_01_1040.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"440"
},
{
"desc":"This section describes how to use the Kafka balancing tool on the client to balance the load of the Kafka cluster after Kafka nodes are scaled out.This section applies to",
"product_code":"mrs",
"title":"Balancing Data After Kafka Node Scale-Out",
"uri":"mrs_01_24299.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"441"
},
{
"desc":"Operations need to be performed on tokens when the token authentication mechanism is used.This section applies to security clusters of MRS 3.x or later.The system adminis",
"product_code":"mrs",
"title":"Kafka Token Authentication Mechanism Tool Usage",
"uri":"mrs_01_1041.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"442"
},
{
"desc":"This section applies to MRS 3.x or later.Log paths: The default storage path of Kafka logs is /var/log/Bigdata/kafka. The default storage path of audit logs is /var/log/B",
"product_code":"mrs",
"title":"Introduction to Kafka Logs",
"uri":"mrs_01_1042.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"443"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Performance Tuning",
"uri":"mrs_01_1043.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"444"
},
{
"desc":"You can modify Kafka server parameters to improve Kafka processing capabilities in specific service scenarios.Modify the service configuration parameters. For details, se",
"product_code":"mrs",
"title":"Kafka Performance Tuning",
"uri":"mrs_01_1044.html",
"doc_type":"cmpntguide",
"p_code":"444",
"code":"445"
},
{
"desc":"Feature description: The function of creating idempotent producers is introduced in Kafka 0.11.0.0. After this function is enabled, producers are automatically upgraded t",
"product_code":"mrs",
"title":"Kafka Feature Description",
"uri":"mrs_01_2312.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"446"
},
{
"desc":"This section describes how to use Kafka client commands to migrate partition data between disks on a node without stopping the Kafka service.The system administrator has ",
"product_code":"mrs",
"title":"Migrating Data Between Kafka Nodes",
"uri":"mrs_01_24534.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"447"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About Kafka",
"uri":"mrs_01_1768.html",
"doc_type":"cmpntguide",
"p_code":"425",
"code":"448"
},
{
"desc":"How do I delete a Kafka topic if it fails to be deleted?Possible cause 1: The delete.topic.enable configuration item is not set to true. The deletion can be performed onl",
"product_code":"mrs",
"title":"How Do I Solve the Problem that Kafka Topics Cannot Be Deleted?",
"uri":"mrs_01_1769.html",
"doc_type":"cmpntguide",
"p_code":"448",
"code":"449"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using KafkaManager",
"uri":"mrs_01_0435.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"450"
},
{
"desc":"KafkaManager is a tool for managing Apache Kafka and provides GUI-based metric monitoring and management of Kafka clusters. This section applies to MRS 1.9.2 clusters.Kaf",
"product_code":"mrs",
"title":"Introduction to KafkaManager",
"uri":"mrs_01_0436.html",
"doc_type":"cmpntguide",
"p_code":"450",
"code":"451"
},
{
"desc":"You can monitor and manage Kafka clusters on the graphical KafkaManager web UI.This section applies to MRS 1.9.2 clusters.KafkaManager has been installed in a cluster.The",
"product_code":"mrs",
"title":"Accessing the KafkaManager Web UI",
"uri":"mrs_01_0437.html",
"doc_type":"cmpntguide",
"p_code":"450",
"code":"452"
},
{
"desc":"This section applies to MRS 1.9.2 clusters.Kafka cluster management includes the following operations:Adding a Cluster on the KafkaManager Web UIUpdating Cluster Paramete",
"product_code":"mrs",
"title":"Managing Kafka Clusters",
"uri":"mrs_01_0438.html",
"doc_type":"cmpntguide",
"p_code":"450",
"code":"453"
},
{
"desc":"This section applies to MRS 1.9.2 clusters.The Kafka cluster monitoring management includes the following operations:Viewing Broker InformationViewing Topic InformationVi",
"product_code":"mrs",
"title":"Kafka Cluster Monitoring Management",
"uri":"mrs_01_0439.html",
"doc_type":"cmpntguide",
"p_code":"450",
"code":"454"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Loader",
"uri":"mrs_01_0400.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"455"
},
{
"desc":"You can use Loader to import data from the SFTP server to HDFS.This section applies to MRS clusters earlier than 3.x.You have prepared service data.You have created an an",
"product_code":"mrs",
"title":"Using Loader from Scratch",
"uri":"mrs_01_1084.html",
"doc_type":"cmpntguide",
"p_code":"455",
"code":"456"
},
{
"desc":"This section applies to MRS clusters earlier than 3.x.The process for migrating user data with Loader is as follows:Access the Loader page of the Hue web UI.Manage Loader",
"product_code":"mrs",
"title":"How to Use Loader",
"uri":"mrs_01_0401.html",
"doc_type":"cmpntguide",
"p_code":"455",
"code":"457"
},
{
"desc":"This section applies to versions earlier than MRS 3.x.Loader supports the following links. This section describes configurations of each link.obs-connectorgeneric-jdbc-co",
"product_code":"mrs",
"title":"Loader Link Configuration",
"uri":"mrs_01_0402.html",
"doc_type":"cmpntguide",
"p_code":"455",
"code":"458"
},
{
"desc":"You can create, view, edit, and delete links on the Loader page.This section applies to versions earlier than MRS 3.x.You have accessed the Loader page. For details, see ",
"product_code":"mrs",
"title":"Managing Loader Links (Versions Earlier Than MRS 3.x)",
"uri":"mrs_01_0403.html",
"doc_type":"cmpntguide",
"p_code":"455",
"code":"459"
},
{
"desc":"When Loader jobs obtain data from different data sources, a link corresponding to a data source type needs to be selected and the link properties need to be configured.Th",
"product_code":"mrs",
"title":"Source Link Configurations of Loader Jobs",
"uri":"mrs_01_0404.html",
"doc_type":"cmpntguide",
"p_code":"455",
"code":"460"
},
{
"desc":"When Loader jobs save data to different storage locations, a destination link needs to be selected and the link properties need to be configured.",
"product_code":"mrs",
"title":"Destination Link Configurations of Loader Jobs",
"uri":"mrs_01_0405.html",
"doc_type":"cmpntguide",
"p_code":"455",
"code":"461"
},
{
"desc":"You can create, view, edit, and delete jobs on the Loader page.This section applies to versions earlier than MRS 3.x.You have accessed the Loader page. For details, see L",
"product_code":"mrs",
"title":"Managing Loader Jobs",
"uri":"mrs_01_0406.html",
"doc_type":"cmpntguide",
"p_code":"455",
"code":"462"
},
{
"desc":"As a component for batch data export, Loader can import and export data using a relational database.You have prepared service data.Procedure for MRS clusters earlier than",
"product_code":"mrs",
"title":"Preparing a Driver for MySQL Database Link",
"uri":"mrs_01_0407.html",
"doc_type":"cmpntguide",
"p_code":"455",
"code":"463"
},
{
"desc":"Log path: The default storage path of Loader log files is /var/log/Bigdata/loader/Log category.runlog: /var/log/Bigdata/loader/runlog (run logs)scriptlog: /var/log/Bigdat",
"product_code":"mrs",
"title":"Loader Log Overview",
"uri":"mrs_01_1165.html",
"doc_type":"cmpntguide",
"p_code":"455",
"code":"464"
},
{
"desc":"If you need to import a large volume of data from the external cluster to the internal cluster, import it from OBS to HDFS.You have prepared service data.You have created",
"product_code":"mrs",
"title":"Example: Using Loader to Import Data from OBS to HDFS",
"uri":"mrs_01_0408.html",
"doc_type":"cmpntguide",
"p_code":"455",
"code":"465"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About Loader",
"uri":"mrs_01_1785.html",
"doc_type":"cmpntguide",
"p_code":"455",
"code":"466"
},
{
"desc":"Internet Explorer 11 or Internet Explorer 10 is used to access the web UI of Loader. After data is submitted, an error occurs.SymptomWhen the submitted data is saved, a s",
"product_code":"mrs",
"title":"How to Resolve the Problem that Failed to Save Data When Using Internet Explorer 10 or Internet Explorer 11 ?",
"uri":"mrs_01_1786.html",
"doc_type":"cmpntguide",
"p_code":"466",
"code":"467"
},
{
"desc":"Three types of connectors are available for importing data from the Oracle database to HDFS using Loader. That is, generic-jdbc-connector, oracle-connector, and oracle-pa",
"product_code":"mrs",
"title":"Differences Among Connectors Used During the Process of Importing Data from the Oracle Database to HDFS",
"uri":"mrs_01_1787.html",
"doc_type":"cmpntguide",
"p_code":"466",
"code":"468"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using MapReduce",
"uri":"mrs_01_0834.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"469"
},
{
"desc":"Job and task logs are generated during execution of a MapReduce application.Job logs are generated by the MRApplicationMaster, which record details about the start and ru",
"product_code":"mrs",
"title":"Configuring the Log Archiving and Clearing Mechanism",
"uri":"mrs_01_0836.html",
"doc_type":"cmpntguide",
"p_code":"469",
"code":"470"
},
{
"desc":"When the network is unstable or the cluster I/O and CPU are overloaded, client applications might encounter running failures.Adjust the following parameters in the mapred",
"product_code":"mrs",
"title":"Reducing Client Application Failure Rate",
"uri":"mrs_01_0837.html",
"doc_type":"cmpntguide",
"p_code":"469",
"code":"471"
},
{
"desc":"If you want to transmit a job from Windows to Linux, set mapreduce.app-submission.cross-platform to true. If this parameter is unavailable for a cluster or its value is f",
"product_code":"mrs",
"title":"Transmitting MapReduce Tasks from Windows to Linux",
"uri":"mrs_01_0838.html",
"doc_type":"cmpntguide",
"p_code":"469",
"code":"472"
},
{
"desc":"This section applies to MRS 3.x or later.Distributed caching is useful in the following scenarios:Rolling UpgradeDuring the upgrade, applications must keep the text conte",
"product_code":"mrs",
"title":"Configuring the Distributed Cache",
"uri":"mrs_01_0839.html",
"doc_type":"cmpntguide",
"p_code":"469",
"code":"473"
},
{
"desc":"When the MapReduce shuffle service is started, it attempts to bind an IP address based on local host. If the MapReduce shuffle service is required to connect to a specifi",
"product_code":"mrs",
"title":"Configuring the MapReduce Shuffle Address",
"uri":"mrs_01_0840.html",
"doc_type":"cmpntguide",
"p_code":"469",
"code":"474"
},
{
"desc":"This function is used to specify the MapReduce cluster administrator.The systemadministrator list is specified by mapreduce.cluster.administrators. The cluster administra",
"product_code":"mrs",
"title":"Configuring the Cluster Administrator List",
"uri":"mrs_01_0841.html",
"doc_type":"cmpntguide",
"p_code":"469",
"code":"475"
},
{
"desc":"Log paths:JobhistoryServer: /var/log/Bigdata/mapreduce/jobhistory (run log) and /var/log/Bigdata/audit/mapreduce/jobhistory (audit log)Container: /srv/BigData/hadoop/data",
"product_code":"mrs",
"title":"Introduction to MapReduce Logs",
"uri":"mrs_01_0842.html",
"doc_type":"cmpntguide",
"p_code":"469",
"code":"476"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"MapReduce Performance Tuning",
"uri":"mrs_01_0843.html",
"doc_type":"cmpntguide",
"p_code":"469",
"code":"477"
},
{
"desc":"Optimization can be performed when the number of CPU cores is large, for example, the number of CPU cores is three times the number of disks.You can set the following par",
"product_code":"mrs",
"title":"Optimization Configuration for Multiple CPU Cores",
"uri":"mrs_01_0844.html",
"doc_type":"cmpntguide",
"p_code":"477",
"code":"478"
},
{
"desc":"The performance optimization effect is verified by comparing actual values with the baseline data. Therefore, determining optimal job baseline is critical to performance ",
"product_code":"mrs",
"title":"Determining the Job Baseline",
"uri":"mrs_01_0845.html",
"doc_type":"cmpntguide",
"p_code":"477",
"code":"479"
},
{
"desc":"During the shuffle procedure of MapReduce, the Map task writes intermediate data into disks, and the Reduce task copies and adds the data to the reduce function. Hadoop p",
"product_code":"mrs",
"title":"Streamlining Shuffle",
"uri":"mrs_01_0846.html",
"doc_type":"cmpntguide",
"p_code":"477",
"code":"480"
},
{
"desc":"A big job containing 100,000 Map tasks fails. It is found that the failure is triggered by the slow response of ApplicationMaster (AM).When the number of tasks increases,",
"product_code":"mrs",
"title":"AM Optimization for Big Tasks",
"uri":"mrs_01_0847.html",
"doc_type":"cmpntguide",
"p_code":"477",
"code":"481"
},
{
"desc":"If a cluster has hundreds or thousands of nodes, the hardware or software fault of a node may prolong the execution time of the entire task (as most tasks are already com",
"product_code":"mrs",
"title":"Speculative Execution",
"uri":"mrs_01_0848.html",
"doc_type":"cmpntguide",
"p_code":"477",
"code":"482"
},
{
"desc":"The Slow Start feature specifies the proportion of Map tasks to be completed before Reduce tasks are started. If the Reduce tasks are started too early, resources will be",
"product_code":"mrs",
"title":"Using Slow Start",
"uri":"mrs_01_0849.html",
"doc_type":"cmpntguide",
"p_code":"477",
"code":"483"
},
{
"desc":"By default, if an MR job generates a large number of output files, it takes a long time for the job to commit the temporary outputs of a task to the final output director",
"product_code":"mrs",
"title":"Optimizing Performance for Committing MR Jobs",
"uri":"mrs_01_0850.html",
"doc_type":"cmpntguide",
"p_code":"477",
"code":"484"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About MapReduce",
"uri":"mrs_01_1788.html",
"doc_type":"cmpntguide",
"p_code":"469",
"code":"485"
},
{
"desc":"MapReduce job takes a very long time (more than 10minutes) when the ResourceManager switch while the job is running.This is because, ResorceManager HA is enabled but the ",
"product_code":"mrs",
"title":"Why Does It Take a Long Time to Run a Task Upon ResourceManager Active/Standby Switchover?",
"uri":"mrs_01_1789.html",
"doc_type":"cmpntguide",
"p_code":"485",
"code":"486"
},
{
"desc":"MapReduce job is not progressing for long timeThis is because of less memory. When the memory is less, the time taken by the job to copy the map output increases signific",
"product_code":"mrs",
"title":"Why Does a MapReduce Task Stay Unchanged for a Long Time?",
"uri":"mrs_01_1790.html",
"doc_type":"cmpntguide",
"p_code":"485",
"code":"487"
},
{
"desc":"Why is the client unavailable when the MR ApplicationMaster or ResourceManager is moved to the D state during job running?When a task is running, the MR ApplicationMaster",
"product_code":"mrs",
"title":"Why the Client Hangs During Job Running?",
"uri":"mrs_01_1791.html",
"doc_type":"cmpntguide",
"p_code":"485",
"code":"488"
},
{
"desc":"In security mode, why delegation token HDFS_DELEGATION_TOKEN is not found in the cache?In MapReduce, by default HDFS_DELEGATION_TOKEN will be canceled after the job compl",
"product_code":"mrs",
"title":"Why Cannot HDFS_DELEGATION_TOKEN Be Found in the Cache?",
"uri":"mrs_01_1792.html",
"doc_type":"cmpntguide",
"p_code":"485",
"code":"489"
},
{
"desc":"How do I set the job priority when submitting a MapReduce task?You can add the parameter -Dmapreduce.job.priority=<priority> in the command to set task priority when subm",
"product_code":"mrs",
"title":"How Do I Set the Task Priority When Submitting a MapReduce Task?",
"uri":"mrs_01_1793.html",
"doc_type":"cmpntguide",
"p_code":"485",
"code":"490"
},
{
"desc":"After the address of MapReduce JobHistoryServer is changed, why the wrong page is displayed when I click the tracking URL on the ResourceManager WebUI?JobHistoryServer ad",
"product_code":"mrs",
"title":"After the Address of MapReduce JobHistoryServer Is Changed, Why the Wrong Page is Displayed When I Click the Tracking URL on the ResourceManager WebUI?",
"uri":"mrs_01_1797.html",
"doc_type":"cmpntguide",
"p_code":"485",
"code":"491"
},
{
"desc":"MapReduce or Yarn job fails in multiple nameService environment using viewFS.When using viewFS only the mount directories are accessible, so the most possible cause is th",
"product_code":"mrs",
"title":"MapReduce Job Failed in Multiple NameService Environment",
"uri":"mrs_01_1799.html",
"doc_type":"cmpntguide",
"p_code":"485",
"code":"492"
},
{
"desc":"MapReduce task fails and the ratio of fault nodes to all nodes is smaller than the blacklist threshold configured by yarn.resourcemanager.am-scheduling.node-blacklisting-",
"product_code":"mrs",
"title":"Why a Fault MapReduce Node Is Not Blacklisted?",
"uri":"mrs_01_1800.html",
"doc_type":"cmpntguide",
"p_code":"485",
"code":"493"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Oozie",
"uri":"mrs_01_1807.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"494"
},
{
"desc":"Oozie is an open-source workflow engine that is used to schedule and coordinate Hadoop jobs.Oozie can be used to submit a wide array of jobs, such as Hive, Spark2x, Loade",
"product_code":"mrs",
"title":"Using Oozie from Scratch",
"uri":"mrs_01_1808.html",
"doc_type":"cmpntguide",
"p_code":"494",
"code":"495"
},
{
"desc":"This section describes how to use the Oozie client in an O&M scenario or service scenario.The client has been installed. For example, the installation directory is /opt/c",
"product_code":"mrs",
"title":"Using the Oozie Client",
"uri":"mrs_01_1810.html",
"doc_type":"cmpntguide",
"p_code":"494",
"code":"496"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Oozie Client to Submit an Oozie Job",
"uri":"mrs_01_1812.html",
"doc_type":"cmpntguide",
"p_code":"494",
"code":"497"
},
{
"desc":"This section describes how to use the Oozie client to submit a Hive job.Hive jobs are divided into the following types:Hive jobHive job that is connected in JDBC modeHive",
"product_code":"mrs",
"title":"Submitting a Hive Job",
"uri":"mrs_01_1813.html",
"doc_type":"cmpntguide",
"p_code":"497",
"code":"498"
},
{
"desc":"This section describes how to submit a Spark2x job using the Oozie client.You are advised to download the latest client.The Spark2x and Oozie components and clients have ",
"product_code":"mrs",
"title":"Submitting a Spark2x Job",
"uri":"mrs_01_1814.html",
"doc_type":"cmpntguide",
"p_code":"497",
"code":"499"
},
{
"desc":"This section describes how to submit a Loader job using the Oozie client.You are advised to download the latest client.The Hive and Oozie components and clients have been",
"product_code":"mrs",
"title":"Submitting a Loader Job",
"uri":"mrs_01_1815.html",
"doc_type":"cmpntguide",
"p_code":"497",
"code":"500"
},
{
"desc":"This section describes how to submit a DistCp job using the Oozie client.You are advised to download the latest client.The HDFS and Oozie components and clients have been",
"product_code":"mrs",
"title":"Submitting a DistCp Job",
"uri":"mrs_01_2392.html",
"doc_type":"cmpntguide",
"p_code":"497",
"code":"501"
},
{
"desc":"In addition to Hive, Spark2x, and Loader jobs, MapReduce, Java, Shell, HDFS, SSH, SubWorkflow, Streaming, and scheduled jobs can be submitted using the Oozie client.You a",
"product_code":"mrs",
"title":"Submitting Other Jobs",
"uri":"mrs_01_1816.html",
"doc_type":"cmpntguide",
"p_code":"497",
"code":"502"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Hue to Submit an Oozie Job",
"uri":"mrs_01_1817.html",
"doc_type":"cmpntguide",
"p_code":"494",
"code":"503"
},
{
"desc":"You can submit an Oozie job on the Hue management page, but a workflow must be created before the job is submitted.Before using Hue to submit an Oozie job, configure the ",
"product_code":"mrs",
"title":"Creating a Workflow",
"uri":"mrs_01_1818.html",
"doc_type":"cmpntguide",
"p_code":"503",
"code":"504"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Submitting a Workflow Job",
"uri":"mrs_01_1819.html",
"doc_type":"cmpntguide",
"p_code":"503",
"code":"505"
},
{
"desc":"This section describes how to submit an Oozie job of the Hive2 type on the Hue web UI.For example, if the input parameter is INPUT=/user/admin/examples/input-data/table, ",
"product_code":"mrs",
"title":"Submitting a Hive2 Job",
"uri":"mrs_01_1820.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"506"
},
{
"desc":"This section describes how to submit an Oozie job of the Spark2x type on Hue.For example, add the following parameters:hdfs://hacluster/user/admin/examples/input-data/tex",
"product_code":"mrs",
"title":"Submitting a Spark2x Job",
"uri":"mrs_01_1821.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"507"
},
{
"desc":"This section describes how to submit an Oozie job of the Java type on the Hue web UI.If you need to modify the job name before saving the job (default value: My Workflow)",
"product_code":"mrs",
"title":"Submitting a Java Job",
"uri":"mrs_01_1822.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"508"
},
{
"desc":"This section describes how to submit an Oozie job of the Loader type on the Hue web UI.Job id is the ID of the Loader job to be orchestrated and can be obtained from the ",
"product_code":"mrs",
"title":"Submitting a Loader Job",
"uri":"mrs_01_1823.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"509"
},
{
"desc":"This section describes how to submit an Oozie job of the MapReduce type on the Hue web UI.For example, set the value of mapred.input.dir to /user/admin/examples/input-dat",
"product_code":"mrs",
"title":"Submitting a MapReduce Job",
"uri":"mrs_01_1824.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"510"
},
{
"desc":"This section describes how to submit an Oozie job of the Sub-workflow type on the Hue web UI.If you need to modify the job name before saving the job (default value: My W",
"product_code":"mrs",
"title":"Submitting a Sub-workflow Job",
"uri":"mrs_01_1825.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"511"
},
{
"desc":"This section describes how to submit an Oozie job of the Shell type on the Hue web UI.If the file is stored in HDFS, select the path of the .sh file, for example, user/hu",
"product_code":"mrs",
"title":"Submitting a Shell Job",
"uri":"mrs_01_1826.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"512"
},
{
"desc":"This section describes how to submit an Oozie job of the HDFS type on the Hue web UI.If you need to modify the job name before saving the job (default value: My Workflow)",
"product_code":"mrs",
"title":"Submitting an HDFS Job",
"uri":"mrs_01_1827.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"513"
},
{
"desc":"This section describes how to submit an Oozie job of the Streaming type on the Hue web UI.for example, /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming-3.1.1.ja",
"product_code":"mrs",
"title":"Submitting a Streaming Job",
"uri":"mrs_01_1828.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"514"
},
{
"desc":"This section describes how to submit an Oozie job of the DistCp type on the Hue web UI.If yes, go to 4.If no, go to 7.source_ip: service address of the HDFS NameNode in t",
"product_code":"mrs",
"title":"Submitting a DistCp Job",
"uri":"mrs_01_1829.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"515"
},
{
"desc":"This section guides you to enable unidirectional password-free mutual trust when Oozie nodes are used to execute shell scripts of external nodes through SSH jobs.You have",
"product_code":"mrs",
"title":"Example of Mutual Trust Operations",
"uri":"mrs_01_1830.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"516"
},
{
"desc":"This section guides you to submit an Oozie job of the SSH type on the Hue web UI.Due to security risks, SSH jobs cannot be submitted by default. To use the SSH function, ",
"product_code":"mrs",
"title":"Submitting an SSH Job",
"uri":"mrs_01_1831.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"517"
},
{
"desc":"This section describes how to submit a Hive job on the Hue web UI.After the job is submitted, you can view the related contents of the job, such as the detailed informati",
"product_code":"mrs",
"title":"Submitting a Hive Script",
"uri":"mrs_01_2372.html",
"doc_type":"cmpntguide",
"p_code":"505",
"code":"518"
},
{
"desc":"This section describes how to submit a job of the periodic scheduling type on the Hue web UI.Required workflow jobs have been configured before the coordinator task is su",
"product_code":"mrs",
"title":"Submitting a Coordinator Periodic Scheduling Job",
"uri":"mrs_01_1840.html",
"doc_type":"cmpntguide",
"p_code":"503",
"code":"519"
},
{
"desc":"In the case that multiple scheduled jobs exist at the same time, you can manage the jobs in batches over the Bundle task. This section describes how to submit a job of th",
"product_code":"mrs",
"title":"Submitting a Bundle Batch Processing Job",
"uri":"mrs_01_1841.html",
"doc_type":"cmpntguide",
"p_code":"503",
"code":"520"
},
{
"desc":"After the jobs are submitted, you can view the execution status of a specific job on Hue.",
"product_code":"mrs",
"title":"Querying the Operation Results",
"uri":"mrs_01_1842.html",
"doc_type":"cmpntguide",
"p_code":"503",
"code":"521"
},
{
"desc":"Log path: The default storage paths of Oozie log files are as follows:Run log: /var/log/Bigdata/oozieAudit log: /var/log/Bigdata/audit/oozieLog archiving rule: Oozie logs",
"product_code":"mrs",
"title":"Oozie Log Overview",
"uri":"mrs_01_1843.html",
"doc_type":"cmpntguide",
"p_code":"494",
"code":"522"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About Oozie",
"uri":"mrs_01_1844.html",
"doc_type":"cmpntguide",
"p_code":"494",
"code":"523"
},
{
"desc":"Why are not Coordinator scheduled jobs executed on time on the Hue or Oozie client?Use UTC time. For example, set start=2016-12-20T09:00Z in job.properties file.",
"product_code":"mrs",
"title":"Oozie Scheduled Tasks Are Not Executed on Time",
"uri":"mrs_01_1846.html",
"doc_type":"cmpntguide",
"p_code":"523",
"code":"524"
},
{
"desc":"A new JAR package is uploaded to the /user/oozie/share/lib directory on HDFS. However, an error indicating that the class cannot be found is reported during task executio",
"product_code":"mrs",
"title":"Why Update of the share lib Directory of Oozie on HDFS Does Not Take Effect?",
"uri":"mrs_01_1847.html",
"doc_type":"cmpntguide",
"p_code":"523",
"code":"525"
},
{
"desc":"Check the job logs on Yarn. Run the command executed through Hive SQL using beeline to ensure that Hive is running properly.If error information such as \"classnotfoundExc",
"product_code":"mrs",
"title":"Common Oozie Troubleshooting Methods",
"uri":"mrs_01_24479.html",
"doc_type":"cmpntguide",
"p_code":"523",
"code":"526"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using OpenTSDB",
"uri":"mrs_01_0599.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"527"
},
{
"desc":"You can perform an interactive operation on an MRS cluster client. For a cluster with Kerberos authentication enabled, the user must belong to the opentsdb, hbase, opents",
"product_code":"mrs",
"title":"Using an MRS Client to Operate OpenTSDB Metric Data",
"uri":"mrs_01_0471.html",
"doc_type":"cmpntguide",
"p_code":"527",
"code":"528"
},
{
"desc":"For example, to write data of a metric named testdata, whose timestamp is 1524900185, value is true, tag is key and value, run the following command:<tsd_ip>: indicates t",
"product_code":"mrs",
"title":"Running the curl Command to Operate OpenTSDB",
"uri":"mrs_01_0472.html",
"doc_type":"cmpntguide",
"p_code":"527",
"code":"529"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Presto",
"uri":"mrs_01_0432.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"530"
},
{
"desc":"You can view the Presto statistics on the graphical Presto web UI. You are advised to use Google Chrome to access the Presto web UI because it cannot be accessed using In",
"product_code":"mrs",
"title":"Accessing the Presto Web UI",
"uri":"mrs_01_0433.html",
"doc_type":"cmpntguide",
"p_code":"530",
"code":"531"
},
{
"desc":"You can perform an interactive query on an MRS cluster client. For clusters with Kerberos authentication enabled, users who submit topologies must belong to the presto gr",
"product_code":"mrs",
"title":"Using a Client to Execute Query Statements",
"uri":"mrs_01_0434.html",
"doc_type":"cmpntguide",
"p_code":"530",
"code":"532"
},
{
"desc":"The Presto component has been installed in an MRS cluster.You have synchronized IAM users. (On the Dashboard page, click Synchronize on the right side of IAM User Sync to",
"product_code":"mrs",
"title":"Using Presto to Dump Data in DLF",
"uri":"mrs_01_0635.html",
"doc_type":"cmpntguide",
"p_code":"530",
"code":"533"
},
{
"desc":"MRS 3.x does not enable you to configure Presto permissions.By default, the Hive Catalog authorization of the Presto component is enabled in a security cluster. The Prest",
"product_code":"mrs",
"title":"Configuring Presto Permissions",
"uri":"mrs_01_0636.html",
"doc_type":"cmpntguide",
"p_code":"530",
"code":"534"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Ranger (MRS 1.9.2)",
"uri":"mrs_01_0761.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"535"
},
{
"desc":"Currently, only normal MRS 1.9.2 clusters support Ranger. Security clusters with Kerberos authentication enabled do not support Ranger.After the cluster is created, Range",
"product_code":"mrs",
"title":"Creating a Ranger Cluster",
"uri":"mrs_01_0763.html",
"doc_type":"cmpntguide",
"p_code":"535",
"code":"536"
},
{
"desc":"You can manage Ranger on the Ranger web UI.After logging in to the Ranger Web UI for the first time, change the password and keep it secure.Ranger UserSync is an importan",
"product_code":"mrs",
"title":"Accessing the Ranger Web UI and Synchronizing Unix Users to the Ranger Web UI",
"uri":"mrs_01_0764.html",
"doc_type":"cmpntguide",
"p_code":"535",
"code":"537"
},
{
"desc":"After an MRS cluster with Ranger installed is created, Hive and Impala access control is not integrated into Ranger. This section describes how to integrate Hive into Ran",
"product_code":"mrs",
"title":"Configuring Hive/Impala Access Permissions in Ranger",
"uri":"mrs_01_0765.html",
"doc_type":"cmpntguide",
"p_code":"535",
"code":"538"
},
{
"desc":"After an MRS cluster with Ranger installed is created, HBase access control is not integrated into Ranger. This section describes how to integrate HBase into Ranger.Addin",
"product_code":"mrs",
"title":"Configuring HBase Access Permissions in Ranger",
"uri":"mrs_01_0766.html",
"doc_type":"cmpntguide",
"p_code":"535",
"code":"539"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Ranger (MRS 3.x)",
"uri":"mrs_01_1849.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"540"
},
{
"desc":"Ranger provides a centralized permission management framework to implement fine-grained permission control on components such as HDFS, HBase, Hive, and Yarn. In addition,",
"product_code":"mrs",
"title":"Logging In to the Ranger Web UI",
"uri":"mrs_01_1850.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"541"
},
{
"desc":"This section guides you how to enable Ranger authentication. Ranger authentication is enabled by default in security mode and disabled by default in normal mode.If Enable",
"product_code":"mrs",
"title":"Enabling Ranger Authentication",
"uri":"mrs_01_2393.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"542"
},
{
"desc":"In the newly installed MRS cluster, Ranger is installed by default, with the Ranger authentication model enabled. The systemadministrator can set fine-grained security po",
"product_code":"mrs",
"title":"Configuring Component Permission Policies",
"uri":"mrs_01_1851.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"543"
},
{
"desc":"The systemadministrator can view audit logs of the Ranger running and the permission control after Ranger authentication is enabled on the Ranger web UI.",
"product_code":"mrs",
"title":"Viewing Ranger Audit Information",
"uri":"mrs_01_1852.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"544"
},
{
"desc":"Security zone can be configured using Ranger. Rangeradministrators can divide resources of each component into multiple security zones where administrators set security p",
"product_code":"mrs",
"title":"Configuring a Security Zone",
"uri":"mrs_01_1853.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"545"
},
{
"desc":"By default, the Ranger data source of the security cluster can be accessed by FusionInsight Manager LDAP users. By default, the Ranger data source of a common cluster can",
"product_code":"mrs",
"title":"Changing the Ranger Data Source to LDAP for a Normal Cluster",
"uri":"mrs_01_2394.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"546"
},
{
"desc":"You can view Ranger permission settings, such as users, user groups, and roles.Users: displays all user information synchronized from LDAP or OS to Ranger.Groups: display",
"product_code":"mrs",
"title":"Viewing Ranger Permission Information",
"uri":"mrs_01_1854.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"547"
},
{
"desc":"The Rangeradministrator can use Ranger to configure the read, write, and execution permissions on HDFS directories or files for HDFS users.The Ranger service has been ins",
"product_code":"mrs",
"title":"Adding a Ranger Access Permission Policy for HDFS",
"uri":"mrs_01_1856.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"548"
},
{
"desc":"Rangeradministrators can use Ranger to configure permissions on HBase tables, column families, and columns for HBase users.The Ranger service has been installed and is ru",
"product_code":"mrs",
"title":"Adding a Ranger Access Permission Policy for HBase",
"uri":"mrs_01_1857.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"549"
},
{
"desc":"The Rangeradministrator can use Ranger to set permissions for Hive users. The default administrator account of Hive is hive and the initial password is Hive@123.The Range",
"product_code":"mrs",
"title":"Adding a Ranger Access Permission Policy for Hive",
"uri":"mrs_01_1858.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"550"
},
{
"desc":"The Rangeradministrator can use Ranger to configure Yarn administrator permissions for Yarn users, allowing them to manage Yarn queue resources.The Ranger service has bee",
"product_code":"mrs",
"title":"Adding a Ranger Access Permission Policy for Yarn",
"uri":"mrs_01_1859.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"551"
},
{
"desc":"The Rangeradministrator can use Ranger to set permissions for Spark2x users.After Ranger authentication is enabled or disabled on Spark2x, you need to restart Spark2x.Dow",
"product_code":"mrs",
"title":"Adding a Ranger Access Permission Policy for Spark2x",
"uri":"mrs_01_1860.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"552"
},
{
"desc":"The Rangeradministrator can use Ranger to configure the read, write, and management permissions of the Kafka topic and the management permission of the cluster for the Ka",
"product_code":"mrs",
"title":"Adding a Ranger Access Permission Policy for Kafka",
"uri":"mrs_01_1861.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"553"
},
{
"desc":"The Rangeradministrator can use Ranger to set permissions for Storm users.The Ranger service has been installed and is running properly.You have created users, user group",
"product_code":"mrs",
"title":"Adding a Ranger Access Permission Policy for Storm",
"uri":"mrs_01_1863.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"554"
},
{
"desc":"Log path: The default storage path of Ranger logs is /var/log/Bigdata/ranger/Role name.RangerAdmin: /var/log/Bigdata/ranger/rangeradmin (run logs)TagSync: /var/log/Bigdat",
"product_code":"mrs",
"title":"Ranger Log Overview",
"uri":"mrs_01_1865.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"555"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About Ranger",
"uri":"mrs_01_1866.html",
"doc_type":"cmpntguide",
"p_code":"540",
"code":"556"
},
{
"desc":"During cluster installation, Ranger fails to be started, and the error message \"ERROR: cannot drop sequence X_POLICY_REF_ACCESS_TYPE_SEQ \" is displayed in the task list o",
"product_code":"mrs",
"title":"Why Ranger Startup Fails During the Cluster Installation?",
"uri":"mrs_01_1867.html",
"doc_type":"cmpntguide",
"p_code":"556",
"code":"557"
},
{
"desc":"How do I determine whether the Ranger authentication is enabled for a service that supports the authentication?Log in to FusionInsight Manager and choose Cluster > Servic",
"product_code":"mrs",
"title":"How Do I Determine Whether the Ranger Authentication Is Used for a Service?",
"uri":"mrs_01_1868.html",
"doc_type":"cmpntguide",
"p_code":"556",
"code":"558"
},
{
"desc":"When a new user logs in to Ranger, why is the 401 error reported after the password is changed?The UserSync synchronizes user data at an interval of 5 minutes by default.",
"product_code":"mrs",
"title":"Why Cannot a New User Log In to Ranger After Changing the Password?",
"uri":"mrs_01_2300.html",
"doc_type":"cmpntguide",
"p_code":"556",
"code":"559"
},
{
"desc":"When a Ranger access permission policy is added for HBase and wildcard characters are used to search for an existing HBase table in the policy, the table cannot be found.",
"product_code":"mrs",
"title":"When an HBase Policy Is Added or Modified on Ranger, Wildcard Characters Cannot Be Used to Search for Existing HBase Tables",
"uri":"mrs_01_2355.html",
"doc_type":"cmpntguide",
"p_code":"556",
"code":"560"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Spark",
"uri":"mrs_01_0589.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"561"
},
{
"desc":"This section applies to versions earlier than MRS 3.x.",
"product_code":"mrs",
"title":"Precautions",
"uri":"mrs_01_1925.html",
"doc_type":"cmpntguide",
"p_code":"561",
"code":"562"
},
{
"desc":"This section describes how to use Spark to submit a SparkPi job. SparkPi, a typical Spark job, is used to calculate the value of Pi (π).Multiple open-source Spark sample ",
"product_code":"mrs",
"title":"Getting Started with Spark",
"uri":"mrs_01_0366.html",
"doc_type":"cmpntguide",
"p_code":"561",
"code":"563"
},
{
"desc":"Spark provides the Spark SQL language that is similar to SQL to perform operations on structured data. This section describes how to use Spark SQL from scratch. Create a ",
"product_code":"mrs",
"title":"Getting Started with Spark SQL",
"uri":"mrs_01_0367.html",
"doc_type":"cmpntguide",
"p_code":"561",
"code":"564"
},
{
"desc":"After an MRS cluster is created, you can create and submit jobs on the client. The client can be installed on nodes inside or outside the cluster.Nodes inside the cluster",
"product_code":"mrs",
"title":"Using the Spark Client",
"uri":"mrs_01_1183.html",
"doc_type":"cmpntguide",
"p_code":"561",
"code":"565"
},
{
"desc":"The Spark web UI is used to view the running status of Spark applications. Google Chrome is recommended for better user experience.Spark has two web UIs.Spark UI: used to",
"product_code":"mrs",
"title":"Accessing the Spark Web UI",
"uri":"mrs_01_0767.html",
"doc_type":"cmpntguide",
"p_code":"561",
"code":"566"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Interconnecting Spark with OpenTSDB",
"uri":"mrs_01_0584.html",
"doc_type":"cmpntguide",
"p_code":"561",
"code":"567"
},
{
"desc":"MRS Spark can be used to access the data source of OpenTSDB, create and associate tables in the Spark, and query and insert the OpenTSDB data.Use the CREATE TABLE command",
"product_code":"mrs",
"title":"Creating a Table and Associating It with OpenTSDB",
"uri":"mrs_01_0585.html",
"doc_type":"cmpntguide",
"p_code":"567",
"code":"568"
},
{
"desc":"Run the INSERT INTO statement to insert the data in the table to the associated OpenTSDB metric.The inserted data cannot be null. If the inserted data is the same as the ",
"product_code":"mrs",
"title":"Inserting Data to the OpenTSDB Table",
"uri":"mrs_01_0586.html",
"doc_type":"cmpntguide",
"p_code":"567",
"code":"569"
},
{
"desc":"This SELECT command is used to query data in an OpenTSDB table.The to-be-queried table must exist. Otherwise, an error is reported.The value of tagv must exist. Otherwise",
"product_code":"mrs",
"title":"Querying an OpenTSDB Table",
"uri":"mrs_01_0587.html",
"doc_type":"cmpntguide",
"p_code":"567",
"code":"570"
},
{
"desc":"By default, OpenTSDB connects to the local TSD process of the node where the Spark executor resides. In MRS, use the default configuration.Run the set statement in spark-",
"product_code":"mrs",
"title":"Modifying the Default Configuration Data",
"uri":"mrs_01_0588.html",
"doc_type":"cmpntguide",
"p_code":"567",
"code":"571"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Spark2x",
"uri":"mrs_01_1926.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"572"
},
{
"desc":"This section applies to MRS 3.x or later clusters.",
"product_code":"mrs",
"title":"Precautions",
"uri":"mrs_01_1927.html",
"doc_type":"cmpntguide",
"p_code":"572",
"code":"573"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Basic Operation",
"uri":"mrs_01_1928.html",
"doc_type":"cmpntguide",
"p_code":"572",
"code":"574"
},
{
"desc":"This section describes how to use Spark2x to submit Spark applications, including Spark Core and Spark SQL. Spark Core is the kernel module of Spark. It executes tasks an",
"product_code":"mrs",
"title":"Getting Started",
"uri":"mrs_01_1929.html",
"doc_type":"cmpntguide",
"p_code":"574",
"code":"575"
},
{
"desc":"This section describes how to quickly configure common parameters and lists parameters that are not recommended to be modified when Spark2x is used.Some parameters have b",
"product_code":"mrs",
"title":"Configuring Parameters Rapidly",
"uri":"mrs_01_1930.html",
"doc_type":"cmpntguide",
"p_code":"574",
"code":"576"
},
{
"desc":"This section describes common configuration items used in Spark. Subsections are divided by feature so that you can quickly find required configuration items. If you use ",
"product_code":"mrs",
"title":"Common Parameters",
"uri":"mrs_01_1931.html",
"doc_type":"cmpntguide",
"p_code":"574",
"code":"577"
},
{
"desc":"Spark on HBase allows users to query HBase tables in Spark SQL and to store data for HBase tables by using the Beeline tool. You can use HBase APIs to create, read data f",
"product_code":"mrs",
"title":"Spark on HBase Overview and Basic Applications",
"uri":"mrs_01_1933.html",
"doc_type":"cmpntguide",
"p_code":"574",
"code":"578"
},
{
"desc":"Spark on HBase V2 allows users to query HBase tables in Spark SQL and to store data for HBase tables by using the Beeline tool. You can use HBase APIs to create, read dat",
"product_code":"mrs",
"title":"Spark on HBase V2 Overview and Basic Applications",
"uri":"mrs_01_1934.html",
"doc_type":"cmpntguide",
"p_code":"574",
"code":"579"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"SparkSQL Permission Management(Security Mode)",
"uri":"mrs_01_1935.html",
"doc_type":"cmpntguide",
"p_code":"574",
"code":"580"
},
{
"desc":"Similar to Hive, Spark SQL is a data warehouse framework built on Hadoop, providing storage of structured data like structured query language (SQL).MRS supports users, us",
"product_code":"mrs",
"title":"Spark SQL Permissions",
"uri":"mrs_01_1936.html",
"doc_type":"cmpntguide",
"p_code":"580",
"code":"581"
},
{
"desc":"This section describes how to create and configure a SparkSQL role on Manager as the system administrator. The Spark SQL role can be configured with the Sparkadministrato",
"product_code":"mrs",
"title":"Creating a Spark SQL Role",
"uri":"mrs_01_1937.html",
"doc_type":"cmpntguide",
"p_code":"580",
"code":"582"
},
{
"desc":"You can configure related permissions if you need to access tables or databases created by other users. SparkSQL supports column-based permission control. If a user needs",
"product_code":"mrs",
"title":"Configuring Permissions for SparkSQL Tables, Columns, and Databases",
"uri":"mrs_01_1938.html",
"doc_type":"cmpntguide",
"p_code":"580",
"code":"583"
},
{
"desc":"SparkSQL may need to be associated with other components. For example, Spark on HBase requires HBase permissions. The following describes how to associate SparkSQL with H",
"product_code":"mrs",
"title":"Configuring Permissions for SparkSQL to Use Other Components",
"uri":"mrs_01_1939.html",
"doc_type":"cmpntguide",
"p_code":"580",
"code":"584"
},
{
"desc":"This section describes how to configure SparkSQL permission management functions (client configuration is similar to server configuration). To enable table permission, ad",
"product_code":"mrs",
"title":"Configuring the Client and Server",
"uri":"mrs_01_1940.html",
"doc_type":"cmpntguide",
"p_code":"580",
"code":"585"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Scenario-Specific Configuration",
"uri":"mrs_01_1941.html",
"doc_type":"cmpntguide",
"p_code":"574",
"code":"586"
},
{
"desc":"In this mode, multiple ThriftServers coexist in the cluster and the client can randomly connect any ThriftServer to perform service operations. When one or multiple Thrif",
"product_code":"mrs",
"title":"Configuring Multi-active Instance Mode",
"uri":"mrs_01_1942.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"587"
},
{
"desc":"In multi-tenant mode, JDBCServers are bound with tenants. Each tenant corresponds to one or more JDBCServers, and a JDBCServer provides services for only one tenant. Diff",
"product_code":"mrs",
"title":"Configuring the Multi-tenant Mode",
"uri":"mrs_01_1943.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"588"
},
{
"desc":"When using a cluster, if you want to switch between multi-active instance mode and multi-tenant mode, the following configurations are required.Switch from multi-tenant m",
"product_code":"mrs",
"title":"Configuring the Switchover Between the Multi-active Instance Mode and the Multi-tenant Mode",
"uri":"mrs_01_1944.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"589"
},
{
"desc":"Functions such as UI, EventLog, and dynamic resource scheduling in Spark are implemented through event transfer. Events include SparkListenerJobStart and SparkListenerJob",
"product_code":"mrs",
"title":"Configuring the Size of the Event Queue",
"uri":"mrs_01_1945.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"590"
},
{
"desc":"When the executor off-heap memory is too small, or processes with higher priority preempt resources, the physical memory usage will exceed the maximal value. To prevent t",
"product_code":"mrs",
"title":"Configuring Executor Off-Heap Memory",
"uri":"mrs_01_1947.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"591"
},
{
"desc":"A large amount of memory is required when Spark SQL executes a query, especially during Aggregate and Join operations. If the memory is limited, OutOfMemoryError may occu",
"product_code":"mrs",
"title":"Enhancing Stability in a Limited Memory Condition",
"uri":"mrs_01_1948.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"592"
},
{
"desc":"When yarn.log-aggregation-enable of Yarn is set to true, the container log aggregation function is enabled. Log aggregation indicates that after applications are run on Y",
"product_code":"mrs",
"title":"Viewing Aggregated Container Logs on the Web UI",
"uri":"mrs_01_1949.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"593"
},
{
"desc":"Values of some configuration parameters of Spark client vary depending on its work mode (YARN-Client or YARN-Cluster). If you switch Spark client between different modes ",
"product_code":"mrs",
"title":"Configuring Environment Variables in Yarn-Client and Yarn-Cluster Modes",
"uri":"mrs_01_1951.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"594"
},
{
"desc":"By default, SparkSQL divides data into 200 data blocks during shuffle. In data-intensive scenarios, each data block may have excessive size. If a single data block of a t",
"product_code":"mrs",
"title":"Configuring the Default Number of Data Blocks Divided by SparkSQL",
"uri":"mrs_01_1952.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"595"
},
{
"desc":"The compression format of a Parquet table can be configured as follows:If the Parquet table is a partitioned one, set the parquet.compression parameter of the Parquet tab",
"product_code":"mrs",
"title":"Configuring the Compression Format of a Parquet Table",
"uri":"mrs_01_1953.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"596"
},
{
"desc":"In Spark WebUI, the Executor page can display information about Lost Executor. Executors are dynamically recycled. If the JDBCServer tasks are large, there may be too man",
"product_code":"mrs",
"title":"Configuring the Number of Lost Executors Displayed in WebUI",
"uri":"mrs_01_1954.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"597"
},
{
"desc":"In some scenarios, to locate problems or check information by changing the log level,you can add the -Dlog4j.configuration.watch=true parameter to the JVM parameter of a ",
"product_code":"mrs",
"title":"Setting the Log Level Dynamically",
"uri":"mrs_01_1957.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"598"
},
{
"desc":"When Spark is used to submit tasks, the driver obtains tokens from HBase by default. To access HBase, you need to configure the jaas.conf file for security authentication",
"product_code":"mrs",
"title":"Configuring Whether Spark Obtains HBase Tokens",
"uri":"mrs_01_1958.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"599"
},
{
"desc":"If the Spark Streaming application is connected to Kafka, after the Spark Streaming application is terminated abnormally and restarted from the checkpoint, the system pre",
"product_code":"mrs",
"title":"Configuring LIFO for Kafka",
"uri":"mrs_01_1959.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"600"
},
{
"desc":"When the Spark Streaming application is connected to Kafka and the application is restarted, the application reads data from Kafka based on the last read topic offset and",
"product_code":"mrs",
"title":"Configuring Reliability for Connected Kafka",
"uri":"mrs_01_1960.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"601"
},
{
"desc":"When a query statement is executed, the returned result may be large (containing more than 100,000 records). In this case, JDBCServer out of memory (OOM) may occur. There",
"product_code":"mrs",
"title":"Configuring Streaming Reading of Driver Execution Results",
"uri":"mrs_01_1961.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"602"
},
{
"desc":"When you perform the select query in Hive partitioned tables, the FileNotFoundException exception is displayed if a specified partition path does not exist in HDFS. To av",
"product_code":"mrs",
"title":"Filtering Partitions without Paths in Partitioned Tables",
"uri":"mrs_01_1962.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"603"
},
{
"desc":"Users need to implement security protection for Spark2x web UI when some data on the UI cannot be viewed by other users. Once a user attempts to log in to the UI, Spark2x",
"product_code":"mrs",
"title":"Configuring Spark2x Web UI ACLs",
"uri":"mrs_01_1963.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"604"
},
{
"desc":"ORC is a column-based storage format in the Hadoop ecosystem. It originates from Apache Hive and is used to reduce the Hadoop data storage space and accelerate the Hive q",
"product_code":"mrs",
"title":"Configuring Vector-based ORC Data Reading",
"uri":"mrs_01_1964.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"605"
},
{
"desc":"In earlier versions, the predicate for pruning Hive table partitions is pushed down. Only comparison expressions between column names and integers or character strings ca",
"product_code":"mrs",
"title":"Broaden Support for Hive Partition Pruning Predicate Pushdown",
"uri":"mrs_01_1965.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"606"
},
{
"desc":"In earlier versions, when the insert overwrite syntax is used to overwrite partition tables, only partitions with specified expressions are matched, and partitions withou",
"product_code":"mrs",
"title":"Hive Dynamic Partition Overwriting Syntax",
"uri":"mrs_01_1966.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"607"
},
{
"desc":"The execution plan for SQL statements is optimized in Spark. Common optimization rules are heuristic optimization rules. Heuristic optimization rules are provided based o",
"product_code":"mrs",
"title":"Configuring the Column Statistics Histogram to Enhance the CBO Accuracy",
"uri":"mrs_01_1967.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"608"
},
{
"desc":"JobHistory can use local disks to cache the historical data of Spark applications to prevent the JobHistory memory from loading a large amount of application data, reduci",
"product_code":"mrs",
"title":"Configuring Local Disk Cache for JobHistory",
"uri":"mrs_01_1969.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"609"
},
{
"desc":"The Spark SQL adaptive execution feature enables Spark SQL to optimize subsequent execution processes based on intermediate results to improve overall execution efficienc",
"product_code":"mrs",
"title":"Configuring Spark SQL to Enable the Adaptive Execution Feature",
"uri":"mrs_01_1970.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"610"
},
{
"desc":"When the event log mode is enabled for Spark, that is, spark.eventLog.enabled is set to true, events are written to a configured log file to record the program running pr",
"product_code":"mrs",
"title":"Configuring Event Log Rollover",
"uri":"mrs_01_24170.html",
"doc_type":"cmpntguide",
"p_code":"586",
"code":"611"
},
{
"desc":"When Ranger is used as the permission management service of Spark SQL, the certificate in the cluster is required for accessing RangerAdmin. If you use a third-party JDK ",
"product_code":"mrs",
"title":"Adapting to the Third-party JDK When Ranger Is Used",
"uri":"mrs_01_2317.html",
"doc_type":"cmpntguide",
"p_code":"574",
"code":"612"
},
{
"desc":"Log paths:Executor run log: ${BIGDATA_DATA_HOME}/hadoop/data${i}/nm/containerlogs/application_${appid}/container_{$contid}The logs of running tasks are stored in the prec",
"product_code":"mrs",
"title":"Spark2x Logs",
"uri":"mrs_01_1971.html",
"doc_type":"cmpntguide",
"p_code":"572",
"code":"613"
},
{
"desc":"Container logs of running Spark applications are distributed on multiple nodes. This section describes how to quickly obtain container logs.You can run the yarn logs comm",
"product_code":"mrs",
"title":"Obtaining Container Logs of a Running Spark Application",
"uri":"mrs_01_1972.html",
"doc_type":"cmpntguide",
"p_code":"572",
"code":"614"
},
{
"desc":"In a large-scale Hadoop production cluster, HDFS metadata is stored in the NameNode memory, and the cluster scale is restricted by the memory limitation of each NameNode.",
"product_code":"mrs",
"title":"Small File Combination Tools",
"uri":"mrs_01_1973.html",
"doc_type":"cmpntguide",
"p_code":"572",
"code":"615"
},
{
"desc":"The first query of CarbonData is slow, which may cause a delay for nodes that have high requirements on real-time performance.The tool provides the following functions:Pr",
"product_code":"mrs",
"title":"Using CarbonData for First Query",
"uri":"mrs_01_2362.html",
"doc_type":"cmpntguide",
"p_code":"572",
"code":"616"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Spark2x Performance Tuning",
"uri":"mrs_01_1974.html",
"doc_type":"cmpntguide",
"p_code":"572",
"code":"617"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Spark Core Tuning",
"uri":"mrs_01_1975.html",
"doc_type":"cmpntguide",
"p_code":"617",
"code":"618"
},
{
"desc":"Spark supports the following types of serialization:JavaSerializerKryoSerializerData serialization affects the Spark application performance. In specific data format, Kry",
"product_code":"mrs",
"title":"Data Serialization",
"uri":"mrs_01_1976.html",
"doc_type":"cmpntguide",
"p_code":"618",
"code":"619"
},
{
"desc":"Spark is a memory-based computing frame. If the memory is insufficient during computing, the Spark execution efficiency will be adversely affected. You can determine whet",
"product_code":"mrs",
"title":"Optimizing Memory Configuration",
"uri":"mrs_01_1977.html",
"doc_type":"cmpntguide",
"p_code":"618",
"code":"620"
},
{
"desc":"The degree of parallelism (DOP) specifies the number of tasks to be executed concurrently. It determines the number of data blocks after the shuffle operation. Configure ",
"product_code":"mrs",
"title":"Setting the DOP",
"uri":"mrs_01_1978.html",
"doc_type":"cmpntguide",
"p_code":"618",
"code":"621"
},
{
"desc":"Broadcast distributes data sets to each node. It allows data to be obtained locally when a data set is needed during a Spark task. If broadcast is not used, data serializ",
"product_code":"mrs",
"title":"Using Broadcast Variables",
"uri":"mrs_01_1979.html",
"doc_type":"cmpntguide",
"p_code":"618",
"code":"622"
},
{
"desc":"When the Spark system runs applications that contain a shuffle process, an executor process also writes shuffle data and provides shuffle data for other executors in addi",
"product_code":"mrs",
"title":"Using the external shuffle service to improve performance",
"uri":"mrs_01_1980.html",
"doc_type":"cmpntguide",
"p_code":"618",
"code":"623"
},
{
"desc":"Resources are a key factor that affects Spark execution efficiency. When a long-running service (such as the JDBCServer) is allocated with multiple executors without task",
"product_code":"mrs",
"title":"Configuring Dynamic Resource Scheduling in Yarn Mode",
"uri":"mrs_01_1981.html",
"doc_type":"cmpntguide",
"p_code":"618",
"code":"624"
},
{
"desc":"There are three processes in Spark on Yarn mode: driver, ApplicationMaster, and executor. The Driver and Executor handle the scheduling and running of the task. The Appli",
"product_code":"mrs",
"title":"Configuring Process Parameters",
"uri":"mrs_01_1982.html",
"doc_type":"cmpntguide",
"p_code":"618",
"code":"625"
},
{
"desc":"Optimal program structure helps increase execution efficiency. During application programming, avoid shuffle operations and combine narrow-dependency operations.This topi",
"product_code":"mrs",
"title":"Designing the Direction Acyclic Graph (DAG)",
"uri":"mrs_01_1983.html",
"doc_type":"cmpntguide",
"p_code":"618",
"code":"626"
},
{
"desc":"If the overhead of each record is high, for example:Use mapPartitions to calculate data by partition.Use mapPartitions to flexibly operate data. For example, to calculate",
"product_code":"mrs",
"title":"Experience",
"uri":"mrs_01_1984.html",
"doc_type":"cmpntguide",
"p_code":"618",
"code":"627"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Spark SQL and DataFrame Tuning",
"uri":"mrs_01_1985.html",
"doc_type":"cmpntguide",
"p_code":"617",
"code":"628"
},
{
"desc":"When two tables are joined in Spark SQL, the broadcast function (see section \"Using Broadcast Variables\") can be used to broadcast tables to each node. This minimizes shu",
"product_code":"mrs",
"title":"Optimizing the Spark SQL Join Operation",
"uri":"mrs_01_1986.html",
"doc_type":"cmpntguide",
"p_code":"628",
"code":"629"
},
{
"desc":"When multiple tables are joined in Spark SQL, skew occurs in join keys and the data volume in some Hash buckets is much higher than that in other buckets. As a result, so",
"product_code":"mrs",
"title":"Improving Spark SQL Calculation Performance Under Data Skew",
"uri":"mrs_01_1987.html",
"doc_type":"cmpntguide",
"p_code":"628",
"code":"630"
},
{
"desc":"A Spark SQL table may have many small files (far smaller than an HDFS block), each of which maps to a partition on the Spark by default. In other words, each small file i",
"product_code":"mrs",
"title":"Optimizing Spark SQL Performance in the Small File Scenario",
"uri":"mrs_01_1988.html",
"doc_type":"cmpntguide",
"p_code":"628",
"code":"631"
},
{
"desc":"The INSERT...SELECT operation needs to be optimized if any of the following conditions is true:Many small files need to be queried.A few large files need to be queried.Th",
"product_code":"mrs",
"title":"Optimizing the INSERT...SELECT Operation",
"uri":"mrs_01_1989.html",
"doc_type":"cmpntguide",
"p_code":"628",
"code":"632"
},
{
"desc":"Multiple clients can be connected to JDBCServer at the same time. However, if the number of concurrent tasks is too large, the default configuration of JDBCServer must be",
"product_code":"mrs",
"title":"Multiple JDBC Clients Concurrently Connecting to JDBCServer",
"uri":"mrs_01_1990.html",
"doc_type":"cmpntguide",
"p_code":"628",
"code":"633"
},
{
"desc":"When SparkSQL inserts data to dynamic partitioned tables, the more partitions there are, the more HDFS files a single task generates and the more memory metadata occupies",
"product_code":"mrs",
"title":"Optimizing Memory when Data Is Inserted into Dynamic Partitioned Tables",
"uri":"mrs_01_1992.html",
"doc_type":"cmpntguide",
"p_code":"628",
"code":"634"
},
{
"desc":"A Spark SQL table may have many small files (far smaller than an HDFS block), each of which maps to a partition on the Spark by default. In other words, each small file i",
"product_code":"mrs",
"title":"Optimizing Small Files",
"uri":"mrs_01_1995.html",
"doc_type":"cmpntguide",
"p_code":"628",
"code":"635"
},
{
"desc":"Spark SQL supports hash aggregate algorithm. Namely, use fast aggregate hashmap as cache to improve aggregate performance. The hashmap replaces the previous ColumnarBatch",
"product_code":"mrs",
"title":"Optimizing the Aggregate Algorithms",
"uri":"mrs_01_1996.html",
"doc_type":"cmpntguide",
"p_code":"628",
"code":"636"
},
{
"desc":"Save the partition information about the datasource table to the Metastore and process partition information in the Metastore.Optimize the datasource tables, support synt",
"product_code":"mrs",
"title":"Optimizing Datasource Tables",
"uri":"mrs_01_1997.html",
"doc_type":"cmpntguide",
"p_code":"628",
"code":"637"
},
{
"desc":"Spark SQL supports rule-based optimization by default. However, the rule-based optimization cannot ensure that Spark selects the optimal query plan. Cost-Based Optimizer ",
"product_code":"mrs",
"title":"Merging CBO",
"uri":"mrs_01_1998.html",
"doc_type":"cmpntguide",
"p_code":"628",
"code":"638"
},
{
"desc":"This section describes how to enable or disable the query optimization for inter-source complex SQL.(Optional) Prepare for connecting to the MPPDB data source.If the data",
"product_code":"mrs",
"title":"Optimizing SQL Query of Data of Multiple Sources",
"uri":"mrs_01_1999.html",
"doc_type":"cmpntguide",
"p_code":"628",
"code":"639"
},
{
"desc":"This section describes the optimization suggestions for SQL statements in multi-level nesting and hybrid join scenarios.The following provides an example of complex query",
"product_code":"mrs",
"title":"SQL Optimization for Multi-level Nesting and Hybrid Join",
"uri":"mrs_01_2000.html",
"doc_type":"cmpntguide",
"p_code":"628",
"code":"640"
},
{
"desc":"Streaming is a mini-batch streaming processing framework that features second-level delay and high throughput. To optimize Streaming is to improve its throughput while ma",
"product_code":"mrs",
"title":"Spark Streaming Tuning",
"uri":"mrs_01_2001.html",
"doc_type":"cmpntguide",
"p_code":"617",
"code":"641"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About Spark2x",
"uri":"mrs_01_2002.html",
"doc_type":"cmpntguide",
"p_code":"572",
"code":"642"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Spark Core",
"uri":"mrs_01_2003.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"643"
},
{
"desc":"How do I view the aggregated container logs on the page when the log aggregation function is enabled on YARN?For details, see Viewing Aggregated Container Logs on the Web",
"product_code":"mrs",
"title":"How Do I View Aggregated Spark Application Logs?",
"uri":"mrs_01_2004.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"644"
},
{
"desc":"Communication between ApplicationMaster and ResourceManager remains abnormal for a long time. Why is the driver return code inconsistent with application status on Resour",
"product_code":"mrs",
"title":"Why Is the Return Code of Driver Inconsistent with Application State Displayed on ResourceManager WebUI?",
"uri":"mrs_01_2005.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"645"
},
{
"desc":"Why cannot exit the Driver process after running the yarn application -kill applicationID command to stop the Spark Streaming application?Running the yarn application -ki",
"product_code":"mrs",
"title":"Why Cannot Exit the Driver Process?",
"uri":"mrs_01_2006.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"646"
},
{
"desc":"On a large cluster of 380 nodes, run the ScalaSort test case in the HiBench test that runs the 29T data, and configure Executor as --executor-cores 4. The following abnor",
"product_code":"mrs",
"title":"Why Does FetchFailedException Occur When the Network Connection Is Timed out",
"uri":"mrs_01_2007.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"647"
},
{
"desc":"How to configure the event queue size if the following Driver log information is displayed indicating that the event queue overflows?Common applicationsDropping SparkList",
"product_code":"mrs",
"title":"How to Configure Event Queue Size If Event Queue Overflows?",
"uri":"mrs_01_2008.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"648"
},
{
"desc":"During Spark application execution, if the driver fails to connect to ResourceManager, the following error is reported and it does not exit for a long time. What can I do",
"product_code":"mrs",
"title":"What Can I Do If the getApplicationReport Exception Is Recorded in Logs During Spark Application Execution and the Application Does Not Exit for a Long Time?",
"uri":"mrs_01_2009.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"649"
},
{
"desc":"When Spark executes an application, an error similar to the following is reported and the application ends. What can I do?Symptom: The value of spark.rpc.io.connectionTim",
"product_code":"mrs",
"title":"What Can I Do If \"Connection to ip:port has been quiet for xxx ms while there are outstanding requests\" Is Reported When Spark Executes an Application and the Application Ends?",
"uri":"mrs_01_2010.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"650"
},
{
"desc":"If the NodeManager is shut down with the Executor dynamic allocation enabled, the Executors on the node where the NodeManeger is shut down fail to be removed from the dri",
"product_code":"mrs",
"title":"Why Do Executors Fail to be Removed After the NodeManeger Is Shut Down?",
"uri":"mrs_01_2011.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"651"
},
{
"desc":"ExternalShuffle is enabled for the application that runs Spark. Task loss occurs in the application because the message \"java.lang.NullPointerException: Password cannot b",
"product_code":"mrs",
"title":"What Can I Do If the Message \"Password cannot be null if SASL is enabled\" Is Displayed?",
"uri":"mrs_01_2012.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"652"
},
{
"desc":"When inserting data into the dynamic partition table, a large number of shuffle files are damaged due to the disk disconnection, node error, and the like. In this case, w",
"product_code":"mrs",
"title":"What Should I Do If the Message \"Failed to CREATE_FILE\" Is Displayed in the Restarted Tasks When Data Is Inserted Into the Dynamic Partition Table?",
"uri":"mrs_01_2013.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"653"
},
{
"desc":"When Hash shuffle is used to run a job that consists of 1000000 map tasks x 100000 reduce tasks, run logs report many message failures and Executor heartbeat timeout, lea",
"product_code":"mrs",
"title":"Why Tasks Fail When Hash Shuffle Is Used?",
"uri":"mrs_01_2014.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"654"
},
{
"desc":"When the http(s)://<spark ip>:<spark port> mode is used to access the Spark JobHistory page, if the displayed Spark JobHistory page is not the page of FusionInsight Manag",
"product_code":"mrs",
"title":"What Can I Do If the Error Message \"DNS query failed\" Is Displayed When I Access the Aggregated Logs Page of Spark Applications?",
"uri":"mrs_01_2015.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"655"
},
{
"desc":"When I execute a 100 TB TPC-DS test suite in the JDBCServer mode, the \"Timeout waiting for task\" is displayed. As a result, shuffle fetch fails, the stage keeps retrying,",
"product_code":"mrs",
"title":"What Can I Do If Shuffle Fetch Fails Due to the \"Timeout Waiting for Task\" Exception?",
"uri":"mrs_01_2016.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"656"
},
{
"desc":"When I run Spark tasks with a large data volume, for example, 100 TB TPCDS test suite, why does the Stage retry due to Executor loss sometimes? The message \"Executor 532 ",
"product_code":"mrs",
"title":"Why Does the Stage Retry due to the Crash of the Executor?",
"uri":"mrs_01_2017.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"657"
},
{
"desc":"When more than 50 terabytes of data is shuffled, some executors fail to register shuffle services due to timeout. The shuffle tasks then fail. Why? The error log is as fo",
"product_code":"mrs",
"title":"Why Do the Executors Fail to Register Shuffle Services During the Shuffle of a Large Amount of Data?",
"uri":"mrs_01_2018.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"658"
},
{
"desc":"During the execution of Spark applications, if the YARN External Shuffle service is enabled and there are too many shuffle tasks, the java.lang.OutofMemoryError: Direct b",
"product_code":"mrs",
"title":"Why Does the Out of Memory Error Occur in NodeManager During the Execution of Spark Applications",
"uri":"mrs_01_2019.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"659"
},
{
"desc":"Execution of the sparkbench task (for example, Wordcount) of HiBench6 fails. The bench.log indicates that the Yarn task fails to be executed. The failure information disp",
"product_code":"mrs",
"title":"Why Does the Realm Information Fail to Be Obtained When SparkBench is Run on HiBench for the Cluster in Security Mode?",
"uri":"mrs_01_2021.html",
"doc_type":"cmpntguide",
"p_code":"643",
"code":"660"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Spark SQL and DataFrame",
"uri":"mrs_01_2022.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"661"
},
{
"desc":"Suppose that there is a table src(d1, d2, m) with the following data:The results for statement \"select d1, sum(d1) from src group by d1, d2 with rollup\" are shown as belo",
"product_code":"mrs",
"title":"What Do I have to Note When Using Spark SQL ROLLUP and CUBE?",
"uri":"mrs_01_2023.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"662"
},
{
"desc":"Why temporary tables of the previous database are displayed after the database is switched?Create a temporary DataSource table, for example:create temporary table ds_parq",
"product_code":"mrs",
"title":"Why Spark SQL Is Displayed as a Temporary Table in Different Databases?",
"uri":"mrs_01_2024.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"663"
},
{
"desc":"Is it possible to assign parameter values through Spark commands, in addition to through a user interface or a configuration file?Spark configuration options can be defin",
"product_code":"mrs",
"title":"How to Assign a Parameter Value in a Spark Command?",
"uri":"mrs_01_2025.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"664"
},
{
"desc":"The following error information is displayed when a new user creates a table using SparkSQL:When you create a table using Spark SQL, the interface of Hive is called by th",
"product_code":"mrs",
"title":"What Directory Permissions Do I Need to Create a Table Using SparkSQL?",
"uri":"mrs_01_2026.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"665"
},
{
"desc":"Why do I fail to delete the UDF using another service, for example, delete the UDF created by Hive using Spark SQL.The UDF can be created using any of the following servi",
"product_code":"mrs",
"title":"Why Do I Fail to Delete the UDF Using Another Service?",
"uri":"mrs_01_2027.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"666"
},
{
"desc":"Why cannot I query newly inserted data in a parquet Hive table using SparkSQL? This problem occurs in the following scenarios:For partitioned tables and non-partitioned t",
"product_code":"mrs",
"title":"Why Cannot I Query Newly Inserted Data in a Parquet Hive Table Using SparkSQL?",
"uri":"mrs_01_2028.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"667"
},
{
"desc":"What is cache table used for? Which point should I pay attention to while using cache table?Spark SQL caches tables into memory so that data can be directly read from mem",
"product_code":"mrs",
"title":"How to Use Cache Table?",
"uri":"mrs_01_2029.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"668"
},
{
"desc":"During the repartition operation, the number of blocks (spark.sql.shuffle.partitions) is set to 4,500, and the number of keys used by repartition exceeds 4,000. It is exp",
"product_code":"mrs",
"title":"Why Are Some Partitions Empty During Repartition?",
"uri":"mrs_01_2030.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"669"
},
{
"desc":"When the default configuration is used, 16 terabytes of text data fails to be converted into 4 terabytes of parquet data, and the error information below is displayed. Wh",
"product_code":"mrs",
"title":"Why Does 16 Terabytes of Text Data Fails to Be Converted into 4 Terabytes of Parquet Data?",
"uri":"mrs_01_2031.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"670"
},
{
"desc":"When the table name is set to table, why the error information similar to the following is displayed after the drop table table command or other command is run?The word t",
"product_code":"mrs",
"title":"Why the Operation Fails When the Table Name Is TABLE?",
"uri":"mrs_01_2033.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"671"
},
{
"desc":"When the analyze table statement is executed using spark-sql, the task is suspended and the information below is displayed. Why?When the statement is executed, the SQL st",
"product_code":"mrs",
"title":"Why Is a Task Suspended When the ANALYZE TABLE Statement Is Executed and Resources Are Insufficient?",
"uri":"mrs_01_2034.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"672"
},
{
"desc":"If I access a parquet table on which I do not have permission, why a job is run before \"Missing Privileges\" is displayed?The execution sequence of Spark SQL statement par",
"product_code":"mrs",
"title":"If I Access a parquet Table on Which I Do not Have Permission, Why a Job Is Run Before \"Missing Privileges\" Is Displayed?",
"uri":"mrs_01_2035.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"673"
},
{
"desc":"When do I fail to modify the metadata in the datasource and Spark on HBase table by running the Hive command?The current Spark version does not support modifying the meta",
"product_code":"mrs",
"title":"Why Do I Fail to Modify MetaData by Running the Hive Command?",
"uri":"mrs_01_2036.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"674"
},
{
"desc":"After successfully running Spark tasks with large data volume, for example, 2-TB TPCDS test suite, why is the abnormal stack information \"RejectedExecutionException\" disp",
"product_code":"mrs",
"title":"Why Is \"RejectedExecutionException\" Displayed When I Exit Spark SQL?",
"uri":"mrs_01_2037.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"675"
},
{
"desc":"During a health check, if the concurrent statements exceed the threshold of the thread pool, the health check statements fail to be executed, the health check program tim",
"product_code":"mrs",
"title":"What Should I Do If the JDBCServer Process is Mistakenly Killed During a Health Check?",
"uri":"mrs_01_2038.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"676"
},
{
"desc":"Why no result is found when 2016-6-30 is set in the date field as the filter condition?As shown in the following figure, trx_dte_par in the select count (*) from trxfintr",
"product_code":"mrs",
"title":"Why No Result Is found When 2016-6-30 Is Set in the Date Field as the Filter Condition?",
"uri":"mrs_01_2039.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"677"
},
{
"desc":"Why does the --hivevaroption I specified in the command for starting spark-beeline fail to take effect?In the V100R002C60 version, if I use the --hivevar <VAR_NAME>=<var_",
"product_code":"mrs",
"title":"Why Does the \"--hivevar\" Option I Specified in the Command for Starting spark-beeline Fail to Take Effect?",
"uri":"mrs_01_2040.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"678"
},
{
"desc":"In normal mode, when I create a temporary table or view in spark-beeline, the error message \"Permission denied\" is displayed, indicating that I have no permissions on the",
"product_code":"mrs",
"title":"Why Does the \"Permission denied\" Exception Occur When I Create a Temporary Table or View in Spark-beeline?",
"uri":"mrs_01_2041.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"679"
},
{
"desc":"When I run a complex SQL statement, for example, SQL statements with multiple layers of nesting statements and a single layer statement contains a large number of logic c",
"product_code":"mrs",
"title":"Why Is the \"Code of method ... grows beyond 64 KB\" Error Message Displayed When I Run Complex SQL Statements?",
"uri":"mrs_01_2042.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"680"
},
{
"desc":"When the driver memory is set to 10 GB and the 10 TB TPCDS test suites are continuously run in Beeline/JDBCServer mode, SQL statements fail to be executed due to insuffic",
"product_code":"mrs",
"title":"Why Is Memory Insufficient if 10 Terabytes of TPCDS Test Suites Are Consecutively Run in Beeline/JDBCServer Mode?",
"uri":"mrs_01_2043.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"681"
},
{
"desc":"Scenario 1I set up permanent functions using the add jar statement. After Beeline connects to different JDBCServer or  JDBCServer is restarted, I have to run the add jar ",
"product_code":"mrs",
"title":"Why Are Some Functions Not Available when Another JDBCServer Is Connected?",
"uri":"mrs_01_2044.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"682"
},
{
"desc":"When Spark2x accesses the DataSource table created by Spark1.5, a message is displayed indicating that schema information cannot be obtained. As a result, the table canno",
"product_code":"mrs",
"title":"Why Does Spark2x Have No Access to DataSource Tables Created by Spark1.5?",
"uri":"mrs_01_2046.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"683"
},
{
"desc":"Why does \"Failed to create ThriftService instance\" occur when spark beeline fails to run?Beeline logs are as follows:In addition, the \"Timed out waiting for client to con",
"product_code":"mrs",
"title":"Why Does Spark-beeline Fail to Run and Error Message \"Failed to create ThriftService instance\" Is Displayed?",
"uri":"mrs_01_2047.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"684"
},
{
"desc":"Why cannot I query newly inserted data in an ORC Hive table using Spark SQL? This problem occurs in the following scenarios:For partitioned tables and non-partitioned tab",
"product_code":"mrs",
"title":"Why Cannot I Query Newly Inserted Data in an ORC Hive Table Using Spark SQL?",
"uri":"mrs_01_24491.html",
"doc_type":"cmpntguide",
"p_code":"661",
"code":"685"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Spark Streaming",
"uri":"mrs_01_2048.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"686"
},
{
"desc":"After a Spark Streaming task is run and data is input, no processing result is displayed. Open the web page to view the Spark job execution status. The following figure s",
"product_code":"mrs",
"title":"What Can I Do If Spark Streaming Tasks Are Blocked?",
"uri":"mrs_01_2050.html",
"doc_type":"cmpntguide",
"p_code":"686",
"code":"687"
},
{
"desc":"When Spark Streaming tasks are running, the data processing performance does not improve significantly as the number of executors increases. What should I pay attention t",
"product_code":"mrs",
"title":"What Should I Pay Attention to When Optimizing Spark Streaming Task Parameters?",
"uri":"mrs_01_2051.html",
"doc_type":"cmpntguide",
"p_code":"686",
"code":"688"
},
{
"desc":"Change the validity period of the Kerberos ticket and HDFS token to 5 minutes, set dfs.namenode.delegation.token.renew-interval to a value less than 60 seconds, and submi",
"product_code":"mrs",
"title":"Why Does the Spark Streaming Application Fail to Be Submitted After the Token Validity Period Expires?",
"uri":"mrs_01_2052.html",
"doc_type":"cmpntguide",
"p_code":"686",
"code":"689"
},
{
"desc":"Spark Streaming application creates one input stream without output logic. The application fails to restart from checkpoint and an error will be shown like below:When Str",
"product_code":"mrs",
"title":"Why does Spark Streaming Application Fail to Restart from Checkpoint When It Creates an Input Stream Without Output Logic?",
"uri":"mrs_01_2053.html",
"doc_type":"cmpntguide",
"p_code":"686",
"code":"690"
},
{
"desc":"When the Kafka is restarted during the execution of the Spark Streaming application, the application cannot obtain the topic offset from the Kafka. As a result, the job f",
"product_code":"mrs",
"title":"Why Is the Input Size Corresponding to Batch Time on the Web UI Set to 0 Records When Kafka Is Restarted During Spark Streaming Running?",
"uri":"mrs_01_2054.html",
"doc_type":"cmpntguide",
"p_code":"686",
"code":"691"
},
{
"desc":"The job information obtained from the restful interface of an ended Spark application is incorrect: the value of numActiveTasks is negative, as shown in Figure 1:numActiv",
"product_code":"mrs",
"title":"Why the Job Information Obtained from the restful Interface of an Ended Spark Application Is Incorrect?",
"uri":"mrs_01_2055.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"692"
},
{
"desc":"In FusionInsight, the Spark application is run in yarn-client mode on the client. The following error occurs during the switch from the Yarn web UI to the application web",
"product_code":"mrs",
"title":"Why Cannot I Switch from the Yarn Web UI to the Spark Web UI?",
"uri":"mrs_01_2056.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"693"
},
{
"desc":"An error occurs when I access a Spark application page on the HistoryServer page.Check the HistoryServer logs. The \"FileNotFound\" exception is found. The related logs are",
"product_code":"mrs",
"title":"What Can I Do If an Error Occurs when I Access the Application Page Because the Application Cached by HistoryServer Is Recycled?",
"uri":"mrs_01_2057.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"694"
},
{
"desc":"When I run an application with an empty part file in HDFS with the log grouping function enabled, why is not the application displayed on the homepage of JobHistory?On th",
"product_code":"mrs",
"title":"Why Is not an Application Displayed When I Run the Application with the Empty Part File?",
"uri":"mrs_01_2058.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"695"
},
{
"desc":"The following code fails to be executed on spark-shell of Spark2x:In Spark2x, the duplicate field name of the join statement is checked. You need to modify the code to en",
"product_code":"mrs",
"title":"Why Does Spark2x Fail to Export a Table with the Same Field Name?",
"uri":"mrs_01_2059.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"696"
},
{
"desc":"Why JRE fatal error after running Spark application multiple times?When you run Spark application multiple times, JRE fatal error occurs and this is due to the problem wi",
"product_code":"mrs",
"title":"Why JRE fatal error after running Spark application multiple times?",
"uri":"mrs_01_2060.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"697"
},
{
"desc":"Occasionally, Internet Explorer 9, Explorer 10, or Explorer 11 fails to access the native Spark2x UI.Internet Explorer 9, Explorer 10, or Explorer 11 fails to access the ",
"product_code":"mrs",
"title":"\"This page can't be displayed\" Is Displayed When Internet Explorer Fails to Access the Native Spark2x UI",
"uri":"mrs_01_2061.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"698"
},
{
"desc":"There are two clusters, cluster 1 and cluster 2. How do I use Spark2x in cluster 1 to access HDFS, Hive, HBase, and Kafka components in cluster 2?Components in two cluste",
"product_code":"mrs",
"title":"How Does Spark2x Access External Cluster Components?",
"uri":"mrs_01_2062.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"699"
},
{
"desc":"Assume there is a data file path named /test_data_path. User A creates a foreign table named tableA for the directory, and user B creates a foreign table named tableB for",
"product_code":"mrs",
"title":"Why Does the Foreign Table Query Fail When Multiple Foreign Tables Are Created in the Same Directory?",
"uri":"mrs_01_2063.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"700"
},
{
"desc":"After a Spark application that contains a job with millions of tasks. After the application creation is complete, if you access the native page of the application in JobH",
"product_code":"mrs",
"title":"What Should I Do If the Native Page of an Application of Spark2x JobHistory Fails to Display During Access to the Page",
"uri":"mrs_01_2064.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"701"
},
{
"desc":"When the OBS ECS/BMS image cluster is connected, after spark-beeline is logged in, an error is reported when a location is specified to create a table on OBS.The permissi",
"product_code":"mrs",
"title":"Why Do I Fail to Create a Table in the Specified Location on OBS After Logging to spark-beeline?",
"uri":"mrs_01_2340.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"702"
},
{
"desc":"In some scenarios, the following exception occurs in the Spark shuffle phase:For JDBC:Log in to FusionInsight Manager, change the value of the JDBCServer parameter spark.",
"product_code":"mrs",
"title":"Spark Shuffle Exception Handling",
"uri":"mrs_01_24176.html",
"doc_type":"cmpntguide",
"p_code":"642",
"code":"703"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Sqoop",
"uri":"mrs_01_24453.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"704"
},
{
"desc":"Sqoop is an open-source tool for transferring data between Hadoop (Hive) and traditional databases (such as MySQL and PostgreSQL). It can transfer data from a relational ",
"product_code":"mrs",
"title":"Using Sqoop from Scratch",
"uri":"mrs_01_24454.html",
"doc_type":"cmpntguide",
"p_code":"704",
"code":"705"
},
{
"desc":"Sqoop is a tool designed for efficiently transmitting a large amount of data between Apache Hadoop and structured databases (such as relational databases). Customers need",
"product_code":"mrs",
"title":"Adapting Sqoop 1.4.7 to MRS 3.x Clusters",
"uri":"mrs_01_24455.html",
"doc_type":"cmpntguide",
"p_code":"704",
"code":"706"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Sqoop Commands and Parameters",
"uri":"mrs_01_24456.html",
"doc_type":"cmpntguide",
"p_code":"704",
"code":"707"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About Sqoop",
"uri":"mrs_01_24457.html",
"doc_type":"cmpntguide",
"p_code":"704",
"code":"708"
},
{
"desc":"What should I do if the QueryProvider class is unavailable?Search for the MRS client directory and save the following JAR packages to the lib directory of Sqoop.",
"product_code":"mrs",
"title":"What Should I Do If Class QueryProvider Is Unavailable?",
"uri":"mrs_01_24458.html",
"doc_type":"cmpntguide",
"p_code":"708",
"code":"709"
},
{
"desc":"Scenario 1: (import scenarios) Run the sqoop import command to extract the open source PostgreSQL to MRS HDFS or Hive.SymptomThe sqoop command can be executed to query Po",
"product_code":"mrs",
"title":"How Do I Do If PostgreSQL or GaussDB Fails to Connect?",
"uri":"mrs_01_24460.html",
"doc_type":"cmpntguide",
"p_code":"708",
"code":"710"
},
{
"desc":"What should I do if data failed to be synchronized to a Hive table on the OBS using hive-table?Change -hive-table to -hcatalog-table.",
"product_code":"mrs",
"title":"What Should I Do If Data Failed to Be Synchronized to a Hive Table on the OBS Using hive-table?",
"uri":"mrs_01_24461.html",
"doc_type":"cmpntguide",
"p_code":"708",
"code":"711"
},
{
"desc":"What should I do if data failed to be synchronized to the ORC or parquet table using hive-table and error message that contains the kite-sdk package name is displayed?Cha",
"product_code":"mrs",
"title":"What Should I Do If Data Failed to Be Synchronized to an ORC or Parquet Table Using hive-table?",
"uri":"mrs_01_24462.html",
"doc_type":"cmpntguide",
"p_code":"708",
"code":"712"
},
{
"desc":"What should I do if data failed to be synchronized using hive-table?Add the following content to the hive-site.xml file.",
"product_code":"mrs",
"title":"What Should I Do If Data Failed to Be Synchronized Using hive-table?",
"uri":"mrs_01_24463.html",
"doc_type":"cmpntguide",
"p_code":"708",
"code":"713"
},
{
"desc":"When the partition fields in a Hive parquet table are not of the string type, data in the table can be synchronized only using HCatalog. What should I do if the following",
"product_code":"mrs",
"title":"What Should I Do If Data Failed to Be Synchronized to a Hive Parquet Table Using HCatalog?",
"uri":"mrs_01_24464.html",
"doc_type":"cmpntguide",
"p_code":"708",
"code":"714"
},
{
"desc":"What should I do if the data type of fields timestamp and data is incorrect during data synchronization between Hive and MySQL?Forcibly convert the data type of the times",
"product_code":"mrs",
"title":"What Should I Do If the Data Type of Fields timestamp and data Is Incorrect During Data Synchronization Between Hive and MySQL?",
"uri":"mrs_01_24465.html",
"doc_type":"cmpntguide",
"p_code":"708",
"code":"715"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Storm",
"uri":"mrs_01_0380.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"716"
},
{
"desc":"You can submit and delete Storm topologies on the MRS cluster client.The MRS cluster client has been installed, for example, in the /opt/hadoopclient directory. The clien",
"product_code":"mrs",
"title":"Using Storm from Scratch",
"uri":"mrs_01_1045.html",
"doc_type":"cmpntguide",
"p_code":"716",
"code":"717"
},
{
"desc":"This section describes how to use the Storm client in an O&M scenario or service scenario.You have installed the client. For example, the installation directory is /opt/h",
"product_code":"mrs",
"title":"Using the Storm Client",
"uri":"mrs_01_2065.html",
"doc_type":"cmpntguide",
"p_code":"716",
"code":"718"
},
{
"desc":"You can submit Storm topologies on the cluster client to continuously process stream data. For clusters with Kerberos authentication enabled, users who submit topologies ",
"product_code":"mrs",
"title":"Submitting Storm Topologies on the Client",
"uri":"mrs_01_0381.html",
"doc_type":"cmpntguide",
"p_code":"716",
"code":"719"
},
{
"desc":"The Storm web UI provides a graphical interface for using Storm.The following information can be queried on the Storm web UI:Storm cluster summaryNimbus summaryTopology s",
"product_code":"mrs",
"title":"Accessing the Storm Web UI",
"uri":"mrs_01_0382.html",
"doc_type":"cmpntguide",
"p_code":"716",
"code":"720"
},
{
"desc":"You can manage Storm topologies on the Storm web UI. Users in the storm group can manage only the topology tasks submitted by themselves, while users in the stormadmin gr",
"product_code":"mrs",
"title":"Managing Storm Topologies",
"uri":"mrs_01_0383.html",
"doc_type":"cmpntguide",
"p_code":"716",
"code":"721"
},
{
"desc":"You can query topology logs to check the execution of a Storm topology in a worker process. To query the data processing logs of a topology, enable the Debug function whe",
"product_code":"mrs",
"title":"Querying Storm Topology Logs",
"uri":"mrs_01_0384.html",
"doc_type":"cmpntguide",
"p_code":"716",
"code":"722"
},
{
"desc":"This section applies to MRS 3.x or later.For details about how to set parameters, see Modifying Cluster Service Configuration Parameters.",
"product_code":"mrs",
"title":"Storm Common Parameters",
"uri":"mrs_01_1046.html",
"doc_type":"cmpntguide",
"p_code":"716",
"code":"723"
},
{
"desc":"This section applies to MRS 3.x or later.After submitting a topology task, a Storm service user must ensure that the task continuously runs. During topology running, the ",
"product_code":"mrs",
"title":"Configuring a Storm Service User Password Policy",
"uri":"mrs_01_1047.html",
"doc_type":"cmpntguide",
"p_code":"716",
"code":"724"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Migrating Storm Services to Flink",
"uri":"mrs_01_1048.html",
"doc_type":"cmpntguide",
"p_code":"716",
"code":"725"
},
{
"desc":"This section applies to MRS 3.x or later.From 0.10.0, Flink provides a set of APIs to smoothly migrate services compiled using Storm APIs to the Flink platform. This can ",
"product_code":"mrs",
"title":"Overview",
"uri":"mrs_01_1049.html",
"doc_type":"cmpntguide",
"p_code":"725",
"code":"726"
},
{
"desc":"This section describes how to convert and run a complete Storm topology developed using Storm API.<dependency>\n <groupId>org.apache.flink</groupId>\n <artifactId>fli",
"product_code":"mrs",
"title":"Completely Migrating Storm Services",
"uri":"mrs_01_1050.html",
"doc_type":"cmpntguide",
"p_code":"725",
"code":"727"
},
{
"desc":"This section describes how to embed Storm code in DataStream of Flink in embedded migration mode. For example, the code of Spout or Bolt compiled using Storm API is embed",
"product_code":"mrs",
"title":"Performing Embedded Service Migration",
"uri":"mrs_01_1051.html",
"doc_type":"cmpntguide",
"p_code":"725",
"code":"728"
},
{
"desc":"If the Storm services use the storm-hdfs or storm-hbase plug-in package for interconnection, you need to specify the following security parameters when migrating Storm se",
"product_code":"mrs",
"title":"Migrating Services of External Security Components Interconnected with Storm",
"uri":"mrs_01_1052.html",
"doc_type":"cmpntguide",
"p_code":"725",
"code":"729"
},
{
"desc":"This section applies to MRS 3.x or later.Log paths: The default paths of Storm log files are /var/log/Bigdata/storm/Role name (run logs) and /var/log/Bigdata/audit/storm/",
"product_code":"mrs",
"title":"Storm Log Introduction",
"uri":"mrs_01_1053.html",
"doc_type":"cmpntguide",
"p_code":"716",
"code":"730"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Performance Tuning",
"uri":"mrs_01_1054.html",
"doc_type":"cmpntguide",
"p_code":"716",
"code":"731"
},
{
"desc":"You can modify Storm parameters to improve Storm performance in specific service scenarios.This section applies to MRS 3.x or later.Modify the service configuration param",
"product_code":"mrs",
"title":"Storm Performance Tuning",
"uri":"mrs_01_1055.html",
"doc_type":"cmpntguide",
"p_code":"731",
"code":"732"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Tez",
"uri":"mrs_01_2067.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"733"
},
{
"desc":"This section applies to MRS 3.x or later clusters.",
"product_code":"mrs",
"title":"Precautions",
"uri":"mrs_01_2068.html",
"doc_type":"cmpntguide",
"p_code":"733",
"code":"734"
},
{
"desc":"On Manager, choose Cluster > Service > Tez > Configuration > All Configurations. Enter a parameter name in the search box.",
"product_code":"mrs",
"title":"Common Tez Parameters",
"uri":"mrs_01_2069.html",
"doc_type":"cmpntguide",
"p_code":"733",
"code":"735"
},
{
"desc":"Tez displays the Tez task execution process on a GUI. You can view the task execution details on the GUI.The TimelineServer instance of the Yarn service has been installe",
"product_code":"mrs",
"title":"Accessing TezUI",
"uri":"mrs_01_2070.html",
"doc_type":"cmpntguide",
"p_code":"733",
"code":"736"
},
{
"desc":"Log path: The default save path of Tez logs is /var/log/Bigdata/tez/role name.TezUI: /var/log/Bigdata/tez/tezui (run logs) and /var/log/Bigdata/audit/tez/tezui (audit log",
"product_code":"mrs",
"title":"Log Overview",
"uri":"mrs_01_2071.html",
"doc_type":"cmpntguide",
"p_code":"733",
"code":"737"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues",
"uri":"mrs_01_2072.html",
"doc_type":"cmpntguide",
"p_code":"733",
"code":"738"
},
{
"desc":"After a user logs in to Manager and switches to the Tez web UI, the submitted Tez tasks are not displayed.The Tez task data displayed on the Tez WebUI requires the suppor",
"product_code":"mrs",
"title":"TezUI Cannot Display Tez Task Execution Details",
"uri":"mrs_01_2073.html",
"doc_type":"cmpntguide",
"p_code":"738",
"code":"739"
},
{
"desc":"When a user logs in to Manager and switches to the Tez web UI, error 404 or 503 is displayed.The Tez web UI depends on the TimelineServer instance of Yarn. Therefore, Tim",
"product_code":"mrs",
"title":"Error Occurs When a User Switches to the Tez Web UI",
"uri":"mrs_01_2074.html",
"doc_type":"cmpntguide",
"p_code":"738",
"code":"740"
},
{
"desc":"A user logs in to the Tez web UI and clicks Logs, but the Yarn log page fails to be displayed and data cannot be loaded.Currently, the hostname is used for the access to ",
"product_code":"mrs",
"title":"Yarn Logs Cannot Be Viewed on the TezUI Page",
"uri":"mrs_01_2075.html",
"doc_type":"cmpntguide",
"p_code":"738",
"code":"741"
},
{
"desc":"A user logs in to Manager and switches to the Tez web UI page, but no data for the submitted task is displayed on the Hive Queries page.To display task data on the Hive Q",
"product_code":"mrs",
"title":"Table Data Is Empty on the TezUI HiveQueries Page",
"uri":"mrs_01_2076.html",
"doc_type":"cmpntguide",
"p_code":"738",
"code":"742"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Yarn",
"uri":"mrs_01_0851.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"743"
},
{
"desc":"The Yarn service provides queues for users. Users allocate system resources to each queue. After the configuration is complete, you can click Refresh Queue or restart the",
"product_code":"mrs",
"title":"Common YARN Parameters",
"uri":"mrs_01_0852.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"744"
},
{
"desc":"This section describes how to create and configure a Yarn role. The Yarn role can be assigned with Yarn administrator permission and manage Yarn queue resources.If the cu",
"product_code":"mrs",
"title":"Creating Yarn Roles",
"uri":"mrs_01_0853.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"745"
},
{
"desc":"This section guides users to use a Yarn client in an O&M or service scenario.The client has been installed.For example, the installation directory is /opt/hadoopclient. T",
"product_code":"mrs",
"title":"Using the YARN Client",
"uri":"mrs_01_0854.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"746"
},
{
"desc":"If the hardware resources (such as the number of CPU cores and memory size) of the nodes for deploying NodeManagers are different but the NodeManager available hardware r",
"product_code":"mrs",
"title":"Configuring Resources for a NodeManager Role Instance",
"uri":"mrs_01_0855.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"747"
},
{
"desc":"If the storage directories defined by the Yarn NodeManager are incorrect or the Yarn storage plan changes, the system administrator needs to modify the NodeManager storag",
"product_code":"mrs",
"title":"Changing NodeManager Storage Directories",
"uri":"mrs_01_0856.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"748"
},
{
"desc":"In the multi-tenant scenario in security mode, a cluster can be used by multiple users, and tasks of multiple users can be submitted and executed. Users are invisible to ",
"product_code":"mrs",
"title":"Configuring Strict Permission Control for Yarn",
"uri":"mrs_01_0857.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"749"
},
{
"desc":"Yarn provides the container log aggregation function to collect logs generated by containers on each node to HDFS to release local disk space. You can collect logs in eit",
"product_code":"mrs",
"title":"Configuring Container Log Aggregation",
"uri":"mrs_01_0858.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"750"
},
{
"desc":"This section applies to MRS 3.x or later clusters.CGroups is a Linux kernel feature. In YARN this feature allows containers to be limited in their resource usage (example",
"product_code":"mrs",
"title":"Using CGroups with YARN",
"uri":"mrs_01_0859.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"751"
},
{
"desc":"When resources are insufficient or ApplicationMaster fails to start, a client probably encounters running errors.Go to the All Configurations page of Yarn and enter a par",
"product_code":"mrs",
"title":"Configuring the Number of ApplicationMaster Retries",
"uri":"mrs_01_0860.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"752"
},
{
"desc":"This section applies to clusters of MRS 3.x or later.During the process of starting the configuration, when the ApplicationMaster creates a container, the allocated memor",
"product_code":"mrs",
"title":"Configure the ApplicationMaster to Automatically Adjust the Allocated Memory",
"uri":"mrs_01_0861.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"753"
},
{
"desc":"The value of the yarn.http.policy parameter must be consistent on both the server and clients. Web UIs on clients will be garbled if an inconsistency exists, for example,",
"product_code":"mrs",
"title":"Configuring the Access Channel Protocol",
"uri":"mrs_01_0862.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"754"
},
{
"desc":"If memory usage of the submitted application cannot be estimated, you can modify the configuration on the server to determine whether to check the memory usage.If the mem",
"product_code":"mrs",
"title":"Configuring Memory Usage Detection",
"uri":"mrs_01_0863.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"755"
},
{
"desc":"If the custom scheduler is set in ResourceManager, you can set the corresponding web page and other Web applications for the custom scheduler.Go to the All Configurations",
"product_code":"mrs",
"title":"Configuring the Additional Scheduler WebUI",
"uri":"mrs_01_0864.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"756"
},
{
"desc":"The Yarn Restart feature includes ResourceManager Restart and NodeManager Restart.When ResourceManager Restart is enabled, the new active ResourceManager node loads the i",
"product_code":"mrs",
"title":"Configuring Yarn Restart",
"uri":"mrs_01_0865.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"757"
},
{
"desc":"This section applies to clusters of MRS 3.x or later.In YARN, ApplicationMasters run on NodeManagers just like every other container (ignoring unmanaged ApplicationMaster",
"product_code":"mrs",
"title":"Configuring ApplicationMaster Work Preserving",
"uri":"mrs_01_0866.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"758"
},
{
"desc":"This section applies to clusters of MRS 3.x or later.The default log level of localized container is INFO. You can change the log level by configuring yarn.nodemanager.co",
"product_code":"mrs",
"title":"Configuring the Localized Log Levels",
"uri":"mrs_01_0867.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"759"
},
{
"desc":"This section applies to clusters of MRS 3.x or later.Currently, YARN allows the user that starts the NodeManager to run the task submitted by all other users, or the user",
"product_code":"mrs",
"title":"Configuring Users That Run Tasks",
"uri":"mrs_01_0868.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"760"
},
{
"desc":"The default paths for saving Yarn logs are as follows:ResourceManager: /var/log/Bigdata/yarn/rm (run logs) and /var/log/Bigdata/audit/yarn/rm (audit logs)NodeManager: /va",
"product_code":"mrs",
"title":"Yarn Log Overview",
"uri":"mrs_01_0870.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"761"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Yarn Performance Tuning",
"uri":"mrs_01_0871.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"762"
},
{
"desc":"The capacity scheduler of ResourceManager implements job preemption to simplify job running in queues and improve resource utilization. The process is as follows:Assume t",
"product_code":"mrs",
"title":"Preempting a Task",
"uri":"mrs_01_0872.html",
"doc_type":"cmpntguide",
"p_code":"762",
"code":"763"
},
{
"desc":"The resource contention scenarios of a cluster are as follows:Submit two jobs (Job 1 and Job 2) with lower priorities.Some tasks of running Job 1 and Job 2 are in the run",
"product_code":"mrs",
"title":"Setting the Task Priority",
"uri":"mrs_01_0873.html",
"doc_type":"cmpntguide",
"p_code":"762",
"code":"764"
},
{
"desc":"After the scheduler of a big data cluster is properly configured, you can adjust the available memory, CPU resources, and local disk of each node to optimize the performa",
"product_code":"mrs",
"title":"Optimizing Node Configuration",
"uri":"mrs_01_0874.html",
"doc_type":"cmpntguide",
"p_code":"762",
"code":"765"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About Yarn",
"uri":"mrs_01_2077.html",
"doc_type":"cmpntguide",
"p_code":"743",
"code":"766"
},
{
"desc":"Why mounted directory for Container is not cleared after the completion of the job while using CGroups?The mounted path for the Container should be cleared even if job is",
"product_code":"mrs",
"title":"Why Mounted Directory for Container is Not Cleared After the Completion of the Job While Using CGroups?",
"uri":"mrs_01_2078.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"767"
},
{
"desc":"Why is the HDFS_DELEGATION_TOKEN expired exception reported when a job fails in security mode?HDFS_DELEGATION_TOKEN expires because the token is not updated or it is acce",
"product_code":"mrs",
"title":"Why the Job Fails with HDFS_DELEGATION_TOKEN Expired Exception?",
"uri":"mrs_01_2079.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"768"
},
{
"desc":"If Yarn is restarted in either of the following scenarios, local logs will not be deleted as scheduled and will be retained permanently:When Yarn is restarted during task",
"product_code":"mrs",
"title":"Why Are Local Logs Not Deleted After YARN Is Restarted?",
"uri":"mrs_01_2080.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"769"
},
{
"desc":"Why the task does not fail even though AppAttempts restarts due to failure for more than two times?During the task execution process, if the ContainerExitStatus returns v",
"product_code":"mrs",
"title":"Why the Task Does Not Fail Even Though AppAttempts Restarts for More Than Two Times?",
"uri":"mrs_01_2081.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"770"
},
{
"desc":"After I moved an application from one queue to another, why is it moved back to the original queue after ResourceManager restarts?This problem is caused by the constraint",
"product_code":"mrs",
"title":"Why Is an Application Moved Back to the Original Queue After ResourceManager Restarts?",
"uri":"mrs_01_2082.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"771"
},
{
"desc":"Why does Yarn not release the blacklist even all nodes are added to the blacklist?In Yarn, when the number of application nodes added to the blacklist by ApplicationMaste",
"product_code":"mrs",
"title":"Why Does Yarn Not Release the Blacklist Even All Nodes Are Added to the Blacklist?",
"uri":"mrs_01_2083.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"772"
},
{
"desc":"The switchover of ResourceManager occurs continuously when multiple, for example 2,000, tasks are running concurrently, causing the Yarn service unavailable.The cause is ",
"product_code":"mrs",
"title":"Why Does the Switchover of ResourceManager Occur Continuously?",
"uri":"mrs_01_2084.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"773"
},
{
"desc":"Why does a new application fail if a NodeManager has been in unhealthy status for 10 minutes?When nodeSelectPolicy is set to SEQUENCE and the first NodeManager connected ",
"product_code":"mrs",
"title":"Why Does a New Application Fail If a NodeManager Has Been in Unhealthy Status for 10 Minutes?",
"uri":"mrs_01_2085.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"774"
},
{
"desc":"Why does an error occur when I query the applicationID of a completed or non-existing application using the RESTful APIs?The Superior scheduler only stores the applicatio",
"product_code":"mrs",
"title":"Why Does an Error Occur When I Query the ApplicationID of a Completed or Non-existing Application Using the RESTful APIs?",
"uri":"mrs_01_2087.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"775"
},
{
"desc":"In Superior scheduling mode, if a single NodeManager is faulty, why may the MapReduce tasks fail?In normal cases, when the attempt of a single task of an application fail",
"product_code":"mrs",
"title":"Why May A Single NodeManager Fault Cause MapReduce Task Failures in the Superior Scheduling Mode?",
"uri":"mrs_01_2088.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"776"
},
{
"desc":"When a queue is deleted when there are applications running in it, these applications are moved to the \"lost_and_found\" queue. When these applications are moved back to a",
"product_code":"mrs",
"title":"Why Are Applications Suspended After They Are Moved From Lost_and_Found Queue to Another Queue?",
"uri":"mrs_01_2089.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"777"
},
{
"desc":"How do I limit the size of application diagnostic messages stored in the ZKstore?In some cases, it has been observed that diagnostic messages may grow infinitely. Because",
"product_code":"mrs",
"title":"How Do I Limit the Size of Application Diagnostic Messages Stored in the ZKstore?",
"uri":"mrs_01_2090.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"778"
},
{
"desc":"Why does a MapReduce job fail to run when a non-ViewFS file system is configured as ViewFS?When a non-ViewFS file system is configured as a ViewFS using cluster, the user",
"product_code":"mrs",
"title":"Why Does a MapReduce Job Fail to Run When a Non-ViewFS File System Is Configured as ViewFS?",
"uri":"mrs_01_2091.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"779"
},
{
"desc":"After the Native Task feature is enabled, Reduce tasks fail to run in some OSs.When -Dmapreduce.job.map.output.collector.class=org.apache.hadoop.mapred.nativetask.NativeM",
"product_code":"mrs",
"title":"Why Do Reduce Tasks Fail to Run in Some OSs After the Native Task Feature is Enabled?",
"uri":"mrs_01_24051.html",
"doc_type":"cmpntguide",
"p_code":"766",
"code":"780"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using ZooKeeper",
"uri":"mrs_01_2092.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"781"
},
{
"desc":"ZooKeeper is an open-source, highly reliable, and distributed consistency coordination service. ZooKeeper is designed to solve the problem that data consistency cannot be",
"product_code":"mrs",
"title":"Using ZooKeeper from Scratch",
"uri":"mrs_01_2093.html",
"doc_type":"cmpntguide",
"p_code":"781",
"code":"782"
},
{
"desc":"Navigation path for setting parameters:Go to the All Configurations page of ZooKeeper by referring to Modifying Cluster Service Configuration Parameters. Enter a paramete",
"product_code":"mrs",
"title":"Common ZooKeeper Parameters",
"uri":"mrs_01_2094.html",
"doc_type":"cmpntguide",
"p_code":"781",
"code":"783"
},
{
"desc":"Use a ZooKeeper client in an O&M scenario or service scenario.You have installed the client. For example, the installation directory is /opt/client. The client directory ",
"product_code":"mrs",
"title":"Using a ZooKeeper Client",
"uri":"mrs_01_2095.html",
"doc_type":"cmpntguide",
"p_code":"781",
"code":"784"
},
{
"desc":"Configure znode permission of ZooKeeper.ZooKeeper uses an access control list (ACL) to implement znode access control. The ZooKeeper client specifies a znode ACL, and the",
"product_code":"mrs",
"title":"Configuring the ZooKeeper Permissions",
"uri":"mrs_01_2097.html",
"doc_type":"cmpntguide",
"p_code":"781",
"code":"785"
},
{
"desc":"Log path: /var/log/Bigdata/zookeeper/quorumpeer (Run log), /var/log/Bigdata/audit/zookeeper/quorumpeer (Audit log)Log archive rule: The automatic ZooKeeper log compressio",
"product_code":"mrs",
"title":"ZooKeeper Log Overview",
"uri":"mrs_01_2106.html",
"doc_type":"cmpntguide",
"p_code":"781",
"code":"786"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About ZooKeeper",
"uri":"mrs_01_2107.html",
"doc_type":"cmpntguide",
"p_code":"781",
"code":"787"
},
{
"desc":"After a large number of znodes are created, ZooKeeper servers in the ZooKeeper cluster become faulty and cannot be automatically recovered or restarted.Logs of followers:",
"product_code":"mrs",
"title":"Why Do ZooKeeper Servers Fail to Start After Many znodes Are Created?",
"uri":"mrs_01_2108.html",
"doc_type":"cmpntguide",
"p_code":"787",
"code":"788"
},
{
"desc":"After a large number of znodes are created in a parent directory, the ZooKeeper client will fail to fetch all child nodes of this parent directory in a single request.Log",
"product_code":"mrs",
"title":"Why Does the ZooKeeper Server Display the java.io.IOException: Len Error Log?",
"uri":"mrs_01_2109.html",
"doc_type":"cmpntguide",
"p_code":"787",
"code":"789"
},
{
"desc":"Why four letter commands do not work with linux netcat command when secure netty configurations are enabled at Zookeeper server?For example,echo stat |netcat host portLin",
"product_code":"mrs",
"title":"Why Four Letter Commands Don't Work With Linux netcat Command When Secure Netty Configurations Are Enabled at Zookeeper Server?",
"uri":"mrs_01_2110.html",
"doc_type":"cmpntguide",
"p_code":"787",
"code":"790"
},
{
"desc":"How to check whether the role of a ZooKeeper instance is a leader or follower.Log in to Manager and choose Cluster > Name of the desired cluster > Service > ZooKeeper > I",
"product_code":"mrs",
"title":"How Do I Check Which ZooKeeper Instance Is a Leader?",
"uri":"mrs_01_2111.html",
"doc_type":"cmpntguide",
"p_code":"787",
"code":"791"
},
{
"desc":"When the IBM JDK is used, the client fails to connect to ZooKeeper.The possible cause is that the jaas.conf file format of the IBM JDK is different from that of the commo",
"product_code":"mrs",
"title":"Why Cannot the Client Connect to ZooKeeper using the IBM JDK?",
"uri":"mrs_01_2112.html",
"doc_type":"cmpntguide",
"p_code":"787",
"code":"792"
},
{
"desc":"The ZooKeeper client fails to refresh a TGT and therefore ZooKeeper cannot be accessed. The error message is as follows:ZooKeeper uses the system command kinit R to ref",
"product_code":"mrs",
"title":"What Should I Do When the ZooKeeper Client Fails to Refresh a TGT?",
"uri":"mrs_01_2113.html",
"doc_type":"cmpntguide",
"p_code":"787",
"code":"793"
},
{
"desc":"When the client connects to a non-leader instance, run the deleteall command to delete a large number of znodes, the error message \"Node does not exist\" is displayed, but",
"product_code":"mrs",
"title":"Why Is Message \"Node does not exist\" Displayed when A Large Number of Znodes Are Deleted Using the deleteallCommand",
"uri":"mrs_01_2114.html",
"doc_type":"cmpntguide",
"p_code":"787",
"code":"794"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Appendix",
"uri":"mrs_01_2122.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"795"
},
{
"desc":"For MRS 1.9.2 or later: You can modify service configuration parameters on the cluster management page of the MRS management console.Log in to the MRS console. In the lef",
"product_code":"mrs",
"title":"Modifying Cluster Service Configuration Parameters",
"uri":"mrs_01_2125.html",
"doc_type":"cmpntguide",
"p_code":"795",
"code":"796"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Accessing Manager",
"uri":"mrs_01_2123.html",
"doc_type":"cmpntguide",
"p_code":"795",
"code":"797"
},
{
"desc":"Clusters of versions earlier than MRS 3.x use MRS Manager to monitor, configure, and manage clusters. You can open the MRS Manager page on the MRS console.If you have bou",
"product_code":"mrs",
"title":"Accessing MRS Manager (Versions Earlier Than MRS 3.x)",
"uri":"mrs_01_0102.html",
"doc_type":"cmpntguide",
"p_code":"797",
"code":"798"
},
{
"desc":"In MRS 3.x or later, FusionInsight Manager is used to monitor, configure, and manage clusters. After the cluster is installed, you can use the account to log in to Fusion",
"product_code":"mrs",
"title":"Accessing FusionInsight Manager (MRS 3.x or Later)",
"uri":"mrs_01_2124.html",
"doc_type":"cmpntguide",
"p_code":"797",
"code":"799"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using an MRS Client",
"uri":"mrs_01_2126.html",
"doc_type":"cmpntguide",
"p_code":"795",
"code":"800"
},
{
"desc":"This section describes how to install clients of all services (excluding Flume) in an MRS cluster. For details about how to install the Flume client, see Installing the F",
"product_code":"mrs",
"title":"Installing a Client (Version 3.x or Later)",
"uri":"mrs_01_2127.html",
"doc_type":"cmpntguide",
"p_code":"800",
"code":"801"
},
{
"desc":"An MRS client is required. The MRS cluster client can be installed on the Master or Core node in the cluster or on a node outside the cluster.After a cluster of versions ",
"product_code":"mrs",
"title":"Installing a Client (Versions Earlier Than 3.x)",
"uri":"mrs_01_2128.html",
"doc_type":"cmpntguide",
"p_code":"800",
"code":"802"
},
{
"desc":"A cluster provides a client for you to connect to a server, view task results, or manage data. If you modify service configuration parameters on Manager and restart the s",
"product_code":"mrs",
"title":"Updating a Client (Version 3.x or Later)",
"uri":"mrs_01_2129.html",
"doc_type":"cmpntguide",
"p_code":"800",
"code":"803"
},
{
"desc":"This section applies to clusters of versions earlier than MRS 3.x. For MRS 3.x or later, see Updating a Client (Version 3.x or Later).ScenarioAn MRS cluster provides a cl",
"product_code":"mrs",
"title":"Updating a Client (Versions Earlier Than 3.x)",
"uri":"mrs_01_2130.html",
"doc_type":"cmpntguide",
"p_code":"800",
"code":"804"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Change History",
"uri":"en-us_topic_0000001351362309.html",
"doc_type":"cmpntguide",
"p_code":"",
"code":"805"
}
]