FusionInsight Manager provides the backup and restoration of system data and user data by component. The system can back up Manager data, component metadata, and service data.
Data can be backed up to local disks (LocalDir), local HDFS (LocalHDFS), remote HDFS (RemoteHDFS), NAS (NFS/CIFS), Object Storage Service (OBS), and SFTP server (SFTP). For details, see Backing Up Data.
For a component that supports multiple services, multiple instances of a service can be backed up and restored. The backup and restoration operations are consistent with those of a single service instance.
Only MRS 3.1.0 or later supports data backup to OBS.
Backup and restoration tasks are performed in the following scenarios:
Backing up Manager data:

| Backup Type | Backup Content | Backup Directory Type |
| --- | --- | --- |
| OMS | Database data (excluding alarm data) and configuration data in the cluster management system by default | |
Backing up component metadata:

| Backup Type | Backup Content | Backup Directory Type |
| --- | --- | --- |
| DBService | Metadata of the components (including Loader, Hive, Spark, Oozie, and Hue) managed by DBService. For a cluster with multiple services installed, the metadata of multiple Hive and Spark service instances is backed up. | |
| Kafka | Kafka metadata. | |
| NameNode | HDFS metadata. After multiple NameServices are added, backup and restoration are supported for all of them, and the operations are consistent with those of the default hacluster instance. | |
| Yarn | Information about the Yarn service resource pool. | |
| HBase | tableinfo files and data files of HBase system tables. | |
Backing up service data:

| Backup Type | Backup Content | Backup Directory Type |
| --- | --- | --- |
| HBase | Table-level user data. For a cluster with multiple services installed, backup and restoration are supported for multiple HBase service instances, and the operations are consistent with those of a single HBase service instance. | |
| HDFS | Directories or files of user services. NOTE: Encrypted directories cannot be backed up or restored. | |
| Hive | Table-level user data. For a cluster with multiple services installed, backup and restoration are supported for multiple Hive service instances, and the operations are consistent with those of a single Hive service instance. | |
Note that some components do not provide data backup or restoration.
Task
Before backup or restoration, you need to create a backup or restoration task and set task parameters, such as the task name, backup data source, and type of the directory for storing backup files. Then you can execute the tasks to back up or restore data. When Manager is used to restore the data of HDFS, HBase, Hive, and NameNode, the cluster cannot be accessed.
Each backup task can back up data of different data sources and generate an independent backup file for each data source. All the backup files generated in a backup task form a backup file set, which can be used in restoration tasks. Backup data can be stored on Linux local disks, local cluster HDFS, and standby cluster HDFS.
Backup tasks support full and incremental backup policies. Cloud data backup tasks do not support incremental backup. Incremental backup is not recommended when the backup directory type is NFS or CIFS: each incremental backup updates the latest full backup data in place, so no new recovery point is generated.
Task execution rules:
When planning backup and restoration tasks, select the data to be backed up or restored strictly based on the service logic, data store structure, and database or table association. By default, the system creates periodic backup tasks default-oms and default-cluster ID at an interval of one hour. OMS metadata and cluster metadata, such as DBService and NameNode, can be fully backed up to local disks.
Snapshot
The system uses the snapshot technology to quickly back up data. Snapshots include HBase and HDFS snapshots.
An HBase snapshot is a backup file of HBase tables at a specified time point. This backup file does not replicate service data or affect the RegionServer. The HBase snapshot replicates table metadata, including the table descriptor, region info, and HFile reference information. The metadata can be used to restore data to the state at the time when the snapshot was created.
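For reference, table-level snapshots can also be created and used manually with standard HBase shell commands. The table and snapshot names below are examples only; Manager performs the equivalent steps automatically when a backup or restoration task runs.

```
hbase shell

# The following commands are entered at the HBase shell prompt.
# Create a metadata-only snapshot of a table (table and snapshot names are examples):
snapshot 'member', 'member_snapshot_001'
list_snapshots

# Restoring from a snapshot requires the table to be disabled first:
disable 'member'
restore_snapshot 'member_snapshot_001'
enable 'member'
```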
An HDFS snapshot is a read-only backup of HDFS at a specified time point. The snapshot is used in data backup, misoperation protection, and disaster recovery scenarios.
The snapshot function can be enabled for any HDFS directory to create the related snapshot file. Before creating a snapshot for a directory, the system automatically enables the snapshot function for the directory. Creating a snapshot does not affect any HDFS operation. A maximum of 65,536 snapshots can be created for each HDFS directory.
When a snapshot is being created for an HDFS directory, the directory cannot be deleted or modified before the snapshot is created. Snapshots cannot be created for the upper-layer directories or subdirectories of the directory.
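The same mechanism can be exercised manually with standard HDFS commands. The directory and snapshot names below are examples only; Manager enables the snapshot function automatically when a backup task is created.

```
# Allow snapshots on a directory (done automatically by Manager for backup tasks)
hdfs dfsadmin -allowSnapshot /user/example/data

# Create a read-only snapshot with a custom name
hdfs dfs -createSnapshot /user/example/data s0

# Snapshots are exposed under the .snapshot subdirectory of the snapshottable directory
hdfs dfs -ls /user/example/data/.snapshot/s0

# Delete a snapshot that is no longer needed
hdfs dfs -deleteSnapshot /user/example/data s0
```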
DistCp
Distributed copy (DistCp) is a tool used to replicate large amounts of data within the HDFS of a cluster or between the HDFSs of different clusters. In a backup or restoration task of HBase, HDFS, or Hive, if you back up data to the HDFS of the standby cluster, the system invokes DistCp to perform the operation. The active and standby clusters must be installed with the same version of MRS.
DistCp uses MapReduce to implement data distribution, error handling, recovery, and reporting. It assigns the source files and directories in the specified list to different map tasks, and each map task copies the data of the files in its partition of the list.
If you use DistCp to replicate data between HDFSs of two clusters, configure the cross-cluster mutual trust (mutual trust does not need to be configured for clusters managed by the same FusionInsight Manager) and cross-cluster replication for both clusters. When backing up the cluster data to HDFS in another cluster, you need to install the Yarn component. Otherwise, the backup fails.
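As a rough illustration, the cross-cluster copy performed by such a backup task is conceptually similar to the following DistCp invocation. The NameNode hostnames, port, and paths are placeholders.

```
# Copy a directory from the local cluster to the standby cluster's HDFS.
# -update copies only files that differ at the destination; -p preserves file attributes.
hadoop distcp -update -p hdfs://active-nn:8020/user/example/data hdfs://standby-nn:8020/backup/data
```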
Local rapid restoration
After using DistCp to back up the HBase, HDFS, and Hive data of the local cluster to the HDFS of the standby cluster, the HDFS of the local cluster retains the backup data snapshots. You can create local rapid restoration tasks to restore data by using the snapshot files in the HDFS of the local cluster.
NAS
Network Attached Storage (NAS) is a dedicated data storage server which includes the storage components and embedded system software. It provides the cross-platform file sharing function. By using NFS (supporting NFSv3 and NFSv4) and CIFS (supporting SMBv2 and SMBv3), you can connect the service plane of MRS to the NAS server to back up data to the NAS or restore data from the NAS.
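If you want to verify that a cluster node can reach the NAS export before configuring it as a backup destination, a manual mount check similar to the following can be used. The server address, export path, credentials, and mount point are hypothetical.

```
# NFSv3 connectivity check (use vers=4 for NFSv4)
mkdir -p /mnt/mrs_backup
mount -t nfs -o vers=3 nas.example.com:/export/mrs_backup /mnt/mrs_backup

# CIFS (SMBv3) equivalent
# mount -t cifs -o vers=3.0,username=backupuser //nas.example.com/mrs_backup /mnt/mrs_backup

# Unmount after the check
umount /mnt/mrs_backup
```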
Backup and restoration tasks have the following specifications:

| Item | Specification |
| --- | --- |
| Maximum number of backup or restoration tasks | 100 |
| Number of concurrent tasks in a cluster | 1 |
| Maximum number of waiting tasks | 199 |
| Maximum size of backup files on a Linux local disk (GB) | 600 |
If service data is stored in the ZooKeeper upper-layer components, ensure that the number of znodes in a single backup or restoration task is not too large. Otherwise, the task will fail, and the ZooKeeper service performance will be affected. To check the number of znodes in a single backup or restoration task, perform the following operations:
1. Log in to the ZooKeeper client: run `zkCli.sh -server ip:port`, where *ip* can be any management IP address and the default port number is 2181.
2. If output similar to the following is displayed, the connection is successful:
   WatchedEvent state:SyncConnected type:None path:null
   [zk: ip:port(CONNECTED) 0]
3. Run the `getusage` command on the directory to be backed up, for example, `getusage /hbase/region`. In the command output, `Node count=xxxxxx` indicates the number of znodes stored in the region directory.
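A sketch of such a session follows. The IP address is a placeholder, and the output lines are illustrative; `getusage` is the client command named above, as provided by the FusionInsight ZooKeeper client.

```
zkCli.sh -server 192.168.0.11:2181
# A successful connection prints output similar to:
#   WatchedEvent state:SyncConnected type:None path:null
#   [zk: 192.168.0.11:2181(CONNECTED) 0]

getusage /hbase/region
# Look for "Node count=xxxxxx" in the output: the number of znodes under /hbase/region
```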
The default settings of the periodic backup tasks default-oms and default-cluster ID are as follows:

| Item | OMS | HBase | Kafka | DBService | NameNode |
| --- | --- | --- | --- | --- | --- |
| Backup period | 1 hour | 1 hour | 1 hour | 1 hour | 1 hour |
| Maximum number of backups | 168 (7-day historical data) | 168 (7-day historical data) | 168 (7-day historical data) | 168 (7-day historical data) | 24 (one-day historical data) |
| Maximum size of a backup file | 10 MB | 10 MB | 512 MB | 100 MB | 20 GB |
| Maximum size of disk space used | 1.64 GB | 1.64 GB | 84 GB | 16.41 GB | 480 GB |
| Storage path of backup data | Data storage path/LocalBackup/ of the active and standby management nodes (applies to all backup types) | | | | |