CDM supports table and file migration between homogeneous or heterogeneous data sources. For details about supported data sources, see Data Sources Supported by Table/File Migration.
The job parameters vary with the data source. For details about the job parameters of different types of data sources, see Table 1 and Table 2.
Migration Source | Description | Parameter Settings
---|---|---
OBS | Data can be extracted in CSV, JSON, or binary format. Data extracted in binary format does not require file parsing, which ensures high performance and makes it more suitable for file migration. | For details, see From OBS.
HDFS | HDFS data can be exported in CSV, Parquet, or binary format and can be compressed in multiple formats. | For details, see From HDFS.
HBase/CloudTable | Data can be exported from MRS, FusionInsight HD, open-source Apache Hadoop HBase, or CloudTable. You need to know all column families and field names of the HBase tables. | For details, see From HBase/CloudTable.
Hive | Data can be exported from Hive through the JDBC API. If the data source is Hive, CDM automatically partitions data using the Hive data partitioning file. | For details, see From Hive.
DLI | Data can be exported from DLI. | For details, see From DLI.
FTP/SFTP | FTP or SFTP data can be extracted in CSV, JSON, or binary format. | For details, see From FTP/SFTP.
HTTP | These connectors read files from an HTTP/HTTPS URL, for example, public files on third-party object storage systems or web disks. Currently, data can only be exported from HTTP URLs. | For details, see From HTTP.
Cloud database services | Data can be exported from cloud database services (a hypothetical source-side payload fragment for a JDBC source follows this table). | When data is exported from these data sources, CDM uses the JDBC API to extract data. The job parameters for the migration source are the same. For details, see From a Common Relational Database.
FusionInsight LibrA | Data can be exported from FusionInsight LibrA. | For details, see From a Common Relational Database.
Non-cloud databases | The non-cloud databases can be databases created in an on-premises data center, deployed on ECSs, or database services on third-party clouds. | For details, see From a Common Relational Database.
MongoDB/DDS | Data can be exported from MongoDB or DDS. | For details, see From MongoDB/DDS.
Redis | Data can be exported from open-source Redis. | For details, see From Redis.
Kafka/DMS Kafka | Data can be exported only to Cloud Search Service (CSS). | For details, see From Kafka/DMS Kafka.
Elasticsearch/CSS | Data can be exported from CSS or Elasticsearch. | For details, see From Elasticsearch or CSS.
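For reference, a relational source extracted over JDBC could be sketched in a job payload roughly as follows. This is a minimal sketch only: the connector name, the fromJobConfig.* keys, and the sample values are assumptions for illustration and should be confirmed against the CDM API reference for your version.

```python
# Hypothetical "from" (source) fragment of a CDM table/file migration job payload
# for a relational database read over JDBC. The connector name and fromJobConfig.*
# keys are illustrative assumptions, not a verified schema.
from_side = {
    "from-connector-name": "generic-jdbc-connector",   # assumed connector name
    "from-config-values": {
        "configs": [{
            "name": "fromJobConfig",
            "inputs": [
                {"name": "fromJobConfig.schemaName",      "value": "sales"},   # source schema
                {"name": "fromJobConfig.tableName",       "value": "orders"},  # source table
                {"name": "fromJobConfig.partitionColumn", "value": "id"},      # split column for extractors
            ],
        }]
    },
}
```

The destination side is expressed in the same style; see the sketch after the destination table below.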
Migration Destination | Description | Parameter Settings
---|---|---
OBS | Files (even in large volumes) can be batch migrated to OBS in CSV or binary format (an end-to-end payload sketch for an OBS-to-OBS binary job follows this table). | For details, see To OBS.
MRS HDFS | You can select a compression format when importing data to HDFS. | For details, see To HDFS.
MRS HBase, CloudTable Service | Data can be imported to HBase. The compression algorithm can be set when a new HBase table is created. | For details, see To HBase/CloudTable.
MRS Hive | Data can be rapidly imported to MRS Hive. | For details, see To Hive.
DLI | Data can be imported to DLI. | For details, see To DLI.
Cloud database services | Data can be imported to cloud database services. | For details about how to use the JDBC API to import data, see To a Common Relational Database.
Document Database Service (DDS) | Data can be imported to DDS but not to on-premises MongoDB. | For details, see To DDS.
Distributed Cache Service (DCS) | Data can be imported to DCS as String or Hashmap values. Data cannot be imported to on-premises Redis. | For details, see To DCS.
Cloud Search Service (CSS) | Data can be imported to CSS. | For details, see To CSS.
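Combining a source fragment with a destination fragment yields a complete job definition. The following is a minimal sketch of an OBS-to-OBS binary file migration submitted through the CDM job API; the request path, connector names, and configuration keys are assumptions and must be checked against the CDM API reference for your region and version.

```python
# Hypothetical end-to-end job submission. All key names, connector names, and the
# request path are illustrative assumptions, not a verified schema.
import requests  # third-party; pip install requests

job = {
    "jobs": [{
        "job_type": "NORMAL_JOB",                  # table/file migration job
        "name": "obs2obs-binary",
        "from-connector-name": "obs-connector",    # OBS as the migration source
        "from-config-values": {"configs": [{
            "name": "fromJobConfig",
            "inputs": [{"name": "fromJobConfig.bucketName",     "value": "source-bucket"},
                       {"name": "fromJobConfig.inputDirectory", "value": "/input/"},
                       # Binary format: files are passed through as-is, with no field mapping
                       {"name": "fromJobConfig.fileFormat",     "value": "BINARY_FILE"}],
        }]},
        "to-connector-name": "obs-connector",      # OBS as the migration destination
        "to-config-values": {"configs": [{
            "name": "toJobConfig",
            "inputs": [{"name": "toJobConfig.bucketName",      "value": "dest-bucket"},
                       {"name": "toJobConfig.outputDirectory", "value": "/output/"}],
        }]},
    }]
}

# Placeholders: replace the endpoint, project ID, cluster ID, and token with your own values.
url = ("https://cdm.example-region.myhuaweicloud.com"
       "/v1.1/my-project-id/clusters/my-cluster-id/cdm/job")
resp = requests.post(url, json=job, headers={"X-Auth-Token": "<IAM token>"})
print(resp.status_code, resp.text)
```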
If files are migrated between FTP, SFTP, HDFS, and OBS and the migration source's File Format is set to Binary, the files are transferred directly without field mapping.
In other scenarios, CDM automatically maps the fields of the source table to those of the destination table. Check that the mapping and the time format are correct, for example, that each source field type can be converted to the destination field type.
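As an illustration of such a check, the following sketch validates a hypothetical auto-generated mapping. The type-compatibility rules and sample fields are assumptions for illustration, not CDM's internal conversion matrix; adjust them to your actual source and destination types.

```python
# Illustrative pre-flight check of an auto-generated field mapping.
from datetime import datetime

# (source_field, source_type) -> (destination_field, destination_type)
mapping = {
    ("id", "int"):            ("id", "bigint"),
    ("created_at", "string"): ("created_at", "timestamp"),
    ("amount", "string"):     ("amount", "double"),
}

# Hypothetical conversion rules: which source types may be written to which destination types.
convertible = {("int", "bigint"), ("string", "timestamp"), ("string", "double")}

def time_format_ok(sample: str, fmt: str = "%Y-%m-%d %H:%M:%S") -> bool:
    """Check whether a sample source value matches the expected destination time format."""
    try:
        datetime.strptime(sample, fmt)
        return True
    except ValueError:
        return False

for (src, src_type), (dst, dst_type) in mapping.items():
    ok = (src_type, dst_type) in convertible
    print(f"{src}({src_type}) -> {dst}({dst_type}): {'OK' if ok else 'CHECK MANUALLY'}")

print("time format matches:", time_format_ok("2024-05-01 08:30:00"))
```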
CDM also supports field converters, which you can create during field mapping. After configuring the field mapping, you can set the following job parameters:
Parameter | Description | Example Value
---|---|---
Retry upon Failure | You can select Retry 3 times or Never. To avoid data inconsistency caused by repeated writes, you are advised to enable automatic retry only for file migration jobs or for database migration jobs with Import to Staging Table enabled. | Never
Group | Group to which the job belongs. The default group is DEFAULT. On the Job Management page, jobs can be displayed, started, or exported by group. | DEFAULT
Schedule Execution | If you select Yes, you can set the start time, cycle, and validity period of the job. For details, see Scheduling Job Execution. | No
Concurrent Extractors | Number of extraction tasks that can run concurrently. The value ranges from 1 to 300. If the value is too large, the extraction tasks are queued. The appropriate number of concurrent extractors depends on the cluster specifications and the table size. | 1
Concurrent Loaders | Number of loaders that can run concurrently. This parameter is displayed only when HBase or Hive serves as the destination data source. | 3
Write Dirty Data | Whether to record dirty data. The default value is No. In CDM, dirty data is data in an invalid format. If the source data contains dirty data, you are advised to enable this function; otherwise, the migration job may fail (a payload sketch covering these task parameters follows this table). | Yes
Write Dirty Data Link | This parameter is displayed only when Write Dirty Data is set to Yes. Only links to OBS support dirty data writes. | obs_link
OBS Bucket | This parameter is displayed only when Write Dirty Data Link is set to an OBS link. Name of the OBS bucket to which the dirty data is written. | dirtydata
Dirty Data Directory | This parameter is displayed only when Write Dirty Data is set to Yes. Directory on OBS for storing dirty data; dirty data is saved only when this parameter is configured. You can access this directory to view records that failed to be processed or were filtered out during job execution, and to check the source data that does not meet the conversion or cleaning rules. | /user/dirtydir
Max. Error Records in a Single Shard | This parameter is displayed only when Write Dirty Data is set to Yes. If the number of error records in a single map exceeds this limit, the job terminates automatically and the data that has been imported cannot be rolled back. You are advised to use a temporary table as the destination table; after the data is imported, rename the table or merge it into the final data table. | 0
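For reference, the task parameters above could appear in a job payload roughly as follows. Every key name in this sketch (retryJobConfig.*, schedulerConfig.*, throttlingConfig.*, dirtyDataConfig.*) is an assumption for illustration; consult the CDM API reference for the actual parameter names.

```python
# Hypothetical task-parameter ("driver") fragment of a CDM job payload, mirroring
# the table above. Key names are illustrative assumptions, not a verified schema.
driver_config_values = {
    "configs": [
        {"name": "retryJobConfig",                      # Retry upon Failure: Never
         "inputs": [{"name": "retryJobConfig.retryJobType", "value": "NONE"}]},
        {"name": "schedulerConfig",                     # Schedule Execution: No
         "inputs": [{"name": "schedulerConfig.doCycle", "value": "false"}]},
        {"name": "throttlingConfig",                    # Concurrent Extractors / Loaders
         "inputs": [{"name": "throttlingConfig.numExtractors", "value": "1"},
                    {"name": "throttlingConfig.numLoaders",    "value": "3"}]},
        {"name": "dirtyDataConfig",                     # Write Dirty Data settings
         "inputs": [{"name": "dirtyDataConfig.writeDirtyData",     "value": "true"},
                    {"name": "dirtyDataConfig.writeToLink",        "value": "obs_link"},
                    {"name": "dirtyDataConfig.obsBucket",          "value": "dirtydata"},
                    {"name": "dirtyDataConfig.dirtyDataDirectory", "value": "/user/dirtydir"},
                    {"name": "dirtyDataConfig.maxErrorCount",      "value": "0"}]},
    ]
}
```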
The job status can be New, Pending, Booting, Running, Failed, or Succeeded.
Pending indicates that the job is waiting to be scheduled by the system, and Booting indicates that the data to be migrated is being analyzed.
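If you monitor jobs programmatically, a simple polling loop can wait for a terminal state. The sketch below assumes a job-status query endpoint and a status field in the response; both are assumptions to be verified against the CDM API reference.

```python
# Minimal status-polling sketch. The request path and the "status" response field
# are assumptions, not a verified API; the terminal states follow the list above.
import time
import requests  # third-party; pip install requests

TERMINAL_STATES = {"Succeeded", "Failed"}

def wait_for_job(endpoint, project_id, cluster_id, job_name, token, interval=30):
    """Poll the (assumed) job-status endpoint until the job succeeds or fails."""
    url = f"{endpoint}/v1.1/{project_id}/clusters/{cluster_id}/cdm/job/{job_name}/status"
    while True:
        resp = requests.get(url, headers={"X-Auth-Token": token})
        status = resp.json().get("status", "Unknown")   # assumed response field
        print("job status:", status)
        if status in TERMINAL_STATES:
            return status
        time.sleep(interval)  # New / Pending / Booting / Running: keep waiting
```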