Using CDL from Scratch

CDL supports data synchronization or comparison tasks in multiple scenarios. This section describes how to import data from PgSQL to Kafka on the CDLService WebUI of a cluster with Kerberos authentication enabled.

Prerequisites

  • The CDL and Kafka services have been installed in a cluster and are running properly.
  • Write-ahead logging is enabled for the PostgreSQL database. For details, see Policy for Modifying Write-Ahead Logs in PostgreSQL Databases.
  • You have created a human-machine user, for example, cdluser, added the user to user groups cdladmin (primary group), hadoop, and kafka, and associated the user with the System_administrator role on FusionInsight Manager.
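
CDL captures changes from PostgreSQL through logical decoding, which the write-ahead logging prerequisite above enables. As a reference only, the relevant postgresql.conf settings typically look like the following; the values are illustrative, and the exact policy is defined in the document referenced above:

```
# postgresql.conf -- example settings for logical decoding (illustrative values)
wal_level = logical          # emit the information logical decoding needs
max_replication_slots = 10   # allow at least one slot per CDL PgSQL job
max_wal_senders = 10         # must cover all concurrent WAL consumers
```

A restart of the PostgreSQL instance is required for wal_level changes to take effect.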

Procedure

  1. Log in to FusionInsight Manager as user cdluser (change the password upon the first login) and choose Cluster > Services > CDL. On the Dashboard page, click the hyperlink next to CDLService UI to go to the native CDL page.

  2. Choose Link Management and click Add Link. In the displayed dialog box, set the parameters for the pgsql and kafka links by referring to the following tables.

    Table 1 PgSQL data link parameters
    Parameter     Example Value
    -----------   -------------------------
    Link Type     pgsql
    Name          pgsqllink
    Host          10.10.10.10
    Port          5432
    DB Name       testDB
    User          user
    Password      Password of the user user
    Description   -

    Table 2 Kafka data link parameters
    Parameter     Example Value
    -----------   -------------
    Link Type     kafka
    Name          kafkalink
    Description   -
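If the link test in step 3 fails, a plain TCP reachability check from a host that can reach the cluster helps separate network problems from credential problems. This is a generic sketch, not part of CDL; the host and port below are the example values from Table 1 and must be replaced with your real database endpoint:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example values from Table 1; replace with your real database endpoint.
print(tcp_reachable("10.10.10.10", 5432))
```

A False result means the database port is not reachable from this host (firewall, routing, or a wrong Host/Port value); a True result means only that the port is open, so a failed link test then points at the DB Name, User, or Password values.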
  3. After the parameters are configured, click Test to check whether the data link is normal.

    After the test is successful, click OK.

  4. On the Job Management page, click Add Job. In the displayed dialog box, configure the parameters and click Next.

    Set the following parameters:

    Parameter   Example Value
    ---------   ----------------
    Name        job_pgsqltokafka
    Desc        xxx
  5. Configure PgSQL job parameters.

    1. On the Job Management page, drag the pgsql icon on the left to the editing area on the right and double-click the icon to go to the PgSQL job configuration page.

      Table 3 PgSQL job parameters
      Parameter               Example Value
      ---------------------   --------------------------
      Link                    pgsqllink
      Tasks Max               1
      Mode                    insert, update, and delete
      Schema                  public
      dbName Alias            cdc
      Slot Name               a4545sad
      Slot Drop               No
      Connect With Hudi       No
      Use Exist Publication   Yes
      Publication Name        test
    2. Click the plus sign (+) to display more parameters.


      Note

      • WhiteList: Enter the name of the table in the database, for example, myclass.
      • Topic Table Mapping: In the first text box, enter a topic name (the value must be different from the job Name set in 4), for example, myclass_topic. In the second text box, enter a table name, for example, myclass. Table names and topic names must be in a one-to-one relationship.
    3. Click OK. The PgSQL job parameters are configured.
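Two of the values above are easy to get wrong: Slot Name must be a valid PostgreSQL replication slot name (only lower-case letters, digits, and underscores, at most 63 characters), and Topic Table Mapping must pair each topic with exactly one table. A small pre-validation sketch; the helper names are ours for illustration, not part of CDL:

```python
import re

# PostgreSQL replication slot names may contain only lower-case letters,
# digits, and underscores, up to 63 characters.
SLOT_NAME_RE = re.compile(r"^[a-z0-9_]{1,63}$")

def validate_slot_name(name: str) -> bool:
    """Check that a slot name is acceptable to PostgreSQL."""
    return bool(SLOT_NAME_RE.match(name))

def validate_topic_table_mapping(mapping: dict) -> bool:
    """Topic -> table mapping must be one-to-one: no table appears twice."""
    tables = list(mapping.values())
    return len(tables) == len(set(tables))

print(validate_slot_name("a4545sad"))                            # slot name from Table 3 -> True
print(validate_topic_table_mapping({"myclass_topic": "myclass"}))  # one-to-one -> True
```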

  6. Configure Kafka job parameters.

    1. On the Job Management page, drag the kafka icon on the left to the editing area on the right and double-click the icon to go to the Kafka job configuration page. Configure parameters based on Table 4.

      Table 4 Kafka job parameter
      Parameter   Example Value
      ---------   -------------
      Link        kafkalink
    2. Click OK.

  7. After the job parameters are configured, drag a connection between the two icons to associate them and click Save. The job configuration is complete.


  8. In the job list on the Job Management page, locate the created jobs, click Start in the Operation column, and wait until the jobs are started.

    Check whether the data transmission takes effect. For example, insert data into the table in the PgSQL database, and then check whether data is generated in the corresponding Kafka topic by referring to Managing Topics on Kafka UI.
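Each captured change arrives in the topic as one Kafka record with a JSON payload. The layout below is only a hypothetical illustration of such a change event (the field names op, table, and after are assumptions, not CDL's documented schema); the point is that an INSERT on the source table should surface as a new record on myclass_topic:

```python
import json

# Hypothetical change-event payload; real CDL output may use different fields.
raw = '{"op": "insert", "table": "myclass", "after": {"id": 1, "name": "alice"}}'

event = json.loads(raw)
print(f'{event["op"]} on {event["table"]}: {event["after"]}')
```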