This section describes the data definition language (DDL) for using HDFS as a sink table, the WITH parameters and example code for creating such a table, and how to perform the related operations on the FlinkServer job management page.
If your Kafka cluster is in security mode, the following example SQL statements can be used.
```sql
CREATE TABLE kafka_table (
  user_id STRING,
  order_amount DOUBLE,
  log_ts TIMESTAMP(3),
  WATERMARK FOR log_ts AS log_ts - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'user_source',
  'properties.bootstrap.servers' = 'IP address of the Kafka broker instance:Kafka port number',
  'properties.group.id' = 'testGroup',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'csv',
  -- Ignore CSV data that fails to be parsed. If the data is in JSON format, set 'json.ignore-parse-errors' to true instead.
  'csv.ignore-parse-errors' = 'true',
  'properties.sasl.kerberos.service.name' = 'kafka',
  'properties.security.protocol' = 'SASL_PLAINTEXT',
  'properties.kerberos.domain.name' = 'hadoop.System domain name'
);

CREATE TABLE fs_table (
  user_id STRING,
  order_amount DOUBLE,
  dt STRING,
  `hour` STRING
) PARTITIONED BY (dt, `hour`) WITH (
  -- Date-specific file partitioning
  'connector' = 'filesystem',
  'path' = 'hdfs:///sql/parquet',
  'format' = 'parquet',
  'sink.partition-commit.delay' = '1 h',
  'sink.partition-commit.policy.kind' = 'success-file'
);

-- Streaming SQL: insert into the file system table.
INSERT INTO fs_table
SELECT
  user_id,
  order_amount,
  DATE_FORMAT(log_ts, 'yyyy-MM-dd'),
  DATE_FORMAT(log_ts, 'HH')
FROM kafka_table;
```
Kafka port number
Log in to FusionInsight Manager and choose Cluster > Services > Kafka. On the displayed page, click Configurations and then All Configurations, search for allow.everyone.if.no.acl.found, set its value to true, and click Save.
./kafka-topics.sh --list --zookeeper IP address of the ZooKeeper quorumpeer instance:ZooKeeper port number/kafka
sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance resides:Kafka port number --topic Topic name --producer.config Client directory/Kafka/kafka/config/producer.properties
For example, if the topic name is user_source, the script is sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance resides:Kafka port number --topic user_source --producer.config /opt/Bigdata/client/Kafka/kafka/config/producer.properties.
3,3333,"2021-09-10 14:00"
4,4444,"2021-09-10 14:01"
Press Enter to send the message.
To obtain the IP addresses of all ZooKeeper quorumpeer instances, log in to FusionInsight Manager and choose Cluster > Services > ZooKeeper. On the displayed page, click Instance and view the IP addresses of all the hosts where the quorumpeer instances reside.
Log in to FusionInsight Manager and choose Cluster > Services > ZooKeeper. On the displayed page, click Configurations and check the value of clientPort. The default value is 24002.
hdfs dfs -ls -R /sql/parquet
Flink's filesystem connector supports partitions in the standard Hive format. Partitions do not need to be registered with a table catalog; they are inferred from the directory structure.
```
path
├── datetime=2021-09-03
│   ├── hour=11
│   │   ├── part-0.parquet
│   │   └── part-1.parquet
│   └── hour=12
│       └── part-0.parquet
└── datetime=2021-09-24
    └── hour=6
        └── part-0.parquet
```
Data within the partition directories is split into part files. Each partition contains at least one part file per sink subtask that has received data for that partition.
| Parameter | Default Value | Type | Description |
| --- | --- | --- | --- |
| sink.rolling-policy.file-size | 128 MB | MemorySize | Maximum size of a partition file before it is rolled. |
| sink.rolling-policy.rollover-interval | 30 minutes | Duration | Maximum duration that a partition file can stay open before it is rolled. |
| sink.rolling-policy.check-interval | 1 minute | Duration | Interval for checking time-based rolling policies. |
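For example, the rolling behavior of the fs_table sink above could be tuned by adding the rolling policy options to its WITH clause. The following is a minimal sketch; the 64 MB, 10-minute, and 30-second values are illustrative assumptions, and with bulk formats such as Parquet, part files are additionally rolled on every checkpoint.

```sql
CREATE TABLE fs_table (
  user_id STRING,
  order_amount DOUBLE,
  dt STRING,
  `hour` STRING
) PARTITIONED BY (dt, `hour`) WITH (
  'connector' = 'filesystem',
  'path' = 'hdfs:///sql/parquet',
  'format' = 'parquet',
  -- Roll a part file once it reaches 64 MB ...
  'sink.rolling-policy.file-size' = '64MB',
  -- ... or once it has been open for 10 minutes, whichever comes first.
  'sink.rolling-policy.rollover-interval' = '10 min',
  -- Evaluate the time-based rolling condition every 30 seconds.
  'sink.rolling-policy.check-interval' = '30 s',
  'sink.partition-commit.delay' = '1 h',
  'sink.partition-commit.policy.kind' = 'success-file'
);
```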
Only files within a single checkpoint are compacted; that is, at least as many files are generated as there are checkpoints. Files are invisible until they are merged: they become visible only after both the checkpoint and the compaction are complete. If file compaction takes too long, the checkpoint will be prolonged.
| Parameter | Default Value | Type | Description |
| --- | --- | --- | --- |
| auto-compaction | false | Boolean | Whether to enable automatic file compaction. If enabled, data is first written to temporary files. After a checkpoint completes, the temporary files generated by that checkpoint are compacted. The temporary files are invisible before compaction. |
| compaction.file-size | none | MemorySize | Target size of compacted files. The default value is the rolling file size (sink.rolling-policy.file-size). |
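The following sketch shows how automatic compaction might be enabled for the fs_table sink above; the 128 MB target size is an illustrative assumption (if omitted, the rolling file size is used).

```sql
CREATE TABLE fs_table (
  user_id STRING,
  order_amount DOUBLE,
  dt STRING,
  `hour` STRING
) PARTITIONED BY (dt, `hour`) WITH (
  'connector' = 'filesystem',
  'path' = 'hdfs:///sql/parquet',
  'format' = 'parquet',
  -- Merge the small files produced within each checkpoint before making them visible.
  'auto-compaction' = 'true',
  -- Target size of the compacted files (illustrative; defaults to sink.rolling-policy.file-size).
  'compaction.file-size' = '128MB',
  'sink.partition-commit.delay' = '1 h',
  'sink.partition-commit.policy.kind' = 'success-file'
);
```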
| Parameter | Default Value | Type | Description |
| --- | --- | --- | --- |
| sink.partition-commit.trigger | process-time | String | Type of trigger for partition commits: process-time (based on the machine time; does not require watermarks or partition time extraction) or partition-time (based on the time extracted from partition values; requires watermark generation). |
| sink.partition-commit.delay | 0 s | Duration | A partition is not committed until this delay has elapsed. For daily partitions, set the value to 1 d; for hourly partitions, set it to 1 h. |
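For example, because kafka_table above defines a watermark on log_ts, the fs_table sink could commit partitions based on partition time instead of processing time. The following is a minimal sketch; the timestamp pattern and the one-hour delay are assumptions chosen to match the dt and `hour` partition columns.

```sql
CREATE TABLE fs_table (
  user_id STRING,
  order_amount DOUBLE,
  dt STRING,
  `hour` STRING
) PARTITIONED BY (dt, `hour`) WITH (
  'connector' = 'filesystem',
  'path' = 'hdfs:///sql/parquet',
  'format' = 'parquet',
  -- Commit a partition once the watermark passes the partition end time plus the delay.
  'sink.partition-commit.trigger' = 'partition-time',
  'sink.partition-commit.delay' = '1 h',
  -- How to build a timestamp from the dt and hour partition values (assumed pattern).
  'partition.time-extractor.timestamp-pattern' = '$dt $hour:00:00',
  'sink.partition-commit.policy.kind' = 'success-file'
);
```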
| Parameter | Default Value | Type | Description |
| --- | --- | --- | --- |
| sink.partition-commit.policy.kind | - | String | Policy for committing partitions, for example success-file (write a success file into the partition directory) or custom (use a user-defined policy class). Multiple policies can be specified, separated by commas. |
| sink.partition-commit.policy.class | - | String | Class that implements the partition commit policy interface. This parameter takes effect only with the custom commit policy. |
| sink.partition-commit.success-file.name | _SUCCESS | String | Name of the file written by the success-file partition commit policy. The default value is _SUCCESS. |
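To illustrate how these policy options combine, the following sketch chains the built-in success-file policy with a custom policy and renames the marker file. The class name com.example.MyCommitPolicy and the _FINISHED file name are hypothetical placeholders.

```sql
CREATE TABLE fs_table (
  user_id STRING,
  order_amount DOUBLE,
  dt STRING,
  `hour` STRING
) PARTITIONED BY (dt, `hour`) WITH (
  'connector' = 'filesystem',
  'path' = 'hdfs:///sql/parquet',
  'format' = 'parquet',
  'sink.partition-commit.delay' = '1 h',
  -- Apply both the success-file policy and a custom policy when a partition is committed.
  'sink.partition-commit.policy.kind' = 'success-file,custom',
  -- Hypothetical class implementing the partition commit policy interface; it must be on the job classpath.
  'sink.partition-commit.policy.class' = 'com.example.MyCommitPolicy',
  -- Name of the marker file written by the success-file policy (illustrative alternative to _SUCCESS).
  'sink.partition-commit.success-file.name' = '_FINISHED'
);
```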