This section walks you through creating a Kafka 2.7 instance (with ciphertext access and SASL_SSL enabled) and connecting to it from a client over a private network within a virtual private cloud (VPC) to produce and consume messages, so that you can quickly get started with Distributed Message Service (DMS).
A Kafka instance runs in a VPC. Before creating a Kafka instance, ensure that a VPC is available.
After a Kafka instance is created, download and install the Kafka open-source client on your ECS before producing and consuming messages.
You can select the specification and quantity, and enable ciphertext access and SASL_SSL when creating a Kafka instance.
When connecting to a Kafka instance with SASL_SSL enabled, SASL is used for authentication. Data is encrypted with SSL certificates for high-security transmission.
Topics store the messages that producers create and consumers subscribe to. This section uses creating a topic on the console as an example.
Before connecting to a Kafka instance with SASL_SSL enabled, download the certificate and configure the connection in the client configuration file.
To achieve fine-grained management of your cloud resources, create IAM user groups and users and grant specified permissions to the users. For more information, see Creating a User and Granting DMS for Kafka Permissions.
Configure the VPC and subnet for Kafka instances as required. You can use the current account's existing VPC and subnet, or create new ones. For details about how to create a VPC and a subnet, see Creating a VPC. Note that the VPC must be in the same region as the Kafka instance.
Configure the security group for Kafka instances as required. You can use the current account's existing security groups, or create new ones. For details about how to create a security group, see Creating a Security Group.
Table 1 Security group rules

Direction | Protocol | Port | Source address | Description
---|---|---|---|---
Inbound | TCP | 9093 | 0.0.0.0/0 | Accessing a Kafka instance over a private network within a VPC (in ciphertext)
After a security group is created, it has a default inbound rule that allows communication among ECSs within the security group and a default outbound rule that allows all outbound traffic. If you access your Kafka instance using the private network within a VPC, you do not need to add the rules described in Table 1.
For details about how to create an ECS, see Creating an ECS. If you already have an available ECS, skip this step.
The default JDK on the ECS (for example, OpenJDK) may be unsuitable, so use Oracle JDK instead. Obtain Oracle JDK 1.8.111 or later from Oracle's official website.
tar -zxvf jdk-8u321-linux-x64.tar.gz
Change jdk-8u321-linux-x64.tar.gz to your JDK version.
vim ~/.bash_profile
export JAVA_HOME=/opt/java/jdk1.8.0_321
export PATH=$JAVA_HOME/bin:$PATH
Change /opt/java/jdk1.8.0_321 to the path where you installed the JDK.
:wq
source ~/.bash_profile
java -version
java version "1.8.0_321"
wget https://archive.apache.org/dist/kafka/2.7.2/kafka_2.12-2.7.2.tgz
tar -zxf kafka_2.12-2.7.2.tgz
Parameter | Description
---|---
Region | DMS for Kafka instances in different regions cannot communicate with each other over an intranet. Select the region nearest to you for low latency and fast access. Select eu-de.
Project | Projects isolate compute, storage, and network resources across geographical regions. Each region has a preset project. Select eu-de (default).
AZ | An AZ is a physical region with independent power supply and networks. AZs are physically isolated but interconnected through an internal network. Select AZ1, AZ2, and AZ3.
Instance Name | Customize a name that complies with these rules: 4–64 characters; starts with a letter; contains only letters, digits, hyphens (-), and underscores (_). Enter kafka-test.
Enterprise Project | This parameter is for enterprise users. An enterprise project manages project resources in groups and is logically isolated from other enterprise projects. Select default.
Specifications | Select Cluster to create a cluster Kafka instance.
Version | Kafka version. Cannot be changed after the instance is created. Select 2.7.
CPU Architecture | x86. Retain the default value.
Broker Flavor | Select a broker flavor as required. Select kafka.2u4g.cluster.
Brokers | Specify the number of brokers as required. Enter 3.
Storage Space per Broker | Select the disk type and specify the disk size as required. Total storage space = storage space per broker × number of brokers. The disk type cannot be changed after the instance is created. Select Ultra-high I/O and enter 100.
Disk Encryption | Skip it.
Capacity Threshold Policy | Select Automatically delete: when disk usage reaches the capacity threshold (95%), messages can still be produced and consumed, but the earliest 10% of messages are deleted to ensure sufficient disk space. Use this policy for services that cannot tolerate interruptions; note that data may be lost.
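As a quick check of the storage setting above (total storage space = storage space per broker × number of brokers), the sample values of 100 GB per broker and 3 brokers work out as follows:

```shell
# Total storage = storage space per broker x number of brokers
storage_per_broker_gb=100
brokers=3
echo $((storage_per_broker_gb * brokers))   # 300 (GB)
```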
Parameter | Sub-Parameter | Description
---|---|---
Private Network Access | Plaintext Access | Disable it.
Private Network Access | Ciphertext Access | Enable it. When ciphertext access is enabled, SASL authentication is required when a client connects to the Kafka instance.
Public Network Access | - | Skip it.
It takes 3 to 15 minutes to create an instance. During this period, the instance status is Creating.
Instances that fail to be created do not occupy other resources.
Parameter | Description
---|---
Topic Name | Customize a name with 3 to 200 characters that starts with a letter or underscore (_) and contains only letters, digits, periods (.), hyphens (-), and underscores (_). The name must differ from the preset topics. Cannot be changed after the topic is created. Enter topic-01.
Partitions | If the number of partitions equals the number of consumers, more partitions mean higher consumption concurrency. Enter 3.
Replicas | Data is automatically backed up to each replica, so data remains available even if a Kafka broker becomes faulty. More replicas deliver higher reliability. Enter 3.
Aging Time (h) | How long messages are retained in the topic. Messages older than this period are deleted and can no longer be consumed. Enter 72.
Synchronous Replication | Skip it. When this option is disabled, leader replicas do not wait for follower replicas to synchronize; they receive messages, write them to local logs, and immediately acknowledge successfully written messages to the client.
Synchronous Flushing | Skip it. When this option is disabled, produced messages are stored in memory rather than written to disk immediately.
Message Timestamp | Select CreateTime: the time when the producer created the message.
Max. Message Size (bytes) | Maximum batch processing size allowed by Kafka. If message compression is enabled in the producer's configuration file or code, this is the size after compression. Enter 10,485,760.
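Two of the values above can be sanity-checked with simple arithmetic: the 72-hour aging time expressed in milliseconds (the unit Kafka uses internally for retention), and the maximum message size, which is exactly 10 MiB:

```shell
# 72 hours -> milliseconds (Kafka retention is configured in ms internally)
echo $((72 * 60 * 60 * 1000))   # 259200000

# 10,485,760 bytes = 10 x 1024 x 1024, i.e. exactly 10 MiB
echo $((10 * 1024 * 1024))      # 10485760
```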
To obtain the certificate: On the Kafka console, click the Kafka instance to go to the Basic Information page. Click Download next to SSL Certificate in the Connection area. Decompress the package to obtain the client certificate file client.jks.
/root is the path for storing the certificate. Change it to the actual path if needed.
cd kafka_2.12-2.7.2/config
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="**********" \
password="**********";
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
ssl.truststore.location={ssl_truststore_path}
ssl.truststore.password=dms@kafka
ssl.endpoint.identification.algorithm=
Description:
- username and password: the SASL username and password set when ciphertext access was enabled for the instance.
- {ssl_truststore_path}: the path where the client.jks certificate file is stored, for example /root/client.jks.
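The consumer command in a later step reads ../config/consumer.properties, which needs the same SASL_SSL settings. A sketch of that file, mirroring the producer configuration above (replace the masked username/password and the truststore path with your own values):

```properties
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="**********" \
password="**********";
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
ssl.truststore.location={ssl_truststore_path}
ssl.truststore.password=dms@kafka
ssl.endpoint.identification.algorithm=
```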
cd ../bin
./kafka-console-producer.sh --broker-list ${connection addr} --topic ${topic name} --producer.config ../config/producer.properties
Description:
- ${connection addr}: the instance's connection addresses, for example 192.xxx.xxx.xxx:9093,192.xxx.xxx.xxx:9093,192.xxx.xxx.xxx:9093.
- ${topic name}: the name of the topic created earlier.
After running this command, you can send messages to the Kafka instance by entering the information as prompted and pressing Enter. Each line of content will be sent as a message.
[root@ecs-kafka bin]# ./kafka-console-producer.sh --broker-list 192.xxx.xxx.xxx:9093,192.xxx.xxx.xxx:9093,192.xxx.xxx.xxx:9093 --topic topic-demo --producer.config ../config/producer.properties
>Hello
>DMS
>Kafka!
>^C[root@ecs-kafka bin]#
Press Ctrl+C to cancel.
./kafka-console-consumer.sh --bootstrap-server ${connection addr} --topic ${topic name} --from-beginning --consumer.config ../config/consumer.properties
Description:
- ${connection addr}: the instance's connection addresses.
- ${topic name}: the name of the topic.
- --from-beginning: consumes all messages in the topic, including those produced before the consumer started.
Sample:
[root@ecs-kafka bin]# ./kafka-console-consumer.sh --bootstrap-server 192.xxx.xxx.xxx:9093,192.xxx.xxx.xxx:9093,192.xxx.xxx.xxx:9093 --topic topic-demo --from-beginning --consumer.config ../config/consumer.properties
Hello
Kafka!
DMS
^CProcessed a total of 3 messages
[root@ecs-kafka bin]#
Press Ctrl+C to cancel.