DLI allows you to develop programs that run as Spark jobs to operate on databases, DLI or OBS tables, and table data. This example demonstrates how to write a Java program and use a Spark job to create a database, create a table, and insert table data.
For example, if the testdb database is created using the SQL editor of DLI, a program package that creates the testTable table in testdb will not work after it is submitted to a Spark Jar job.
Before developing a Spark job to access DLI metadata, set up a development environment that meets the following requirements.
| Item | Description |
|---|---|
| OS | Windows 7 or later |
| JDK | JDK 1.8 |
| IntelliJ IDEA | Used for application development. The version must be 2019.1 or another compatible version. |
| Maven | Basic configuration of the development environment. Maven is used for project management throughout the software development lifecycle. |
| No. | Phase | Software Portal | Description |
|---|---|---|---|
| 1 | Create a queue for general use. | DLI console | Create a DLI queue for running your job. |
| 2 | Configure the OBS file. | OBS console | Create an OBS bucket and upload the files the job needs. |
| 3 | Create a Maven project and configure the POM file. | IntelliJ IDEA | Write a program that creates a DLI or OBS table by referring to the sample code. |
| 4 | Write code. | IntelliJ IDEA | |
| 5 | Debug, compile, and pack the code into a JAR package. | IntelliJ IDEA | |
| 6 | Upload the JAR package to OBS and DLI. | OBS console | Upload the generated Spark JAR package to an OBS directory and to a DLI package. |
| 7 | Create a Spark Jar job. | DLI console | Create and submit the Spark Jar job on the DLI console. |
| 8 | Check the execution result of the job. | DLI console | View the job running status and run logs. |
In this example, the Maven project name is SparkJarMetadata, and the project storage path is D:\DLITest\SparkJarMetadata.
```xml
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.3.2</version>
    </dependency>
</dependencies>
```
Set the package name as you need. In this example, set Package to com.dli.demo and press Enter.
Create a Java Class file in the package path. In this example, the Java Class file is DliCatalogTest.
Write the DliCatalogTest program to create a database, DLI table, and OBS table.
For the sample code, see Java Example Code.
```java
import org.apache.spark.sql.SparkSession;
```
When you create a SparkSession, specify the spark.sql.session.state.builder, spark.sql.catalog.class, and spark.sql.extensions parameters, as shown in the following example.
```java
SparkSession spark = SparkSession
    .builder()
    .config("spark.sql.session.state.builder", "org.apache.spark.sql.hive.UQueryHiveACLSessionStateBuilder")
    .config("spark.sql.catalog.class", "org.apache.spark.sql.hive.UQueryHiveACLExternalCatalog")
    .config("spark.sql.extensions", "org.apache.spark.sql.DliSparkExtension")
    .appName("java_spark_demo")
    .getOrCreate();
```
Alternatively, depending on the Spark version your queue uses, specify the DliLakeHouse builder and catalog instead (spark.sql.extensions is not required in that case):

```java
SparkSession spark = SparkSession
    .builder()
    .config("spark.sql.session.state.builder", "org.apache.spark.sql.hive.DliLakeHouseBuilder")
    .config("spark.sql.catalog.class", "org.apache.spark.sql.hive.DliLakeHouseCatalog")
    .appName("java_spark_demo")
    .getOrCreate();
```
```java
// Create a DLI table and insert test data
spark.sql("drop table if exists test_sparkapp.dli_testtable").collect();
spark.sql("create table test_sparkapp.dli_testtable(id INT, name STRING)").collect();
spark.sql("insert into test_sparkapp.dli_testtable VALUES (123,'jason')").collect();
spark.sql("insert into test_sparkapp.dli_testtable VALUES (456,'merry')").collect();

// Create an OBS table backed by a CSV file stored in OBS
spark.sql("drop table if exists test_sparkapp.dli_testobstable").collect();
spark.sql("create table test_sparkapp.dli_testobstable(age INT, name STRING) using csv options (path 'obs://dli-test-obs01/testdata.csv')").collect();

// Stop the session
spark.stop();
```
After the compilation is successful, double-click package in the Maven view to generate the JAR package.
The generated JAR package is stored in the target directory. In this example, SparkJarMetadata-1.0-SNAPSHOT.jar is stored in D:\DLITest\SparkJarMetadata\target.
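Before uploading, it can help to confirm that the generated JAR actually contains the compiled main class. The following Python sketch is an addition to this guide, not part of the DLI workflow: it checks a JAR (which is a ZIP archive) for the class entry, using the path and class name from this example.

```python
import zipfile

def jar_contains_class(jar_path, fq_class_name):
    """Return True if the JAR (a ZIP archive) holds the compiled class.

    A fully qualified name such as 'com.dli.demo.DliCatalogTest' maps to
    the archive entry 'com/dli/demo/DliCatalogTest.class'.
    """
    entry = fq_class_name.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()

# Example usage (assumes the package step above has produced the JAR):
# jar_contains_class("target/SparkJarMetadata-1.0-SNAPSHOT.jar",
#                    "com.dli.demo.DliCatalogTest")
```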
| Parameter | Value |
|---|---|
| Queue | Select the DLI queue created for general purpose. For example, select the sparktest queue created in Step 1: Create a Queue for General Purpose. |
| Spark Version | Select a supported Spark version from the drop-down list. The latest version is recommended. |
| Job Name (--name) | Name of the custom Spark Jar job. For example, SparkTestMeta. |
| Application | Select the package uploaded to DLI in Step 6: Upload the JAR Package to OBS and DLI. For example, select SparkJarMetadata-1.0-SNAPSHOT.jar. |
| Main Class (--class) | The fully qualified class name, in the format of package name + class name. In this example, com.dli.demo.DliCatalogTest. |
| Spark Arguments (--conf) | spark.dli.metaAccess.enable=true<br>spark.sql.warehouse.dir=obs://dli-test-obs01/warehousepath<br>NOTE: Set spark.sql.warehouse.dir to the OBS path specified in Step 2: Configure the OBS Bucket File. |
| Access Metadata | Select Yes. |
Retain default values for other parameters.
If the job fails, rectify the fault, click Edit in the Operation column of the job, modify the job parameters as needed, and click Execute to run the job again.
Call the API for creating a batch processing job. The key request parameters are described below.
If the job runs DDL statements, set "spark.sql.warehouse.dir": "obs://bucket/warehousepath" in the conf parameter.
The following is a complete example request body:
```json
{
    "queue": "citest",
    "file": "SparkJarMetadata-1.0-SNAPSHOT.jar",
    "className": "DliCatalogTest",
    "conf": {
        "spark.sql.warehouse.dir": "obs://bucket/warehousepath",
        "spark.dli.metaAccess.enable": "true"
    },
    "sc_type": "A",
    "executorCores": 1,
    "numExecutors": 6,
    "executorMemory": "4G",
    "driverCores": 2,
    "driverMemory": "7G",
    "catalog_name": "dli"
}
```
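When generating the request body programmatically, it can be useful to sanity-check the conf settings before calling the API. The following Python sketch is an addition to this guide; the field names mirror the example request above, and the validation rules reflect the two settings this section says metadata access requires.

```python
import json

# Request body for the batch-job API, mirroring the example above.
# The queue name, JAR name, and OBS paths are this example's placeholders.
body = {
    "queue": "citest",
    "file": "SparkJarMetadata-1.0-SNAPSHOT.jar",
    "className": "DliCatalogTest",
    "conf": {
        "spark.sql.warehouse.dir": "obs://bucket/warehousepath",
        "spark.dli.metaAccess.enable": "true",
    },
    "sc_type": "A",
    "executorCores": 1,
    "numExecutors": 6,
    "executorMemory": "4G",
    "driverCores": 2,
    "driverMemory": "7G",
    "catalog_name": "dli",
}

def check_metadata_conf(body):
    """Fail fast if the conf entries needed for metadata access are missing."""
    conf = body.get("conf", {})
    if conf.get("spark.dli.metaAccess.enable") != "true":
        raise ValueError("spark.dli.metaAccess.enable must be 'true'")
    if not conf.get("spark.sql.warehouse.dir", "").startswith("obs://"):
        raise ValueError("spark.sql.warehouse.dir must be an OBS path")
    return True

check_metadata_conf(body)
payload = json.dumps(body)  # serialized request body, ready to send
```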
This example uses Java for coding. The complete sample code is as follows:
```java
package com.dli.demo;

import org.apache.spark.sql.SparkSession;

public class DliCatalogTest {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .config("spark.sql.session.state.builder",
                        "org.apache.spark.sql.hive.UQueryHiveACLSessionStateBuilder")
                .config("spark.sql.catalog.class",
                        "org.apache.spark.sql.hive.UQueryHiveACLExternalCatalog")
                .config("spark.sql.extensions", "org.apache.spark.sql.DliSparkExtension")
                .appName("java_spark_demo")
                .getOrCreate();

        spark.sql("create database if not exists test_sparkapp").collect();

        spark.sql("drop table if exists test_sparkapp.dli_testtable").collect();
        spark.sql("create table test_sparkapp.dli_testtable(id INT, name STRING)").collect();
        spark.sql("insert into test_sparkapp.dli_testtable VALUES (123,'jason')").collect();
        spark.sql("insert into test_sparkapp.dli_testtable VALUES (456,'merry')").collect();

        spark.sql("drop table if exists test_sparkapp.dli_testobstable").collect();
        spark.sql("create table test_sparkapp.dli_testobstable(age INT, name STRING) using csv options (path 'obs://dli-test-obs01/testdata.csv')").collect();

        spark.stop();
    }
}
```
The following is the sample code in Scala:

```scala
import scala.util.Try

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

object DliCatalogTest {
  def main(args: Array[String]): Unit = {
    val sql = args(0)
    val runDdl = Try(args(1).toBoolean).getOrElse(true)
    System.out.println(s"sql is $sql runDdl is $runDdl")

    val sparkConf = new SparkConf(true)
    sparkConf
      .set("spark.sql.session.state.builder", "org.apache.spark.sql.hive.UQueryHiveACLSessionStateBuilder")
      .set("spark.sql.catalog.class", "org.apache.spark.sql.hive.UQueryHiveACLExternalCatalog")
    sparkConf.setAppName("dlicatalogtester")

    val spark = SparkSession.builder
      .config(sparkConf)
      .enableHiveSupport()
      .config("spark.sql.extensions", "org.apache.spark.sql.DliSparkExtension")
      .appName("SparkTest")
      .getOrCreate()

    System.out.println("catalog is " + spark.sessionState.catalog.toString)
    if (runDdl) {
      val df = spark.sql(sql).collect()
    } else {
      spark.sql(sql).show()
    }

    spark.close()
  }
}
```
The following is the sample code in Python:

```python
#!/usr/bin/python
# -*- coding: UTF-8 -*-

from __future__ import print_function

import sys

from pyspark.sql import SparkSession

if __name__ == "__main__":
    url = sys.argv[1]
    creatTbl = "CREATE TABLE test_sparkapp.dli_rds USING JDBC OPTIONS ('url'='jdbc:mysql://%s'," \
               "'driver'='com.mysql.jdbc.Driver','dbtable'='test.test'," \
               " 'passwdauth' = 'DatasourceRDSTest_pwd','encryption' = 'true')" % url

    spark = SparkSession \
        .builder \
        .enableHiveSupport() \
        .config("spark.sql.session.state.builder", "org.apache.spark.sql.hive.UQueryHiveACLSessionStateBuilder") \
        .config("spark.sql.catalog.class", "org.apache.spark.sql.hive.UQueryHiveACLExternalCatalog") \
        .config("spark.sql.extensions", "org.apache.spark.sql.DliSparkExtension") \
        .appName("python Spark test catalog") \
        .getOrCreate()

    spark.sql("CREATE database if not exists test_sparkapp").collect()
    spark.sql("drop table if exists test_sparkapp.dli_rds").collect()
    spark.sql(creatTbl).collect()
    spark.sql("select * from test_sparkapp.dli_rds").show()
    spark.sql("insert into table test_sparkapp.dli_rds select 12,'aaa'").collect()
    spark.sql("select * from test_sparkapp.dli_rds").show()
    spark.sql("insert overwrite table test_sparkapp.dli_rds select 1111,'asasasa'").collect()
    spark.sql("select * from test_sparkapp.dli_rds").show()
    spark.sql("drop table test_sparkapp.dli_rds").collect()
    spark.stop()
```