Relationship Among Spark, HDFS, and Yarn

Relationship Between Spark and HDFS

Spark computes data from multiple sources, such as local files and HDFS, but most of it comes from HDFS. HDFS can read data at large scale for parallel computing, and after computation the results can be written back to HDFS.

A Spark application involves a Driver and Executors: the Driver schedules tasks, and the Executors run them.

Figure 1 shows the process of reading a file.

Figure 1 File reading process
The file reading process is as follows (a minimal code sketch follows the list):
  1. The Driver queries HDFS for information about File A.
  2. HDFS returns the detailed block information about this file.
  3. The Driver sets the degree of parallelism based on the number and size of the blocks, and creates tasks to read them.
  4. The Executors run the tasks and read the blocks into partitions of a Resilient Distributed Dataset (RDD).
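
For illustration, here is a minimal Scala sketch of this read path. The application name and the HDFS path hdfs://hacluster/tmp/input are hypothetical placeholders:

    import org.apache.spark.sql.SparkSession

    object ReadFromHdfs {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("ReadFromHdfs") // hypothetical application name
          .getOrCreate()

        // The Driver obtains the block metadata of the file from HDFS and
        // plans roughly one read task per HDFS block (input split).
        val rdd = spark.sparkContext.textFile("hdfs://hacluster/tmp/input")

        // The Executors run the tasks; each task reads its block into one
        // partition of the resulting RDD.
        println(s"partitions = ${rdd.getNumPartitions}, lines = ${rdd.count()}")

        spark.stop()
      }
    }

Each input split (typically one HDFS block) becomes one RDD partition, which is why a larger file yields more read tasks and hence more parallelism.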

Figure 2 shows the process of writing data to a file.

Figure 2 File writing process
The file writing process is as follows (a minimal code sketch follows the list):
  1. The Driver creates the directory to which the file is to be written.
  2. Based on the distribution of the RDD partitions, the Driver computes the number of write tasks and sends them to the Executors.
  3. The Executors run these tasks and write the RDD data to the directory created in step 1.
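
A matching sketch of the write path, again with hypothetical names; the output directory hdfs://hacluster/tmp/output must not already exist when the job runs:

    import org.apache.spark.sql.SparkSession

    object WriteToHdfs {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("WriteToHdfs") // hypothetical application name
          .getOrCreate()

        // An example RDD with four partitions; one write task is created
        // per partition.
        val rdd = spark.sparkContext.parallelize(1 to 100, numSlices = 4)

        // The Driver creates the output directory, then the Executors run
        // the write tasks, each producing one part-NNNNN file in it.
        rdd.map(_.toString).saveAsTextFile("hdfs://hacluster/tmp/output")

        spark.stop()
      }
    }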

Relationship Between Spark and Yarn

Spark computation and scheduling can run in Yarn mode: Spark uses the compute resources provided by the Yarn cluster and runs tasks in a distributed way. Spark on Yarn has two deploy modes: Yarn-cluster, in which the Driver runs inside the cluster in the ApplicationMaster, and Yarn-client, in which the Driver runs on the client that submits the application.
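
As a minimal sketch, the application code itself barely changes when running on Yarn; the master is set to yarn (shown in code here for illustration, although it is usually passed at submit time, for example with spark-submit --master yarn --deploy-mode cluster or --deploy-mode client):

    import org.apache.spark.sql.SparkSession

    object YarnExample {
      def main(args: Array[String]): Unit = {
        // "yarn" makes Spark request Executor containers from the Yarn
        // ResourceManager instead of a standalone Spark master. Whether the
        // Driver runs inside the cluster (Yarn-cluster) or on the submitting
        // machine (Yarn-client) is chosen at submit time via --deploy-mode.
        val spark = SparkSession.builder()
          .appName("YarnExample") // hypothetical application name
          .master("yarn")
          .getOrCreate()

        val sum = spark.sparkContext.parallelize(1 to 1000).reduce(_ + _)
        println(s"sum = $sum")

        spark.stop()
      }
    }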