Basic Example for Spark Structured Streaming and Kafka Integration
With the newest Kafka consumer API, there are notable differences in usage. Learn how to integrate Spark Structured Streaming and Kafka using this new API.
The Spark Streaming integration for Kafka 0.10 is similar in design to the 0.8 Direct Stream approach. It provides simple parallelism, 1:1 correspondence between Kafka partitions and Spark partitions, and access to offsets and metadata. However, because the newer integration uses the new Kafka consumer API instead of the simple API, there are notable differences in usage. This version of the integration is marked as experimental, so the API is potentially subject to change.
In this blog, I am going to implement a basic example on Spark Structured Streaming and Kafka integration.
Here, I am using:
- Apache Spark 2.2.0
- Apache Kafka 0.11.0.1
- Scala 2.11.8
Create the build.sbt
Let's create an sbt project and add the following dependencies in build.sbt:
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-sql_2.11" % "2.2.0",
  "org.apache.spark" % "spark-sql-kafka-0-10_2.11" % "2.2.0",
  "org.apache.kafka" % "kafka-clients" % "0.11.0.1"
)
Create the SparkSession
Now, we have to import the necessary classes and create a local SparkSession, the starting point of all functionality in Spark:
import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder
  .appName("Spark-Kafka-Integration")
  .master("local")
  .getOrCreate()
Define the Schema
We have to define the schema for the data that we are going to read from the CSV.
import org.apache.spark.sql.types._

val mySchema = StructType(Array(
  StructField("id", IntegerType),
  StructField("name", StringType),
  StructField("year", IntegerType),
  StructField("rating", DoubleType),
  StructField("duration", IntegerType)
))
A sample of my CSV file can be found here and the dataset description is given here.
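The sample file and dataset description are linked in the original post; purely for illustration, a few hypothetical rows matching mySchema (id, name, year, rating, duration) would look like this:

```
1,Movie One,1995,3.8,5460
2,Movie Two,2001,4.2,7260
3,Movie Three,2010,2.9,6300
```

Note that Spark's CSV reader assumes there is no header row by default; if your file has one, add .option("header", "true") when reading the stream.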
Create the Streaming DataFrame
Now, we have to create a streaming DataFrame with the schema defined in the variable mySchema. If you drop any CSV file into the source directory, it will automatically be picked up by the streaming DataFrame.
val streamingDataFrame = spark.readStream
  .schema(mySchema)
  .csv("path of your directory like home/Desktop/dir/")
Publish the Stream to Kafka
streamingDataFrame.selectExpr("CAST(id AS STRING) AS key", "to_json(struct(*)) AS value")
  .writeStream
  .format("kafka")
  .option("topic", "topicName")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("checkpointLocation", "path to your local dir")
  .start()
Create the topic called topicName in Kafka and publish the DataFrame to that topic. Here, 9092 is the port on the local system on which Kafka is running. We use checkpointLocation to store the stream's offsets.
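The post doesn't show how the topic is created. Assuming a local Kafka 0.11 installation with the bundled scripts available, it can be created from the shell; the partition and replication settings below are minimal values for a single-broker setup:

```shell
# Create the topic (Kafka 0.11-era tooling registers topics via ZooKeeper)
bin/kafka-topics.sh --create \
  --zookeeper localhost:2181 \
  --replication-factor 1 \
  --partitions 1 \
  --topic topicName

# Optionally, verify what the stream publishes with a console consumer
bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic topicName \
  --from-beginning
```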
Subscribe to the Stream From Kafka
import spark.implicits._
val df = spark
.readStream
.format("kafka")
.option("kafka.bootstrap.servers", "localhost:9092")
.option("subscribe", "topicName")
.load()
At this point, we simply subscribe to the stream from Kafka, using the same topic name that we gave above.
Convert the Stream According to mySchema and Timestamp
import java.sql.Timestamp
import org.apache.spark.sql.functions.from_json

val df1 = df.selectExpr("CAST(value AS STRING)", "CAST(timestamp AS TIMESTAMP)")
  .as[(String, Timestamp)]
  .select(from_json($"value", mySchema).as("data"), $"timestamp")
  .select("data.*", "timestamp")
Here, the value coming in on the stream from Kafka is the JSON we published above; from_json parses it back into the columns described in mySchema. We also keep the timestamp column.
Print the DataFrame on Console
Here, we just print our data to the console.
df1.writeStream
  .format("console")
  .option("truncate", "false")
  .start()
  .awaitTermination()
For more details, you can refer to this documentation.
Published at DZone with permission of Ayush Tiwari, DZone MVB. See the original article here.