KSQL: A SQL Streaming Engine for Apache Kafka
KSQL is a SQL streaming engine for Apache Kafka. It provides an easy-to-use, yet powerful interactive SQL interface for stream processing on Kafka, without the need to write code in a programming language like Java or Python. KSQL is scalable, elastic, and fault-tolerant. It supports a wide range of streaming operations, including data filtering, transformations, aggregations, joins, windowing, and sessionization.
What Is Streaming?
In stream processing, data is processed continuously as new data becomes available for analysis. Records arrive sequentially as an unbounded stream, typically as key-value pairs, and can be pulled in by a “listening” analytics system one record at a time.
Below are a few key features of KSQL processing:
- Per-record stream processing with millisecond latency.
- Data filtering.
- Data transformations and conversions.
- Data enrichment with joins.
- Data manipulation with scalar functions.
- Data analysis with stateful processing, aggregations, and windowing operations (a sample query follows this list).
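To make the last few bullets concrete, here is a rough sketch of a windowed aggregation with a filter. It uses the pageviews_original stream created later in this article; the table name and window size are illustrative, not part of the quickstart:
-- Illustrative only: count views per page over a 30-second tumbling window
CREATE TABLE pageviews_per_page AS
  SELECT pageid, COUNT(*) AS view_count
  FROM pageviews_original
  WINDOW TUMBLING (SIZE 30 SECONDS)
  WHERE pageid IS NOT NULL
  GROUP BY pageid;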
A client application can use the Kafka Streams API for stream processing on Kafka topic data; underneath the Kafka Streams API sit ordinary Kafka producers and consumers.
KSQL queries perform stream processing as an abstraction over the Kafka Streams API and can consume streaming data in structured formats such as AVRO, JSON, and DELIMITED.
Now, let’s take a look at how we can query in KSQL:
- Start your Confluent platform.
- Open the KSQL CLI with <confluent-home>/bin/ksql.
- Create a STREAM named pageviews_original from the Kafka topic pageviews, specifying the value_format of DELIMITED. Describe the new STREAM. Notice that KSQL created additional columns called ROWTIME, which corresponds to the Kafka message timestamp, and ROWKEY, which corresponds to the Kafka message key:
ksql> CREATE STREAM pageviews_original (viewtime bigint, userid varchar,
pageid varchar) WITH (kafka_topic='pageviews',value_format='DELIMITED');
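To describe the new stream as mentioned above, run DESCRIBE; the exact output depends on your KSQL version, but alongside viewtime, userid, and pageid you should see the ROWTIME and ROWKEY system columns:
ksql> DESCRIBE pageviews_original;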
- Create a TABLE named users_original from the Kafka topic users, specifying the value_format of JSON. Describe the new table:
ksql> CREATE TABLE users_original (registertime bigint, gender varchar,
regionid varchar, userid varchar) WITH (kafka_topic='users',value_format=
'JSON');
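As with the stream, a DESCRIBE call lists the table's columns (the output itself is omitted here):
ksql> DESCRIBE users_original;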
- Show the existing streams and tables using:
SHOW STREAMS;
SHOW TABLES;
KStream vs KTable
A stream is an unbounded sequence of structured data (events). Once an event is written to a stream, it is immutable, meaning it can't be updated or deleted. A table, on the other hand, represents the current state derived from the events of a stream, and it is mutable: a new event for an existing key updates that key's value.
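One way to see the difference from the CLI, assuming the stream and table created above: querying the stream prints every event in arrival order, while querying the table emits its changelog, so a repeated key reflects an update to that key rather than a new independent event.
ksql> SELECT * FROM pageviews_original LIMIT 5;
ksql> SELECT * FROM users_original LIMIT 5;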
- Create a persistent query by using the CREATE STREAM keywords to precede the SELECT statement:
ksql> CREATE STREAM pageviews_female AS SELECT users_original.userid AS
userid, pageid, regionid, gender FROM pageviews_original LEFT JOIN
users_original ON pageviews_original.userid = users_original.userid
WHERE gender = 'FEMALE';
- Write the query results to an output topic:
CREATE STREAM pageviews_female_like_89
  WITH (kafka_topic='pageviews_enriched_r8_r9', value_format='DELIMITED') AS
  SELECT * FROM pageviews_female
  WHERE regionid LIKE '%_8' OR regionid LIKE '%_9';
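To confirm that rows are actually landing in the underlying Kafka topic, the CLI can print it directly (this assumes the quickstart data generators are still producing records):
ksql> PRINT 'pageviews_enriched_r8_r9' FROM BEGINNING;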