Write a Kafka Producer Using Twitter Stream
With Twitter's newly open-sourced Hosebird Client (hbc), a Java HTTP library for consuming Twitter's Streaming API, we can easily create a Kafka producer that streams tweets.
Twitter open-sourced its Hosebird Client (hbc), a robust Java HTTP library for consuming Twitter's Streaming API. In this post, I am going to present a demo of how we can use hbc to create a Kafka Twitter stream producer, which tracks a few terms in Twitter statuses and produces a Kafka stream out of them. That stream can be used later for counting the terms, or for sending the data from Kafka to Storm (a Kafka-Storm pipeline) or to HDFS (as we will see in the next post about using the Camus API).
You can download and run the complete sample here.
Requirements
- Apache Kafka 2.6.0
- Twitter Developer account (for the API key, secret, etc.)
- Apache Zookeeper (required for Kafka)
- Oracle JDK 1.8 (64-bit)
Build Environment
- Eclipse
- Apache Maven 2/3
How to Generate Twitter API Keys Using Developer Account
- Go to https://dev.twitter.com/apps/new and log in, if necessary.
- Enter your Application Name, Description, and your website address. You can leave the callback URL empty.
- Accept the TOS.
- Submit the form by clicking Create your Twitter Application.
- Copy the consumer key (API key) and consumer secret from the screen into your application.
- After creating your Twitter application, you have to grant your Twitter account access to it. To do this, click Create my Access Token.
- Now you will have the Consumer Key, Consumer Secret, Access Token, and Access Token Secret to be used in Streaming API calls.
Steps to Run the Sample
1. Start the Zookeeper server
Start the Zookeeper server that ships with Kafka using the following script in your Kafka installation folder –
$bin/zookeeper-server-start.sh config/zookeeper.properties &
and verify that it is running on the default port 2181 using –
$netstat -anlp | grep 2181
2. Start Kafka server
$bin/kafka-server-start.sh config/server.properties &
and verify that it is running on the default port 9092 using –
$netstat -anlp | grep 9092
If you are on a Mac and have Homebrew installed, both can be done with simple brew commands –
$brew install kafka
# this internally installs zookeeper too
$brew services start zookeeper
$kafka-server-start /usr/local/etc/kafka/server.properties
3. Create Topic
$bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic twitter-topic
4. Validate the Topic
$bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic twitter-topic
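If the topic was created successfully, the describe command prints a topic summary followed by one row per partition, roughly like this (the exact columns vary by Kafka version):
Topic: twitter-topic  PartitionCount: 1  ReplicationFactor: 1  Configs:
Topic: twitter-topic  Partition: 0  Leader: 0  Replicas: 0  Isr: 0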
5. Publish Message
Now that we are all set, with Kafka running and ready to accept messages on the topic we just created, we will create a Kafka producer that uses the hbc client API to get a Twitter stream for the tracked terms and put messages on the topic named "twitter-topic".
- First, we need to add the Maven dependency for the latest version of hbc-core, along with the other dependencies needed for Kafka (a kafka-clients sketch follows the snippet) –
<dependency>
    <groupId>com.twitter</groupId>
    <artifactId>hbc-core</artifactId> <!-- or hbc-twitter4j -->
    <version>2.2.0</version> <!-- or whatever the latest version is -->
</dependency>
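The post doesn't list the Kafka dependency itself; assuming the Apache Kafka 2.6.0 from the requirements above, the producer API comes from kafka-clients –
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.6.0</version>
</dependency>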
- Then, we need to set properties to configure our Kafka producer to publish messages to the topic and set up the required server properties (a sketch of the TwitterKafkaConfig and getProducer() helpers referenced here follows the snippet) –

private static final String TOPIC = "twitter-topic";

Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, TwitterKafkaConfig.SERVERS);
properties.put(ProducerConfig.ACKS_CONFIG, "1");
properties.put(ProducerConfig.LINGER_MS_CONFIG, 500);
properties.put(ProducerConfig.RETRIES_CONFIG, 0);
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
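The snippet above references a TwitterKafkaConfig holder, and the send loop further down calls a getProducer() helper; neither is shown in the post. Here is a minimal sketch of both, assuming a local broker at localhost:9092 (the class, field, and method names follow the sample's references, but the bodies and the enclosing ProducerFactory class are my assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.common.serialization.StringSerializer;

// Assumed config holder; only SERVERS and TOPIC are referenced by the sample code.
final class TwitterKafkaConfig {
    static final String SERVERS = "localhost:9092"; // assumption: local broker
    static final String TOPIC = "twitter-topic";
}

// Hypothetical home for the getProducer() helper used in the send loop below.
final class ProducerFactory {
    static Producer<Long, String> getProducer() {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, TwitterKafkaConfig.SERVERS);
        properties.put(ProducerConfig.ACKS_CONFIG, "1");
        properties.put(ProducerConfig.LINGER_MS_CONFIG, 500);
        properties.put(ProducerConfig.RETRIES_CONFIG, 0);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // KafkaProducer is AutoCloseable, so it fits the try-with-resources used later.
        return new KafkaProducer<>(properties);
    }
}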
- Set up a StatusesFilterEndpoint, which configures the track terms to be matched against recent status messages, as in the example –

StatusesFilterEndpoint endpoint = new StatusesFilterEndpoint();
endpoint.trackTerms(Lists.newArrayList(term));
- Provide the OAuth authentication parameters we generated earlier (this program reads them from command-line parameters, so don't forget to pass them as arguments when you run it in your IDE), and create the client using the endpoint and auth –

// Blocking queue that the hbc client fills with raw status messages
// (the capacity of 10000 is an assumption; the post never shows this line)
BlockingQueue<String> queue = new LinkedBlockingQueue<>(10000);

Authentication auth = new OAuth1(consumerKey, consumerSecret, token, secret);

Client client = new ClientBuilder().hosts(Constants.STREAM_HOST)
        .endpoint(endpoint).authentication(auth)
        .processor(new StringDelimitedProcessor(queue)).build();
- As the last step, connect the client, fetch messages from the queue, and send them through the Kafka producer –

client.connect();
try (Producer<Long, String> producer = getProducer()) {
    while (true) {
        // queue.take() blocks until the hbc client delivers the next status
        ProducerRecord<Long, String> message =
                new ProducerRecord<>(TwitterKafkaConfig.TOPIC, queue.take());
        producer.send(message);
    }
} catch (InterruptedException e) {
    e.printStackTrace();
} finally {
    client.stop();
}
To run the complete example, run the TwitterKafkaProducer class as a Java application in your favorite IDE, and don't forget to pass the arguments with your API keys and the terms to track (a hypothetical invocation is shown below). Read the detailed instructions here.
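For reference, a command-line invocation might look roughly like this (the classpath and the argument order are my assumptions; check the sample's instructions for the exact contract):
# hypothetical classpath and argument order — adjust to the sample
$java -cp target/classes:<dependency-jars> TwitterKafkaProducer <consumerKey> <consumerSecret> <token> <tokenSecret> <term>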
6. Validate Messages
Consume messages from the topic twitter-topic to verify the incoming message stream –
$bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic twitter-topic --from-beginning
Also, to see how you can integrate Kafka with HDFS using Camus from LinkedIn, you can visit the blog here.