Word Count With Spark and Scala
See how you can use Scala together with Spark to solve the classic word count problem.
Apache Spark has taken over the Big Data world. Spark is implemented in Scala and is well-known for its performance.
In previous blogs, we've approached the word count problem by using Scala with Hadoop and Scala with Storm. In this blog, we will utilize Spark for the word count problem.
Submitting Spark jobs implemented in Scala is pretty easy and convenient. All we need to do is pass our script file to the Spark command.
First, we have to download and set up a Spark version locally.
Then, we download a text file for testing. In my case, the script from MGS2 did the job.
Now, on to the WordCount script. For local testing, we will use a file from our file system.
// Read the file, split each line into words, and count occurrences per word.
val text = sc.textFile("mytextfile.txt")
val counts = text
  .flatMap(line => line.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
counts.collect
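With the counts RDD in hand, we can already poke at the results. As a quick sanity check, a sketch like the following (the sortBy call is our own addition, not part of the original script) prints the ten most frequent words:

// Not in the original script: list the ten most frequent words.
counts.sortBy(_._2, ascending = false).take(10).foreach(println)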
The next step is to run the script.
spark-shell -i WordCountscala.scala
Once the script finishes, a Spark prompt appears, and we are free to experiment with the word count results.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_111)
Type in expressions to have them evaluated.
Type :help for more information.
Since counts.collect is the last expression in the script, its result is bound to res0 in the shell:
scala> res0.length
res1: Int = 20159
We detected 20,159 distinct words!
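Note that splitting on a single space leaves punctuation and capitalization intact, which inflates the distinct-word total. A minimal cleanup pass (our own sketch, not from the original script) might look like this:

// Sketch only: normalize case and strip punctuation before counting.
val cleaned = text
  .flatMap(_.split("\\s+"))                      // split on any whitespace
  .map(_.toLowerCase.replaceAll("[^a-z']", ""))  // lowercase, keep letters and apostrophes
  .filter(_.nonEmpty)                            // drop tokens that were pure punctuation
  .map(word => (word, 1))
  .reduceByKey(_ + _)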
Our next step is to run our job on a Spark cluster on HDInsight.
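For cluster submission via spark-submit, the shell script has to be wrapped in a standalone application. A minimal sketch, with illustrative names (the WordCount object and the argument-based paths are our assumptions, not from the original post):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch of a standalone word count app; object name and paths are illustrative.
object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount")
    val sc = new SparkContext(conf)
    val counts = sc.textFile(args(0))   // input path as the first argument
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.saveAsTextFile(args(1))      // output directory as the second argument
    sc.stop()
  }
}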