
This post elaborates on Apache Spark transformation and action operations by providing a step-by-step walkthrough of Spark examples in Scala.
Before you dive into these examples, make sure you know some of the basic Apache Spark concepts.
The examples below are in no particular sequence and form the first part of our five-part Spark Scala examples post. They assume you have an Apache Hadoop ecosystem set up and have some sample data files created.
If you do not have Apache Hadoop installed, follow this link to download and install.
Let us start by looking at 4 Spark examples.
Related: Apache Spark basics
If you are struggling to figure out how to run a Spark Scala program, this section gets straight to the point.
The first step in writing an Apache Spark application (program) is to initialize it, which includes setting up the configuration variables and connecting to the cluster. SparkContext is the gateway to all Spark functionality.
For beginners, the best and simplest option is to use the Scala shell, which automatically creates a SparkContext. Below are 4 Spark examples on how to connect and run Spark; for a standalone application, see the sketch after Method 4.
Method 1:
To launch the Scala shell, type the following at the command line: "/bin/spark-shell"
Method 2:
To launch the shell and run Spark locally without parallelism (a single worker thread): "/bin/spark-shell --master local"
Method 3:
To launch the shell and run Spark locally in parallel mode, setting the parallelism level to the number of cores on your machine: "/bin/spark-shell --master local[*]"
Method 4:
To launch the shell and connect to YARN in client mode: "/bin/spark-shell --master yarn-client" (on Spark 2.x and later, use "--master yarn --deploy-mode client" instead).
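The shell creates the SparkContext for you; in a standalone application you create it yourself. Below is a minimal sketch, assuming the spark-core dependency is on the classpath; the object name, application name, and master URL are just placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SparkShellEquivalent {
  def main(args: Array[String]): Unit = {
    // Configuration: application name and master URL ("local[*]" uses all local cores).
    val conf = new SparkConf()
      .setAppName("spark-scala-examples")
      .setMaster("local[*]")

    // SparkContext is the gateway to Spark functionality.
    val sc = new SparkContext(conf)

    // Quick sanity check: parallelize a range of numbers and count it.
    println(sc.parallelize(1 to 100).count())

    sc.stop()
  }
}
```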
Before we look at a couple of examples, it is important to understand what parallelize means in Spark.
Parallelize is a method that distributes a local collection across the cluster as an RDD, splitting it into partitions so it can be processed in parallel. The syntax for parallelizing an RDD is sc.parallelize(data, p), where ‘p’ represents the number of partitions to create.
For the most part, the number of partitions is set automatically, so you do not need to worry about it.
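If you do want to control the number of partitions, you can pass it explicitly as the second argument and confirm it on the resulting RDD; a quick sketch (the RDD name here is just illustrative):

```scala
val explicitPartitionsRDD = sc.parallelize(1 to 100, 4)
explicitPartitionsRDD.getNumPartitions   // returns 4
```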
In the below Spark Scala examples, we look at parallelizing a sample range of numbers, a List, and an Array.
Related: Spark SQL Date functions
Method 1:
To create an RDD using the Apache Spark Parallelize method on a sample range of numbers, say 1 through 100.
scala > val parSeqRDD = sc.parallelize(1 to 100)
Method 2:
To create an RDD from a Scala List using the Parallelize method.
scala > val parNumArrayRDD = sc.parallelize(List("pen","laptop","pencil","mouse"))
Note: To view a sample set of data loaded in the RDD, type this at the command line: parNumArrayRDD.take(3).foreach(println)
Method 3:
To create an RDD from an Array using the Parallelize method.
scala > val parNumArrayRDD = sc.parallelize(Array(1,2,3,4,5))
Note: To count the number of elements created in the RDD, type this at the command line: parNumArrayRDD.count()
Creating an RDD in Apache Spark requires data. In Spark, there are two ways to acquire this data: parallelized collections and external datasets. Data not already in an RDD is classified as an external dataset and includes flat files, binary files, sequence files, HDFS file formats, HBase, Cassandra, or data in virtually any other format.
The 3 Spark examples listed below show you the most common ways to read data from HDFS.
Method 1:
To read a text file named "recent_orders" that exists in HDFS.
scala > val ordersRDD = sc.textFile("recent_orders")
Method 2:
To read a text file named "recent_orders" that exists in HDFS and specify the number of partitions (3 partitions in this case).
scala > val ordersRDD = sc.textFile("recent_orders", 3)
Method 3:
To read all the contents of a directory named "data_files" in HDFS.
scala > val dataFilesRDD = sc.wholeTextFiles("data_files")
Note: The RDD returned is in a (key, value) pair format, where the key is the path of each file and the value is the entire contents of that file.
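As a quick sketch of working with those pairs, you could, for example, map each file path to the size of its contents; the RDD name below follows the example above, and the new names are just illustrative:

```scala
val fileSizesRDD = dataFilesRDD.map { case (path, content) => (path, content.length) }
fileSizesRDD.take(3).foreach(println)   // prints (file path, size in characters) pairs
```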
A Spark filter is a transformation operation that takes an existing dataset, applies a predicate function to each element, and returns a new dataset containing only the elements for which the predicate returns true.
Conceptually, this is similar to applying a column filter in an Excel spreadsheet, or a “where” clause in a SQL statement.
Listed below are a few Spark Scala examples on using a filter operation.
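The examples all assume an RDD of color names called sampleColorRDD already exists; a minimal sketch of how you might create one with parallelize (the values are just placeholders):

```scala
val sampleColorRDD = sc.parallelize(List("blue", "red", "green", "blue", "yellow"))
```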
Method 1:
To apply a filter on sampleColorRDD and only select the color "blue" from the RDD dataset.
scala > val filterBlueRDD = sampleColorRDD.filter(color => color == "blue")
Method 2:
To apply a filter on sampleColorRDD and select all colors other than the color "blue" from the RDD dataset.
scala > val filterNotBlueRDD = sampleColorRDD.filter(color => color != "blue")
Method 3:
To apply a filter on sampleColorRDD and select multiple colors, "red" and "blue", from the RDD dataset.
scala > val filterMultipleRDD = sampleColorRDD.filter(color => (color == "blue" || color == "red"))
Note 1: To perform a count() action on the filter output and validate it, type the below at the command line (using Method 1's output as an example): filterBlueRDD.count()