
Spark Scala Examples: Your baby steps to Big Data


This post elaborates on Apache Spark transformation and action operations by providing a step-by-step walkthrough of Spark examples in Scala.

Before you dive into these examples, make sure you know some of the basic Apache Spark Concepts.

The examples below are in no particular sequence and form the first part of our five-part Spark Scala examples post. They assume you have an Apache Hadoop ecosystem set up and some sample data files created.

If you do not have Apache Hadoop installed, follow this link to download and install.

Let us start by looking at 4 Spark examples. 

  1. Spark Context
  2. Spark Parallelize
  3. Spark read from hdfs
  4. Spark Filter

 

Related: Apache Spark basics 

1. Spark Context Example - How to run Spark

If you are struggling to figure out how to run a Spark Scala program, this section gets straight to the point.

The first step in writing an Apache Spark application (program) is to initialize it: set up the configuration variables and connect to the cluster. SparkContext is the gateway to accessing Spark functionality.

For beginners, the best and simplest option is to use the Scala shell, which automatically creates a SparkContext for you. Below are 4 ways to connect and run Spark, followed by a standalone application sketch.

Method 1:
  • To log in to the Scala shell, at the command line interface, type "/bin/spark-shell"
Method 2:
  • To log in and run Spark locally without parallelism: "/bin/spark-shell --master local"
Method 3:
  • To log in and run Spark locally in parallel mode, setting the parallelism level to the number of cores on your machine: "/bin/spark-shell --master local[*]"
Method 4:
  • To log in and connect to Yarn in client mode: "/bin/spark-shell --master yarn-client"
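
If you would rather package your code as a standalone application instead of using the shell, the sketch below is one minimal way to create the SparkContext yourself. The application name, master URL, and the small sanity-check job are placeholders for illustration; adjust them for your own cluster.

import org.apache.spark.{SparkConf, SparkContext}

object SparkContextExample {
  def main(args: Array[String]): Unit = {
    // "local[*]" runs Spark locally on all cores; swap in your cluster's master URL
    val conf = new SparkConf().setAppName("SparkContextExample").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Quick sanity check: parallelize a small range and sum it up
    val total = sc.parallelize(1 to 10).sum()
    println(s"Sum of 1 to 10 = $total")

    sc.stop() // release resources when the job is done
  }
}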

2. Spark Parallelize example

Before we look at a couple of examples, it is important to understand what parallelize in Spark means.

Parallelize is a method that creates an RDD from an existing collection, splitting the data into partitions so it can be processed in parallel. The syntax is sc.parallelize(data, p), where 'p' represents the number of partitions to spread the data across on the cluster.

For the most part the number of partitions is set automatically, so you do not need to worry about it.
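
As a quick illustration of the 'p' argument (run in the Scala shell), the snippet below creates the same RDD with and without an explicit partition count and verifies the result with getNumPartitions; the default value you see depends on your own setup.

scala > val defaultRDD = sc.parallelize(1 to 100)        // partition count chosen by Spark
scala > val fourPartRDD = sc.parallelize(1 to 100, 4)    // explicitly request 4 partitions
scala > defaultRDD.getNumPartitions                      // depends on your cluster/cores
scala > fourPartRDD.getNumPartitions                     // returns 4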

In the below Spark Scala examples, we look at parallelizing a sample set of numbers, a List and an Array.

Related: Spark SQL Date functions

Method 1:
  • To create an RDD using the Apache Spark Parallelize method on a sample set of numbers, say 1 through 100.
    • scala > val parSeqRDD = sc.parallelize(1 to 100)
Method 2:
  • To create an RDD from a Scala List using the Parallelize method.
    • scala > val parListRDD = sc.parallelize(List("pen","laptop","pencil","mouse"))
Note: To view a sample set of data loaded in the RDD, type this at the command line: parListRDD.take(3).foreach(println)
Method 3:
  • To create an RDD from an Array using the Parallelize method.
    • scala > val parNumArrayRDD = sc.parallelize(Array(1,2,3,4,5))
Note: To count the number of elements created in the RDD, type this at the command line: parNumArrayRDD.count()

3. Spark read from hdfs example

Creating an RDD in Apache Spark requires data. In Spark, there are two ways to acquire this data: parallelized collections and external datasets. Data not already in an RDD is classified as an external dataset and includes flat files, binary files, sequence files, files stored in hdfs, HBase, Cassandra, or data in just about any format.

The 3 Spark examples listed below show you the most common ways to read data from hdfs.

Method 1:
  • To read a text file named "recent_orders" that exists in hdfs.
    • scala > val ordersRDD = sc.textFile("recent_orders")
 
Method 2:
  • To read a text file named "recent_orders" that exists in hdfs and specify the number of partitions (3 partitions in this case).
    • scala > val ordersRDD = sc.textFile("recent_orders", 3)
 
Method 3:
  • To read all the contents of a directory named "data_files" in hdfs.
    • scala > val dataFilesRDD = sc.wholeTextFiles("data_files")
 

Note: The RDD returned is in a (key, value) pair format, where the key represents the path of each file and the value represents the entire contents of the file.
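
If you prefer fully qualified paths, the sketch below does the same reads with an explicit hdfs:// URI; the namenode host, port, and directory names here are made-up placeholders, so substitute your own. The last line prints the keys (file paths) returned by wholeTextFiles.

scala > val ordersRDD = sc.textFile("hdfs://namenode:8020/user/hadoop/recent_orders")
scala > ordersRDD.count()                                        // number of lines read
scala > val dataFilesRDD = sc.wholeTextFiles("hdfs://namenode:8020/user/hadoop/data_files")
scala > dataFilesRDD.keys.collect().foreach(println)             // one path per file in the directory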

 

4. Spark filter example

A Spark filter is a transformation operation which takes an existing dataset, applies a predicate (Boolean) function to each element, and returns a new dataset containing only the elements for which the function returns true.

Conceptually, this is similar to applying a column filter in an Excel spreadsheet, or a "where" clause in a SQL statement.

Listed below are a few Spark Scala examples on using a filter operation.

Prerequisite: Create an RDD with the sample data as shown below.
scala > val sampleColorRDD = sc.parallelize(List("red", "blue", "green", "purple", "blue", "yellow"))
 
 
Method 1:
  • To apply a filter on sampleColorRDD and only select the color "blue" from the RDD dataset.
    • scala > val filterBlueRDD = sampleColorRDD.filter(color => color == "blue")
Method 2:
  • To apply a filter on sampleColorRDD and select all colors other than the color "blue" from the RDD dataset.
    • scala > val filterNotBlueRDD = sampleColorRDD.filter(color => color != "blue")
Method 3:
  • To apply a filter on sampleColorRDD and select multiple colors, "red" and "blue", from the RDD dataset.
    • scala > val filterMultipleRDD = sampleColorRDD.filter(color => (color == "blue" || color == "red"))

Note 1: To perform a count() action on the filter output and validate it, type the following at the command line:

scala > filterMultipleRDD.count()

Note 2: Once you get familiar with the basics, you can minimize your code by combining transformation and action operations into a single line, like this:
scala > sampleColorRDD.filter(color => (color == "blue" || color == "red")).count()
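
If the same condition is needed in several places, you can also give the predicate a name and pass it to filter. The helper below (isRedOrBlue is just an illustrative name) works against the sampleColorRDD created earlier; for that sample list the filtered count is 3.

scala > def isRedOrBlue(color: String): Boolean = color == "blue" || color == "red"
scala > sampleColorRDD.filter(isRedOrBlue).count()               // returns 3 for the sample data above
scala > sampleColorRDD.filter(c => !isRedOrBlue(c)).collect()    // Array(green, purple, yellow)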
