japemon

Reputation: 325

Installing Apache Spark on Ubuntu 14.04

I have a VM that I access from Ubuntu, and this VM also runs Ubuntu 14.04. I need to install Apache Spark as soon as possible, but I can't find anything that helps me or points me to references where it's best explained. I tried once to install it on my local Ubuntu 14.04 machine, but it failed. The thing is that I don't want to install it on a cluster. Any help, please?

Upvotes: 13

Views: 28801

Answers (5)

Abir J.

Reputation: 51

This post explains the detailed steps to set up Apache Spark 2.0 on an Ubuntu/Linux machine. To run Spark, the machine needs Java and Scala installed. Spark can be installed with or without Hadoop; this post deals only with installing Spark 2.0 standalone. Installing Spark 2.0 over Hadoop is explained in another post. We will also cover how to install Jupyter notebooks for running Spark applications with Python via the pyspark module. So, let's start by checking for and installing Java and Scala.

$ scala -version
$ java -version

These commands should print the versions if Scala and Java are already installed; otherwise, you can install them with the following commands.

$ sudo add-apt-repository ppa:webupd8team/java    # oracle-java8-installer lives in this PPA
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
$ wget http://www.scala-lang.org/files/archive/scala-2.10.4.tgz
$ sudo mkdir /usr/local/scala
$ sudo tar xvf scala-2.10.4.tgz -C /usr/local/scala/

You can check again with the -version commands that Java and Scala are installed properly. Scala should display:

Scala code runner version 2.10.4 -- Copyright 2002-2013, LAMP/EPFL

and Java should display:

java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b14, mixed mode)

Then update the .bashrc file by adding these lines at the end.

export SCALA_HOME=/usr/local/scala/scala-2.10.4
export PATH=$SCALA_HOME/bin:$PATH

Then reload .bashrc by using this command

$ . .bashrc

Installing Spark

First, download Spark from https://spark.apache.org/downloads.html with these options: Spark release: 2.0.0; package type: pre-built for Hadoop 2.7; and direct download.
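If you prefer to fetch the package from the command line instead of the browser, something like this should work (the archive URL is an assumption based on the standard Apache archive layout):

$ wget https://archive.apache.org/dist/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.7.tgz -P $HOME/Downloads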

Now, go to $HOME/Downloads and use the following commands to extract the Spark tar file and move it to the installation location.

$ cd $HOME/Downloads/
$ tar xvf spark-2.0.0-bin-hadoop2.7.tgz
$ sudo mv spark-2.0.0-bin-hadoop2.7 /usr/local/spark

Add the following lines to the ~/.bashrc file. This adds the location of the Spark binaries to the PATH variable.

export SPARK_HOME=/usr/local/spark
export PATH=$SPARK_HOME/bin:$PATH

Again, reload the .bashrc environment by using one of these commands

source ~/.bashrc
. .bashrc

Now you can start the Spark shell by using these commands

$ spark-shell    # starts the Scala API
$ pyspark        # starts the Python API
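Once the shell is up, a quick sanity check (a hedged illustration, not part of the original steps; sc is the SparkContext that a stock spark-shell creates at startup):

scala> sc.parallelize(1 to 100).sum()
res0: Double = 5050.0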

Upvotes: 5

karthik manchala

Reputation: 13640

You can install and start using spark in three easy steps:

  • Download the latest version of Spark from here.
  • Navigate to the download folder in a terminal and run the following command:

    tar -xvf spark-x.x.x.tgz        //replace x's with your version
    
  • Navigate to the extracted folder and run one of the following commands:

    ./bin/spark-shell               // for interactive scala shell
    ./bin/pyspark                   // for interactive python shell
    

You are now ready to play with Spark.
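To verify the setup without the interactive shell, the extracted folder also ships runnable examples. For instance (SparkPi is one of the bundled examples; the final argument is the number of partitions to split the work into):

    ./bin/run-example SparkPi 10    // prints an approximation of pi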

Upvotes: 24

japemon

Reputation: 325

I made it work by creating a Maven project and then inserting the Spark dependency into the pom.xml file. That is what worked for me, because I had to program in Java rather than Scala.
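For reference, a minimal sketch of such a dependency; the version and the Scala suffix (_2.10) below are assumptions and must match the Spark release you target:

<!-- pom.xml: spark-core dependency; 1.4.0 is an assumed version -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.4.0</version>
</dependency>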

Upvotes: 0

prabeesh

Reputation: 945

The process to follow is mainly this:

Make sure you have version 7 or 8 of the Java Development Kit installed

Next, install Scala.

Then add the following at the end of the ~/.bashrc file

export SCALA_HOME=<path to Scala home>
export PATH=$SCALA_HOME/bin:$PATH

Then reload .bashrc.

$ . .bashrc

Next, install git; the Spark build depends on it.

sudo apt-get install git

Finally, download the Spark source distribution from here

$ wget http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0.tgz
$ tar xvf spark-1.4.0.tgz 

Building

Spark is built with SBT (Simple Build Tool), which is bundled with the distribution. To compile the code:

$ cd spark-1.4.0
$ build/sbt assembly

Building takes some time.
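Once the build finishes, a quick way to confirm it worked (run from the same spark-1.4.0 directory as above) is to start the shell:

$ ./bin/spark-shell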

Refer to this blog post, where you can find more detailed steps to install Apache Spark on Ubuntu 14.04.

Upvotes: 6

Holden

Reputation: 7452

You can start by going to http://spark.apache.org/downloads.html to download Apache Spark. If you don't have an existing Hadoop cluster/installation that you need to run against, you can select any of the options. This will give you a .tgz file that you can extract with tar -xvf [filename]. From there you can launch the Spark shell and get started in local mode. There is more information in the getting-started guide at http://spark.apache.org/docs/latest/.
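As a concrete sketch of those steps (the file name is a placeholder for whichever package you downloaded):

$ tar -xvf spark-x.x.x-bin-hadoopy.y.tgz
$ cd spark-x.x.x-bin-hadoopy.y
$ ./bin/spark-shell --master local[2]    # local mode with two worker threads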

Upvotes: 0
