deltascience

Reputation: 3380

Cannot launch spark with Java

I get this error every time I run my application:

SparkConf sparkConf = new SparkConf().setAppName(new String("New app"));
sparkConf.setMaster("spark://localhost:7077");
JavaSparkContext sc = new JavaSparkContext(sparkConf);

JavaRDD<String> file = sc.textFile("content/texas.content");

The error:

15/01/29 19:45:53 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
15/01/29 19:46:08 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/29 19:46:12 INFO client.AppClient$ClientActor: Connecting to master spark://localhost:7077...
15/01/29 19:46:12 WARN client.AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@localhost:7077: akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkMaster@localhost:7077]
15/01/29 19:46:12 WARN client.AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@localhost:7077: akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkMaster@localhost:7077]
15/01/29 19:46:12 WARN client.AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@localhost:7077: akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkMaster@localhost:7077]
15/01/29 19:46:12 WARN client.AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@localhost:7077: akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkMaster@localhost:7077]
15/01/29 19:46:23 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

How can I get rid of it? Thanks!

Upvotes: 1

Views: 1443

Answers (5)

Rolf

Reputation: 381

Use the fully qualified DNS name in your Java source, as opposed to using localhost.

So server1.domain.com:7077 instead of localhost:7077.
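
For example, a minimal sketch of the change in the Java source (server1.domain.com is just a placeholder for your actual master host):

// Point the driver at the master's fully qualified DNS name instead of localhost.
SparkConf sparkConf = new SparkConf()
        .setAppName("New app")
        .setMaster("spark://server1.domain.com:7077");
JavaSparkContext sc = new JavaSparkContext(sparkConf);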

Note: you can still use --master spark://localhost:7077 on the command line for spark-submit

Upvotes: 0

Kshitij Kulshrestha

Reputation: 2072

Download a newer version of Apache Spark, and everything will work fine.

Upvotes: 0

Manish Malhotra

Reputation: 80

Commenting on an old question :), but since I bumped into the same problem, it's worth mentioning.

Shengyuan Lu's answer is correct.

Please refer to the Spark documentation:

http://spark.apache.org/docs/latest/programming-guide.html#linking-with-spark

" The first thing a Spark program must do is to create a JavaSparkContext object, which tells Spark how to access a cluster. To create a SparkContext you first need to build a SparkConf object that contains information about your application.

SparkConf conf = new SparkConf().setAppName(appName).setMaster(master); JavaSparkContext sc = new JavaSparkContext(conf); The appName parameter is a name for your application to show on the cluster UI. master is a Spark, Mesos or YARN cluster URL,

or a special “local” string to run in local mode.

In practice, when running on a cluster, you will not want to hardcode master in the program, but rather launch the application with spark-submit and receive it there. However, for local testing and unit tests, you can pass “local” to run Spark in-process. "
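
A rough sketch of that approach, leaving the master out of the code (the spark-submit command in the comment is only illustrative; the class and JAR names are placeholders):

// No setMaster() here; supply the master when launching, for example:
//   spark-submit --class com.example.MyApp --master spark://<master-host>:7077 myapp.jar
SparkConf conf = new SparkConf().setAppName("New app");
JavaSparkContext sc = new JavaSparkContext(conf);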

Upvotes: 0

卢声远 Shengyuan Lu

Reputation: 32004

sparkConf.setMaster("local");

This runs Spark locally with one thread.
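
Applied to the code from the question, a sketch would look like this (only the master URL changes):

SparkConf sparkConf = new SparkConf()
        .setAppName("New app")
        .setMaster("local"); // run in-process with a single thread
JavaSparkContext sc = new JavaSparkContext(sparkConf);

JavaRDD<String> file = sc.textFile("content/texas.content");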

Upvotes: 2

theShadow89

Reputation: 1549

Try to use the hostname instead of localhost.

It worked for me.
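
For instance, a sketch that derives the master URL from the machine's hostname rather than hardcoding localhost (this assumes the Spark master was started under that same hostname):

// getLocalHost() throws UnknownHostException, so declare or catch it.
String host = java.net.InetAddress.getLocalHost().getHostName();
sparkConf.setMaster("spark://" + host + ":7077");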

Upvotes: 0
