Running a simple Spark script on Mesos with Zookeeper


I want to run a simple Spark program, but I am blocked by some errors. My environment is: CentOS 6.6, Java 1.7.0_51, Scala 2.10.4, Spark spark-1.4.0-bin-hadoop2.6, Mesos 0.22.1.

Everything is installed and the nodes are up. Right now I have one Mesos master and one Mesos slave node. My Spark properties are below:

spark.app.id            20150624-185838-2885789888-5050-1291-0005
spark.app.name          Spark shell
spark.driver.host   192.168.1.172
spark.driver.memory 512m
spark.driver.port   46428
spark.executor.id   driver
spark.executor.memory   512m
spark.executor.uri  http://192.168.1.172:8080/spark-1.4.0-bin-hadoop2.6.tgz
spark.externalBlockStore.folderName spark-91aafe3b-01a8-4c86-ac3b-999e278807c5
spark.fileserver.uri    http://192.168.1.172:51240
spark.jars  
spark.master            mesos://zk://192.168.1.172:2181/mesos
spark.mesos.coarse  true
spark.repl.class.uri    http://192.168.1.172:51600
spark.scheduler.mode    FIFO
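For reference, the same configuration can also be passed explicitly when launching the shell; a sketch of the equivalent invocation (all values taken from the property listing above):

```
./bin/spark-shell \
  --master mesos://zk://192.168.1.172:2181/mesos \
  --conf spark.executor.uri=http://192.168.1.172:8080/spark-1.4.0-bin-hadoop2.6.tgz \
  --conf spark.mesos.coarse=true \
  --conf spark.driver.memory=512m \
  --conf spark.executor.memory=512m
```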

Now when I start the Spark shell, it comes up to the Scala prompt (scala>). After that I get the following error:

mesos task 1 is now TASK_FAILED, blacklisting mesos slave value due to too many failures; is Spark installed on it?

How do I resolve this?


There are 2 answers

Answer from js84:

Could you check the Mesos slave logs / task information for more output on why the task failed? You can have a look at the Mesos master web UI at :5050.

Probably unrelated question: Do you actually have zookeeper:

spark.master mesos://zk://192.168.1.172:2181/mesos

running (as you mentioned you only have one master)?
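That ZooKeeper question can be sanity-checked with a plain TCP probe from the driver machine. A minimal sketch, pasteable into the Spark shell (the host and port come from the spark.master URL above; `portOpen` is an ad-hoc helper written here, not a Spark or Mesos API):

```scala
import java.net.{InetSocketAddress, Socket}

// Returns true if something is accepting TCP connections at host:port.
// This only proves the port is open, not that ZooKeeper is healthy.
def portOpen(host: String, port: Int, timeoutMs: Int = 2000): Boolean = {
  val s = new Socket()
  try { s.connect(new InetSocketAddress(host, port), timeoutMs); true }
  catch { case _: Exception => false }
  finally { s.close() }
}

// Against the cluster in question:
// portOpen("192.168.1.172", 2181)  // ZooKeeper port from spark.master
// portOpen("192.168.1.172", 5050)  // Mesos master web UI
```

If the ZooKeeper probe fails, the driver can never resolve the Mesos master, which would explain tasks failing independently of the Spark install on the slave.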

Answer from Adam:

With only 900 MB of memory and spark.driver.memory = 512m, you will be able to launch the scheduler/REPL, but there won't be enough memory left for spark.executor.memory = 512m, so any tasks will fail. Either increasing your VM's memory or reducing the driver/executor memory requirements will help you get around these memory limits.
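The arithmetic behind this answer, as a quick sketch (900 MB is the VM total mentioned in the answer; the other figures are the spark.driver.memory and spark.executor.memory settings from the question):

```scala
// All figures in MB, taken from the question and answer above.
val vmMb       = 900   // total memory available on the VM
val driverMb   = 512   // spark.driver.memory
val executorMb = 512   // spark.executor.memory

val neededMb = driverMb + executorMb  // 1024 MB
println(s"need $neededMb MB but only $vmMb MB available: ${neededMb > vmMb}")
// → need 1024 MB but only 900 MB available: true
```

Halving either setting (e.g. 256m each) would bring the total back under the 900 MB ceiling.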