Wednesday, March 30, 2016

Setup Mesos Cluster in CentOS VMs

This post summarizes my experience setting up a Mesos cluster for Spark and Hadoop deployment.

First, set up a ZooKeeper cluster running on three nodes with hostnames zoo01, zoo02, and zoo03 (link), and prepare 3 Mesos master-eligible nodes and 4 Mesos slave nodes (refer to this post on how to set up CentOS VMs for a cluster: link).

1. Disable Firewall and SELinux

1.1. Disable firewalld

Run the following commands to stop and disable firewalld and flush the iptables rules:
`systemctl stop firewalld.service
`systemctl disable firewalld.service
`iptables -F

1.2. Disable SELinux

Run the following command to open /etc/selinux/config:

`vi /etc/selinux/config

Edit the file to set the following line (SELinux will then stay disabled after a reboot):

SELINUX=disabled

Run the following command to turn off SELinux immediately:

`setenforce 0

2. Start the zookeepers

Start the ZooKeeper service on zoo01, zoo02, and zoo03.
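Before moving on, it is worth confirming that every ZooKeeper node actually answers. A minimal sketch (not from the original post): probe each node with the "ruok" four-letter command, assuming `nc` is installed and the default client port 2181; `zk_ok` is a hypothetical helper name.

```shell
# Check whether a ZooKeeper node replies "imok" to the "ruok" probe.
zk_ok() {
  [ "$(printf 'ruok' | nc -w 2 "$1" 2181 2>/dev/null)" = "imok" ]
}

# Probe all three ensemble members from the post.
for h in zoo01 zoo02 zoo03; do
  if zk_ok "$h"; then echo "$h: ok"; else echo "$h: not responding"; fi
done
```

If any node reports "not responding", fix ZooKeeper before installing Mesos, since the masters cannot elect a leader without a working ensemble.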

2.1. Install Mesos

Run the following commands to install the JDK and Maven:

`yum install -y java-1.8.0-openjdk-devel
`yum install -y maven

Add the following line to the /root/.bashrc file:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk

Run the following commands to add the Mesosphere package repository and install Mesos, Marathon, and Chronos:

# add the Mesosphere repository RPM
`rpm -Uvh
# to install master and/or slave
`yum -y install mesos
# to install marathon
`yum -y install marathon
# to install chronos
`yum -y install chronos.x86_64

2.2. Configure Master with Zookeeper

Edit /etc/mesos/zk on every master and slave so that it points at the ZooKeeper ensemble; with the hostnames above (and assuming the default client port 2181) it would contain:

zk://zoo01:2181,zoo02:2181,zoo03:2181/mesos

3. Start Mesos

# Mesos Master
`systemctl enable mesos-master.service
`systemctl start mesos-master.service
`systemctl mask mesos-slave.service
`systemctl stop mesos-slave.service

# Marathon
`systemctl enable marathon.service
`systemctl start marathon.service

# Mesos Slave (on the slave nodes, mask the master instead)
`systemctl mask mesos-master.service
`systemctl stop mesos-master.service
`systemctl enable mesos-slave.service
`systemctl start mesos-slave.service

# Chronos
`systemctl enable chronos.service
`systemctl start chronos.service

Personal note: I noticed that if the Mesos services are started immediately after the ZooKeeper nodes come up, Mesos may fail to detect the master through ZooKeeper. The way to solve this is to restart the Mesos services on each node.
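Instead of blindly restarting, the startup race can be handled by polling until a leader shows up. A minimal sketch, not from the original post: `wait_for_leader` is a hypothetical helper that hits the master's /master/state endpoint (hostname mesos01 and port 5050 as used elsewhere here; assumes curl).

```shell
MASTER_URL="http://mesos01:5050/master/state"

# Poll the master state endpoint until it reports a leader,
# retrying $1 times with $2 seconds between attempts.
wait_for_leader() {
  tries="${1:-12}"; delay="${2:-5}"; i=1
  while [ "$i" -le "$tries" ]; do
    if curl -sf --max-time 2 "$MASTER_URL" | grep -q '"leader"'; then
      echo "leader elected"
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  echo "no leader after $tries attempts" >&2
  return 1
}

# usage: wait_for_leader 12 5 || systemctl restart mesos-master.service
```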

4. Web Interface

# Mesos master UI
`curl http://mesos01:5050

# Marathon UI
`curl http://mesos01:8080

5. Command line Interface

Get the current leading master from the CLI:
`mesos-resolve `cat /etc/mesos/zk`

Killing a framework:

`curl -XPOST http://mesos02:5050/api/v1/scheduler -d '{ "framework_id": { "value": "[framework_id]" }, "type": "TEARDOWN"}' -H Content-Type:application/json
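The teardown call can be wrapped in a small script so the JSON payload is built in one place and can be inspected before it is POSTed. A sketch; `teardown_payload` and `teardown_framework` are hypothetical names:

```shell
# Build the TEARDOWN JSON payload for a given framework ID.
teardown_payload() {
  printf '{ "framework_id": { "value": "%s" }, "type": "TEARDOWN" }' "$1"
}

# POST the payload to a master's v1 scheduler endpoint.
teardown_framework() {
  master="$1"; fw="$2"
  curl -XPOST "http://${master}:5050/api/v1/scheduler" \
       -H 'Content-Type: application/json' \
       -d "$(teardown_payload "$fw")"
}

# usage: teardown_framework mesos02 "<framework_id>"
```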

6. Submit a Spark job via the Mesos cluster

To run a Spark application on the Mesos cluster, first set up and configure HDFS (link) and Spark (link).

Put the Spark binary package into HDFS (run the following commands on the Hadoop namenode, centos01):

`hadoop/bin/hdfs dfs -mkdir /pkg
`hadoop/bin/hdfs dfs -put spark-1.6.0-bin-hadoop2.6.tgz /pkg/spark-1.6.0-bin-hadoop2.6.tgz

Run the following command to edit the Spark environment file in spark/conf:

`vi spark/conf/

In that file, add the following lines:

export MESOS_NATIVE_LIBRARY=/usr/local/lib/
export SPARK_EXECUTOR_URI=hdfs://centos01:9000/pkg/spark-1.6.0-bin-hadoop2.6.tgz

Here centos01 is the Hadoop namenode.

To submit a spark job, run the following command:

`spark/bin/spark-submit --class com.tutorials.spark.WordCountDriver --master mesos://mesos01:5050 word-count.jar

Important: mesos01 must be the current leading master node; otherwise a command such as "spark-shell --master mesos://mesos01:5050" will cause spark-shell to hang on the line "No credentials provided. Attempting to register without authentication". The fix is to find out which node is the active leading master by running "mesos-resolve `cat /etc/mesos/zk`" and then launch the Spark shell with that leader in the --master option instead.
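The leader lookup and the submit can be combined in one wrapper, using the paths and class name from this post. A sketch: `mesos_master_url` is a hypothetical helper, and the block is guarded so it is a no-op where mesos-resolve is not installed.

```shell
# Turn a host:port pair into a mesos:// master URL.
mesos_master_url() { printf 'mesos://%s\n' "$1"; }

if command -v mesos-resolve >/dev/null 2>&1; then
  # Resolve the current leading master from the ZooKeeper config,
  # then submit the job against that leader rather than a fixed host.
  leader=$(mesos-resolve "$(cat /etc/mesos/zk)")
  spark/bin/spark-submit \
    --class com.tutorials.spark.WordCountDriver \
    --master "$(mesos_master_url "$leader")" \
    word-count.jar
fi
```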
