Yohan Liyanage

Reputation: 6990

Co-Locating Spark & Cassandra with Mesos

I have a Spark application that uses Cassandra. I want to set up a co-located deployment so that the Spark nodes have local access to C* to improve performance. In a traditional setup, I would have installed C* manually on my servers and then installed Spark in standalone mode on those same nodes.

But I would like to use Apache Mesos to manage my cluster. Is there any way in Mesos to get this done, so that Mesos will run both C* and Spark on the same nodes?

Upvotes: 2

Views: 467

Answers (2)

rukletsov

Reputation: 1051

I'm not sure Marathon constraints do the job if you use the Spark framework for Mesos, because it is always the framework's scheduler that decides where to launch tasks. You could try launching both C* and Spark jobs on the same nodes via Marathon alone, but that may not be as flexible as using the dedicated frameworks. We have ideas for addressing locality in so-called "Infrastructure frameworks", but this is still WIP.
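That said, on newer Spark releases (1.5+) the Spark-on-Mesos scheduler itself can be steered with the spark.mesos.constraints property, which makes it accept resource offers only from agents whose attributes match. A minimal sketch, assuming the agents hosting C* were started with a cassandra:true attribute; the attribute name, ZooKeeper hosts, class name, and jar are placeholders, not anything standard:

    # Start the Mesos agents that host Cassandra with a matching attribute:
    #   mesos-slave --master=zk://zk1:2181/mesos --attributes="cassandra:true"

    # Tell Spark's Mesos scheduler to accept offers only from those agents:
    spark-submit \
      --master mesos://zk://zk1:2181/mesos \
      --conf spark.mesos.constraints="cassandra:true" \
      --class com.example.SparkCassandraApp \
      spark-cassandra-app.jar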

Upvotes: 2

Yohan Liyanage

Reputation: 6990

I looked into this a bit more, and it now seems to me that constraints in Marathon are the way to do this. In case anyone else is looking for the same thing, the Marathon constraints documentation explains this well.

https://github.com/mesosphere/marathon/blob/master/docs/docs/constraints.md
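For example (a sketch based on that documentation, with placeholder host, resources, and start script): tag the Mesos agents that run C* with an attribute such as cassandra:true, then give the Marathon app a CLUSTER constraint on that attribute so its tasks are placed only on those nodes:

    # Submit an app to Marathon that is pinned to the Cassandra-tagged agents:
    curl -X POST http://marathon-host:8080/v2/apps \
      -H "Content-Type: application/json" \
      -d '{
            "id": "spark-worker",
            "cmd": "./start-spark-worker.sh",
            "cpus": 2,
            "mem": 4096,
            "instances": 3,
            "constraints": [["cassandra", "CLUSTER", "true"]]
          }'

Setting the attribute only on the C* hosts keeps both sets of tasks on the same machines.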

Upvotes: 0
