Reputation: 17694
I want to get started with the Play Framework and Apache Spark.
I found the following Activator template to start with: https://www.typesafe.com/activator/template/spark-play
Is this a reasonably good way to integrate Spark into Play, or should I start differently? Most of the activators at https://www.typesafe.com/activator/templates#filter:spark seem to work directly with Akka, or with Akka & Spray. Is an explicit Akka integration needed at all, given that Play from 2.4 on is based on Akka HTTP?
The dependencies are managed in a non-standard way, i.e. not using the standard build.sbt layout. Is this "recommended" for Spark integration? What are the benefits of it?
The Scala dependency file is accompanied by the following build.sbt:
import play.sbt.PlayImport._
import play.sbt.routes.RoutesKeys._

// Version and Dependencies are defined in Scala build files under project/
name := """sparkTest"""

organization := "ch.alexmass"

version := "0.0.1"

scalaVersion := Version.scala

lazy val root = (project in file(".")).enablePlugins(PlayScala)

scalaSource in Compile <<= baseDirectory / "src/scala"

libraryDependencies ++= Dependencies.sparkAkkaHadoop

dependencyOverrides ++= Set(
  "com.fasterxml.jackson.core" % "jackson-databind" % "2.4.4"
)

releaseSettings

scalariformSettings

ivyScala := ivyScala.value map { _.copy(overrideScalaVersion = true) }

//routesGenerator := InjectedRoutesGenerator

fork in run := true
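For comparison, this is roughly what I would expect a "standard" single-file build.sbt to look like (the version numbers below are only my own guesses for Play 2.4 / Scala 2.11, not taken from the template, and the Play sbt-plugin is assumed to be declared in project/plugins.sbt):

name := """sparkTest"""

organization := "ch.alexmass"

version := "0.0.1"

scalaVersion := "2.11.7"

lazy val root = (project in file(".")).enablePlugins(PlayScala)

// All library versions spelled out inline instead of in project/Dependencies.scala
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.5.2",
  "com.typesafe.akka" %% "akka-actor" % "2.3.14"
)

// The template pins jackson-databind, presumably to resolve a Spark/Play conflict,
// so the override is kept here as well
dependencyOverrides ++= Set(
  "com.fasterxml.jackson.core" % "jackson-databind" % "2.4.4"
)

fork in run := true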
Thank you.
Upvotes: 3
Views: 1338
Reputation: 17694
During the last couple of days I have learnt a lot, so I will try to answer this question myself.
Play is full-stack and based on Akka. I think for a prototype / UI app this integration is fine.
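As a rough sketch of what that integration can look like (names like HomeController, the app name and the local[*] master are just placeholders, not from the template), a single SparkContext can be shared and called from a normal Play action:

package controllers

import org.apache.spark.{SparkConf, SparkContext}
import play.api.mvc.{Action, Controller}

// Rough sketch only: one local SparkContext shared by the whole application.
// The app name and local[*] master are placeholder settings.
object Spark {
  lazy val sc: SparkContext = {
    val conf = new SparkConf().setAppName("sparkTest").setMaster("local[*]")
    new SparkContext(conf)
  }
}

object HomeController extends Controller {

  // Runs a trivial Spark job and returns the result as plain text.
  def sum = Action {
    val total = Spark.sc.parallelize(1 to 1000).sum()
    Ok(s"Sum computed by Spark: $total")
  }
}

Together with a route such as GET /sum controllers.HomeController.sum in conf/routes this is enough for a prototype; a real app would manage the SparkContext lifecycle more carefully (e.g. stop it on shutdown).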
For a REST-only API, Akka alone with something like Spray would be better and faster. However, Spray will be deprecated in favour of Akka HTTP.
This is up to personal preference. However, it may sometimes provide a clearer structure for the dependencies, as the sketch below tries to show.
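For illustration only (this is a sketch along the lines of what the template does, not its actual files, and the versions are made up), the build.sbt above can stay small because project/Dependencies.scala carries the version bookkeeping:

// project/Dependencies.scala -- sketch, versions are illustrative
import sbt._

object Version {
  val scala  = "2.11.7"
  val spark  = "1.5.2"
  val akka   = "2.3.14"
  val hadoop = "2.6.0"
}

object Dependencies {
  val sparkCore    = "org.apache.spark"  %% "spark-core"    % Version.spark
  val akkaActor    = "com.typesafe.akka" %% "akka-actor"    % Version.akka
  val hadoopClient = "org.apache.hadoop" %  "hadoop-client" % Version.hadoop

  // One named bundle that build.sbt can reference as a whole
  val sparkAkkaHadoop = Seq(sparkCore, akkaActor, hadoopClient)
}

The benefit is mostly readability: version bumps happen in one file, and build.sbt only names the bundle.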
Upvotes: 1