Marco

Reputation: 2245

Spark application throws javax.servlet.FilterRegistration

I'm using Scala to create and run a Spark application locally.

My build.sbt:

name : "SparkDemo"
version : "1.0"
scalaVersion : "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0"    exclude("org.apache.hadoop", "hadoop-client")
libraryDependencies += "org.apache.spark" % "spark-sql_2.10" % "1.2.0"
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0"  excludeAll(
ExclusionRule(organization = "org.eclipse.jetty"))
libraryDependencies += "org.apache.hadoop" % "hadoop-mapreduce-client-core" % "2.6.0"
libraryDependencies += "org.apache.hbase" % "hbase-client" % "0.98.4-hadoop2"
libraryDependencies += "org.apache.hbase" % "hbase-server" % "0.98.4-hadoop2"
libraryDependencies += "org.apache.hbase" % "hbase-common" % "0.98.4-hadoop2"
mainClass in Compile := Some("demo.TruckEvents")

During runtime I get the exception:

Exception in thread "main" java.lang.ExceptionInInitializerError during calling of... Caused by: java.lang.SecurityException: class "javax.servlet.FilterRegistration"'s signer information does not match signer information of other classes in the same package

The exception is triggered here:

val sc = new SparkContext("local", "HBaseTest")

I am using the IntelliJ Scala/SBT plugin.

I've seen that other people have had this problem and a solution has been suggested, but that was for a Maven build. Is my sbt wrong here, or is there any other suggestion for how I can solve this problem?

Upvotes: 20

Views: 22452

Answers (7)

Igorock

Reputation: 2901

The following works for me:

libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-core" % sparkVersion.value % "provided",
    "org.apache.spark" %% "spark-sql"  % sparkVersion.value % "provided"
    // ... remaining dependencies ...
).map(_.excludeAll(ExclusionRule(organization = "javax.servlet")))
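Note that sparkVersion.value assumes a setting key defined elsewhere in the build; a minimal sketch of such a definition (the key name and version are assumptions, not part of the original answer) could be:

// Hypothetical setting key backing sparkVersion.value above
lazy val sparkVersion = settingKey[String]("Spark version used across the build")
sparkVersion := "1.2.0"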

Upvotes: 2

Jing He

Reputation: 914

If you are using IntelliJ IDEA, try this:

  1. Right-click the project root folder and choose Open Module Settings.
  2. In the new window, choose Modules in the left navigation column.
  3. In the rightmost column, select the Dependencies tab and find Maven: javax.servlet:servlet-api:2.5.
  4. Finally, move this item to the bottom by pressing ALT+Down.

It should solve this problem.

This method came from http://wpcertification.blogspot.ru/2016/01/spark-error-class-javaxservletfilterreg.html
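If you would rather fix this in the build file than in the IDE, a rough sbt equivalent is to exclude the old servlet artifact where it comes in transitively (a sketch; this assumes hadoop-common is the dependency pulling it in):

// Sketch: keep the transitive javax.servlet 2.5 artifact off the classpath
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0" excludeAll ExclusionRule(organization = "javax.servlet")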

Upvotes: 4

Ravi Macha

Reputation: 717

When you use SBT, the FilterRegistration class comes from servlet-api 3.0, but if you use Jetty or Java 8, the servlet-api 2.5 JAR is automatically added as a transitive dependency.

Fix: the servlet-api-2.5 JAR was the culprit here. I resolved the issue by adding the servlet-api-3.0 JAR to the dependencies instead.
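In sbt terms, that fix might look something like this (a sketch; the 3.0.1 version number is an assumption, not stated in the answer):

// Sketch: declare the servlet 3.x API explicitly so it wins over the 2.5 artifact
libraryDependencies += "javax.servlet" % "javax.servlet-api" % "3.0.1"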

Upvotes: 3

M.Rez

Reputation: 1862

If it happens in IntelliJ IDEA, go to the project settings, find the offending jar in the modules, and remove it. Then run your code with sbt from the shell; it will pull in the jar files itself. After that, go back to IntelliJ and re-run the code there. This somehow fixed the error for me; I am not sure what the problem was, since it no longer shows up.

I also removed the jar file and added "javax.servlet:javax.servlet-api:3.1.0" through Maven by hand, and the error is now gone.
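The sbt form of that same coordinate (assuming you manage it in build.sbt rather than through IntelliJ's Maven dialog) would be:

// Same artifact the answer adds via Maven, expressed as an sbt dependency
libraryDependencies += "javax.servlet" % "javax.servlet-api" % "3.1.0"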

Upvotes: 3

Mansoor Siddiqui

Reputation: 21703

See my answer to a similar question here. The class conflict comes about because HBase depends on org.mortbay.jetty, and Spark depends on org.eclipse.jetty. I was able to resolve the issue by excluding org.mortbay.jetty dependencies from HBase.

If you're pulling in hadoop-common, then you may also need to exclude javax.servlet from hadoop-common. I have a working HBase/Spark setup with my sbt dependencies set up as follows:

val clouderaVersion = "cdh5.2.0"
val hadoopVersion = s"2.5.0-$clouderaVersion"
val hbaseVersion = s"0.98.6-$clouderaVersion"
val sparkVersion = s"1.1.0-$clouderaVersion"

val hadoopCommon = "org.apache.hadoop" % "hadoop-common" % hadoopVersion % "provided" excludeAll ExclusionRule(organization = "javax.servlet")
val hbaseCommon = "org.apache.hbase" % "hbase-common" % hbaseVersion % "provided"
val hbaseClient = "org.apache.hbase" % "hbase-client" % hbaseVersion % "provided"
val hbaseProtocol = "org.apache.hbase" % "hbase-protocol" % hbaseVersion % "provided"
val hbaseHadoop2Compat = "org.apache.hbase" % "hbase-hadoop2-compat" % hbaseVersion % "provided"
val hbaseServer = "org.apache.hbase" % "hbase-server" % hbaseVersion % "provided" excludeAll ExclusionRule(organization = "org.mortbay.jetty")
val sparkCore = "org.apache.spark" %% "spark-core" % sparkVersion % "provided"
val sparkStreaming = "org.apache.spark" %% "spark-streaming" % sparkVersion % "provided"
val sparkStreamingKafka = "org.apache.spark" %% "spark-streaming-kafka" % sparkVersion exclude("org.apache.spark", "spark-streaming_2.10")
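To actually use these vals they still need to be added to libraryDependencies; a minimal sketch of that wiring (the grouping is an assumption) would be:

// Wire the definitions above into the build
libraryDependencies ++= Seq(
  hadoopCommon, hbaseCommon, hbaseClient, hbaseProtocol,
  hbaseHadoop2Compat, hbaseServer,
  sparkCore, sparkStreaming, sparkStreamingKafka
)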

Upvotes: 37

Juan Alonso

Reputation: 11

If you are running inside IntelliJ, check in the project settings whether you have two active modules (one for the project and another for sbt).

This is probably a problem that occurred while importing an existing project.

Upvotes: 1

Pankaj Narang

Reputation: 61

Try running a simple program without the hadoop and hbase dependencies listed below (a minimal sketch of such a program is at the end of this answer):

libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0"     excludeAll(ExclusionRule(organization = "org.eclipse.jetty"))

libraryDependencies += "org.apache.hadoop" % "hadoop-mapreduce-client-core" % "2.6.0"


libraryDependencies += "org.apache.hbase" % "hbase-client" % "0.98.4-hadoop2"

libraryDependencies += "org.apache.hbase" % "hbase-server" % "0.98.4-hadoop2"

libraryDependencies += "org.apache.hbase" % "hbase-common" % "0.98.4-hadoop2"

There may be a mismatch in the dependencies. Also make sure you have the same versions of the jars at compile time and at run time.

Also, is it possible to run the code in the spark-shell to reproduce the problem? That would let me help better.
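For reference, a minimal standalone program of the kind suggested above (a sketch that assumes only spark-core is on the classpath; the object name is made up) could look like this:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical smoke test: runs locally with nothing but spark-core on the classpath
object SimpleSparkCheck {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SimpleSparkCheck").setMaster("local")
    val sc = new SparkContext(conf)
    val sum = sc.parallelize(1 to 100).reduce(_ + _)
    println("Sum of 1..100 = " + sum)
    sc.stop()
  }
}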

Upvotes: 0
