hpkong

Reputation: 42

Scala Spark read json

My code is as below:

val sparkConf = new SparkConf().setAppName("Json Test").setMaster("local[*]") 
val sc = new SparkContext(sparkConf) 
val sqlContext = new org.apache.spark.sql.SQLContext(sc) 
import sqlContext.implicits._

val path = "/path/log.json" 
val df = sqlContext.read.json(path)
df.show()

Sample json data

{"IFAM":"EQR","KTM":1430006400000,"COL":21,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"}, {"MLrate":"31","Nrout":"0","up":null,"Crate":"2"},{"MLrate":"30","Nrout":"5","up":null,"Crate":"2"},{"MLrate":"34","Nrout":"0","up":null,"Crate":"4"},{"MLrate":"33","Nrout":"0","up":null,"Crate":"2"},{"MLrate":"30","Nrout":"8","up":null,"Crate":"2"}]}

In the Scala IDE an error occurs that I cannot understand:

INFO SharedState: Warehouse path is 'file:/C:/Users/ben53/workspace/Demo/spark-warehouse/'.
Exception in thread "main" java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.hive.orc.DefaultSource could not be instantiated
    at java.util.ServiceLoader.fail(Unknown Source)
    at java.util.ServiceLoader.access$100(Unknown Source)
    at java.util.ServiceLoader$LazyIterator.nextService(Unknown Source)
    at java.util.ServiceLoader$LazyIterator.next(Unknown Source)
    at java.util.ServiceLoader$1.next(Unknown Source)
    at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
    at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
    at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:575)
    at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
    at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:325)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
    at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:298)
    at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:251)
    at com.dataflair.spark.QueryLog$.main(QueryLog.scala:27)
    at com.dataflair.spark.QueryLog.main(QueryLog.scala)
Caused by: java.lang.VerifyError: Bad return type
Exception Details:
  Location:
    org/apache/spark/sql/hive/orc/DefaultSource.createRelation(Lorg/apache/spark/sql/SQLContext;[Ljava/lang/String;Lscala/Option;Lscala/Option;Lscala/collection/immutable/Map;)Lorg/apache/spark/sql/sources/HadoopFsRelation; @35: areturn
  Reason:
    Type 'org/apache/spark/sql/hive/orc/OrcRelation' (current frame, stack[0]) is not assignable to 'org/apache/spark/sql/sources/HadoopFsRelation' (from method signature)
  Current Frame:
    bci: @35
    flags: { }
    locals: { 'org/apache/spark/sql/hive/orc/DefaultSource', 'org/apache/spark/sql/SQLContext', '[Ljava/lang/String;', 'scala/Option', 'scala/Option', 'scala/collection/immutable/Map' }
    stack: { 'org/apache/spark/sql/hive/orc/OrcRelation' }
  Bytecode:
    0x0000000: b200 1c2b c100 1ebb 000e 592a b700 22b6
    0x0000010: 0026 bb00 2859 2c2d b200 2d19 0419 052b
    0x0000020: b700 30b0

    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Unknown Source)
    at java.lang.Class.getConstructor0(Unknown Source)
    at java.lang.Class.newInstance(Unknown Source)
    ... 20 more


Upvotes: 1

Views: 1245

Answers (2)

Sonu

Reputation: 732

The path should be correct, but the provided JSON is invalid. Please correct the sample JSON and try again. You can validate the JSON at https://jsonlint.com/

It shows the invalid portion of the JSON.

That said, I tried the sample and got the output below:

+---+--------------------+----+-------------+
|COL|                DATA|IFAM|          KTM|
+---+--------------------+----+-------------+
| 21|[[2,30,0,null], [...| EQR|1430006400000|
+---+--------------------+----+-------------+

The code used is below:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object Test {

  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("Json Test").setMaster("local[*]")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.implicits._

    val path = "/home/test/Desktop/test.json"
    val df = sqlContext.read.json(path)
    df.show()
  }
}
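
If you also need the elements of the nested DATA array as top-level columns, they can be flattened with explode. A minimal sketch, assuming the schema inferred from the sample JSON (the column names MLrate, Nrout, Crate come from that sample):

```scala
import org.apache.spark.sql.functions.explode

// One row per element of the DATA array, with the
// struct fields pulled out as top-level columns.
val flat = df.select($"IFAM", $"KTM", explode($"DATA").as("d"))
  .select($"IFAM", $"KTM", $"d.MLrate", $"d.Nrout", $"d.Crate")
flat.show()
```

This would yield six rows for the sample record, one per entry in DATA.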

Upvotes: 1

Yatharth Sharma

Reputation: 59

I am pretty sure your path is not right. Check that the file is present at the specified path. The JSON itself is valid.

Upvotes: 0
