Chintan Shah

Reputation: 51

SparkR Java error

When I try to load data in R with:

df <- read.df(sqlContext, "https://s3-us-west-2.amazonaws.com/sparkr-data/nycflights13.csv", "com.databricks.spark.csv",header=T)

I get a Java error:

Error in invokeJava(isStatic = TRUE, className, methodName, ...) : 
  java.lang.ClassCastException: java.lang.Boolean cannot be cast to java.lang.String
    at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:74)
    at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:39)
    at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:27)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
    at org.apache.spark.sql.api.r.SQLUtils$.loadDF(SQLUtils.scala:156)
    at org.apache.spark.sql.api.r.SQLUtils.loadDF(SQLUtils.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:132)
    at or

Upvotes: 2

Views: 3499

Answers (2)

After many tries, I found out what the problem with read.df() was: the header option. It must be passed as a string, either header="true" or header="false". The extra options are handed to spark-csv as strings, so an R logical such as TRUE crosses to the JVM as a java.lang.Boolean, which cannot be cast to java.lang.String, hence the exception.

> people = read.df(sqlContext, "C:\\Users\\Vivek\\Desktop\\AirPassengers.csv", source = "com.databricks.spark.csv",header=TRUE)
Error in invokeJava(isStatic = TRUE, className, methodName, ...) : 
  java.lang.ClassCastException: java.lang.Boolean cannot be cast to java.lang.String
    at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:81)
    at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:40)
    at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:28)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
    at org.apache.spark.sql.api.r.SQLUtils$.loadDF(SQLUtils.scala:156)
    at org.apache.spark.sql.api.r.SQLUtils.loadDF(SQLUtils.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:132)
    at or
> people = read.df(sqlContext, "C:\\Users\\Vivek\\Desktop\\AirPassengers.csv", source = "com.databricks.spark.csv",header="true")
> head(people)
  Sl_No        time AirPassengers
1     1        1949           112
2     2 1949.083333           118
3     3 1949.166667           132
4     4     1949.25           129
5     5 1949.333333           121
6     6 1949.416667           135
> 
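
If you want to keep passing an R logical at the call site, you can coerce it yourself before it reaches spark-csv. A minimal sketch (the wrapper read_csv_df is my own, not part of SparkR):

# Hypothetical convenience wrapper: coerces an R logical into the
# lowercase string that the spark-csv options expect.
read_csv_df <- function(sqlContext, path, header = TRUE) {
  read.df(sqlContext, path,
          source = "com.databricks.spark.csv",
          header = tolower(as.character(header)))  # TRUE -> "true"
}

people <- read_csv_df(sqlContext, "C:\\Users\\Vivek\\Desktop\\AirPassengers.csv")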

Upvotes: 0

Chintan Shah

Reputation: 51

I finally found the solution to the above. You need to make sure of the following:

You have the Java Development Kit installed (you can download it from Oracle's website). Also download the Hadoop binaries and save them to C:/hadoop, so that the bin folder sits at C:/hadoop/bin.

Set JAVA_HOME as an environment variable (do not include the bin folder in the path), and set HADOOP_HOME as an environment variable (again, without the bin folder).
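
If you prefer to set these from within the R session rather than in the Windows system settings, a minimal sketch (the JDK path below is an assumption; substitute your actual install location):

# Assumed install locations; adjust to your machine.
# Note: point at the install root, not the bin folder.
Sys.setenv(JAVA_HOME = "C:/Program Files/Java/jdk1.8.0_65")
Sys.setenv(HADOOP_HOME = "C:/hadoop")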

Now run the following:

rm(list = ls())

# Set the system environment variables
Sys.setenv(SPARK_HOME = "C:/spark")
Sys.setenv(HADOOP_HOME = "C:/Hadoop")
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))

# Load the SparkR library
library(rJava)
library(SparkR)

# Tell SparkR to pull in the spark-csv package when the shell starts
Sys.setenv('SPARKR_SUBMIT_ARGS' = '"--packages" "com.databricks:spark-csv_2.11:1.2.0" "sparkr-shell"')
Sys.setenv(SPARK_MEM = "1g")

# Create a Spark context and a SQL context
sc <- sparkR.init(master = "local")
sqlContext <- sparkRSQL.init(sc)

Now you should be able to read CSV files:
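
For example, using the AirPassengers file from the earlier answer (adjust the path to your own CSV), and remembering to pass header as a string:

df <- read.df(sqlContext, "C:/Users/Vivek/Desktop/AirPassengers.csv",
              source = "com.databricks.spark.csv", header = "true")
head(df)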

Upvotes: 3
