kabot

Reputation: 91

databricks-connect, py4j.protocol.Py4JJavaError: An error occurred while calling o342.cache

Connection to Databricks works fine, and working with DataFrames goes smoothly (operations like join, filter, etc.). The problem appears when I call cache() on a DataFrame.

py4j.protocol.Py4JJavaError: An error occurred while calling o342.cache.
: java.io.InvalidClassException: failed to read class descriptor
...
Caused by: java.lang.ClassNotFoundException: org.apache.spark.rdd.RDD$client53442a94a3$$anonfun$mapPartitions$1$$anonfun$apply$23
    at java.lang.ClassLoader.findClass(ClassLoader.java:523)
    at org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.java:35)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.java:40)
    at org.apache.spark.util.ChildFirstURLClassLoader.loadClass(ChildFirstURLClassLoader.java:48)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.spark.util.Utils$.classForName(Utils.scala:257)
    at org.apache.spark.sql.util.ProtoSerializer.org$apache$spark$sql$util$ProtoSerializer$$readResolveClassDescriptor(ProtoSerializer.scala:4316)
    at org.apache.spark.sql.util.ProtoSerializer$$anon$4.readClassDescriptor(ProtoSerializer.scala:4304)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1857)
    ... 71 more

I work with Java 8 as required, and clearing __pycache__ doesn't help. The same code submitted as a job to Databricks works fine. It looks like a local problem at the Python-JVM bridge level, but the Java version (8) and Python version (3.7) are as required. Switching to Java 13 produces much the same message.

Versions: databricks-connect==6.2.0, openjdk version "1.8.0_242", Python 3.7.6

EDIT: The behavior depends on how the DataFrame is created: if the source of the DataFrame is external, it works fine; if the DataFrame is created locally, the error appears.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# works fine: DataFrame read from an external source
df = spark.read.csv("dbfs:/some.csv")
df.cache()

# ERROR raised on the cache() line: DataFrame created locally
df = spark.createDataFrame([("a",), ("b",)])
df.cache()
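Given that observation, a possible workaround is to route locally created data through an external source and cache the re-read DataFrame. This is only a minimal sketch, not from the original post: the DBFS path and column name are assumptions, and it presumes the intermediate write does not hit the same serialization path as cache().

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical scratch location on DBFS; adjust to your workspace.
tmp_path = "dbfs:/tmp/local_rows.parquet"

# Write the locally created rows out, then read them back so the
# DataFrame being cached originates from an external source.
spark.createDataFrame([("a",), ("b",)], ["value"]).write.mode("overwrite").parquet(tmp_path)

df = spark.read.parquet(tmp_path)
df.cache()  # mirrors the "external source" case that works above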

Upvotes: 1

Views: 9153

Answers (1)

Rohit Mishra

Reputation: 571

This is a known issue, and I believe a recent patch fixed it. It was observed on Azure; I am not sure whether you are using Azure or AWS, but it has been resolved. Please check the issue: https://github.com/MicrosoftDocs/azure-docs/issues/52431

Upvotes: 1
