user2458922

Reputation: 1721

Nested foreach loop over two DataFrames

A nested foreach iteration over two DataFrames throws a NullPointerException:

def nestedDataFrame(leftDF: DataFrame, riteDF: DataFrame): Unit = {
    val leftCols: Array[String] = leftDF.columns
    val riteCols: Array[String] = riteDF.columns

    leftCols.foreach { ltColName =>
        leftDF.select(ltColName).foreach { ltRow =>
            val leftString = ltRow.apply(0).toString
            // Works ... But Same Kind Of Code Below
            riteCols.foreach { rtColName =>
                riteDF.select(rtColName).foreach { rtRow => // Exception
                    val riteString = rtRow.apply(0).toString
                    print(leftString.equals(riteString))
                }
            }
        }
    }
}

EXCEPTION:

java.lang.NullPointerException
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:77)
  at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:3406)
  at org.apache.spark.sql.Dataset.select(Dataset.scala:1334)
  at org.apache.spark.sql.Dataset.select(Dataset.scala:1352)

What could be going wrong and how to fix it?

Upvotes: 2

Views: 1224

Answers (1)

deo

Reputation: 936

leftDF.select(ltColName).foreach { ltRow =>

The line above ships the body of the foreach block to the executors as a task. Inside that task, riteDF.select(rtColName).foreach { rtRow => tries to use the SparkSession on an executor, which is not allowed: the SparkSession is only available on the driver. In the ofRows method, Spark dereferences sparkSession, which is null on the executor:

val qe = sparkSession.sessionState.executePlan(logicalPlan)
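One way to see the constraint, and to avoid it without changing the comparison logic, is to collect the column values to the driver first so that nothing inside a closure ever touches the SparkSession. This is a minimal sketch, assuming the data is small enough to fit in driver memory; the local session, column names, and sample data are made up for illustration:

```scala
import org.apache.spark.sql.SparkSession

object NestedCompare {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("nested-compare")
      .getOrCreate()
    import spark.implicits._

    val leftDF = Seq("a", "b").toDF("l")
    val riteDF = Seq("b", "c").toDF("r")

    // collect() materializes the rows on the driver, so the nested
    // loop below runs entirely driver-side and never needs the session.
    val riteValues = riteDF.select("r").collect().map(_.getString(0))

    leftDF.select("l").collect().foreach { row =>
      val leftString = row.getString(0)
      riteValues.foreach(r => println(leftString == r))
    }

    spark.stop()
  }
}
```

This only works for small data; for anything sizable, a join (below) keeps the comparison distributed.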

You can't use Dataset collections like regular Java/Scala collections; instead, work through the APIs Spark provides. For example, you can join them to correlate the data.


In this case, you can accomplish the comparison in a number of ways. You can join the 2 datasets, for example,

// col() (from org.apache.spark.sql.functions) resolves the variable's value;
// $"ltColName" would look for a column literally named "ltColName".
val joinedDf = leftDF.select(ltColName)
  .join(riteDF.select(rtColName), col(ltColName) === col(rtColName), "inner")

Then analyze joinedDf. You can also intersect() the two datasets.
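The join/intersect approach can be sketched end to end. This is a minimal local example; the JoinCompare object name, column names, and sample data are assumptions, not from the question:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object JoinCompare {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("join-compare")
      .getOrCreate()
    import spark.implicits._

    val leftDF = Seq("a", "b", "c").toDF("l")
    val riteDF = Seq("b", "c", "d").toDF("r")

    // Inner join keeps only the values present in both columns;
    // the comparison runs distributed, on the executors.
    val joinedDf = leftDF.join(riteDF, col("l") === col("r"), "inner")
    joinedDf.show()

    // intersect() is an alternative when both sides have a
    // compatible schema (same number and types of columns).
    val common = leftDF.intersect(riteDF.toDF("l"))
    common.show()

    spark.stop()
  }
}
```

Both forms stay on the driver until an action runs, so no SparkSession call ever executes inside an executor closure.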

Upvotes: 5
