Reputation: 63022
For the code below, in which a pattern match has been defined for a Tuple2[BigDecimal, BigDecimal]:
(r.get(0), r.get(1)) match {
  case (r0: BigDecimal, r1: BigDecimal) => (bigDecimalNullToZero(r0), bigDecimalNullToZero(r1))
  case (r0, r1) => {
    error(s"Unable to compare [$r0] and [$r1]"); (0L, 0L)
  }
}
Why is the match not being recognized?
Upvotes: 2
Views: 1233
Reputation: 37822
I'm going to assume that r in this case is of type org.apache.spark.sql.Row. If that's the case, you're simply using the wrong BigDecimal class: you're matching against Scala's built-in scala.math.BigDecimal, while Spark uses java.math.BigDecimal under the hood.
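You can verify this by printing the runtime class of the value (a quick check, assuming r is such a Row):

// for a DecimalType column this prints "java.math.BigDecimal"
println(r.get(0).getClass.getName)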
So, if you match using Java's class, this should work as expected:
(r.get(0), r.get(1)) match {
  case (r0: java.math.BigDecimal, r1: java.math.BigDecimal) => (bigDecimalNullToZero(r0), bigDecimalNullToZero(r1))
  case (r0, r1) => {
    error(s"Unable to compare [$r0] and [$r1]"); (0L, 0L)
  }
}
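As a side note, if you know the columns are DecimalType, Row also has a typed getter, getDecimal, which returns java.math.BigDecimal directly, so no pattern match is needed (a sketch reusing your bigDecimalNullToZero helper; it assumes the columns at indices 0 and 1 are decimal columns):

// getDecimal returns java.math.BigDecimal (or null for null values);
// it throws if the column is not a DecimalType
(bigDecimalNullToZero(r.getDecimal(0)), bigDecimalNullToZero(r.getDecimal(1)))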
I used this full example to verify it:
import org.apache.spark.sql.Row
import spark.implicits._

val df = Seq(
  (BigDecimal(2.1), BigDecimal(2.3)) // using Scala's BigDecimal to build the DataFrame
).toDF("name", "hit_songs")

df.foreach { r: Row =>
  (r.get(0), r.get(1)) match {
    case (s1: BigDecimal, s2: BigDecimal) => println("found Scala BigDecimals")
    case (s1: java.math.BigDecimal, s2: java.math.BigDecimal) => println("found Java BigDecimals")
    case (s1, s2) => println("Not found")
  }
}
// prints: found Java BigDecimals
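If downstream code expects Scala's BigDecimal, you can wrap the matched Java values, since BigDecimal.apply accepts a java.math.BigDecimal (a variation of the loop above):

df.foreach { r: Row =>
  (r.get(0), r.get(1)) match {
    case (s1: java.math.BigDecimal, s2: java.math.BigDecimal) =>
      // BigDecimal(javaBigDecimal) builds the scala.math.BigDecimal wrapper
      println(BigDecimal(s1) + BigDecimal(s2))
    case _ =>
      println("Not found")
  }
}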
P.S. You can usually simplify such "extractions" from a Row using Row's unapplySeq extractor, i.e. by matching on Row(a, b, ...):
df.map {
  case Row(s1: java.math.BigDecimal, s2: java.math.BigDecimal, _*) => (s1, s2)
}
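Bear in mind that this single-case partial function throws a MatchError for any row that doesn't fit the pattern, so you may want an explicit fallback (a sketch reusing the df from above):

val pairs = df.map {
  case Row(s1: java.math.BigDecimal, s2: java.math.BigDecimal, _*) => (s1, s2)
  case row => sys.error(s"Unexpected row: $row") // fail fast on anything else
}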
Upvotes: 6