WestCoastProjects

Reputation: 63269

Apply SQL functions from within a DataFrame

The following works in Spark SQL:

val df = sqlc.sql(
  """select coalesce(optPrefix.optSysIp, '--') as ip, count(1) as cnt
     from llines group by coalesce(optPrefix.optSysIp, '--')"""
).collect

 res39: Array[org.apache.spark.sql.Row] = Array([192.168.1.7,57], [--,43])

How can we apply that coalesce directly from the dataframe?

scala> df.groupBy("coalesce(optPrefix.optSysIp,'--')").count.collect
org.apache.spark.sql.AnalysisException: Cannot resolve column name
 "coalesce(optPrefix.optSysIp,'--')"

I looked at the methods available on the DataFrame but could not discern any way to run this coalesce operation. Any ideas?

Upvotes: 1

Views: 4467

Answers (1)

zero323

Reputation: 330453

You can use the coalesce function:

import org.apache.spark.sql.functions.{coalesce, lit}

case class Foobar(foo: Option[Int], bar: Option[Int])

val df = sc.parallelize(Seq(
  Foobar(Some(1), None), Foobar(None, Some(2)),
  Foobar(Some(3), Some(4)), Foobar(None, None))).toDF

// Take the first non-null value per row, falling back to the literal "--"
df.select(coalesce($"foo", $"bar", lit("--"))).show

// +--------------------+
// |coalesce(foo,bar,--)|
// +--------------------+
// |                   1|
// |                   2|
// |                   3|
// |                  --|
// +--------------------+
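
To reproduce the original group-by query, pass the coalesce expression itself to groupBy rather than its string rendering: the groupBy(String) overload resolves its argument as a plain column name, which is why the attempt in the question fails. A minimal sketch, assuming llines exposes the nested optPrefix.optSysIp column from the question:

import org.apache.spark.sql.functions.{coalesce, lit}

// Group on the Column expression, not on a quoted column name
val ipCounts = llines
  .groupBy(coalesce($"optPrefix.optSysIp", lit("--")).alias("ip"))
  .count()

Equivalently, selectExpr can evaluate the SQL expression before grouping:

llines.selectExpr("coalesce(optPrefix.optSysIp, '--') as ip")
  .groupBy("ip")
  .count()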

Upvotes: 5
