Matt Pollock

Reputation: 1105

Why does sparklyr::spark_apply fail when specifying a numeric schema?

Given a Spark connection sc:

iris_spk <- copy_to(sc, iris)
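
For reproducibility, here is a minimal sketch of how the connection sc above might be created (the master value is an assumption; adjust it for your environment):

library(sparklyr)
library(dplyr)

# Assumption: a local Spark installation; on a cluster you would
# pass the cluster's master URL instead of "local"
sc <- spark_connect(master = "local")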

Next, I'll take a silly example for spark_apply:

iris_spk %>% 
  spark_apply(
    function(x) {
      data.frame(A=c("a", "b", "c"), B=c(1, 2, 3))
    },
    group_by = "Species",
    columns = c("A", "B"),
    packages = FALSE
  )
# # Source:   table<sparklyr_tmp_3e96258604cd> [?? x 3]
# # Database: spark_connection
#   Species    A         B
#   <chr>      <chr> <dbl>
# 1 versicolor a      1.00
# 2 versicolor b      2.00
# 3 versicolor c      3.00
# 4 virginica  a      1.00
# 5 virginica  b      2.00
# 6 virginica  c      3.00
# 7 setosa     a      1.00
# 8 setosa     b      2.00
# 9 setosa     c      3.00

So far, so good. However, https://stackoverflow.com/a/46410425/1785752 suggests that I can improve performance by specifying an output schema instead of just output column names, so I tried:

iris_spk %>% 
  spark_apply(
    function(x) {
      data.frame(A=c("a", "b", "c"), B=c(1, 2, 3))
    },
    group_by = "Species",
    columns = list(A="character",
                   B="numeric"),
    packages = FALSE
  )

But then things go wrong:

Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 26.0 failed 4 times, most recent failure: Lost task 1.3 in stage 26.0 (TID 133, ml-dn38.mitre.org, executor 3): java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: java.lang.String is not a valid external type for schema of double 
  if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, A), StringType), true) AS A#256 
  if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, B), DoubleType) AS B#257 
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:290) 
at org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:581) 
at org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:581) 
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409) at scala.collection.Iterator$$anon$11.next(Iterator.scala:409) 
... and so on

Am I specifying the schema incorrectly?

Upvotes: 1

Views: 355

Answers (1)

Matt Pollock

Reputation: 1105

Ah! I think that the group_by column does not inherit its schema from the input data frame, but needs to be declared along with the rest. I just tried:

iris_spk %>% 
  spark_apply(
    function(x) {
      data.frame(A=c("a", "b", "c"), B=c(1, 2, 3))
    },
    group_by = "Species",
    columns = list(Species="character",
                   A="character",
                   B="numeric"),
    packages = FALSE
  )

which worked (same result as the first attempt above).
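
As a sanity check, sdf_schema() can be used to confirm the column types Spark ended up with. A minimal sketch (result_spk is just a name I'm introducing here for the result of the call above):

result_spk <- iris_spk %>% 
  spark_apply(
    function(x) {
      data.frame(A=c("a", "b", "c"), B=c(1, 2, 3))
    },
    group_by = "Species",
    columns = list(Species="character",
                   A="character",
                   B="numeric"),
    packages = FALSE
  )

# Inspect the Spark-side schema of the result; Species and A should
# come back as StringType and B as DoubleType
sdf_schema(result_spk)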

Upvotes: 3
