Tim

Reputation: 2050

Factorize Spark column

Is it possible to factorize a Spark dataframe column? By factorizing I mean mapping each unique value in the column to an ID, so that equal values get the same ID.

For example, the original dataframe:

+----------+----------------+--------------------+
|      col1|            col2|                col3|
+----------+----------------+--------------------+
|1473490929|4060600988513370|                   A|
|1473492972|4060600988513370|                   A|
|1473509764|4060600988513370|                   B|
|1473513432|4060600988513370|                   C|
|1473513432|4060600988513370|                   A|
+----------+----------------+--------------------+

to the factorized version:

+----------+----------------+--------------------+
|      col1|            col2|                col3|
+----------+----------------+--------------------+
|1473490929|4060600988513370|                   0|
|1473492972|4060600988513370|                   0|
|1473509764|4060600988513370|                   1|
|1473513432|4060600988513370|                   2|
|1473513432|4060600988513370|                   0|
+----------+----------------+--------------------+

In Scala itself this would be fairly simple, but since Spark distributes its dataframes over nodes I'm not sure how to keep a consistent mapping from A->0, B->1, C->2.
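For reference, a local (non-distributed) sketch of what I mean, assuming the values fit in memory:

// Plain Scala: build the mapping from distinct values, then apply it.
val values = Seq("A", "A", "B", "C", "A")
val mapping = values.distinct.zipWithIndex.toMap   // Map(A -> 0, B -> 1, C -> 2)
val factorized = values.map(mapping)               // Seq(0, 0, 1, 2, 0)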

Also, assume the dataframe is pretty big (gigabytes), so loading an entire column into the memory of a single machine might not be possible.

Can it be done?

Upvotes: 2

Views: 1102

Answers (2)

mtoto

Reputation: 24178

You can use StringIndexer to encode letters into indices:

import org.apache.spark.ml.feature.StringIndexer

val indexer = new StringIndexer()
  .setInputCol("col3")
  .setOutputCol("col3Index")

val indexed = indexer.fit(df).transform(df)
indexed.show()

+----------+----------------+----+---------+
|      col1|            col2|col3|col3Index|
+----------+----------------+----+---------+
|1473490929|4060600988513370|   A|      0.0|
|1473492972|4060600988513370|   A|      0.0|
|1473509764|4060600988513370|   B|      1.0|
|1473513432|4060600988513370|   C|      2.0|
|1473513432|4060600988513370|   A|      0.0|
+----------+----------------+----+---------+

Data:

val df = spark.createDataFrame(Seq(
              (1473490929, "4060600988513370", "A"),
              (1473492972, "4060600988513370", "A"),  
              (1473509764, "4060600988513370", "B"),
              (1473513432, "4060600988513370", "C"),
              (1473513432, "4060600988513370", "A"))).toDF("col1", "col2", "col3")

Upvotes: 5

Carlos Vilchez

Reputation: 2804

You can use a user-defined function (UDF).

First, create the mapping you need:

import org.apache.spark.sql.functions.udf

// Map each known letter to a fixed id; anything else falls through to 3.
val updateFunction = udf { (x: String) =>
  x match {
    case "A" => 0
    case "B" => 1
    case "C" => 2
    case _   => 3
  }
}

Now you just apply it to your DataFrame:

df.withColumn("col3", updateFunction(df.col("col3")))
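A hardcoded match only works if you know the distinct values up front. If you don't, one possible alternative (a sketch, untested) is to derive the mapping from the data itself and join it back, which also avoids collecting the column to the driver:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// Build a small lookup of distinct values -> ids, then join it back.
val mapping = df.select("col3").distinct()
  .withColumn("col3Id", row_number().over(Window.orderBy("col3")) - 1)

val factorized = df.join(mapping, Seq("col3"))
  .drop("col3")
  .withColumnRenamed("col3Id", "col3")

The window has no partitioning, which pulls all distinct values into a single partition; that's fine as long as the number of distinct values is small.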

Upvotes: -1
