ponthu

Reputation: 311

Sparse Vector pyspark

I'd like to find an efficient way to create sparse vectors in PySpark using DataFrames.

Let's say given the transactional input:

df = spark.createDataFrame([
    (0, "a"),
    (1, "a"),
    (1, "b"),
    (1, "c"),
    (2, "a"),
    (2, "b"),
    (2, "b"),
    (2, "b"),
    (2, "c"),
    (0, "a"),
    (1, "b"),
    (1, "b"),
    (2, "cc"),
    (3, "a"),
    (4, "a"),
    (5, "c")
], ["id", "category"])
+---+--------+
| id|category|
+---+--------+
|  0|       a|
|  1|       a|
|  1|       b|
|  1|       c|
|  2|       a|
|  2|       b|
|  2|       b|
|  2|       b|
|  2|       c|
|  0|       a|
|  1|       b|
|  1|       b|
|  2|      cc|
|  3|       a|
|  4|       a|
|  5|       c|
+---+--------+

In a summed-up format:

df.groupBy(df["id"],df["category"]).count().show()
+---+--------+-----+
| id|category|count|
+---+--------+-----+
|  1|       b|    3|
|  1|       a|    1|
|  1|       c|    1|
|  2|      cc|    1|
|  2|       c|    1|
|  2|       a|    1|
|  2|       b|    3|
|  0|       a|    2|
+---+--------+-----+

My aim is to get this output by id:

+---+-----------------------------------------------+
| id|                                       feature |
+---+-----------------------------------------------+
|  2|SparseVector({a: 1.0, b: 3.0, c: 1.0, cc: 1.0})|
+---+-----------------------------------------------+

Could you please point me in the right direction? With MapReduce in Java this seemed much easier to me.

Upvotes: 6

Views: 15638

Answers (2)

David

Reputation: 11593

If you convert your DataFrame to an RDD, you can follow a MapReduce-like approach with reduceByKey. The only really tricky part here is formatting the data for Spark's SparseVector.

Import packages, create data

from pyspark.ml.feature import StringIndexer
from pyspark.ml.linalg import Vectors
df = spark.createDataFrame([
    (0, "a"),
    (1, "a"),
    (1, "b"),
    (1, "c"),
    (2, "a"),
    (2, "b"),
    (2, "b"),
    (2, "b"),
    (2, "c"),
    (0, "a"),
    (1, "b"),
    (1, "b"),
    (2, "cc"),
    (3, "a"),
    (4, "a"),
    (5, "c")
], ["id", "category"])

Create a numerical representation of the category (needed for sparse vectors)

indexer = StringIndexer(inputCol="category", outputCol="categoryIndex")
df = indexer.fit(df).transform(df) 
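
If you also want to map the numeric indices back to category names later, one small variant of the step above (not part of the original answer) is to keep the fitted model around and read its labels attribute:

# hypothetical variant: keep the fitted StringIndexerModel so that
# labels[i] gives the category name for index i
model = indexer.fit(df)
labels = model.labels
df = model.transform(df)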

Group by index, get counts

df = df.groupBy(df["id"],df["categoryIndex"]).count()

Convert to an RDD and map the data to key-value pairs of id and [categoryIndex, count]

# x['count'] is used because Row.count is a tuple method, not the column value
rdd = df.rdd.map(lambda x: (x.id, [(x.categoryIndex, x['count'])]))

Reduce by key to get key-value pairs of id and the list of all [categoryIndex, count] pairs for that id

rdd = rdd.reduceByKey(lambda a, b: a + b)

Map the data to convert the list of all the [categoryIndex, count] for each id into a sparse vector

# note: the vector size passed here is the number of categories seen for this
# id, so the resulting vectors can have different dimensions across ids
rdd = rdd.map(lambda x: (x[0], Vectors.sparse(len(x[1]), x[1])))

Convert back to a dataframe

finalDf = spark.createDataFrame(rdd, ['id', 'feature'])

Data check

finalDf.take(5)

 [Row(id=0, feature=SparseVector(1, {1: 2.0})),
  Row(id=1, feature=SparseVector(3, {0: 3.0, 1: 1.0, 2: 1.0})),
  Row(id=2, feature=SparseVector(4, {0: 3.0, 1: 1.0, 2: 1.0, 3: 1.0})),
  Row(id=3, feature=SparseVector(1, {1: 1.0})),
  Row(id=4, feature=SparseVector(1, {1: 1.0}))]
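
Note that the vector size above is the number of categories seen for each id, so the dimensions differ across rows. If you want every id to share the same dimension (the total number of distinct categories), one possible tweak of the Vectors.sparse map step, not in the original answer, is:

# hypothetical tweak: replace the Vectors.sparse map step above with this,
# using the global number of distinct categories as the vector size
num_categories = df.select("categoryIndex").distinct().count()
rdd = rdd.map(lambda x: (x[0], Vectors.sparse(num_categories, x[1])))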

Upvotes: 4

zero323

Reputation: 330453

This can be done pretty easily with pivot and VectorAssembler. Replace the aggregation with a pivot:

pivoted = df.groupBy("id").pivot("category").count().na.fill(0)

and assemble:

from pyspark.ml.feature import VectorAssembler

input_cols = [x for x in pivoted.columns if x != "id"]

result = (VectorAssembler(inputCols=input_cols, outputCol="features")
    .transform(pivoted)
    .select("id", "features"))

with the result as follows. VectorAssembler will pick the more efficient representation (dense or sparse) for each row depending on its sparsity:
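
For example, displaying it (the exact call isn't shown in the answer, but something like this prints the table below):

result.show(truncate=False)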

+---+---------------------+
|id |features             |
+---+---------------------+
|0  |(5,[1],[2.0])        |
|5  |(5,[0,3],[5.0,1.0])  |
|1  |[1.0,1.0,3.0,1.0,0.0]|
|3  |(5,[0,1],[3.0,1.0])  |
|2  |[2.0,1.0,3.0,1.0,1.0]|
|4  |(5,[0,1],[4.0,1.0])  |
+---+---------------------+

but of course you can still convert it to a single representation:

from pyspark.ml.linalg import SparseVector, VectorUDT
from pyspark.sql.functions import udf
import numpy as np

def to_sparse(c):
    """Convert a vector column to SparseVector, leaving sparse values as-is."""
    def to_sparse_(v):
        if isinstance(v, SparseVector):
            return v
        # keep only the nonzero entries of the dense vector
        vs = v.toArray()
        nonzero = np.nonzero(vs)[0]
        return SparseVector(v.size, nonzero, vs[nonzero])
    return udf(to_sparse_, VectorUDT())(c)
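
Applying it to the assembled result, for example (the call itself isn't shown in the answer, so this usage is an assumption):

result.withColumn("features", to_sparse("features")).show(truncate=False)
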
+---+-------------------------------------+
|id |features                             |
+---+-------------------------------------+
|0  |(5,[1],[2.0])                        |
|5  |(5,[0,3],[5.0,1.0])                  |
|1  |(5,[0,1,2,3],[1.0,1.0,3.0,1.0])      |
|3  |(5,[0,1],[3.0,1.0])                  |
|2  |(5,[0,1,2,3,4],[2.0,1.0,3.0,1.0,1.0])|
|4  |(5,[0,1],[4.0,1.0])                  |
+---+-------------------------------------+

Upvotes: 17
