Reputation: 20101
I'm trying to group by date in a Spark dataframe and for each group count the unique values of one column:
test.json
{"name":"Yin", "address":1111111, "date":20151122045510}
{"name":"Yin", "address":1111111, "date":20151122045501}
{"name":"Yln", "address":1111111, "date":20151122045500}
{"name":"Yun", "address":1111112, "date":20151122065832}
{"name":"Yan", "address":1111113, "date":20160101003221}
{"name":"Yin", "address":1111111, "date":20160703045231}
{"name":"Yin", "address":1111114, "date":20150419134543}
{"name":"Yen", "address":1111115, "date":20151123174302}
And the code:
import pyspark.sql.functions as func
from pyspark.sql.types import TimestampType
from datetime import datetime
df_y = sqlContext.read.json("/user/test.json")
udf_dt = func.udf(lambda x: datetime.strptime(str(x), '%Y%m%d%H%M%S'), TimestampType())
df = df_y.withColumn('datetime', udf_dt(df_y.date))
df_g = df.groupby(func.hour(df.datetime))
df_g.count().distinct().show()
The result I get with pyspark, grouping by name and counting, is:
df_y.groupby(df_y.name).count().distinct().show()
+----+-----+
|name|count|
+----+-----+
| Yan| 1|
| Yun| 1|
| Yin| 4|
| Yen| 1|
| Yln| 1|
+----+-----+
What I'm expecting is something like this, which I can get with pandas:
df = df_y.toPandas()
df.groupby('name').address.nunique()
Out[51]:
name
Yan 1
Yen 1
Yin 2
Yln 1
Yun 1
How can I count the unique elements of each group by another field, such as address?
Upvotes: 28
Views: 59431
Reputation: 2146
A concise and direct way to group by a field "_c1" and count the distinct values of field "_c2":
import pyspark.sql.functions as F
dg = df.groupBy("_c1").agg(F.countDistinct("_c2"))
Upvotes: 10
Reputation: 20101
There's a way to count the distinct elements of each group using the function countDistinct:
import pyspark.sql.functions as func
df_y = sqlContext.read.json("/user/test.json")
df_y.groupby(df_y.name).agg(func.countDistinct('address')).show()
+----+--------------+
|name|count(address)|
+----+--------------+
| Yan| 1|
| Yun| 1|
| Yin| 2|
| Yen| 1|
| Yln| 1|
+----+--------------+
The docs are available [here](https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/sql/functions.html#countDistinct(org.apache.spark.sql.Column,%20org.apache.spark.sql.Column...)).
Upvotes: 40