Reputation: 841
I have a PySpark DataFrame df:
+-------------------+
| timestamplast|
+-------------------+
|2019-08-01 00:00:00|
|2019-08-01 00:01:09|
|2019-08-01 01:00:20|
|2019-08-03 00:00:27|
+-------------------+
I want to add the columns 'year', 'month', 'day' and 'hour' to the existing dataframe using a list comprehension.
In Pandas this would be done as such:
L = ['year', 'month', 'day', 'hour']
date_gen = (getattr(df['timestamplast'].dt, i).rename(i) for i in L)
df = df.join(pd.concat(date_gen, axis=1)) # concatenate results and join to original dataframe
How would this be done in pyspark?
Upvotes: 0
Views: 68
Reputation: 13998
Check the following: Spark SQL has builtin functions named year, month, day and hour, so each name in L can serve both as the function to call and as the alias for the new column:
df.selectExpr("*", *[ '{0}(timestamplast) as {0}'.format(c) for c in L]).show()
+-------------------+----+-----+---+----+
| timestamplast|year|month|day|hour|
+-------------------+----+-----+---+----+
|2019-08-01 00:00:00|2019| 8| 1| 0|
|2019-08-03 00:00:27|2019| 8| 3| 0|
+-------------------+----+-----+---+----+
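The comprehension just generates one SQL expression string per column name, which selectExpr then evaluates. A minimal pure-Python sketch of the strings it produces, assuming L = ['year', 'month', 'day', 'hour'] as in the question:

```python
# Build the SQL expression strings that selectExpr receives.
# Each name in L doubles as the Spark SQL function and the new column's alias.
L = ['year', 'month', 'day', 'hour']
exprs = ['{0}(timestamplast) as {0}'.format(c) for c in L]
print(exprs)
# ['year(timestamplast) as year', 'month(timestamplast) as month',
#  'day(timestamplast) as day', 'hour(timestamplast) as hour']
```

An equivalent DataFrame-API approach (a sketch, assuming an active SparkSession and that each name in L matches a function in pyspark.sql.functions; note day was added in Spark 3.5, so use dayofmonth on older versions):

```python
from pyspark.sql import functions as F

for c in L:
    df = df.withColumn(c, getattr(F, c)('timestamplast'))
```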
Upvotes: 1