Reputation: 2158
I'm looking for a way to aggregate my data by hour. First, I want to keep only the hour part of my evtTime. My DataFrame looks like this:
Row(access=u'WRITE',
agentHost=u'xxxxxx50.haas.xxxxxx',
cliIP=u'192.000.00.000',
enforcer=u'ranger-acl',
event_count=1,
event_dur_ms=0,
evtTime=u'2017-10-01 23:03:51.337',
id=u'a43d824c-1e53-439b-b374-96b76bacf714',
logType=u'RangerAudit',
policy=699,
reason=u'/project-h/xxxx/xxxx/warehouse/rocq.db/f_crcm_res_temps_retrait',
repoType=1,
reqUser=u'rocqphadm',
resType=u'path',
resource=u'/project-h/xxxx/xxxx/warehouse/rocq.db/f_crcm_res_temps_retrait',
result=1,
seq_num=342976577)
My goal is then to group by reqUser and calculate the sum of event_count. I tried this:
import datetime
from pyspark.sql.functions import udf, col, hour
from pyspark.sql.types import DateType

func = udf(lambda x: datetime.datetime.strptime(x, '%Y-%m-%d %H:%M:%S.%f'), DateType())
df1 = df.withColumn('DATE', func(col('evtTime')))
metrics_DataFrame = (df1
    .groupBy(hour('DATE'), 'reqUser')
    .agg({'event_count': 'sum'})
)
This is the result:
[Row(hour(DATE)=0, reqUser=u'A383914', sum(event_count)=12114),
Row(hour(DATE)=0, reqUser=u'xxxxadm', sum(event_count)=211631),
Row(hour(DATE)=0, reqUser=u'splunk-system-user', sum(event_count)=48),
Row(hour(DATE)=0, reqUser=u'adm', sum(event_count)=7608),
Row(hour(DATE)=0, reqUser=u'X165473', sum(event_count)=2)]
My goal is to get something like this:
[Row(hour(DATE)=2017-10-01 23:00:00, reqUser=u'A383914', sum(event_count)=12114),
Row(hour(DATE)=2017-10-01 22:00:00, reqUser=u'xxxxadm', sum(event_count)=211631),
Row(hour(DATE)=2017-10-01 08:00:00, reqUser=u'splunk-system-user', sum(event_count)=48),
Row(hour(DATE)=2017-10-01 03:00:00, reqUser=u'adm', sum(event_count)=7608),
Row(hour(DATE)=2017-10-01 11:00:00, reqUser=u'X165473', sum(event_count)=2)]
Upvotes: 1
Views: 2910
Reputation: 35229
There are multiple possible solutions. Note that your UDF declares DateType(), which drops the time-of-day, so hour('DATE') is always 0. The simplest fix is to use only the required part as a string:
from pyspark.sql.functions import substring, to_timestamp
df = spark.createDataFrame(["2017-10-01 23:03:51.337"], "string").toDF("evtTime")
df.withColumn("hour", substring("evtTime", 0, 13)).show()
# +--------------------+-------------+
# | evtTime| hour|
# +--------------------+-------------+
# |2017-10-01 23:03:...|2017-10-01 23|
# +--------------------+-------------+
or as a timestamp:
df.withColumn("hour", to_timestamp(substring("evtTime", 0, 13), "yyyy-MM-dd HH")).show()
# +--------------------+-------------------+
# | evtTime| hour|
# +--------------------+-------------------+
# |2017-10-01 23:03:...|2017-10-01 23:00:00|
# +--------------------+-------------------+
You could also use date_format:
from pyspark.sql.functions import date_format, col
df.withColumn("hour", date_format(col("evtTime").cast("timestamp"), "yyyy-MM-dd HH:00")).show()
# +--------------------+----------------+
# | evtTime| hour|
# +--------------------+----------------+
# |2017-10-01 23:03:...|2017-10-01 23:00|
# +--------------------+----------------+
or date_trunc (available since Spark 2.3):
from pyspark.sql.functions import date_trunc
df.withColumn("hour", date_trunc("hour", col("evtTime").cast("timestamp"))).show()
# +--------------------+-------------------+
# | evtTime| hour|
# +--------------------+-------------------+
# |2017-10-01 23:03:...|2017-10-01 23:00:00|
# +--------------------+-------------------+
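Any of these can then be plugged into the grouping from your question. A minimal sketch, assuming df is the original DataFrame with the evtTime, reqUser and event_count columns:
from pyspark.sql.functions import col, date_trunc

metrics = (df
    # truncate the timestamp to the start of the hour
    .withColumn("hour", date_trunc("hour", col("evtTime").cast("timestamp")))
    .groupBy("hour", "reqUser")
    .agg({"event_count": "sum"}))
# Each row now carries the full hourly timestamp, e.g.
# Row(hour=datetime.datetime(2017, 10, 1, 23, 0), reqUser=u'A383914', sum(event_count)=12114)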
Upvotes: 6