Reputation: 3727
I generate a new dataframe based on the following code:
from pyspark.sql.functions import split, regexp_extract
split_log_df = log_df.select(regexp_extract('value', r'^([^\s]+\s)', 1).alias('host'),
                             regexp_extract('value', r'^.*\[(\d\d/\w{3}/\d{4}:\d{2}:\d{2}:\d{2} -\d{4})]', 1).alias('timestamp'),
                             regexp_extract('value', r'^.*"\w+\s+([^\s]+)\s+HTTP.*"', 1).alias('path'),
                             regexp_extract('value', r'^.*"\s+([^\s]+)', 1).cast('integer').alias('status'),
                             regexp_extract('value', r'^.*\s+(\d+)$', 1).cast('integer').alias('content_size'))
split_log_df.show(10, truncate=False)
I need another column showing the day of the week. What would be the most elegant way to create it? Ideally I would just add a UDF-like field to the select.
Thank you very much.
Update: my question is different from the one linked in the comments; I need to base the calculation on a string column in log_df, not on a timestamp column as in that question, so this is not a duplicate. Thanks.
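One possible sketch of how such a column could be derived without a UDF, assuming the extracted timestamp string follows the dd/MMM/yyyy:HH:mm:ss Z layout matched by the regex above (with_dow is just an illustrative name):
from pyspark.sql.functions import to_timestamp, date_format
# Sketch only: parse the extracted string and derive the weekday name with
# built-in functions; the format string is assumed from the regex above.
with_dow = split_log_df.withColumn(
    'weekday',
    date_format(to_timestamp('timestamp', 'dd/MMM/yyyy:HH:mm:ss Z'), 'EEEE'))
with_dow.show(10, truncate=False)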
Upvotes: 12
Views: 63176
Reputation: 31
## Here is a potential solution using a UDF.
# UDFs are a black box to PySpark: it cannot apply any optimizations to them,
# so you lose the optimizations PySpark performs on DataFrames. Prefer the
# Spark SQL built-in functions, which do benefit from those optimizations, and
# fall back to a UDF only when no built-in function covers the case.
from dateutil.parser import parse
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def findWeekday(dt):
    dt = parse(dt)
    return dt.strftime('%A')  # full weekday name, e.g. 'Monday'

weekDayUDF = udf(lambda x: findWeekday(x), StringType())
df.withColumn('weekday', weekDayUDF('ORDERDATE')).show()
+-------+---------------+--------+---------+
|  SALES|      ORDERDATE|MONTH_ID|  weekday|
+-------+---------------+--------+---------+
| 2871.0| 2/24/2003 0:00|       2|   Monday|
| 2765.9|  5/7/2003 0:00|       5|Wednesday|
|3884.34|  7/1/2003 0:00|       7|  Tuesday|
| 3746.7| 8/25/2003 0:00|       8|   Monday|
|5205.27|10/10/2003 0:00|      10|   Friday|
|3479.76|10/28/2003 0:00|      10|  Tuesday|
|2497.77|11/11/2003 0:00|      11|  Tuesday|
|5512.32|11/18/2003 0:00|      11|  Tuesday|
|2168.54| 12/1/2003 0:00|      12|   Monday|
|4708.44| 1/15/2004 0:00|       1| Thursday|
|3965.66| 2/20/2004 0:00|       2|   Friday|
+-------+---------------+--------+---------+
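As the comments above suggest, the same column can usually be produced with built-in functions alone, which keeps Catalyst optimizations. A minimal sketch, assuming ORDERDATE follows the M/d/yyyy H:mm layout shown in the output:
from pyspark.sql.functions import to_timestamp, date_format
# 'EEEE' yields the full weekday name (e.g. 'Monday'); the format string is an
# assumption based on the ORDERDATE values above.
df.withColumn('weekday', date_format(to_timestamp('ORDERDATE', 'M/d/yyyy H:mm'), 'EEEE')).show()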
Upvotes: 0
Reputation: 181
Since Spark 2.3 you can use the dayofweek function: https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.dayofweek.html
from pyspark.sql.functions import dayofweek
df.withColumn('day_of_week', dayofweek('my_timestamp'))
However, this defines the start of the week as Sunday = 1.
If you instead need Monday = 1, you can resort to a slightly inelegant fudge: either subtract one day before calling dayofweek, or remap the result, for example like this:
from pyspark.sql.functions import dayofweek
df.withColumn('day_of_week', ((dayofweek('my_timestamp')+5)%7)+1)
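The other fudge mentioned above, shifting back by one day before calling dayofweek, would look roughly like this (a sketch; date_sub truncates the timestamp to a date, which does not matter here since only the day is used):
from pyspark.sql.functions import dayofweek, date_sub
# Monday = 1, ..., Sunday = 7: shifting back one day realigns Spark's
# Sunday-based numbering to a Monday-based week.
df.withColumn('day_of_week', dayofweek(date_sub('my_timestamp', 1)))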
Upvotes: 17
Reputation: 59
I did this to get the weekday from a date:
def get_weekday(date):
    import datetime
    import calendar
    month, day, year = (int(x) for x in date.split('/'))
    weekday = datetime.date(year, month, day)
    return calendar.day_name[weekday.weekday()]
spark.udf.register('get_weekday', get_weekday)
Example of usage:
df.createOrReplaceTempView("weekdays")
df = spark.sql("select DateTime, PlayersCount, get_weekday(Date) as Weekday from weekdays")
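Since the function is registered with spark.udf.register, it can also be called from the DataFrame API without a temp view, for instance via expr (a sketch, assuming the same Date column):
from pyspark.sql.functions import expr
df.withColumn('Weekday', expr('get_weekday(Date)')).show()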
Upvotes: 4
Reputation: 1003
I suggest a slightly different method:
from pyspark.sql.functions import date_format
df3 = df.select('capturetime',
                date_format('capturetime', 'u').alias('dow_number'),
                date_format('capturetime', 'E').alias('dow_string'))
df3.show()
It gives ...
+--------------------+----------+----------+
|         capturetime|dow_number|dow_string|
+--------------------+----------+----------+
|2017-06-05 10:05:...|         1|       Mon|
|2017-06-05 10:05:...|         1|       Mon|
|2017-06-05 10:05:...|         1|       Mon|
|2017-06-05 10:05:...|         1|       Mon|
|2017-06-05 10:05:...|         1|       Mon|
|2017-06-05 10:05:...|         1|       Mon|
|2017-06-05 10:05:...|         1|       Mon|
|2017-06-05 10:05:...|         1|       Mon|
+--------------------+----------+----------+
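If the full day name is wanted instead of the abbreviation, the same approach works with the 'EEEE' pattern (a small variation on the answer above):
from pyspark.sql.functions import date_format
df.select('capturetime', date_format('capturetime', 'EEEE').alias('dow_name')).show()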
Upvotes: 45
Reputation: 3727
I finally resolved the question myself; here is the complete solution:
I am not satisfied with my solution, as it feels rather zig-zag; I would appreciate it if anyone could come up with something more elegant. Thank you in advance.
Upvotes: -5