Reputation: 133
I am using sparklyr for some quick analysis, and I am having some trouble working with timestamps. I have two different data frames: one with rows at 1-minute intervals and another at 3-minute intervals.
First dataset: (1-minute-interval)
id timefrom timeto value
10 "2017-06-06 10:30:00" "2017-06-06 10:31:00" 50
10 "2017-06-06 10:31:00" "2017-06-06 10:32:00" 80
10 "2017-06-06 10:32:00" "2017-06-06 10:33:00" 20
22 "2017-06-06 10:33:00" "2017-06-06 10:34:00" 30
22 "2017-06-06 10:34:00" "2017-06-06 10:35:00" 50
22 "2017-06-06 10:35:00" "2017-06-06 10:36:00" 50
Second dataset: (3-minute-interval)
id timefrom timeto value
10 "2017-06-06 10:30:00" "2017-06-06 10:33:00" 30
22 "2017-06-06 10:33:00" "2017-06-06 10:36:00" 67
32 "2017-06-06 10:36:00" "2017-06-06 10:39:00" 28
14 "2017-06-06 10:39:00" "2017-06-06 10:42:00" 30
27 "2017-06-06 10:42:00" "2017-06-06 10:55:00" 90
To compare the values of these two datasets, I have to aggregate the first one into 3-minute windows and calculate the average of value. Furthermore, I have to find the best-fitting window from the second dataset.
The result should look something like this:
id timefrom timeto value1 value2
10 "2017-06-06 10:30:00" "2017-06-06 10:33:00" 30 50
22 "2017-06-06 10:33:00" "2017-06-06 10:36:00" 67 43.3
Is it possible to achieve this with sparklyr alone? I appreciate your help!
Upvotes: 2
Views: 466
Reputation: 330343
Assuming your data is already parsed:
df1
# # Source: table<df1> [?? x 4]
# # Database: spark_connection
# id timefrom timeto value
# <int> <dttm> <dttm> <int>
# 1 10 2017-06-06 08:30:00 2017-06-06 08:31:00 50
# 2 10 2017-06-06 08:31:00 2017-06-06 08:32:00 80
# 3 10 2017-06-06 08:32:00 2017-06-06 08:33:00 20
# 4 22 2017-06-06 08:33:00 2017-06-06 08:34:00 30
# 5 22 2017-06-06 08:34:00 2017-06-06 08:35:00 50
# 6 22 2017-06-06 08:35:00 2017-06-06 08:36:00 50
df2
# # Source: table<df2> [?? x 4]
# # Database: spark_connection
# id timefrom timeto value
# <int> <dttm> <dttm> <int>
# 1 10 2017-06-06 08:30:00 2017-06-06 08:33:00 30
# 2 22 2017-06-06 08:33:00 2017-06-06 08:36:00 67
# 3 32 2017-06-06 08:36:00 2017-06-06 08:39:00 28
# 4 14 2017-06-06 08:39:00 2017-06-06 08:42:00 30
# 5 27 2017-06-06 08:42:00 2017-06-06 08:55:00 90
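If the timestamps are still plain strings, a minimal sketch of getting to this point could look as follows (this is an assumption rather than part of the question: sc is an existing spark_connect() connection, df1_local / df2_local are hypothetical local data frames, and to_timestamp() requires Spark >= 2.2):
library(dplyr)
library(sparklyr)

# Hypothetical helper: copy a local data frame to Spark and parse the
# string columns into timestamps. to_timestamp() is a Spark SQL function
# that dplyr passes through to Spark unchanged.
parse_times <- function(df, name) {
  copy_to(sc, df, name, overwrite = TRUE) %>%
    mutate(
      timefrom = to_timestamp(timefrom),
      timeto   = to_timestamp(timeto)
    )
}

df1 <- parse_times(df1_local, "df1")
df2 <- parse_times(df2_local, "df2")
With parsed timestamps in place,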
you can use Spark's window function:
exprs <- list(
  "id", "value as value2",
  # window() generates a struct<start: timestamp, end: timestamp> column;
  # we use dot syntax to access its nested fields
  "window.start as timefrom", "window.end as timeto")
df1_agg <- df1 %>%
  mutate(window = window(timefrom, "3 minutes")) %>%
  group_by(id, window) %>%
  summarise(value = avg(value)) %>%
  # As far as I am aware there is no sparklyr syntax
  # for accessing struct fields, so we'll use a simple SQL expression
  spark_dataframe() %>%
  invoke("selectExpr", exprs) %>%
  sdf_register() %>%
  print()
# # Source: table<sparklyr_tmp_472ee8ba244> [?? x 4]
# # Database: spark_connection
#      id value2            timefrom              timeto
#   <int>  <dbl>              <dttm>              <dttm>
# 1    22   43.3 2017-06-06 08:33:00 2017-06-06 08:36:00
# 2    10   50.0 2017-06-06 08:30:00 2017-06-06 08:33:00
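For reference, the same 3-minute aggregation can also be written as a single Spark SQL statement and registered back as a tbl. This is only a sketch, under the assumption that df1 is available as a temporary view named "df1" (which the table<df1> source above suggests):
query <- "
  SELECT id,
         win.start AS timefrom,
         win.end   AS timeto,
         value2
  FROM (
    SELECT id,
           window(timefrom, '3 minutes') AS win,
           avg(value) AS value2
    FROM df1
    GROUP BY id, window(timefrom, '3 minutes')
  ) agg"

# spark_session() exposes the SparkSession; its sql() method returns a
# Spark DataFrame, which sdf_register() turns back into a dplyr tbl
df1_agg_sql <- spark_session(sc) %>%
  invoke("sql", query) %>%
  sdf_register()
The result is equivalent to df1_agg above, so the join below stays the same either way.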
Then you can simply join by the id and timestamp columns:
df2 %>% inner_join(df1_agg, by = c("id", "timefrom", "timeto"))
# # Source: lazy query [?? x 5]
# # Database: spark_connection
# id timefrom timeto value value2
# <int> <dttm> <dttm> <int> <dbl>
# 1 10 2017-06-06 08:30:00 2017-06-06 08:33:00 30 50.0
# 2 22 2017-06-06 08:33:00 2017-06-06 08:36:00 67 43.3
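If windows from df2 that have no matching 1-minute rows should be kept as well, a left join can be used instead; unmatched rows simply come back with a missing value2:
df2 %>% left_join(df1_agg, by = c("id", "timefrom", "timeto"))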
Upvotes: 1