Reputation: 13520
I am new to PySpark DataFrames and used to work with RDDs before. I have a DataFrame like this:
date        path
2017-01-01  /A/B/C/D
2017-01-01  /X
2017-01-01  /X/Y
I want to convert it to the following:
date        path
2017-01-01  /A/B
2017-01-01  /X
2017-01-01  /X/Y
Basically, I want to get rid of everything after the third /, including it. Previously, with RDDs, I used the following:
from urllib import quote_plus

path_levels = df['path'].split('/')
filtered_path_levels = []
# df_size is the number of path levels, computed elsewhere
for _level in range(min(df_size, 3)):
    # Take only the top two levels of the path
    filtered_path_levels.append(quote_plus(path_levels[_level]))
df['path'] = '/'.join(map(str, filtered_path_levels))
Things are more complicated with PySpark, I would say. Here is what I have so far:
path_levels = split(results_df['path'], '/')
filtered_path_levels = []
for _level in range(min(size(path_levels), 3)):
    # Take only the top two levels of the path
    filtered_path_levels.append(quote_plus(path_levels[_level]))
df['path'] = '/'.join(map(str, filtered_path_levels))
which is giving me the following error:
ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.
Any help regarding this would be much appreciated. Let me know if this needs more information/explanation.
Upvotes: 1
Views: 4105
Reputation: 13520
I resolved my problem using the following code:
from pyspark.sql.functions import split, col, lit, concat

# Element 0 of the split is the empty string before the leading '/'
split_col = split(df['path'], '/')
df = df.withColumn('l1_path', split_col.getItem(1))  # first path level
df = df.withColumn('l2_path', split_col.getItem(2))  # second path level
df = df.withColumn('path', concat(col('l1_path'), lit('/'), col('l2_path')))
df = df.drop('l1_path', 'l2_path')
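One caveat: for a single-level path like /X, getItem(2) returns null, and concat of anything with null is null, so that row's path would be lost. A minimal null-safe sketch, assuming the same df as above, using concat_ws (which skips null arguments):

from pyspark.sql.functions import split, concat, concat_ws, lit

split_col = split(df['path'], '/')
# concat_ws skips nulls, so '/X' stays '/X' instead of becoming null;
# the outer concat restores the leading slash
df = df.withColumn(
    'path',
    concat(lit('/'), concat_ws('/', split_col.getItem(1), split_col.getItem(2)))
)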
Upvotes: 1
Reputation: 35249
Use udf. Python-side logic like min and range cannot operate on Columns directly; truth-testing a Column is exactly what raises that ValueError:
from urllib import quote_plus  # on Python 3: from urllib.parse import quote_plus
from pyspark.sql.functions import lit, udf

@udf
def quote_string_(path, size):
    if path:
        return "/".join(quote_plus(x) for x in path.split("/")[:size])

# split('/') yields a leading empty segment, so size=3 keeps the top two levels
df.withColumn("foo", quote_string_("path", lit(3)))
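Applied to the sample data with size=3, this should yield /A/B, /X, and /X/Y in the foo column; quote_plus additionally percent-encodes any unsafe characters in each segment.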
Upvotes: 1