Reputation: 21
In PySpark, suppose I have a DataFrame with columns named 'a1', 'a2', 'a3', ..., 'a99'. How do I apply an operation to each of them to create new columns with new names dynamically? For example, how do I get new columns such as sum('a1') as 'total_a1', ..., sum('a99') as 'total_a99'?
Upvotes: 1
Views: 835
Reputation: 43504
You can use a list comprehension with alias.
To return only the new columns:
import pyspark.sql.functions as f
df1 = df.select(*[f.sum(c).alias("total_"+c) for c in df.columns])
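As a quick illustration (the data here is hypothetical, and spark is assumed to be an active SparkSession), this produces a single aggregated row:

df = spark.createDataFrame([(1, 10), (2, 20)], ["a1", "a2"])
df.select(*[f.sum(c).alias("total_"+c) for c in df.columns]).show()
#+--------+--------+
#|total_a1|total_a2|
#+--------+--------+
#|       3|      30|
#+--------+--------+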
And if you wanted to keep the existing columns as well, note that you can't mix aggregate and non-aggregate expressions in a single select. Instead, compute each sum over a window spanning the whole DataFrame:

from pyspark.sql import Window

df2 = df.select("*", *[f.sum(c).over(Window.partitionBy()).alias("total_"+c) for c in df.columns])
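On the same hypothetical data, this keeps every original row and repeats the totals across them:

df2.show()
#+---+---+--------+--------+
#| a1| a2|total_a1|total_a2|
#+---+---+--------+--------+
#|  1| 10|       3|      30|
#|  2| 20|       3|      30|
#+---+---+--------+--------+

An empty partitionBy() puts all rows in one partition, so this is fine for small data but can become a memory bottleneck on large DataFrames.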
Upvotes: 1