Reputation: 719
My input Spark dataframe is:
Client  Feature1  Feature2
1       10        1
1       15        3
1       20        5
1       25        7
1       30        9
2       1         10
2       2         11
2       3         12
2       4         13
2       5         14
3       100       0
3       150       1
3       200       2
3       250       3
3       300       4
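For reproducibility, the dataframe can be built like this (a minimal sketch, assuming an existing SparkSession named spark):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Same data as the table above: (Client, Feature1, Feature2)
df = spark.createDataFrame(
    [(1, 10, 1), (1, 15, 3), (1, 20, 5), (1, 25, 7), (1, 30, 9),
     (2, 1, 10), (2, 2, 11), (2, 3, 12), (2, 4, 13), (2, 5, 14),
     (3, 100, 0), (3, 150, 1), (3, 200, 2), (3, 250, 3), (3, 300, 4)],
    ['Client', 'Feature1', 'Feature2'],
)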
I want to convert the PySpark dataframe to a 3D NumPy array, with one 2D block per client. The desired output for the data above is:
[[[10, 1],
  [15, 3],
  [20, 5],
  [25, 7],
  [30, 9]],
 [[1, 10],
  [2, 11],
  [3, 12],
  [4, 13],
  [5, 14]],
 [[100, 0],
  [150, 1],
  [200, 2],
  [250, 3],
  [300, 4]]]
Could you please help me with this?
Upvotes: 0
Views: 862
Reputation: 42392
You can do a collect_list aggregation before collecting the dataframe to Python and converting the result to a NumPy array:
import numpy as np
import pyspark.sql.functions as F

# Pack the feature columns into an array per row, collect those arrays
# into one list per client, and order the groups by client id.
a = np.array([
    row[1] for row in
    df.groupBy('Client')
      .agg(F.collect_list(F.array(*df.columns[1:])))
      .orderBy('Client')
      .collect()
])

print(a)
[[[ 10   1]
  [ 15   3]
  [ 20   5]
  [ 25   7]
  [ 30   9]]

 [[  1  10]
  [  2  11]
  [  3  12]
  [  4  13]
  [  5  14]]

 [[100   0]
  [150   1]
  [200   2]
  [250   3]
  [300   4]]]
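One caveat: collect_list does not guarantee the row order within each group after a shuffle. If the order of rows inside each client's block matters, one workaround (a sketch, assuming Feature1 defines the order, as in the example data) is to collect structs and sort them:

import numpy as np
import pyspark.sql.functions as F

# Collect (Feature1, Feature2) structs per client, then sort each list.
# Structs compare field by field, so Feature1 drives the sort order.
rows = (
    df.groupBy('Client')
      .agg(F.sort_array(F.collect_list(F.struct('Feature1', 'Feature2'))).alias('features'))
      .orderBy('Client')
      .collect()
)

a = np.array([[list(s) for s in r['features']] for r in rows])

The resulting array still has shape (3, 5, 2): one 5 x 2 block per client.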
Upvotes: 2