xzxz

Reputation: 75

Numpy and Pandas - reshaping with zero padding

I have a CSV with data that I want to import into an ndarray so I can manipulate it. The CSV data is formatted like this.

u i r c
1 1 5 1
2 2 5 1
3 3 1 0
4 4 1 1

I want to get all the elements with c = 1 into one row and the ones with c = 0 into another, like so, reducing the dimensionality.

1  1 1 5 2 2 5 4 4 1
0  3 3 1

However, different u and i values can't share the same column, hence the final result needs zero padding, like this. I want to keep the c column, since it represents a categorical variable, so its value has to stay attached to the data it labels; I don't want to just split the data according to the value of c.

1 1 1 5 2 2 5 0 0 0 4 4 1
0 0 0 0 0 0 0 3 3 1 0 0 0

So far, I'm reading the .csv file with df = pd.read_csv and creating an array with arr = df.to_numpy(). After that, I'm permuting the order of the columns so that the c column comes first, getting this array: [[1 1 1 5] [1 2 2 5] [0 3 3 1] [1 4 4 1]].
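Roughly, that step looks like this (the file name and the column-selection way of permuting are just assumptions on my part):

import numpy as np
import pandas as pd

df = pd.read_csv("data.csv")               # "data.csv" is a placeholder name

# move the categorical column c to the front: c, u, i, r
arr = df[["c", "u", "i", "r"]].to_numpy()
# arr:
# [[1 1 1 5]
#  [1 2 2 5]
#  [0 3 3 1]
#  [1 4 4 1]]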

I then do arr = arr.reshape(2, -1), since there are two possible values for c, and then delete all but the first c column, based on the length of the tuples. So in this case, since there are 4 elements in each tuple and 16 elements in total, I'm doing arr = np.delete(arr, (4,8,12), axis=1).
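Spelled out, the step I'm describing looks something like this (a sketch assuming the reshape gives two rows of eight values, so only the duplicate c at column 4 needs to go):

arr = arr.reshape(2, -1)
# [[1 1 1 5 1 2 2 5]
#  [0 3 3 1 1 4 4 1]]

arr = np.delete(arr, 4, axis=1)   # drop the second c in each row
# [[1 1 1 5 2 2 5]
#  [0 3 3 1 4 4 1]]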

Finally, I'm doing this to pad the array with zeros wherever the u values in the two rows don't match.

nomatch = 0
cols = arr.shape[1]          # number of columns in arr
for j in range(1, cols, 3):  # j steps over the u position of each tuple
    if arr[0][j] != arr[1][j]:
        nomatch += 1
z = np.zeros(nomatch*3, dtype=arr.dtype)
h1 = np.split(arr, [0][0])
new0 = np.concatenate((arr[0],z))
new1 = np.concatenate((z,arr[1])) # problem

final = np.concatenate((new0, new1))

In the line with the comment, the problem is how to concatenate the array while keeping the first element in place. Instead of just appending, I'd like to be able to set a start index and an end index and patch the zeros only into those positions. With concatenate I don't get the expected result, since I'm altering the first element (the head of the array should be untouched).
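For instance, something along these lines is the kind of "patching between indexes" I have in mind (the target width and the slice positions are just illustrative):

row1 = np.zeros(13, dtype=arr.dtype)   # 1 label column + 4 tuples * 3 values
row1[0] = arr[1][0]                    # the c value stays at the head
row1[7:10] = arr[1][1:4]               # drop the (3, 3, 1) tuple into its slot
# row1: [0 0 0 0 0 0 0 3 3 1 0 0 0]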

Additionally, I can't help but wonder if this is a good way to achieve the end result. For example, I tried to pad the array with np.resize() before reshaping, but it doesn't work: when I print the result, the array is the same as before, no matter which dimensions I pass as an argument. A good solution would be one that adapts if there are 3 or more possible values for c, and that could handle multiple c-like columns, such as c1, c2... that would become rows in the table. I appreciate all input and suggestions in advance.
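(For what it's worth, the difference below may be what's at play with resize: np.resize returns a new, resized copy rather than changing its argument, while the ndarray.resize method works in place.)

a = np.arange(4)

b = np.resize(a, (2, 4))   # returns a new array, filled by repeating a
print(a)                   # [0 1 2 3]  -> a itself is unchanged
print(b)                   # [[0 1 2 3]
                           #  [0 1 2 3]]

a.resize((2, 4))           # the method resizes a in place, padding with zeros
print(a)                   # [[0 1 2 3]
                           #  [0 0 0 0]]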

Upvotes: 4

Views: 816

Answers (1)

Paul Panzer

Reputation: 53079

Here is a compact numpy approach:

asnp = df.to_numpy()
(np.bitwise_xor.outer(np.arange(2),asnp[:,3:])*asnp[:,:3]).reshape(2,-1)
# array([[1, 1, 5, 2, 2, 5, 0, 0, 0, 4, 4, 1],
#        [0, 0, 0, 0, 0, 0, 3, 3, 1, 0, 0, 0]])
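A rough unpacking of that one-liner, step by step (the intermediate names are just for illustration):

mask = np.bitwise_xor.outer(np.arange(2), asnp[:, 3:])
# shape (2, 4, 1): mask[0] is 1 exactly where c == 1, mask[1] where c == 0

masked = mask * asnp[:, :3]      # shape (2, 4, 3): tuples of the other class zeroed out
result = masked.reshape(2, -1)   # shape (2, 12): one flattened row per c value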

UPDATE: multiple categories:

Categories must be the last k columns and have column headers starting with "cat". We create a row for each unique combination of categories; this combination is prepended to the row.

Code:

import numpy as np
import pandas as pd
import itertools as it

def spreadcats(df):
    # number of trailing category columns (headers starting with "cat")
    cut = sum(map(str.startswith, df.columns, it.repeat("cat")))
    data = df.to_numpy()
    # unique category combinations and, for each row, the combination it belongs to
    cats, idx = np.unique(data[:, -cut:], axis=0, return_inverse=True)
    m, n, k, _ = data.shape + cats.shape
    # one output row per combination: the combination itself, then m slots of n-cut values
    out = np.zeros((k, cut + (n - cut) * m), int)
    out[:, :cut] = cats
    # scatter each data row into its combination's output row, at its original position
    out[:, cut:].reshape(k, m, n - cut)[idx, np.arange(m)] = data[:, :-cut]
    return out


# 10 sample rows: three data columns in 1..9, cat1 in {0, 1, 2}, cat2 in {0, 1}
x = np.random.randint([1,1,1,0,0],[10,10,10,3,2],(10,5))
df = pd.DataFrame(x,columns=[f"data{i}" for i in "123"] + ["cat1","cat2"])

print(df)
print(spreadcats(df))

Sample run:

   data1  data2  data3  cat1  cat2
0      9      5      1     1     1
1      7      4      2     2     0
2      3      9      8     1     0
3      3      9      1     1     0
4      9      1      7     2     1
5      1      3      7     2     0
6      2      8      2     1     0
7      1      4      9     0     1
8      8      7      3     1     1
9      3      6      9     0     1
[[0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 4 9 0 0 0 3 6 9]
 [1 0 0 0 0 0 0 0 3 9 8 3 9 1 0 0 0 0 0 0 2 8 2 0 0 0 0 0 0 0 0 0]
 [1 1 9 5 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 7 3 0 0 0]
 [2 0 0 0 0 7 4 2 0 0 0 0 0 0 0 0 0 1 3 7 0 0 0 0 0 0 0 0 0 0 0 0]
 [2 1 0 0 0 0 0 0 0 0 0 0 0 0 9 1 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]

Upvotes: 4
