Eran Moshe

Reputation: 3208

python filter 2d array by a chunk of data

import numpy as np

data = np.array([
    [20,  0,  5,  1],
    [20,  0,  5,  1],
    [20,  0,  5,  0],
    [20,  1,  5,  0],
    [20,  1,  5,  0],
    [20,  2,  5,  1],
    [20,  3,  5,  0],
    [20,  3,  5,  0],
    [20,  3,  5,  1],
    [20,  4,  5,  0],
    [20,  4,  5,  0],
    [20,  4,  5,  0]
])

I have the 2D array shown above. Let's call the fields a, b, c, d in that order, where column b acts as an id. I want to delete all rows that don't have at least one appearance of the number 1 in column d among the rows sharing the same value in column b (the same id), so after filtering I will have the following result:

[[20  0  5  1]
 [20  0  5  1]
 [20  0  5  0]
 [20  2  5  1]
 [20  3  5  0]
 [20  3  5  0]
 [20  3  5  1]]

All rows with b = 1 or b = 4 have been deleted from the data.

To sum up, since I see answers that don't fit: we look at chunks of data grouped by the b column. If a complete chunk doesn't contain even one appearance of the number 1 in column d, we delete all the rows of that b value. In the example above, the chunks with b = 1 and b = 4 ("id" = 1 and "id" = 4) have zero appearances of the number 1 in column d, which is why they get deleted from the data.
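To make the rule concrete, here is a minimal, unvectorized sketch of the intended filtering (illustrative only; the grouping helper and names are mine, not part of the question):

from collections import defaultdict

groups = defaultdict(list)
for row in data:
    groups[row[1]].append(row)  # group rows by column b (the id)

# keep a chunk only if at least one of its rows has d == 1
kept = [row for rows in groups.values()
        if any(r[3] == 1 for r in rows)
        for row in rows]

The answers below vectorize this same idea.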

Upvotes: 4

Views: 7332

Answers (5)

Divakar

Reputation: 221714

Generic approach: Here's an approach using np.unique and np.bincount to solve the generic case -

# map each b value to a compact tag in the range 0..len(unq)-1
unq, tags = np.unique(data[:,1], return_inverse=True)
# per-tag sum of the (d == 1) mask; a nonzero sum marks a good id
goodIDs = np.flatnonzero(np.bincount(tags, data[:,3]==1) >= 1)
out = data[np.in1d(tags, goodIDs)]
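The key trick is the weights argument of np.bincount: passing the d == 1 mask as weights sums the mask per tag, i.e. counts how many rows in each chunk have d equal to 1. A tiny standalone illustration (the values here are made up for the demo):

import numpy as np

b = np.array([0, 0, 1, 2, 2])
d = np.array([1, 0, 0, 0, 1])
# sums the (d == 1) mask per b value
print(np.bincount(b, weights=(d == 1)))  # -> [1. 0. 1.]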

Sample run -

In [15]: data
Out[15]: 
array([[20, 10,  5,  1],
       [20, 73,  5,  0],
       [20, 73,  5,  1],
       [20, 31,  5,  0],
       [20, 10,  5,  1],
       [20, 10,  5,  0],
       [20, 42,  5,  1],
       [20, 54,  5,  0],
       [20, 73,  5,  0],
       [20, 54,  5,  0],
       [20, 54,  5,  0],
       [20, 31,  5,  0]])

In [16]: out
Out[16]: 
array([[20, 10,  5,  1],
       [20, 73,  5,  0],
       [20, 73,  5,  1],
       [20, 10,  5,  1],
       [20, 10,  5,  0],
       [20, 42,  5,  1],
       [20, 73,  5,  0]])

Specific case approach: If the second column is always sorted and has sequential numbers starting from 0, we can use a simplified version, like so -

# b values double as bin indices, so no remapping through np.unique is needed
goodIDs = np.flatnonzero(np.bincount(data[:,1], data[:,3]==1) >= 1)
out = data[np.in1d(data[:,1], goodIDs)]

Sample run -

In [44]: data
Out[44]: 
array([[20,  0,  5,  1],
       [20,  0,  5,  1],
       [20,  0,  5,  0],
       [20,  1,  5,  0],
       [20,  1,  5,  0],
       [20,  2,  5,  1],
       [20,  3,  5,  0],
       [20,  3,  5,  0],
       [20,  3,  5,  1],
       [20,  4,  5,  0],
       [20,  4,  5,  0],
       [20,  4,  5,  0]])

In [45]: out
Out[45]: 
array([[20,  0,  5,  1],
       [20,  0,  5,  1],
       [20,  0,  5,  0],
       [20,  2,  5,  1],
       [20,  3,  5,  0],
       [20,  3,  5,  0],
       [20,  3,  5,  1]])

Also, if data[:,3] only ever contains ones and zeros, we can use data[:,3] directly in place of data[:,3]==1 in the codes listed above.
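Under that assumption, the simplified version becomes:

# assumes data[:,3] holds only 0s and 1s
goodIDs = np.flatnonzero(np.bincount(data[:,1], data[:,3]) >= 1)
out = data[np.in1d(data[:,1], goodIDs)]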


Benchmarking

Let's benchmark the vectorized approaches on the specific case for a larger array -

In [69]: def logical_or_based(data): #@ Eric's soln
    ...:     b_vals = data[:,1]
    ...:     d_vals = data[:,3]
    ...:     is_ok = np.zeros(np.max(b_vals) + 1, dtype=np.bool_)
    ...:     np.logical_or.at(is_ok, b_vals, d_vals)
    ...:     return is_ok[b_vals]
    ...: 
    ...: def in1d_based(data):
    ...:     goodIDs = np.flatnonzero(np.bincount(data[:,1],data[:,3])!=0)
    ...:     out = np.in1d(data[:,1],goodIDs)
    ...:     return out
    ...: 

In [70]: # Setup input
    ...: data = np.random.randint(0,100,(10000,4))
    ...: data[:,1] = np.sort(np.random.randint(0,100,(10000)))
    ...: data[:,3] = np.random.randint(0,2,(10000))
    ...: 

In [71]: %timeit logical_or_based(data) #@ Eric's soln
1000 loops, best of 3: 1.44 ms per loop

In [72]: %timeit in1d_based(data)
1000 loops, best of 3: 528 µs per loop

Upvotes: 3

Eric

Reputation: 97681

Let's assume the following:

  • b >= 0
  • b is an integer
  • b is fairly dense, i.e. max(b) ~= len(unique(b))

Here's a solution using np.ufunc.at:

# unpack for clarity - this costs nothing in numpy
b_vals = data[:,1]
d_vals = data[:,3]

# build an array indexed by b values
is_ok = np.zeros(np.max(b_vals) + 1, dtype=np.bool_)
np.logical_or.at(is_ok, b_vals, d_vals)
# is_ok == array([ True, False,  True,  True, False], dtype=bool)

# take the rows which have a b value that was deemed OK
result = data[is_ok[b_vals]]

np.logical_or.at(is_ok, b_vals, d_vals) is a more efficient version of:

for idx, val in zip(b_vals, d_vals):
    is_ok[idx] = np.logical_or(is_ok[idx], val)
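For completeness, a self-contained run on the question's sample data (the array literal is copied from the question) would look like this:

import numpy as np

data = np.array([
    [20, 0, 5, 1], [20, 0, 5, 1], [20, 0, 5, 0],
    [20, 1, 5, 0], [20, 1, 5, 0],
    [20, 2, 5, 1],
    [20, 3, 5, 0], [20, 3, 5, 0], [20, 3, 5, 1],
    [20, 4, 5, 0], [20, 4, 5, 0], [20, 4, 5, 0],
])

b_vals = data[:, 1]
d_vals = data[:, 3]

is_ok = np.zeros(np.max(b_vals) + 1, dtype=np.bool_)
np.logical_or.at(is_ok, b_vals, d_vals)

# keeps only the b = 0, 2 and 3 chunks, matching the expected output
print(data[is_ok[b_vals]])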

Upvotes: 1

Eelco Hoogendoorn

Reputation: 10769

Untested since in a hurry, but this should work:

import numpy_indexed as npi
g = npi.group_by(data[:, 1])
ids, valid = g.any(data[:, 3])
result = data[valid[g.inverse]]
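Note that this assumes the numpy_indexed package is installed (pip install numpy-indexed): group_by groups the rows by the id column, any reduces column d per group, and indexing the per-group flags with g.inverse broadcasts them back to one flag per row.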

Upvotes: 1

Kennet Celeste

Reputation: 4771

Code:

import numpy as np

my_list = [[20,0,5,1],
    [20,0,5,1],
    [20,0,5,0],
    [20,1,5,0],
    [20,1,5,0],
    [20,2,5,1],
    [20,3,5,0],
    [20,3,5,0],
    [20,3,5,1],
    [20,4,5,0],
    [20,4,5,0],
    [20,4,5,0]]

all_ids = np.array(my_list)[:, 1]
unique_ids = np.unique(all_ids)
# first row index of each chunk (relies on ids being sorted and sequential from 0)
indices = [np.where(all_ids == ui)[0][0] for ui in unique_ids]

final = []
for uid in unique_ids:
    try:
        tmp_group = my_list[indices[uid]:indices[uid + 1]]
    except IndexError:
        # the last chunk runs to the end of the list
        tmp_group = my_list[indices[uid]:]
    if 1 in np.array(tmp_group)[:, 3]:
        final.extend(tmp_group)

print(np.array(final))

result:

[[20  0  5  1]
 [20  0  5  1]
 [20  0  5  0]
 [20  2  5  1]
 [20  3  5  0]
 [20  3  5  0]
 [20  3  5  1]]

Upvotes: 1

Patrick Haugh
Patrick Haugh

Reputation: 61052

This gets rid of all rows with 1 in the second position:

[sublist for sublist in list_ if sublist[1] != 1]

This gets rid of all rows with 1 in the second position unless the fourth position is also 1:

[sublist for sublist in list_ if not (sublist[1] == 1 and sublist[3] != 1) ]

Upvotes: 1
