Reputation: 287
I am trying to find the best way to group rows that share the same ID (the first column).
My best guess:
np.array([test[test[:,0] == ID] for ID in List_IDs])
The result is an array of arrays, one 2-D block per ID:
[array([['ID_1', 'col1', 'col2', ..., 'coln'],
        ['ID_1', 'col1', 'col2', ..., 'coln'],
        ...,
        ['ID_1', 'col1', 'col2', ..., 'coln']], dtype='|S32'),
 array([['ID_2', 'col1', 'col2', ..., 'coln'],
        ['ID_2', 'col1', 'col2', ..., 'coln'],
        ...,
        ['ID_2', 'col1', 'col2', ..., 'coln']], dtype='|S32'),
 ...
 array([['ID_k', 'col1', 'col2', ..., 'coln'],
        ['ID_k', 'col1', 'col2', ..., 'coln'],
        ...,
        ['ID_k', 'col1', 'col2', ..., 'coln']], dtype='|S32')]
Can anyone suggest something more efficient?
Reminder: the test array is huge and the rows are not ordered by ID.
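For reference, a minimal self-contained sketch of this attempt (the sample data is made up, List_IDs is assumed to be the unique values of the first column, and the outer np.array(...) is dropped here because wrapping unequal-sized groups that way is deprecated in recent NumPy versions):

import numpy as np

# Toy stand-in for the real (huge) test array; the first column holds the IDs.
test = np.array([['ID_1', 'col1', 'col2'],
                 ['ID_2', 'col1', 'col2'],
                 ['ID_1', 'col1', 'col2']], dtype='|S32')

# Unique IDs taken from the first column.
List_IDs = np.unique(test[:, 0])

# Current approach: one full boolean scan of test per ID.
grouped = [test[test[:, 0] == ID] for ID in List_IDs]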
Upvotes: 2
Views: 1042
Reputation: 221644
I am assuming List_IDs is a list of all unique IDs from the first column. With that assumption, here's a NumPy-based solution -
import numpy as np

# Sort the input array test by its first column (the IDs)
test_sorted = test[test[:, 0].argsort()]
# Convert the string IDs to numeric IDs
_, numeric_ID = np.unique(test_sorted[:, 0], return_inverse=True)
# Get the indices where the ID changes
_, cut_idx = np.unique(numeric_ID, return_index=True)
# Use those indices to split the sorted array into sub-arrays sharing a common ID
out = np.split(test_sorted, cut_idx)[1:]
Sample run -
In [305]: test
Out[305]:
array([['A', 'A', 'B', 'E', 'A'],
       ['B', 'E', 'A', 'E', 'B'],
       ['C', 'D', 'D', 'A', 'C'],
       ['B', 'D', 'A', 'C', 'A'],
       ['B', 'A', 'E', 'A', 'E'],
       ['C', 'D', 'C', 'E', 'D']],
      dtype='|S32')

In [306]: test_sorted
Out[306]:
array([['A', 'A', 'B', 'E', 'A'],
       ['B', 'E', 'A', 'E', 'B'],
       ['B', 'D', 'A', 'C', 'A'],
       ['B', 'A', 'E', 'A', 'E'],
       ['C', 'D', 'D', 'A', 'C'],
       ['C', 'D', 'C', 'E', 'D']],
      dtype='|S32')

In [307]: out
Out[307]:
[array([['A', 'A', 'B', 'E', 'A']],
       dtype='|S32'),
 array([['B', 'E', 'A', 'E', 'B'],
        ['B', 'D', 'A', 'C', 'A'],
        ['B', 'A', 'E', 'A', 'E']],
       dtype='|S32'),
 array([['C', 'D', 'D', 'A', 'C'],
        ['C', 'D', 'C', 'E', 'D']],
       dtype='|S32')]
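As a possible shortcut, the intermediate numeric-ID step can be folded away: np.unique on the sorted ID column already returns each ID's first occurrence, which are exactly the cut points, and it also tells you which ID each sub-array belongs to. A sketch under the same assumptions:

# Unique IDs (sorted) and the index of each ID's first occurrence in test_sorted
unique_IDs, cut_idx = np.unique(test_sorted[:, 0], return_index=True)
out = np.split(test_sorted, cut_idx)[1:]
# unique_IDs[i] is the ID shared by every row of out[i]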
Upvotes: 3