metersk

Reputation: 12519

How to speed up complex/difficult data filtering in pandas

I have a very large data set with the index and column headers shown below.

+------+------------------------+------------------------------+--------------------------+------------------------------------+-------------------------------------+------------------------+--------------------------+--------------------------------+----------------------------+--------------------------------------+---------------------------------------+--------------------------+
|      | count: interaction_eis | count: interaction_eis_reply | count: interaction_match | count: interaction_single_message_ | count: interaction_single_message_1 | count: interaction_yes | dc(uid): interaction_eis | dc(uid): interaction_eis_reply | dc(uid): interaction_match | dc(uid): interaction_single_message_ | dc(uid): interaction_single_message_1 | dc(uid): interaction_yes |
+------+------------------------+------------------------------+--------------------------+------------------------------------+-------------------------------------+------------------------+--------------------------+--------------------------------+----------------------------+--------------------------------------+---------------------------------------+--------------------------+
| uid  |                        |                              |                          |                                    |                                     |                        |                          |                                |                            |                                      |                                       |                          |
| 38   |                     36 |                            0 |                        0 |                                 14 |                                   0 |                    163 |                        1 |                              0 |                          0 |                                    1 |                                     0 |                        1 |
| 66   |                     63 |                            0 |                        0 |                                  0 |                                   0 |                      0 |                        1 |                              0 |                          0 |                                    0 |                                     0 |                        0 |
| 1466 |                      0 |                            0 |                        0 |                                  0 |                                   0 |                      1 |                        0 |                              0 |                          0 |                                    0 |                                     0 |                        1 |
| 1709 |                     51 |                            0 |                        0 |                                  1 |                                   0 |                      9 |                        1 |                              0 |                          0 |                                    1 |                                     0 |                        1 |
| 1844 |                     66 |                            0 |                        1 |                                  3 |                                   1 |                     17 |                        1 |                              0 |                          1 |                                    1 |                                     1 |                        1 |
+------+------------------------+------------------------------+--------------------------+------------------------------------+-------------------------------------+------------------------+--------------------------+--------------------------------+----------------------------+--------------------------------------+---------------------------------------+--------------------------+

I am attempting to group UIDs by the combination of interaction types they received, so that a user who has only one particular type of interaction is grouped ONLY with other users who have just that type of interaction.

To do this, I started by taking all of the dc(uid) columns (which contain a 1 if that interaction type ever happened for the user and a 0 if it never did) and aggregating them into group keys row by row, like this:

# all of the dc(uid) columns start with "d"
cols = [c for c in all_f_rm.columns if c.startswith("d")]

def aggregate(row):
    # build a string key such as "001101" from the row's 0/1 flags
    key = ""
    for c in cols:
        key += str(row[c])

    if key not in results:
        results[key] = []
    results[key].append(row.name)

results = {}
all_f_rm.apply(aggregate, axis=1)

results.keys() contains every interaction-type combination that occurs (35 of them), and the value for each key is the list of indices (UIDs) belonging to that combination. It looks like this: {'001101': [141168, 153845, 172598, 254401, 448276,...

Next, I made a function to filter out all of the non-matching rows for each combination/key:

def tableFor(key):
    # keep only the rows whose index (UID) belongs to this combination
    return all_f_rm[all_f_rm.apply(lambda row: row.name in results[key], axis=1)]

And tableFor('001101') displays the exact DataFrame that I want.

My problem is that I wrote a list comprehension to loop through all 35 combinations, [tableFor(x) for x in results.keys()], but it is taking forever (1+ hours and it still hasn't finished), and I need to perform this on 5 more data sets. Is there a more efficient way to accomplish what I'm trying to do?

Upvotes: 0

Views: 67

Answers (1)

DSM

Reputation: 353179

IIUC, you can do what you want as a groupby. Constructing a toy dataframe like yours:

import numpy as np
import pandas as pd

# toy frame: one uid column plus six 0/1 "dc" flag columns
df = pd.DataFrame({"uid": np.arange(10**6)})
for col in range(6):
    df["dc{}".format(col)] = np.random.randint(0, 2, len(df))

We can group by the columns of interest and get the associated id numbers very rapidly:

>>> dcs = [col for col in df.columns if col.startswith("dc")]
>>> df.groupby(dcs)["uid"].unique()
dc0  dc1  dc2  dc3  dc4  dc5
0    0    0    0    0    0      [302, 357, 383, 474, 526, 614, 802, 812, 865, ...
                         1      [7, 96, 190, 220, 405, 453, 534, 598, 606, 866...
                    1    0      [16, 209, 289, 355, 430, 620, 634, 736, 780, 7...
                         1      [9, 79, 166, 268, 408, 434, 435, 447, 572, 749...
               1    0    0      [60, 120, 196, 222, 238, 346, 426, 486, 536, 5...
                         1      [2, 53, 228, 264, 315, 517, 557, 621, 626, 630...
                    1    0      [42, 124, 287, 292, 300, 338, 341, 350, 500, 5...
                         1      [33, 95, 140, 192, 225, 282, 328, 339, 365, 44...
          1    0    0    0      [1, 59, 108, 134, 506, 551, 781, 823, 836, 861...
                         1      [149, 215, 380, 394, 436, 482, 570, 600, 631, ...
                    1    0      [77, 133, 247, 333, 374, 782, 809, 892, 1096, ...
                         1      [14, 275, 312, 326, 343, 444, 569, 692, 770, 7...
               1    0    0      [69, 104, 143, 404, 431, 468, 636, 639, 657, 7...
                         1      [178, 224, 367, 402, 664, 666, 739, 807, 871, ...
[...]
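
If you'd rather end up with a plain dictionary keyed by the combination, much like the results mapping in the question, that Series converts to one directly. A minimal sketch using the toy column names (uid_map and the sample key are just illustrative):

>>> uid_map = df.groupby(dcs)["uid"].apply(list).to_dict()   # tuple key -> list of uids
>>> uid_map[(0, 0, 0, 0, 0, 1)]                              # analogous to results['000001']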

If you'd prefer the associated groups instead, you can get a list or a dictionary from this as well, rather than simply pulling out the indices:

>>> groups = list(df.groupby(dcs, as_index=False))
>>> print(groups[0][0])
(0, 0, 0, 0, 0, 0)
>>> print(groups[0][1])
           uid  dc0  dc1  dc2  dc3  dc4  dc5
302        302    0    0    0    0    0    0
357        357    0    0    0    0    0    0
383        383    0    0    0    0    0    0
[...]
999730  999730    0    0    0    0    0    0
999945  999945    0    0    0    0    0    0
999971  999971    0    0    0    0    0    0

[15357 rows x 7 columns]

and so on.
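
And if you want something closer to the original tableFor lookup, the same groupby can be fed straight into a dictionary of sub-DataFrames keyed by the combination tuple; a small sketch (tables is an illustrative name):

>>> tables = dict(iter(df.groupby(dcs)))   # combination tuple -> sub-DataFrame
>>> tables[(0, 0, 0, 0, 0, 0)]             # the same frame as groups[0][1] above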

Upvotes: 1
