tcokyasar

Reputation: 592

Finding the smallest number's index

I have the following (sampled) dictionary A, which originally has over 17,000 keys; each array's length is just over 600,000 (the same for all keys). For each of the 600,000 indices, I am trying to find the key whose array holds the smallest value at that index. For instance, in the dictionary below, I want to get i = 3093094 for j = 0 because 45.16672136 is the smallest value across the first entries of all arrays. Similarly, i = 1157086 for j = 1 because 1.53174068 is the smallest.

A = {3093094: array([45.16672136,  1.68053313, 13.78822307, ..., 36.18798239,
        36.09565274, 35.85261821]),
 1156659: array([45.46286695,  1.69632425, 13.81351489, ..., 36.54544469,
        36.45329774, 36.20969689]),
 1156667: array([45.43970605,  1.69026244, 13.81365067, ..., 36.51934187,
        36.42716964, 36.18364528]),
 1156792: array([45.29956347,  1.57736575, 13.90834355, ..., 36.43079348,
        36.33804273, 36.09623309]),
 1157086: array([45.38149498,  1.53174068, 13.98398836, ..., 36.57985343,
        36.48684657, 36.2457831 ]),
 1430072: array([45.46114909,  1.58096885, 13.95459557, ..., 36.64775128,
        36.55496457, 36.31324461]),
 1668445: array([45.44073352,  1.5941793 , 13.92953699, ..., 36.60630965,
        36.51361336, 36.27162926]),
 3055958: array([45.45006118,  1.57686417, 13.95499241, ..., 36.63558996,
        36.54278917, 36.30111176]),
 1078241: array([45.56175847,  1.77256163, 13.75586274, ..., 36.61441986,
        36.52264105, 36.27795081])}

I have the multiprocessing solution below, but I am looking for a more efficient approach because this takes too long to run.

import numpy as np
import os
from multiprocessing import Pool


C = range(len(A[3093094]))  # all column indices j

def closest(All_inputs):
    # Each task receives the whole dictionary A plus one index j, gathers
    # the j-th value of every array, and returns the key of the smallest one.
    (A, j) = All_inputs
    B = list(A.keys())
    my_list = [A[i][j] for i in B]
    return B[np.argmin(my_list)]

with Pool(processes=os.cpu_count()) as pool:
    results = pool.map(closest, [(A, j) for j in C])

A further challenge is that A gets duplicated in every worker process, and it is huge. Do you have any Pythonic approaches to complete this supposedly trivial computation quickly?
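One direction I have sketched (untested at full scale; init_worker, closest_chunk, and the 10,000 chunk size are placeholders I made up): pass the data to the workers once through the Pool initializer and hand out column ranges instead of single indices, so A is not re-serialized for every j. Each worker still ends up holding its own copy of the matrix, though.

import os
import numpy as np
from multiprocessing import Pool

_keys = None    # set once per worker by init_worker
_matrix = None

def init_worker(keys, matrix):
    # Runs once in every worker; keeps the big data in module-level
    # globals so it is not pickled again for each task.
    global _keys, _matrix
    _keys = keys
    _matrix = matrix

def closest_chunk(bounds):
    # Column-wise argmin over a range of indices, mapped back to keys.
    start, stop = bounds
    idx = np.argmin(_matrix[:, start:stop], axis=0)
    return [_keys[i] for i in idx]

if __name__ == "__main__":
    keys = list(A)                        # A as in the question
    matrix = np.array(list(A.values()))   # shape (n_keys, n_indices)
    n = matrix.shape[1]
    chunks = [(s, min(s + 10_000, n)) for s in range(0, n, 10_000)]
    with Pool(processes=os.cpu_count(),
              initializer=init_worker, initargs=(keys, matrix)) as pool:
        results = [k for part in pool.map(closest_chunk, chunks) for k in part]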

Upvotes: 0

Views: 143

Answers (3)

Xu Qiushi

Reputation: 1161

If your memory is big enough, you can try this with pandas. If it is still too slow, try dask. Both examples are listed below.

import time
import multiprocessing

import numpy as np
import pandas as pd
import dask.dataframe as dd
from tqdm import tqdm

# Build test data: 2000 keys, each an array of 600,000 random integers
test_data = {}
for i in tqdm(range(2000)):
    test_data[i] = np.random.randint(0, 10000, 600000)

# test one: plain pandas
now = time.time()
df = pd.DataFrame(test_data)
min_idx = df.idxmin(axis=1)                     # column label of the row-wise minimum
result_one = dict(enumerate(min_idx.tolist()))  # {j: key of the smallest value at j}
# print(result_one)
print(time.time() - now)

# test two: dask with one partition per CPU core
now = time.time()
df = pd.DataFrame(test_data)
ddf = dd.from_pandas(df, npartitions=multiprocessing.cpu_count())
min_idx = ddf.idxmin(axis=1).compute(scheduler="processes")
result_two = dict(enumerate(min_idx.tolist()))
# print(result_two)
print(time.time() - now)

Upvotes: 0

Timus

Reputation: 11351

I have tried the following on a machine with 12 cores and 16 GB of RAM:

import numpy as np
from multiprocessing import Pool, cpu_count
from time import perf_counter

def closest(values):
    return np.argmin(np.array(values))

if __name__ == "__main__":
    # Build A inside __main__ (otherwise each process builds it again)
    num_keys = 10_000
    arr_len = 100_000
    rng = np.random.default_rng()
    A = {
        key: rng.integers(0, 1000, arr_len)
        for key in range(1000, 1000 + num_keys)
    }

    # Multiprocessing
    start = perf_counter()
    with Pool(processes=cpu_count()) as p:
        indices = p.imap(closest, zip(*A.values()), chunksize=1000)
        keys = tuple(A.keys())
        results = [keys[i] for i in indices]
    end = perf_counter()
    print(f"Duration (np.argmin mp): {end - start:.2f}")

    # np.argmin directly
    start = perf_counter()
    arr = np.array([*A.values()])
    keys = tuple(A.keys())
    results = [keys[i] for i in np.argmin(arr, axis=0)]
    end = perf_counter()
    print(f"Duration (np.argmin direct): {end - start:.2f}")

Duration results:

Duration (np.argmin mp): 1258.07
Duration (np.argmin direct): 563.84

Results for a small sample (num_keys = 4, arr_len = 8):

A =
{1000: [879, 130, 114, 973, 691, 394, 122, 215],
 1001: [221, 482, 510, 319, 454, 585, 767, 138],
 1002: [982, 526, 971, 168, 185, 477, 838, 37],
 1003: [675, 293, 769, 878, 611, 695, 237, 129]}
results = [1001, 1000, 1000, 1002, 1002, 1000, 1000, 1002]
results = [1001, 1000, 1000, 1002, 1002, 1000, 1000, 1002]

Upvotes: 1

no comment

Reputation: 10464

This seems to work and ought to be faster than converting each column to a Python list with an unpythonic list comprehension and then back to a NumPy array:

import numpy as np

K = np.array(list(A))           # keys, in insertion order
V = np.array(list(A.values()))  # values in the same order, shape (n_keys, n_indices)
print(K[V.argmin(axis=0)])      # key of the column-wise minimum for every index

Output for your example data (with the ... removed):

[3093094 1157086 1078241 3093094 3093094 3093094]

Try it online!
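If you want the result as a j -> key mapping rather than an array, you can wrap the same output (just a usage note on top of the above, not a different method):

best = K[V.argmin(axis=0)]       # one key per index j
results = dict(enumerate(best))  # {j: key of the smallest value at j}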

Upvotes: 0
