Reputation: 129
I want to compile a range of functions using Numba, and since I only need to run them on my machine with the same signatures, I want to cache them. But when attempting to do so, Numba tells me that the function cannot be cached because it uses large global arrays. This is the specific warning it displays:
NumbaWarning: Cannot cache compiled function "sigmoid" as it uses dynamic globals (such as ctypes pointers and large global arrays)
I am aware that global arrays are usually frozen while large ones are not, but my function looks like this:
import numpy as np
from numba import njit

@njit(parallel=True, cache=True)
def sigmoid(x):
    return 1./(1. + np.exp(-x))
I cannot see any global arrays in there, let alone large ones.
Where is the problem?
Upvotes: 3
Views: 4180
Reputation: 433
I observed this behavior too (running on Windows 10, Dell Latitude 7480, Git for Windows), even for very simple tests. It seems parallel=True does not allow caching, independently of the actual presence of prange calls. Below is a simple example.
import numpy as np
from numba import jit, prange

def where_numba(arr: np.ndarray) -> np.ndarray:
    l0, l1 = np.shape(arr)[0], np.shape(arr)[1]
    for i0 in prange(l0):
        for i1 in prange(l1):
            if arr[i0, i1] > 0.5:
                arr[i0, i1] = arr[i0, i1] * 10
    return arr

where_numba_jit = jit(signature_or_function='float64[:,:](float64[:,:])',
                      nopython=True, parallel=True, cache=True,
                      fastmath=True, nogil=True)(where_numba)

arr = np.random.random((10000, 10000))
seln = where_numba_jit(arr)
I get the same warning.
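As a cross-check, the warning goes away as soon as parallel=True is dropped. A minimal sketch (assuming the function lives in a regular .py file, since functions defined in a REPL or notebook cannot be cached anyway):

import numpy as np
from numba import njit

# Same function as in the question, but without parallel=True:
# with cache=True alone, no warning is emitted and the compiled
# function is written to Numba's on-disk cache.
@njit(cache=True)
def sigmoid(x):
    return 1./(1. + np.exp(-x))

x = np.random.random(1000)
sigmoid(x)  # first call compiles and populates the cache
sigmoid(x)  # later calls (and new processes) reuse the cached code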
I think you should look at your specific code and decide which option (cache or parallel) is better to keep: clearly cache for relatively short calculation times, and parallel when the compilation time is negligible compared to the actual calculation time. Please comment if you have updates.
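If you want to verify that caching actually happened, one way is to look for the cache files Numba writes next to the source file. A sketch, assuming the default on-disk cache location (an index .nbi file plus .nbc data files in the module's __pycache__ directory):

import glob
import os

# List Numba's cache files (.nbi index / .nbc data) for this module.
# Assumes the default cache location; an empty list after a run with
# cache=True suggests the function was not cached.
here = os.path.dirname(os.path.abspath(__file__))
print(glob.glob(os.path.join(here, '__pycache__', '*.nb[ic]')))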
There is also an open Numba issue on this: https://github.com/numba/numba/issues/2439
Upvotes: 1