mshahabi

Reputation: 43

Error Passing Multiple Inputs to a Class while using Numba

I am trying to use the numba jitclass decorator with my class. However, I am receiving the following error. I checked the input dimensions and they look correct, but I am still getting the same error. Any idea how to resolve the issue?

import numba as nb
import numpy as np

spec = [('w_x', nb.int32), ('w_a', nb.int32), ('mu_a', nb.int64[:]),
        ('sig_a', nb.int64[:]), ('mu_x', nb.int64[:]), ('sig_x', nb.int32[:]),
        ('mu_a_a', nb.float64[:, :]), ('sig_a_a', nb.float64[:, :]),
        ('mu_x_a', nb.int32[:]), ('sig_x_a', nb.float32[:, :]),
        ('mu_0', nb.boolean), ('sig_0', nb.boolean),
        ('beta', nb.int32), ('policy', nb.uint8)]

@nb.jitclass(spec)
class learner(object):
    def __init__(self, w_x, w_a, beta, policy):
        '''
        initialize:
        w_x: the dim of customer features
        w_a: the dim of ad features
        mu_a: the prior of mean of weights on ad
        sig_a: the prior of var of weights on ad
        mu_x: the prior of mean of weights on customer
        sig_x: the prior of var of weights on customer
        mu_a_a: the prior of interactions between ad segments
        sig_a_a: the prior of var of interactions between ad segments
        mu_x_a: the prior of mean of interactions between customers and ad segments
        sig_x_a: the prior of var of interactions between customers and ad segments
        '''
        self.w_x = w_x
        self.w_a = w_a
        self.mu_a = np.zeros(self.w_a)
        self.sig_a = np.ones(self.w_a)
        self.mu_x = np.zeros(self.w_x)
        self.sig_x = np.ones(self.w_x)
        self.mu_a_a = np.zeros((self.w_a, self.w_a))
        #self.mu_a_a = np.triu(self.mu_a_a, k=1)
        self.sig_a_a = np.ones((self.w_a, self.w_a))
        #self.sig_a_a = np.triu(self.sig_a_a, k=1)
        self.mu_x_a = np.zeros((self.w_x, self.w_a))
        self.sig_x_a = np.ones((self.w_x, self.w_a))
        # the intercept term w_0
        self.mu_0 = 0
        self.sig_0 = 1
        self.beta = beta
        self.policy = policy

Below is the error message:

File "C:\Users\MSHAHAB2\AppData\Local\Continuum\anaconda3\lib\site- 
packages\numba\six.py", line 659, in reraise
raise value numba.errors.LoweringError: Failed at nopython (nopython mode 
backend)
Can only insert i64* at [4] in {i8*, i8*, i64, i64, i64*, [1 x i64], [1 x 
i64]}: got double*

File "batch_mode_function.py", line 147:
def __init__ (self, w_x, w_a, beta, policy):
    <source elided>
    self.w_a = w_a
    self.mu_a = np.zeros(self.w_a)
    ^
[1] During: lowering "(self).mu_a = $0.9" at 
W:\GRMOS\MShahabi\MNV\HillClimbSim\batch_mode_function.py (147)
[2] During: resolving callee type: 
jitclass.learner#1e390f65798<w_x:int32,w_a:int32,mu_a:array(int64, 1d, 
A),sig_a:array(int64, 1d, A),mu_x:array(int64, 1d, A),sig_x:array(int32, 1d, 
A),mu_a_a:array(float64, 2d, A),sig_a_a:array(float64, 2d, 
A),mu_x_a:array(int32, 1d, A),sig_x_a:array(float32, 2d, 
A),mu_0:bool,sig_0:bool,beta:int32,policy:uint8>
[3] During: typing of call at <string> (3)

Upvotes: 1

Views: 594

Answers (1)

JE_Muc

Reputation: 5784

The displayed error is quite easy to resolve: np.zeros creates an array of dtype=np.float64 by default, which is nb.float64 in numba, while your spec declares these members as integer arrays. You have to specify the dtype in np.zeros (and np.ones) to get arrays of np.int64 or np.int32:

self.mu_a = np.zeros(self.w_a, dtype=np.int64)
self.sig_a = np.ones(self.w_a, dtype=np.int64)
self.mu_x = np.zeros(self.w_x, dtype=np.int64)
self.sig_x = np.ones(self.w_x, dtype=np.int32)
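As a quick standalone check (a minimal sketch, independent of numba) of the NumPy default that causes the mismatch:

import numpy as np

# np.zeros and np.ones default to float64, which conflicts with an int64 spec
print(np.zeros(3).dtype)                  # float64
print(np.zeros(3, dtype=np.int64).dtype)  # int64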

The same applies to the arrays self.mu_x_a and self.sig_x_a:

self.mu_x_a = np.zeros((self.w_x, self.w_a), dtype=np.int32)
self.sig_x_a = np.ones((self.w_x, self.w_a), dtype=np.float32)

For self.mu_x_a you also missed the second dimension in the spec. It has to be:

spec = [('mu_x_a',  nb.int32[:, :])]

Then there is a follow-up error when creating the array self.mu_a_a: numba complains that the shape tuple (self.w_a, self.w_a) is of type (i64, i32). This looks like a bug in numba's type inference/casting, where all nb.int32 members seem to be cast to nb.int64 automatically.
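To see this in isolation, here is a minimal sketch (the class name Demo and its members are made up for illustration) that, on the numba version in question, should hit the same shape-tuple typing problem:

import numba as nb
import numpy as np

@nb.jitclass([('n', nb.int32), ('arr', nb.float64[:, :])])
class Demo(object):
    def __init__(self, n):
        self.n = n
        # 'n' is spec'd as int32, but numba appears to widen it to int64,
        # so the shape tuple below is typed inconsistently and lowering fails
        self.arr = np.zeros((self.n, self.n))

Demo(3)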
There are two workarounds for this:

Workaround 1:
Replace the type signature of self.w_a with nb.int64 (and also of self.w_x, since this is needed for self.mu_x_a and self.sig_x_a):

spec = [('w_x', nb.int64), ('w_a', nb.int64)]

OR Workaround 2: Don't use the inconsistently cast instance variables; use the constructor arguments directly instead:

self.mu_a_a = np.zeros((w_a, w_a))
self.sig_a_a = np.ones((w_a, w_a))
self.mu_x_a = np.zeros((w_x, w_a), dtype=np.int32)
self.sig_x_a = np.ones((w_x, w_a), dtype=np.float32)

I recommend Workaround 1, since int32 is currently cast to int64 in numba anyway. With Workaround 1 it should look like this:

import numba as nb
import numpy as np

spec = [('w_x', nb.int64), ('w_a', nb.int64), ('mu_a', nb.int64[:]),
        ('sig_a', nb.int64[:]), ('mu_x', nb.int64[:]), ('sig_x', nb.int32[:]),
        ('mu_a_a', nb.float64[:, :]), ('sig_a_a', nb.float64[:, :]),
        ('mu_x_a', nb.int32[:, :]), ('sig_x_a', nb.float32[:, :]),
        ('mu_0', nb.boolean), ('sig_0', nb.boolean),
        ('beta', nb.int32), ('policy', nb.uint8)]
@nb.jitclass(spec)        
class learner(object):
    def __init__ (self, w_x, w_a, beta, policy):
        '''
        initialize:
        w_x: the dim of customer features
        w_a: the dim of ad features
        mu_a: the prior of mean of weights on ad
        sig_a: the prior of var of weights on ad
        mu_x: the prior of mean of weights on customer
        sig_x: the prior of var of weights on customer
        mu_a_a: the prior of interactions between ad segments
        sig_a_a: the prior of var of interactions between ad segments
        mu_x_a: the prior of mean of interactions between customers and ad segments
        sig_x_a: the prior of var of interactions between customers and ad segments
        '''
        self.w_x = w_x
        self.w_a = w_a
        self.mu_a = np.zeros(self.w_a, dtype=np.int64)
        self.sig_a = np.ones(self.w_a, dtype=np.int64)
        self.mu_x = np.zeros(self.w_x, dtype=np.int64)
        self.sig_x = np.ones(self.w_x, dtype=np.int32)
        self.mu_a_a = np.zeros((self.w_a, self.w_a))
        #self.mu_a_a = np.triu(self.mu_a_a, k=1)
        self.sig_a_a = np.ones((self.w_a, self.w_a))
        #self.sig_a_a = np.triu(self.sig_a_a, k=1)
        self.mu_x_a = np.zeros((self.w_x, self.w_a), dtype=np.int32)
        self.sig_x_a = np.ones((self.w_x, self.w_a), dtype=np.float32)
        #the intercept term w_0
        self.mu_0 = 0
        self.sig_0 = 1
        self.beta = beta
        self.policy = policy
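With these fixes the class should compile; as a quick usage sketch (the argument values below are just placeholders):

# placeholder values: 5 customer features, 3 ad features
model = learner(5, 3, 1, 0)
print(model.mu_a.dtype)    # int64, matching the spec
print(model.mu_a_a.shape)  # (3, 3)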

For Workaround 2 you can leave the specs for w_x and w_a as nb.int32 and just replace the creation of the following four arrays with:

self.mu_a_a = np.zeros((w_a, w_a))
self.sig_a_a = np.ones((w_a, w_a))
self.mu_x_a = np.zeros((w_x, w_a), dtype=np.int32)
self.sig_x_a = np.ones((w_x, w_a), dtype=np.float32)
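Either way, a quick sanity check from the Python side (again with placeholder constructor values) confirms that the array dtypes now match the spec:

m = learner(5, 3, 1, 0)
print(m.sig_x.dtype)    # int32, matching nb.int32[:]
print(m.sig_x_a.dtype)  # float32, matching nb.float32[:, :]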

Since I suspect this casting behaviour is a bug, I recommend reporting it with a link to this thread.

Upvotes: 2
