EliteKaffee

Reputation: 109

Numerical errors in Keras vs Numpy

In order to really understand convolutional layers, I have reimplemented the forward pass of a single Keras Conv2D layer in plain NumPy. The outputs of the two are almost identical, but there are some minor differences.

Getting the keras output:

inp = K.constant(test_x)
true_output = model.layers[0].call(inp).numpy()

My output:

import numpy as np


def relu(x):
    return np.maximum(0, x)


def forward(inp, filter_weights, filter_biases):
    # inp: a single 64x64 one-channel image
    # filter_weights: (3, 3, 1, 32), filter_biases: (32,)
    result = np.zeros((1, 64, 64, 32))

    # zero-pad by one pixel on each side (padding='same' for a 3x3 kernel)
    inp_with_padding = np.zeros((1, 66, 66, 1))
    inp_with_padding[0, 1:65, 1:65, :] = inp

    for filter_num in range(32):
        single_filter_weights = filter_weights[:, :, 0, filter_num]

        for i in range(64):
            for j in range(64):
                prod = single_filter_weights * inp_with_padding[0, i:i+3, j:j+3, 0]
                filter_sum = np.sum(prod) + filter_biases[filter_num]
                result[0, i, j, filter_num] = relu(filter_sum)
    return result


my_output = forward(test_x, filter_weights, biases_weights)

The results are largely the same, but here are some examples of differences:

Mine: 2.6608338356018066
Keras: 2.660834312438965

Mine: 1.7892705202102661
Keras: 1.7892701625823975

Mine: 0.007190803997218609
Keras: 0.007190565578639507

Mine: 4.970898151397705
Keras: 4.970897197723389

I've tried converting everything to float32, but that does not solve it. Any ideas?

Edit: I plotted the distribution of the errors, and it might give some insight into what is happening. As can be seen, the errors fall into four groups of very similar values. However, they are not exactly four discrete values; almost all of them are unique values clustered around the four peaks.

[Figure: histogram of the error values, showing four peaks]
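
A sketch of how such an error histogram can be produced (assumes my_output and true_output from above, plus matplotlib; the bin count is arbitrary):

import matplotlib.pyplot as plt

errors = (my_output - true_output).ravel()  # per-element differences
plt.hist(errors, bins=200)                  # bin count chosen arbitrarily
plt.xlabel("my_output - keras_output")
plt.show()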

I am very interested in getting my implementation to match the Keras one exactly. Unfortunately, the errors seem to grow exponentially when I stack multiple layers. Any insight would help me out a lot!

Upvotes: 1

Views: 195

Answers (3)

Daniel Möller

Reputation: 86610

The first thing to check is whether you're using padding='same'. Your implementation appears to assume 'same' padding.

If you're using any other kind of padding, including the default padding='valid', there will be a difference.
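
As a quick check (sketch; assumes the model from the question), you can read the padding straight off the layer:

print(model.layers[0].padding)                  # 'same' or 'valid'
print(model.layers[0].get_config()['padding'])  # same information via the layer config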

Another possibility is that you're accumulating rounding errors because of the triple loop of many small sums.

You could do the whole computation at once and see whether the result differs. Compare this implementation with your own, for instance:

def forward2(inp, filter_weights, filter_biases):

    #inp: (batch, 64, 64, in)
    #w: (3, 3, in, out)
    #b: (out,)

    padded_input = np.pad(inp, ((0,0), (1,1), (1,1), (0,0))) #(batch, 66, 66, in)
    stacked_input = np.stack([
        padded_input[:,  :-2], 
        padded_input[:, 1:-1],
        padded_input[:, 2:  ]], axis=1) #(batch, 3, 64, 66, in)

    stacked_input = np.stack([
        stacked_input[:, :, :,  :-2],
        stacked_input[:, :, :, 1:-1],
        stacked_input[:, :, :, 2:  ]], axis=2) #(batch, 3, 3, 64, 64, in)


    stacked_input = stacked_input.reshape((-1, 3, 3, 64, 64, 1,   1))
    w =            filter_weights.reshape(( 1, 3, 3,  1,  1, 1, 32))
    b =             filter_biases.reshape(( 1, 1, 1, 32))


    result = stacked_input * w #(-1, 3, 3, 64, 64, 1, 32)
    result = result.sum(axis=(1,2,-2)) #(-1, 64, 64, 32)
    result += b

    result = relu(result)

    return result
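
To compare the two (sketch; assumes true_output, test_x, filter_weights and filter_biases from the question):

out2 = forward2(test_x, filter_weights, filter_biases)
print(np.max(np.abs(out2 - true_output)))         # largest absolute difference
print(np.allclose(out2, true_output, atol=1e-6))  # True if everything is within tolerance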

A third possibility is to check whether you're running on a GPU, and to switch everything to the CPU for the test. Some GPU algorithms are even non-deterministic.
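
A sketch of forcing the Keras call onto the CPU (assumes TensorFlow 2.x and the model/test_x from the question):

import tensorflow as tf

with tf.device('/CPU:0'):
    cpu_output = model.layers[0](tf.constant(test_x)).numpy()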


For any kernel size:

def forward3(inp, filter_weights, filter_biases):

    inShape = inp.shape           #(batch, imgX, imgY, ins)
    wShape = filter_weights.shape #(wx, wy, ins, out)
    bShape = filter_biases.shape  #(out,)

    ins = inShape[-1]
    out = wShape[-1]

    wx = wShape[0]
    wy = wShape[1]

    imgX = inShape[1]
    imgY = inShape[2]

    assert imgX >= wx
    assert imgY >= wy

    assert inShape[-1] == wShape[-2]
    assert bShape[-1] == wShape[-1]


    #you may need to invert this padding, exchange L with R
    loseX = wx - 1
    padXL = loseX // 2
    padXR = padXL + (1 if loseX % 2 > 0 else 0)

    loseY = wy - 1
    padYL = loseY // 2
    padYR = padYL + (1 if loseY % 2 > 0 else 0)

    padded_input = np.pad(inp, ((0,0), (padXL,padXR), (padYL,padYR), (0,0)))
        #(batch, paddedX, paddedY, in)


    stacked_input = np.stack([padded_input[:, i:imgX + i] for i in range(wx)],
                             axis=1) #(batch, wx, imgX, imgY, in)

    stacked_input = np.stack([stacked_input[:,:,:,i:imgY + i] for i in range(wy)],
                             axis=2) #(batch, wx, wy, imgX, imgY, in)

    stacked_input = stacked_input.reshape((-1, wx, wy, imgX, imgY, ins,   1))
    w =            filter_weights.reshape(( 1, wx, wy,    1,    1, ins, out))
    b =             filter_biases.reshape(( 1,   1,  1, out))

    result = stacked_input * w
    result = result.sum(axis=(1,2,-2))
    result += b

    result = relu(result)

    return result
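
A quick shape check with random data (hypothetical values, just to exercise the general version with a 5x5 kernel):

inp = np.random.rand(1, 64, 64, 1).astype(np.float32)
w   = np.random.rand(5, 5, 1, 32).astype(np.float32)
b   = np.random.rand(32).astype(np.float32)
print(forward3(inp, w, b).shape)  # expected: (1, 64, 64, 32)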

Upvotes: 1

Chris

Reputation: 2860

Floating point operations are not associative: the order in which you perform them changes the result. Here is an example:

In [19]: 1.2 - 1.0 - 0.2
Out[19]: -5.551115123125783e-17

In [21]: 1.2 - 0.2 -  1.0
Out[21]: 0.0

So if you want completely identical results, it is not enough to do the same computations analytically; you also need to perform them in exactly the same order, with the same datatypes and the same rounding behaviour.

To debug this, start with the Keras code and change it line by line towards your code until you see a difference.
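
The same effect shows up in float32 reductions like a convolution sum: accumulating one element at a time (as in the question's triple loop) and letting NumPy reduce the whole array at once typically disagree in the last bits. A small sketch with hypothetical data:

import numpy as np

rng = np.random.default_rng(0)
values = rng.standard_normal(10_000).astype(np.float32)

s_loop = np.float32(0.0)
for v in values:          # sequential accumulation, like the triple loop
    s_loop += v
s_numpy = np.sum(values)  # NumPy uses pairwise summation internally

print(s_loop, s_numpy, s_loop - s_numpy)  # usually differs in the last bits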

Upvotes: 1

Hiho

Reputation: 663

Given how small the differences are, I would say that they are rounding errors.
I recommend using np.isclose (or math.isclose) to check if floats are "equal".
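
For example (sketch; assumes my_output and true_output from the question):

import numpy as np

print(np.allclose(my_output, true_output, rtol=1e-5, atol=1e-8))      # True if every entry is "close"
mismatch = ~np.isclose(my_output, true_output, rtol=1e-5, atol=1e-8)
print(mismatch.sum(), "entries differ beyond the tolerance")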

Upvotes: 2
