Waschbrettwade

Reputation: 73

Why is the performance of my backpropagation algorithm stuck?

I am learning how to write neural networks, and currently I am working on a backpropagation algorithm with one input layer, one hidden layer and one output layer. The algorithm runs, and when I throw some test data

x_train = np.array([[1., 2., -3., 10.], [0.3, -7.8, 1., 2.]])
y_train = np.array([[10, -3, 6, 1], [1, 1, 6, 1]])

into my algorithm, using a default value of 3 hidden units and a default learning rate of 10e-4,

Backprop.train(x_train, y_train, tol = 10e-1)
x_pred = Backprop.predict(x_train)

I get good results:

Tolerances: [10e-1, 10e-2, 10e-3, 10e-4, 10e-5]
Iterations: [2678, 5255, 7106, 14270, 38895]
Mean absolute error: [0.42540, 0.14577, 0.04264, 0.01735, 0.00773]
Sum of squared errors: [1.85383, 0.21345, 0.01882, 0.00311, 0.00071]

Each time, the sum of squared errors drops by about an order of magnitude, as I would have expected. However, when I use test data like this

X_train = np.random.rand(20, 7)
Y_train = np.random.rand(20, 2)

Tolerances: [10e+1, 10e-0, 10e-1, 10e-2, 10e-3]
Iterations: [11, 19, 63, 80, 7931]
Mean absolute error: [0.30322, 0.25076, 0.25292, 0.24327, 0.24255]
Sum of squared errors: [4.69919, 3.43997, 3.50411, 3.38170, 3.16057]

nothing really changes. I have checked my hidden units, gradients and weight matrices; they all keep changing between iterations, and the gradients are indeed shrinking, because in the backprop algorithm I have set

if ( np.sum(E_hidden**2) + np.sum(E_output**2) ) < tol:
    learning = False

where E_hidden and E_output are my gradient matrices. My question is: how can the metrics stay practically the same for some data even though the gradients are shrinking as they should, and what can I do about it?
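One detail that may matter: on the random data, both metrics sit almost exactly where a constant predictor would land. For 20x2 uniform random targets, always predicting the per-column mean gives a mean absolute error around 0.25 and a sum of squared errors around 40/12 ≈ 3.3, which matches the plateau above. A quick check (a standalone sketch; none of these names come from my Backprop class):

import numpy as np

np.random.seed(0)  # arbitrary seed, just for reproducibility
Y_train = np.random.rand(20, 2)

# Constant baseline: always predict the per-column mean of the targets
y_baseline = np.tile(Y_train.mean(axis=0), (Y_train.shape[0], 1))

print(np.mean(np.abs(y_baseline - Y_train)))  # ~0.25, since E|U(0,1) - 0.5| = 0.25
print(np.sum((y_baseline - Y_train) ** 2))    # ~3.3, since 20 * 2 * Var(U(0,1)) = 40/12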

My backprop looks like this:

import numpy as np


class Backprop:


    @staticmethod
    def sigmoid(r):
        return (1 + np.exp(-r)) ** (-1)

    @staticmethod
    def train(x_train, y_train, hidden_units = 3, learning_rate = 10e-4, tol = 10e-3):
        # We need y_train to be 2D. There should be as many rows as there are x_train vectors
        N = x_train.shape[0]
        I = x_train.shape[1]
        J = hidden_units 
        K = y_train.shape[1] # Number of output units

            # Add the bias units to x_train
        bias = -np.ones(N).reshape(-1,1) # Make it 2D so we can stack it
            # Make the row vector a column vector for easier use when applying matrices. Afterwards, x_train.shape = (N, I+1)
        x_train = np.hstack((x_train, bias)).T # x_train.shape = (I+1, N) -> N column vectors of respective length I+1
        
            # Create our weight matrices
        W_input = np.random.rand(J, I+1) # W_input.shape = (J, I+1)
        W_hidden = np.random.rand(K, J+1) # W_hidden.shape = (K, J+1)
        m = 0
        learning = True
        while learning:

            ##### ----- Phase 1: Forward Propagation ----- #####

                # Create the total input to the hidden units
            u_hidden = W_input @ x_train # u_hidden.shape = (J, N) -> N column vectors of respective length J.
                                         # For every training vector we get J hidden states
                # Create the hidden units
            h = Backprop.sigmoid(u_hidden) # h.shape = (J, N)
                # Create the total input to the output units
            
            bias = -np.ones(N)
            h = np.vstack((h, bias)) # h.shape = (J+1, N)
            u_output = W_hidden @ h # u_output.shape = (K, N). For every training vector we get K output states. 
                # In the code itself the following is not necessary, because, as we remember from the above, the output activation function
                # is the identity function, but let's do it anyway for the sake of clarity
            y_pred = u_output.copy() # Now, y_pred has the same shape as y_train
            
            
            ##### ----- Phase 2: Backward Propagation ----- #####

                # We will calculate the delta terms now and begin with the delta term of the output unit
                
                # We will transpose several times now. Before, having column vectors was convenient, because matrix multiplication is
                # more intuitive that way. But now we need to work with indices and need the right dimensions. Yes, loops are inefficient,
                # but they provide much more clarity, so that we can easily connect the theory above with our code.

                # We don't need the delta_output right now, because we will update W_hidden with a loop. But we need it for the delta term 
                # of the hidden unit.
            delta_output = y_pred.T - y_train 
                # Calculate our error gradient for the output units
            E_output = np.zeros((K, J+1))
            for k in range(K):
                for j in range(J+1):
                    for n in range(N):
                        E_output[k, j] += (y_pred.T[n, k] - y_train[n, k]) * h.T[n, j] 
                # Calculate our change in W_hidden
            W_delta_output = -learning_rate * E_output
                # Update the old weights
            W_hidden = W_hidden + W_delta_output

                # Let's calculate the delta term of the hidden unit
            delta_hidden = np.zeros((N, J+1))
            for n in range(N):
                for j in range(J+1):
                    for k in range(K):
                        delta_hidden[n, j] += h.T[n, j]*(1 - h.T[n, j]) * delta_output[n, k] * W_delta_output[k, j]

                # Calculate our error gradient for the hidden units, but exclude the hidden bias unit, because W_input and the hidden bias
                # unit don't share any relation at all
            E_hidden = np.zeros((J, I+1))
            for j in range(J):
                for i in range(I+1):
                    for n in range(N):
                        E_hidden[j, i] += delta_hidden[n, j]*x_train.T[n, i]
                # Calculate our change in W_input
            W_delta_hidden = -learning_rate * E_hidden
            W_input = W_input + W_delta_hidden
            
            if ( np.sum(E_hidden**2) + np.sum(E_output**2) ) < tol:
                learning = False
            
            m += 1 # Iteration count
            
        Backprop.weights = [W_input, W_hidden]
        Backprop.iterations = m
        Backprop.errors = [E_hidden, E_output]


    ##### ----- #####


    @staticmethod
    def predict(x):
        N = x.shape[0]
            # x1 = Backprop.weights[1][:,:-1] @ Backprop.sigmoid(Backprop.weights[0][:,:-1] @ x.T)
            # Trying this, we see we really do need to add a bias here as well, since we also trained with a bias

            # Add the bias units to x
        bias = -np.ones(N).reshape(-1,1) # Make it 2D so we can stack it
            # Make the row vector a column vector for easier use when applying matrices.
        x = np.hstack((x, bias)).T
        h = Backprop.weights[0] @ x # Total input to the hidden units
        u = Backprop.sigmoid(h) # We need to transform the data using the sigmoid function
        h = np.vstack((u, bias.reshape(1, -1))) # Append the bias unit, just as in training

        return (Backprop.weights[1] @ h).T
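For completeness, the iteration counts and error metrics in the tables at the top come from a sweep over tolerances, roughly like this (a minimal sketch: the loop and the metric formulas are mine, only train and predict are from the class above):

import numpy as np

x_train = np.array([[1., 2., -3., 10.], [0.3, -7.8, 1., 2.]])
y_train = np.array([[10, -3, 6, 1], [1, 1, 6, 1]])

for tol in [10e-1, 10e-2, 10e-3, 10e-4, 10e-5]:
    Backprop.train(x_train, y_train, tol = tol)  # re-randomizes the weights on each call
    y_pred = Backprop.predict(x_train)
    mae = np.mean(np.abs(y_pred - y_train))      # mean absolute error
    sse = np.sum((y_pred - y_train) ** 2)        # sum of squared errors
    print(tol, Backprop.iterations, mae, sse)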

Upvotes: 0

Views: 236

Answers (1)

Waschbrettwade

Reputation: 73

I have found the answer. If in Backprop.predict, I write

output = (Backprop.weights[1] @ h).T
return output

instead of returning the expression directly, everything works just fine.
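For reference, here is the full method with that change applied; apart from the new assignment it is identical to the predict in the question:

@staticmethod
def predict(x):
    N = x.shape[0]
    # Add the bias units to x, exactly as in training
    bias = -np.ones(N).reshape(-1,1)
    x = np.hstack((x, bias)).T
    h = Backprop.weights[0] @ x # Total input to the hidden units
    u = Backprop.sigmoid(h)
    h = np.vstack((u, bias.reshape(1, -1))) # Append the bias unit
    output = (Backprop.weights[1] @ h).T
    return output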

Upvotes: 1
