Ryan

Reputation: 1432

Checking Neural Network Gradient with Finite Difference Methods Doesn't Work

After a full week of print statements, dimensional analysis, refactoring, and talking through the code out loud, I can say I'm completely stuck.

The gradients my cost function produces are too far from those produced by finite differences.

I have confirmed that my cost function produces correct costs both with and without regularization. Here's the cost function:

import numpy as np
import pandas as pd

# sigmoid and sigmoidGradient are my own helper functions (not shown here)
def nnCost(nn_params, X, y, lambda_, input_layer_size, hidden_layer_size, num_labels):
  # reshape parameter/weight vectors to suit network size
  Theta1 = np.reshape(nn_params[:hidden_layer_size * (input_layer_size + 1)], (hidden_layer_size, (input_layer_size + 1)))
  Theta2 = np.reshape(nn_params[(hidden_layer_size * (input_layer_size+1)):], (num_labels, (hidden_layer_size + 1)))

  if lambda_ is None:
    lambda_ = 0

  # grab number of observations
  m = X.shape[0]
  
  # init variables we must return
  cost = 0
  Theta1_grad = np.zeros(Theta1.shape)
  Theta2_grad = np.zeros(Theta2.shape)

  # one-hot encode the vector y
  y_mtx = pd.get_dummies(y.ravel()).to_numpy() 

  ones = np.ones((m, 1))
  X = np.hstack((ones, X))
  
  # layer 1
  a1 = X
  z2 = Theta1@a1.T
  # layer 2
  ones_l2 = np.ones((y.shape[0], 1))
  a2 = np.hstack((ones_l2, sigmoid(z2.T)))
  z3 = Theta2@a2.T
  # layer 3
  a3 = sigmoid(z3)

  reg_term = (lambda_/(2*m)) * (np.sum(np.sum(np.multiply(Theta1, Theta1))) + np.sum(np.sum(np.multiply(Theta2,Theta2))) - np.subtract((Theta1[:,0].T@Theta1[:,0]),(Theta2[:,0].T@Theta2[:,0])))
  cost = (1/m) * np.sum((-np.log(a3).T * (y_mtx) - np.log(1-a3).T * (1-y_mtx))) + reg_term
  
  # BACKPROPAGATION
  # δ3 equals the difference between a3 and the y_matrix
  d3 = a3 - y_mtx.T
  # δ2 equals the product of δ3 and Θ2 (ignoring the Θ2 bias units) multiplied element-wise by the g′() of z2 (computed back in Step 2).
  d2 = Theta2[:,1:].T@d3 * sigmoidGradient(z2)
  # Δ1 equals the product of δ2 and a1.
  Delta1 = d2@a1
  Delta1 /= m
  # Δ2 equals the product of δ3 and a2.
  Delta2 = d3@a2
  Delta2 /= m
  
  reg_term1 = (lambda_/m) * np.append(np.zeros((Theta1.shape[0],1)), Theta1[:,1:], axis=1)
  reg_term2 = (lambda_/m) * np.append(np.zeros((Theta2.shape[0],1)), Theta2[:,1:], axis=1)
  
  Theta1_grad = Delta1 + reg_term1
  Theta2_grad = Delta2 + reg_term2
  
  grad = np.append(Theta1_grad.ravel(), Theta2_grad.ravel())
  
  return cost, grad

Here's the code to check the gradients. I have been over every line and there is nothing whatsoever that I can think of to change here. It seems to be in working order.

def checkNNGradients(lambda_):
  """
  Creates a small neural network to check the backpropagation gradients. 
  Credit: Based on the MATLAB code provided by Dr. Andrew Ng, Stanford Univ.
  
  Input: Regularization parameter, lambda, as int or float.
  
  Output: Prints the analytical gradients produced by the backprop code alongside the numerical 
  gradients (computed using computeNumericalGradient). These two gradient computations should 
  result in very similar values. 
  """

  input_layer_size = 3
  hidden_layer_size = 5
  num_labels = 3
  m = 5

  # generate 'random' test data
  Theta1 = debugInitializeWeights(hidden_layer_size, input_layer_size)
  Theta2 = debugInitializeWeights(num_labels, hidden_layer_size)

  # reusing debugInitializeWeights to generate X
  X  = debugInitializeWeights(m, input_layer_size - 1)
  y  = np.ones(m) + np.remainder(np.arange(m), num_labels)


  # unroll parameters
  nn_params = np.append(Theta1.ravel(), Theta2.ravel())
  costFunc = lambda p: nnCost(p, X, y, lambda_, input_layer_size, hidden_layer_size, num_labels)
    
  cost, grad = costFunc(nn_params)
    
  numgrad = computeNumericalGradient(costFunc, nn_params)

  # examine the two gradient computations; two columns should be very similar. 
  print('The columns below should be very similar.\n')
   
  # Credit: http://stackoverflow.com/a/27663954/583834
  print('{:<25}{}'.format('Numerical Gradient', 'Analytical Gradient'))
  for numerical, analytical in zip(numgrad, grad):
    print('{:<25}{}'.format(numerical, analytical))


  # If you have a correct implementation, and assuming you used EPSILON = 0.0001 
  # in computeNumericalGradient.m, then diff below should be less than 1e-9
  diff = np.linalg.norm(numgrad-grad)/np.linalg.norm(numgrad+grad)
  print(diff)
  print("\n")
  print('If your backpropagation implementation is correct, then \n' \
          'the relative difference will be small (less than 1e-9). \n' \
          '\nRelative Difference: {:.10f}'.format(diff))

The check function generates its own data using debugInitializeWeights (so that serves as the reproducible example; just run checkNNGradients and it will call the other functions), then calls computeNumericalGradient, which estimates the gradient using finite differences. Both are below.

def debugInitializeWeights(fan_out, fan_in):
  """
  Initializes the weights of a layer with fan_in
  incoming connections and fan_out outgoing connections using a fixed
  strategy.

  Input: fan_out, number of outgoing connections for a layer as int; fan_in, number
  of incoming connections for the same layer as int. 
  
  Output: Weight matrix, W, of size (fan_out, 1 + fan_in); the first column of W handles the "bias" terms
  """
  W = np.zeros((fan_out, 1 + fan_in))
  # Initialize W using "sin", this ensures that the values in W are of similar scale;
  # this will be useful for debugging
  W = np.sin(range(1, np.size(W)+1)) / 10 
  return W.reshape(fan_out, fan_in+1)

def computeNumericalGradient(J, nn_params):
  """
  Computes the gradient using "finite differences"
  and provides a numerical estimate of the gradient (i.e.,
  gradient of the function J around theta).
  Credit: Based on the MATLAB code provided by Dr. Andrew Ng, Stanford Univ. 

  Inputs: Cost function handle, J (here, a lambda wrapping nnCost); parameter vector of weights, nn_params.

  Output: Gradient vector using finite differences. Per Dr. Ng, 
  'Sets numgrad(i) to (a numerical approximation of) the partial derivative of 
  J with respect to the i-th input argument, evaluated at theta. (i.e., numgrad(i) should 
  be the (approximately) the partial derivative of J with respect
  to theta(i).)'          
  """
  numgrad = np.zeros(nn_params.shape)
  perturb = np.zeros(nn_params.shape)
  e = .0001
  for i in range(np.size(nn_params)):
      # Set perturbation (i.e., noise) vector
      perturb[i] = e
      # evaluate the cost with the perturbation subtracted from and added to the parameters
      cost1, _ = J(nn_params - perturb)
      cost2, _ = J(nn_params + perturb)
      # the centered difference in cost function outputs is the numerical gradient estimate
      numgrad[i] = (cost2 - cost1) / (2*e)
      perturb[i] = 0

  return numgrad

The code is not for class. That MOOC was in MATLAB and it's over. This is for me. Other solutions exist on the web; looking at them has proved fruitless. Everyone has a different (inscrutable) approach. So, I'm in serious need of assistance or a miracle.

Edit/Update: Fortran ordering when raveling the parameter vectors influences the outcome, but I have not been able to get the two sets of gradients to agree by changing that option.
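
To illustrate the symptom, here is a minimal sketch (plain NumPy, independent of my network code) of how unrolling a weight matrix in one memory order and reshaping it back in the other scrambles the entries:

import numpy as np

Theta = np.arange(6).reshape(2, 3)            # a small 2x3 "weight matrix"

# unroll in C (row-major) order, rebuild in Fortran (column-major) order
flat_c = Theta.ravel()                        # [0 1 2 3 4 5]
rebuilt = np.reshape(flat_c, (2, 3), order='F')
print(np.array_equal(Theta, rebuilt))         # False: the entries are scrambled

# unrolling and reshaping in the same order round-trips correctly
print(np.array_equal(Theta, np.reshape(Theta.ravel(order='F'), (2, 3), order='F')))  # True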

Upvotes: 2

Views: 994

Answers (2)

Ryan

Reputation: 1432

All of the following figured into the solution to my problem. For anyone translating MATLAB code to Python, whether from Andrew Ng's Coursera Machine Learning course or not, these are things worth knowing.

  1. MATLAB stores arrays in Fortran (column-major) order; NumPy defaults to C (row-major) order. This affects how vectors are populated when you reshape or unroll them and, thus, your results. Stick to Fortran order if you want your answers to match what you did in MATLAB. See the NumPy docs on array ordering.

  2. Getting your vectors into Fortran order can be as easy as passing order='F' as an argument to .reshape(), .ravel(), or .flatten(). You can achieve the same thing with .ravel() by transposing the array first and then raveling it, like so: X.T.ravel() (see the first sketch after this list).

  3. Speaking of .ravel(), the .ravel() and .flatten() functions do not do the same thing and have different use cases: .flatten() always returns a copy, while .ravel() returns a view of the original array when it can. For example, .flatten() is preferred by SciPy optimization methods, so if your equivalent of fminunc isn't working, it may be because you forgot to .flatten() your response vector y. See this StackOverflow Q&A and the NumPy docs on .ravel(), which link to .flatten().

  4. If you're translating your code from a MATLAB live script into a Jupyter or Google Colab notebook, you must police your namespace. On one occasion, I found that the variable I thought was being passed was not actually the variable being passed. Why? Notebooks keep a lot of state alive in the global namespace that you would never carry over in an ordinary script.

  5. There is a better way to evaluate the differences between numerical and analytical gradients: the element-wise relative error, np.abs(numerical - analytical) / np.maximum(np.abs(numerical), np.abs(analytical)). Read about it in the CS231n notes, and see the second sketch after this list. Also, consider the accepted answer above.
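
A minimal sketch of points 2 and 3 (plain NumPy, independent of the network code in the question):

import numpy as np

A = np.arange(6).reshape(2, 3)

# point 2: two equivalent ways to unroll in Fortran (column-major) order
print(np.array_equal(A.ravel(order='F'), A.T.ravel()))   # True

# point 3: .flatten() always copies, .ravel() returns a view when it can
v = A.ravel()        # view: writing into v writes into A
f = A.flatten()      # copy: writing into f leaves A untouched
v[0] = 99
print(A[0, 0])       # 99
f[1] = -1
print(A[0, 1])       # still 1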

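And a sketch of point 5; relative_error is just a name I'm using here, and numgrad/grad are the two gradient vectors compared in checkNNGradients above:

import numpy as np

def relative_error(numerical, analytical, eps=1e-12):
  # element-wise relative error, as in the CS231n notes;
  # eps guards against division by zero where both gradients are ~0
  denom = np.maximum(np.abs(numerical), np.abs(analytical))
  return np.abs(numerical - analytical) / np.maximum(denom, eps)

# e.g. relative_error(numgrad, grad).max() should be on the order of 1e-7
# or smaller for a correct backpropagation implementation
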
Upvotes: 2

hfhc2

Reputation: 4391

One thought: I think your perturbation is a little large at 1e-4. For double-precision floating-point numbers it should be more like 1e-8, i.e., the square root of the machine precision (or are you working with single precision?!).

That being said, finite differences can be very bad approximations to true derivatives. In particular, floating-point evaluations of the cost carry rounding error, as you seem to have found out, and cancellation in the difference can wipe out many significant digits under some circumstances. What values are you seeing, and what are you expecting?
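
To see that trade-off, here is a small sketch (independent of the network code) comparing central-difference errors against a known derivative:

import numpy as np

# central differences on f(x) = sin(x), whose true derivative is cos(x)
f, x = np.sin, 1.3
true_grad = np.cos(x)

for e in (1e-2, 1e-4, 1e-6, 1e-8, 1e-10):
  approx = (f(x + e) - f(x - e)) / (2 * e)
  print('e = {:.0e}   abs error = {:.2e}'.format(e, abs(approx - true_grad)))

# truncation error shrinks as e shrinks, but rounding/cancellation error grows,
# so past some point making e smaller makes the estimate worse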

Upvotes: 3
