USAF

Reputation: 33

How to find the predicted output of a classification neural network in python?

I am a newbie to Python and am learning neural networks. I have a trained 3-layer feed-forward neural network with 2 neurons in the hidden layer and 3 in the output layer. I am wondering how to calculate the output layer values / the predicted output.

I have the weights and biases extracted from the network, and the activation values of the hidden layer already calculated. I just want to confirm how I can use the softmax function to calculate the output of the output layer neurons.

My implementation is as follows:

import numpy as np

weights_from_hiddenLayer_to_OutputLayer = [
    [x, y],  # two weights connecting output neuron 1 to hidden neurons 1 and 2
    [a, b],  # two weights connecting output neuron 2 to hidden neurons 1 and 2
    [c, d]   # two weights connecting output neuron 3 to hidden neurons 1 and 2
]

# output layer biases extracted from the neural network
biases_output_layer = [a, b, c]

act1 = m  # activation value of hidden neuron 1
act2 = n  # activation value of hidden neuron 2
arr = []
for i, weights in enumerate(weights_from_hiddenLayer_to_OutputLayer):
    arr.append(act1 * weights[0] + act2 * weights[1] +
               biases_output_layer[i])

# I believe this will be the index of the strongest neuron,
# i.e. the network's predicted class?
print(np.argmax(arr))

I have searched the internet for how to use softmax in Python, and this is how far I have got. My predicted output is mostly different from the network's own prediction, even though I am using the exact same values from the same trained model.

Upvotes: 2

Views: 653

Answers (1)

Orbital
Orbital
Reputation: 583

Your output would be the matrix multiplication of weights_from_hiddenLayer_to_OutputLayer and the previous layer's activations. You can then pass the result through the softmax function to get a probability distribution, and use argmax, as you guessed, to get the corresponding class.

import numpy as np

weights_from_hiddenLayer_to_OutputLayer = np.array([
    [x, y],  # two weights connecting output neuron 1 to hidden neurons 1 and 2
    [a, b],  # two weights connecting output neuron 2 to hidden neurons 1 and 2
    [c, d]   # two weights connecting output neuron 3 to hidden neurons 1 and 2
])

act = np.array([m, n])
biases_output_layer = [a, b, c]
arr = np.dot(weights_from_hiddenLayer_to_OutputLayer, act)    # matrix multiplication of weights and activations
arr = arr + biases_output_layer

probability = np.exp(arr) / np.sum(np.exp(arr), axis=0)       # softmax
print(np.argmax(probability))

Note that you technically don't need softmax unless you are back-propagating or want to assess the confidence of the output: np.argmax() returns the same index whether you pass it arr or the corresponding probabilities, because softmax is monotonically increasing.
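To see this concretely, here is a small self-contained sketch with made-up logit values (the numbers are purely illustrative, not from your model) showing that softmax produces a valid probability distribution and that argmax picks the same class either way:

```python
import numpy as np

# Hypothetical pre-softmax outputs (logits) for a 3-class network.
arr = np.array([1.2, 0.3, 2.5])

# Softmax: exponentiate, then normalize so the values sum to 1.
probability = np.exp(arr) / np.sum(np.exp(arr))

print(probability.sum())       # 1.0 (up to floating-point rounding)
print(np.argmax(arr))          # 2
print(np.argmax(probability))  # 2 -- same class as without softmax
```

If your argmax still disagrees with the network's prediction, the mismatch is in the extracted weights, biases, or hidden activations, not in this final step.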

Upvotes: 2
