Reputation: 33
This is a classic implementation of the perceptron learning model with 1 neuron. Let's say I'd like to use 3 or 5 neurons for training; can I do that without a hidden layer? I just can't picture it in my head. Here is the code:
import numpy as np

def tanh(x):
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

def tanh_derivative(x):
    return 1 - x**2

# inputs
training_inputs = np.array([[0,0,0],[0,0,1],[0,1,0],[0,1,1],[1,0,0],[1,0,1],[1,1,0],[1,1,1]])
# outputs
training_outputs = np.array([[1,0,0,1,0,1,1,0]]).T

# 3 inputs, 1 output
synaptic_weights = 2 * np.random.random((3,1)) - 1
print('Random weights: {}'.format(synaptic_weights))

for i in range(20000):
    input_layer = training_inputs
    outputs = tanh(np.dot(input_layer, synaptic_weights))
    error = training_outputs - outputs
    weight_adjust = error * tanh_derivative(outputs)
    synaptic_weights += np.dot(input_layer.T, weight_adjust)

print('After training Synaptic Weights: {}'.format(synaptic_weights))
print('\n')
print('After training Outputs:\n{}'.format(outputs))
Upvotes: 1
Views: 137
Reputation: 180050
If you have 3 neurons in the output layer, you have three outputs. This makes sense for some problems - imagine a color with RGB components.
The size of your input determines the number of input nodes; the size of your output determines the number of output nodes. Only the hidden layer sizes can be chosen freely. But any interesting network has at least one hidden layer.
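For example, here is a minimal sketch of how the question's code could be changed to 3 output neurons with no hidden layer: with 3 inputs and 3 outputs the weight matrix becomes shape (3, 3), and the targets need one column per output neuron. The 3-column target values below are made up purely to show the shapes, not a meaningful dataset.

import numpy as np

def tanh(x):
    return np.tanh(x)

def tanh_derivative(y):
    # derivative of tanh, written in terms of the tanh output y
    return 1 - y**2

training_inputs = np.array([[0,0,0],[0,0,1],[0,1,0],[0,1,1],
                            [1,0,0],[1,0,1],[1,1,0],[1,1,1]])

# hypothetical 3-column targets, one column per output neuron
training_outputs = np.array([[1,0,0],[0,1,0],[0,0,1],[1,1,0],
                             [0,1,1],[1,0,1],[1,1,1],[0,0,0]])

# 3 inputs and 3 output neurons -> weight matrix of shape (3, 3)
synaptic_weights = 2 * np.random.random((3, 3)) - 1

for i in range(20000):
    outputs = tanh(np.dot(training_inputs, synaptic_weights))   # shape (8, 3)
    error = training_outputs - outputs
    weight_adjust = error * tanh_derivative(outputs)
    synaptic_weights += np.dot(training_inputs.T, weight_adjust)

print('After training Outputs:\n{}'.format(outputs))

Note that each column of the weight matrix acts as an independent single-neuron perceptron for its own output, so this still only learns linearly separable mappings per output; anything beyond that needs a hidden layer.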
Upvotes: 2