I have implemented this simple model to learn about neural nets; it trains well and reproduces the expected output.
This is where I am somewhat at a loss: in the XOR recognition example, I would just like to be able to test the network, not train it. All the readings I find online are about training and then stop right there.
Does this mean that for each new input the model has to recalculate and retrain on the whole set? Is there something to be done with the weights? How would you proceed to have the model running "live", taking new inputs as part of its live feedback and its ongoing training?
Thanks
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x*(1-x)
    return 1/(1+np.exp(-x))

# 4x2 input matrix: the four XOR input pairs
x = np.array([[0,0],[0,1],[1,0],[1,1]])
print(x)

# 4x1 target vector: XOR of each input pair
y = np.array([[0],[1],[1],[0]])

np.random.seed(1)
syn0 = 2*np.random.random((2,4)) - 1
print(syn0)
syn1 = 2*np.random.random((4,1)) - 1

for j in range(60000):
    # forward pass: input -> hidden -> output
    l0 = x
    l1 = nonlin(np.dot(l0, syn0))
    l2 = nonlin(np.dot(l1, syn1))
    # backpropagate the error and update both weight matrices
    l2_error = y - l2
    l2_delta = l2_error * nonlin(l2, deriv=True)
    l1_error = l2_delta.dot(syn1.T)
    l1_delta = l1_error * nonlin(l1, deriv=True)
    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)
    if j % 10000 == 0:
        print("Error:" + str(np.mean(np.abs(l2_error))))

print("Output after training")
print(syn0)
print(syn1)
print(l2)
Upvotes: 0
Views: 157
Reputation: 351
You just have to factor out the code that actually performs the neural-network computation. Here is your code amended this way:
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x*(1-x)
    return 1/(1+np.exp(-x))

# 4x2 input matrix: the four XOR input pairs
x = np.array([[0,0],[0,1],[1,0],[1,1]])
print("x=", x)

# 4x1 target vector: XOR of each input pair
y = np.array([[0],[1],[1],[0]])
print("y=", y)

np.random.seed(1)
syn0 = 2*np.random.random((2,4)) - 1
print(syn0)
syn1 = 2*np.random.random((4,1)) - 1

def NN(x):
    # forward pass only: input -> hidden -> output
    l0 = x
    l1 = nonlin(np.dot(l0, syn0))
    l2 = nonlin(np.dot(l1, syn1))
    return (l0, l1, l2)

for j in range(60000):
    l0, l1, l2 = NN(x)
    # backpropagate the error and update both weight matrices
    l2_error = y - l2
    l2_delta = l2_error * nonlin(l2, deriv=True)
    l1_error = l2_delta.dot(syn1.T)
    l1_delta = l1_error * nonlin(l1, deriv=True)
    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)
    if j % 10000 == 0:
        print("Error:" + str(np.mean(np.abs(l2_error))))

print("Output after training")
print("trained l0 weights:", syn0)
print("trained l1 weights:", syn1)

l0, l1, l2 = NN(x)
print("NN(", x, ") == ", l2)
Here NN(x) is the function that performs the neural-net computation. It returns the input vector, hidden-layer and output-layer values as a tuple. You can code a separate function for a cleaner interface:
def NNout(x, syn0, syn1):
    # forward pass only, returning just the output layer
    l0 = x
    l1 = nonlin(np.dot(l0, syn0))
    l2 = nonlin(np.dot(l1, syn1))
    return l2
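To address the "live" part of your question: once trained, the network *is* just the two weight matrices, so you can persist them and evaluate new inputs without retraining. Below is a minimal sketch of that idea; the file names syn0.npy / syn1.npy and the use of np.save / np.load are my own additions, not part of your original code:

```python
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

def NNout(x, syn0, syn1):
    # forward pass only: no training, just evaluation
    l1 = nonlin(np.dot(x, syn0))
    return nonlin(np.dot(l1, syn1))

# Train exactly as before (same seed, same update rule).
X = np.array([[0,0],[0,1],[1,0],[1,1]])
y = np.array([[0],[1],[1],[0]])
np.random.seed(1)
syn0 = 2*np.random.random((2,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1
for _ in range(60000):
    l1 = nonlin(np.dot(X, syn0))
    l2 = nonlin(np.dot(l1, syn1))
    l2_delta = (y - l2) * nonlin(l2, deriv=True)
    l1_delta = l2_delta.dot(syn1.T) * nonlin(l1, deriv=True)
    syn1 += l1.T.dot(l2_delta)
    syn0 += X.T.dot(l1_delta)

# Persist the trained weights so a separate "live" process can load them.
np.save("syn0.npy", syn0)
np.save("syn1.npy", syn1)

# Live inference: load the weights and evaluate a single new input.
w0 = np.load("syn0.npy")
w1 = np.load("syn1.npy")
new_input = np.array([[1, 0]])  # note the 2D shape: one row, two features
print(NNout(new_input, w0, w1))  # should be close to 1 once training has converged
```

If you also want ongoing training on live feedback, you can run the same delta updates on each new (input, target) pair as it arrives, starting from the loaded weights instead of random ones.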
Upvotes: 1