Reputation: 629
I have three classes of points:
C1: {(4,1), (2,3), (3,5), (5,4), (1,6)}
C2: {(0,2), (-2,2), (-3,2), (-2,4)}
C3: {(1,-2), (3,-2)}
I also have a single-layer perceptron with 2 inputs, a bias term, and three outputs.
a) Can the net learn to separate the samples? (Assuming that we want yi = 1 if x ∈ Ci and yj = −1 for j ≠ i)
b) Add the sample (-1,6) to C1. Now, can the net learn to separate the samples?
I don't know how to approach this problem. I don't need to specify the actual weights, but how do I determine whether the network will be able to separate the samples or not? Can this be done purely graphically, or is there a written proof?
Upvotes: 0
Views: 77
Reputation: 4983
You can see the layout of the classes from the graph generated by the following code (note that the extra sample (-1,6) from part (b) is already included in C1 here):
import matplotlib.pyplot as plt

C1 = [(4,1), (2,3), (3,5), (5,4), (1,6), (-1,6)]  # (-1,6) is the part (b) sample
C2 = [(0,2), (-2,2), (-3,2), (-2,4)]
C3 = [(1,-2), (3,-2)]

plt.scatter([p[0] for p in C1], [p[1] for p in C1], c='b', label='C1')
plt.scatter([p[0] for p in C2], [p[1] for p in C2], c='r', label='C2')
plt.scatter([p[0] for p in C3], [p[1] for p in C3], c='g', label='C3')
plt.legend()
plt.show()
A perceptron, i.e. a neural network with just one layer, can only learn linearly separable data. Since each of the three outputs is trained one-vs-rest (yi = 1 for x ∈ Ci, −1 otherwise), each output is its own perceptron, so the network can learn the task exactly when every class can be cut off from the union of the other two by a single straight line. You can check that condition directly on the graph, which answers your question about a purely graphical approach.
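If you want to make the graphical check concrete, you can run the classic perceptron learning rule for each class against the union of the other two: by the perceptron convergence theorem it reaches an error-free epoch after finitely many updates if and only if the two point sets are strictly linearly separable (with a fixed epoch budget, failure to converge only strongly suggests non-separability). This is a minimal sketch in plain Python; the function name `separable` and the epoch budget are my own choices:

```python
def separable(pos, neg, epochs=1000):
    """Perceptron learning rule with a bias input.

    Returns True once an epoch finishes with zero misclassifications,
    i.e. a separating line w0*x + w1*y + w2 = 0 has been found.
    """
    # Append a constant 1 to each point so the bias is just another weight.
    data = [((x, y, 1.0), 1.0) for (x, y) in pos] + \
           [((x, y, 1.0), -1.0) for (x, y) in neg]
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        errors = 0
        for xi, target in data:
            if target * sum(wi * v for wi, v in zip(w, xi)) <= 0:
                # Misclassified (or on the boundary): perceptron update.
                w = [wi + target * v for wi, v in zip(w, xi)]
                errors += 1
        if errors == 0:
            return True
    return False  # did not converge within the budget

C1 = [(4, 1), (2, 3), (3, 5), (5, 4), (1, 6)]
C2 = [(0, 2), (-2, 2), (-3, 2), (-2, 4)]
C3 = [(1, -2), (3, -2)]

# Part (a): check each class one-vs-rest.
for name, cls, rest in [("C1", C1, C2 + C3),
                        ("C2", C2, C1 + C3),
                        ("C3", C3, C1 + C2)]:
    print(name, "separable from the rest:", separable(cls, rest))

# Part (b): repeat with (-1, 6) added to C1.
print("C2 vs rest with (-1,6) in C1:",
      separable(C2, C1 + [(-1, 6)] + C3))
```

For part (b), the interesting check is C2 against the union of C1 and C3: notice on the plot that (0,2) is exactly the midpoint of (-1,6) and (1,-2).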
A deeper network with hidden layers can produce non-linear decision boundaries, so it can separate the samples easily even when a single straight line per class is not enough.
Upvotes: 2