Reputation: 3033
I am training a regression model that approximates the weights for the equation Y = R + B + G. For this, I provide pre-determined values of R, B, G and Y as training data.
R = np.array([-4, -10, -2, 8, 5, 22, 3], dtype=float)
B = np.array([4, -10, 0, 0, 15, 5, 1], dtype=float)
G = np.array([0, 10, 5, 8, 1, 2, 38], dtype=float)
Y = np.array([0, -10, 3, 16, 21, 29, 42], dtype=float)
Each training example is a 1x3 array holding the i-th values of R, B and G.
RBG = np.array([R,B,G]).transpose()
print(RBG)
[[ -4. 4. 0.]
[-10. -10. 10.]
[ -2. 0. 5.]
[ 8. 0. 8.]
[ 5. 15. 1.]
[ 22. 5. 2.]
[ 3. 1. 38.]]
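(As a sanity check, the stacked matrix can be verified with plain NumPy; the values below are copied from the arrays above. One row per sample and one column per feature is the layout Keras expects when input_shape=[3].)

```python
import numpy as np

# Training values copied from the question
R = np.array([-4, -10, -2, 8, 5, 22, 3], dtype=float)
B = np.array([4, -10, 0, 0, 15, 5, 1], dtype=float)
G = np.array([0, 10, 5, 8, 1, 2, 38], dtype=float)

# Stack the three feature vectors, then transpose so that each
# row is one training sample: shape (7, 3) = 7 samples x 3 features
RBG = np.array([R, B, G]).transpose()
print(RBG.shape)  # (7, 3)
```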
I used a neural network with 3 inputs, one dense hidden layer with 2 neurons, and an output layer with a single neuron.
hidden = tf.keras.layers.Dense(units=2, input_shape=[3])
output = tf.keras.layers.Dense(units=1)
I then compiled and trained the model:
model = tf.keras.Sequential([hidden, output])
model.compile(loss='mean_squared_error',
optimizer=tf.keras.optimizers.Adam(0.1))
history = model.fit(RBG,Y, epochs=500, verbose=False)
print("Finished training the model")
The loss vs. epoch plot looked normal: decreasing, then flat.
But when I tested the model using random values of R, B and G, as in
print(model.predict([[1],[1],[1]]))
expecting the output to be 1 + 1 + 1 = 3, I instead got this ValueError:
ValueError: Error when checking input: expected dense_2_input to have shape (3,) but got array with shape (1,)
Any idea where I might be going wrong?
Surprisingly, the only input it responds to is the training data itself, i.e.,
print(model.predict(RBG))
[[ 2.1606684e-07]
[-3.0000000e+01]
[-3.2782555e-07]
[ 2.4000002e+01]
[ 4.4999996e+01]
[ 2.9000000e+01]
[ 4.2000000e+01]]
Upvotes: 0
Views: 37
Reputation: 2507
As the error says, the problem is the shape of your input: the model expects each sample to have shape (3,), but [[1],[1],[1]] has shape (3, 1), i.e. three samples with one feature each. Transpose it to get the shape the model expects:
npq = np.array([[1],[1],[1]]).transpose()
and now feed this to model.predict(npq).
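A minimal sketch with plain NumPy (no TensorFlow needed) shows why the transpose resolves the shape mismatch; the variable name npq follows the answer above:

```python
import numpy as np

# The original input: a column vector, shape (3, 1),
# which Keras reads as 3 samples with 1 feature each
bad = np.array([[1], [1], [1]])
print(bad.shape)  # (3, 1)

# Transposing gives one sample with 3 features, shape (1, 3),
# matching the model's input_shape=[3]
npq = bad.transpose()
print(npq.shape)  # (1, 3)

# Equivalently, write the single sample as a nested list directly
also_ok = np.array([[1, 1, 1]])
assert np.array_equal(npq, also_ok)
```

Either form can then be passed to model.predict; with a well-trained model the result should be close to 3.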
Upvotes: 1