Reputation: 776
I want to have the simplest, single-layer NN which transforms a vector of 300 numbers into another vector of 300 numbers.
So having:
print(np.array(train_in).shape)
print(np.array(train_t).shape)
returns:
(943, 300)
(943, 300)
I try the following:
model = keras.Sequential()
model.add(Dense(300, input_shape=(300,)))
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(np.array(train_in), np.array(train_t), epochs=5)
I'm getting:
ValueError: Error when checking target: expected dense_37 to have shape (1,) but got array with shape (300,)
Why is the target expected to have shape (1,)? A layer with 300 units should produce a vector of 300 numbers as output, right?
Edit:
As requested, this is what my data looks like:
print(np.array(train_in))
print(np.array(train_t))
gives:
[[-0.13841234 0.22157902 0.12244826 ... -0.10154381 -0.01824803
-0.08607237]
[ 0.02228635 0.3353927 0.05389142 ... -0.23218463 -0.06550601
0.03365546]
[ 0.22719774 0.25478157 -0.02882686 ... -0.36675575 -0.14722016
-0.22856475]
...
[ 0.07122967 0.07579704 0.2376182 ... -0.5245226 -0.38911286
-0.5513026 ]
[-0.05494669 -0.3587228 0.13438214 ... -0.6134821 -0.06194036
-0.46365416]
[-0.16560836 -0.15729778 0.00067104 ... -0.01925305 -0.3984945
0.12297624]]
[[-0.20293862 0.27669927 0.19337481 ... -0.14366734 0.06025359
-0.1156549 ]
[-0.02273261 0.20943424 0.26937988 ... -0.20701817 -0.03191033
0.03741883]
[ 0.16326293 0.19438037 0.12544776 ... -0.37406632 -0.1527986
-0.29249507]
...
[ 0.05573128 0.26873755 0.40287578 ... -0.65253705 -0.30244952
-0.68772614]
[-0.02555208 -0.0485841 0.19109009 ... -0.2797842 -0.01007691
-0.53623134]
[-0.30828896 0.04836991 -0.108813 ... -0.20583114 -0.40019956
0.11540392]]
Upvotes: 0
Views: 259
Reputation: 56397
The problem is your loss: sparse categorical cross-entropy makes no sense in this case, as it is used for classification, and you do not seem to have a classification problem. To perform regression on a 300-dimensional vector, mean squared error makes more sense.
The problem with using sparse categorical cross-entropy is that this loss assumes the model outputs a scalar (a one-element vector). This is checked at runtime, the check fails, and that is why you get the error.
Also, accuracy makes no sense in a regression setting.
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='mean_squared_error')
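Putting it together, here is a minimal end-to-end sketch with random stand-in data in place of your train_in/train_t, using the 'adam' optimizer string (which works in modern tf.keras, in place of the TF1-era tf.train.AdamOptimizer):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Dense

# Toy stand-ins for train_in / train_t: a (943, 300) -> (943, 300) regression.
train_in = np.random.rand(943, 300).astype("float32")
train_t = np.random.rand(943, 300).astype("float32")

model = keras.Sequential()
model.add(Dense(300, input_shape=(300,)))  # linear activation, fine for regression

# Mean squared error compares the full 300-dim prediction to the 300-dim target.
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(train_in, train_t, epochs=5, verbose=0)

preds = model.predict(train_in)
print(preds.shape)  # (943, 300)
```

With MSE the target shape check now expects (300,), matching your data, so fit runs without the ValueError.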
Upvotes: 2