Reputation: 4640
I am learning Keras and this is the first time I am using it on a toy example, so I tried the following linear regression. However, I got a ValueError: setting an array element with a sequence.
In:
import pandas as pd
import keras
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Activation
dims = X.shape[1]
print(dims, 'dims')
print("Building model...")
nb_classes = y.shape[0]
print(nb_classes, 'classes')
model = Sequential()
model.add(Dense(1, input_dim=dims))
model.compile(optimizer='sgd', loss='mean_squared_error')
print(X.shape)
print(y.shape)
model.fit(X, y)
Out:
68 dims
Building model...
1000 classes
(1000, 68)
(1000,)
Epoch 1/10
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-10-7a58187d7756> in <module>()
19 print(X.shape)
20 print(y.shape)
---> 21 model.fit(X, y)
/usr/local/lib/python3.5/site-packages/keras/models.py in fit(self, x, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, **kwargs)
662 shuffle=shuffle,
663 class_weight=class_weight,
--> 664 sample_weight=sample_weight)
665
666 def evaluate(self, x, y, batch_size=32, verbose=1,
/usr/local/lib/python3.5/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch)
1141 val_f=val_f, val_ins=val_ins, shuffle=shuffle,
1142 callback_metrics=callback_metrics,
-> 1143 initial_epoch=initial_epoch)
1144
1145 def evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None):
/usr/local/lib/python3.5/site-packages/keras/engine/training.py in _fit_loop(self, f, ins, out_labels, batch_size, nb_epoch, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch)
841 batch_logs['size'] = len(batch_ids)
842 callbacks.on_batch_begin(batch_index, batch_logs)
--> 843 outs = f(ins_batch)
844 if not isinstance(outs, list):
845 outs = [outs]
/usr/local/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs)
1601 session = get_session()
1602 updated = session.run(self.outputs + [self.updates_op],
-> 1603 feed_dict=feed_dict)
1604 return updated[:len(self.outputs)]
1605
/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
764 try:
765 result = self._run(None, fetches, feed_dict, options_ptr,
--> 766 run_metadata_ptr)
767 if run_metadata:
768 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
935 ' to a larger type (e.g. int64).')
936
--> 937 np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
938
939 if not subfeed_t.get_shape().is_compatible_with(np_val.shape):
/usr/local/lib/python3.5/site-packages/numpy/core/numeric.py in asarray(a, dtype, order)
529
530 """
--> 531 return array(a, dtype, copy=False, order=order)
532
533
ValueError: setting an array element with a sequence.
Data:
X.shape is (1000, 20)
Labels:
y.shape is (1000,)
Upvotes: 1
Views: 220
Reputation: 56377
The first parameter of a Dense layer is the number of outputs of that layer. In your case you have an input with shape (1000, 20) and labels with shape (1000,). That means you have 1000 training samples, each with 20 features, and each label is a single scalar (and you have 1000 of them). You therefore need to modify the model to have a single output:
model = Sequential()
model.add(Dense(1, input_dim=dims))
model.compile(optimizer='sgd', loss='mean_squared_error')
Since you want to implement linear regression, I removed the softmax activation (which is for classification) and used the mean squared error loss instead of cross-entropy.
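To make this concrete, here is a minimal, self-contained sketch of the same setup with synthetic data (the random arrays, epoch count, and batch size are my own assumptions, just to show the shapes Keras expects for a single-output regression):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Synthetic data with the shapes from the question:
# 1000 samples, 20 features, one real-valued target per sample.
X = np.random.rand(1000, 20).astype('float32')
y = np.random.rand(1000).astype('float32')

dims = X.shape[1]  # 20 input features

model = Sequential()
model.add(Dense(1, input_dim=dims))  # one output unit -> linear regression
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(X, y, nb_epoch=10, batch_size=32)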
Upvotes: 2
Reputation: 37741
The input_dim parameter of Dense() should be an integer. So, try the following:
model.add(Dense(nb_classes, input_dim=dims))
From official documentation:
keras.layers.core.Dense(output_dim, init='glorot_uniform', activation=None, weights=None, W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True, input_dim=None)
where
input_dim: dimensionality of the input (integer). This argument (or alternatively, the keyword argument input_shape) is required when using this layer as the first layer in a model.
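To illustrate that documentation note, the two ways of declaring the input dimensionality below should be equivalent for the first layer of a model (the layer size and feature count here are arbitrary examples, not taken from the question):
from keras.models import Sequential
from keras.layers import Dense

dims = 20  # number of input features, as an integer

# Option 1: pass the integer directly via input_dim
m1 = Sequential()
m1.add(Dense(1, input_dim=dims))

# Option 2: pass a shape tuple via input_shape (equivalent for a first layer)
m2 = Sequential()
m2.add(Dense(1, input_shape=(dims,)))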
Upvotes: 2