Reputation: 2183
I am trying to implement a simple RNN LSTM model but I am stuck. The problem itself is simple: I give the model 5 consecutive digits (one digit at a time) and I want it to predict the 6th one.
Example: input data 1, 2, 3, 4, 5 (one digit at each time step); the output for this sequence should be 6.
I have a CSV file with the columns ID, nr1, nr2, nr3, nr4, nr5 and next_nr (the label).
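For illustration, the rows look roughly like this (example values; the exact numbers do not matter, only the layout and the column names, which match the generator code below):

ID,nr1,nr2,nr3,nr4,nr5,next_nr
1,1,2,3,4,5,6
2,4,5,6,7,8,9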
I want to develop a model with Keras and make it successfully guess the 6th number.
Here is what I do:
1) First, define some constants that we will need.
NR_FEATURES = 5
ITERATOR_BATCH_SIZE = 1
NR_EPOCHS = 15
2) Define the generator that will be used when training.
def train_data_generator():
    dataset = tf.contrib.data.make_csv_dataset(train_path1,
                                               batch_size=ITERATOR_BATCH_SIZE,
                                               num_epochs=NR_EPOCHS,
                                               shuffle=True)
    iter = dataset.make_one_shot_iterator()
    next = iter.get_next()
    ID = next['ID']
    features = [next['nr1'], next['nr2'], next['nr3'], next['nr4'], next['nr5']]
    features = tf.reshape(features, [NR_FEATURES, 1])
    label = next['next_nr']
    yield (features, label)
3) Create the model and start training.
input_data = Input(shape=(5, 1), name='input_data')
layer1_out = LSTM(1, return_sequences=False)(input_data) # only return the last output
lstm_model = Model(inputs=input_data, outputs=layer1_out)
lstm_model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['accuracy'])
lstm_model.fit_generator(train_data_generator(),
                         steps_per_epoch=(150/ITERATOR_BATCH_SIZE),
                         epochs=NR_EPOCHS,
                         verbose=1)
But it crashes right away...
The error message I get:
Epoch 1/15
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-5-af9dcbcbe289> in <module>()
8 steps_per_epoch=(150/ITERATOR_BATCH_SIZE),
9 epochs=NR_EPOCHS,
---> 10 verbose=1)
~/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your `' + object_name +
90 '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper
~/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
2212 # build batch logs
2213 batch_logs = {}
-> 2214 if x is None or len(x) == 0:
2215 # Handle data tensors support when no input given
2216 # step-size = 1 for data tensors
TypeError: object of type 'Tensor' has no len()
I just do not get it. Does anyone have any idea?
Upvotes: 0
Views: 848
Reputation: 6166
You can convert the tensor to NumPy with eval() directly:
import numpy as np  # needed for np.array below

features = tf.reshape(features, [NR_FEATURES, 1])
# convert tensor to numpy
with tf.Session() as sess:
    features = features.eval()
# Your data shape needs to be adjusted relative to your model input.
features = features.reshape(-1, NR_FEATURES, 1)
label = next['next_nr']
label = np.array([label])
yield (features, label)
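For completeness, here is a rough sketch of how the whole generator could look with this idea applied. It is a variation rather than a drop-in copy of the snippet above: it fetches the features and the label in a single sess.run() call so they stay paired (separate eval() calls would each advance the CSV iterator), and it loops so the generator keeps yielding batches. It assumes the same column names and constants from your question and the TF 1.x session API; I have not tested it against your exact setup.

import numpy as np
import tensorflow as tf

def train_data_generator():
    dataset = tf.contrib.data.make_csv_dataset(train_path1,
                                               batch_size=ITERATOR_BATCH_SIZE,
                                               num_epochs=NR_EPOCHS,
                                               shuffle=True)
    iterator = dataset.make_one_shot_iterator()
    next_batch = iterator.get_next()
    features_t = tf.reshape([next_batch['nr1'], next_batch['nr2'], next_batch['nr3'],
                             next_batch['nr4'], next_batch['nr5']], [NR_FEATURES, 1])
    label_t = next_batch['next_nr']
    with tf.Session() as sess:
        while True:
            try:
                # pull one CSV batch; sess.run returns NumPy arrays, not tensors
                features, label = sess.run([features_t, label_t])
            except tf.errors.OutOfRangeError:
                # the CSV iterator is exhausted after num_epochs passes
                return
            # Keras expects (batch, timesteps, features) for the input
            # and (batch,) for the target
            yield features.reshape(-1, NR_FEATURES, 1), np.array(label).reshape(-1)

With ITERATOR_BATCH_SIZE = 1 this yields one sequence per step, which lines up with the steps_per_epoch=(150/ITERATOR_BATCH_SIZE) you pass to fit_generator, assuming the CSV has 150 rows.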
Upvotes: 1