Overasyco

Reputation: 97

How to use a tflearn trained model in an application?

I am currently trying to use a trained model in an application.

I've been using this code to generate US city names with an LSTM model. The code works fine and I do manage to get city names.

Right now, I am trying to save the model so I can load it in a different application without training the model again.

Here is the code of my basic application:

from __future__ import absolute_import, division, print_function

import os
from six import moves
import ssl
import tflearn
from tflearn.data_utils import *


path = "US_cities.txt"
maxlen = 20
X, Y, char_idx = textfile_to_semi_redundant_sequences(
    path, seq_maxlen=maxlen, redun_step=3)


# --- Create LSTM model
g = tflearn.input_data(shape=[None, maxlen, len(char_idx)])
g = tflearn.lstm(g, 512, return_seq=True, name="lstm1")
g = tflearn.dropout(g, 0.5, name='dropout1')
g = tflearn.lstm(g, 512, name='lstm2')
g = tflearn.dropout(g, 0.5, name='dropout')
g = tflearn.fully_connected(g, len(char_idx), activation='softmax', name='fc')
g = tflearn.regression(g, optimizer='adam', loss='categorical_crossentropy',
                            learning_rate=0.001)


# --- Initializing model and loading
model = tflearn.models.generator.SequenceGenerator(g, char_idx)
model.load('myModel.tfl')
print("Model is now loaded!")


# 
#    Main Application   
# 

while(True):
    user_choice = input("Do you want to generate a U.S. city names ? [y/n]")
    if user_choice == 'y':
        seed = random_sequence_from_textfile(path, 20)
        print("-- Test with temperature of 1.5 --")
        model.generate(20, temperature=1.5, seq_seed=seed, display=True)
    else:
        exit()

And here is the output I get:

Do you want to generate a U.S. city names ? [y/n]y
-- Test with temperature of 1.5 --
rk
Orange Park AcresTraceback (most recent call last):
  File "App.py", line 46, in <module>
    model.generate(20, temperature=1.5, seq_seed=seed, display=True)
  File "/usr/local/lib/python3.5/dist-packages/tflearn/models/generator.py", line 216, in generate
    preds = self._predict(x)[0]
  File "/usr/local/lib/python3.5/dist-packages/tflearn/models/generator.py", line 180, in _predict
    return self.predictor.predict(feed_dict)
  File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/evaluator.py", line 69, in predict
    o_pred = self.session.run(output, feed_dict=feed_dict).tolist()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 717, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 894, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 25, 61) for Tensor 'InputData/X:0', which has shape '(?, 20, 61)'

Unfortunately, I can't see why the shape has changed when using generate() in my app. Could anyone help me solve this problem?
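For anyone debugging the same error: the fed tensor's time dimension (25) comes from the length of the one-hot-encoded seed window, while the graph's input placeholder was built with maxlen = 20. A pure-Python sketch of the vectorization step makes that relationship explicit (the `vectorize_seed` helper below is hypothetical, not tflearn's actual code):

```python
def vectorize_seed(seed, char_idx, window):
    """One-hot encode the last `window` characters of `seed`.

    Returns a nested list of shape (1, window, len(char_idx)):
    a batch of one sequence, one one-hot vector per character.
    """
    seed = seed[-window:]
    batch = [[[1.0 if char_idx.get(c) == i else 0.0
               for i in range(len(char_idx))]
              for c in seed]]
    return batch


char_idx = {c: i for i, c in enumerate("abcdefghij")}
seed = "abcabcabcabcabcabcabcabca"  # 25 characters

# A 25-character window produces shape (1, 25, 10): this is the kind of
# tensor that a graph expecting (?, 20, 10) will reject.
fed = vectorize_seed(seed, char_idx, 25)
print(len(fed), len(fed[0]), len(fed[0][0]))  # -> 1 25 10

# Windowing with the graph's maxlen produces a compatible shape.
fed = vectorize_seed(seed, char_idx, 20)
print(len(fed), len(fed[0]), len(fed[0][0]))  # -> 1 20 10
```

So the thing to check is which window length the generator is actually using at prediction time versus the maxlen the input layer was built with.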

Thank you in advance

William

Upvotes: 0

Views: 1249

Answers (1)

Overasyco

Reputation: 97

SOLVED?

One solution is to simply add "modes" to the Python script using an argument parser:

import argparse
parser = argparse.ArgumentParser()
parser.add_argument("mode", help="Train or/and test", nargs='+', choices=["train","test"])
args = parser.parse_args()
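Note that with nargs='+', argparse collects the positional argument into a list, so an equality check against the string "train" would never match; a membership test does. A quick stdlib check:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("mode", help="Train or/and test", nargs='+',
                    choices=["train", "test"])

# nargs='+' makes args.mode a list, even for a single value.
args = parser.parse_args(["train"])
print(args.mode)             # -> ['train'], not 'train'
print(args.mode == "train")  # -> False: equality against a string never matches
print("train" in args.mode)  # -> True: membership is the right test

# Both modes can be requested at once, which suits "train or/and test".
args = parser.parse_args(["train", "test"])
print("test" in args.mode)   # -> True
```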

And then

# nargs='+' makes args.mode a list, so test membership rather than equality
if "train" in args.mode:
    # define your model
    # train the model
    model.save('my_model.tflearn')

if "test" in args.mode:
    model.load('my_model.tflearn')
    # do whatever you want with your model

I don't really understand why this works while loading the model from a separate script doesn't, but I guess this should be fine for the moment...

Upvotes: 1
