Kyrylo Kalashnikov

Reputation: 81

Tensorflow restore() missing 1 required positional argument: 'save_path'

I am trying to make a neural network in Python that, based on the Iris dataset, will predict the type of flower from an array that I feed in. This is how my NN looks:

    import numpy as np
    import pandas as pd
    import tensorflow as tf
    import matplotlib.pyplot as plt

    names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'species']
    train = pd.read_csv(dataset, names=names, skiprows=1)
    test = pd.read_csv(test_dataset, names=names, skiprows=1)
    Xtrain = train.drop("species", axis=1)
    Xtest = test.drop("species", axis=1)

    ytrain = pd.get_dummies(train.species)
    ytest = pd.get_dummies(test.species)
def create_train_model(hidden_nodes, num_iters):

    # Reset the graph
    tf.reset_default_graph()

    # Placeholders for input and output data
    X = tf.placeholder(shape=(120, 4), dtype=tf.float64, name='X')
    y = tf.placeholder(shape=(120, 3), dtype=tf.float64, name='y')

    # Variables for two group of weights between the three layers of the network
    W1 = tf.Variable(np.random.rand(4, hidden_nodes), dtype=tf.float64)
    W2 = tf.Variable(np.random.rand(hidden_nodes, 3), dtype=tf.float64)

    # Create the neural net graph
    A1 = tf.sigmoid(tf.matmul(X, W1))
    y_est = tf.sigmoid(tf.matmul(A1, W2))

    # Define a loss function
    deltas = tf.square(y_est - y)
    loss = tf.reduce_sum(deltas)

    # Define a train operation to minimize the loss
    optimizer = tf.train.GradientDescentOptimizer(0.005)
    train = optimizer.minimize(loss)

    # Initialize variables and run session
    init = tf.global_variables_initializer()
    saver = tf.train.Saver()
    sess = tf.Session()
    sess.run(init)

    # Go through num_iters iterations
    for i in range(num_iters):
        sess.run(train, feed_dict={X: Xtrain, y: ytrain})
        loss_plot[hidden_nodes].append(sess.run(loss, feed_dict={X: Xtrain.as_matrix(), y: ytrain.as_matrix()}))
        weights1 = sess.run(W1)
        weights2 = sess.run(W2)

    print("loss (hidden nodes: %d, iterations: %d): %.2f" % (hidden_nodes, num_iters, loss_plot[hidden_nodes][-1]))
    save_path = saver.save(sess, model_path , hidden_nodes)
    print("Model saved in path: %s" % save_path)
    return weights1, weights2
# Plot the loss function over iterations
num_hidden_nodes = [5, 10, 20]  
loss_plot = {5: [], 10: [], 20: []}  
weights1 = {5: None, 10: None, 20: None}  
weights2 = {5: None, 10: None, 20: None}  
num_iters = 2000

plt.figure(figsize=(12,8))  
for hidden_nodes in num_hidden_nodes:  
    weights1[hidden_nodes], weights2[hidden_nodes] = create_train_model(hidden_nodes, num_iters)
    plt.plot(range(num_iters), loss_plot[hidden_nodes], label="nn: 4-%d-3" % hidden_nodes)

plt.xlabel('Iteration', fontsize=12)  
plt.ylabel('Loss', fontsize=12)
plt.legend(fontsize=12)  

Everything runs fine. The model is saved and all training goes well. But when I feed in an array and restore the model, I get an error:

new_samples = np.array([[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]], dtype=np.float32)
with tf.Session() as sess:
  saver = tf.train.Saver
  saver.restore(sess , model_path , str(hidden_nodes))
  y_est_val = sess.run(y_est, feed_dict={X: new_samples})

After this I get the error missing 1 required positional argument: 'save_path'. I don't know what the problem could be. The error is in this line:

saver.restore(sess , model_path , hidden_nodes)

I watched some tutorials and they have the same code, and it works for them.

Upvotes: 2

Views: 7438

Answers (2)

Vijay Mariappan

Reputation: 17201

The model restore seems to be the problem. First create the graph using import_meta_graph and then restore the parameters to the graph using saver.restore.

There are other issues as well: when restoring the graph you need to load the tensors using get_tensor_by_name, so you should name the tensors appropriately.

Here are the changes you may need to make:

# The test batch size is different from the hard-coded batch size in the original graph, so replace `120` with `None` in the placeholders for X and y.
new_samples = np.array([[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]], dtype=np.float32)

tf.reset_default_graph()
graph = tf.Graph()

with graph.as_default():

    with tf.Session() as sess:

       # Create the network, load the meta file appropriately.
       saver = tf.train.import_meta_graph('{your meta file for the hidden unit}.meta')
       # Load the parameters
       saver.restore(sess , tf.train.latest_checkpoint(model_path))
       # Get the tensors from the graph. 
       X = graph.get_tensor_by_name("X:0")

       # `y_est` is not named in your graph: change to y_est = tf.identity(tf.sigmoid(tf.matmul(A1, W2)), 'y_est')
       y_est = graph.get_tensor_by_name("y_est:0")

       y_est_val = sess.run(y_est, feed_dict={X: new_samples})

Note: You need the different checkpoints kept separately without overwriting them, so do:

save_path = saver.save(sess, model_dir + str(hidden_nodes) + '/', hidden_nodes)
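For reference, a minimal sketch of how the output tensor could be named inside create_train_model so that graph.get_tensor_by_name("y_est:0") finds it after restoring (this just mirrors the comment in the snippet above; the variable names are taken from the question):

    # Name the output op when building the graph so it can be looked up after restore
    y_est = tf.sigmoid(tf.matmul(A1, W2), name='y_est')
    # or, equivalently, wrap the existing tensor as suggested in the comment above:
    # y_est = tf.identity(tf.sigmoid(tf.matmul(A1, W2)), name='y_est')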

Upvotes: 1

Eliethesaiyan

Reputation: 2322

I am not sure what kind of tutorial you watched; it would be helpful if you posted it here. From what I can tell, saver.restore takes only two arguments: the session and the save_path. I suspect the error is coming from save_path = saver.save(sess, model_path , hidden_nodes). You don't save variables like that. You save the model, and once it is restored you get the ops like:

w1 = graph.get_tensor_by_name("w1:0")
w2 = graph.get_tensor_by_name("w2:0")

My advice is to use explicit keyword arguments when saving; it will tell you which keywords are wrong.

save_path = saver.save(sess=sess, save_path=model_path, <not sure what this is>=hidden_nodes)

Here are the links about the original arguments for saver and restore, the TensorFlow guide on how to save and restore a model, and a good tutorial on how to save and restore TensorFlow models.
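For what it's worth, a minimal sketch of the two-argument restore, assuming the graph (X and y_est) has already been rebuilt in the current default graph, the placeholders were declared with shape (None, 4) and (None, 3) as suggested in the other answer, and model_path is the exact path returned by saver.save(). Note the parentheses on tf.train.Saver(): in the question the class itself is assigned, and calling restore on the class rather than on an instance can produce exactly this missing 'save_path' error.

    new_samples = np.array([[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]], dtype=np.float32)

    with tf.Session() as sess:
        saver = tf.train.Saver()           # an instance, not the class
        saver.restore(sess, model_path)    # restore takes only the session and the checkpoint path
        y_est_val = sess.run(y_est, feed_dict={X: new_samples})
        print(y_est_val)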

Upvotes: 0
