beinando

Reputation: 497

Fast Style Transfer in a for-loop. Each iteration takes longer. Why?

I want to run fast style transfer in a for-loop. The problem is that each iteration takes longer than the previous one. Why is that? I read another thread where someone suggested using a placeholder for the content image, but I already use one and the behavior does not change. The code below is modified; the original source is https://github.com/hwalsuklee/tensorflow-fast-style-transfer

Here's the relevant code for my program:

sess = tf.Session(config=soft_config)

value = 1
args = parse_args()

for st in mnist_list[:]:

    if args is None:
        exit()

    # load content image
    content_image = utils.load_image(pfad_liste + "\\" + st, max_size=args.max_size)
    transformer = style_transfer_tester.StyleTransferTester(session=sess,
                                                            model_path=args.style_model,
                                                            content_image=content_image,
                                                            )

    value = value + 1

    # execute the graph
    start_time = time.time()
    output_image = transformer.test()
    end_time = time.time()
    print('EXECUTION TIME for ALL  image : %f sec' % (end_time - start_time))

    out_string = "D:\\DeepLearning\\tensorflow-fast-style-transfer\\images\\02_results\\" + str(value) + "_resultNEU.jpg"
    utils.save_image(output_image, out_string)

    tf.get_variable_scope().reuse_variables()

The class used in the code above is defined here:

import tensorflow as tf
import transform

class StyleTransferTester:

    def __init__(self, session, content_image, model_path):
        # session
        self.sess = session

        # input images
        self.x0 = content_image

        # input model
        self.model_path = model_path

        # image transform network
        self.transform = transform.Transform()

        # build graph for style transfer
        self._build_graph()

    def _build_graph(self):

        # graph input
        self.x = tf.placeholder(tf.float32, shape=self.x0.shape, name='input')
        self.xi = tf.expand_dims(self.x, 0) # add one dim for batch

        # result image from transform-net
        self.y_hat = self.transform.net(self.xi/255.0)
        self.y_hat = tf.squeeze(self.y_hat) # remove one dim for batch
        self.y_hat = tf.clip_by_value(self.y_hat, 0., 255.)

        self.sess.run(tf.global_variables_initializer())

        # load pre-trained model
        saver = tf.train.Saver()
        saver.restore(self.sess, self.model_path)

    def test(self):

        # initialize parameters
        #self.sess.run(tf.global_variables_initializer())

        # load pre-trained model
        #saver = tf.train.Saver()
        #saver.restore(self.sess, self.model_path)

        # get transformed image
        output = self.sess.run(self.y_hat, feed_dict={self.x: self.x0})

        return output

The output of the console is the following:

EXECUTION TIME for ALL  image : 3.297000 sec
EXECUTION TIME for ALL  image : 0.450000 sec
EXECUTION TIME for ALL  image : 0.474000 sec
EXECUTION TIME for ALL  image : 0.507000 sec
EXECUTION TIME for ALL  image : 0.524000 sec
EXECUTION TIME for ALL  image : 0.533000 sec
EXECUTION TIME for ALL  image : 0.559000 sec
EXECUTION TIME for ALL  image : 0.555000 sec
EXECUTION TIME for ALL  image : 0.570000 sec
EXECUTION TIME for ALL  image : 0.609000 sec
EXECUTION TIME for ALL  image : 0.623000 sec
EXECUTION TIME for ALL  image : 0.645000 sec
EXECUTION TIME for ALL  image : 0.667000 sec
EXECUTION TIME for ALL  image : 0.663000 sec
EXECUTION TIME for ALL  image : 0.746000 sec
EXECUTION TIME for ALL  image : 0.720000 sec
EXECUTION TIME for ALL  image : 0.733000 sec

I know it's a difficult question that goes deep into the details of TensorFlow.

Upvotes: 0

Views: 53

Answers (1)

Alexandr Dibrov

Reputation: 156

I assume that the formatting in the first code block is off and everything after for st in mnist_list[:]: is indented.

If this is the case, then your problem is probably caused by re-instantiating the transformer in every iteration of the loop: transformer = style_transfer_tester.StyleTransferTester(...). This repeatedly calls the StyleTransferTester constructor, which calls the _build_graph method, which in turn creates new objects (e.g. placeholders) and operations (e.g. the network ops) that are all added to the same existing graph.
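The effect can be illustrated without TensorFlow at all. The toy class below (purely illustrative, not the real TensorFlow API) mimics how the default graph accumulates ops when the network is rebuilt inside the loop:

```python
# Toy illustration: a "graph" that only ever accumulates ops, the way
# TensorFlow's default graph grows when a new network is built in every
# loop iteration instead of being built once and reused.
class ToyGraph:
    def __init__(self):
        self.ops = []

    def build_network(self, n_ops=5):
        # Each "build" appends fresh ops instead of reusing existing ones.
        self.ops.extend(object() for _ in range(n_ops))

graph = ToyGraph()
sizes = []
for _ in range(3):                # the for-loop over images
    graph.build_network()         # constructor -> _build_graph each time
    sizes.append(len(graph.ops))  # graph keeps growing

print(sizes)  # [5, 10, 15]
```

The list of op counts grows linearly with the number of iterations, which matches the steadily increasing execution times in the console output.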

Thus, your graph is gradually getting bigger and the overall execution time is increasing. A possible solution is to create the style_transfer_tester object once (outside of the loop) and then only update the content_image at every iteration.
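A rough sketch of that restructuring, reusing the helpers from the question (parse_args, utils, mnist_list, pfad_liste are from the asker's environment, so this is not runnable on its own). It assumes all content images share one shape, as MNIST images do, and that StyleTransferTester.test is extended to accept the image to feed (a small, hypothetical change to the class shown above):

```python
args = parse_args()
if args is None:
    exit()

sess = tf.Session(config=soft_config)

# Build the graph ONCE, using the first image to fix the placeholder shape.
first_image = utils.load_image(pfad_liste + "\\" + mnist_list[0],
                               max_size=args.max_size)
transformer = style_transfer_tester.StyleTransferTester(
    session=sess,
    model_path=args.style_model,
    content_image=first_image,
)

for value, st in enumerate(mnist_list, start=1):
    content_image = utils.load_image(pfad_liste + "\\" + st,
                                     max_size=args.max_size)

    start_time = time.time()
    # Hypothetical signature: test(image) feeds the given image into the
    # existing placeholder instead of the stored self.x0, e.g.:
    #     def test(self, content_image):
    #         return self.sess.run(self.y_hat,
    #                              feed_dict={self.x: content_image})
    output_image = transformer.test(content_image)
    print('EXECUTION TIME for image : %f sec' % (time.time() - start_time))

    out_string = ("D:\\DeepLearning\\tensorflow-fast-style-transfer\\images"
                  "\\02_results\\" + str(value) + "_resultNEU.jpg")
    utils.save_image(output_image, out_string)
```

If the images can have different sizes, the placeholder could instead be declared with free spatial dimensions, e.g. tf.placeholder(tf.float32, shape=(None, None, 3)), so one graph serves all of them.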

Upvotes: 0
