Fábio Perez

Reputation: 26048

How to display custom images in TensorBoard using Keras?

I'm working on a segmentation problem in Keras and I want to display segmentation results at the end of every training epoch.

I want something similar to Tensorflow: How to Display Custom Images in Tensorboard (e.g. Matplotlib Plots), but using Keras. I know that Keras has the TensorBoard callback but it seems limited for this purpose.

I know this would break the Keras backend abstraction, but I'm interested in using TensorFlow backend anyway.

Is it possible to achieve that with Keras + TensorFlow?

Upvotes: 28

Views: 20903

Answers (8)

craq

Reputation: 1502

The existing answers here and elsewhere were an excellent starting point, but I found they needed some tweaking to work with TensorFlow 2.x and Keras flow_from_directory*. This is what I came up with.

My aim was to verify the data augmentation process, so the images written to TensorBoard are the augmented training data. That's not exactly what the OP wanted; they would have to change on_batch_end to on_epoch_end and access the model outputs (something I haven't looked into, but I'm sure is possible; a rough sketch of that variant is included at the end of this answer).

Similar to Fábio Perez's answer with the astronaut, you will be able to scroll through the epochs by dragging the orange slider, showing differently augmented copies of each image that has been written to TensorBoard. Be careful with large datasets trained over many epochs: since this routine saves a copy of every 1000th image in every epoch, you might end up with a large tfevents file.

The callback class, saved as tensorflow_image_callback.py:

import tensorflow as tf
import math

class TensorBoardImage(tf.keras.callbacks.Callback):

    def __init__(self, logdir, train, validation=None):
        super(TensorBoardImage, self).__init__()
        self.logdir = logdir
        self.train = train
        self.validation = validation
        self.file_writer = tf.summary.create_file_writer(logdir)

    def on_batch_end(self, batch, logs):
        images_or_labels = 0 #0=images, 1=labels
        imgs = self.train[batch][images_or_labels]

        #calculate epoch
        n_batches_per_epoch = self.train.samples / self.train.batch_size
        epoch = math.floor(self.train.total_batches_seen / n_batches_per_epoch)

        #since the training data is shuffled each epoch, we need to use the index_array to find something which uniquely 
        #identifies the image and is constant throughout training
        first_index_in_batch = batch * self.train.batch_size
        last_index_in_batch = first_index_in_batch + self.train.batch_size
        last_index_in_batch = min(last_index_in_batch, len(self.train.index_array))
        img_indices = self.train.index_array[first_index_in_batch : last_index_in_batch]

        #convert float to uint8, shift range to 0-255
        imgs -= tf.reduce_min(imgs)
        imgs *= 255 / tf.reduce_max(imgs)
        imgs = tf.cast(imgs, tf.uint8)

        with self.file_writer.as_default():
            for ix,img in enumerate(imgs):
                img_tensor = tf.expand_dims(img, 0) #tf.summary needs a 4D tensor
                #only post 1 out of every 1000 images to tensorboard
                if (img_indices[ix] % 1000) == 0:
                    #instead of img_filename, I could just use str(img_indices[ix]) as a unique identifier
                    #but this way makes it easier to find the unaugmented image
                    img_filename = self.train.filenames[img_indices[ix]]
                    tf.summary.image(img_filename, img_tensor, step=epoch)

Integrate it with your training like this:

train_augmentation = keras.preprocessing.image.ImageDataGenerator(rotation_range=20,
                                                                    shear_range=10,
                                                                    zoom_range=0.2,
                                                                    width_shift_range=0.2,
                                                                    height_shift_range=0.2,
                                                                    brightness_range=[0.8, 1.2],
                                                                    horizontal_flip=False,
                                                                    vertical_flip=False
                                                                    )
train_data_generator = train_augmentation.flow_from_directory(directory='/some/path/train/',
                                                                class_mode='categorical',
                                                                batch_size=batch_size,
                                                                shuffle=True
                                                                )

valid_augmentation = keras.preprocessing.image.ImageDataGenerator()
valid_data_generator = valid_augmentation.flow_from_directory(directory='/some/path/valid/',
                                                                class_mode='categorical',
                                                                batch_size=batch_size,
                                                                shuffle=False
                                                                )
tensorboard_log_dir = '/some/path'
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=tensorboard_log_dir, update_freq='batch')
tensorboard_image_callback = tensorflow_image_callback.TensorBoardImage(logdir=tensorboard_log_dir, train=train_data_generator, validation=valid_data_generator)

model.fit(x=train_data_generator,
        epochs=n_epochs,
        validation_data=valid_data_generator, 
        validation_freq=1,
        callbacks=[
                    tensorboard_callback,
                    tensorboard_image_callback
                    ])

*I later realised that flow_from_directory has a save_to_dir option which would have been sufficient for my purposes. Simply adding that option is much simpler, but a callback like this has the added benefits of displaying the images in TensorBoard, where multiple augmented versions of the same image can be compared, and of letting you control how many images are saved. save_to_dir writes a copy of every single augmented image, which quickly adds up to a lot of disk space.
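
As mentioned above, the OP's use case needs an on_epoch_end version that logs the model's outputs rather than the augmented inputs. Here is a minimal sketch of that variant for TF 2.x, assuming a fixed validation batch val_images; the class name, tag names and the [0, 1] value range are my assumptions, not tested code:

import tensorflow as tf

class SegmentationImageCallback(tf.keras.callbacks.Callback):
    """Log model predictions on a fixed validation batch at the end of every epoch."""

    def __init__(self, logdir, val_images):
        super().__init__()
        self.val_images = val_images  # float32 batch of shape (N, H, W, C), values assumed in [0, 1]
        self.file_writer = tf.summary.create_file_writer(logdir)

    def on_epoch_end(self, epoch, logs=None):
        preds = self.model.predict(self.val_images)  # e.g. (N, H, W, 1) segmentation masks in [0, 1]
        with self.file_writer.as_default():
            tf.summary.image("val/input", self.val_images, step=epoch, max_outputs=4)
            tf.summary.image("val/prediction", preds, step=epoch, max_outputs=4)

It would be attached with callbacks=[SegmentationImageCallback(logdir, val_images)] alongside the regular TensorBoard callback.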

Upvotes: 0

ziyi liu

Reputation: 49

import pickle

import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.callbacks import Callback, TensorBoard

class customModelCheckpoint(Callback):
    def __init__(self, log_dir='../logs/', feed_inputs_display=None):
        super(customModelCheckpoint, self).__init__()
        self.seen = 0
        self.feed_inputs_display = feed_inputs_display
        self.writer = tf.summary.FileWriter(log_dir)

    def custom_set_feed_input_to_display(self, feed_inputs_display):
        self.feed_inputs_display = feed_inputs_display

    # A callback has access to its associated model through the class property self.model.
    def on_batch_end(self, batch, logs=None):
        logs = logs or {}
        self.seen += 1
        if self.seen % 8 == 0:  # every 8 batches, plot the custom images using TensorBoard
            summary_str = []
            feature = self.feed_inputs_display[0][0]
            disp_gt = self.feed_inputs_display[0][1]
            disp_pred = self.model.predict_on_batch(feature)

            summary_str.append(tf.summary.image('disp_input/{}'.format(self.seen), feature, max_outputs=4))
            summary_str.append(tf.summary.image('disp_gt/{}'.format(self.seen), disp_gt, max_outputs=4))
            summary_str.append(tf.summary.image('disp_pred/{}'.format(self.seen), disp_pred, max_outputs=4))

            summary_st = tf.summary.merge(summary_str)
            summary_s = K.get_session().run(summary_st)
            self.writer.add_summary(summary_s, global_step=self.seen)
            self.writer.flush()
Then you can create your custom callback and write the images to TensorBoard:
callback_mc = customModelCheckpoint(log_dir='../logs/', feed_inputs_display=[(a, b)])
callback_tb = TensorBoard(log_dir='../logs/', histogram_freq=0, write_graph=True, write_images=True)
callback = []

def data_gen(fr1, fr2):
    while True:
        hdr_arr = []
        ldr_arr = []
        for i in range(args['batch_size']):
            try:
                ldr = pickle.load(fr2)
                hdr = pickle.load(fr1)
            except EOFError:
                fr1 = open(args['data_h_hdr'], 'rb')
                fr2 = open(args['data_h_ldr'], 'rb')
            hdr_arr.append(hdr)
            ldr_arr.append(ldr)
        hdr_h = np.array(hdr_arr)
        ldr_h = np.array(ldr_arr)
        gen = aug.flow(hdr_h, ldr_h, batch_size=args['batch_size'])
        out = gen.next()
        a = out[0]
        b = out[1]
        callback_mc.custom_set_feed_input_to_display(feed_inputs_display=[(a, b)])
        yield [a, b]

callback.append(callback_tb)
callback.append(callback_mc)
H = model.fit_generator(data_gen(fr1, fr2), steps_per_epoch=100, epochs=args['epoch'], callbacks=callback)


Upvotes: 0

Lokesh Kumar

Reputation: 909

I'm displaying matplotlib plots in TensorBoard (useful in cases such as plotting statistics, heatmaps, etc.). The approach can be used for the general case as well.

import tensorflow as tf
import tfmpl  # tf-matplotlib
import keras
from keras import backend as K

class AttentionLogger(keras.callbacks.Callback):
    def __init__(self, val_data, logsdir):
        super(AttentionLogger, self).__init__()
        self.logsdir = logsdir  # where the event files will be written
        self.validation_data = val_data  # validation data generator
        self.writer = tf.summary.FileWriter(self.logsdir)  # creating the summary writer

    @tfmpl.figure_tensor
    def attention_matplotlib(self, gen_images):
        '''
        Creates a matplotlib figure and writes it to tensorboard using tf-matplotlib
        gen_images: The image tensor of shape (batchsize,width,height,channels) you want to write to tensorboard
        '''
        r, c = 5, 5  # want to write 25 images as a 5x5 matplotlib subplot in TBD (tensorboard)
        figs = tfmpl.create_figures(1, figsize=(15, 15))
        cnt = 0
        for idx, f in enumerate(figs):
            for i in range(r):
                for j in range(c):
                    ax = f.add_subplot(r, c, cnt + 1)
                    ax.set_yticklabels([])
                    ax.set_xticklabels([])
                    ax.imshow(gen_images[cnt])  # writes the image at index cnt to the 5x5 grid
                    cnt += 1
            f.tight_layout()
        return figs

    def on_train_begin(self, logs=None):  # when the training begins (run only once)
        image_summary = []  # creating a list of summaries needed (can be scalar, images, histograms etc)
        for index in range(len(self.model.output)):  # self.model is accessible within callback
            img_sum = tf.summary.image('img{}'.format(index), self.attention_matplotlib(self.model.output[index]))
            image_summary.append(img_sum)
        self.total_summary = tf.summary.merge(image_summary)

    def on_epoch_end(self, epoch, logs=None):  # at the end of each epoch run this
        logs = logs or {}
        x, y = next(self.validation_data)  # get data from the generator
        # get the backend session and run the merged summary with the appropriate feed_dict
        sess_run_summary = K.get_session().run(self.total_summary, feed_dict={self.model.input: x['encoder_input']})
        self.writer.add_summary(sess_run_summary, global_step=epoch)  # finally write the summary!

Then you will have to give it as an argument to fit/fit_generator

#val_generator is the validation data generator
callback_image = AttentionLogger(logsdir='./tensorboard', val_data=val_generator)
... # define the model and generators

# autoencoder is the model; note how the callback is supplied to fit_generator
autoencoder.fit_generator(generator=train_generator,
                    validation_data=val_generator,
                    callbacks=[callback_image])

In my case, where I'm displaying attention maps (as heatmaps) in TensorBoard, this is the output:

[screenshot: attention heatmaps displayed in TensorBoard]

Upvotes: 3

Fábio Perez

Reputation: 26048

So, the following solution works well for me:

import tensorflow as tf
import keras
import skimage.util
from skimage import data

def make_image(tensor):
    """
    Convert an numpy representation image to Image protobuf.
    Copied from https://github.com/lanpa/tensorboard-pytorch/
    """
    from PIL import Image
    height, width, channel = tensor.shape
    image = Image.fromarray(tensor)
    import io
    output = io.BytesIO()
    image.save(output, format='PNG')
    image_string = output.getvalue()
    output.close()
    return tf.Summary.Image(height=height,
                         width=width,
                         colorspace=channel,
                         encoded_image_string=image_string)

class TensorBoardImage(keras.callbacks.Callback):
    def __init__(self, tag):
        super().__init__() 
        self.tag = tag

    def on_epoch_end(self, epoch, logs={}):
        # Load image
        img = data.astronaut()
        # Do something to the image
        img = (255 * skimage.util.random_noise(img)).astype('uint8')

        image = make_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()

        return

tbi_callback = TensorBoardImage('Image Example')

Just pass the callback to fit or fit_generator.
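
For example, a call along these lines (model, x_train, y_train and the epoch count are placeholders):

model.fit(x_train, y_train,
          epochs=10,
          callbacks=[tbi_callback])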

Note that you can also run some operations using the model inside the callback. For example, you may run the model on some images to check its performance.
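
As a rough sketch of that idea (not part of the original answer): the callback below reuses make_image from above and assumes it is given a small validation batch val_x whose predictions lie in [0, 1].

import numpy as np

class TensorBoardPrediction(keras.callbacks.Callback):
    def __init__(self, tag, val_x):
        super().__init__()
        self.tag = tag
        self.val_x = val_x  # a small batch of validation images

    def on_epoch_end(self, epoch, logs=None):
        pred = self.model.predict(self.val_x)[0]  # first sample, shape (H, W, C)
        if pred.shape[-1] == 1:
            pred = np.repeat(pred, 3, axis=-1)  # single-channel masks become 3-channel so PIL can encode them
        img = (255 * pred).astype('uint8')  # assumes model outputs are in [0, 1]
        image = make_image(img)  # make_image() as defined above
        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()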


Upvotes: 41

ccj5351

Reputation: 434

Based on the answers above and my own searching, I provide the following code to accomplish these things using TensorBoard in Keras:


  • problem setup: predicting the disparity map in binocular stereo matching;
  • feeding the model with the input left image x and the ground-truth disparity map gt;
  • displaying the input x and the ground truth gt at some iterations;
  • displaying the output y of your model at some iterations.

  1. First of all, you have to make your custom callback class by subclassing Callback. Note that a callback has access to its associated model through the class property self.model. Also note: you have to feed the input to the model with feed_dict if you want to get and display the output of your model.

    from keras.callbacks import Callback
    import numpy as np
    from keras import backend as K
    import tensorflow as tf
    import cv2
    
    # Make the 1-channel input image or disparity map look good within this color map.
    # This function is not necessary for the TensorBoard problem above; it is just a helper used in my own research project.
    def colormap_jet(img):
        return cv2.cvtColor(cv2.applyColorMap(np.uint8(img), 2), cv2.COLOR_BGR2RGB)
    
    class customModelCheckpoint(Callback):
        def __init__(self, log_dir='./logs/tmp/', feed_inputs_display=None):
              super(customModelCheckpoint, self).__init__()
              self.seen = 0
              self.feed_inputs_display = feed_inputs_display
              self.writer = tf.summary.FileWriter(log_dir)
    
        # this function will return the feeding data for TensorBoard visualization;
        # arguments:
        #  * feed_input_display : [(input_yourModelNeed, left_image, disparity_gt ), ..., (input_yourModelNeed, left_image, disparity_gt), ...], i.e., the list of tuples of Numpy Arrays what your model needs as input and what you want to display using TensorBoard. Note: you have to feed the input to the model with feed_dict, if you want to get and display the output of your model. 
        def custom_set_feed_input_to_display(self, feed_inputs_display):
              self.feed_inputs_display = feed_inputs_display
    
        # copied from the above answers;
        def make_image(self, numpy_img):
              from PIL import Image
              height, width, channel = numpy_img.shape
              image = Image.fromarray(numpy_img)
              import io
              output = io.BytesIO()
              image.save(output, format='PNG')
              image_string = output.getvalue()
              output.close()
              return tf.Summary.Image(height=height, width=width, colorspace= channel, encoded_image_string=image_string)
    
    
        # A callback has access to its associated model through the class property self.model.
        def on_batch_end(self, batch, logs = None):
              logs = logs or {} 
              self.seen += 1
              if self.seen % 200 == 0: # every 200 iterations or batches, plot the custom images using TensorBoard;
                  summary_str = []
                  for i in range(len(self.feed_inputs_display)):
                      feature, disp_gt, imgl = self.feed_inputs_display[i]
                      disp_pred = np.squeeze(K.get_session().run(self.model.output, feed_dict = {self.model.input : feature}), axis = 0)
                      #disp_pred = np.squeeze(self.model.predict_on_batch(feature), axis = 0)
                      summary_str.append(tf.Summary.Value(tag= 'plot/img0/{}'.format(i), image= self.make_image( colormap_jet(imgl)))) # function colormap_jet(), defined above;
                      summary_str.append(tf.Summary.Value(tag= 'plot/disp_gt/{}'.format(i), image= self.make_image( colormap_jet(disp_gt))))
                      summary_str.append(tf.Summary.Value(tag= 'plot/disp/{}'.format(i), image= self.make_image( colormap_jet(disp_pred))))
    
                  self.writer.add_summary(tf.Summary(value = summary_str), global_step =self.seen)
    
  2. Next, pass this callback object to fit_generator() for your model, like:

       feed_inputs_4_display = some_function_you_wrote()
       callback_mc = customModelCheckpoint( log_dir = log_save_path, feed_inputs_display = feed_inputs_4_display)
       # or 
       callback_mc.custom_set_feed_input_to_display(feed_inputs_4_display)
       yourModel.fit_generator(... callbacks = [callback_mc])
       ...
    
  3. Now you can run the code and go to the TensorBoard host to see the custom image display. For example, this is what I got using the code above:


    Done! Enjoy!

Upvotes: 11

mrgloom

Reputation: 21622

Here is an example of how to draw landmarks on an image (the draw_landmarks helper is assumed to be defined elsewhere; a sketch of it follows the code):

import numpy as np
import tensorflow as tf
import keras

class CustomCallback(keras.callbacks.Callback):
    def __init__(self, model, generator):
        self.generator = generator
        self.model = model

    def tf_summary_image(self, tensor):
        import io
        from PIL import Image

        tensor = tensor.astype(np.uint8)

        height, width, channel = tensor.shape
        image = Image.fromarray(tensor)
        output = io.BytesIO()
        image.save(output, format='PNG')
        image_string = output.getvalue()
        output.close()
        return tf.Summary.Image(height=height,
                             width=width,
                             colorspace=channel,
                             encoded_image_string=image_string)

    def on_epoch_end(self, epoch, logs={}):
        frames_arr, landmarks = next(self.generator)

        # Take just 1st sample from batch
        frames_arr = frames_arr[0:1,...]

        y_pred = self.model.predict(frames_arr)

        # Get last frame for which we have done predictions
        img = frames_arr[0,-1,:,:]

        img = img * 255
        img = img[:, :, ::-1]
        img = np.copy(img)

        landmarks_gt = landmarks[-1].reshape(-1,2)
        landmarks_pred = y_pred.reshape(-1,2)

        img = draw_landmarks(img, landmarks_gt, (0,255,0))
        img = draw_landmarks(img, landmarks_pred, (0,0,255))

        image = self.tf_summary_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag='landmarks', image=image)])  # a tag name (here 'landmarks') lets TensorBoard group the images
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()
        return
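
The draw_landmarks helper is not defined in the answer. A minimal OpenCV-based version of such a helper might look like the sketch below; this is an assumption about its behaviour (including that landmarks are normalised to [0, 1]), not the author's code:

import cv2
import numpy as np

def draw_landmarks(img, landmarks, color):
    """Draw each (x, y) landmark as a small filled circle and return the annotated image."""
    img = np.ascontiguousarray(img, dtype=np.uint8)
    h, w = img.shape[:2]
    for x, y in landmarks:
        # Landmarks are assumed to be normalised to [0, 1]; scale them to pixel coordinates.
        cv2.circle(img, (int(x * w), int(y * h)), 2, color, -1)  # radius 2, filled
    return img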

Upvotes: 0

Igor Gadelha Pereira

Reputation: 51

I believe I found a better way to log such custom images to TensorBoard by making use of tf-matplotlib. Here is how:

class TensorBoardDTW(tf.keras.callbacks.TensorBoard):
    def __init__(self, **kwargs):
        super(TensorBoardDTW, self).__init__(**kwargs)
        self.dtw_image_summary = None

    def _make_histogram_ops(self, model):
        super(TensorBoardDTW, self)._make_histogram_ops(model)
        tf.summary.image('dtw-cost', create_dtw_image(model.output))

One just needs to override the _make_histogram_ops method of the TensorBoard callback class to add the custom summary. In my case, create_dtw_image is a function that creates an image using tf-matplotlib.
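
create_dtw_image itself is not shown in the answer. Assuming it should render a 2D DTW cost matrix as a heatmap, a rough tf-matplotlib sketch could look like this (the names and plot details are guesses, not the author's implementation):

import tfmpl

@tfmpl.figure_tensor
def create_dtw_image(cost_matrix):
    '''Render a DTW cost matrix (a numpy array once the tensor is evaluated) as a heatmap figure.'''
    figs = tfmpl.create_figures(1, figsize=(6, 6))
    for f in figs:
        ax = f.add_subplot(111)
        ax.imshow(cost_matrix, cmap='viridis')  # assumes the tensor evaluates to a 2D matrix
        ax.set_xlabel('query frame')
        ax.set_ylabel('reference frame')
        f.tight_layout()
    return figs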


Upvotes: 1

chrish.

Reputation: 725

Similarly, you might want to try tf-matplotlib. Here's a scatter plot example:

import tensorflow as tf
import numpy as np

import tfmpl

@tfmpl.figure_tensor
def draw_scatter(scaled, colors): 
    '''Draw scatter plots. One for each color.'''  
    figs = tfmpl.create_figures(len(colors), figsize=(4,4))
    for idx, f in enumerate(figs):
        ax = f.add_subplot(111)
        ax.axis('off')
        ax.scatter(scaled[:, 0], scaled[:, 1], c=colors[idx])
        f.tight_layout()

    return figs

with tf.Session(graph=tf.Graph()) as sess:

    # A point cloud that can be scaled by the user
    points = tf.constant(
        np.random.normal(loc=0.0, scale=1.0, size=(100, 2)).astype(np.float32)
    )
    scale = tf.placeholder(tf.float32)        
    scaled = points*scale

    # Note: `scaled` above is a tensor. It is being passed to `draw_scatter` below.
    # However, when `draw_scatter` is invoked, the tensor will be evaluated and a
    # numpy array representing its content is provided.
    image_tensor = draw_scatter(scaled, ['r', 'g'])
    image_summary = tf.summary.image('scatter', image_tensor)      
    all_summaries = tf.summary.merge_all() 

    writer = tf.summary.FileWriter('log', sess.graph)
    summary = sess.run(all_summaries, feed_dict={scale: 2.})
    writer.add_summary(summary, global_step=0)

When executed, this results in the following plot inside TensorBoard:

Note that tf-matplotlib takes care of evaluating any tensor inputs, avoids pyplot threading issues, and supports blitting for runtime-critical plotting.

Upvotes: 2
