Diane.95

Reputation: 107

Implementing late fusion in Keras

I am working on a multimodal classifier that uses images and text. I have developed and successfully trained two models: a CNN for the images and a BERT-based model for the text. The last layer of both models is a Dense layer with n units and a softmax activation (where n is the number of classes). Keras provides different merging layers for combining the output vectors of these models (https://keras.io/api/layers/merging_layers/), from which it is possible to create a new network, but my question is: is there a better way to combine the decisions of the single models? Maybe weighting the values inside the vectors based on some criterion? Currently I have built my model with a simple concatenation layer, like this:

image_side = images_model(image_input)
text_side = text_model(text_input)
# Concatenation
merged = layers.Concatenate(name='Concatenation')([image_side, text_side])
merged = layers.Dense(128, activation='relu', name='Dense_128')(merged)
merged = layers.Dropout(0.2)(merged)
output = layers.Dense(nClasses, activation='softmax', name='class')(merged)
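
For comparison, the simplest decision-level alternative among those merging layers would be to just average the two softmax outputs instead of concatenating them (a minimal sketch reusing the tensors above; Model is assumed to be imported from Keras):

# Decision-level fusion: unweighted average of the two n-class
# probability vectors (their mean is again a valid distribution)
averaged = layers.Average(name='Average')([image_side, text_side])
fusion_model = Model([image_input, text_input], averaged)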

Thank you in advance!

Upvotes: 5

Views: 4160

Answers (1)

Marco Cerliani

Reputation: 22031

Here is a possibility to implement a weighted average between two tensors (the model outputs), where the weights can be learned automatically. I also introduce the constraint that the weights must sum to 1. To guarantee this, we simply apply a softmax to our weights. In the dummy example below I combine the outputs of two fully-connected branches with this method, but you can apply it in any other scenario.

Here is the custom layer:

import tensorflow as tf
from tensorflow.keras.layers import Layer

class WeightedAverage(Layer):

    def __init__(self, n_output):
        super(WeightedAverage, self).__init__()
        # one trainable weight per input tensor to be averaged
        self.W = tf.Variable(
            initial_value=tf.random.uniform(shape=[1, 1, n_output], minval=0, maxval=1),
            trainable=True)  # (1, 1, n_inputs)

    def call(self, inputs):
        # inputs is a list of tensors of shape [(n_batch, n_feat), ..., (n_batch, n_feat)]
        # expand the last dim of each input: [(n_batch, n_feat, 1), ..., (n_batch, n_feat, 1)]
        inputs = [tf.expand_dims(i, -1) for i in inputs]
        inputs = tf.concat(inputs, axis=-1)              # (n_batch, n_feat, n_inputs)
        weights = tf.nn.softmax(self.W, axis=-1)         # (1, 1, n_inputs), sums to 1 on the last dim
        return tf.reduce_sum(weights * inputs, axis=-1)  # (n_batch, n_feat)

Here is a full example on a regression problem:

import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inp1 = Input((100,))
inp2 = Input((100,))
x1 = Dense(32, activation='relu')(inp1)
x2 = Dense(32, activation='relu')(inp2)
x = [x1, x2]
W_Avg = WeightedAverage(n_output=len(x))(x)
out = Dense(1)(W_Avg)

m = Model([inp1, inp2], out)
m.compile('adam', 'mse')

n_sample = 1000
X1 = np.random.uniform(0, 1, (n_sample, 100))
X2 = np.random.uniform(0, 1, (n_sample, 100))
y = np.random.uniform(0, 1, (n_sample, 1))

m.fit([X1, X2], y, epochs=10)

In the end, you can also inspect the values of the learned weights like this (index -3 because the kernel and bias of the final Dense layer come after the WeightedAverage variable in the weight list):

tf.nn.softmax(m.get_weights()[-3]).numpy()
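
To connect this back to the question, here is a sketch of how the same layer could sit on top of the two single-modality classifiers (assuming images_model, text_model and the two inputs from the question, whose outputs image_side and text_side are n-class softmax vectors; because the learned weights sum to 1, the fused output is still a valid probability distribution):

image_side = images_model(image_input)
text_side = text_model(text_input)
# learned convex combination of the two class-probability vectors
fused = WeightedAverage(n_output=2)([image_side, text_side])
multimodal = Model([image_input, text_input], fused)
multimodal.compile('adam', 'categorical_crossentropy')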

Reference and other examples: https://towardsdatascience.com/neural-networks-ensemble-33f33bea7df3

Upvotes: 3
