Reputation: 603
I am currently trying to use an intermediate layer of my already trained DL model as an embedding for a given input. The code below works for getting the layer output I want, but it is extremely slow to run it iteratively over a large number of inputs.
from keras.models import load_model
from keras import backend as K
from keras.preprocessing.sequence import pad_sequences
import numpy as np
model = load_model('model.h5')
inp = model.input
outputs = [layer.output for layer in model.layers]
functors = [K.function([inp] + [K.learning_phase()], [out]) for out in outputs]
def text2tensor(text):
    """Convert a string to a padded sequence tensor."""
    # `tokenizer` is a Tokenizer already fitted on the training texts
    tensor = tokenizer.texts_to_sequences([text])
    tensor = pad_sequences(tensor, maxlen=10, padding='pre')
    return tensor

def get_embedding(tensor, at_layer):
    """Get the output at a particular layer in the network."""
    functors = [K.function([inp] + [K.learning_phase()], [out]) for out in outputs][at_layer - 1]
    layer_outs = [func([tensor, 1.]) for func in [functors]]
    return layer_outs[0][0]
texts = ['this is my first text',
         'this is my second text',
         'this is my third text',
         # ..., nth text
         ]
embeddings = np.empty((0, 256))
for t in texts:
    tensor = text2tensor(t)
    embedding = get_embedding(tensor, at_layer=4)
    embeddings = np.append(embeddings, [embedding[0]], axis=0)
How do I make use of batch processing so that I don't have to process the texts one by one? The implementation above works, but it is extremely slow.
Upvotes: 0
Views: 486
Reputation: 33410
In addition to the point I mentioned in my comment, I suggest you create a model instead of a backend function:
from keras.models import Model

my_desired_layer = model.layers[3]  # e.g. the 4th layer, i.e. at_layer=4 in your code
new_model = Model(model.input, my_desired_layer.output)
Then, first pre-process your text data to form an input array (i.e. my_data below), and afterwards use the predict method and pass a batch_size argument to it to exploit batch processing:
out = new_model.predict(my_data) # the default batch size is 32
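As a minimal sketch (assuming the same fitted tokenizer and maxlen=10 as in your question, and that the 4th layer is the one whose output you want), the pre-processing and the batched forward pass could look like this:

# tokenize and pad all texts at once instead of one at a time
sequences = tokenizer.texts_to_sequences(texts)
my_data = pad_sequences(sequences, maxlen=10, padding='pre')

# a single call runs the whole array through the network in batches
embeddings = new_model.predict(my_data, batch_size=32)
print(embeddings.shape)  # e.g. (len(texts), 256) if that layer outputs 256 features

This removes both the per-text Python loop and the repeated construction of backend functions, which is a large part of why the original version is so slow.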
Upvotes: 1