Reputation: 6829
I am using transfer learning with MobileNet and then sending the extracted features to an LSTM for video classification.
Images are resized to (224, 224) when I create the train, test, and validation datasets using image_dataset_from_directory().
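For reference, a basic image_dataset_from_directory() call looks like the sketch below (the path and batch size are placeholders, not my exact call; my pipeline also groups frames into per-video sequences, which is where the extra dimension in the dataset shape further down comes from):
import tensorflow as tf

# Minimal sketch with a placeholder directory layout of one
# subfolder per class; image_size resizes every frame to (224, 224).
train_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    "data/train",           # placeholder path
    image_size=(224, 224),
    batch_size=32,          # placeholder batch size
    label_mode="int")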
EDIT: I need to pad the sequences of the data, but I get the following error when I do so. I am not sure how to pad when I am using image_dataset_from_directory():
train_dataset = sequence.pad_sequences(train_dataset, maxlen=BATCH_SIZE, padding="post", truncating="post")
InvalidArgumentError: assertion failed: [Unable to decode bytes as JPEG, PNG, GIF, or BMP]
[[{{node decode_image/cond_jpeg/else/_1/decode_image/cond_jpeg/cond_png/else/_20/decode_image/cond_jpeg/cond_png/cond_gif/else/_39/decode_image/cond_jpeg/cond_png/cond_gif/Assert/Assert}}]] [Op:IteratorGetNext]
I checked the train_dataset type:
<BatchDataset shapes: ((None, None, 224, 224, 3), (None, None)), types: (tf.float32, tf.int32)>
Global variables:
TARGETX = 224
TARGETY = 224
CLASSES = 3
SIZE = (TARGETX,TARGETY)
INPUT_SHAPE = (TARGETX, TARGETY, 3)
CHANNELS = 3
NBFRAME = 5
INSHAPE = (NBFRAME, TARGETX, TARGETY, 3)
MobileNet function:
def build_mobilenet(shape=INPUT_SHAPE, nbout=CLASSES):
    # INPUT_SHAPE = (224,224,3)
    # CLASSES = 3
    model = MobileNetV2(
        include_top=False,
        input_shape=shape,
        weights='imagenet')
    model.trainable = True
    output = GlobalMaxPool2D()
    return Sequential([model, output])
LSTM function:
def action_model(shape=INSHAPE, nbout=3):
    # INSHAPE = (5, 224, 224, 3)
    convnet = build_mobilenet(shape[1:])
    model = Sequential()
    model.add(TimeDistributed(convnet, input_shape=shape))
    model.add(LSTM(64))
    model.add(Dense(1024, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(nbout, activation='softmax'))
    return model
model = action_model(INSHAPE, CLASSES)
model.summary()
Model: "sequential_16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
time_distributed_6 (TimeDist (None, 5, 1280) 2257984
_________________________________________________________________
lstm_5 (LSTM) (None, 64) 344320
_________________________________________________________________
dense_45 (Dense) (None, 1024) 66560
_________________________________________________________________
dropout_18 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_46 (Dense) (None, 512) 524800
_________________________________________________________________
dropout_19 (Dropout) (None, 512) 0
_________________________________________________________________
dense_47 (Dense) (None, 128) 65664
_________________________________________________________________
dropout_20 (Dropout) (None, 128) 0
_________________________________________________________________
dense_48 (Dense) (None, 64) 8256
_________________________________________________________________
dense_49 (Dense) (None, 3) 195
=================================================================
Total params: 3,267,779
Trainable params: 3,233,667
Non-trainable params: 34,112
Upvotes: 0
Views: 971
Reputation: 16916
Your model is perfectly fine. It's the way you are feeding the data that is the problem.
Your model code:
import tensorflow as tf
import keras
from keras.layers import GlobalMaxPool2D, TimeDistributed, Dense, Dropout, LSTM
from keras.applications import MobileNetV2
from keras.models import Sequential
import numpy as np
from keras.preprocessing.sequence import pad_sequences
TARGETX = 224
TARGETY = 224
CLASSES = 3
SIZE = (TARGETX,TARGETY)
INPUT_SHAPE = (TARGETX, TARGETY, 3)
CHANNELS = 3
NBFRAME = 5
INSHAPE = (NBFRAME, TARGETX, TARGETY, 3)
def build_mobilenet(shape=INPUT_SHAPE, nbout=CLASSES):
    # INPUT_SHAPE = (224,224,3)
    # CLASSES = 3
    model = MobileNetV2(
        include_top=False,
        input_shape=shape,
        weights='imagenet')
    model.trainable = True
    output = GlobalMaxPool2D()
    return Sequential([model, output])
def action_model(shape=INSHAPE, nbout=3):
    # INSHAPE = (5, 224, 224, 3)
    convnet = build_mobilenet(shape[1:])
    model = Sequential()
    model.add(TimeDistributed(convnet, input_shape=shape))
    model.add(LSTM(64))
    model.add(Dense(1024, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(nbout, activation='softmax'))
    return model
Let's try out this model with some dummy data now.
Your model accepts a sequence of images (i.e. the frames of a video) and classifies them (the video) into one of the 3 classes.
Let's create dummy data with 4 videos of 10 frames each, i.e. batch size = 4 and time steps = 10:
model = action_model(INSHAPE, CLASSES)
X = np.random.randn(4, 10, TARGETX, TARGETY, 3)
y = model(X)
print(y.shape)
Output:
(4, 3)
As expected, the output shape is (4, 3).
Now, the problem you will face when using image_dataset_from_directory is how to batch variable-length videos, since the number of frames in each video may vary. The way to handle that is with pad_sequences.
For example, if the first video has 10 frames, the second has 9, and so on, you can do something like this:
X = [np.random.randn(10, TARGETX, TARGETY, 3),
     np.random.randn(9, TARGETX, TARGETY, 3),
     np.random.randn(8, TARGETX, TARGETY, 3),
     np.random.randn(7, TARGETX, TARGETY, 3)]
# dtype='float32' matters here: pad_sequences defaults to int32,
# which would truncate the float frame values.
X = pad_sequences(X, dtype='float32')
y = model(X)
print(y.shape)
Output:
(4, 3)
So once you read the images using image_dataset_from_directory, you will have to pad the variable-length frame sequences into a batch.
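Note that pad_sequences works on in-memory lists of arrays, not directly on a tf.data Dataset, so the frames have to be collected per video first. A minimal sketch of that final step (video_paths and load_video_frames are hypothetical; the helper is assumed to return a float32 array of shape (num_frames, 224, 224, 3)):
import numpy as np
from keras.preprocessing.sequence import pad_sequences

# load_video_frames is a hypothetical helper that reads all frames of
# one video into an array of shape (num_frames, 224, 224, 3), float32.
videos = [load_video_frames(path) for path in video_paths]

# Pad every video to the length of the longest one; dtype='float32'
# prevents the default int32 cast from truncating pixel values.
batch = pad_sequences(videos, padding="post", dtype="float32")
y = model(batch)  # shape: (len(video_paths), 3)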
Upvotes: 2