Reputation: 483
I'm fine-tuning the Inception V3 model with Keras, in order to convert it with coremltools into a .mlmodel file.
However, when converting the model, coremltools throws the following error once the converter reaches the last layer:
coremltools/models/neural_network.py", line 2501, in set_pre_processing_parameters
channels, height, width = array_shape
ValueError: need more than 1 value to unpack
I used the code from the Keras documentation on applications, found here: https://keras.io/applications/#fine-tune-inceptionv3-on-a-new-set-of-classes
and added a piece of code that loads my dataset, taken from the VGG example found here: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
My final script looks like this, using TensorFlow as the backend:
# LOAD THE DATA
from keras.preprocessing.image import ImageDataGenerator
img_width, img_height = 299, 299
train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 358
nb_validation_samples = 21
epochs = 1
batch_size = 15
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
# TRAIN THE MODEL
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
base_model = InceptionV3(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(7, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers:
    layer.trainable = False
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
for i, layer in enumerate(base_model.layers):
    print(i, layer.name)
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True
from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
model.save('finetuned_inception.h5')
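The coremltools call that actually produces the error isn't included above. For reference, the conversion step presumably looks roughly like this (a sketch only; the input/output names, image scale and class labels below are my assumptions, not taken from the question):
import coremltools

# Sketch of the failing conversion step; names and labels are placeholders.
coreml_model = coremltools.converters.keras.convert(
    'finetuned_inception.h5',
    input_names='image',
    image_input_names='image',  # setting this is what leads into set_pre_processing_parameters
    image_scale=1. / 255,
    class_labels=['class_%d' % i for i in range(7)])
coreml_model.save('FinetunedInception.mlmodel')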
Upvotes: 2
Views: 442
Reputation: 1278
I'm writing here in response to @SwimBikeRun's request (as I need a bit more space). I was converting YOLO to Keras and then Keras to CoreML. For the conversion I was using this script: https://github.com/qqwweee/keras-yolo3/blob/master/convert.py
In the conversion process the model was eventually created like this:
input_layer = Input(shape=(None, None, 3))
...
model = Model(inputs=input_layer, outputs=[all_layers[i] for i in out_index])
And those "None"-inputs was what made CoreML conversion fail. For CoreML the input-size to your model must be known. So I changed it to this:
input_layer = Input(shape=(416, 416, 3))
Your input size will probably vary.
For your original question: maybe check your base_model.input size for the same problem.
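A minimal sketch of that check for the InceptionV3 case above (assuming the 299x299 size already used for the generators; this just mirrors the suggestion, it is not the asker's code): building the base model without an input_shape leaves the input as (None, None, 3), which is exactly the kind of unknown size CoreML cannot handle.
from keras.applications.inception_v3 import InceptionV3

# Without input_shape, include_top=False leaves the spatial size unknown.
base_model = InceptionV3(weights='imagenet', include_top=False)
print(base_model.input_shape)   # (None, None, None, 3)

# Fixing the size up front gives the converter a fully known input shape.
base_model = InceptionV3(weights='imagenet', include_top=False,
                         input_shape=(299, 299, 3))
print(base_model.input_shape)   # (None, 299, 299, 3)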
Upvotes: 2