Hitesh

Reputation: 1325

Error in creating h5 file (HDF5 file)

With the code below I have saved the model's weights in mnist_weights1234.h5, and now I want to create a file like mnist_weights1234.h5 myself, with the same layer configuration.

from __future__ import print_function  # __future__ imports must come before any other statement
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import numpy as np
from sklearn.model_selection import train_test_split

batch_size = 128
num_classes = 3
epochs = 1

# input image dimensions
img_rows, img_cols = 28, 28

# Keep only digits 0, 1 and 2 to reduce the data set
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x1_train = x_train[y_train == 0]; y1_train = y_train[y_train == 0]
x1_test = x_test[y_test == 0]; y1_test = y_test[y_test == 0]
x2_train = x_train[y_train == 1]; y2_train = y_train[y_train == 1]
x2_test = x_test[y_test == 1]; y2_test = y_test[y_test == 1]
x3_train = x_train[y_train == 2]; y3_train = y_train[y_train == 2]
x3_test = x_test[y_test == 2]; y3_test = y_test[y_test == 2]

X=np.concatenate((x1_train,x2_train,x3_train,x1_test,x2_test,x3_test),axis=0)
Y=np.concatenate((y1_train,y2_train,y3_train,y1_test,y2_test,y3_test),axis=0)

# the data, shuffled and split between train and test sets
x_train, x_test, y_train, y_test = train_test_split(X,Y)

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(1, kernel_size=(2, 2),
                 activation='relu',
                 input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(16,16)))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

model.save_weights('mnist_weights1234.h5')

Now I want to create a file like mnist_weights1234.h5 myself. So I use the code below and get an error.

import h5py

# weights: [conv bias, conv kernel, dense bias, dense kernel] as numpy arrays
hf = h5py.File('mnist_weights12356.h5', 'w')
hf.create_dataset('conv2d_2/conv2d_2/bias', data=weights[0])
hf.create_dataset('conv2d_2/conv2d_2/kernel', data=weights[1])
hf.create_dataset('dense_2/dense_2/bias', data=weights[2])
hf.create_dataset('dense_2/dense_2/kernel', data=weights[3])
hf.create_dataset('flatten_2', data=None)
hf.create_dataset('max_pooling_2d_2', data=None)
hf.close()

But I get the following error: TypeError: One of data, shape or dtype must be specified. How can I solve this problem?

Upvotes: 1

Views: 1315

Answers (2)

Daniel Möller

Reputation: 86620

If you want to use weights that are in numpy arrays, simply set the weights in the layers:

model.get_layer('conv2d_2').set_weights([weights[1],weights[0]])
model.get_layer('dense_2').set_weights([weights[3],weights[2]])
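
Note that set_weights expects the arrays in the layer's own order, which for Conv2D and Dense is kernel first, then bias (the reverse of the alphabetical bias/kernel order in the HDF5 file). A quick sanity check, assuming the model from the question (the name 'conv2d_2' depends on how many layers were created in the session):

conv = model.get_layer('conv2d_2')
# Expect kernel (2, 2, 1, 1) first, then bias (1,)
print([w.shape for w in conv.get_weights()])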

If your arrays are stored in files:

import numpy

array = numpy.load('arrayfile.npy')

You can save the entire model weights as numpy arrays:

numpy.save('weights.npy', model.get_weights())
model.set_weights(numpy.load('weights.npy'))
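
One caveat: model.get_weights() returns a list of arrays with different shapes, so numpy.save stores it as a pickled object array, and numpy.load (NumPy 1.16.3 and later) refuses pickled data by default. A sketch that works with newer NumPy versions:

import numpy as np

np.save('weights.npy', np.array(model.get_weights(), dtype=object))
model.set_weights(np.load('weights.npy', allow_pickle=True))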

Upvotes: 1

Dr. Snoopy

Reputation: 56377

The error message has your solution. In these lines:

hf.create_dataset('flatten_2', data=None)
hf.create_dataset('max_pooling_2d_2', data=None)

You are passing data=None. To create a dataset, the HDF5 library needs a minimum amount of information: as the error says, you must specify either a dtype (the data type of the dataset's elements), a non-None data parameter (from which shape and dtype can be inferred), or an explicit shape parameter. You are giving none of these, so the error is correct.

Just give enough information in the create_dataset call for the dataset to be created.
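
For example, here is a sketch that satisfies h5py, assuming weights is the list of numpy arrays from the question. Layers like Flatten and MaxPooling2D have no weights, so an empty group (which needs no data, shape or dtype) is a better fit than a dataset:

import h5py

hf = h5py.File('mnist_weights12356.h5', 'w')
hf.create_dataset('conv2d_2/conv2d_2/bias', data=weights[0])
hf.create_dataset('conv2d_2/conv2d_2/kernel', data=weights[1])
hf.create_dataset('dense_2/dense_2/bias', data=weights[2])
hf.create_dataset('dense_2/dense_2/kernel', data=weights[3])
# Weightless layers become empty groups rather than datasets
hf.create_group('flatten_2')
hf.create_group('max_pooling2d_2')
hf.close()

Note that a file written this way is valid HDF5 but still won't load with model.load_weights, which also expects attributes such as layer_names and weight_names that Keras writes itself; if the goal is a loadable weight file, model.save_weights is the reliable route.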

Upvotes: 0
