Reputation: 6290
I'm using a scikit-learn custom pipeline (sklearn.pipeline.Pipeline
) in conjunction with RandomizedSearchCV
for hyper-parameter optimization. This works great.
Now I would like to insert a Keras model as the first step into the pipeline. The parameters of the model should be optimized. The computed (fitted) Keras model should then be used later on in the pipeline by other steps, so I think I have to store the model as a global variable so that the other pipeline steps can use it. Is this right?
I know that Keras offers some wrappers for the scikit-learn API, but the problem is that these wrappers already do classification/regression, whereas I only want to compute the Keras model and nothing else.
How can this be done?
For example, I have a method which returns the model:
def create_model(file_path, argument2, ...):
    ...
    return model
The method needs some fixed parameters like a file_path etc., but X and y are not needed (or can be ignored). The parameters of the model should be optimized (number of layers etc.).
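Conceptually, I imagine something like this sketch of a custom transformer step (KerasTransformer and its parameters are made up to illustrate my intent; the real model building/fitting would go where the comments indicate):
from sklearn.base import BaseEstimator, TransformerMixin

# Hypothetical sketch: a pipeline step that fits a Keras model and exposes
# its output as features for the following pipeline steps
class KerasTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, file_path=None, n_layers=2):
        self.file_path = file_path
        self.n_layers = n_layers

    def fit(self, X, y=None):
        # build and fit the Keras model here, e.g.
        # self.model_ = create_model(self.file_path, self.n_layers)
        return self

    def transform(self, X):
        # return the model's output for the next pipeline step, e.g.
        # return self.model_.predict(X)
        return X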
Upvotes: 44
Views: 30951
Reputation: 4175
Nowadays the go-to wrapper for using Keras models in scikit-learn seems to be SciKeras, since keras.wrappers.scikit_learn, Keras/TensorFlow's own wrapper used in Felipe's answer, has been removed.
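As a minimal sketch of what that looks like (the build function, its hidden_units parameter, and the scaler step are illustrative, not part of SciKeras itself):
from scikeras.wrappers import KerasClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
import keras

# Illustrative build function; hidden_units is a made-up tunable parameter
def create_model(hidden_units=32):
    model = keras.Sequential([
        keras.layers.Dense(hidden_units, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model

# SciKeras routes arguments of the build function via the 'model__' prefix,
# so they stay tunable through the usual scikit-learn parameter naming
clf = KerasClassifier(model=create_model, model__hidden_units=32,
                      epochs=10, verbose=0)
pipeline = Pipeline([('scale', StandardScaler()), ('clf', clf)])
In a grid or randomized search over this pipeline, the same parameter would then be addressed as clf__model__hidden_units.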
Upvotes: 0
Reputation: 13
This is a modification of the RBM example in the sklearn documentation, but with the neural network implemented in Keras with a TensorFlow backend.
# -*- coding: utf-8 -*-
"""
Created on Mon Nov 27 17:11:21 2017
@author: ZED
"""
from __future__ import print_function

print(__doc__)

# Authors: Yann N. Dauphin, Vlad Niculae, Gabriel Synnaeve
# License: BSD

import numpy as np
import matplotlib.pyplot as plt

from scipy.ndimage import convolve

from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils

from sklearn import datasets, metrics
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

#%%
# Setting up

def nudge_dataset(X, Y):
    """
    This produces a dataset 5 times bigger than the original one,
    by moving the 8x8 images in X around by 1px to left, right, down, up
    """
    direction_vectors = [
        [[0, 1, 0],
         [0, 0, 0],
         [0, 0, 0]],

        [[0, 0, 0],
         [1, 0, 0],
         [0, 0, 0]],

        [[0, 0, 0],
         [0, 0, 1],
         [0, 0, 0]],

        [[0, 0, 0],
         [0, 0, 0],
         [0, 1, 0]]]

    shift = lambda x, w: convolve(x.reshape((8, 8)), mode='constant',
                                  weights=w).ravel()
    X = np.concatenate([X] +
                       [np.apply_along_axis(shift, 1, X, vector)
                        for vector in direction_vectors])
    Y = np.concatenate([Y for _ in range(5)], axis=0)
    return X, Y

# Load Data
digits = datasets.load_digits()
X = np.asarray(digits.data, 'float32')
X, Y = nudge_dataset(X, digits.target)
X = (X - np.min(X, 0)) / (np.max(X, 0) + 0.0001)  # 0-1 scaling

X_train, X_test, Y_train, Y_test = train_test_split(X, Y,
                                                    test_size=0.2,
                                                    random_state=0)

#%%
def create_model():
    model = Sequential()
    model.add(Dense(100, input_dim=64))
    model.add(Activation('tanh'))

    # Uncomment to add another hidden layer:
    # model.add(Dense(500))
    # model.add(Activation('tanh'))

    model.add(Dense(10))
    model.add(Activation('softmax'))
    # Compile the model; use categorical cross-entropy, since the targets
    # are one-hot encoded over 10 classes
    model.compile(loss='categorical_crossentropy', optimizer='adadelta',
                  metrics=['accuracy'])
    return model

rbm = BernoulliRBM(random_state=0, verbose=True)

# This is the model you want; it follows the sklearn estimator API
clf = KerasClassifier(build_fn=create_model, verbose=0)
classifier = Pipeline(steps=[('rbm', rbm), ('VNN', clf)])

#%%
# Training

# Hyper-parameters. These were set by cross-validation,
# using a GridSearchCV. Here we are not performing cross-validation to
# save time.
rbm.learning_rate = 0.06
rbm.n_iter = 20

# More components tend to give better prediction performance, but longer
# fitting time
rbm.n_components = 64

# Convert the targets to a one-hot matrix
yTrain = np_utils.to_categorical(Y_train, 10)

# Train the RBM-neural-network pipeline
classifier.fit(X_train, yTrain)

#%%
# Evaluation

print()
print("NN using RBM features:\n%s\n" % (
    metrics.classification_report(
        Y_test,
        classifier.predict(X_test))))

#%%
# Plotting

plt.figure(figsize=(4.2, 4))
for i, comp in enumerate(rbm.components_):
    plt.subplot(10, 10, i + 1)
    plt.imshow(comp.reshape((8, 8)), cmap=plt.cm.gray_r,
               interpolation='nearest')
    plt.xticks(())
    plt.yticks(())
plt.suptitle('64 components extracted by RBM', fontsize=16)
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)
plt.show()
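As the comments above note, the hyper-parameters were originally set with a GridSearchCV. A rough sketch of how such a search could be run over this pipeline, assuming the same (now deprecated) keras.wrappers.scikit_learn wrapper; the grid values are illustrative only:
from sklearn.model_selection import GridSearchCV

# Illustrative grid; pipeline steps expose their parameters as '<step>__<param>'
param_grid = {
    'rbm__learning_rate': [0.01, 0.06],
    'rbm__n_components': [64, 100],
    'VNN__epochs': [10, 20],
}
search = GridSearchCV(classifier, param_grid, cv=3)
search.fit(X_train, yTrain)
print(search.best_params_)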
Upvotes: -2
Reputation: 11897
You need to wrap your Keras model as a scikit-learn model first and then proceed as usual.
Here's a quick example:
Here is a full blog post with this one and many other examples: Scikit-learn Pipeline Examples
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.pipeline import Pipeline

# create a function that returns a model, taking as parameters things you
# want to tune using cross-validation and model selection
def create_model(optimizer='adagrad',
                 kernel_initializer='glorot_uniform',
                 dropout=0.2):
    model = Sequential()
    model.add(Dense(64, activation='relu',
                    kernel_initializer=kernel_initializer))
    model.add(Dropout(dropout))
    model.add(Dense(1, activation='sigmoid',
                    kernel_initializer=kernel_initializer))
    model.compile(loss='binary_crossentropy', optimizer=optimizer,
                  metrics=['accuracy'])
    return model

# wrap the model using the function you created; this model is a binary
# classifier, so use KerasClassifier (not KerasRegressor)
clf = KerasClassifier(build_fn=create_model, verbose=0)

# just create the pipeline
pipeline = Pipeline([
    ('clf', clf)
])

pipeline.fit(X_train, y_train)
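And since the question mentions RandomizedSearchCV: the build_fn arguments and fit arguments of the wrapped model can be searched through the pipeline with the usual step__parameter naming. A minimal sketch (the search space is illustrative):
from sklearn.model_selection import RandomizedSearchCV

# Illustrative search space; 'clf' is the pipeline step name from above
param_distributions = {
    'clf__optimizer': ['adagrad', 'adam'],
    'clf__dropout': [0.1, 0.2, 0.5],
    'clf__epochs': [10, 20],
}
search = RandomizedSearchCV(pipeline, param_distributions,
                            n_iter=5, cv=3, random_state=0)
search.fit(X_train, y_train)
print(search.best_params_)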
Upvotes: 40