Mathews24

Reputation: 751

Erroneous output from scikit-learn Gaussian process regression

I have a set of data (X, y), where X is my 2-dimensional input and y is my 1-dimensional output, and each pair (X, y) has a corresponding non-uniform noise term. Here is a working example where I apply Gaussian process regression:

import numpy as np 
from sklearn.gaussian_process import GaussianProcessRegressor 
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel as C  

lenX = 20
X1min = 0.
X1max = 1.
X2min = 0.
X2max = 2.
X1 = np.linspace(X1min,X1max,lenX)
X2 = np.linspace(X2min,X2max,lenX)
time_spacing = X2[1] - X2[0]

X = []
for i in range(lenX):
    for j in range(lenX):
        X.append([X1[i], X2[j]])

X = np.array(X)

def fun_y(X):
    y = 5.*((np.sin(X[:,0]))**2.)*(np.e**(-(X[:,1]**2.)))
    y[y < 0.001] = 0.0
    return y

y = fun_y(X)
noise = 0.1*y  # non-uniform noise term, proportional to y

len_x1 = 10
len_x2 = 100
x1_min = X1min
x2_min = X2min
x1_max = X1max
x2_max = X2max
x1 = np.linspace(x1_min, x1_max, len_x1)
x2 = np.linspace(x2_min, x2_max, len_x2) 

inputs_x = []
for i in range(len(x1)):
    for j in range(len(x2)):
        inputs_x.append([x1[i], x2[j]])
inputs_x_array = np.array(inputs_x)   # simply a set of inputs I want to predict at

kernel = C(1.0, (1e-10, 1000)) * RBF(length_scale=[1., 1.], length_scale_bounds=[(1e-5, 1e5), (1e-7, 1e7)]) \
        + WhiteKernel(noise_level=1, noise_level_bounds=(1e-10, 1e10))

gp = GaussianProcessRegressor(kernel=kernel, alpha=noise ** 2, n_restarts_optimizer=100) 

# Fit to data using Maximum Likelihood Estimation of the parameters
gp.fit(X, y.reshape(-1,1)) #removing reshape results in a different error 

y_pred_index, y_pred_sigma_index = gp.predict(inputs_x_array, return_std=True)
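(For reference, the two nested loops above that build the input grids are equivalent to a vectorized construction with np.meshgrid; a sketch for the training grid:)

```python
import numpy as np

lenX = 20
X1 = np.linspace(0., 1., lenX)
X2 = np.linspace(0., 2., lenX)

# indexing="ij" makes X1 vary slowest and X2 fastest, matching the
# loop order above (outer loop over X1, inner loop over X2).
g1, g2 = np.meshgrid(X1, X2, indexing="ij")
X = np.column_stack([g1.ravel(), g2.ravel()])
```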

Despite trying numerous kernel variants, I keep getting convergence warnings when the optimizer searches for the hyperparameters that best fit the data:

/.local/lib/python3.6/site-packages/sklearn/gaussian_process/gpr.py:481: ConvergenceWarning: fmin_l_bfgs_b terminated abnormally with the  state: {'grad': array([ 3.89194489e-03,  9.32690036e-03, -0.00000000e+00,  6.42836597e+01]), 'task': b'ABNORMAL_TERMINATION_IN_LNSRCH', 'funcalls': 128, 'nit': 26, 'warnflag': 2}
  ConvergenceWarning)

I've tried adding and multiplying RBF kernels, varying the bounds on the hyperparameters, and including a WhiteKernel term, but none of these approaches works. Any thoughts on how to avoid this warning and select a good kernel for fitting the data?

Upvotes: 2

Views: 2389

Answers (1)

j014

Reputation: 11

I'm not sure this is a good kernel for your data, but just by tightening the hyperparameter bounds I did manage to get rid of the ConvergenceWarning:

kernel = C(1.0, (1e-3, 1e3)) * RBF(length_scale = [.1, .1], length_scale_bounds=[(1e-2, 1e2),(1e-2, 1e2)]) \
        + WhiteKernel(noise_level=1e-5, noise_level_bounds=(1e-10, 1e-4))

Asking for gp.kernel_.get_params(deep=True) yields

{'k1': 1.51**2 * RBF(length_scale=[0.843, 1.15]),
 'k1__k1': 1.51**2,
 'k1__k1__constant_value': 2.275727769273166,
 'k1__k1__constant_value_bounds': (0.001, 1000.0),
 'k1__k2': RBF(length_scale=[0.843, 1.15]),
 'k1__k2__length_scale': array([0.84331346, 1.15091614]),
 'k1__k2__length_scale_bounds': [(0.01, 100.0), (0.01, 100.0)],
 'k2': WhiteKernel(noise_level=1.4e-08),
 'k2__noise_level': 1.403204609548082e-08,
 'k2__noise_level_bounds': (1e-10, 0.0001)}

See also this remark.
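For completeness, here is a self-contained sketch that combines the data from the question with this kernel. Note two changes of my own: I added a small floor of 1e-4 to the noise, since noise = 0.1*y is exactly zero wherever y was clipped to zero, and alpha = noise**2 = 0 at those points can leave the covariance matrix ill-conditioned; and I use far fewer optimizer restarts than the question's 100, just to keep it quick.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel as C

# Rebuild the training data from the question.
lenX = 20
X1 = np.linspace(0., 1., lenX)
X2 = np.linspace(0., 2., lenX)
X = np.array([[a, b] for a in X1 for b in X2])

def fun_y(X):
    y = 5. * np.sin(X[:, 0]) ** 2 * np.exp(-X[:, 1] ** 2)
    y[y < 0.001] = 0.0
    return y

y = fun_y(X)
# Small floor (my addition) so alpha = noise**2 stays strictly positive
# even where y was clipped to zero.
noise = 0.1 * y + 1e-4

# The kernel with the tightened bounds from above.
kernel = C(1.0, (1e-3, 1e3)) \
    * RBF(length_scale=[.1, .1], length_scale_bounds=[(1e-2, 1e2), (1e-2, 1e2)]) \
    + WhiteKernel(noise_level=1e-5, noise_level_bounds=(1e-10, 1e-4))

# Fewer restarts than the question's 100, purely for speed here.
gp = GaussianProcessRegressor(kernel=kernel, alpha=noise ** 2,
                              n_restarts_optimizer=2)
gp.fit(X, y)

# Predict on the question's finer grid.
x1 = np.linspace(0., 1., 10)
x2 = np.linspace(0., 2., 100)
inputs_x_array = np.array([[a, b] for a in x1 for b in x2])
y_pred, y_sigma = gp.predict(inputs_x_array, return_std=True)
```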

Upvotes: 1
