Ryan Hendricks

Reputation: 111

Does the leave-one-out algorithm form a linear prediction?

I am running leave-one-out cross-validation using code that I found here. I'm copying the code below:

from sklearn.model_selection import train_test_split
from sklearn.model_selection import LeaveOneOut
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from numpy import mean
from numpy import absolute
from numpy import sqrt
import pandas as pd

df = pd.DataFrame({'y': [6, 8, 12, 14, 14, 15, 17, 22, 24, 23],
                   'x1': [2, 5, 4, 3, 4, 6, 7, 5, 8, 9],
                   'x2': [14, 12, 12, 13, 7, 8, 7, 4, 6, 5]})

#define predictor and response variables
X = df[['x1', 'x2']]
y = df['y']

#define cross-validation method to use
cv = LeaveOneOut()

#build multiple linear regression model
model = LinearRegression()

#use LOOCV to evaluate model
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error',
                         cv=cv, n_jobs=-1)

#view mean absolute error
mean(absolute(scores))
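As far as I can tell, this is what `cross_val_score` is doing internally with `cv=LeaveOneOut()`: for each row, it fits a fresh model on the other nine rows and scores the held-out row. Here is my own sketch of that loop (just my reading of the scikit-learn docs, not the library internals), which I believe reproduces the number above:

```python
# Sketch of what I believe cross_val_score does here: for each of the
# 10 rows, fit a fresh LinearRegression on the remaining 9 rows and
# measure the absolute error on the single held-out row.
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LinearRegression
import numpy as np
import pandas as pd

df = pd.DataFrame({'y': [6, 8, 12, 14, 14, 15, 17, 22, 24, 23],
                   'x1': [2, 5, 4, 3, 4, 6, 7, 5, 8, 9],
                   'x2': [14, 12, 12, 13, 7, 8, 7, 4, 6, 5]})
X = df[['x1', 'x2']].to_numpy()
y = df['y'].to_numpy()

abs_errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    # refit on the 9 training rows, predict the 1 held-out row
    fold_model = LinearRegression().fit(X[train_idx], y[train_idx])
    pred = fold_model.predict(X[test_idx])
    abs_errors.append(abs(y[test_idx][0] - pred[0]))

print(np.mean(abs_errors))  # should match mean(absolute(scores)) above
```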

I have two questions regarding this method:

  1. How does the model form a prediction from the data (apart from the one data point that it excludes)? Is it linear regression?
  2. From what I understand, the error is calculated as the sum of (actual value - predicted value)^2. Is there any way I could modify the code so that the error becomes the sum of [(actual value - predicted value) / actual value]^2?
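Regarding question 2, my own attempt was to wrap the relative-error metric with `make_scorer` and pass it as the `scoring` argument (a sketch; I'm not sure this is the intended approach). Since each LOOCV fold holds exactly one sample, summing the per-fold scores should give the total sum of squared relative errors:

```python
# Sketch: swap the built-in 'neg_mean_absolute_error' for a custom
# relative-error metric via make_scorer. With LeaveOneOut each fold
# contains one sample, so the per-fold scores sum to
# sum of [(actual - predicted) / actual]^2 over all rows.
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.metrics import make_scorer
import numpy as np
import pandas as pd

df = pd.DataFrame({'y': [6, 8, 12, 14, 14, 15, 17, 22, 24, 23],
                   'x1': [2, 5, 4, 3, 4, 6, 7, 5, 8, 9],
                   'x2': [14, 12, 12, 13, 7, 8, 7, 4, 6, 5]})

def squared_relative_error(y_true, y_pred):
    # [(actual - predicted) / actual]^2, summed over the fold's samples
    return np.sum(((y_true - y_pred) / y_true) ** 2)

# greater_is_better=False makes cross_val_score return negated values,
# so take absolute values before summing
rel_scorer = make_scorer(squared_relative_error, greater_is_better=False)

rel_scores = cross_val_score(LinearRegression(), df[['x1', 'x2']], df['y'],
                             scoring=rel_scorer, cv=LeaveOneOut(), n_jobs=-1)

print(np.sum(np.absolute(rel_scores)))  # total sum of squared relative errors
```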

Upvotes: 1

Views: 99

Answers (0)
