javierdvalle

Reputation: 2603

Wrong intercept in Spark linear regression

I am getting started with Spark linear regression. I am trying to fit a line to a linear dataset, but it seems that the intercept is not adjusting correctly, or perhaps I am missing something.

With intercept=False:

linear_model = LinearRegressionWithSGD.train(labeledData, iterations=100, step=0.0001, intercept=False)

Plot with intercept=False

This seems normal. But when I use intercept=True:

linear_model = LinearRegressionWithSGD.train(labeledData, iterations=100, step=0.0001, intercept=True)

Plot with intercept=True

The model I get in the latter case is exactly:

(weights=[0.0353471289751], intercept=1.0005127185289888)

I have tried different datasets, step sizes and numbers of iterations, but the model always converges to an intercept of about 1.

EDIT - This is the code I am using:

from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.regression import LinearRegressionWithSGD
import numpy as np
import matplotlib.pyplot as plt
from pyspark import SparkContext
sc = SparkContext("local", "regression")

# Generate data
SIZE = 300
SLOPE = 0.1
BASE = -30
NOISE = 10

x = np.arange(SIZE)
delta = np.random.uniform(-NOISE,NOISE, size=(SIZE,))
y = BASE + SLOPE*x + delta
data = zip(range(len(y)), y) # zip with index
dataRDD = sc.parallelize(data)

# Normalize data
# mean = np.mean(y)
# std = np.std(y)
# dataRDD = dataRDD.map(lambda r: (r[0], (float(r[1])-mean)/std))

labeledData = dataRDD.map(lambda r: LabeledPoint(float(r[1]), [float(r[0])]))

# Create linear model
linear_model = LinearRegressionWithSGD.train(labeledData, iterations=1000, step=0.0002, intercept=True, convergenceTol=0.000001)
print linear_model

true_vs_predicted = labeledData.map(lambda p: (p.label, linear_model.predict(p.features))).collect()

# PLOT
fig = plt.figure()
ax = fig.add_subplot(111)
ax.grid()

y_real = [x[0] for x in true_vs_predicted] 
y_pred = [x[1] for x in true_vs_predicted] 

plt.plot(range(len(y_real)), y_real, 'o', markersize=5, c='b')
plt.plot(range(len(y_pred)), y_pred, 'o', markersize=5, c='r')

plt.show()
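
For reference, the slope and intercept the model should approach can be checked with an ordinary least-squares fit on the same x and y arrays, e.g. with numpy.polyfit (just a sanity check, outside Spark):

# Closed-form degree-1 least-squares fit; should give roughly SLOPE (0.1) and BASE (-30)
fit_slope, fit_base = np.polyfit(x, y, 1)
print('polyfit: slope=%s intercept=%s' % (fit_slope, fit_base))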

Upvotes: 2

Views: 1292

Answers (1)

Mahesh

Reputation: 11

This is because both the number of iterations and the step size are too small. As a result, training stops before reaching the optimum.
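
For example, here is a minimal sketch (assuming the labeledData RDD built in the question) that first standardizes the feature with MLlib's StandardScaler, which makes a much larger step size stable, and then retrains with intercept=True:

from pyspark.mllib.feature import StandardScaler
from pyspark.mllib.regression import LabeledPoint, LinearRegressionWithSGD

# Standardize the feature: x runs from 0 to 299, so with raw values the
# gradient scales for the weight and the intercept are very different
features = labeledData.map(lambda p: p.features)
scaler = StandardScaler(withMean=True, withStd=True).fit(features)
scaledData = labeledData.map(lambda p: p.label) \
    .zip(scaler.transform(features)) \
    .map(lambda t: LabeledPoint(t[0], t[1]))

# With a standardized feature a step around 1.0 converges quickly; the
# exact step and iteration count may still need tuning
model = LinearRegressionWithSGD.train(scaledData, iterations=100,
                                      step=1.0, intercept=True)
print(model)

Note that the intercept reported by this model refers to the standardized feature, so it should be close to the mean of y (about -15 for the generated data), and predictions have to be made on scaledData.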

Upvotes: 1
