TyForHelpDude

Reputation: 5002

sklearn `MemoryError` in python

I'm trying to implement a simple machine learning app with Python 2.7 and scipy 0.18.1. I've shared the sample code below and a download link for the training data, so you can copy, paste, and run it. My problem is that when execution reaches the following line, I get a `MemoryError`:

predicted = model.predict_proba(test_data[features])

I searched the internet but couldn't fix it. I'd appreciate any help.

You can find the sample data via this link: https://www.kaggle.com/c/sf-crime/data

import pandas as pd
from sklearn.cross_validation import train_test_split
from sklearn import preprocessing
from sklearn.metrics import log_loss
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import LogisticRegression
import numpy as np

# Load Data with pandas, and parse the first column into datetime
train = pd.read_csv('train.csv', parse_dates=['Dates'])
test = pd.read_csv('test.csv', parse_dates=['Dates'])

# Convert crime labels to numbers
le_crime = preprocessing.LabelEncoder()
crime = le_crime.fit_transform(train.Category)

# Get binarized weekdays, districts, and hours.
days = pd.get_dummies(train.DayOfWeek)
district = pd.get_dummies(train.PdDistrict)
hour = train.Dates.dt.hour
hour = pd.get_dummies(hour)

# Build new array
train_data = pd.concat([hour, days, district], axis=1)
train_data['crime'] = crime

# Repeat for test data
days = pd.get_dummies(test.DayOfWeek)
district = pd.get_dummies(test.PdDistrict)

hour = test.Dates.dt.hour
hour = pd.get_dummies(hour)

test_data = pd.concat([hour, days, district], axis=1)

features = ['Friday', 'Monday', 'Saturday', 'Sunday', 'Thursday', 'Tuesday',
            'Wednesday', 'BAYVIEW', 'CENTRAL', 'INGLESIDE', 'MISSION',
            'NORTHERN', 'PARK', 'RICHMOND', 'SOUTHERN', 'TARAVAL', 'TENDERLOIN']

training, validation = train_test_split(train_data, train_size=.60)
model = BernoulliNB()
model.fit(training[features], training['crime'])
predicted = np.array(model.predict_proba(validation[features]))
log_loss(validation['crime'], predicted)

# Logistic Regression for comparison
model = LogisticRegression(C=.01)
model.fit(training[features], training['crime'])
predicted = np.array(model.predict_proba(validation[features]))
log_loss(validation['crime'], predicted)

model = BernoulliNB()
model.fit(train_data[features], train_data['crime'])
predicted = model.predict_proba(test_data[features]) #MemoryError!!!!

# Write results
result = pd.DataFrame(predicted, columns=le_crime.classes_)
result.to_csv('testResult.csv', index=True, index_label='Id')

EDITED: error stack trace screenshot (image not reproduced here)

Upvotes: 1

Views: 1692

Answers (2)

jude

Reputation: 360

There is insufficient RAM available for the prediction you are trying to run.
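
As a rough sanity check (the row and class counts below are assumptions based on the SF Crime competition: roughly 884k test rows and 39 crime categories), the dense `float64` probability matrix alone is sizable, before counting any intermediate arrays the estimator allocates:

```python
# Back-of-the-envelope size of the predict_proba output array.
# n_rows and n_classes are assumptions, not values read from the data.
n_rows = 884262      # approximate size of the Kaggle test set
n_classes = 39       # approximate number of crime categories
bytes_per_float = 8  # numpy float64

output_mb = n_rows * n_classes * bytes_per_float / 1024.0 / 1024.0
print(round(output_mb))  # roughly 263 MB for the output alone
```

On a 32-bit Python 2.7 process, that output plus the temporaries created during prediction can be enough to exhaust the addressable memory.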

Upvotes: 0

Alex

Reputation: 12913

What if you tried predicting in chunks? For example, you could try:

N_split = 10

# Split the test set into N_split pieces and predict each piece separately,
# so only one chunk's intermediate arrays are in memory at a time.
split_data = np.array_split(test_data[features], N_split)
split_predicted = []
for data in split_data:
    split_predicted.append(model.predict_proba(data))

# Stitch the per-chunk predictions back together.
predicted = np.concatenate(split_predicted)
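
A self-contained sketch of the same chunking idea on synthetic data (the shapes and the toy dataset here are illustrative, not taken from the question):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Toy binary features and labels standing in for the real crime data.
rng = np.random.RandomState(0)
X_train = rng.randint(0, 2, size=(1000, 17))
y_train = rng.randint(0, 5, size=1000)
X_test = rng.randint(0, 2, size=(500, 17))

model = BernoulliNB()
model.fit(X_train, y_train)

# Predict in 10 chunks instead of one pass over the whole test set.
chunks = np.array_split(X_test, 10)
predicted = np.concatenate([model.predict_proba(c) for c in chunks])

# The chunked result matches a single full-batch call.
assert np.allclose(predicted, model.predict_proba(X_test))
print(predicted.shape)  # (500, 5)
```

Note that `np.concatenate` still materializes the full result at the end; chunking mainly avoids the large per-call temporaries inside `predict_proba`.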

Upvotes: 2
