Reputation: 146
Using the Titanic train and test datasets, I am trying to predict whether a passenger survived based on their sex. To achieve this, I want to build a classifier and then test and evaluate it.
But I am getting this error:
ValueError: Found input variables with inconsistent numbers of samples: [418, 891]
from this line:
scores = cross_val_score(Model, cross_val_X, cross_val_Y, cv=5, scoring='accuracy')
I understand that cross_val_X and cross_val_Y have a different number of rows, hence the error. Am I right or wrong? What should I do to fix the error?
I also want to test my model on the test dataset, and I think I need to change the data that I am passing to the predict method. Is that right?
import pandas as pd #data processing, CSV File(I/O)
import numpy as np #linear algebra
from google.colab import files
import io
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier # K-Nearest Neighbours
from sklearn.metrics import classification_report #Build a text report showing the classification metrics.
from sklearn.metrics import accuracy_score #Accuracy classification score.
from sklearn.metrics import confusion_matrix #Compute confusion matrix to evaluate the accuracy of a classification.
#Visualization Libraries
import matplotlib.pyplot as plt
import seaborn as sns
#for cross-validation, import cross_val_score
from sklearn.model_selection import cross_val_score
#upload the file to train the model
uploaded = files.upload() #select the path to train_updated.xlsx => upload from local drive
df_train = pd.read_excel('train_updated.xlsx')
df_train2 = df_train.copy()
#upload the files to test the model
uploaded = files.upload() #select the path to test_updated.csv => upload from local drive
df_test = pd.read_csv('test_updated.csv', delimiter=';') #reads our data and saves it in a data structure called DataFrame, splits into columns
print('\n Head of the file: train_updated.xlsx')
print(df_train.head()) #print the head(=the first 5 rows) of the csv, to see features and target variable
print('\n Data info of the file: train_updated.xlsx') #to see if there is any NaN value and length of this data
print(df_train.info() )
print('\n Data info of the file: test') #to see if there is any NaN value and length of this data
print(df_test.info() )
#1st pivot
print('How many women and men survived?')
sex_pivot = df_train2.pivot_table(index="Sex",values="Survived")
sex_pivot.plot.bar()
plt.show()
#replace all nan with 0
df_train.replace(np.nan, 0, inplace=True)
df_test.replace(np.nan, 0, inplace=True)
#convert to int
df_test['Embarked'].replace(( {'S': 0, 'C': 1, 'Q': 2}), inplace=True)
#df_test = df_test.drop(columns=['PassengerId', 'Name', 'Sex', 'Cabin', 'Ticket', 'Fare'])
#print('\n AFTER DROPPING COLUMNS \n FILE: test')
#print(df_test.info)
#Splitting data
#Our input will be every column except ‘Survived’ because ‘Survived’ is what we will be attempting to predict. Therefore, ‘Survived’ will be our target.
#separate target values(Y)
Y = df_train['Survived'].values.reshape(-1, 1)
print('\n Y: target value') #view target values
print(Y.shape)
#convert to int
df_train['Embarked'].replace(( {'S': 0, 'C': 1, 'Q': 2}), inplace=True)
#separate input values(X)
df_train = df_train.drop(columns=['Survived', 'PassengerId', 'Name', 'Sex', 'Cabin', 'Ticket', 'Fare'])
print('\n AFTER DROPPING COLUMNS \n file: train.csv')
print(df_train.info)
X = df_train['Sex_Boolean'].values.reshape(-1, 1) #use only the Sex_Boolean column as the input feature
print('\n X: input data and shape ')
print(X)
print(X.shape)
#train_test_split: splits data arrays into two subsets: for training data and for testing data
#1st parameter= input data, 2nd parameter= data target
#train_test_split will split our data set and will return 4 values, the train attributes (X_train), test attributes (X_test), train labels (y_train) and the test labels (y_test).
X_train, X_test, y_train, y_test = train_test_split(X, Y , train_size=0.7, test_size=0.3) # 70% training and 30% test .
print('After: Train data split')
print('X_train: ', X_train.shape)
print('X_test: ', X_test.shape)
print('y_train: ', y_train.shape)
print('y_test: ', y_test.shape )
#OPTIMAL K --> PLOT
# try K=1 through K=25 and record testing accuracy
k_range = range(1, 26)
# Store the testing accuracy for each k in a list
scores = []
# Loop through k = 1 to 25 and append the accuracy score for each value
for k in k_range:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    y_pred = knn.predict(X_test)
    scores.append(accuracy_score(y_test, y_pred))
# allow plots to appear within the notebook
%matplotlib inline
# plot the relationship between K and testing accuracy
plt.plot(k_range, scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Testing Accuracy')
# K-Nearest Neighbours Algorithm
Model = KNeighborsClassifier(n_neighbors=3) #initialization
Model.fit(X_train, y_train) #train the model
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
# Accuracy score
print('Accuracy is ',accuracy_score(y_pred,y_test)) # round(knn.score(X_train, Y_train) * 100, 2)
#K-fold cross-validation
cross_val_knn = KNeighborsClassifier(n_neighbors=3)
cross_val_X = None
try:
    cross_val_X = df_test['Sex_Boolean'].values.reshape(-1, 1)  # could also use more features, e.g. 'Pclass','Age','SibSp','Parch','Embarked','Sex_Boolean'
except KeyError:
    print("column Sex_Boolean cannot be found")
print( "cross val x: ", cross_val_X )
cross_val_Y= Y
print( "cross val y: ", cross_val_Y )
print( "SHAPE X AND Y : ", cross_val_X.shape, cross_val_Y.shape )
# X, y will automatically be divided into 5 folds; accuracy is still used as the scoring metric
scores = cross_val_score(Model, cross_val_X, cross_val_Y, cv=5, scoring='accuracy')
results:
Head of the file: train_updated.xlsx
PassengerId Survived Pclass ... Cabin Embarked Sex_Boolean
0 1 0 3 ... NaN S 1
1 2 1 1 ... C85 C 0
2 3 1 3 ... NaN S 0
3 4 1 1 ... C123 S 0
4 5 0 3 ... NaN S 1
[5 rows x 13 columns]
Data info of the file: train_updated.xlsx
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 891 non-null int64
1 Survived 891 non-null int64
2 Pclass 891 non-null int64
3 Name 891 non-null object
4 Sex 891 non-null object
5 Age 714 non-null float64
6 SibSp 891 non-null int64
7 Parch 891 non-null int64
8 Ticket 891 non-null object
9 Fare 891 non-null int64
10 Cabin 204 non-null object
11 Embarked 889 non-null object
12 Sex_Boolean 891 non-null int64
dtypes: float64(1), int64(7), object(5)
memory usage: 90.6+ KB
None
Data info of the file: test
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 418 non-null int64
1 Pclass 418 non-null int64
2 Name 418 non-null object
3 Sex 418 non-null object
4 Age 332 non-null float64
5 SibSp 418 non-null int64
6 Parch 418 non-null int64
7 Ticket 418 non-null object
8 Fare 417 non-null float64
9 Cabin 91 non-null object
10 Embarked 418 non-null object
11 Sex_Boolean 418 non-null int64
dtypes: float64(2), int64(5), object(5)
memory usage: 39.3+ KB
None
How many women and men survived?
Y: target value
(891, 1)
AFTER DROPPING COLUMNS
file: train.csv
<bound method DataFrame.info of Pclass Age SibSp Parch Embarked Sex_Boolean
0 3 22.0 1 0 0 1
1 1 38.0 1 0 1 0
2 3 26.0 0 0 0 0
3 1 35.0 1 0 0 0
4 3 35.0 0 0 0 1
.. ... ... ... ... ... ...
886 2 27.0 0 0 0 1
887 1 19.0 0 0 0 0
888 3 0.0 1 2 0 0
889 1 26.0 0 0 1 1
890 3 32.0 0 0 2 1
[891 rows x 6 columns]>
X: input data and shape
[[1]
[0]
[0]
[0]
[1]
[1]
[1]
[1]
....
[1]
[1]]
(891, 1)
After: Train data split
X_train: (623, 1)
X_test: (268, 1)
y_train: (623, 1)
y_test: (268, 1)
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:136: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
..
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:136: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
precision recall f1-score support
0 0.00 0.00 0.00 174
1 0.35 1.00 0.52 94
accuracy 0.35 268
macro avg 0.18 0.50 0.26 268
weighted avg 0.12 0.35 0.18 268
Accuracy is 0.35074626865671643
cross val x: [[1]
[0]
[1]
[1]
....
[1]
[0]
[1]
[1]
[1]]
cross val y: [[0]
[1]
[1]
[1]...
[0]
[0]
[1]
[0]
[1]
[0]]
SHAPE X AND Y : (418, 1) (891, 1)
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:136: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
ValueError Traceback (most recent call last)
<ipython-input-24-7748c3e3a4a7> in <module>()
184
185 # X,y will automatically devided by 5 folder, the scoring I will still use the accuracy
--> 186 scores = cross_val_score(Model, cross_val_X, cross_val_Y, cv=5, scoring='accuracy')
187
188 # print all 3 times scores
3 frames
/usr/local/lib/python3.6/dist-packages/sklearn/utils/validation.py in check_consistent_length(*arrays)
210 if len(uniques) > 1:
211 raise ValueError("Found input variables with inconsistent numbers of"
--> 212 " samples: %r" % [int(l) for l in lengths])
213
214
ValueError: Found input variables with inconsistent numbers of samples: [418, 891]
Upvotes: 1
Views: 1061
Reputation: 2358
As an addition to what @yatu said, cross_val_score should take model, X, y as arguments; you do not need to fit the model on different values again (link to cross_val_score). Look at the code snippet they present:
from sklearn import datasets, linear_model
from sklearn.model_selection import cross_val_score
diabetes = datasets.load_diabetes()
X = diabetes.data[:150]
y = diabetes.target[:150]
lasso = linear_model.Lasso()
print(cross_val_score(lasso, X, y, cv=3))
If you want to measure performance on a holdout set (or test set) as you say, you should do the following in your code, and maybe change the scoring parameter of cross_val_score:
for k in k_range:
    knn = KNeighborsClassifier(n_neighbors=k)
    score = cross_val_score(knn, X_train, y_train, cv=3, scoring=None)  # scoring=None falls back to the estimator's default scorer
    scores.append(score.mean())  # average accuracy over the 3 folds
cross_val_score is already predicting on a held-out set itself, so you do not need to do preds = knn.predict(X_test); accuracy_score(preds, y_test).
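To make that concrete, here is a rough sketch (my own illustration, not part of the original code) of what cross_val_score does internally for a classifier, reusing the X_train and y_train arrays from the question; the fold_model and fold_scores names are just illustrative:
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier

y_train_1d = y_train.ravel()            # flatten the (n, 1) column vector to avoid the DataConversionWarning
skf = StratifiedKFold(n_splits=3)       # sklearn uses stratified folds for classifiers when cv is an integer
fold_scores = []
for train_idx, val_idx in skf.split(X_train, y_train_1d):
    fold_model = KNeighborsClassifier(n_neighbors=3)             # a fresh model for each fold
    fold_model.fit(X_train[train_idx], y_train_1d[train_idx])    # fit on the remaining folds
    val_pred = fold_model.predict(X_train[val_idx])              # predict on the held-out fold
    fold_scores.append(accuracy_score(y_train_1d[val_idx], val_pred))
print(np.mean(fold_scores))             # comparable to cross_val_score(knn, X_train, y_train, cv=3).mean()
The splitting and the held-out predictions already happen inside cross_val_score, so a separate predict on X_test is not needed for the cross-validated score.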
Upvotes: 1
Reputation: 88236
The problem is a bit hard to follow, since there is quite a lot of code. Though it seems that Y is:
Y = df_train['Survived'].values.reshape(-1, 1)
And then you assign it to cross_val_Y = Y, whereas cross_val_X comes from df_test:
cross_val_X = df_test['Sex_Boolean'].values.reshape(-1, 1)
So it looks like they will indeed have different shapes, which would explain the issue, since, as stated in the docs, the expected arrays must have the shapes:
X: array-like of shape (n_samples, n_features) The data to fit. Can be for example a list, or an array.
y: array-like of shape (n_samples,) or (n_samples, n_outputs), default=None The target variable to try to predict in the case of supervised learning.
So the number of samples, n_samples, has to be the same for both arrays.
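For completeness, here is a minimal sketch of one way to fix it, assuming the goal is to cross-validate on the 891 training samples (the X and Y arrays already built from df_train) and to use the 418-row df_test only for predictions, since it has no Survived column to score against:
cross_val_X = X             # 891-row Sex_Boolean column taken from df_train
cross_val_Y = Y.ravel()     # flatten to 1-D, which also avoids the DataConversionWarning
scores = cross_val_score(Model, cross_val_X, cross_val_Y, cv=5, scoring='accuracy')
print(scores, scores.mean())

# df_test can only be used for predictions, not for scoring, because it has no labels
test_pred = Model.predict(df_test['Sex_Boolean'].values.reshape(-1, 1))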
Upvotes: 1