Gary

Reputation: 2167

Translating a Linear Regression from Matlab to Python

I tried to translate a piece of code from Matlab to Python and I'm running into some errors:

Matlab:

function [beta] = linear_regression_train(traindata)
y = traindata(:,1); %output
ind2 = find(y == 2);
ind3 = find(y == 3);
y(ind2) = -1;
y(ind3) = 1;
X = traindata(:,2:257); %X matrix,with size of 1389x256
beta = inv(X'*X)*X'*y;

Python:

def linear_regression_train(traindata):
        y = traindata[:,0] # This is the output
        ind2 = (labels==2).nonzero()
        ind3 = (labels==3).nonzero()
        y[ind2] = -1
        y[ind3] = 1
        X = traindata[ : , 1:256]
        X_T = numpy.transpose(X)
        beta = inv(X_T*X)*X_T*y
        return beta

I am receiving an error: operands could not be broadcast together with shapes (257,0,1389) (1389,0,257) on the line where beta is calculated.

Any help is appreciated!

Thanks!

Upvotes: 1

Views: 856

Answers (1)

TheBlackCat

Reputation: 10298

The problem is that you are working with NumPy arrays, not matrices as in MATLAB. MATLAB matrices perform matrix operations by default, so X*Y does a matrix multiplication of X and Y. NumPy arrays, however, default to element-by-element operations, so X*Y multiplies each corresponding element of X and Y. This is the equivalent of MATLAB's .* operation.
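To make the difference concrete, here is a minimal sketch with made-up 2x2 arrays (using the @ operator explained below):

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A * B)  # element-by-element, like MATLAB's .* -> [[5 12], [21 32]]
print(A @ B)  # matrix multiplication, like MATLAB's * -> [[19 22], [43 50]]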

But just as MATLAB's matrices can do element-by-element operations, NumPy's arrays can do matrix multiplication. So what you need to do is use NumPy's matrix multiplication instead of its element-by-element multiplication. For Python 3.5 or higher (which is the version you should be using for this sort of work), that is just the @ operator. So your line becomes:

beta = inv(X_T @ X) @ X_T @ y

Or, better yet, you can use the simpler .T attribute, which is the same as numpy.transpose but much more concise (you can get rid of the numpy.transpose line entirely):

beta = inv(X.T @ X) @ X.T @ y

For Python 3.4 or earlier, you will need to use np.dot, since those versions of Python don't have the @ matrix multiplication operator:

beta = np.dot(np.dot(inv(np.dot(X.T, X)), X.T), y)
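Note that all of these lines assume the usual imports; if you don't already have them, something like:

import numpy as np
from numpy.linalg import inv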

NumPy has a matrix object that uses matrix operations by default, like a MATLAB matrix. Do not use it! It is slow, poorly supported, and almost never what you really want. The Python community has standardized on arrays, so use those.
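For illustration only, here is a quick sketch of why np.matrix is a trap: it silently changes what * means, so the same expression behaves differently depending on the type:

import numpy as np

a = np.array([[1, 2], [3, 4]])
m = np.matrix([[1, 2], [3, 4]])

print(a * a)  # element-by-element: [[1 4], [9 16]]
print(m * m)  # matrix product: [[7 10], [15 22]]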

There may also be an issue with the dimensions of traindata: with your indexing, y and X would only both come out 2D if traindata.ndim were 3, which it almost certainly isn't.

This matters if traindata is 2D and you want y to be a MATLAB-style "vector" (what MATLAB calls "vectors" aren't really vectors). In numpy, indexing with a single value, as in traindata[:, 0], reduces the number of dimensions, while taking a slice, as in traindata[:, :1], doesn't. So to keep y 2D when traindata is 2D, just take a length-1 slice, traindata[:, :1]. This gives exactly the same values but keeps the same number of dimensions as traindata.
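A quick shape check makes the difference visible (using a zeros array with the same shape as your data):

import numpy as np

traindata = np.zeros((1389, 257))

print(traindata[:, 0].shape)   # (1389,) -- 1D, a dimension was dropped
print(traindata[:, :1].shape)  # (1389, 1) -- 2D, like a MATLAB column vector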

Note: your code can be significantly simplified using logical indexing (this also fixes the undefined labels name, which should just be y):

def linear_regression_train(traindata):
    y = traindata[:, 0]  # This is the output
    y[y == 2] = -1
    y[y == 3] = 1
    X = traindata[:, 1:257]
    return inv(X.T @ X) @ X.T @ y
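As a quick sanity check, here is a hypothetical usage sketch with random data in the same shape as yours (the labels and features are made up):

import numpy as np
from numpy.linalg import inv

rng = np.random.default_rng(0)
traindata = np.hstack([
    rng.choice([2.0, 3.0], size=(1389, 1)),  # made-up labels in {2, 3}
    rng.standard_normal((1389, 256)),        # made-up features
])

beta = linear_regression_train(traindata)
print(beta.shape)  # (256,)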

Also, your slice is wrong when defining X. Python slicing excludes the stop value, so to get a slice 256 columns long you need to do 1:257, as I did above (your 1:256 only grabs 255 columns).
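You can verify the off-by-one on a stand-in row:

import numpy as np

row = np.arange(257)     # stand-in for one row of traindata
print(row[1:256].shape)  # (255,) -- one column short
print(row[1:257].shape)  # (256,) -- matches MATLAB's 2:257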

Finally, please keep in mind that modifications to arrays inside functions carry over outside the functions, because indexing does not make a copy. So your changes to y (setting some values to 1 and others to -1) will affect traindata outside of your function. If you want to avoid that, make a copy before you make your changes:

y = traindata[:, 0].copy()
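A toy example of the view-versus-copy behavior:

import numpy as np

traindata = np.ones((3, 2))  # toy data for illustration
y = traindata[:, 0]          # a view, not a copy
y[0] = -1
print(traindata[0, 0])       # -1.0 -- the original was modified

y = traindata[:, 0].copy()   # an independent copy
y[1] = -1
print(traindata[1, 0])       # 1.0 -- the original is untouched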

Upvotes: 2
