Marcelo de Sousa

Reputation: 195

Doubts about a perceptron

I am studying machine learning on my own and I came across the following signature of a perceptron:

def ClassicPerceptron(W,X,Y,maxiter=1000,reorder=True):
    """ClassicPerceptron function implements the most basic perceptron. 

    This algorithm starts by reordering the training samples and their labels
    if reorder is equal to True. Then, it iterates for all the samples, as many
    times as it takes, to correctly classify all the samples, or until the number 
    of iterations reaches maxiter.

    Parameters
    ----------
    W : numpy array of floats
        The initial set of weights for the perceptron classificator.
    X : numpy array of floats
        The dataset with the bias (first column is equal to 1.0).
    Y : numpy array of floats
        The labels (-1.0, or 1.0) for each line of X.
    maxiter : integer
        The maximum number of iterations allowed before stopping.
    reorder : boolean
        reorder the training samples and their labels.

    Returns
    -------
    W : numpy array of floats
        The last set of weights for the perceptron classificator.
    niter : integer
        The current number of iterations until success, or maxiter. 
        This is just to have an idea of how many iterations it took 
        to converge.

    """

I found this curious because the algorithm does not seem to mention the weight update rule, which everything we have seen so far relies on. In fact, I don't understand the definition well. I imagined that the reordering would shuffle the training examples, but I am a bit lost, so any light on how this algorithm works would help. PS: Please do not respond with code, I would just like an explanation.

Upvotes: 0

Views: 63

Answers (1)

leoschet

Reputation: 1866

Well, the way I see it, since you can pass reorder=False, the reorder step is optional, so when it says

Then, it iterates for all the samples, as many times as it takes, to correctly classify all the samples, or until the number of iterations reaches maxiter.

it seems to be updating/adjusting the weights until it finds a solution (a hyperplane that separates the classes) or until maxiter is reached. In other words, it does seem to respect the update of weights.
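I know you asked for no code, but since the implementation wasn't posted, here is a hypothetical sketch (the function body is my guess, not the original) of what such a loop typically looks like, just to make the update rule concrete:

```python
import numpy as np

def classic_perceptron_sketch(W, X, Y, maxiter=1000, reorder=True):
    """Hypothetical reconstruction of what ClassicPerceptron might do."""
    if reorder:
        # Shuffle samples and labels together (one plausible reading
        # of the "reordering" mentioned in the docstring)
        idx = np.random.default_rng(0).permutation(len(X))
        X, Y = X[idx], Y[idx]
    for niter in range(1, maxiter + 1):
        errors = 0
        for x, y in zip(X, Y):
            if y * np.dot(W, x) <= 0:  # sample is misclassified
                W = W + y * x          # classic perceptron weight update
                errors += 1
        if errors == 0:                # every sample classified correctly
            return W, niter
    return W, maxiter
```

On linearly separable data this loop is guaranteed to stop (the perceptron convergence theorem); `maxiter` is the safeguard for non-separable data.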

It would be helpful if you could provide the method implementation, if possible, so the idea behind reordering the training set could be understood.

Besides that training approach, one can calculate the bias and the weights by solving the linear system X * W = Y, where X is the training samples plus the additional bias column, W is the weight array plus the bias weight, and Y is the training labels. In fact, at first glance, I thought that this was the method's intention.

In this scenario, the reorder step would be helpful either to get the matrix into row echelon form or into a lower triangular form, both using d samples from the set, where d is the dimension (the number of features plus one, for the bias).

Note that, in order to solve this linear system, you need to map each individual X * W result (i.e., a single row of X dotted with W) to 1 or -1. You can achieve this by considering every result above the threshold as 1, and -1 otherwise.
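As a sketch of this alternative (the data is made up; I use a least-squares solve since X is usually not square, and a zero threshold for the sign step):

```python
import numpy as np

# Training samples with the bias column (first column is 1.0)
X = np.array([[1.0,  2.0,  1.0],
              [1.0,  3.0,  4.0],
              [1.0, -2.0, -1.0],
              [1.0, -3.0, -2.0]])
Y = np.array([1.0, 1.0, -1.0, -1.0])

# Solve X @ W ~= Y in the least-squares sense
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Threshold each row's result: >= 0 maps to 1, otherwise to -1
predictions = np.where(X @ W >= 0, 1.0, -1.0)
```

Unlike the iterative perceptron, this gives the weights in one shot, but the thresholded predictions are only guaranteed to match the labels when the data cooperates.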

Upvotes: 0
