SKM

Reputation: 989

Issue in calculating error for several runs of an experiment

In the short example code below, which is part of a larger program, I am computing the mean square error (MSE), a performance metric for judging how well a function has been evaluated: the lower the MSE, the closer the evaluated output is to the true result. I repeat the experiment 10 times with 10 different data sets and record the minimum error over all the data sets. This entire process is run 100 times. The data is a matrix of size 10×3, i.e. 10 data samples, each containing 3 elements.

I have doubts about the way I calculate the mean square error, the average mean square error, and the minimum error. In the end I want to plot a graph with the number of function evaluations on the X axis and MinimumErr on the Y axis, showing the error decreasing smoothly over the 100 trials of the program. Please help.

for trials = 1:100
    for expt = 1:10
        DataSet = Data(expt,:);
        for evaluation = 1:50
            %Evaluate a function 
            [B1 B2 B3] = F(DataSet)

            %Find error between the desired outputs(A1,A2,A3) of the function and the obtained output (B1,B2,B3). The function evaluation returns these 3 values.
            err(evaluation,:) = (A1-B1)^2+ (A2-B2)^2 + (A3-B3)^2;
        end
        MeanSqErr = sum(err)/(3*evaluation);
    end
    MinimumErr(expt)  = min(err);
end
AverageMSE= sum(MeanSqErr)/(trials)

Upvotes: 1

Views: 452

Answers (1)

tashuhka

Reputation: 5126

Before writing a single line of code, we need to understand what we want.

The Mean Squared Error (MSE) is a measurement of difference defined as:

MSE = (1/n) * Σ_{i=1..n} (Yhat_i − Y_i)²

where Yhat is the estimated output and Y is the reference output. Both signals/vectors have the same number of points, n.

Then, you want the averaged MSE over m experiments, hence you need to apply the mean operator.

MSE_avg = (1/m) * Σ_{j=1..m} MSE_j

For example, you have a reference measurement Y = [0 1 3 6 10]. In the first experiment, you measure Y1 = [1 2 4 5 9], and in the second experiment you measure Y2 = [0 1 2 3 8]. The MSE of the first and second experiment are 1 and 2.8, respectively. Hence, the averaged MSE over all the experiments is 1.9.

Y  = [0 1 3 6 10];
Y1 = [1 2 4 5 9];
Y2 = [0 1 2 3 8];

MSE1 = ((Y-Y1)*(Y-Y1).')/numel(Y);   % 1
MSE2 = ((Y-Y2)*(Y-Y2).')/numel(Y);   % 2.8
MSEavg = (MSE1+MSE2)/2;              % 1.9

Your code looks right but messy, except that the MinimumErr assignment should be inside the for expt = 1:10 loop. I would reorganize your code a bit as:

% Parameters
Ntrials = 100;
Nexpt   = 10;
Neval   = 50;

% Calculate
A = [A1 A2 A3];
MSE = zeros(Ntrials,Nexpt,Neval);
for trials = 1:Ntrials
    for expt = 1:Nexpt
        for iEval = 1:Neval  % renamed to avoid shadowing MATLAB's built-in eval
            % Evaluate a function 
            [B1,B2,B3] = F(Data(expt,:));
            B = [B1 B2 B3];
            % Find MSE 
            MSE(trials,expt,iEval) = ((A-B)*(A-B).')/numel(A);
        end
    end
end

% Statistics
MeanSqErr  = mean(MSE,3);
MinimumErr = min(MeanSqErr,[],2);
AverageMSE = mean(MeanSqErr,2);

% Plot
figure; plot(1:Ntrials,AverageMSE); xlabel('#trials'); ylabel('MSE');
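If you also want the decreasing minimum-error curve you described in the question, you can plot MinimumErr the same way (a sketch reusing the variables computed above):

```matlab
% Plot the minimum MSE per trial (the curve asked for in the question)
figure; plot(1:Ntrials, MinimumErr);
xlabel('#trials'); ylabel('Minimum MSE');
```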

Upvotes: 1
