Reputation: 575
This is probably a noob question, but I am trying to minimize the mean absolute error (MAE) in GAMS. Consider the following data:
set Time /0 * 2/;
parameter y(Time),u(Time),v(Time),yhat(Time),MAE;
scalar
alpha /0/
beta /0/;
y("0") = 24;
y("1") = 23;
y("2") = 26;
I want to apply the following equation, based on exponential smoothing (the equation is taken from here):
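That is, Holt's double exponential smoothing recursions (reconstructed here from the corrected form used in the answer below, since the linked equation is not reproduced):

$$u_t = \alpha y_t + (1-\alpha)\,(u_{t-1} + v_{t-1})$$
$$v_t = \beta\,(u_t - u_{t-1}) + (1-\beta)\,v_{t-1}$$
$$\hat{y}_t = u_{t-1} + v_{t-1}$$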
I can do that in GAMS with a loop:
u("0") = y("0");
v("0") = 0;
loop(Time,
    u(Time) = (alpha*y(Time)) + (1-alpha)*(u(Time-1) - v(Time-1));
    v(Time) = beta*(u(Time) - u(Time-1)) + (1-beta)*v(Time-1);
    yhat(Time) = u(Time-1) + v(Time-1);
);
From this I can calculate the mean absolute error:
set Timesub(Time) / 1 * 2 /;
MAE = sum(Timesub,abs(yhat(Timesub)-y(Timesub)))/2;
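In other words (period 0 is excluded via Timesub, since no forecast exists for it):

$$\mathrm{MAE} = \frac{1}{2}\sum_{t=1}^{2} \left|\hat{y}_t - y_t\right|$$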
However, instead of assuming values for alpha and beta, I want to minimize MAE by varying alpha and beta subject to the constraints 0 < alpha <= 1.0 and 0 < beta <= 1.0.
But I am not sure how to set up this minimization problem in GAMS. Can anyone help me?
Upvotes: 0
Views: 181
Reputation: 16797
First, note that your GAMS assignment for u has a bug: a sign error (it should be u(Time-1)+v(Time-1), not u(Time-1)-v(Time-1)).
In GAMS you have to "unroll" the loop and construct a large system of simultaneous equations. Using data from your reference, this can look like:
set
t /t1*t15/
;
parameter y(t) 'data' /
t1 3
t2 5
t3 9
t4 20
t5 12
t6 17
t7 22
t8 23
t9 51
t10 41
t11 56
t12 75
t13 60
t14 75
t15 88
/;
variables
u(t),v(t),yhat(t),MAE
;
positive variables
alpha, beta
abserr(t)
;
alpha.up = 1;
beta.up = 1;
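* bounds 0 <= alpha,beta <= 1: a strict bound 0 < alpha cannot be
* stated directly, since solvers work with closed bounds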
equations
udef(t)
vdef(t)
pred(t)
abs1(t)
abs2(t)
obj
;
u.fx("t1") = y("t1");
v.fx("t1") = 0;
yhat.fx("t1") = 0;
udef(t-1).. u(t) =e= alpha*y(t)+(1-alpha)*(u(t-1)+v(t-1));
vdef(t-1).. v(t) =e= beta*(u(t)-u(t-1))+(1-beta)*v(t-1);
pred(t-1).. yhat(t) =e= u(t-1)+v(t-1);
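* the pair of inequalities below enforces abserr(t) >= |yhat(t)-y(t)|;
* because MAE is minimized, abserr is pushed down to the true
* absolute error at the optimum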
abs1(t)$(ord(t)>1).. -abserr(t) =l= yhat(t)-y(t);
abs2(t)$(ord(t)>1).. yhat(t)-y(t) =l= abserr(t);
obj.. MAE =e= sum(t$(ord(t)>1),abserr(t))/(card(t)-1);
* initial point
alpha.l = 0.4;
beta.l = 0.7;
model m /all/;
option nlp=conopt;
solve m minimizing MAE using nlp;
parameter results(*,*);
results(t,'y') = y(t);
results(t,'u') = u.l(t);
results(t,'v') = v.l(t);
results(t,'yhat') = yhat.l(t);
results(t,'|e|') = abserr.l(t);
display results;
display alpha.l,beta.l,MAE.l;
The results look like:
---- 73 PARAMETER results

              y           u           v        yhat         |e|

t1        3.000       3.000
t2        5.000       3.428       0.370       3.000       2.000
t3        9.000       4.910       1.333       3.798       5.202
t4       20.000       9.184       3.878       6.243      13.757
t5       12.000      12.835       3.681      13.062       1.062
t6       17.000      16.620       3.771      16.516       0.484
t7       22.000      20.735       4.069      20.391       1.609
t8       23.000      24.418       3.735      24.803       1.803
t9       51.000      33.038       7.962      28.153      22.847
t10      41.000      41.000       7.962      41.000
t11      56.000      50.467       9.264      48.962       7.038
t12      75.000      62.996      12.089      59.731      15.269
t13      60.000      71.860       9.298      75.085      15.085
t14      75.000      79.841       8.159      81.158       6.158
t15      88.000      88.000       8.159      88.000
---- 74 VARIABLE alpha.L  =  0.214
        VARIABLE beta.L   =  0.865
        VARIABLE MAE.L    =  6.594
This is a bit better than reported in the link. The reason is that this is actually a non-convex problem, so a local NLP solver can end up in a different local optimum depending on the starting point. I verified that CONOPT did find the globally optimal solution here by re-solving with a global solver.
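For reference, a minimal sketch of that verification step, assuming a licensed global solver such as BARON is installed (any global NLP solver can be swapped in the same way):

option nlp=baron;
solve m minimizing MAE using nlp;
display alpha.l, beta.l, MAE.l;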
Upvotes: 1