Reputation: 9
I am trying to solve a system of nine algebraic equations in MATLAB of the following form:
eq1 = x1 * 3.12091E-17 * 10.96 * exp(x2 + x3 * 4.96) - 1765;
eq2 = x1 * 3.12091E-17 * 5.08 * exp(x2 + x3 * 5.09) - 720;
eq3 = x1 * 3.12091E-17 * 57.2 * exp(x2 + x3 * 5.22) - 7133;
eq4 = x1 * 3.12091E-17 * 1.08 * exp(x2 + x3 * 5.3) - 123;
eq5 = x1 * 3.12091E-17 * 5.01 * exp(x2 + x3 * 5.32) - 565;
eq6 = (100 - x1) * 4.91606E-18 * 0.096 * exp(x2 + x3 * 6.61) - 8;
eq7 = (100 - x1) * 4.91606E-18 * 0.318 * exp(x2 + x3 * 6.64) - 28;
eq8 = (100 - x1) * 4.91606E-18 * 0.054 * exp(x2 + x3 * 6.66) - 4;
eq9 = (100 - x1) * 4.91606E-18 * 0.832 * exp(x2 + x3 * 6.9) - 57;
This system has only three unknowns in nine equations. I have tried to solve it using MATLAB's built-in solvers, but unfortunately nothing worked. The correct values of the unknowns are: x1 = 4.6; x2 = 47; x3 = -1.2.
What are your ideas on what could be going wrong, such that MATLAB cannot solve this system?
Regards, I.M.
Upvotes: 0
Views: 361
Reputation: 38032
I think Wolfie and Durkee are wrong here, and you simply made some other mistake not shown in the question body.
You have a multiplication of a large number by a small number. That is not usually an issue in floating point (save for under/overflow situations); it's addition/subtraction that typically causes problems.
Basically, floating point is scientific notation in binary. Just as in ordinary scientific notation, floating point preserves relative accuracy independent of the magnitude of the numbers involved, because the mantissa and the exponent are treated separately. A small demonstration (refer to the IEEE 754 standard for background):
% Take two random numbers from the ENTIRE range of possible doubles:
R = @() typecast(randi(intmax('uint32'), 1,2, 'uint32'), 'double');
R1 = R()
R2 = R()
% Their product
p1 = R1*R2
% Compute the same product, but with randomized exponents/equal mantissas
R1 = typecast(R1,'uint64');
R2 = typecast(R2,'uint64');
for bit = 64-11:64-1 % (ignore sign bit)
    R1 = bitset(R1, bit, rand() < 0.5);
    R2 = bitset(R2, bit, rand() < 0.5);
end
R1 = typecast(R1,'double')
R2 = typecast(R2,'double')
p2 = R1*R2
% Show binary representations of these products
dec2bin(typecast(p1,'uint64'), 64)
dec2bin(typecast(p2,'uint64'), 64)
which gives, for example:
R1 =
1.455811561734462e+106
R2 =
2.491861453769942e-128
p1 =
3.627680714638727e-22
R1 =
2.773851093980602e+246
R2 =
2.835869106622333e-287
p2 =
7.866278623790151e-41
ans =
'0011101101111011011010001111010001111011100010001110101000000000'
ans =
'0011011110011011011010001111010001111011100010001110101000000000'
Ignoring the first bit (the sign bit), these two strings demonstrate that only the next 11 bits (the exponent) differ, while the rest (the mantissa) is identical. This means that p1 and p2 are both the best possible representation of the desired product R1*R2 in floating point, and the magnitudes of R1 and R2 have zero effect on the accuracy (unless the exponent under/overflows).
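As an extra sanity check, you can verify that the relative spacing between adjacent doubles is essentially the same across wildly different magnitudes, which is exactly why the multiplication stays accurate:
% Relative spacing of doubles at very different magnitudes
x = [1e-300, 1e-10, 1, 1e10, 1e300];
eps(x) ./ x % all within a factor of 2 of eps(1) = 2.2204e-16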
Now, back to your question. First off, I'm very curious how you managed to solve a set of nonlinear equations with solvers for linear systems :)
Also, when I implement your equations and substitute the values you give, I get a vector of negative values...so I'm not sure what you're solving for here...
Anyway, I made reasonable assumptions and tried to solve your system with fsolve. And indeed, I found no substantial problem:
function X = eqs()
    % Oh just shut up, FSOLVE...
    options = optimset('TolX'     , 1e-14,...
                       'TolFun'   , 1e-14,...
                       'Algorithm', 'Levenberg-Marquardt',...
                       'Display'  , 'off');
    % Pretend we don't know a good initial value
    done = false;
    while ~done
        try
            [X, ~, e] = fsolve(@equations, 100*randn(3,1), options);
            done = (e==1);
        catch ME
            if ~strcmp(ME.identifier, 'optimlib:levenbergMarquardt:UsrObjUndefAtX0')
                rethrow(ME);
            end
        end
    end
end

function Y = equations(X)
    % Static factors
    A = 3.12091e-17 * [10.96; 5.08; 57.2; 1.08; 5.01];
    B = 4.91606e-18 * [0.096; 0.318; 0.054; 0.832];
    C = [4.96; 5.09; 5.22; 5.30; 5.32; 6.61; 6.64; 6.66; 6.90];
    D = [1765; 720; 7133; 123; 565; 8; 28; 4; 57];
    % (found by running the equation with your solution, to have a complete
    % equation compatible with fsolve)
    E = [-7.087431856800463e+02
         -3.011362826270133e+02
         -3.097892400288644e+03
         -5.378653741735363e+01
         -2.515404691148800e+02
         -3.826949873156565e+00
         -1.466555675930393e+01
         -1.789357783939141e+00
         -3.146292129042946e+01];
    % Value of the equations at the current X
    Y = [A.*X(1).*ones(5,1); B.*(100-X(1)).*ones(4,1)] .* ...
        exp(X(2) + C*X(3)) - D - E;
end
Results:
>> eqs()
ans =
4.599999930381616e+00
4.700000001536806e+01
-1.200000000274661e+00
The only problem here might be speed. I've used a very simple global search routine: iteratively drawing random starting values until one of them makes fsolve converge. In your problem's context, you might be able to make more reasonable and accurate initial estimates, reducing the computation time.
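For instance, here is one possible sketch for getting such an estimate (it ignores the constant offsets E, so it is only approximate): taking the logarithm of each equation makes the system linear in the quantities x2+log(x1), x2+log(100-x1) and x3, which a plain least-squares solve recovers; x1 and x2 are then easily separated again:
% Sketch: log-linearise the original equations to get a starting point
A = 3.12091e-17 * [10.96; 5.08; 57.2; 1.08; 5.01];
B = 4.91606e-18 * [0.096; 0.318; 0.054; 0.832];
C = [4.96; 5.09; 5.22; 5.30; 5.32; 6.61; 6.64; 6.66; 6.90];
D = [1765; 720; 7133; 123; 565; 8; 28; 4; 57];
% Logged equations: v(1) + C_i*x3 = log(D_i/A_i) for the first five,
%                   v(2) + C_i*x3 = log(D_i/B_i) for the last four,
% with v = [x2 + log(x1); x2 + log(100-x1); x3]
M  = [[ones(5,1); zeros(4,1)], [zeros(5,1); ones(4,1)], C];
v  = M \ (log(D) - log([A; B]));    % least-squares solution of the logged system
x1 = 100 / (1 + exp(v(2) - v(1)));  % undo the split into log(x1) and log(100-x1)
X0 = [x1; v(1) - log(x1); v(3)]     % ballpark guess for [x1; x2; x3]
% ...then start fsolve from X0 instead of 100*randn(3,1)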
Upvotes: 0
Reputation: 30046
If you have 3 unknowns in 9 equations then your problem is overdetermined. If such a system has an exact solution, then most of the equations must be redundant - in short, you only need 3 (independent) equations to solve for 3 variables. If the equations are not consistent, the best you can hope for is a least-squares fit.
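As a toy illustration (a linear analogue, not your actual system): with 9 equations in 3 unknowns, backslash returns the exact solution when the extra equations are consistent, and otherwise only a least-squares fit:
% Overdetermined linear toy problem: 9 equations, 3 unknowns
A      = randn(9, 3);
x_true = [4.6; 47; -1.2];
b      = A * x_true;               % consistent right-hand side
x_a    = A \ b                     % recovers x_true (up to round-off)
x_b    = A \ (b + 0.1*randn(9,1))  % inconsistent system: least-squares fit only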
Next, as mentioned in the comments, you are going to hit floating point issues when using numbers as small as 10^-18 and as large as e^40, especially within the same expression! You can see that the accuracy isn't high enough with a simple test:
exp(40) + 1e-18 > exp(40) % Returns false, i.e. not enough accuracy for additional term
eps(exp(40)) % Gives smallest distance to next number, returns 32.
% This is many orders of magnitude larger than 10^-18!
Once you have resolved those issues, you should look at fsolve, which is MATLAB's solver for nonlinear systems of equations from some initial guess.
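To make that concrete, here is a minimal sketch (using the constants exactly as given in the question, and a rough initial guess of my own): dividing each equation by its right-hand side keeps every residual at order 1, and the Levenberg-Marquardt algorithm copes with the system being overdetermined:
% Constants taken from the question
a = 3.12091e-17 * [10.96; 5.08; 57.2; 1.08; 5.01];
b = 4.91606e-18 * [0.096; 0.318; 0.054; 0.832];
c = [4.96; 5.09; 5.22; 5.30; 5.32; 6.61; 6.64; 6.66; 6.90];
d = [1765; 720; 7133; 123; 565; 8; 28; 4; 57];
% Relative residuals: lhs/rhs - 1, so all nine entries are of order 1
F = @(x) [a*x(1); b*(100 - x(1))] .* exp(x(2) + c*x(3)) ./ d - 1;
opts = optimoptions('fsolve', 'Algorithm', 'levenberg-marquardt', 'Display', 'off');
x = fsolve(F, [5; 45; -1], opts) % [x1; x2; x3]
If the nine equations turn out not to be exactly consistent, this simply returns the best fit in the least-squares sense; if you want MATLAB to treat it explicitly as a fitting problem, lsqnonlin with the same residual function is the more natural choice.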
Upvotes: 1