Reputation: 1177
I have two functions
m1 = f1(w, s)
m2 = f2(w, s)
f1() and f2() are both black boxes: given w and s, I can obtain m1 and m2.
Now, I need to design or find a function g, such that
m2' = g(m1)
Also, the difference between m2 and m2' must be minimized. Both w and s are stochastic processes.
How can I find such a function g()? What knowledge domain does this belong to?
Upvotes: 2
Views: 231
Reputation: 178451
Assuming you can invoke f1 and f2 as many times as you want, this can be solved using regression:

1. Sample n random input points and run them through the black boxes to get the tuples (w_1,s_1,m2_1),...,(w_n,s_n,m2_n) together with m1_1,...,m1_n.
2. Fit a model to the pairs (m1_1,m2_1),...,(m1_n,m2_n).
3. For example, for a cubic polynomial fit, use (1,m1_1,m1_1^2,m1_1^3) as the feature row for each sample, with m2_1 as the target, and solve with ordinary least squares (OLS).

It is easy to generalize this to any polynomial degree, or to any other set of basis functions. However, note that for some functions it might be impossible to find a good model to fit, since you lose information when you reduce the dimensionality from 2 (w,s) to 1 (m1).
Matlab code snippet (the example functions are chosen arbitrarily):
%example black-box functions (named f1,f2 to match the question)
f1 = @(w,s) w.^2 + s.^3 - 1;
f2 = @(w,s) s.^2 - w + 2;
%random points for sampling
w = rand(1,100);
s = rand(1,100);
%the data, as column vectors
m1 = f1(w,s)';
m2 = f2(w,s)';
%build the design matrix: column jj holds m1.^(jj-1)
d = 5;
points = size(m1,1);
A = ones(points,d);
for jj=1:d
    A(:,jj) = m1.^(jj-1);
end
%OLS estimate of the polynomial coefficients:
theta = pinv(A'*A)*A'*m2;
%new point:
w = rand(1,1);
s = rand(1,1);
m1 = f1(w,s);
%feature row for the new point:
A = ones(1,d);
for jj=1:d
    A(:,jj) = m1.^(jj-1);
end
%the estimation:
estimated = A*theta
%the real value:
f2(w,s)
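For readers without Matlab, here is an equivalent sketch in Python with NumPy (same arbitrarily chosen example functions; `np.linalg.lstsq` replaces the explicit normal equations):

```python
import numpy as np

rng = np.random.default_rng(0)

# example black-box functions (arbitrary choices, as above)
f1 = lambda w, s: w**2 + s**3 - 1
f2 = lambda w, s: s**2 - w + 2

# random points for sampling
w = rng.random(100)
s = rng.random(100)
m1 = f1(w, s)
m2 = f2(w, s)

# design matrix: column j holds m1**j (d = 5 columns -> degree-4 polynomial)
d = 5
A = np.vander(m1, d, increasing=True)

# OLS fit of m2 as a polynomial in m1
theta, *_ = np.linalg.lstsq(A, m2, rcond=None)

# estimate m2 at a new point
w_new, s_new = rng.random(), rng.random()
m1_new = f1(w_new, s_new)
estimated = (np.vander(np.atleast_1d(m1_new), d, increasing=True) @ theta)[0]
print(estimated, f2(w_new, s_new))
```

As in the Matlab version, the fit can only be as good as the information m1 carries about m2.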
Upvotes: 3
Reputation: 3372
This kind of problem is studied in fields such as statistics and inverse problems. Here's one way to approach the problem theoretically (from the point of view of inverse problems):
First of all, it is quite clear that in the general case the function g might not exist. However, what you can (try to) compute, given that you (assume to) know something about the statistics of w and s, is the posterior probability density p(m2|m1), which can then be used to compute estimators for m2 given m1, for instance a maximum a posteriori estimate.
The posterior density can be computed using Bayes' formula:
p(m2|m1) = (\int p(m1,m2|w,s) p(w,s) dw ds) / (\int p(m1|w,s) p(w,s) dw ds)
which, in this case, might be (theoretically) nasty to apply, since some of the involved marginal probability densities are singular. The best way to proceed numerically depends on the additional assumptions you can make about the statistics of w and s (e.g., Gaussian) and the functions f1, f2 (e.g., smooth). There is no silver bullet.
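As a numerical illustration of this idea, one can approximate the posterior mean E[m2 | m1] by plain Monte Carlo: draw many (w, s) samples from their assumed distribution, push them through the black boxes, and average m2 over the samples whose m1 lands near the observed value (a crude kernel-regression estimate; the functions and input distributions below are made-up assumptions, chosen so that m2 is exactly 2*m1):

```python
import numpy as np

rng = np.random.default_rng(1)

# assumed black boxes and input statistics (made up for illustration)
f1 = lambda w, s: w + s
f2 = lambda w, s: 2.0 * (w + s)   # so here m2 = 2*m1 exactly
w = rng.normal(0.0, 1.0, 200_000)
s = rng.normal(0.0, 1.0, 200_000)
m1 = f1(w, s)
m2 = f2(w, s)

def posterior_mean(m1_obs, eps=0.02):
    """Estimate E[m2 | m1 = m1_obs] by averaging m2 over samples
    whose m1 falls within eps of the observed value."""
    mask = np.abs(m1 - m1_obs) < eps
    return m2[mask].mean()

print(posterior_mean(0.5))  # should be close to 1.0 for this choice of f1, f2
```

With real black boxes one would replace the lambdas by calls to f1 and f2 and the Gaussians by whatever is known about w and s; if the conditional density is multimodal, the conditional mean can be a poor summary and a MAP-style estimate may be preferable.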
amit's OLS solution is probably a good starting point. Just be sure to sample from the correct distributions for w and s.
Upvotes: 1