Reputation: 1509
So generally, if you have two functions f, g: X --> Y, and if there is some binary operation + defined on Y, then f + g has a canonical definition as the function x --> f(x) + g(x).
What's the best way to implement this in Mathematica?
f[x_] := x^2
g[x_] := 2*x
h = f + g;
h[1]
yields
(f + g)[1]
as output.
Of course,
H = Function[z, f[z] + g[z]];
H[1]
yields 3.
Upvotes: 12
Views: 3286
Reputation: 18271
I would suggest defining an operator other than the built-in Plus for this purpose. There are a number of operators provided by Mathematica that are reserved for user definitions in cases such as this. One such operator is CirclePlus, which has no pre-defined meaning but which has a nice compact representation (at least, it is compact in a notebook -- not so compact on a StackOverflow web page). You could define CirclePlus to perform function addition thus:
(x_ \[CirclePlus] y_)[args___] := x[args] + y[args]
With this definition in place, you can now perform function addition:
h = f \[CirclePlus] g;
h[x]
(* Out[3]= f[x]+g[x] *)
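With the question's f and g defined, the new operator should also evaluate numerically:
f[x_] := x^2
g[x_] := 2*x
h[1]
(* 3 *)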
If one likes to live on the edge, the same technique can be used with the built-in Plus operator, provided it is unprotected first:
Unprotect[Plus];
(x_ + y_)[args___] := x[args] + y[args]
Protect[Plus];
h = f + g;
h[x]
(* Out[7]= f[x]+g[x] *)
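Again with the question's definitions in place, the sum should evaluate numerically as well; and since the new rule only fires when a sum is applied to arguments, ordinary numeric addition is unaffected:
f[x_] := x^2
g[x_] := 2*x
h[1]
(* 3 *)
1 + 2
(* 3 *)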
I would generally advise against altering the behaviour of built-in functions -- especially one as fundamental as Plus. The reason is that there is no guarantee that user-added definitions to Plus will be respected by other built-in or kernel functions. In some circumstances calls to Plus are optimized, and those optimizations might not take the user definitions into account. However, this consideration may not affect any particular application, so the option is still a valid, if risky, design choice.
Upvotes: 3
Reputation: 22579
I will throw in complete code for Gram-Schmidt and an example of function addition etc., since I happened to have that code written about 4 years ago. I did not test it extensively, though, and I did not change a single line of it now, so a disclaimer (I was a lot worse at Mathematica at the time). That said, here is a Gram-Schmidt procedure implementation, which is a slightly generalized version of the code I discussed here:
(* Orthogonalize vec against the vectors in vecmat, using the supplied dot product, addition, and scalar multiplication *)
oneStepOrtogonalizeGen[vec_, {}, _, _, _] := vec;
oneStepOrtogonalizeGen[vec_, vecmat_List, dotF_, plusF_, timesF_] :=
   Fold[plusF[#1, timesF[-dotF[vec, #2]/dotF[#2, #2], #2]] &, vec, vecmat];
(* Gram-Schmidt: orthogonalize each starting vector against those already produced *)
GSOrthogonalizeGen[startvecs_List, dotF_, plusF_, timesF_] :=
   Fold[Append[#1, oneStepOrtogonalizeGen[#2, #1, dotF, plusF, timesF]] &, {}, startvecs];
(* Normalize a vector with respect to the supplied dot product *)
normalizeGen[vec_, dotF_, timesF_] := timesF[1/Sqrt[dotF[vec, vec]], vec];
GSOrthoNormalizeGen[startvecs_List, dotF_, plusF_, timesF_] :=
   Map[normalizeGen[#, dotF, timesF] &, GSOrthogonalizeGen[startvecs, dotF, plusF, timesF]];
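As a quick sanity check, these routines can be fed ordinary numeric vectors, using the built-in Dot, Plus and Times; this should return the familiar orthonormal pair:
GSOrthoNormalizeGen[{{1, 1}, {1, 0}}, Dot, Plus, Times]
(* {{1/Sqrt[2], 1/Sqrt[2]}, {1/Sqrt[2], -(1/Sqrt[2])}} *)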
The functions above are parametrized by 3 functions, realizing addition, multiplication by a number, and the dot product in a given vector space. The example to illustrate will be to find Hermite polynomials by orthonormalizing monomials. These are possible implementations of the 3 functions we need:
hermiteDot[f_Function, g_Function] :=
Module[{x}, Integrate[f[x]*g[x]*Exp[-x^2], {x, -Infinity, Infinity}]];
SetAttributes[functionPlus, {Flat, Orderless, OneIdentity}];
functionPlus[f__Function] := With[{expr = Plus @@ Through[{f}[#]]}, expr &];
SetAttributes[functionTimes, {Flat, Orderless, OneIdentity}];
functionTimes[a___, f_Function] /; FreeQ[{a}, # | Function] :=
With[{expr = Times[a, f[#]]}, expr &];
These functions may be a bit naive, but they will illustrate the idea (and yes, I also used Through). Here are some examples to illustrate their use:
In[114]:= hermiteDot[#^2 &, #^4 &]
Out[114]= (15 Sqrt[\[Pi]])/8
In[107]:= functionPlus[# &, #^2 &, Sin[#] &]
Out[107]= Sin[#1] + #1 + #1^2 &
In[111]:= functionTimes[z, #^2 &, x, 5]
Out[111]= 5 x z #1^2 &
Now, the main test:
In[115]:=
results =
GSOrthoNormalizeGen[{1 &, # &, #^2 &, #^3 &, #^4 &}, hermiteDot,
functionPlus, functionTimes]
Out[115]= {1/\[Pi]^(1/4) &, (Sqrt[2] #1)/\[Pi]^(1/4) &, (
Sqrt[2] (-(1/2) + #1^2))/\[Pi]^(1/4) &, (2 (-((3 #1)/2) + #1^3))/(
Sqrt[3] \[Pi]^(1/4)) &, (Sqrt[2/3] (-(3/4) + #1^4 -
3 (-(1/2) + #1^2)))/\[Pi]^(1/4) &}
These are indeed the properly normalized Hermite polynomials, as is easy to verify. The normalization of the built-in HermiteH is different. Our results are normalized as one would normalize the wave functions of a harmonic oscillator, say. It is trivial to obtain a list of polynomials as expressions depending on a variable, say x:
In[116]:= Through[results[x]]
Out[116]= {1/\[Pi]^(1/4),(Sqrt[2] x)/\[Pi]^(1/4),(Sqrt[2] (-(1/2)+x^2))/\[Pi]^(1/4),
(2 (-((3 x)/2)+x^3))/(Sqrt[3] \[Pi]^(1/4)),(Sqrt[2/3] (-(3/4)+x^4-3 (-(1/2)+x^2)))/\[Pi]^(1/4)}
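One way to verify the normalization claim above is to compute the Gram matrix of the results under the same inner product; it should come out as the 5 x 5 identity:
Simplify[Table[hermiteDot[results[[i]], results[[j]]], {i, 5}, {j, 5}]] == IdentityMatrix[5]
(* True *)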
Upvotes: 3
Reputation: 24336
Consider:
In[1]:= Through[(f + g)[1]]
Out[1]= f[1] + g[1]
To elaborate, you can define h like this:
h = Through[ (f + g)[#] ] &;
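Once the question's f and g are defined, this h evaluates numerically as expected:
f[x_] := x^2
g[x_] := 2*x
h[1]
(* 3 *)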
If you have a limited number of functions and operands, then UpSet as recommended by yoda is surely syntactically cleaner. However, Through is more general. Without any new definitions involving Times or h, one can easily do:
i = Through[ (h * f * g)[#] ] &
i[7]
43218
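(That is, i[7] = h[7]*f[7]*g[7] = 63*49*14 = 43218.)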
Upvotes: 13
Reputation: 42235
Another way of doing what you're trying to do is using UpSetDelayed.
f[x_] := x^2;
g[x_] := 2*x;
f + g ^:= f[#] + g[#] &; (*define upvalues for the operation f+g*)
h[x_] = (f + g)[x];
h[z]
Out[1]= 2 z + z^2
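A quick numeric check, using the value from the question:
h[1]
(* 3 *)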
Also see this very nice answer by rcollyer (and also the ones by Leonid & Verbeia) for more on UpValues and when to use them.
Upvotes: 10