Reputation: 67
I've solved a dynamic programming problem in Haskell (actually a Project Euler problem) with what boils down to a generalisation of the Fibonacci sequence:
f(n) = f(n-1) + f(n-2)
g(n) = g(n-1) + g(n-3)
h(n) = h(n-1) + h(n-4)
There are a few more functions like this, and because of the size of the problem, I've had to add memoization, which is pretty trivial, like so:
memF = (map f' [0 ..] !!)
  where f' x | x <= 1 = 1
        f' 2 = 2
        f' x = memF (x-1) + memF (x-2)

memG = (map f' [0 ..] !!)
  where f' x | x <= 2 = 1
        f' 3 = 2
        f' x = memG (x-1) + memG (x-3)
This works fine, so I can get the answer as (memF 100) + (memG 100) + ...
and I've answered the question, but the repeated code is ugly, so I'd rather define a single function that generates the memoized functions, something like:
mem d = (map f' [0 ..] !!)
  where f' x | x < d = 1
        f' x | x == d = 2
        f' x = (mem d) (x-1) + (mem d) (x-d)
And then answer as mem 2 100 + mem 3 100 + ...
This fails, or at least the caching doesn't work, presumably because the list is recreated on each call. I suppose I could use the State monad or a memoization library, but I'd be interested to know whether there is a way to do this without monads. Is there?
Upvotes: 4
Views: 113
Reputation: 116139
You need another binding to avoid the recursive call to mem d:
mem d = g
  where g = (map f' [0 ..] !!)
        f' x | x < d = 1
        f' x | x == d = 2
        f' x = g (x-1) + g (x-d)
Also be careful when you are calling mem, since each call to mem will create its own cache. E.g. mem 10 x + mem 10 y will not cache anything, while let g = mem 10 in g x + g y will use the same cache.
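To make the sharing (or lack of it) visible, here is a small sketch using Debug.Trace, which prints a message to stderr whenever an entry is actually computed; the naming is my own, and whether the two independent calls really duplicate work can depend on GHC's full-laziness optimisation:

```haskell
import Debug.Trace (trace)

mem :: Int -> Int -> Integer
mem d = g
  where
    g = (map f' [0 ..] !!)
    f' x | x < d     = 1
         | x == d    = 2
         | otherwise = trace ("computing entry " ++ show x) $
                       g (x - 1) + g (x - d)

main :: IO ()
main = do
  -- Two independent calls: without full laziness each 'mem 2'
  -- builds a fresh list, so the traces typically run twice.
  print (mem 2 6 + mem 2 6)   -- prints 26
  -- A shared binding: the second 'g 6' reuses the same cache,
  -- so no new trace messages appear.
  let g = mem 2
  print (g 6 + g 6)           -- prints 26
```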
An alternative could be to use a single "global" cache for all the calls mem d x, using memoization on the pair (d,x). This looks a bit more tricky to achieve, though.
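One way to sketch that global cache with the same lazy-list trick is a top-level table whose row for step d memoizes the whole step-d sequence; being a top-level value, the table is shared by every call. The names cache and memAll are my own, and the double (!!) lookup costs O(d + x) per access, so this is an illustration rather than an efficient implementation:

```haskell
-- One shared table: row (d - 2) holds the memoized values of the
-- recurrence f_d(x) = f_d(x-1) + f_d(x-d), for d = 2, 3, 4, ...
cache :: [[Integer]]
cache = [ [ f d x | x <- [0 ..] ] | d <- [2 ..] ]
  where
    f d x | x < d     = 1
          | x == d    = 2
          | otherwise = memAll d (x - 1) + memAll d (x - d)

-- Look up f_d(x) in the shared table (rows start at d = 2).
memAll :: Int -> Int -> Integer
memAll d x = cache !! (d - 2) !! x

main :: IO ()
main = print (memAll 2 100 + memAll 3 100 + memAll 4 100)
```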
Upvotes: 4