Reputation: 153
I'm trying to write some Haskell code that calculates a property from a set of inputs. I'll keep it generic, since I have a few different applications in mind for this style of program.
--some random calculation
longCalc a b c d e f g = a*b+c-d/e*f+g
I want to find the combination of inputs that gives the lowest result. So I generate a list of pairs.
allPairs = [(show [a,b,c,d,e,f,g], longCalc a b c d e f g) | a<-[1..40],b<-[1..40],c<-[1..40],d<-[1..40],e<-[1..40],f<-[1..40],g<-[1..40]]
minimum' :: Ord a => [(t, a)] -> (t, a)
minimum' [] = error "minimum of empty list"
minimum' (x:xs) = minTail x xs
  where minTail currentMin [] = currentMin
        minTail (m, n) (p:ps)
          | n > snd p = minTail p ps
          | otherwise = minTail (m, n) ps
Now I have a list with a descriptor of what the inputs were along with the answer.
bestAnswer = minimum' allPairs
I'm not so sure this is the most efficient way of doing this calculation.
In addition I would like to see a progressive calculation of the best answer.
onlyLessFunc _ _ [] = []
onlyLessFunc f y (x:xs)
  | f x < f y = x : onlyLessFunc f x xs
  | otherwise = onlyLessFunc f y xs
This function should produce a list where only better results get added to the end.
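For example, with f = id and a starting value of 10 (both just illustrative choices), I would expect only the successive improvements to come out, in the order they are found:
onlyLessFunc id 10 [5,3,7,2,8,1]   -- should give [5,3,2,1]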
How might I go about making this more efficient? My current approach uses tons of memory (all of it, to be specific).
Upvotes: 0
Views: 50
Reputation: 476574
You can boost the efficiency of your proposed minimum' with:
minimum' :: Ord a => [(t, a)] -> (t, a)
minimum' [] = error "minimum of empty list"
minimum' (x:xs) = minTail x xs
  where minTail m@(xm, fm) (xf@(x, f):xs)
          | f < fm    = minTail xf xs
          | otherwise = minTail m xs
        minTail cm [] = cm
With some extra techniques we can definitely boost it a bit further, but in general this will not improve performance significantly.
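One such extra technique is, for example, replacing the explicit recursion with a strict left fold; a minimal sketch (minimum'' is just an illustrative name):
import Data.List (foldl')

-- keep the pair whose second component is smallest; foldl' evaluates the
-- accumulator at every step instead of building a chain of deferred comparisons
minimum'' :: Ord a => [(t, a)] -> (t, a)
minimum'' [] = error "minimum of empty list"
minimum'' (x:xs) = foldl' step x xs
  where step acc@(_, fm) y@(_, f)
          | f < fm    = y
          | otherwise = acc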
A second optimization, one that will have a huge impact on memory use, is not to bind the list to a top-level name such as allPairs, but to pass the list comprehension directly to the function. A top-level binding keeps a reference to the head of the list, so every element produced so far stays alive; with the comprehension inlined, Haskell can garbage collect the elements that have already been consumed (which matters, since the comprehension ranges over 40^7 ≈ 1.6 × 10^11 tuples). So we can call it with:
minimum' [([a,b,c,d,e,f,g], longCalc a b c d e f g) | a<-[1..40],b<-[1..40],c<-[1..40],d<-[1..40],e<-[1..40],f<-[1..40],g<-[1..40]]
Still, this will not bring the algorithm anywhere near what constraint programming or integer linear programming can do. Those techniques aim to exploit relations in the function, and thus restrict the domains of the variables from the moment a first value has been calculated. For integer linear programming (ILP) this is usually done with an optimization technique called the cutting-plane method.
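To illustrate on this particular longCalc: a*b + c - (d/e)*f + g is monotone in each variable over the positive domain, so the minimum over [1..40] in every variable has to sit at a = b = c = e = g = 1 and d = f = 40, i.e. 1 + 1 - 1600 + 1 = -1597, with no enumeration at all.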
As for your second question: again we cannot improve this in terms of time complexity, but we can boost performance with:
onlyLessFunc f y = go (f y)
  where go _ [] = []
        go fy (x:xs)
          | fx < fy   = x : go fx xs
          | otherwise = go fy xs
          where fx = f x
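To actually watch the progressive improvements while the search runs, a minimal usage sketch (assuming the longCalc and onlyLessFunc definitions above; snd as the projection and the infinite seed pair are my own choices) could look like:
-- print every improvement as soon as it is found; the candidate list is
-- consumed lazily, so elements that have been inspected can be garbage collected
main :: IO ()
main = mapM_ print
     $ onlyLessFunc snd ([], 1/0)   -- seed with infinity, so the first candidate always wins
       [ ([a,b,c,d,e,f,g], longCalc a b c d e f g)
       | a<-[1..40], b<-[1..40], c<-[1..40], d<-[1..40]
       , e<-[1..40], f<-[1..40], g<-[1..40] ]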
But again, as said, you are actually trying to optimize the wrong part of the algorithm. Usually it is more beneficial to prevent generating all those values in the first place.
Upvotes: 1