Reputation: 1875
We have been asked to answer whether foldr or foldl is more efficient. I am not sure, but doesn't it depend on what I am doing, especially on what I want to achieve with my functions? Is there a difference from case to case, or can one say that foldr or foldl is better because...? Is there a general answer?
Thanks in advance!
Upvotes: 16
Views: 8611
Reputation: 30227
In languages with strict/eager evaluation, folding from the left can be done in constant space, while folding from the right requires space linear in the number of elements of the list. Because of this, many people who come to Haskell from such languages carry that rule of thumb over with them.
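For reference, here is the textbook definition of a left fold (foldl); the recursive call is in tail position, which is why a strict language can run it in constant space:
foldl :: (b -> a -> b) -> b -> [a] -> b
foldl f z []     = z
foldl f z (x:xs) = foldl f (f z x) xs  -- tail call: nothing is left pending around the recursion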
But that rule of thumb doesn't work in Haskell, because of lazy evaluation. It's possible in Haskell to write constant-space functions with foldr. Here is one example:
find :: (a -> Bool) -> [a] -> Maybe a
find p = foldr (\x next -> if p x then Just x else next) Nothing
Let's try hand-evaluating find even [1, 3, 4]:
-- The definition of foldr, for reference:
foldr f z [] = z
foldr f z (x:xs) = f x (foldr f z xs)
find even (1:3:4:[])
= foldr (\x next -> if even x then Just x else next) Nothing (1:3:4:[])
= if even 1 then Just 1 else foldr (\x next -> if even x then Just x else next) Nothing (3:4:[])
= foldr (\x next -> if even x then Just x else next) Nothing (3:4:[])
= if even 3 then Just 3 else foldr (\x next -> if even x then Just x else next) Nothing (4:[])
= foldr (\x next -> if even x then Just x else next) Nothing (4:[])
= if even 4 then Just 4 else foldr (\x next -> if even x then Just x else next) Nothing []
= Just 4
The size of the expressions in the intermediate steps has a constant upper bound, which means this evaluation really can be carried out in constant space.
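The same reasoning shows (my extrapolation, not part of the original answer) that find even works on an infinite list:
find even [1 ..]
= foldr (\x next -> if even x then Just x else next) Nothing [1 ..]
= if even 1 then Just 1 else foldr (\x next -> if even x then Just x else next) Nothing [2 ..]
= foldr (\x next -> if even x then Just x else next) Nothing [2 ..]
= if even 2 then Just 2 else foldr (\x next -> if even x then Just x else next) Nothing [3 ..]
= Just 2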
Another reason why foldr in Haskell can run in constant space is GHC's list fusion optimizations. In many cases GHC can optimize a foldr into a constant-space loop over a constant-space producer; it cannot generally do that for a left fold.
Nonetheless, left folds in Haskell can be written to use tail recursion, which can bring performance benefits. The catch is that for this to actually work you need to be very careful about laziness: naïve attempts at writing a tail-recursive fold usually run in linear space, because they accumulate unevaluated expressions.
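To make that concrete, here is a minimal sketch (my example, with made-up names; without optimizations, the lazy version leaks space while the strict one runs in constant space):
{-# LANGUAGE BangPatterns #-}

sumLazy :: Num a => [a] -> a
sumLazy = go 0
  where
    go acc []     = acc
    go acc (x:xs) = go (acc + x) xs    -- tail recursive, but acc + x is never forced: a linear chain of thunks

sumStrict :: Num a => [a] -> a
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs   -- the bang forces the accumulator at each step: constant space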
Takeaway lessons:
- Use the functions from Prelude and Data.List as much as possible, because they've been carefully written to exploit performance features like list fusion.
- If you do write a fold yourself, try foldr first.
- If you really need foldl, use foldl' (the version that avoids building up unevaluated expressions); a minimal example follows.
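As a small illustration of that last point (my example, not the answer's): foldl' lives in Data.List and is a drop-in replacement for foldl.
import Data.List (foldl')

-- Strict left fold: the accumulator is forced at every step, so there is no
-- thunk build-up even on a long list.
productStrict :: Num a => [a] -> a
productStrict = foldl' (*) 1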
Upvotes: 6
Reputation: 74334
A fairly canonical source on this question is Foldr Foldl Foldl' on the Haskell Wiki. In summary, depending on how strictly you can combine elements of the list and on what the result of your fold is, you may decide to choose either foldr or foldl'. It's rarely the right choice to choose foldl.
Generally, this is a good example of how you have to keep in mind the laziness and strictness of your functions in order to compute efficiently in Haskell. In strict languages, tail-recursive definitions and TCO are the name of the game, but those kinds of definitions may be too "unproductive" (not lazy enough) for Haskell leading to the production of useless thunks and fewer opportunities for optimization.
foldr
If the operation that consumes the result of your fold can operate lazily and your combining function is non-strict in its right argument, then foldr is usually the right choice. The quintessential example of this is the "non-fold" that simply rebuilds the list. First we see that (:) is non-strict in its right argument:
head (1 : undefined)
1
Then here's nonfoldr, written using foldr:
nonfoldr :: [a] -> [a]
nonfoldr = foldr (:) []
Since (:) creates lists lazily, an expression like head . nonfoldr can be very efficient, requiring just one folding step and forcing just the head of the input list.
head (nonfoldr [1,2,3])
head (foldr (:) [] [1,2,3])
head (1 : foldr (:) [] [2,3])
1
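Tying this back to the (1 : undefined) example above (my addition): head . nonfoldr never even looks at the tail of its input.
head (nonfoldr (1 : undefined))
head (foldr (:) [] (1 : undefined))
head (1 : foldr (:) [] undefined)
1 -- the undefined tail is never forced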
A very common place where laziness wins out is in short-circuiting computations. For instance, a membership test of type Eq a => a -> [a] -> Bool can be more productive by returning the moment it sees a match.
lookupr :: Eq a => a -> [a] -> Bool
lookupr x = foldr (\y inRest -> if x == y then True else inRest) False
The short-circuiting occurs because we discard inRest in the first branch of the if. The same thing implemented with foldl' can't do that.
lookupl :: Eq a => a -> [a] -> Bool
lookupl x = foldl' (\wasHere y -> if wasHere then wasHere else x == y) False
lookupr 1 [1,2,3,4]
foldr fn False [1,2,3,4]
if 1 == 1 then True else (foldr fn False [2,3,4])
True
lookupl 1 [1,2,3,4]
foldl' fn False [1,2,3,4]
foldl' fn True [2,3,4]
foldl' fn True [3,4]
foldl' fn True [4]
foldl' fn True []
True
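To see the practical consequence (my extrapolation, in the same hand-evaluation style): on an infinite list lookupr still returns, while lookupl never finishes.
lookupr 1 [1 ..]
foldr fn False [1 ..]
if 1 == 1 then True else (foldr fn False [2 ..])
True
lookupl 1 [1 ..]
foldl' fn False [1 ..]
foldl' fn True [2 ..]
foldl' fn True [3 ..]
... -- never reaches the end of the list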
foldl'
If the consuming operation or the combining function requires that the entire list be processed before it can proceed, then foldl' is usually the right choice. Often the best check for this situation is to ask yourself whether your combining function is strict: if it's strict in its first argument, then the whole list must be forced anyway. The quintessential example of this is sum:
sum :: Num a => [a] -> a
sum = foldl' (+) 0
Since (1 + 2) cannot be reasonably consumed prior to actually doing the addition (Haskell isn't smart enough to know that 1 + 2 >= 1 without first evaluating 1 + 2), we don't get any benefit from using foldr. Instead, we'll use the strict combining behaviour of foldl' to make sure that we evaluate things as eagerly as needed:
sum [1,2,3]
foldl' (+) 0 [1,2,3]
foldl' (+) 1 [2,3]
foldl' (+) 3 [3]
foldl' (+) 6 []
6
Note that if we pick foldl here we get the right answer but not the right behaviour. While foldl has the same associativity as foldl', it doesn't force the combining operation with seq like foldl' does, so it builds up a chain of thunks first.
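For reference, foldl''s behaviour can be written by hand roughly like this (a sketch; the actual Data.List definition differs in details but forces the accumulator the same way):
foldl'' :: (b -> a -> b) -> b -> [a] -> b
foldl'' f z []     = z
foldl'' f z (x:xs) = let z' = f z x in z' `seq` foldl'' f z' xs  -- force the new accumulator before recursing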
sumWrong :: Num a => [a] -> a
sumWrong = foldl (+) 0
sumWrong [1,2,3]
foldl (+) 0 [1,2,3]
foldl (+) (0 + 1) [2,3]
foldl (+) ((0 + 1) + 2) [3]
foldl (+) (((0 + 1) + 2) + 3) []
(((0 + 1) + 2) + 3)
((1 + 2) + 3)
(3 + 3)
6
We get extra, useless thunks (space leak) if we choose foldr
or foldl
when in foldl'
sweet spot and we get extra, useless evaluation (time leak) if we choose foldl'
when foldr
would have been a better choice.
nonfoldl :: [a] -> [a]
nonfoldl = foldl (\acc x -> acc ++ [x]) []
head (nonfoldl [1,2,3])
head (foldl (\acc x -> acc ++ [x]) [] [1,2,3])
head (foldl (\acc x -> acc ++ [x]) [1] [2,3])
head (foldl (\acc x -> acc ++ [x]) [1,2] [3]) -- nonfoldr finished here, O(1)
head (foldl (\acc x -> acc ++ [x]) [1,2,3] [])
head [1,2,3]
1 -- this is O(n)
sumR :: Num a => [a] -> a
sumR = foldr (+) 0
sumR [1,2,3]
foldr (+) 0 [1,2,3]
1 + foldr (+) 0 [2, 3] -- thunks begin
1 + (2 + foldr (+) 0 [3])
1 + (2 + (3 + foldr (+) 0 [])) -- O(n) thunks hanging about
1 + (2 + (3 + 0))
1 + (2 + 3)
1 + 5
6 -- forced O(n) thunks
Upvotes: 38
Reputation: 2634
(Please read the comments on this post. Some interesting points were made and what I wrote here isn't completely true!)
It depends. foldl is usually faster since it's tail recursive, meaning (sort of) that all computation is done in place and there's no call stack growth. For reference:
foldl f a [] = a
foldl f a (x:xs) = foldl f (f a x) xs
To run foldr we do need a call stack, since there is a "pending" computation for f.
foldr f a [] = a
foldr f a (x:xs) = f x (foldr f a xs)
On the other hand, foldr can short-circuit if f is not strict in its second argument (the recursive result). It's lazier in a way. For example, suppose we define a new product:
prod 0 x = 0
prod x 0 = 0
prod x y = x*y
Then
foldr prod 1 [0..n]
takes constant time in n, but
foldl prod 1 [0..n]
takes linear time. (This will not work with (*), since it does not check whether either argument is 0, so we define a non-strict version ourselves. Thanks to Ingo and Daniel Lyons for pointing this out in the comments.)
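For the curious, a small runnable packaging of this example (my wrapping; the upper bound 100000 is arbitrary):
prod :: (Eq a, Num a) => a -> a -> a
prod 0 _ = 0
prod _ 0 = 0
prod x y = x * y

main :: IO ()
main = do
  print (foldr prod 1 [0 .. 100000 :: Integer])  -- 0, after looking only at the first element
  print (foldl prod 1 [0 .. 100000 :: Integer])  -- also 0, but only after walking the entire list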
Upvotes: 3