Reputation: 30615
I understand that if I had the statement c = a AND b, and a was false, then the compiler wouldn't bother evaluating b; it would already know the result because a is false.
However, what if I had a function which was recursive and ANDing together the recursive calls.
So myfunc input1 input2 = and [myfunc input1, myfunc input2]
If a call at any point in the above tree of recursive calls returned false, would the remaining recursive calls terminate and the value false simply be returned to the original calling point?
In other words, would it use lazy evaluation above?
Upvotes: 5
Views: 591
Reputation: 16107
Yes. In fact, one implementation of and is recursive (and doesn't have any strictness added):
and :: [Bool] -> Bool
and [] = True
and (x:xs) = x && and xs
To show that this works, you can pass an infinite list of Falses to and and see that it returns:
Prelude> and (repeat False)
False
Note, however, that this does not work with an infinite list of Trues, because it will forever look for a False but never find one.
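You can also check the short-circuiting directly in GHCi; and stops as soon as it reaches the first False, even though the rest of the list here is infinite (the particular list is just an illustration):
Prelude> and ([True, True, False] ++ repeat True)
False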
Upvotes: 8
Reputation: 48687
If a call at any point in the above tree of recursive calls returned false, would the remaining recursive calls terminate and the value false simply be returned to the original calling point?
You can write recursive calls that rely upon that behaviour. For example, the following exists function in OCaml returns true if applying the given predicate function p to any element of the list returns true:
let rec exists p = function
  | [] -> false
  | x :: xs -> p x || exists p xs
Thanks to short-circuit evaluation, this will stop iterating over the list as soon as p returns true for the first time.
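The same idea exists in Haskell's Prelude as any, which, thanks to laziness, also stops at the first element for which the predicate returns True, even on an infinite list (the list below is just an illustration):
Prelude> any even ([1, 3, 4] ++ repeat 5)
True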
In other words, would it use lazy evaluation above?
Note that this is short-circuit evaluation and not laziness.
Upvotes: 2
Reputation: 4985
The answer, in short, is that yes, Haskell will be lazy in recursive functions.
The laziness of && isn't a special case: it's a consequence of its definition:
(&&) :: Bool -> Bool -> Bool
True && y = y
False && _ = False
Here, Haskell's laziness means that it can match on the first argument to &&, and the second argument needn't be evaluated to know the outcome.
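You can see this in GHCi: the second argument can even be undefined, and because the first argument is False it is never forced:
Prelude> False && undefined
False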
For a recursive function like and, we have the definition:
and :: [Bool] -> Bool
and [] = True
and (b:bs) = b && and bs
This is a recursive definition, and Haskell is lazy here in that the values of b and bs in a non-empty list will only be evaluated if needed: in this case, the definition of && forces us to look at the first element b, and if it is False then the rest, bs, doesn't have to be evaluated.
The lesson here is that laziness is something that Haskell provides by virtue of its pattern matching: when enough input is consumed to match a pattern, then the rest can be left unevaluated until it is demanded.
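The same trick shows this for and: when the head of the list is False, the tail (here left as undefined) is never evaluated:
Prelude> and (False : undefined)
False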
Upvotes: 6
Reputation: 16069
I'm new to Haskell, but yes: it would evaluate the entire tree of recursive calls on the left side of the and, and only if they all finally returned true would it move on to evaluate the recursive calls on the right.
Actually, it wouldn't evaluate either of those until you ended up using the result of myfunc in something like a print, but the above stands once you've forced evaluation of myfunc.
Think of it as passing around a promise to do the work for everything it does. So the result of a call to myfunc isn't the result itself, it's a promise to find out. It hasn't even thought about what that promise means until it has to; only then does it go and figure out what it has to call.
I might be off, but that's how I understand it.
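Here is a small sketch of that "promise" idea, using Debug.Trace only to observe when evaluation actually happens (the message strings and example list are just illustrative):
import Debug.Trace (trace)

main :: IO ()
main = do
  -- Nothing is computed here; 'promise' is just an unevaluated thunk.
  let promise = trace "evaluating now" (and [True, False, True])
  putStrLn "promise created, nothing evaluated yet"
  -- Forcing the result with print triggers the trace message, then prints False.
  print promise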
Upvotes: 2