Paul

Reputation: 382

Haskell laziness and seq

I am trying to understand laziness and seq in Haskell:


main = do
  f [1, 2, 3, 4] 0
  f' [1, 2, 3, 4] 0

g x = 42 * x -- this could be whatever

-- 1. lazy
f [] v = print v -- base case
f (x:xs) v = f xs v'
  where v' = v + g x

-- 2. strict
f' [] v = print v -- base case
f' (x:xs) v = seq v' f' xs v'
  where v' = v + g x

I know foldl and foldl' do what I want in these cases. I am more interested in understanding how this is implemented.

Upvotes: 4

Views: 171

Answers (1)

chi

Reputation: 116139

Yes, you are right.

Operationally, in case 1 the variable v is bound to a thunk, an unevaluated expression that grows larger at every recursive call until print finally forces its evaluation. The memory footprint is therefore not constant.
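
To make that concrete, here is a sketch of what the accumulator looks like at each recursive call of f (not literal GHC output, just the expression v is bound to at each step):

f [1, 2, 3, 4] 0
f [2, 3, 4]    (0 + g 1)
f [3, 4]       ((0 + g 1) + g 2)
f [4]          (((0 + g 1) + g 2) + g 3)
f []           ((((0 + g 1) + g 2) + g 3) + g 4)
-- only now does print force the whole chain and output 420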

In case 2, the variable v is always bound to an already evaluated number, so the recursion runs in constant space.
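
The same sketch for f' shows seq forcing the accumulator before each call, so it is a plain number at every step:

f' [1, 2, 3, 4] 0
f' [2, 3, 4]   42   -- 0 + g 1, forced by seq
f' [3, 4]      126  -- 42 + g 2
f' [4]         252  -- 126 + g 3
f' []          420  -- 252 + g 4; print outputs 420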

By the way, it is idiomatic to write seq v' f' xs v' as

v' `seq` f' xs v'

(which is equivalent: the original spelling parses as ((seq v' f') xs) v', and since the function part of an application is always evaluated, v' is still forced before f' runs) or, leveraging $!

f' xs $! v'
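
The two spellings coincide because $! is defined, roughly speaking, in terms of seq (the actual definition in base is worded slightly differently):

($!) :: (a -> b) -> a -> b
f $! x = x `seq` f x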

Another common choice is to use a bang pattern and forget about seq altogether:

{-# LANGUAGE BangPatterns #-} -- at the top of the file; GHC2021 already enables this

f' [] !v = print v -- base case
f' (x:xs) !v = f' xs v'
  where v' = v + g x

The bangs ! ensure that v is demanded immediately, so even if it is a thunk, it is evaluated before proceeding. This also ensures a constant memory footprint.

As a quick tool for investigating strictness, you can try Debug.Trace.trace, which prints a debug message when the expression it wraps is actually evaluated. It is not meant for general output, but for this kind of debugging it is fine.
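
For instance, you could wrap g with trace (a hypothetical tweak to the code above; the message text is arbitrary) and watch when the multiplications are actually demanded:

import Debug.Trace (trace)

g :: Int -> Int
g x = trace ("g evaluated at " ++ show x) (42 * x)

With the lazy f, all four messages appear only when print forces the final thunk; with the strict f', one message appears per recursion step.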

Upvotes: 6
