user3677101

Why is Haskell list concatenation more efficient when it's right-associated?

I'm learning the basics of Haskell and have come across many tutorials saying that concatenating lists with ++ is more efficient when the appends are right-associated than when they are left-associated, but I can't seem to understand why.

For example, why this

a ++ (b ++ (c ++ (d ++ (e ++ f))))

is more efficient than

((((a ++ b) ++ c) ++ d) ++ e) ++ f

Upvotes: 5

Views: 540

Answers (3)

J. Abrahamson

Reputation: 74334

One short answer is that the left-nested version is less lazy. Another is that it does unneeded work.

Haskell is lazy, so if you want to understand operational characteristics you have to examine values under deconstruction. Or, less technically, we ought to consume the list in order to see how it performs. Let's compute take 3 on a simpler example.

take 3 ([1] ++ ([2] ++ [3]))      vs.      take 3 (([1] ++ [2]) ++ [3])

We'll step through the reductions, updating the expression each time a case examination forces evaluation. To keep the clutter down, I won't unroll whole definitions but will instead just step through the argument reductions they force. For completeness, here are definitions of take and (++):

take 0 _      = []
take n []     = []
take n (x:xs) = x : take (n-1) xs

[]     ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)

And we note that for n > 0, take n examines its second (list) argument, while (++) only ever examines its first argument.
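
A quick sanity check of that claim in GHCi (a supplementary example, not part of the reductions below):

head ([1] ++ undefined)   -- 1: the second argument of (++) is never examined
head (undefined ++ [1])   -- error: the first argument must be examined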

take 3 ([1] ++ ([2] ++ [3]))
take 3 (1 : ([] ++ ([2] ++ [3])))
1 : take 2 ([] ++ ([2] ++ [3]))
1 : take 2 ([2] ++ [3])
1 : take 2 (2 : ([] ++ [3]))
1 : 2 : take 1 ([] ++ [3])        -- *
1 : 2 : take 1 [3]
1 : 2 : take 1 (3 : [])
1 : 2 : 3 : take 0 []
1 : 2 : 3 : []

As this operationalization shows, take is able to lazily consume values off the computation of (++) as soon as they are available, returning the head of the result after just two steps.

Compare the other associativity: it doesn't produce the head of the result until the 3rd step, and deeper nesting performs worse.

take 3 (([1] ++ [2]) ++ [3])
take 3 ((1 : ([] ++ [2])) ++ [3])
take 3 (1 : (([] ++ [2]) ++ [3]))
1 : take 2 (([] ++ [2]) ++ [3])     -- *
1 : take 2 ([2] ++ [3])
1 : take 2 (2 : ([] ++ [3]))
1 : 2 : take 1 ([] ++ [3])          -- *
1 : 2 : take 1 [3]
1 : 2 : take 1 (3 : [])
1 : 2 : 3 : take 0 []
1 : 2 : 3 : []

Here we see that take is forced to lift the results of evaluating the left-nested calls "up to the top" before it can return partial results. This lifting is exacerbated by deeper nesting and by longer left-hand arguments to (++). It's also worth noting that in the starred steps we're appending an empty list on the left side. In the latter example this occurs at an intermediate, nested position, ([] ++ [2]) ++ [3], while in the former it occurs at the top of the whole computation, [] ++ ([2] ++ [3]), exemplifying the extra, unnecessary work done on the LHS of the nested appends.
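
To see this effect at scale, here's a small supplementary sketch (leftAssoc and rightAssoc are hypothetical names): foldl (++) nests every append to the left, while foldr (++) nests them to the right, so the cost difference becomes measurable in GHCi.

leftAssoc, rightAssoc :: Int -> [Int]
leftAssoc  n = foldl (++) [] [[i] | i <- [1..n]]  -- (([] ++ [1]) ++ [2]) ++ ...
rightAssoc n = foldr (++) [] [[i] | i <- [1..n]]  -- [1] ++ ([2] ++ ([3] ++ ...

-- ghci> length (rightAssoc 10000)  -- linear: each element is copied once
-- ghci> length (leftAssoc  10000)  -- quadratic: early elements are re-copied at every append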

Upvotes: 10

bheklilr

Reputation: 54058

It comes down to how lists and (++) are implemented. You can think of lists as being defined like

data List a = Empty | Cons a (List a)

Just replace [] with Empty and : with Cons. This is a very simple definition of a singly linked list in Haskell. Singly linked lists have a concatenation time of O(n), with n being the length of the first list. To understand why, recall that for a linked list you hold a reference only to the head, or first element, and to reach the end you have to walk down the list, following each cell to its successor.
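
To make that concrete, here's a minimal sketch of concatenation on this custom type (append is a hypothetical name; it mirrors the Prelude's (++)):

append :: List a -> List a -> List a
append Empty       ys = ys                     -- reached the end of the first list
append (Cons x xs) ys = Cons x (append xs ys)  -- copy one cell per element of the first list

Note that append recurses only on its first argument: the second list is shared as-is, never walked, so the O(n) cost is paid entirely on the left.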

So for every concatenation, the program has to walk down the entire length of the first list. If you have the lists a, b, c, and d with lengths n1, n2, n3, and n4 respectively, then for the expression

((a ++ b) ++ c) ++ d

It first walks down a to construct a ++ b, storing this result as x; this takes n1 steps since a has n1 elements. You're left with

(x ++ c) ++ d

Now the program walks down x to construct x ++ c, storing this result as y in n1 + n2 steps (this time it has to walk down the elements of both a and b). You're left with

y ++ d

Now y is walked down to perform the concatenation, taking n1 + n2 + n3 steps, for a total of n1 + (n1 + n2) + (n1 + n2 + n3) = 3n1 + 2n2 + n3 steps.

For the expression

a ++ (b ++ (c ++ d))

Evaluation starts at the innermost parentheses, constructing c ++ d -> x in n3 steps, resulting in

a ++ (b ++ x)

Then b ++ x -> y in n2 steps, resulting in

a ++ y

Which is finally collapsed in n1 steps, for a total of n1 + n2 + n3 steps, definitely fewer than 3n1 + 2n2 + n3.
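
Those totals generalize to any number of lists. As a supplementary sketch (leftCost and rightCost are hypothetical helpers), the step counts can be computed from the lengths alone:

-- Step counts for left- vs right-nested (++), given the lengths of the
-- lists being joined (assumes a non-empty list of lengths).
leftCost, rightCost :: [Int] -> Int
leftCost  ns = sum (init (scanl1 (+) ns))  -- n1 + (n1+n2) + (n1+n2+n3) + ...
rightCost ns = sum (init ns)               -- n1 + n2 + ... (the last list is never walked)

-- ghci> leftCost  [10, 20, 30, 40]  -- 100, i.e. 3*10 + 2*20 + 30
-- ghci> rightCost [10, 20, 30, 40]  -- 60,  i.e. 10 + 20 + 30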

Upvotes: 13

swang

Reputation: 5239

Here's my understanding:

In the first case,

a ++ (b ++ (c ++ (d ++ (e ++ f))))

When you want to do a ++ b, a has already been constructed; all you need is to append b to a. The same goes for a ++ b ++ c: all you need is to append c to the result of a ++ b, and so on.

While in the second case,

((((a ++ b) ++ c) ++ d) ++ e) ++ f

When you want to do a ++ b ++ c, you have to do a ++ b first and then append c. To do a ++ b ++ c ++ d, you have to walk over the result of a ++ b again, then over a ++ b ++ c, and only then append d.

So there's a lot of repeated traversal in the second case, hence it's slower.
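
A quick way to observe the repeated traversal in GHCi (an illustrative sketch; exact timings will vary):

-- ghci> :set +s                          -- show time and memory per evaluation
-- ghci> let a = [1..100000] :: [Int]
-- ghci> length (((a ++ a) ++ a) ++ a)    -- left-nested: a's cells are re-walked at each append
-- ghci> length (a ++ (a ++ (a ++ a)))    -- right-nested: each list is walked essentially once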

Upvotes: 0
