chs

Reputation: 652

Non-lazy branch of GHC

I have heard there is a branch of GHC that compiles to strict code by default, while laziness can be enabled by annotation. (IIRC, he said a financial company develops the branch and uses it for production code.) Is that true? I can't find it.

The person also suggested that the opinion that strict evaluation is more practical than lazy evaluation (by default) is gaining more and more acceptance. I don't find this confirmed on the Haskell mailing list, but maybe that is because people there are not that practice-oriented?

All I find on strict Haskell are explicit things like $! and rnf. While I find lazy evaluation very elegant, I'd like to develop a program in Haskell where I want to avoid space leaks and would like to have predictable performance.

Disclaimer: I'm not making a case for strictness, I'd just like to have a look at strict Haskell or something like that.

Upvotes: 13

Views: 1711

Answers (5)

user8174234

Reputation:

I have heard there is a branch of GHC that compiles to strict code by default whereas laziness can be enabled by annotation

You might try GHC 8.0.2, 8.2.2, or 8.4.1, i.e. any of the last three releases. They support a {-# LANGUAGE Strict #-} pragma, which makes bindings in the annotated module strict by default and is intended for numerical code and the like.
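A minimal sketch of how the pragma might be used (the function name is illustrative, not from the answer):

```haskell
{-# LANGUAGE Strict #-}

-- With Strict on, bindings and function arguments in this module are
-- evaluated eagerly; prefixing a pattern with ~ restores laziness
-- for that particular binding.
sumAndLength :: [Double] -> (Double, Int)
sumAndLength = go 0 0
  where
    -- acc and len are forced on every step, so no thunk chain builds up
    go acc len []     = (acc, len)
    go acc len (x:xs) = go (acc + x) (len + 1) xs
```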

financial company develops the branch and uses it for production code.) Is that true? I can't find it.

Standard Chartered indeed develops its own Haskell compiler. I would not expect them to offer it to the public. I am not sure it is strict by default.

The person also suggested that the opinion that strict evaluation is more practical than lazy evaluation

This is neither meaningful as stated nor supported by evidence. In fact, the llvm-hs-pure package introduced a bug by choosing the strict version of the State monad rather than the lazy one. Moreover, something like the parallel-io package would not work under strict evaluation.

I'd like to develop a a program in Haskell where I want to avoid space leaks and would like to have predictable performance.

I have not been bitten by a space leak caused by laziness in the past two years. I would suggest instead using benchmarks and profiling your application. Writing Haskell with predictable performance is easier than adding strictness annotations and hoping your program still compiles. You will be much better served by understanding your program, profiling, and learning functional data structures than mindlessly adding compiler pragmas to improve your program's performance.
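To illustrate the point about understanding your program rather than sprinkling annotations, here is the classic space-leak-and-fix pair (a standard example, not something from this answer):

```haskell
import Data.List (foldl')

-- Lazy foldl builds the whole chain ((0+1)+2)+... as unevaluated
-- thunks before anything is forced, which can blow the heap on
-- large inputs.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- foldl' forces the accumulator at each step and runs in constant
-- space; the fix is understanding the evaluation order, not a pragma.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0
```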

Upvotes: 3

shapr

Reputation: 1671

It sounds like you've heard about Robert Ennals' PhD thesis on speculative evaluation with GHC. He created a fork of GHC called the "spec_eval" fork, where speculative evaluation was done. Because Haskell is non-strict rather than explicitly lazy, spec_eval was strict up to the point where it actually made a difference. While it was faster in all cases, it required large changes to GHC and was never merged.

This question has sort of been answered before on this site.

Upvotes: 5

Dan Burton

Reputation: 53665

Some good things have been said about why you shouldn't shy away from Haskell's laziness, but I feel that the original question remains unanswered.

Function application in Haskell is non-strict; that is, a function argument is evaluated only when required.

~ Haskell Report 2010 > Predefined Types and Classes # Strict Evaluation

This is a little misleading, however. Implementations may evaluate function arguments before they are required, but only to a limited extent: they have to preserve non-strict semantics. So if the expression for an argument results in an infinite loop, and that argument is not used, then a function call with that argument must not loop forever.
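For example (the names here are illustrative):

```haskell
-- ignoreSecond never forces its second argument, so passing it a
-- diverging expression is fine under non-strict semantics; a fully
-- strict implementation would loop forever on the same call.
ignoreSecond :: a -> b -> a
ignoreSecond x _ = x

loop :: Int
loop = loop  -- a well-typed infinite loop
```

ignoreSecond 42 loop must evaluate to 42 in any conforming implementation, because the second argument is never demanded.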

So you are allowed to implement Haskell in a way which is not fully "Lazy", but it nevertheless cannot be "Strict" either. This seems like a contradiction at first blush, but it is not. A few related topics you might want to check out:

  • Eager Haskell, an implementation of the Haskell programming language which by default uses eager evaluation. I believe this is what you were thinking of (though it's not a branch of GHC).
  • Speculative Execution and Speculative Parallelism (see e.g. the speculation package).
  • Optimistic Evaluation, a paper by SPJ about speeding up GHC with strictness optimizations.

Upvotes: 3

none

Reputation: 21

If I understand it correctly, a strict Haskell could not have monadic I/O as we know it. The idea in Haskell is that all Haskell code is pure (including IO actions, which work like the State monad), and main hands a value of type IO () to the runtime, which then repeatedly forces it via the sequencing operator >>=.
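A small sketch of that sequencing (the strings and names are illustrative):

```haskell
-- do-notation:
greetDo :: IO ()
greetDo = do
  putStr "Hello, "
  putStrLn "world"

-- ...is sugar for explicit >>= sequencing; main simply returns such a
-- value, and the runtime forces it one action at a time:
greetBind :: IO ()
greetBind = putStr "Hello, " >>= \_ -> putStrLn "world"
```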

For a counterpoint to Tekmo's post you might look at Robert Harper's blog, e.g. http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/ and related posts. It goes both ways.

In my experience, laziness is difficult at first but then you get used to it and it's fine.

The classic advocacy piece for laziness is Hughes's paper "Why Functional Programming Matters" which you should be able to find easily.

Upvotes: 2

Gabriella Gonzalez

Reputation: 35089

You're looking for Disciple.

So there are two kinds of laziness to distinguish in Haskell. There's lazy I/O, which is an abomination and is solved by iteratee libraries (Shameless plug: including my pipes library). Then there's laziness in pure computations, which is still open to debate, but I'll try to summarize the key advantages of laziness since you are already familiar with the disadvantage:

Laziness is more efficient

A simple example is:

anyTrue :: [Bool] -> Bool
anyTrue = foldr (||) False

anyTrue checks whether any value in a list is True (the Prelude's any takes a predicate, so the definition here uses a fresh name). It only evaluates the elements up to the first True, so it doesn't matter if the list is very long.

Laziness only computes as much as it has to, meaning that if you chain together two lazy computations, it can actually improve the time complexity of the resulting computation. This Stack Overflow comment gives another good example of this.
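A standard illustration of this composition effect (assuming the well-known take-of-sort example, which is not spelled out in the answer):

```haskell
import Data.List (sort)

-- With a lazy sort, composing take with sort does not pay for fully
-- sorting the rest of the list: only enough of the sorted result is
-- demanded to produce the first three elements.
smallestThree :: Ord a => [a] -> [a]
smallestThree = take 3 . sort
```

Neither take nor sort was written with the other in mind, yet the composition does less work than "sort everything, then truncate" would suggest.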

This is actually the same reason why iteratee libraries are very resource-efficient. They only do as much work as they have to generate results, and this leads to very efficient memory and disk usage with very easy-to-use semantics.

Laziness is inherently more composable

This is well known by people who have programmed in both strict and lazy functional languages, but I actually inadvertently demonstrated a limited proof of this while working on the pipes library, where the lazy version is the only one that permits a Category instance. Pipes actually work in any monad, including the pure Identity monad, so my proofs translate to pure code as well.

This is the true reason why I believe that laziness in general is the future of programming; however, I still think it is an open question whether or not Haskell implemented laziness "right".

Upvotes: 12
