Reputation: 52957
I've read some monad tutorials and they pretty much propose that monads are necessary to implement sequencing of operations. But the same can be accomplished with let*:
(let* ((a 1)
       (b (+ a 1))
       (c (+ b 1)))
  c)
is desugared to (not actually; I wrote this braindead for illustration purposes, see Will Ness's comment):
((lambda (c) c)
((lambda (b) (+ b 1))
((lambda (a) (+ a 1))
1)))
So how can monads be necessary to implement sequencing, if this macro can already do it?
Upvotes: 3
Views: 1231
Reputation: 850
I think it’s worth adding a subtle point here.
The monad design pattern and macros offer similar results, which, I think, is what led to this question.
Monads wrap some underlying type with a context that we can control, and in most FP languages they offer a way to extract repeated code, to branch off based on some condition, and alternate ways to sequence two functions.
However, in Lisps we don't need them, because macros are much more powerful and provide the same functionality or more: we can rearrange code however we like. Thus in Clojure we use the threading macros, the cond macro, the let macro, and so on.
So, as we can see, monads, lawful or unlawful, are very useful in languages with limited code-manipulation support, but if you're using something Lispy, macros are often much more useful.
Upvotes: 0
Reputation: 1095
(Edited for clarity to more directly address the question.)
Monads and macros are different sorts of things. As I'll show below, do notation for the Identity monad in Haskell is similar to letrec in the desugaring. Then we'll see how the desugaring is not equivalent, due to lazy evaluation in Haskell.
The summary, if you don't care to read any further, is that a macro rewrites the syntax tree (in Lisps), while in Haskell, monad do-notation is syntax sugar. Monads themselves are just types that have an associated dictionary of functions, so the Identity monad is not the same as IO, which is not the same as STM or Either or Cont.
First, let's assume Lisps and Haskell have the same execution strategy (strict, call by value). Under this assumption, there is not much difference between Haskell's Monad do-notation and the average Lisp's letrec. For all of the Haskell I'm going to be showing, I'm going to have this import statement at the top:
import Control.Monad.Identity
So, consider the following function:
f :: Identity Int
f = do
  a <- return 1
  b <- return (a + 1)
  c <- return (b + 1)
  return c
I am using the Identity monad, which behaves the most similarly to letrec. The do notation will desugar to this:
fDesugar :: Identity Int
fDesugar =
  (return 1 >>= \a ->
    (return (a + 1) >>= \b ->
      (return (b + 1) >>= \c ->
        return c)))
This looks like your Lisp example, but it uses infix notation, which results in a different order of arguments. We can rewrite it to be more Lisp-like:
fLisp :: Identity Int
fLisp =
  ((=<<) (\c -> return c)
    ((=<<) (\b -> return (b + 1))
      ((=<<) (\a -> return (a + 1))
        (return 1))))
Now, we can "run" this monad using runIdentity. What does that mean? Why do we need to "run" monads?
Monads are defined on type constructors. Some tutorials describe them as burrito wrappers, but I'm just going to say that a monad can be defined on a lot of types-that-take-types. One example of a monad is the one I've already used, Identity; another monad is IO. Each of these types is actually a value of kind * -> *, that is, it takes a type (of kind *) and returns a new type. So a monad is defined for Identity, which can then be used with Identity Int, Identity String, Identity Foo, and so on.
Because monads are defined per type, when evaluating something of type Identity T, we need to know how to execute that monad. The sequencing, so to speak, is dependent on the types.
Identity is a very simple constructor and monad. The constructor is:
newtype Identity a = Identity { runIdentity :: a }
That is, to construct a value of type Identity T, we just need to pass in a T. And to get it back out, we just apply runIdentity to it. That is:
makeIdentity :: a -> Identity a
makeIdentity a = Identity a
runIdentity :: Identity a -> a
runIdentity (Identity a) = a
To understand fLisp, fDesugar, or f, we need to know what >>= and return mean in the context of the Identity monad:
instance Monad Identity where
  return a = Identity a
  m >>= k  = k (runIdentity m)
With this knowledge, we can derive =<< for Identity:
k =<< m = k (runIdentity m)
With this in hand, we can rewrite fLisp using our definitions. In Haskell parlance, we are using equational reasoning: we can substitute the left-hand side of our definitions above with the right-hand side. So we substitute (=<<) a b with a (runIdentity b), and return a with Identity a:
fLisp' :: Identity Int
fLisp' =
  ((\c -> Identity c)
    (runIdentity ((\b -> Identity (b + 1))
      (runIdentity ((\a -> Identity (a + 1))
        (runIdentity (Identity 1)))))))
But runIdentity just strips Identity from whatever it is applied to. When runIdentity is applied to something of the form:
runIdentity ((\a -> Identity (f a)) b)
we can move it inside the lambda, giving (\a -> runIdentity (Identity (f a))) b, and reduce that to (\a -> f a) b, or just f b. Let's do all those steps:
fLisp'' :: Identity Int
fLisp'' =
  ((\c -> Identity c)
    ((\b -> (b + 1))
      ((\a -> (a + 1))
        1)))
We can finally lift the last Identity out of the first function, and we get:
fLisp''' :: Identity Int
fLisp''' =
  Identity
    ((\c -> c)
      ((\b -> (b + 1))
        ((\a -> (a + 1))
          1)))
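As a sanity check (my own addition, assuming GHC with Control.Monad.Identity available), we can confirm that the original do-notation version and the fully rewritten version agree:

```haskell
import Control.Monad.Identity

-- The original do-notation version.
f :: Identity Int
f = do
  a <- return 1
  b <- return (a + 1)
  c <- return (b + 1)
  return c

-- The fully rewritten version, after equational reasoning.
fLisp''' :: Identity Int
fLisp''' = Identity ((\c -> c) ((\b -> b + 1) ((\a -> a + 1) 1)))

main :: IO ()
main = print (runIdentity f == runIdentity fLisp''')  -- True
```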
So, clearly Identity behaves identically to letrec, right?
Monads are defined per type, so Identity is a lot like your macro and like letrec, but where the two differ in a critical way is that Haskell does not behave as we assumed: Haskell does not perform strict, call-by-value evaluation.
We can prove this by writing it in Haskell. blowUp is a raw error value; when evaluated, it will raise an exception and end execution.
blowUp :: a
blowUp = error "I will crash Haskell!"
riskyPair :: (Int, a)
riskyPair = (5, blowUp)
fst' :: (a, b) -> a
fst' = \(a, b) -> a
five :: Int
five = fst' riskyPair
When five is evaluated, the result is indeed 5. Despite coming from the tainted value riskyPair, we are able to safely evaluate fst' riskyPair without blowing up. Try forcing the second component of riskyPair, though, and you'll see the exception.
Consider the following function g. It uses the Identity monad, but what does it do?
g :: Identity Int
g = do
  a <- return 1
  b <- return (a + 1)
  c <- return (b + 1)
  d <- return blowUp
  return c
Curiously, runIdentity g returns 3, the same as runIdentity f.
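Here is a self-contained version you can run to check this (my own packaging of the answer's definitions):

```haskell
import Control.Monad.Identity

blowUp :: a
blowUp = error "I will crash Haskell!"

g :: Identity Int
g = do
  a <- return 1
  b <- return (a + 1)
  c <- return (b + 1)
  d <- return blowUp  -- bound, but never forced
  return c

main :: IO ()
main = print (runIdentity g)  -- prints 3, no exception
```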
Monads are not macros; not even the Identity monad duplicates the behavior of letrec. I tried this in Racket, and I got an error. I could not even compile the program! Defining g evaluated the error code.
(define g
  (letrec ((a 1)
           (b (+ a 1))
           (c (+ b 1))
           (d (error "I will crash Racket!")))
    c))
In Haskell, evaluation is lazy. If the second value of the pair is never forced (required), then Haskell will not blow up, and fst' riskyPair is "safe" to evaluate. In Lisp, evaluation is strict, and g will always blow up.
Because of Haskell's laziness, many authors will hand-wave monads as the only way of sequencing operations. This is not strictly true! (No pun intended.)
Monads, then, are not just macros for letrec, because the definitions for >>= and return can vary so wildly. We can represent potentially failing computations with the Either monad.
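For instance, here is a sketch (my own example, using the standard Either Monad instance from the Prelude; safeDiv is a made-up helper) in which a failing step short-circuits the rest of the chain:

```haskell
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

calc :: Either String Int
calc = do
  a <- safeDiv 10 2  -- Right 5
  b <- safeDiv a 0   -- Left: the "bind" stops the chain here
  c <- safeDiv 100 b -- never evaluated
  return c

main :: IO ()
main = print calc  -- Left "division by zero"
```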
I tend to think of monads not as ways of sequencing operations, but more like a user-defined "semicolon" operator. In some monads (IO), the act of binding is a lot like a semicolon in C or C++. In other monads, the semicolons (binds) do other things, like chain together error conditions or modify control flow.
See this example of the Cont monad for reference. Notice that in this case, inside the Cont monad, the flow of execution can be altered by statements inside the do block.
Upvotes: 5
Reputation: 5406
A Monad is not a macro. A Monad is the existence of two functions with a particular correspondence between them. So it does not do away with the lambdas, but specifies a restriction upon them.
I wouldn't say that monads "sequence" operations; that is not always what they do. (r -> a) is a monad in a, and it certainly does not "sequence" anything. ((a -> r) -> a) is a monad in a, too.
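To make that concrete (my own sketch): (r -> a), written ((->) r) in Haskell, is the "reader" monad. Its bind threads one shared argument to every step; nothing is sequenced in the effectful sense:

```haskell
-- mean lives in the ((->) [Double]) monad: each "step" is just a
-- function that receives the same shared list.
mean :: [Double] -> Double
mean = do
  s <- sum                       -- s = sum of the shared list
  n <- fmap fromIntegral length  -- n = its length, as a Double
  return (s / n)

main :: IO ()
main = print (mean [1, 2, 3, 4])  -- 2.5
```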
I suggest the focus of understanding monads should really be on understanding the meaning of those two operations and the laws. Usually the tutorials talk about return and >>=, but the epiphany is in understanding <=<, which can be used to express >>=.
In "plain" types you define composition as:
(.) :: (b->c) -> (a->b) -> (a->c)
then write
f . g
Using monads, you define composition as:
(<=<) :: (Monad m) => (b->m c) -> (a->m b) -> (a->m c)
then write
f <=< g
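Concretely (my own example; <=< comes from Control.Monad and readMaybe from Text.Read), two Maybe-returning functions compose with <=< exactly the way plain functions compose with (.):

```haskell
import Control.Monad ((<=<))
import Text.Read (readMaybe)

parse :: String -> Maybe Int
parse = readMaybe

half :: Int -> Maybe Int
half n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- Kleisli composition: half <=< parse :: String -> Maybe Int
parseThenHalve :: String -> Maybe Int
parseThenHalve = half <=< parse

main :: IO ()
main = do
  print (parseThenHalve "10")   -- Just 5
  print (parseThenHalve "7")    -- Nothing (odd)
  print (parseThenHalve "oops") -- Nothing (parse fails)
```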
Upvotes: 5
Reputation: 10783
A monad is just a set of laws defined on a type. Here is what the laws would look like in Scheme notation (return and >>= are the monadic functions):
(>>= (return a) k) == (k a) ; Left identity
(>>= m return) == m ; Right identity
(>>= m (lambda (x) (>>= (k x) h))) == (>>= (>>= m k) h) ; Associativity
Translating this gets tricky, though, because Scheme is dynamically typed: >>= and return have different behavior depending on the types involved.
A simple example is the Maybe monad, which could look something like this in Scheme:
(define (Just x)
  (cons 'Just x))

(define Nothing 'Nothing)

(define return Just)

(define (>>= m f)
  (if (eq? m 'Nothing)
      'Nothing
      (f (cdr m))))
This can be considered to represent a computation that might fail. Here are some examples of the Maybe monad:
(>>= (Just 1) (lambda (x) (return (+ x 5)))) == '(Just . 6)
(>>= Nothing (lambda (x) (return (* x 10)))) == 'Nothing
(>>= (Just 5) (lambda (x) Nothing)) == 'Nothing
Note how, if any of the subcomputations results in Nothing, the entire computation results in Nothing. This is the major feature of the Maybe monad.
Another common example is the list monad, which could look something like this:
(define (return x) (list x))

(define (>>= xs f)
  (apply append (map f xs))) ; concatenate one level of per-element results
This monad can be thought of as a nondeterministic computation (that is, a computation that might take on several possible values). Here are some examples:
(>>= '(1 2 3) (lambda (x) (return (* x 10)))) == '(10 20 30)
(>>= '(5 6 7) (lambda (x) (list (* x 10) (+ x 2)))) == '(50 7 60 8 70 9)
Note: I would highly recommend learning about functors (in the sense that the word is used in Haskell) and applicative functors before really trying to learn monads. These concepts build on each other.
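To sketch that progression (a Haskell illustration of my own; this answer's code is Scheme, but the hierarchy is the same idea), each class adds power over the previous one:

```haskell
-- Functor: map a pure function over the context.
ex1 :: Maybe Int
ex1 = fmap (+ 1) (Just 1)        -- Just 2

-- Applicative: apply a function across several wrapped values.
ex2 :: Maybe Int
ex2 = (+) <$> Just 1 <*> Just 2  -- Just 3

-- Monad: the function itself decides the context of the result.
ex3 :: Maybe Int
ex3 = Just 3 >>= \x -> if even x then Just x else Nothing  -- Nothing

main :: IO ()
main = mapM_ print [ex1, ex2, ex3]
```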
Upvotes: 7