Reputation: 61041
Why is it that I can do:
1 + 2.0
but when I try:
let a = 1
let b = 2.0
a + b
<interactive>:1:5:
Couldn't match expected type `Integer' with actual type `Double'
In the second argument of `(+)', namely `b'
In the expression: a + b
In an equation for `it': it = a + b
This seems just plain weird! Does it ever trip you up?
P.S.: I know that "1" and "2.0" are polymorphic constants. That is not what worries me. What worries me is why Haskell does one thing in the first case, but another in the second!
Upvotes: 17
Views: 6504
Reputation: 53665
Others have addressed many aspects of this question quite well. I'd like to say a word about the rationale behind why + has the type signature Num a => a -> a -> a.
Firstly, the Num typeclass has no way to convert one arbitrary instance of Num into another. Suppose I have a data type for imaginary numbers; they are still numbers, but you really can't properly convert them into just an Int.
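For reference, the Num class contains (roughly, as in the Prelude) only these methods; note that fromInteger converts into the type, and nothing converts back out:
class Num a where
  (+), (-), (*) :: a -> a -> a
  negate        :: a -> a
  abs           :: a -> a
  signum        :: a -> a
  fromInteger   :: Integer -> a
-- There is no toInteger here; that method lives in the much more
-- restrictive Integral class, which imaginary numbers couldn't support.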
Secondly, which type signature would you prefer?
(+) :: (Num a, Num b) => a -> b -> a
(+) :: (Num a, Num b) => a -> b -> b
(+) :: (Num a, Num b, Num c) => a -> b -> c
After considering the other options, you realize that a -> a -> a is the simplest choice. Polymorphic results (as in the third suggestion above) are cool, but can sometimes be too generic to be used conveniently.
Thirdly, Haskell is not Blub. Most, though arguably not all, design decisions about Haskell do not take into account the conventions and expectations of popular languages. I frequently enjoy saying that the first step to learning Haskell is to unlearn everything you think you know about programming. I'm sure most, if not all, experienced Haskellers have been tripped up by the Num typeclass, and various other curiosities of Haskell, because most learned a more "mainstream" language first. But be patient, you will eventually reach Haskell nirvana. :)
Upvotes: 3
Reputation: 2571
The type signature of (+) is defined as Num a => a -> a -> a, which means that it works on any member of the Num typeclass, but both arguments must be of the same type.
The problem here is with GHCi and the order in which it establishes types, not with Haskell itself. If you were to put either of your examples in a file (using do for the let expressions) it would compile and run fine, because GHC would use the whole function as the context to determine the types of the literals 1 and 2.0.
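For instance, something like this compiles and runs fine as a file, because the use a + b forces both bindings to the same type before defaulting applies:
main :: IO ()
main = do
  let a = 1
      b = 2.0
  print (a + b)  -- prints 3.0; both literals default to Double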
All that's happening in the first case is that GHCi is guessing the types of the numbers you're entering. The most precise is Double, so it just assumes the other one was supposed to be a Double and executes the computation. However, when you use a let expression, it has only one number to work from, so it decides 1 is an Integer and 2.0 is a Double.
EDIT: GHCi isn't really "guessing"; it's using very specific type defaulting rules that are defined in the Haskell Report. You can read a little more about that here.
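Those defaulting rules amount to trying a fixed list of types, in order, for each ambiguous numeric constraint; the Report's built-in list is equivalent to this declaration, which a module may override:
default (Integer, Double)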
Upvotes: 20
Reputation: 183878
The first works because numeric literals are polymorphic (they are interpreted as fromInteger literal and fromRational literal, respectively), so in 1 + 2.0 you really have fromInteger 1 + fromRational 2.0. In the absence of other constraints, the result type defaults to Double.
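As a small sketch of what that desugaring amounts to, written out by hand:
x :: Double
x = fromInteger 1 + fromRational 2.0  -- roughly what GHC makes of 1 + 2.0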
The second does not work because of the monomorphism restriction. If you bind something without a type signature and with a simple pattern binding (name = expression), that entity gets assigned a monomorphic type. For the literal 1, we have a Num constraint; therefore, according to the defaulting rules, its type is defaulted to Integer in the binding let a = 1. Similarly, the fractional literal's type is defaulted to Double.
It will work, by the way, if you :set -XNoMonomorphismRestriction in GHCi.
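A session along those lines (exact output may vary with your GHC version):
Prelude> :set -XNoMonomorphismRestriction
Prelude> let a = 1
Prelude> :t a
a :: Num a => a
Prelude> let b = 2.0
Prelude> a + b
3.0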
The reason for the monomorphism restriction is to prevent loss of sharing: if you see a value that looks like a constant, you don't expect it to be calculated more than once, but if it had a polymorphic type, it would be recomputed every time it is used.
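A hedged illustration of that cost:
-- With a polymorphic type, n compiles to a function expecting a Num
-- dictionary, so the product is recomputed at every use site:
n :: Num a => a
n = product (replicate 25 3)
-- With a monomorphic type, m is an ordinary value, computed once
-- and then shared between uses:
m :: Integer
m = product (replicate 25 3)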
Upvotes: 12
Reputation: 2912
You can use GHCi to learn a little more about this. Use the command :t to get the type of an expression.
Prelude> :t 1
1 :: Num a => a
So 1 is a constant which can be any numeric type (Double, Integer, etc.)
Prelude> let a = 1
Prelude> :t a
a :: Integer
So in this case, Haskell inferred that the concrete type of a is Integer. Similarly, if you write let b = 2.0, Haskell infers the type Double. Using let made Haskell infer a more specific type than (perhaps) was necessary, and that leads to your problem. (Someone with more experience than me can perhaps comment as to why this is the case.) Since (+) has type Num a => a -> a -> a, the two arguments need to have the same type.
You can fix this with the fromIntegral function:
Prelude> :t fromIntegral
fromIntegral :: (Num b, Integral a) => a -> b
This function converts any Integral type to any other numeric type. For example:
Prelude> let a = 1
Prelude> let b = 2.0
Prelude> (fromIntegral a) + b
3.0
Upvotes: 8