Reputation: 31
Calling the 'bar' method of 'Foo', I get an error to the effect that it can't unify the types of 3 and 4 because they're overloaded literals. But 'standaloneBar', which has seemingly the same type, works fine. The difference must be the typeclass parameters, but I don't see why this prevents the unification.
{-# LANGUAGE MultiParamTypeClasses #-}
module Main where

class Foo a b where
  bar :: a -> b -> a

data Baz a = Baz a

instance Foo Int (Baz a) where
  bar i (Baz _) = i

standaloneBar :: a -> b -> a
standaloneBar x _ = x

main = do
  --putStrLn $ show $ bar 3 (Baz 4)           -- Can't unify
  putStrLn $ show $ standaloneBar 3 (Baz 4)   -- Works fine
  putStrLn $ show $ bar (3::Int) (Baz 4)      -- Works fine
  putStrLn $ show $ ((bar 3 (Baz 4)) :: Int)  -- Works fine
If I add type annotations, then it works fine.
The way I understand unification here, even though 3 and 4 are ambiguous, they can still be unified:
*Util Delta Exp Tmi Util> :t 3
3 :: Num p => p
*Util Delta Exp Tmi Util> :t 4
4 :: Num p => p
*Util Delta Exp Tmi Util> :t 3 + 4
3 + 4 :: Num a => a
So why can't it do the same for 'bar'?
(I realize that a functional dependency here fixes the problem, but I'm specifically trying to allow for multiple instances that would be prevented by that.)
Upvotes: 2
Views: 92
Reputation: 71400
standaloneBar doesn't really have the same type. It is a -> b -> a, which is the same type as the type given for bar in the class. But the problem isn't that bar 3 (Baz 4) is failing to match the general type from the class; rather, it is that bar 3 (Baz 4) is failing to uniquely determine an instance.
When type classes are involved, type inference doesn't just have to determine that there is some assignment of type variables that is well-typed, it has to actually decide which specific instance will be selected1. Different instances could have very different behaviour, so the choice matters.
The type the compiler infers for the use of bar in your code is bar :: (Foo a (Baz b), Num a, Num b, Show a) => a -> Baz b -> a (the Show constraint comes from the result being passed to show). Now, the version of bar from the Foo Int (Baz a) instance has type Int -> Baz a -> Int, which obviously does unify with your usage of bar. But other possible instances might also unify. There could be Foo Double (Baz a), or Foo a (Baz Float), or any number of other possibilities.
It would be possible for the compiler to work by picking Foo Int (Baz a), since that's the only instance in scope and it does fit. However, the rules of the language are designed so that the compiler should not consider which instances are in scope when it determines which instance is needed! The appropriate instance must be uniquely determined by the context of the call, and only then does the compiler check whether such an instance is actually available. So there would need to be a polymorphic Foo a (Baz b) instance in scope for this code to work. And indeed, if I replace your instance with:
instance Foo a (Baz b) where
  bar i _ = i
Then your code compiles and runs without error!
So your original code isn't specific enough about the types to uniquely determine that Foo Int (Baz b) is the required instance, and the compiler reports errors about ambiguous types. The problem isn't that your use of bar doesn't unify with a -> b -> a, or even that it doesn't unify with Int -> Baz b -> Int; it does unify with both. Rather, the problem is that it isn't specific to the type Int -> Baz b -> Int. Adding additional type information resolves that, and is required.
The reason for this language design decision is that varying which instances are in scope (say, by adding or removing imports) can then never silently change the meaning of valid code. If you remove a required instance the code will stop compiling, and if you add a conflicting instance the code will cause an error; but if you have code that compiles with two instances in scope by using one of them, you can never make it continue to work while using the other one solely by changing your imports. The intent is that the choice of instance should be inherently required by the code the programmer writes; the choice should not be made accidentally by the compiler.
It's worth noting that there is a major exception to this rule of instance selection not being based on which instances are actually in scope2. That is type defaulting.
Simple expressions involving numeric literals would almost always be ambiguous under the rules I described above. An example is show $ 1 + 2. The + and the show in that expression have differing behaviour for different instances (+ isn't even fully associative for floating-point numbers, for example!), so by the reasoning above this code should be invalid, and the programmer should be required to write something like show $ 1 + (2 :: Int).
The language designers considered this to be too onerous, so the defaulting rules were defined. They're described in more detail in the Haskell Report, but basically they allow an ambiguous type variable to be defaulted if it is involved in a constraint for one of the "numeric" type classes (like Num, Integral, etc.) from the Prelude, and it has no type class constraints except those involving classes defined in the Prelude. If these (very conservative) conditions are met, then some default types will be tried (the default list of types is Integer, Double; it can be customised, although the conditions under which defaulting is tried cannot be). If one of those types allows instances to be found for all constraints, then the code is accepted with the compiler choosing those instances for you.
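As a small illustration of that customisation (my own sketch, not code from the question): a module-level default declaration replaces the standard list of types tried during defaulting.

```haskell
module Main where

-- Replace the standard default list (Integer, Double) with (Int, Double).
-- Each listed type must be an instance of Num.
default (Int, Double)

main :: IO ()
main = putStrLn (show (1 + 2))  -- the ambiguous literals now default to Int; prints 3
```

Without the default declaration, the same literals would default to Integer; here the printed result is the same, but the instances chosen for Num and Show are those of Int.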
This means that the ambiguous type variables a and b in bar :: (Foo a (Baz b), Num a, Num b, Show a) => a -> Baz b -> a cannot be defaulted; they're involved in the constraint Foo a (Baz b), which is not for a class defined in the Prelude.
But when you use standaloneBar (or my more polymorphic Foo a (Baz b) instance above), the only constraints that the compiler is unable to resolve are (Num a, Num b, Show a). Here all of the constraints are Prelude classes, and Num is a "numeric class", so the compiler tries Integer for both a and b, which works and allows the code to compile.
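Putting the two pieces together, here is a self-contained sketch (using the fully polymorphic instance suggested above) in which bar 3 (Baz 4) compiles without annotations: the Foo constraint is discharged by the single matching instance head, and the remaining Num and Show constraints default to Integer.

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
module Main where

class Foo a b where
  bar :: a -> b -> a

data Baz a = Baz a

-- A fully polymorphic instance: its head matches any 'a' and any 'Baz b',
-- so the compiler can resolve the Foo constraint without knowing the
-- concrete types, leaving only defaultable Num/Show constraints.
instance Foo a (Baz b) where
  bar i _ = i

main :: IO ()
main = print (bar 3 (Baz 4))  -- both literals default to Integer; prints 3
```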
1 Unless the type class constraint can be "passed on" in the signature of the caller, but the caller here is main :: IO (), so that option isn't available.

2 If you enable further extensions such as OverlappingInstances then there are more exceptions, but I'm not describing that here.
Upvotes: 0
Reputation: 116139
The compiler has to account for the possibility that, later on, and possibly in another module, someone defines something like
instance Foo Double (Baz a) where
  bar i (Baz _) = i + 1
In such a case, putStrLn $ show $ bar 3 (Baz 4) could print 3 or 4.0 depending on the type chosen for the literal 3. Hence, it is rejected.
Note that the error mentions ambiguity, not the failure of unification:
prog.hs:16:14: error:
• Ambiguous type variable ‘a0’ arising from a use of ‘show’
prevents the constraint ‘(Show a0)’ from being solved.
Probable fix: use a type annotation to specify what ‘a0’ should be.
In your GHCi session, > :t 3 + 4 can output Num a => a since it is possible for :t to report a polymorphic type. If you run > :t show (3+4), the result is the monomorphic String, and that forces GHCi to choose a specific type a to instantiate the constants with. It happens that Num receives special care from Haskell: some "default" types are tried when that happens. This is indeed called "defaulting", and it only happens with a few Prelude classes. It does not apply to custom classes such as your Foo, where instead an ambiguity is reported.
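To make the contrast concrete, here is a minimal sketch (my own example, not from the answer): the first line compiles because only Prelude classes are involved, so defaulting applies; the commented-out line would be rejected because MyClass is a custom class, which blocks defaulting, and an annotation is then required.

```haskell
module Main where

class MyClass a where
  myFun :: a -> a

instance MyClass Integer where
  myFun = (+ 1)

main :: IO ()
main = do
  putStrLn (show (3 + 4))                 -- Num/Show default to Integer; prints 7
  -- putStrLn (show (myFun 3))            -- rejected: ambiguous, MyClass blocks defaulting
  putStrLn (show (myFun (3 :: Integer)))  -- an annotation resolves it; prints 4
```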
Upvotes: 3