Reputation: 551
Why does foo1 fail while foo2 succeeds? Shouldn't the compiler automatically check all the supertypes of Blah?
trait Foo[A] {
  def bar: A
}

trait Bleh
case class Blah() extends Bleh

implicit object BlehFoo extends Foo[Bleh] {
  def bar: Bleh = Blah()
}

def foo1[A: Foo](a: A) = a
def foo2[A, B: Foo](a: A)(implicit aToB: A => B) = aToB(a)
// Shouldn't it automatically use Bleh?
foo1(Blah())
// Failure: could not find implicit value for evidence parameter of type Foo[Blah]
foo2(Blah())
// Success: Bleh = Blah()
Upvotes: 1
Views: 648
Reputation: 38045
You can't use a Foo[Bleh] as a Foo[Blah], since a Foo[Bleh] is not a Foo[Blah]. You should make Foo contravariant in A to be able to use a Foo[Bleh] as a Foo[Blah]:
trait Foo[-A] {
  def bar(a: A) = println(a) // A in parameter position, to make Foo contravariant
}
This works just fine:
scala> foo1(Blah())
res0: Blah = Blah()
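For completeness, here is a self-contained sketch of that variant (assuming the question's other definitions are redone against the contravariant trait; the <:< check is only there to make the subtyping visible):
trait Foo[-A] {
  def bar(a: A): Unit = println(a) // A only in parameter position, so -A is allowed
}
trait Bleh
case class Blah() extends Bleh
implicit object BlehFoo extends Foo[Bleh] // bar is inherited, nothing to implement

def foo1[A: Foo](a: A) = a

foo1(Blah())                        // compiles: Foo[Bleh] <: Foo[Blah]
implicitly[Foo[Bleh] <:< Foo[Blah]] // compiles too, confirming the subtyping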
Your original code contains an answer to your question. Let's assume you could use your original Foo[Bleh] as a Foo[Blah]:
def foo1[A:Foo](): A = implicitly[Foo[A]].bar
val b: Blah = foo1[Blah]()
If Foo[Bleh] were used here, you would get a Bleh as the result of bar, but you are expecting a Blah, and a Bleh is not a Blah.
Fortunately, the compiler will not allow you to use your original Foo[Bleh] as a Foo[Blah]:
scala> trait Foo[-A] {
| def bar: A
| }
<console>:8: error: contravariant type A occurs in covariant position in type => A of method bar
def bar: A
^
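To see concretely what the compiler is protecting you from, here is a deliberately unsafe sketch (the asInstanceOf cast just simulates "using Foo[Bleh] as Foo[Blah]"; it is not something you would write in real code):
trait Foo[A] { def bar: A } // the original, invariant trait
trait Bleh
case class Blah() extends Bleh
object BlehFoo extends Foo[Bleh] { def bar: Bleh = new Bleh {} }

val pretend = BlehFoo.asInstanceOf[Foo[Blah]] // pretend Foo[Bleh] is a Foo[Blah]
// val b: Blah = pretend.bar // would crash at runtime with a ClassCastException: bar returns a Bleh, not a Blah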
Type inference
This works fine:
foo1[Bleh](Blah())
But the compiler will not infer the type parameter A as Bleh here. To understand why, we need to know what A: Foo means:
def foo1[A:Foo](a:A) = a // syntax sugar
def foo1[A](a:A)(implicit ev: Foo[A]) = a // same method
A: Foo is syntactic sugar for an additional implicit parameter.
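Passing that evidence explicitly shows the same problem (assuming the original invariant Foo and the BlehFoo object from the question):
foo1(Blah())(BlehFoo)
// rejected: A is inferred as Blah from the first parameter group,
// so a Foo[Blah] is required, and BlehFoo is only a Foo[Bleh]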
If a method has two parameter groups, the compiler infers types from the first group and then treats them as fixed. So after type inference on the first parameter group (a: A), the type is already fixed to Blah, and the second (implicit) parameter group cannot change the type parameter.
Upvotes: 6