Reputation: 6168
One of my projects uses a mix of Scala features that appear to not mix well together: shapeless's Lazy, value classes, and default parameters. The problem I'm running into is that type class instance derivation fails when a Lazy-wrapped implicit is combined with an extension method that takes a default parameter.
Here's the smallest possible amount of code I could write to reproduce the issue:
import shapeless._

trait Show[A] {
  def show(a: A): String
}

object Show {
  def from[A](f: A => String): Show[A] = new Show[A] {
    override def show(a: A) = f(a)
  }

  implicit val intShow: Show[Int] = Show.from(_.toString)

  implicit def singletonShow[A](implicit
    sa: Show[A]
  ): Show[A :: HNil] = Show.from {
    case a :: HNil => sa.show(a)
  }

  implicit def singletonCaseClassShow[A, H <: HList](implicit
    gen: Generic.Aux[A, H],
    sh: Lazy[Show[H]]
  ): Show[A] = Show.from {
    a => sh.value.show(gen.to(a))
  }
}
object Run extends App {
  implicit class ShowOps[A](val a: A) extends AnyVal {
    def show(header: String = "> ")(implicit sa: Show[A]): String =
      header + sa.show(a)
  }

  case class Foo(i: Int)

  println(Foo(12).show())
}
This fails to compile with the following error message:
Run.scala:10: could not find implicit value for parameter sa: Show[Run.Foo]
[error] println(Foo(12).show())
The compilation error is fixed by either:
- removing the header parameter to show in Run.scala (a fixed variant is sketched just after this list), or
- removing the Lazy wrapper to the implicit Show[H] in Show.scala.
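For reference, a sketch of the first fix, with the header parameter gone and its old default value inlined (same Show.scala as above):

implicit class ShowOps[A](val a: A) extends AnyVal {
  // No header parameter (and hence no default argument): with this
  // signature, derivation succeeds and the program compiles.
  def show(implicit sa: Show[A]): String = "> " + sa.show(a)
}

println(Foo(12).show)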
I must admit I'm at a complete loss here. I'd love to understand what happens, and I'd love to know of a workaround if one exists.
Upvotes: 3
Views: 1044
Reputation: 533
Short answer:
If you move the context bound to the implicit class, it also works fine. You have to sacrifice the value class to do it, but I think it's also cleaner to tell the compiler up front that only those A's which have a Show will get enriched by it:
implicit class Show2Ops[A: Show](a: A) {
  def show2(header: String = "> ") = header + implicitly[Show[A]].show(a)
}
println(Foo(12).show2())
Long theory:
Lazy does some interesting tricks, which are hard to follow. You didn't specifically ask about what Lazy is doing, but I was curious about it, since I use it all the time without being sure how it works. So I took a look at it. As near as I can tell, it goes something like this.
You have a case class with a recursive field:
case class A(first: Int, next: Option[A])
And assume you had another case in Show's companion for Option:
implicit def opt[A](implicit showA: Show[A]): Show[Option[A]] = Show.from {
  case Some(a) => s"Some(${showA.show(a)})"
  case None    => "None"
}
And instead of singletonShow you had a real HNil case and an inductive case, as is typical:
implicit val hnil: Show[HNil] = Show.from(_ => "")

implicit def hcons[H, T <: HList](implicit
  showH: Show[H],
  showT: Show[T]
): Show[H :: T] = Show.from {
  case h :: t => showH.show(h) + ", " + showT.show(t) // for example
}
And let's rename singletonCaseClassShow to genericShow, because it's not just for singletons anymore.
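Spelled out, the renamed rule has the same body as singletonCaseClassShow, just under the new name:

implicit def genericShow[A, H <: HList](implicit
  gen: Generic.Aux[A, H],
  sh: Lazy[Show[H]]
): Show[A] = Show.from { a =>
  sh.value.show(gen.to(a))
}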
Now let's say you didn't have the Lazy there in genericShow. When you try to summon a Show[A], the compiler goes to:
1. genericShow[A], with open implicit search for Show[A]
2. hcons[Int :: Option[A] :: HNil], with open implicit search for Show[A] and Show[Int :: Option[A] :: HNil]
3. intShow, with open implicit search for Show[A] and Show[Int] and Show[Option[A] :: HNil]
4. hcons[Option[A] :: HNil], with open implicit search for Show[A] and Show[Option[A] :: HNil]
5. opt[A], with open implicit search for Show[A] and Show[Option[A]] and Show[Option[A] :: HNil]
6. genericShow[A], with open implicit search for Show[A] and Show[Option[A]] and Show[Option[A] :: HNil]
Now it's pretty clear that there's a problem, because it's going to go back to #2 and happen all over again, never making any progress.
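If you want to watch this fail yourself, here's a sketch of the non-Lazy setup (assuming the hnil, hcons and opt instances above live in Show's companion; the exact message depends on the compiler version, but it's typically a diverging implicit expansion error rather than a plain "not found"):

implicit def genericShow[A, H <: HList](implicit
  gen: Generic.Aux[A, H],
  sh: Show[H] // note: no Lazy wrapper
): Show[A] = Show.from { a =>
  sh.show(gen.to(a))
}

case class A(first: Int, next: Option[A])

// Does not compile: the search for Show[A] re-enters itself through
// Show[Option[A]], and the divergence checker gives up.
// implicitly[Show[A]]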
How Lazy overcomes this is by going into a macro at the time the compiler attempts to materialize an implicit instance of it. So when you use implicit showH: Lazy[Show[H]] in hcons instead of just Show[H], the compiler goes to that macro to find Lazy[Show[H]] instead of staying in your implicit Show cases.
The macro checks the open implicits (which macros helpfully have access to) and goes into its own implicit resolution algorithm that always fully resolves open implicits before continuing with finding the implicit instance of T (for Lazy[T]). If it comes to resolving an implicit that's already open, it substitutes a dummy tree (essentially telling the compiler "I got this, don't worry about it") that tracks the knotted dependencies so that the rest of the resolution can finish. And at the end, it cleans up the dummy trees (I can't quite figure out how this part works; there's a surprising amount of code there and it's pretty complicated!).
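In practice that means the cycle gets broken by wrapping the recursive position, along the usual shapeless lines (a sketch, not verbatim library code):

implicit def hcons[H, T <: HList](implicit
  showH: Lazy[Show[H]], // resolution deferred to Lazy's macro
  showT: Show[T]
): Show[H :: T] = Show.from {
  case h :: t => showH.value.show(h) + ", " + showT.show(t)
}

// With genericShow also asking for Lazy[Show[H]], summoning
// Show[A] for the recursive case class above now resolves.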
So why does Lazy seem to mess up your default parameter situation? I think it's the confluence of a few things (only a hypothesis):
- Due to the way you wrote ShowOps, calling .show on a value causes it to be implicitly wrapped in ShowOps[A]. What is A going to be? Is it going to be Foo, AnyRef, Any? Is it going to be a unique single type? It's not exactly clear, because at that time there is no constraint on A, and Scala doesn't know that your call to .show will actually constrain it (due to the context bound).
- Without Lazy, this works out OK, because if Scala chooses the wrong A and .show doesn't typecheck, it will realize its mistake and back out of the A it chose.
- With Lazy, there is a bunch of other logic going on, and it kind of tricks Scala into thinking that whatever A it chose is fine. But when it comes time to close the loop, it doesn't work out, and by that time it's too late to back out.
- That's why moving the context bound to the implicit class helps: it pins down the A in ShowOps[A] up front.
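One way to see the difference is to write the enrichment out by hand (a sketch using Show2Ops from the short answer, assuming Foo and the Show instances above are in scope):

// With the context bound, the Show[Foo] evidence is a constructor
// argument, so it takes part in inferring A as soon as the wrapper is
// built, before show2 is ever typechecked:
val wrapped = new Show2Ops(Foo(12)) // requires Show[Foo] right here
println(wrapped.show2())

// With the original value class, new ShowOps(Foo(12)) carries no such
// constraint, and Show[A] is only looked up later, at the .show call site.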
Upvotes: 13