Reputation: 1128
Some open source libraries, such as Twitter's, seem to have a convention of marking case classes as final.
My team is deciding whether to adopt this convention, but we don't yet understand the pros of doing so. Currently the only advantage I can see is that it prevents accidental inheritance from case classes.
But are there any other advantages? Does it improve compile times or allow the compiler to add internal optimizations?
I was hoping that it would aid in detecting missing cases in pattern matches, but it doesn't seem to do that either. In a simple script like this, the compiler generates a warning for matchSealed but not for matchFinal:
sealed case class Sealed(one: Option[Int], two: Option[Int])

def matchSealed(s: Sealed): Unit = s match {
  case Sealed(Some(i), None) => println(i)
}

final case class Final(one: Option[Int], two: Option[Int])

def matchFinal(f: Final): Unit = f match {
  case Final(Some(i), None) => println(i)
}
My impression of final is that it's a stronger restriction than sealed, so it is odd that this does not generate a warning.
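For comparison, here is a minimal sketch of where sealed does drive exhaustivity warnings: it is sealed on a trait or abstract class, not on the case class itself, that tells the compiler it knows every subtype (the Shape hierarchy below is a hypothetical example, not from any particular library):

```scala
// Exhaustivity warnings come from matching on a sealed hierarchy,
// because the compiler then knows the complete set of subtypes.
sealed trait Shape
final case class Circle(r: Double) extends Shape
final case class Square(side: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r)    => math.Pi * r * r
  // Omitting this case would produce a "match may not be exhaustive" warning.
  case Square(side) => side * side
}
```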
Upvotes: 1
Views: 938
Reputation: 665
You would get different behaviour if your case class is nested inside a trait or class, for scoping reasons. Without the modifier, each case class instance has access to the enclosing run-time trait/class instance. With it, such access may be lost, depending on whether any references to the outer scope are present, and you get a compiler warning. Whether or not you need the path-dependent behaviour will determine your choice.
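A minimal sketch of the nesting behaviour described above (the Registry trait and Entry case class are hypothetical names used purely for illustration):

```scala
// A case class nested in a trait can refer to members of the enclosing
// run-time instance, so each Entry carries a reference to its outer Registry.
trait Registry {
  val prefix: String

  case class Entry(id: Int) {
    // Uses the enclosing instance's `prefix`.
    def label: String = prefix + id
  }
}

val a = new Registry { val prefix = "a-" }
val b = new Registry { val prefix = "b-" }

// Entries built from different outer instances see different outer state,
// and a.Entry and b.Entry are distinct path-dependent types.
println(a.Entry(1).label) // a-1
println(b.Entry(1).label) // b-1
```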
Upvotes: 0
Reputation: 12202
Here is an explanation of some benefits:
A final case class cannot be extended by any other class. This means you can make stronger guarantees about how your code behaves. You know that nobody can subclass your class, override some methods, and make something goofy happen. This is great when you are debugging code – you don’t have to go hunting all over the object hierarchy to work out which methods are actually being called.
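As a small illustration of that guarantee (Point is a hypothetical name, not from the question):

```scala
// Because Point is final, no subclass can ever override its generated
// equals/hashCode/toString, so its behaviour is fixed at the definition site.
final case class Point(x: Int, y: Int)

// The following would not compile: "illegal inheritance from final class Point"
// class Point3D(x: Int, y: Int, z: Int) extends Point(x, y)

// Structural equality is therefore guaranteed for every Point you encounter:
assert(Point(1, 2) == Point(1, 2))
```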
Of course making classes final does mean that you lose a form of extensibility. If you do find yourself wanting to allow users to implement functionality you should wrap that functionality up in a trait and use the type class pattern instead.
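A minimal sketch of that trait-based extension point, in the style of the type class pattern (Show, User, and describe are hypothetical names chosen for illustration):

```scala
// The data type stays closed to inheritance...
final case class User(name: String)

// ...and the extension point is a trait that users implement for their
// own types, rather than a class they subclass.
trait Show[A] {
  def show(a: A): String
}

object Show {
  // Library-provided instance for User.
  implicit val showUser: Show[User] = new Show[User] {
    def show(u: User): String = s"User(${u.name})"
  }
}

// Generic code requires an instance instead of a supertype.
def describe[A](a: A)(implicit s: Show[A]): String = s.show(a)
```

Users can then add behaviour for new types by supplying their own Show instance, without ever subclassing User.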
Upvotes: 2