Consider the two code samples below. They accomplish the same goal: only A[T]s where T extends C can be stored in the Container. However, they use two different approaches to achieve this goal:
1) existentials
2) covariance
I prefer the first solution because then A remains simpler. Is there any reason why I would ever want to use the second solution (covariance)?
My problem with the second solution is that it is not natural, in the sense that it should not be A's responsibility to describe what I can and cannot store in a Container; that should be the Container's responsibility. The second solution also becomes more complicated once I want to start operating on A, because then I have to deal with all the machinery that comes with covariance.
What benefit would I get by using the second (more complicated, less natural) solution?
object Existentials extends App {
  class A[T](var t: T)
  class C
  class C1 extends C
  class C2 extends C
  class Z
  class Container[T] {
    var t: T = _
  }
  val c = new Container[A[_ <: C]]()
  c.t = new A(new C)
  // c.t = new Z // does not compile
  val r: A[_ <: C] = c.t
  println(r)
}
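For illustration (these lines are my addition, not part of the original example; they could be appended inside the Existentials object above), the existential container also accepts A[C1] and A[C2] values, but rejects anything whose type argument is not a subtype of C:

val a1: A[C1] = new A(new C1)
c.t = a1 // compiles: A[C1] conforms to A[_ <: C]
// val z: A[Z] = new A(new Z)
// c.t = z // does not compile: Z is not a subtype of C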
object Cov extends App {
  class A[+T](val t: T)
  class C
  class C1 extends C
  class C2 extends C
  class Z
  class Container[T] {
    var t: T = _
  }
  val c: Container[A[C]] = new Container[A[C]]()
  c.t = new A(new C)
  // c.t = new A(new Z) // does not compile
  val r: A[C] = c.t
  println(r)
}
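Similarly (again my addition, appendable inside the Cov object above): with the covariant A, a value of type A[C1] can be stored in the very same Container[A[C]], because covariance makes A[C1] a subtype of A[C]:

val a1: A[C1] = new A(new C1)
c.t = a1 // compiles: A[+T] makes A[C1] <: A[C]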
EDIT (in response to Alexey's answer):
Commenting on: "My problem with the second solution is that it is not natural, in the sense that it should not be A's responsibility to describe what I can and cannot store in a Container; that should be the Container's responsibility."
If I have class A[T](var t: T), that means I can store only A[T]s, and not A[S]s where S <: T, in a container (in any container). However, if I have class A[+T](val t: T), then I can store A[S]s where S <: T in any container as well.
So by declaring A to be either invariant or covariant, I decide which A[S]s can be stored in a container (as shown above), and this decision takes place at the declaration of A.
However, I think this decision should instead take place at the declaration of the container, because it is container-specific what is allowed to go into that container: only A[T]s, or also A[S]s where S <: T.
In other words, changing the variance of A[T] has a global effect, while changing the type parameter of a container from A[T] to A[_ <: T] has a well-defined local effect on the container itself. So the principle of "changes should have local effects" favors the existential solution here as well.
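A minimal sketch of that local decision (my addition; it reuses the invariant A, C, C1 and Container from the Existentials example, and the names exact and widened are mine):

val exact = new Container[A[C]]() // accepts only A[C]
val widened = new Container[A[_ <: C]]() // also accepts A[C1] and A[C2]
val a1: A[C1] = new A(new C1)
// exact.t = a1 // does not compile: A is invariant, so A[C1] is not an A[C]
widened.t = a1 // compiles: the decision was made locally, in this container's type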
Alexey's answer:
In the first case A is simpler, but in the second case its clients are. Since there is normally more than one place where you use A, this is often a worthwhile tradeoff. Your own code demonstrates it: when you need to write A[_ <: C] in the first case (in two places), you can just use A[C] in the second one.
In addition, in the first case you can write just A[C] where A[_ <: C] is really desired. Let's say you have a method taking an A[C].
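A plausible sketch of such a method (the exact snippet isn't shown above; only the name foo and the parameter type A[C] are implied by the text below):

def foo(x: A[C]): C = x.t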
Now you can't call foo(y) with y: A[C1], even though it would make sense: y.t does have type C. When this happens in your own code, it can be fixed, but what about third-party code?
Of course, this applies to the standard library types as well: if types like Option and List weren't covariant, either the signatures of all methods taking or returning them would have to be more complex, or many programs which are currently valid and make perfect sense would break.
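As a small sketch of that point (my own example; it assumes the C and C1 classes from the question, defined at the top level, and the method name describeAll is made up):

def describeAll(xs: List[C]): Int = xs.size
val cs: List[C1] = List(new C1)
describeAll(cs) // compiles only because List[+A] is covariant, so List[C1] <: List[C]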
Variance isn't about what you can store in a container; it is about when A[B] is a subtype of A[C]. This argument is a bit like saying that you shouldn't have extends at all: otherwise class Apple extends Fruit allows you to store an Apple in a Container[Fruit], and deciding that is Container's responsibility.
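To make the analogy concrete (a sketch of my own; Fruit and Apple do not appear in the question's code, and Container is the question's class assumed at top level):

class Fruit
class Apple extends Fruit
val box = new Container[Fruit]()
box.t = new Apple // allowed purely because Apple extends Fruit, a decision made at Apple's declaration rather than at Container's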