Consider this simple pair of function templates.
template <typename T>
void foo(T& ) { std::cout << __PRETTY_FUNCTION__ << '\n'; }
template <typename C>
void foo(const C& ) { std::cout << __PRETTY_FUNCTION__ << '\n'; }
If we call foo with a non-const argument:
int i = 4;
foo(i);
The T& overload is preferred based on [over.ics.rank]/3.2.6, since the deduced reference int& is less cv-qualified than the deduced reference const int&.
However, if we call foo with a const argument:
const int ci = 42;
foo(ci);
The const C& overload is preferred because it is "more specialized" based on [over.match.best]/1.7. But what are the rules to determine this? My understanding was that you synthesize a type for C (call it M) and try to perform deduction on foo(M) - but that would succeed (with T == M). It's only an rvalue that would cause that deduction to fail - but how does the compiler know that it has to choose an rvalue in the synthesis step?
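For reference, here is everything put together as one program (__PRETTY_FUNCTION__ is a GCC/Clang extension, so the exact text printed will vary by compiler):

#include <iostream>

template <typename T>
void foo(T& ) { std::cout << __PRETTY_FUNCTION__ << '\n'; }

template <typename C>
void foo(const C& ) { std::cout << __PRETTY_FUNCTION__ << '\n'; }

int main() {
    int i = 4;
    foo(i);   // prints the T& overload's signature (T = int)

    const int ci = 42;
    foo(ci);  // prints the const C& overload's signature (C = int)
}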
Disclaimer: The types we consider are always the types of parameters. The types, value categories, etc. of the actual arguments passed are considered only in overload resolution, never in partial ordering.
Partial ordering considers both overloads in two "turns": in each turn, one template acts as the parameter template and the other as the argument template. [temp.deduct.partial]/2 says that two sets of types are used - for each template, the original function type and a transformed function type - and that deduction is done using the transformed type of one template as the argument and the original type of the other template as the parameter. You should be familiar with the way the transformed "templates" are generated; this is specified in §14.5.6.2/3: every template parameter is replaced by a synthesized, unique type (call them Unique1 and Unique2 below).
So our (transformed) argument templates are foo(Unique1 &) (from the T& overload) and foo(Unique2 const &) (from the const C& overload).
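Spelled out as (pseudo-)declarations - with made-up names, since the synthesized types are compiler-internal - they look roughly like this:

// Illustrative stand-ins for the synthesized types:
struct Unique1 {};   // synthesized for T in the T& overload
struct Unique2 {};   // synthesized for C in the const C& overload

void foo_transformed_1(Unique1 &);        // transformed void foo(T&)
void foo_transformed_2(Unique2 const &);  // transformed void foo(const C&)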
[temp.deduct.partial]/3 & /4 determine which types take part in the ordering: in the context of a function call, they are the function parameter types for which the call has arguments - here, just the single reference parameter of each overload - with the type nominated from the parameter template used as P and the corresponding type from the argument template used as A.
Thus we have two turns, and in each we have a type P and a type A:

Turn 1:
P1: T const&
A1: Unique1&

Turn 2:
P2: T&
A2: Unique2 const&
But before the fun begins, some transformations are performed on these types as well - to abbreviate [temp.deduct.partial]/5 & /7: if P or A are reference types, they are replaced by the types they refer to, and top-level cv-qualifiers are then removed. Note that the removed cv-qualifiers are "remembered" for later: [temp.deduct.partial]/6 has us record, for each pair in which both P and A were originally reference types, which of the two referred-to types (if any) is more cv-qualified than the other; that determination is used below.
Thus we're left with

Turn 1:
P1: T
A1: Unique1

Turn 2:
P2: T
A2: Unique2
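Mechanically, the /5 and /7 transformations are just "strip the reference, then strip top-level cv-qualifiers". A small illustration of that stripping, with concrete types standing in for the synthesized ones (my own sketch, not part of the standard's machinery):

#include <type_traits>

// remove_reference followed by remove_cv mirrors [temp.deduct.partial]/5 & /7
template <class X>
using stripped = typename std::remove_cv<typename std::remove_reference<X>::type>::type;

static_assert(std::is_same<stripped<int&>,       int>::value, "like A1: Unique1& -> Unique1");
static_assert(std::is_same<stripped<int const&>, int>::value, "like A2: Unique2 const& -> Unique2");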
Now we perform deduction - which clearly succeeds in both turns, by setting T = Unique1 and T = Unique2, respectively. [temp.deduct.partial]/8 then says that if deduction succeeds for a given type, the type from the argument template is considered to be at least as specialized as the type from the parameter template. That gives us both that Unique1& is at least as specialized as T const&, and that Unique2 const& is at least as specialized as T&.
However, this is where [temp.deduct.partial]/(9.2) steps in: if, for a given type, deduction succeeded in both directions and both P and A were reference types, then, if the type from the argument template is more cv-qualified than the type from the parameter template (as described above), the parameter type is not considered to be at least as specialized as the argument type. This is where the remembered cv-qualifiers come into play: A2 is more cv-qualified than P2, hence P2 (T&) is not considered to be at least as specialized as A2 (Unique2 const&).
Finally, [temp.deduct.partial]/10 says that a function template F is at least as specialized as a function template G if, for each pair of types used to determine the ordering, the type from F is at least as specialized as the type from G, and that F is more specialized than G if F is at least as specialized as G and G is not at least as specialized as F. Since the type T& is not at least as specialized as Unique2 const&, while we did establish that Unique2 const& is at least as specialized as T&, the T const& overload is more specialized than the T& overload.

The rule in paragraph 9 mentioned above is currently the subject of CWG issue #2088, created four months ago by R. Smith.
This will not alter the result established here, though, since the types we got are indeed identical.
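As a final sanity check (my own sketch, unrelated to the standard wording or the CWG issue), we can let the compiler confirm the result at compile time by giving the two overloads distinguishable return types:

#include <type_traits>

template <typename T> char which(T&);        // stands for the T& overload
template <typename C> long which(const C&);  // stands for the const C& overload

int       i  = 4;
const int ci = 42;

// Non-const lvalue: the T& overload wins via [over.ics.rank]/3.2.6.
static_assert(std::is_same<decltype(which(i)),  char>::value, "T& overload selected");
// Const lvalue: both conversion sequences are identical; partial ordering picks const C&.
static_assert(std::is_same<decltype(which(ci)), long>::value, "const C& overload selected");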