Reputation: 12803
I ran across this interesting situation today:
var a = new HashSet<Object> { 1.0, 2.0, 3.0 };
a.Contains(1); //False
a.Contains(1.0); //True
Of course, this was just a generic version of this:
Object b = 2.0;
b.Equals(2); //False
b.Equals(2.0); //True
I realize the reason for this is that if I write 2.0 == 2, the C# compiler secretly inserts a conversion from int to double, whereas with an Object intermediate the compiler doesn't have enough type information to do this.
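For example (just to illustrate the difference I mean; the bool variables are only there to show the results):
double d = 2.0;
bool viaOperator = d == 2;              // True - the compiler converts 2 to 2.0 at compile time
bool viaEquals = ((Object)d).Equals(2); // False - no conversion happens once everything is an Object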
My question is, doesn't the runtime have enough information to lift the integer to double for the comparison? If the C# compiler assumes it's desirable enough to have an implicit conversion, why shouldn't the JIT have similar behavior?
Upvotes: 2
Views: 118
Reputation: 2531
This feels more like a community question than one looking for a specific solution, but I'll be happy to weigh in.
First, I'll start by saying that double equality comparison is already dangerous, as I'm sure you know. There is plenty of information out there on the pitfalls of
if(doubleNum == doubleNum2)
but also consider: should two objects of unrelated types be equatable at all? While compiling the code, the compiler can apply implicit conversions, but the first thing most .Equals(object) implementations do is check the type of the argument, and if the types are not compatible they return false. I would assume that is the case here, since b.Equals(2) passes an int, which is not a double.
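On the first point, the classic rounding pitfall (just an illustrative snippet, not related to the question's code):
double sum = 0.1 + 0.2;
bool exact = sum == 0.3;                 // False - neither 0.1, 0.2 nor 0.3 is exactly representable in binary
bool close = Math.Abs(sum - 0.3) < 1e-9; // True - comparing with a tolerance is usually safer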
If you look at the double.Equals(object) method, you'll notice the first thing it does is check whether the passed object is a double. Because the argument here is an Int32, the method returns false.
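To make that concrete, here is a standalone sketch that mimics what double.Equals(object) does, based on the publicly available reference source (the helper name and the value parameter are mine; the real method works on the struct's internal field, and details can vary between versions):
static bool DoubleEqualsSketch(double value, object obj)
{
    // A boxed int (or anything else that isn't a boxed double) fails this
    // check, which is exactly why b.Equals(2) returns false.
    if (!(obj is double))
    {
        return false;
    }

    double temp = (double)obj;

    // NaN != NaN under ==, so NaN is special-cased to keep Equals reflexive.
    return temp == value || (double.IsNaN(temp) && double.IsNaN(value));
}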
While the other answers focus more on the reasons the implementation is the way it is, the above should explain why the runtime behaves the way it does.
Upvotes: 0
Reputation: 109567
C# has to work the way the language specification says it works. This has nothing to do with the jitter, which just has to implement the CLR specification.
The C# language specification says how == must work.
The CLR specification says how Equals() must work.
There was actually an interesting change made between .Net 1.1 and .Net 2.0.
In .Net 1.1, 3f.Equals(3) == false.
In .Net 2.0, 3f.Equals(3) == true.
This is not the same as the object-comparing version of Equals(), which shows you how subtle this kind of thing is.
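A small illustration of why that overload change matters (assuming .Net 2.0 or later; the variable names are just for the example):
float f = 3f;
bool viaFloatOverload = f.Equals(3);          // True - resolves to Single.Equals(float), and 3 is implicitly converted to 3f
bool viaObjectOverload = f.Equals((object)3); // False - the boxed int fails the type check inside Equals(object)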
An interesting (but very old) blog about it here: http://blogs.msdn.com/b/jmstall/archive/2005/03/23/401038.aspx
It actually does have a few details that do relate to your question, so it's worth a read.
Upvotes: 4
Reputation: 99879
C# is a specific programming language with specific semantics for handling 2.0 == 2, defined in the ECMA-334 standard. The Common Language Runtime (CLR) is an execution environment defined by the ECMA-335 standard, and it operates on bytecode, not C# source code. The semantics of these differ in many ways, so while the runtime could have been implemented to automatically perform widening conversions for these kinds of comparisons, it wasn't actually done that way.
The specific comparison done here happens to be calling Double.Equals(Object), which returns true if obj is an instance of Double and equals the value of this instance; otherwise, false.
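In other words, only another boxed Double will ever compare equal here; a quick illustration (not from the documentation, just showing that rule in action):
object boxed = 2.0;
boxed.Equals(2.0); // True  - a boxed double
boxed.Equals(2);   // False - a boxed int fails the type check
boxed.Equals(2f);  // False - so does a boxed float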
Upvotes: 2