user3473830

Reputation: 7295

How does the CLR convert Double/Single to Decimal internally?

I was interested to see how .NET converts the Double/Single data types to Decimal, so I started studying the source code of the Decimal struct and came across the code below.

It seems that all other type conversions are implemented within the framework's class library, except for double/float, which are handled externally by the CLR.

So, basically, the question is: how does the CLR do the conversion?

    [MethodImpl(MethodImplOptions.InternalCall)]
    public extern Decimal(float value);

    [MethodImpl(MethodImplOptions.InternalCall)]
    public extern Decimal(double value);

    public Decimal(int value)
    {
        int num = value;
        if (num < 0)
        {
            this.flags = -2147483648;
            num = -num;
        }
        else
        {
            this.flags = 0;
        }
        this.lo = num;
        this.mid = 0;
        this.hi = 0;
    }

    [CLSCompliant(false)]
    public Decimal(uint value)
    {
        this.flags = 0;
        this.lo = (int)value;
        this.mid = 0;
        this.hi = 0;
    }

    .
    .
    .
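For reference, the lo/mid/hi/flags fields that these constructors assign can be inspected with decimal.GetBits, which shows what the externally implemented double conversion actually produces. A minimal sketch (not part of the framework source):

    using System;

    class DecimalBitsDemo
    {
        static void Main()
        {
            // Goes through the extern Decimal(double) constructor shown above.
            decimal fromDouble = new decimal(1.1);

            // GetBits returns { lo, mid, hi, flags } -- the same four fields
            // the managed constructors assign directly.
            int[] bits = decimal.GetBits(fromDouble);

            Console.WriteLine("lo={0} mid={1} hi={2} flags=0x{3:X8}",
                bits[0], bits[1], bits[2], bits[3]);
        }
    }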

Upvotes: 2

Views: 736

Answers (1)

Dono

Reputation: 1284

While it is not as current as the latest CLR, the general details can be confirmed in the previously released SSCLI 2 (a.k.a. Rotor). The native portions of System.Decimal are implemented in clr\src\vm\comdecimal.cpp. The float and double constructors call VarDecFromR4 and VarDecFromR8, respectively; these are native APIs exposed by OleAut32.dll.

As for your next question, how OleAut32 implements these functions: your best bet is to attach a debugger and disassemble them. With WinDbg, you can do this with the uf command.
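If you want to sanity-check this yourself on Windows, you can also P/Invoke VarDecFromR8 directly; System.Decimal marshals as the OLE Automation DECIMAL structure, so it can receive the result. A rough sketch (the DllImport declaration here is my own, not the CLR's internal plumbing):

    using System;
    using System.Runtime.InteropServices;

    class VarDecFromR8Demo
    {
        // Converts a double into an OLE Automation DECIMAL; System.Decimal
        // marshals as that structure, so it can receive the result directly.
        [DllImport("oleaut32.dll")]
        static extern int VarDecFromR8(double dblIn, out decimal pdecOut);

        static void Main()
        {
            double input = 1.1;

            decimal viaOleAut;
            int hr = VarDecFromR8(input, out viaOleAut);
            Marshal.ThrowExceptionForHR(hr);

            // Per the SSCLI source above, the extern constructor ends up in the
            // same API, so the two results should agree.
            decimal viaConstructor = new decimal(input);

            Console.WriteLine("{0} == {1} : {2}", viaOleAut, viaConstructor,
                viaOleAut == viaConstructor);
        }
    }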

Upvotes: 5
