spender

Reputation: 120430

Why doesn't HttpServerUtility.UrlTokenDecode throw an exception when it doesn't find a padding character?

So, let's throw a nonsense string at HttpServerUtility.UrlTokenDecode, making sure it ends in a digit 0-9:

HttpServerUtility.UrlTokenDecode("fu&+bar0");

and it blows up with a FormatException.

Now let's try the same string without the digit at the end:

HttpServerUtility.UrlTokenDecode("fu&+bar");

No exception occurs and the method returns null.

I understand that the final character is meant to represent the number of padding characters that would appear when the string is Base64-encoded, and that the algorithm only permits a digit character in that position, as we can see in this decompiled code:

int num = (int) input[length - 1] - 48;
if (num < 0 || num > 10)
{
    return (byte[]) null;
}
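To make the two failure modes concrete, here is a rough Python sketch of the decode logic described above (an approximation for illustration, not the actual .NET source): the trailing digit is read as a padding count, non-digit trailers short-circuit to `null`/`None`, and only inputs that clear that check reach the Base64 decoder, where invalid characters raise an error.

```python
import base64

def url_token_decode(token: str):
    """Illustrative sketch approximating HttpServerUtility.UrlTokenDecode."""
    if not token:
        return None
    # The last character encodes the number of '=' padding chars ('0'-'9').
    num_pad = ord(token[-1]) - ord('0')
    if num_pad < 0 or num_pad > 10:
        return None  # corrupt padding marker: silently return None
    # Undo the URL-safe substitutions and restore the padding.
    body = token[:-1].replace('-', '+').replace('_', '/') + '=' * num_pad
    # Invalid characters surface here as binascii.Error
    # (the analogue of .NET's FormatException).
    return base64.b64decode(body, validate=True)
```

With this sketch, `url_token_decode("fu&+bar")` returns `None` (the trailing `r` fails the digit check), while `url_token_decode("fu&+bar0")` passes the check and then raises inside the Base64 decoder.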

So my question is: why does this method return null when handed a particular type of corrupted token, but throw an exception when a different type of corruption is encountered? Is there a rationale behind this decision, or is it just a case of sloppy implementation?

Upvotes: 0

Views: 431

Answers (1)

DavidG

Reputation: 118937

You can view the source code for HttpServerUtility.UrlTokenDecode yourself.

Essentially, when there is a digit at the end of the input, it passes the first stage of validation and is handed to the Base64 decoding routines. It is inside those routines that the FormatException is raised, because the rest of the input is nonsense.
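This two-stage behavior can be demonstrated with Python's base64 module as a stand-in for the .NET decoder (an analogy, not the actual implementation): once the trailing digit is stripped, the remaining body is rejected by the decoder itself, not by the padding-digit check.

```python
import base64, binascii

# "fu&+bar0" ends in '0', so it clears the padding-digit check.
# The body that reaches the decoder is "fu&+bar", and the '&'
# is what the decoder rejects.
try:
    base64.b64decode("fu&+bar", validate=True)
except binascii.Error as e:
    print("decoder rejected it:", e)
```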

Upvotes: 1
