Reputation: 24395
I have data that is being encrypted and sent to another page, where it is decoded and interpreted. The raw bytes are converted to a Unicode string with System.Text.Encoding.Unicode.GetString(), then that string is encrypted, then encoded as a Base64 string and sent through the URL. The receiving site decodes the Base64, decrypts it again, and uses System.Text.Encoding.Unicode.GetBytes() to get back to bytes.
However, a single character is not converted back into the same bytes that I sent. It "appears" the same, as a '�', but when the string is converted back into bytes it evaluates differently.
When encoding, each Unicode character takes two bytes; this character is encoded as these bytes:
[0]: 139
[1]: 222
but is decoded as
[0]: 253
[1]: 255
All the other characters survive the trip into and out of the encryption with their byte values unchanged.
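Here is a minimal sketch of just the byte-to-string round trip, leaving out the encryption and Base64 steps (which I assume are not the cause) and using the byte values above:

    using System;
    using System.Text;

    class Repro
    {
        static void Main()
        {
            // The two bytes in question: 139, 222 (0xDE8B in little-endian,
            // which is an unpaired low surrogate rather than a complete character).
            byte[] original = { 139, 222 };

            // Bytes -> string on the sending side.
            string asText = Encoding.Unicode.GetString(original);

            // String -> bytes on the receiving side.
            byte[] roundTripped = Encoding.Unicode.GetBytes(asText);

            Console.WriteLine(string.Join(", ", original));      // 139, 222
            Console.WriteLine(string.Join(", ", roundTripped));  // 253, 255
        }
    }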
Is this character a special case? Are there other characters that would have this same effect? Is something broken with the encryption?
Upvotes: 0
Views: 237
Reputation: 596527
You should not use a text encoding at all in this situation. Encrypt the original data directly to a byte array, encode that to a Base64 string, then decode that back to a byte array and decrypt it to recover the original bytes. The only string involved should be the Base64 string. DO NOT use an intermediate string during the encrypting/decrypting stages.
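For example, something along these lines; AES here is only a stand-in for whatever cipher and key handling you are already using:

    using System;
    using System.Security.Cryptography;

    class Pipeline
    {
        static void Main()
        {
            byte[] originalData = { 139, 222 };   // whatever bytes you need to send

            using (var aes = Aes.Create())
            {
                // Sender: bytes -> encrypted bytes -> Base64 string for the URL.
                byte[] cipherBytes;
                using (var encryptor = aes.CreateEncryptor())
                    cipherBytes = encryptor.TransformFinalBlock(originalData, 0, originalData.Length);

                string forUrl = Convert.ToBase64String(cipherBytes);  // the only string in the pipeline

                // Receiver: Base64 string -> encrypted bytes -> original bytes.
                byte[] receivedCipher = Convert.FromBase64String(forUrl);
                byte[] recovered;
                using (var decryptor = aes.CreateDecryptor())
                    recovered = decryptor.TransformFinalBlock(receivedCipher, 0, receivedCipher.Length);

                Console.WriteLine(string.Join(", ", recovered));      // 139, 222
            }
        }
    }

Remember to URL-encode the Base64 string as well, since '+' and '/' are not safe in a query string.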
Upvotes: 2