Reputation: 730
I'm dealing with Unicode data in my DB. I have a field defined as varchar(max), and I'm preventing users from saving unknown characters in this field, such as "≤" (any Unicode character above U+00FF). While doing so, I found that some characters sent to this field are saved and displayed as "?", so I assumed that all Unicode characters above U+00FF would be displayed like this. But then I found that U+201B ("‛") is displayed as "?", while the very next character, U+201C ("“"), is displayed correctly. Can someone please explain why that is?
Update: Sorry if I was not clear, but I do not want to convert to nvarchar; I want to keep my field as varchar. What I need to understand is why a character like "‛" is displayed as "?" in a varchar field, while the next Unicode character, "“", is displayed properly.
Upvotes: 1
Views: 3903
Reputation: 153
You need to change your data type to nvarchar, which can hold any Unicode character, whereas varchar is restricted to the 8-bit code page of the column's collation.
For more information, read the accepted answer to the question linked below.
Difference between varchar and nvarchar
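That code-page restriction also explains the "?" behaviour in the question. Assuming a common default collation such as SQL_Latin1_General_CP1_CI_AS, varchar text is stored in Windows code page 1252, which maps a handful of characters above U+00FF (including U+201C, at byte 0x93) but not U+201B, so the latter is replaced with "?". A minimal Python sketch of that round trip:

```python
# Sketch: simulate storing a string in a varchar column whose collation
# uses Windows code page 1252 (an assumption; check your column's actual
# collation). Characters with a slot in the code page survive; characters
# without one are replaced with '?', as SQL Server does on conversion.

def to_varchar_cp1252(text: str) -> str:
    """Round-trip a string through code page 1252, as a varchar column would."""
    return text.encode("cp1252", errors="replace").decode("cp1252")

print(to_varchar_cp1252("\u201C"))  # U+201C is byte 0x93 in cp1252 -> survives as a left double quote
print(to_varchar_cp1252("\u201B"))  # U+201B has no cp1252 slot -> becomes '?'
```

So whether a character "works" in varchar is not about its Unicode code point being above U+00FF, but about whether the column's code page happens to include it.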
Upvotes: 1
Reputation: 51494
If you want to store Unicode characters, you should use an nvarchar type, not varchar.
Upvotes: 1