Reputation: 24150
I'm writing a number converter. How can I convert an integer to a binary string in C# WITHOUT using built-in functions? (Convert.ToString
does different things depending on the value it's given.)
Upvotes: 10
Views: 46019
Reputation: 1440
Starting with .NET 8, the binary format specifier (B) was (finally) introduced, rendering any extra conversion function (built-in or custom) obsolete (well, unless you want something very specific, like adding underscores between each byte or nibble). It's simply part of the standard numeric format strings for the default ToString()
methods of integral numeric types now:
int i = 42;
string binary = i.ToString("B"); // 101010
int j = 0x0f0f;
// you can specify the number of digits in the result string, for example 16:
string itsSoSimpleNow = $"{j:B16}"; // 0000111100001111
// this was not possible with Convert.ToString(...) before
ulong k = ulong.MaxValue;
string evenUlongsWorkFinally = k.ToString("B"); // 11...11
You can use B or b for the specifier; it makes no difference :)
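As noted above, grouping the digits (say, underscores between nibbles) still takes a custom step even on .NET 8. A minimal sketch, assuming .NET 8 for the B specifier (the helper name ToGroupedBinary is made up for illustration):

```csharp
using System;
using System.Linq;

static string ToGroupedBinary(uint value, int groupSize = 4)
{
    // fixed 32-digit binary via the .NET 8 "B" specifier,
    // then '_' inserted between equal-sized groups
    string bits = value.ToString("B32");
    return string.Join("_", Enumerable.Range(0, bits.Length / groupSize)
                                      .Select(i => bits.Substring(i * groupSize, groupSize)));
}

Console.WriteLine(ToGroupedBinary(0x0f0f)); // 0000_0000_0000_0000_0000_1111_0000_1111
```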
If you're stuck with an older .NET version for now (or really want to go custom):
I haven't actually seen an easy ASCII-based solution so far, so here's one:
// requires using System.Text; for Encoding
public static string ToBinaryString(ulong u)
{
    Span<byte> ascii = stackalloc byte[64];
    for (int i = 0; i < 64; i++)
    {
        // we want the MSB on the left, so we reverse everything.
        // other than that we simply grab the ith bit (from the LSB)
        // and OR it onto the ASCII character '0' (0x30).
        // if the bit was 0 the result is '0' itself; if it was 1,
        // the result is '0' | 1 (0x30 | 1), which yields 0x31,
        // conveniently the ASCII code for '1'.
        ascii[63 - i] = (byte)((u & (1uL << i)) >> i | '0');
    }
    return Encoding.ASCII.GetString(ascii);
}
The example above uses ulong
(partly because that doesn't work with the Convert.ToString(..., 2)
approach, at least not on .NET 7), but you can use any other integer type as well. Just be sure to cast your value to the corresponding unsigned type first, as we don't want signed shifts here. You may also have to change the sizes to the bit size of your type. So for Int32
you'd use this code:
public static string ToBinaryString(int value)
{
    // note the cast to unsigned here:
    // we don't want any funny negative values in here.
    // the result is the two's complement binary representation of negative values
    uint u = (uint)value;
    Span<byte> ascii = stackalloc byte[32];
    for (int i = 0; i < 32; i++)
    {
        ascii[31 - i] = (byte)((u & (1u << i)) >> i | '0');
    }
    return Encoding.ASCII.GetString(ascii);
}
Upvotes: 2
Reputation:
Here's mine: the upper part converts a 32-character binary string to a 32-bit integer, and the lower part converts the integer back to a 32-character binary string. Hope this helps.
string binaryString = "011100100111001001110011";
int G = 0;
for (int i = 0; i < binaryString.Length; i++)
    G += (binaryString[binaryString.Length - (i + 1)] & 1) << (i % 32);
Console.WriteLine(G); // 7500403

binaryString = string.Empty;
for (int i = 31; i >= 0; i--)
{
    binaryString += (char)(((G & (1 << (i % 32))) >> (i % 32)) | 48);
}
Console.WriteLine(binaryString); // 00000000011100100111001001110011
Upvotes: 1
Reputation: 1
Here is an elegant solution:
// Convert an integer to binary and return it as a string
private static string GetBinaryString(Int32 n)
{
    char[] b = new char[sizeof(Int32) * 8];
    for (int i = 0; i < b.Length; i++)
        b[b.Length - 1 - i] = ((n & (1 << i)) != 0) ? '1' : '0';
    string s = new string(b).TrimStart('0');
    return s.Length > 0 ? s : "0"; // TrimStart alone would leave an empty string for n == 0
}
Upvotes: 0
Reputation: 70701
Almost all computers today use two's complement representation internally, so if you do a straightforward conversion like this, you'll get the two's complement string:
public string Convert(int x) {
    char[] bits = new char[32];
    int i = 0;
    uint u = (uint)x; // unsigned copy, so the right shift brings in zeros instead of sign bits
    do {
        bits[i++] = (u & 1) == 1 ? '1' : '0';
        u >>= 1;
    } while (u != 0);
    Array.Reverse(bits, 0, i);
    return new string(bits, 0, i); // only the i characters actually written
}
That's your basis for the remaining two conversions. For sign-magnitude, simply extract the sign beforehand and convert the absolute value:
char sign;
if (x < 0) {
    sign = '1';
    x = -x;
} else {
    sign = '0';
}
string magnitude = Convert(x);
For one's complement, subtract one from negative values first (a number's one's complement bit pattern is its two's complement pattern minus one):
if (x < 0)
    x--;
string onec = Convert(x);
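To see the three representations side by side, here's a self-contained sketch for x = -5 (it repeats the conversion loop as a small local helper so it runs on its own; the do/while also makes 0 come out as "0"):

```csharp
using System;

// local helper: bit pattern of an unsigned 32-bit value, no leading zeros
static string Bits(uint u)
{
    char[] bits = new char[32];
    int i = 0;
    do
    {
        bits[i++] = (u & 1) == 1 ? '1' : '0';
        u >>= 1;
    } while (u != 0);
    Array.Reverse(bits, 0, i);
    return new string(bits, 0, i);
}

int x = -5;
string twos = Bits((uint)x);                                    // two's complement pattern
string signMag = (x < 0 ? "1" : "0") + Bits((uint)Math.Abs(x)); // sign bit + magnitude
string ones = Bits((uint)(x < 0 ? x - 1 : x));                  // one's complement pattern

Console.WriteLine(twos);    // 11111111111111111111111111111011
Console.WriteLine(signMag); // 1101
Console.WriteLine(ones);    // 11111111111111111111111111111010
```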
Upvotes: 17
Reputation: 25652
At least part of the answer is to use decimal.GetBits(someValue) to convert the decimal to its binary representation. BitConverter.GetBytes can be used, in turn, on the elements returned from decimal.GetBits() to convert integers into bytes. You may find the decimal.GetBits() documentation useful.
I'm not sure how to go from bytes to decimal, though.
Update, based on the author's update: BitConverter contains methods for converting numbers to bytes, which is convenient for getting the binary representation. Its GetBytes() and ToInt32() methods are convenient for conversions in each direction, and the ToString() overloads can create a hexadecimal string representation if you find that easier to interpret than 1's and 0's.
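A short sketch of the round trip described above. decimal.GetBits and BitConverter are both standard; decimal also has a constructor accepting the same four ints, which covers the "bytes back to decimal" direction:

```csharp
using System;

decimal d = 123.45m;

// decimal.GetBits returns four ints: the 96-bit mantissa (lo, mid, hi)
// plus a flags word holding the scale and sign
int[] parts = decimal.GetBits(d);

foreach (int part in parts)
{
    // BitConverter breaks each int into its four bytes
    byte[] bytes = BitConverter.GetBytes(part);
    Console.WriteLine($"{part,11} -> {BitConverter.ToString(bytes)}");
}

// ToInt32 reverses GetBytes...
int roundTrip = BitConverter.ToInt32(BitConverter.GetBytes(parts[0]), 0);
Console.WriteLine(roundTrip == parts[0]); // True

// ...and the decimal(int[]) constructor reverses GetBits
decimal back = new decimal(parts);
Console.WriteLine(back == d); // True
```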
Upvotes: 6
Reputation: 81660
This is an unsafe implementation:
private static unsafe byte[] GetDecimalBytes(decimal d)
{
    byte* dp = (byte*)&d;
    byte[] result = new byte[sizeof(decimal)];
    for (int i = 0; i < sizeof(decimal); i++, dp++)
    {
        result[i] = *dp;
    }
    return result;
}
And here is reverting back:
private static unsafe decimal GetDecimal(byte[] bytes)
{
    if (bytes == null)
        throw new ArgumentNullException(nameof(bytes));
    if (bytes.Length != sizeof(decimal))
        throw new ArgumentOutOfRangeException(nameof(bytes), "length must be 16");
    decimal d = 0;
    byte* dp = (byte*)&d;
    for (int i = 0; i < sizeof(decimal); i++, dp++)
    {
        *dp = bytes[i];
    }
    return d;
}
Upvotes: 0
Reputation: 16007
You can construct the representations digit by digit from first principles.
Not sure what built-in functions you don't want to use, but presumably you can construct a string character by character?
For one's complement and two's complement, calculate those with an additional step.
Or is this way too basic for what you need?
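The character-by-character construction suggested above can be as simple as repeated division by 2, collecting remainders; a minimal sketch:

```csharp
using System;
using System.Text;

// first principles: repeatedly divide by 2; each remainder is the next lower bit
static string ToBinary(uint value)
{
    if (value == 0)
        return "0";
    var sb = new StringBuilder();
    while (value != 0)
    {
        sb.Insert(0, (char)('0' + (int)(value % 2)));
        value /= 2;
    }
    return sb.ToString();
}

Console.WriteLine(ToBinary(42)); // 101010
```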
Upvotes: 0