Reputation: 12515
I am converting base-10 numbers to base-2 numbers, and specifying the number of bits I'd like to use to represent these base-10 numbers.
Here's my code for negative numbers:
function output = DTB(decimal,bits)
if decimal < 0
    smallestNum = -(2^(bits-1));
    if decimal < smallestNum
        error('%d cannot be represented in %d bits. Increase the number of bits. ',decimal,bits);
        output = '';
    end
    output = '1';
    bits = bits - 1;
    if smallestNum == decimal
        while bits ~= 0
            output = [output,'0'];
            bits = bits - 1;
        end
    end
    num = smallestNum;
    while num ~= decimal
        num = smallestNum + 2^(bits-1);
        if num > decimal
            output = [output,'0'];
        else
            output = [output,'1'];
            smallestNum = smallestNum + 2^(bits-1);
        end
        bits = bits - 1;
    end
    while bits ~= 0
        output = [output,'0'];
        bits = bits - 1;
    end
end
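For example, the negative branch produces the expected two's-complement string:
DTB(-5, 4)   % returns '1011', the 4-bit two's complement of -5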
This works fine. The issue I'm running into (oddly enough, since going from positive decimals to binary should be easier) is with positive integers. It should just be a minor tweak to the negative number algorithm, right? The positive number piece does not work in the case of decimal = 8 and bits = 6, for example (it fails for other powers of 2 as well). What's wrong here?
Here's the code for positive integers:
if decimal > 0
    largestNum = (2^(bits-1))-1;
    if decimal > largestNum
        error('%d cannot be represented in %d bits. Increase the number of bits. ',decimal,bits);
        output = '';
    end
    % first spot must be zero to show it's a positive number
    output = '0';
    bits = bits - 1;
    largestNum = largestNum + 1;
    num = largestNum;
    while num ~= decimal
        num = largestNum - 2^(bits-1);
        if num > decimal
            output = [output,'0'];
        end
        if num <= decimal
            output = [output,'1'];
            largestNum = largestNum - 2^(bits-1);
        end
        bits = bits - 1;
    end
    while bits ~= 0
        output = [output,'0'];
        bits = bits - 1;
    end
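For reference, a call that reproduces the problem:
DTB(8, 6)   % expected '001000'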
Upvotes: 1
Views: 3873
Reputation: 1
You can use this script in MATLAB (fi and bin require the Fixed-Point Designer toolbox):
a = [1 2 3 4; -2 -4 3 4; 7 8 9 4];
[c,v] = size(a);
n3 = c*v;
word_len = 5;                    % word length in bits of the binary word
data = reshape(a', n3, 1);
databin = fi(data, 1, word_len); % signed fixed-point with word_len-bit words
h = bin(databin)                 % result
Upvotes: -1
Reputation: 311
You need to reduce largestNum when you put a zero in the output array, because you're essentially starting from a binary array of all ones (i.e. largestNum). This code worked for me:
if decimal > 0
    largestNum = (2^(bits-1))-1;
    if decimal > largestNum
        error('%d cannot be represented in %d bits. Increase the number of bits. ',decimal,bits);
        output = '';
    end
    % first spot must be zero to show it's a positive number
    output = '0';
    bits = bits - 1;
    largestNum = largestNum + 1;
    num = largestNum;
    while num ~= decimal
        num = largestNum - 2^(bits-1);
        if num > decimal
            output = [output,'0'];
            largestNum = largestNum - 2^(bits-1);
        end
        if num <= decimal
            output = [output,'1'];
        end
        bits = bits - 1;
    end
    while bits ~= 0
        output = [output,'0'];
        bits = bits - 1;
    end
end
I'm not sure what this is for, but I would highly recommend using the built-in dec2bin to do this.
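For example, a minimal sketch of a dec2bin-based version (the function name dec2twos is only illustrative; negatives are handled with the standard two's-complement offset of 2^bits):
function output = dec2twos(decimal, bits)
    % Sketch only: assumes -2^(bits-1) <= decimal <= 2^(bits-1)-1
    if decimal < 0
        decimal = decimal + 2^bits;   % two's-complement offset for negatives
    end
    output = dec2bin(decimal, bits);  % pad with leading zeros to 'bits' digits
end
dec2twos(-5, 4)   % '1011'
dec2twos(8, 6)    % '001000'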
Upvotes: 3