Bwoods

Reputation: 187

Why does the 14-bit binary floating point model used in textbooks use bias 16, whereas IEEE single precision uses bias 127?

In my computer architecture course we use a 14-bit binary model (1 bit for the sign, 5 bits for the exponent, and 8 bits for the mantissa). When writing down the exponent, my instructor has us add 16 to offset it (a bias of 16). Why are we using a bias of 16? Is it because 5 bits can only represent the numbers 0 through 31? If so, please elaborate and compare with IEEE single precision, which uses a bias of 127 for its exponent. Lastly, if someone could give me a clear definition of "bias" as used in this context and in binary, I would greatly appreciate it. Please comment if anything I said was unclear.

Upvotes: 3

Views: 1198

Answers (2)

Patricia Shanahan

Reputation: 26175

There are several ways of representing a range of numbers that includes both positive and negative values. Adding a bias is particularly flexible: the range [-n, m) can be represented by adding n to each number, mapping it onto the range [0, m+n).
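
A minimal sketch of that mapping in Python, assuming the 5-bit exponent field and bias of 16 from the question (the function names are mine):

    def encode_exponent(e, bias=16):
        """Map a signed exponent onto the unsigned 5-bit field by adding the bias."""
        stored = e + bias
        assert 0 <= stored <= 31, "exponent out of range for a 5-bit field"
        return stored

    def decode_exponent(stored, bias=16):
        """Recover the signed exponent by subtracting the bias."""
        return stored - bias

    print(encode_exponent(-16))  # 0:  the most negative exponent
    print(encode_exponent(0))    # 16: an exponent of zero
    print(encode_exponent(15))   # 31: the most positive exponent
    print(decode_exponent(16))   # 0

Here n = 16 and m = 16, so the range [-16, 16) maps onto [0, 32), exactly the 32 patterns of a 5-bit field.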

That system is used for the exponent in all the floating point systems I have used. It simplifies some comparisons, because a larger unsigned binary value of the non-sign bits represents a larger absolute magnitude of the float, except for special values such as NaNs.
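
You can observe that comparison property with real IEEE 754 single precision by reinterpreting the bit patterns as unsigned integers; a rough sketch:

    import struct

    def bits(x):
        """Reinterpret a float's single-precision encoding as an unsigned 32-bit int."""
        return struct.unpack('>I', struct.pack('>f', x))[0]

    a, b = 1.5, 2.75
    # For non-negative, non-NaN floats, the unsigned bit patterns sort the
    # same way as the values themselves, because the biased exponent sits
    # in the high-order bits.
    assert (a < b) == (bits(a) < bits(b))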

For float exponents, the bias is around half the exponent range, so that approximately half the values fall on each side of zero. Exact balance is impossible because there is an even number of bit patterns and one of them is used for an exponent of zero, leaving an odd number to split between positive and negative.

As discussed in the other answer, the IEEE 754 standard would use a bias of 15 for a 5-bit exponent.

There are several possible reasons for choosing 16:

  • There is some actual technical reason, such as the suggested one of not treating 31 as special.
  • Bias 16 makes the representation of 1.0 particularly simple, with a single non-zero bit (see the sketch after this list).
  • Being subtly different from IEEE 754 helps convince students that floating point does not imply IEEE 754. There are other floating point formats.
  • Being subtly different from IEEE 754 may discourage use of existing tools to get the results for exercises without understanding how the representation works.
  • It is an arbitrary choice of one of the reasonable values for the exponent bias, without reference to IEEE 754.
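
To illustrate the second point, here is a sketch of encoding 1.0 in the question's 14-bit format, assuming a hidden leading 1 bit as in IEEE 754 (so the stored mantissa for 1.0 is all zeros):

    sign, exponent_field, mantissa = 0, 0 + 16, 0  # 1.0 = +1.0 * 2^0, bias 16
    word = (sign << 13) | (exponent_field << 8) | mantissa
    print(format(word, '014b'))  # 01000000000000 -- only one bit is set

With a bias of 15, the exponent field for 1.0 would instead be 01111, a noticeably less tidy pattern.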

Upvotes: 3

fsasm

Reputation: 541

The IEEE 754 binary float formats follow a simple pattern for the exponent bias: when the exponent has p bits, the bias is 2^(p-1) - 1. With this choice, the format has a roughly equal number of positive and negative exponents.
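
That pattern in code (the function name is illustrative):

    def ieee_bias(p):
        """IEEE 754 exponent bias for a p-bit exponent field: 2^(p-1) - 1."""
        return 2 ** (p - 1) - 1

    print(ieee_bias(5))   # 15   (a 5-bit exponent, as in your format)
    print(ieee_bias(8))   # 127  (single precision)
    print(ieee_bias(11))  # 1023 (double precision)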

For single precision floats p is 8, and therefore the bias is 127. For your format p is 5, so the bias would be 15. Maybe your instructor changed the bias to 16 because the format doesn't support denormals, infinity, and NaN.
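
If that guess is right, then dropping the special encodings frees the all-zeros and all-ones exponent codes; a sketch of the two resulting exponent ranges, under that assumption:

    p = 5
    # IEEE-style, bias 15: codes 0 and 31 are reserved for zero/denormals
    # and for infinity/NaN, leaving codes 1..30 for ordinary exponents.
    ieee_range = (1 - 15, 30 - 15)      # exponents -14 .. 15
    # Hypothetical course format, bias 16: all 32 codes are ordinary exponents.
    course_range = (0 - 16, 31 - 16)    # exponents -16 .. 15
    print(ieee_range, course_range)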

Upvotes: 3
