What's the need for different ways to represent data using the binary system?
There are many ways to represent data in binary: unsigned, signed magnitude, ones'/two's complement, and offset-M (excess-M) for integers; floating-point for real numbers; and ASCII and Unicode for text. Why do we need so many different representations?
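To make the question concrete, here is a minimal sketch (assuming an 8-bit word and, for the offset scheme, M = 127) showing how one and the same bit pattern decodes to different values under each of the integer schemes:

```python
# One 8-bit pattern, decoded under several of the schemes named above.
bits = 0b10010110  # raw pattern 1001 0110

unsigned = bits                                            # 150
sign_magnitude = (-1 if bits >> 7 else 1) * (bits & 0x7F)  # -22  (sign bit + 7-bit magnitude)
ones_complement = bits - 0xFF if bits >> 7 else bits       # -105 (negate = flip all bits)
twos_complement = bits - 0x100 if bits >> 7 else bits      # -106 (negate = flip bits, add 1)
offset_127 = bits - 127                                    # 23   (excess-127, as in float exponents)

print(unsigned, sign_magnitude, ones_complement, twos_complement, offset_127)
# -> 150 -22 -105 -106 23
```

So the bits themselves carry no meaning; the chosen representation does. The question is why we settled on this whole family of conventions rather than a single one.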