Reputation: 604
2^10 bytes = 1 KB, 2^20 bytes = 1 MB, and so on.
Except that a byte is 8 bits, so I do not understand why we use powers of 2 as an explanation. I can completely understand talking about bits in powers of 2, but with bytes I am totally lost. Many textbooks / online resources talk about it this way; what am I missing here?
By the way, I understand that 2^10 = 1024, which is approximately 10^3 = 1000. What I don't understand is why we justify the use of these prefixes with bytes using powers of 2.
Upvotes: 4
Views: 15584
Reputation: 11
Given that modern computing uses "words" (8 bits) as the minimal storage unit, but that the most practical computing unit is instead 2 words (because that unit can store real numbers),
then the simplest explanation is that a "two-word" processing system gives rise to a 2^n measurement unit for computing systems.
The conclusion is that a 2-byte-based measure is not directly a consequence of using binary logic (0s and 1s in bits); but, as a matter of fact, it makes the two systems arithmetically homogeneous -- and confusing :)
The next step would be to ask why 8-bit storage units were developed rather than 6- or 11-bit ones.
Upvotes: 0
Reputation: 11
I think the part that you are hung up on is the conversion from bytes to KB to MB, etc. We all know the conversion, but let me clarify:
1024 bytes is a kilobyte. 1024 kilobytes is a megabyte, etc.
As far as the machines go, they don't care about this conversion! They just store it as x bytes. Honestly, I'm not sure the machine even cares about bytes; it may just deal with bits.
While I'm not entirely sure, I think 1024 is a somewhat arbitrary choice made by some human. It's close to 1000, which is used in the metric system. I thought the same thing you did: "this has nothing to do with binary!" As one of the other answers says, it's nothing more than "easy to work with".
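For what it's worth, here is a minimal sketch of the 1024-based ladder described above (the function name is just for illustration):

    # Walk up the 1024-based unit ladder described above.
    def to_human(num_bytes):
        for unit in ("bytes", "KB", "MB", "GB", "TB"):
            if num_bytes < 1024:
                return f"{num_bytes:g} {unit}"
            num_bytes /= 1024  # each step up divides by 1024
        return f"{num_bytes:g} PB"

    print(to_human(2048))     # 2 KB
    print(to_human(1048576))  # 1 MB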
Upvotes: -1
Reputation: 1309
The reason is that you do not use bytes only to store numbers, but also to address the memory bytes that store those numbers (or even other addresses). With 1 byte you have 256 possible addresses, so you can access 256 different bytes. Using only 200 bytes, for example, just because it is a rounder number, would waste address space.
This example assumes 8-bit addresses for simplicity; modern PCs usually have 64-bit addresses.
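To make the address-space point concrete, here is a minimal sketch (the address widths are just common examples):

    # How many distinct bytes can an n-bit address reach?
    for bits in (8, 16, 32, 64):
        print(f"{bits}-bit addresses -> {2**bits} addressable bytes")

An 8-bit address reaches exactly 256 bytes, which is why capacities naturally land on powers of 2.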
By the way, in the context of hard drives, the advertised capacity is often a round decimal number, e.g. 1 TB, because drives address storage space differently. Powers of 2 are used in most memory types, like RAM, flash drives/SSDs, and cache memory. In these cases the numbers are sometimes rounded, e.g. 1024 KB is labeled as 1 MB.
There are actually two different sets of prefixes for powers of 10 and powers of 2. Powers of ten give kilobytes, megabytes, and gigabytes, while powers of two give kibibytes, mebibytes, and gibibytes. Most people just use the former names in both cases.
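As a quick illustration of how far apart the two conventions drift at large sizes:

    # Decimal (SI) vs. binary (IEC) interpretation of "1 terabyte".
    tb  = 10**12   # 1 TB  (terabyte, power of 10)
    tib = 2**40    # 1 TiB (tebibyte, power of 2)
    print(tib - tb)           # 99511627776 bytes of difference
    print(f"{tb / tib:.1%}")  # a 1 TB drive is ~90.9% of 1 TiB

This is why a "1 TB" drive shows up as roughly 931 GB in an operating system that divides by 1024.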
Upvotes: 4
Reputation: 604
Okay, so I figured my own question out. 2^3 bits = 1 byte. So if we have 2^13 bits and want to convert them to bytes: 2^13 bits × (1 byte / 2^3 bits) = 2^10 bytes, which is a kilobyte. With this conversion, it makes much more sense to me why bytes are represented in powers of 2.
We can do the same thing with powers of ten: 10^1 ones = 1 ten. Then, to convert 10^25 ones to tens: 10^25 ones × (1 ten / 10^1 ones) = 10^24 tens, as expected.
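A one-line check of that arithmetic (purely illustrative):

    # 2^13 bits, divided by 2^3 bits per byte, gives 2^10 bytes.
    print(2**13 // 2**3)  # 1024, i.e. one kilobyte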
Upvotes: 1
Reputation: 168
I am not sure I get exactly what you are asking, but:
2^10 bits = 1 Kbit
2^10 bytes = 1 KByte = (2^3)(2^10) bits = 2^13 bits
These are two different numbers of bits, and you should not confuse them with each other.
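A tiny sanity check of those two quantities:

    # 1 Kbit vs. 1 KByte, both expressed in bits.
    print(2**10)          # 1024 bits in a Kbit
    print(2**10 * 2**3)   # 8192 bits in a KByte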
Upvotes: 0
Reputation: 2336
From your question, I think that you understand powers of two and measuring in bytes. If not, the other answers explain that.
Is your question why we don't use bits rather than bytes, since bits are truly binary?
The reason that memory, disk space, etc., are described in bytes rather than bits has to do with the word addressability of early computers. The bit, nibble, and byte came about as workable amounts of memory in simple computers. The first computers had actual wires that linked the various bits together. 8-bit addressability was a significant step forward.
Bytes instead of bits is just a historical convention. Network measurements are in (mega)bits for similar historical reasons.
Wikipedia has some interesting details.
Upvotes: 4
Reputation: 45262
I'll ask the question you're really asking: Why don't we just use powers of 10?
To which we'll respond: why should we use powers of 10? Because the lifeforms using the computers happen to have 10 fingers?
Computers break everything down to 1s and 0s.
1024 in binary = 10000000000 (2^10), which is a nice round number.
1000 in binary = 1111101000 (not an even power of 2).
If you are actually working with a computer at a low level (i.e. looking at the raw memory), it is much easier to think in numbers that are round in the representation the machine actually uses.
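You can see this directly (a quick illustration):

    # "Round" in decimal vs. round in binary.
    print(bin(1000))  # 0b1111101000  -- messy in binary
    print(bin(1024))  # 0b10000000000 -- a single 1 followed by ten zeros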
Upvotes: 5