Reputation:
Most of the constants I see in other people's code are powers of 2, e.g.
#define SIZE 256
or
public static final int SIZE = 2048;
Is there any particular reason why we do that instead of, say,
#define SIZE 257
?
Upvotes: 10
Views: 1340
Reputation: 86306
Our variable sizes are powers of 2 (1, 2, 4, or 8 bytes). The computer is comfortable working on these boundaries. In the old days, we used to carefully pad our structs to make our code go faster, and sometimes to make pointer arithmetic easier.
If we have a choice between a size of 256 and 257, we go with 256. One reason would be for debugging: when you look at memory or at a file, your debugger or hex viewer will show the data in lines whose length is a power of two.
Here's a hex viewer that shows 16 bytes per line, in groups of 4 (image omitted; source: wikimedia.org).
For flags, we make them powers of two so that each flag gets its own bit and we can keep them all in one variable instead of in many variables or an array. That way they can all be OR'd into and AND'd out of the same variable.
bits |= 8;  //00001000
bits |= 32; //00100000
//bits is now 40 (00101000)
bits &= 32; //00100000
//bits is now 32 (00100000)
Many programmers will write the numbers in hexadecimal instead of decimal so that it's easier to see what's going on with individual bits.
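For instance, the same constants written in hex make the bit positions easier to read:
bits |= 0x08; // 8,  binary 00001000
bits |= 0x20; // 32, binary 00100000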
Upvotes: 2
Reputation: 10129
When it comes to array sizes I suspect there are two reasons why powers of two are favoured. One - as evidenced by several replies here - is that programmers who don't know what is going on "under the hood" seem to have a general sense that it might somehow be more efficient to use a power of two. The other is (mostly historical now) to do with cyclic buffers.
Cyclic buffers whose sizes are powers of two can be handled more easily and faster by using masks on the read and write indices (or pointers) rather than the normally slower modulo operation, or conditions that require branches. This was crucial on older machines but can still be important for transferring large amounts of data, e.g. in graphics processing.
For example, in C, the number of bytes available to be read in a cyclic buffer can be obtained by:
pending = (SIZE + wr - rd) & (SIZE - 1);
If not using a power of two, the equivalent would be:
pending = (SIZE + wr - rd) % SIZE;
On machines that don't implement a division/modulus instruction, that little "%" could take several hundred cycles, so you would need something like:
if (wr >= rd)
    pending = wr - rd;
else
    pending = (SIZE + wr) - rd;
That clutters the code and introduces a branch, which can stall the instruction pipeline.
Writing to the buffer, which was something like:
buf[wr++] = x;
if (wr == SIZE)
    wr = 0;
becomes the (generally) more efficient:
buf[wr++] = x;
wr &= SIZE-1;
Of course, if you used an unsigned 8-bit variable to index a 256-entry array, you didn't even need to do the masking.
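Putting those pieces together, here is a minimal sketch of a power-of-two cyclic buffer in C (the rb_put/rb_get/rb_pending names are just for illustration):

#include <stdint.h>

#define SIZE 256                    /* must be a power of two */

static uint8_t buf[SIZE];
static unsigned wr, rd;             /* write and read indices, kept in [0, SIZE) */

void rb_put(uint8_t x)
{
    buf[wr] = x;
    wr = (wr + 1) & (SIZE - 1);     /* wrap with a mask instead of a modulo */
}

uint8_t rb_get(void)
{
    uint8_t x = buf[rd];
    rd = (rd + 1) & (SIZE - 1);
    return x;
}

unsigned rb_pending(void)
{
    return (SIZE + wr - rd) & (SIZE - 1);   /* bytes waiting to be read */
}

As with the snippets above, this simple scheme cannot distinguish a completely full buffer from an empty one.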
Upvotes: 2
Reputation: 31
Adding more bits to a memory chip is usually done by quadrupling the number of "cells" on the chip: twice as wide, twice as long, four times the memory (setting aside the shrinking distances between the "cells").
It's also easier to multiply using a single shift rather than the generic multiplication algorithm of adding successive shifts depending on whether a particular bit is set. OpenGL was famous for requiring power-of-two sizes for textures, so that a particular scan line could be located with a simple shift.
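For example, with a power-of-two row width the compiler can turn the multiply into a single shift (a trivial sketch):

unsigned row    = 37;
unsigned width  = 256;              /* power of two: 2^8 */
unsigned offset = row * width;      /* can be compiled as row << 8 */
unsigned same   = row << 8;         /* offset == same */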
Upvotes: 1
Reputation: 86432
With binary computers, it's convenient to use standards-based binary multiples like the mebibyte (or kibi-, gibi-, tebi- ...). These power-of-2 numbers also look nice in octal or hex notation.
Upvotes: 1
Reputation: 3306
Not much reason, really. Due to alignment of variables in structures and on the stack, an array of three bytes will likely take up 4 or 8 bytes in memory. I think it just feels nice.
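For example (a small sketch; the exact padding depends on the compiler and target):

#include <stdio.h>

struct padded {
    char bytes[3];   /* three bytes of payload... */
    int  n;          /* ...but the int's alignment forces a padding byte */
};

int main(void)
{
    printf("%zu\n", sizeof(struct padded));   /* typically prints 8, not 7 */
    return 0;
}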
Allocating a whole number of pages from the heap may not work as effectively as desired because of the overhead of the heap's internal structures. Allocating 4096 bytes (one page on a 32-bit Windows machine) may well result in an allocation of 4104 bytes or more.
If the constants are flags, then the story is very different. It is typically more efficient to have bit flags than flags in some base that is not a power of 2.
Upvotes: 1
Reputation: 22591
Powers of 2 are convenient because they map well to underlying constraints in hardware.
Such as:
For flags, powers of 2 always have exactly one bit set, so combinations like MY_FLAG_1 | MY_FLAG_2 | MY_FLAG_3 | ... only work cleanly with powers of two. Likewise for testing for flags with &.
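A minimal sketch of that idea (the flag values here are just an illustration):

#define MY_FLAG_1 1   /* 0001 */
#define MY_FLAG_2 2   /* 0010 */
#define MY_FLAG_3 4   /* 0100 */

int flags = MY_FLAG_1 | MY_FLAG_3;   /* 0101: the two flags occupy separate bits */

if (flags & MY_FLAG_2) {
    /* not reached: MY_FLAG_2's bit was never set */
}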
It's also something of a convention to pick the nearest larger power of two for buffer sizes and the like.
Upvotes: 29
Reputation: 127527
Memory is typically allocated in multiples of page sizes from the operating system, and in many cases, it is useful to arrange things to fit pages exactly (in order to not waste memory). It depends on the specific allocation routine whether this really helps; e.g. if there is an implicit header, having the size a power of two may actually hurt.
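As a rough POSIX sketch of the difference (details vary by platform and allocator):

#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);   /* typically 4096 */

    /* mmap returns whole pages with no allocator header, so asking for
       exactly one page wastes nothing */
    void *p = mmap(NULL, (size_t)page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p != MAP_FAILED)
        munmap(p, (size_t)page);

    /* malloc(page), by contrast, usually carves the block out of a larger
       heap chunk, and the allocator's bookkeeping can push the real
       footprint past a single page */
    return 0;
}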
Upvotes: 4
Reputation: 94217
One good reason is bitmasking. As an example, if you are using constants to represent attributes of an object (or anything else), you can save many attributes into a single integer through bitmasking, and identify the individual attributes later. Great for saving many "flags" in a database.
Upvotes: 18
Reputation: 3941
Because computer memory works with 0/1, that is, the binary system.
Upvotes: -2