asj

Reputation: 29

Processor architecture

While HDDs keep evolving and offer more and more space in less physical room, why are we "sticking with" 32-bit or 64-bit processors?

Why can't there be, e.g., a 128-bit processor?

(This is not my homework; I'm just a student interested in things beyond what they teach us in informatics.)

Upvotes: 2

Views: 697

Answers (8)

Yale Zhang

Reputation: 1578

I'd also like to offer a computer architect's view of why 128-bit is impractical at the moment:

  1. Energy cost. See Bill Dally's presentations on how, today, most energy in processors is spent moving data around (dissipated in the wires). However, since the most significant bits of a 128-bit computation should rarely change, that would mitigate this problem somewhat.

  2. Most arithmetic operations have a non-linear cost w.r.t operand size:

    a. A tree multiplier has space complexity O(n^2) with respect to the number of bits.

    b. The delay of a hierarchical carry-lookahead adder is O(log n) in the number of bits (I think), so a 128-bit adder will be slower than a 64-bit adder. Can anyone give some hard numbers? (O(log n) seems very cheap.)

  3. Few programs use 128-bit integers or quad-precision floating point, and when they do, there are efficient ways to compose them from 32- or 64-bit operations (a sketch follows below).
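
As an illustration of point 3, here's a minimal C sketch of composing a 128-bit addition from 64-bit operations (the u128 type and u128_add helper are made up for this example; in practice compilers such as GCC and Clang offer a built-in __int128 that does this for you):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 128-bit unsigned integer built from two 64-bit halves. */
    typedef struct { uint64_t lo, hi; } u128;

    /* 128-bit addition composed of two 64-bit additions plus a carry. */
    static u128 u128_add(u128 a, u128 b) {
        u128 r;
        r.lo = a.lo + b.lo;
        r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of the low word */
        return r;
    }

    int main(void) {
        u128 a = { UINT64_MAX, 0 };  /* 2^64 - 1 */
        u128 b = { 1, 0 };
        u128 s = u128_add(a, b);     /* expect hi = 1, lo = 0 */
        printf("hi=%llu lo=%llu\n", (unsigned long long)s.hi, (unsigned long long)s.lo);
        return 0;
    }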

Upvotes: 1

Krazy Glew

Reputation: 7534

Well, I happen to be a professional computer architect (my inventions are probably in the computer you are reading this on), and although I have not yet been paid to work on any processor with more than 64 bits of address, some of my friends have been.

And I have been playing around with 128-bit architectures for fun for a few decades.

I.e., it's already happening.

Actually, it has already happened to a limited extent. The HP Precision Architecture, Intel Itanium, and the higher-end versions of the IBM Power line have what I call folded virtual memory. I have described these elsewhere, e.g. in comp.arch posts in some detail: http://groups.google.com/group/comp.arch/browse_thread/thread/53a7396f56860e17/f62404dd5782f309?lnk=gst&q=folded+virtual+memory#f62404dd5782f309

I need to create a comp-arch.net wiki post for these.

But you can get the manuals for these processors and read them yourself.

E.g. you might start with a 64-bit user virtual address. The upper 8 bits may be used to index a region table, which returns 24 upper bits that are concatenated with the remaining 64-8=56 bits to produce an 80-bit expanded virtual address, which is then translated by TLBs, page tables, and hash lookups, as usual, to whatever your physical address is.
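
To make the bit arithmetic concrete, here is a rough C sketch of that 64->80 bit expansion (the region_table, the expand helper, and the field layout are illustrative only, not the actual HP PA, Itanium, or Power formats):

    #include <stdint.h>

    /* Illustrative only: an 80-bit "expanded" virtual address held as a
     * 24-bit region id plus the low 56 bits of the user virtual address. */
    typedef struct {
        uint32_t region_bits;   /* 24 bits looked up from the region table */
        uint64_t offset_bits;   /* low 56 bits of the user virtual address */
    } expanded_va;

    /* Hypothetical per-process region table: 2^8 = 256 entries of 24 bits each. */
    static uint32_t region_table[256];

    static expanded_va expand(uint64_t user_va) {
        expanded_va e;
        unsigned region_index = (unsigned)(user_va >> 56);        /* upper 8 bits */
        e.region_bits = region_table[region_index] & 0xFFFFFFu;   /* 24-bit region id */
        e.offset_bits = user_va & ((UINT64_C(1) << 56) - 1);      /* remaining 56 bits */
        return e;  /* 24 + 56 = 80 bits, then translated by TLBs/page tables as usual */
    }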

Why go from 64->80?

One reason is shared libraries. You may want shared libraries to stay at the same expanded virtual address in all processes, so that you can share TLB entries. But you may be required, by your language tools, to relocate them to different user virtual addresses. Folded virtual addresses allow this.

Folded virtual addresses are not true >64 bit virtual addresses usable by the user.

For that matter, there are many proposals for >64 bit pointers: e.g. I worked on one where a pointer consisted of a 64-bit address plus lower and upper bounds and metadata, for a total of 128 bits - bounds checking. But although these have >64-bit pointers or capabilities, they are not truly >64-bit virtual addresses. (A rough sketch of the general fat-pointer idea follows below.)
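
For illustration, a bounds-checked fat pointer might look roughly like this (the layout and the fat_ptr/fat_load names are invented for this sketch and do not match the specific 128-bit proposal mentioned above):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Illustrative fat pointer: an address plus the bounds it may access. */
    typedef struct {
        uint8_t *addr;    /* the pointer itself */
        uint8_t *lower;   /* lowest valid address */
        uint8_t *upper;   /* one past the highest valid address */
    } fat_ptr;

    /* Dereference with a bounds check; abort on an out-of-bounds access. */
    static uint8_t fat_load(fat_ptr p, size_t i) {
        if (p.addr + i < p.lower || p.addr + i >= p.upper)
            abort();                    /* bounds violation */
        return p.addr[i];
    }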

Linus posts about 128 bit virtual addresses at http://www.realworldtech.com/beta/forums/index.cfm?action=detail&id=103574&threadid=103545&roomid=2

Upvotes: 1

nos

Reputation: 229088

The main need for a 64-bit processor is to address more memory, and that is the driving force behind the switch to 64-bit. On 32-bit systems you can really only address 4 GB of RAM, at least per process, and 4 GB is not much.

64 bits gives you a vastly larger address space: 2^64 bytes is 16 exabytes (though a lot of current 64-bit hardware can address "only" 48 bits - that's still enough to support 256 terabytes of RAM).
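
To put numbers on those address-space sizes, a small sketch (the figures follow directly from powers of two):

    #include <stdio.h>

    int main(void) {
        /* Bytes addressable with n address bits: 2^n. */
        printf("32 bits: %llu bytes (4 GiB)\n",   1ULL << 32);
        printf("48 bits: %llu bytes (256 TiB)\n", 1ULL << 48);
        /* 2^64 itself overflows a 64-bit integer, so express it in GiB:
         * 2^64 bytes = 2^34 GiB = 16 EiB. */
        printf("64 bits: %llu GiB (16 EiB)\n",    1ULL << 34);
        return 0;
    }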

Upping the natural integer size of a processor does not automatically make it "better", though. There are tradeoffs. With 128-bit you'd need twice as much storage (registers/RAM/caches/etc.) compared to 64-bit for common data types - with all the drawbacks that might have: more RAM needed to store data, more data to transmit = slower, wider buses might require more physical space and perhaps more power, etc.

Upvotes: 0

Paul R

Reputation: 212949

When we talk about an n-bit architecture we are often conflating two rather different things:

(1) n-bit addressing, e.g. a CPU with 32-bit address registers and a 32-bit address bus can address 4 GB of physical memory

(2) size of CPU internal data paths and general purpose registers, e.g. a CPU with 32-bit internal architecture has 32-bit registers, 32-bit integer ALUs, 32-bit internal data paths, etc

In many cases (1) and (2) are the same, but there are plenty of exceptions, and this may become increasingly the case, e.g. we may not need more than 64-bit addressing for the foreseeable future, but we may want > 64 bits for registers and data paths (this is already the case with many CPUs with SIMD support).
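
A quick way to see (1) and (2) diverge on an ordinary x86-64 machine (this sketch assumes a C compiler with SSE2 support, which provides the 128-bit __m128i vector type):

    #include <stdio.h>
    #include <emmintrin.h>   /* SSE2: defines the 128-bit __m128i vector type */

    int main(void) {
        /* (1) Addressing width: the size of a pointer (8 bytes = 64 bits on x86-64). */
        printf("pointer:     %zu bytes\n", sizeof(void *));
        /* (2) Data-path width: one SIMD register holds 16 bytes = 128 bits. */
        printf("SSE2 vector: %zu bytes\n", sizeof(__m128i));
        return 0;
    }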

So, in short, you need to be careful when you talk about, e.g. a "64-bit CPU" - it can mean different things in different contexts.

Upvotes: 3

Nick Craver

Reputation: 630379

There's very little need for this - when do you deal with numbers that large? The addressable memory space available to 64-bit is well beyond what any machine can handle for at least a few years... and beyond that it's probably more than any desktop will hold for quite a while.

Yes, desktop memory will continue to increase, but 4 billion times what it is now? That's going to take a while... sure, we'll get to 128-bit, if the whole current model isn't thrown out before then, which I see as equally likely.

Also, it's worth noting that upgrading something from 32-bit to 64-bit puts you in a performance hole immediately in most scenarios (this is a major reason Visual Studio 2010 remains 32-bit only). The same would happen going from 64-bit to 128-bit: the more small objects you have, the more pointers you have, and pointers that are now twice as large mean more data to push around to do the same thing, especially if you don't actually need that much addressable memory space.
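
A back-of-the-envelope sketch of that pointer overhead for a typical small linked node (the node_example struct and the per-pointer sizes are illustrative; real layouts also involve alignment padding):

    #include <stdio.h>
    #include <stddef.h>

    /* A typical small heap object: one payload word plus two links
     * (e.g. a doubly linked list or tree node). */
    struct node_example { long value; void *left; void *right; };

    int main(void) {
        /* Rough per-node cost of the same structure under 32-, 64-, and
         * 128-bit pointers (4, 8, and 16 bytes per pointer). */
        for (size_t ptr = 4; ptr <= 16; ptr *= 2)
            printf("%2zu-byte pointers: ~%zu bytes per node\n",
                   ptr, sizeof(long) + 2 * ptr);
        printf("actual size on this build: %zu bytes\n", sizeof(struct node_example));
        return 0;
    }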

Upvotes: 4

RichieHindle

Reputation: 281415

Because the difference between 32-bit and 64-bit is astronomical - it's really the difference between 2^32 (a ten-digit number in the billions) and 2^64 (a twenty-digit number in the squillions :-).

64 bits will be more than enough for decades to come.

Upvotes: 5

NG.

Reputation: 22904

Cost. Also, what do you think a 128-bit architecture would get you? Memory addressing and such, but to handle it effectively you need higher-bandwidth buses and basically a new instruction set to go with it. 64-bit is more than enough for addressing (18446744073709551616 bytes).

HDDs still have a fair bit of ground to catch up to RAM and such; they're still going to be the I/O bottleneck, I think. Plus, newer chips are adding more cores rather than making a massive change to the instruction set.

Upvotes: 2

Jerome WAGNER

Reputation: 22412

The next big thing in processor architecture will be quantum computing. Instead of being just 0 or 1, a qubit is in a superposition, with some probability of being measured as 0 or 1.

This will lead to huge improvements in the performance of some algorithms (for instance, it would become easy to crack any RSA public/private key pair).

Check http://en.wikipedia.org/wiki/Quantum_computer for more information and see you in 15 years ;-)

Upvotes: 0
