hr0m

Reputation: 2825

How does a 32-bit machine compute a double precision number

If I only have a 32-bit machine, how does the CPU compute a double-precision number? This number is 64 bits wide. How does an FPU handle it?

The more general question would be: how do you compute something that is wider than the ALU? I fully understand the integer way: you can simply split the operands up. Yet with floating-point numbers you have the exponent and the mantissa, which must be handled differently.

Upvotes: 7

Views: 13342

Answers (5)

Patricia Shanahan

Reputation: 26185

There are several different concepts in computer architecture that can be measured in bits, but none of them prevents handling 64-bit floating-point numbers. Although these concepts may be correlated, it is worth considering them separately for this question.

Often, "32 bit" means that addresses are 32 bits. That limits each process's virtual memory to 2^32 addresses. It is the measure that makes the most direct difference to programs, because it affects the size of a pointer and the maximum size of in-memory data. It is completely irrelevant to the handling of floating point numbers.

Another possible meaning is the width of the paths that transfer data between memory and the CPU. That is not a hard limit on the sizes of data structures - one data item may take multiple transfers. For example, the Java Language Specification does not require atomic loads and stores of double or long. See 17.7. Non-Atomic Treatment of double and long. A double can be moved between memory and the processor using two separate 32 bit transfers.

A third meaning is the general register size. Many architectures use separate registers for floating point. Even if the general registers are only 32 bits, the floating-point registers can be wider, or it may be possible to pair two 32-bit floating-point registers to represent one 64-bit number.

A typical relationship between these concepts is that a computer with 64 bit memory addresses will usually have 64 bit general registers, so that a pointer can fit in one general register.

Upvotes: 4

anatolyg

Reputation: 28300

It seems that the question is really just "how does an FPU work?", regardless of bit widths.

The FPU does addition, multiplication, division, etc. Each operation has a different algorithm.

Addition

(also subtraction)
Given two numbers with exponent and mantissa:

  • x1 = m1 * 2 ^ e1
  • x2 = m2 * 2 ^ e2

The first step is to align them to a common exponent:

  • x1 = m1 * 2 ^ e1
  • x2 = (m2 * 2 ^ (e2 - e1)) * 2 ^ e1 (assuming e2 > e1)

Then one can add the mantissas:

  • x1 + x2 = (whatever) * 2 ^ e1

Then, one should convert the result to a valid mantissa/exponent form (e.g., the (whatever) part might be required to be between 2^23 and 2^24). This is called "renormalization" if I am not mistaken. Here one should also check for overflow and underflow.

Multiplication

Just multiply the mantissas and add the exponents. Then renormalize the multiplied mantissas.

Division

Do a "long division" algorithm on the mantissas, then subtract the exponents. Renormalization might not be necessary (depending on how you implement the long division).

Sine/Cosine

Convert the input to the range [0...π/2], then run the CORDIC algorithm on it.

Etc.

Upvotes: 1

Waters

Reputation: 373

Let's look at integer arithmetic first, since it is simpler. Inside your 32-bit ALU there are 32 individual logic units with carry bits that ripple up the chain: 1 + 1 -> 10, with the carry bit carried over to the second logic unit. The ALU as a whole also has a carry-bit output, and you can use this to do arbitrary-length math. The only real limitation of the bit width is how many bits you can work with in one cycle. To do 64-bit math you need two or more cycles and have to handle the carry logic yourself.

Upvotes: 1

user555045

Reputation: 64913

Not everything in a "32-bit machine" has to be 32-bit. The x87-style FPU was never "32-bit", and it dates from long before AMD64 was created. It was always capable of doing math on 80-bit extended doubles, and it used to be a separate chip, so there was no chance of it using the main ALU at all.

It's wider than the ALU, yes, but it doesn't go through the ALU: the floating-point unit(s) use their own circuits, which are as wide as they need to be. These circuits are also much more complicated than the integer circuits, and they don't really share components with the integer ALUs.

Upvotes: 11

gnasher729

Reputation: 52622

Even 8-bit computers provided extended-precision (80-bit) floating-point arithmetic, by doing the calculations in software.

Modern 32-bit computers (x86, ARM, older PowerPC, etc.) have 32-bit integer hardware and 64- or 80-bit floating-point hardware.

Upvotes: 1
