nonopolarity

Reputation: 151166

Is there a reason why arbitrary precision arithmetic (such as BigInt in JavaScript) is implemented in binary?

From this question, it seems Google Chrome and Node.js both chose to implement arbitrary precision arithmetic in binary. Is there a good reason to do that?

If we can add, subtract, multiply, or divide one decimal digit at a time, computing 7 + 8 = 15 and carrying into the next digit, that seems faster than working bit by bit, where 7 + 8 would need 4 separate single-bit additions.
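For concreteness, here is a minimal C++ sketch of the decimal-digit-at-a-time scheme described above (illustrative only; the function name and the digit representation are made up for this example):

    #include <algorithm>
    #include <vector>

    // Add two numbers stored as base-10 digits, least significant digit first.
    // Illustrative only: one carry-propagating add per decimal digit.
    std::vector<int> AddDecimal(const std::vector<int>& a,
                                const std::vector<int>& b) {
        std::vector<int> sum;
        int carry = 0;
        for (size_t i = 0; i < std::max(a.size(), b.size()); ++i) {
            int d = carry;
            if (i < a.size()) d += a[i];
            if (i < b.size()) d += b[i];
            sum.push_back(d % 10);  // e.g. 7 + 8 = 15 -> digit 5 ...
            carry = d / 10;         // ... and carry 1 into the next digit
        }
        if (carry) sum.push_back(carry);
        return sum;
    }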

Upvotes: 1

Views: 554

Answers (2)

jmrk

Reputation: 40581

V8 developer here. Binary is a good choice because hardware is binary [*]. That doesn't mean that operations happen one bit at a time. In V8, a BigInt's "digits" are uintptr_t values, i.e. register-sized (32-bit on a 32-bit machine, 64-bit on a 64-bit machine) unsigned integers. See our blog post for an overview, and the source for all the gory details. FWIW, many other implementations (e.g. GMP, OpenJDK, Go, Dart) have made the same basic choice.

[*] Some hardware architectures have instructions for "binary coded decimal" arithmetic, which is similar to what you're describing, but this approach is (1) generally considered less efficient, and (2) not available on all architectures that we want V8 to run on.
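For illustration, here is a minimal C++ sketch of addition over such register-sized digits (this is not V8's code; AddLimbs and the vector-of-limbs representation are invented for this example). Each machine-word addition handles 32 or 64 bits at once, with only a one-bit carry flowing between words:

    #include <cstdint>
    #include <vector>

    // Add two magnitudes stored as little-endian arrays of register-sized
    // "digits" (limbs). Illustrative sketch, not V8's implementation.
    std::vector<uintptr_t> AddLimbs(const std::vector<uintptr_t>& a,
                                    const std::vector<uintptr_t>& b) {
        const std::vector<uintptr_t>& longer  = a.size() >= b.size() ? a : b;
        const std::vector<uintptr_t>& shorter = a.size() >= b.size() ? b : a;
        std::vector<uintptr_t> result;
        result.reserve(longer.size() + 1);
        uintptr_t carry = 0;
        for (size_t i = 0; i < longer.size(); ++i) {
            uintptr_t x = longer[i];
            uintptr_t y = i < shorter.size() ? shorter[i] : 0;
            uintptr_t sum = x + y;              // may wrap around
            uintptr_t c1 = sum < x ? 1 : 0;     // carry out of x + y
            uintptr_t sum2 = sum + carry;
            uintptr_t c2 = sum2 < sum ? 1 : 0;  // carry out of adding old carry
            result.push_back(sum2);
            carry = c1 + c2;                    // at most 1 in total
        }
        if (carry) result.push_back(carry);
        return result;
    }

A multi-digit add therefore needs one pass per machine word, not per bit or per decimal digit.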

Upvotes: 3

nonopolarity

Reputation: 151166

One possible answer: the work is done by adding two 32- or 64-bit integers at a time, so it is faster than doing it one decimal digit at a time.

For multiplication, two 64-bit integers can likely be multiplied in a single machine instruction, and all 128 bits of the product can be obtained at once.
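As a sketch of that point (using unsigned __int128, a GCC/Clang extension; this is not V8's code), a single widening multiply yields both 64-bit halves of the product:

    #include <cstdint>

    // Multiply two 64-bit "digits" and recover the full 128-bit product.
    void Mul64x64(uint64_t x, uint64_t y, uint64_t* high, uint64_t* low) {
        unsigned __int128 product = (unsigned __int128)x * y;
        *low  = (uint64_t)product;         // lower 64 bits
        *high = (uint64_t)(product >> 64); // upper 64 bits
    }

On x86-64 this typically compiles down to one MUL instruction, whose two output registers hold the high and low words of the result.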

Upvotes: 0
