Reputation: 509
My assignment is to write a compiler that compiles a small programming language to a fake assembly language (the register and memory cells are of unspecified length).
What I have to do is convert an instruction such as:
a = b * c
into a sequence of INC, SHL i, and ADD i instructions (where i is a pointer to a memory cell; there is just one register available) that performs the multiplication. The problem is that the algorithm must do its work in O(log n) time.
Could anybody explain, step by step, an algorithm that solves this problem? A friend of mine mentioned the Booth-MacSorley algorithm, but I hardly understand it.
EDIT: The fake assembly instructions include: READ, WRITE, LOAD i, STORE i, ADD i, SUB i, SHR i, SHL i, INC, RESET, JUMP k, JZERO k, JODD k (jump to k if the register is odd).
Upvotes: 2
Views: 7621
Reputation: 93476
Since this is a fake assembly language, I wonder why speed matters at all? More likely the requirement exists to discourage you from implementing a naive iterative-addition algorithm, whose running time is proportional to the magnitude of the right-hand operand (or of the smaller of the two operands if you apply a simple optimisation, but O(n) nonetheless).
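For illustration, that naive approach might look like the minimal sketch below. It assumes RESET zeroes the register, that labels such as loop stand for instruction addresses, and that cells 1 (a copy of b), 2 (a counter initialised to c), 3 (the result) and 4 (a scratch constant) are a hypothetical memory layout; the # comments are exposition only:

    RESET          # register := 0 (assumed semantics of RESET)
    STORE 3        # result := 0
    INC            # register := 1
    STORE 4        # keep the constant 1 in cell 4 (there is no DEC instruction)
    loop:
    LOAD 2         # register := counter
    JZERO end      # after c additions we are done
    SUB 4          # counter := counter - 1
    STORE 2
    LOAD 3
    ADD 1          # result := result + b
    STORE 3
    JUMP loop
    end:
    LOAD 3         # the product is now in the register

One pass through the loop per unit of c makes this O(n) in the operand's magnitude, which is exactly what the assignment appears to rule out.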
The standard way to implement multiplication on an instruction set that lacks a multiply instruction is shift-and-add (essentially long multiplication in binary). Given the instructions you have been provided with, this seems like the intended solution.
The question How can I multiply and divide using only bit shifting and adding? may therefore be relevant. A shift-and-add implementation in C that is O(log2 n) is presented as an answer to What is the time complexity of this multiplication algorithm?; it could be adapted to your instruction set, as sketched below.
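Adapted to the instruction set above, one shift-and-add (Russian peasant) loop might look like the following minimal sketch. It assumes cells 1, 2 and 3 hold a working copy of b, a working copy of c, and the result (a hypothetical memory layout), that RESET zeroes the register, and that SHL i and SHR i shift memory cell i left/right by one bit in place; the question does not pin those semantics down, so adjust accordingly:

    RESET          # register := 0 (assumed semantics of RESET)
    STORE 3        # result := 0
    loop:
    LOAD 2         # register := remaining multiplier bits
    JZERO end      # no bits left: done
    JODD add       # lowest bit set: add in the shifted multiplicand
    JUMP shift
    add:
    LOAD 3
    ADD 1          # result := result + shifted multiplicand
    STORE 3
    shift:
    SHL 1          # multiplicand := multiplicand * 2 (assumed in-place shift of cell 1)
    SHR 2          # multiplier := multiplier / 2 (assumed in-place shift of cell 2)
    JUMP loop
    end:
    LOAD 3         # the product is now in the register

The loop body runs once per bit of c, roughly log2(c) + 1 iterations, which gives the O(log n) bound the assignment asks for.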
Upvotes: 3