Binary multiplication algorithm



Post by KthProg on Wed Feb 20, 2013 12:14 am

Some background:

As far as I can find, multiplication is implemented in most computers by using a series of adders.
There are much more efficient methods for multiplication (especially for large numbers), but information on how they are implemented in computer architecture is limited or completely non-existent. The common hardware implementation uses either a Dadda or a Wallace multiplier, with only small differences between the two. Both appear to be built around an add-then-carry structure.

Early computers used shift-and-add algorithms to perform multiplication, getting around the hardware limitations of the time. At least one of these methods (Booth's method) is still used, at least to optimize multiplication if not to perform the whole process. The following is a method I've found repeated many times when searching for algorithms similar to the one I've created.

Code:
unsigned int mul_shift_add(unsigned int m, unsigned int n){
    unsigned int p = 0;
    while (n != 0){
        if ((n & 0x01) != 0){
            p = p + m;   /* add the shifted multiplicand when the low bit is set */
        }
        m = m << 1;      /* the shifts happen every iteration, not only when the bit is set */
        n = n >> 1;
    }
    return p;
}


My method is much simpler in both its composition and its execution.
It should be faster than the above method, and should even be faster than Wallace or Dadda multiplication, although the tests I've run show otherwise. There are three possible explanations for this.
1. My computer does not use Dadda multiplication
2. During compilation, the nature of my code was changed, and became less optimal
3. The method used to time the functions is inaccurate or does not work

The following is my method:

Code:
unsigned int mul_bit_scan(unsigned int x, unsigned int y){
    unsigned int a = 0;
    for (unsigned int bit = 0; bit < 32; bit++){
        if (y & (1u << bit)){      /* 1u avoids shifting a signed literal into the sign bit */
            a = (x << bit) + a;    /* add x shifted by the position of the set bit */
        }
    }
    return a;
}


This particular implementation is for 32-bit unsigned integers (bit positions 0 to 31).
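
For reference, here is a minimal sanity check, assuming both routines above are compiled in the same file under the names mul_shift_add and mul_bit_scan used in the listings:

Code:
#include <stdio.h>

/* Compare both routines against the compiler's built-in * operator. */
int main(void){
    unsigned int pairs[][2] = { {13u, 11u}, {0u, 9999u}, {65535u, 65535u} };
    for (int i = 0; i < 3; i++){
        unsigned int x = pairs[i][0], y = pairs[i][1];
        printf("%u * %u = %u (shift-add %u, bit-scan %u)\n",
               x, y, x * y, mul_shift_add(x, y), mul_bit_scan(x, y));
    }
    return 0;
}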

Comparison of the two functions is as follows (alg1 is the common algorithm I've found on the topic, alg2 is my algorithm):

alg1: one conditional loop
alg1: three binary conditions: while (n != 0), n & 0x01, and the != 0 comparison
alg1: one addition
alg1: one left shift, one right shift
alg1: three assignments per loop

alg2: one incremental loop
alg2: one binary condition
alg2: one addition
alg2: two left shifts
alg2: one assignment per loop

In summary, my method contains two fewer condition checks, no right shifts (which may mean it can also be implemented with signed data types), and two fewer variable assignments per loop than the common method shown above.

Any input is appreciated, especially from those who know more about these processes, and even from those who can shoot down the usefulness of my method, particularly if you can offer more information on which algorithm is commonly in use today, so I can try to surpass it.

Also, any comments on the possibility of using circular shifts to iterate through character combinations are appreciated.
I already have a method for it, but I don't know if it's practical.

My code could be optimized by choosing the operand with the fewest set bits as y; whether that would make it faster or slower, I don't know. It would probably be an even tradeoff (a sketch of the idea follows below).
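
As a rough sketch of that idea (the popcount helper and wrapper below are just an illustration I haven't benchmarked), the operand with fewer set bits can be picked before running the bit-scan loop:

Code:
/* Count set bits portably; not a compiler builtin. */
static unsigned int popcount(unsigned int v){
    unsigned int c = 0;
    while (v){ c += v & 1u; v >>= 1; }
    return c;
}

/* Scan the bits of whichever operand has fewer ones. */
unsigned int mul_fewest_ones(unsigned int x, unsigned int y){
    if (popcount(x) < popcount(y)){
        unsigned int t = x; x = y; y = t;   /* swap so y is the sparser operand */
    }
    return mul_bit_scan(x, y);
}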

Here's an explanation of the method. It's really simple (which is why I doubt it's better than what's out there now).
Any binary number abcde can be written as:
(a0000 +
0b000 +
00c00 +
000d0 +
0000e)

Multiplying this by a number x, we get
(a0000 + 0b000 + 00c00 + 000d0 + 0000e) * x; distributing, we get
x*a0000 + x*0b000 + x*...etc.

Each of these partial products is equivalent to shifting x to the left by the position of a, b, c...etc.
If a bit is set, the algorithm shifts x left by the position of that bit and adds the result to the running total;
if the bit is not set, it does nothing.
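
To make that concrete with a worked example: 11 in binary is 1011, i.e. 8 + 2 + 1, so
13 * 11 = (13 << 3) + (13 << 1) + (13 << 0) = 104 + 26 + 13 = 143.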

This does in fact work with fractional values (floats), with one minor change:
you shift right while you're processing the fractional portion, and left for the integer portion.
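
As a rough sketch of what I mean, using an unsigned 16.16 fixed-point format rather than an actual IEEE float (the format and function name are just for illustration):

Code:
/* Multiply two unsigned Q16.16 fixed-point values by scanning the bits of y.
   Integer bits of y (positions 16..31) shift x left, fractional bits
   (positions 0..15) shift x right. Low-order bits are truncated, and the
   integer part of the result must fit in 16 bits. */
unsigned int mul_q16_16(unsigned int x, unsigned int y){
    unsigned int a = 0;
    for (unsigned int bit = 0; bit < 32; bit++){
        if (y & (1u << bit)){
            if (bit >= 16)
                a += x << (bit - 16);   /* integer-part bit: shift left */
            else
                a += x >> (16 - bit);   /* fractional-part bit: shift right */
        }
    }
    return a;
}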

EDIT: it's officially useless, as per this:
http://www.geoffknagge.com/fyp/booth.shtml

Not only does that page call this a novice approach, but Booth encoding takes the same basic approach, except it reduces each run of ones to just two operations. It's a very common method. =[ lol
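
For anyone curious, here is a rough sketch of the radix-2 Booth recoding idea as I understand it from that page (my own illustration, not code from the link): adjacent bit pairs of y are scanned, the start of a run of ones subtracts the shifted multiplicand, the end of a run adds it, and everything in between is skipped.

Code:
/* Radix-2 Booth recoding sketch: a run of consecutive ones in y costs only
   two operations (one subtract at its start, one add past its end).
   Unsigned arithmetic wraps modulo 2^32, so the result matches m * y. */
unsigned int mul_booth(unsigned int m, unsigned int y){
    unsigned int p = 0;
    unsigned int prev = 0;                   /* the implicit bit y[-1] = 0 */
    for (unsigned int i = 0; i < 32; i++){
        unsigned int cur = (y >> i) & 1u;
        if (cur == 1u && prev == 0u)
            p -= m << i;                     /* start of a run of ones: subtract */
        else if (cur == 0u && prev == 1u)
            p += m << i;                     /* end of a run of ones: add */
        prev = cur;
    }
    return p;
}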