Academic Question


Academic Question

Post by hidden_test on Mon Dec 20, 2010 9:19 pm
([msg=50875]see Academic Question[/msg])

Hi,
This is a mathematical problem that has been bothering me. I am new to cryptography, and this question is about the mathematical background required. Please see these slides:
http://rapidshare.com/files/437713784/F ... cation.pdf

I would like to discuss some aspects of these notes.
First question: on page 4, what is meant by "The stuff in parenthesis doesn't amount to much (though it is fun to think about how much there is)"?

Also, the number of steps given for multiplication is 2n^2. I think it should be more. Could anyone please explain this to me?

If I can understand this, I am hoping I can move further in my study of cryptography.

Thanks in advance. 8-)
hidden_test
New User
 
Posts: 3
Joined: Mon Dec 20, 2010 9:12 pm
Blog: View Blog (0)


Re: Academic Question

Post by nathandelane on Thu Dec 23, 2010 4:06 pm
([msg=50996]see Re: Academic Question[/msg])

PDF referenced: http://noether.uoregon.edu/~sadofsky/fa ... cation.pdf

Hi, I've been looking at the pages in your PDF on addition and multiplication, and I disagree somewhat with what they say. But I'm not sure it really matters exactly how many steps n-digit multiplication or n-digit addition takes, especially as it pertains to encryption. Understanding the approximate number of operations required to evaluate an expression can still be useful, though, if you need to know that.

To understand this better it helps to know what the processor or virtual environment does. For example, if you are writing assembly, you need to know how the processor and memory handle wide numbers. A 32-bit processor handles 32-bit unsigned integer values natively, or signed values with 31 bits of magnitude plus a sign bit (assuming the processor uses two's complement to represent positive and negative numbers); fixed-point decimal values get less effective width still, since the available bits must be split between the integer and fractional parts. Processors also use registers to store flags and temporary values. All of this is covered in computer architecture, but much of it matters less unless you are relying on small-scale hardware (e.g. 64 KB of program memory, 0-16 KB of processor cache) or low-level programming in assembly, C, or C++. If you are using a virtual environment such as those provided by .NET/Mono, Java, or any of a number of scripting languages such as Ruby, Perl, or Python, then 32 bits of precision is exactly what you expect it to be, because these environments "fix" processor precision issues by using their own variables to store and check data.

Back to the math, if I add two one-digit numbers together, like 1 + 1, then I get 2. IMO 1 + 1 is a single operation. In computer terms, three operations would take place to perform the addition:

  • Place 1 in the left operand register
  • Place 1 in the right operand register
  • Add left operand register and right operand register, placing the result in the result register (placing the result is done as part of the adder ALU's logic by design, so it is not a separate operation)
This is pretty much the lowest level of operation you can get to. In a 32-bit system, you can expect values up to 32 bits in size to behave similarly, as in: 4294967295 + 4294967295 = 8589934590. Since 4294967295 is the maximum value a 32-bit unsigned integer can hold, if that were the bit-width limit for numbers in your system, you would get an integer overflow (wraparound), resulting in 4294967294: one less than each of the numbers being added, because the top bit of the 33-bit sum is carried out of the register:

Code: Select all
   11111111111111111111111111111111
+  11111111111111111111111111111111
-----------------------------------
  111111111111111111111111111111110
  ^
  |
   --- This bit is carried by the add operation because all of the binary fields are full.

[32-bit register can only hold 11111111111111111111111111111110 which omits the first 1
resulting in 4294967294]
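That wraparound is easy to reproduce in a high-level language by masking the sum to 32 bits. A minimal sketch in Python (the constant and function name are mine, just for illustration; Python's own integers are arbitrary-precision, so the mask is what simulates the register):

```python
MASK32 = 0xFFFFFFFF  # largest value a 32-bit unsigned register can hold

def add32(a, b):
    """Add two values the way a 32-bit register would: any carry out
    of bit 31 is simply dropped, so the sum wraps around."""
    return (a + b) & MASK32
```

For example, add32(4294967295, 4294967295) gives 4294967294, matching the diagram above.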

Now typically we think of simple multiplication as a single operation, for example 3 * 1 = 3 or 12 * 2 = 24, but when you break it down we are still adding. One method of performing multiplication is to start with one copy of the multiplicand (in the first expression, 1) and iteratively add it to itself multiplier - 1 more times (in the first expression, the multiplier is 3). So 3 * 1 becomes 1 + 1 + 1. Likewise the second expression becomes 2 + 2 + 2 + 2 + 2 + 2 + 2 + 2 + 2 + 2 + 2 + 2.
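That iterated-addition method can be sketched directly. A minimal example in Python (the function name is mine; it assumes a positive integer multiplier):

```python
def multiply_by_addition(multiplier, multiplicand):
    """Multiply by repeated addition: start from one copy of the
    multiplicand and add it to itself multiplier - 1 more times,
    i.e. multiplier - 1 addition operations in total."""
    total = multiplicand
    for _ in range(multiplier - 1):
        total += multiplicand
    return total
```

So multiply_by_addition(3, 1) performs two additions and returns 3, and multiply_by_addition(12, 2) performs eleven additions and returns 24.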

One important property of both multiplication and addition is that the order of the operands does not matter. For example 3 * 1 is exactly the same as 1 * 3, and likewise 12 * 2 is the same as 2 * 12; in both pairs the answer is the same no matter which order the operands occur in. For addition, 1 + 3 is the same as 3 + 1, and 12 + 2 is the same as 2 + 12. However, when performing symbolic arithmetic it is often simpler to place values containing more digits above those with fewer digits.

For example, in:

Code: Select all
   2
+ 12
----
  14

There is an imaginary zero in front of the 2. A computer might not understand that, however, unless it is well-defined: null is not the same as 0 in arithmetic. So instead we must have:

Code: Select all
  02
+ 12
----
  14

It would be simpler still for a computer program to detect a difference in operand widths and swap the operands so the wider one is placed in the higher position, like this:

Code: Select all
  12
+  2
----
  14
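Either fix can be mechanized. A sketch of the zero-padding approach in Python (the function name is mine, just for illustration):

```python
def pad_operands(a, b):
    """Left-pad the shorter decimal string with zeros so both operands
    have the same width for column-wise arithmetic."""
    sa, sb = str(a), str(b)
    width = max(len(sa), len(sb))
    return sa.zfill(width), sb.zfill(width)
```

For example, pad_operands(2, 12) returns ('02', '12'), i.e. the explicit-zero form shown above.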

When we do multiplication symbolically, if the second operand has more than one digit, then we have to keep track of a new partial product for each digit, to be added after the multiplication is done. If there are three digits in the second operand, there will be three such partial products. For example:

Code: Select all
     634
x    512
--------
    1268
    6340
+ 317000
--------
  324608
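The shifted rows in that worked example are the partial products, one per digit of the second operand. A sketch in Python (the function name is mine):

```python
def partial_products(a, b):
    """One shifted partial product per digit of b, taking the digits
    least significant first: a * digit * 10^position."""
    return [a * int(d) * 10 ** i
            for i, d in enumerate(reversed(str(b)))]
```

For 634 and 512 this yields [1268, 6340, 317000], whose sum is 324608, exactly as in the worked example.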

Another example:

Code: Select all
     4
x  512
------
     8
    40
+ 2000
------
  2048

So in the first example we must first multiply each digit by each digit. That is 3 * 3 operations, or 9 operations. Then we must add the resulting partial products. The largest number contains six digits, so we have six addition operations to perform. There are also two carries that occur during the addition, which you could count as extra operations. So for the first multiplication problem, evaluated symbolically, we get 9 + 6 + 2 = 17 operations total. That compares to 512 operations (since it is the smaller operand) if we use an addition loop, or a single operation if the CPU supports the width of the numbers.

The second example takes 3 + 4 = 7 operations, compared to 4 using a loop, and again a single operation since the CPU supports the width of these values.

Looking back now, the PDF you referenced states that naive multiplication takes 2n^2 steps, where n is the number of digits (assuming both operands have the same number of digits). For two 3-digit operands that would be 2(3^2) = 2(9) = 18, while we found that multiplying those two specific numbers took 17 operations. The document doesn't say anything about asymmetrical operands, as in my second example, 4 * 512, which took only 7 operations. Still, 18 was close. Let's take two symmetrical 4-digit numbers and multiply them symbolically.

Code: Select all
     1111
x    1111
---------
     1111
    11110
   111100
+ 1111000
---------
  1234321

Given Hal Sadofsky's formula, we should require 2(4^2) = 2(16) = 32 operations to complete this multiplication symbolically. The multiplication side uses 4 * 4 = 16 operations; on the addition side, the widest value is seven digits wide, so at minimum seven operations are required to add the four partial products together, and with no carries that is all we need, giving 16 + 7 = 23. 23 is pretty distant from 32.

Let's try five digits:

Code: Select all
      10101
x     10101
-----------
      10101
     000000
    1010100
   00000000
+ 101010000
-----------
  102030201

(5 * 5) + 9 operations = 25 + 9 = 34. And Sadofsky says 2(5^2) = 2(25) = 50. Even further away.

The result is:

Code: Select all
# Digits (n)   # Operations    Sadofsky: 2(n^2)
------------------------------------------------
1               1               2
2               7               8
3              17              18
4              23              32
5              34              50
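Those counts can be checked mechanically by simulating the symbolic method: one operation per digit-by-digit product, one per column of the final addition, plus one extra per carry. A sketch in Python under that counting convention (the function name is mine):

```python
def schoolbook_ops(a, b):
    """Count single-digit operations in symbolic long multiplication:
    digit products, plus one addition per result column, plus one
    extra operation per carry during the column-wise addition."""
    digits_a, digits_b = str(a), str(b)
    mults = len(digits_a) * len(digits_b)
    # shifted partial products, least significant digit of b first
    partials = [a * int(d) * 10 ** i
                for i, d in enumerate(reversed(digits_b))]
    width = len(str(a * b))  # columns in the final addition
    carries = 0
    carry = 0
    for col in range(width):
        s = carry + sum((p // 10 ** col) % 10 for p in partials)
        carry = s // 10
        if carry:
            carries += 1
    return mults + width + carries
```

Under this convention, schoolbook_ops(634, 512) returns 17 and schoolbook_ops(1111, 1111) returns 23, matching the table.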

So, to answer your questions: I still don't know what he means by "The stuff in parenthesis doesn't amount to much (though it is fun to think about how much there is)." And the number of steps, in my opinion, is actually less than 2n^2. Hope that helps explain it a little. And maybe I'm wrong.

-- Thu Dec 23, 2010 2:31 pm --

As a follow-up, I realized that page three of the PDF states: "Consider addition of two digits, or multiplication of two digits to be a single operation." I had assumed the addition of the symbolic partial products to be a single operation per column. But based on this statement, the addition takes (n - 1) * w operations, where n is the number of operands (partial products) and w is the number of digits in the widest operand.

Given this information take the four-digit multiplication problem again:

Code: Select all
     1111
x    1111
---------
     1111
    11110
   111100
+ 1111000
---------
  1234321

In the addition portion of the multiplication problem there are (4 - 1) * 7 operations of adding a single digit to a single digit. That means the addition section takes 21 steps, making the total 16 + 21 = 37. So if this is your thinking, then you are correct that 2(n^2) is low for the number of operations.

You can always expect the number of operands in the addition portion to equal the number of digits of the widest multiplicand. And the width of the widest addition operand will be either 2w - 1 or 2w, where w is the width of the widest multiplicand, depending on whether there is a carry on the last symbolic multiplication.

So at most the number of operations for a multiplication problem would be (m^2) + (2m * (m - 1)), where m is the width of the widest multiplicand. If you take carries into account, then you can have at most (((m-1) * m) + (2m - 1)) - 1 carries on top of that.

The minimum number of operations would be:
Multiplication: (m^2)+(2m*(m-1))
Carries: 0

Code: Select all
Revised given the statement from the document.

# Digits (m)   # Operations   Sadofsky: 2(n^2)   Nathandelane: [(m^2)+(2m*(m-1))] to
                (examples)                        [(m^2)+(2m*(m-1))]+[((m-1)*m)+(2m-1)-1]
-----------------------------------------------------------------------------------------
1               1              2                   1
2               7              8                   8-12
3              17             18                  21-31
4              23             32                  40-58
5              34             50                  65-93
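These bounds can be tabulated straight from the formulas. A Python sketch (the function name is mine):

```python
def op_bounds(m):
    """Operation bounds for multiplying two m-digit numbers under the
    one-op-per-digit-pair convention: base is the digit products plus
    the column additions; the worst case adds the maximum carries."""
    base = m * m + 2 * m * (m - 1)
    max_carries = (m - 1) * m + (2 * m - 1) - 1
    return base, base + max_carries
```

For example, op_bounds(4) gives (40, 58) and op_bounds(5) gives (65, 93), the last column of the table.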
Me, Nathandelane, Highly influential to Hackerdom, Premature Optimization=http://c2.com/cgi/wiki?PrematureOptimization
User avatar
nathandelane
Poster
 
Posts: 204
Joined: Thu Jun 26, 2008 11:26 am
Location: Utah
Blog: View Blog (0)


Re: Academic Question

Post by hidden_test on Thu Jan 06, 2011 3:05 am
([msg=51784]see Re: Academic Question[/msg])

Thank you for your reply, nathan.

For some reason I did not get an email notification of your reply to my post, so I couldn't read it before.
I am still reading what you have written, and I see that at one point you agree that the number of steps needed should be greater than what the author said. My opinion was based on counting the possible carries that may arise while adding all the numbers in the corresponding column after the multiplication of individual digits. According to my advisor, though, it's nothing to worry about, as the writer may have got his result on the basis of some assumption he didn't state. The main point is that multiplications take more time than additions, and the amount of time needed for multiplication increases quadratically [ O(n^2) ] as the length of the input increases, so minor additions may be ignored. Moreover, the addition of a carry to the addition of two bits may be considered a single operation (as it is in a full adder).
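That full-adder point can be illustrated with a one-bit sketch in Python (hypothetical function name): the sum bit and the carry-out are produced together, so absorbing the carry doesn't cost a separate step.

```python
def full_adder(a, b, carry_in):
    """One-bit full adder: produces the sum bit and carry-out of
    a + b + carry_in in a single combined step."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out
```

For example, full_adder(1, 1, 1) returns (1, 1): sum bit 1 with a carry out.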

However, I am still reading up on the topic, so if I have any new developments I will let you know.
Thank you again for your reply.
hidden_test
New User
 
Posts: 3
Joined: Mon Dec 20, 2010 9:12 pm
Blog: View Blog (0)


