Math and Hacking/Computers

Post by Jgrimm on Sat Sep 12, 2009 7:52 pm

So I was reading one of the previous posts claiming it's a misconception that math has much to do with hacking or computers in general. I'm going to counteract that misconception a little.

Basically everything your computer does comes down to strings of 1's and 0's (binary), and I'm sure most of you know that little tidbit already. So let me match up a few areas of math with some of the topics you're all interested in.

Encryption - this field uses algebra, finite field theory, elliptic curve theory, linear algebra, chaos theory, calculus, and more; we could keep going, but you get the idea.

Coding theory - the reliable transmission of information across networks - uses linear algebra, modular arithmetic, Boolean algebra, binary conversions, fast Fourier transforms, Galois field theory, and again the list goes on and on.

I don't know of many programming languages that don't lean on recursive definitions, discrete structures, arrays, multidimensional spaces, and linear algebra, just to name a few.

Networking - topology is a field of mathematics used extensively in networking; it studies connectedness and related properties. (I know a bit about topology, but I only know that it's used in networking, not exactly how.)

Probability theory is used far more in hacking than you might realize; for example, probability is what tells you how strong or weak an encryption algorithm or a password really is.
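For a deliberately simple illustration (a sketch, not taken from any real tool, with made-up key sizes), here is how the size of a key space translates into expected brute-force effort in a few lines of Python:

Code:
import math

def search_space(alphabet_size, length):
    """Number of equally likely candidates for a random key or password."""
    return alphabet_size ** length

def expected_guesses(candidates):
    """Expected number of tries to hit a uniformly random secret."""
    return (candidates + 1) / 2

for bits in (40, 128, 256):
    guesses = expected_guesses(search_space(2, bits))
    print(f"{bits}-bit key: about 2^{math.log2(guesses):.0f} expected guesses")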

I could go on all day, but you get the idea.
Jgrimm
New User
Posts: 4
Joined: Sat Sep 12, 2009 7:27 pm


Re: Math and Hacking/Computers

Post by sanddbox on Sat Sep 12, 2009 8:18 pm

Sorry, but you're over-analyzing. You don't need to know calculus to encrypt something. Binary is easy math. Algebra is easy math. Nearly everything you named is either algebra, a variation thereof, or common sense. Recursion isn't rocket science. Arrays aren't either. Linear algebra? Sorry, you're trying too hard.

Even toddlers know the number 1, and usually understand the concept of 0. Thus, binary isn't hard math.

Finite Theory - the idea that stuff that isn't infinite isn't infinite. No duh. Galois is very similar.

The very concept of chaos theory is just that tiny things can snowball to create bigger effects. (Such as the movement of a mouse being used to help encrypt a file securely). That's not difficult math. Now, if you try to calculate the effect of a pebble dropping causing humidity to rise, that's hard. But that still doesn't require high-level mathematics.

Boolean algebra is just true or false. It's not a science. Toddlers understand that one too. See the 1's and 0's argument.

Binary conversion isn't hard either. It's just paying attention to which switches are on.

Probability is also an easy concept. Even kindergartners can calculate whether it'd be easy to get away with something or not.

Topology - You said yourself you don't understand the functionality behind it.

Basically what you are saying is that algebra is used a lot. Yes, it is. It's not hard math though.
HTS User Composition:
95% Male
4.98% Female
.01% Monica
.01% Goat

sanddbox
Expert
Posts: 2331
Joined: Sat Jul 04, 2009 5:20 pm


Re: Math and Hacking/Computers

Post by thedotmaster on Sat Sep 12, 2009 10:04 pm

Heh. If it's so easy then why do we have so many buffer overflow vulns in software?
Encryption is certainly not just simple math.
Jgrimm is right that there are strong ties between computer science and mathematics: most decent university computer science courses require qualifications in maths, often with higher priority than other computing qualifications.

sanddbox wrote:You don't need to know calculus to encrypt something.

If you're talking about using built-in functions, then no, you don't. But encryption is not just A=5, B=23, etc. Read up on keypair generation functions for proof of that one.
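If you want a feel for what keypair generation actually involves, here is a toy RSA-style sketch in Python with deliberately tiny primes; real key generation uses large random primes, secure padding, and vetted libraries, so treat this purely as an illustration of the underlying number theory:

Code:
from math import gcd

p, q = 61, 53                 # tiny primes, for demonstration only
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

m = 42                        # a "message" smaller than n
c = pow(m, e, n)              # encrypt with the public key
assert pow(c, d, n) == m      # decrypting with the private key recovers m
print(f"n={n}, e={e}, d={d}, ciphertext={c}")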

You mention probability being an easy concept. Sure, it is. But advanced probability is still presumably very hard.
If this link works, can you do this easy enough?

Maths is absolutely crucial to becoming decent in many areas of computing.
thedotmaster
Contributor
Posts: 984
Joined: Sun May 04, 2008 4:39 pm
Location: North West UK


Re: Math and Hacking/Computers

Post by Jgrimm on Sun Sep 13, 2009 11:32 pm

Sanddbox, I have to admit I haven't had a good laugh like that in a while, so let's get started. The first post was off the top of my head; we'll go into a little more detail here.

1. You're right: I can encrypt something with a simple shift cipher, and a modern computer will break it by exhaustive attack in about 30 seconds. If that's what you're aiming for, rock on. There is a nasty family of attacks, the square root attacks, which say that for many trapdoor encryption functions the work needed to break them is roughly the square root of the cardinality of the group the function is built on. These are simple encryption functions, mind you, nothing too crazy: public key exchange systems. So we have a minimum size constraint, and that is where elliptic curve cryptography comes in; it uses elliptic curve groups as the basis for an encryption system. Elliptic curves allow a much smaller key to be used, but now try to describe an elliptic curve group: such a group is built from the set of all integer solutions to a given polynomial in n-dimensional space. The size of such a group is very, very hard to quantify; if you can do it, I would suggest you write a PhD thesis on the subject and write your own ticket in the mathematics world. Regardless, this sort of thing is actually a mix of algebra (hence group), geometry (hence a surface in n-dimensional space), and calculus (the polynomial has to have certain properties to be usable, such as being differentiable in certain circumstances).
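To make the square-root remark concrete, here is a minimal Python sketch of baby-step giant-step, one of the standard square-root attacks, run against a toy multiplicative group mod a small prime rather than an elliptic curve group; the prime, generator, and exponent below are made up for illustration:

Code:
import math

def bsgs(g, h, p):
    """Find x with g^x = h (mod p) in roughly sqrt(group order) steps."""
    m = math.isqrt(p - 1) + 1                    # about the square root of the group order
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j for j < m
    giant = pow(g, -m, p)                        # g^(-m) mod p
    y = h
    for i in range(m):                           # giant steps
        if y in baby:
            return i * m + baby[y]
        y = (y * giant) % p
    return None

p, g, x = 1019, 2, 777
h = pow(g, x, p)
print(bsgs(g, h, p))    # recovers 777 using about 32 steps in each direction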

2. Any time you use a matrix to solve anything, you are using linear algebra, so an array is really linear algebra in disguise; likewise, any time you solve a system of linear equations in several variables, you are using linear algebra. The sad thing is that if you are doing these things, you're using it without even realizing it.
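A quick sketch of the kind of thing I mean, using Python and NumPy to solve a small linear system (the matrix and right-hand side are arbitrary example values):

Code:
import numpy as np

# The system 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

print(np.linalg.solve(A, b))   # [ 2.  3. -1.]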

3. Recursion isn't rocket science, but it is a mathematical concept that many people have problems with. If you don't, then congrats, but trivializing it doesn't help the people who do. We're here to help each other, not to boost our own egos about our own intelligence... well, I'm not here to do that.

4. Binary math isn't hard when you're talking about adding, subtracting, multiplying, and dividing, I suppose. But what about looking at a binary string and working out how much effort it takes to verify that the string is correct, or taking an encrypted binary string and breaking it? Do you know how to break apart a binary data stream, convert it to hex, and convert that to ASCII? If you don't know why that would be useful, look it up; I doubt a toddler can explain it to you.
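For anyone who wants to see that conversion chain, here is a rough Python sketch; the bit string is made up for the example:

Code:
# Break a bit string into bytes, then show it as hex and as ASCII.
bits = "0100100001010100010100110010000100100001"   # 5 bytes, made up

byte_values = [int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)]
hex_form = " ".join(f"{b:02x}" for b in byte_values)
ascii_form = "".join(chr(b) for b in byte_values)

print(hex_form)     # 48 54 53 21 21
print(ascii_form)   # HTS!!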

5. You misread finite theory: it's finite field theory, which is closely tied to Galois theory. So let's talk about finite field theory. A practical application is BCH codes. I hate to break this to you, but they use irreducible polynomials and their roots to construct finite fields, which are then used to encode information for transmission through communications channels. The fun thing about these codes is that they can tell you whether there are errors in the information you received, without you having to open garbage in your browser; even better, they can correct those errors for you.

6. As for Galois theory being anything close to "finite stuff isn't infinite": that's funny. Galois field theory is about field extensions built to include the roots of polynomials that couldn't be factored in the smaller field, and that's just one aspect among many. You're oversimplifying things you obviously don't understand.

7. Don't mistake my saying I didn't know how it's used for not understanding how it's used. I didn't feel like looking it up at the time, but I do now. Network topology has to do with the configuration of the terminals in a network, and connectedness is a prime concern. The topological definition of connectedness asks whether two points can be joined by some path, much like asking whether two computers in a network can reach each other along some path. A famous problem in topology was posed by Euler, who asked whether a person could walk through Königsberg crossing each of its seven bridges exactly once and end up on the other side of town. That may seem simple, but what about when there are 10,000 bridges and you want the most efficient route across all of them? Sounds like a university campus network to me.
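As a toy illustration of connectedness in a network (the machine names and links below are invented), a breadth-first search answers exactly the question of which hosts can reach which:

Code:
from collections import deque

network = {                         # hypothetical adjacency list
    "router":  ["web", "db", "laptop"],
    "web":     ["router", "db"],
    "db":      ["router", "web"],
    "laptop":  ["router"],
    "printer": [],                  # powered on but not cabled to anything
}

def reachable(graph, start):
    """Breadth-first search: every node connected to start by some path."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

print(reachable(network, "router"))                        # printer is missing
print(len(reachable(network, "router")) == len(network))   # False: not connected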

And as for the statement that algebra is not hard math :) I will pose a question for you, and let's see if you still think it's as simple as you believe.

Given a semigroup action of a finite semigroup G on a finite set S, where for a, b in G and s in S you know that as = k and bs = y: can you compare the amount of work it would take to run an exhaustive process that simply computes gs for every element g of G, versus the work it would take to view the action as a subset of the symmetric group Sn, where n is the cardinality of S, and determine the possible answers by running a search through that subset?

If you can answer that, you will have my complete respect and I would be very grateful for the answer. It's nothing hugely important, just a side note on the work I'm doing now, but it takes someone far smarter than me to figure it out, or at least I need the help of someone smarter than I am. I would also be grateful for pointers in the right direction.

Oh, and a final note: that problem comes directly from an issue involving encryption in a general setting. If you can solve it in a pertinent way, you could possibly get published, if the result is definitive enough for the fully general case I'm referring to.
Jgrimm
New User
Posts: 4
Joined: Sat Sep 12, 2009 7:27 pm


Re: Math and Hacking/Computers

Post by thedotmaster on Mon Sep 14, 2009 1:36 am

Pwn'd.
thedotmaster
Contributor
Posts: 984
Joined: Sun May 04, 2008 4:39 pm
Location: North West UK


Re: Math and Hacking/Computers

Post by Jgrimm on Mon Sep 14, 2009 3:26 am

I realized that I missed a few points in the last post so I'll clear them up.

Chaos theory also covers infinitely repeating structure, i.e. fractals. Use the right fractal equation to generate your encryption algorithm and you know it has an inverse, but you also know that it could take years of exhaustively running values to recover the key without knowing that inverse.

Probability - the probability of randomly guessing a key value is extremely important in encryption and computer science. If the probability of guessing the right key or password for a system is something like 0.9, your system sucks and you should know it. The probability of errors in the BCH example I used is good to know as well: if the probability of getting more than 3 errors in a given codeword is 0.0000000000000000000000000001, then you are simply wasting time and resources building a code that detects more than 4 errors and corrects more than 3; in fact, guarding against even that many errors is probably overkill. Probability is also the theory behind pseudo-random number generators: since pure random number generators are a myth, you want a very low probability of reproducing the same number multiple times.
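As a rough illustration of that kind of estimate, the tail of a binomial distribution gives the chance of seeing more than t bit errors in an n-bit codeword; the codeword length and bit-error rate below are made up:

Code:
from math import comb

def prob_more_than(t, n, p):
    """P(more than t errors) when each of n bits flips independently with probability p."""
    return 1.0 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))

n, p = 63, 1e-4          # hypothetical codeword length and bit-error rate
for t in range(4):
    print(f"P(more than {t} errors) = {prob_more_than(t, n, p):.3e}")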

Boolean algebra is the math behind logic gates and anything else that uses logic: every if/then, every do/while, every true/false test you use in actual programming (I don't know about scripting, haven't done much of it; haven't done much programming either, to be honest). It gets as complicated as you feel like making it.
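A tiny example: a half adder built from the Boolean operations XOR and AND, the same identities that sit underneath every logic gate and every if/while test:

Code:
def half_adder(a, b):
    total = a ^ b          # XOR: 1 when exactly one input is 1
    carry = a & b          # AND: 1 only when both inputs are 1
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
# 0 0 -> (0, 0)   0 1 -> (1, 0)   1 0 -> (1, 0)   1 1 -> (0, 1)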

If anybody wants more information on any of these subjects, I'll do my best to provide some articles.
Jgrimm
New User
Posts: 4
Joined: Sat Sep 12, 2009 7:27 pm


Re: Math and Hacking/Computers

Post by Spe-edS on Mon Sep 14, 2009 5:47 am

Wow. i haven't seen such complete and udder ownage since... well shit...
-Special Needs
Spe-edS
Experienced User
Posts: 51
Joined: Tue Aug 18, 2009 6:36 pm
Location: The Internets


Re: Math and Hacking/Computers

Post by Jgrimm on Wed Sep 16, 2009 1:56 pm

Here's a paper I wrote on the subject of BCH codes.

Codes

In Applied Algebra: Codes, Ciphers, and Discrete Algorithms, Darel W. Hardy and Carol L. Walker define a code as “a system of words or other symbols used to represent a given set of words or other symbols.” The purpose of a code is to change information that may not be easily transmitted into something that is easily transmitted. Codes differ from encryption in that they are not designed to obscure the information. Some examples of codes are ASCII and binary. ASCII is used to change the text you type on your computer into a form that can be stored in memory and reproduced from memory when necessary, among other things. Bar codes on the products we buy at the store are another familiar code: all those different-sized lines give the cash register the information needed to sell you the product you want. From just these examples we can see that codes have different properties and go about their functions in different ways; I will be discussing a code designed with the property that it can detect and correct a given number of errors in the information received. This type of code is naturally called an error-detecting/correcting code. The codes I will be discussing are BCH codes, named for Bose, Ray-Chaudhuri, and Hocquenghem, the people who designed them.

BCH Codes

BCH codes are based on properties of finite fields built from polynomials. We will be working in the finite fields GF(p^n), where p is a prime and n is a natural number; these are Galois fields. The way the code works is to associate the plaintext you want to transmit with a polynomial and then encode it using another polynomial. The encoding polynomial allows us to check for, and fix, errors in the received information.

Theorem: Let q(x) be an irreducible polynomial of degree n over GF(p) and use q(x) to construct GF(p^n). Let α be a primitive element of GF(p^n), let m_i(x) be the minimal polynomial of α^i for i = 1, 2, ..., 2t, and set g(x) = lcm[m_1(x), m_2(x), ..., m_{2t}(x)]. Let deg(g(x)) = k and let a(x) be the plaintext polynomial, of degree at most p^n - k - 2. Then the minimum weight of a nonzero codeword a(x)g(x) is at least d = 2t + 1, and at least t errors can be corrected.

There are some properties of finite fields that let us see what is going on with these codes more clearly. We start with an irreducible polynomial q(x) in GF(p)[x]. We note that GF(p^n) is a finite-dimensional vector space over GF(p): a vector (a_{n-1}, a_{n-2}, ..., a_0), with each a_i taken mod p for 0 ≤ i < n, represents a polynomial of degree at most n - 1 in GF(p)[x]. We can choose the size of the finite field we work in using the fact that GF(p^n) is the splitting field of the polynomial x^{p^n} - x, so this polynomial factors completely over GF(p^n); in fact it factors into all of the distinct irreducible polynomials over GF(p) whose degree divides n. So once we have all the factors we have a list from which to choose q(x). We then let α be a root of q(x); its powers form a cyclic multiplicative group of order p^n - 1, where n = deg(q(x)). What we need to do next is find the encoding polynomial g(x). First we decide how many errors we want the code to correct, chosen via the formula above in such a way that the code is still useful; it wouldn't be useful if you could only send one bit of information at a time because you constructed g(x) too large. Since t is the number of errors we can correct, a code that corrects 3 errors needs the minimal polynomials of 2t = 2(3) = 6 consecutive powers of α. So in this instance we need to find the minimal polynomials of α, α^2, ..., α^6 to compute g(x). We note here that we would probably need a fairly large n to make this code workable, since the maximum degree of a(x) is given by the formula above and deg(g(x)) will most likely be close to p^n for small n. The following is an example of the process of creating a BCH code.
I will construct a code in GF(8) using GF(2). Since 8 = 2^3, we want an irreducible polynomial of degree 3. We can determine the irreducible polynomials relevant to GF(8) either by factoring the polynomial x^8 + x or by building up all the polynomials of degree 3 from polynomials of degree 1 and 2 and choosing one that doesn't appear in that list. I will use the irreducible polynomial q(x) = x^3 + x^2 + 1. We let α be a root of q(x), so q(α) = 0, and use this to generate GF(8). We show that |α| = 7 using the power notation. Since α is a root of q(x) we get α^3 + α^2 + 1 = 0, so α^3 = α^2 + 1. Taking the powers of α gives the following table:
0 = 0
α^0 = 1
α^1 = α
α^2 = α^2
α^3 = α^2 + 1
α^4 = α^2 + α + 1
α^5 = α + 1
α^6 = α^2 + α
α^7 = 1
and we have shown that α is a primitive element of GF(8).
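(As an aside, the table above is easy to reproduce with a few lines of Python if each field element is stored as a bit mask, bit i holding the coefficient of x^i; this is only an illustrative sketch.)

Code:
Q = 0b1101                          # q(x) = x^3 + x^2 + 1 as a bit mask

def times_alpha(a):
    """Multiply a GF(8) element by alpha, reducing by q(x) when the degree hits 3."""
    a <<= 1
    if a & 0b1000:
        a ^= Q
    return a

power = 0b010                       # alpha^1 is the polynomial x
for i in range(1, 8):
    print(f"alpha^{i} = {power:03b}")
    power = times_alpha(power)
# alpha^7 prints as 001, i.e. 1, confirming that alpha has order 7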
We then need to compute g(x), which is the lcm of the minimal polynomials of each α^i.
Using a process similar to the one outlined in [1], squaring gives
0 = 0^2 = (α^3 + α^2 + 1)^2 = (α^3)^2 + (α^2)^2 + 1 = (α^2)^3 + (α^2)^2 + 1.
This means that m_1(α^2) = 0 and m_1(x) = m_2(x). We also have m_1(α^4) = 0; this follows from the fact that 4 = 2^2, hence α^4 = (α^2)^2 and m_1(α^4) = (m_1(α^2))^2 = 0^2. Therefore m_1(x) = m_2(x) = m_4(x). Next we work out the minimal polynomial shared by α^3, α^5, and α^6; they have the same minimal polynomial because α^6 = (α^3)^2 and (α^6)^2 = α^12 = α^5, since 12 ≡ 5 (mod 7). We compute it by setting m_3(x) = (x - α^3)(x - α^5)(x - α^6) = x^3 + x^2(α^6 + α^5 + α^3) + x(α^11 + α^9 + α^8) + 1 = x^3 + x^2(α^6 + α^5 + α^3) + x(α^4 + α^2 + α) + 1. Reducing with the table above gives x^3 + x^2(0) + x(1) + 1, hence m_3(x) = x^3 + x + 1. To compute g(x) we take lcm(m_1(x), ..., m_6(x)), so g(x) = m_1(x)m_3(x). Therefore g(x) = (x^3 + x^2 + 1)(x^3 + x + 1) = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1. Notice that we have included the minimal polynomials of 6 consecutive powers of α in our construction of g(x), so this code could theoretically correct 3 errors. Unfortunately, when we apply the theorem's bound on the degree of the plaintext polynomial, p^n - k - 2 with k = deg(g(x)), we find deg(a(x)) ≤ 0, which means this code isn't useful; this illustrates the earlier remark about the size of p^n relative to deg(g(x)).
We can now use some examples to illustrate the decoding of received information and checking it for errors.

Example 1: Assume that the polynomial q(x) = x^3 + x + 1 is used to construct a BCH code that corrects a single error, with plaintext polynomials of the form a(x) = a_3x^3 + a_2x^2 + a_1x + a_0 ∈ GF(2)[x]. Assume that the message x^5 + x^4 + x^3 + 1 is received. What is the plaintext?

Solution: Since this code corrects a single error, it turns out that g(x) = q(x), because q(x) is the minimal polynomial of both α and α^2, as shown above. To determine whether the received information is correct we first take x^5 + x^4 + x^3 + 1 mod q(x); by the properties stated above, we want this to come out to zero. The result is x + 1, showing that there is an error in the received information. However, since this is a single-error-correcting code and the remainder x + 1 looks like a double error, we have to find the single-error equivalent of x + 1. We do this by examining the powers of α determined by this minimal polynomial and find that x + 1 is associated with α^3, which is our single error. We subtract this error from the received polynomial and the result is x^5 + x^4 + 1. Taking this as the intended codeword, we compute the plaintext polynomial from c(x)/g(x) = a(x), where c(x) is the code polynomial we just computed and a(x) is the plaintext. The result is a(x) = x^2 + x + 1, which we can write in binary-style form as a(x) = (0, 1, 1, 1).
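(The arithmetic in this example can likewise be checked with a small GF(2)[x] division routine, a sketch of which follows; polynomials are stored as bit masks as before.)

Code:
def gf2_divmod(a, b):
    """Polynomial division in GF(2)[x]; returns (quotient, remainder) as bit masks."""
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a

g = 0b1011                        # g(x) = q(x) = x^3 + x + 1
r = 0b111001                      # received word x^5 + x^4 + x^3 + 1
print(bin(gf2_divmod(r, g)[1]))   # 0b11 -> remainder x + 1, so there is an error

c = r ^ 0b1000                    # subtract the single error x^3
quotient, remainder = gf2_divmod(c, g)
print(bin(quotient), bin(remainder))   # 0b111 0b0 -> plaintext x^2 + x + 1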

Example 2: Assume that the polynomial q(x) = x^4 + x + 1 is used to construct a BCH code that corrects a single error, with plaintext polynomials a(x) ∈ GF(2)[x]. What is the largest possible degree for a(x)? Assume that the polynomial x^12 + x^10 + x^8 + x^6 + x^2 + x is received. What is the plaintext?

Solution: The root α of this polynomial generates GF(16). Since we are building a code that corrects a single error we want t = 1 in the expression 2t + 1 from Theorem 90 on p. 215 of [1], so we need 2 consecutive roots. This works out easily: g(x) = q(x), because α is a root of the irreducible polynomial q(x), so q(x) is the minimal polynomial of both α and α^2 and should be used as the generating polynomial. Knowing the degree of g(x) and using the bound p^n - k - 2 on the maximum degree of a(x), also from Theorem 90, we get 16 - 4 - 2 = 10, so deg(a(x)) ≤ 10. When we evaluate r(x)/g(x) we find that there was an error in the received text. The error residue turns out to be x^2 + x + 1, and for the same reason as above we find the associated single error, which is x^10. Proceeding in the same manner as outlined in 10.2.1, the result is x^12 + x^8 + x^6 + x^2 + x, which is our codeword c(x). Evaluating c(x)/g(x) = a(x), we find that a(x) = x^8 + x^5 + x, which we can represent in binary form as (0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0).

Example 3: Let α be a root of x^2 + x + 2 in GF(9) and calculate g(x) = m_1(x)m_2(x).

Solution: We proceed much as in the previous examples, except that we are now working modulo 3 instead of modulo 2. In this problem the roots repeat in multiples of 3, i.e. m_1(x) = m_3(x) and m_2(x) = m_6(x). Therefore m_1(x) = x^2 + x + 2 and m_2(x) = (x - α^2)(x - α^6) = x^2 - xα^6 - xα^2 + α^8 = x^2 - x(α^6 + α^2) + 1. Simplifying α^6 + α^2 we end up with α + 2 + 2α + 1 = 0 (mod 3), so m_2(x) = x^2 + 1. To find g(x) we take m_1(x)m_2(x) = (x^2 + x + 2)(x^2 + 1) = x^4 + x^2 + x^3 + x + 2x^2 + 2, and simplifying we end up with g(x) = x^4 + x^3 + x + 2.

Example 4: Assume the generator polynomial defined in Example 3 produced a codeword that was received as 2x^7 + 2x^4 + 2x^3 + x^2 + 2x + 2. What is the plaintext?

Solution: Since we are using the generator polynomial from Example 3, g(x) = x^4 + x^3 + x + 2, so we need to evaluate (2x^7 + 2x^4 + 2x^3 + x^2 + 2x + 2)/(x^4 + x^3 + x + 2). While doing this we need to keep in mind that we are now working over GF(3^2), so our coefficients are mod 3 instead of mod 2. Going through the polynomial division we have (2x^3)(x^4 + x^3 + x + 2) = 2x^7 + 2x^6 + 2x^4 + x^3, noting that 2(2) = 4 ≡ 1 (mod 3). Subtracting, (2x^7 + 2x^4 + 2x^3 + x^2 + 2x + 2) - (2x^7 + 2x^6 + 2x^4 + x^3) = -2x^6 + x^3 + x^2 + 2x + 2, and since -2 ≡ 1 (mod 3) this is x^6 + x^3 + x^2 + 2x + 2. Next, (x^2)(x^4 + x^3 + x + 2) = x^6 + x^5 + x^3 + 2x^2, and (x^6 + x^3 + x^2 + 2x + 2) - (x^6 + x^5 + x^3 + 2x^2) = -x^5 - x^2 + 2x + 2, which with coefficients mod 3 is 2x^5 + 2x^2 + 2x + 2. Continuing the polynomial division in the same manner all the way through, the error residue comes out to 2x^3. We now have r(x) = g(x)a(x) + e(x), therefore r(x) - e(x) = g(x)a(x), and (2x^7 + 2x^4 + 2x^3 + x^2 + 2x + 2) - (2x^3) = 2x^7 + 2x^4 + x^2 + 2x + 2. Evaluating (r(x) - e(x))/g(x) = (2x^7 + 2x^4 + x^2 + 2x + 2)/(x^4 + x^3 + x + 2) = 2x^3 + x^2 + 2x + 1 = a(x). As above, we can represent this plaintext in ternary notation, similar to the binary used earlier, as a(x) = (2, 1, 2, 1). Note that there are only 4 positions in this notation because the maximum deg(a(x)) = p^n - k - 2 = 9 - 4 - 2 = 3.

Finally, we note that most codes come with a check called a parity check matrix. These matrices allow the receiver to determine whether an error has occurred and, in some cases, correct it; a notable example is the parity check matrix of the Hamming(7,4) code. While BCH codes have a parity check matrix, a useful property of this code lets us skip the matrix algebra and simply use g(x) as our parity check, which reduces the number of operations needed for the error detection/correction portion of the code. We can nevertheless construct the parity check matrix for a BCH code as a matrix of powers of α: the first row is [1 α α^2 ... α^(d-1)], where d - 1 is the number of coefficients in our codeword, the second row is [1 α^2 (α^2)^2 ... (α^2)^(d-1)], the third row takes powers of α^3, and so on, until we have a matrix with p^n rows and d - 1 columns. We then take our codeword c(x) as a vector whose entries are its coefficients and do the associated linear algebra, multiplying the matrix by the vector. This multiplication should result in 0 if we have received our information correctly. We don't need to go through this process, however, because it is replaced by simply finding the residue of our received polynomial mod g(x).

Conclusion
Through the discussion above and the concrete examples that have been provided I have demonstrated the construction and use of BCH codes.



Bibliography
David S. Dummit and Richard M. Foote, Abstract Algebra Third Edition, John Wiley and Sons Inc., New Jersey, 2004.

Darel W. Hardy and Carol L. Walker, Applied Algebra: Codes, Ciphers, and Discrete Algorithms, Prentice Hall, New Jersey, 2003.

Piyush Kuror, Lecture 14: BCH codes, http://www.cmi.ac.in/~ramprasad/lecturenotes/comp_numb_theory/lecture14.pdf, April 28, 2008.

Jgrimm
New User
Posts: 4
Joined: Sat Sep 12, 2009 7:27 pm


Re: Math and Hacking/Computers

Post by Defience on Wed Sep 16, 2009 4:26 pm

"What if C-A-T really spelled dog?"
-Revenge of The Nerds II
Defience
Addict
Posts: 1275
Joined: Thu Jun 12, 2008 3:16 pm


Re: Math and Hacking/Computers

Post by sanddbox on Tue Dec 22, 2009 6:48 pm

Jgrimm wrote:Sanddbox I have to admit that I haven't had a good laugh like that in a while.

^Yes, now your argument is much more valid! [sarcasm]

Sorry for not replying; I didn't notice this thread and was away from HTS for a while.

Is math a fairly large part of the programs we create? Yes.

But the idea that you need a lot of these concepts to be able to program is absurd.

If you're going to make an encryption program, then yes, it's a damn good idea to research encryption.

Obviously, a very basic knowledge of math is needed. However, complex math itself isn't necessary to program. If you're going to make a program that is heavily laden with math concepts (encryption, calculating fractals, whatever floats your boat), then a knowledge of the subject your program is about is necessary.

Oh, and to that other guy, it's spelled 'utter'. Sorry for being nitpicky, it's my OCD speaking.
HTS User Composition:
95% Male
4.98% Female
.01% Monica
.01% Goat

sanddbox
Expert
Posts: 2331
Joined: Sat Jul 04, 2009 5:20 pm

