DingoBebe wrote: I'm just trying to figure out how one would determine to use ASCII for this mission?
You assume it supports, at the very minimum, ASCII. But in case you're a doubting spirit, I ran a small experiment to see how strings get encrypted: I took the "Ѫ" ("spider") character, which only exists in Unicode (note the forum handles Unicode), and encrypted it. The result was "&$3475A", so you get seven characters for the price of one.
The conclusion of the experiment:
The code behind the encryption works on a byte-per-byte basis (1 byte = 1 ASCII character), so it sees the spider character as a series of 7 bytes. Encryption deals in bytes and bits; ASCII is simply meaningless to it. To us it looks like it works character by character, but feed it a Unicode character and you see that's clearly not the case.
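To see the idea without the mission's actual algorithm (which I don't know, so the shift cipher below is just a hypothetical stand-in), here's a minimal sketch: the routine only ever sees the bytes the encoding produced, so one non-ASCII character turns into several input bytes and several output bytes.

```python
def encrypt_bytes(data: bytes, key: int = 7) -> bytes:
    # Operates strictly byte by byte; it has no notion of ASCII or Unicode.
    # This shift is a placeholder, not the mission's real algorithm.
    return bytes((b + key) % 256 for b in data)

plain = "aѪ"                     # one ASCII character, one Unicode-only character
raw = plain.encode("utf-8")      # the non-ASCII character becomes multiple bytes here
cipher = encrypt_bytes(raw)

print(len(plain), len(raw), len(cipher))  # 2 characters, but more bytes in and out
```

The exact byte count depends on the encoding the system uses; the point is only that one character can become several bytes, and the cipher happily treats each of them separately.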
DingoBebe wrote: As a person, I know that the letter 'd' is two character values from the letter 'a'
That's because of ASCII, but I could just as well invent my own scheme with 'a' and 'd' being x values apart. Also, didn't you mean 3 character values apart?
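Quick check of that correction, using ASCII code points (UTF-8 matches ASCII for these characters):

```python
# 'a' is 97 and 'd' is 100 in ASCII, so they sit 3 code values apart.
print(ord('a'), ord('d'), ord('d') - ord('a'))   # 97 100 3
```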
DingoBebe wrote: But what determines the values for the system I am decoding?
The Encoding Scheme.
DingoBebe wrote: How do I know the values for 'a' is still 1 and 'd' is still 4?
Again, you assume the system supports ASCII because of its broad use, but under different circumstances another, more exotic encoding scheme besides Unicode could be used.
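For example (not something this mission necessarily uses, just an illustration), the same letter maps to a different byte value under EBCDIC than under ASCII. Python ships a cp500 codec, which is one of the EBCDIC variants:

```python
# Same letter, different byte value under a different encoding scheme.
print('a'.encode('ascii'))   # b'a'    -> byte value 0x61 (97) in ASCII
print('a'.encode('cp500'))   # b'\x81' -> byte value 0x81 (129) in EBCDIC (cp500)
```

So "'a' is 1" is only true inside whatever scheme the mission's author picked; nothing forces every system to agree on it.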