We could have had a very different history if they'd used DES or RC2 for encryption!
So much of symmetric-key cryptography is just finding creative ways to create and recreate 'one-time pads' so we can distribute the key material instead of the pads themselves.
The one thing that stood out to me from the original blog post and a quick glance at the code was that the pad did not appear to be actually random.
Could anyone who has understood it a bit more confirm or reject this?
Edit: It seems that the random generation can be found starting here: https://github.com/Vulacode/RANDOM/blob/d6a1a1d694b22e6a115b... There are three methods: one (RAND2) seems to use the BASIC interpreter's RNG more or less directly, and the other two appear to be fairly simple PRNGs seeded from the BASIC interpreter's RNG.
I don't actually know what the state of BASIC interpreter RNGs was in the early '80s, but I would be fairly surprised if they were anything close to secure.
That is a method, though, and it's basically what stream ciphers do: translating a key into a pseudorandom stream that's then applied to the plaintext. One benefit of a true OTP is that you don't have to transfer the software and ensure it's generating the same key stream on both ends.
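A minimal sketch of that idea in Python (illustrative only: random.Random is a statistical PRNG, not a cryptographic one, so this is not a real cipher):

    import random

    def keystream(key: int, n: int) -> bytes:
        # Expand the key into a deterministic byte stream. A real stream
        # cipher (e.g. ChaCha20) uses a cryptographic construction here;
        # random.Random is purely for illustration.
        rng = random.Random(key)
        return bytes(rng.randrange(256) for _ in range(n))

    def apply_stream(key: int, data: bytes) -> bytes:
        # Encryption and decryption are the same operation: XOR the data
        # with the keystream regenerated from the shared key.
        return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

    msg = b"attack at dawn"
    ct = apply_stream(1234, msg)
    assert apply_stream(1234, ct) == msg  # both ends derive the same stream

Both ends only need to share the small key, which is exactly the software-distribution trade-off described above.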
The difference is in the nature of the security guarantees. Almost every cryptographic primitive is "computationally secure", which means the best-known attack is to try every key, and that would take until well past the heat death of the universe. One-time pads have "information-theoretic security": even if you try every possible key, you don't learn the contents of the message, because every possible message has a corresponding key that will produce it from the ciphertext you are trying to break.
The reason is that the size of the message space is equal to the size of the key space. In every other cryptosystem, the key space is much smaller than the message space: say, 256-bit AES keys, or 512-bit SHA-2 hashes, for messages that can have many billions of bits in them. Under that scenario, it's unlikely for something that wasn't the key to happen to decrypt the ciphertext to a valid-looking message. But with a one-time pad, you are actually brute-forcing the message space by brute-forcing the key space. Even if you knew the hash of the plaintext, it wouldn't help: you'd just be brute-forcing whatever hash you used, looking for collisions.
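You can see this concretely in a few lines of Python: for any ciphertext, an attacker can "decrypt" it to any plaintext of the same length just by picking the right key, so the ciphertext alone carries no information about which message was sent.

    import os

    msg = b"meet at noon"
    pad = os.urandom(len(msg))                   # a true one-time pad
    ct = bytes(m ^ p for m, p in zip(msg, pad))

    # Any candidate plaintext of the same length has a key that "decrypts"
    # the ciphertext to it, so trying keys tells the attacker nothing.
    candidate = b"flee at once"
    fake_key = bytes(c ^ g for c, g in zip(ct, candidate))
    assert bytes(c ^ k for c, k in zip(ct, fake_key)) == candidate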
This property goes away if you start producing key stream bits by any deterministic process, which is why just sending a PRNG seed makes a bad one-time pad. This is also the difference between /dev/random and /dev/urandom. Linux generates randomness from a PRNG, but it's seeded by unpredictable hardware events and other sources of entropy, and there's a bunch of logic to estimate how much entropy is available. /dev/random specifically blocks until that estimate is positive, so that one-time pads and the like don't repeat bits (at least historically; since kernel 5.6 it only blocks until the pool is initialized). In fact, this is basically the only time you should be using /dev/random! /dev/urandom is perfectly acceptable for all other cryptographic use cases!
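In Python terms (os.urandom reads from the kernel's entropy-seeded pool, the same source as /dev/urandom on Linux):

    import os
    import random

    # Pad material from kernel entropy: every byte is fresh, so a pad of
    # n bytes really carries n bytes of key material.
    good_pad = os.urandom(32)

    # A "pad" stretched from a small seed: the entire output is determined
    # by the 32-bit seed, so an attacker has only ~4 billion pads to try,
    # no matter how long the pad is.
    rng = random.Random(0xCAFE)
    bad_pad = rng.randbytes(32)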
In BASIC, the RANDOMIZE statement sets the seed for the RND function, and you'll find it's initially dependent on time (including the typing speed of the user):
https://github.com/Vulacode/RANDOM/blob/main/RANDOM.BAS#L295
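That matters because a time-derived seed has a tiny keyspace. As a hypothetical sketch (not the actual RANDOM.BAS logic), suppose the seed were just a tick count since midnight at 60 ticks per second: that's only about five million possibilities, and given a few known keystream bytes an attacker can simply try them all:

    import random

    SECRET_SEED = 123456                        # unknown to the attacker
    known = random.Random(SECRET_SEED).randbytes(8)

    # Exhaust every possible tick count in a day (60 ticks/s * 86400 s).
    for seed in range(60 * 86400):
        if random.Random(seed).randbytes(8) == known:
            print("recovered seed:", seed)
            break

Mixing in the user's typing speed adds some entropy on top of a bare clock value, which is presumably why it's done.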
The seed is then reinitialised periodically by mixing in run time (which is highly variable due to microprocessor timing limitations) and checksums of previous parts of the stream:
https://github.com/Vulacode/RANDOM/blob/main/RANDOM.BAS#L319
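A loose Python model of that reseeding scheme (the mixing function and constants here are my guesses for illustration; the real logic is in RANDOM.BAS at the link above):

    import random
    import time

    def reseeded_stream(n: int, block: int = 64) -> bytes:
        # Every `block` bytes, fold the elapsed run time and a checksum of
        # the previous block back into the seed, as the BASIC code appears
        # to do. The exact mixing here is hypothetical.
        out = bytearray()
        seed = int(time.time())
        while len(out) < n:
            chunk = random.Random(seed).randbytes(block)
            out += chunk
            runtime = time.perf_counter_ns() & 0xFFFF   # timing-dependent
            checksum = sum(chunk) & 0xFFFF              # checksum of last block
            seed = (seed * 65539 + runtime + checksum) & 0x7FFFFFFF
        return bytes(out[:n])

    stream = reseeded_stream(256)

Note the consequence: the run-time term differs from machine to machine, so the same stream can't be regenerated elsewhere, which is the reproducibility problem discussed below.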
The RAND[123] routines appear to be Bennett Fox's Algorithm 647, which was designed for simulation purposes (statistical randomness) and is based on Lewis, Goodman, and Miller's construction from 1969, so it has had a great deal of scrutiny.
I think this would have been state of the art in the late 1980s.
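For reference, the Lewis-Goodman-Miller construction is the "minimal standard" multiplicative congruential generator: multiply the state by 16807 (7^5) modulo the Mersenne prime 2^31 - 1. A few lines of Python showing the underlying recurrence (not the full Algorithm 647 code):

    def lgm_next(state: int) -> int:
        # Lewis-Goodman-Miller (1969) multiplicative congruential generator:
        # state' = 16807 * state mod (2^31 - 1)
        return (16807 * state) % 2147483647

    state = 1
    for _ in range(3):
        state = lgm_next(state)
        print(state)  # 16807, 282475249, 1622650073

It has good statistical properties, but with only about 2^31 states and a linear recurrence it's trivially predictable from a little output: fine for simulation, useless as a keystream.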
The generator used in BASIC at the time seems to have reseeded the PRNG automatically, based on processor time and the checksum of the last block generated under the previous seed. You couldn't control that across disparate machines, even if you set the clock on the decoding machine to exactly match the encoding machine at the time of generation, so you'd have needed some other source of randomness.
Instead of just using a statistically useful rand, the creator of this would have had to write their own implementation of a stream cipher, and that means trusting that the NSA hadn't backdoored all of the published designs, which was a real fear at the time. We're honestly still not certain, though in the case people were most paranoid about, the DES standard, it turns out they were actually improving the algorithm's resilience: the S-box changes hardened it against differential cryptanalysis, which wasn't publicly known until years later.