Google researchers have run a very interesting experiment involving AIs communicating with each other and building their own form of encryption to do so.
The Google Brain team (whose mission is to ‘make machines intelligent’), specifically researchers Martin Abadi and David Andersen, used three neural networks in their experiment, which posed the question of whether a pair of them could use shared secret keys (i.e. cryptography) to protect the information they were communicating from the prying eyes of a third AI.
In the setup, two AIs, Alice and Bob, communicated with each other, while a third, Eve, was tasked with trying to decrypt their messages (Alice, Bob and Eve being commonly used placeholder names in the field of cryptography, incidentally).
Alice sent encrypted messages to Bob, which he was initially pretty bad at deciphering, but his accuracy rapidly improved, and the pair eventually arrived at a seemingly solid encryption scheme which Eve could not break into.
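For the curious, the training objective behind this kind of adversarial setup can be sketched in a few lines. The sketch below is an illustrative assumption based on the general idea described above, not the researchers' actual code: message bits are represented as numbers, Eve is penalised for failing to reconstruct the plaintext, while Alice and Bob are penalised both when Bob misreads the message and when Eve does better (or worse) than random guessing.

```python
import numpy as np

# Illustrative sketch (not Google Brain's code): bits are floats in {0, 1},
# and "loss" is mean absolute reconstruction error per bit.

def l1_distance(p, q):
    """Mean absolute per-bit difference between two message vectors."""
    return np.abs(p - q).mean()

def eve_loss(plaintext, eve_guess):
    """Eve simply tries to reconstruct the plaintext from the ciphertext."""
    return l1_distance(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    """Alice and Bob want Bob to recover the message while keeping Eve
    at chance level (~0.5 error per bit), where she learns nothing.
    The exact weighting here is an assumption for illustration."""
    reconstruction = l1_distance(plaintext, bob_guess)
    # Penalise Eve deviating from chance in either direction: an Eve who is
    # reliably wrong leaks information just as one who is reliably right.
    eve_term = ((0.5 - eve_loss(plaintext, eve_guess)) ** 2) / 0.25
    return reconstruction + eve_term
```

In training, Eve's network would be updated to minimise `eve_loss`, while Alice's and Bob's networks would be updated to minimise `alice_bob_loss`, each side adapting to the other over many rounds, which matches the back-and-forth the experiment describes.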
As TechCrunch, which spotted the experiment, notes, it was particularly interesting because of the novel techniques the neural networks used to keep their communicated info secret, taking cryptographic avenues that humans wouldn’t commonly explore.
Eve did have a little success in cracking parts of messages midway through the experiment, but that quickly dried up as Alice and Bob refined their techniques.
As for the real-world impact of this, it doesn’t mean a whole lot right now, but it’s certainly interesting to see the sort of role AI could potentially play in security down the line.
And of course there’s the obligatory worry about artificial intelligence being able to keep secrets from its creators. And perhaps the bigger worry of just what those secrets might be.