
Google’s AI has created its own form of encryption via machine learning

Company’s deep learning project achieves significant milestone

One small step for artificial intelligence, one giant leap toward the day robots take over the world.

At least, that is how many will interpret this story. But what the team at Google Brain, the company’s deep learning project, has achieved has more to do with the former than the latter. You see, researchers Martín Abadi and David Andersen have demonstrated that two neural network-based computing systems can communicate with one another using a simple encryption scheme they were never taught.

AI encryption
The computers created this encryption strategy via machine learning; that is, they were never taught any cryptographic algorithms. The resulting ciphertext was quite basic, especially when compared with current human-designed systems. But what’s impressive is that this marks the first time neural networks (computing systems loosely based on networks of artificial neurons) have created a form of cryptography, a task this kind of technology is not known to be good at.

In the experiments, the Google Brain researchers started with three neural networks, referred to as “Alice,” “Bob,” and “Eve.” Each was trained to perfect a single role: Alice’s one job was to send a secret message to Bob; Bob’s job was to decode the message from Alice; and Eve’s job was to try to figure out what the message was, despite not being a party to the conversation (she was the eavesdropper).

To keep the message secret, Alice had to convert the original plain-text message into something unintelligible, so that anyone who intercepted it wouldn’t be able to understand it. This ciphertext then had to be deciphered by Bob.

To allow this transfer of information between Alice and Bob to take place, both systems started with a pre-agreed key that Eve did not have access to; this set of numbers was then used to help encrypt and decrypt the message.
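The encryption method the networks actually learned is opaque, but the message/key/ciphertext plumbing is easy to illustrate with a hand-written stand-in. Here is a minimal Python sketch (my own illustration, not the paper’s learned cipher) that XORs a 16-bit message with a shared key, matching the shapes used in the experiment:

```python
import random

MSG_BITS = 16  # the experiments used 16-bit plaintexts


def make_key(bits=MSG_BITS, seed=None):
    """Shared key: random bits known to Alice and Bob, but not Eve."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(bits)]


def alice_encrypt(plaintext, key):
    """Stand-in for Alice: XOR each plaintext bit with the key bit."""
    return [p ^ k for p, k in zip(plaintext, key)]


def bob_decrypt(ciphertext, key):
    """Stand-in for Bob: XOR again with the same key to recover the message."""
    return [c ^ k for c, k in zip(ciphertext, key)]


# Usage: Alice and Bob share the key; Eve sees only the ciphertext.
key = make_key(seed=42)
message = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
ciphertext = alice_encrypt(message, key)
recovered = bob_decrypt(ciphertext, key)
assert recovered == message
```

In the real experiment, neither XOR nor any other scheme was supplied; Alice and Bob had to discover their transformation on their own during training.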

At first, both neural networks were terrible at sending and understanding secret messages, but over time, practice improved their communication. Alice gradually developed its own encryption strategy, and Bob got better at decrypting the messages it received.

The first scenario was played out over 15,000 training runs. By the end, Bob was successfully converting Alice’s ciphertext messages back into plain text, while Eve could guess only eight of the 16 bits forming each message. With each bit being just a “1” or a “0,” that success rate is on par with pure chance. 
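Eve’s score of eight out of 16 bits is what blind guessing achieves on average, which is why the researchers counted it as a failure to eavesdrop. A quick simulation (my own sanity check, not from the paper) shows that random guesses land near eight correct bits:

```python
import random


def random_guess_score(rng, bits=16):
    """Guess each bit of a random 16-bit message; count correct guesses."""
    message = [rng.randint(0, 1) for _ in range(bits)]
    guess = [rng.randint(0, 1) for _ in range(bits)]
    return sum(m == g for m, g in zip(message, guess))


rng = random.Random(0)
trials = 20000
avg = sum(random_guess_score(rng) for _ in range(trials)) / trials
print(avg)  # averages close to 8 of 16 bits
```

Each guessed bit matches the true bit with probability 1/2, so the expected score is 16 × 1/2 = 8, exactly where Eve ended up.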

Further study of this benchmark achievement is still needed, as it’s not fully understood how the learned encryption strategy works. That’s because machine-learning systems, a relatively new branch under the umbrella of artificial intelligence, can be shown to work, but it is often difficult to understand how they arrive at a solution. This lack of insight, of course, makes it pretty hard to offer any sort of security guarantee for this approach to encryption.

Still, the idea that private messages could be sent from computer to computer with not only a unique encryption key each time but a unique encryption/decryption method certainly sounds promising in a day and age when privacy is of paramount concern to even the most general of users. But, as mentioned earlier, the process created by Alice and Bob lacked sophistication. 

So computers (and, on up the chain, humanity’s future robotic overlords) still have a long way to go before matching the level of sophistication found in human-made encryption today. 

To learn more, read the team’s paper, “Learning to Protect Communications with Adversarial Neural Cryptography.”

Via New Scientist
