Google is teaching artificial intelligences to encrypt messages between themselves
A team at Google has shown that modern artificial intelligence can devise its own form of encryption. The encryption is not complex yet, but the research suggests a starting point for schemes that could grow stronger over time as attackers try to crack them.
The work comes from Google Brain, the search company's deep-learning research unit. The group built a game with three entities, each powered by a deep neural network: Alice, Bob and Eve.
Alice was designed to send encrypted 16-bit messages of zeroes and ones to Bob, whose task was to decrypt the messages once he received them. The two bots start with a shared key, which serves as the foundation of their encryption.
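The basic setup above can be sketched in a few lines. The Google Brain networks learn their own transformation of the bits; the XOR-with-shared-key cipher below is only a classical stand-in used to illustrate the same shape of problem (16-bit message, shared key, reversible transformation), not what the networks actually learned.

```python
import random

N_BITS = 16  # message length used in the experiment

def random_bits(n):
    """A random string of n zeroes and ones."""
    return [random.randint(0, 1) for _ in range(n)]

def xor(bits, key):
    """Flip each bit where the key has a 1 (classical one-time-pad-style cipher)."""
    return [b ^ k for b, k in zip(bits, key)]

key = random_bits(N_BITS)        # shared by Alice and Bob, hidden from Eve
plaintext = random_bits(N_BITS)

ciphertext = xor(plaintext, key) # Alice: transform the message using the key
decrypted = xor(ciphertext, key) # Bob: invert the transformation with the same key

assert decrypted == plaintext
```

Without the key, the ciphertext alone tells Eve nothing, which is the position the intercepting network is placed in.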
Eve plays the man in the middle, able to intercept the messages and attempt to decrypt them. To keep Eve from reading the messages, Alice learned to transform them in various ways, and Bob had to adapt his decryption to keep up. The researchers judged Eve by how closely she recovered the original message, and Alice by whether Eve's answer was any better than a random guess. Bob, for his part, had to meet a threshold for arriving at the right answer.
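The scoring rules can be written out as simple loss functions. This is a hedged reading of the setup described above, stated in plain bit errors rather than the continuous values a neural network would actually train on: Eve is scored on how many bits she gets wrong, and Alice is additionally penalized whenever Eve does better (or worse) than the random-guess baseline of 8 wrong bits out of 16.

```python
N_BITS = 16

def bit_errors(truth, guess):
    """Number of positions where the guess differs from the true message."""
    return sum(t != g for t, g in zip(truth, guess))

def eve_loss(plaintext, eve_guess):
    # Eve simply wants to reconstruct the plaintext exactly.
    return bit_errors(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    # Bob must reconstruct the message; Alice additionally wants Eve's
    # guess to be no better than random (8 of 16 bits wrong), so any
    # deviation from that baseline is penalized.
    baseline = N_BITS / 2
    eve_advantage = (baseline - eve_loss(plaintext, eve_guess)) ** 2
    return bit_errors(plaintext, bob_guess) + eve_advantage / baseline ** 2
```

Under this scoring, the best outcome for Alice and Bob is zero errors for Bob while Eve sits exactly at the coin-flip baseline.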
The three networks were designed as generative adversarial networks, meaning they were not taught anything about encryption and were never shown examples of encrypted and decrypted messages. The bots learned by trying to outsmart each other.
Over the first 7,000 messages, Alice and Bob started out simply and made no errors: the encryption Alice produced was easy for Bob to figure out, but that meant Eve was getting the answers easily too. Over the next 6,000 messages, however, Alice and Bob worked out a better encryption method between themselves, one Eve was unable to crack.
Bob managed to decrypt all the messages sent to him without any errors, while Eve got seven or eight of the 16 bits wrong. Since every bit is a zero or a one, that is the same result Eve would get by flipping a coin for each bit.
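That coin-flip comparison is easy to check with a quick simulation: an eavesdropper guessing each of 16 bits at random should get about half of them wrong, matching the seven-or-eight figure reported above.

```python
import random

N_BITS = 16
TRIALS = 10_000

def random_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

# Simulate Eve guessing every bit at random and count her wrong bits.
total_wrong = 0
for _ in range(TRIALS):
    message = random_bits(N_BITS)
    guess = random_bits(N_BITS)
    total_wrong += sum(m != g for m, g in zip(message, guess))

avg_wrong = total_wrong / TRIALS
print(round(avg_wrong, 1))  # close to 8: random guessing misses about half the bits
```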
This was a simple test of whether AIs could succeed at generating encryption between themselves, but it also raises questions about the future of security.