
Cracking the code: This group of U of T computer science researchers is decoding ciphers with AI

Codebreakers: Sheldon Huang, Ivan Zhang, Aidan Gomez, Muhammad Osama and Bryan Li, the FOR.ai research team (photo courtesy of FOR.ai).

To break the Enigma code during the Second World War, British computer scientist Alan Turing developed a mathematical model to unlock the cipher faster than any human.

Today, a group of University of Toronto undergraduate computer science students are decoding encrypted text using a neural network, a framework for machine learning algorithms inspired by the brain.

"We're at a stage [in our research] where we can pretty confidently say that the architecture works, and it's more general than anything that's been previously developed," says Aidan Gomez, a fourth-year student in the department of computer science. Their accuracy results are above 95 per cent, he adds.

Gomez and the rest of the research team (Sheldon Huang, Bryan Li, Muhammad Osama and Ivan Zhang) are 2018 fellows of AI Grant, a recently established non-profit that provides select projects nearly $50,000 in cloud computing resources from Google, among others. They also receive exclusive access to a global network of AI mentors, including Andrej Karpathy, a U of T alumnus who was formerly at OpenAI and is now director of AI at Tesla.

Roger Grosse, an assistant professor in the department of computer science, and Lukasz Kaiser, senior research scientist at Google Brain, help mentor the team.

As in natural language translation tasks, their project treats plain text (here, English) and cipher text as two different languages. The neural network reads both and learns the connections between the two without any paired examples showing how one translates to the other.
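As a rough illustration of that framing (a hedged sketch with made-up sentences and a simple shift cipher, not the team's actual data or architecture), the training material can be set up as two unpaired corpora: one of ordinary English text and one of enciphered text, with no alignment between them.

    import random

    # Sketch only: build two unpaired "monolingual" corpora, the way an
    # unsupervised translation model would see two languages. The sentences
    # and the shift value are invented for illustration.

    def caesar_encrypt(text, shift):
        """Shift each letter A-Z by a fixed amount (a Caesar/shift cipher)."""
        return "".join(chr((ord(c) - ord("A") + shift) % 26 + ord("A")) for c in text)

    english_corpus = ["ATTACKATDAWN", "MEETMEATNOON", "SENDMOREHELP"]               # plain-text side
    cipher_corpus = [caesar_encrypt(s, 7) for s in ["HOLDYOURFIRE", "RETREATNORTH"]]  # cipher side

    random.shuffle(english_corpus)
    random.shuffle(cipher_corpus)

    # A model trained on these two lists never sees an aligned
    # (plaintext, ciphertext) pair, so it must infer the mapping on its own.
    print(english_corpus)
    print(cipher_corpus)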

Gomez says the method is able to crack a much more complicated cipher called Vigenère, historically termed "the indecipherable cipher," where a hidden key is known only to the sender and recipient. The key determines an entirely different Caesar, or shift, cipher to be used at each position, meaning the neural network can no longer simply count letter frequencies and perform simple frequency analysis.
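For readers unfamiliar with the scheme, here is a minimal sketch of the Vigenère cipher itself (standard textbook code, not the team's, and assuming text restricted to the letters A to Z): because the key selects a different shift at each position, the same plaintext letter can map to several different ciphertext letters, which is exactly what defeats naive frequency counting.

    def vigenere_encrypt(plaintext, key):
        """Encrypt A-Z plaintext with the Vigenere cipher: the key letter at each
        position selects that position's Caesar shift."""
        out = []
        for i, ch in enumerate(plaintext):
            shift = ord(key[i % len(key)]) - ord("A")   # per-position shift taken from the key
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        return "".join(out)

    # Classic textbook example: the repeated plaintext letter "A" encrypts to
    # different ciphertext letters (L, O, E, N), so single-letter frequencies
    # no longer line up with English letter frequencies.
    print(vigenere_encrypt("ATTACKATDAWN", "LEMON"))  # -> LXFOPVEFRNHR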


"This is a much more complicated cipher to crack, and it's part of the goal of getting closer and closer to the complexity of unsupervised language translation itself," says Gomez.

Huang, who is also president and co-founder of the FOR.ai partner organization, the University of Toronto Machine Intelligence Student Team, or UTMIST, says their approach is fundamentally different from current approaches, which are supervised with human feedback or labelled data. The task is not unlike translating the alien language seen in the movie Arrival.

"They crack the language by making connections between two languages, word by word," says Huang.

"None of [the algorithm] is hard-coded or relying on a human's knowledge of language," says Gomez. "We came up with an architecture that can infer those mappings independently."

The group says cracking modern ciphers is impractical, and provably too difficult. With an end goal of unsupervised language translation (say, English to German, learned from two completely unrelated texts), their cipher methods could instead be used to unlock lost languages for which no native speakers remain.

"This project clearly demonstrates a neural network's capacity to build up a really strong model of language, and then apply that to drawing connections between two abstract languages," says Gomez.

The FOR.ai team will be looking to recruit members interested in participating in their machine learning research. But Zhang warns that it is gruelling, though intensely gratifying, work.

"Even at a very high-tech lab like [the department of computer science's machine learning] group, these things still take days [to perform]," says Zhang.

"A lot of hardware, running experiments ... and a lot of epiphanies."
