Watson! Are you my friend? – The rise of digital natural languages
Matteo Mazzanti is a consultant at Purple Scout; read his take on Artificial Intelligence. Does AI make us humans feel a bit stupid when it invents its own languages? Did the Terminator predict the future, and will AI end up treating humans as cattle?
In August 1990, Terry Bisson’s “Bears Discover Fire” was published in Isaac Asimov’s Science Fiction Magazine. Such a fun read. It won the Hugo Award the following year as the best short science fiction story. The tale is about evolution and the aging of a society. The premise is that bears have discovered fire and are holding campfires on highway medians. As the tale’s protagonist puts it: “They don’t hibernate anymore, they make a fire and keep it going all winter.” It’s clear they are evolving. Now, try to guess the humans’ reaction.
In July 2017, two bots had to be shut down by a team of engineers at FAIR (Facebook AI Research) because they were speaking to each other in a language that couldn’t be understood by humans. “There was no reward to sticking to English language,” says visiting Georgia Tech researcher Dhruv Batra.
We still have some degree of control over AIs, yet we are obsessed with losing it. Science fiction has allowed us to explore these themes in stories such as ‘The Terminator’ and publications such as ‘Amazing Stories’. Some science fiction stories have since come true, but which ones?
A predictable phenomenon has been observed concerning AI and language: given the freedom to play, agents prefer their own language. A paper published in December 2016, “Multi-Agent Cooperation and the Emergence of (Natural) Language”, showed that “the agents develop their own language interactively out of the need to communicate”.
Facebook’s agent code was publicly released as open-source software in June 2017, after Facebook had been optimizing and training its agents for at least two years. The bots, originally built for “end-to-end negotiation” in English, developed their own functional language to communicate with each other once English proved inefficient.
Bob: “I can can I I everything else.”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
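Some observers speculated that the shorthand is not random: the repetitions may encode quantities being negotiated. As a purely hypothetical illustration (this is not the actual FAIR agents’ protocol, just a sketch of the idea), a decoder that reads repeated tokens as counts could look like this:

```python
# Toy decoder for a hypothetical bot shorthand in which repeating
# a phrase encodes a quantity ("to me" x 3 -> "three for me").
# Invented for illustration; NOT the real FAIR agents' protocol.

def decode_repetitions(utterance: str, phrase: str) -> int:
    """Read the number of (non-overlapping) repetitions of `phrase`
    as the quantity the speaker is claiming."""
    return utterance.count(phrase)

alice = "Balls have zero to me to me to me to me to me to me to me to me to"
print(decode_repetitions(alice, "to me"))  # 8 repetitions -> "eight for me"?
```

Under this (speculative) reading, Alice’s babble would be a perfectly compact bid, just not one meant for human ears.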
Is this conversation a threat to humanity?
On August 14th, 2017, Elon Musk declared AI ‘vastly more risky than North Korea’. He backs the Future of Life Institute, a foundation that researches the dangers of AI. He is also co-founder of OpenAI, a consortium sponsored by Microsoft and Amazon among others, researching “the path to safe AI“. His declaration coincided with OpenAI’s bot beating a world pro at the strategic multiplayer game Dota 2. Musk pushes for public regulation of AI; he tweeted on August 12th: “Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.”
Some IT giants have decided to regulate themselves: in September 2016 the consortium Partnership on AI was “established to study and formulate best practices on AI technologies…” Musk gets straight to the point: he wants the killer robots and the war drones banned through the UN. 116 founders of AI and robotics companies from 26 countries have signed a petition to ban lethal autonomous weapons, otherwise known as “killer robots”. The petition coincides with the beginning of formal UN talks exploring such a ban. 123 member nations have agreed to the talks, which were triggered in part by the publication of a similar petition in 2015, but discussions have been delayed due to unpaid fees from member states.
Why does AI scare us?
Competition brings out the best or the worst in us. Whatever the venue, whether two teams clash on a playing field or two corporations vie for product dominance, we can understand the motives, failings and humanness of the participants. That is not the case with AI, where pure “competition” is the name of the game they play all the time, even while we rest. And if you are on the losing side, you might go extinct. Soon they will be really good at it. There are many ways of becoming extinct. One is to disappear completely in an enormous catastrophe; think of the dinosaurs. Another is active extermination, if something like the Terminator’s SkyNet came true. A third could be called “the time trap”: existing without evolving, the way our cows exist without evolving and will still be in a billion years. As the losers, we could be treated by AI like cattle, without really understanding what is going on. In the worst scenario we could even devolve into animals, assisted by our own desire to be technically “plugged in”, as if we lived in a Truman Show built by AIs.
As early as November 2013, in Stanford University labs, bots were talking to each other to maximize their performance while playing games, showing that artificial collaboration is a spontaneous, “emergent” phenomenon among AIs. The bots were given basic English with some basic grammar rules to communicate during certain games, and they started following Grice’s maxims of collaborative communication.
On March 15th, 2017, UC Berkeley researcher Igor Mordatch (an OpenAI member), together with Pieter Abbeel, published research in which AI agents developed their own language. A page on OpenAI’s website explains in a few words what the AIs do to communicate: they are able to “compress” a complex meaning into a single symbol “A”, thus speeding up communication. AI is probably also good at formalizing complex grammar rules; it certainly has impeccable logic. This might be the end of API programming by humans: once such an engine is plugged in, the internet of things and cloud computing could start interoperating automagically.
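The compression idea is easy to picture with a toy codebook. In the sketch below the mapping is hand-written purely for illustration; the point of the research is that agents learn such mappings on their own, out of the need to communicate.

```python
# Minimal sketch of "compressing" a complex meaning into one symbol.
# The codebook is invented for illustration; real agents learn such
# mappings rather than having them hand-written.

codebook = {
    "A": "I propose you take the book and I take both hats",
}
reverse = {meaning: symbol for symbol, meaning in codebook.items()}

message = "I propose you take the book and I take both hats"
compressed = reverse[message]    # a single symbol goes over the wire
restored = codebook[compressed]  # the receiver expands it again

assert restored == message
print(f"{len(message)} characters sent as 1 symbol: {compressed!r}")
```

Efficient for the agents, opaque to us: unless we hold the codebook, the conversation is a string of meaningless symbols.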
In November 2016, the Google Brain and Google Translate teams published an article on what had been going on in their labs since they joined forces in an effort to “improve people’s lives” through AI. As the researchers reported in the article: “…the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network.” They were even able to track the interlingua in a 3D graphical representation. How did this happen?
In September of the same year, a new system had been plugged into Google Translate: Google Neural Machine Translation (GNMT), an end-to-end learning framework able to learn from a large number of examples. It was a great success, with translations becoming substantially better. GNMT has extended its algorithms to cover 103 languages, and it has since developed a spooky new way of working, “zero-shot translation”: translating between language pairs it never saw examples of.
Nobody knows the details of an AI’s computation process; that is why we use AIs in the first place: the things we ask of them (calculations) are so tedious and complex that we invented machines to do the job. We provide them with formal logic rules, formalized concepts, symbolic worlds, and changes requiring computation, all in order to reach different, desired states of the symbolic world in which they are playing. We train them on complex tasks until they are really good at them. It is happening with language: once we wanted them to speak. Now they do speak, and we might feel less clever.
To keep up with the times, Elon Musk suggests getting a neural lace: a brain-computer interface (BCI) that should speed up our minds. A kind of “reverse training”. While waiting for Elon’s promise to come true, the most curious can start at home with their own AI on the most basic tasks, like traffic sign classification, just to build some confidence. If you happen to live in Malmö, you have the chance to join the AI enthusiasts who started a meetup group in the summer of 2017 with the aim of exploring the techniques that make AI so special. Come and make some (AI) friends.
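To show how small that first step can be, here is a toy “traffic sign” classifier: nearest-centroid on hand-made colour features. The data, classes and features are all invented for illustration; a real home project would train a small neural network on a public dataset such as GTSRB (the German Traffic Sign Recognition Benchmark).

```python
# Toy nearest-centroid "traffic sign" classifier.
# Features are invented (red, blue, yellow) colour fractions per image;
# a real project would learn features from a dataset such as GTSRB.
import math

# Hand-made training examples: label -> list of (red, blue, yellow)
training = {
    "stop":      [(0.9, 0.0, 0.1), (0.8, 0.1, 0.1)],
    "mandatory": [(0.1, 0.8, 0.1), (0.0, 0.9, 0.1)],
    "warning":   [(0.3, 0.0, 0.7), (0.2, 0.1, 0.7)],
}

def centroid(points):
    """Mean of a list of 3-dimensional feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(features):
    """Return the label whose centroid is closest to `features`."""
    return min(centroids, key=lambda lab: math.dist(features, centroids[lab]))

print(classify((0.85, 0.05, 0.1)))  # a mostly-red sign -> "stop"
```

Twenty lines, no GPU, and you have a (very naive) classifier to play with before moving on to real images.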
Read more about AI and digital natural languages here: