Artificial Intelligence Will Speak Its Own Language Soon

Written by mikecorp | Published 2017/03/17
Tech Story Tags: artificial-intelligence | machine-learning | language | technology | future


A scene from Arrival (2016).

Grounded language, recently revealed in research from OpenAI, is a new step toward artificial intelligence.

OpenAI's post describes a system that invents a language grounded in its perception of the world. In short, it points to the possibilities that research on artificial languages might open up. At first such a language will resemble the signal systems typical of animals; later it can evolve into something far more complex.

Strictly speaking, there is no such thing as an evolution of languages; what evolved is the ability to use language. That ability appeared roughly 75,000 years ago, and it started out extremely simple. What we call a language today is how that inner capacity gets turned into speech: as Chomsky has noted, the spoken form is secondary to the essential processes of thinking. There are around 6,000 different languages in the world. What we really want to understand is the underlying principle that lets us acquire any of those 6,000 languages, and to use it to create new ones.

Language is not necessarily spoken sound; it is first of all an inner process, closer to thinking itself.

In some sense, language is similar to vision.

We have written language and we have photographs. The ability to examine an object from several perspectives is analogous to asking questions about details or hidden facts, and an inner dialogue is analogous to imagining scenes. The most interesting part is that, at the lowest level, the two abilities are closer than ever: they are built from the same material on the same principles. Discovering a system that can handle both vision and language is the foundation for intelligence.

The ultimate goal is a system that perceives reality visually, builds abstractions from what it sees, and then uses language to manipulate those abstractions, connecting the two the way the human mind does. I wrote more about this translation process here:

What Makes Translation the Essence of Intelligence (chatbotslife.com)

Even though language and vision refer to the same abstractions in the mind, the source of all abstractions is reality itself. That is why we begin grasping the world through the simplest visual objects rather than through language; only later do language-described objects become as real to us as the things we look at. Likewise, a machine has no way to grasp human language without interacting with the physical world. That is why OpenAI's learning-to-communicate strategy is promising.

Another reason for this kind of research is that we cannot yet put robots into the physical world and let them learn the whole environment; it would simply take too long. A language cannot be acquired from static data alone: the only way is to be an active participant in an environment. And since there is no easy way to run invasive experiments on the human mind, computer simulations are the best candidate to become the linguistics tool of the 21st century.

The goal is to create an intelligent agent that understands us, and that is a hard problem; it has been researched since the 1960s. We still have no formal description of language, because language does not exist without context. The environment is that context.

Competition and Cooperation

We have already seen a system demonstrate impressive results in reinforcement learning experiments: DeepMind's Q-learning implementation playing Atari games. In short, there was an environment and an agent earning a score, and the agent successfully learned to play well.
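To make the underlying mechanism concrete, here is a minimal sketch of the tabular Q-learning update at the heart of that line of work, run on a tiny stand-in corridor environment rather than an Atari game. The environment, hyperparameters, and state space are my own illustrative choices, not DeepMind's:

```python
import random
from collections import defaultdict

# Toy stand-in environment: a corridor of N cells. The agent starts at cell 0
# and earns +1 when it reaches the last cell. Actions: 0 = left, 1 = right.
N = 6
ACTIONS = [0, 1]

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(N - 1, state + 1)
    done = next_state == N - 1
    return next_state, (1.0 if done else 0.0), done

# Tabular Q-learning: Q[state][action] estimates future discounted score.
Q = defaultdict(lambda: [0.0, 0.0])
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # The core Q-learning update.
        target = reward + (0 if done else gamma * max(Q[next_state]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

# The "right" action should come to dominate in every non-terminal cell.
print({s: Q[s] for s in range(N)})
```

DeepMind's contribution was to make exactly this update work when the table is replaced by a convolutional network reading raw pixels and the score comes from the game itself.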

Another breakthrough was AlphaGo. The key difference was that an opponent stood behind the game, and the environment had vastly more states. One of the brilliant solutions worth mentioning is that the agent trained by playing against copies of itself.
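AlphaGo's real pipeline (policy and value networks combined with Monte Carlo tree search) is far beyond a snippet, but the self-play idea itself can be sketched on a toy game. The code below is my own illustration, not DeepMind's method: a single value table is trained by letting one policy play the game of Nim against a copy of itself, evaluating every position from the perspective of the player to move.

```python
import random
from collections import defaultdict

# Toy game of Nim: a pile of stones, players alternately remove 1-3 stones,
# and whoever takes the last stone wins. One Q-table is shared by both
# "copies" of the agent; values are from the viewpoint of the player to move.
START, MAX_TAKE = 10, 3
Q = defaultdict(float)          # Q[(stones_left, take)] -> estimated value
alpha, epsilon = 0.1, 0.2

def legal(stones):
    return list(range(1, min(MAX_TAKE, stones) + 1))

def best(stones):
    return max(legal(stones), key=lambda a: Q[(stones, a)])

for game in range(20000):
    stones = START
    while stones > 0:
        take = random.choice(legal(stones)) if random.random() < epsilon else best(stones)
        nxt = stones - take
        if nxt == 0:
            target = 1.0                      # taking the last stone wins
        else:
            # The opponent moves next, so our value is the negation of
            # the best value available to them (negamax-style bootstrap).
            target = -max(Q[(nxt, a)] for a in legal(nxt))
        Q[(stones, take)] += alpha * (target - Q[(stones, take)])
        stones = nxt

# The greedy policy should recover the classic strategy: leave the opponent
# a multiple of 4 stones whenever possible.
print({s: best(s) for s in range(1, START + 1)})
```

The point is not Nim itself but that neither copy of the agent ever sees an external teacher: the only signal is winning or losing against itself.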

The next step is a system in which agents find ways to cooperate with each other so that both gain additional value. OpenAI's research shows how intelligent agents behave in an entirely different kind of environment: a cooperative world like ours.

Learning to communicate (openai.com): "In this post we'll outline new OpenAI research in which agents develop their own language."
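As a rough intuition for that kind of setup, here is a deliberately tiny referential game of my own, not OpenAI's code or environment: a speaker sees a target object and sends one arbitrary symbol, a listener sees only the symbol and must point at the right object, and both are rewarded only when the guess is correct. A shared meaning for the symbols has to emerge from the reward alone.

```python
import random
from collections import defaultdict

# A toy referential game: the speaker sees a target object and sends one
# symbol; the listener sees only the symbol and must guess the object.
# Neither side is told what symbols "mean"; a convention has to emerge
# from shared reward alone.
OBJECTS = ["block", "pillar", "slab"]
SYMBOLS = ["x", "y", "z"]            # arbitrary signals, no built-in meaning

speaker = defaultdict(lambda: 1.0)   # weight[(object, symbol)]
listener = defaultdict(lambda: 1.0)  # weight[(symbol, object)]

def sample(options, weights):
    return random.choices(options, weights=weights, k=1)[0]

for round_ in range(5000):
    target = random.choice(OBJECTS)
    # Speaker picks a symbol in proportion to its learned weights.
    symbol = sample(SYMBOLS, [speaker[(target, s)] for s in SYMBOLS])
    # Listener interprets the symbol the same way.
    guess = sample(OBJECTS, [listener[(symbol, o)] for o in OBJECTS])
    if guess == target:
        # Shared success reinforces both sides of the convention.
        speaker[(target, symbol)] += 1.0
        listener[(symbol, target)] += 1.0

# Inspect the emergent "dictionary" (usually, though not always, one symbol per object).
for o in OBJECTS:
    print(o, "->", max(SYMBOLS, key=lambda s: speaker[(o, s)]))
```

Run for a few thousand rounds, the two agents usually settle on a consistent one-to-one code, and which symbol ends up meaning what differs from run to run, exactly as you would expect of an invented language.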

The Black Box Problem

An inner language might be the next breakthrough for managing the complexity of ML systems. Today we spend a great deal of effort just clarifying what an ML system is doing and why. A language close to a human one is the coming interface for working with ML engines, and for multipurpose agents such a language is the best way to define an objective function.
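One way to picture language as that interface is a thin layer that turns a human request into an objective the agent can optimize. The sketch below is purely hypothetical: the command strings, the toy state, and the lookup table are mine and stand in for whatever grounded language understanding would do the resolving in a real system.

```python
# Purely hypothetical sketch: a human-readable command is resolved into an
# objective function that a generic agent could then optimize. Nothing here
# is a real framework API; the commands and toy state are invented for
# illustration only.
from typing import Callable, Dict

State = Dict[str, float]  # toy state, e.g. {"position": 3, "goal": 5, "energy": 0.8}

# Each phrase names an objective; the agent never sees the phrase itself,
# only the reward function it resolves to.
OBJECTIVES: Dict[str, Callable[[State], float]] = {
    "reach the goal": lambda s: 1.0 if s["position"] == s["goal"] else -0.01,
    "save energy": lambda s: s["energy"],
}

def compile_objective(command: str) -> Callable[[State], float]:
    """Resolve a human command into a reward function (here, by plain lookup)."""
    if command not in OBJECTIVES:
        raise ValueError(f"Don't know how to optimize for: {command!r}")
    return OBJECTIVES[command]

reward_fn = compile_objective("save energy")
print(reward_fn({"position": 0.0, "goal": 5.0, "energy": 0.8}))  # -> 0.8
```

In a real system the lookup table would be replaced by grounded language understanding, but the interface, a sentence in and an objective out, is the part the paragraph above is pointing at.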

Indeed, as AI systems become increasingly sophisticated and complex, it is hard to envision how we will collaborate with them without language — without being able to ask them, “Why?” More than this, the ability to communicate effortlessly with computers would make them infinitely more useful, and it would feel nothing short of magical. — Will Knight.

The quote comes from an article that points out several areas where language would provide significant advantages:

Creating machines that understand language is AI's next big challenge (technologyreview.com): "About halfway through a particularly tense game of Go held in Seoul, South Korea, between Lee Sedol, one of the best…"

The Language Itself

Despite their differences in basic structure and vocabulary, English and Chinese can be described in the same terms: nouns, verbs, particles, tenses, and so on. Both languages were created by thousands of communicating minds on top of a shared surrounding reality. The following article demonstrates this in detail:

The Surprisingly Simple Logic Behind Japanese Sentence Structure (linked article)

Imagine two people, an English speaker and a Chinese speaker, having a chat. Neither can send the other anything except their native language, and in that situation there is no way for them to learn each other's language. (This argument is quite close to the Chinese room argument.) Now imagine they actually meet. Learning each other's language soon becomes quite feasible. What has changed? They now share a surrounding reality and can connect the new language to it. Babies acquire language in the same way.

The Language Games

This article would not be complete without mentioning the language games introduced by Ludwig Wittgenstein. Consider the description from Wikipedia:

The language is meant to serve for communication between a builder A and an assistant B. A is building with building-stones: there are blocks, pillars, slabs and beams. B has to pass the stones, in the order in which A needs them. For this purpose they use a language consisting of the words “block”, “pillar” “slab”, “beam”. A calls them out; — B brings the stone which he has learnt to bring at such-and-such a call. Conceive this as a complete primitive language. (PI 2.)[3]

Later “this” and “there” are added (with functions analogous to the function these words have in natural language), and “a, b, c, d” as numerals. An example of its use: builder A says “d — slab — there” and points, and builder B counts four slabs, “a, b, c, d…” and moves them to the place pointed to by A. The builder’s language is an activity into which is woven something we would recognize as language, but in a simpler form. This language-game resembles the simple forms of language taught to children, and Wittgenstein asks that we conceive of it as “a complete primitive language” for a tribe of builders.
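As a tiny concrete rendering of the builder's game, here is a toy interpreter of my own for that complete primitive language: calls such as "d slab there" are grounded directly in actions on a small pile of stones, with the pointing gesture passed in alongside the words.

```python
# A toy interpreter for Wittgenstein's builder's language: the words
# "block", "pillar", "slab", "beam" name stones, "a".."d" act as numerals,
# and "there" stands for a location the builder points at.
NUMERALS = {"a": 1, "b": 2, "c": 3, "d": 4}
STONES = {"block", "pillar", "slab", "beam"}

def obey(call: str, pointed_at: str, pile: dict) -> list:
    """Carry out a builder's call such as 'd slab there'."""
    count, stone = 1, None
    for word in call.split():
        if word in NUMERALS:
            count = NUMERALS[word]
        elif word in STONES:
            stone = word
        # "there" carries no information beyond the accompanying gesture.
    if stone is None or pile.get(stone, 0) < count:
        raise ValueError(f"Cannot obey: {call!r}")
    pile[stone] -= count
    return [f"move {stone} to {pointed_at}"] * count

pile = {"block": 5, "pillar": 2, "slab": 6, "beam": 3}
print(obey("d slab there", "the north wall", pile))   # four slabs are moved
print(obey("pillar", "the doorway", pile))            # a single pillar
```

Learning would replace the hand-written NUMERALS and STONES dictionaries with mappings acquired from interaction and reward, which is essentially what the grounded-language experiments aim for.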

So OpenAI's research is a step toward creating an agent that adapts and integrates itself into cooperation with humans, and each such cooperation can be framed as a language game.

I also recommend an article by Eberhard Schoeneburg that clarifies the role of language games in AI:

Language Games - the Key to unlock AI? (linkedin.com): "Artificial Intelligence (AI) researchers and companies active in AI are still struggling heavily with…"

Conclusion

We have taken a step along the road of growing a smart system from seeds, where the seeds are preconditions and algorithms. The seeds are clear and comprehensible, while the final system is powerful and hard to understand. Combining several such seeds will lead to more powerful intelligent machines, and eventually to AGI. "Learning to communicate" is another seed on a list that already includes deep reinforcement learning, Q-learning, Monte Carlo planning, and so on.

We still don't know how to copy the valuable working principles of the brain, so we reinvent similar ones piece by piece through trial and error and simulation. Nor is there a tangible consciousness to point at, but we are on the road to building a new kind of framework with the ability to communicate.

Just imagine every ML expert system being able to talk and to explain its own reasoning. A grounded language is a discernible trend.

I would even say that AI is in a kind of stagnation today: we are still at the stage of growing the technology. Its real capabilities will only become apparent in the product phase, and that phase is coming.

Published by HackerNoon on 2017/03/17