An AlphaGo for Natural Language?

By Minh Le

In October 2015, AlphaGo played its first winning games against a professional Go player. After the Nature article was published and the $1 million bet against Lee Sedol was proposed, Go players around the world studied the matches extensively to speculate about the coming games. The consensus was a win for Lee Sedol. As Myungwan Kim, a 9-dan professional and the only human in the world to have a winning record against Lee Sedol, put it: the machine's play was too soft, it tended to follow patterns, and it didn't show an understanding of abstract concepts and delayed consequences. He went on to remark on "what a lucky guy Lee Sedol is" and that playing against AlphaGo would be "kind of easy money". What happened next is history.

Reinforcement learning

It should be noted that this is not the first time AlphaGo demonstrated such an impressive learning capability. While Fan Hui, the player who lost 0-5 to AlphaGo in October, was 34 and had played professionally since the age of 15, AlphaGo only started learning the game last year. The reason it improved so fast is a technique called reinforcement learning.

In reinforcement learning, a machine plays millions of games against itself (or an older version of itself). After each game, the result (win/lose) is treated as a reinforcement signal that it uses to tweak its parameters. The sequences of actions that led to victory are favoured and become more likely in the future. Combined with a technique called deep learning, which helps it recognize abstract patterns on the board much like human intuition, AlphaGo's learning is not too different from a human player's. Except it is a million times faster.
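The self-play loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not AlphaGo's actual algorithm: the "game" is a toy where the higher of three moves wins, the policy is a tiny softmax over move preferences, and the update is the classic REINFORCE rule (win/lose as the only training signal). All names and numbers here are illustrative assumptions.

```python
import math
import random

random.seed(0)

N_MOVES = 3  # toy game: move 2 beats move 1 beats move 0

def softmax(logits):
    """Turn raw preferences into a probability distribution over moves."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    """Draw a move index according to its probability."""
    r, cumulative = random.random(), 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

logits = [0.0] * N_MOVES      # current policy's parameters
frozen = list(logits)         # an older version of itself to play against
LEARNING_RATE = 0.1

for game in range(2000):
    probs = softmax(logits)
    my_move = sample(probs)
    opp_move = sample(softmax(frozen))
    # The game result (win/lose/draw) is the only reinforcement signal.
    reward = 1.0 if my_move > opp_move else (-1.0 if my_move < opp_move else 0.0)
    # REINFORCE update: actions that led to a win become more likely,
    # actions that led to a loss become less likely.
    for a in range(N_MOVES):
        grad = (1.0 if a == my_move else 0.0) - probs[a]
        logits[a] += LEARNING_RATE * reward * grad
    # Periodically refresh the frozen opponent with the improved policy.
    if game % 200 == 0:
        frozen = list(logits)

best_move = max(range(N_MOVES), key=lambda a: softmax(logits)[a])
```

After a few thousand self-play games the policy concentrates on the dominating move, even though no one ever told it which move was good; the win/lose signal alone did the teaching. The same principle, scaled up with deep neural networks and a vastly harder game, is what let AlphaGo improve so quickly.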

It shouldn't come as a surprise that machines behave similarly to humans in certain aspects. Deep learning for image recognition borrows the architecture of visual pathways in the brain, and reinforcement learning sprouted from research in animal cognition. Computer scientists extract useful design principles from decades of neuroscience research and put them into computers. Accompanied by rigorous mathematical analysis and vast computing power, machines can outperform humans in many tasks.

Natural language

What is the last line of defence of humans against machines? Natural language seems to be a natural answer. Language is one of the unique traits of humans, part of what makes us who we are. Language carries not only knowledge accumulated over thousands of years of civilization but also culture, values, and morals. Researchers have long hypothesized that natural language is AI-complete, i.e. it can only be solved by creating a machine with human-level general intelligence.

But no matter how great it is, natural language is the product of the complex but limited machinery in our brains, and there is no reason computers can't do it better. A child first encounters language in its early months, but language competence develops slowly through the years and only reaches maturity in the late teens or twenties. A machine could compress this process into a year or a few months using reinforcement learning, as AlphaGo has demonstrated. It could talk to itself as a way to improve its own knowledge (as we do when we work out a math proof) or talk to another version of itself to communicate ideas.

One day, we might talk to computers naturally or hear them talking to each other, not unlike in “Her”.

Minh Ngoc Lê is a young researcher in Natural Language Processing. He has devoted most of his research time to working towards an NLP system that can capture the nuances of human language and extract human intelligence from text.

Since 2014, Le has been part of the Spinoza project "Understanding language by machines", subproject ULM-4, "A quantum model of text understanding". He focuses on alternative architectures for NLP systems that would allow two-way interaction between modules and the integration of world knowledge into NLP. To train interdependent modules, he also works on reinforcement learning for NLP.
