
News

Professor Sungroh Yoon, “Rapid advances in AI language ability… the movie ‘HER’, where one falls in love with an artificial intelligence, could become a reality within a few years” (ChosunBiz, 2019.02.10)

March 4, 2019

With the advent of deep learning in 2012, research in artificial intelligence (AI) blossomed. This year, AI is predicted to reach a turning point centered on natural language (the language used in everyday life). With rapid advances in technologies including AI text comprehension, context understanding, and voice modulation, anticipation is building for the development of unprecedented conversational AI.

BERT (Bidirectional Encoder Representations from Transformers), an AI language model announced in a paper by Google last October, is at the head of this change. BERT shocked academia worldwide by performing better than humans on GLUE (General Language Understanding Evaluation) and SQuAD (Stanford Question Answering Dataset), the most recognized evaluation benchmarks in natural language processing.

[SNU ECE Professor Sungroh Yoon]

In an interview with ChosunBiz on the 7th, SNU ECE Professor Sungroh Yoon said, “This year, conversational AI is the most noteworthy trend in AI technology,” and pointed to Google’s BERT as an example: “Google has given birth to a monstrous creation (artificial intelligence) through a significantly improved deep learning architecture, hardware acquired through investment on a colossal scale, and training with an enormous amount of data.”

Professor Sungroh Yoon is one of the most noted young engineers conducting AI research and development in Korea. In 2013, he received the ‘Outstanding Young Engineers Award’ from the IEEE (Institute of Electrical and Electronics Engineers) and later that year attracted attention for his research analyzing the international economic situation with AI technology based on bioinformatics. Currently, he is conducting research on AI language models with overseas IT companies such as Microsoft (MS) and IBM, and is also making progress in joint AI projects with domestic companies such as Hyundai Motors and SK.

◇ Having acquired sight with deep learning, AI will now acquire voice

What led the advances in artificial intelligence over the past several years was learning based on images. The method was to extract the necessary information after learning from millions or tens of millions of images. Such visual AI found practical use in medicine in particular: with a reliability of about 97%, AI in radiology was especially recognized as performing better than humans.

Progress in voice-related AI was sluggish in comparison. Images could be learned with comparatively simple methods and offered an abundance of samples, whereas the human language system was far more complicated. Moreover, the RNN (Recurrent Neural Network), the language model adopted by most IT companies including Amazon, Microsoft, Naver, and Kakao, had evident limitations when it came to growing into a conversational AI.

First of all, with an RNN, the order of input significantly affects how the AI learns the language. For instance, language information entered first fades away as more information is entered, so the AI cannot perfectly understand the entire sentence entered by the user. This phenomenon is also related to why conventional AI speakers are only capable of simple conversation and why the accuracy of their answers drops as questions become longer.
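This limitation can be illustrated with a toy example. The following minimal Python/NumPy sketch (not from the article; all weights and tokens are made up) runs a vanilla RNN, whose single fixed-size hidden state is overwritten at every step, so early inputs contribute little to the final state of a long sequence.

```python
import numpy as np

# Toy vanilla RNN (illustrative only): one fixed-size hidden state is
# updated token by token, so early tokens must survive many updates
# to still influence the final state.
np.random.seed(0)
hidden_size, vocab_size = 8, 16
W_h = 0.5 * np.random.randn(hidden_size, hidden_size)
W_x = 0.5 * np.random.randn(hidden_size, vocab_size)

def run_rnn(token_ids):
    h = np.zeros(hidden_size)
    for t in token_ids:
        x = np.eye(vocab_size)[t]        # one-hot encoding of the token
        h = np.tanh(W_h @ h + W_x @ x)   # everything is squeezed into h
    return h

# Two different prefixes, with and without 50 identical filler tokens after them.
gap_short = np.linalg.norm(run_rnn([3, 7]) - run_rnn([9, 2]))
gap_long = np.linalg.norm(run_rnn([3, 7] + [1] * 50) - run_rnn([9, 2] + [1] * 50))
print(gap_short, gap_long)   # gap_long is far smaller: the early words have "faded"
```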

Google chose a language model based on Attention instead. As its name suggests, Attention is a method that analyzes the intention of the user’s words by concentrating on the important words among the inputted information. It repeats the learning process while continuously updating which words are important and concentrating on the user’s intention and the surrounding context. ‘Transformer’ is the deep learning architecture that Google developed in 2017 based on Attention.
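As an illustration of the idea (not part of the article), the minimal Python/NumPy sketch below shows scaled dot-product attention, the core operation behind the Attention mechanism and the Transformer: every word scores every other word, and the output is a weighted mix that emphasizes the most relevant ones.

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: score every pair of positions,
    # softmax the scores into weights, then mix the value vectors.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy self-attention over 4 "words" with 3-dimensional representations.
np.random.seed(1)
X = np.random.randn(4, 3)
output, weights = attention(X, X, X)
print(weights.round(2))   # each row sums to 1: where each word "looks"
```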

[Google’s AI language model BERT scored 93 points on the natural language processing test SQuAD, higher than the human score of 91]

Professor Sungroh Yoon explained, “Google used the Transformer architecture in both directions to design BERT. The method was very effective for developing a conversational AI. This structure will replace the conventional RNN and make it possible to develop language abilities more efficiently.”

He continued, “When you look at BERT’s performance in understanding natural language, it equals or outperforms humans on some measures. For example, after feeding it a complex novel like ‘The Lord of the Rings’ and then asking a question such as ‘Where is the ring now?’, it will be able to answer. At this pace of progress, we can anticipate the development of an advanced conversational AI like the one in the movie ‘HER’.”
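As a hedged illustration of this kind of extractive question answering (the article names no specific toolkit or model), the sketch below uses the Hugging Face transformers library and its default question-answering pipeline on a made-up two-sentence context.

```python
# Requires: pip install transformers torch
from transformers import pipeline

qa = pipeline("question-answering")   # downloads a default BERT-style QA model

context = (
    "Frodo carried the ring out of the Shire. After many trials, "
    "the ring was destroyed in the fires of Mount Doom."
)
result = qa(question="Where was the ring destroyed?", context=context)
print(result["answer"])   # expected: a span from the context, e.g. "Mount Doom"
```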

◇ AI reborn as a ‘god of studying’

Beyond BERT, there are also active attempts, centered in academia, to revolutionize the manner in which AI learning occurs. A typical example is automated machine learning (AutoML), in which the AI searches for the optimal algorithm for composing the deep learning network that will make up the AI.

In general, conventional deep learning research relies substantially on the ‘manual work’ of AI researchers. Professor Yoon explained, “When constructing a deep learning network, there is more for people to do than you might think. Even for people who are considered experts in AI, there is an abundance of work in which data has to be entered one by one, and multiple experiments are necessary to judge which learning method will be correct.”

It is extremely difficult for AI researchers to design a machine learning architecture appropriate for their research purpose and the characteristics of their real data. When one also considers the tuning required after the learning network is created, a considerable amount of time and effort is necessary. Even then, it is hard to guarantee that the designed architecture is the most effective one.

Against these difficulties, NAS (Neural Architecture Search) is an AutoML technology that is currently gathering attention. As the name hints, it is an algorithm in which the AI looks for its own learning architecture. This can be likened to a situation where the researcher tells the AI which subjects it will learn, the time available, and other important conditions, and the AI creates its most effective learning model and timetable on its own. This can dramatically reduce the cost and time required for AI research and development.
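To make the idea concrete, here is a minimal random-search sketch of NAS in Python; the search space, the candidates, and the placeholder scoring function are all invented for illustration, and a real NAS system would train and evaluate each candidate network (or a cheap proxy for it) instead.

```python
import random

# Invented toy search space: in practice this would describe layer types,
# connections, widths, etc. of a real neural network.
search_space = {
    "num_layers": [2, 4, 8],
    "hidden_size": [64, 128, 256],
    "activation": ["relu", "gelu", "tanh"],
}

def sample_architecture():
    # Pick one option per design choice at random.
    return {name: random.choice(options) for name, options in search_space.items()}

def evaluate(architecture):
    # Placeholder for "train the candidate and measure validation accuracy".
    return random.random()

best_arch, best_score = None, float("-inf")
for _ in range(20):                 # try 20 random candidates
    arch = sample_architecture()
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print("best architecture found:", best_arch, "score:", round(best_score, 3))
```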

Professor Yoon emphasized, “After deep learning blossomed in 2012, AI has spread to various fields, and fundamental development is also in progress. Advances in algorithms, the most important element in AI learning, have acted as a jet engine for AI. In particular, there are more and more cases where human-made algorithms are outperformed by AI-made algorithms, developed through the AI’s own improvements to conventional learning methods.”

Source: http://ee.snu.ac.kr/community/news?bm=v&bbsidx=48449
Translated by: Jee Hyun Lee, English Editor of Department of Electrical and Computer Engineering, jlee621@snu.ac.kr