Could Samsung's Bixby be getting super-smart real soon? Samsung wins top scores in two AI competitions

With smartphone hardware becoming quite powerful and reaching a bit of a plateau in recent years, manufacturers have been turning to all sorts of different features and solutions in order to keep evolving their products, make them more useful each consecutive year, and keep the industry alive. Software is a big deal, and AI is getting a ton of attention: every phone now ships with a virtual assistant, and every manufacturer is striving to make theirs smarter, faster, and more useful to the consumer.

Samsung's Bixby launched to a lukewarm reception with the Galaxy S8. Users were not pleased with the assistant's limited feature set, but the company remains committed to developing the assistant continuously, even pushing it on users with a dedicated hardware button on all of its top-tier smartphones.

Early in 2018, Samsung said that — now that Bixby is out in enough hands — the company would focus on evolving it into an exceptional AI assistant. And, with Samsung's ConZNet (Context Zoom-in Network) — a new algorithm developed by the Samsung AI Center — winning two top global machine reading competitions, it may just be keeping its word.

Microsoft's MS MARCO (MAchine Reading COmprehension) and the University of Washington's TriviaQA are machine reading comprehension competitions, which task AI algorithms with processing actual user queries taken from various Q&A samples, then providing an answer. Additionally, both contests have participants' AI write research-based documents, such as news articles or blog posts.

In MS MARCO, the AI is tasked with answering a random user query plucked from Microsoft's Bing search engine. The program is presented with ten web documents and is then asked to compose an answer that is relevant to the query but also reads as if it had been written by a human.

TriviaQA is a reading comprehension dataset with over 650,000 question-answer-evidence groups. It features complex, compositional questions with substantial lexical variability, and it requires cross-sentence reasoning to find answers. The AI must understand the question asked, present a short, to-the-point answer, and then cite the evidence (sources used) for said answer. TriviaQA has been tested with two baseline algorithms — a basic AI and a state-of-the-art neural network — which scored 23% and 40%, respectively, while human performance on the same test is around 80%. So, AI still has some growing to do.
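To give a sense of where those 23%, 40%, and 80% figures come from, here is a minimal sketch — not Samsung's or TriviaQA's actual code — of the "exact match" style of scoring commonly used by reading comprehension benchmarks: predicted and gold answers are normalized (lowercased, punctuation and articles stripped), then the fraction of matching answers is reported. The toy questions below are invented for illustration.

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match_score(predictions: dict, gold: dict) -> float:
    """Fraction of questions whose predicted answer equals the gold
    answer after normalization (the standard exact-match metric)."""
    hits = sum(
        normalize(predictions.get(qid, "")) == normalize(answer)
        for qid, answer in gold.items()
    )
    return hits / len(gold)

# Toy question IDs and answers standing in for TriviaQA's real groups.
gold = {"q1": "The Beatles", "q2": "Mount Everest"}
preds = {"q1": "the beatles", "q2": "K2"}
print(exact_match_score(preds, gold))  # 0.5 — one of two answers matches
```

A model scoring 40% under a metric like this gets four answers in ten exactly right after normalization, which is why the gap to human performance (~80%) is so significant.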

Samsung's ConZNet has ranked first in both competitions. It's worth noting, however, that it held first place in MS MARCO from the 14th of June to the 4th of July, when it was beaten by Baidu's V-Net NLP algorithm.

So, what does that mean for the end user? Jihie Kim, Head of Samsung Research's Language Understanding Lab, says that her department has already held open events where the use of AI was discussed with engineers from both the home appliance and smartphone departments. And, since Samsung is pretty big on developing its IoT ecosystem with Bixby at the heart of it, it's easy to imagine that was a big point of conversation.

Additionally, customer service departments were very interested in how ConZNet could be used to make chatbots more usable and helpful.

At this point, however, there's no telling how long it will be before these upgrades trickle down to consumer products.

sources: Samsung (1, 2); MS MARCO
