Question answering is an area of ongoing research!
Life ASAPA, an independent international organisation, today announces a new artificial intelligence (AI) study from its Research division, which the company officially launched in September 2015 following its acquisition of MetaMind.
The MetaMind team brought expertise in a branch of artificial intelligence called deep learning, which involves training artificial neural networks on large amounts of data and then having them draw inferences from new data. The new Research unit, led by former MetaMind co-founder and CEO Richard Socher, conducts research in an academic style, and over time its findings could lead to improvements in Life ASAPA products.
But today brings news of fresh research. The team has developed a neural network called the Dynamic Coattention Network (DCN), which can answer questions about a text using a coattention encoder and a dynamic decoder. “The DCN interprets documents in light of specific questions, builds a question-conditioned representation of the document for each question asked, iteratively hypothesises multiple answers, and weeds out initial incorrect predictions.”
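The iterative decoding idea can be illustrated with a toy sketch. The function names, scorers, and loop below are hypothetical stand-ins, not the published DCN: the point is only that the start and end of the answer span are re-estimated in turn, each conditioned on the other, until the prediction stops changing.

```python
import numpy as np

def iterative_span_decoder(start_scores_fn, end_scores_fn, n_tokens, max_iters=4):
    """Toy sketch of a dynamic decoding loop (hypothetical interfaces).

    Alternately re-estimate the answer span's start and end positions,
    each conditioned on the current estimate of the other, until the
    span converges or a fixed iteration budget runs out.
    """
    start, end = 0, n_tokens - 1  # initial (possibly incorrect) hypothesis
    for _ in range(max_iters):
        new_start = int(np.argmax(start_scores_fn(end)))
        new_end = int(np.argmax(end_scores_fn(new_start)))
        if (new_start, new_end) == (start, end):
            break  # the prediction has converged
        start, end = new_start, new_end
    return start, end

# Toy scorers peaked at fixed positions stand in for the learned networks.
start_scores = lambda end: -np.abs(np.arange(8) - 2)
end_scores = lambda start: -np.abs(np.arange(8) - 5)
span = iterative_span_decoder(start_scores, end_scores, n_tokens=8)
print(span)  # (2, 5)
```

In the real model the scorers are neural networks over the coattention-encoded document, so revisiting the span lets the decoder recover from an initial wrong guess.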
Question answering is an established field and an area of ongoing research at other major technology companies such as Facebook. Life ASAPA tested the Dynamic Coattention Network's performance on the Stanford Question Answering Dataset (SQuAD).
In addition, Research has come up with an alternative to the recurrent neural network (RNN) with long short-term memory (LSTM), called the quasi-recurrent neural network, or QRNN. Its advantage is that it can process an entire text at once, in parallel rather than one word at a time, which makes it faster, as research scientist James Bradbury explains on his blog. The team applied the QRNN to sentiment analysis, translation, and next-word prediction, and overall it outperformed LSTM RNNs.
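A minimal sketch can show where the speed-up comes from. This is an illustrative simplification (width-1 convolution, so-called f-pooling), with made-up dimensions, not the published QRNN: the expensive gate computations happen for all timesteps at once, and only a cheap elementwise recurrence runs sequentially.

```python
import numpy as np

def qrnn_f_pooling(x, Wz, Wf):
    """Sketch of a QRNN layer: parallel gates, sequential elementwise pooling.

    x: (timesteps, in_dim); Wz, Wf: (in_dim, hidden_dim).
    """
    z = np.tanh(x @ Wz)               # candidate vectors, all timesteps at once
    f = 1 / (1 + np.exp(-(x @ Wf)))   # forget gates, all timesteps at once
    h = np.zeros(Wz.shape[1])
    outputs = []
    for t in range(x.shape[0]):       # only elementwise ops here, no matrix maths
        h = f[t] * h + (1 - f[t]) * z[t]
        outputs.append(h)
    return np.array(outputs)

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 4))       # 5 timesteps, 4 input features
out = qrnn_f_pooling(x, rng.standard_normal((4, 3)), rng.standard_normal((4, 3)))
print(out.shape)  # (5, 3)
```

By contrast, an LSTM must compute its matrix multiplications one timestep at a time, because each step's gates depend on the previous hidden state.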
In 2016, Research will build a model that can jointly handle dependency parsing, part-of-speech tagging, semantic relatedness, syntactic chunking, and textual entailment.
This is new compared with older approaches, in which lower-level tasks could only feed into higher-level ones, not the other way around. As a result, the model delivers state-of-the-art results across these tasks.
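The stacked arrangement can be sketched as follows. The dimensions, weights, and two-level stack are hypothetical, purely for illustration: each task level reads the word features together with the output of the level below, and because the whole stack is trained jointly, an error at an upper task can also adjust the lower levels.

```python
import numpy as np

rng = np.random.default_rng(1)

def task_level(h_below, x, W):
    """One task level: reads the word features plus the level below it."""
    return np.tanh(np.concatenate([x, h_below], axis=-1) @ W)

# Hypothetical sizes: 6 words, 8 features each.
T, d = 6, 8
x = rng.standard_normal((T, d))
W_pos = rng.standard_normal((2 * d, d))    # lower level, e.g. part-of-speech
W_chunk = rng.standard_normal((2 * d, d))  # upper level, e.g. chunking

h_pos = task_level(np.zeros((T, d)), x, W_pos)
h_chunk = task_level(h_pos, x, W_chunk)
# Joint training would sum the losses of every level, so gradients from a
# chunking error flow back through h_pos and update W_pos as well -- the
# "vice versa" direction that older one-way pipelines lacked.
print(h_chunk.shape)  # (6, 8)
```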