SVM answer selection for open-domain question answering

Author(s): Jun Suzuki, Yutaka Sasaki, Eisaku Maeda
2020, Vol 34 (05), pp. 9169-9176
Author(s): Jian Wang, Junhao Liu, Wei Bi, Xiaojiang Liu, Kejing He, ...

Neural network models often struggle to incorporate commonsense knowledge into open-domain dialogue systems. In this paper, we propose a novel knowledge-aware dialogue generation model (called TransDG), which transfers question representation and knowledge matching abilities from the knowledge base question answering (KBQA) task to facilitate utterance understanding and factual knowledge selection for dialogue generation. In addition, we propose a response guiding attention and a multi-step decoding strategy to steer our model to focus on relevant features for response generation. Experiments on two benchmark datasets demonstrate that our model consistently outperforms compared methods in generating informative and fluent dialogues. Our code is available at https://github.com/siat-nlp/TransDG.
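The factual knowledge selection described above can be sketched as attention over candidate knowledge-base fact embeddings conditioned on the current decoder state. This is a minimal illustrative sketch of that general idea, not TransDG's actual architecture; the function names and the simple dot-product scoring are assumptions made for the example:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def knowledge_attention(decoder_state, fact_embs):
    # decoder_state: (dim,) current decoder hidden state
    # fact_embs: (n_facts, dim) embeddings of candidate KB facts
    scores = fact_embs @ decoder_state   # relevance of each fact to the state
    weights = softmax(scores)            # attention distribution over facts
    return weights @ fact_embs           # weighted knowledge context vector
```

The returned context vector would then be fused with the decoder state when generating the next response token, biasing generation toward the most relevant facts.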


2020
Author(s): Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel

Author(s): Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz

2021, Vol 9, pp. 929-944
Author(s): Omar Khattab, Christopher Potts, Matei Zaharia

Abstract: Systems for Open-Domain Question Answering (OpenQA) generally depend on a retriever for finding candidate passages in a large corpus and a reader for extracting answers from those passages. In much recent work, the retriever is a learned component that uses coarse-grained vector representations of questions and passages. We argue that this modeling choice is insufficiently expressive for dealing with the complexity of natural language questions. To address this, we define ColBERT-QA, which adapts the scalable neural retrieval model ColBERT to OpenQA. ColBERT creates fine-grained interactions between questions and passages. We propose an efficient weak supervision strategy that iteratively uses ColBERT to create its own training data. This greatly improves OpenQA retrieval on Natural Questions, SQuAD, and TriviaQA, and the resulting system attains state-of-the-art extractive OpenQA performance on all three datasets.
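The fine-grained interaction in ColBERT, unlike a single-vector retriever, scores each query token embedding against every passage token embedding and sums the per-query-token maxima (the "MaxSim" late interaction). A minimal NumPy sketch of that scoring rule; `colbert_score` is a name chosen here for illustration, and real ColBERT operates on contextualized BERT token embeddings rather than the toy vectors used below:

```python
import numpy as np

def colbert_score(Q, D):
    # Q: (n_query_tokens, dim) query token embeddings
    # D: (n_passage_tokens, dim) passage token embeddings
    sim = Q @ D.T                    # token-level similarity matrix
    return sim.max(axis=1).sum()     # MaxSim per query token, then sum
```

Because each passage's token embeddings can be precomputed and indexed offline, only the cheap max-and-sum over the similarity matrix happens at query time, which is what makes this fine-grained interaction scalable to large corpora.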

