Transfer Learning and Domain Adaptation for Named-Entity Recognition

Author(s): Raghul Prakash, Rahul Kumar Dubey

2021
Author(s): Xin Zhang, Guangwei Xu, Yueheng Sun, Meishan Zhang, Pengjun Xie

2021, Vol 9, pp. 1116-1131
Author(s): David Ifeoluwa Adelani, Jade Abbott, Graham Neubig, Daniel D’souza, Julia Kreutzer, ...

Abstract We take a step towards addressing the under-representation of the African continent in NLP research by bringing together different stakeholders to create the first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages. We detail the characteristics of these languages to help researchers and practitioners better understand the challenges they pose for NER tasks. We analyze our datasets and conduct an extensive empirical evaluation of state-of-the-art methods across both supervised and transfer learning settings. Finally, we release the data, code, and models to inspire future research on African NLP.
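NER evaluations such as the one above typically report entity-level (CoNLL-style) F1 rather than per-token accuracy. A minimal sketch of that metric, assuming BIO-tagged sequences; the tag sequences here are illustrative, and stray I- tags without a preceding B- are simply ignored for brevity.

```python
def extract_spans(tags):
    """Return the set of (start, end, label) entity spans in a BIO tag sequence."""
    spans, start, label = set(), None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel closes a trailing span
        if start is not None and (tag == "O" or tag.startswith("B-") or tag[2:] != label):
            spans.add((start, i, label))
            start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
    return spans

def entity_f1(gold, pred):
    """Entity-level F1: a predicted span counts only if boundaries and label match."""
    g, p = extract_spans(gold), extract_spans(pred)
    tp = len(g & p)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = ["B-PER", "I-PER", "O", "B-LOC"]
pred = ["B-PER", "I-PER", "O", "O"]
score = entity_f1(gold, pred)  # one of two gold entities found exactly
```

Exact-match span scoring is stricter than token accuracy: a span with correct label but shifted boundary scores zero, which matters for the long multi-word names common in low-resource languages.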


2019, Vol 22 (6), pp. 1291-1304
Author(s): DunLu Peng, YinRui Wang, Cong Liu, Zhang Chen

2021
Author(s): Jong-Kang Lee, Jue-Ni Huang, Kun-Ju Lin, Richard Tzong-Han Tsai

BACKGROUND Electronic records provide rich clinical information for biomedical text mining. However, a system developed for one hospital department may not generalize to other departments. Here, we use hospital medical records as a research data source and explore the heterogeneity across hospital departments. OBJECTIVE We use MIMIC-III hospital medical records as the research data source. We collaborate with medical experts to annotate the data, with 328 records included in the analyses. Disease named entity recognition (NER), which helps medical experts consolidate diagnoses, is undertaken as a case study. METHODS To compare the heterogeneity of medical records across departments, we collect text from multiple departments and employ similarity metrics. We apply transfer learning to NER on different departments’ records and test the correlation between performance and similarity. We use the TF-IDF cosine similarity of the named entities as our similarity metric, and evaluate three pretrained models on the disease NER task to validate the consistency of the results. RESULTS The disease NER dataset we release consists of 328 medical records from MIMIC-III, with 95,629 sentences and 8,884 disease mentions in total. The inter-annotator agreement (Cohen’s kappa) is 0.86. The similarity metrics confirm that medical records from different departments are heterogeneous, with similarities ranging from 0.1004 to 0.3541 relative to the Medical department. In the transfer learning task using the Medical department as the training set, the average F1 scores of the three pretrained models range from 0.847 to 0.863. F1 scores correlate with the similarity metric with a Spearman’s coefficient of 0.4285. CONCLUSIONS We propose a disease NER dataset based on medical records from MIMIC-III and demonstrate the effectiveness of transfer learning using BERT. Similarity metrics reveal noticeable heterogeneity between department records.
The deep learning-based transfer learning method generalizes well across departments and achieves decent NER performance, easing the concern that training material from one hospital department might compromise model performance when applied to another. However, model performance does not correlate strongly with inter-department similarity.
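The heterogeneity measure described above, TF-IDF cosine similarity over each department's named-entity mentions, can be sketched in a few lines. This is a minimal illustration, not the authors' code: the disease mentions below are hypothetical, and a smoothed IDF (sklearn-style) is assumed so that terms shared by both departments keep nonzero weight.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of entity-mention lists, one per department; returns TF-IDF dicts."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    # Smoothed IDF so terms appearing in every document are not zeroed out.
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}
    return [{t: (c / len(doc)) * idf[t] for t, c in Counter(doc).items()}
            for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse TF-IDF vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical disease mentions extracted from two departments' records.
medical = ["pneumonia", "sepsis", "copd", "pneumonia", "heart failure"]
surgical = ["appendicitis", "sepsis", "hernia", "pneumonia"]

med_vec, surg_vec = tfidf_vectors([medical, surgical])
similarity = cosine(med_vec, surg_vec)
```

A low cosine value between two departments' entity distributions flags a harder transfer setting, which is the correlation the study tests against NER F1.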


IEEE Access, 2020, Vol 8, pp. 37736-37745
Author(s): Mohammad Al-Smadi, Saad Al-Zboon, Yaser Jararweh, Patrick Juola

2021, Vol 58 (3), pp. 102537
Author(s): Debora Nozza, Pikakshi Manchanda, Elisabetta Fersini, Matteo Palmonari, Enza Messina

2021, Vol 11 (13), pp. 6007
Author(s): Muzamil Hussain Syed, Sun-Tae Chung

Entity-based information extraction is one of the main applications of Natural Language Processing (NLP). Recently, deep transfer learning using contextualized word embeddings from pre-trained language models has shown remarkable results on many NLP tasks, including named-entity recognition (NER). BERT (Bidirectional Encoder Representations from Transformers) has gained prominent attention among contextualized word-embedding models as a state-of-the-art pre-trained language model. However, it is quite expensive to train a BERT model from scratch for a new application domain, since doing so requires a huge dataset and enormous computing time. In this paper, we focus on menu entity extraction from online restaurant reviews and propose a simple but effective NER approach for a new domain where a large dataset is rarely available or difficult to prepare, such as the food-menu domain, based on domain adaptation of the word embeddings and fine-tuning of the popular Bi-LSTM+CRF NER network with extended feature vectors. The proposed approach (named ‘MenuNER’) consists of two steps: (1) domain adaptation for the target domain, i.e., further pre-training the off-the-shelf BERT language model (BERT-base) in a semi-supervised fashion on a domain-specific dataset; and (2) supervised fine-tuning of the Bi-LSTM+CRF network for the downstream task with extended feature vectors, obtained by concatenating the word embedding from the domain-adapted BERT model of the first step with a character embedding and POS-tag features. Experimental results on a handcrafted food-menu corpus built from a customer-review dataset show that our proposed approach for this domain-specific NER task, food-menu named-entity recognition, performs significantly better than one based on the baseline off-the-shelf BERT-base model. The proposed approach achieves a 92.5% F1 score on the YELP dataset for the MenuNER task.
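The extended feature vector in step (2) above is a per-token concatenation of three parts: the domain-adapted BERT word embedding, a character-level embedding, and a POS-tag feature. A minimal sketch of that concatenation, with random vectors standing in for the real BERT and character encoders; the dimensions, tag set, and tokens are illustrative assumptions, not the paper's configuration.

```python
import random

POS_TAGS = ["NOUN", "VERB", "ADJ", "OTHER"]  # illustrative tag set

def one_hot_pos(tag):
    """Encode a POS tag as a one-hot feature over the assumed tag set."""
    return [1.0 if t == tag else 0.0 for t in POS_TAGS]

def extended_features(tokens, pos_tags, word_dim=8, char_dim=4):
    """Per token: [BERT word emb | char emb | POS one-hot], concatenated."""
    rng = random.Random(0)  # stand-in for the real BERT / char encoders
    feats = []
    for tok, pos in zip(tokens, pos_tags):
        word_emb = [rng.uniform(-1, 1) for _ in range(word_dim)]
        char_emb = [rng.uniform(-1, 1) for _ in range(char_dim)]
        feats.append(word_emb + char_emb + one_hot_pos(pos))
    return feats

# Toy menu phrase: 2 tokens, each with an 8 + 4 + 4 = 16-dimensional feature.
feats = extended_features(["grilled", "salmon"], ["ADJ", "NOUN"])
```

In the full model this feature matrix would feed the Bi-LSTM+CRF layer; the sketch only shows how the three embedding sources are stacked per token.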

