Loss-Balanced Task Weighting to Reduce Negative Transfer in Multi-Task Learning

Author(s):  
Shengchao Liu ◽  
Yingyu Liang ◽  
Anthony Gitter

In settings with related prediction tasks, integrated multi-task learning models can often improve performance relative to independent single-task models. However, even when the average task performance improves, individual tasks may experience negative transfer, in which the multi-task model's predictions are worse than the single-task model's. We show the prevalence of negative transfer in a computational chemistry case study with 128 tasks and introduce a framework that provides a foundation for reducing negative transfer in multi-task models. Our Loss-Balanced Task Weighting approach dynamically updates task weights during model training to control the influence of individual tasks.
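The dynamic-weighting idea can be illustrated with a minimal sketch. This is not the paper's exact algorithm; it assumes each task's weight is derived from the ratio of its current loss to its initial loss, raised to a hypothetical balancing exponent `alpha`, so that tasks whose loss has already fallen sharply are down-weighted and cannot dominate the shared parameters.

```python
def loss_balanced_weights(current_losses, initial_losses, alpha=0.5):
    """Per-task weights from loss ratios (illustrative sketch).

    A task whose loss has dropped quickly has a small ratio and is
    down-weighted; a task still far from its initial loss keeps a
    weight near 1 and retains influence on the shared parameters.
    """
    return [(cur / init) ** alpha
            for cur, init in zip(current_losses, initial_losses)]

def combined_loss(current_losses, initial_losses, alpha=0.5):
    """Weighted sum of per-task losses used for the shared update."""
    weights = loss_balanced_weights(current_losses, initial_losses, alpha)
    return sum(w * l for w, l in zip(weights, current_losses))

# Task 0 has already dropped to 25% of its initial loss, so it is
# down-weighted relative to task 1, which has not improved yet.
weights = loss_balanced_weights([0.25, 1.0], [1.0, 1.0], alpha=0.5)
```

In a training loop, the weights would be recomputed each epoch (or batch) from the latest per-task losses before forming the combined loss, which is what makes the weighting dynamic.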

Circulation ◽  
2020 ◽  
Vol 142 (Suppl_4) ◽  
Author(s):  
ChienYu Chi ◽  
Yen-Pin Chen ◽  
Adrian Winkler ◽  
Kuan-Chun Fu ◽  
Fie Xu ◽  
...  

Introduction: Predicting rare catastrophic events is challenging due to the lack of targets. Here we employed a multi-task learning method and demonstrated that substantial gains in accuracy and generalizability were achieved by sharing representations between related tasks.

Methods: Starting from the Taiwan National Health Insurance Research Database, we selected adults (>20 years) who experienced in-hospital cardiac arrest but not out-of-hospital cardiac arrest during an 8-year period (2003-2010), and built a dataset using de-identified claims from Emergency Department (ED) visits and hospitalizations. The final dataset had 169,287 patients, randomly split into three sections: train 70%, validation 15%, and test 15%. Two outcomes, 30-day readmission and 30-day mortality, were chosen. We constructed the deep learning system in two steps. We first used a taxonomy mapping system, Text2Node, to generate a distributed representation for each concept. We then applied a multilevel hierarchical model based on the long short-term memory (LSTM) architecture. Multi-task models used gradient similarity to prioritize the desired task over auxiliary tasks. Single-task models were trained for each desired task. All models shared the same architecture and were trained with the same input data.

Results: Each model was optimized to maximize AUROC on the validation set, with the final metrics calculated on the held-out test set. Multi-task deep learning models outperformed single-task deep learning models on both tasks. While readmission had roughly 30% positive cases and showed minuscule improvements, the mortality task saw greater improvement between models. We hypothesize that this is a result of the data imbalance (mortality had roughly 5% positive cases); the auxiliary tasks help the model interpret the data and generalize better.

Conclusion: Multi-task deep learning models outperform single-task deep learning models in predicting 30-day readmission and mortality in in-hospital cardiac arrest patients.
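The gradient-similarity prioritization mentioned in the Methods can be sketched as follows. This is an illustrative simplification, not the authors' implementation: it assumes flattened gradient vectors for the main and auxiliary tasks are available each step, and it keeps an auxiliary gradient only when its cosine similarity with the main-task gradient is positive, i.e. when the auxiliary task pushes the shared parameters in a direction consistent with the desired task.

```python
import numpy as np

def merge_gradients(main_grad, aux_grads):
    """Combine the main-task gradient with auxiliary-task gradients,
    keeping an auxiliary gradient only when it points in a direction
    consistent with the main task (positive cosine similarity).

    This is a sketch of gradient-similarity gating; the gradients are
    assumed to be flattened NumPy vectors over the shared parameters.
    """
    total = main_grad.copy()
    main_norm = np.linalg.norm(main_grad)
    for g in aux_grads:
        cos = np.dot(main_grad, g) / (main_norm * np.linalg.norm(g) + 1e-12)
        if cos > 0:  # auxiliary task agrees with the main task this step
            total += g
    return total

# A helpful auxiliary gradient is kept; a conflicting one is dropped.
main = np.array([1.0, 0.0])
merged = merge_gradients(main, [np.array([1.0, 1.0]),    # cos > 0: kept
                                np.array([-1.0, 0.0])])  # cos < 0: dropped
```

The gating is recomputed every update, so an auxiliary task can contribute in some steps and be silenced in others, which is how the desired task stays prioritized throughout training.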


2020 ◽  
Vol 34 (05) ◽  
pp. 8936-8943
Author(s):  
Tianxiang Sun ◽  
Yunfan Shao ◽  
Xiaonan Li ◽  
Pengfei Liu ◽  
Hang Yan ◽  
...  

Most existing deep multi-task learning models are based on parameter sharing, such as hard sharing, hierarchical sharing, and soft sharing. Choosing a suitable sharing mechanism depends on the relations among the tasks, which is not easy, since it is difficult to understand the underlying shared factors among these tasks. In this paper, we propose a novel parameter sharing mechanism, named Sparse Sharing. Given multiple tasks, our approach automatically finds a sparse sharing structure. We start with an over-parameterized base network, from which each task extracts a subnetwork. The subnetworks of multiple tasks are partially overlapped and trained in parallel. We show that both hard sharing and hierarchical sharing can be formulated as particular instances of the sparse sharing framework. We conduct extensive experiments on three sequence labeling tasks. Compared with single-task models and three typical multi-task learning baselines, our proposed approach achieves consistent improvement while requiring fewer parameters.
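The subnetwork-extraction step can be sketched with a toy example. This is an assumption-laden illustration rather than the paper's procedure: it uses lottery-ticket-style magnitude pruning over a flattened parameter vector to produce a per-task binary mask, and the per-task subnetwork is simply the base parameters under that mask. Masks from different tasks then overlap wherever they both keep the same entries.

```python
import numpy as np

def extract_subnetwork_mask(params, keep_ratio):
    """Magnitude-based binary mask over the shared base network.

    Keeps the largest-magnitude fraction of parameters for one task
    (a lottery-ticket-style pruning criterion, assumed here for
    illustration); each task would derive its own mask.
    """
    k = max(1, int(len(params) * keep_ratio))
    threshold = np.sort(np.abs(params))[-k]  # k-th largest magnitude
    return (np.abs(params) >= threshold).astype(float)

# Two tasks extract subnetworks from one over-parameterized base network;
# entries kept by both masks are the parameters the tasks share.
base = np.array([0.9, -0.1, 0.5, 0.05, -0.8, 0.3])
mask_a = extract_subnetwork_mask(base, keep_ratio=0.5)
task_a_params = base * mask_a  # task A trains only its masked subnetwork
```

During multi-task training, each task's forward and backward pass would touch only its masked entries, so partially overlapping masks recover hard sharing (identical masks) and hierarchical sharing (nested masks) as special cases.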


2019 ◽  
Vol 62 (7) ◽  
pp. 2099-2117 ◽  
Author(s):  
Jason A. Whitfield ◽  
Zoe Kriegel ◽  
Adam M. Fullenkamp ◽  
Daryush D. Mehta

Purpose: Prior investigations suggest that simultaneous performance of more than 1 motor-oriented task may exacerbate speech motor deficits in individuals with Parkinson disease (PD). The purpose of the current investigation was to examine the extent to which performing a low-demand manual task affected the connected speech of individuals with and without PD.

Method: Individuals with PD and neurologically healthy controls performed speech tasks (reading and extemporaneous speech tasks) and an oscillatory manual task (a counterclockwise circle-drawing task) in isolation (single-task condition) and concurrently (dual-task condition).

Results: Relative to speech task performance, no changes in speech acoustics were observed for either group when the low-demand motor task was performed with the concurrent reading tasks. Speakers with PD exhibited a significant decrease in pause duration between the single-task (speech only) and dual-task conditions for the extemporaneous speech task, whereas control participants did not exhibit changes in any speech production variable between the single- and dual-task conditions.

Conclusions: Overall, there was little to no change in speech production when a low-demand oscillatory motor task was performed with concurrent reading. For the extemporaneous task, however, individuals with PD exhibited significant changes when the speech and manual tasks were performed concurrently, a pattern that was not observed for control speakers. Supplemental Material: https://doi.org/10.23641/asha.8637008

