scholarly journals
Teacher–student training and triplet loss to reduce the effect of drastic face occlusion
2021 ◽ Vol 33 (1)
Author(s): Mariana-Iuliana Georgescu ◽ Georgian-Emilian Duţǎ ◽ Radu Tudor Ionescu

Author(s): Rui Liu ◽ Berrak Sisman ◽ Jingdong Li ◽ Feilong Bao ◽ Guanglai Gao ◽ ...

2021 ◽ pp. 413-424
Author(s): Jing Liu ◽ Rupak Vignesh Swaminathan ◽ Sree Hari Krishnan Parthasarathi ◽ Chunchuan Lyu ◽ Athanasios Mouchtaris ◽ ...

Long Short-Term Sample Distillation
2020 ◽ Vol 34 (04) ◽ pp. 4345-4352
Author(s): Liang Jiang ◽ Zujie Wen ◽ Zhongping Liang ◽ Yafang Wang ◽ Gerard De Melo ◽ ...

In the past decade, there has been substantial progress in training increasingly deep neural networks. Recent advances within the teacher–student training paradigm have established that information about past training updates shows promise as a source of guidance during subsequent training steps. Based on this notion, in this paper, we propose Long Short-Term Sample Distillation, a novel training policy that simultaneously leverages multiple phases of the previous training process to guide the later training updates to a neural network, while efficiently proceeding in a single generation. With Long Short-Term Sample Distillation, the supervision signal for each sample is decomposed into two parts: a long-term signal and a short-term one. The long-term teacher draws on snapshots from several epochs ago in order to provide steadfast guidance and to guarantee teacher–student differences, while the short-term one yields more up-to-date cues with the goal of enabling higher-quality updates. Moreover, the teachers for each sample are unique, such that, overall, the model learns from a very diverse set of teachers. Comprehensive experimental results across a range of vision and NLP tasks demonstrate the effectiveness of this new training method.
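
The abstract describes the training policy only at a high level; the sketch below is a hypothetical PyTorch illustration of the idea, not the authors' implementation. It caches each sample's soft predictions in two per-sample buffers: a short-term cache refreshed after every update and a long-term cache refreshed only every few epochs, so its guidance stays several epochs old. The loss then mixes the ground-truth term with distillation terms toward both caches. The model, toy data, and hyper-parameters (alpha, beta, long_gap) are illustrative assumptions, not values from the paper.

import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Toy classification data; every sample carries an index so that its cached
# teacher signals can be looked up on later epochs.
num_samples, num_features, num_classes = 512, 20, 5
X = torch.randn(num_samples, num_features)
y = torch.randint(0, num_classes, (num_samples,))
ids = torch.arange(num_samples)
loader = DataLoader(TensorDataset(ids, X, y), batch_size=64, shuffle=True)

model = torch.nn.Sequential(
    torch.nn.Linear(num_features, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, num_classes),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Per-sample caches of soft predictions: the short-term cache is refreshed
# after every update, the long-term cache only every `long_gap` epochs, so
# each sample ends up with its own pair of teacher signals.
long_cache = torch.full((num_samples, num_classes), 1.0 / num_classes)
short_cache = torch.full((num_samples, num_classes), 1.0 / num_classes)
alpha, beta, long_gap, num_epochs = 0.3, 0.3, 3, 9   # illustrative values

for epoch in range(num_epochs):
    for batch_ids, xb, yb in loader:
        logits = model(xb)
        log_p = F.log_softmax(logits, dim=1)
        ce = F.cross_entropy(logits, yb)                 # ground-truth signal
        kl_long = F.kl_div(log_p, long_cache[batch_ids], reduction="batchmean")
        kl_short = F.kl_div(log_p, short_cache[batch_ids], reduction="batchmean")
        loss = (1 - alpha - beta) * ce + alpha * kl_long + beta * kl_short
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Short-term teacher: the model's own most recent prediction per sample.
        short_cache[batch_ids] = F.softmax(logits.detach(), dim=1)
    if (epoch + 1) % long_gap == 0:
        # Long-term teacher: promoted only occasionally, so it lags behind.
        long_cache.copy_(short_cache)
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")

In this reading, the caches start as uniform distributions, so the distillation terms are nearly uninformative during the first epochs; the intended effect comes later, when the lagging long-term cache and the constantly refreshed short-term cache give each sample two teachers that differ both from each other and from the current student.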

