parameter sharing
Recently Published Documents


TOTAL DOCUMENTS: 44 (five years: 30)
H-INDEX: 4 (five years: 3)

2022 · Author(s): Georgios Marios Skaltsis, Hyo-Sang Shin, Antonios Tsourdos

2021 · Author(s): Zhaolei Wang, Jun Zhang, Yue Li, Qinghai Gong, Wuyi Luo, ...

2021 · Vol 160 · pp. 107936 · Author(s): Yi Qin, Qunwang Yao, Yi Wang, Yongfang Mao

2021 · Vol 30 (05) · Author(s): Xiang Tian, Bolun Zheng, Shengyu Li, Chenggang Yan, Jiyong Zhang, ...

Author(s): Dongming Yang, Yuexian Zou, Can Zhang, Meng Cao, Jie Chen

Human-Object Interaction (HOI) detection aims to learn how humans interact with surrounding objects. The latest end-to-end HOI detectors lack relation reasoning and are therefore unable to learn the HOI-specific interactive semantics needed for prediction. In this paper, we therefore propose novel relation reasoning for HOI detection. We first present a progressive Relation-aware Frame, which introduces a new structure and parameter-sharing pattern for interaction inference. On top of this frame, an Interaction Intensifier Module and a Correlation Parsing Module are carefully designed, whereby: a) interactive semantics from humans can be exploited and passed to objects to intensify interactions, and b) interactive correlations among humans, objects, and interactions are integrated to promote predictions. Based on these modules, we construct an end-to-end trainable framework named Relation Reasoning Network (abbr. RR-Net). Extensive experiments show that our proposed RR-Net sets a new state of the art on both the V-COCO and HICO-DET benchmarks, improving over the baseline by about 5.5% and 9.8%, respectively, and validating that this first effort in exploring relation reasoning and integrating interactive semantics brings a clear improvement for end-to-end HOI detection.
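The RR-Net code itself is not part of this listing, so the following is only a minimal sketch of the kind of parameter-sharing pattern the abstract describes: one set of weights reused across the human and object streams, with human interactive semantics passed to the object side. It assumes PyTorch, and the names SharedRelationBlock, human_feat, and object_feat are hypothetical, not taken from the paper.

```python
# Hypothetical sketch (not the paper's code): a parameter-sharing pattern in
# which human and object streams reuse one set of weights for relation features.
import torch
import torch.nn as nn

class SharedRelationBlock(nn.Module):
    """One relation block whose weights are shared across the human and
    object streams, so both are embedded in a common interaction space."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, human_feat, object_feat):
        # The same self.proj parameters process both streams (shared weights).
        h = self.proj(human_feat)
        o = self.proj(object_feat)
        # A crude stand-in for "intensifying" objects with human semantics:
        # additive message passing from the human stream to the object stream.
        return h, o + h

block = SharedRelationBlock(dim=256)
h = torch.randn(4, 256)  # 4 human region features
o = torch.randn(4, 256)  # 4 paired object region features
h_out, o_out = block(h, o)
print(h_out.shape, o_out.shape)  # torch.Size([4, 256]) twice
```

Here both streams are embedded by the same parameters; the paper's modules additionally parse correlations among humans, objects, and interactions, which this sketch does not attempt.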


2021 · Vol 426 · pp. 227-234 · Author(s): Xi Chen, Song Zhang, Gehui Shen, Zhi-Hong Deng, Unil Yun

2020 · Vol 97 · pp. 106783 · Author(s): Xu Gou, Linbo Qing, Yi Wang, Mulin Xin, Xianmin Wang

2020 · Author(s): Yutong Dai, Hao Lu, Chunhua Shen

2020 · Author(s): Nicholas Menghi, Kemal Kacar, Will Penny

Abstract: This paper uses constructs from the field of multitask machine learning to define pairs of learning tasks that either shared or did not share a common subspace. Human subjects then learnt these tasks using a feedback-based approach. We found, as hypothesised, that subject performance was significantly higher on the second task if it shared the same subspace as the first, an advantage that played out most strongly at the beginning of the second task. Additionally, accuracy was positively correlated over subjects learning same-subspace tasks but was not correlated for those learning different-subspace tasks. These results, and other aspects of learning dynamics, were compared with the behaviour of a neural network model trained using sequential Bayesian inference. Human performance was found to be consistent with a Soft Parameter Sharing variant of this model that constrained representations to be similar among tasks, but only when this aided learning. We propose that the concept of shared subspaces provides a useful framework for the experimental study of human multitask and transfer learning.

Author summary: How does knowledge gained from previous experience affect learning of new tasks? This question of "transfer learning" has been addressed by teachers, psychologists, and more recently by researchers in the fields of neural networks and machine learning. Leveraging constructs from machine learning, we designed pairs of learning tasks that either shared or did not share a common subspace. We compared the dynamics of transfer learning in humans with those of a multitask neural network model, finding that human performance was consistent with a soft parameter sharing variant of the model. Learning was boosted in the early stages of the second task if the same subspace was shared between tasks. Additionally, accuracy between tasks was positively correlated, but only when they shared the same subspace. Our results highlight the role of subspaces, showing how a shared subspace can boost learning while an unshared one can be detrimental.
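The study's model is trained with sequential Bayesian inference, which is not reproduced here. As a minimal sketch of the soft parameter sharing idea the abstract describes, the code below gives each task its own weights and couples them with an L2 penalty, so representations are pulled together only insofar as that is compatible with fitting the data. It assumes PyTorch; TwoTaskNet and its names are invented for illustration.

```python
# Hypothetical sketch (not the paper's model): soft parameter sharing for two
# tasks, implemented as an L2 penalty pulling task-specific weights together.
import torch
import torch.nn as nn

class TwoTaskNet(nn.Module):
    """Each task gets its own layers; a soft-sharing penalty encourages
    (but does not force) the two sets of weights to be similar."""
    def __init__(self, in_dim: int = 10, hidden: int = 32):
        super().__init__()
        self.task_a = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))
        self.task_b = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))

    def sharing_penalty(self) -> torch.Tensor:
        # Sum of squared differences between corresponding parameters;
        # the two branches have identical shapes, so zip aligns them.
        return sum(((pa - pb) ** 2).sum()
                   for pa, pb in zip(self.task_a.parameters(),
                                     self.task_b.parameters()))

net = TwoTaskNet()
x, y_a, y_b = torch.randn(8, 10), torch.randn(8, 1), torch.randn(8, 1)
loss_fn, lam = nn.MSELoss(), 1e-3  # lam controls how "soft" the sharing is
loss = (loss_fn(net.task_a(x), y_a)
        + loss_fn(net.task_b(x), y_b)
        + lam * net.sharing_penalty())
loss.backward()
```

Setting lam to zero recovers fully independent tasks, while a very large lam approaches hard parameter sharing, where both tasks effectively use the same weights.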

