single demonstration
Recently Published Documents

TOTAL DOCUMENTS: 15 (FIVE YEARS: 5)
H-INDEX: 4 (FIVE YEARS: 0)

2021 ◽ Vol 18 (4(Suppl.)) ◽ pp. 1350
Author(s): Tho Nguyen Duc, Chanh Minh Tran, Phan Xuan Tan, Eiji Kamioka

Imitation learning is an effective method for training an autonomous agent to accomplish a task by imitating expert behavior from demonstrations. However, traditional imitation learning methods require a large number of expert demonstrations to learn a complex behavior. This disadvantage has limited the potential of imitation learning in complex tasks where expert demonstrations are scarce. To address this problem, we propose a Generative Adversarial Network-based model designed to learn optimal policies from only a single demonstration. The proposed model is evaluated on two simulated tasks and compared with other methods. The results show that our model is capable of completing the considered tasks despite the limited number of expert demonstrations, which clearly indicates its potential.
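
As a rough illustration of the adversarial setup described above, the sketch below trains a discriminator to separate state-action pairs from a single expert trajectory from those produced by the current policy, and uses the discriminator's output as a surrogate reward. This is a minimal sketch rather than the authors' model: the network sizes, the toy dynamics, and the REINFORCE-style policy update are illustrative assumptions.

```python
# Minimal GAN-style imitation learning from a single demonstration (illustrative sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 4, 2

# Discriminator: scores (state, action) pairs as expert-like (1) vs. agent-like (0).
disc = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
                     nn.Linear(64, 1), nn.Sigmoid())
# Stochastic policy: maps a state to action logits.
policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                       nn.Linear(64, ACTION_DIM))

disc_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
pol_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
bce = nn.BCELoss()

# The single expert demonstration: one trajectory of (state, action) pairs (placeholder data).
expert_states = torch.randn(50, STATE_DIM)
expert_actions = torch.randint(0, ACTION_DIM, (50,))
expert_sa = torch.cat([expert_states, F.one_hot(expert_actions, ACTION_DIM).float()], dim=1)

def rollout(n_steps=50):
    """Collect agent (state, action, log_prob) tuples from a toy random-walk environment."""
    states, actions, log_probs = [], [], []
    s = torch.zeros(STATE_DIM)
    for _ in range(n_steps):
        dist = torch.distributions.Categorical(logits=policy(s))
        a = dist.sample()
        states.append(s); actions.append(a); log_probs.append(dist.log_prob(a))
        s = s + 0.1 * torch.randn(STATE_DIM)  # placeholder dynamics
    return torch.stack(states), torch.stack(actions), torch.stack(log_probs)

for it in range(200):
    agent_states, agent_actions, log_probs = rollout()
    agent_sa = torch.cat([agent_states, F.one_hot(agent_actions, ACTION_DIM).float()], dim=1)

    # 1) Discriminator step: push expert pairs toward 1 and agent pairs toward 0.
    d_loss = (bce(disc(expert_sa), torch.ones(len(expert_sa), 1)) +
              bce(disc(agent_sa), torch.zeros(len(agent_sa), 1)))
    disc_opt.zero_grad(); d_loss.backward(); disc_opt.step()

    # 2) Policy step: REINFORCE with log D(s, a) as the surrogate reward.
    with torch.no_grad():
        reward = torch.log(disc(agent_sa) + 1e-8).squeeze(-1)
    p_loss = -(log_probs * reward).mean()
    pol_opt.zero_grad(); p_loss.backward(); pol_opt.step()
```

In practice, adversarial imitation methods usually pair the discriminator with a stronger policy-gradient update (e.g., TRPO or PPO) instead of plain REINFORCE; the one-trajectory expert buffer is what distinguishes the single-demonstration setting sketched here.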


2021 ◽ Vol 4
Author(s): Francisco S. Melo, Manuel Lopes

In this paper, we propose the first machine teaching algorithm for multiple inverse reinforcement learners. As our initial contribution, we formalize the problem of optimally teaching a sequential task to a heterogeneous class of learners. We then contribute a theoretical analysis of this problem, identifying conditions under which it is possible to conduct such teaching using the same demonstration for all learners. Our analysis shows that, contrary to other teaching problems, teaching a sequential task to a heterogeneous class of learners with a single demonstration may become impossible as the differences between individual agents increase. We then contribute two algorithms that address the main difficulties identified by our theoretical analysis. The first algorithm, which we dub SplitTeach, starts by teaching the class as a whole until all students have learned everything they can learn as a group; it then teaches each student individually, ensuring that all students perfectly acquire the target task. The second approach, which we dub JointTeach, selects a single demonstration to be provided to the whole class so that all students learn the target task as well as a single demonstration allows. SplitTeach ensures optimal teaching at the cost of a larger teaching effort, while JointTeach requires minimal effort, although the learners are not guaranteed to perfectly recover the target task. We conclude by illustrating our methods in several simulation domains. The simulation results agree with our theoretical findings, showcasing that whole-class teaching is indeed not always possible in the presence of heterogeneous students. They also illustrate the main properties of our proposed algorithms: in all domains, SplitTeach guarantees perfect teaching and, in terms of teaching effort, is always at least as good as individualized teaching (often better); JointTeach, on the other hand, attains minimal teaching effort in all domains, even if it sometimes compromises teaching performance.
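
As a rough illustration of the two strategies, the sketch below (not the authors' implementation) contrasts SplitTeach's two phases with JointTeach's single shared demonstration. The learner objects with an update method and the helpers best_joint_demo, best_demo_for, has_converged, and best_shared_demo are hypothetical stand-ins for demonstration selection and convergence checking.

```python
# Illustrative sketch of SplitTeach vs. JointTeach; all helpers are hypothetical stand-ins.
from typing import Callable, List, Optional, Sequence

Demo = Sequence  # a demonstration: a sequence of (state, action) pairs

def split_teach(learners: List, demos: List[Demo],
                best_joint_demo: Callable[..., Optional[Demo]],
                best_demo_for: Callable[..., Demo],
                has_converged: Callable[..., bool]) -> List[Demo]:
    """SplitTeach: teach the whole class first, then finish each learner individually."""
    shown: List[Demo] = []
    # Phase 1: group teaching until no single demonstration improves every learner
    # (best_joint_demo is assumed to return None in that case).
    while (demo := best_joint_demo(learners, demos)) is not None:
        for learner in learners:
            learner.update(demo)
        shown.append(demo)
    # Phase 2: individualized teaching so that every learner recovers the task exactly.
    for learner in learners:
        while not has_converged(learner):
            demo = best_demo_for(learner, demos)
            learner.update(demo)
            shown.append(demo)
    return shown

def joint_teach(learners: List, demos: List[Demo],
                best_shared_demo: Callable[..., Demo]) -> Demo:
    """JointTeach: one shared demonstration for the whole class (minimal effort,
    but learners are not guaranteed to recover the target task perfectly)."""
    demo = best_shared_demo(learners, demos)
    for learner in learners:
        learner.update(demo)
    return demo
```

The split mirrors the trade-off stated above: the second phase of split_teach buys exact recovery of the target task at the price of extra teaching effort, whereas joint_teach stops after a single shared demonstration.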


Author(s): Elias De Coninck, Tim Verbelen, Pieter Van Molle, Pieter Simoens, Bart Dhoedt (IDLab)

2017 ◽ Vol 37 (1) ◽ pp. 137-154
Author(s): Peter Englert, Marc Toussaint

We consider the scenario where a robot is shown a manipulation skill once and should then use only a few trials on its own to learn to reproduce, optimize, and generalize that same skill. A manipulation skill is generally a high-dimensional policy. To achieve the desired sample efficiency, we need to exploit the inherent structure of this problem. We propose to decompose the problem into analytically known objectives, such as motion smoothness, and black-box objectives, such as trial success or reward, which depend on interaction with the environment. The decomposition allows us to leverage and combine (i) constrained optimization methods to address the analytic objectives, (ii) constrained Bayesian optimization to explore the black-box objectives, and (iii) inverse optimal control methods to eventually extract a generalizable skill representation. The algorithm is evaluated on a synthetic benchmark experiment and compared with state-of-the-art learning methods. We also demonstrate its performance in real-robot experiments with a PR2.
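
The decomposition can be illustrated with a toy sketch: an analytically known smoothness cost is evaluated directly, while the black-box trial reward is modeled with a Gaussian process and the next trajectory parameters are chosen by an upper-confidence-bound rule. This is not the paper's algorithm; the trajectory parameterization, the stand-in reward function, and the acquisition rule are illustrative assumptions.

```python
# Toy split between an analytic smoothness objective and a GP-modeled black-box reward.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def smoothness_cost(waypoints: np.ndarray) -> float:
    """Analytic objective: sum of squared second differences along the trajectory."""
    return float(np.sum(np.diff(waypoints, n=2, axis=0) ** 2))

def trial_reward(waypoints: np.ndarray) -> float:
    """Black-box objective: stands in for the outcome of one trial on the robot."""
    target = np.linspace(0.0, 1.0, len(waypoints))[:, None]
    return float(-np.sum((waypoints - target) ** 2) + 0.05 * rng.standard_normal())

# Candidate trajectories, each parameterized by 5 one-dimensional waypoints.
candidates = [rng.normal(size=(5, 1)) for _ in range(50)]
tried, rewards = [], []
gp = GaussianProcessRegressor()

for trial in range(15):
    if len(tried) >= 2:
        gp.fit(np.array([c.ravel() for c in tried]), np.array(rewards))
        Xc = np.array([c.ravel() for c in candidates])
        mu, sigma = gp.predict(Xc, return_std=True)
        # Upper-confidence-bound acquisition on the black-box term,
        # penalized by the analytically known smoothness cost.
        scores = mu + sigma - np.array([smoothness_cost(c) for c in candidates])
        best = candidates[int(np.argmax(scores))]
    else:
        best = candidates[rng.integers(len(candidates))]
    tried.append(best)
    rewards.append(trial_reward(best))  # one (costly) trial on the "robot"

print("best trial reward observed:", max(rewards))
```

In the paper itself the black-box term is handled with constrained Bayesian optimization and the resulting solutions feed an inverse optimal control step that extracts a generalizable skill representation; the sketch above only mirrors the analytic/black-box split in the simplest possible form.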


Author(s): Inigo Iturrate, Esben Hallundbaek Ostergaard, Martin Rytter, Thiusius Rajeeth Savarimuthu

Author(s): Anahita Mohseni-Kabir, Charles Rich, Sonia Chernova, Candace L. Sidner, Daniel Miller
