Rodriguez Ramos, Alejandro (ORCID: https://orcid.org/0000-0002-3257-4602); Álvarez Fernández, Adrián; Bavle, Hriday; Campoy Cervera, Pascual (ORCID: https://orcid.org/0000-0002-9894-2009); and How, Jonathan P. (2019). "Vision-Based Multirotor Following Using Synthetic Learning Techniques." Sensors, 19(21). ISSN 1424-8220. https://doi.org/10.3390/s19214794
| Field | Value |
|---|---|
| Title | Vision-Based Multirotor Following Using Synthetic Learning Techniques |
| Item Type | Article |
| Journal Title | Sensors |
| Date | November 2019 |
| ISSN | 1424-8220 |
| Volume | 19 |
| Freetext Keywords | multirotor; UAV; following; synthetic learning; reinforcement learning; deep learning |
| Faculty | E.T.S.I. Industriales (UPM) |
| Department | Automática, Ingeniería Eléctrica y Electrónica e Informática Industrial |
| UPM's Research Group | Computer Vision CVG |
| Creative Commons License | Attribution - NonCommercial - NoDerivatives |
Deep- and reinforcement-learning techniques increasingly require large sets of real data to achieve stable convergence and generalization in image-recognition, object-detection, and motion-control tasks. The research community still lacks robust approaches that compensate for the scarcity of extensive real-world data through realistic synthetic data and domain-adaptation techniques. In this work, synthetic-learning strategies are applied to the vision-based autonomous following of a noncooperative multirotor. The complete maneuver is learned from synthetic images and high-dimensional, low-level continuous robot states, using deep learning for object detection and reinforcement learning for motion control. A novel motion-control strategy for object following is introduced in which the camera-gimbal movement is coupled with the multirotor motion during the following maneuver. The results confirm that the proposed framework can deploy a vision-based task in real flight using synthetic data. It was extensively validated in both simulated and real-flight scenarios, yielding adequate results (following a multirotor at up to 1.3 m/s in simulation and 0.3 m/s in real flights).
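To make the gimbal-coupling idea in the abstract concrete, the sketch below shows one plausible way a camera-gimbal tracker could be coupled with the multirotor's body motion. This is a minimal illustrative sketch, not the paper's method: the paper learns the motion policy with reinforcement learning and detects the target with a deep network trained on synthetic images, whereas this sketch uses hand-tuned proportional gains, and all class names, parameters, and the size-as-distance proxy are assumptions introduced here for illustration.

```python
"""Hypothetical sketch of a gimbal-coupled following controller.

Assumptions (not from the paper): proportional gains in place of the
learned RL policy, a `Detection` structure standing in for the deep
object detector's output, and the target's apparent size used as a
rough proxy for following distance.
"""
from dataclasses import dataclass


@dataclass
class Detection:
    # Normalized target position in the camera image, in [-1, 1];
    # (0, 0) means the target is centered in the frame.
    u: float
    v: float
    # Apparent size of the target bounding box, used here as a
    # rough proxy for distance to the target (assumption).
    size: float


class GimbalCoupledFollower:
    """Couples gimbal pointing with multirotor velocity commands."""

    def __init__(self, k_gimbal: float = 0.8, k_vel: float = 0.5,
                 target_size: float = 0.2):
        self.k_gimbal = k_gimbal        # gimbal tracking gain (assumed)
        self.k_vel = k_vel              # body motion gain (assumed)
        self.target_size = target_size  # desired apparent size (assumed)
        self.pan = 0.0                  # current gimbal pan angle [rad]
        self.tilt = 0.0                 # current gimbal tilt angle [rad]

    def step(self, det: Detection, dt: float) -> tuple[float, float]:
        # 1) Rotate the gimbal to keep the detected target centered
        #    in the image.
        self.pan += self.k_gimbal * det.u * dt
        self.tilt += self.k_gimbal * det.v * dt

        # 2) Couple body motion to the gimbal: yaw the multirotor
        #    toward the gimbal's pan angle, and move forward or back
        #    until the target's apparent size matches the desired
        #    following distance.
        yaw_rate = self.k_vel * self.pan
        forward_vel = self.k_vel * (self.target_size - det.size)
        return forward_vel, yaw_rate
```

The design point this mirrors is that the gimbal, not the body, absorbs fast target motion in the image, so the body controller only has to null the slowly varying gimbal angles; in the paper that body-level policy is learned with reinforcement learning rather than hand-tuned as above.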
| Field | Value |
|---|---|
| Item ID | 64118 |
| DC Identifier | https://oa.upm.es/64118/ |
| OAI Identifier | oai:oa.upm.es:64118 |
| DOI | 10.3390/s19214794 |
| Official URL | https://www.mdpi.com/1424-8220/19/21/4794 |
| Deposited by | Memoria Investigacion |
| Deposited on | 28 Sep 2020 16:33 |
| Last Modified | 28 Sep 2020 16:33 |