Soccer-Assisted Training Robot Based on Image Recognition Omnidirectional Movement

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Bin Tan

With the continuous emergence and innovation of computer technology, mobile robots have become a hot topic in the field of artificial intelligence and an important research area for a growing number of scholars. The core requirement for a mobile robot is real-time perception of the surrounding environment and self-localization, with navigation based on this information; this capability is the key to autonomous movement and has strategic research significance. In particular, the target recognition ability of a soccer robot's vision system is the basis of robot path planning, motion control, and collaborative task completion, and the main recognition task in the vision system falls to the omnidirectional vision system. How to improve the target recognition accuracy and the light-adaptation ability of the robot's omnidirectional vision system is therefore the key issue of this paper. We completed the system construction and program debugging of the omnidirectional mobile robot platform, and tested its omnidirectional movement, its localization and map-building capabilities in corridor and indoor environments, its global navigation function in the indoor environment, and its local obstacle avoidance function. Making fuller use of the robot's local visual information to obtain more usable data, so that the robot's "eyes" can be greatly improved through image recognition technology and the robot can acquire more accurate environmental information on its own, has long been a shared goal of scholars at home and abroad. The results show that the significance level for the experimental group's shooting and dribbling test scores before versus after training is 0.004, which is less than 0.05, supporting the effectiveness of soccer-robot-assisted training.
On the one hand, we tested the positioning and navigation functions of the omnidirectional mobile robot, and on the other hand, we verified the feasibility of positioning and navigation algorithms and multisensor fusion algorithms.

SIMULATION ◽  
2019 ◽  
Vol 96 (2) ◽  
pp. 169-183
Author(s):  
Saumya R Sahoo ◽  
Shital S Chiddarwar

Omnidirectional robots offer better maneuverability and a greater degree of freedom than conventional wheeled mobile robots. However, the design of their control system remains a challenge. In this study, a real-time simulation system is used to design and develop a hardware-in-the-loop (HIL) simulation platform for an omnidirectional mobile robot using bond graphs and a flatness-based controller. The control input from the simulation model is transferred to the robot hardware through an Arduino microcontroller input board. For feedback to the simulation model, a Kinect-based vision system is used. The developed controller, the Kinect-based vision system, and the HIL configuration are validated in the HIL simulation-based environment. The results confirm that the proposed HIL system can be an efficient tool for verifying the performance of the hardware and simulation designs of flatness-based control systems for omnidirectional mobile robots.
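As a rough illustration of the flatness idea in the abstract above: for a fully actuated omnidirectional robot the pose (x, y, theta) is itself a set of flat outputs, so trajectory tracking reduces to feedforward plus proportional feedback on each output. The sketch below is a generic minimal version, not the authors' bond-graph or HIL implementation; the gain and initial pose are illustrative.

```python
import numpy as np

def flat_tracking_control(state, ref, ref_dot, k=2.0):
    """Velocity command for a fully actuated omnidirectional robot.

    The pose is flat, so the world-frame twist is just feedforward
    plus proportional feedback on the flat outputs.
    """
    world_cmd = ref_dot + k * (ref - state)
    # rotate the planar command into the body frame, where wheels act
    c, s = np.cos(state[2]), np.sin(state[2])
    R_T = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    return R_T @ world_cmd

# crude closed-loop simulation: drive the pose (x, y, theta) to the origin
state = np.array([0.5, -0.3, 0.2])
ref, ref_dot = np.zeros(3), np.zeros(3)
dt = 0.01
for _ in range(1000):
    body_cmd = flat_tracking_control(state, ref, ref_dot)
    c, s = np.cos(state[2]), np.sin(state[2])
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    state = state + dt * (R @ body_cmd)  # integrate the world-frame twist
```

Because the robot is fully actuated, no dynamic extension is needed; each pose component converges independently at rate k.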


2021 ◽  
Vol 18 (6) ◽  
pp. 172988142110593
Author(s):  
Ivan Kholodilin ◽  
Yuan Li ◽  
Qinglin Wang ◽  
Paul David Bourke

Recent advancements in deep learning require large amounts of annotated training data covering a wide range of environmental conditions. Developing and testing algorithms for the navigation of mobile robots can therefore be expensive and time-consuming. Motivated by these problems, this article presents a photorealistic simulator for the computer vision community working with omnidirectional vision systems. Built using Unity, the simulator integrates sensors, mobile robots, and elements of the indoor environment, and allows one to generate synthetic photorealistic data sets with automatic ground-truth annotations. With the aid of the proposed simulator, two practical applications are studied, namely extrinsic calibration of the vision system and three-dimensional reconstruction of the indoor environment. The proposed calibration and reconstruction techniques are simple, robust, and accurate, and are evaluated experimentally with data generated by the simulator. The proposed simulator and supporting materials are available online: http://www.ilabit.org .
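One benefit of simulator-generated ground truth is that an estimated extrinsic can be scored directly against the true camera pose. The snippet below shows a generic error metric for that comparison (geodesic rotation angle plus translation distance); it is a standard evaluation device, not the calibration method proposed in the article, and the 5-degree/1 cm test values are made up.

```python
import numpy as np

def extrinsic_error(R_est, t_est, R_gt, t_gt):
    """Compare an estimated extrinsic (R, t) against ground truth."""
    # rotation error: geodesic angle of the relative rotation, in degrees
    dR = R_est.T @ R_gt
    ang = np.degrees(np.arccos(np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)))
    # translation error: Euclidean distance between the offsets
    return ang, float(np.linalg.norm(t_est - t_gt))

# toy check: an estimate off by a 5-degree yaw and 1 cm of translation
a = np.deg2rad(5.0)
R_est = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
ang, dist = extrinsic_error(R_est, np.array([0.01, 0.0, 0.0]),
                            np.eye(3), np.zeros(3))
```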


2011 ◽  
Vol 5 (4) ◽  
pp. 569-574
Author(s):  
Atsushi Ozato ◽  
Noriaki Maru

This article proposes a Linear Visual Servoing (LVS)-based method of controlling the position and attitude of omnidirectional mobile robots. Two markers are used to express the target position and attitude in binocular visual space coordinates, based on which new binocular visual space information that includes position and attitude angle information is defined. The binocular visual space information and the motion space of an omnidirectional mobile robot are linearly approximated, and, using the approximation matrix and the difference in binocular visual space information between a target marker and a robot marker, the robot's translational and rotational velocities are generated. Since these are generated based only on disparity information in the image, as in existing LVS, no camera angle is required. Thus, the method is robust against calibration errors in camera angles, as is existing LVS. The effectiveness of the proposed method is confirmed by simulation.
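Schematically, the control law described above maps the difference in binocular visual-space features between the target marker and the robot marker through the identified linear approximation. In the sketch below, the matrix values and gain are invented for illustration; in practice the approximation matrix comes from linearizing the actual camera-robot pair, and the feature vector would be built from image disparities.

```python
import numpy as np

# hypothetical approximation matrix mapping binocular visual-space
# feature errors to (vx, vy, omega); identified offline in practice
J = np.array([[0.8, 0.0, 0.1],
              [0.0, 0.9, 0.0],
              [0.0, 0.1, 1.2]])

def lvs_command(s_target, s_robot, gain=1.5):
    """Velocity command proportional to the visual-space feature error.

    No camera angles appear anywhere: only disparity-based features
    and the precomputed linear approximation are used.
    """
    return gain * (J @ (s_target - s_robot))

cmd = lvs_command(np.array([10.0, 4.0, 0.5]),
                  np.array([ 8.0, 3.0, 0.2]))
```

The robustness claim follows from this structure: errors in camera mounting angles never enter the loop, only the (approximately linear) relation between image features and robot motion.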


2004 ◽  
Vol 16 (1) ◽  
pp. 80-89 ◽  
Author(s):  
Akihiro Matsumoto ◽  
Shoji Tsukuda ◽  
Gosuke Yoshita

We used an omnidirectional vision system to navigate an omnidirectional mobile robot. We examined teaching algorithms in which the robot is taught navigation by being shown a few images, and verified their feasibility through experiments. To improve positioning accuracy, we designed and tested a positional error compensation algorithm suited to omnidirectional images.


2016 ◽  
Vol 14 (1) ◽  
pp. 172988141667813 ◽  
Author(s):  
Clara Gomez ◽  
Alejandra Carolina Hernandez ◽  
Jonathan Crespo ◽  
Ramon Barber

The aim of the work presented in this article is to develop a navigation system that allows a mobile robot to move autonomously in an indoor environment using perceptions of multiple events. A topological navigation system based on events that imitates human navigation using sensorimotor abilities and sensorial events is presented. The increasing interest in building autonomous mobile systems makes the detection and recognition of perceptions a crucial task. The proposed system can be considered a perceptive navigation system, as the navigation process is based on the perception and recognition of natural and artificial landmarks, among others. The innovation of this work resides in the use of an integration interface to handle multiple events concurrently, leading to a more complete and advanced navigation system. The developed architecture eases the integration of new elements due to its modularity and the decoupling between modules. Finally, experiments have been carried out on several mobile robots, and their results show the feasibility of the proposed navigation system and the effectiveness of the sensorial data integration managed as events.
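The integration-interface idea above, where several modules react to the same sensorial event without knowing about each other, is commonly realized as a publish/subscribe pattern. The toy sketch below illustrates that pattern only; the class and event names are invented and do not reflect the authors' architecture.

```python
from collections import defaultdict

class EventBus:
    """Toy integration interface: perception modules publish events and
    navigation modules subscribe, keeping the modules decoupled."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, data):
        # every subscriber to this event type reacts to the same perception
        for handler in self._handlers[event_type]:
            handler(data)

bus = EventBus()
log = []
# localization and replanning both react to the same landmark event
bus.subscribe("landmark_detected", lambda d: log.append(("localize", d["id"])))
bus.subscribe("landmark_detected", lambda d: log.append(("replan", d["id"])))
bus.publish("landmark_detected", {"id": "door_3"})
```

Adding a new perception source or navigation behavior only requires a new publisher or subscriber, which is the modularity benefit the abstract highlights.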


Robotica ◽  
2013 ◽  
Vol 31 (6) ◽  
pp. 969-980 ◽  
Author(s):  
Yaser Maddahi ◽  
Ali Maddahi ◽  
Nariman Sepehri

SUMMARY
Odometry errors, which occur during wheeled mobile robot movement, are inevitable, as they originate from hard-to-avoid imperfections such as unequal wheel diameters, joint misalignment, backlash, slippage in encoder pulses, and more. This paper extends the method developed previously by the authors for the calibration of differential mobile robots to reduce positioning errors for the class of mobile robots with omnidirectional wheels. The method is built upon the easy-to-construct kinematic formulation of omnidirectional wheels and is capable of compensating both systematic and non-systematic errors. The effectiveness of the method is experimentally investigated on a prototype three-wheeled omnidirectional mobile robot. The validations include tracking unseen trajectories, self-rotation, and travelling over surface irregularities. Results show that the method is very effective, improving position errors by at least 68%. Since the method is simple to implement and makes no assumption about the sources of errors, it should be considered seriously as a tool for calibrating omnidirectional mobile robots with any number of wheels.
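To make the kinematic formulation above concrete: for a three-wheeled omnidirectional robot, wheel rim speeds are a linear function of the body twist, and per-wheel calibration factors can absorb effective-radius errors. The sketch below is a generic version of this setup, not the authors' calibration procedure; the wheel angles, center-to-wheel distance, and scale factors are assumed values for illustration.

```python
import numpy as np

ALPHA = np.deg2rad([90.0, 210.0, 330.0])  # assumed wheel placement angles
L = 0.15                                   # assumed center-to-wheel distance [m]

def wheel_matrix(scale=(1.0, 1.0, 1.0)):
    """Map a body twist (vx, vy, omega) to the three wheel rim speeds.

    `scale` holds per-wheel correction factors standing in for calibrated
    effective-radius errors (values here are illustrative only).
    """
    J = np.array([[-np.sin(a), np.cos(a), L] for a in ALPHA])
    return np.diag(scale) @ J

# the "true" robot has slightly unequal wheels and produces the measurements
J_true = wheel_matrix(scale=(1.02, 0.98, 1.00))
wheel_speeds = J_true @ np.array([0.3, 0.1, 0.5])

# naive odometry inverts the nominal matrix and is biased by the unequal wheels;
# calibrated odometry inverts the corrected matrix and recovers the twist
twist_naive = np.linalg.solve(wheel_matrix(), wheel_speeds)
twist_cal = np.linalg.solve(J_true, wheel_speeds)
```

The comparison between `twist_naive` and `twist_cal` shows how even small per-wheel deviations bias the recovered pose, which is why systematic-error calibration pays off over long trajectories.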

