Dynamic Targets Detection for Robotic Applications Using Panoramic Camera Based on Optical Flow

2013 ◽  
Vol 376 ◽  
pp. 455-460
Author(s):  
Wei Zhu ◽  
Li Tian ◽  
Fang Di ◽  
Jian Li Li ◽  
Ke Jie Li

The optical flow method is an important and effective technique for detecting and tracking moving objects in robot inspection systems. Because the traditional Horn-Schunck and Lucas-Kanade (LK) optical flow methods cannot meet the demands of real-time performance and accuracy simultaneously, an improved optical flow method based on a Gaussian image pyramid is proposed. A layered structure is obtained by downsampling the original image sequence, so that high-speed motion is converted into continuous motion at lower speed. The optical flow of corner points is first calculated with the LK method at the lowest-resolution layer, then delivered to the layer above, and so on, until the estimated optical flow vectors of the original image sequence are obtained. In this way, the requirements of accuracy and real-time performance can both be met for robotic moving-obstacle recognition.
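The coarse-to-fine scheme described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: it assumes a single global translation (per-corner windows work the same way), builds the pyramid by 2×2 block averaging, and all function names are ours.

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Layered structure: each layer is a 2x2-block-averaged downsampling
    of the layer below (pyr[0] is the original image)."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        p = pyr[-1]
        h, w = (p.shape[0] // 2) * 2, (p.shape[1] // 2) * 2
        p = p[:h, :w]
        pyr.append(0.25 * (p[0::2, 0::2] + p[1::2, 0::2]
                           + p[0::2, 1::2] + p[1::2, 1::2]))
    return pyr

def lk_translation(i0, i1):
    """One Lucas-Kanade least-squares step for a global translation."""
    Ix = np.gradient(i0, axis=1)
    Iy = np.gradient(i0, axis=0)
    It = i1 - i0
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)          # (u, v) in pixels

def pyramidal_lk(i0, i1, levels=3):
    """Coarse-to-fine estimation: solve at the lowest-resolution layer,
    then deliver the (doubled) estimate to the next layer and refine."""
    p0, p1 = build_pyramid(i0, levels), build_pyramid(i1, levels)
    u = v = 0.0
    for lvl in range(levels - 1, -1, -1):
        u, v = round(2 * u), round(2 * v)  # integer guess for this layer
        warped = np.roll(np.roll(p1[lvl], -v, axis=0), -u, axis=1)
        du, dv = lk_translation(p0[lvl], warped)  # refine the residual
        u, v = u + du, v + dv
    return u, v
```

The key property is visible in the loop: a 4-pixel motion becomes a 1-pixel motion two layers down, small enough for the LK linearization to hold, and each finer layer only refines a sub-pixel residual.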

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Xiong Zhao ◽  
Tao Zuo ◽  
Xinyu Hu

Most current visual Simultaneous Localization and Mapping (SLAM) algorithms are designed under the assumption of a static environment, and their robustness and accuracy degrade in dynamic environments. The reason is that moving objects in the scene cause feature mismatches during pose estimation, which in turn degrades localization and mapping accuracy. Meanwhile, a three-dimensional semantic map plays a key role in mobile robot navigation, path planning, and other tasks. In this paper, we present OFM-SLAM: Optical Flow combining Mask-RCNN SLAM, a novel visual SLAM system for semantic mapping in dynamic indoor environments. Firstly, we use the Mask-RCNN network to detect potential moving objects and generate masks of dynamic objects. Secondly, an optical flow method is adopted to detect dynamic feature points. Then, we combine the optical flow method and Mask-RCNN to cull all dynamic points, so that the SLAM system can track without them. Finally, the semantic labels obtained from Mask-RCNN are mapped onto the point cloud to generate a three-dimensional semantic map that contains only the static parts of the scene and their semantic information. We evaluate our system on the public TUM datasets. The experimental results demonstrate that our system is more effective in dynamic scenarios: OFM-SLAM estimates the camera pose more accurately and achieves more precise localization in highly dynamic environments.
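The dynamic-point culling step combines two cues: a per-pixel segmentation mask of potentially moving objects and a per-point optical-flow consistency check. A minimal numpy sketch of that combination (the function name, the median-flow background model, and the threshold are our illustrative choices, not the paper's):

```python
import numpy as np

def cull_dynamic_points(points, flows, dynamic_mask, flow_thresh=2.0):
    """Keep only feature points that lie outside the segmentation masks of
    potentially moving objects AND whose optical flow is consistent with
    the dominant (camera-induced) motion; everything else is culled."""
    points = np.asarray(points)               # (N, 2) pixel coords (row, col)
    flows = np.asarray(flows, dtype=float)    # (N, 2) per-point flow vectors
    in_mask = dynamic_mask[points[:, 0], points[:, 1]]
    # The median flow approximates the motion of the static background.
    residual = np.linalg.norm(flows - np.median(flows, axis=0), axis=1)
    keep = (~in_mask) & (residual < flow_thresh)
    return points[keep], keep
```

The two cues are complementary: the mask catches objects that could move but currently do not (a parked car, a seated person), while the flow residual catches motion the detector missed; tracking then proceeds on the surviving points only.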


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Wei Sun ◽  
Min Sun ◽  
Xiaorui Zhang ◽  
Mian Li

Video-based moving vehicle detection and tracking is an important prerequisite for vehicle counting in complex transportation environments. However, in complex natural scenes, conventional optical flow methods cannot accurately detect the boundary of a moving vehicle because of its shadow. In addition, tracked vehicles are often occluded by trees, buildings, etc., and particle filters are susceptible to particle degeneracy. To solve these problems, this paper proposes a moving vehicle detection and tracking method based on optical flow and an immune particle filter algorithm. The proposed method first uses optical flow to roughly detect the moving vehicle, then uses a shadow detection algorithm based on the HSV color space to mark shadow positions after threshold segmentation, and further applies a region-labeling algorithm to remove the shadows and accurately detect the moving vehicle. An improved antibody affinity calculation and mutation function are proposed to give the particle filter adaptability and robustness to scene interference. Experiments are carried out in complex traffic scenes with shadow and occlusion interference. The results show that the proposed algorithm handles shadow and occlusion interference well and achieves accurate detection and robust tracking of moving vehicles in complex transportation environments, with the potential to be deployed on a cloud computing platform.
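The HSV shadow test rests on a simple observation: a shadowed road pixel keeps roughly the background's hue and saturation but is darker by a bounded ratio, whereas a vehicle pixel usually changes hue or saturation as well. A minimal sketch of that criterion (thresholds and function names are illustrative assumptions, not the paper's values; inputs are HSV images scaled to [0, 1]):

```python
import numpy as np

def shadow_mask(frame_hsv, bg_hsv, v_lo=0.4, v_hi=0.9, s_tol=0.15, h_tol=0.1):
    """HSV shadow test: similar hue and saturation to the background,
    but darker by a ratio within [v_lo, v_hi]."""
    ratio = frame_hsv[..., 2] / np.maximum(bg_hsv[..., 2], 1e-6)
    return ((ratio >= v_lo) & (ratio <= v_hi)
            & (np.abs(frame_hsv[..., 1] - bg_hsv[..., 1]) <= s_tol)
            & (np.abs(frame_hsv[..., 0] - bg_hsv[..., 0]) <= h_tol))

def refine_motion_mask(motion_mask, frame_hsv, bg_hsv):
    """Remove shadow pixels from the rough optical-flow motion mask."""
    return motion_mask & ~shadow_mask(frame_hsv, bg_hsv)
```

In the full pipeline, region labeling would then discard the connected components removed here, leaving only the vehicle body for tracking.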


2012 ◽  
Vol 24 (4) ◽  
pp. 686-698 ◽  
Author(s):  
Lei Chen ◽  
Hua Yang ◽  
Takeshi Takaki ◽  
Idaku Ishii

In this paper, we propose a novel method for accurate optical flow estimation in real time for both high-speed and low-speed moving objects based on High-Frame-Rate (HFR) videos. We introduce a multiframe-straddling function to select several pairs of images with different frame intervals from an HFR image sequence, even when the estimated optical flow is required to be output at standard video rates (NTSC at 30 fps and PAL at 25 fps). The multiframe-straddling function can remarkably improve the measurable range of velocities in optical flow estimation without heavy computation by adaptively selecting a small frame interval for high-speed objects and a large frame interval for low-speed objects. On the basis of the relationship between the frame intervals and the accuracies of the optical flows estimated by the Lucas–Kanade method, we devise a method to determine multiple frame intervals in optical flow estimation and select an optimal frame interval from these intervals according to the amplitude of the estimated optical flow. Our method was implemented in software on a high-speed vision platform, IDP Express. The estimated optical flows were accurately output at intervals of 40 ms in real time using three pairs of 512×512 images; these images were selected by frame-straddling a 2000-fps video with intervals of 0.5, 1.5, and 5 ms. Several experiments were performed on high-speed movements to verify that our method can remarkably improve the measurable range of velocities in optical flow estimation, compared to optical flows estimated from 25-fps videos with the Lucas–Kanade method.
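The interval-selection logic described above can be sketched as a small decision function. This is our hedged reconstruction of the idea, not the IDP Express implementation: the displacement band and the fallback rule are illustrative assumptions; only the candidate intervals (0.5, 1.5, 5 ms) come from the abstract.

```python
def select_frame_interval(flow_amplitude_px, current_dt_ms,
                          intervals_ms=(0.5, 1.5, 5.0),
                          target_lo=1.0, target_hi=4.0):
    """Choose the straddling interval that keeps the inter-frame
    displacement inside a band the Lucas-Kanade step measures accurately:
    a small interval for high-speed objects, a large one for slow objects."""
    speed = flow_amplitude_px / current_dt_ms     # pixels per millisecond
    best = intervals_ms[0]                        # fall back to the smallest
    for dt in intervals_ms:
        disp = speed * dt                         # predicted displacement
        if target_lo <= disp <= target_hi:
            return dt
        if disp <= target_hi:
            best = dt    # largest interval still within measurable range
    return best
```

A fast object (large flow amplitude per millisecond) gets the 0.5 ms pair so the displacement stays small enough for LK; a slow object gets the 5 ms pair so its sub-pixel motion accumulates into something measurable.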


Author(s):  
Yingnian Wu ◽  
Qi Yang ◽  
Xiaohang Zhou

The theory and technology of human–machine coordination and natural interaction have wide application prospects in future smart factories. This paper elaborates the design and implementation of a body-following wheeled robot system based on Kinect, together with a gesture recognition function that enhances the interactive performance. An improved optical flow method is put forward to obtain the direction and speed of the target's movement: the fixed smoothing parameter of the traditional optical flow formulation is replaced by a variable one related to the local gradient value. Compared with the traditional optical flow method, this reflects the state of moving objects more clearly, reduces noise, and preserves real-time performance, solving the problem of tracking-state oscillation caused by skeleton-node drift when the target is occluded. Experiments on the wheeled robot confirm that the system accomplishes the tracking task reliably.
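The gradient-dependent smoothing parameter can be illustrated on the classic Horn-Schunck iteration. This is a minimal sketch under our own assumptions: the particular weighting formula (alpha decreasing with squared gradient magnitude), all constants, and all function names are ours; the abstract only states that the smoothing parameter becomes a variable tied to the local gradient.

```python
import numpy as np

def adaptive_alpha(img, alpha0=0.5, beta=1.0):
    """Variable smoothing weight: weaker smoothing where the local gradient
    is strong (likely motion boundaries), stronger in flat, noisy regions."""
    gx = np.gradient(img, axis=1)
    gy = np.gradient(img, axis=0)
    return alpha0 / (1.0 + beta * (gx**2 + gy**2))

def neighbour_avg(f):
    """4-neighbour mean with edge replication."""
    p = np.pad(f, 1, mode='edge')
    return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

def horn_schunck_adaptive(i0, i1, alpha_map, n_iter=200):
    """Horn-Schunck iterations with a per-pixel smoothing weight."""
    Ix = np.gradient(i0, axis=1)
    Iy = np.gradient(i0, axis=0)
    It = i1 - i0
    u = np.zeros_like(i0)
    v = np.zeros_like(i0)
    den = alpha_map**2 + Ix**2 + Iy**2
    for _ in range(n_iter):
        u_avg, v_avg = neighbour_avg(u), neighbour_avg(v)
        num = Ix * u_avg + Iy * v_avg + It
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v
```

With a fixed alpha, strong smoothing blurs flow across the person's silhouette; letting alpha shrink at high-gradient pixels keeps the boundary sharp while flat regions are still denoised, which is the behavior the abstract credits for suppressing tracking-state oscillation.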

