Improved AGV Path Planning Algorithm Based on Grid Map Model

Author(s): Chen Zeyu ◽ Feng Zixiao ◽ Fan Zhiqiang
2021 ◽ Vol 7 ◽ pp. e612
Author(s): Dong Wang ◽ Jie Zhang ◽ Jiucai Jin ◽ Deqing Liu ◽ Xingpeng Mao

A global path planning algorithm is proposed for unmanned surface vehicles (USVs) operating under short time requirements in large-scale, complex multi-island marine environments. Fast-marching-based path planning for USVs is performed on grid maps, and its computational efficiency drops as the map grows; this can be mitigated by restructuring the algorithm. In the proposed algorithm, path planning is performed twice, on grid maps with different spatial resolutions (SRs). The first pass runs on a low-SR grid map to determine effective regions, and the second runs on a high-SR grid map to rapidly obtain the final high-precision global path. Each pass applies a modified inshore-distance-constraint fast marching square (IDC-FM2) method, which constrains the path portions around an obstacle to a region determined by two inshore-distance parameters. The path planning results show that the proposed algorithm generates smooth, safe global paths whose obstacle-bypassing portions can be flexibly adjusted. Compared with IDC-FM2 path planning applied to a single grid map, this algorithm significantly improves calculation efficiency while maintaining the precision of the planned path.
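The core idea of the abstract is a coarse-to-fine, two-stage search. The sketch below illustrates that structure only: it uses a Dijkstra-style wavefront on an occupancy grid as a stand-in for the fast marching solver, and a simple corridor dilation around the coarse path as a stand-in for the "effective regions"; the grid layout, the corridor radius, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Two-stage (coarse-to-fine) grid path planning sketch.
import heapq
import numpy as np

def wavefront_path(occ, start, goal):
    """Shortest 8-connected path on a boolean occupancy grid (True = obstacle)."""
    h, w = occ.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    moves = [(-1,0),(1,0),(0,-1),(0,1),(-1,-1),(-1,1),(1,-1),(1,1)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                      # stale queue entry
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not occ[nr, nc]:
                nd = d + np.hypot(dr, dc)
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal                 # assumes the goal is reachable
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

def coarse_to_fine_plan(occ_high, start, goal, factor=4, corridor=3):
    """Stage 1: plan on a downsampled grid; stage 2: replan on the full grid
    restricted to a corridor around the coarse path."""
    # Downsample conservatively: a coarse cell is blocked if any covered
    # fine cell is blocked (may block start/goal cells in tight maps).
    h, w = occ_high.shape
    occ_low = occ_high[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).any(axis=(1, 3))
    coarse = wavefront_path(occ_low,
                            (start[0] // factor, start[1] // factor),
                            (goal[0] // factor, goal[1] // factor))
    # Build the effective region: fine cells near the up-scaled coarse path.
    region = np.ones_like(occ_high)       # True = excluded from stage 2
    rad = corridor * factor
    for r, c in coarse:
        r0, c0 = r * factor, c * factor
        region[max(0, r0 - rad):r0 + rad, max(0, c0 - rad):c0 + rad] = False
    # Stage 2: high-resolution planning inside the corridor only.
    return wavefront_path(occ_high | region, start, goal)
```

The speed-up comes from the same source as in the abstract: the expensive high-resolution search only explores the narrow region selected by the cheap low-resolution pass.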


2021 ◽ Vol 11 (16) ◽ pp. 7378
Author(s): Hongchao Zhuang ◽ Kailun Dong ◽ Yuming Qi ◽ Ning Wang ◽ Lei Dong

To address the inefficiency of path planning for mobile robots traveling to multiple destinations, a multi-destination global path planning algorithm based on the optimal obstacle value is proposed. A grid map is built to simulate the real working environment of the mobile robot, and the map is optimized and reconstructed according to the rules for live stones in the game of Go. From this grid environment, the obstacle values between every pair of destination points are obtained. Using a simulated annealing strategy combined with these pairwise obstacle values, the robot's multi-destination visiting sequence is optimized and the optimal mobile node of the path plan is obtained. The parameters of the reward function of a Q-learning algorithm are then tuned to obtain the Q-values of the path, and the optimal multi-destination path is acquired when the robot passes through the fewest obstacles. Multi-destination path planning of the mobile robot is simulated in MATLAB R2016b (MathWorks, Natick, MA, USA) under multiple working conditions, and Pareto numerical graphs are obtained. Across these conditions, the path length of the multi-destination global planner is 22% shorter than the average path length of the single-destination planning algorithm. The results show that the proposed multi-destination global path planning method based on the optimal obstacle value is reasonable and effective, and that it helps improve the terrain adaptability of mobile robots.
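The ordering step described in the abstract is essentially a traveling-salesman-style search over the visiting sequence, driven by the precomputed pairwise obstacle values. The sketch below shows one plausible simulated annealing loop under that assumption; the cooling schedule, the segment-reversal neighbourhood, and the function name are illustrative choices, not the paper's exact procedure.

```python
# Simulated annealing over the destination visiting order (sketch).
import math
import random

def anneal_visit_order(cost, n_iter=20000, t0=100.0, cooling=0.999, seed=0):
    """cost[i][j] = obstacle value between destinations i and j; node 0 is the start."""
    rng = random.Random(seed)
    n = len(cost)
    order = list(range(1, n))             # visiting order of destinations 1..n-1
    rng.shuffle(order)

    def tour_cost(o):
        total, prev = 0.0, 0              # leave from the start node 0
        for node in o:
            total += cost[prev][node]
            prev = node
        return total

    best = list(order)
    best_c = cur_c = tour_cost(order)
    t = t0
    for _ in range(n_iter):
        i, j = sorted(rng.sample(range(len(order)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # reverse a segment
        cand_c = tour_cost(cand)
        # Always accept improvements; accept worse orders with Boltzmann probability.
        if cand_c < cur_c or rng.random() < math.exp((cur_c - cand_c) / t):
            order, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = list(order), cur_c
        t *= cooling                      # geometric cooling
    return best, best_c

# Usage (hypothetical matrix): order, total = anneal_visit_order(obstacle_values)
```

In the paper's framework the returned order would then be handed to the Q-learning stage, which plans the actual obstacle-minimizing path between consecutive destinations.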


2021 ◽ Vol 9 (3) ◽ pp. 252
Author(s): Yushan Sun ◽ Xiaokun Luo ◽ Xiangrui Ran ◽ Guocheng Zhang

This research addresses the safe navigation of autonomous underwater vehicles (AUVs) in the deep ocean, a complex and changeable environment with many underwater mountains. When an AUV navigates in the deep sea, it encounters many underwater canyons whose hard valley walls seriously threaten its safety. To enable safe driving of AUVs in underwater canyons and to explore the potential of autonomous obstacle avoidance in uncertain environments, an improved AUV path planning algorithm based on the deep deterministic policy gradient (DDPG) algorithm is proposed. The method is an end-to-end path planning algorithm that optimizes the policy directly: it takes sensor information as input and outputs the driving speed and yaw angle. The planner can reach a predetermined target point while avoiding large-scale static obstacles, such as the valley walls in the simulated underwater canyon environment, as well as sudden small-scale dynamic obstacles, such as marine life and other vehicles. For the multi-objective structure of obstacle-avoiding path planning, a modularized reward function is designed and combined with the artificial potential field method to provide continuous rewards. This research also proposes a new algorithm, the deep SumTree deterministic policy gradient algorithm (SumTree-DDPG), which improves the random storage and sampling strategy applied to DDPG experience samples: samples are classified and stored according to their importance using a SumTree structure so that high-quality samples are drawn more often, which improves the convergence speed of the model. Finally, an underwater canyon simulation environment is written in Python, and a deep reinforcement learning simulation platform is built on a high-performance computer to train the AUV. Simulations verify that the proposed path planning method can guide the under-actuated underwater vehicle to the target without colliding with any obstacles. Compared with the DDPG algorithm, the improved SumTree-DDPG planner achieves better stability, a higher total training reward, and greater robustness.
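The SumTree component the abstract describes is the standard data structure behind prioritized experience replay. The sketch below shows a minimal version of such a buffer that could be plugged into a DDPG training loop; the capacity, the priority exponent, and all class and method names are illustrative assumptions, and the actor/critic networks of DDPG are omitted.

```python
# SumTree-based prioritized replay buffer (sketch).
import random
import numpy as np

class SumTree:
    """Binary tree whose internal nodes store the sum of child priorities,
    so priority-proportional sampling takes O(log n)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = np.zeros(2 * capacity - 1)   # leaves hold priorities
        self.data = [None] * capacity
        self.write = 0

    def add(self, priority, transition):
        idx = self.write + self.capacity - 1
        self.data[self.write] = transition
        self.update(idx, priority)
        self.write = (self.write + 1) % self.capacity

    def update(self, idx, priority):
        change = priority - self.tree[idx]
        self.tree[idx] = priority
        while idx != 0:                           # propagate the change to the root
            idx = (idx - 1) // 2
            self.tree[idx] += change

    def get(self, s):
        """Walk down to the leaf whose cumulative priority covers s."""
        idx = 0
        while 2 * idx + 1 < len(self.tree):
            left, right = 2 * idx + 1, 2 * idx + 2
            idx = left if s <= self.tree[left] else right
            if idx == right:
                s -= self.tree[left]
        return idx, self.tree[idx], self.data[idx - self.capacity + 1]

class PrioritizedReplay:
    """Stores transitions with priority |TD error|^alpha and samples them
    proportionally, so informative samples are replayed more often."""
    def __init__(self, capacity=100000, alpha=0.6, eps=1e-5):
        self.tree, self.alpha, self.eps = SumTree(capacity), alpha, eps

    def push(self, transition, td_error=1.0):
        self.tree.add((abs(td_error) + self.eps) ** self.alpha, transition)

    def sample(self, batch_size):
        total = self.tree.tree[0]
        segment = total / batch_size
        idxs, batch = [], []
        for i in range(batch_size):
            s = random.uniform(i * segment, (i + 1) * segment)
            idx, _, data = self.tree.get(s)
            idxs.append(idx)
            batch.append(data)
        return idxs, batch   # after the critic update, call tree.update(idx, new_priority)
```

Replacing DDPG's uniform replay sampling with a buffer of this kind is what gives the described SumTree-DDPG its bias toward high-value experience and, per the abstract, its faster convergence.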

