Realtime feature points matching method in game engine

2008 ◽  
Vol 28 (3) ◽  
pp. 799-800
Author(s):  
Yang LIU


Author(s):  
Youssef Ouadid ◽  
Abderrahmane Elbalaoui ◽  
Mehdi Boutaounte ◽  
Mohamed Fakir ◽  
Brahim Minaoui

In this paper, a graph-based handwritten Tifinagh character recognition system is presented. In preprocessing, the Zhang-Suen thinning algorithm is enhanced. In feature extraction, a novel key point extraction algorithm is presented. Images are then represented by adjacency matrices defining graphs whose nodes are the extracted feature points. These graphs are classified using a graph matching method. Experimental results obtained on two databases demonstrate the effectiveness of the approach, and the system achieves a good recognition rate.
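A minimal illustrative sketch, in Python, of the graph representation described above: extracted feature points are turned into an adjacency matrix, and two characters are compared with a crude edge-mismatch count. The connect_radius threshold, the distance-based edge rule, and the padding-based comparison are assumptions made for illustration, not the paper's actual key point extraction or graph matching formulation.

import numpy as np

def build_adjacency(points, connect_radius=15.0):
    """Connect feature points (e.g. skeleton endpoints and junctions) that lie
    within connect_radius pixels of each other (assumed edge rule)."""
    n = len(points)
    adj = np.zeros((n, n), dtype=np.uint8)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(np.subtract(points[i], points[j])) <= connect_radius:
                adj[i, j] = adj[j, i] = 1
    return adj

def graph_distance(adj_a, adj_b):
    """Crude graph similarity: pad the smaller matrix and count edge mismatches."""
    n = max(len(adj_a), len(adj_b))
    pa = np.zeros((n, n), dtype=int)
    pb = np.zeros((n, n), dtype=int)
    pa[:len(adj_a), :len(adj_a)] = adj_a
    pb[:len(adj_b), :len(adj_b)] = adj_b
    return int(np.abs(pa - pb).sum())

# Usage: classify an unknown character by the template with the smallest distance.
unknown = build_adjacency([(3, 4), (10, 4), (10, 18)])
template = build_adjacency([(2, 5), (11, 5), (11, 17)])
print(graph_distance(unknown, template))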


2011 ◽  
Vol 383-390 ◽  
pp. 5193-5199 ◽  
Author(s):  
Jian Ying Yuan ◽  
Xian Yong Liu ◽  
Zhi Qiang Qiu

In an optical measuring system with a handheld digital camera, image point matching is essential for three-dimensional (3D) reconstruction. Traditional matching algorithms are usually based on epipolar geometry or multiple baselines: epipolar geometry alone cannot eliminate mistaken matching points, while multi-baseline methods lose many matching points. In this paper, a robust algorithm is presented to eliminate mistaken matching feature points during 3D reconstruction from multiple images. The algorithm includes three steps: (1) pre-matching the feature points using epipolar geometry and image topological structure constraints; (2) eliminating mistaken matching points by the principle of triangulation across multiple images; (3) refining the camera external parameters by bundle adjustment. After the external parameters of every image are refined, steps (1) to (3) are repeated until all feature points have been matched. Comparative experiments with real image data show that mistaken matching feature points are effectively eliminated while almost no matching points are lost, giving better performance than traditional matching algorithms.
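A sketch of the two geometric filters named in steps (1) and (2), using standard OpenCV and NumPy primitives: an epipolar-distance test for candidate pairs and a reprojection-error test after triangulation. The thresholds and function names are illustrative assumptions; the paper's image topological structure constraint and its bundle adjustment loop are not reproduced here.

import cv2
import numpy as np

def epipolar_ok(pt1, pt2, F, max_dist=1.5):
    """Keep a candidate match only if pt2 lies near the epipolar line of pt1,
    given the fundamental matrix F between the two images."""
    a, b, c = F @ np.array([pt1[0], pt1[1], 1.0])   # epipolar line in image 2
    dist = abs(a * pt2[0] + b * pt2[1] + c) / np.hypot(a, b)
    return dist <= max_dist

def triangulation_ok(P1, P2, pt1, pt2, max_reproj=1.0):
    """Triangulate one matched pair with the 3x4 projection matrices P1, P2 and
    reject the match if its reprojection error is large in either image."""
    X = cv2.triangulatePoints(P1, P2,
                              np.float64(pt1).reshape(2, 1),
                              np.float64(pt2).reshape(2, 1))
    X = (X[:3] / X[3]).ravel()                       # homogeneous -> Euclidean
    err = 0.0
    for P, pt in ((P1, pt1), (P2, pt2)):
        proj = P @ np.append(X, 1.0)
        proj = proj[:2] / proj[2]
        err = max(err, float(np.linalg.norm(proj - np.asarray(pt))))
    return err <= max_reproj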


2018 ◽  
Vol 55 (4) ◽  
pp. 041005
Author(s):  
赵夫群 Zhao Fuqun ◽  
耿国华 Geng Guohua

Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2553 ◽  
Author(s):  
Jingwen Cui ◽  
Jianping Zhang ◽  
Guiling Sun ◽  
Bowen Zheng

Based on computer vision technology, this paper proposes a method for identifying and locating crops so that they can be successfully grasped during automatic picking. The method combines the YOLOv3 algorithm, running under the DarkNet framework, with a point cloud image coordinate matching method. Firstly, RGB (red, green and blue) images and depth images are acquired with a Kinect v2 depth camera. Secondly, the YOLOv3 algorithm identifies the various types of target crops in the RGB images and determines their feature points. Finally, the 3D coordinates of the feature points are located on the point cloud images. Compared with other methods, this crop identification method has high accuracy and a small positioning error, laying a good foundation for the subsequent harvesting of crops with robotic arms.
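An illustrative sketch, in Python, of the final coordinate matching step: the centre pixel of a detected bounding box is combined with the aligned depth value and back-projected to a 3D camera-frame point with the pinhole model. The intrinsic parameters below are placeholders rather than calibrated Kinect v2 values, and the detector itself is abstracted away (any YOLOv3 implementation that returns boxes would serve).

import numpy as np

# Assumed depth-camera intrinsics (focal lengths and principal point), not
# calibrated Kinect v2 values.
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def box_to_3d(box, depth_image):
    """box = (x_min, y_min, x_max, y_max) in depth-aligned pixel coordinates.
    Returns the (X, Y, Z) feature point of the detected crop in metres."""
    u = int((box[0] + box[2]) / 2)                 # bounding-box centre pixel
    v = int((box[1] + box[3]) / 2)
    z = depth_image[v, u] / 1000.0                 # depth stored in millimetres
    x = (u - CX) * z / FX                          # pinhole back-projection
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# Usage with a synthetic depth frame and one detection from the RGB image:
depth = np.full((424, 512), 800, dtype=np.uint16)  # 0.8 m everywhere
print(box_to_3d((200, 150, 260, 220), depth))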

