3D Object Detection
Recently Published Documents


TOTAL DOCUMENTS: 469 (five years: 381)
H-INDEX: 30 (five years: 16)

Sensors, 2022, Vol. 22 (2), pp. 613
Author(s): Pablo Venegas, Eugenio Ivorra, Mario Ortega, Idurre Sáez de Ocáriz

The maintenance of industrial equipment extends its useful life, improves its efficiency, reduces the number of failures, and increases the safety of its use. This study proposes a methodology for developing a predictive maintenance tool, based on infrared thermographic measurements, capable of anticipating failures in industrial equipment. The thermal response of selected equipment in normal operation and in controlled, induced anomalous operation was analyzed. Characterizing these situations enabled the development of a machine learning system capable of predicting malfunctions. Different options among the available conventional machine learning techniques were analyzed and assessed, and one was finally selected for electronic equipment maintenance activities. This study provides advances towards the robust application of machine learning combined with infrared thermography and augmented reality for maintenance of industrial equipment. The selected predictive maintenance system enables quick, automatic hand-held thermal inspections using 3D object detection and a pose estimation algorithm, making predictions with an accuracy of 94% at an inference time of 0.006 s.
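The abstract does not identify which conventional model was selected, so the following is only a minimal sketch of the kind of pipeline it describes: a classifier trained on features extracted from thermographic measurements of normal and induced anomalous operation. The feature and label arrays here are synthetic placeholders, not the study's data.

# Minimal sketch of a conventional ML pipeline for thermographic fault
# prediction, assuming features were already extracted from the thermal
# measurements; the actual model and features are not given in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
thermal_features = rng.normal(size=(500, 16))  # hypothetical per-inspection features
labels = rng.integers(0, 2, size=500)          # 0 = normal, 1 = induced anomaly

X_train, X_test, y_train, y_test = train_test_split(
    thermal_features, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))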


Cobot, 2022, Vol. 1, pp. 2
Author(s): Hao Peng, Guofeng Tong, Zheng Li, Yaqi Wang, Yuyuan Shao

Background: 3D object detection from point clouds in road scenes has attracted much attention recently. Voxel-based methods voxelize the scene into regular grids, which can be processed by advanced convolutional feature-learning frameworks for semantic feature learning. Point-based methods can extract geometric features of individual points because the original coordinates are preserved. Combining the two is effective for 3D object detection. However, current methods use a voxel-based detection head with anchors for classification and localization. Although the preset anchors cover the entire scene, they are poorly suited to detection tasks with larger scenes and multiple object categories, owing to the limitation of the voxel size. Additionally, the misalignment between predicted confidences and proposals during Region of Interest (RoI) selection hinders 3D object detection. Methods: We investigate the combination of voxel-based and point-based methods for 3D object detection, and propose a voxel-to-point module that captures both semantic and geometric features. The voxel-to-point module aids the detection of small objects and avoids preset anchors at the inference stage. Moreover, a confidence adjustment module with center-boundary-aware confidence attention is proposed to resolve the misalignment between predicted confidences and proposals during RoI selection. Results: The proposed method achieved state-of-the-art results for 3D object detection on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) object detection benchmark. As of September 19, 2021, our method ranked 1st in 3D and Bird's Eye View (BEV) detection of cyclists at the 'easy' difficulty level, and 2nd in 3D detection of cyclists at the 'moderate' level. Conclusions: We propose an end-to-end two-stage 3D object detector with a voxel-to-point module and a confidence adjustment module.
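The exact voxel-to-point design is not given in the abstract; the following is a minimal sketch under the assumption that each point looks up the semantic feature of the voxel it falls into and concatenates it with its own geometric feature. All names (voxel_feats, point_feats, grid_origin) are illustrative.

# Minimal sketch of a voxel-to-point feature gather: each point retrieves
# the feature of its enclosing voxel and fuses it with its own point
# feature. An illustration, not the paper's exact module.
import torch

def voxel_to_point(points, point_feats, voxel_feats, voxel_size, grid_origin):
    # points:      (N, 3) xyz coordinates
    # point_feats: (N, Cp) per-point geometric features
    # voxel_feats: (D, H, W, Cv) dense voxel feature volume
    # voxel_size:  (3,) edge lengths; grid_origin: (3,) min corner of the grid
    idx = ((points - grid_origin) / voxel_size).long()       # (N, 3) voxel indices
    D, H, W, _ = voxel_feats.shape
    idx[:, 0].clamp_(0, D - 1); idx[:, 1].clamp_(0, H - 1); idx[:, 2].clamp_(0, W - 1)
    gathered = voxel_feats[idx[:, 0], idx[:, 1], idx[:, 2]]  # (N, Cv) semantic features
    return torch.cat([point_feats, gathered], dim=-1)        # (N, Cp + Cv)

# Example with random data:
pts = torch.rand(1024, 3) * 10
fused = voxel_to_point(pts, torch.rand(1024, 32), torch.rand(20, 20, 20, 64),
                       torch.tensor([0.5, 0.5, 0.5]), torch.zeros(3))
print(fused.shape)  # torch.Size([1024, 96])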


Author(s): Niclas Vodisch, Ozan Unal, Ke Li, Luc Van Gool, Dengxin Dai

2022, pp. 108524
Author(s): Rui Qian, Xin Lai, Xirong Li

Author(s): Joanna Stanisz, Konrad Lis, Marek Gorgon

In this paper, we present a hardware-software implementation of a deep neural network for object detection based on a point cloud obtained by a LiDAR sensor. The PointPillars network was used in the research, as it offers a reasonable compromise between detection accuracy and computational complexity. The Brevitas/PyTorch tools were used for network quantisation (described in our previous paper) and the FINN tool for hardware implementation on the reprogrammable Zynq UltraScale+ MPSoC device. The obtained results show that a quite significant reduction in computation precision, along with a few simplifications of the network architecture, allows the solution to be implemented on a heterogeneous embedded platform with at most a 19% AP loss in 3D, at most an 8% AP loss in BEV, and an execution time of 375 ms (of which the FPGA part takes 262 ms). We have also compared our solution in terms of inference speed with the Vitis AI implementation proposed by Xilinx (19 Hz frame rate). In particular, we have thoroughly investigated the fundamental causes of the difference in frame rate between the two solutions. The code is available at https://github.com/vision-agh/pp-finn.
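The abstract mentions Brevitas-based quantisation; below is a minimal sketch of what a quantised PointPillars-style convolutional block can look like in Brevitas. The bit widths and layer shapes are assumptions for illustration, not the configuration used in the paper (see the linked repository for that).

# Minimal sketch of a quantised conv block in Brevitas, in the spirit of a
# PointPillars backbone layer; bit widths and shapes are illustrative
# assumptions, not the paper's configuration.
import torch.nn as nn
from brevitas.nn import QuantConv2d, QuantReLU

class QuantConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, wbits=4, abits=4):
        super().__init__()
        self.conv = QuantConv2d(in_ch, out_ch, kernel_size=3, padding=1,
                                bias=False, weight_bit_width=wbits)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = QuantReLU(bit_width=abits)  # quantised activations

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))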


Machines, 2021, Vol. 10 (1), pp. 19
Author(s): Mu Chen, Huaici Zhao, Pengfei Liu

Three-dimensional (3D) object detection is an important task in the field of machine vision, and detecting 3D objects with monocular vision is even more challenging. We observe that most existing monocular methods focus on the design of the feature extraction framework or on embedded geometric constraints, but ignore possible errors in intermediate stages of the detection pipeline; these errors may be further amplified in subsequent stages. After examining existing keypoint-based detection frameworks, we find that the accuracy of keypoint prediction strongly affects the recovery of the 3D object position. Therefore, we propose a novel keypoint uncertainty prediction network (KUP-Net) for monocular 3D object detection. In this work, we design an uncertainty prediction module to characterize the uncertainty in keypoint prediction, and this uncertainty is then jointly optimized with the object position. In addition, we adopt positional encoding to assist the uncertainty prediction and use a timing coefficient to stabilize the learning process. Experiments are conducted on the KITTI benchmark. At the easy and moderate difficulty levels, our detector achieves 17.26 and 11.78 in AP3D and 23.59 and 16.63 in APBEV, respectively, which are higher than the recent method KM3D.
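The abstract does not spell out the joint optimization; a common formulation for learning prediction uncertainty, shown below as an assumption rather than KUP-Net's actual loss, is a Laplacian negative log-likelihood in which each keypoint's regression error is scaled by a predicted uncertainty term.

# Minimal sketch of an uncertainty-weighted keypoint regression loss
# (Laplacian negative log-likelihood). The network predicts log_sigma per
# keypoint alongside its location; this is a standard aleatoric-uncertainty
# formulation and an assumption about the paper's loss.
import torch

def keypoint_uncertainty_loss(pred_kpts, gt_kpts, log_sigma):
    # pred_kpts, gt_kpts: (N, K, 2) image-plane keypoints
    # log_sigma:          (N, K, 1) predicted log uncertainty per keypoint
    err = (pred_kpts - gt_kpts).abs().sum(dim=-1, keepdim=True)  # (N, K, 1) L1 error
    # Large predicted sigma downweights the error but is penalised by +log_sigma.
    return (err * torch.exp(-log_sigma) + log_sigma).mean()

loss = keypoint_uncertainty_loss(torch.rand(4, 9, 2), torch.rand(4, 9, 2),
                                 torch.zeros(4, 9, 1, requires_grad=True))
print(loss)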


2021, Vol. 2021, pp. 1-11
Author(s): Huaijin Liu, Jixiang Du, Yong Zhang, Hongbo Zhang

Currently, there are many kinds of voxel-based multisensor 3D object detectors, whereas point-based multisensor 3D object detectors have not been fully studied. In this paper, we propose a new two-stage 3D object detection method based on point cloud and image fusion to improve detection accuracy. To address the insufficient semantic information of point clouds, we perform multiscale deep fusion of LiDAR points and camera images in a point-wise manner to enhance point features. Because LiDAR points are unevenly distributed, object point clouds in distant areas are sparse; we therefore design a point cloud completion module that predicts the spatial shape of objects in the candidate boxes and extracts structural information to improve the feature representation and further refine the boxes. The framework is evaluated on the widely used KITTI and SUN RGB-D datasets. Experimental results show that our method outperforms all state-of-the-art point-based 3D object detection methods and performs comparably to voxel-based methods.
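A minimal sketch of point-wise LiDAR-camera fusion is given below: each point is projected into the image with a 3x4 projection matrix and image features are bilinearly sampled at that pixel, then concatenated with the point feature. This illustrates the general technique; the paper's multiscale fusion details may differ.

# Minimal sketch of point-wise LiDAR-camera feature fusion via projection
# and bilinear sampling; an illustration of the general technique.
import torch
import torch.nn.functional as F

def pointwise_image_fusion(points, point_feats, image_feats, P):
    # points: (N, 3) LiDAR xyz; point_feats: (N, Cp)
    # image_feats: (1, Ci, H, W) CNN feature map; P: (3, 4) projection matrix
    N = points.shape[0]
    homo = torch.cat([points, torch.ones(N, 1)], dim=1)         # (N, 4) homogeneous
    uvw = homo @ P.t()                                          # (N, 3)
    uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)               # pixel coordinates
    H, W = image_feats.shape[-2:]
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1).view(1, N, 1, 2)
    img = F.grid_sample(image_feats, grid, align_corners=True)  # (1, Ci, N, 1)
    img = img.squeeze(0).squeeze(-1).t()                        # (N, Ci)
    return torch.cat([point_feats, img], dim=-1)                # (N, Cp + Ci)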


2021, Vol. 12 (9), pp. 459-469
Author(s): D. D. Rukhovich

In this paper, we propose a novel method for joint 3D object detection and room layout estimation. The proposed method surpasses all existing methods for 3D object detection from monocular images on the indoor SUN RGB-D dataset, and it shows competitive results on the ScanNet dataset in multi-view mode. Both datasets were collected in various residential, administrative, educational, and industrial spaces, and together they cover almost all possible use cases. Moreover, we are the first to formulate and solve the problem of multi-class 3D object detection from multi-view inputs in indoor scenes. The proposed method can be integrated into the control systems of mobile robots, and the results of this study can be used for navigation, path planning, capturing and manipulating scene objects, and semantic scene mapping.
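The abstract does not describe the architecture; one common design for multi-view indoor detection, sketched below purely as an assumption, back-projects image features from each view into a shared voxel volume and averages them before a 3D detection head.

# Minimal sketch of aggregating multi-view image features into a shared
# voxel volume by back-projection; an assumed design, not necessarily the
# paper's exact architecture.
import torch
import torch.nn.functional as F

def build_voxel_volume(image_feats, projections, centers):
    # image_feats: (V, C, H, W) per-view feature maps
    # projections: (V, 3, 4) camera projection matrices
    # centers:     (M, 3) voxel center coordinates
    V, C, H, W = image_feats.shape
    M = centers.shape[0]
    homo = torch.cat([centers, torch.ones(M, 1)], dim=1)      # (M, 4)
    volume = torch.zeros(M, C)
    valid = torch.zeros(M, 1)
    for v in range(V):
        uvw = homo @ projections[v].t()                       # (M, 3)
        in_front = uvw[:, 2] > 1e-3                           # keep voxels ahead of camera
        uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-3)
        grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                            uv[:, 1] / (H - 1) * 2 - 1], dim=-1).view(1, M, 1, 2)
        feats = F.grid_sample(image_feats[v:v + 1], grid,
                              align_corners=True).squeeze(0).squeeze(-1).t()
        mask = in_front.unsqueeze(1).float()
        volume += feats * mask
        valid += mask
    return volume / valid.clamp(min=1)                        # (M, C) averaged features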

