Fusing Appearance and Prior Cues for Road Detection

2019 ◽  
Vol 9 (5) ◽  
pp. 996
Author(s):  
Fenglei Ren ◽  
Xin He ◽  
Zhonghui Wei ◽  
Lei Zhang ◽  
Jiawei He ◽  
...  

Road detection is a crucial research topic in computer vision, especially in the framework of autonomous driving and driver assistance. Moreover, it is an invaluable step for other tasks such as collision warning, vehicle detection, and pedestrian detection. Nevertheless, road detection remains challenging due to the presence of continuously changing backgrounds, varying illumination (shadows and highlights), variability of road appearance (size, shape, and color), and differently shaped objects (lane markings, vehicles, and pedestrians). In this paper, we propose an algorithm fusing appearance and prior cues for road detection. Firstly, input images are preprocessed by simple linear iterative clustering (SLIC), morphological processing, and illuminant invariant transformation to get superpixels and remove lane markings, shadows, and highlights. Then, we design a novel seed superpixels selection method and model appearance cues using the Gaussian mixture model with the selected seed superpixels. Next, we propose to construct a road geometric prior model offline, which can provide statistical descriptions and relevant information to infer the location of the road surface. Finally, a Bayesian framework is used to fuse appearance and prior cues. Experiments are carried out on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) road benchmark where the proposed algorithm shows compelling performance and achieves state-of-the-art results among the model-based methods.
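The final fusion step above can be sketched as a per-pixel Bayes rule: a Gaussian mixture fit on seed superpixels supplies the appearance likelihood, and the offline geometric model supplies the prior road probability. The snippet below is a minimal numpy illustration, not the paper's implementation; the single-channel illuminant-invariant intensity feature, the uniform non-road likelihood, and all function names are assumptions:

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    # 1-D Gaussian density, evaluated elementwise
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def fuse_appearance_and_prior(intensity, prior, gmm_params):
    """Posterior road probability per pixel via Bayes' rule.

    intensity:  HxW illuminant-invariant image
    prior:      HxW road prior in [0, 1] (the offline geometric model)
    gmm_params: list of (weight, mean, var) fit on seed-superpixel intensities
    """
    # Appearance likelihood under the road GMM
    likelihood = sum(w * gaussian_pdf(intensity, m, v) for w, m, v in gmm_params)
    # Fuse with the geometric prior; non-road likelihood is assumed uniform (=1)
    numerator = likelihood * prior
    evidence = numerator + 1.0 * (1.0 - prior)
    return numerator / np.maximum(evidence, 1e-12)
```

A pixel whose intensity matches the road mixture and lies in a high-prior region receives a posterior near 1; mismatched pixels are suppressed even where the prior is moderate.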

2021 ◽  
pp. 1-19
Author(s):  
Mingzhou Liu ◽  
Xin Xu ◽  
Jing Hu ◽  
Qiannan Jiang

Road detection algorithms with high robustness and timeliness are the basis for developing intelligent assisted driving systems. To improve the robustness and timeliness of unstructured road detection, a new algorithm is proposed in this paper. First, for the first frame in the video, the homography matrix H is estimated for different regions of the image using an improved random sample consensus (RANSAC) algorithm, and the features of H are automatically extracted using a convolutional neural network (CNN), which in turn enables road detection. Second, to speed up the detection of subsequent similar frames, color and texture features of the road are extracted from the first frame's detection result, the corresponding Gaussian mixture models (GMMs) are constructed based on Orchard-Bouman, and a Gibbs energy function is then used to detect the road in subsequent frames. Finally, the algorithm is validated on real unstructured road scenes; experimental results show that it is 98.4% accurate and processes 58 frames per second at 1024×960 resolution.
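The first stage above rests on RANSAC estimation of a homography H from point correspondences. A minimal numpy sketch of that step is below, using the standard direct linear transform (DLT) inside a RANSAC loop; the paper's "improved" RANSAC variant, its region partitioning, and the CNN feature extraction are not reproduced here:

```python
import numpy as np

def dlt_homography(src, dst):
    # Direct Linear Transform from >= 4 correspondences (rows of src/dst)
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    if abs(H[2, 2]) < 1e-12:          # degenerate (e.g. collinear) sample
        return None
    return H / H[2, 2]

def ransac_homography(src, dst, iters=500, thresh=2.0, rng=None):
    # Repeatedly fit H on random minimal samples, keep the most-supported one
    rng = np.random.default_rng(rng)
    n = len(src)
    pts_h = np.hstack([src, np.ones((n, 1))])
    best_H, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(n, 4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        if H is None:
            continue
        proj = pts_h @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best_H, best_inliers = H, inliers
    return best_H, best_inliers
```

In practice the winning H would be refit on all its inliers; that refinement is omitted for brevity.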


2021 ◽  
Vol 11 (17) ◽  
pp. 7984
Author(s):  
Prabu Subramani ◽  
Khalid Nazim Abdul Sattar ◽  
Rocío Pérez de Prado ◽  
Balasubramanian Girirajan ◽  
Marcin Wozniak

Connected autonomous vehicles (CAVs) currently promise cooperation between vehicles, providing abundant and real-time information through wireless communication technologies. In this paper, a two-level fusion of classifiers (TLFC) approach is proposed, using deep learning classifiers to perform accurate road detection (RD). The proposed TLFC-RD approach improves classification through four key strategies: cross-fold operation at the input and pre-processing using superpixel generation, adequate features, multi-classifier feature fusion, and a deep learning classifier. Specifically, the road is classified into drivable and non-drivable areas by designing the TLFC using the deep learning classifiers, and the information detected by TLFC-RD is exchanged between the autonomous vehicles for ease of driving on the road. The TLFC-RD is analyzed in terms of its accuracy, sensitivity (recall), specificity, precision, F1-measure and max F-measure. The TLFC-RD method is also evaluated against three existing methods: U-Net with the Domain Adaptation Model (DAM), Two-Scale Fully Convolutional Network (TFCN) and a cooperative machine learning approach (TAAUWN). Experimental results show that the accuracy of the TLFC-RD method on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset is 99.12%, higher than that of its competitors.
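The core idea of two-level fusion can be illustrated with a toy example: level one trains a separate classifier per feature set, and level two fuses their per-pixel scores into a final drivable / non-drivable decision. The nearest-mean classifiers and simple score averaging below are stand-ins chosen for brevity; the paper uses deep learning classifiers at both levels:

```python
import numpy as np

class NearestMean:
    """Toy level-1 classifier: score by relative distance to class means."""
    def fit(self, X, y):
        self.m0 = X[y == 0].mean(axis=0)
        self.m1 = X[y == 1].mean(axis=0)
        return self

    def prob(self, X):
        d0 = np.linalg.norm(X - self.m0, axis=1)
        d1 = np.linalg.norm(X - self.m1, axis=1)
        return d0 / (d0 + d1 + 1e-12)   # nearer class-1 mean -> score near 1

def two_level_fusion(feat_a, feat_b, y, query_a, query_b):
    # Level 1: one classifier per feature set (e.g. color vs. texture)
    p_a = NearestMean().fit(feat_a, y).prob(query_a)
    p_b = NearestMean().fit(feat_b, y).prob(query_b)
    # Level 2: fuse the scores (here: unweighted average) and threshold
    fused = 0.5 * (p_a + p_b)
    return (fused > 0.5).astype(int)
```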


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 928
Author(s):  
Man Kiat Wong ◽  
Tee Connie ◽  
Michael Kah Ong Goh ◽  
Li Pei Wong ◽  
Pin Shen Teh ◽  
...  

Background: Autonomous vehicles are important in smart transportation. Although exciting progress has been made, it remains challenging to design a safety mechanism for autonomous vehicles given the uncertainties and obstacles that occur dynamically on the road. Collision detection and avoidance are indispensable for a reliable decision-making module in autonomous driving. Methods: This study presents a robust approach for forward collision warning using vision data for autonomous vehicles on Malaysian public roads. The proposed architecture combines environment perception and lane localization to define a safe driving region for the ego vehicle. If potential risks are detected in the safe driving region, a warning is triggered. This early warning is important for avoiding rear-end collisions. In addition, an adaptive lane localization method that considers the geometrical structure of the road is presented to deal with different road types. Results: The model achieved mAP@0.5 of 0.14, mAP@0.95 of 0.06979, and recall of 0.6356. Conclusions: Experimental results have validated the effectiveness of the proposed approach under different lighting and environmental conditions.
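The warning logic described in Methods reduces to checking whether any detected object overlaps the ego vehicle's safe driving region. A minimal sketch is below; representing the safe region as an axis-aligned box and the 0.1 overlap threshold are simplifying assumptions, since the paper's adaptive lane localization yields a road-shaped region:

```python
def iou(a, b):
    # Intersection-over-union of boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-12)

def collision_warning(detections, safe_region, iou_thresh=0.1):
    """Trigger a warning if any detection overlaps the safe driving region."""
    return any(iou(d, safe_region) > iou_thresh for d in detections)
```

The same `iou` routine is what mAP@0.5 / mAP@0.95 evaluation is built on: a detection counts as a true positive when its IoU with a ground-truth box exceeds the stated threshold.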


Author(s):  
Jing Chen ◽  
Edin Šabić ◽  
Scott Mishler ◽  
Cody Parker ◽  
Motonori Yamaguchi

Objective: The present study investigated the design of spatially oriented auditory collision-warning signals to facilitate drivers’ responses to potential collisions. Background: Prior studies on collision warnings have mostly focused on manual driving. It is necessary to examine the design of collision warnings for safe takeover actions in semi-autonomous driving. Method: In a video-based semi-autonomous driving scenario, participants responded to pedestrians walking across the road, with a warning tone presented in either the avoidance direction or the collision direction. The time interval between the warning tone and the potential collision was also manipulated. In Experiment 1, pedestrians always started walking from one side of the road to the other side. In Experiment 2, pedestrians appeared in the middle of the road and walked toward either side of the road. Results: In Experiment 1, drivers reacted to the pedestrian faster with collision-direction warnings than with avoidance-direction warnings. In Experiment 2, the difference between the two warning directions became nonsignificant. In both experiments, shorter time intervals to potential collisions resulted in faster reactions but did not influence the effect of warning direction. Conclusion: The collision-direction warnings were advantageous over the avoidance-direction warnings only when they occurred at the same lateral location as the pedestrian, indicating that this advantage was due to the capture of attention by the auditory warning signals. Application: The present results indicate that drivers would benefit most when warnings occur at the side of potential collision objects rather than the direction of a desirable action during semi-autonomous driving.


2021 ◽  
Vol 11 (8) ◽  
pp. 3531
Author(s):  
Hesham M. Eraqi ◽  
Karim Soliman ◽  
Dalia Said ◽  
Omar R. Elezaby ◽  
Mohamed N. Moustafa ◽  
...  

Extensive research efforts have been devoted to identifying and improving roadway features that impact safety. Maintaining roadway safety features relies on costly manual operations of regular road surveying and data analysis. This paper introduces an automatic roadway safety feature detection approach, which harnesses the potential of artificial intelligence (AI) computer vision to make the process more efficient and less costly. Given a front-facing camera and a global positioning system (GPS) sensor, the proposed system automatically evaluates ten roadway safety features. The system is composed of an oriented (or rotated) object detection model, which solves an orientation-encoding discontinuity problem to improve detection accuracy, and a rule-based roadway safety evaluation module. To train and validate the proposed model, a fully annotated dataset for roadway safety feature extraction was collected, covering 473 km of roads. The proposed method's baseline results are encouraging compared to state-of-the-art models. Different oriented object detection strategies are presented and discussed, and the developed model improves mean average precision (mAP) by 16.9% compared with the literature. The average prediction accuracy across roadway safety features is 84.39%, ranging from 63.12% to 91.11%. The introduced model can pervasively enable/disable autonomous driving (AD) based on the safety features of the road, and empower connected vehicles (CVs) to send and receive estimated safety features, alerting drivers about black spots or relatively less safe segments or roads.
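The orientation-encoding discontinuity mentioned above arises because angle regression targets wrap around: 1° and 359° are nearly the same orientation but numerically far apart. A common remedy, shown below as an illustration of the problem rather than the paper's specific solution, is to regress the (sin, cos) pair instead of the raw angle, so that wrap-around neighbors stay close in target space:

```python
import numpy as np

def encode_angle(theta):
    # Map an angle (radians) to (sin, cos), a continuous, wrap-around-safe target
    return np.stack([np.sin(theta), np.cos(theta)], axis=-1)

def decode_angle(vec):
    # Recover the angle in (-pi, pi] from a (sin, cos) prediction
    return np.arctan2(vec[..., 0], vec[..., 1])
```

A regression loss on the encoded pair no longer penalizes a prediction of 359° for a 1° target as if it were off by 358°.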


Author(s):  
Linying Zhou ◽  
Zhou Zhou ◽  
Hang Ning

Road detection from aerial images is still a challenging task, since it is heavily influenced by spectral reflectance, shadows and occlusions. To increase road detection accuracy, this paper studies a road detection method based on the geodesic active contour (GAC) model with edge feature extraction and segmentation. First, the edge feature is extracted using the proposed gradient magnitude with the Canny operator. Then, a reconstructed gradient map is used in the watershed transformation, whose segmentation provides the initial contour. Last, combining the edge feature and the initial contour, the boundary stopping function is applied in the GAC model, yielding the final road boundary. Experimental results, comparing with other methods under the F-measure, show that the proposed method achieves satisfying results.
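The boundary stopping function in a GAC model is typically a decreasing function of the image gradient magnitude, so the evolving contour slows to a halt on road edges. Below is a minimal numpy sketch of the textbook form g = 1 / (1 + (|∇I|/k)²); the paper's specific edge feature (gradient magnitude combined with the Canny operator) is not reproduced, and the constant k is an assumed tuning parameter:

```python
import numpy as np

def gradient_magnitude(img):
    # Central-difference image gradient magnitude
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def stopping_function(img, k=1.0):
    # g -> 1 in flat regions, g -> 0 on strong edges,
    # halting the geodesic active contour at object boundaries
    return 1.0 / (1.0 + (gradient_magnitude(img) / k) ** 2)
```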


2021 ◽  
Vol 13 (22) ◽  
pp. 4525
Author(s):  
Junjie Zhang ◽  
Kourosh Khoshelham ◽  
Amir Khodabandeh

Accurate and seamless vehicle positioning is fundamental for autonomous driving tasks in urban environments, requiring the provision of high-end measuring devices. Light Detection and Ranging (lidar) sensors, together with Global Navigation Satellite Systems (GNSS) receivers, are therefore commonly found onboard modern vehicles. In this paper, we propose an integration of lidar and GNSS code measurements at the observation level via a mixed measurement model. An Extended Kalman Filter (EKF) is implemented to capture the dynamics of the vehicle's movement, and thus to incorporate the vehicle velocity parameters into the measurement model. The lidar positioning component is realized using point cloud registration through a deep neural network, which is aided by a high definition (HD) map comprising accurately georeferenced scans of the road environments. Experiments conducted in a densely built-up environment show that, by exploiting the abundant measurements of GNSS and the high accuracy of lidar, the proposed vehicle positioning approach can maintain centimeter- to meter-level accuracy for the entirety of the driving duration in urban canyons.
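The filtering backbone of such an integration can be sketched as a constant-velocity Kalman filter whose update step consumes 2-D position fixes, whether they come from lidar registration or a GNSS code solution. This is a deliberately simplified linear sketch, not the paper's EKF or its observation-level mixed measurement model; the state layout, the isotropic process noise, and the position-only measurement are all assumptions:

```python
import numpy as np

def kf_predict(x, P, dt, q):
    # Constant-velocity motion model; state = [px, py, vx, vy]
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt
    Q = q * np.eye(4)          # crude isotropic process noise (assumption)
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, R):
    # Position-only fix (lidar registration result or GNSS code solution)
    H = np.hstack([np.eye(2), np.zeros((2, 2))])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Because the velocity components are coupled to position through F, the filter recovers velocity from a sequence of position fixes alone, which is what lets it bridge short measurement gaps.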


2021 ◽  
Author(s):  
Da-Ren Chen ◽  
Wei-Min Chiu

Machine learning techniques have been used to increase the detection accuracy of cracks in road surfaces. Most studies fail to consider variable illumination conditions on the target of interest (ToI) and focus only on detecting the presence or absence of road cracks. This paper proposes a new road crack detection method, IlumiCrack, which integrates Gaussian mixture models (GMMs) and object detection CNN models. This work provides the following contributions: 1) For the first time, a large-scale road crack image dataset with a range of illumination conditions (e.g., day and night) is prepared using a dashcam. 2) Based on GMMs, experimental evaluations on 2 to 4 levels of brightness are conducted for optimal classification. 3) The IlumiCrack framework integrates state-of-the-art object detection methods with CNNs to classify road crack images into eight types with high accuracy. Experimental results show that IlumiCrack outperforms state-of-the-art R-CNN object detection frameworks.
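Partitioning images into brightness levels with a GMM amounts to fitting a 1-D mixture to brightness values via expectation-maximization and assigning each image to its most responsible component. The hand-rolled EM below is a generic sketch under that reading, not IlumiCrack's implementation; the quantile-based initialization is an assumption made to keep the components spread across the brightness range:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Fit a k-component 1-D GMM to brightness values x via EM."""
    # Quantile init keeps components spread across the brightness range
    means = np.percentile(x, np.linspace(10, 90, k)).astype(float)
    vars_ = np.full(k, x.var() + 1e-6)
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = (np.exp(-(x[:, None] - means) ** 2 / (2 * vars_))
                / np.sqrt(2 * np.pi * vars_))
        resp = weights * dens
        resp /= resp.sum(axis=1, keepdims=True) + 1e-300
        # M-step: re-estimate mixture parameters
        nk = resp.sum(axis=0) + 1e-12
        means = (resp * x[:, None]).sum(axis=0) / nk
        vars_ = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk + 1e-6
        weights = nk / len(x)
    return weights, means, vars_
```

With two components fit to mean image brightness, the low-mean component naturally collects night frames and the high-mean one day frames; k = 3 or 4 refines this into intermediate levels, matching the 2-to-4-level evaluation described above.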

