A Multi-Class Multi-Movement Vehicle Counting Framework for Traffic Analysis in Complex Areas Using CCTV Systems

Energies ◽  
2020 ◽  
Vol 13 (8) ◽  
pp. 2036 ◽  
Author(s):  
Khac-Hoai Nam Bui ◽  
Hongsuk Yi ◽  
Jiho Cho

Traffic analysis using computer vision techniques is attracting increasing attention in the development of intelligent transportation systems, and counting traffic volume from CCTV footage is one of its main applications. However, this remains a challenging task, especially in complex areas that involve many vehicle movements. This study investigates how to improve video-based vehicle counting for traffic analysis. Specifically, we propose a comprehensive multi-class, multi-movement vehicle counting framework. We first adopt state-of-the-art deep learning methods for vehicle detection and tracking, and then present a trajectory-based approach that monitors vehicle movements across distinct tracked regions to improve counting performance. For the experiments, we collect and pre-process CCTV data at a complex intersection to evaluate the proposed framework. The results are promising: the method achieves accuracies of around 80% to 98% for different movements in a very complex scenario using only a single camera view.
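The movement-counting idea above can be illustrated with a minimal sketch: each vehicle track is a sequence of centroids, and a movement is counted by the pair of regions where the track begins and ends. The region names, rectangles, and tracks below are hypothetical, not taken from the paper.

```python
# Minimal sketch of movement counting from vehicle tracks (illustrative only;
# region names, rectangles, and tracks below are hypothetical).

def region_of(point, regions):
    """Return the name of the first rectangular region containing the point."""
    x, y = point
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def count_movements(tracks, regions):
    """Count (entry_region, exit_region) movements from finished tracks.

    tracks: dict mapping track_id -> list of (x, y) centroids over time.
    """
    counts = {}
    for trajectory in tracks.values():
        entry = region_of(trajectory[0], regions)
        exit_ = region_of(trajectory[-1], regions)
        if entry and exit_ and entry != exit_:
            key = (entry, exit_)
            counts[key] = counts.get(key, 0) + 1
    return counts

# Hypothetical regions at the four approaches of an intersection.
regions = {
    "north": (40, 0, 60, 10),
    "south": (40, 90, 60, 100),
    "east":  (90, 40, 100, 60),
    "west":  (0, 40, 10, 60),
}
tracks = {
    1: [(50, 5), (50, 50), (50, 95)],   # north -> south (through movement)
    2: [(5, 50), (50, 50), (50, 95)],   # west -> south (turn movement)
}
print(count_movements(tracks, regions))
# -> {('north', 'south'): 1, ('west', 'south'): 1}
```

A real system would feed the tracker's per-frame detections into such a counter; here the trajectories are given directly.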

2018 ◽  
Vol 15 (1) ◽  
pp. 172988141774994 ◽  
Author(s):  
Xinyu Zhang ◽  
Hongbo Gao ◽  
Chong Xue ◽  
Jianhui Zhao ◽  
Yuchao Liu

Intelligent transportation systems and safety driver-assistance systems are important research topics in the field of transportation and traffic management. This study investigates the key problems in front-vehicle detection and tracking based on computer vision. A video of a vehicle driven on an urban structured road is used to predict the subsequent motion of the front vehicle. This study provides the following contributions. (1) A new adaptive threshold segmentation algorithm is presented in the image preprocessing phase; this algorithm is resistant to interference from complex environments. (2) Symmetric computation based on the traditional histogram of oriented gradients (HOG) feature vector is added in the vehicle detection phase; the symmetric HOG feature with AdaBoost classification improves the detection rate of the target vehicle. (3) A motion model based on an adaptive Kalman filter is established. Experiments show that the Kalman filter's predictions provide a reliable region for eliminating the interference of shadows and sharply decreasing the missed-detection rate.
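The Kalman filter motion model in contribution (3) can be sketched as a standard constant-velocity filter over image positions; the paper's adaptive variant also adjusts the noise covariances online, which is omitted here, and the covariance values below are assumptions.

```python
import numpy as np

# Constant-velocity Kalman filter sketch for predicting the front vehicle's
# image position (illustrative; noise covariances Q and R are assumed values).

dt = 1.0
F = np.array([[1, 0, dt, 0],      # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],       # only position (x, y) is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01              # process noise (assumed)
R = np.eye(2) * 1.0               # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle; z is the measured (x, y) position."""
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.zeros(4)                   # initial state at the origin, at rest
P = np.eye(4) * 10.0              # large initial uncertainty
for z in [np.array([1.0, 0.0]), np.array([2.0, 0.0]), np.array([3.0, 0.0])]:
    x, P = kf_step(x, P, z)
print(np.round(x[:2], 2))         # estimate converges toward the measurements
```

The predicted state before each update defines the "reliable region" mentioned above: a search window around `F @ x` in the next frame.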


Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1136
Author(s):  
David Augusto Ribeiro ◽  
Juan Casavílca Silva ◽  
Renata Lopes Rosa ◽  
Muhammad Saadi ◽  
Shahid Mumtaz ◽  
...  

Light field (LF) imaging has multi-view properties that enable many applications, including auto-refocusing, depth estimation and 3D reconstruction of images, which are required particularly for intelligent transportation systems (ITSs). However, cameras can have limited angular resolution, which becomes a bottleneck in vision applications, and incorporating angular data is challenging due to disparities between the LF views. In recent years, different machine learning algorithms have been applied to both image processing and ITS research for various purposes. In this work, a Lightweight Deformable Deep Learning Framework is implemented to address the disparity problem in LF images. To this end, an angular alignment module and a soft activation function are incorporated into the Convolutional Neural Network (CNN). For performance assessment, the proposed solution is compared with recent state-of-the-art methods on different LF datasets, each with specific characteristics. Experimental results demonstrate that the proposed solution outperforms the other methods, and the image quality obtained exceeds that of state-of-the-art LF image reconstruction methods. Furthermore, our model has lower computational complexity, decreasing execution time.
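The disparity problem described above can be illustrated with a toy example: for a Lambertian scene, sub-aperture views are shifted copies of the center view, with a shift proportional to the view's angular offset times the scene disparity. The integer-shift warp below is a deliberate simplification of the paper's learned deformable alignment, and all sizes and values are hypothetical.

```python
import numpy as np

# Toy angular alignment for light-field sub-aperture views: warp a side view
# back toward the center view by an integer disparity-induced shift.
# (A stand-in for the paper's learned deformable alignment module.)

def align_view(view, angular_offset, disparity):
    """Warp a sub-aperture view toward the center view by an integer shift."""
    shift = int(round(angular_offset * disparity))
    return np.roll(view, -shift, axis=1)  # shift along the horizontal axis

center = np.zeros((4, 8))
center[:, 3] = 1.0                 # a vertical edge in the center view
side = np.roll(center, 2, axis=1)  # same edge, displaced by offset * disparity = 2

aligned = align_view(side, angular_offset=1, disparity=2)
print(np.array_equal(aligned, center))  # True: the views agree after alignment
```

Real LF data has sub-pixel, spatially varying disparity, which is why a learned, deformable alignment is used instead of a fixed integer shift.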


2021 ◽  
Author(s):  
Qing Xu ◽  
Xuewu Lin ◽  
Mengchi CAI ◽  
Yu-ang Guo ◽  
Chuang Zhang ◽  
...  

Environment perception is one of the most critical technologies in intelligent transportation systems (ITS). Motion interaction between multiple vehicles in ITS makes multi-object tracking (MOT) important. However, most existing MOT algorithms follow the tracking-by-detection framework, which separates detection and tracking into two independent stages and limits overall efficiency. Recently, a few algorithms have combined feature extraction into a single network; however, the tracking portion still relies on data association and requires complex post-processing for life-cycle management, so these methods do not combine detection and tracking efficiently. This paper presents a novel network, named the global correlation network (GCNet), that realizes joint multi-object detection and tracking in an end-to-end manner for ITS. Unlike most object detection methods, GCNet introduces a global correlation layer to regress the absolute size and coordinates of bounding boxes, instead of predicting offsets. The detection-and-tracking pipeline in GCNet is conceptually simple and does not require complicated tracking strategies such as non-maximum suppression and data association. GCNet was evaluated on a multi-vehicle tracking dataset, UA-DETRAC, demonstrating promising performance compared with state-of-the-art detectors and trackers.
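The global correlation idea can be sketched as follows: a query embedding is correlated with every spatial location of a feature map, producing a global response map from which an absolute location can be read off. This is only an illustration of the principle (correlation here is cosine similarity and localization is a simple argmax; shapes and values are made up, not GCNet's actual architecture).

```python
import numpy as np

# Sketch of a global correlation layer: correlate one query embedding with
# every (H, W) location of a (C, H, W) feature map.

def global_correlation(feature_map, query):
    """Cosine similarity between `query` (C,) and each location of the map."""
    c, h, w = feature_map.shape
    feats = feature_map.reshape(c, -1)                     # (C, H*W)
    feats = feats / (np.linalg.norm(feats, axis=0) + 1e-8)
    q = query / (np.linalg.norm(query) + 1e-8)
    return (q @ feats).reshape(h, w)                       # (H, W) response map

rng = np.random.default_rng(0)
fmap = rng.normal(size=(16, 8, 8))     # hypothetical backbone features
target = fmap[:, 5, 2]                 # embedding at the object's location
response = global_correlation(fmap, target)
y, x = np.unravel_index(response.argmax(), response.shape)
print(y, x)  # -> 5 2: the correlation peak gives an absolute position
```

Because the response map covers the whole image, a box regressed from it carries absolute coordinates, which is what removes the need for offset-based anchors and post-hoc association.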


Traffic data play a major role in transport-related applications, and missing data greatly degrade the performance of intelligent transportation systems (ITS). This work imputes missing traffic data by exploiting spatio-temporal information, achieving high-precision results under various missing rates. A deep learning-based stacked denoising autoencoder with the efficient ELU activation function is proposed to remove noise and impute the missing values. The imputed values can then be used in the analysis and prediction of vehicle traffic. Results show that the proposed method outperforms state-of-the-art approaches.
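Two ingredients named above can be sketched directly: the ELU activation, and the denoising setup in which observed values are corrupted (masked) so the autoencoder learns to reproduce the complete signal. The single random-weight layer below is purely illustrative, not the paper's trained architecture, and the data are made up.

```python
import numpy as np

# Sketch of ELU and the denoising-autoencoder corruption step used for
# traffic-data imputation (toy data and untrained weights, for illustration).

def elu(x, alpha=1.0):
    """Exponential linear unit: identity for x > 0, smooth saturation below."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def corrupt(x, missing_rate, rng):
    """Zero out a random fraction of entries, mimicking missing traffic data."""
    mask = rng.random(x.shape) >= missing_rate
    return x * mask, mask

rng = np.random.default_rng(0)
traffic = rng.random((4, 6))               # hypothetical flow measurements
noisy, mask = corrupt(traffic, missing_rate=0.3, rng=rng)

# One encoder layer with ELU; the random weights stand in for trained ones.
W = rng.normal(scale=0.1, size=(6, 3))
hidden = elu(noisy @ W)
print(hidden.shape)  # (4, 3) latent code from which a decoder would impute
```

Training would minimize reconstruction error between the decoder's output and the uncorrupted `traffic`, so the network learns to fill the masked entries from spatio-temporal context.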


Author(s):  
J. Apeltauer ◽  
A. Babinec ◽  
D. Herman ◽  
T. Apeltauer

This paper presents a new approach to the simultaneous detection and tracking of vehicles moving through an intersection in aerial images acquired by an unmanned aerial vehicle (UAV). Detailed analysis of the spatial and temporal utilization of an intersection is an important step in its design evaluation and further traffic inspection. Traffic flow at intersections is typically very dynamic and requires continuous and accurate monitoring systems. Conventional traffic surveillance relies on a set of fixed cameras or other detectors, requiring a high density of such devices to monitor the intersection in its entirety and to provide data of sufficient quality. Alternatively, a UAV can serve as a very agile and responsive mobile sensing platform for data collection over such large scenes. However, manual vehicle annotation in aerial images would involve tremendous effort. The combination of vehicle detection and tracking proposed in this paper aims to tackle the problem of automatic traffic analysis at an intersection from visual data. The presented method has been evaluated in several real-life scenarios.
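Linking detections across frames in such a system commonly uses spatial overlap between boxes in consecutive frames, typically measured by intersection-over-union (IoU). The sketch below shows this generic measure; it is not necessarily the exact association criterion used in the paper.

```python
# Generic intersection-over-union (IoU) between axis-aligned boxes, as commonly
# used to associate detections across frames in tracking (illustrative).

def iou(a, b):
    """IoU of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix1 - ix0), max(0.0, iy1 - iy0)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlapping boxes -> 1/7
print(iou((0, 0, 1, 1), (2, 2, 3, 3)))  # disjoint boxes -> 0.0
```

A detection is attached to the track whose last box it overlaps most, with a minimum-IoU threshold to reject spurious matches.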


2020 ◽  
Vol 2020 ◽  
pp. 1-13 ◽  
Author(s):  
Hoanh Nguyen

Vision-based traffic sign detection plays a crucial role in intelligent transportation systems. Recently, many deep learning-based approaches to traffic sign detection have been proposed and have shown better performance than traditional approaches. However, due to difficult driving-environment conditions and the small size of traffic signs in traffic scene images, the performance of deep learning-based methods on small traffic sign detection is still limited. In addition, the inference speed of current state-of-the-art approaches is still slow. This paper proposes a deep learning-based approach to improve small traffic sign detection in driving environments. First, a lightweight and efficient architecture is adopted as the base network to address the inference-speed issue. To enhance small-sign detection, a deconvolution module generates an enhanced feature map by aggregating a lower-level feature map with a higher-level feature map. Two improved region proposal networks then generate proposals from the highest-level feature map and the enhanced feature map; the improved region proposal network is designed for fast and accurate proposal generation. In the experiments, the German Traffic Sign Detection Benchmark dataset is used to evaluate the effectiveness of each enhanced module, and the Tsinghua-Tencent 100K dataset is used to compare the proposed approach with other state-of-the-art traffic sign detection approaches. Experimental results on the Tsinghua-Tencent 100K dataset show that the proposed approach achieves competitive performance while being faster and simpler.
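The feature-aggregation step above can be sketched as upsampling the coarse higher-level map to the resolution of the finer lower-level map and fusing the two. Nearest-neighbor upsampling and element-wise addition stand in here for the paper's learned deconvolution and fusion; the shapes are hypothetical.

```python
import numpy as np

# Sketch of feature aggregation: upsample a coarse high-level feature map
# (nearest-neighbor as a stand-in for a learned deconvolution) and add it
# to the finer low-level map to produce an enhanced feature map.

def upsample2x(fmap):
    """Nearest-neighbor 2x upsampling of a (C, H, W) feature map."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def aggregate(low, high):
    """Fuse a (C, 2H, 2W) low-level map with a (C, H, W) high-level map."""
    return low + upsample2x(high)

low = np.ones((8, 16, 16))      # fine, lower-level features (detail)
high = np.ones((8, 8, 8))       # coarse, higher-level features (semantics)
enhanced = aggregate(low, high)
print(enhanced.shape)  # (8, 16, 16): fine resolution with coarse context
```

Small signs benefit because the enhanced map keeps the lower level's spatial detail while inheriting the higher level's semantic context.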


2019 ◽  
Vol 11 (18) ◽  
pp. 2155 ◽  
Author(s):  
Jie Wang ◽  
Sandra Simeonova ◽  
Mozhdeh Shahbazi

Along with the advancement of lightweight sensing and processing technologies, unmanned aerial vehicles (UAVs) have recently become popular platforms for intelligent traffic monitoring and control. UAV-mounted cameras can capture traffic-flow videos from various perspectives, providing comprehensive insight into road conditions. Analyzing traffic flow from remotely captured videos requires a reliable and accurate vehicle detection-and-tracking approach. In this paper, we propose a deep learning framework for vehicle detection and tracking from UAV videos for monitoring traffic flow in complex road structures. The approach is designed to be invariant to significant orientation and scale variations in the videos. Detection is performed by fine-tuning a state-of-the-art object detector, You Only Look Once (YOLOv3), on several custom-labeled traffic datasets. Vehicle tracking follows a tracking-by-detection paradigm, where deep appearance features are used for vehicle re-identification and Kalman filtering is used for motion estimation. The proposed methodology is tested on a variety of real videos collected by UAVs under various conditions, e.g., in late afternoons with long vehicle shadows, at dawn with vehicle lights on, over roundabouts and interchange roads where vehicle directions change considerably, and from viewpoints where vehicles' appearance undergoes substantial perspective distortion. The proposed tracking-by-detection approach runs efficiently at 11 frames per second on color videos of 2720p resolution. Experiments demonstrated that high detection accuracy can be achieved, with an average F1-score of 92.1%. The tracking technique also performs accurately, with an average multiple-object tracking accuracy (MOTA) of 81.3%. In addition, the proposed approach addresses a shortcoming of the state of the art in multi-object tracking, frequent identity switching, with only one identity switch per every 305 tracked vehicles.
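The re-identification step in such a tracking-by-detection pipeline can be sketched as follows: cosine distances between track and detection appearance embeddings form a cost matrix, and a minimum-cost one-to-one matching assigns detections to tracks. The brute-force matcher and toy two-dimensional embeddings below are illustrative stand-ins; real systems use the Hungarian algorithm and deep features from a learned re-ID network.

```python
import itertools
import numpy as np

# Sketch of appearance-based track/detection association: build a cosine
# distance cost matrix and pick the minimum-cost one-to-one matching
# (brute force over permutations; fine for tiny illustrative problems).

def cosine_cost(tracks, detections):
    """Cost matrix of cosine distances between two sets of embeddings."""
    t = tracks / np.linalg.norm(tracks, axis=1, keepdims=True)
    d = detections / np.linalg.norm(detections, axis=1, keepdims=True)
    return 1.0 - t @ d.T

def best_assignment(cost):
    """Minimum-cost matching: perm[i] is the detection assigned to track i."""
    n = cost.shape[0]
    return min(itertools.permutations(range(n)),
               key=lambda perm: sum(cost[i, j] for i, j in enumerate(perm)))

tracks = np.array([[1.0, 0.0], [0.0, 1.0]])        # two existing tracks
detections = np.array([[0.1, 1.0], [1.0, 0.1]])    # two new detections
match = best_assignment(cosine_cost(tracks, detections))
print(match)  # (1, 0): track 0 -> detection 1, track 1 -> detection 0
```

Discriminative appearance embeddings keep the cost of a wrong pairing high, which is what suppresses identity switches when vehicles cross or occlude each other.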

