Fast Prediction of Contrast Detection Probability

2020 ◽  
Vol 2020 (16) ◽  
pp. 40-1-40-7
Author(s):  
Robin Jenkin

Contrast detection probability (CDP) is proposed as an IEEE P2020 metric to predict the performance of cameras intended for computer vision tasks in autonomous vehicles. Its calculation involves comparing combinations of pixel values between imaged patches. Computing CDP for all meaningful combinations of m patches requires approximately (3/2)(m²−m)·n⁴ operations, where n is the length of one side of a patch in pixels. This work presents a method to estimate Weber-contrast-based CDP from individual patch statistics, reducing the computation to approximately 4n²m operations. For 180 patches of 10×10 pixels this is a reduction of approximately 6,500 times, and for 180 patches of 25×25 pixels, approximately 41,000 times. The absolute error in the estimated CDP is less than 0.04, or 5%, where the noise is well described by Gaussian statistics. Results from the full calculation and the fast estimate are compared for simulated patches. Basing the CDP estimate on individual patch statistics, rather than on a pixel-to-pixel comparison, facilitates the prediction of CDP values from a physical model of exposure and camera conditions. This allows Weber CDP behavior to be investigated under a wide variety of conditions and leads to the discovery that, when contrast is increased by decreasing the tone value of one patch (thereby increasing noise as contrast increases), there exists a maximum that yields identical Weber CDP values for patches of different nominal contrast. Weber CDP therefore predicts the same detection performance for patches of different contrast.
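The abstract does not reproduce the estimator itself, but the idea of replacing an O(n⁴) pixel-pair comparison with a per-patch Gaussian approximation can be sketched as follows. The tolerance band, patch parameters, and the delta-method variance expansion below are illustrative assumptions, not the paper's exact formulation:

```python
import math
import numpy as np

def cdp_full(target, background, c_nominal, tol=0.1):
    """Brute-force Weber CDP: the fraction of all target/background pixel
    pairs whose Weber contrast (t - b) / b lies within +/- tol of the
    nominal contrast.  Cost is O(n^4) per patch pair."""
    t = target.ravel()[:, None]          # n^2 x 1 column
    b = background.ravel()[None, :]      # 1 x n^2 row
    c = (t - b) / b                      # n^2 x n^2 pairwise contrasts
    return float(np.mean(np.abs(c - c_nominal) <= tol))

def cdp_fast(target, background, c_nominal, tol=0.1):
    """Fast estimate from per-patch statistics only (O(n^2) per patch):
    approximate the pairwise-contrast distribution as Gaussian, with mean
    and variance from a first-order (delta-method) expansion of t/b - 1."""
    mt, vt = target.mean(), target.var()
    mb, vb = background.mean(), background.var()
    mu_c = mt / mb - 1.0
    sd_c = math.sqrt((vt + (mt / mb) ** 2 * vb) / mb ** 2)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi((c_nominal + tol - mu_c) / sd_c) - phi((c_nominal - tol - mu_c) / sd_c)

rng = np.random.default_rng(0)
bg = rng.normal(100.0, 5.0, (25, 25))   # background patch, Gaussian noise
tg = rng.normal(130.0, 5.0, (25, 25))   # target patch, nominal Weber contrast 0.3
print(cdp_full(tg, bg, 0.3), cdp_fast(tg, bg, 0.3))
```

For Gaussian noise of this magnitude the two estimates agree to within a few hundredths, consistent with the error bound reported above.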

2019 ◽  
Vol 63 (6) ◽  
pp. 60405-1-60405-9
Author(s):  
Robin B. Jenkin

Autonomous vehicles rely on the detection and recognition of objects within images to successfully navigate. Design of camera systems is non-trivial and involves trading system specifications across many parameters to optimize performance, such as f-number, focal length, CFA choice, and pixel and sensor size. As such, tools are needed to evaluate and predict the performance of such cameras for object detection. Contrast detection probability (CDP) is a relatively new objective image quality metric proposed to rank the performance of camera systems intended for use in autonomous vehicles. Detectability index is derived from signal detection theory as applied to imaging systems and is used to estimate the ability of a system to statistically distinguish objects, most notably in the medical imaging and defense fields. A brief overview of CDP and detectability index is given, after which an imaging model is developed to compare and explore the behavior of each with respect to camera parameters. Behavior is compared to matched-filter detection performance. It is shown that, while CDP can yield a first-order ranking of camera systems under certain constraints, it fails to track detector performance for negative-contrast targets and is relatively insensitive.
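The contrast-sign symmetry of the detectability index, which CDP lacks, is easy to see in a minimal sketch. The flat-patch form below (pooled Gaussian noise, independent pixels) is a simplified illustration, not the paper's full matched-filter model:

```python
import math

def detectability_index(mu_obj, mu_bg, var_obj, var_bg, n_pixels):
    """Simple per-patch detectability index d' for distinguishing an object
    patch from background under Gaussian noise.  Averaging over n_pixels
    independent pixels scales d' by sqrt(n_pixels)."""
    pooled_sd = math.sqrt(0.5 * (var_obj + var_bg))
    return abs(mu_obj - mu_bg) / pooled_sd * math.sqrt(n_pixels)

# Positive- and negative-contrast targets of equal magnitude yield the
# same d' -- the symmetry that CDP fails to reproduce.
d_pos = detectability_index(130.0, 100.0, 25.0, 25.0, 100)
d_neg = detectability_index(70.0, 100.0, 25.0, 25.0, 100)
```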


2020 ◽  
Vol 2020 (16) ◽  
pp. 60405-1-60405-9
Author(s):  
Robin B. Jenkin



Fast track article for IS&T International Symposium on Electronic Imaging 2021: Autonomous Vehicles and Machines 2021 proceedings.


2021 ◽  
Vol 336 ◽  
pp. 07004
Author(s):  
Ruoyu Fang ◽  
Cheng Cai

Obstacle detection and target tracking are two major issues for intelligent autonomous vehicles. This paper proposes a new scheme to achieve real-time obstacle detection and target tracking based on computer vision. A ResNet-18 deep learning network is utilized for obstacle detection and a YOLOv3 deep learning network is employed for real-time target tracking. These two trained models are deployed on an autonomous vehicle equipped with an NVIDIA Jetson Nano board. The vehicle moves to avoid obstacles and follows tracked targets using its camera. Adjusting the steering and movement of the vehicle with a PID algorithm during motion therefore helps the proposed vehicle achieve stable and precise tracking.
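The abstract does not give the controller details; a minimal PID steering-correction loop of the kind described might look like the sketch below. The gains and the choice of error signal (horizontal pixel offset of the tracked target from the image centre) are illustrative assumptions:

```python
class PID:
    """Minimal discrete PID controller for steering correction."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        # Accumulate the integral term and difference the error for the
        # derivative term; the first call has no derivative contribution.
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# error: horizontal offset (pixels) of the tracked target from image centre
pid = PID(kp=0.01, ki=0.001, kd=0.005)
steer = pid.update(error=40.0, dt=0.05)
```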


2019 ◽  
Author(s):  
David Herzig ◽  
Christos T Nakas ◽  
Janine Stalder ◽  
Christophe Kosinski ◽  
Céline Laesser ◽  
...  

BACKGROUND Quantification of dietary intake is key to the prevention and management of numerous metabolic disorders. Conventional approaches are challenging, laborious, and suffer from a lack of accuracy. The recent advent of depth-sensing smartphones in conjunction with computer vision has the potential to facilitate reliable quantification of food intake. OBJECTIVE To evaluate the accuracy of a novel smartphone application combining depth-sensing hardware with computer vision to quantify meal macronutrient content. METHODS The application ran on a smartphone with a built-in depth sensor applying structured light (iPhone X) and estimated the weight, macronutrient (carbohydrate, protein, fat) content, and energy content of 48 randomly chosen meals (breakfast, cooked meals, snacks) encompassing 128 food items. Reference weight was generated by weighing individual food items using a precision scale. The study endpoints were fourfold: i) error of estimated meal weight; ii) error of estimated meal macronutrient and energy content; iii) segmentation performance; and iv) processing time. RESULTS Mean±SD absolute error of the application's estimate was 35.1±42.8g (14.0±12.2%) for weight, 5.5±5.1g (14.8±10.9%) for carbohydrate content, 2.4±5.6g (13.0±13.8%) for protein content, 1.3±1.7g (12.3±12.8%) for fat content, and 41.2±42.5kcal (12.7±10.8%) for energy content. While estimation accuracy was not affected by the viewing angle, the type of meal mattered, with slightly worse performance for cooked meals than for breakfasts and snacks. Segmentation required adjustment for 7 out of 128 items. Mean±SD processing time across all meals was 22.9±8.6s. CONCLUSIONS The present study evaluated the accuracy of a novel smartphone application with an integrated depth-sensing camera and found high accuracy in food estimation across all macronutrients. This was paralleled by high segmentation performance and low processing time, corroborating the high usability of this system.


2021 ◽  
pp. 69-72
Author(s):  
Aryan Verma

Presently, computer vision is among the hottest topics in Artificial Intelligence and is used extensively in robotics, object detection, image classification, autonomous vehicles and tracking, and semantic segmentation, along with photo correction in various apps. In self-driven cars and vehicles, vision remains the main source of information for detecting lanes, traffic lights, pedestrian crossings, and other visual features. [2]


2020 ◽  
Vol 12 (1–3) ◽  
pp. 1-308 ◽  
Author(s):  
Joel Janai ◽  
Fatma Güney ◽  
Aseem Behl ◽  
Andreas Geiger

2020 ◽  
Vol 2020 (16) ◽  
pp. 19-1-19-10
Author(s):  
Marc Geese

In this paper, we present an overview of automotive image quality challenges and link them to the physical properties of image acquisition. This shows that detection-probability-based KPIs are a helpful tool for linking image quality to the SAE-classified supported and automated driving tasks. We develop questions around the challenges of automotive image quality and show that color separation probability (CSP) and contrast detection probability (CDP) in particular are key enablers for improving know-how and an overview of the image quality optimization problem. Next, we introduce a proposal for color separation probability as a new KPI, based on the random effects of photon shot noise and the properties of light spectra that cause color metamerism. This allows us to demonstrate the image quality influences related to color at different stages of the image generation pipeline. In the second part, we investigate the previously presented KPI, contrast detection probability, and show how it links to different metrics of automotive imaging such as HDR, low-light performance, and detectivity of an object. In conclusion, this paper summarizes the standardization status of these detection-probability-based KPIs within IEEE P2020 and outlines the next steps for these work packages.
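The paper's CSP proposal is not specified here, but the core idea of shot-noise-limited color separability can be sketched with a simple two-class model. The Gaussian approximation of Poisson shot noise, the independent-channel assumption, and the ideal-threshold classification probability below are all illustrative assumptions, not the proposed KPI's definition:

```python
import math

def csp_sketch(mu_a, mu_b):
    """CSP-like quantity: probability that one noisy sample of color A is
    classified correctly against color B under photon shot noise.  Each
    channel's photoelectron count is modelled as Gaussian with variance
    equal to its mean (Poisson approximation); channels are independent,
    so squared per-channel d' values add."""
    d2 = sum(2.0 * (a - b) ** 2 / (a + b) for a, b in zip(mu_a, mu_b))
    d = math.sqrt(d2)
    # Ideal-threshold probability of correct classification: Phi(d'/2)
    return 0.5 * (1.0 + math.erf(d / (2.0 * math.sqrt(2.0))))

# More photons -> less relative shot noise -> better color separation,
# which is how such a KPI would reflect low-light performance.
low  = csp_sketch((200.0, 150.0, 100.0), (190.0, 160.0, 105.0))
high = csp_sketch((2000.0, 1500.0, 1000.0), (1900.0, 1600.0, 1050.0))
```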

