Confidence Estimation: Recently Published Documents

Total documents: 231 (last five years: 69)
H-index: 21 (last five years: 4)

Symmetry, 2022, Vol. 14 (1), pp. 140
Author(s): Huixiang Shao, Zhijiang Zhang, Xiaoyu Feng, Dan Zeng

Point cloud registration seeks a rigid transformation from a source point cloud to a target point cloud. The main challenge is finding correct correspondences in complex scenes that may contain substantial noise and repetitive structures. Many existing methods use outlier rejection to help the network obtain more accurate correspondences, but they often ignore the spatial consistency between keypoints. To address this issue, we propose a spatial-consistency-guided network using contrastive learning for point cloud registration (SCRnet), whose overall architecture is symmetrical. SCRnet consists of four blocks: a feature extraction block, a confidence estimation block, a contrastive learning block, and a registration block. First, we use mini-PointNet to extract coarse local and global features. Second, we propose a confidence estimation block that formulates outlier rejection as a confidence estimation problem over keypoint correspondences; local spatial features are encoded into this block so that correspondences possess local spatial consistency. Third, we propose a contrastive learning block that constructs positive point pairs and hard negative point pairs and applies a Point-Pair-InfoNCE contrastive loss, which further removes hard outliers through global spatial consistency. Finally, the registration block selects sets of matching points with high spatial consistency, computes a candidate transformation from each set, and identifies the best transformation via initial alignment followed by the Iterative Closest Point (ICP) algorithm. Extensive experiments on the KITTI and nuScenes datasets demonstrate the high accuracy and strong robustness of SCRnet on the point cloud registration task.
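For intuition, here is a minimal sketch of an InfoNCE-style contrastive loss over corresponding point descriptors, in the spirit of the Point-Pair-InfoNCE loss described above. This is an illustration only: the function name and in-batch negative sampling are assumptions, and SCRnet additionally mines hard negatives.

```python
# Minimal sketch of an InfoNCE-style contrastive loss over corresponding
# point descriptors. Row i of src_feats and row i of tgt_feats form a
# positive pair; all other rows of tgt_feats act as in-batch negatives.
import torch
import torch.nn.functional as F

def point_pair_infonce(src_feats: torch.Tensor,
                       tgt_feats: torch.Tensor,
                       temperature: float = 0.07) -> torch.Tensor:
    """src_feats, tgt_feats: (N, D) descriptors of N corresponding points."""
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    logits = src @ tgt.t() / temperature            # (N, N) similarities
    labels = torch.arange(src.shape[0], device=src.device)
    return F.cross_entropy(logits, labels)          # positives on diagonal
```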


2021
Author(s): Zuozhen Liu, Ta Li, Pengyuan Zhang

2021
Author(s): Seiya Tanaka, Andrew W. Vargo, Motoi Iwata, Koichi Kise

Author(s): Md. Rabiul Islam, Shuji Sakamoto, Yoshihiro Yamada, Andrew W. Vargo, Motoi Iwata, ...

Reading analysis can relay information about a user's confidence and habits and can be used to construct useful feedback. However, a lack of labeled data inhibits the effective application of fully supervised deep learning (DL) to automatic reading analysis. We propose a self-supervised learning (SSL) method for reading analysis. SSL has previously been effective in physical human activity recognition (HAR) tasks, but it has not been applied to cognitive HAR tasks such as reading. We first evaluate the proposed method on a four-class reading detection task using electrooculography datasets, followed by a two-class confidence estimation task on multiple-choice questions using eye-tracking datasets. Fully supervised DL and support vector machines (SVMs) serve as baselines for the proposed SSL method. The results show that the proposed SSL method outperforms both baselines on both tasks, especially when training data are scarce, indicating that it is the superior choice for reading analysis tasks. These results are important for informing the design of automatic reading analysis platforms.
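As an illustration of the general idea (not the authors' implementation), one common SSL pretext task for time-series sensor data such as EOG is transformation recognition: a network is pretrained to predict which transformation was applied to a signal window, then fine-tuned on the downstream task. A minimal sketch, with assumed names:

```python
# Sketch of a transformation-recognition pretext task, one common SSL
# scheme for time-series sensor data (illustrative; not the authors' code).
import numpy as np

def make_pretext_batch(windows: np.ndarray):
    """windows: (B, T) array of raw signal windows, e.g. one EOG channel.
    Returns transformed windows and a pretext label (0-2) per window."""
    xs, ys = [], []
    for w in windows:
        k = np.random.randint(3)
        if k == 0:
            x = w.copy()               # identity
        elif k == 1:
            x = w[::-1].copy()         # time reversal
        else:
            x = -w                     # signal negation
        xs.append(x)
        ys.append(k)
    return np.stack(xs), np.array(ys)  # pretrain a classifier on these
```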


2021
Author(s): Erik Lindgren, Christopher Zach

Abstract. Within many quality-critical industries, e.g. the aerospace industry, industrial X-ray inspection is an essential but resource-intensive part of quality control. In such industries, X-ray image interpretation is typically still performed by humans, so increasing the automation of interpretation would be of great value. We claim that safe automatic interpretation of industrial X-ray images requires robust confidence estimation with respect to out-of-distribution (OOD) data. In this work we explore whether such a confidence estimate can be achieved by comparing input images with a model of the accepted images. As the image model we derived an autoencoder, which we trained unsupervised on a public dataset of X-ray images of metal fusion welds. We achieved a true positive rate of 80–90% at a 4% false positive rate, and correctly detected an OOD data example as an anomaly.
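A minimal sketch of the underlying idea, reconstruction-error scoring with a trained autoencoder, is given below. The model `ae`, variable names, and the thresholding rule are assumptions for illustration:

```python
# Sketch of reconstruction-error confidence scoring with a trained
# autoencoder `ae` (assumed given). A threshold on the per-image error
# flags OOD inputs; one such threshold yields an operating point like
# the 80-90% TPR at 4% FPR reported above.
import torch

def reconstruction_scores(ae: torch.nn.Module, images: torch.Tensor):
    """images: (B, C, H, W). Returns one anomaly score per image."""
    with torch.no_grad():
        recon = ae(images)
    return ((recon - images) ** 2).flatten(1).mean(dim=1)  # per-image MSE

def flag_ood(scores: torch.Tensor, threshold: float):
    return scores > threshold   # True = treated as out-of-distribution
```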


2021, Vol. 25 (2), pp. 51-59
Author(s): Jianqi Yu

Inferential procedures for a normal mean with an auxiliary variable are developed. First, the maximum likelihood estimator of the mean and its distribution are derived. Second, an F statistic based on the maximum likelihood estimator is proposed, and the corresponding hypothesis testing and confidence estimation procedures are outlined. Finally, to illustrate the advantage of using an auxiliary variable, Monte Carlo simulations are performed. The results indicate that using an auxiliary variable can improve the efficiency of inference.
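For intuition, a sketch of the classical regression-type estimator of a normal mean that exploits an auxiliary variable follows. This is the standard construction such procedures build on; the paper's exact formulation may differ. Here y is the study variable, x the auxiliary variable with known mean \mu_x, and \rho their correlation:

```latex
\hat{\mu}_y = \bar{y} + \hat{\beta}\,(\mu_x - \bar{x}),
\qquad
\hat{\beta} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
                   {\sum_{i=1}^{n}(x_i - \bar{x})^{2}},
\qquad
\operatorname{Var}\!\left(\hat{\mu}_y\right)
  \approx \frac{\sigma_y^{2}\,(1-\rho^{2})}{n}.
```

Whenever x and y are correlated (\rho \neq 0), the approximate variance falls below \sigma_y^2/n, which is the efficiency gain the Monte Carlo simulations illustrate.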


2021, Vol. 22 (1)
Author(s): Lior Galanti, Dennis Shasha, Kristin C. Gunsalus

Abstract. Background: Systems biology increasingly relies on deep sequencing with combinatorial index tags to associate biological sequences with their sample, cell, or molecule of origin. Accurate data interpretation depends on the ability to classify sequences based on correct decoding of these combinatorial barcodes. The probability of correct decoding is influenced by both sequence quality and the number and arrangement of barcodes. The rising complexity of experimental designs calls for a probability model that accounts for both sequencing errors and random noise, generalizes to multiple combinatorial tags, and can handle any barcoding scheme. The need for reproducibility and community benchmark standards demands a peer-reviewed tool that preserves decoding quality scores and provides tunable control over classification confidence, balancing precision and recall. Moreover, continuous improvements in sequencing throughput require a fast, parallelized, and scalable implementation.

Results and discussion: We developed flexible, robustly engineered software that performs probabilistic decoding and supports arbitrarily complex barcoding designs. Pheniqs computes the full posterior decoding-error probability of observed barcodes by consulting basecalling quality scores and prior distributions, and reports sequences and confidence scores in Sequence Alignment/Map (SAM) fields. The product of posteriors for multiple independent barcodes provides an overall confidence score for each read. Pheniqs achieves greater accuracy than minimum edit distance or simple maximum likelihood estimation, and it scales linearly with core count, enabling the classification of > 11 billion reads in 1 h 15 m using < 50 megabytes of memory. Pheniqs has been in production use for seven years in our genomics core facility.

Conclusion: We introduce computationally efficient software that implements both probabilistic and minimum-distance decoders and show that decoding barcodes using posterior probabilities is more accurate than available methods. Pheniqs allows fine-tuning of decoding sensitivity using intuitive confidence thresholds and is extensible with alternative decoders and new error models. Any arbitrary arrangement of barcodes is easily configured, enabling computation of combinatorial confidence scores for any barcoding strategy. An optimized multithreaded implementation ensures that Pheniqs is faster and scales better with complex barcode sets than existing tools. Support for POSIX streams and multiple sequencing formats enables easy integration with automated analysis pipelines.
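To make the decoding idea concrete, here is a minimal sketch of posterior barcode decoding from Phred quality scores under a uniform prior. It follows the spirit of the description above but is not Pheniqs code; all names are illustrative.

```python
# Sketch of posterior barcode decoding from Phred quality scores under a
# uniform prior over a whitelist of true barcodes (illustrative only).

def barcode_likelihood(observed: str, candidate: str, quals) -> float:
    """P(observed segment | true barcode = candidate), assuming independent
    per-base errors with Phred error probability p = 10**(-Q/10)."""
    lik = 1.0
    for o, c, q in zip(observed, candidate, quals):
        p_err = 10.0 ** (-q / 10.0)
        lik *= (1.0 - p_err) if o == c else (p_err / 3.0)  # 3 wrong bases
    return lik

def posterior_decode(observed: str, quals, whitelist):
    """Return (best barcode, its posterior probability)."""
    liks = {b: barcode_likelihood(observed, b, quals) for b in whitelist}
    total = sum(liks.values())
    best = max(liks, key=liks.get)
    return best, liks[best] / total
```

Multiplying the resulting posteriors across independent barcodes yields a read-level confidence score of the kind described above.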


Author(s): K. Heinrich, M. Mehltretter

Abstract. In recent years, the ability to assess the uncertainty of depth estimates in the context of dense stereo matching has received increased attention due to its potential to detect erroneous estimates. In particular, the introduction of deep learning approaches has greatly improved general performance, with feature extraction from multiple modalities proving highly advantageous due to the unique and different characteristics of each modality. However, most work in the literature uses only mono-, bi-, or rarely tri-modal input, leaving the potential of modalities beyond tri-modality unexplored. To further advance the idea of combining different types of features for confidence estimation, this work proposes a CNN-based approach that exploits uncertainty cues from up to four modalities. For this purpose, a state-of-the-art local-global approach is used as the baseline and extended accordingly. Additionally, a novel disparity-based modality named warped difference is presented to support uncertainty estimation in common failure cases of dense stereo matching. The general validity and improved performance of the proposed approach are demonstrated against the bi-modal baseline in an evaluation on three datasets using two common dense stereo matching techniques.
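As an illustration of what a disparity-based modality of this kind could look like, the sketch below forms a warped-difference map by warping the right image into the left view with the estimated disparity and taking the absolute photometric difference. The paper's exact definition may differ; all names here are assumptions.

```python
# Sketch of a possible "warped difference" map: warp the right image into
# the left view using the estimated disparity and take the absolute
# photometric difference (illustrative; not the paper's definition).
import numpy as np

def warped_difference(left: np.ndarray, right: np.ndarray,
                      disparity: np.ndarray) -> np.ndarray:
    """left, right: (H, W) grayscale images; disparity: (H, W) in pixels."""
    h, w = left.shape
    rows, cols = np.indices((h, w))
    src = np.rint(cols - disparity).astype(int)  # matching right-image column
    valid = (src >= 0) & (src < w)
    warped = np.zeros_like(left, dtype=float)
    warped[valid] = right[rows[valid], src[valid]]
    diff = np.abs(left.astype(float) - warped)
    diff[~valid] = 0.0                           # mask out-of-view pixels
    return diff
```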

