Bhattacharyya Distance
Recently Published Documents

TOTAL DOCUMENTS: 135 (five years: 41)
H-INDEX: 14 (five years: 5)

Author(s): Sifeng Bi, Michael Beer

Abstract  This chapter presents the technical route of model updating in the presence of imprecise probabilities. The emphasis is placed on the inevitable uncertainties in both numerical simulations and experimental measurements, which require the updating methodology to be extended from a deterministic to a stochastic sense. In this extension the model parameters are no longer regarded as unknown-but-fixed values, but as random variables with uncertain distributions, i.e. imprecise probabilities. The final objective of stochastic model updating is no longer a single model prediction with maximal fidelity to a single experiment, but rather calibrated distribution coefficients that allow the model predictions to fit the experimental measurements from a probabilistic point of view. Uncertainty is incorporated into the Bayesian updating framework by developing a novel uncertainty quantification metric, the Bhattacharyya distance, in place of the typical Euclidean distance. The overall approach is demonstrated by solving the model updating sub-problem of the NASA uncertainty quantification challenge. The demonstration provides a clear comparison between the performance of the Euclidean distance and that of the Bhattacharyya distance, and thus promotes a better understanding of the principle of stochastic model updating: no longer to determine unknown-but-fixed parameters, but to reduce the uncertainty bounds of the model prediction while guaranteeing that the existing experimental data remain enveloped within the updated uncertainty space.
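As a concrete illustration of why a distribution-aware metric matters here, the following minimal sketch (not the chapter's implementation; a plain NumPy example assuming a simple histogram-based estimator) compares the Euclidean distance between sample means with the Bhattacharyya distance on two sample sets that share the same mean but differ in spread. The Euclidean distance is essentially blind to the difference, while the Bhattacharyya distance captures it.

```python
import numpy as np

def bhattacharyya_distance(samples_a, samples_b, n_bins=30):
    """Histogram-based estimate of the Bhattacharyya distance
    between two one-dimensional sample sets (a simple assumed estimator)."""
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(samples_a, bins=edges)
    q, _ = np.histogram(samples_b, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))           # Bhattacharyya coefficient
    return -np.log(max(bc, 1e-12))        # guard against zero overlap

# Toy comparison: same mean, different spread.
rng = np.random.default_rng(0)
prediction  = rng.normal(loc=0.0, scale=1.0, size=5000)   # model output samples
measurement = rng.normal(loc=0.0, scale=2.0, size=5000)   # experimental samples

euclidean = abs(prediction.mean() - measurement.mean())     # close to 0: ignores spread
bhatta    = bhattacharyya_distance(prediction, measurement)  # clearly > 0: sees the spread
print(euclidean, bhatta)
```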


Author(s): Xuguang Li, Li Ma, Lu Liu, Juan Dai, Hao Zhang, ...

2021, pp. 016555152110184
Author(s): Gunjan Chandwani, Anil Ahlawat, Gaurav Dubey

Document retrieval plays an important role in knowledge management, as it helps us discover relevant information in existing data. This article proposes a cluster-based inverted indexing algorithm for document retrieval. First, pre-processing is performed to remove unnecessary and redundant words from the documents. Then, the documents are indexed by the cluster-based inverted indexing algorithm, which is developed by integrating the piecewise fuzzy C-means (piFCM) clustering algorithm with inverted indexing. Once the documents are indexed, query matching is performed for user queries using the Bhattacharyya distance. Finally, query optimisation is carried out with the Pearson correlation coefficient, and the relevant documents are retrieved. The performance of the proposed algorithm is analysed on the WebKB and Twenty Newsgroups data sets. The analysis shows that the proposed algorithm offers high performance, with a precision of 1, a recall of 0.70 and an F-measure of 0.8235. The proposed document retrieval system retrieves the most relevant documents and speeds up the storing and retrieval of information.
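To make the query-matching step concrete, here is a minimal sketch (hypothetical mini-corpus and function names; it omits the piFCM clustering, the inverted index and the Pearson-correlation optimisation described above) that represents the query and each document as normalised term-frequency distributions and ranks documents by their Bhattacharyya distance to the query.

```python
import numpy as np
from collections import Counter

def term_distribution(text, vocabulary):
    """Normalised term-frequency vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    vec = np.array([counts[t] for t in vocabulary], dtype=float)
    total = vec.sum()
    return vec / total if total > 0 else vec

def bhattacharyya_distance(p, q):
    bc = np.sum(np.sqrt(p * q))
    return -np.log(bc) if bc > 0 else np.inf

# Hypothetical mini-corpus; a real system would restrict the comparison
# to the candidate documents returned by the cluster-based inverted index.
documents = {
    "d1": "fuzzy clustering of documents for retrieval",
    "d2": "inverted index speeds up document retrieval",
    "d3": "weather report for the coming week",
}
query = "document retrieval with an inverted index"

vocab = sorted({w for d in documents.values() for w in d.lower().split()}
               | set(query.lower().split()))
q_dist = term_distribution(query, vocab)
ranking = sorted(documents,
                 key=lambda d: bhattacharyya_distance(
                     q_dist, term_distribution(documents[d], vocab)))
print(ranking)   # documents ordered from most to least similar to the query
```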


Rheumatology, 2021, Vol 60 (Supplement_1)
Author(s): Graham Dinsdale, Joanne Manning, Ariane Herrick, Mark Dickinson, Christopher Taylor

Abstract Background/Aims  The lack of objective outcome measures for Raynaud's phenomenon (RP) has been a major limiting factor in the development of effective treatments. At present, the Raynaud's Condition Score (RCS) is the only validated outcome measure, and it is highly subjective. Mobile phone technology could provide a way forward. We have developed a smartphone app for RP monitoring that guides the patient through the process of capturing images of their hands during RP episodes, as well as capturing other data through post-attack and daily questionnaires. One of the objectives of our research programme (reported here) was to compare digital image (photographic) parameters to the RCS. Methods  40 patients with RP (8 with primary RP, 32 with RP secondary to systemic sclerosis) were recruited (40 female, median age (range): 57 years (25-74), median (range) duration of RP symptoms: 17 (0-53) years). Patients were given a smartphone handset with a pre-installed Raynaud's Monitoring app and were trained in how to use it and take usable photographs. They were then asked to take photographs of RP attacks over a 14-day period and also to record the RCS for each episode. The app specifically prompts the patient to take a picture of their hand every minute during an attack, until confirmation is given that the attack is complete. At a second visit, the handsets, images, and data were collected for analysis. The mean colour change during each RP attack was quantified (by a semi-automated method) as the Bhattacharyya distance (BD) in colour space between a region of interest (e.g. a section of a digit) and a control region (dorsal hand). BD was then compared to the RCS using ANOVA, after controlling for patient variability in the range of RCS values used by each patient. Results  A total of 3,030 images were collected, describing 229 RP attacks. The median RCS reported was 6 (inter-quartile range [IQR]: 4), while the median BD was 5.6 (IQR 3.2). ANOVA showed that measured values of the mean image BD were significantly different when different values of RCS were recorded by the patient (p < 0.001), i.e. attacks where patients selected different values of RCS had significantly different values of BD. Across all attacks/patients the F-value from ANOVA for RCS was 76.2, suggesting that the variation in BD across different values of RCS is much greater than the variation in BD for any one value of RCS. Conclusion  Patients successfully used a smartphone app to collect photographs and data during episodes of RP. A strong association was found between skin colour change (via BD) and the gold-standard RCS. Mobile phone-documented colour change therefore has potential as an objective measure of RP. Further validation work is now required, as well as studies examining sensitivity to change. Disclosure  G. Dinsdale: None. J. Manning: None. A. Herrick: None. M. Dickinson: None. C. Taylor: None.
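The colour-space comparison can be illustrated with a short sketch. Assuming each region's pixels are approximated by a multivariate Gaussian in RGB (the study's semi-automated colour-space method may differ, and the region values below are purely illustrative), the closed-form Gaussian Bhattacharyya distance between a region of interest and a control region is:

```python
import numpy as np

def gaussian_bhattacharyya(x, y):
    """Closed-form Bhattacharyya distance between two multivariate Gaussians
    fitted to the pixel samples x and y (each an (n_pixels, 3) RGB array)."""
    mu1, mu2 = x.mean(axis=0), y.mean(axis=0)
    s1 = np.cov(x, rowvar=False)
    s2 = np.cov(y, rowvar=False)
    s = 0.5 * (s1 + s2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(s, diff)
    term2 = 0.5 * np.log(np.linalg.det(s) /
                         np.sqrt(np.linalg.det(s1) * np.linalg.det(s2)))
    return term1 + term2

# Hypothetical regions: pixels from a region of interest (a section of a digit)
# and a control region (dorsal hand), as (n_pixels, 3) RGB arrays.
rng = np.random.default_rng(1)
digit_pixels   = rng.normal([120, 130, 190], 12, size=(2000, 3))  # bluish digit during attack
control_pixels = rng.normal([190, 150, 140], 12, size=(2000, 3))  # unaffected skin tone
print(gaussian_bhattacharyya(digit_pixels, control_pixels))
```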


2021
Author(s): Daniel N. Baker, Nathan Dyjack, Vladimir Braverman, Stephanie C. Hicks, Ben Langmead

Abstract  Single-cell RNA-sequencing (scRNA-seq) analyses typically begin by clustering a gene-by-cell expression matrix to empirically define groups of cells with similar expression profiles. We describe new methods and a new open-source library, minicore, for efficient k-means++ center finding and k-means clustering of scRNA-seq data. Minicore works with sparse count data, as it emerges from typical scRNA-seq experiments, as well as with dense data after dimensionality reduction. Minicore's novel vectorized weighted reservoir sampling algorithm allows it to find initial k-means++ centers for a 4-million cell dataset in 1.5 minutes using 20 threads. Minicore can cluster using Euclidean distance, but it also supports a wider class of measures such as the Jensen-Shannon divergence, the Kullback-Leibler divergence, and the Bhattacharyya distance, which can be applied directly to count data and probability distributions. Further, minicore produces lower-cost centerings more efficiently than scikit-learn for scRNA-seq datasets with millions of cells. With careful handling of priors, minicore implements these distance measures with only minor (<2-fold) speed differences among all distances. We show that a minicore pipeline consisting of k-means++, localsearch++ and minibatch k-means can cluster a 4-million cell dataset in minutes, using less than 10GiB of RAM. This memory efficiency enables atlas-scale clustering on laptops and other commodity hardware. Finally, we report findings on which distance measures give clusterings that are most consistent with known cell type labels. Availability: The open-source library is at https://github.com/dnbaker/minicore. Code used for experiments is at https://github.com/dnbaker/minicore-experiments.
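The following is not the minicore API but a plain NumPy sketch of the underlying idea: k-means++-style seeding on count data, with rows normalised to probability distributions and candidate centers sampled proportionally to the squared Bhattacharyya distance to the nearest center chosen so far. The toy data and function names are assumptions for illustration only.

```python
import numpy as np

def bhattacharyya(P, q):
    """Bhattacharyya distance between each row distribution in P and a distribution q."""
    bc = np.sqrt(P * q).sum(axis=1)
    return -np.log(np.clip(bc, 1e-12, None))

def kmeanspp_centers(counts, k, rng):
    """k-means++-style seeding on count data using the Bhattacharyya distance.
    Rows are first normalised to probability distributions."""
    X = counts / counts.sum(axis=1, keepdims=True)
    centers = [X[rng.integers(len(X))]]
    d2 = bhattacharyya(X, centers[0]) ** 2
    for _ in range(1, k):
        probs = d2 / d2.sum()
        idx = rng.choice(len(X), p=probs)   # sample proportional to squared distance
        centers.append(X[idx])
        d2 = np.minimum(d2, bhattacharyya(X, centers[-1]) ** 2)
    return np.array(centers)

# Toy sparse-ish count matrix standing in for a gene-by-cell expression matrix.
rng = np.random.default_rng(0)
toy_counts = rng.poisson(lam=rng.uniform(0.5, 5.0, size=(1, 50)), size=(300, 50)) + 1
centers = kmeanspp_centers(toy_counts, k=5, rng=rng)
print(centers.shape)   # (5, 50)
```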


2021
Author(s): Abdurrahman Alqahtani, Khaled Ali Abuhasel, Mohammed Alquraish

Abstract  In many practical applications of considerable engineering significance, cyber-physical solutions have recently been developed in which protection and privacy are essential. This has led to a recent increase in interest in the development of advanced and emerging techniques for anomaly and intrusion detection. The paper proposes a new framework for distributed blind intrusion detection by modelling sensor measurements as a graph signal and using the statistical properties of the graph signal for intrusion detection. The graph similarity matrix is generated using both the measured sensor data and the proximity of the sensors, so as to fully account for the underlying network structure. The collected data are modelled as a Gaussian Markov random field, and the required precision matrix can be determined by fitting it to the graph Laplacian matrix. The proposed test statistic for intrusion detection is based on a modified Bayesian likelihood ratio test, and closed-form expressions are derived. Finally, the temporal behaviour of the network is analysed by computing the Bhattacharyya distance between the measurement distributions at consecutive time instants. Experiments are carried out to evaluate the proposed system and compare its efficiency with state-of-the-art methods. The findings indicate that the proposed intrusion detection framework achieves better detection performance than other existing systems.
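As a simplified illustration of the temporal analysis step only (it omits the Gaussian Markov random field model, the graph Laplacian and the modified Bayesian likelihood ratio test, and uses a hypothetical alarm threshold), the sketch below fits a Gaussian to each window of sensor measurements and computes the Bhattacharyya distance between consecutive windows, flagging the time at which the measurement distribution shifts.

```python
import numpy as np

def gaussian_bd(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def temporal_bd(windows):
    """BD between the measurement distributions of consecutive time windows.
    `windows` is a list of (n_samples, n_sensors) arrays."""
    stats = [(w.mean(axis=0), np.cov(w, rowvar=False)) for w in windows]
    return [gaussian_bd(*stats[t], *stats[t + 1]) for t in range(len(stats) - 1)]

# Toy network of 5 sensors: normal behaviour, then an injected shift at window 6.
rng = np.random.default_rng(2)
windows = [rng.normal(0.0, 1.0, size=(200, 5)) for _ in range(6)]
windows += [rng.normal(1.5, 1.0, size=(200, 5)) for _ in range(4)]
scores = temporal_bd(windows)
threshold = 0.5                      # hypothetical alarm level
print([t + 1 for t, s in enumerate(scores) if s > threshold])  # flags the change point
```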


Author(s): Sifeng Bi, Michael Beer, Jingrui Zhang, Lechang Yang, Kui He

Abstract  The Bhattacharyya distance has been developed as a comprehensive uncertainty quantification metric that captures multiple uncertainty sources from both numerical predictions and experimental measurements. This work pursues a further investigation of the performance of the Bhattacharyya distance in different methodologies for stochastic model updating. The first procedure is Bayesian model updating, where the Bhattacharyya distance is used to define an approximate likelihood function and the transitional Markov chain Monte Carlo algorithm is employed to obtain the posterior distribution of the parameters. In the second model updating procedure, the Bhattacharyya distance is used to construct the objective function of an optimization problem, defined as the Bhattacharyya distance between the samples of the numerical prediction and the samples of the target data. The comparison study is performed on a four-degree-of-freedom mass-spring system. A challenging task is posed in this example by assigning different distributions, with imprecise distribution coefficients, to the parameters. This requires the stochastic updating procedure to calibrate not the parameters themselves but their distribution properties. The performance of the Bhattacharyya distance in both the Bayesian updating and the optimization-based updating procedure is presented and compared. The results demonstrate that the Bhattacharyya distance is a comprehensive and universal uncertainty quantification metric for stochastic model updating.
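The optimization-based procedure can be sketched on a toy problem (a one-degree-of-freedom oscillator rather than the four-degree-of-freedom system studied in the paper; all names, values and the histogram-based estimator are illustrative assumptions): the objective is the Bhattacharyya distance between samples of the model prediction and the target data, minimised over the distribution coefficients with a gradient-free method.

```python
import numpy as np
from scipy.optimize import minimize

def bhattacharyya(samples_a, samples_b, n_bins=40):
    """Histogram-based Bhattacharyya distance between two 1-D sample sets."""
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    p, edges = np.histogram(samples_a, bins=n_bins, range=(lo, hi))
    q, _ = np.histogram(samples_b, bins=edges)
    p, q = p / p.sum(), q / q.sum()
    return -np.log(max(np.sum(np.sqrt(p * q)), 1e-12))

def model(k_mean, k_std, n=2000, seed=0):
    """Toy forward model: natural frequency of a 1-DOF oscillator whose
    stiffness is a random variable with the given mean and std."""
    rng = np.random.default_rng(seed)           # fixed seed keeps the objective smooth
    k = rng.normal(k_mean, k_std, n)
    m = 1.0
    return np.sqrt(np.abs(k) / m) / (2 * np.pi)

# "Experimental" target data generated from the (unknown) true coefficients.
target = model(k_mean=250.0, k_std=20.0, seed=42)

def objective(theta):
    k_mean, k_std = theta
    if k_std <= 0:
        return np.inf
    return bhattacharyya(model(k_mean, k_std), target)

# Gradient-free search, since the objective is built from sampled histograms.
result = minimize(objective, x0=np.array([200.0, 30.0]), method="Nelder-Mead")
print(result.x)   # should move toward the true coefficients (250, 20)
```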

