Detectability of Multiple Flaws

Author(s):  
Lewis H. Geyer ◽  
Shantilal Patel ◽  
Ronald F. Perry

Two experiments measured detectability, d′, and two measures of criterion location for multiple flaws presented separately versus mixed. In the first experiment, the flaws were of the same type but differed in magnitude; in the second experiment the flaws were of different types. In Experiment 1 there was no difference in d′ for either magnitude of flaw between the separate and mixed conditions (i.e., each magnitude of flaw had its own d′ in both the separate and mixed conditions). However, in Experiment 2 all flaws had lower d′ values in the mixed condition than when each was presented alone. In neither experiment was there any difference in average criterion location between the separate and mixed conditions, nor did individual criterion locations for easy versus hard flaws presented separately differ in Experiment 1. In Experiment 2 there was a difference in individual criterion locations for the different flaws presented separately, such that for the hardest-to-detect flaw the false rejection rate was lower and the miss rate higher than for either of the two flaws with larger d′ values.
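
For reference, the standard signal-detection-theory definitions behind these two measures (standard formulas, not stated in the abstract itself; H is the hit rate, F the false-alarm rate, and z the inverse of the standard normal CDF):

```latex
% Standard SDT definitions (not from the abstract):
% H = hit rate, F = false-alarm rate, z = inverse standard-normal CDF.
d' = z(H) - z(F), \qquad c = -\tfrac{1}{2}\bigl[z(H) + z(F)\bigr]
```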

2018 ◽  
Author(s):  
Maria Montefinese ◽  
Erin Michelle Buchanan ◽  
David Vinson

Models of semantic representation predict that automatic priming is driven by associative and co-occurrence relations (i.e., spreading-activation accounts) or by similarity in words' semantic features (i.e., featural models). Although these three factors are correlated in characterizing semantic representation, they seem to tap different aspects of meaning. We designed two lexical decision experiments to dissociate these three types of meaning similarity. For unmasked primes, we observed priming due only to association strength and not the other two measures, and no evidence for differences in priming between concrete and abstract concepts. For masked primes there was no priming regardless of the semantic relation. These results challenge theoretical accounts of automatic priming. Rather, they are in line with the idea that priming may be due to participants' controlled strategic processes. These results provide important insight into the nature of priming and how association strength, as determined from word-association norms, relates to the nature of semantic representation.


2020 ◽  
Vol 11 ◽  
Author(s):  
Chris Ferguson ◽  
Herre van Oostendorp

The lostness measure, an implicit and unobtrusive measure originally designed for assessing the usability of hypertext systems, could be useful in Virtual Reality (VR) games where players need to find information to complete a task. VR locomotion systems with node-based movement mimic the exploration and browsing actions found in hypertext systems. For that reason, hypertext usability measures such as lostness can be used to identify how disoriented a player is when completing tasks in an educational game by examining the steps the player makes. We describe an evaluation of two different lostness measures, global and local lostness, based on two different types of tasks, in a VR educational game with 13 college students between 14 and 18 years old in a first study, extended with 12 additional participants in a second study. Multiple linear regression analyses showed, in both studies, that local lostness, and not global lostness, had a significant effect on a post-game knowledge test. We therefore argue that local lostness was able to predict how well participants would perform on a post-game knowledge test, indicating how well they learned from the game. In-game experience aspects (engagement, cognitive interest, and presence) were also evaluated, and, interestingly, participants learned less when they felt more present in the game. We believe these two measures relate to cognitive overload, which is known to have an adverse effect on learning. Further research should investigate the lostness measure for use in an online adaptive game system and design the game system in such a way that the risk of cognitive overload is minimized during learning, resulting in higher retention of information.
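
For context, a commonly used definition of lostness following Smith (1996), which the abstract does not spell out; the notation below is ours, not the authors':

```latex
% Lostness as commonly defined following Smith (1996); notation ours:
% S = total nodes (locations) visited, N = distinct nodes visited,
% R = minimum number of nodes required to complete the task.
L = \sqrt{\left(\frac{N}{S} - 1\right)^{2} + \left(\frac{R}{N} - 1\right)^{2}}
% L = 0 indicates an optimal path; values approaching 1 indicate severe disorientation.
```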


Author(s):  
Lifang Chen ◽  
Dai Cao ◽  
Yuan Liu

The jigsaw puzzle algorithm is important because it can be applied to many areas, such as biology, image editing, archaeology, and incomplete crime-scene reconstruction. However, several problems remain in practical application: when the puzzle fragments contain many similar objects, the error rate reaches 30%–50%; when some fragments are missing, most algorithms fail to restore the image accurately; and when the number of fragments is large, efficiency drops. Intelligent puzzle solvers mainly use the Sum of Squared Distance Scoring (SSD), Mahalanobis Gradient Compatibility (MGC), and other metrics to compute the similarity between fragments. Building on these two measures, we put forward new methods. First, although MGC is one of the most effective measures, reassembling a puzzle with MGC alone produces an erroneous image roughly once every 30 to 50 runs, so we combine the Jaccard and MGC measures to compute the similarity between image fragments and reassemble the puzzle with a greedy algorithm; this not only reduces the error rate but also maintains high accuracy when many fragments contain similar objects. Second, to address missing fragments and low efficiency, this paper uses a new method based on SSD measurement and a mark matrix; it is general in the sense that it can handle puzzles of unknown size, fragments of unknown orientation, and even puzzles with missing fragments. The algorithm does not require any preset conditions and is more practical in real applications. Finally, experiments show that the proposed algorithm improves not only the accuracy but also the efficiency of the operation.
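
A minimal sketch of the pairwise-compatibility idea described above, assuming fragments are NumPy arrays of shape (H, W, 3). The simplified MGC-style cost, the edge binarization threshold, and the multiplicative way of combining it with the Jaccard score are our own illustrative choices, not the authors' exact formulation:

```python
import numpy as np

def mgc_like_score(a, b, eps=1e-6):
    """Simplified MGC-style cost for placing fragment b to the right of a.

    Models the horizontal color gradients just inside a's right edge and
    measures how unusual the gradients across the a|b seam are under that
    model (a rough stand-in for the full Mahalanobis Gradient Compatibility,
    which also symmetrizes over both fragments).
    """
    a = a.astype(float)
    b = b.astype(float)
    grad_within = a[:, -1, :] - a[:, -2, :]      # gradients inside a's edge
    grad_across = b[:, 0, :] - a[:, -1, :]       # gradients across the seam
    mu = grad_within.mean(axis=0)
    cov = np.cov(grad_within, rowvar=False) + eps * np.eye(3)
    inv = np.linalg.inv(cov)
    diff = grad_across - mu
    return float(np.sum((diff @ inv) * diff))    # summed squared Mahalanobis distances

def jaccard_edge_score(a, b, thresh=128):
    """Jaccard similarity of the binarized adjacent edge columns (illustrative)."""
    ea = a[:, -1, :].mean(axis=1) > thresh
    eb = b[:, 0, :].mean(axis=1) > thresh
    union = np.logical_or(ea, eb).sum()
    if union == 0:
        return 1.0
    return np.logical_and(ea, eb).sum() / union

def combined_cost(a, b, alpha=0.5):
    """Lower is better: MGC-like cost, discounted when edge Jaccard similarity is high."""
    return mgc_like_score(a, b) * (1.0 - alpha * jaccard_edge_score(a, b))

def best_right_neighbor(fragments, i):
    """One greedy step: index of the most compatible right neighbor of piece i."""
    return min((combined_cost(fragments[i], f), j)
               for j, f in enumerate(fragments) if j != i)[1]
```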


1985 ◽  
Vol 31 (2) ◽  
pp. 206-212 ◽  
Author(s):  
A S Blum

Abstract I describe a program for definitive comparison of different quality-control statistical procedures. A microcomputer simulates quality-control results generated by repetitive analytical runs. It applies various statistical rules to each result, tabulating rule breaks to evaluate the rules as routinely applied by the analyst. The process is repeated with increasing amounts of random and systematic error. Rates of false rejection and of true error detection for currently popular statistical procedures were evaluated comparatively, together with a new multirule procedure described here. The nature of the analyst's response to out-of-control signals was also evaluated. A single-rule protocol that is as effective as the multirule protocol of Westgard et al. (Clin Chem 27:493, 1981) is reported.
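
A minimal sketch of this kind of simulation, assuming Gaussian analytical error and two simple control rules (1-2s and 1-3s); the specific rules, error sizes, and run structure are illustrative, not the protocols evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_runs(n_runs, bias=0.0, sd_inflation=1.0):
    """One control result per analytical run, in SD units of the stable process,
    with optional systematic error (bias) and random-error inflation."""
    return rng.normal(loc=bias, scale=sd_inflation, size=n_runs)

def rule_1_2s(z):
    """Flag the run if the control result exceeds +/- 2 SD."""
    return np.abs(z) > 2.0

def rule_1_3s(z):
    """Flag the run if the control result exceeds +/- 3 SD."""
    return np.abs(z) > 3.0

def evaluate(rule, n_runs=100_000):
    """Estimate the false-rejection rate (stable process) and the error-detection
    rate (process with a 2-SD systematic shift) for a single rule."""
    false_rejection = rule(simulate_runs(n_runs)).mean()
    error_detection = rule(simulate_runs(n_runs, bias=2.0)).mean()
    return false_rejection, error_detection

if __name__ == "__main__":
    for name, rule in [("1-2s", rule_1_2s), ("1-3s", rule_1_3s)]:
        fr, ed = evaluate(rule)
        print(f"{name}: false rejection {fr:.3f}, error detection {ed:.3f}")
```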


1995 ◽  
Vol 32 (02) ◽  
pp. 482-493 ◽  
Author(s):  
Jie Mi

Availability is an important characteristic of a system. Different types of availability are defined. For the case when the sequence of bivariate random variables of lifetime and repair time is i.i.d., certain properties have been established previously. In practice, however, we need to consider the situation where these bivariate random variables are independent but not identically distributed. Properties of two measures of availability for the i.i.d. case are extended to this more general case.
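
For orientation, the classical limiting (steady-state) availability in the i.i.d. case, a standard renewal-reward result and the kind of measure being extended here; the paper's general non-identically-distributed formulation is not reproduced:

```latex
% Steady-state availability for i.i.d. lifetime/repair cycles (U_i, D_i),
% where A(t) is the probability the system is up at time t
% (standard renewal-reward result, not the paper's general case).
A = \lim_{t \to \infty} A(t) = \frac{E[U]}{E[U] + E[D]}
```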


Biometric systems are well-known security systems that can be used anywhere for authentication, authorization, or any kind of security verification. A biometric system is first trained on enrolled samples and then used for testing over long periods. Many recent studies have shown that a biometric system may fail or become compromised because of the aging of its biometric templates. Evidence that elapsed time affects system performance has challenged the belief that the iris does not change over a lifetime; aging effects are possible for the iris as well. The main focus of this work is therefore to analyze the effect of aging and to propose a new system that can deal with template aging. We propose a new iris recognition system with an image enhancement mechanism and several feature extraction mechanisms. Three different features are extracted and then fused to be used as one. The full system is trained on a dataset of 2500 samples from the year 2008, and testing is done in three phases: (i) no lapse, (ii) 1-year lapse, and (iii) 2-year lapse. A portion of the ND-Iris-Template-Aging dataset [11] covering a three-year lapse is used. Results show that the performance of the hybrid classifier AHyBrK [17] improves over KNN and ANN, and the effect of aging, in terms of degraded performance, is clear. The performance of the system is measured in terms of false rejection rate, error rate, and accuracy. Overall, AHyBrK is 51.04% and 52.98% better than KNN and ANN, respectively, in terms of false rejection rate and error rate, while the accuracy of the proposed system improves by 5.52% and 6.04% compared with KNN and ANN, respectively. The proposed system also achieves high accuracy for all test phases.
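
A minimal sketch of the evaluation metrics named above, assuming per-probe ground-truth labels and accept/reject decisions; the classifier itself (e.g., AHyBrK) is treated as a black box here:

```python
def recognition_metrics(y_true, y_pred):
    """False rejection rate, overall error rate, and accuracy for a
    verification-style evaluation.

    y_true: 1 if the probe is a genuine (enrolled) sample, else 0.
    y_pred: 1 if the system accepts the probe, else 0.
    """
    pairs = list(zip(y_true, y_pred))
    genuine = [(t, p) for t, p in pairs if t == 1]
    false_rejects = sum(1 for t, p in genuine if p == 0)
    errors = sum(1 for t, p in pairs if t != p)

    frr = false_rejects / len(genuine) if genuine else 0.0
    error_rate = errors / len(pairs)
    accuracy = 1.0 - error_rate
    return frr, error_rate, accuracy

# Tiny usage example with made-up decisions:
frr, err, acc = recognition_metrics([1, 1, 1, 0, 0, 1], [1, 0, 1, 0, 1, 1])
print(f"FRR={frr:.2f}, error rate={err:.2f}, accuracy={acc:.2f}")
```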


2020 ◽  
Vol 17 ◽  
pp. 345-350
Author(s):  
Mateusz Proskura ◽  
Sylwia Podkościelna ◽  
Grzegorz Kozieł

The Internet is becoming the main source of information. Standards are being developed so that people with different types of disabilities do not feel excluded. Tools based on these standards automate accessibility checks, but it is also important that the tools themselves work correctly. The study consisted of checking a set of pages with selected tools discussed in the literature on website accessibility. The validators were configured identically each time a website was checked for compliance with the WCAG 2.1 standard. The number of errors detected per page varies significantly between individual tools. The error rate, calculated relative to the number of all HTML elements on a page, ranges from less than 1% to 80%. It is not possible to unambiguously select the worst-performing service on the basis of all the results obtained; the worst service can be different for each tool. The tools automate the work of verifying accessibility. Each of the validators tested found errors, some of them the same. The best solution is not to rely on just one tool, because the results obtained can relate to completely different elements of the page. Everything depends on the care the creators took in preparing the tool and on their attention to compliance with the standards.
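
A minimal sketch of the error-rate calculation described above, assuming each validator reports a raw error count for a page whose HTML elements have been counted; the validator names and numbers are hypothetical:

```python
def error_rate(errors_found: int, total_html_elements: int) -> float:
    """Errors reported by a validator, normalized by the number of HTML
    elements on the page, as a percentage."""
    return 100.0 * errors_found / total_html_elements

# Illustrative comparison of hypothetical validator reports for one page:
reports = {"validator_A": 4, "validator_B": 37, "validator_C": 112}
total_elements = 140
for tool, n_errors in reports.items():
    print(f"{tool}: {error_rate(n_errors, total_elements):.1f}% of elements flagged")
```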


2020 ◽  
Vol 10 (15) ◽  
pp. 5026
Author(s):  
Seon Man Kim

This paper proposes a technique for improving statistical-model-based voice activity detection (VAD) in noisy environments, to be applied in an auditory hearing aid. The proposed method is implemented for a uniform polyphase discrete Fourier transform filter bank satisfying an auditory-device time latency of 8 ms. The proposed VAD technique provides an online unified framework to overcome the frequent false rejections of the statistical-model-based likelihood-ratio test (LRT) in noisy environments. The method is based on the observation that the sparseness of speech and background noise causes high false-rejection error rates in statistical LRT-based VAD: the false-rejection rate increases as the sparseness increases. We demonstrate that the false-rejection error rate can be reduced by incorporating likelihood-ratio order statistics into a conventional LRT VAD. We confirm experimentally that the proposed method reduces the average detection error rate by 15.8% relative to a conventional VAD, with only minimal change in the false-acceptance probability, for three noise conditions whose signal-to-noise ratios range from 0 to 20 dB.
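
A minimal sketch of the order-statistics idea, assuming per-band log-likelihood ratios for one frame are already available; averaging the K largest per-band values is our own simplified stand-in for the paper's order-statistics formulation, not the exact method:

```python
import numpy as np

def lrt_vad(log_lr_bands, threshold=0.0):
    """Conventional LRT decision: average log-likelihood ratio over all bands."""
    return np.mean(log_lr_bands) > threshold

def order_statistic_vad(log_lr_bands, k=8, threshold=0.0):
    """Order-statistics variant: average the K largest per-band log-likelihood
    ratios, so a few speech-dominated bands are not diluted by the many
    near-silent (sparse) bands that drive false rejections."""
    top_k = np.sort(np.asarray(log_lr_bands))[-k:]
    return np.mean(top_k) > threshold

# Usage with synthetic per-band values (one frame, 32 sub-bands):
rng = np.random.default_rng(1)
frame = rng.normal(loc=-0.2, scale=1.0, size=32)   # mostly noise-like bands
frame[:4] += 3.0                                   # a few speech-dominated bands
print(lrt_vad(frame), order_statistic_vad(frame))
```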


Genetics ◽  
2020 ◽  
Vol 217 (1) ◽  
Author(s):  
Richard J Wang ◽  
Predrag Radivojac ◽  
Matthew W Hahn

Abstract Errors in genotype calling can have perverse effects on genetic analyses, confounding association studies, and obscuring rare variants. Analyses now routinely incorporate error rates to control for spurious findings. However, reliable estimates of the error rate can be difficult to obtain because of their variance between studies. Most studies also report only a single estimate of the error rate even though genotypes can be miscalled in more than one way. Here, we report a method for estimating the rates at which different types of genotyping errors occur at biallelic loci using pedigree information. Our method identifies potential genotyping errors by exploiting instances where the haplotypic phase has not been faithfully transmitted. The expected frequency of inconsistent phase depends on the combination of genotypes in a pedigree and the probability of miscalling each genotype. We develop a model that uses the differences in these frequencies to estimate rates for different types of genotype error. Simulations show that our method accurately estimates these error rates in a variety of scenarios. We apply this method to a dataset from the whole-genome sequencing of owl monkeys (Aotus nancymaae) in three-generation pedigrees. We find significant differences between estimates for different types of genotyping error, with the most common being homozygous reference sites miscalled as heterozygous and vice versa. The approach we describe is applicable to any set of genotypes where haplotypic phase can reliably be called and should prove useful in helping to control for false discoveries.
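
A toy sketch of the general pedigree-based idea, not the authors' phase-based model: at sites where both parents are called homozygous reference, a heterozygous call in the offspring is, barring de novo mutation, most likely a miscall, so the frequency of such Mendelian-inconsistent configurations gives a rough estimate of the hom-ref-to-het error rate. Genotypes are coded as the count of alternate alleles (0, 1, 2); all data below are made up:

```python
def estimate_homref_to_het_rate(trios):
    """trios: iterable of (father_gt, mother_gt, child_gt) genotype calls at
    biallelic sites, each coded as 0 (hom ref), 1 (het), or 2 (hom alt).

    Returns a rough estimate of P(true hom-ref site miscalled as het), using
    sites where both parents are called hom ref: a het call in the child at
    such a site is treated as a genotyping error (ignoring de novo mutation,
    which is far rarer than typical error rates).
    """
    informative = 0
    apparent_errors = 0
    for father, mother, child in trios:
        if father == 0 and mother == 0:
            informative += 1
            if child == 1:
                apparent_errors += 1
    return apparent_errors / informative if informative else float("nan")

# Tiny usage example with made-up trio genotypes at independent sites:
trios = [(0, 0, 0)] * 996 + [(0, 0, 1)] * 4
print(f"estimated hom-ref -> het miscall rate: {estimate_homref_to_het_rate(trios):.4f}")
```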

