Evaluation of Techniques for the Presentation of Laboratory Data: Support of Pattern Recognition

2000, Vol 39 (01), pp. 88-92
Author(s): J. O. O. Hoeke, B. Bonke, R. van Strik, E. S. Gelsema

Abstract: Two tabular and two graphical techniques for the presentation of laboratory test results were compared in a reaction-time experiment with 22 volunteers. The experimental setup was designed to determine whether one or more of the presentation techniques facilitated the recognition of four predefined combinations of abnormal test results. Using a conventional, tabular presentation technique as a reference, faster median response times were obtained with each of the other three presentation techniques, irrespective of pattern. The effect on accuracy was less clear, possibly due to the small number of errors made.

1997, Vol 36 (01), pp. 17-19
Author(s): J. O. O. Hoeke, B. Bonke, R. van Strik, E. S. Gelsema, R. Verheij

Abstract: Four tabular and two graphical techniques for the presentation of laboratory test results were evaluated in a reaction-time experiment with 25 volunteers. Artificial variables and values were used to represent sets of 12 laboratory tests, eliminating any possible effect of clinical experience. Analyses focused on four types of interpretation errors. Color-coded tables and one of the color-coded graphs markedly (by a factor of 2.8 or more) reduced the number of incorrectly classified test results compared with the reference presentation technique, mainly because fewer abnormal test results went unnoticed by the subjects.


1997, Vol 36 (01), pp. 11-16
Author(s): J. O. O. Hoeke, B. Bonke, R. van Strik, E. S. Gelsema, R. Verheij

Abstract: Four tabular and two graphical techniques for the presentation of laboratory test results were evaluated in a reaction-time experiment with 25 volunteers. Artificial variables and values were used to represent sets of 12 laboratory tests, eliminating any possible effect of clinical experience. Analyses focused on reaction times for correctly classified sets of data. For comparable data sets, presentation techniques (PTs) that use color always allowed faster interpretation than PTs that use no color or only a simple marker. Color-coded tables improved median reaction times by a factor of approximately six or more compared with the reference PT (a tabular PT without any hints); for the color-coded graphs, the improvement factor was approximately 2.5 or more.
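At its core, the color-coding evaluated in these two studies amounts to classifying each test result against its reference interval and rendering the value accordingly. The Python sketch below illustrates that idea for a terminal table; the test names, reference intervals, and ANSI-escape coloring are illustrative assumptions, not the presentation techniques actually used in the experiment.

```python
# Minimal sketch of a color-coded tabular presentation of lab results.
# Reference intervals, test names, and values are illustrative only.

ANSI = {"red": "\033[91m", "green": "\033[92m", "reset": "\033[0m"}

REFERENCE = {            # test -> (low, high); hypothetical intervals
    "Na": (136, 145),    # mmol/L
    "K":  (3.5, 5.1),    # mmol/L
    "Hb": (8.5, 11.0),   # mmol/L
}

def classify(test, value):
    """Place a value relative to its reference interval."""
    low, high = REFERENCE[test]
    return "low" if value < low else "high" if value > high else "normal"

def render_row(test, value):
    """Render one table row, coloring abnormal values red."""
    status = classify(test, value)
    color = ANSI["green"] if status == "normal" else ANSI["red"]
    return f"{test:<3} {color}{value:>6}{ANSI['reset']}  ({status})"

for test, value in [("Na", 129), ("K", 4.2), ("Hb", 12.3)]:
    print(render_row(test, value))
```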


1980, Vol 50 (1), pp. 91-97
Author(s): Edward J. Rinalducci

Comfort ratings and response times for changes in the experienced level of comfort were examined in 20 subjects using the NASA Flight Research Center's Jetstar aircraft, modified to carry the General Purpose Airborne Simulator (GPAS). Data were obtained for each subject during two runs of ten 1-min flight segments. In general, as the magnitude of aircraft motion increased in either the vertical or transverse (lateral) direction, feelings of discomfort increased and response times to those changes decreased. These results suggest parallels between the large body of laboratory data on human reaction time and the response times to changes in ride comfort collected in this field study.


2017, Vol 55 (8), pp. 1112-1114
Author(s): Giuseppe Lippi, Gianfranco Cervellin, Mario Plebani

Abstract: The management of laboratory data from unsuitable (hemolyzed) samples remains an almost unresolved dilemma. Whether laboratory test results obtained by measuring unsuitable specimens should be made available to clinicians has been the subject of fierce debate over the past decades. Recently, an intriguing alternative to suppressing test results and recollecting the specimen has been put forward, entailing the definition and implementation of specific algorithms that would allow reporting a preanalytically altered laboratory value along with a specific comment about its measurement uncertainty. This approach carries some advantages, namely the timely communication of potentially life-threatening laboratory values, but also some drawbacks. These include the challenge of defining validated performance specifications for hemolyzed samples, the need to produce reliable data with the lowest possible uncertainty, the short turnaround time for repeating most laboratory tests, the risk that the comments may be overlooked in short-stay and frequently overcrowded units (e.g., the emergency department), as well as the many clinical advantages of direct communication with the physician in charge of the patient. Although the debate remains open, we continue to support the view that suppressing data from unsuitable (hemolyzed) samples and promptly notifying clinicians of the need to recollect the samples remains the (clinically and analytically) safest practice.
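The comment-based alternative discussed here boils down to a per-analyte decision rule on the hemolysis index: report, report with an uncertainty comment, or suppress and recollect. A minimal sketch follows; the analytes and thresholds are hypothetical placeholders, since real cutoffs would have to come from locally validated performance specifications.

```python
# Sketch of a per-analyte hemolysis decision rule: report, report with a
# comment on measurement uncertainty, or suppress and request recollection.
# Analytes and thresholds are hypothetical placeholders, not validated
# performance specifications.

HI_LIMITS = {
    # analyte: (comment_above, suppress_above) on a hemolysis-index scale
    "potassium": (20, 50),
    "LDH":       (10, 30),
    "AST":       (40, 100),
}

def disposition(analyte, hemolysis_index):
    """Return the reporting action for one result given its hemolysis index."""
    comment_above, suppress_above = HI_LIMITS[analyte]
    if hemolysis_index >= suppress_above:
        return "suppress result; notify clinician and recollect sample"
    if hemolysis_index >= comment_above:
        return "report with comment on increased measurement uncertainty"
    return "report normally"

print(disposition("potassium", 35))  # -> report with comment ...
```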


Author(s): Sanjaya Dhakal, Sherry L. Burrer, Carla A. Winston, Achintya Dey, Umed Ajani, ...

Objective: Electronic laboratory reporting has been promoted as a public health priority. The Office of the U.S. National Coordinator for Health Information Technology has endorsed two coding systems: Logical Observation Identifiers Names and Codes (LOINC) for laboratory test orders and Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) for test results. Materials and Methods: We examined LOINC and SNOMED CT code use in electronic laboratory data reported in 2011 by 63 non-federal hospitals to the BioSense electronic syndromic surveillance system. We analyzed the frequencies, characteristics, and code concepts of test orders and results. Results: A total of 14,028,774 laboratory test orders or results were reported. No test orders used SNOMED CT codes. Of the test orders, 77% used a LOINC code, 17% had no value, and 6% had the non-informative value "OTH". Thirty-three percent of test results had missing or non-informative codes. For test results with at least one informative value, 91.8% had only LOINC codes, 0.7% had only SNOMED CT codes, and 7.4% had both. Of 108 SNOMED CT codes reported without LOINC codes, 45% could be matched to at least one LOINC code. Conclusion: Missing or non-informative codes comprised almost a quarter of laboratory test orders and a third of test results reported to BioSense by non-federal hospitals. LOINC codes were used for laboratory test results more often than SNOMED CT codes. Complete and standardized coding could improve the usefulness of laboratory data for public health surveillance and response.
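The completeness analysis reported here can be pictured as tallying, per record, which coding systems carry an informative value. Below is a minimal sketch under assumed field names (`loinc_code`, `snomed_code`); the record layout is a hypothetical stand-in, not the BioSense message format.

```python
# Sketch of classifying laboratory records by coding completeness, in the
# spirit of the BioSense analysis. Field names are hypothetical assumptions.
from collections import Counter

def informative(code):
    """A code is informative if present and not the placeholder "OTH"."""
    return bool(code) and code != "OTH"

def classify_record(rec):
    """Label one test-result record by which coding systems it carries."""
    has_loinc = informative(rec.get("loinc_code"))
    has_snomed = informative(rec.get("snomed_code"))
    if has_loinc and has_snomed:
        return "both"
    if has_loinc:
        return "loinc_only"
    if has_snomed:
        return "snomed_only"
    return "missing_or_noninformative"

records = [
    {"loinc_code": "2345-7", "snomed_code": None},         # LOINC only
    {"loinc_code": "OTH", "snomed_code": None},            # non-informative
    {"loinc_code": "2345-7", "snomed_code": "271737000"},  # both systems
]
print(Counter(classify_record(r) for r in records))
```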


2015, Vol 22 (4), pp. 900-904
Author(s): Dean F Sittig, Daniel R Murphy, Michael W Smith, Elise Russo, Adam Wright, ...

Abstract: Accurate display and interpretation of clinical laboratory test results is essential for safe and effective diagnosis and treatment. To ascertain how well current electronic health records (EHRs) facilitate these processes, we evaluated the graphical displays of laboratory test results in eight EHRs against objective criteria for optimal graphs based on the literature and expert opinion. None of the EHRs met all 11 criteria; the best EHR met 10 of 11, while three EHRs met only 5 of 11. One criterion (a graph whose y-axis label displays both the name of the measured variable and the units of measure) was absent from all EHRs. One EHR system graphed results in reverse chronological order. One EHR system plotted data collected at unequally spaced points in time as equally spaced data points, distorting the perceived slope between data points; this deficiency could have a significant negative impact on patient safety. Only two EHR systems allowed users to hover over or click on a data point to see the precise values of its x-y coordinates. Our study suggests that many current EHR-generated graphs do not meet evidence-based criteria aimed at improving laboratory data comprehension.
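The slope-distortion problem the authors describe is easy to reproduce and to avoid: plotting by sample index spaces all points equally, while plotting against the actual collection dates preserves the true trend. A minimal matplotlib sketch with hypothetical dates and values:

```python
# Sketch contrasting an index-based x-axis (which distorts visual slopes)
# with a true time axis (which preserves them). Dates/values hypothetical.
import matplotlib.pyplot as plt
from datetime import date

dates = [date(2015, 1, 2), date(2015, 1, 5), date(2015, 3, 20)]
potassium = [3.8, 4.9, 4.1]  # mmol/L

fig, (ax_bad, ax_good) = plt.subplots(1, 2, figsize=(9, 3))

# Misleading: equally spaced points hide the ten-week gap before the last draw.
ax_bad.plot(range(len(dates)), potassium, marker="o")
ax_bad.set_xticks(range(len(dates)))
ax_bad.set_xticklabels([d.isoformat() for d in dates])
ax_bad.set_title("Equally spaced (misleading slope)")

# Faithful: plot against the actual collection dates.
ax_good.plot(dates, potassium, marker="o")
ax_good.set_title("True time axis")

for ax in (ax_bad, ax_good):
    ax.set_ylabel("Potassium (mmol/L)")  # variable name and units on the y-axis

fig.autofmt_xdate()  # rotate date labels for readability
plt.tight_layout()
plt.show()
```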


2020, pp. 86-90
Author(s): V. G. Akimov

A diagnosis is the result of the physician's synthesis of all anamnestic, clinical, and instrumental data from the patient's examination. However, for various reasons, the putative clinical diagnosis is not always supported by laboratory test results. The article discusses the main reasons for such discrepancies and emphasizes the responsibility of the attending physician for reliable diagnosis of the disease.


2021
Author(s): Di Jin, Qing Wang, Dezhi Peng, Jiajia Wang, Yating Cheng, ...

Abstract: Background: Validation of the autoverification function is the most critical step in confirming its effectiveness before use. It is crucial to verify whether the programmed algorithm follows the expected logic and produces the expected results. In recent years, this process has centered on the assessment of human-machine consistency and has mostly taken the form of manual recording, a time-consuming activity with inherent subjectivity and arbitrariness that cannot guarantee a comprehensive, timely, and continuous evaluation of the autoverification function. To overcome these limitations, we independently developed and implemented a laboratory information system (LIS)-based validation system for autoverification. Methods: We developed a correctness-verification and integrity-validation method (hereinafter the "new method") in the form of a human-machine dialogue. The system records the personnel's review steps and determines whether the human and machine review results are consistent. If they are inconsistent, laboratory personnel analyze the reasons for the inconsistency according to the system prompts, add or modify rules, reverify, and thereby improve the accuracy of autoverification. Results: The validation system was successfully established and implemented. For a dataset consisting of 833 rules for 30 assays, 782 rules (93.87%) were successfully verified in the correctness-verification phase, and 51 rules were deleted due to execution errors. In the integrity-validation phase, 24 of the 30 assays were easily verified, while the other 6 still required new rules or changes to the rule settings. From setting the rules to automated reporting, the time difference between manual validation and the new method was statistically significant (χ2 = 11.06, p = 0.0009), with the new method greatly reducing validation time. Since 2017, the new method has been used in 32 laboratories, and 15.8 million reports have been automatically reviewed and issued without a single clinical complaint. Conclusion: To the best of our knowledge, this is the first report of autoverification validation in the form of a human-machine interaction. The new method can effectively control the risks of autoverification, shorten time consumption, and improve the efficiency of laboratory verification.
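The human-machine consistency check at the heart of the new method can be sketched as comparing, for each sample, the rule engine's verdict with the recorded manual review and flagging disagreements for rule revision. The sketch below assumes a hypothetical record structure; it is not the authors' LIS implementation.

```python
# Sketch of the human-machine consistency check behind autoverification
# validation. The record structure is a hypothetical assumption.

def check_consistency(samples):
    """Return samples where the engine's verdict and the manual review differ.

    Each disagreement is a candidate for rule addition or modification,
    followed by re-verification.
    """
    return [s for s in samples if s["auto_verdict"] != s["manual_verdict"]]

samples = [
    {"id": "A1", "auto_verdict": "release", "manual_verdict": "release"},
    {"id": "A2", "auto_verdict": "release", "manual_verdict": "hold"},
]
for s in check_consistency(samples):
    print(f"sample {s['id']}: auto={s['auto_verdict']}, "
          f"manual={s['manual_verdict']} -> revise rules and reverify")
```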


2003, Vol 62 (4), pp. 209-218
Author(s): A. N’gbala, N. R. Branscombe

When do causal attribution and counterfactual thinking facilitate one another, and when do the two responses overlap? Undergraduates (N = 78) both explained and undid, in each of two orders, events that were described either with their potential causes or not. The time to perform either response was recorded. Overall, mutation response times were shorter when performed after an attribution was made than before, while attribution response times did not vary as a consequence of sequence. Depending on whether the causes of the target events were described in the scenario or not, respondents undid the actor and assigned causality to another antecedent, or pointed to the actor for both responses. These findings suggest that counterfactual mutation is most likely to be facilitated by attribution, and that mutation and attribution responses are most likely to overlap when no information about potential causes of the event is provided.

