Measurement properties of the Dynavision D2 one-minute drill task in active adolescents

Neurology ◽  
2018 ◽  
Vol 91 (23 Supplement 1) ◽  
pp. S4.2-S4
Author(s):  
Tamara McLeod ◽  
R. Curtis Bay ◽  
Hannah Gray ◽  
Richelle Marie Williams

Objective: The purpose of this study was to evaluate test-retest reliability and practice effects of the Dynavision D2 in active adolescents. Background: Following sport-related concussion, assessment of oculomotor function and vision is important. While clinical tests are recommended, computerized devices, such as the Dynavision D2, are emerging as viable tools for vision assessment. As with all concussion assessments, understanding test-retest reliability and susceptibility to practice effects is important for appropriate interpretation of serial assessments post-injury. Design/Methods: Participants included 20 female adolescents (age = 16.6 ± 1.10 years, mass = 62.0 ± 5.9 kg, height = 169.2 ± 5.1 cm). Participants completed 2 test sessions 1 week apart using the Dynavision D2. The Dynavision D2 includes a one-minute drill task in which a single light illuminates and participants hit the light as quickly as possible, completing 3 drills per trial. Participants completed 3 trials during the first session and 2 during the second. Independent variables were day (day 1, day 2) and drills (15 drills). Dependent variables were the number of hits per minute (Hits/min) and average reaction time (AvgRT). Within-day and between-day test-retest reliabilities were analyzed using two-way random-effects intraclass correlation coefficients (ICCs) for consistency. Practice effects were analyzed with repeated-measures analysis of variance and Helmert contrasts (α = 0.05). Results: Moderate-to-strong reliability was demonstrated for Hits/min (within-day 1 [ICC = 0.74; 95% CI: 0.53, 0.87]; within-day 2 [ICC = 0.91; 95% CI: 0.77, 0.97]; between-days [ICC = 0.86; 95% CI: 0.65, 0.95]). Moderate-to-strong reliability was demonstrated for AvgRT (within-day 1 [ICC = 0.70; 95% CI: 0.48, 0.86]; within-day 2 [ICC = 0.92; 95% CI: 0.78, 0.97]; between-days [ICC = 0.85; 95% CI: 0.64, 0.94]). Practice effects were noted for Hits/min (p = 0.001) and AvgRT (p < 0.001). Helmert contrasts suggested that the practice effect plateaued at drill 11 for Hits/min and drill 12 for AvgRT. Conclusions: Moderate-to-excellent test-retest reliability was found for the one-minute drill task, with better reliability noted on day 2 and between days than on day 1. This task is susceptible to practice effects, highlighting the need for familiarization or practice trials prior to documenting patient scores.
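
The reliability analysis above rests on two-way random-effects ICCs computed for consistency from each participant's repeated drills. The abstract does not include code, but a minimal NumPy sketch of the single-measure and average-measure consistency ICCs for a subjects-by-drills score matrix could look like the following; the function name and the toy hits/min data are assumptions for illustration only.

```python
import numpy as np

def icc_consistency(scores):
    """Two-way consistency ICCs for an (n subjects x k drills) score matrix.

    Returns ICC(C,1) (single measure) and ICC(C,k) (average of k drills),
    using the standard mean-square decomposition (Shrout & Fleiss / McGraw & Wong).
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)                   # per-subject means
    col_means = scores.mean(axis=0)                   # per-drill means

    ss_rows = k * ((row_means - grand) ** 2).sum()    # between-subject SS
    ss_cols = n * ((col_means - grand) ** 2).sum()    # between-drill SS
    ss_error = ((scores - grand) ** 2).sum() - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    icc_c1 = (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)
    icc_ck = (ms_rows - ms_error) / ms_rows
    return icc_c1, icc_ck

# Hypothetical hits/min for 5 participants across 3 one-minute drills.
hits = [[61, 64, 66],
        [55, 58, 57],
        [70, 72, 75],
        [48, 52, 53],
        [66, 65, 69]]
print(icc_consistency(hits))
```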

2002 ◽  
Vol 82 (4) ◽  
pp. 364-371 ◽  
Author(s):  
Douglas P Gross ◽  
Michele C Battié

Abstract Background and Purpose. Functional capacity evaluations (FCEs) are measurement tools used in predicting readiness to return to work following injury. The interrater and test-retest reliability of determinations of maximal safe lifting during kinesiophysical FCEs was examined in a sample of people who were off work and receiving workers' compensation. Subjects. Twenty-eight subjects with low back pain who had plateaued with treatment were enrolled. Five occupational therapists, trained and experienced in kinesiophysical methods, conducted testing. Methods. A repeated-measures design was used, with raters testing subjects simultaneously yet independently. Subjects were rated on 2 occasions, separated by 2 to 4 days. Analyses included intraclass correlation coefficients (ICCs) and 95% confidence intervals. Results. The ICC values for interrater reliability ranged from .95 to .98. Test-retest values ranged from .78 to .94. Discussion and Conclusion. Inconsistencies in subjects' performance across sessions were the greatest source of FCE measurement variability. Overall, however, test-retest reliability was good and interrater reliability was excellent.
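
The abstract reports ICC point estimates with 95% confidence intervals but does not state the ICC form or how the intervals were computed. As a rough, non-authoritative sketch, the following assumes a two-way consistency-type single-measure ICC and the usual F-distribution-based interval; the function name and example ratings are invented for illustration.

```python
import numpy as np
from scipy.stats import f

def icc_c1_with_ci(scores, alpha=0.05):
    """Consistency ICC (single measure) for an (n subjects x k raters) matrix,
    with the F-based confidence interval described by Shrout & Fleiss."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_error = ((scores - grand) ** 2).sum() - ss_rows - ss_cols

    df1, df2 = n - 1, (n - 1) * (k - 1)
    ms_rows, ms_error = ss_rows / df1, ss_error / df2
    icc = (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

    f_obs = ms_rows / ms_error
    f_low = f_obs / f.ppf(1 - alpha / 2, df1, df2)
    f_upp = f_obs * f.ppf(1 - alpha / 2, df2, df1)
    lower = (f_low - 1) / (f_low + k - 1)
    upper = (f_upp - 1) / (f_upp + k - 1)
    return icc, (lower, upper)

# Hypothetical maximal safe lift determinations (kg): 6 subjects x 4 raters.
lifts = [[16, 18, 17, 16],
         [23, 25, 24, 24],
         [11, 12, 12, 13],
         [30, 29, 31, 30],
         [20, 22, 21, 20],
         [14, 15, 14, 16]]
print(icc_c1_with_ci(lifts))
```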


2020 ◽  
Vol 63 (11) ◽  
pp. 3743-3759
Author(s):  
Mehdi Bakhtiar ◽  
Min Ney Wong ◽  
Emily Ka Yin Tsui ◽  
Malcolm R. McNeil

Purpose This study reports the psychometric development of the Cantonese versions of the English Computerized Revised Token Test (CRTT) for persons with aphasia (PWAs) and healthy controls (HCs). Method The English CRTT was translated into standard Chinese for the Reading–Word Fade version (CRTT-R-WF-Cantonese) and into formal Cantonese for the Listening version (CRTT-L-Cantonese). Thirty-two adult native Cantonese PWAs and 42 HCs were tested on both versions of the CRTT-Cantonese tests and on the Cantonese Aphasia Battery to measure the construct and concurrent validity of the CRTT-Cantonese tests. The HCs were retested on both versions of the CRTT-Cantonese tests, whereas the PWAs were randomly assigned for retesting on either version to measure test–retest reliability. Results A two-way, Group × Modality, repeated-measures analysis of variance revealed significantly lower scores for the PWA group than the HC group for both reading and listening. Other comparisons were not significant. A high and significant correlation was found between the CRTT-R-WF-Cantonese and the CRTT-L-Cantonese in PWAs, and 87% of the PWAs showed nonsignificantly different performance across the CRTT-Cantonese tests based on the Revised Standardized Difference Test. The CRTT-R-WF-Cantonese provided better aphasia diagnostic sensitivity (100%) and specificity (83.30%) values than the CRTT-L-Cantonese. Pearson correlation coefficients revealed significant moderate correlations between the Cantonese Aphasia Battery scores and the CRTT-Cantonese tests in PWAs, supporting adequate concurrent validity. Intraclass correlation coefficients showed high test–retest reliability (between .82 and .96, p < .001) for both CRTT-Cantonese tests for both groups. Conclusions Results support that the validly translated CRTT-R-WF-Cantonese and CRTT-L-Cantonese tests significantly differentiate the reading and listening comprehension of PWAs from HCs and provide acceptable concurrent validity and high test–retest reliability for both tests. Furthermore, favorable PWA versus HC sensitivity and specificity cutoff scores are presented for both the CRTT-Cantonese listening and reading tests.
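
The reported diagnostic sensitivity and specificity come from classifying PWAs versus HCs at a score cutoff. The abstract does not give the cutoffs or the scoring direction, so the short sketch below is only illustrative: it assumes lower scores flag aphasia, and the scores, scale, and cutoff are hypothetical.

```python
import numpy as np

def sensitivity_specificity(pwa_scores, hc_scores, cutoff):
    """Sensitivity/specificity when scores at or below `cutoff` are classified as aphasic.

    pwa_scores: scores for persons with aphasia (the 'positive' group).
    hc_scores:  scores for healthy controls (the 'negative' group).
    """
    pwa = np.asarray(pwa_scores, dtype=float)
    hc = np.asarray(hc_scores, dtype=float)
    sensitivity = np.mean(pwa <= cutoff)   # true positives / all PWAs
    specificity = np.mean(hc > cutoff)     # true negatives / all controls
    return sensitivity, specificity

# Hypothetical overall test scores (0-15 scale assumed purely for illustration).
pwa = [7.2, 9.1, 10.4, 8.8, 11.0]
hc = [13.5, 14.1, 12.9, 11.2, 14.6]
print(sensitivity_specificity(pwa, hc, cutoff=11.5))
```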


2000 ◽  
Vol 9 (2) ◽  
pp. 117-123 ◽  
Author(s):  
Michael D. Ross ◽  
Elizabeth G. Fontenot

Context: The standing heel-rise test has been recommended as a means of assessing calf-muscle performance. To the authors' knowledge, the reliability of the test using intraclass correlation coefficients (ICCs) has not been reported. Objective: To determine the test-retest reliability of the standing heel-rise test. Design: Single-group repeated measures. Participants: Seventeen healthy subjects. Setting and Intervention: Each subject was asked to perform as many standing heel raises as possible during 2 testing sessions separated by 7 days. Main Outcome Measures: Reliability data for the standing heel-rise test were studied through a repeated-measures analysis of variance, ICC2,1, and SEMs. Results: The ICC2,1 and SEM values for the standing heel-rise test were .96 and 2.07 repetitions, respectively. Conclusions: The standing heel-rise test offers clinicians a reliable assessment of calf-muscle performance. Further study is necessary to determine the ability of the standing heel-rise test to detect functional deficiencies in patients recovering from lower leg injury or surgery.
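
The abstract reports both an ICC2,1 and an SEM in repetitions. One common way to obtain the SEM is from the between-subject standard deviation and the ICC (SEM = SD × √(1 − ICC)); the sketch below uses that convention with invented repetition counts, since the study's raw data are not given here.

```python
import numpy as np

def sem_from_icc(session1, session2, icc):
    """Standard error of measurement using the pooled SD of both sessions:
    SEM = SD * sqrt(1 - ICC). Other SD conventions (e.g., session 1 only) also exist."""
    pooled = np.concatenate([session1, session2])
    return np.std(pooled, ddof=1) * np.sqrt(1.0 - icc)

# Hypothetical heel-rise repetition counts for 7 subjects, two sessions 7 days apart.
day1 = np.array([22, 30, 18, 27, 25, 33, 20])
day2 = np.array([23, 31, 19, 26, 27, 32, 21])
print(sem_from_icc(day1, day2, icc=0.96))   # SEM in repetitions
```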


2020 ◽  
Vol 4 (1) ◽  
Author(s):  
Claudia Haberland ◽  
Anna Filonenko ◽  
Christian Seitz ◽  
Matthias Börner ◽  
Christoph Gerlinger ◽  
...  

Abstract Background To evaluate the psychometric and measurement properties of two patient-reported outcome instruments, the menstrual pictogram superabsorbent polymer-containing version 3 (MP SAP-c v3) and the Uterine Fibroid Daily Bleeding Diary (UF-DBD). Test-retest reliability, criterion and construct validity, responsiveness, missingness, and comparability of the MP SAP-c v3 and UF-DBD versus the alkaline hematin (AH) method and the Patient Global Impression of Severity (PGI-S) were analyzed in post hoc trial analyses. Results Analyses were based on data from up to 756 patients. The full range of MP SAP-c v3 and UF-DBD response options was used, with score distributions reflecting the cyclic character of the disease. Test-retest reliability of MP SAP-c v3 and UF-DBD scores was supported by acceptable intraclass correlation coefficients when stability was defined by the AH method and PGI-S scores (0.80–0.96 and 0.42–0.94, respectively). MP SAP-c v3 and UF-DBD scores demonstrated strong and moderate-to-strong correlations with menstrual blood loss assessed by the AH method. Scores increased in a monotonic fashion with greater disease severities, defined by the AH method and PGI-S scores; differences between groups were mostly statistically significant (P < 0.05). MP SAP-c v3 and UF-DBD were sensitive to changes in disease severity, defined by the AH method and PGI-S. MP SAP-c v3 and UF-DBD showed a lower frequency of missing patient data versus the AH method, and good agreement with the AH method. Conclusions This evidence supports the use of the MP SAP-c v3 and UF-DBD to assess clinical efficacy endpoints in UF phase III studies, replacing the AH method.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yanzhi Bi ◽  
Xin Hou ◽  
Jiahui Zhong ◽  
Li Hu

Abstract Pain perception is a subjective experience and highly variable across time. Brain responses evoked by nociceptive stimuli are highly associated with pain perception and also show considerable variability. To date, the test–retest reliability of laser-evoked pain perception and its associated brain responses across sessions remains unclear. Here, an experiment with a within-subject repeated-measures design was performed in 22 healthy volunteers. Radiant-heat laser stimuli were delivered to the subjects' left-hand dorsum in two sessions separated by 1–5 days. We observed that laser-evoked pain perception declined significantly across sessions, coupled with decreased brain responses in the bilateral primary somatosensory cortex (S1), right primary motor cortex, supplementary motor area, and middle cingulate cortex. Intraclass correlation coefficients between the two sessions showed "fair" to "moderate" test–retest reliability for pain perception and brain responses. Additionally, we observed lower resting-state brain activity in the right S1 and lower resting-state functional connectivity between the right S1 and the dorsolateral prefrontal cortex in the second session than in the first session. Altogether, laser-evoked pain perception and brain responses showed considerable across-session variability, possibly influenced by changes in baseline mental state. This phenomenon should be considered when designing experiments for laboratory studies and evaluating pain abnormalities in clinical practice.


1997 ◽  
Vol 64 (5) ◽  
pp. 270-276 ◽  
Author(s):  
Johanne Desrosiers ◽  
Annie Rochette ◽  
Réjean Hébert ◽  
Gina Bravo

Several dexterity tests have been developed, including the Minnesota Rate of Manipulation Test (MRMT) and a new version, the Minnesota Manual Dexterity Test (MMDT). The objectives of the study were: a) to verify the test-retest reliability of the MMDT; b) to compare the MRMT and the MMDT; c) to study the concurrent validity of the MMDT; and d) to establish reference values for elderly people with the MMDT. Two hundred and forty-seven healthy, community-living elderly people were evaluated with the MMDT and two other dexterity tests, the Box and Block Test (BBT) and the Purdue Pegboard (PP). Thirty-five of them were evaluated twice with the MMDT, and 44 were evaluated with both the MMDT and the MRMT. The results show that the test-retest reliability of the MMDT is acceptable to high (intraclass correlation coefficients of 0.79 to 0.87, depending on the subtest), and the validity of the test is demonstrated by significant correlations between the MMDT, the BBT, and the PP (0.63 to 0.67). There is a high correlation (0.85 to 0.95) between the MMDT and the MRMT despite differences in results. The reference values will help occupational therapists differentiate better between real dexterity difficulties and those that may be attributed to normal aging.


2008 ◽  
Vol 22 (6) ◽  
pp. 737-744 ◽  
Author(s):  
I-Ping Hsueh ◽  
Miao-Ju Hsu ◽  
Ching-Fan Sheu ◽  
Su Lee ◽  
Ching-Lin Hsieh ◽  
...  

Objective. To provide empirical justification for selecting motor scales for stroke patients, the authors compared the psychometric properties (validity, responsiveness, test-retest reliability, and smallest real difference [SRD]) of the Fugl-Meyer Motor Scale (FM), the simplified FM (S-FM), the Stroke Rehabilitation Assessment of Movement instrument (STREAM), and the simplified STREAM (S-STREAM). Methods. For the validity and responsiveness study, 50 inpatients were assessed with the FM and the STREAM at admission to and discharge from a rehabilitation department. The scores of the S-FM and the S-STREAM were retrieved from their corresponding scales. For the test-retest reliability study, a therapist administered both scales to a different sample of 60 chronic patients on 2 occasions. Results. Only the S-STREAM had no notable floor or ceiling effects at admission and discharge. The 4 motor scales had good concurrent validity (rho ≥ .91) and satisfactory predictive validity (rho = .72-.77). The scales showed responsiveness (effect size d ≥ 0.34; standardized response mean ≥ 0.95; P < .0001), with the S-STREAM most responsive. The test-retest agreements of the scales were excellent (intraclass correlation coefficients ≥ .96). The SRD of the 4 scales was 10% of their corresponding highest scores, indicating an acceptable level of measurement error. The upper extremity and lower extremity subscales of the 4 scales showed similar results. Conclusions. The 4 motor scales showed acceptable levels of reliability, validity, and responsiveness in stroke patients. The S-STREAM is recommended because it is short, responsive to change, and able to discriminate between patients with severe and mild stroke.
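
The smallest real difference reported above is typically derived from the SEM: SRD = 1.96 × √2 × SEM, with SEM = SD × √(1 − ICC). The sketch below shows that arithmetic with hypothetical numbers (a scale SD of 12 points and an ICC of .96); it is not the authors' computation.

```python
import numpy as np

def smallest_real_difference(sd, icc, z=1.96):
    """SRD (minimal detectable change at 95% confidence) from a test-retest ICC.

    SEM = SD * sqrt(1 - ICC); SRD = z * sqrt(2) * SEM.
    """
    sem = sd * np.sqrt(1.0 - icc)
    return z * np.sqrt(2.0) * sem

# Hypothetical motor scale: between-subject SD of 12 points, ICC = .96.
print(smallest_real_difference(sd=12.0, icc=0.96))   # about 6.7 points
```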


2021 ◽  
pp. 1-9
Author(s):  
Adam J. Wells ◽  
Bri-ana D.I. Johnson

Context: The Dynavision D2™ Mode A test (ModeA) is a 1-minute reaction time (RT) test commonly used in sports science research and clinical rehabilitation. However, there are limited data regarding the effect of repeated testing (ie, training) or subsequent periods of no testing (ie, detraining) on test–retest reliability and RT performance. Therefore, the purpose of this study was to examine the test–retest reliability, training, and detraining effects associated with the D2™ ModeA test. Design: Repeated measures/reliability. Methods: Twenty-four recreationally active men and women completed 15 training sessions consisting of 2 ModeA tests per session (30 tests). The participants were then randomized to either 1 or 2 weeks of detraining prior to completing 15 retraining sessions (30 tests). The training and retraining periods were separated into 10 blocks for analysis (3 tests per block). The number of hits (hits) and the average RT per hit (AvgRT) within each block were used to determine RT performance. Intraclass correlation coefficients (ICCs), SEM, and minimum difference were used to determine reliability. Repeated-measures analysis of variance/analysis of covariance were used to determine training and detraining effects, respectively. Results: The ModeA variables demonstrated excellent test–retest reliability (ICC2,3 > .93). Significant improvements in hits and AvgRT were noted within training blocks 1 to 5 (P < .05). No further improvements in RT performance were noted between training blocks 6 through 10. There was no effect of detraining period on RT. The RT performance was not different between blocks during retraining. Conclusions: It appears that 15 tests are necessary to overcome the training effect and establish reliable baseline performance for the ModeA test. Detraining for 1 to 2 weeks did not impact RT performance. The authors recommend that investigators and clinicians utilize the average of 3 tests when assessing RT performance using the D2 ModeA test.
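
The analysis above groups consecutive tests into blocks of 3 and averages within each block before computing reliability and training effects. A minimal sketch of that blocking step is shown below; the array shapes and the simulated RT values are assumptions, not the study's data.

```python
import numpy as np

def block_averages(test_scores, tests_per_block=3):
    """Average consecutive tests into blocks (e.g., 30 tests -> 10 blocks of 3).

    Works on a 1-D array (one participant) or a 2-D array (participants x tests).
    """
    scores = np.asarray(test_scores, dtype=float)
    n_blocks = scores.shape[-1] // tests_per_block
    trimmed = scores[..., : n_blocks * tests_per_block]
    return trimmed.reshape(*scores.shape[:-1], n_blocks, tests_per_block).mean(axis=-1)

# Hypothetical: one participant's AvgRT (ms) drifting downward over 30 Mode A tests.
rt = np.linspace(780, 640, 30) + np.random.default_rng(0).normal(0, 12, 30)
print(block_averages(rt))   # 10 block means, one per block of 3 tests
```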


2021 ◽  
Vol 12 ◽  
Author(s):  
Wei Xia ◽  
William Ho Cheung Li ◽  
Tingna Liang ◽  
Yuanhui Luo ◽  
Laurie Long Kwan Ho ◽  
...  

Objectives: This study conducted a linguistic and psychometric evaluation of the Chinese Counseling Competencies Scale-Revised (CCS-R). Methods: The Chinese CCS-R was created from the original English version using a standard forward-backward translation process. The psychometric properties of the Chinese CCS-R were examined in a cohort of 208 counselors-in-training by two independent raters. Fifty-three counselors-in-training were asked to undergo another counseling performance evaluation for the test-retest assessment. A confirmatory factor analysis (CFA) was conducted for the Chinese CCS-R, followed by analyses of internal consistency, test-retest reliability, inter-rater reliability, convergent validity, and concurrent validity. Results: The results of the CFA supported the factorial validity of the Chinese CCS-R, with adequate construct replicability. The scale had a McDonald's omega of 0.876, and intraclass correlation coefficients of 0.63 and 0.90 for test-retest reliability and inter-rater reliability, respectively. Significantly positive correlations were observed between the Chinese CCS-R score and scores on a performance checklist (Pearson's r = 0.781), indicating strong convergent validity, and on a measure of knowledge of drug abuse (Pearson's r = 0.833), indicating moderate concurrent validity. Conclusion: The results support that the Chinese CCS-R is a valid and reliable measure of counseling competencies. Practice implication: The CCS-R provides trainers with a reliable tool to evaluate counseling students' competencies and to facilitate discussions with trainees about their areas for growth.
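
Internal consistency here is summarized with McDonald's omega. Under a single-factor model with standardized loadings and uncorrelated errors, omega can be computed directly from the loadings; the sketch below uses that textbook formula with made-up loadings, since the study's CFA estimates are not reported in the abstract.

```python
import numpy as np

def mcdonalds_omega(std_loadings):
    """McDonald's omega for a unidimensional model with standardized loadings:
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2 under standardization.
    """
    lam = np.asarray(std_loadings, dtype=float)
    common = lam.sum() ** 2
    error = (1.0 - lam ** 2).sum()
    return common / (common + error)

# Hypothetical standardized loadings for a handful of CCS-R-style items.
print(mcdonalds_omega([0.72, 0.65, 0.80, 0.58, 0.69]))   # ~0.82
```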

