Developing a mind–body exercise programme for stressed children

2016 ◽  
Vol 76 (2) ◽  
pp. 131-144 ◽  
Author(s):  
Claudia Wang ◽  
Dong-Chul Seo ◽  
Roy W Geib

Objective: To describe the process of developing a Health Qigong programme for stressed children using a formative evaluation approach. Methods: A multi-step formative evaluation method was utilised. These steps included (1) identifying programme content and drafting the curriculum, (2) synthesising effective and age-appropriate pedagogies, (3) consulting an expert panel, (4) teaching pilot lessons and soliciting feedback from students and (5) revising and finalising the programme. Results: A total of 16 theme-based lessons were generated in order to help children manage stress by imitating interesting plants and animals, such as sunflowers, pine trees, white cranes, tigers, sleeping lions and deer. Five age-appropriate teaching strategies were synthesised to make the programme fun and enjoyable for children. These included (1) using theme-based lesson plans, (2) building mind–body connections, (3) balancing repetition and creativity, (4) interweaving pictures, stories, volunteers and teamwork and (5) involving parents and school teachers. Modifications based on feedback from the expert panel and students were incorporated to make the programme relevant to elementary school settings. Conclusion: This Health Qigong for Stressed Children programme appears effective in reducing stress over a 16-week period. Future studies should explore the efficacy and wider applicability of the programme with a larger and more diverse population of children.

Author(s):  
Athanasis Karoulis ◽  
Stavros Demetriadis ◽  
Andreas Pombortsis

Interface evaluation of a software system is a procedure intended to identify and propose solutions for usability problems caused by the specific software design. The term evaluation generally refers to the process of “gathering data about the usability of a design or product by a specified group of users for a particular activity within a specified environment or work context” (Preece et al., 1994, p. 602). As already stated, the main goal of an interface evaluation is to discover usability problems. A usability problem may be defined as anything that interferes with a user’s ability to efficiently and effectively complete tasks (Karat et al., 1992). The most widely applied interface evaluation methodologies are expert-based and empirical (user-based) evaluations. Expert evaluation is a relatively cheap and efficient formative evaluation method that can be applied to anything from system prototypes or design specifications up to the almost-ready-to-ship product. The main idea is to present the tasks supported by the interface to an interdisciplinary group of experts, who take the part of would-be users and try to identify possible deficiencies in the interface design. According to Reeves (1993), expert-based evaluations are perhaps the most widely applied evaluation strategy. They offer a crucial advantage that makes them more affordable than empirical ones: in general, it is easier and cheaper to find experts than users who are willing to perform the evaluation. Experts from different cognitive domains (at least one from the domain of HCI and one from the cognitive domain under evaluation) are asked to judge the interface, each from his or her own point of view. It is important that they all be experienced, so they can see the interface through the eyes of the user and reveal problems and deficiencies in the interface design. A strong advantage of these methods is that they can be applied very early in the design cycle, even on paper mock-ups.
An expert’s expertise allows him or her to understand the functionality of the system under construction, even without the whole picture of the product; a first look at the basic characteristics is usually sufficient. User-based evaluations, on the other hand, can be applied only after the product has reached a certain level of completion.


CJEM ◽  
2019 ◽  
Vol 21 (S1) ◽  
pp. S71
Author(s):  
J. Marsden ◽  
S. Drebit ◽  
R. Lindstrom ◽  
C. MacKinnon ◽  
C. Archibald ◽  
...  

Introduction: September 2017 saw the launch of the British Columbia (BC) Emergency Medicine Network (EM Network), an innovative clinical network established to improve emergency care across the province. The intent of the EM Network is to support the delivery of evidence-informed, patient-centered care in all 108 Emergency Departments and Diagnostic & Treatment Centres in BC. After one year, the Network undertook a formative evaluation to guide its growth. Our objective is to describe the evaluation approach and early findings. Methods: The EM Network was evaluated on three levels: member demographics, online engagement and member perceptions of value and progress. For member demographics and online engagement, data were captured from member registration information on the Network's website, Google Analytics and Twitter Analytics. Membership feedback was sought through an online survey using a social network analysis tool, PARTNER (Program to Analyze, Record, and Track Networks to Enhance Relationships), and semi-structured individual interviews. This framework was developed based on literature recommendations in collaboration with Network members, including patient representatives. Results: There are currently 622 EM Network members from an eligible denominator of approximately 1400 physicians (44%). Seventy-three percent of the Emergency Departments and Diagnostic and Treatment Centres in BC currently have Network members, and since launch, the EM Network website has been accessed by 11,154 unique IP addresses. Online discussion forum use is low but growing, and Twitter following is high. There are currently 550 Twitter followers and an average of 27 ‘mentions’ of the Network by Twitter users per month. Member feedback through the survey and individual interviews indicates that the Network is respected and credible, but many remain unaware of its purpose and offerings. 
Conclusion: Our findings underscore that early evaluation is useful to identify development needs, and for the Network this includes increasing awareness and online dialogue. However, our results must be interpreted cautiously in such a young Network, and thus, we intend to re-evaluate regularly. Specific action recommendations from this baseline evaluation include: increasing face-to-face visits of targeted communities; maintaining or accelerating communication strategies to increase engagement; and providing new techniques that encourage member contributions in order to grow and improve content.


2019 ◽  
Vol 795 ◽  
pp. 383-388 ◽  
Author(s):  
Xiao Tao Zheng ◽  
Zhi Yuan Ma ◽  
Hao Feng Chen ◽  
Jun Shen

The traditional Low Cycle Fatigue (LCF) evaluation method is based on elastic analysis with Neuber’s rule, which is usually considered overly conservative. The alternative elastic-plastic method in ASME VIII-2, however, requires the effective strain range at the steady cycle to be calculated by detailed cycle-by-cycle analysis, which is time-consuming. A Direct Steady Cycle Analysis (DSCA) method within the Linear Matching Method (LMM) framework is proposed to assess fatigue life accurately and efficiently for components with arbitrary geometries and cyclic loads. Temperature-dependent stress-strain relationships that account for strain hardening, described by the Ramberg-Osgood (RO) formula, are discussed and compared with results obtained using the Elastic-Perfectly Plastic (EPP) model. Additionally, a Reversed Plasticity Domain Method (RPDM), based on the shakedown and ratchet limit analysis method and the DSCA approach within the LMM framework (LMM DSCA), is recommended for designing the cyclic load levels of LCF experiments with predefined fatigue life ranges.
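For reference, the Ramberg-Osgood relation cited in the abstract is commonly written as the sum of an elastic term and a power-law plastic term; the abstract does not give the temperature-dependent material constants, so E, K, and n below are generic parameters in the standard form:

```latex
\varepsilon \;=\; \frac{\sigma}{E} \;+\; K\left(\frac{\sigma}{E}\right)^{n}
```

where \(\sigma\) is stress, \(\varepsilon\) total strain, \(E\) Young's modulus, and \(K\), \(n\) are the strain-hardening constants fitted to the material's cyclic stress-strain curve.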


2019 ◽  
Author(s):  
Mina Chookhachizadeh Moghadam ◽  
Ehsan Masoumi ◽  
Nader Bagherzadeh ◽  
Davinder Ramsingh ◽  
Guann-Pyng Li ◽  
...  

Abstract Purpose: Predicting hypotension well in advance provides physicians with enough time to respond with proper therapeutic measures. However, real-time prediction of hypotension with a high positive predictive value (PPV) is a challenge due to the dynamic changes in patients’ physiological status under drug administration, which limits the amount of useful data available to the algorithm. Methods: To mimic real-time monitoring, we developed a machine learning algorithm that uses most of the available data points from patients’ records to train and test the algorithm. The algorithm predicts hypotension up to 30 minutes in advance based on only 5 minutes of a patient’s physiological history. A novel evaluation method is proposed to assess the algorithm’s performance as a function of time at every timestamp within 30 minutes prior to hypotension. This evaluation approach provides statistical tools to find the best possible prediction window. Results: During 181,000 minutes of monitoring of about 400 patients, the algorithm demonstrated 94% accuracy, 85% sensitivity and 96% specificity in predicting hypotension within 30 minutes of the events. A high PPV of 81% was obtained, and the algorithm predicted 80% of the events 25 minutes prior to their onsets. It was shown that choosing a classification threshold that maximizes the F1 score during the training phase contributes to a high PPV and sensitivity. Conclusion: This study reveals the promising potential of machine learning algorithms for real-time prediction of hypotensive events in the ICU setting based on short-term physiological history.
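The threshold-selection step described in the results can be sketched as follows. This is a minimal illustration, not the authors' implementation: `best_f1_threshold` is a hypothetical helper that scans candidate thresholds on a validation set and keeps the one maximizing the F1 score.

```python
import numpy as np

def best_f1_threshold(y_true, scores):
    """Return the classification threshold (and its F1) that maximizes
    the F1 score on a held-out set. Candidate thresholds are taken from
    the unique predicted scores."""
    best_t, best_f1 = 0.5, -1.0
    for t in np.unique(scores):
        y_pred = (scores >= t).astype(int)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        if tp == 0:
            continue  # F1 undefined/zero with no true positives
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# Toy validation set: labels and model scores for six samples.
y_true = np.array([0, 0, 1, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2])
t, f1 = best_f1_threshold(y_true, scores)
```

Tuning the threshold this way trades a little recall for precision, which is consistent with the high PPV the abstract reports.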


2017 ◽  
Vol 32 (3) ◽  
pp. 969-990 ◽  
Author(s):  
Wenqing Zhang ◽  
Lian Xie ◽  
Bin Liu ◽  
Changlong Guan

Abstract Track, intensity, and, in some cases, size are usually used as separate evaluation parameters to assess numerical model performance on tropical cyclone (TC) forecasts. Such an individual-parameter evaluation approach often yields contradictory skill assessments for different parameters, for instance, a small track error with a large intensity error and vice versa. In this study, an intensity-weighted hurricane track density function (IW-HTDF) is designed as a new approach to the integrated evaluation of TC track, intensity, and size forecasts. The sensitivity of the TC track density to TC wind radius was investigated by calculating the IW-HTDF with density functions defined by 1) asymmetric, 2) symmetric, and 3) constant wind radii. Using the best-track data as the benchmark, the IW-HTDF provides a specific score value for a TC forecast validated for a specific date and time or duration. This new TC forecast evaluation approach provides a relatively concise, integrated skill score compared with the multiple skill scores obtained when track, intensity, and size are evaluated separately. It should be noted that actual observations of TC size are very limited, as are estimations of TC size forecasts; therefore, including TC size as a forecast evaluation parameter is exploratory at present. The proposed integrated evaluation method for TC track, intensity, and size forecasts can be used to evaluate the track forecast alone or in combination with the intensity and size parameters. As observations and forecasts of TC size become routine in the future, including TC size as a forecast skill assessment parameter will become increasingly important.


2018 ◽  
Vol 28 (Supp) ◽  
pp. 445-456 ◽  
Author(s):  
Bonnie T. Zima ◽  
Michael McCreary ◽  
Kristen Kenan ◽  
Michelle Churchey-Mims ◽  
Hannah Chi ◽  
...  

Objective: To describe the development and evaluation of two integrated care models using a partnered formative evaluation approach across a private foundation, clinic leaders, providers and staff, and a university-based research center. Design: Retrospective cohort study using multiple data sources. Setting: Two federally qualified health care centers serving low-income children and families in Chicago. Participants: Private foundation, clinic and academic partners. Interventions: Development of two integrated care models and partnered evaluation design. Main Outcome Measures: Accomplishments and early lessons learned. Results: Together, the foundation-clinic-academic partners worked to include best practices in two integrated care models for children while developing the evaluation design. A shared data collection approach, which empowered the clinic partners to collect data using a web-based tool for a prospective longitudinal cohort study, was also created. Conclusion: Across three formative evaluation stages, the foundation, clinic, and academic partners continued to reach beyond their respective traditional roles of project oversight, clinical service, and research as adjustments were collectively made to accommodate barriers and unanticipated events. Together, an innovative shared data collection approach was developed that extends partnered research to include data collection led by the clinic partners and supported by the technical resources of a university-based research center. Ethn Dis. 2018;28(Suppl 2):445-456; doi:10.18865/ed.28.S2.445.


2017 ◽  
Vol 55 (7) ◽  
pp. 996-1021 ◽  
Author(s):  
Ruey-Shin Chen ◽  
I-Fan Liu

Currently, e-learning systems are widely used at all stages of education. However, it is difficult for school administrators to accurately assess the actual usage performance of a new system, especially when an organization wishes to update the system for users from different backgrounds using new devices such as smartphones. To allow school administrators to conduct upgrades of e-learning systems that take into consideration students' current usage conditions, this study proposed a two-stage system evaluation approach to explore the adoption of new systems. We collected 352 samples in Stage I, whose goal was to propose a research model for understanding the usage intentions of college students toward campus e-learning systems, as well as the factors that showed significant differences between PC and smartphone usage. A total of 30 trained students participated in Stage II, whose goal was to propose a system performance evaluation method for comparing the performance of the new and existing systems on the factors of concern to smartphone users after actual system use. Finally, based on our research model and system performance evaluation method, we put forward conclusions and suggestions that schools can use as references for future system procurements and updates.


2002 ◽  
Vol 5 (2) ◽  
pp. 89-108 ◽  
Author(s):  
Edward L Meyen ◽  
Ronald J Aust ◽  
Yvonne N Bui ◽  
Eugene Ramp ◽  
Sean J Smith
