Leveraging Technology for Optimization of Health Survey Research

2019 ◽  
Vol 21 (4) ◽  
pp. 571-581
Author(s):  
Shobana Sivaraman ◽  
Punit Soni

Public health deals with the promotion of health and the prevention and treatment of communicable and non-communicable diseases by designing appropriate health interventions and services delivered through health systems. Evidence-based planning, the development of appropriate strategies, and their implementation, monitoring and evaluation require a robust database on the magnitude of disease burden, socio-demographic characteristics and associated risk factors. Although India has vast information available through various large-scale surveys and research studies, it still lacks a reliable health information management system. The available data are seldom analysed to draw meaningful conclusions, to develop evidence for policies and strategies, or to measure the effectiveness of health programmes. The challenges faced in survey research are multifaceted, from data collection in the field to the rapid transmission of data to central servers. There is an increasing trend towards using technology, especially computer-assisted personal interviewing (CAPI), which is not only expensive but also requires extensive training and information management for data transmission and storage. This article examines the application of technology in survey research for efficient data management and improved data quality. A software package called Open Data Kit (ODK) was used for data collection and real-time monitoring of interviewers in the field to improve the quality of data collection, achieve the desired response rate (RR) and manage field operations better. The data collection and field reporting forms designed using ODK demonstrate how technology can articulate research expectations at various levels at lower cost and higher efficiency. The article examines all major aspects of using technology in health survey research and aims to stimulate further discussion of technology for field data collection and monitoring.
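The real-time monitoring the abstract describes boils down to tracking completed interviews against eligible attempts, per interviewer. The sketch below is a hypothetical illustration of that logic, not the study's actual tooling (ODK itself defines forms via XLSForm and stores submissions on a server); the record fields `interviewer` and `status` are assumed names.

```python
# Hypothetical sketch: computing the response rate (RR) from field
# submission records, the way a survey-monitoring dashboard might.
# Field names ("interviewer", "status") are illustrative assumptions.

def response_rate(records):
    """Completed interviews as a share of all eligible attempts."""
    eligible = [r for r in records if r["status"] != "ineligible"]
    if not eligible:
        return 0.0
    completed = sum(1 for r in eligible if r["status"] == "completed")
    return completed / len(eligible)

def rate_by_interviewer(records):
    """Per-interviewer RR, for real-time monitoring of field staff."""
    by_person = {}
    for r in records:
        by_person.setdefault(r["interviewer"], []).append(r)
    return {name: response_rate(rs) for name, rs in by_person.items()}

records = [
    {"interviewer": "A", "status": "completed"},
    {"interviewer": "A", "status": "refused"},
    {"interviewer": "B", "status": "completed"},
    {"interviewer": "B", "status": "ineligible"},
]
print(response_rate(records))        # 2 completed of 3 eligible
print(rate_by_interviewer(records))  # flags interviewer A's refusals early
```

Computing the per-interviewer breakdown daily, rather than at the end of fieldwork, is what lets supervisors intervene while interviews are still in progress.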

Author(s):  
Jernej Berzelak ◽  
Vasja Vehovar

Data collection based on standardized questionnaires represents one of the central tools in many research areas. Early surveys date back to the 18th century (de Leeuw, 2005), while a major breakthrough came in the 1930s with the application of probability samples. Using surveys, governments today monitor conditions in the country, social scientists obtain data on social phenomena and managers direct their business by studying the characteristics of their target customers. The importance of survey research stimulates ongoing efforts to achieve higher data quality and optimized costs. Early on, researchers recognized the potential of technological advances for achieving these goals. In the early 1970s telephone surveys started replacing expensive face-to-face interviews. Computer technology developments soon enabled computer-assisted telephone interviewing (“CATI”). The 1980s brought new approaches based on personal computers: interviewers started to use laptops, and respondents sometimes completed questionnaires on their own computers. Another revolution occurred with the Internet in the subsequent decade. The pervasive availability of Internet access and the growing number of Internet-supported devices, coupled with the advance of interactive Web technologies (like Ajax), are facilitating developments in contemporary survey research. Internet surveys show the potential to become the leading survey approach in the future. According to the Council of American Survey Research Organizations (“CASRO”), the Internet already represents the primary data collection mode for 39% of research companies in the USA (DeAngelis, 2006). The rate of adoption is slower in academic and official research but it is far from negligible. These technological innovations have, however, created several new methodological challenges.


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Xiaofeng Wu ◽  
Fangyuan Ren ◽  
Yiming Li ◽  
Zhenwei Chen ◽  
Xiaoling Tao

With the rapid development of Internet of Things (IoT) technology, it has been widely adopted in various fields. IoT devices, acting as information collection units, can be combined with an information processing and storage unit composed of multiple servers to build an information management system. However, the large amount of sensitive data that IoT devices transmit through the system over real wireless networks raises a series of security issues, and authentication becomes inefficient when a large number of devices connect concurrently: if each device is authenticated individually, the authentication overhead is huge and the network burden excessive. To address these problems, we propose an efficient authentication protocol for IoT devices in information management systems. In the proposed scheme, aggregated certificateless signcryption is used to complete mutual authentication and encrypted transmission of data, and a cloud server is introduced to ensure service continuity and stability. The scheme is suitable for scenarios where large-scale IoT terminal devices connect to the information management system simultaneously; it not only reduces the authentication overhead but also ensures user privacy and data integrity. The experimental results and security analysis indicate that the proposed scheme is suitable for information management systems.
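The core efficiency argument is that n per-device authentications collapse into one aggregate check. The toy sketch below illustrates only that batch-verification idea, using plain HMACs with pre-shared keys and XOR combination; it is emphatically NOT the paper's aggregated certificateless signcryption scheme (which needs no pre-shared symmetric keys and also encrypts the payload), and XOR-combined MACs are not a secure construction in general. All names and messages are made up.

```python
# Toy illustration of batch authentication: a gateway combines n device
# tags into one fixed-size aggregate, and the server authenticates the
# whole batch with a single comparison instead of n separate exchanges.
# This is a didactic stand-in, not certificateless signcryption.
import hashlib
import hmac
from functools import reduce

def tag(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def aggregate(tags):
    """XOR-combine individual tags into one fixed-size aggregate."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), tags)

# Server holds each device's key; a gateway forwards the messages plus
# one aggregate tag for the whole batch.
keys = {"dev1": b"k1", "dev2": b"k2", "dev3": b"k3"}
msgs = {"dev1": b"temp=21", "dev2": b"hum=40", "dev3": b"co2=500"}

agg = aggregate([tag(keys[d], msgs[d]) for d in keys])

# Server recomputes the aggregate from the claimed messages: one
# comparison authenticates the entire batch.
expected = aggregate([tag(keys[d], msgs[d]) for d in keys])
print(hmac.compare_digest(agg, expected))  # True
```

The bandwidth saving is the point: the aggregate stays one digest long regardless of how many devices contributed, so the verification message does not grow with the number of concurrently connected devices.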


2005 ◽  
Vol 50 (10) ◽  
pp. 573-579 ◽  
Author(s):  
Ronald Gravel ◽  
Yves Béland

As part of the Canadian Community Health Survey (CCHS) biennial strategy, the provincial survey component of the first CCHS cycle (Cycle 1.2) focused on different aspects of the mental health and well-being of Canadians living in private dwellings. Moreover, the survey collected data on prevalences of specific mental disorders and problems, use of mental health services, and economic and personal costs of having a mental illness. Data collection began in May 2002 and extended over 8 months. More than 85% of all interviews were conducted face-to-face and used a computer-assisted application. The survey obtained a national response rate of 77%. This paper describes several key aspects of the questionnaire content, the sample design, interviewer training, and data collection procedures. A brief overview of the CCHS regional component (Cycle 1.1) is also given.


1990 ◽  
Vol 29 (02) ◽  
pp. 146-152 ◽  
Author(s):  
A. Mouaddib ◽  
P. Robaux ◽  
J.M. Martin

Abstract: Three ways are proposed to help the occupational physician construct a worker’s job history, or Curriculum Laboris (CL), with a PC. The quality, and therefore the usefulness, of any job history is strongly conditioned by the method and quality of data collection. The Curriculum Laboris method explained in a previous article is briefly summarized as a point of departure. Workers subject to special medical surveillance are then considered. Next, the scrolling-menu technique is applied to the elaboration of a job history. Finally, the authors show how representing the company organization by means of a job-exposure matrix (JEM) can help elaborate job histories efficiently.
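A job-exposure matrix is simply a table mapping job titles to exposure agents, which lets exposure profiles be derived from a job history instead of being asked for item by item. The sketch below shows the general idea with invented job titles, agents and exposure flags; none of the specifics come from the article.

```python
# Minimal sketch of a job-exposure matrix (JEM): rows are job titles,
# columns are agents, cells flag exposure (1) or its absence (0).
# All job titles and agents below are illustrative assumptions.
JEM = {
    "welder":    {"metal fumes": 1, "noise": 1, "solvents": 0},
    "painter":   {"metal fumes": 0, "noise": 0, "solvents": 1},
    "machinist": {"metal fumes": 1, "noise": 1, "solvents": 1},
}

def exposures_for_history(job_history):
    """Given a worker's job history as (title, years) pairs, derive
    cumulative exposure-years per agent via the JEM."""
    totals = {}
    for job, years in job_history:
        for agent, exposed in JEM.get(job, {}).items():
            totals[agent] = totals.get(agent, 0) + exposed * years
    return totals

history = [("welder", 5), ("painter", 3)]
print(exposures_for_history(history))
# {'metal fumes': 5, 'noise': 5, 'solvents': 3}
```

Because the matrix encodes the company organization once, each worker's exposure profile falls out of the job titles alone, which is what makes JEM-based elaboration of job histories efficient.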


2001 ◽  
Vol 40 (03) ◽  
pp. 190-195 ◽  
Author(s):  
A. Junger ◽  
L. Quinzio ◽  
C. Fuchs ◽  
A. Michel ◽  
G. Sciuk ◽  
...  

Abstract: The influence of record-keeping methods on the documentation of vital signs was assessed for the Anesthesia Information Management System (AIMS) NarkoData. We compared manually entered blood-pressure readings with automatically collected data. These data were stored in a database and subsequently evaluated and analyzed. The data sets were split into two groups, "manual" and "automatic". We evaluated the effect of automatic data collection on the incidence of corrected data, data validity and data variation. Blood-pressure readings from 37,726 data sets were analyzed. We found that the method of documentation did influence data quality. It could not be determined whether incorrect data during automatic collection were caused by artefacts or by the anesthesiologist.
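One well-known signature of manual charting that such comparisons look for is digit preference: hand-entered blood pressures cluster on round values, while monitor-sampled readings do not. The sketch below shows that kind of check on invented readings; the values, threshold and variable names are illustrative assumptions, not the study's data or analysis code.

```python
# Sketch of a "manual" vs "automatic" comparison of blood-pressure
# readings (mmHg): share of round values and spread. All numbers are
# made up for illustration.
import statistics

manual    = [120, 125, 130, 120, 115, 130, 125]
automatic = [118, 127, 131, 122, 113, 129, 124]

def share_of_round_values(readings, base=5):
    """Manual charting tends to round to multiples of 5 mmHg."""
    return sum(1 for r in readings if r % base == 0) / len(readings)

print(share_of_round_values(manual))     # 1.0 -- every value is round
print(share_of_round_values(automatic))  # 0.0 -- none are
print(statistics.stdev(manual), statistics.stdev(automatic))
```

A gap like this between the two groups is evidence that the documentation method, not the patients, shapes the recorded values.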


2013 ◽  
Vol 671-674 ◽  
pp. 3130-3133
Author(s):  
Qing Yan Shuai ◽  
Ya Bo He

A project controlling information management system (PCIMS) is an important management facility for the construction of large-scale hydraulic engineering projects. Based on an analysis of the construction management features of large-scale hydraulic engineering, the general design goal of a PCIMS is put forward. The structure of a PCIMS for large-scale hydraulic engineering is designed and analyzed in detail; it consists of five parts: the account management module, the database, the input module, the data processing module and the output module. The key technologies in developing a PCIMS are also discussed, such as data-warehouse technology, real-time operation of the system and the comprehensive integration of its various components. A PCIMS for large-scale hydraulic engineering provides a convenient information interaction platform for all project participants. As a result, the accuracy and timeliness of information exchange are improved, which helps the owners make correct decisions.
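The five-part structure named in the abstract can be sketched as a composition of modules around the database. The classes, method names and sample work packages below are illustrative assumptions about how such a decomposition might look, not the paper's design.

```python
# Structural sketch of the five PCIMS parts named in the abstract:
# account management, database, input, data processing, output.
# All class and method names are illustrative assumptions.

class AccountManagement:
    """Registers participants (owner, contractor, supervisor...)."""
    def __init__(self):
        self.users = {}
    def register(self, name, role):
        self.users[name] = role

class Database:
    """Central store the other modules read from and write to."""
    def __init__(self):
        self.records = []

class InputModule:
    """Entry point for field progress reports."""
    def __init__(self, db):
        self.db = db
    def submit(self, record):
        self.db.records.append(record)

class DataProcessing:
    """Aggregates raw reports into controlling figures."""
    def __init__(self, db):
        self.db = db
    def progress(self):
        done = [r["done"] for r in self.db.records]
        return sum(done) / len(done) if done else 0.0

class OutputModule:
    """Presents processed results to project participants."""
    def __init__(self, processing):
        self.processing = processing
    def report(self):
        return f"overall progress: {self.processing.progress():.0%}"

db = Database()
accounts = AccountManagement()
accounts.register("owner-1", "owner")
entry = InputModule(db)
entry.submit({"package": "dam-excavation", "done": 0.8})
entry.submit({"package": "spillway", "done": 0.4})
print(OutputModule(DataProcessing(db)).report())  # overall progress: 60%
```

Routing every report through the shared database is what gives all participants the same timely view of the project, which is the information-interaction benefit the abstract claims.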

