Meaningful Integration of Data from Heterogeneous Health Services and Home Environment Based on Ontology

Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1747 ◽  
Author(s):  
Cong Peng ◽  
Prashant Goswami

The development of electronic health records, wearable devices, health applications and Internet of Things (IoT)-empowered smart homes is enabling a growing range of health-related applications. It also makes health self-management much more feasible, which can partially mitigate one of the challenges the current healthcare system is facing. Effective and convenient self-management of health requires the collaborative use of health data and home environment data from different services, devices, and even open data on the Web. Although health data interoperability standards such as HL7 Fast Healthcare Interoperability Resources (FHIR) and IoT ontologies such as the Semantic Sensor Network (SSN) ontology have been developed and promoted, it cannot be expected that all the different categories of services will adopt the same standard in the near future. This study presents a method that applies Semantic Web technologies to integrate health data and home environment data from heterogeneously built services and devices. We propose a Web Ontology Language (OWL)-based integration ontology that models health data from services implementing the HL7 FHIR standard, ordinary Web services, Web of Things (WoT) services and Linked Data, together with home environment data from WoT services described with formal ontologies. It operates on the resource integration layer of the layered integration architecture. An example use case with a prototype implementation shows that the proposed method successfully integrates the health data and home environment data into a resource graph. The integrated data are annotated with semantics and ontological links, which make them machine-understandable and reusable across systems.
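As a rough illustration of the kind of resource graph such an integration layer produces, the sketch below merges a FHIR-style health observation and an SSN/SOSA-described home sensor reading with rdflib. The integration terms in the ex: namespace are invented for illustration and are not the authors' ontology.

```python
# A minimal sketch (not the authors' implementation) of merging a FHIR-derived
# health observation and a SOSA/SSN-described home sensor reading into one
# RDF resource graph with rdflib. Instance URIs and ex: terms are illustrative.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

FHIR = Namespace("http://hl7.org/fhir/")
SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/integration#")  # hypothetical integration ontology

g = Graph()
g.bind("fhir", FHIR)
g.bind("sosa", SOSA)
g.bind("ex", EX)

# Health data: a heart-rate observation expressed with FHIR RDF terms.
obs = URIRef("http://example.org/patient/1/observation/hr-2019-04-01")
g.add((obs, RDF.type, FHIR.Observation))
g.add((obs, EX.observedProperty, EX.HeartRate))
g.add((obs, EX.hasValue, Literal(72, datatype=XSD.integer)))

# Home environment data: a temperature reading from a WoT device described with SOSA/SSN.
reading = URIRef("http://example.org/home/livingroom/temp-2019-04-01")
g.add((reading, RDF.type, SOSA.Observation))
g.add((reading, SOSA.observedProperty, EX.RoomTemperature))
g.add((reading, SOSA.hasSimpleResult, Literal(21.5, datatype=XSD.decimal)))

# Ontological links relate both observations to the same person,
# making the integrated graph queryable across sources.
person = URIRef("http://example.org/patient/1")
g.add((person, EX.hasHealthObservation, obs))
g.add((person, EX.hasEnvironmentObservation, reading))

print(g.serialize(format="turtle"))
```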

2016 ◽  
Vol 19 ◽  
pp. 133 ◽  
Author(s):  
Francesco Beretta ◽  
Thomas Riechert

Collaborative Research on Academic History through Linked Open Data: A Proposal for the Heloise Common Research Model
Abstract: The paper presents a proposal for the Heloise Common Research Model (HCRM), to be implemented for the European research network on digital academic history, Heloise. The objective of Heloise is to interlink databases and other digital resources stemming from several research projects in the field of academic history, in order to provide an integrated database for federated research across the network databases. The HCRM defines three layers, which are presented in detail: the Repository Layer, the Application Layer and the Research Interface Layer. As part of the application and research interface layers, the essential concepts are the symogih.org ontology and a Heloise network-specific thesaurus. These concepts have been tested on a sample of the Heloise network's datasets as part of a prototype of the envisaged platform that the authors have started implementing. The paper concludes with future developments to be accomplished within the Heloise network.
Keywords: academic history, domain ontologies, data interoperability, semantic web technologies, linked open data.
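The Research Interface Layer is meant to support federated lookups over the interlinked datasets. The sketch below shows what such a lookup could look like with SPARQLWrapper; the endpoint URL and the class IRI are placeholders, not actual Heloise or symogih.org resources.

```python
# A minimal sketch of a federated-style lookup against an aggregated SPARQL
# endpoint. Endpoint URL and class IRI are hypothetical placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://example.org/heloise/sparql"  # hypothetical aggregated endpoint

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?person ?label
WHERE {
  ?person a <http://example.org/heloise/Scholar> ;   # placeholder class
          rdfs:label ?label .
}
LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for binding in results["results"]["bindings"]:
    print(binding["person"]["value"], binding["label"]["value"])
```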


2021 ◽  
Author(s):  
Vassilis Kilintzis ◽  
Vasileios C. Alexandropoulos ◽  
Nikolaos Beredimas ◽  
Nicos Maglaveras

Manually maintaining the underlying semantic model that supports data management and addresses the interoperability challenges in the domain of telemedicine and integrated care is not a trivial task. We present a methodology that leverages the provided serializations of the Health Level Seven International (HL7) Fast Healthcare Interoperability Resources (FHIR) specification to generate a fully functional OWL ontology, along with the semantic provisions for maintaining functionality upon future changes of the standard. The developed software performs a complete conversion of the HL7 FHIR resources along with their properties, semantics and restrictions. It covers all FHIR data types (primitive and complex) and all defined resource types. It can build an ontology from scratch or update an existing ontology, providing the semantics needed to preserve information described using previous versions of the standard. All the results, based on the latest version of HL7 FHIR, are publicly available as a Web Ontology Language (OWL-DL) ontology for reuse and extension.
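The sketch below illustrates the general shape of such a conversion: reading a trimmed, hand-written fragment of a FHIR StructureDefinition and emitting OWL class and property axioms with rdflib. It is not the published converter and makes simplifying assumptions (for example, every element becomes an object property).

```python
# A minimal sketch, not the authors' converter: map a fragment of a FHIR
# StructureDefinition serialization to OWL axioms with rdflib.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

FHIR = Namespace("http://hl7.org/fhir/")

# A trimmed, hand-written fragment of a StructureDefinition's elements.
structure_definition = {
    "name": "Patient",
    "element": [
        {"path": "Patient.gender", "type": "code"},
        {"path": "Patient.birthDate", "type": "date"},
    ],
}

g = Graph()
g.bind("fhir", FHIR)
g.bind("owl", OWL)

# The resource type becomes an OWL class.
resource_class = FHIR[structure_definition["name"]]
g.add((resource_class, RDF.type, OWL.Class))

# Each element becomes a property with domain and range axioms.
for element in structure_definition["element"]:
    prop = FHIR[element["path"]]
    g.add((prop, RDF.type, OWL.ObjectProperty))
    g.add((prop, RDFS.domain, resource_class))
    g.add((prop, RDFS.range, FHIR[element["type"]]))

print(g.serialize(format="turtle"))
```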


2018 ◽  
Author(s):  
Jason Walonoski ◽  
Robert Scanlon ◽  
Conor Dowling ◽  
Mario Hyland ◽  
Richard Ettema ◽  
...  

BACKGROUND
There is wide recognition that the lack of health data interoperability has significant impacts. Traditionally, health data standards are complex, and test-driven methods have played important roles in achieving interoperability. The Health Level Seven International (HL7) Fast Healthcare Interoperability Resources (FHIR) standard may be a technical solution that aligns with policy, but systems need to be validated and tested.

OBJECTIVE
Our objective is to explore whether the regular use of validation and testing tools improves server compliance with the HL7 FHIR specification.

METHODS
We used two independent validation and testing tools, Crucible and Touchstone, and analyzed the usage and result data to determine their impact on server compliance with the HL7 FHIR specification.

RESULTS
The use of validation and testing tools such as Crucible and Touchstone is strongly correlated with increased compliance: “practice makes perfect.” Frequent and thorough testing has clear implications for health data interoperability. Additional data analysis reveals trends over time with respect to vendors, use cases, and FHIR versions.

CONCLUSIONS
Validation and testing tools can aid in the transition to an interoperable health care infrastructure. Developers that use testing and validation tools tend to produce more compliant FHIR implementations. When it comes to health data interoperability, “practice makes perfect.”
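For context, the kind of conformance check that platforms such as Crucible and Touchstone automate at scale can be approximated with FHIR's standard $validate operation. In the sketch below the server base URL is a placeholder and the resource is a trivial example.

```python
# A minimal sketch of submitting a resource to a FHIR server's standard
# $validate operation. The base URL is a hypothetical test server.
import json
import requests

FHIR_BASE = "https://fhir.example.org/baseR4"  # hypothetical test server

patient = {
    "resourceType": "Patient",
    "name": [{"family": "Chalmers", "given": ["Peter"]}],
    "gender": "male",
}

response = requests.post(
    f"{FHIR_BASE}/Patient/$validate",
    headers={"Content-Type": "application/fhir+json"},
    data=json.dumps(patient),
)

# The server answers with an OperationOutcome listing errors and warnings.
outcome = response.json()
for issue in outcome.get("issue", []):
    print(issue.get("severity"), "-", issue.get("diagnostics"))
```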


Author(s):  
Jose María Alvarez Rodríguez ◽  
José Emilio Labra Gayo ◽  
Patricia Ordoñez de Pablos

The aim of this chapter is to present a proposal and a case study for describing information about organizations in a standard way using the Linked Data approach. Several models and ontologies have been proposed to formalize the data, structure and behaviour of organizations. Nevertheless, these attempts have not been fully accepted due to several factors: (1) missing pieces to define the status of the organization; (2) entangled definitions of the structure (concepts and relations) between the elements of the organization; and (3) a lack of textual properties, among other factors. These divergences result in a set of incomplete approaches to formalizing data and information about organizations. Taking into account the current trend of applying Semantic Web technologies and Linked Data to formalize, aggregate, and share domain-specific information, a new model for organizations that takes advantage of these initiatives is required in order to overcome existing barriers and exploit corporate information in a standard way. This work is especially relevant in order to: (1) unify existing models to provide a common specification; (2) apply Semantic Web technologies and the Linked Data approach; (3) provide access to the information via standard protocols; and (4) offer new services that can exploit this information to trace the evolution and behaviour of the organization over time. Finally, this work is of interest for improving the clarity and transparency of scenarios in which organizations play a key role, such as e-procurement, e-health, or financial transactions.
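One concrete way to describe an organization as Linked Data, reusing an existing vocabulary of the kind the chapter surveys, is the W3C Organization Ontology. The sketch below uses rdflib; the organization and its units are fictitious.

```python
# A minimal sketch of describing an organization as Linked Data with the W3C
# Organization Ontology (org:). The organization itself is fictitious.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

ORG = Namespace("http://www.w3.org/ns/org#")

g = Graph()
g.bind("org", ORG)

acme = URIRef("http://example.org/org/acme")
sales = URIRef("http://example.org/org/acme/sales")

g.add((acme, RDF.type, ORG.Organization))
g.add((acme, RDFS.label, Literal("ACME Corp.")))
g.add((sales, RDF.type, ORG.OrganizationalUnit))
g.add((sales, RDFS.label, Literal("Sales Department")))
g.add((acme, ORG.hasUnit, sales))   # structural relation between elements
g.add((sales, ORG.unitOf, acme))

print(g.serialize(format="turtle"))
```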


2008 ◽  
pp. 3309-3320
Author(s):  
Csilla Farkas

This chapter investigates the threat of unwanted Semantic Web inferences. We survey the current efforts to detect and remove unwanted inferences, identify research gaps, and recommend future research directions. We begin with a brief overview of Semantic Web technologies and reasoning methods, followed by a description of the inference problem in traditional databases. In the context of the Semantic Web, we study two types of inferences: (1) entailments defined by the formal semantics of the Resource Description Framework (RDF) and the RDF Schema (RDFS) and (2) inferences supported by semantic languages like the Web Ontology Language (OWL). We compare the Semantic Web inferences to the inferences studied in traditional databases. We show that the inference problem exists on the Semantic Web and that existing security methods do not fully prevent indirect data disclosure via inference channels.
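A minimal sketch of such an inference channel follows: the released triples never state the sensitive class membership, yet RDFS domain entailment (rule rdfs2) derives it. The vocabulary is invented for illustration.

```python
# A minimal illustration of indirect disclosure via RDFS entailment: the data
# never asserts that ex:alice is a CancerPatient, but rule rdfs2 entails it.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/hospital#")

g = Graph()
# Public schema: the subject of undergoesChemotherapy is a CancerPatient.
g.add((EX.undergoesChemotherapy, RDFS.domain, EX.CancerPatient))
# Released data: a single treatment triple, with no explicit diagnosis.
g.add((EX.alice, EX.undergoesChemotherapy, EX.session42))

# Naive application of RDFS entailment rule rdfs2 (domain inference).
entailed = set()
for prop, _, domain_cls in g.triples((None, RDFS.domain, None)):
    for subj, _, _ in g.triples((None, prop, None)):
        entailed.add((subj, RDF.type, domain_cls))

for s, _, o in sorted(entailed):
    print(f"{s} rdf:type {o}")
# Prints that ex:alice is a CancerPatient, a fact never asserted in the data.
```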


2020 ◽  
Vol 1 (1) ◽  
pp. 428-444 ◽  
Author(s):  
Silvio Peroni ◽  
David Shotton

OpenCitations is an infrastructure organization for open scholarship dedicated to the publication of open citation data as Linked Open Data using Semantic Web technologies, thereby providing a disruptive alternative to traditional proprietary citation indexes. Open citation data are valuable for bibliometric analysis, increasing the reproducibility of large-scale analyses by enabling publication of the source data. Following brief introductions to the development and benefits of open scholarship and to Semantic Web technologies, this paper describes OpenCitations and its data sets, tools, services, and activities. These include the OpenCitations Data Model; the SPAR (Semantic Publishing and Referencing) Ontologies; OpenCitations’ open software of generic applicability for searching, browsing, and providing REST APIs over Resource Description Framework (RDF) triplestores; Open Citation Identifiers (OCIs) and the OpenCitations OCI Resolution Service; the OpenCitations Corpus (OCC), a database of open downloadable bibliographic and citation data made available in RDF under a Creative Commons public domain dedication; and the OpenCitations Indexes of open citation data, of which the first and largest is COCI, the OpenCitations Index of Crossref Open DOI-to-DOI Citations, which currently contains over 624 million bibliographic citations and is receiving considerable usage by the scholarly community.
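As a usage illustration, open citation data in COCI can be retrieved through the OpenCitations REST API. The endpoint path below reflects the COCI API documentation at the time of writing and should be checked against the current service; the DOI is just an example.

```python
# A minimal sketch of consuming COCI through its REST API with requests.
# Verify the endpoint path against the current OpenCitations documentation.
import requests

doi = "10.1108/EL-12-2015-0241"  # example DOI, not tied to this paper's data
url = f"https://opencitations.net/index/coci/api/v1/citations/{doi}"

response = requests.get(url)
response.raise_for_status()

for citation in response.json():
    # Each record carries an Open Citation Identifier (OCI) plus citing/cited DOIs.
    print(citation["oci"], citation["citing"], "->", citation["cited"])
```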


Author(s):  
Jyotirmaya Nanda ◽  
Henri J. Thevenot ◽  
Timothy W. Simpson ◽  
Soundar R. T. Kumara ◽  
Steven B. Shooter

By sharing product design information across a family of products, companies can increase the flexibility and responsiveness of their product realization process while shortening lead-times and reducing cost. This paper describes a preliminary attempt at using the Semantic Web paradigm, especially the Web Ontology Language (OWL), for product family information management. An overview of ongoing work on the Semantic Web is also presented. Formal product representation using OWL can not only store the structure of the product family but also help in capturing the evolution of its different components. As an illustration, a group of single-use cameras, containing several products from the Kodak single-use camera family, is represented in OWL format. The methodology of ontology development that can support product family design is discussed in detail. Product family design representation using OWL promotes better learning across products and reduces development time, system complexity, and product design lead-time.
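A small sketch of what such an OWL product-family representation can look like with rdflib follows; the class names and the component property are illustrative rather than the authors' ontology.

```python
# A minimal sketch of a product-family representation in OWL, written with
# rdflib. Class names and the component property are illustrative.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

PF = Namespace("http://example.org/product-family#")

g = Graph()
g.bind("pf", PF)

# Family and variant classes: a variant specializes the family platform.
g.add((PF.SingleUseCamera, RDF.type, OWL.Class))
g.add((PF.WaterproofCamera, RDF.type, OWL.Class))
g.add((PF.WaterproofCamera, RDFS.subClassOf, PF.SingleUseCamera))

# Components shared across the family versus variant-specific components.
g.add((PF.hasComponent, RDF.type, OWL.ObjectProperty))
g.add((PF.hasComponent, RDFS.domain, PF.SingleUseCamera))
g.add((PF.Flash, RDF.type, OWL.Class))
g.add((PF.WaterproofHousing, RDF.type, OWL.Class))

# An OWL restriction could further state that every WaterproofCamera has a
# WaterproofHousing component; plain triples keep the sketch short.
print(g.serialize(format="turtle"))
```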


Author(s):  
Gerald Beuchelt ◽  
Harry Sleeper ◽  
Andrew Gregorowicz ◽  
Robert Dingwell

Health data interoperability issues limit the expected benefits of Electronic Health Record (EHR) systems. Ideally, the medical history of a patient is recorded in a set of digital continuity of care documents which are securely available to the patient and their care providers on demand. The history of electronic health data standards includes multiple standards organizations, differing goals, and ongoing efforts to reconcile the various specifications. Existing standards define a format that is too complex for exchanging health data effectively. We propose hData, a simple XML-based framework to describe health information. hData addresses the complexities of the current HL7 Clinical Document Architecture (CDA). hData is an XML design that can be completely validated by modern XML editors and is explicitly designed for extensibility to address future health information exchange needs. hData applies established best practices for XML document architectures to the health domain, thereby facilitating interoperability, increasing software developer productivity, and thus reducing the cost for creating and maintaining EHR technologies.
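To illustrate the intent (simple, flat XML that off-the-shelf tooling can build and validate), the sketch below constructs and re-parses a small record-like document with the Python standard library; the element names are invented and are not the hData specification.

```python
# A minimal sketch in the spirit of the approach: a small, flat XML document
# for a health record section. Element names are invented for illustration.
import xml.etree.ElementTree as ET

root = ET.Element("healthRecord", attrib={"id": "patient-001"})
medication = ET.SubElement(root, "medication")
ET.SubElement(medication, "name").text = "Lisinopril"
ET.SubElement(medication, "dose").text = "10 mg"
ET.SubElement(medication, "frequency").text = "daily"

xml_bytes = ET.tostring(root, encoding="utf-8", xml_declaration=True)
print(xml_bytes.decode("utf-8"))

# Because the structure is simple and regular, a standard parser (or an XML
# Schema validator in an XML editor) can process it without CDA-specific tooling.
parsed = ET.fromstring(xml_bytes)
print(parsed.find("./medication/name").text)
```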


2016 ◽  
Vol 37 (6/7) ◽  
pp. 308-316 ◽  
Author(s):  
Myung-Ja K. Han

Purpose
Academic and research libraries have experienced many changes over the last two decades. Users have become technology savvy and want to discover and use library collections via web portals instead of coming to library gateways. To meet these rapidly changing user needs, academic and research libraries are busy identifying new service models and areas of improvement. Cataloging and metadata services units in academic and research libraries are no exception. As discovery of library collections largely depends on the quality and design of metadata, cataloging and metadata services units must identify new areas of work and establish new roles by building sustainable workflows that utilize available metadata technologies. The paper aims to discuss these issues.

Design/methodology/approach
This paper discusses a list of challenges that academic libraries’ cataloging and metadata services units have encountered over the years, and ways to build sustainable workflows, including collaborations between units in and outside of the institution and in the cloud; tools, technologies, metadata standards and Semantic Web technologies; and, most importantly, exploration and research. The paper also includes examples and use cases of both traditional metadata workflows and experimentation with linked open data that were built upon metadata technologies and will ultimately support emerging user needs.

Findings
To develop sustainable and scalable workflows that meet users’ changing needs, cataloging and metadata professionals need not only to work with new information technologies, but must also be equipped with soft skills and in-depth professional knowledge.

Originality/value
This paper discusses how cataloging and metadata services units have been exploiting information technologies and creating new scalable workflows to adapt to these changes, and what is required to establish and maintain these workflows.

