Semantic Annotation
Recently Published Documents


TOTAL DOCUMENTS: 1092 (last five years: 189)
H-INDEX: 30 (last five years: 4)

2022 · Vol 12 (2) · pp. 796
Author(s): Julia Sasse, Johannes Darms, Juliane Fluck

For all research data collected, data descriptions and information about the corresponding variables are essential for data analysis and reuse. To enable cross-study comparisons and analyses, semantic interoperability of metadata is one of the most important requirements. In the area of clinical and epidemiological studies, data collection instruments such as case report forms (CRFs), data dictionaries and questionnaires are critical for metadata collection. Even though data collection instruments are often created in digital form, they are mostly not machine readable, i.e., they are not semantically coded. As a result, comparing data collection instruments is complex. The German project NFDI4Health is dedicated to the development of a national research data infrastructure for personal health data and, as such, searches for ways to enhance semantic interoperability. Retrospective integration of semantic codes into study metadata is important, as ongoing or completed studies contain valuable information. However, it is labor intensive and should be eased by software. To understand the market and find out which techniques and technologies support retrospective semantic annotation/enrichment of metadata, we conducted a literature review. In NFDI4Health, we identified basic requirements for semantic metadata annotation software in the biomedical field and in the context of the FAIR principles. Ten relevant software systems were summarized and aligned with those requirements. We concluded that, despite active research on semantic annotation systems, no system meets all requirements. Consequently, further research and software development in this area are needed, as interoperability of data dictionaries, questionnaires and data collection tools is key to reusing and combining results from independent research studies.
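As a concrete illustration of what "semantically coded" study metadata can look like, the following minimal Python sketch attaches a terminology code to a questionnaire item. The item text, field names and the SNOMED CT code are illustrative assumptions, not taken from NFDI4Health or any of the reviewed systems.

```python
# A questionnaire item as it typically appears in a digital but non-machine-readable CRF.
crf_item_plain = {
    "id": "Q12",
    "question": "Have you ever been diagnosed with hypertension?",
    "answer_type": "yes/no",
}

# The same item after retrospective semantic annotation: the concept behind the question
# is bound to a standard terminology (code shown here is an illustrative SNOMED CT example).
crf_item_annotated = {
    **crf_item_plain,
    "semantic_codes": [
        {"system": "http://snomed.info/sct", "code": "38341003", "display": "Hypertensive disorder"},
    ],
}

def same_concept(item_a: dict, item_b: dict) -> bool:
    """Cross-study comparison becomes a code comparison instead of free-text matching."""
    codes_a = {(c["system"], c["code"]) for c in item_a.get("semantic_codes", [])}
    codes_b = {(c["system"], c["code"]) for c in item_b.get("semantic_codes", [])}
    return bool(codes_a & codes_b)
```

With such codes in place, two studies that phrase the same question differently can still be matched automatically, which is the kind of interoperability the abstract calls for.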


Author(s): Noorul Wahab, Islam M Miligy, Katherine Dodd, Harvir Sahota, Michael Toss, ...

2021 · Vol 12 (1)
Author(s): Mouhamed Gaith Ayadi, Riadh Bouslimi, Jalel Akaichi

Author(s): Paolo Zampognaro, Giovanni Paragliola, Vincenzo Falanga

Internet of Things (IoT) technologies have become a milestone advancement in the digital healthcare domain: the number of IoT medical devices has grown exponentially, and it was anticipated that by 2020 there would be over 161 million of them connected worldwide. In this era of continuous growth, IoT healthcare faces various challenges, such as data collection over multiple protocols (e.g., Bluetooth, MQTT, CoAP, ZigBee), as well as the interpretation and harmonization of the data formats produced by the huge number of heterogeneous IoT medical devices. In this respect, this study proposes an advanced Home Gateway architecture that offers a unified data collection module, supporting direct data acquisition over multiple protocols (i.e., BLE and MQTT) and indirect data retrieval from cloud health services (i.e., GoogleFit). Moreover, the solution proposes a mechanism to automatically convert the original data format, carried over BLE, into HL7 FHIR by exploiting a semantic annotation of device capabilities, itself implemented by means of FHIR resources. The adoption of such annotations enables new sensors to be plugged into the instrumented environment dynamically, without the need to stop and adapt the gateway. This simplifies the customization of the device landscape requested by the various telemedicine application contexts (e.g., CVD, diabetes) and demonstrates, for the first time, a concrete example of using the FHIR standard not only (as usual) for representing and storing health resources but also as an instrument to enable seamless integration of IoT devices. The proposed solution also relies on widely adopted mobile phone technology, aiming to reduce obstacles to larger adoption.
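The following Python sketch illustrates the general idea of capability-driven conversion of a raw BLE reading into an HL7 FHIR Observation. It is not the gateway code from the paper; the device name, the field layout of the capability annotation and the chosen LOINC code are assumptions made for illustration.

```python
# Hypothetical capability annotation for a plugged-in sensor: it tells the gateway how to
# interpret a BLE characteristic without hard-coding the device into the gateway itself.
capability = {
    "device": "example-ble-thermometer",
    "characteristic_uuid": "00002a1c-0000-1000-8000-00805f9b34fb",  # BLE Temperature Measurement
    "observation": {
        "coding": {"system": "http://loinc.org", "code": "8310-5", "display": "Body temperature"},
        "unit": "Cel",  # UCUM code for degrees Celsius
    },
}

def to_fhir_observation(raw_value: float, annotation: dict, patient_ref: str) -> dict:
    """Build a FHIR Observation resource (as plain JSON) from a raw BLE value."""
    obs = annotation["observation"]
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [obs["coding"]]},
        "subject": {"reference": patient_ref},
        "valueQuantity": {
            "value": raw_value,
            "unit": obs["unit"],
            "system": "http://unitsofmeasure.org",
            "code": obs["unit"],
        },
    }

print(to_fhir_observation(36.8, capability, "Patient/example"))
```

In a scheme like this, adding a new sensor amounts to registering a new capability annotation, which matches the plug-and-play behaviour described in the abstract.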


2021 · Vol 10 (12) · pp. 825
Author(s): Jarbas Nunes Vidal-Filho, Valéria Cesário Times, Jugurta Lisboa-Filho, Chiara Renso

The term Semantic Trajectories of Moving Objects (STMO) refers to a sequence of spatio-temporal points with associated semantic information (for example, annotations about locations visited by the user or types of transportation used). However, the growth of Big Data generated by users, such as data produced on social networks or collected by electronic equipment with embedded sensors, means that STMOs require services and standards to enable data documentation and to ensure their quality. Spatial Data Infrastructures (SDI), on the other hand, provide a shared, interoperable and integrated environment for data documentation. The main challenge is how to make traditional SDIs evolve to document STMOs, given the lack of specific metadata standards and services for semantic annotation. This paper presents a new concept of SDI for STMO, named SDI4Trajectory, which supports the documentation of different types of STMO, for example holistic trajectories. The SDI4Trajectory allows us to propose semi-automatic and manual semantic enrichment processes, which are efficient in supporting semantic annotations and STMO documentation as well. These processes are hardly found in traditional SDIs and have been developed through Web and semantic micro-services. To validate the SDI4Trajectory, we used a dataset collected by voluntary users through the MyTracks application for the following purposes: (i) comparing the semi-automatic and manual semantic enrichment processes in the SDI4Trajectory; and (ii) investigating the viability of the documentation processes carried out by the SDI4Trajectory, which was able to document all the collected trajectories.
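As a minimal illustration of the data structure behind an STMO, the following Python sketch models a trajectory as spatio-temporal points carrying semantic annotations. The field names and annotation vocabulary are assumptions for illustration, not the SDI4Trajectory schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SemanticPoint:
    """One spatio-temporal point enriched with semantic annotations."""
    lat: float
    lon: float
    timestamp: datetime
    annotations: dict = field(default_factory=dict)  # e.g. visited place, transport mode

trajectory = [
    SemanticPoint(41.40, 2.17, datetime(2021, 5, 3, 8, 15), {"place": "home", "transport": None}),
    SemanticPoint(41.39, 2.16, datetime(2021, 5, 3, 8, 40), {"place": None, "transport": "bus"}),
    SemanticPoint(41.38, 2.15, datetime(2021, 5, 3, 9, 5), {"place": "office", "transport": None}),
]

# A documentation service in an SDI could index trajectories by such annotations,
# e.g. to answer "which segments were travelled by bus?".
bus_segments = [p for p in trajectory if p.annotations.get("transport") == "bus"]
print(bus_segments)
```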


2021
Author(s): Monica Palmirani, Francesco Sovrano, Davide Liga, Salvatore Sapienza, Fabio Vitali

This paper presents an AI use case developed in the project “Study on legislation in the era of artificial intelligence and digitization”, promoted by the EU Commission Directorate-General for Informatics. We propose a hybrid technical framework in which AI techniques, Data Analytics, Semantic Web approaches and LegalXML modelling produce benefits for legal drafting activity. The paper aims to classify the corrigenda of EU legislation, with the goal of detecting criteria that could prevent errors during drafting or during the publication process. We use a pipeline that combines AI, NLP, Data Analytics, semantic annotation and LegalXML instruments, enriching non-symbolic AI tools with legal knowledge interpretation offered to legal experts.
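The abstract does not spell out the pipeline's implementation, so the following Python sketch only illustrates the general shape of a corrigenda classification step with a simple NLP baseline; the error categories and example corrigenda texts are invented placeholders, not data from the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corrigenda phrased in the usual "for X read Y" style; labels are hypothetical categories.
corrigenda = [
    "for 'Article 12(3)' read 'Article 12(4)'",
    "for 'shall entry into force' read 'shall enter into force'",
    "in the Annex, Table 2, the figure '0,5' is replaced by '5,0'",
]
labels = ["cross-reference", "language", "numeric"]

# Baseline: character/word n-gram features with a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(corrigenda, labels)

print(model.predict(["for 'Article 7(1)' read 'Article 7(2)'"]))
```

A classifier of this kind would be only one stage of the hybrid framework described above; the symbolic LegalXML and Semantic Web components are not shown here.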


2021
Author(s): Adeline Nazarenko, François Lévy, Adam Wyner

Tools must be developed to help draft, consult, and explore textual legal sources. Between statistical information retrieval and the formalization of textual rules for automated legal reasoning, we defend a more pragmatic third way that enriches legal texts with a coarse-grained, interpretation-neutral semantic annotation layer. The aim is for legal texts to be enriched on a large scale at a reasonable cost, paving the way for new search capabilities that will facilitate the mining of legal sources. This approach is illustrated by a proof-of-concept experiment that consisted of semantically annotating a significant part of the French version of the GDPR. The paper presents the design methodology of the annotation language, a first version of a Core Legal Annotation Language (CLAL) together with its formalization in XML, the gold standard resulting from the annotation of the GDPR, and examples of user questions that can be answered better by semantic search than by plain-text search. This experiment demonstrates the potential of the proposed approach and provides a basis for further development. All resources developed for the GDPR experiment are language independent and publicly available.
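To give an idea of what a coarse-grained, interpretation-neutral annotation layer over a legal provision might look like, the following Python sketch serializes one GDPR obligation (Article 33(1)) as XML. The element and attribute names are illustrative guesses, not the actual CLAL vocabulary.

```python
import xml.etree.ElementTree as ET

# Hypothetical coarse-grained annotation of GDPR Art. 33(1): roles are marked,
# but no formal reasoning semantics is imposed on the text.
rule = ET.Element("rule", attrib={"source": "GDPR", "article": "33", "paragraph": "1"})
ET.SubElement(rule, "agent").text = "the controller"
ET.SubElement(rule, "deontic", attrib={"type": "obligation"}).text = "shall notify"
ET.SubElement(rule, "theme").text = "the personal data breach"
ET.SubElement(rule, "recipient").text = "the supervisory authority"
ET.SubElement(rule, "time-constraint").text = "not later than 72 hours after having become aware of it"

print(ET.tostring(rule, encoding="unicode"))
```

A plain-text search for "72 hours" would also retrieve this provision, but only the annotated structure lets a user ask, for example, for all obligations whose agent is the controller.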


2021 · Vol 72 (2) · pp. 319-329
Author(s): Aleksei Dobrov, Maria Smirnova

This article presents the current results of an ongoing study of the possibilities of fine-tuning automatic morphosyntactic and semantic annotation by improving the underlying formal grammar and ontology, using one Tibetan text as an example. The ultimate purpose of the work at this stage was to improve linguistic software developed for natural-language processing and understanding, in order to achieve complete annotation of a specific text and a state of the formal model in which all linguistic phenomena observed in the text are explained. This purpose includes the following tasks: analysis of error cases in the annotation of the text from the corpus; elimination of these errors in automatic annotation; and development of the formal grammar and updating of the dictionaries. In addition to morphosyntactic analysis, the current approach involves simultaneous semantic analysis. The article describes the semantic annotation of the corpus required for grammar revision and development, which was carried out with the use of a computer ontology. The work is carried out on one of the corpus texts, the grammatical poetic treatise Sum-cu-pa (7th century).

