Ontology-specific visual canvas generation to facilitate sense-making – an algorithmic approach

Author(s):  
Kaneeka Vidanage ◽  
Noor Maizura Mohamad Noor ◽  
Rosmayati Mohemad ◽  
Zuriana Abu Bakar

Ontologies are domain-specific conceptualizations that are both human- and machine-readable. Owing to this remarkable attribute, their applications are not limited to computing domains: banking, medicine, agriculture, and law are a few of the non-computing domains where ontologies are used very effectively. When creating ontologies for non-computing domains, the involvement of domain specialists such as bankers, lawyers, and farmers becomes vital. Since they are not semantic specialists, specially designed visualization assistance is required for ontology schema verification and sense-making. Existing visualization methods are not fine-tuned for non-technical domain specialists and involve many complexities. In this research, a novel algorithm capable of generating a visualization canvas friendlier to domain specialists is explored. The proposed algorithm and visualization canvas have been tested on three different domains, yielding an overall success rate of 85%.
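
The paper's algorithm is not reproduced in the abstract; as a hedged sketch of the general idea only (turning an ontology's class hierarchy into a diagram a domain specialist can read), the snippet below draws a toy subclass hierarchy with networkx and matplotlib. The class names and layout choices are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' algorithm): render a toy ontology
# class hierarchy as a simple, specialist-friendly diagram.
# The banking classes below are hypothetical examples.
import networkx as nx
import matplotlib.pyplot as plt

# (subclass, superclass) pairs standing in for an ontology's taxonomy
subclass_of = [
    ("SavingsAccount", "Account"),
    ("CurrentAccount", "Account"),
    ("Account", "BankingConcept"),
    ("Loan", "BankingConcept"),
]

g = nx.DiGraph(subclass_of)
pos = nx.spring_layout(g, seed=42)  # deterministic layout for reproducibility
nx.draw_networkx(g, pos, node_color="lightblue", node_size=2500,
                 font_size=8, arrows=True)
plt.axis("off")
plt.savefig("ontology_canvas.png", dpi=150)
```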

Semantic Web ◽  
2020 ◽  
pp. 1-29
Author(s):  
Bettina Klimek ◽  
Markus Ackermann ◽  
Martin Brümmer ◽  
Sebastian Hellmann

In recent years, lexical resources have emerged rapidly on the Semantic Web. Whereas most of the linguistic information is already machine-readable, we found that morphological information is mostly absent or only contained in semi-structured strings. An integration of morphemic data has not yet been undertaken due to the lack of existing domain-specific ontologies and explicit morphemic data. In this paper, we present the Multilingual Morpheme Ontology, MMoOn Core, which can be regarded as the first comprehensive ontology for the linguistic domain of morphological language data. We describe how crucial concepts like morphs, morphemes, word forms and meanings are represented and interrelated, and how language-specific morpheme inventories can be created as a new kind of morphological dataset. The aim of the MMoOn Core ontology is to serve as a shared semantic model for linguists and NLP researchers alike, enabling the creation, conversion, exchange, reuse and enrichment of morphological language data across different data-dependent language sciences. Various use cases are therefore illustrated to draw attention to the cross-disciplinary potential that can be realized with the MMoOn Core ontology in the context of the existing Linguistic Linked Data research landscape.
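
As a hedged illustration of what encoding morphemic data as RDF looks like in practice, the sketch below builds two triples with rdflib. The MMoOn base IRI, the property names and the example German morph "un" are simplified placeholders assumed for illustration; they are not necessarily the ontology's exact vocabulary.

```python
# Illustrative only: expressing a morph and its string form as RDF
# triples. Property and class IRIs are placeholders, not MMoOn's
# exact terms.
from rdflib import Graph, Literal, Namespace

MMOON = Namespace("http://mmoon.org/core/")    # assumed base IRI
EX = Namespace("http://example.org/deu/")      # hypothetical inventory

g = Graph()
g.add((EX.un, MMOON.isMorphOf, EX.NegationMorpheme))        # placeholder property
g.add((EX.un, MMOON.hasStringRepresentation, Literal("un")))
print(g.serialize(format="turtle"))
```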


Author(s):  
M. Ben Ellefi ◽  
P. Drap ◽  
O. Papini ◽  
D. Merad ◽  
J. P. Royer ◽  
...  

A key challenge in cultural heritage (CH) site visualization is to provide models and tools that effectively integrate the content of CH data with domain-specific knowledge, so that users can query, interpret and consume the visualized information. Moreover, it is important that intelligent visualization systems be interoperable in the Semantic Web environment and thus capable of establishing a methodology to acquire, integrate, analyze, generate and share numeric content and associated knowledge in a human- and machine-readable Web. In this paper, we present a model, a methodology and software Web tools that support the coupling of the 2D/3D Web representation with the knowledge graph database of the Xlendi shipwreck. The Web visualization tools and the knowledge-based techniques are married into a photogrammetry-driven ontological model, while at the same time user-friendly Web tools for querying and semantic consumption of the shipwreck information are introduced.
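
The abstract mentions querying and semantic consumption of the shipwreck knowledge graph; as a minimal sketch of that style of interaction, the snippet below runs a SPARQL query over a small in-memory graph with rdflib. All IRIs and property names here are illustrative placeholders, not the project's actual schema.

```python
# Hedged sketch: SPARQL over a toy artefact graph. Schema is invented
# for illustration only.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/xlendi#")
g = Graph()
g.add((EX.amphora_01, RDF.type, EX.Amphora))
g.add((EX.amphora_01, EX.depthInMetres, Literal(110.5)))

results = g.query("""
    PREFIX ex: <http://example.org/xlendi#>
    SELECT ?artefact ?depth WHERE {
        ?artefact a ex:Amphora ; ex:depthInMetres ?depth .
    }
""")
for artefact, depth in results:
    print(artefact, depth)   # geometry metadata coupled with semantics
```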


Author(s):  
Loránd Lehel Tóth ◽  
Raymond Pardede ◽  
Gábor Hosszú

The article presents a method to decipher Rovash inscriptions made by the Szekelys in the 15th–18th centuries. The difficulty of the deciphering work is that a large portion of the Rovash inscriptions contains incomplete words, calligraphic glyphs or grapheme errors. Based on the topological parameters of the undeciphered symbols registered in the database, the presented novel algorithm estimates the meaning of the inscriptions from the matching accuracies of the recognized graphemes and gives a statistical probability for the decipherment. The developed algorithm was implemented in software, which also contains a built-in dictionary; based on this dictionary, the method takes the context into account when identifying the meaning of the inscription. The proposed algorithm offers one or more candidate words with associated probability values, from which users can select the relevant one. The article also presents experimental results, which demonstrate the efficiency of the method.
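
The authors' topology-based matcher is not spelled out in the abstract; as a loose analogue of the final dictionary-backed ranking step only, the sketch below scores candidate words for a partially recognized inscription using difflib similarity ratios. The word list and the '?' convention for an unreadable grapheme are assumptions for illustration.

```python
# Not the authors' algorithm: rank dictionary words against a partially
# recognized inscription and let the user pick among candidates.
import difflib

dictionary = ["kiraly", "kiralyne", "kapu"]   # toy built-in word list
recognized = "k?raly"                          # '?' = unreadable grapheme

scores = {w: difflib.SequenceMatcher(None, recognized, w).ratio()
          for w in dictionary}
for word, p in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {p:.2f}")   # highest-scoring candidates first
```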


2021 ◽  
Author(s):  
Francesca Frexia ◽  
Cecilia Mascia ◽  
Luca Lianas ◽  
Giovanni Delussu ◽  
Alessandro Sulis ◽  
...  

The FAIR Principles are a set of recommendations that aim to underpin knowledge discovery and integration by making the research outcomes Findable, Accessible, Interoperable and Reusable. These guidelines encourage the accurate recording and exchange of structured data, coupled with contextual information about their creation, expressed in domain-specific standards and machine-readable formats. This paper analyses the potential support to FAIRness of the openEHR e-health standard, by theoretically assessing the compliance with each of the 15 FAIR principles of a hypothetical Clinical Data Repository (CDR) developed according to the openEHR specifications. Our study highlights how the openEHR approach, thanks to its computable semantics-oriented design, is inherently FAIR-enabling and is a promising implementation strategy for creating FAIR-compliant CDRs.


Author(s):  
Francesca Frexia ◽  
Cecilia Mascia ◽  
Luca Lianas ◽  
Giovanni Delussu ◽  
Alessandro Sulis ◽  
...  

The FAIR Principles are a set of recommendations that aim to underpin knowledge discovery and integration by making the research outcomes Findable, Accessible, Interoperable and Reusable. These guidelines encourage the accurate recording and exchange of data, coupled with contextual information about their creation, expressed in domain-specific standards and machine-readable formats. This paper analyses the potential support to FAIRness of the openEHR specifications and reference implementation, by theoretically assessing their compliance with each of the 15 FAIR principles. Our study highlights how the openEHR approach, thanks to its computable semantics-oriented design, is inherently FAIR-enabling and is a promising implementation strategy for creating FAIR-compliant Clinical Data Repositories (CDRs).


Author(s):  
Lynne C. Howarth

With the proliferation of digitized resources accessible via Internet and intranet knowledge bases, and a pressing need to develop more sophisticated tools for the identification and retrieval of electronic resources, both general-purpose and domain-specific metadata schemes have assumed a particular prominence. While recent work emanating from the World Wide Web Consortium (W3C) has focused on the Resource Description Framework (RDF), and metadata maps or “crosswalks” have been created to support the interoperability of metadata standards -- thus converting metatags from diverse domains from simply “machine-readable” to “machine-understandable” -- the next iteration, to “human-understandable,” remains a challenge. This apparent gap provides a framework for three-phase research (Howarth, 2000, 1999) to develop a tool which will provide a “human-understandable” front-end search assist for any XML-compliant metadata scheme. Findings from phase one, the analysis and mapping of seven metadata schemes, identify the particular challenges of designing a common “namespace”, populated with element tags which are appropriately descriptive yet readily understood by a lay searcher, when there is little congruence within, and a high degree of variability across, the metadata schemes under study. Implications for the subsequent design and testing of both the proposed “metalevel ontology” (phase two) and the prototype search-assist tool (phase three) are examined.
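
In its simplest form, a crosswalk is just a mapping between the element tags of two metadata schemes; the sketch below shows that idea with a handful of commonly cited Dublin Core to MARC pairings. The field choices are representative only and are not drawn from the seven schemes analyzed in the study.

```python
# Illustrative crosswalk: Dublin Core elements mapped to MARC fields.
# Pairings follow common published crosswalks; treat them as examples.
DC_TO_MARC = {
    "title":   "245",   # title statement
    "creator": "100",   # main entry, personal name
    "subject": "650",   # subject added entry, topical term
}

record = {"title": "Semantic Web Primer", "creator": "Doe, J."}
marc_view = {DC_TO_MARC[k]: v for k, v in record.items()}
print(marc_view)   # {'245': 'Semantic Web Primer', '100': 'Doe, J.'}
```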


Author(s):  
Felix S. Wang ◽  
Céline Gianduzzo ◽  
Mirko Meboldt ◽  
Quentin Lohmeyer

Eye tracking (ET) technology is increasingly utilized to quantify visual behavior in the study of the development of domain-specific expertise. However, the identification and measurement of distinct gaze patterns using traditional ET metrics has been challenging, and the insights gained shown to be inconclusive about the nature of expert gaze behavior. In this article, we introduce an algorithmic approach for the extraction of object-related gaze sequences and determine task-related expertise by investigating the development of gaze sequence patterns during a multi-trial study of a simplified airplane assembly task. We demonstrate the algorithm in a study where novice (n = 28) and expert (n = 2) eye movements were recorded in successive trials (n = 8), allowing us to verify whether similar patterns develop with increasing expertise. In the proposed approach, AOI sequences were transformed to string representation and processed using the k-mer method, a well-known method from the field of computational biology. Our results for expertise development suggest that basic tendencies are visible in traditional ET metrics, such as the fixation duration, but are much more evident for k-mers of k > 2. With increased on-task experience, the appearance of expert k-mer patterns in novice gaze sequences was shown to increase significantly (p < 0.001). The results illustrate that the multi-trial k-mer approach is suitable for revealing specific cognitive processes and can quantify learning progress using gaze patterns that include both spatial and temporal information, which could provide a valuable tool for novice training and expert assessment.
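
The core k-mer step the abstract describes (encode the AOI sequence as a string, then count all substrings of length k) can be shown in a few lines. The AOI letter encoding below is hypothetical; the counting logic itself is the standard k-mer technique.

```python
# Sketch of the k-mer idea applied to gaze data: count all length-k
# substrings of a string-encoded AOI sequence.
from collections import Counter

def kmers(sequence: str, k: int) -> Counter:
    """Count all length-k substrings of an AOI sequence string."""
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

# Hypothetical encoding: A=fuselage, B=wing, C=instructions
gaze = "ABCABCAAB"
print(kmers(gaze, 3))   # 'ABC' appears twice -> a recurring gaze pattern
```

Recurring k-mers shared between expert and novice sequences can then be compared across trials, which is how a pattern-appearance trend like the one the study reports would be quantified.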


Author(s):  
Yuji Matsumoto

This article deals with the acquisition of lexical knowledge, instrumental in complementing the inherently ambiguous process of NLP (natural language processing). Lexical representations are mostly simple and superficial; the thesaurus is an apt example. Two primary resources for acquiring lexical knowledge are ‘corpora’ and the ‘machine-readable dictionary’ (MRD). The former are mostly domain-specific and monolingual, while the definitions in an MRD are generally described by a ‘genus term’ followed by a set of differentiae. Auxiliary technical aspects of the acquisition process are covered as well, such as ‘lexical collocation’ and ‘association’, referring to the deliberate co-occurrence of words that together form a new meaning and lose it whenever a synonym replaces either word. The first seminal work on collocation extraction from large text corpora was compiled around the early 1990s, using inter-word mutual information to locate collocations. Abundant corpus data is obtainable from the Linguistic Data Consortium (LDC).
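
Mutual-information collocation scoring, in the spirit of the early-1990s work the article cites, reduces to pointwise mutual information: PMI(x, y) = log2(p(x, y) / (p(x) p(y))). The toy corpus below is invented for illustration.

```python
# Minimal PMI sketch: score adjacent word pairs by pointwise mutual
# information over a toy corpus; high PMI suggests a collocation.
import math
from collections import Counter

tokens = "strong tea strong coffee powerful computer strong tea".split()
bigrams = list(zip(tokens, tokens[1:]))

word_freq = Counter(tokens)
pair_freq = Counter(bigrams)
n = len(tokens)

def pmi(x: str, y: str) -> float:
    # p(x, y) estimated over bigram positions, p(x), p(y) over tokens
    return math.log2((pair_freq[(x, y)] / (n - 1)) /
                     ((word_freq[x] / n) * (word_freq[y] / n)))

print(f"PMI(strong, tea) = {pmi('strong', 'tea'):.2f}")
```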


2021 ◽  
Vol 2 (3) ◽  
pp. 147-157
Author(s):  
Kaneeka Vidanage ◽  
Noor Maizura Mohamad Noor ◽  
Rosmayati Mohemad ◽  
Zuriana Abu Bakar

Ontology sense-making, or visual comprehension of the ontological schemata and structure, is vital for cross-validation of the ontology increment during applied ontology construction. It is also important to query the ontology in order to verify the accuracy of the stored knowledge embeddings. This boosts the interactions between domain specialists and ontologists in applied ontology construction processes. Since existing mechanisms have numerous deficiencies (discussed in the paper), a new algorithm is proposed in this research to improve the use of tree-maps for effective ontology sense-making. The proposed algorithm and prototype are quantitatively and qualitatively assessed for their accuracy and efficacy.
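
The paper's tree-map algorithm is not given in the abstract; as a bare-bones sketch of the visualization target only, the snippet below renders ontology classes as a tree-map sized by instance counts, using the squarify package (assumed installed) with matplotlib. Class names and counts are illustrative.

```python
# Not the paper's algorithm: a minimal tree-map of ontology classes,
# with rectangle area proportional to (hypothetical) instance counts.
import matplotlib.pyplot as plt
import squarify

classes = {"Account": 40, "Loan": 25, "Customer": 20, "Branch": 15}
squarify.plot(sizes=list(classes.values()), label=list(classes.keys()))
plt.axis("off")
plt.savefig("ontology_treemap.png", dpi=150)
```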

