Interactive knowledge capture in the new millennium: how the Semantic Web changed everything

2011 ◽  
Vol 26 (1) ◽  
pp. 45-51 ◽  
Author(s):  
Yolanda Gil

The Semantic Web has radically changed the landscape of knowledge acquisition research. It used to be the case that a single user would edit a local knowledge base, that the user would have domain expertise to add to the system, and that the system would have a centralized knowledge base and reasoner. The world surrounding knowledge-rich systems changed drastically with the advent of the Web, and many of the original assumptions were no longer a given. Those assumptions had to be revisited and addressed in combination with new challenges that were put forward. Knowledge-rich systems today are distributed, have many users with different degrees of expertise, and integrate many shared knowledge sources of varying quality. Recent work in interactive knowledge capture includes new and exciting research on collaborative knowledge sharing, collecting knowledge from Web volunteers, and capturing knowledge provenance.

2016 ◽  
Vol 28 (2) ◽  
pp. 241-251 ◽  
Author(s):  
Luciane Lena Pessanha Monteiro ◽  
Mark Douglas de Azevedo Jacyntho

The study addresses the use of the Semantic Web and Linked Data principles proposed by the World Wide Web Consortium for the development of a Web application for semantic management of scanned documents. The main goal is to record scanned documents, describing them in a way that machines can understand and process, filtering content and assisting us in searching for such documents when a decision-making process is under way. To this end, machine-understandable metadata, created through the use of reference Linked Data ontologies, are associated with the documents, creating a knowledge base. To further enrich the process, a (semi)automatic mashup of these metadata with data from the Web of Linked Data is carried out, considerably increasing the scope of the knowledge base and making it possible to extract new data related to the content of the stored documents from the Web and combine them, without the user making any effort or perceiving the complexity of the whole process.
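As a rough sketch of the kind of record this approach produces, the Python fragment below (using the rdflib library) attaches machine-understandable metadata to one scanned document and links its subject to an external Linked Data resource; the archive namespace, document identifier, and property choices are illustrative assumptions, not the authors' actual schema.

```python
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, FOAF, DCTERMS, XSD

# Hypothetical namespace for the document archive (illustrative only).
ARCH = Namespace("http://example.org/archive/")

g = Graph()
doc = ARCH["scanned-doc-42"]  # assumed identifier for one scanned document

# Machine-understandable metadata for the scanned document, using Dublin Core terms.
g.add((doc, RDF.type, FOAF.Document))
g.add((doc, DCTERMS.title, Literal("Purchase contract, March 2014", lang="en")))
g.add((doc, DCTERMS.created, Literal("2014-03-21", datatype=XSD.date)))

# Link the document's subject to the Web of Linked Data, so related external
# data can later be mashed up with the local knowledge base.
g.add((doc, DCTERMS.subject, URIRef("http://dbpedia.org/resource/Contract")))

print(g.serialize(format="turtle"))
```

Because the subject is a dereferenceable Linked Data URI, a mashup step can fetch further statements about it from the Web and merge them into the same graph without any extra effort from the user.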


2017 ◽  
Vol 22 (1) ◽  
pp. 21-37 ◽  
Author(s):  
Matthew T. McCarthy

The web of linked data, otherwise known as the semantic web, is a system in which information is structured and interlinked to provide meaningful content to artificial intelligence (AI) algorithms. As the complex interactions between digital personae and these algorithms mediate access to information, it becomes necessary to understand how these classification and knowledge systems are developed. What are the processes by which those systems come to represent the world, and how are the controversies that arise in their creation overcome? As a global form, the semantic web is an assemblage of many interlinked classification and knowledge systems, which are themselves assemblages. Through the perspectives of global assemblage theory, critical code studies and practice theory, I analyse netnographic data of one such assemblage. Schema.org is but one component of the larger global assemblage of the semantic web, and as such is an emergent articulation of different knowledges, interests and networks of actors. This articulation comes together to tame the profusion of things, seeking stability in representation, but in the process it faces and produces more instability. Furthermore, this production of instability contributes to the emergence of new assemblages that have similar aims.


Web Services ◽  
2019 ◽  
pp. 1068-1076
Author(s):  
Vudattu Kiran Kumar

The World Wide Web (WWW) is a global information medium through which users can read and write using computers connected to the Internet. The Web is one of the services available on the Internet. It was created in 1989 by Sir Tim Berners-Lee, and since then its usage and applications have been greatly refined. Semantic Web technologies enable machines to interpret data published on the Web in a machine-interpretable form. The Semantic Web is not a separate Web; it is an extension of the current Web with additional semantics. Semantic technologies play a crucial role in making data understandable to machines. To achieve machine understanding, we must add semantics to existing websites. With these additional semantics, we can reach a next-level Web in which knowledge repositories are available for a better understanding of Web data, facilitating better search, more accurate filtering, and intelligent retrieval of data. This paper discusses the Semantic Web and the languages involved in describing documents in a machine-understandable format.
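A minimal sketch of the idea, assuming an invented vocabulary and page URL rather than anything from the paper: a few RDF statements, built here with Python's rdflib, add the semantics that let a machine filter and retrieve pages by what they are about instead of by keywords.

```python
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, RDFS

# Illustrative vocabulary; not taken from the paper.
EX = Namespace("http://example.org/vocab#")

g = Graph()
page = URIRef("http://example.org/articles/eiffel-tower-history")

# To a human the HTML page is readable text; these triples add the semantics
# a machine needs: what kind of resource this is and what it is about.
g.add((page, RDF.type, EX.Article))
g.add((page, EX.about, URIRef("http://dbpedia.org/resource/Eiffel_Tower")))
g.add((page, RDFS.label, Literal("A short history of the Eiffel Tower", lang="en")))

# Search now becomes filtering over facts rather than keyword matching,
# e.g. "all articles about the Eiffel Tower":
q = """
    PREFIX ex: <http://example.org/vocab#>
    SELECT ?page WHERE {
        ?page a ex:Article ;
              ex:about <http://dbpedia.org/resource/Eiffel_Tower> .
    }
"""
for row in g.query(q):
    print(row.page)
```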


Author(s):  
Rafael Cunha Cardoso ◽  
Fernando da Fonseca de Souza ◽  
Ana Carolina Salgado

Currently, systems dedicated to information retrieval/extraction play an important role in fetching relevant, qualified information from the World Wide Web (WWW). The Semantic Web can be described as the Web's future, since it introduces a set of new concepts and tools. For instance, ontologies are used to add knowledge to the contents of the current WWW in order to give those contents meaning. This allows software agents to better understand the meaning of Web content, so that such agents can execute more complex and useful tasks for users. This work introduces an architecture that combines Semantic Web concepts with regular expressions (regex) to build a system that retrieves/extracts domain-specific information from the Web. A prototype based on this architecture was developed to find information about offers announced on supermarket websites.
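The following Python sketch illustrates the general combination the architecture relies on: regular-expression extraction feeding an RDF store. The HTML snippet, regex pattern, and offer vocabulary are invented for illustration and do not reproduce the authors' prototype.

```python
import re
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF, XSD

# Illustrative namespace and page snippet; the real system crawls supermarket sites.
OFFER = Namespace("http://example.org/offers#")
html = """
  <li>Rice 1kg - $ 3.49</li>
  <li>Olive oil 500ml - $ 7.99</li>
"""

# Regular expression for "product - $ price" fragments (an assumption about
# page layout, made only for this example).
pattern = re.compile(r"<li>(?P<product>[^<]+?)\s*-\s*\$\s*(?P<price>\d+\.\d{2})</li>")

g = Graph()
for match in pattern.finditer(html):
    offer = BNode()  # anonymous node for each extracted offer
    g.add((offer, RDF.type, OFFER.Offer))
    g.add((offer, OFFER.product, Literal(match.group("product").strip())))
    g.add((offer, OFFER.price, Literal(match.group("price"), datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```

Once the extracted offers are triples rather than raw text, they can be queried and integrated with other domain knowledge like any other part of the knowledge base.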


Author(s):  
Daniel J. Weitzner ◽  
Jim Hendler ◽  
Tim Berners-Lee ◽  
Dan Connolly

In this chapter, we describe the motivations for, and development of, a rule-based policy management system that can be deployed in the open and distributed milieu of the World Wide Web. We discuss the necessary features of such a system in creating a “Policy Aware” infrastructure for the Web and argue for the necessity of such infrastructure. We then show how the integration of a Semantic Web rules language (N3) with a theorem prover designed for the Web (Cwm) makes it possible to use the Hypertext Transfer Protocol (HTTP) to provide a scalable mechanism for the exchange of rules and, eventually, proofs for access control on the Web. We also discuss which aspects of the Policy Aware Web are enabled by the current mechanism and describe future research needed to make the widespread deployment of rules and proofs on the Web a reality.
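The chapter's system expresses policies as N3 rules evaluated by the Cwm reasoner and exchanged over HTTP; purely as an illustration of the underlying idea, the toy Python sketch below shows how a single membership rule can be applied to a set of facts to derive an access decision. All names and facts are invented, and this ad-hoc code stands in for, rather than reproduces, the N3/Cwm machinery.

```python
# Facts about the requester, e.g. gathered from Web sources the requester cites.
facts = {
    ("alice", "memberOf", "w3c-policy-group"),
    ("w3c-policy-group", "hasAccessTo", "http://example.org/draft-report"),
}

# One rule: if ?person is a member of ?group and ?group has access to ?doc,
# then ?person may read ?doc.
def infer_access(facts):
    granted = set()
    for (person, rel1, group) in facts:
        if rel1 != "memberOf":
            continue
        for (group2, rel2, doc) in facts:
            if rel2 == "hasAccessTo" and group2 == group:
                granted.add((person, "mayRead", doc))
    return granted

print(infer_access(facts))
# {('alice', 'mayRead', 'http://example.org/draft-report')}
```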


Author(s):  
Christopher Walton

In the previous chapter we described three languages for representing knowledge on the Semantic Web: RDF, RDFS, and OWL. These languages enable us to create Web-based knowledge in a standard manner with a common semantics. We now turn our attention to the techniques that can utilize this knowledge in an automated manner. These techniques are fundamental to the construction of the Semantic Web, as without automation we gain no real benefit over the current Web. There are currently two views of the Semantic Web that have implications for the kind of automation we can hope to achieve: (1) an expert system with a distributed knowledge base, and (2) a society of agents that solve complex knowledge-based tasks. In the first view, the Semantic Web is essentially treated as a single-user application that reasons about some Web-based knowledge, for example a service that queries the knowledge to answer specific questions. This is a perfectly acceptable view, and its realization is significantly challenging. However, in this book we primarily subscribe to the second view. In this more generalized view, the knowledge is not treated as a single body, and it is not necessary to obtain a global view of the knowledge. Instead, the knowledge is exchanged and manipulated in a peer-to-peer (P2P) manner between different entities. These entities act on behalf of human users and require only enough knowledge to perform the task to which they are assigned. The use of entities to solve complex problems on the Web is captured by the notion of an agent. In human terms, an agent is an intermediary who makes a complex organization externally accessible; for example, a travel agent simplifies the problem of booking a holiday. This concept of simplifying the interface to a complex framework is a key goal of the Semantic Web. We would like to make it straightforward for a human to interact with a wide variety of disparate sources of knowledge without becoming mired in the details. To accomplish this, we want to define software agents that act with characteristics similar to those of human agents.
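To make the second view concrete, here is a deliberately toy Python sketch (entirely invented, not taken from the book) in which each agent holds only its own local knowledge and answers queries by delegating to peers, so no global knowledge base is ever assembled.

```python
# Toy illustration of the "society of agents" view: each agent holds only part
# of the knowledge and answers peers' queries, hiding the other sources from
# the human user, much as a travel agent hides airlines and hotels.

class Agent:
    def __init__(self, name, facts):
        self.name = name
        self.facts = facts          # local knowledge only
        self.peers = []             # other agents it can ask

    def ask(self, query):
        # Answer from local knowledge if possible, otherwise delegate to peers.
        if query in self.facts:
            return self.facts[query]
        for peer in self.peers:
            answer = peer.ask(query)
            if answer is not None:
                return answer
        return None

flights = Agent("flight-agent", {"flight London->Rome": "price 120 EUR"})
hotels = Agent("hotel-agent", {"hotel Rome, 2 nights": "price 180 EUR"})
travel = Agent("travel-agent", {})       # knows nothing itself
travel.peers = [flights, hotels]

# The user asks only the travel agent; knowledge is exchanged peer to peer.
print(travel.ask("flight London->Rome"))
print(travel.ask("hotel Rome, 2 nights"))
```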


Author(s):  
Christopher Walton

In the introductory chapter of this book, we discussed the means by which knowledge can be made available on the Web, that is, the representation of the knowledge in a form in which it can be automatically processed by a computer. To recap, we identified two essential steps that were deemed necessary to achieve this task. (1) We discussed the need to agree on a suitable structure for the knowledge that we wish to represent. This is achieved through the construction of a semantic network, which defines the main concepts of the knowledge and the relationships between these concepts. We presented an example network that contained the main concepts to differentiate between kinds of cameras. Our network is a conceptualization, or an abstract view of a small part of the world. A conceptualization is defined formally in an ontology, which is in essence a vocabulary for knowledge representation. (2) We discussed the construction of a knowledge base, which is a store of knowledge about a domain in machine-processable form; essentially a database of knowledge. A knowledge base is constructed through the classification of a body of information according to an ontology. The result will be a store of facts and rules that describe the domain. Our example described the classification of different camera features to form a knowledge base. The knowledge base is expressed formally in the language of the ontology over which it is defined. In this chapter we elaborate on these two steps to show how we can define ontologies and knowledge bases specifically for the Web. This will enable us to construct Semantic Web applications that make use of this knowledge. The chapter is devoted to a detailed explanation of the syntax and pragmatics of the RDF, RDFS, and OWL Semantic Web standards. The Resource Description Framework (RDF) is an established standard for knowledge representation on the Web. Taken together with the associated RDF Schema (RDFS) standard, we have a language for representing simple ontologies and knowledge bases on the Web.
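A small sketch of these two steps for the camera example, written with Python's rdflib; the class and property names are assumptions made for illustration rather than the book's own vocabulary. The first group of triples is the ontology (the conceptualization), the second is the knowledge base classified according to it.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, XSD

# Illustrative camera vocabulary; names are not taken from the book.
CAM = Namespace("http://example.org/camera#")

g = Graph()

# Ontology: the conceptualization (classes, their relationships, and properties).
g.add((CAM.Camera, RDF.type, RDFS.Class))
g.add((CAM.DigitalCamera, RDF.type, RDFS.Class))
g.add((CAM.DigitalCamera, RDFS.subClassOf, CAM.Camera))
g.add((CAM.megapixels, RDF.type, RDF.Property))
g.add((CAM.megapixels, RDFS.domain, CAM.DigitalCamera))
g.add((CAM.megapixels, RDFS.range, XSD.decimal))

# Knowledge base: facts classified according to the ontology above.
g.add((CAM.ModelX, RDF.type, CAM.DigitalCamera))
g.add((CAM.ModelX, CAM.megapixels, Literal("24.2", datatype=XSD.decimal)))
g.add((CAM.ModelX, RDFS.label, Literal("Model X", lang="en")))

print(g.serialize(format="turtle"))
```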


Author(s):  
Kevin Curran ◽  
Gary Gumbleton

Tim Berners-Lee, director of the World Wide Web Consortium (W3C), states that “The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation” (Berners-Lee, 2001). The Semantic Web will bring structure to the meaningful content of Web pages, creating an environment where software agents, roaming from page to page, can readily carry out sophisticated tasks for users. The Semantic Web (SW) is a vision of the Web in which information is linked up in such a way that machines can process it more easily. It is generating interest not just because Tim Berners-Lee is advocating it, but because it aims to solve the problem of information being locked away in HTML documents, which are easy for humans to extract information from but difficult for machines to process. We discuss the Semantic Web here.


Author(s):  
Rui G. Pereira ◽  
Mario M. Freire

The World Wide Web (WWW, Web, or W3) is known as the largest accessible repository of human knowledge. It contains around 3 billion documents, which may be accessed by more than 500 million users worldwide. In the 13 years since its appearance in 1991, the Web has grown so enormously that it is safe to say no phenomenon in history compares to it. It has become so important that it is now an indispensable part of people's lives (Daconta, Obrst & Smith, 2003).

