Understanding Romanian Texts by Using Gamification Methods

Author(s):  
Ștefania-Eliza Berghia ◽  
Bogdan Pahomi ◽  
Daniel Volovici

In recent years, there has been increasing interest in the field of natural language processing. Determining the correct syntactic function of a specific word is an important task in this field, useful for a variety of applications such as text understanding, automatic translation, question-answering applications, and even e-learning systems. In the Romanian language, this is an even harder task because of the complexity of the grammar. The present paper falls within the field of “Natural Language Processing”, but it also blends in other concepts such as “Gamification”, “Social Choice Theory” and “Wisdom of the Crowd”. The application in this paper was developed with two main purposes:

a) to give students a support tool through which they can deepen their knowledge of the syntactic functions of the parts of speech, knowledge that they have accumulated during lessons at school;

b) to collect data about how students make their choices and how they know which grammatical role is correct for a specific word, these data being essential for replicating the learning process.
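The abstract does not specify how the collected student choices are combined; a minimal sketch of one standard aggregation rule from social choice theory, plurality voting over the students' answers, is shown below. The function name, the word, and the vote data are illustrative assumptions, not from the paper.

```python
from collections import Counter

def aggregate_votes(answers):
    """Return the syntactic function chosen by the most students
    (plurality rule), together with its share of the vote."""
    counts = Counter(answers)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(answers)

# Hypothetical student answers for one word in one sentence
votes = ["subject", "subject", "direct object", "subject", "attribute"]
label, confidence = aggregate_votes(votes)
print(label, confidence)  # subject 0.6
```

The vote share doubles as a simple confidence signal: words where the crowd splits evenly are exactly the ones worth inspecting when replicating the learning process.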

2021 ◽  
Vol 23 (2) ◽  
pp. 40-44
Author(s):  
Olivia Fragoso-Diaz ◽  
Vitervo Lopez Caballero ◽  
Juan Carlos Rojas-Perez ◽  
Rene Santaolaya-Salgado ◽  
Juan Gabriel Gonzalez-Serna

Poetics ◽  
1990 ◽  
Vol 19 (1-2) ◽  
pp. 99-120
Author(s):  
Stefan Wermter ◽  
Wendy G. Lehnert

2020 ◽  
Vol 34 (05) ◽  
pp. 8504-8511
Author(s):  
Arindam Mitra ◽  
Ishan Shrivastava ◽  
Chitta Baral

Natural Language Inference (NLI) plays an important role in many natural language processing tasks such as question answering. However, existing NLI modules that are trained on existing NLI datasets have several drawbacks. For example, they do not capture the notion of entity and role well and often end up making mistakes such as inferring “Peter signed a deal” from “John signed a deal”. As part of this work, we have developed two datasets that help mitigate such issues and make systems better at understanding the notions of “entities” and “roles”. After training the existing models on the new datasets, we observe that they do not perform well on one of the new benchmarks. We then propose a modification to the “word-to-word” attention function that has been uniformly reused across several popular NLI architectures. The resulting models perform as well as their unmodified counterparts on the existing benchmarks and significantly better on the new benchmarks that emphasize “roles” and “entities”.
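The abstract does not reproduce the attention formula or the proposed modification. A common form of the word-to-word attention reused across NLI architectures scores each premise–hypothesis word pair by a dot product and softmax-normalises over the hypothesis; a minimal NumPy sketch of that unmodified baseline (all names and dimensions are illustrative):

```python
import numpy as np

def word_to_word_attention(premise, hypothesis):
    """Dot-product word-to-word attention between two sentences'
    word embeddings: score[i, j] = premise[i] . hypothesis[j],
    softmax-normalised over hypothesis words, then used to build
    an aligned representation of each premise word."""
    scores = premise @ hypothesis.T                  # (m, n) pairwise scores
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # softmax per premise word
    return weights @ hypothesis                      # (m, d) aligned vectors

# Toy 4-dimensional embeddings: 2 premise words, 3 hypothesis words
premise = np.random.rand(2, 4)
hypothesis = np.random.rand(3, 4)
aligned = word_to_word_attention(premise, hypothesis)
print(aligned.shape)  # (2, 4)
```

Because the score depends only on embedding similarity, "John" and "Peter" (both person names with similar vectors) align strongly, which is one way the entity confusion described above can arise.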


2021 ◽  
Vol 47 (05) ◽  
Author(s):  
NGUYỄN CHÍ HIẾU

In recent years, knowledge graphs have been applied in many fields such as search engines, semantic analysis, and question answering. However, there are many obstacles to building knowledge graphs, such as methodologies, data, and tools. This paper introduces a novel methodology for building a knowledge graph from heterogeneous documents. We use natural language processing and deep learning methodologies to build this graph. The knowledge graph can be used in question answering systems and information retrieval, especially in the computing domain.
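The paper's extraction pipeline is not detailed in the abstract; independently of how triples are obtained, the resulting graph is typically a set of (subject, relation, object) edges that can be queried. A minimal sketch of such a store, with hypothetical triples from the computing domain:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal triple store: subject -> list of (relation, object)."""

    def __init__(self):
        self.edges = defaultdict(list)

    def add_triple(self, subj, rel, obj):
        self.edges[subj].append((rel, obj))

    def query(self, subj, rel):
        """Return all objects linked to `subj` by relation `rel` --
        the lookup a question answering system would perform."""
        return [o for r, o in self.edges[subj] if r == rel]

kg = KnowledgeGraph()
kg.add_triple("Python", "is_a", "programming language")
kg.add_triple("Python", "used_for", "natural language processing")
print(kg.query("Python", "is_a"))  # ['programming language']
```

A question such as "What is Python?" then reduces to mapping the question to a (subject, relation) pair and reading off the stored objects.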


Author(s):  
Saravanakumar Kandasamy ◽  
Aswani Kumar Cherukuri

Semantic similarity quantification between concepts is one of the inevitable parts of domains like Natural Language Processing, Information Retrieval, Question Answering, etc., needed to understand texts and their relationships better. Over the last few decades, many measures have been proposed that incorporate various corpus-based and knowledge-based resources. WordNet and Wikipedia are two such knowledge-based resources. The contribution of WordNet to the domains above is enormous due to its richness in defining a word and all of its relationships with others. In this paper, we propose an approach to quantify the similarity between concepts that exploits the synsets and the gloss definitions of different concepts using WordNet. Our method considers the gloss definitions, the contextual words that help define a word, the synsets of those contextual words, and the confidence of occurrence of a word in another word's definition when calculating similarity. Evaluation on different gold-standard benchmark datasets shows the efficiency of our system in comparison with other existing taxonomical and definitional measures.
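The paper's full measure is more elaborate, but its gloss-based component builds on a classic idea: two concepts are similar to the extent that their definitions share content words. A minimal sketch (the glosses here are hard-coded for illustration; in practice they would come from WordNet, e.g. via NLTK's `wn.synsets("car")[0].definition()`):

```python
STOPWORDS = frozenset({"a", "an", "the", "of", "to", "in", "is", "with"})

def gloss_overlap(gloss_a, gloss_b):
    """Lesk-style gloss similarity: Jaccard overlap of the content
    words appearing in two definitions."""
    words_a = {w for w in gloss_a.lower().split() if w not in STOPWORDS}
    words_b = {w for w in gloss_b.lower().split() if w not in STOPWORDS}
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

# Hypothetical glosses for two related concepts
car = "a motor vehicle having four wheels"
auto = "a self-propelled motor vehicle"
print(round(gloss_overlap(car, auto), 2))  # 0.33
```

The abstract's refinements, weighting contextual words by their synsets and by how confidently one word occurs in another word's definition, can be seen as replacing this flat set overlap with a weighted one.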


Author(s):  
Rahul Sharan Renu ◽  
Gregory Mocko

The objective of this research is to investigate the requirements and performance of parts-of-speech tagging of assembly work instructions. Natural Language Processing of assembly work instructions is required to perform data mining with the objective of knowledge reuse. Assembly work instructions are key process engineering elements that allow for predictable assembly quality of products and predictable assembly lead times. Authoring of assembly work instructions is a subjective process. It has been observed that most assembly work instructions are not grammatically complete sentences. It is hypothesized that this can lead to false parts-of-speech tagging (by Natural Language Processing tools). To test this hypothesis, two parts-of-speech taggers are used to tag 500 assembly work instructions (obtained from the automotive industry). The first parts-of-speech tagger is obtained from the Natural Language Toolkit (nltk.org) and the second parts-of-speech tagger is obtained from the Stanford Natural Language Processing Group (nlp.stanford.edu). For each of these taggers, two experiments are conducted. In the first experiment, the assembly work instructions are input to each tagger in raw form. In the second experiment, the assembly work instructions are preprocessed to make them grammatically complete, and then input to the tagger. It is found that the Stanford Natural Language Processing tagger with the preprocessed assembly work instructions produced the fewest false parts-of-speech tags.
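The abstract does not spell out the preprocessing step. As a hedged illustration of what "making an instruction grammatically complete" might involve, one simple transformation adds an explicit subject and sentence-final punctuation to the imperative fragment before it reaches a tagger such as NLTK's `pos_tag` or the Stanford tagger (this is an assumed reading, not the paper's exact procedure):

```python
def preprocess(instruction):
    """Rewrite an imperative work-instruction fragment into a
    grammatically complete sentence by prepending an explicit
    subject and appending a terminating period."""
    text = instruction.strip().rstrip(".")
    return f"You {text[0].lower()}{text[1:]}."

print(preprocess("Install bolt A"))  # You install bolt A.
```

With an explicit subject present, a statistical tagger is less likely to mistake the sentence-initial verb ("Install") for a noun, which is the kind of false tag the experiments measure.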


2019 ◽  
Vol 2 (1) ◽  
pp. 53-64
Author(s):  
Herwin H Herwin

STMIK Amik Riau has a portal at http://www.sar.ac.id that serves as a medium for disseminating information to the academic community and stakeholders. The average number of visitors per day over the last three months was 150, with increases during student admissions in each academic year. This indicates growing public interest in information about STMIK Amik Riau. Unfortunately, until now the use of the website portal has been one-way, from STMIK Amik Riau to stakeholders and the public, and not the other way around. Stakeholder communication with the institution regarding the portal's content takes place via social media and is not integrated with the website; the same applies to feedback, corrections, responses, and other communication. Until now, visitors to the website portal, whether from the general public or stakeholders, could not be detected at the time of their visit and so could not be greeted according to the "3S" philosophy, even though the general public who have visited are a potential market to educate. Visitors to the website portal should be politely greeted by the system, followed by direct communication: a machine is available that is ready to give a greeting and serve every question a visitor asks. This research aims to build a chatbot that can communicate with website visitors. The chatbot that was built is named STMIK Amik Riau Intelligence Virtual Information, abbreviated SILVI. The chatbot is based on Question Answering Systems (QAS) and works with a similarity algorithm between two texts. This research produced a ready-to-use application, named SILVI, that is able to communicate with website visitors. The chatbot makes the communication so seamless that visitors, as if unaware, still regard their interlocutor as the right employee for the relevant duties and functions.
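The text-similarity algorithm behind SILVI is not specified in the abstract; a common choice for retrieval-based QAS chatbots is bag-of-words cosine similarity between the visitor's question and each stored question. A minimal sketch (the FAQ entries below are hypothetical, not SILVI's actual knowledge base):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two texts."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical stored question -> answer pairs
faq = {
    "what are the admission requirements": "See the admissions page.",
    "where is the campus located": "The address is listed on the contact page.",
}
question = "where is your campus located"
best = max(faq, key=lambda q: cosine_similarity(q, question))
print(faq[best])  # The address is listed on the contact page.
```

The chatbot answers with the stored response whose question is most similar to the visitor's input; a minimum-similarity threshold would let it fall back to a polite "I don't know" instead of guessing.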


Author(s):  
Kiran Raj R

Today, everyone has a personal device to access the web, and every user tries to access the knowledge they require through the internet. Most of this knowledge is stored in the form of a database. A user with limited knowledge of databases will have difficulty accessing the data in them. Hence, there is a need for a system that permits users to access the knowledge within the database. The proposed method is to develop a system that takes natural language as input and produces an SQL query, which is used to access the database and retrieve the information with ease. Tokenization, parts-of-speech tagging, lemmatization, parsing, and mapping are the steps involved in the process. The proposed project would demonstrate the use of Natural Language Processing (NLP) to map an English-language query, in accordance with regular expressions, to SQL.
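The mapping step described above can be sketched with a single regular expression that recognises one restricted question shape and emits the corresponding SQL. The pattern, the table, and the column names below are illustrative assumptions; a real system would layer this on top of the tokenization, tagging, lemmatization, and parsing steps first:

```python
import re

# Hypothetical pattern: "show <columns> of <table> [where <col> is <value>]"
PATTERN = re.compile(
    r"show (?P<cols>[\w, ]+) of (?P<table>\w+)"
    r"(?: where (?P<col>\w+) is (?P<val>\w+))?",
    re.IGNORECASE,
)

def to_sql(question):
    """Map a restricted English question to an SQL query via
    a regular-expression template."""
    m = PATTERN.match(question.strip().rstrip("?"))
    if not m:
        raise ValueError("question does not match any known pattern")
    cols = ", ".join(c.strip() for c in m["cols"].split(","))
    sql = f"SELECT {cols} FROM {m['table']}"
    if m["col"]:
        sql += f" WHERE {m['col']} = '{m['val']}'"
    return sql

print(to_sql("show name, age of students where grade is A"))
# SELECT name, age FROM students WHERE grade = 'A'
```

Each supported question shape gets its own pattern, so coverage grows by adding templates; in production the values would be bound as query parameters rather than interpolated, to avoid SQL injection.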

