Analysis of tweets to find the basis of popularity based on events semantic similarity

2018 ◽  
Vol 14 (4) ◽  
pp. 438-452 ◽  
Author(s):  
Rajat Kumar Mudgal ◽  
Rajdeep Niyogi ◽  
Alfredo Milani ◽  
Valentina Franzoni

Purpose – The purpose of this paper is to propose and experiment with a framework for analysing tweets to find the basis of a person's popularity and to extract the reasons supporting that popularity. Although the problem of analysing tweets to detect popular events and trends has recently attracted extensive research effort, little emphasis has been given to finding the reasons behind the popularity of a person based on tweets. Design/methodology/approach – The authors introduce a framework to find the reasons behind the popularity of a person based on the analysis of events and the evaluation of a Web-based semantic set similarity measure applied to tweets. The methodology uses the semantic similarity measure to group similar tweets into events. Although the tweets may not contain identical hashtags, they can refer to a unique topic with equivalent or related terminology. A special data structure maintains event information, related keywords and statistics to extract the reasons supporting popularity. Findings – An implementation of the algorithms has been tested on a data set of 218,490 tweets from five different countries for popularity detection and reason extraction. The experimental results are quite encouraging and consistent in determining the reasons behind popularity. Because the Web-based semantic similarity measure is based on statistics extracted from search engines, it dynamically adapts the similarity values to variations in word correlation driven by current social trends. Originality/value – To the best of the authors' knowledge, the proposed method for finding the reason for popularity in short messages is original. The semantic set similarity presented in the paper is an original asymmetric variant of a similarity scheme developed in the context of semantic image recognition.
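To make the grouping step concrete, the following is a minimal Python sketch, not the authors' implementation: it assumes a PMI-style word similarity derived from search-engine hit counts (the `web_similarity` helper, the total page count and the grouping threshold are illustrative) and an asymmetric set similarity used to greedily assign tweets, represented as keyword sets, to events. In practice `word_sim(a, b)` would wrap `web_similarity` over cached hit counts returned by a search engine.

```python
import math

def web_similarity(hits_x, hits_y, hits_xy, total_pages=1e10):
    """PMI-style word similarity from search-engine hit counts:
    hits for x alone, y alone, and the conjunctive query "x y"."""
    if hits_xy == 0 or hits_x == 0 or hits_y == 0:
        return 0.0
    pmi = math.log((hits_xy * total_pages) / (hits_x * hits_y))
    return max(0.0, pmi / math.log(total_pages))  # rough normalisation to [0, 1]

def set_similarity(keywords_a, keywords_b, word_sim):
    """Asymmetric set similarity: how well set A is covered by set B."""
    if not keywords_a or not keywords_b:
        return 0.0
    return sum(max(word_sim(a, b) for b in keywords_b) for a in keywords_a) / len(keywords_a)

def group_tweets(tweet_keyword_sets, word_sim, threshold=0.6):
    """Greedily assign each tweet (a set of keywords) to the first event it is
    sufficiently similar to; otherwise start a new event."""
    events = []  # each event: {"keywords": set, "tweets": list}
    for kw in tweet_keyword_sets:
        for ev in events:
            if set_similarity(kw, ev["keywords"], word_sim) >= threshold:
                ev["tweets"].append(kw)
                ev["keywords"] |= kw        # grow the event vocabulary
                break
        else:
            events.append({"keywords": set(kw), "tweets": [kw]})
    return events
```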

The concept of relevancy is a central topic in the information retrieval process. In the last few years there has been a drastic increase in digital data, so there is a need to improve the accuracy of information retrieval. Semantic similarity measures the closeness between a word pair by using WordNet as the ontology. We have analysed the different categories of semantic similarity algorithms that compute the semantic closeness between word pairs and evaluated their values using WordNet. We have compared the various algorithms on the Miller-Charles data set of 30 word pairs and used the results to rank them category-wise.
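As an illustration of how such WordNet-based measures can be computed and compared, here is a minimal Python sketch using NLTK (assuming the WordNet and WordNet-IC corpora are installed); the three word pairs are a small sample from the Miller-Charles set, and the choice of measures is illustrative rather than the full set evaluated in the study.

```python
import math
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')   # information content used by the Lin measure

def best_score(w1, w2, metric):
    """Maximum similarity over all noun-sense pairs of the two words."""
    scores = [metric(s1, s2)
              for s1 in wn.synsets(w1, pos=wn.NOUN)
              for s2 in wn.synsets(w2, pos=wn.NOUN)]
    scores = [s for s in scores if s is not None and not math.isnan(s)]
    return max(scores) if scores else 0.0

pairs = [("car", "automobile"), ("gem", "jewel"), ("coast", "shore")]  # sample pairs
measures = {
    "path": lambda a, b: a.path_similarity(b),           # edge-counting based
    "wup":  lambda a, b: a.wup_similarity(b),            # depth based (Wu-Palmer)
    "lin":  lambda a, b: a.lin_similarity(b, brown_ic),  # information-content based
}
for name, metric in measures.items():
    print(name, [round(best_score(a, b, metric), 3) for a, b in pairs])
```

In the study proper, each algorithm's scores over all 30 pairs would then be correlated with the human judgements to rank the measures category-wise.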


2019 ◽  
Vol 38 (2) ◽  
pp. 399-419 ◽  
Author(s):  
M. Priya ◽  
Aswani Kumar Ch.

Purpose – The purpose of this paper is to merge ontologies in a way that removes redundancy and improves storage efficiency. The number of ontologies developed in the past few years is noticeably high. With the availability of these ontologies the needed information can be obtained smoothly, but the presence of comparably varied ontologies raises the problems of rework and of merging data. Assessment of the existing ontologies exposes superfluous information; hence, ontology merging is the natural solution. Existing ontology merging methods focus only on highly relevant classes and instances, whereas somewhat relevant classes and instances are simply dropped, even though they may also be useful or relevant to the given domain. In this paper, we propose a new method called hybrid semantic similarity measure (HSSM)-based ontology merging, which uses formal concept analysis (FCA) and a semantic similarity measure. Design/methodology/approach – The HSSM categorizes relevancy into three classes, namely highly relevant, moderately relevant and least relevant classes and instances. To achieve high efficiency in merging, HSSM performs both an FCA part and a semantic similarity part. Findings – The experimental results show that HSSM produces better results than existing algorithms in terms of similarity distance and time. An inconsistency check can also be done for the dissimilar classes and instances within an ontology. The output ontology contains the highly relevant and moderately relevant classes and instances as well as a few least relevant classes and instances, which eventually leads to an exhaustive ontology for the particular domain. Practical implications – The proposed HSSM method is used to merge academic social network ontologies and is observed to be an extremely powerful methodology compared with former studies. The HSSM approach can be applied to various domain ontologies and may offer a novel vision to researchers. Originality/value – To the best of the authors' knowledge, HSSM has not been applied to ontology merging in any former study.
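A rough idea of the relevance-based part of such a merge can be sketched as follows. This is an illustration under assumptions, not the authors' HSSM implementation: class labels are plain strings, `sim` is any semantic similarity function returning values in [0, 1], and the two thresholds are illustrative.

```python
HIGH, MODERATE = 0.8, 0.5   # illustrative relevancy thresholds

def classify_pairs(classes_a, classes_b, sim):
    """Bucket class pairs from two ontologies by their semantic similarity."""
    buckets = {"high": [], "moderate": [], "least": []}
    for a in classes_a:
        for b in classes_b:
            s = sim(a, b)
            if s >= HIGH:
                buckets["high"].append((a, b, s))
            elif s >= MODERATE:
                buckets["moderate"].append((a, b, s))
            else:
                buckets["least"].append((a, b, s))
    return buckets

def merge(classes_a, classes_b, sim):
    """Keep one representative for highly similar pairs, and retain moderately
    relevant classes instead of dropping them outright."""
    buckets = classify_pairs(classes_a, classes_b, sim)
    merged = set(classes_a)
    covered_b = {b for _, b, _ in buckets["high"]}   # b already represented by some a
    merged |= set(classes_b) - covered_b
    return merged, buckets
```

In the paper's approach, the FCA part would additionally organise the shared classes and instances into a concept lattice before merging; that step is omitted from this sketch.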


2012 ◽  
Vol 38 (2) ◽  
pp. 229-235 ◽  
Author(s):  
Wen-Qing LI ◽  
Xin SUN ◽  
Chang-You ZHANG ◽  
Ye FENG

Author(s):  
Henry Larkin

Purpose – The purpose of this paper is to investigate the feasibility of creating a declarative user interface language suitable for rapid prototyping of mobile and Web apps. Moreover, this paper presents a new framework for creating responsive user interfaces using JavaScript. Design/methodology/approach – Very little existing research has been done on JavaScript-specific declarative user interface (UI) languages for mobile Web apps. This paper introduces a new framework, along with several case studies that create modern responsive designs programmatically. Findings – The fully implemented prototype verifies the feasibility of a JavaScript-based declarative user interface library. This paper demonstrates that existing solutions are unwieldy and cumbersome when it comes to dynamically creating and adjusting nodes within a visual syntax of program code. Originality/value – This paper presents the Guix.js platform, a declarative UI library for rapid development of Web-based mobile interfaces in JavaScript.


2016 ◽  
Vol 24 (1) ◽  
pp. 93-115 ◽  
Author(s):  
Xiaoying Yu ◽  
Qi Liao

Purpose – Passwords have been designed to protect individual privacy and security and are widely used in almost every area of our life. The strength of passwords is therefore critical to the security of our systems. However, due to the explosion of user accounts and the increasing complexity of password rules, users are struggling to find ways to make up sufficiently secure yet easy-to-remember passwords. This paper aims to investigate whether there are repetitive patterns when users choose passwords and how such behaviors may lead us to rethink password security policy. Design/methodology/approach – The authors develop a model to formalize the password repetition problem and design efficient algorithms to analyze the repeat patterns. To help security practitioners analyze patterns, the authors design and implement a lightweight, Web-based visualization tool for interactive exploration of password data. Findings – Through case studies on a real-world leaked password data set, the authors demonstrate how the tool can be used to identify various interesting patterns, e.g. shorter substrings of the same type used to make up longer strings, which are then repeated to make up the final passwords, suggesting that the length requirement of a password policy does not necessarily increase security. Originality/value – The contributions of this study are two-fold. First, the authors formalize the problem of password repetitive patterns by considering both short and long substrings and both directions, which have not been considered in the past. Efficient algorithms are developed and implemented that can analyze various repeat patterns quickly, even in large data sets. Second, the authors design and implement four novel visualization views that are particularly useful for the exploration of password repeat patterns, i.e. the character frequency charts view, the short repeat heatmap view, the long repeat parallel coordinates view and the repeat word cloud view.
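As a small illustration of the kind of repeat pattern the paper analyses, the following Python sketch (not the authors' algorithms or visualization tool) flags passwords that are a shorter substring repeated verbatim and tallies the repeating units across a password list.

```python
from collections import Counter

def smallest_repeating_unit(pw):
    """Return the shortest prefix whose repetition produces the whole password,
    or None if the password is not a pure repeat (e.g. 'abcabc' -> 'abc')."""
    n = len(pw)
    for size in range(1, n // 2 + 1):
        if n % size == 0 and pw[:size] * (n // size) == pw:
            return pw[:size]
    return None

def repeat_statistics(passwords):
    """Count how often each repeating unit occurs across a password list."""
    units = Counter()
    for pw in passwords:
        unit = smallest_repeating_unit(pw)
        if unit:
            units[unit] += 1
    return units

print(repeat_statistics(["123123", "abcabcabc", "password", "qwqwqwqw"]))
# Counter({'123': 1, 'abc': 1, 'qw': 1})
```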


2015 ◽  
Vol 22 (4) ◽  
pp. 530-544 ◽  
Author(s):  
Arjen van Witteloostuijn

Purpose – The purpose of this paper is to argue that the time is ripe to establish a powerful tradition in Experimental International Business (IB). Probably due to what the author refers to as the external validity myth, experimental laboratory designs are underutilized in IB, which implies that the internal validity miracle of randomized experimentation goes largely unnoticed in this domain of the broader management discipline. Design/methodology/approach – In the following pages, the author explains why he believes this implies a missed opportunity, providing arguments and examples along the way. Findings – Although an Experimental Management tradition has never really gained momentum, to the author the lab experimental design has a very bright future in IB (and management at large). To facilitate the development of an Experimental IB tradition, initiating web-based tools would be highly instrumental. This will not only boost further progress in IB research, but will also increase the effectiveness and playfulness of IB teaching. Originality/value – Given the high potential of an Experimental IB, the Cross-Cultural and Strategic Management journal will offer a platform for such exciting and intriguing laboratory work, cumulatively contributing to the establishment of an Experimental IB tradition.


2017 ◽  
Vol 45 (2) ◽  
pp. 66-74
Author(s):  
Yufeng Ma ◽  
Long Xia ◽  
Wenqi Shen ◽  
Mi Zhou ◽  
Weiguo Fan

Purpose – The purpose of this paper is the automatic classification of TV series reviews into generic categories. Design/methodology/approach – The authors mainly replace specific role and actor names in reviews with surrogates to make the reviews more generic. In addition, feature selection techniques and different kinds of classifiers are incorporated. Findings – With role and actor names replaced by generic tags, the experimental results showed that the model generalizes well to unseen TV series compared with reviews that keep the original names. Research limitations/implications – The model presented in this paper must be built on top of an existing knowledge base such as Baidu Encyclopedia, and building such a database takes a lot of work. Practical implications – In a digital information supply chain, for example, if reviews are part of the information to be transported or exchanged, the model presented in this paper can help automatically identify individual reviews according to different requirements and support information sharing. Originality/value – One original contribution is the surrogate-based approach that makes reviews more generic. The authors also built a review data set of popular Chinese TV series, which includes eight generic category labels for each review.
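A minimal sketch of the surrogate idea is given below. The actor/role name lists, example reviews, category labels and the scikit-learn pipeline are all illustrative assumptions, not the authors' data or model: specific names are replaced with generic tags before feature selection and classification.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In the paper these names come from a knowledge base such as Baidu Encyclopedia;
# here they are hypothetical placeholders.
ACTOR_NAMES = ["Alice Zhang", "Bob Li"]
ROLE_NAMES = ["Detective Wang"]

def to_generic(review):
    """Replace specific actor/role names with surrogate tags so that a classifier
    trained on one series can generalize to reviews of another."""
    for name in ACTOR_NAMES:
        review = re.sub(re.escape(name), "ACTOR", review)
    for name in ROLE_NAMES:
        review = re.sub(re.escape(name), "ROLE", review)
    return review

reviews = ["Alice Zhang was brilliant as Detective Wang",
           "The storyline around Detective Wang dragged in the middle episodes"]
labels = ["acting", "plot"]   # illustrative generic category labels

clf = make_pipeline(TfidfVectorizer(),
                    SelectKBest(chi2, k=5),   # simple feature selection step
                    LogisticRegression())
clf.fit([to_generic(r) for r in reviews], labels)
print(clf.predict([to_generic("Bob Li shines as Detective Wang")]))
```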

