hybrid approach: Recently Published Documents

TOTAL DOCUMENTS: 8635 (five years: 3010)
H-INDEX: 89 (five years: 17)

Author(s): Tamilarasi Suresh, Tsehay Admassu Assegie, Subhashni Rajkumar, Napa Komal Kumar

Heart disease is one of the most widespread and deadliest diseases in the world. In this study, we propose a hybrid model for heart disease prediction that combines a random forest and a support vector machine. With the random forest, iterative feature elimination is carried out to select the heart disease features that improve the predictive performance of the support vector machine. Experiments on a held-out test set show that the proposed hybrid model outperforms both an individual random forest and an individual support vector machine. Overall, we develop a more accurate and computationally efficient model for heart disease prediction, with an accuracy of 98.3%. Moreover, we analyze the effect of the regularization parameter (C) and gamma on the performance of the support vector machine; the results reveal that the support vector machine is highly sensitive to both C and gamma.
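The following is a minimal sketch of the general recipe described above, assuming a generic tabular dataset from scikit-learn rather than the authors' heart disease data; the selected feature count, hyperparameter grid, and pipeline layout are illustrative assumptions, not the paper's configuration.

```python
# Sketch: random-forest-driven feature elimination feeding an RBF SVM,
# with a grid search over C and gamma to probe the SVM's sensitivity.
from sklearn.datasets import load_breast_cancer  # stand-in dataset for illustration
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

pipe = Pipeline([
    ("select", RFE(RandomForestClassifier(n_estimators=200, random_state=42),
                   n_features_to_select=10)),   # RF importances drive elimination
    ("scale", StandardScaler()),
    ("svm", SVC(kernel="rbf")),
])

grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10, 100],
                           "svm__gamma": ["scale", 0.01, 0.1, 1]}, cv=5)
grid.fit(X_train, y_train)
print("best params:", grid.best_params_)
print("test accuracy:", grid.best_estimator_.score(X_test, y_test))
```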


2022, Vol. 29 (1), pp. 1-53
Author(s): Aditya Bharadwaj, David Gwizdala, Yoonjin Kim, Kurt Luther, T. M. Murali

Modern experiments in many disciplines generate large quantities of network (graph) data. Researchers require aesthetic layouts of these networks that clearly convey the domain knowledge and meaning. However, the problem remains challenging due to multiple conflicting aesthetic criteria and complex domain-specific constraints. In this article, we present a strategy for generating visualizations that can help network biologists understand the protein interactions that underlie processes taking place in the cell. Specifically, we have developed Flud, a crowd-powered system that allows people with no biological expertise to design biologically meaningful graph layouts with the help of algorithmically generated suggestions. Furthermore, we propose a novel hybrid approach to graph layout wherein crowd workers and a simulated annealing algorithm build on each other's progress. A study of about 2,000 crowd workers on Amazon Mechanical Turk showed that the hybrid crowd-algorithm approach outperforms the crowd-only approach and state-of-the-art techniques when workers were asked to lay out complex networks representing signaling pathways. Another study of seven participants with biological training showed that Flud layouts are more effective than those created by state-of-the-art techniques. We also found that the algorithmically generated suggestions guided workers when they were stuck and helped them improve their scores. Finally, we discuss broader implications for mixed-initiative interactions in layout design tasks beyond biology.
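As a rough illustration of the algorithmic half of such a crowd-algorithm hybrid, the sketch below runs a simulated annealing pass over node positions against a toy layout score; the score function, move size, and cooling schedule are hypothetical stand-ins, not Flud's actual criteria.

```python
# Sketch: simulated annealing over a 2D graph layout with a toy aesthetic score.
import math
import random
import networkx as nx

def layout_score(G, pos):
    # Hypothetical score: penalize total edge length and near-overlapping nodes.
    edge_len = sum(math.dist(pos[u], pos[v]) for u, v in G.edges())
    overlap = sum(1 for a in G for b in G if a < b and math.dist(pos[a], pos[b]) < 0.05)
    return -(edge_len + 10 * overlap)  # higher is better

def anneal(G, pos, steps=5000, t0=1.0):
    cur = layout_score(G, pos)
    best_pos, best_score = dict(pos), cur
    for i in range(steps):
        t = t0 * (1 - i / steps)                          # linear cooling schedule
        n = random.choice(list(G.nodes()))
        old = pos[n]
        pos[n] = (old[0] + random.gauss(0, 0.05), old[1] + random.gauss(0, 0.05))
        new = layout_score(G, pos)
        if new >= cur or random.random() < math.exp((new - cur) / max(t, 1e-9)):
            cur = new                                     # accept the move
            if cur > best_score:
                best_pos, best_score = dict(pos), cur
        else:
            pos[n] = old                                  # reject the move
    return best_pos

G = nx.karate_club_graph()
layout = anneal(G, dict(nx.random_layout(G, seed=1)))
```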


2022, Vol. 169, pp. 104629
Author(s): Xiaolong Wang, Yuling He, Haipeng Wang, Aijun Hu, Xiong Zhang

2022, Vol. 25 (1), pp. 1-25
Author(s): Sibghat Ullah Bazai, Julian Jang-Jaccard, Hooman Alavizadeh

Multi-dimensional data anonymization approaches (e.g., Mondrian) ensure more fine-grained data privacy by applying a different anonymization strategy to each attribute. Many variations of multi-dimensional anonymization have been implemented on distributed processing platforms (e.g., MapReduce, Spark) to take advantage of their scalability and parallelism support. Our critical analysis of overheads shows that neither existing iteration-based nor recursion-based approaches provide effective mechanisms for creating the optimal number and relative size of resilient distributed datasets (RDDs), and they therefore suffer heavy performance overheads. To solve this issue, we propose a novel hybrid approach for implementing a multi-dimensional data anonymization strategy (e.g., Mondrian) that is both scalable and high-performing. Our hybrid approach creates far fewer RDDs, with smaller partitions attached to each RDD, than existing approaches. This optimized RDD creation and operation is critical for multi-dimensional data anonymization applications, which otherwise incur tremendous execution complexity. The new mechanism in our proposed hybrid approach can dramatically reduce the critical overheads of re-computation, shuffle operations, message exchange, and cache management.
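For readers unfamiliar with the underlying strategy, here is a minimal single-machine sketch of Mondrian-style median-cut partitioning for k-anonymity; it illustrates only the recursive splitting idea on a toy record set and does not reproduce the paper's Spark/RDD hybrid optimization.

```python
# Sketch: Mondrian-style recursive median-cut partitioning for k-anonymity.
import statistics

def mondrian(records, qid_cols, k):
    """Recursively split records on the quasi-identifier with the widest range,
    stopping when a split would leave a partition smaller than k."""
    if len(records) < 2 * k:
        return [records]                      # cannot split without breaking k
    # Choose the attribute with the largest spread across this partition.
    col = max(qid_cols, key=lambda c: max(r[c] for r in records) - min(r[c] for r in records))
    median = statistics.median(r[col] for r in records)
    left = [r for r in records if r[col] <= median]
    right = [r for r in records if r[col] > median]
    if len(left) < k or len(right) < k:
        return [records]                      # no allowable cut found
    return mondrian(left, qid_cols, k) + mondrian(right, qid_cols, k)

data = [{"age": a, "zip": z} for a, z in
        [(25, 10), (27, 11), (31, 20), (33, 22), (45, 30), (47, 33)]]
for group in mondrian(data, ["age", "zip"], k=2):
    print(group)     # each group would then be generalized to a common value range
```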


2022, Vol. 41 (1), pp. 1-21
Author(s): Linchao Bao, Xiangkai Lin, Yajing Chen, Haoxian Zhang, Sheng Wang, ...

We present a fully automatic system that can produce high-fidelity, photo-realistic three-dimensional (3D) digital human heads from a consumer RGB-D selfie camera. The system only requires the user to take a short selfie RGB-D video while rotating his/her head, and it produces a high-quality head reconstruction in less than 30 s. Our main contribution is a new facial geometry modeling and reflectance synthesis procedure that significantly improves the state of the art. Specifically, given the input video, a two-stage frame selection procedure is first employed to select a few high-quality frames for reconstruction. Then a differentiable renderer-based 3D Morphable Model (3DMM) fitting algorithm is applied to recover facial geometries from multiview RGB-D data, taking advantage of a powerful 3DMM basis constructed with extensive data generation and perturbation. Our 3DMM has much larger expressive capacity than conventional 3DMMs, allowing us to recover more accurate facial geometry using merely a linear basis. For reflectance synthesis, we present a hybrid approach that combines parametric fitting and Convolutional Neural Networks (CNNs) to synthesize high-resolution albedo/normal maps with realistic hair/pore/wrinkle details. Results show that our system can produce faithful 3D digital human faces with extremely realistic details. The main code and the newly constructed 3DMM basis are publicly available.
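To make the "linear basis" point concrete, the following sketch fits linear 3DMM coefficients to observed geometry by regularized least squares; the mean shape and basis are random placeholders rather than the paper's 3DMM, and a real system would instead optimize through a differentiable renderer against multiview RGB-D frames.

```python
# Sketch: recovering linear 3DMM shape coefficients by ridge-regularized least squares.
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_basis = 5000, 80

mean_shape = rng.normal(size=3 * n_vertices)        # flattened (x, y, z) mean mesh (placeholder)
basis = rng.normal(size=(3 * n_vertices, n_basis))  # shape basis, one mode per column (placeholder)

true_coeffs = rng.normal(size=n_basis)
observed = mean_shape + basis @ true_coeffs + rng.normal(scale=0.01, size=3 * n_vertices)

# argmin_c ||basis c - (observed - mean)||^2 + lam ||c||^2
lam = 1e-3
A = basis.T @ basis + lam * np.eye(n_basis)
b = basis.T @ (observed - mean_shape)
coeffs = np.linalg.solve(A, b)

reconstructed = mean_shape + basis @ coeffs
print("coefficient error:", np.linalg.norm(coeffs - true_coeffs))
print("vertex RMSE:", np.sqrt(np.mean((reconstructed - observed) ** 2)))
```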


2022, Vol. 16 (4), pp. 1-30
Author(s): Muhammad Abulaish, Mohd Fazil, Mohammed J. Zaki

Domain-specific keyword extraction is a vital task in the field of text mining. There are various research tasks, such as spam e-mail classification, abusive language detection, sentiment analysis, and emotion mining, where a set of domain-specific keywords (aka a lexicon) is highly effective. Existing keyword extraction works list all keywords from a document corpus rather than domain-specific ones. Moreover, most existing approaches perform well on formal document corpora but fail on noisy and informal user-generated content in online social media. In this article, we present a hybrid approach that jointly models the local and global contextual semantics of words, utilizing the strength of distributional word representations and a contrasting-domain corpus for domain-specific keyword extraction. Starting with a seed set of a few domain-specific keywords, we model the text corpus as a weighted word-graph. In this graph, the initial weight of a node (word) represents its semantic association with the target domain, calculated as a linear combination of three semantic association metrics, and the weight of an edge connecting a pair of nodes represents the co-occurrence count of the respective words. Thereafter, a modified PageRank method is applied to the word-graph to identify the most relevant words for expanding the initial set of domain-specific keywords. We evaluate our method over both formal and informal text corpora (comprising six datasets) and show that it performs significantly better than state-of-the-art methods. Furthermore, we generalize our approach to the language-agnostic case and show that it outperforms existing language-agnostic approaches.
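A minimal sketch of the seed-expansion step, using networkx's personalized PageRank over a toy co-occurrence graph; the corpus and seed set are invented, and the node weighting by semantic-association metrics described above is omitted.

```python
# Sketch: seed-biased PageRank over a word co-occurrence graph for keyword expansion.
from itertools import combinations
import networkx as nx

corpus = [
    "spam email offers free prize money click link",
    "meeting agenda project deadline report",
    "free money prize lottery winner click",
    "quarterly report budget project review",
]
seeds = {"spam", "prize"}          # initial domain-specific keywords (toy spam domain)

G = nx.Graph()
for doc in corpus:
    for u, v in combinations(set(doc.split()), 2):
        w = G.get_edge_data(u, v, {}).get("weight", 0) + 1   # co-occurrence count
        G.add_edge(u, v, weight=w)

# The personalization vector biases the random walk toward the seed keywords.
personalization = {n: (1.0 if n in seeds else 0.0) for n in G}
scores = nx.pagerank(G, alpha=0.85, personalization=personalization, weight="weight")

expanded = sorted(scores, key=scores.get, reverse=True)[:8]
print("candidate domain keywords:", expanded)
```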


Author(s): Ali Ebrahimi, Kamal Mirzaie, Ali Mohamad Latif

There are several methods for categorizing images, most of which are statistical, geometric, model-based, or structural. In this paper, a new method for describing images based on complex network models is presented. Each image contains a number of key points that can be identified through standard edge detection algorithms. To understand each image better, these points are used to create a graph of the image. To facilitate their use, the generated graphs are modeled as small-world complex networks. Complex network properties, such as topological and dynamic features, can then be used to represent image-related characteristics. These measurements are normalized and used as features for categorizing images; to this end, they are fed to a neural network. Based on these features and the neural network, comparisons between new images are performed. The results show that this method performs well at identifying similarities between images and, ultimately, categorizing them.
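As a rough sketch of this pipeline, the code below builds a k-nearest-neighbour graph over placeholder keypoints and computes a few complex-network descriptors that could be normalized and fed to a neural network; the keypoints, the value of k, and the chosen descriptors are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: keypoints -> k-NN graph -> complex-network descriptors as classifier features.
import numpy as np
import networkx as nx

rng = np.random.default_rng(42)
keypoints = rng.uniform(0, 256, size=(60, 2))      # (x, y) positions; a real pipeline
                                                   # would get these from an edge/corner detector

def build_knn_graph(points, k=4):
    """Connect each keypoint to its k nearest neighbours."""
    G = nx.Graph()
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    for i in range(len(points)):
        for j in np.argsort(d[i])[1:k + 1]:        # index 0 is the point itself
            G.add_edge(i, int(j), weight=float(d[i, j]))
    return G

def network_features(G):
    """Topological descriptors often used to characterise small-world networks."""
    comp = G.subgraph(max(nx.connected_components(G), key=len))
    return np.array([
        nx.average_clustering(G),
        nx.average_shortest_path_length(comp),     # computed on the largest component
        np.mean([deg for _, deg in G.degree()]),
        nx.density(G),
    ])

features = network_features(build_knn_graph(keypoints))
print("feature vector:", features)   # would be normalized and fed to a neural network
```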

