An Analytical Model for Evaluating Social Security Schemes: A Focus on “Ayushman Bharat” Universal Health Scheme in India

2019 ◽  
Vol 8 (3) ◽  
pp. 8929-8936

Government-initiated social security schemes in countries such as India target a large proportion of the population, providing various types of benefits and involving many stakeholders. Such schemes are executed through a large number of real-time transactions between government agencies and the other stakeholders, resulting in very large data sets. Current research on social security schemes covers the analysis of sequential activities and debt occurrences for such transactions at the national level only. Monitoring and evaluating the performance of such gigantic schemes, which also involve financial decision making at several levels, remains a challenge. This paper proposes an innovative framework that combines data mining strategies with actuarial techniques to evaluate one of the popular schemes in India, AB-PMJAY (“Ayushman Bharat–Pradhan Mantri Jan Arogya Yojana”), launched by the Government in 2018 and operating at the family level. In the proposed framework, the scheme is divided into a number of sub-processes, and data mining techniques such as clustering, classification, and anomaly detection, together with actuarial pricing techniques, are proposed to evaluate the scheme effectively at the micro level.
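The abstract names several analytic sub-processes (clustering, classification, anomaly detection, actuarial pricing). The sketch below is a minimal, hypothetical illustration of how such a pipeline could be wired together with scikit-learn; the synthetic claim fields (claim amount, length of stay) and all thresholds are assumptions, not details from the AB-PMJAY data or the authors' framework.

```python
# Minimal sketch of a micro-level evaluation pipeline of the kind described:
# cluster claim transactions, flag anomalous ones, then price each cluster.
# Column meanings (claim_amount, length_of_stay) are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic stand-in for family-level claim records: amount and stay length.
claims = np.column_stack([
    rng.gamma(shape=2.0, scale=5000.0, size=1000),  # claim_amount (INR)
    rng.poisson(lam=4.0, size=1000),                # length_of_stay (days)
])

X = StandardScaler().fit_transform(claims)

# Sub-process 1: group similar claims by cost/stay profile.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Sub-process 2: anomaly detection to surface claims worth auditing.
flags = IsolationForest(contamination=0.02, random_state=0).fit_predict(X)
print("suspect claims:", int((flags == -1).sum()))

# Sub-process 3: a crude actuarial pure premium per cluster
# (expected claim cost per family), purely to illustrate the idea.
for k in range(4):
    premium = claims[labels == k, 0].mean()
    print(f"cluster {k}: pure premium ~ {premium:,.0f} INR")
```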

2021 ◽  
pp. 1826-1839
Author(s):  
Sandeep Adhikari ◽  
Sunita Chaudhary

The exponential growth in the use of computers over networks, together with the proliferation of applications that run on different platforms, has drawn attention to network security. Attackers exploit security flaws in operating systems that are both technically difficult and costly to fix, so intrusion has become a key threat to the credibility, availability, and confidentiality of computer resources worldwide. The Intrusion Detection System (IDS) is critical in detecting network anomalies and attacks. In this paper, data mining principles are combined with an IDS to efficiently and quickly identify important, confidential data of interest to the user. The proposed algorithm addresses four issues: data classification, high levels of human interaction, lack of labeled data, and the effectiveness of distributed denial-of-service attacks. We also develop a decision tree classifier with a variety of tunable parameters. The previous algorithm achieved a classification rate of up to 90% and was not appropriate for large data sets; our proposed algorithm was designed to classify large data sets accurately. In addition, we quantify several further decision tree classifier parameters.
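As a rough illustration of the decision-tree classification step, the following sketch trains scikit-learn's DecisionTreeClassifier on synthetic "network flow" features. The features, labels, and parameter values are illustrative assumptions, not the authors' algorithm or data.

```python
# A minimal sketch of decision-tree classification for intrusion detection,
# on synthetic flow features. Everything here is invented for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000
# Hypothetical flow features: duration, bytes sent, packets per second.
X = rng.normal(size=(n, 3))
# Synthetic label: call traffic an "attack" when any feature is extreme.
y = (np.abs(X).max(axis=1) > 2.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The abstract mentions tuning several tree parameters; max_depth and
# min_samples_leaf are two such knobs that control overfitting at scale.
clf = DecisionTreeClassifier(max_depth=6, min_samples_leaf=20, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```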


2014 ◽  
Vol 644-650 ◽  
pp. 2120-2123 ◽  
Author(s):  
De Zhi An ◽  
Guang Li Wu ◽  
Jun Lu

At present there are many data mining methods. This paper studies the application of rough set methods in data mining, focusing on rough-set-based attribute reduction algorithms in the rule extraction stage of data mining. In data mining, rough sets are often used for knowledge reduction and, in turn, for rule extraction. Attribute reduction is one of the core research topics in rough set theory. In this paper, the traditional attribute reduction algorithm based on rough sets is studied and improved, and a new attribute reduction algorithm suited to the large data sets of data mining is proposed.
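To make attribute reduction concrete, here is a toy sketch of the classical rough-set reduct computation: an attribute subset preserves the decision if objects that agree on those attributes never disagree on the decision, and a minimal such subset is a reduct. This shows the standard notion only; the abstract does not specify the authors' improved algorithm, so the decision table and code below are invented.

```python
# Toy rough-set attribute reduction: find the smallest condition-attribute
# subsets that still discern every pair of objects with different decisions.
from itertools import combinations

# Decision table: rows = objects, last column = decision attribute.
table = [
    (1, 0, 1, "yes"),
    (1, 1, 0, "no"),
    (0, 0, 1, "yes"),
    (0, 1, 0, "no"),
    (1, 0, 0, "yes"),
]
n_attrs = 3

def consistent(attrs):
    """True if objects identical on `attrs` never disagree on the decision."""
    seen = {}
    for row in table:
        key = tuple(row[a] for a in attrs)
        if key in seen and seen[key] != row[-1]:
            return False
        seen.setdefault(key, row[-1])
    return True

# The smallest consistent attribute subsets are the reducts.
for size in range(1, n_attrs + 1):
    reducts = [c for c in combinations(range(n_attrs), size) if consistent(c)]
    if reducts:
        print("minimal reducts:", reducts)  # here: attribute 1 alone suffices
        break
```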


Author(s):  
Md. Zakir Hossain ◽  
Md.Nasim Akhtar ◽  
R.B. Ahmad ◽  
Mostafijur Rahman

Data mining is the process of finding structure in large data sets. With this process, decision makers can make particular decisions for the further development of real-world problems. Several data clustering techniques are used in data mining to find specific patterns in data. The K-Means method is one of the familiar clustering techniques for clustering large data sets. The K-Means clustering method partitions the data set on the assumption that the number of clusters is fixed. The main problem with this method is that if the number of clusters is chosen too small, there is a higher probability of placing dissimilar items in the same group; on the other hand, if the number of clusters is chosen too high, there is a higher chance of placing similar items in different groups. In this paper, we address this issue by proposing a new K-Means clustering algorithm that performs data clustering dynamically. The proposed method initially calculates a threshold value as a centroid of K-Means, and based on this value the number of clusters is formed. At each iteration of K-Means, if the Euclidean distance between two points is less than or equal to the threshold value, the two data points are placed in the same group; otherwise, the proposed method creates a new cluster with the dissimilar data point. The results show that the proposed method outperforms the original K-Means method.
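A minimal sketch of the threshold-driven clustering the abstract describes: a point joins the nearest existing cluster when its Euclidean distance to that cluster's centroid is within the threshold, and otherwise seeds a new cluster. The threshold value and data here are assumptions; the paper derives its threshold from the data in a way the abstract does not fully specify.

```python
# Threshold-based dynamic clustering: the cluster count is not fixed up
# front but grows whenever a point is too far from every existing centroid.
import numpy as np

def threshold_kmeans(points, threshold):
    centroids, members = [], []
    for p in points:
        if centroids:
            d = np.linalg.norm(np.array(centroids) - p, axis=1)
            j = int(d.argmin())
            if d[j] <= threshold:
                members[j].append(p)
                # Recompute the centroid, as in a K-Means iteration.
                centroids[j] = np.mean(members[j], axis=0)
                continue
        # Dissimilar point: start a new cluster around it.
        centroids.append(np.array(p, dtype=float))
        members.append([p])
    return centroids, members

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
cents, groups = threshold_kmeans(data, threshold=2.0)
print("clusters found:", len(cents))  # two well-separated blobs -> 2
```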


Author(s):  
Kimberly R. Huyser ◽  
Sofia Locklear

American Indian and Alaska Native (AIAN) Peoples are diverse, but their diversity is statistically flattened in national-level survey data and, subsequently, in contemporary understandings of race and inequality in the United States. This chapter demonstrates the utility of disaggregated data for gaining, for instance, nuanced information on social outcomes such as educational attainment and income levels, and shaping resource allocation accordingly. Throughout, it explores both reasons and remedies for AIAN invisibility in large data sets. Using their personal identities as a case in point, the authors argue for more refined survey instruments, informed by Indigenous modes of identity and affiliation, not only to raise the statistical salience of AIANs but also to paint a fuller picture of a vibrant, heterogeneous First Peoples all too often dismissed as a vanishing people.


Author(s):  
Ana Cristina Bicharra Garcia ◽  
Inhauma Ferraz ◽  
Adriana S. Vivacqua

Most past approaches to data mining have been based on association rules. However, the simple application of association rules usually only changes the user's problem from dealing with millions of data points to dealing with thousands of rules. Although this may somewhat reduce the scale of the problem, it is not a completely satisfactory solution. This paper presents a new data mining technique, called knowledge cohesion (KC), which takes into account a domain ontology and the user's interest in exploring certain data sets to extract knowledge, in the form of semantic nets, from large data sets. The KC method has been successfully applied to mine causal relations from oil platform accident reports. In a comparison with association rule techniques for the same domain, KC has shown a significant improvement in the extraction of relevant knowledge, using processing complexity and knowledge manageability as the evaluation criteria.
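To see the "thousands of rules" problem the authors start from, the toy sketch below enumerates association rules over a handful of invented accident-report keywords: even five transactions with modest support and confidence thresholds already yield more rules than transactions. The item names and thresholds are invented for illustration; this is the baseline KC improves on, not the KC method itself.

```python
# Brute-force association rule enumeration over a tiny transaction set,
# showing how quickly the rule count outgrows the data.
from itertools import combinations

transactions = [
    {"valve", "pump", "leak"},
    {"valve", "leak"},
    {"pump", "leak", "alarm"},
    {"valve", "pump", "alarm"},
    {"valve", "leak", "alarm"},
]
items = set().union(*transactions)

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

rules = []
for size in range(2, len(items) + 1):
    for itemset in combinations(sorted(items), size):
        s = set(itemset)
        if support(s) < 0.4:           # minimum support
            continue
        for k in range(1, len(itemset)):
            for lhs in combinations(itemset, k):
                conf = support(s) / support(set(lhs))
                if conf >= 0.6:        # minimum confidence
                    rules.append((lhs, tuple(sorted(s - set(lhs))), conf))

print(f"{len(rules)} rules from only {len(transactions)} transactions")
```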


Author(s):  
Lawrence Mazlack

Determining causality has been a tantalizing goal throughout human history. Proper sacrifices to the gods were thought to bring rewards; failure to make suitable observations was thought to lead to disaster. Today, data mining holds the promise of extracting unsuspected information from very large databases. Methods have been developed to build association rules from large data sets. Association rules indicate the strength of association between two or more data attributes. In many ways, the interest in association rules lies in their promise (or illusion) of causal, or at least predictive, relationships. However, association rules only calculate a joint probability; they do not express a causal relationship. If causal relationships could be discovered, they would be very useful. Our goal is to explore causality in the data mining context.
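A small numerical illustration of the point that association rules compute joint (and conditional) probabilities rather than causal relationships: in the invented counts below, the rules A→B and B→A are equally confident, so the data alone cannot orient a causal arrow.

```python
# Joint counts over two binary attributes, e.g. A = "umbrella sold",
# B = "raincoat sold": strongly associated, yet neither causes the other.
# All counts are invented for illustration.
n_ab, n_a_only, n_b_only, n_neither = 40, 10, 10, 40
n = n_ab + n_a_only + n_b_only + n_neither

p_ab = n_ab / n                          # joint probability P(A and B)
conf_a_to_b = n_ab / (n_ab + n_a_only)   # confidence of rule A -> B
conf_b_to_a = n_ab / (n_ab + n_b_only)   # confidence of rule B -> A

print(f"P(A,B) = {p_ab:.2f}")
print(f"conf(A->B) = {conf_a_to_b:.2f}, conf(B->A) = {conf_b_to_a:.2f}")
# Both rules are equally "strong" (0.80): the counts cannot say whether
# A causes B, B causes A, or a common cause (rain) drives both.
```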

