Fusion of Random Projection, Multi-Resolution Features and Distance Weighted K Nearest Neighbor for Masses Detection in Mammographic Images

Author(s):  
Viet Dung Nguyen ◽  
Minh Dong Le

Breast cancer is the top cancer in women in both the developed and the developing world. For early detection of the disease, mammography remains the most effective method, alongside ultrasound and magnetic resonance imaging. Computer Aided Detection systems have been developed to aid radiologists in diagnosing breast cancer, and different methods have been proposed to overcome their main drawback of producing a large number of false positives. In this paper, we present a novel method for mass detection in mammograms. To describe masses, multi-resolution features are utilized: in the feature extraction step, we calculate multi-resolution Block Difference Inverse Probability features and multi-resolution statistical features. Once the descriptors are extracted, we deploy random projection and a distance-weighted K Nearest Neighbor classifier to classify the detected masses. The results are encouraging in terms of sensitivity, false positive reduction, and the running time of the algorithm.
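As an illustration of the classification stage this abstract describes (the paper's own implementation is not given here), a minimal Python sketch follows: project the extracted descriptors with a random projection and classify with a distance-weighted k-NN. The library choice (scikit-learn), all parameter values and the placeholder data are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))    # placeholder multi-resolution descriptors
y_train = rng.integers(0, 2, size=200)  # 1 = mass, 0 = normal tissue (placeholder labels)
X_test = rng.normal(size=(20, 64))

# Random projection to a lower-dimensional space before classification.
proj = GaussianRandomProjection(n_components=16, random_state=0)
X_train_p = proj.fit_transform(X_train)
X_test_p = proj.transform(X_test)

# Distance-weighted k-NN: neighbors vote with weights proportional to 1/distance.
knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
knn.fit(X_train_p, y_train)
print(knn.predict(X_test_p))
```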

2013 ◽  
Vol 23 (05) ◽  
pp. 1330013 ◽  
Author(s):  
REZA GHAFFARI ◽  
IOAN GROSU ◽  
DACIANA ILIESCU ◽  
EVOR HINES ◽  
MARK LEESON

In this study, we propose a novel method for reducing the attributes of sensory datasets using Master–Slave Synchronization of chaotic Lorenz Systems (DPSMS). As part of the performance testing, three benchmark datasets and one Electronic Nose (EN) sensory dataset with 3 to 13 attributes were presented to our algorithm to be projected into two attributes. The DPSMS-processed datasets were then used as input vectors to four artificial intelligence classifiers, namely Feed-Forward Artificial Neural Networks (FFANN), Multilayer Perceptron (MLP), Decision Tree (DT) and K-Nearest Neighbor (KNN). The performance of the classifiers was then evaluated using the original and reduced datasets. Classification rates of 94.5%, 89%, 94.5% and 82% were achieved when the reduced Fisher's iris, crab gender, breast cancer and electronic nose test datasets were presented to the above classifiers.


10.29007/5gzr ◽  
2018 ◽  
Author(s):  
Cezary Kaliszyk ◽  
Josef Urban

Two complementary AI methods are used to improve the strength of the AI/ATP service for proving conjectures over the HOL Light and Flyspeck corpora. First, several schemes for frequency-based feature weighting are explored in combination with a distance-weighted k-nearest-neighbor classifier. This results in a 16% improvement (from 39.0% to 45.5% of Flyspeck problems solved) in the overall strength of the service when using 14 CPUs and 30 seconds. The best premise-selection/ATP combination is improved from 24.2% to 31.4%, i.e. by 30%. A smaller improvement is obtained by evolving targeted E prover strategies on two particular premise selections, using the Blind Strategymaker (BliStr) system. This raises the performance of the best AI/ATP method from 31.4% to 34.9%, i.e. by 11%, and raises the current 14-CPU power of the service to 46.9%.
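A rough Python sketch of the idea behind frequency-based feature weighting combined with a distance-weighted k-NN premise ranker. The feature extraction, the exact weighting scheme (an IDF-like weight is assumed here) and the toy data are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import defaultdict

# Each solved fact: its set of symbol features and the premises used in its proof (toy data).
facts = {
    "thm1": ({"real", "add", "le"}, {"lemA", "lemB"}),
    "thm2": ({"real", "mul", "le"}, {"lemB", "lemC"}),
    "thm3": ({"nat", "add"},        {"lemD"}),
}

# Frequency-based weighting: rarer features get higher weight (IDF-like).
doc_freq = defaultdict(int)
for feats, _ in facts.values():
    for f in feats:
        doc_freq[f] += 1
n = len(facts)

def w(f):
    return math.log((n + 1) / doc_freq.get(f, 1))

def similarity(a, b):
    return sum(w(f) for f in a & b)  # weighted feature overlap

def rank_premises(conjecture_feats, k=2):
    # Distance-weighted voting: nearer solved facts contribute more to the
    # scores of the premises used in their proofs.
    neighbors = sorted(facts.values(),
                       key=lambda fp: -similarity(conjecture_feats, fp[0]))[:k]
    scores = defaultdict(float)
    for feats, premises in neighbors:
        s = similarity(conjecture_feats, feats)
        for p in premises:
            scores[p] += s
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_premises({"real", "le", "add"}))
```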


Data mining usually refers to the discovery of specific patterns or the analysis of data from a large dataset. Classification is one of the most efficient data mining techniques, in which the classes into which data are classified are predefined using existing datasets. Classifying medical records by their symptoms with computerized methods and storing the predicted information in digital format is of great importance in the diagnosis of various diseases. In this paper, we focus on finding the algorithm with the highest accuracy so that a cost-effective algorithm can be identified. The data mining classification algorithms are compared on their accuracy in finding the exact data according to the diagnosis report and on their execution rate, to identify how fast the records are classified. The classification algorithms used in this study are the Naive Bayes classifier, the C4.5 tree classifier and the K-Nearest Neighbor (KNN) classifier, with the goal of predicting which algorithm is best suited for classifying any kind of medical dataset. The Breast Cancer, Iris and Hypothyroid datasets are used to determine which of the three algorithms classifies the datasets with the highest accuracy in finding the records of patients with the particular health problems. The experimental results, presented in tables and graphs, show the performance and importance of the Naive Bayes, C4.5 and K-Nearest Neighbor algorithms. From the performance outcomes of the three algorithms, the C4.5 algorithm performs considerably better than the Naive Bayes and K-Nearest Neighbor algorithms.
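A minimal sketch of this kind of comparison, assuming scikit-learn implementations of Naive Bayes, a C4.5-style decision tree (DecisionTreeClassifier with the entropy criterion) and k-NN on the bundled breast cancer and iris datasets; the hypothyroid data is not bundled with scikit-learn and is omitted here.

```python
from sklearn.datasets import load_breast_cancer, load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

datasets = {"breast cancer": load_breast_cancer(return_X_y=True),
            "iris": load_iris(return_X_y=True)}
classifiers = {"Naive Bayes": GaussianNB(),
               "C4.5-style tree": DecisionTreeClassifier(criterion="entropy"),
               "k-NN": KNeighborsClassifier(n_neighbors=5)}

# Compare mean 5-fold cross-validated accuracy of each classifier on each dataset.
for dname, (X, y) in datasets.items():
    for cname, clf in classifiers.items():
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{dname:13s} {cname:16s} accuracy = {acc:.3f}")
```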


Diagnostics ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 1870
Author(s):  
Yaghoub Pourasad ◽  
Esmaeil Zarouri ◽  
Mohammad Salemizadeh Parizi ◽  
Amin Salih Mohammed

Breast cancer is one of the main causes of death among women worldwide. Early detection of this disease helps reduce the number of premature deaths. This research aims to design a method for identifying and diagnosing breast tumors based on ultrasound images. For this purpose, six techniques are applied to detect and segment the ultrasound images. Image features are extracted using the fractal method, and k-nearest neighbor, support vector machine, decision tree, and Naïve Bayes classification techniques are used to classify the images. A convolutional neural network (CNN) architecture is then designed to classify breast cancer directly from the ultrasound images. The presented model reaches an accuracy of 99.8% on the training set. On the test set, the diagnosis is associated with a sensitivity of 88.5%. Based on the findings of this study, it can be concluded that the proposed high-potential CNN algorithm can be used to diagnose breast cancer from ultrasound images. A second CNN model can identify the original location of the tumor; the results show 92% of the images in the high-performance region with an AUC above 0.6. The proposed model can identify the tumor's location and volume by morphological operations as a post-processing step. These findings can also be used to monitor patients and prevent the growth of the infected area.
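The abstract does not specify the CNN architecture, so the following is only a minimal Keras sketch of a small binary classifier for grayscale ultrasound patches; the 128x128 input size and all layer sizes are illustrative assumptions, not the authors' design.

```python
from tensorflow.keras import layers, models

# A small CNN for benign-vs-malignant classification of ultrasound patches.
model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)  # with real data
```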


2018 ◽  
Vol 19 (1) ◽  
pp. 144-157
Author(s):  
Mehdi Zekriyapanah Gashti

Exponential growth of medical data and of resources recorded from patients with different diseases can be exploited to establish an optimal association between disease symptoms and diagnosis. The main issue in diagnosis is the variability of the features that can be attributed to particular diseases, since some of these features are not essential for the diagnosis and may even delay it. For instance, diabetes, hepatitis, breast cancer and heart disease, which express multitudes of clinical manifestations as symptoms, are among the diseases with higher morbidity rates. Timely diagnosis of such diseases can play a critical role in decreasing their effect on patients' quality of life and on the costs of their treatment. Thanks to the large datasets available, computer-aided diagnosis can be an advanced option for early diagnosis of these diseases. In this paper, a new diagnosis method is suggested using a Flower Pollination Algorithm (FPA) and K-Nearest Neighbor (KNN). The modified model can diagnose diseases more accurately by reducing the number of features: Feature Selection (FS) is performed by FPA and data classification is performed using KNN. The results showed higher efficiency of the modified model on diagnosis of diabetes, hepatitis, breast cancer and heart disease compared to the KNN models.
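A simplified Python sketch of the wrapper idea described in this abstract: binary feature-subset vectors are evolved with an FPA-inspired update (heavily simplified here; true FPA uses Levy-flight global pollination) and scored by k-NN cross-validation accuracy. The dataset, the parameters and the simplifications are all assumptions made for illustration.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_feat, n_pop, p_switch = X.shape[1], 10, 0.8

def fitness(mask):
    # Score a feature subset by k-NN cross-validation accuracy.
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(5), X[:, mask], y, cv=3).mean()

pop = rng.random((n_pop, n_feat)) < 0.5           # initial random feature subsets
fit = np.array([fitness(m) for m in pop])
best = pop[fit.argmax()].copy()

for _ in range(20):                               # a few pollination iterations
    for i in range(n_pop):
        if rng.random() < p_switch:               # "global pollination": move toward the best subset
            child = np.where(rng.random(n_feat) < 0.5, best, pop[i])
        else:                                     # "local pollination": flip a few bits
            child = pop[i].copy()
            flip = rng.integers(0, n_feat, size=2)
            child[flip] = ~child[flip]
        f = fitness(child)
        if f > fit[i]:                            # greedy replacement
            pop[i], fit[i] = child, f
    best = pop[fit.argmax()].copy()

print("selected features:", int(best.sum()), "accuracy:", round(fit.max(), 3))
```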


Author(s):  
Wan Nor Liyana Wan Hassan Ibeni ◽  
Mohd Zaki Mohd Salikon ◽  
Aida Mustapha ◽  
Saiful Adli Daud ◽  
Mohd Najib Mohd Salleh

The problem of imbalanced class distribution or small datasets is quite frequent in certain fields, especially the medical domain. However, the classical Naive Bayes approach to dealing with uncertainties within medical datasets faces difficulties in selecting prior distributions, whereby parameter estimation such as maximum likelihood estimation (MLE) and maximum a posteriori (MAP) estimation often hurts the accuracy of predictions. This paper presents a full Bayesian approach to assess the predictive distribution of all classes using three classifiers, Naive Bayes (NB), Bayesian networks (BN) and tree-augmented Naive Bayes (TAN), on three datasets: Breast Cancer, Breast Cancer Wisconsin, and Breast Tissue. The prediction accuracies of the Bayesian approaches are also compared with three standard machine learning algorithms from the literature: K-nearest neighbor (K-NN), support vector machine (SVM) and decision tree (DT). The results showed that the best performance was achieved by the Bayesian networks (BN) algorithm, with an accuracy of 97.281%. The results are hoped to serve as a baseline comparison for further research on breast cancer detection. All experiments were conducted in the WEKA data mining tool.


2021 ◽  
Author(s):  
Gothai E ◽  
Usha Moorthy ◽  
Sathishkumar V E ◽  
Abeer Ali Alnuaim ◽  
Wesam Atef Hatamleh ◽  
...  

Abstract With the evolution of Internet standards and advancements in various Internet and mobile technologies, especially since Web 4.0, more and more web and mobile applications have emerged, such as e-commerce, social networks, online gaming and Internet of Things based applications. Due to the deployment and concurrent access of these applications on the Internet and mobile devices, the amount and variety of generated data increase exponentially, and the new era of Big Data has come into existence. Presently available data structures and data analysis algorithms are not capable of handling such Big Data; hence, there is a need for scalable, flexible, parallel and intelligent algorithms to handle and analyze complex massive data. In this article, we propose a novel distributed supervised machine learning algorithm based on the MapReduce programming model and the Distance-Weighted k-Nearest Neighbor algorithm, called MR-DWkNN, to process and analyze Big Data in a Hadoop cluster environment. The proposed distributed algorithm, based on supervised learning, performs both regression and classification tasks on large volumes of Big Data. Three performance metrics are used: Root Mean Squared Error (RMSE) and the coefficient of determination (R²) for regression tasks, and accuracy for classification tasks. The extensive experimental results show an average increase of 3–4.5% in prediction and classification performance compared to a standard distributed k-NN algorithm, and a considerable decrease in RMSE, with good parallelism characteristics of scalability and speedup, thus proving its effectiveness in Big Data prediction and classification applications.
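A plain-Python sketch of the MapReduce decomposition behind a distributed distance-weighted k-NN: each "map" task returns the k locally nearest training points from its data partition, and the "reduce" step merges them and takes an inverse-distance-weighted vote. The Hadoop plumbing, the exact weighting used in the paper and all parameters are assumptions for illustration only.

```python
import heapq
from collections import defaultdict

def map_partition(partition, query, k):
    # Emit the k nearest (distance, label) pairs within one data partition.
    dists = [(sum((a - b) ** 2 for a, b in zip(x, query)) ** 0.5, label)
             for x, label in partition]
    return heapq.nsmallest(k, dists)

def reduce_votes(local_results, k):
    # Merge local candidates, keep the global k nearest, vote with weight 1/d.
    merged = heapq.nsmallest(k, (c for local in local_results for c in local))
    votes = defaultdict(float)
    for d, label in merged:
        votes[label] += 1.0 / (d + 1e-9)
    return max(votes, key=votes.get)

# Toy data split across two partitions (as if stored in separate HDFS blocks).
part1 = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((5.0, 5.0), "B")]
part2 = [((4.9, 5.1), "B"), ((0.2, 0.1), "A"), ((5.2, 4.8), "B")]
query, k = (0.15, 0.15), 3
local_results = [map_partition(p, query, k) for p in (part1, part2)]
print(reduce_votes(local_results, k))   # expected: "A"
```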


In today's era, credit cards are extensively used for day-to-day business as well as other transactions. The rise in the number of credit card transactions has led to a rise in fraudulent activities. In modern times, fraud is one of the biggest sources of monetary losses, not only for merchants but also for individual customers. Data mining has played a commanding role in the detection of credit card fraud in online transactions. Our aim is, firstly, to establish the categories of fraud and, secondly, to review techniques such as K-nearest neighbor, Hidden Markov models, SVM, logistic regression, decision trees and neural networks. Fraud detection systems have therefore become essential for banks to minimize their losses. In this paper we survey the various detection techniques used to identify and detect fraud through varied data mining techniques.


2020 ◽  
Vol 8 (6) ◽  
Author(s):  
Pushpam Sinha ◽  
Ankita Sinha

Entropy-based k-Nearest Neighbor pattern classification (EbkNN) is a variation of the conventional k-Nearest Neighbor rule of pattern classification which optimizes the value of k for each test data point based on calculations of entropy. The entropy formula used in EbkNN is the one popularly defined in information theory for a set of n different types of information (classes) attached to a total of m objects (data points), with each object described by f features. In EbkNN, the value of k chosen for discriminating a given test data point is the one for which the entropy is the least non-zero value. The other rules of conventional kNN are retained in EbkNN. It is concluded that EbkNN works best for binary classification; it is computationally prohibitive to use EbkNN for discriminating the data points of the test dataset into more than two classes. The biggest advantage of EbkNN vis-à-vis conventional kNN is that a single run of the EbkNN algorithm yields an optimum classification of the test data, whereas the conventional kNN algorithm has to be run separately for each value of k in a selected range and the optimum k then chosen from among them. We also tested our EbkNN method on the WDBC (Wisconsin Diagnostic Breast Cancer) dataset. There are 569 instances in this dataset; we made a random choice of the first 290 instances as the training dataset and used the remaining 279 instances as the test dataset. We obtained an exceptionally remarkable result with the EbkNN method: accuracy close to 100%, better than the results reported by most other researchers who have worked on the WDBC dataset.
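A minimal Python sketch of the EbkNN idea as described above: for each test point, compute the Shannon entropy H = -sum_i p_i * log2(p_i) of the class labels among its k nearest neighbors for a range of k, pick the k with the smallest non-zero entropy, and classify by majority vote at that k. The distance metric, the k range and the handling of all-pure neighborhoods below are assumptions.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of the class distribution among the given labels.
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def ebknn_predict(X_train, y_train, x, k_max=15):
    # Sort training points by Euclidean distance to the test point x.
    order = sorted(range(len(X_train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(X_train[i], x)))
    best_k, best_h = None, float("inf")
    for k in range(2, min(k_max, len(order)) + 1):
        h = entropy([y_train[i] for i in order[:k]])
        if 0.0 < h < best_h:                    # least non-zero entropy
            best_k, best_h = k, h
    k = best_k if best_k is not None else 1     # all neighborhoods pure: fall back to 1-NN
    return Counter(y_train[i] for i in order[:k]).most_common(1)[0][0]

# Toy usage with four training points and one test point.
X_train = [(0.0, 0.0), (0.1, 0.1), (1.0, 1.0), (1.1, 0.9)]
y_train = ["benign", "benign", "malignant", "malignant"]
print(ebknn_predict(X_train, y_train, (0.05, 0.05)))
```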


Diagnostics ◽  
2020 ◽  
Vol 10 (3) ◽  
pp. 136 ◽  
Author(s):  
Raúl Santiago-Montero ◽  
Humberto Sossa ◽  
David A. Gutiérrez-Hernández ◽  
Víctor Zamudio ◽  
Ignacio Hernández-Bautista ◽  
...  

Breast cancer is a disease that has emerged as the second leading cause of cancer deaths in women worldwide, and the annual mortality rate is estimated to continue growing. Cancer detection at an early stage could significantly reduce breast cancer death rates in the long term. Many investigators have studied different breast diagnostic approaches, such as mammography, magnetic resonance imaging, ultrasound, computerized tomography, positron emission tomography and biopsy. However, these techniques have limitations: they can be expensive, time consuming and not suitable for women of all ages. Proposing techniques that support the effective medical diagnosis of this disease has undoubtedly become a priority for governments, for health institutions and for civil society in general. In this paper, an associative pattern classifier (APC) is used for the diagnosis of breast cancer. The efficiency rate obtained on the Wisconsin breast cancer database is 97.31%. The APC's performance is compared with that of a support vector machine (SVM) model, back-propagation neural networks, C4.5, naive Bayes, k-nearest neighbor (k-NN) and minimum distance classifiers. According to our results, the APC performed best. The APC algorithm was written and executed on a Java platform, as were the experiments and the comparisons between algorithms.

