Digital territory, digital flesh

2019 ◽  
Vol 8 (1) ◽  
pp. 70-80
Author(s):  
Tiara Roxanne

Western Indigenous cultures have been colonized, dehumanized and silenced. As AI grows and learns from pre-existing colonial biases, it also reinforces the notion that Natives no longer are but were. And since machine learning requires the input of categorical data, from which AI develops knowledge and understanding, compartmentalization is a natural behavior AI undertakes. As AI classifies Indigenous communities into a marginalized and historicized digital data set, the asterisk, the code, we fall into a cultural trap of recolonization. This necessitates an interference. A non-violent break. A different kind of rupture. One which fractures colonization and codification and opens a space for colonial recovery and survival. If we have not yet contemporized the colonized Western Indigenous experience, how can we utilize tools of artificial intelligence such as the interface and digitality to create a space that de-codes colonial corporeality, resulting in a sense of boundlessness, contemporization and survival?

Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 245
Author(s):  
Konstantinos G. Liakos ◽  
Georgios K. Georgakilas ◽  
Fotis C. Plessas ◽  
Paris Kitsos

Hardware trojans (HTs) are a significant problem in the field of hardware security. HTs can be inserted into a circuit at any phase of its production chain, and an HT can degrade the infected circuit, destroy it or leak encrypted data. Nowadays, efforts are being made to address HTs through machine learning (ML) techniques, mainly at the gate-level netlist (GLN) phase, but there are some restrictions. Specifically, the number and variety of normal and infected circuits available through free public libraries such as Trust-HUB are based on a few benchmarks created from large circuits. Thus, it is difficult, based on these data, to develop robust ML-based models against HTs. In this paper, we propose a new deep learning (DL) tool named Generative Artificial Intelligence Netlists SynthesIS (GAINESIS). GAINESIS is based on the Wasserstein Conditional Generative Adversarial Network (WCGAN) algorithm and area–power analysis features from the GLN phase, and it synthesizes new normal and infected circuit samples for this phase. Using GAINESIS, we synthesized new data sets of different sizes and developed and compared seven ML classifiers. The results demonstrate that our newly generated data sets significantly enhance the performance of ML classifiers compared with the initial Trust-HUB data set.
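The abstract does not give GAINESIS's training code, so the following is only a minimal numpy sketch of the Wasserstein objectives that a WCGAN-style generator/critic pair optimizes; the score arrays are toy values, not outputs of the actual tool.

```python
import numpy as np

def critic_loss(real_scores, fake_scores):
    # The critic maximizes E[D(real)] - E[D(fake)], i.e. it minimizes
    # the negated difference below (the Wasserstein estimate).
    return fake_scores.mean() - real_scores.mean()

def generator_loss(fake_scores):
    # The generator tries to raise the critic's score on synthetic samples.
    return -fake_scores.mean()

real = np.array([0.9, 0.8, 0.7])   # toy critic scores on real netlist features
fake = np.array([0.1, 0.2, 0.3])   # toy critic scores on synthesized samples
print(round(float(critic_loss(real, fake)), 6))   # -0.6
print(round(float(generator_loss(fake)), 6))      # -0.2
```

In a full WCGAN the scores would come from a trained conditional critic network, with class labels (normal vs. infected) fed to both networks.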


BMJ Open ◽  
2020 ◽  
Vol 10 (7) ◽  
pp. e037161
Author(s):  
Hyunmin Ahn

Objectives We investigated the usefulness of machine learning artificial intelligence (AI) in classifying the severity of ophthalmic emergencies for timely hospital visits. Study design This retrospective study analysed the patients who first visited the Armed Forces Daegu Hospital between May and December 2019. General patient information, events and symptoms were input variables. Events, symptoms, diagnoses and treatments were output variables. The output variables were classified into four classes (red, orange, yellow and green, indicating immediate to no emergency cases). About 200 cases for a class-balanced validation data set were randomly selected before all training procedures. An ensemble AI model using combinations of fully connected neural networks with the synthetic minority oversampling technique algorithm was adopted. Participants A total of 1681 patients were included. Major outcomes Model performance was evaluated using accuracy, precision, recall and F1 scores. Results The accuracy of the model was 99.05%. The precision of each class (red, orange, yellow and green) was 100%, 98.10%, 92.73% and 100%, respectively. The recalls of each class were 100%, 100%, 98.08% and 95.33%. The F1 scores of each class were 100%, 99.04%, 95.33% and 96.00%. Conclusions We provided support for an AI method to classify ophthalmic emergency severity based on symptoms.
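The study's oversampling step can be illustrated with a simplified sketch of the idea behind SMOTE: synthetic minority samples are interpolated between existing minority samples. Real SMOTE interpolates toward k-nearest neighbours; this toy numpy version picks random partners, and the feature rows are placeholders, not the study's data.

```python
import numpy as np

def smote_like(minority, n_new, seed=0):
    # Draw random pairs of minority samples and interpolate between them.
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(minority), size=(n_new, 2))
    lam = rng.random((n_new, 1))          # interpolation weight in [0, 1)
    a, b = minority[idx[:, 0]], minority[idx[:, 1]]
    return a + lam * (b - a)

red_cases = np.array([[1.0, 2.0], [1.5, 1.8], [0.8, 2.2]])  # toy features
synthetic = smote_like(red_cases, n_new=5)
print(synthetic.shape)  # (5, 2)
```

Because each synthetic row is a convex combination of two real rows, every feature stays within the minority class's observed range.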


Machine learning combines mathematical models, artificial intelligence approaches and past recorded data. It uses different learning algorithms for different types of data and has been classified into three types. Its advantage is that it uses artificial neural networks and, based on the error rates, adjusts the weights to improve itself over further epochs. However, machine learning works well only when the features are defined accurately. Deciding which features to select requires good domain knowledge, which makes machine learning developer-dependent; a lack of domain knowledge degrades performance. This dependency inspired the invention of deep learning. Deep learning can detect features through self-training models and is able to give better results than conventional artificial intelligence or machine learning approaches. It uses functions such as ReLU, gradient descent and optimizers, which currently make it the most effective approach available. To apply such optimizers efficiently, one should understand the mathematical computations and convolutions running behind the layers. It also uses different pooling layers to extract features. These modern approaches, however, demand a high level of computation, requiring CPUs and GPUs; if such hardware is not available, one can use the Google Colaboratory framework. The deep learning approach is shown in this paper to improve skin cancer detection. The paper also aims to provide the reader with circumstantial knowledge of the various practices mentioned above.
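Two of the building blocks named above, the ReLU activation and a pooling layer, can be sketched in a few lines of plain numpy (toy versions, not a full framework implementation):

```python
import numpy as np

def relu(x):
    # ReLU zeroes out negative activations and passes positives through.
    return np.maximum(0, x)

def max_pool_2x2(x):
    # 2x2 max pooling: keep the largest value in each 2x2 block.
    h, w = x.shape
    x = x[: h // 2 * 2, : w // 2 * 2]          # drop odd edge rows/cols
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feature_map = np.array([[ 1, -2,  3,  0],
                        [ 4,  5, -6,  7],
                        [ 0,  1,  2,  3],
                        [-1, -2,  8,  9]])
print(max_pool_2x2(relu(feature_map)))  # [[5 7] [1 9]]
```

In a real network these operations are applied per channel across many feature maps, with the weights in between updated by gradient descent.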


2020 ◽  
Vol 17 (1) ◽  
pp. 20-31
Author(s):  
D. D. Garri ◽  
S. V. Saakyan ◽  
I. P. Khoroshilova-Maslova ◽  
A. Yu. Tsygankov ◽  
O. I. Nikitin ◽  
...  

Machine learning is applied in every field of human activity that uses digital data. In recent years, many papers have been published concerning the use of artificial intelligence for classification, regression and segmentation purposes in medicine, and in ophthalmology in particular. Artificial intelligence is a subsection of computer science, and its principles and concepts are often incomprehensible to doctors, or used and interpreted by them incorrectly. Diagnostics of ophthalmology patients is associated with a significant amount of medical data that can be used for further software processing. Using machine learning methods, it is possible to detect, identify and count almost any pathological sign of disease by analyzing medical images, clinical and laboratory data. Machine learning includes models and algorithms that mimic the architecture of biological neural networks. The greatest interest in the field is in artificial neural networks, in particular networks based on deep learning, owing to the ability of the latter to work effectively with complex and multidimensional databases, coupled with the increasing availability of databases and the performance of graphics processors. Artificial neural networks have the potential to be used in automated screening, determining the stage of diseases, predicting the therapeutic effect of treatment and predicting disease outcomes from the analysis of clinical data in patients with diabetic retinopathy, age-related macular degeneration, glaucoma, cataracts, ocular tumors and concomitant pathology. The main characteristics reported were the size of the training and validation data sets, accuracy, sensitivity, specificity and AUROC (Area Under Receiver Operating Characteristic Curve). A number of studies investigate the comparative characteristics of algorithms.
Many of the articles presented in the review have shown results in accuracy, sensitivity, specificity, AUROC and error values that exceed the corresponding indicators of an average ophthalmologist. Their introduction into routine clinical practice will increase the diagnostic, therapeutic and professional capabilities of clinicians, which is especially important in the field of ophthalmic oncology, where patient survival is at stake.
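The headline metrics the review compares can be computed directly from confusion counts and scores; the counts and score lists below are toy numbers, not figures from any of the reviewed studies.

```python
def sensitivity(tp, fn):
    # True positive rate: of all diseased cases, how many were flagged.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True negative rate: of all healthy cases, how many were cleared.
    return tn / (tn + fp)

def auroc(pos_scores, neg_scores):
    # Probability that a random positive outranks a random negative
    # (ties count half) -- the Mann-Whitney view of the ROC area.
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(sensitivity(tp=90, fn=10))                  # 0.9
print(specificity(tn=80, fp=20))                  # 0.8
print(auroc([0.9, 0.8, 0.4], [0.3, 0.2, 0.1]))    # 1.0
```

The pairwise-ranking formulation of AUROC makes explicit why it is threshold-free, unlike sensitivity and specificity, which depend on a chosen operating point.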


Author(s):  
Adeolu Oluwaseyi Oyekan

This paper argues for the role of technology, such as artificial intelligence, which includes machine learning, in managing conflicts between herders and farmers in Nigeria. Conflicts between itinerant Fulani herders and farmers over the years have resulted in the destruction of lives and property and the displacement of many indigenous communities across Nigeria, with devastating social, economic and political consequences. Over time, the conflicts have morphed into ethnic stereotypes, allegations of ethnic cleansing, forceful appropriation and the divisive entrenchment of labels that are inimical to national existence. The reality of climate change and increasing urbanization suggests that conflicts over shrinking resources are likely to intensify in the near future. Finding solutions to the conflicts therefore requires innovative thinking capable of addressing the limits of past approaches. While mindful of the human and political dimensions of the conflicts, I argue, using the method of philosophical analysis, that technology possesses the capacity for social transformation, and I make a case for the modernization of grazing culture and the curbing of cross-border grazing through machine learning (ML) and other forms of artificial intelligence. Machine learning represents a transformative technology that addresses the security challenges of irregular migration and accommodates the nomadic and subsistence mode of farming associated with the conflicting parties, while enabling a gradual but stable transition to full modernization. I conclude that machine learning holds many prospects for minimizing conflicts and attaining social cohesion between herders and farmers when properly complemented by other mechanisms of social cohesion that may be political in nature.


10.2196/28856 ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. e28856
Author(s):  
Zahid Ullah ◽  
Farrukh Saleem ◽  
Mona Jamjoom ◽  
Bahjat Fakieh

Background The use of artificial intelligence has revolutionized every area of life such as business and trade, social and electronic media, education and learning, manufacturing industries, medicine and sciences, and every other sector. The new reforms and advanced technologies of artificial intelligence have enabled data analysts to transmute raw data generated by these sectors into meaningful insights for an effective decision-making process. Health care is one of the integral sectors where a large amount of data is generated daily, and making effective decisions based on these data is therefore a challenge. In this study, cases related to childbirth either by the traditional method of vaginal delivery or cesarean delivery were investigated. Cesarean delivery is performed to save both the mother and the fetus when complications related to vaginal birth arise. Objective The aim of this study was to develop reliable prediction models for a maternity care decision support system to predict the mode of delivery before childbirth. Methods This study was conducted in 2 parts for identifying the mode of childbirth: first, the existing data set was enriched and second, previous medical records about the mode of delivery were investigated using machine learning algorithms and by extracting meaningful insights from unseen cases. Several prediction models were trained to achieve this objective, such as decision tree, random forest, AdaBoostM1, bagging, and k-nearest neighbor, based on original and enriched data sets. Results The prediction models based on enriched data performed well in terms of accuracy, sensitivity, specificity, F-measure, and receiver operating characteristic curves in the outcomes. Specifically, the accuracy of k-nearest neighbor was 84.38%, that of bagging was 83.75%, that of random forest was 83.13%, that of decision tree was 81.25%, and that of AdaBoostM1 was 80.63%. 
Enrichment of the data set had a good impact on the accuracy of the prediction process, which supports maternity care practitioners in making decisions in critical cases. Conclusions Our study shows that enriching the data set improves the accuracy of the prediction process, thereby supporting maternity care practitioners in making informed decisions in critical cases. The enriched data set used in this study yields good results, but it could become even better if the records were augmented with real clinical data.
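The best-performing model in the study, k-nearest neighbour, reduces to a simple distance-and-vote rule. The sketch below uses toy two-feature rows as stand-ins for the obstetric records, which are not given in the abstract:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label) pairs.
    # Sort by Euclidean distance to the query, take the k closest,
    # and return the majority label among them.
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "vaginal"), ((1.1, 0.9), "vaginal"),
         ((5.0, 5.0), "cesarean"), ((5.2, 4.8), "cesarean"),
         ((4.9, 5.1), "cesarean")]
print(knn_predict(train, query=(5.0, 4.9)))  # cesarean
```

Because k-NN stores the training rows verbatim, enriching the data set directly densifies the neighbourhoods it votes over, which is consistent with the accuracy gains the authors report.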


Author(s):  
Jayant Kumar A Rathod ◽  
Naveen Bhavani ◽  
Prenita Prinsal Saldanha ◽  
Preethi M Rao ◽  
Prasad Patil

Artificial intelligence and machine learning are two fields causing substantial development in every domain, especially in the medical sciences, for the stupendous potential they offer in assisting clinicians and researchers with clinical decision making, automating time-consuming procedures, medical imaging, and more. Most implementations of AI/ML rely on a static data set, and this is where big data steps in: these models are developed and trained on a data set that has already been recorded and diligently reviewed for accuracy, leading to a precise decision-making process. Experts foresee that an AI/ML-based overarching care system will develop high-quality patient care and innovative research, aided by advanced decision support tools. In this paper we review the current devices that have been built and are being used for real-time problem solving, and we discuss the impact of Software as a Medical Device (SAMD) on the future of medical sciences. [2,3,11]


Author(s):  
Yaser AbdulAali Jasim

Nowadays, technology and computer science are rapidly developing many tools and algorithms, especially in the field of artificial intelligence. Machine learning is involved in the development of new methodologies and models that have become a novel area of application for artificial intelligence. In addition to the architectures of conventional neural network methodologies, deep learning refers to the use of artificial neural network architectures that include multiple processing layers. In this paper, convolutional neural network models were designed to detect (diagnose) plant disorders from samples of healthy and unhealthy plant images analyzed by means of deep learning methods. The models were trained using an open data set containing 18,000 images of ten different plants, including healthy plants. Several model architectures were trained, achieving the best performance of 97 percent when detecting the respective [plant, disease] pair. This is a very useful early-warning technique, and the method's substantially high performance rate can be further improved to support an automated plant disease detection system working in actual farm conditions.
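The core operation behind the convolutional models described above is the 2D "valid" convolution (strictly, cross-correlation, as in most deep learning frameworks). Below is a plain-numpy sketch on a toy 3x3 image with a made-up edge filter, not a layer from the paper's actual models:

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Slide the kernel over every position where it fits entirely
    # inside the image, taking the elementwise product-sum.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

image = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]], dtype=float)
edge_kernel = np.array([[1, -1],
                        [1, -1]], dtype=float)   # toy vertical-edge filter
print(conv2d_valid(image, edge_kernel))  # [[-2. -2.] [-2. -2.]]
```

A CNN stacks many such filters, learned rather than hand-chosen, interleaved with activations and pooling.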


Author(s):  
Paolo Massimo Buscema ◽  
William J Tastle

Data sets collected independently using the same variables can be compared using a new artificial neural network called the Artificial neural network What If Theory (AWIT). Given a data set that is deemed the standard reference for some object, e.g., a flower, industry, disease or galaxy, other data sets can be compared against it to identify their proximity to the standard. Thus, data that might not lend themselves well to traditional methods of analysis could yield new perspectives or views of the data, and thus potentially new perceptions of novel and innovative solutions. This method comes out of the field of artificial intelligence, particularly artificial neural networks, and utilizes both machine learning and pattern recognition to deliver an innovative analysis.
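The abstract does not spell out AWIT's internals, so the following is only a generic illustration of the underlying idea of scoring a data set's proximity to a standard reference, here via the Euclidean distance between per-variable means, a far cruder proxy than a trained neural network:

```python
import math

def centroid(rows):
    # Per-variable mean of a data set given as a list of equal-length rows.
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def proximity_to_reference(reference, candidate):
    # Distance between the two data sets' centroids; smaller = closer.
    return math.dist(centroid(reference), centroid(candidate))

reference = [[5.1, 3.5], [4.9, 3.0], [5.0, 3.4]]   # toy "standard" flowers
candidate = [[6.0, 3.1], [6.2, 3.0], [6.1, 2.9]]   # toy data set to compare
print(round(proximity_to_reference(reference, candidate), 3))  # 1.14
```

AWIT itself would learn a richer, nonlinear notion of proximity from the reference data rather than comparing summary statistics.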


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
R Haneef ◽  
S Fuentes ◽  
R Hrzic ◽  
S Fosse-Edorh ◽  
S Kab ◽  
...  

Abstract Background The use of artificial intelligence is increasing to estimate and predict health outcomes from large data sets. The main objectives were to develop two algorithms using machine learning techniques to identify new cases of diabetes (case study I) and to classify type 1 and type 2 diabetes (case study II) in France. Methods We selected the training data set from a cohort study linked with the French national health database (SNDS). Two final data sets were used, one for each objective. A supervised machine learning method comprising the following eight steps was developed: selection of the data set, case definition, coding and standardization of variables, splitting the data into training and test sets, variable selection, training, validation and selection of the model. We planned to apply the trained models to the SNDS to estimate the incidence of diabetes and the prevalence of type 1/2 diabetes. Results For case study I, 23 of 3468 SNDS variables were selected, and for case study II, 14 of 3481, based on an optimal balance of explained variance using the ReliefExp algorithm. We trained four models using different classification algorithms on the training data set. The Linear Discriminant Analysis model performed best in both case studies. The models were assessed on the test data sets and achieved a specificity of 67% and a sensitivity of 62% in case study I, and a specificity of 97% and a sensitivity of 100% in case study II. The case study II model was applied to the SNDS and estimated the 2016 prevalence in France of type 1 diabetes at 0.3% and of type 2 at 4.4%. The case study I model was not applied to the SNDS. Conclusions The case study II model to estimate the prevalence of type 1/2 diabetes has good performance and will be used in routine surveillance.
The case study I model to identify new cases of diabetes showed poor performance owing to missing information on determinants of diabetes and will need to be improved for further research.
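One step of the eight-step pipeline above, splitting records into training and test sets, can be sketched in a few lines of standard-library Python; the record names are placeholders, not SNDS variables:

```python
import random

def split_train_test(rows, test_frac=0.2, seed=0):
    # Shuffle a copy with a fixed seed for reproducibility, then cut
    # off the last test_frac of rows as the held-out test set.
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

records = [f"patient_{i}" for i in range(10)]
train, test = split_train_test(records)
print(len(train), len(test))  # 8 2
```

Holding the test rows out of every subsequent step (variable selection included) is what makes the reported specificity and sensitivity honest estimates rather than training-set artefacts.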

