Artificial neural network model for predicting changes in ion channel conductance based on cardiac action potential shapes generated via simulation

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Da Un Jeong ◽  
Ki Moo Lim

Abstract Many studies have revealed changes in specific protein channels due to physiological causes such as mutation, as well as their effects on action potential duration. However, no studies have been conducted to predict the type of protein channel abnormality from the action potential (AP) shape. Therefore, in this study, we aim to predict which ion channel conductance has been altered from various AP shapes using a machine learning algorithm. We perform electrophysiological simulations using a single-cell model to obtain AP shapes based on variations in the ion channel conductances. In the AP simulation, we increase and decrease the conductance of each ion channel at a constant rate, resulting in 1,980 AP shapes and one standard AP shape without any changes in the ion channel conductances. Subsequently, we calculate the AP difference shapes between them and use them as the input of the machine learning model to predict the changed ion channel conductance. In this study, we demonstrate that the changed ion channel conductance can be predicted with high accuracy, as reflected by an F1 score of 0.985, using only AP shapes and simple machine learning.
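To make the described workflow concrete, the following is a minimal, hypothetical sketch (not the authors' code): it forms AP difference shapes by subtracting a baseline AP from each simulated variant and feeds them to a generic classifier evaluated with a macro F1 score. The data shapes, the random-forest stand-in, and the synthetic values are assumptions; the abstract only specifies "simple machine learning" applied to 1,980 simulated APs.

```python
# Sketch (not the authors' code): classify which ion-channel conductance was
# altered from AP *difference* shapes, as described in the abstract.
# The classifier choice, array sizes, and synthetic data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_timepoints, n_channels = 1980, 500, 11          # hypothetical sizes

ap_baseline = np.sin(np.linspace(0, np.pi, n_timepoints))    # stand-in standard AP
ap_variants = ap_baseline + 0.05 * rng.standard_normal((n_samples, n_timepoints))
labels = rng.integers(0, n_channels, size=n_samples)          # which conductance changed

X = ap_variants - ap_baseline        # AP difference shapes used as model input
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# Score is meaningless on synthetic noise; it only shows the evaluation step.
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```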

Entropy ◽  
2021 ◽  
Vol 23 (3) ◽  
pp. 300
Author(s):  
Mark Lokanan ◽  
Susan Liu

Protecting financial consumers from investment fraud has been a recurring problem in Canada. The purpose of this paper is to predict the demographic characteristics of investors who are likely to be victims of investment fraud. Data for this paper came from the Investment Industry Regulatory Organization of Canada's (IIROC) database between January 2009 and December 2019. In total, 4,575 investors were coded as victims of investment fraud. The study employed a machine learning algorithm to predict the probability of fraud victimization. The model deployed in this paper predicted the typical demographic profile of fraud victims as investors who are female, have poor financial knowledge, know the advisor from the past, and are retired. Investors characterized as having limited financial literacy but a long-time relationship with their advisor have a reduced probability of being victimized. However, male investors with low or moderate investment knowledge were more likely to be preyed upon by their investment advisors. While not statistically significant, older adults in general are at greater risk of being victimized. The findings from this paper can be used by Canadian self-regulatory organizations and securities commissions to inform their investor-protection mandates.
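As an illustration only, the sketch below shows one way such a victimization-probability model could be set up from demographic features. The abstract does not name the algorithm, so logistic regression is used as a stand-in, and the feature names and toy records are hypothetical.

```python
# Sketch only: logistic regression as a stand-in for the unnamed algorithm;
# feature names and records are hypothetical, not IIROC data.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

df = pd.DataFrame({
    "gender": ["F", "M", "F", "M"],
    "financial_knowledge": ["poor", "moderate", "poor", "low"],
    "knows_advisor": [1, 0, 1, 1],
    "retired": [1, 0, 1, 0],
    "victim": [1, 0, 1, 1],          # illustrative victimization label
})

pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["gender", "financial_knowledge"])],
    remainder="passthrough",
)
model = Pipeline([("pre", pre), ("clf", LogisticRegression(max_iter=1000))])
model.fit(df.drop(columns="victim"), df["victim"])
print(model.predict_proba(df.drop(columns="victim"))[:, 1])  # victimization probabilities
```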


2021 ◽  
Author(s):  
Aria Abubakar ◽  
Mandar Kulkarni ◽  
Anisha Kaul

Abstract In the process of deriving the reservoir petrophysical properties of a basin, identifying the pay capability of wells by interpreting various geological formations is key. Currently, this process is facilitated and preceded by well log correlation, in which petrophysicists and geologists examine multiple raw log measurements for the well in question, indicate geological markers of formation changes, and correlate them with those of neighboring wells. This activity of picking markers for a well is performed manually, and the process of examining logs can be highly subjective and thus prone to inconsistencies. In our work, we propose to automate the well correlation workflow by using a Soft-Attention Convolutional Neural Network to predict well markers. The machine learning algorithm is supervised by examples of manual marker picks and their corresponding occurrence in logs such as gamma-ray, resistivity, and density. Our experiments have shown that the attention mechanism, specifically, allows the Convolutional Neural Network to focus on relevant features or patterns in the log measurements that suggest a change in formation, making the machine learning model highly precise.
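The sketch below illustrates the general idea of a soft-attention 1-D CNN over depth-indexed log curves; the layer sizes, kernel widths, and per-depth output format are assumptions, not the authors' architecture.

```python
# Minimal sketch (assumptions throughout): a 1-D CNN with a soft-attention layer
# over depth that scores where a formation change (marker) is likely, given
# gamma-ray, resistivity, and density curves.
import torch
import torch.nn as nn

class SoftAttentionMarkerNet(nn.Module):
    def __init__(self, n_logs=3, hidden=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_logs, hidden, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4), nn.ReLU(),
        )
        self.attention = nn.Conv1d(hidden, 1, kernel_size=1)   # per-depth attention score
        self.head = nn.Conv1d(hidden, 1, kernel_size=1)        # per-depth marker logit

    def forward(self, x):                      # x: (batch, n_logs, depth_samples)
        h = self.features(x)
        attn = torch.softmax(self.attention(h), dim=-1)        # soft attention over depth
        return self.head(h * attn).squeeze(1)                  # (batch, depth_samples) logits

logs = torch.randn(2, 3, 1024)                 # synthetic gamma-ray/resistivity/density traces
print(SoftAttentionMarkerNet()(logs).shape)    # torch.Size([2, 1024])
```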


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Mohammad Nahid Hossain ◽  
Mohammad Helal Uddin ◽  
K. Thapa ◽  
Md Abdullah Al Zubaer ◽  
Md Shafiqul Islam ◽  
...  

Cognitive impairment has a significantly negative impact on global healthcare and the community. Preserving cognition and mental retention becomes increasingly difficult for older adults as they age. Early detection of cognitive impairment can reduce the risk of the condition progressing to permanent mental damage. This paper aims to develop a machine learning model that detects and differentiates cognitive impairment categories, such as severe, moderate, mild, and normal, by analyzing neurophysical and physical data. Keystroke dynamics and a smartwatch have been used to extract individuals' neurophysical and physical data, respectively. An ensemble learning algorithm, the Gradient Boosting Machine (GBM), is proposed to classify the cognitive severity level (absence, mild, moderate, and severe) based on Standardised Mini-Mental State Examination (SMMSE) questionnaire scores. Pearson's correlation and a wrapper feature selection technique have been used to analyze and select the best features, and the proposed GBM algorithm, trained on those features, achieved an accuracy of more than 94%. This paper adds a new dimension to the state of the art in predicting cognitive impairment by combining neurophysical and physical data.
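A rough sketch of such a pipeline, under stated assumptions, is shown below: scikit-learn's GradientBoostingClassifier and SequentialFeatureSelector stand in for the paper's GBM and wrapper selection, and the keystroke/smartwatch feature names and labels are hypothetical.

```python
# Sketch under assumptions: Pearson filter + wrapper selection + GBM classifier.
# Feature names and synthetic data are hypothetical, not the study's dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(200, 6)),
                 columns=["key_hold_time", "key_latency", "typing_errors",
                          "step_count", "heart_rate", "sleep_hours"])
y = rng.integers(0, 4, size=200)   # 0=absent, 1=mild, 2=moderate, 3=severe (from SMMSE)

# Filter step: keep the features most correlated (Pearson) with the SMMSE-based label.
corr = X.apply(lambda col: np.corrcoef(col, y)[0, 1]).abs()
X_filt = X[corr.nlargest(5).index]

# Wrapper step, then the GBM classifier on the selected features.
gbm = GradientBoostingClassifier(random_state=0)
selector = SequentialFeatureSelector(gbm, n_features_to_select=3, cv=3).fit(X_filt, y)
X_sel = selector.transform(X_filt)
print("CV accuracy:", cross_val_score(gbm, X_sel, y, cv=3).mean())
```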


2017 ◽  
Author(s):  
Aymen A. Elfiky ◽  
Maximilian J. Pany ◽  
Ravi B. Parikh ◽  
Ziad Obermeyer

Abstract Background: Cancer patients who die soon after starting chemotherapy incur costs of treatment without benefits. Accurately predicting mortality risk from chemotherapy is important, but few patient data-driven tools exist. We sought to create and validate a machine learning model predicting mortality for patients starting new chemotherapy. Methods: We obtained electronic health records for patients treated at a large cancer center (26,946 patients; 51,774 new regimens) over 2004-14, linked to Social Security data for date of death. The model was derived using 2004-11 data, and performance was measured on non-overlapping 2012-14 data. Findings: 30-day mortality from chemotherapy start was 2.1%. Common cancers included breast (21.1%), colorectal (19.3%), and lung (18.0%). Model predictions were accurate for all patients (AUC 0.94). Predictions for patients starting palliative chemotherapy (46.6% of regimens), for whom prognosis is particularly important, remained highly accurate (AUC 0.92). To illustrate model discrimination, we ranked patients initiating palliative chemotherapy by model-predicted mortality risk and calculated observed mortality by risk decile. 30-day mortality in the highest-risk decile was 22.6%; in the lowest-risk decile, no patients died. Predictions remained accurate across all primary cancers, stages, and chemotherapies, even for clinical trial regimens that first appeared in years after the model was trained (AUC 0.94). The model also performed well for prediction of 180-day mortality (AUC 0.87; mortality 74.8% in the highest-risk decile vs. 0.2% in the lowest). Predictions were more accurate than estimates from randomized trials of individual chemotherapies or from SEER. Interpretation: A machine learning algorithm accurately predicted short-term mortality in patients starting chemotherapy using EHR data. Further research is necessary to determine generalizability and the feasibility of applying this algorithm in clinical settings.
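The sketch below illustrates only the validation design described in the abstract (derive on 2004-11 data, evaluate AUC and risk-decile mortality on 2012-14 data). The learning algorithm is not named in the abstract, so gradient boosting is used as a stand-in and all data are synthetic.

```python
# Illustrative sketch only: temporal train/test split plus AUC and decile checks.
# Gradient boosting is a stand-in for the unnamed model; features are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=(n, 10))                    # stand-in EHR features
year = rng.integers(2004, 2015, size=n)         # regimen start year
died_30d = (rng.random(n) < 0.021).astype(int)  # ~2.1% 30-day mortality, as reported

train, test = year <= 2011, year >= 2012        # non-overlapping temporal split
clf = GradientBoostingClassifier(random_state=0).fit(X[train], died_30d[train])
pred = clf.predict_proba(X[test])[:, 1]
print("test AUC:", round(roc_auc_score(died_30d[test], pred), 3))

# Discrimination check: observed mortality by predicted-risk decile.
deciles = np.digitize(pred, np.quantile(pred, np.linspace(0.1, 0.9, 9)))
for d in (0, 9):
    print(f"decile {d}: observed mortality {died_30d[test][deciles == d].mean():.3f}")
```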


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4299 ◽  
Author(s):  
Eui Jung Moon ◽  
Youngsik Kim ◽  
Yu Xu ◽  
Yeul Na ◽  
Amato J. Giaccia ◽  
...  

There has been strong demand for the development of an accurate yet simple method to assess the freshness of food. In this study, we demonstrated a system that determines food freshness by analyzing the spectral response from a portable visible/near-infrared (VIS/NIR) spectrometer using a Convolutional Neural Network (CNN)-based machine learning algorithm. Spectral response data from salmon, tuna, and beef incubated at 25 °C were obtained every minute for 30 h and then categorized into three states of “fresh”, “likely spoiled”, and “spoiled” based on time and pH. Using the obtained spectral data, a CNN-based machine learning algorithm was built to evaluate the freshness of the experimental objects. In addition, the shift-invariant property of the CNN can minimize the effect of variation caused by using multiple devices in a real environment. The accuracy of the obtained machine learning model in predicting freshness from the spectral data was approximately 85% for salmon, 88% for tuna, and 92% for beef. Therefore, our study demonstrates the practicality of a portable spectrometer for food freshness assessment.
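As a hedged illustration of the approach, the following is a minimal 1-D CNN that maps a VIS/NIR spectrum to the three freshness classes; the input length, layer sizes, and class encoding are assumptions rather than the authors' model.

```python
# Rough sketch (sizes and encoding assumed): a 1-D CNN classifying a VIS/NIR
# spectrum as "fresh", "likely spoiled", or "spoiled". The convolutional and
# pooling layers provide the shift-invariance mentioned in the abstract.
import torch
import torch.nn as nn

class FreshnessCNN(nn.Module):
    def __init__(self, n_wavelengths=256, n_classes=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, spectrum):               # spectrum: (batch, 1, n_wavelengths)
        return self.fc(self.conv(spectrum).flatten(1))

spectra = torch.randn(4, 1, 256)               # synthetic spectrometer readings
logits = FreshnessCNN()(spectra)
print(logits.argmax(dim=1))                    # 0=fresh, 1=likely spoiled, 2=spoiled
```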


Author(s):  
Rahayu Abdul Rahman ◽  
Suraya Masrom ◽  
Nor Balkish Zakaria ◽  
Sunarti Halid

External auditors are one of the governance mechanisms for mitigating corporate managerial misconduct and thereby enhancing the credibility of accounting information. Thus, the main objective of this study is to develop machine learning models to predict a firm's auditor choice, which signals the quality of its auditing and financial reporting processes. This paper presents the fundamental knowledge on the design and implementation of machine learning models based on four selected algorithms, tested on a real dataset of 2,262 firm-year observations of companies listed on the Malaysian stock exchange from 2000 to 2007. The performance of each machine learning algorithm on the auditor choice dataset has been observed for three groups of features, namely firm characteristics, governance, and ownership. The findings indicate that the machine learning models achieve better accuracy with the ownership feature set, mainly with the Naïve Bayes algorithm. Keywords: Auditor Choice, Machine Learning, Prediction
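A minimal sketch of the comparison across feature groups is given below, using Gaussian Naive Bayes as in the study's best-performing setup; the column names, the label definition, and the synthetic data are hypothetical.

```python
# Sketch with hypothetical columns: compare firm-characteristic, governance, and
# ownership feature groups with Gaussian Naive Bayes. Data are synthetic.
import numpy as np
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 2262                                        # firm-year observations, as in the paper
df = pd.DataFrame({
    "firm_size": rng.normal(size=n), "leverage": rng.normal(size=n),                      # firm characteristics
    "board_independence": rng.normal(size=n), "audit_committee": rng.integers(0, 2, n),   # governance
    "family_ownership": rng.random(n), "foreign_ownership": rng.random(n),                # ownership
    "big4_auditor": rng.integers(0, 2, n),      # stand-in auditor-choice label
})

groups = {
    "firm": ["firm_size", "leverage"],
    "governance": ["board_independence", "audit_committee"],
    "ownership": ["family_ownership", "foreign_ownership"],
}
for name, cols in groups.items():
    acc = cross_val_score(GaussianNB(), df[cols], df["big4_auditor"], cv=5).mean()
    print(f"{name:>10}: accuracy {acc:.3f}")
```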


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-16 ◽  
Author(s):  
Noemí DeCastro-García ◽  
Ángel Luis Muñoz Castañeda ◽  
David Escudero García ◽  
Miguel V. Carriegos

Selecting the best configuration of hyperparameter values for a Machine Learning model directly determines the performance of the model on the dataset. It is a laborious task that usually requires deep knowledge of hyperparameter optimization methods and of the Machine Learning algorithms themselves. Although several automatic optimization techniques exist, these usually consume significant resources, increasing the computational complexity required to obtain high accuracy. Since the available dataset is, among other factors, one of the most critical aspects of this computational cost, in this paper we study the effect of using different partitions of a dataset in the hyperparameter optimization phase on the efficiency of a Machine Learning algorithm. Nonparametric inference has been used to measure how the accuracy, time, and spatial complexity obtained on the partitions differ from those obtained on the whole dataset. Also, a level of gain is assigned to each partition, allowing us to study patterns and identify which samples are more profitable. Since Cybersecurity is a discipline in which the efficiency of Artificial Intelligence techniques is key to extracting actionable knowledge, the statistical analyses have been carried out over five Cybersecurity datasets.
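The toy sketch below conveys the core idea: tune hyperparameters on a partition versus the full dataset and compare the resulting accuracy and tuning time. The estimator, search grid, and partition size are illustrative choices, not those of the paper.

```python
# Toy sketch of the idea only: hyperparameter tuning on a data partition vs. the
# full dataset, comparing best CV accuracy and wall-clock tuning time.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
grid = {"n_estimators": [50, 100], "max_depth": [5, None]}

def tune(X_part, y_part):
    start = time.perf_counter()
    search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=3).fit(X_part, y_part)
    return search.best_params_, search.best_score_, time.perf_counter() - start

X_small, _, y_small, _ = train_test_split(X, y, train_size=0.25, random_state=0)
for label, (Xp, yp) in {"25% partition": (X_small, y_small), "full dataset": (X, y)}.items():
    params, score, secs = tune(Xp, yp)
    print(f"{label}: best={params}, cv accuracy={score:.3f}, time={secs:.1f}s")
```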


2021 ◽  
Author(s):  
Rushad Ravilievich Rakhimov ◽  
Oleg Valerievich Zhdaneev ◽  
Konstantin Nikolaevich Frolov ◽  
Maxim Pavlovich Babich

Abstract The ultimate objective of this paper is to describe the experience of using a machine learning model, prepared by the ensemble method, to prevent stuck pipe events during the construction of extended reach wells. The tasks performed include collecting, analyzing, and cleaning historical data; selecting and preparing a machine learning model; and testing it on real-time data by means of a desktop application. The idea is to display the solution at the rig floor, allowing the driller to quickly take action to prevent a stuck pipe event. Historical data mining and analysis were performed using software for remote monitoring. Preparation, labelling, and cleaning of historical and real-time data were executed using programmable scripts and big data techniques. The machine learning algorithm was developed using the ensemble method, which combines several models to improve the final result. On the field of interest, the most common type of stuck pipe event is the solids-induced pack-off. These occur due to insufficient hole cleaning of drilled cuttings and wellbore collapse caused by rock instability. Stuck pipe prevention on extended reach drilling (ERD) wells requires a holistic approach, while the final role is assigned to the driller. Due to the continuously expanding ERD envelope and increased workloads on both personnel and drilling equipment, the effectiveness of accident prevention is deteriorating. This leads to severe consequences: a Bottom Hole Assembly lost in hole, the need to re-drill the bore, and ultimately increased Non-Productive Time (NPT). The developed application, based on an ensemble machine learning algorithm, shows prediction accuracy above 94%. Reacting to its alarms, the driller can quickly take measures to prevent downhole accidents during the construction of ERD wells.
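As a hypothetical sketch of an ensemble-based alarm model, the code below combines several base classifiers by soft voting over synthetic surface drilling parameters; the feature names, labels, and ensemble layout are assumptions, not the authors' design.

```python
# Hypothetical sketch: a soft-voting ensemble classifying surface parameters as
# "normal" vs. "pack-off risk". Data, features, and labels are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import VotingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n = 3000
X = pd.DataFrame({
    "hookload": rng.normal(100, 10, n), "standpipe_pressure": rng.normal(200, 20, n),
    "torque": rng.normal(15, 3, n), "rop": rng.normal(30, 8, n), "rpm": rng.normal(120, 15, n),
})
y = (X["standpipe_pressure"] + rng.normal(0, 5, n) > 215).astype(int)   # synthetic pack-off label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
# In the field, predictions above a threshold would raise an alarm on the driller's display.
```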


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5573
Author(s):  
Mohsen Gholami ◽  
Christopher Napier ◽  
Astrid García Patiño ◽  
Tyler J. Cuthbert ◽  
Carlo Menon

Fatigue is a complex, multifactorial phenomenon that affects how individuals perform an activity. Fatigue during running causes changes in normal gait parameters and increases the risk of injury. To address this problem, wearable sensors have been proposed as an unobtrusive and portable system for measuring changes in human movement as a result of fatigue. Recently, a category of wearable devices that has gained attention is flexible textile strain sensors, because of their ability to be woven into garments to measure kinematics. This study uses flexible textile strain sensors to continuously monitor kinematics during running and a machine learning approach to estimate the level of fatigue. Five female participants wore the sensor-instrumented garment while running to a state of fatigue. In addition to the kinematic data from the flexible textile strain sensors, the perceived level of exertion was monitored for each participant as an indication of their actual fatigue level. A stacked random forest machine learning model was used to estimate the perceived exertion levels from the kinematic data. The machine learning algorithm obtained a root mean squared error of 0.06 and a coefficient of determination of 0.96 in participant-specific scenarios. This study highlights the potential of flexible textile strain sensors to objectively estimate the level of fatigue during running by detecting slight perturbations in lower extremity kinematics. Future iterations of this technology may lead to real-time biofeedback applications that could reduce the risk of running-related overuse injuries.
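A sketch of a stacked regressor with random-forest base learners, evaluated by RMSE and the coefficient of determination, is shown below; the kinematic feature set, the exertion scale, and the stack layout are assumptions rather than the study's exact setup.

```python
# Sketch under assumptions: stacked random forests regressing perceived exertion
# from strain-sensor kinematic features. Data and feature layout are synthetic.
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(5)
n = 1000
X = rng.normal(size=(n, 8))                    # stand-in strain-sensor kinematic features
y = 6 + 14 * rng.random(n)                     # hypothetical Borg 6-20 exertion ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
stack = StackingRegressor(
    estimators=[("rf1", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("rf2", RandomForestRegressor(n_estimators=100, max_depth=5, random_state=1))],
    final_estimator=Ridge(),
).fit(X_tr, y_tr)

pred = stack.predict(X_te)
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
print("R^2:", r2_score(y_te, pred))
```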


Extending credit to corporates and individuals is inevitable for the smooth functioning of growing economies like India. As an increasing number of customers apply for loans from banks and non-banking financial companies (NBFCs), it is challenging for banks and NBFCs with limited capital to devise a standard, safe procedure for lending money to borrowers for their financial needs. In addition, NBFC stocks have recently suffered a significant downfall in price, contributing to a contagion that has spread to other financial stocks and adversely affected the benchmark. In this paper, an attempt is made to reduce the risk involved in selecting a suitable person who can repay the loan on time, thereby keeping the bank's non-performing assets (NPAs) in check. This is achieved by feeding the past records of customers who acquired loans from the bank into a trained machine learning model that can yield an accurate result. The prime focus of the paper is to determine whether or not it is safe to allocate a loan to a particular person. This paper has the following sections: (i) Data Collection, (ii) Data Cleaning, and (iii) Performance Evaluation. Experimental tests found that the Naïve Bayes model performs better than other models for loan forecasting.
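The sketch below mirrors the paper's outline (data collection, cleaning, and performance evaluation) with a Gaussian Naive Bayes classifier; the loan columns, target definition, and synthetic records are assumptions.

```python
# Illustrative sketch of the three-step outline; columns and the Gaussian Naive
# Bayes variant are assumptions, and the records are synthetic.
import numpy as np
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 1000
# (i) Data collection: past loan records.
loans = pd.DataFrame({
    "applicant_income": rng.normal(50000, 15000, n),
    "loan_amount": rng.normal(200000, 50000, n),
    "credit_history": rng.integers(0, 2, n).astype(float),
    "repaid": rng.integers(0, 2, n),            # stand-in target: loan repaid on time
})
loans.loc[rng.random(n) < 0.05, "credit_history"] = np.nan   # simulate missing records

# (ii) Data cleaning: impute missing credit history with the most common value.
loans["credit_history"] = loans["credit_history"].fillna(loans["credit_history"].mode()[0])

# (iii) Performance evaluation of the Naive Bayes model.
X, y = loans.drop(columns="repaid"), loans["repaid"]
print("CV accuracy:", cross_val_score(GaussianNB(), X, y, cv=5).mean())
```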

