Using Real-Time Electricity Prices to Leverage Electrical Energy Storage and Flexible Loads in a Smart Grid Environment Utilizing Machine Learning Techniques

Processes ◽  
2019 ◽  
Vol 7 (12) ◽  
pp. 870 ◽  
Author(s):  
Moataz Sheha ◽  
Kody Powell

With exposure to real-time market pricing structures, consumers would be incentivized to invest in electrical energy storage systems and smart predictive automation of their home energy systems. Smart home automation through optimizing HVAC (heating, ventilation, and air conditioning) temperature set points, along with distributed energy storage, could be utilized to optimize the operation of the electric grid. Using electricity prices as decision variables to leverage electrical energy storage and flexible loads can be a valuable tool to optimize the performance of the power grid and reduce electricity costs on both the supply and demand sides. Energy demand prediction is important for proper allocation and utilization of the available resources. Manipulating energy prices to leverage storage and flexible loads through these demand prediction models is a novel idea that merits study. In this paper, different models for proactive prediction of the energy demand of an entire city using different machine learning techniques are presented and compared. The results show that the proposed nonlinear autoregressive with exogenous inputs (NARX) neural network model produced the most accurate predictions. These prediction models pave the way for the demand side to become an important asset for grid regulation by responding to variable price signals through battery energy storage and passive thermal energy storage via HVAC temperature set points.
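The autoregressive-with-exogenous-inputs idea behind the NARX model above can be illustrated with a minimal *linear* ARX sketch: demand is regressed on its own lag plus the current price. This is a simplified, linear stand-in for the paper's nonlinear neural network; the data, coefficients, and dynamics below are entirely synthetic assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical hourly demand driven by price (the exogenous input) plus persistence.
rng = np.random.default_rng(0)
n = 500
price = 30 + 10 * np.sin(np.arange(n) * 2 * np.pi / 24) + rng.normal(0, 1, n)
demand = np.zeros(n)
for t in range(1, n):
    # Assumed dynamics: strong persistence, demand drops as price rises.
    demand[t] = 0.8 * demand[t - 1] - 0.5 * price[t] + 100 + rng.normal(0, 1)

# ARX regression matrix: lagged demand, current price, and an intercept.
X = np.column_stack([demand[:-1], price[1:], np.ones(n - 1)])
y = demand[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
```

A full NARX network replaces the linear map with a neural network over the same lagged inputs, which is what lets it capture the nonlinear price–demand interactions the paper reports.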

2019 ◽  
Vol 21 (1) ◽  
Author(s):  
Erin Magee ◽  
Meserret Karaca ◽  
Michelle Alvarado ◽  
Ernesto Escoto ◽  
Alvin Lawrence

The University of Florida Counseling and Wellness Center (UF CWC) is one of the counseling centers that has implemented a walk-in appointment policy for emergency needs. Walk-in appointment traffic at the UF CWC has grown every year since data collection began in 2010, averaging a 7% increase in patient visits per year. However, demand for walk-in services is highly uncertain on an hourly, daily, or weekly basis. Additionally, students' emergency needs must be met immediately, before they escalate into a crisis. Demand prediction therefore becomes an important tool for dynamically scheduling counselors to handle unexpected demand scenarios. This project provides data visualization and uses machine learning techniques to predict future demand to assist with scheduling. We identified seasonal trends in historical visit data from the center, including peaks at the beginning of semesters and around finals. We then used the visit data to train a gradient boosting algorithm to predict demand. The model predicted a mean demand of 4.2 patients per hour with a mean squared error of 1.75. Our results contribute to better demand prediction models for the UF CWC so that it can better support student needs with adequate staffing levels.
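A gradient boosting regressor of the kind described can be sketched with scikit-learn on synthetic hourly visit counts. The features (hour of day, week of semester), demand shape, and all numbers below are illustrative assumptions, not the UF CWC data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000
hour = rng.integers(0, 24, n)
week = rng.integers(0, 16, n)  # week of a hypothetical 16-week semester

# Assumed demand pattern: a midday peak plus surges at the start and end of term,
# loosely mimicking the seasonal trends the project identified.
rate = 4 + 2 * np.exp(-((hour - 13) ** 2) / 18) + 1.5 * ((week < 2) | (week > 13))
visits = rng.poisson(rate)

X = np.column_stack([hour, week])
model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05, max_depth=3)
model.fit(X, visits)
mse = float(np.mean((model.predict(X) - visits) ** 2))
```

With calendar features like these, the boosted trees recover the seasonal peaks, and the residual error is dominated by the irreducible arrival noise, which is why hourly walk-in demand remains uncertain even with a good model.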


2020 ◽  
Vol 16 ◽  
Author(s):  
Nitigya Sambyal ◽  
Poonam Saini ◽  
Rupali Syal

Background and Introduction: Diabetes mellitus is a metabolic disorder that has emerged as a serious public health issue worldwide. According to the World Health Organization (WHO), without intervention, the number of people with diabetes is expected to reach at least 629 million by 2045. Uncontrolled diabetes gradually leads to progressive damage to the eyes, heart, kidneys, blood vessels and nerves. Method: The paper presents a critical review of existing statistical and Artificial Intelligence (AI) based machine learning techniques with respect to DM complications, namely retinopathy, neuropathy and nephropathy. The statistical and machine learning analytic techniques are used to structure the subsequent content review. Result: It is inferred that statistical analysis supports only descriptive and inferential analysis, whereas AI-based machine learning models can provide actionable prediction models for faster and more accurate diagnosis of complications associated with DM. Conclusion: The integration of AI-based analytic techniques such as machine learning and deep learning into clinical medicine will result in improved disease management through faster disease detection and reduced treatment costs.


2020 ◽  
Author(s):  
Georgios Kantidakis ◽  
Hein Putter ◽  
Carlo Lancia ◽  
Jacob de Boer ◽  
Andries E Braat ◽  
...  

Abstract Background: Predicting survival of recipients after liver transplantation is regarded as one of the most important challenges in contemporary medicine. Hence, improving on current prediction models is of great interest. Nowadays, there is a strong discussion in the medical field about machine learning (ML) and whether it has greater potential than traditional regression models when dealing with complex data. Criticism of ML relates to unsuitable performance measures and a lack of interpretability, which is important for clinicians. Methods: In this paper, ML techniques such as random forests and neural networks are applied to a large dataset of 62,294 patients from the United States, with 97 predictors selected on clinical/statistical grounds from more than 600 available, to predict survival after transplantation. Of particular interest is also the identification of potential risk factors. A comparison is performed between 3 different Cox models (with all variables, backward selection and LASSO) and 3 machine learning techniques: a random survival forest and 2 partial logistic artificial neural networks (PLANNs). For PLANNs, novel extensions to their original specification are tested. Emphasis is placed on the advantages and pitfalls of each method and on the interpretability of the ML techniques. Results: Well-established predictive measures from the survival field are employed (C-index, Brier score and Integrated Brier Score) and the strongest prognostic factors are identified for each model. The clinical endpoint is overall graft survival, defined as the time between transplantation and the date of graft failure or death. The random survival forest shows slightly better predictive performance than the Cox models based on the C-index.
Neural networks show better performance than both the Cox models and the random survival forest based on the Integrated Brier Score at 10 years. Conclusion: In this work, it is shown that machine learning techniques can be a useful tool for both prediction and interpretation in the survival context. Of the ML techniques examined here, PLANN with 1 hidden layer predicts survival probabilities the most accurately, being as well calibrated as the Cox model with all variables.
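The C-index used to compare the models above is simple to state: among all comparable patient pairs, it is the fraction whose predicted risk ordering matches the observed survival ordering. A minimal pure-NumPy sketch of Harrell's C-index (the toy data are illustrative, not the transplant registry):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C-index: fraction of comparable pairs whose predicted risk
    ordering agrees with the observed ordering of survival times. A pair is
    comparable only when the patient with the shorter time had an event."""
    n = len(time)
    concordant, comparable = 0.0, 0.0
    for i in range(n):
        if not event[i]:
            continue  # a censored shorter time gives no usable ordering
        for j in range(n):
            if time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5  # ties in predicted risk count half
    return concordant / comparable

# Toy check: risk perfectly anti-ordered with survival time gives C = 1.
time = np.array([2.0, 4.0, 6.0, 8.0])
event = np.array([1, 1, 0, 1])  # third patient is censored
risk = np.array([0.9, 0.7, 0.5, 0.1])
c = concordance_index(time, event, risk)
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect discrimination, which is the scale on which the random survival forest edges out the Cox models here.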


2021 ◽  
Vol 297 ◽  
pp. 01073 ◽  
Author(s):  
Sabyasachi Pramanik ◽  
K. Martin Sagayam ◽  
Om Prakash Jena

Cancer has been described as a heterogeneous disease comprising several distinct subtypes that may occur simultaneously. As a result, early detection and prognosis of cancer types have become essential in cancer research, since they can improve the clinical management of cancer patients. The importance of classifying cancer patients into high- or low-risk groups has prompted many research teams from the biomedical and bioinformatics fields to investigate the use of machine learning (ML) algorithms in cancer diagnosis and treatment. These methods have therefore been used to model the development and treatment of malignant disease in humans. Furthermore, the ability of machine learning techniques to extract important features from complex datasets underscores their significance. These technologies include Bayesian networks and artificial neural networks, along with a number of other approaches. Decision Trees and Support Vector Machines, which have already been used extensively in cancer research to build predictive models, also support accurate decision making. The application of machine learning techniques can undoubtedly improve our understanding of cancer development; nevertheless, a sufficient degree of validation is required before these approaches can be considered for routine clinical practice. This paper presents an overview of current machine learning approaches used to model cancer progression. All of the supervised machine learning approaches described here, along with a variety of input features and data samples, are used to build the prediction models. In light of the growing trend towards machine learning methods in biomedical research, we review the most recent papers that have used these approaches to predict cancer risk or patient outcomes.
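Two of the supervised methods named above, Support Vector Machines and Decision Trees, can be sketched side by side with scikit-learn. The dataset is synthetic (standing in for real genomic or clinical features), and the hyperparameters are illustrative choices, not those of any study surveyed in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic two-class "high-risk vs low-risk" dataset: 20 features,
# 8 of which actually carry signal, mimicking noisy biomedical inputs.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM: flexible decision boundary, less interpretable.
svm_acc = SVC(kernel="rbf").fit(X_tr, y_tr).score(X_te, y_te)

# Shallow decision tree: interpretable rules, often slightly less accurate.
tree_acc = DecisionTreeClassifier(max_depth=5, random_state=0).fit(
    X_tr, y_tr).score(X_te, y_te)
```

The accuracy-versus-interpretability trade-off visible even in this toy comparison is exactly the tension the review highlights when discussing clinical adoption.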


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Georgios Kantidakis ◽  
Hein Putter ◽  
Carlo Lancia ◽  
Jacob de Boer ◽  
Andries E. Braat ◽  
...  

Abstract Background Predicting survival of recipients after liver transplantation is regarded as one of the most important challenges in contemporary medicine. Hence, improving on current prediction models is of great interest. Nowadays, there is a strong discussion in the medical field about machine learning (ML) and whether it has greater potential than traditional regression models when dealing with complex data. Criticism of ML relates to unsuitable performance measures and a lack of interpretability, which is important for clinicians. Methods In this paper, ML techniques such as random forests and neural networks are applied to a large dataset of 62,294 patients from the United States, with 97 predictors selected on clinical/statistical grounds from more than 600 available, to predict survival after transplantation. Of particular interest is also the identification of potential risk factors. A comparison is performed between 3 different Cox models (with all variables, backward selection and LASSO) and 3 machine learning techniques: a random survival forest and 2 partial logistic artificial neural networks (PLANNs). For PLANNs, novel extensions to their original specification are tested. Emphasis is placed on the advantages and pitfalls of each method and on the interpretability of the ML techniques. Results Well-established predictive measures from the survival field are employed (C-index, Brier score and Integrated Brier Score) and the strongest prognostic factors are identified for each model. The clinical endpoint is overall graft survival, defined as the time between transplantation and the date of graft failure or death. The random survival forest shows slightly better predictive performance than the Cox models based on the C-index. Neural networks show better performance than both the Cox models and the random survival forest based on the Integrated Brier Score at 10 years.
Conclusion In this work, it is shown that machine learning techniques can be a useful tool for both prediction and interpretation in the survival context. Of the ML techniques examined here, PLANN with 1 hidden layer predicts survival probabilities the most accurately, being as well calibrated as the Cox model with all variables. Trial registration Retrospective data were provided by the Scientific Registry of Transplant Recipients under Data Use Agreement number 9477 for analysis of risk factors after liver transplantation.
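The Brier score on which the neural networks win can be sketched at a single time horizon: it is the mean squared difference between the predicted survival probability and the observed status. This minimal version omits the inverse-probability-of-censoring weights used in the paper's Integrated Brier Score, and the toy numbers are illustrative only.

```python
import numpy as np

def brier_score(event_by_t, surv_prob_t):
    """Brier score at a fixed horizon t: mean squared error between the
    predicted probability of surviving past t and the observed status
    (1 = still alive at t). Censoring weights (IPCW) are omitted here."""
    alive = 1 - np.asarray(event_by_t)
    return float(np.mean((np.asarray(surv_prob_t) - alive) ** 2))

# Toy example: sharper, well-calibrated predictions score lower (better).
event = np.array([0, 1, 0, 1])          # graft failure/death before horizon t
good = np.array([0.9, 0.2, 0.8, 0.1])   # predicted P(survive past t)
bad = np.array([0.5, 0.5, 0.5, 0.5])    # uninformative predictions
bs_good = brier_score(event, good)
bs_bad = brier_score(event, bad)
```

Integrating this quantity over follow-up time (with censoring weights) yields the Integrated Brier Score used to compare the PLANNs against the Cox models at 10 years.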


Author(s):  
Sofia Benbelkacem ◽  
Farid Kadri ◽  
Baghdad Atmani ◽  
Sondès Chaabane

Nowadays, emergency department services are confronted with increasing demand. This situation causes emergency department overcrowding, which often increases patients' length of stay and leads to strain situations. To overcome this issue, emergency department managers must predict the length of stay. In this work, the researchers propose to use machine learning techniques to build a methodology that supports the management of emergency departments (EDs). The aim is to predict patients' length of stay in the ED in order to prevent strain situations. The experiments were carried out on a real database collected from the pediatric emergency department (PED) of the Lille regional hospital center, France. Different machine learning techniques were used to build the best prediction models. The best results were obtained with the Naive Bayes, C4.5 and SVM methods. In addition, models based on a subset of attributes proved more effective than models based on the full set of attributes.
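A Naive Bayes classifier of the kind that performed well here can be sketched as a binary long-stay/short-stay predictor. The features (triage acuity, age, arrival hour), the label-generating rule, and all values are hypothetical stand-ins for the PED database, which is not public.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Hypothetical PED arrivals: triage acuity (1-5), patient age, arrival hour.
rng = np.random.default_rng(2)
n = 1000
acuity = rng.integers(1, 6, n)
age = rng.integers(0, 18, n)
hour = rng.integers(0, 24, n)

# Assumed label: "long stay" becomes more likely as acuity rises.
p_long = 1 / (1 + np.exp(-(acuity - 3)))
long_stay = (rng.random(n) < p_long).astype(int)

X = np.column_stack([acuity, age, hour])
X_tr, X_te, y_tr, y_te = train_test_split(X, long_stay, test_size=0.3,
                                          random_state=2)
acc = GaussianNB().fit(X_tr, y_tr).score(X_te, y_te)
```

Because Naive Bayes treats features independently, dropping uninformative attributes (here, age and hour carry no signal) tends to help it, which is consistent with the paper's finding that attribute-subset models outperformed full-attribute models.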


2021 ◽  
Vol 8 ◽  
Author(s):  
Daniele Roberto Giacobbe ◽  
Alessio Signori ◽  
Filippo Del Puente ◽  
Sara Mora ◽  
Luca Carmisciano ◽  
...  

Sepsis is a major cause of death worldwide. Over the past years, prediction of clinically relevant events through machine learning models has gained particular attention. In the present perspective, we provide a brief, clinician-oriented view of the following aspects concerning the use of machine learning predictive models for the early detection of sepsis in daily practice: (i) the controversy over the definition of sepsis and its influence on the development of prediction models; (ii) the choice and availability of input features; (iii) the measurement of model performance, the model output, and their usefulness in clinical practice. The increasing involvement of artificial intelligence and machine learning in health care cannot be disregarded, despite important pitfalls that should always be carefully taken into consideration. In the long run, a rigorous multidisciplinary approach to enriching our understanding of the application of machine learning techniques for the early recognition of sepsis may show potential to augment medical decision-making when facing this heterogeneous and complex syndrome.


2020 ◽  
Author(s):  
Nicola Bodini ◽  
Julie K. Lundquist ◽  
Mike Optis

Abstract. Current turbulence parameterizations in numerical weather prediction models at the mesoscale assume a local equilibrium between production and dissipation of turbulence. As this assumption does not hold at fine horizontal resolutions, improved ways to represent turbulent kinetic energy (TKE) dissipation rate (ε) are needed. Here, we use a 6-week data set of turbulence measurements from 184 sonic anemometers in complex terrain at the Perdigão field campaign to suggest improved representations of dissipation rate. First, we demonstrate that the widely used Mellor-Yamada-Nakanishi-Niino (MYNN) parameterization of TKE dissipation rate leads to large inaccuracy and bias in the representation of ε. Next, we assess the potential of machine-learning techniques to predict TKE dissipation rate from a set of atmospheric and terrain-related features. We train and test several machine-learning algorithms using the data at Perdigão, and we find that multivariate polynomial regressions and random forests can eliminate the bias MYNN currently shows in representing ε, while also reducing the average error by up to 30%. Of all the variables included in the algorithms, TKE is the variable responsible for most of the variability of ε, and a strong positive correlation exists between the two. These results suggest further consideration of machine-learning techniques to enhance parameterizations of turbulence in numerical weather prediction models.

