Illustrating the Benefits of Openness: A Large-Scale Spatial Economic Dispatch Model Using the Julia Language

Energies ◽  
2019 ◽  
Vol 12 (6) ◽  
pp. 1153 ◽  
Author(s):  
Jens Weibezahn ◽  
Mario Kendziorski

In this paper we introduce a five-fold approach to open science comprising open data, open-source software (that is, programming and modeling tools, model code, and numerical solvers), and open-access dissemination. The advantages of open energy models are discussed. A fully open-source, bottom-up electricity sector model with high spatial resolution, using the Julia programming environment, is then developed, and its source code and a data set for Germany are described. This large-scale model of the electricity market includes both generation dispatch from thermal and renewable sources in the spot market and the physical transmission network, minimizing total system costs in a linear approach. It calculates the economic dispatch on an hourly basis for a full year, taking into account demand, infeed from renewables, storage, and exchanges with neighboring countries. Following the open approach, the model code and the data set used are fully publicly accessible, and we use open-source solvers like ECOS and CLP. The model is then benchmarked, with respect to the runtime of building and solving, against a representation in GAMS as a commercial algebraic modeling language and against Gurobi, CPLEX, and Mosek as commercial solvers. With this paper we demonstrate, as a proof of concept, the power, abilities, and beauty of open-source modeling systems. This openness has the potential to increase the transparency of policy advice and to empower stakeholders with fewer financial resources.
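For a single node and a single hour, with no transmission constraints, storage, or exchanges, the cost-minimizing linear dispatch described above reduces to merit-order dispatch: fill demand with the cheapest available plants first. The following Python sketch illustrates that simplification; the plant names, capacities, and marginal costs are hypothetical, not values from the paper's German data set.

```python
def dispatch(plants, demand):
    """Merit-order dispatch: plants is a list of
    (name, capacity_MW, marginal_cost_EUR_per_MWh) tuples."""
    schedule, cost, remaining = {}, 0.0, demand
    for name, cap, mc in sorted(plants, key=lambda p: p[2]):
        g = min(cap, remaining)          # dispatch cheapest plants first
        schedule[name] = g
        cost += g * mc
        remaining -= g
        if remaining <= 0:
            break
    return schedule, cost

# hypothetical plant fleet for one hour
plants = [("wind", 40.0, 0.0), ("lignite", 50.0, 25.0), ("gas", 60.0, 45.0)]
schedule, cost = dispatch(plants, demand=70.0)
# wind runs at full 40 MW, lignite covers the remaining 30 MW, gas stays off
```

In the full model this becomes a linear program with hourly demand balances and network flow constraints, solvable by ECOS, CLP, or the commercial solvers benchmarked in the paper.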

2015 ◽  
Vol 8 (1) ◽  
pp. 421-434 ◽  
Author(s):  
M. P. Jensen ◽  
T. Toto ◽  
D. Troyan ◽  
P. E. Ciesielski ◽  
D. Holdridge ◽  
...  

Abstract. The Midlatitude Continental Convective Clouds Experiment (MC3E) took place during the spring of 2011, centered in north-central Oklahoma, USA. The main goal of this field campaign was to capture the dynamical and microphysical characteristics of precipitating convective systems in the US Central Plains. A major component of the campaign was a six-site radiosonde array designed to capture the large-scale variability of the atmospheric state, with the intent of deriving model forcing data sets. Over the course of the 46-day MC3E campaign, a total of 1362 radiosondes were launched from the enhanced sonde network. This manuscript provides details on the instrumentation used as part of the sounding array, the data processing activities, including quality checks and humidity bias corrections, and an analysis of the impacts of bias correction and algorithm assumptions on the determination of convective levels and indices. It is found that corrections for known radiosonde humidity biases and assumptions regarding the characteristics of the surface convective parcel result in significant differences in the derived values of convective levels and indices in many soundings. In addition, the impact of including the humidity corrections and quality controls on the thermodynamic profiles used in the derivation of a large-scale model forcing data set is investigated. The results show a significant impact on the derived large-scale vertical velocity field, illustrating the importance of addressing these humidity biases.
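To see why humidity bias corrections move derived convective levels, consider the lifting condensation level (LCL). Using the standard Magnus dewpoint formula and the Lawrence (2005) approximation z_LCL ≈ 125 m per degree of dewpoint depression, a few percent of relative-humidity correction shifts the LCL by well over a hundred meters. The sketch below is a generic illustration of this sensitivity, not the campaign's correction algorithm; the parcel values and the 4 %RH bias are hypothetical.

```python
import math

A, B = 17.625, 243.04  # Magnus coefficients (Alduchov & Eskridge form)

def dewpoint_C(T_C, rh_pct):
    """Dewpoint from temperature (deg C) and relative humidity (%)."""
    gamma = math.log(rh_pct / 100.0) + A * T_C / (B + T_C)
    return B * gamma / (A - gamma)

def lcl_height_m(T_C, rh_pct):
    # Lawrence (2005): z_LCL ~ 125 m per degree of dewpoint depression
    return 125.0 * (T_C - dewpoint_C(T_C, rh_pct))

T, rh = 30.0, 50.0   # hypothetical surface parcel
bias = 4.0           # hypothetical dry-bias correction, in %RH
shift = lcl_height_m(T, rh) - lcl_height_m(T, rh + bias)
# a moister corrected parcel condenses lower, so shift is positive
```

The same mechanism propagates into CAPE, CIN, and the other indices the manuscript examines.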


2019 ◽  
Vol 491 (3) ◽  
pp. 3290-3317 ◽  
Author(s):  
Oliver H E Philcox ◽  
Daniel J Eisenstein ◽  
Ross O’Connell ◽  
Alexander Wiegand

ABSTRACT To make use of clustering statistics from large cosmological surveys, accurate and precise covariance matrices are needed. We present a new code to estimate large-scale galaxy two-point correlation function (2PCF) covariances in arbitrary survey geometries that, due to new sampling techniques, runs ∼10⁴ times faster than previous codes, computing finely binned covariance matrices with negligible noise in less than 100 CPU-hours. As in previous works, non-Gaussianity is approximated via a small rescaling of shot noise in the theoretical model, calibrated by comparing jackknife survey covariances to an associated jackknife model. The flexible code, RascalC, has been publicly released, and automatically takes care of all necessary pre- and post-processing, requiring only a single input data set (without a prior 2PCF model). Deviations between large-scale model covariances from a mock survey and those from a large suite of mocks are found to be indistinguishable from noise. In addition, the choice of input mock is shown to be irrelevant for desired noise levels below ∼10⁵ mocks. Coupled with its generalization to multitracer data sets, this shows the algorithm to be an excellent tool for analysis, reducing the need for large numbers of mock simulations to be computed.
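The shot-noise rescaling above is calibrated against jackknife covariances. As a generic illustration of the delete-one jackknife covariance estimator (not RascalC's sampling algorithm), the following stdlib-only Python sketch estimates the covariance of a vector statistic from its leave-one-out means; the synthetic samples are hypothetical.

```python
import random

def jackknife_cov(samples):
    """Delete-one jackknife covariance matrix for a list of d-dim vectors."""
    n, d = len(samples), len(samples[0])
    totals = [sum(s[k] for s in samples) for k in range(d)]
    # leave-one-out means: drop each sample in turn
    loo = [[(totals[k] - s[k]) / (n - 1) for k in range(d)] for s in samples]
    mean = [sum(m[k] for m in loo) / n for k in range(d)]
    # jackknife normalization (n-1)/n over the leave-one-out spread
    return [[(n - 1) / n * sum((m[i] - mean[i]) * (m[j] - mean[j]) for m in loo)
             for j in range(d)] for i in range(d)]

random.seed(0)
samples = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
cov = jackknife_cov(samples)
# for the mean of n unit-variance draws, the diagonal is close to 1/n
```

In survey practice the "samples" are spatial jackknife regions rather than independent draws, which is why the paper compares them to an associated jackknife model rather than using them directly.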


F1000Research ◽  
2014 ◽  
Vol 2 ◽  
pp. 288 ◽  
Author(s):  
Erik Butterworth ◽  
Bartholomew E. Jardine ◽  
Gary M. Raymond ◽  
Maxwell L. Neal ◽  
James B. Bassingthwaighte

JSim is a simulation system for developing models, designing experiments, and evaluating hypotheses on physiological and pharmacological systems through the testing of model solutions against data. It is designed for interactive, iterative manipulation of the model code, handling of multiple data sets and parameter sets, and for making comparisons among different models running simultaneously or separately. Interactive use is supported by a large collection of graphical user interfaces for model writing and compilation diagnostics, defining input functions, model runs, selection of algorithms for solving ordinary and partial differential equations, run-time multidimensional graphics, parameter optimization (8 methods), sensitivity analysis, and Monte Carlo simulation for defining confidence ranges. JSim uses the Mathematical Modeling Language (MML), a declarative syntax for specifying algebraic and differential equations. Imperative constructs written in other languages (MATLAB, FORTRAN, C++, etc.) are accessed through procedure calls. MML syntax is simple, basically defining the parameters and variables and then writing the equations in a straightforward, easily read and understood mathematical form. This makes JSim good for teaching modeling as well as for model analysis in research. For high-throughput applications, JSim can be run as a batch job. JSim can automatically translate models from the repositories for Systems Biology Markup Language (SBML) and CellML models. Stochastic modeling is supported. MML supports assigning physical units to constants and variables and automates checking of dimensional balance as the first step in verification testing. Automatic unit scaling (e.g., seconds to minutes) follows, if needed. The JSim Project File sets a standard for reproducible modeling analysis; it includes in one file everything needed to analyze a set of experiments: the data, the models, the data fitting, and the evaluation of parameter confidence ranges.
JSim is open source; it and about 400 human readable open source physiological/biophysical models are available at http://www.physiome.org/jsim/.
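The dimensional-balance check that MML automates can be illustrated with a tiny quantity type: represent each unit as a tuple of base-unit exponents, reject addition of mismatched dimensions, and add exponents on multiplication. This Python sketch shows the idea only; it is not JSim's implementation, and the class and variable names are hypothetical.

```python
class Q:
    """A value tagged with base-unit exponents (meter, second, kilogram)."""
    def __init__(self, val, unit):
        self.val, self.unit = val, unit   # unit: (m_exp, s_exp, kg_exp)

    def __add__(self, other):
        # addition is only dimensionally balanced for identical units
        if self.unit != other.unit:
            raise ValueError(f"dimension mismatch: {self.unit} vs {other.unit}")
        return Q(self.val + other.val, self.unit)

    def __mul__(self, other):
        # multiplication adds the unit exponents
        return Q(self.val * other.val,
                 tuple(a + b for a, b in zip(self.unit, other.unit)))

SECOND = (0, 1, 0)
v = Q(3.0, (1, -1, 0))   # 3 m/s
t = Q(2.0, SECOND)       # 2 s
d = v * t                # 6 m: velocity times time yields a length
```

Attempting `d + t` raises an error, which is exactly the kind of mistake MML's verification step catches before any numerics run; automatic unit scaling (seconds to minutes, say) is then just a conversion factor applied when exponents match.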


2020 ◽  
Author(s):  
Nicolas Bosc ◽  
Eloy Felix ◽  
Ricardo Arcila ◽  
David Mendez ◽  
Martin Saunders ◽  
...  

Abstract Malaria is a disease affecting hundreds of millions of people across the world, mainly in developing countries and especially in Sub-Saharan Africa. It is the cause of hundreds of thousands of deaths each year and there is an ever-present need to identify and develop effective new therapies to tackle the disease and overcome increasing drug resistance. Here, we extend a previous study in which a number of partners collaborated to develop a consensus in silico model that can be used to identify novel molecules that may have antimalarial properties. The performance of machine learning methods generally improves with the number of data points available for training. One practical challenge in building large training sets is that the data are often proprietary and cannot be straightforwardly integrated. Here, this was addressed by sharing QSAR models, each built on a private data set. We describe the development of an open-source software platform for creating such models, a comprehensive evaluation of methods to create a single consensus model and a web platform called MAIP available at https://www.ebi.ac.uk/chembl/maip/. MAIP is freely available for the wider community to make large-scale predictions of potential malaria-inhibiting compounds. This project also highlights some of the practical challenges in reproducing published computational methods and the opportunities that open-source software can offer to the community.
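The consensus idea above, sharing models rather than proprietary data, can be sketched in a few lines: each partner contributes a scoring function trained privately, and only the scores are combined. The stand-in "models" below are hypothetical toy functions over a bit-vector fingerprint, not MAIP's actual scoring scheme.

```python
def consensus_score(models, fingerprint):
    """Average the scores of independently trained partner models.
    No partner's training data is exchanged, only predictions."""
    scores = [m(fingerprint) for m in models]
    return sum(scores) / len(scores)

# hypothetical partner models: each maps a fingerprint to a score in [0, 1]
m1 = lambda fp: sum(fp) / len(fp)       # density of set bits
m2 = lambda fp: fp[0] * 0.8 + 0.1       # weight on one substructure bit
m3 = lambda fp: 0.5                     # uninformative baseline

score = consensus_score([m1, m2, m3], [1, 0, 1, 1])
```

The paper's evaluation compares several such combination rules; simple averaging is only one candidate, shown here because it makes the data-privacy property obvious.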


Processes ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 75 ◽  
Author(s):  
Kris Villez ◽  
Julien Billeter ◽  
Dominique Bonvin

The computation and modeling of extents has been proposed to handle the complexity of large-scale model identification tasks. Unfortunately, the existing extent-based framework applies only under certain conditions; most typically, it requires that a unique value can be computed for each extent. This severely limits the applicability of the approach. In this work, we propose a novel procedure for parameter estimation inspired by the existing extent-based framework. A key difference from prior work is that the proposed procedure combines structural observability labeling, matrix factorization, and graph-based system partitioning to split the original model parameter estimation problem into parameter estimation subproblems with the smallest possible number of parameters. The value of the proposed method is demonstrated with an extensive simulation study and a study based on a historical data set collected to characterize the isomerization of α-pinene. Most importantly, the obtained results indicate that an important barrier to the application of extent-based frameworks for process modeling and monitoring tasks has been lifted.
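The graph-based partitioning step rests on a simple observation: parameters that never appear together in any measured equation can be estimated in separate, smaller subproblems. One way to find such blocks is to take connected components of the parameter co-occurrence graph, sketched below with a union-find; the incidence data are hypothetical and the sketch omits the observability labeling and matrix factorization the paper combines with it.

```python
def partition(equations):
    """equations: list of sets, each the parameter names in one
    measured equation. Returns independent estimation blocks."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for eq in equations:
        members = sorted(eq)
        for p in members:
            find(p)                  # register every parameter
        for p in members[1:]:
            union(members[0], p)     # co-occurring parameters share a block

    groups = {}
    for p in parent:
        groups.setdefault(find(p), set()).add(p)
    return sorted(groups.values(), key=min)

# hypothetical incidence: k1 and k2 share an equation; k3 and k4 share another
blocks = partition([{"k1", "k2"}, {"k2"}, {"k3", "k4"}])
```

Each returned block can then be fitted independently, which is what shrinks the original estimation problem.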


2021 ◽  
Author(s):  
Dongqi Wu ◽  
Xiangtian Zheng ◽  
Yixing Xu ◽  
Daniel Olsen ◽  
Bainan Xia ◽  
...  

Abstract Unprecedented winter storms that hit across Texas in February 2021 have caused at least 4.5 million customers to experience load shedding due to wide-ranging generation capacity outages and record-breaking electricity demand. While much remains to be investigated on what, how, and why such widespread power outages occurred across Texas, it is imperative for the broader research community to develop insights based on a coherent electric grid model and data set. In this paper, we collaboratively release an open-source, large-scale baseline model that is synthetic but nevertheless provides a realistic representation of the actual energy grid, accompanied by open-source cross-domain data sets. Leveraging the synthetic grid model, we reproduce the blackout event and critically assess several corrective measures that could have mitigated the blackout under such extreme weather conditions. We uncover the regional disparity of load shedding. The analysis also quantifies the sensitivity of several corrective measures with respect to mitigating the power outage, as measured in energy not served (ENS). This approach and methodology are generalizable to other regions experiencing significant energy portfolio transitions.
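Energy not served, the outage metric used above, is simply the shortfall between demand and available supply integrated over time. A minimal Python sketch, with hypothetical hourly series rather than the Texas event data:

```python
def energy_not_served(demand_MW, available_MW, dt_h=1.0):
    """Sum of positive (demand - available) shortfalls, in MWh."""
    return sum(max(d - a, 0.0) * dt_h for d, a in zip(demand_MW, available_MW))

# hypothetical four-hour window during a capacity shortfall
demand    = [60.0, 72.0, 75.0, 70.0]   # MW
available = [65.0, 64.0, 60.0, 71.0]   # MW after generation outages

ens = energy_not_served(demand, available)   # MWh of load shed
```

Comparing ENS between the baseline blackout reproduction and each corrective-measure scenario is what yields the sensitivity figures the paper reports.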


2021 ◽  
Author(s):  
Ivan Georgiev Raikov ◽  
Aaron D Milstein ◽  
Prannath Moolchand ◽  
Gergely G Szabo ◽  
Calvin J Schneider ◽  
...  

Large-scale computational models of the brain are necessary to accurately represent anatomical and functional variability in neuronal biophysics across brain regions, and to capture and study local and global interactions between neuronal populations on a behaviorally relevant temporal scale. We present the methodology behind, and an initial implementation of, a novel open-source computational framework for the construction, simulation, and analysis of models consisting of millions of neurons on high-performance computing systems, based on the NEURON and CoreNEURON simulators. This framework includes an HDF5-based data format for storing the morphological, synaptic, and connectivity information of large neuronal network models, and an accompanying open-source software library that provides efficient, scalable parallel storage and MPI-based data movement capabilities. We outline our approaches for constructing detailed large-scale biophysical models with topographical connectivity and input stimuli, and present simulation results obtained with a full-scale model of the dentate gyrus constructed with our framework. The model generates sparse and spatially selective population activity that fits well with in-vivo experimental data. Moreover, our approach is fully general and can be applied to modeling other regions of the hippocampal formation in order to rapidly evaluate specific hypotheses about large-scale neural architectural features.
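Per-neuron connectivity is ragged data: each neuron has a different number of presynaptic sources. A common layout for storing such data in flat HDF5 datasets is a single concatenated index array plus a per-neuron offsets array, similar to CSR sparse-matrix storage. The pure-Python sketch below shows the layout idea without HDF5 itself; it is an illustration of the general idiom, not the framework's actual on-disk format, and the connectivity values are hypothetical.

```python
def pack(conn_lists):
    """Flatten ragged per-neuron source lists into (flat, offsets),
    where neuron i's sources live at flat[offsets[i]:offsets[i+1]]."""
    flat, offsets = [], [0]
    for sources in conn_lists:
        flat.extend(sources)
        offsets.append(len(flat))
    return flat, offsets

def sources_of(flat, offsets, i):
    """Recover neuron i's presynaptic source ids."""
    return flat[offsets[i]:offsets[i + 1]]

# hypothetical presynaptic ids for three neurons (neuron 1 has none)
conn = [[2, 5], [], [0, 1, 3]]
flat, offsets = pack(conn)
```

Two fixed-size arrays like these parallelize well: each MPI rank can read a contiguous slice of `offsets` and the corresponding slice of `flat` independently, which is the property a scalable parallel storage library needs.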


2018 ◽  
Vol 40 ◽  
pp. 05040 ◽  
Author(s):  
Mohamed F.M. Yossef ◽  
J. S. de Jong ◽  
A. Spruyt ◽  
M. Scholten

For decades, the decision-making process for water management in the Netherlands has made full use of state-of-the-art models. For rivers, two-dimensional hydrodynamic models are considered essential for a wide range of questions. Every five years, there is a major model revision that includes software updates, improved physical processes, a new modelling strategy, and a new calibration. 2017 marked the setup and calibration of the first river model in the sixth generation of these models. In this paper, we discuss the most recent developments in two-dimensional hydrodynamic modelling of rivers. We give an overview of the process followed to agree on the functional design of the model and address the use of the recently developed Delft3D Flexible Mesh suite. We address, in some detail: i) a mesh-independent approach for model setup; ii) the utilisation of a new calibration technique, which is automated using data assimilation and includes spatial and discharge dependencies; and iii) the use of a novel operational module to control hydraulic structures. The first river model within the sixth generation is that of the Meuse River, where the new approaches are being successfully applied. In conclusion, the mesh-independent modelling approach offers great flexibility and allows the same data set to be used for multiple versions of the model (e.g. different grid resolutions or different model extents). The automated calibration approach makes it possible to utilise a comprehensive calibration data set for a large-scale model in a reproducible way. This increased complexity of modelling has become possible over the last decade due to the availability of large data sets and increased computational power. This paper is particularly relevant for modellers and decision makers alike.
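At its core, automated calibration selects the parameter values (here, roughness) that minimize the misfit between modeled and observed water levels across the measured discharge range. The toy Python sketch below does this by grid search over a single roughness factor with a deliberately simplistic stage-discharge relation; the relation, the observations, and the candidate range are all hypothetical, and the paper's actual technique additionally uses data assimilation with spatial and discharge-dependent roughness fields.

```python
def modeled_level(Q, roughness):
    """Deliberately simplistic stage-discharge relation: level ~ r * Q^0.6."""
    return roughness * Q ** 0.6

def calibrate(observations, candidates):
    """Pick the candidate roughness minimizing squared water-level misfit."""
    def misfit(r):
        return sum((modeled_level(Q, r) - h) ** 2 for Q, h in observations)
    return min(candidates, key=misfit)

# hypothetical (discharge m3/s, observed level m) pairs across the flow range
obs = [(100.0, 4.0), (400.0, 9.1), (900.0, 14.8)]
best = calibrate(obs, [round(0.10 + 0.01 * i, 2) for i in range(30)])
```

Fitting against observations at several discharges, rather than one design flood, is what gives the calibration its discharge dependency; reproducibility comes from the procedure being fully scripted rather than hand-tuned.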

