Testing the inferred transcription rates of a dynamic, gene network model in absolute units

Author(s):  
Uriel Urquiza-García ◽  
Andrew J Millar

Abstract The circadian clock coordinates plant physiology and development. Mathematical clock models have provided a rigorous framework to understand how the observed rhythms emerge from disparate, molecular processes. However, models of the plant clock have largely been built and tested against RNA timeseries data in arbitrary, relative units. This limits model transferability, refinement from biochemical data and applications in synthetic biology. Here, we incorporate absolute mass units into a detailed model of the clock gene network in Arabidopsis thaliana. We re-interpret the established P2011 model, highlighting a transcriptional activator that overlaps the function of REVEILLE 8/LHY-CCA1-LIKE 5. The U2020 model incorporates the repressive regulation of PRR genes, a key feature of the most detailed clock model KF2014, without greatly increasing model complexity. We tested the experimental error distributions of qRT-PCR data calibrated for units of RNA transcripts/cell and of circadian period estimates, in order to link the models to data more appropriately. U2019 and U2020 models were constrained using these data types, recreating previously-described circadian behaviours with RNA metabolic processes in absolute units. To test their inferred rates, we estimated a distribution of observed, transcriptome-wide transcription rates (Plant Empirical Transcription Rates, PETR) in units of transcripts/cell/hour. The PETR distribution and the equivalent degradation rates indicated that the models’ predicted rates are biologically plausible, with individual exceptions. In addition to updated clock models, FAIR data resources and a software environment in Docker, this validation process represents an advance in biochemical realism for models of plant gene regulation.
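The link between absolute RNA abundances and transcription rates in the PETR comparison rests on a standard steady-state balance: when synthesis matches first-order decay, the transcription rate equals the degradation rate times the steady-state abundance. A minimal sketch, with illustrative numbers rather than values from the paper:

```python
import math

def transcription_rate(m_ss, half_life_h):
    """Steady-state transcription rate in transcripts/cell/hour,
    from abundance m_ss (transcripts/cell) and mRNA half-life (hours).
    At steady state, synthesis balances first-order decay:
    k_tx = k_deg * m_ss, with k_deg = ln(2) / t_half."""
    k_deg = math.log(2) / half_life_h
    return k_deg * m_ss

# Illustrative values (not measurements from the paper):
# 30 transcripts/cell with a 2 h half-life gives ~10.4 transcripts/cell/h.
rate = transcription_rate(30.0, 2.0)
```

A model's inferred rate can then be checked for plausibility against an empirical distribution of such rates, which is the role the PETR distribution plays above.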

2021 ◽  
Author(s):  
Uriel Urquiza-Garcia ◽  
Andrew J Millar

The circadian clock coordinates plant physiology and development. Mathematical clock models have provided a rigorous framework to understand how the observed rhythms emerge from disparate, molecular processes. However, models of the plant clock have largely been built and tested against RNA timeseries data in arbitrary, relative units. This limits model transferability, refinement from biochemical data and applications in synthetic biology. Here, we incorporate absolute mass units into a detailed, gene circuit model of the clock in Arabidopsis thaliana. We re-interpret the established P2011 model, highlighting a transcriptional activator that overlaps the function of REVEILLE 8/LHY-CCA1-LIKE 5, and refactor dynamic equations for the Evening Complex. The U2020 model incorporates the repressive regulation of PRR genes, a key feature of the most detailed clock model F2014, without greatly increasing model complexity. We tested the experimental error distributions of qRT-PCR data calibrated for units of RNA transcripts/cell and of circadian period estimates, in order to link the models to data more appropriately. U2019 and U2020 models were constrained using these data types, recreating previously-described circadian behaviours with RNA metabolic processes in absolute units. To test their inferred rates, we estimated a distribution of observed, transcriptome-wide transcription rates (Plant Empirical Transcription Rates, PETR) in units of transcripts/cell/hour. The PETR distribution and the equivalent degradation rates indicated that the models' predicted rates are biologically plausible, with individual exceptions. In addition to updated, explanatory models of the plant clock, this validation process represents an advance in biochemical realism for models of plant gene regulation.


2011 ◽  
Vol 347-353 ◽  
pp. 2342-2346
Author(s):  
Rong Fu ◽  
Bao Yun Wang ◽  
Wan Peng Sun

With increasing installation capacity and wind farm penetration, wind power plays an increasingly important role in power systems, and the modeling of wind farms has become an interesting research topic. In this paper, a coherency-based equivalent model is discussed for the doubly fed induction generator (DFIG). Firstly, the dynamic models of wind turbines and DFIGs, and their mechanisms, are briefly introduced. Existing dynamic equivalencing methods used in wind farm aggregation, such as the equivalent wind model, the variable-speed wind turbine model, parameter identification and modal equivalencing, are discussed. Then, considering wind power fluctuations, a new equivalent model of a wind farm equipped with doubly fed induction generators is proposed to represent the interactions between the wind farm and the grid. The proposed method aggregates each coherent group of wind turbines into a single equivalent turbine. Finally, the effectiveness of the equivalent model is demonstrated by comparison with the wind farm response obtained from the detailed model. The dynamic simulations show that the proposed model can greatly reduce computation time and model complexity.
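The coherency-based aggregation described above can be sketched roughly as follows. This is an illustrative simplification, assuming coherency is judged by correlating turbine power-output time series (the paper's own criterion may differ) and that equivalent per-unit parameters are capacity-weighted averages of the group's members:

```python
import numpy as np

def coherent_groups(power, threshold=0.95):
    """Greedy grouping of turbines whose output time series are
    mutually correlated above `threshold` (a simple proxy for
    coherency; real studies may use speed/slip trajectories).
    `power` has one row per turbine."""
    n = power.shape[0]
    corr = np.corrcoef(power)
    groups, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        group = [i] + [j for j in range(i + 1, n)
                       if j not in assigned and corr[i, j] >= threshold]
        assigned.update(group)
        groups.append(group)
    return groups

def aggregate(group, rated_mw, params):
    """Equivalent machine for one coherent group: rated capacities
    add; per-unit parameters are capacity-weighted averages."""
    cap = rated_mw[group].sum()
    eq_params = (params[group] * rated_mw[group, None]).sum(axis=0) / cap
    return cap, eq_params
```

Replacing each coherent group by one equivalent machine is what yields the reduction in simulation time and model order reported above.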


2020 ◽  
Vol 24 (4) ◽  
pp. 1677-1689 ◽  
Author(s):  
Matthew J. Knowling ◽  
Jeremy T. White ◽  
Catherine R. Moore ◽  
Pawel Rakowski ◽  
Kevin Hayley

Abstract. It has been advocated that history matching numerical models to a diverse range of observation data types, particularly including environmental tracer concentrations and their interpretations and derivatives (e.g., mean age), constitutes an effective and appropriate means to improve model forecast reliability. This study presents two regional-scale modeling case studies that directly and rigorously assess the value of discrete tritium concentration observations and tritium-derived mean residence time (MRT) estimates in two decision-support contexts; “value” is measured herein as both the improvement (or otherwise) in the reliability of forecasts through uncertainty variance reduction and bias minimization as a result of assimilating tritium or tritium-derived MRT observations. The first case study (Heretaunga Plains, New Zealand) utilizes a suite of steady-state and transient flow models and an advection-only particle-tracking model to evaluate the worth of tritium-derived MRT estimates relative to hydraulic potential, spring discharge and river–aquifer exchange flux observations. The worth of MRT observations is quantified in terms of the change in the uncertainty surrounding ecologically sensitive spring discharge forecasts via first-order second-moment (FOSM) analyses. The second case study (Hauraki Plains, New Zealand) employs paired simple–complex transient flow and transport models to evaluate the potential for assimilation-induced bias in simulated surface-water nitrate discharge to an ecologically sensitive estuary system; formal data assimilation of tritium observations is undertaken using an iterative ensemble smoother. 
The results of these case studies indicate that, for the decision-relevant forecasts considered, tritium observations are of variable benefit and may induce damaging bias in forecasts; these biases are a result of an imperfect model's inability to properly and directly assimilate the rich information content of the tritium observations. The findings of this study challenge the advocacy of the increasing use of tracers, and of diverse data types more generally, whenever environmental model data assimilation is undertaken with imperfect models. This study also highlights the need for improved imperfect-model data assimilation strategies. While these strategies will likely require increased model complexity (including advanced discretization, processes and parameterization) to allow for appropriate assimilation of rich and diverse data types that operate across a range of spatial and temporal scales commensurate with a forecast of management interest, it is critical that increased model complexity does not preclude the application of formal data assimilation and uncertainty quantification techniques due to model instability and excessive run times.
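The first-order second-moment data-worth measure used in the first case study can be sketched with a linearized (Bayes-linear) update. This is a generic illustration, not the study's implementation: `J`, `C_p`, `C_obs` and `y` stand for the observation Jacobian, prior parameter covariance, observation-noise covariance and forecast sensitivity vector, and the worth of an observation set is the resulting reduction in forecast variance:

```python
import numpy as np

def fosm_forecast_variance(J, C_p, C_obs, y):
    """First-order second-moment (FOSM) forecast uncertainty.
    J:     observation sensitivities (n_obs x n_par)
    C_p:   prior parameter covariance (n_par x n_par)
    C_obs: observation-noise covariance (n_obs x n_obs)
    y:     forecast sensitivity vector (n_par,)
    Returns (prior, posterior) forecast variance; their difference
    quantifies the data worth of the observations in J."""
    prior = y @ C_p @ y
    # Schur-complement (Bayes-linear) conditioning of the parameters:
    gain = C_p @ J.T @ np.linalg.inv(J @ C_p @ J.T + C_obs)
    C_post = C_p - gain @ J @ C_p
    return prior, y @ C_post @ y
```

Comparing the posterior variance with and without a candidate observation row in `J` gives the relative worth of, say, an MRT estimate versus a spring discharge measurement.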


2019 ◽  
Author(s):  
Matthew J. Knowling ◽  
Jeremy T. White ◽  
Catherine R. Moore ◽  
Pawel Rakowski ◽  
Kevin Hayley

Abstract. It has been advocated that history-matching numerical models to a diverse range of observation data types, particularly including environmental tracer concentrations and their interpretations/derivatives (e.g., mean age), constitutes an effective and appropriate means to improve model forecast reliability. This study presents two regional-scale modeling case studies that directly and rigorously assess the value of discrete tritium concentration observations and tritium-derived mean residence time (MRT) estimates in two decision-support contexts; value herein is measured as the improvement (or otherwise) in the reliability of forecasts through uncertainty variance reduction and bias minimization as a result of assimilating tritium or tritium-derived MRT observations. The first case study (Heretaunga Plains, New Zealand) utilizes a suite of steady-state and transient flow models and an advection-only particle-tracking model to evaluate the worth of tritium-derived MRT estimates relative to hydraulic potential, spring discharge and river/aquifer exchange flux observations. The worth of MRT observations is quantified in terms of the change in the uncertainty surrounding ecologically-sensitive spring discharge forecasts via first-order second-moment analyses. The second case study (Hauraki Plains, New Zealand) employs paired simple/complex transient flow and transport models to evaluate the potential for assimilation-induced bias in simulated surface-water nitrate discharge to an ecologically-sensitive estuary system; formal data assimilation of tritium observations is undertaken using an iterative ensemble smoother. The results of these case studies indicate that, for the decision-relevant forecasts considered, tritium observations are of variable benefit and may induce damaging bias in forecasts; these biases are a result of an imperfect model's inability to properly and directly assimilate the rich information content of the tritium observations. 
The findings of this study challenge the unqualified advocacy of the increasing use of tracers, and diverse data types more generally, whenever environmental model data assimilation is undertaken with imperfect models. This study also highlights the need for improved imperfect-model data assimilation strategies. While these strategies will likely require increased model complexity (including advanced discretization, processes and parameterization) to allow for appropriate assimilation of rich and diverse data types that operate across a range of spatial and temporal scales commensurate with a forecast of management interest, it is critical that increased model complexity does not preclude the application of formal data assimilation and uncertainty quantification techniques due to model instability and excessive run times.
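The iterative ensemble smoother used for tritium assimilation in the second case study builds on Kalman-type ensemble updates. A minimal sketch of one such update step is shown below, assuming Gaussian observation noise; production iES implementations add iteration, localization and inflation control on top of this:

```python
import numpy as np

def ensemble_smoother_update(X, D, d_obs, C_d, alpha=1.0, seed=0):
    """One Kalman-type ensemble-smoother step.
    X:     parameter ensemble (n_par x n_ens)
    D:     simulated observations per member (n_obs x n_ens)
    d_obs: observed data (n_obs,)
    C_d:   observation-noise covariance
    alpha: noise-inflation factor used by multiple-data-assimilation
           variants (alpha=1 recovers a single plain update)."""
    rng = np.random.default_rng(seed)
    n_ens = X.shape[1]
    dX = X - X.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_xd = dX @ dD.T / (n_ens - 1)   # parameter-observation covariance
    C_dd = dD @ dD.T / (n_ens - 1)   # observation covariance
    # Perturb the data per member, then apply the Kalman gain:
    D_pert = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), alpha * C_d, n_ens).T
    K = C_xd @ np.linalg.inv(C_dd + alpha * C_d)
    return X + K @ (D_pert - D)
```

The bias mechanism discussed above arises when the forward model generating `D` is structurally imperfect: the update then maps information the model cannot represent onto compensating, and potentially forecast-damaging, parameter adjustments.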


Author(s):  
Robert Iacob ◽  
Peter Mitrouchev ◽  
Jean-Claude Léon

Simulations of assembly/disassembly (A/D) processes cover a large range of objectives (A/D sequencing, path finding, ergonomic analysis, etc.), in which the 3D shape description of the components plays a key role. In addition, A/D simulations can be performed either automatically or interactively, using standard computer equipment or immersive, real-time simulation schemes. To address this diversity of configurations, this paper presents a simulation framework for A/D analysis based on a new simulation preparation process that allows a simulation to handle up to two types of shape representation at the same time, i.e. B-Rep NURBS and polyhedral models, thus efficiently handling configurations where the 3D shape representations of assemblies play a key role. To illustrate the simulation preparation process, some specific steps are addressed. To this end, the automatic identification of contacts in a 3D product model, and the generation of the corresponding contact list, is described. After this first identification stage, the results must be interpreted to obtain the complete list of mechanical contacts for a product. Three major stages of the preparation process are detailed: model tessellation, surface merging and contact identification. The framework is based on the STEP exchange format, and the contacts considered relate to basic geometric surfaces: planes, cylinders, cones and spheres. Examples are provided to illustrate the contributions of the proposed framework. This software environment can help designers achieve a satisfactory assembly analysis rapidly and can reduce the lead time of product development. A further outcome of the present work is its ability to produce models and treatments that improve the integration of assembly models into immersive environments, taking into account the haptic and visual models required.
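Contact identification between the basic surfaces mentioned above reduces to simple algebraic tests on the surfaces' defining data. A toy sketch for the planar case (a hypothetical helper for illustration, not the paper's implementation):

```python
import numpy as np

def planar_contact(p1, n1, p2, n2, tol=1e-6):
    """Illustrative contact test for two planar faces, each given by a
    point and an outward unit normal: a planar contact requires the
    normals to be opposite and the planes to coincide. Cylinder, cone
    and sphere contacts follow analogous axis/apex/radius comparisons."""
    opposite = np.dot(n1, n2) < -1.0 + tol       # normals anti-parallel
    coplanar = abs(np.dot(p2 - p1, n1)) < tol    # zero plane separation
    return bool(opposite and coplanar)
```

Running such tests pairwise over the merged surfaces of an assembly yields the raw contact list, which is then interpreted to produce the final list of mechanical contacts.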


PeerJ ◽  
2019 ◽  
Vol 7 ◽  
pp. e6281 ◽  
Author(s):  
Marlon E. Cobos ◽  
A. Townsend Peterson ◽  
Narayani Barve ◽  
Luis Osorio-Olvera

Background
Ecological niche modeling is a set of analytical tools with applications in diverse disciplines, yet creating these models rigorously is a challenging task. The calibration phase of these models is critical, but despite recent attempts at providing tools for performing this step, adequate detail is still missing. Here, we present the kuenm R package, a new set of tools for performing detailed development of ecological niche models with the Maxent platform in a reproducible way.

Results
This package takes advantage of the versatility of R and Maxent to enable detailed model calibration and selection, final model creation and evaluation, and extrapolation risk analysis. The best parameters for modeling are selected considering (1) statistical significance, (2) predictive power, and (3) model complexity. For final models, we enable multiple parameter sets and model transfers, making processing simpler. Users can also evaluate extrapolation risk in model transfers via the mobility-oriented parity (MOP) metric.

Discussion
Use of this package allows robust model calibration, facilitating the creation of final models based on model significance, performance, and simplicity. Model transfers to multiple scenarios, also facilitated by this package, significantly reduce the time invested in these tasks. Finally, efficient assessment of strict-extrapolation risks in model transfers via the MOP and MESS metrics helps to prevent overinterpretation of model outcomes.
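The three-criterion selection logic can be illustrated with a toy filter. This is a hedged sketch of the described workflow, not kuenm's actual code; the field names and the 5% omission-rate threshold are assumptions commonly used in this setting:

```python
def select_models(candidates):
    """Illustrative three-step candidate-model filter:
    (1) keep statistically significant models,
    (2) keep those with adequate predictive power (omission <= 0.05),
    (3) among the survivors, keep the simplest well-fitting models
        (delta AICc <= 2 relative to the best)."""
    significant = [m for m in candidates if m["p_value"] < 0.05]
    low_omission = [m for m in significant if m["omission_rate"] <= 0.05]
    pool = low_omission or significant  # fall back if none pass step 2
    best_aicc = min(m["aicc"] for m in pool)
    return [m for m in pool if m["aicc"] - best_aicc <= 2]
```

Applied to a table of candidate Maxent runs, a filter of this shape returns the parameterizations used to build the final models.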


Author(s):  
Thorsten Meiser

Stochastic dependence among cognitive processes can be modeled in different ways, and the family of multinomial processing tree models provides a flexible framework for analyzing stochastic dependence among discrete cognitive states. This article presents a multinomial model of multidimensional source recognition that specifies stochastic dependence by a parameter for the joint retrieval of multiple source attributes together with parameters for stochastically independent retrieval. The new model is equivalent to a previous multinomial model of multidimensional source memory for a subset of the parameter space. An empirical application illustrates the advantages of the new multinomial model of joint source recognition. The new model allows for a direct comparison of joint source retrieval across conditions, it avoids statistical problems due to inflated confidence intervals and does not imply a conceptual imbalance between source dimensions. Model selection criteria that take model complexity into account corroborate the new model of joint source recognition.
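The complexity-penalized comparison mentioned at the end can be illustrated generically: an MPT model maps its processing parameters to category probabilities, fit is measured by the multinomial log-likelihood, and criteria such as AIC penalize extra parameters, so a model with an additional parameter (such as one for joint retrieval) must buy a real improvement in fit. A minimal sketch, not specific to the article's model:

```python
import numpy as np

def mpt_log_likelihood(counts, probs):
    """Multinomial log-likelihood (up to an additive constant) for an
    MPT model: `probs` are the category probabilities implied by the
    model's processing parameters; `counts` are observed frequencies."""
    counts, probs = np.asarray(counts, float), np.asarray(probs, float)
    return float(np.sum(counts * np.log(probs)))

def aic(counts, probs, n_params):
    """Complexity-penalized fit: AIC = -2 lnL + 2k, so each extra
    parameter must improve lnL by more than 1 to be preferred."""
    return -2.0 * mpt_log_likelihood(counts, probs) + 2.0 * n_params
```

Comparing AIC (or similar criteria) across the joint-retrieval model and its independence-only competitor is the kind of model selection the abstract refers to.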


1991 ◽  
Vol 6 (5) ◽  
pp. 347 ◽  
Author(s):  
Bruce I. Blum

Author(s):  
A. A. Nedbaylov

The calculations required in project activities for engineering students are commonly performed in electronic spreadsheets. Practice has shown that these calculations can prove quite difficult for students in other fields. One cause of this situation (and, in part, of the problems observed in Java and C programming courses) lies in the lack of a streamlined structure for distributing both the source data and the end results. A solution can be found in a shared approach to structuring information in spreadsheets and software environments, called "the Book Method", which takes into account the engineering-psychology issues regarding the user-friendliness of working with electronic information. This method can be applied at different levels in academic institutions and in teacher training courses.

