Calibration of a simple and a complex model of global marine biogeochemistry

2017 ◽  
Vol 14 (21) ◽  
pp. 4965-4984 ◽  
Author(s):  
Iris Kriest

Abstract. The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types – a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) – for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, and suggests that observations of this tracer – although difficult to measure – may be an important asset for model calibration.
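As a concrete illustration of the kind of misfit such a calibration minimizes, the sketch below computes a volume-weighted RMSE over synthetic phosphate, nitrate and oxygen fields. The weighting, tracer set and data are assumptions for illustration, not the paper's exact cost function.

```python
# Minimal sketch of a volume-weighted misfit against annual-mean tracer
# climatologies (illustrative only, not the paper's cost function).
import numpy as np

def volume_weighted_rmse(model, obs, volume):
    w = volume / volume.sum()
    return np.sqrt(np.sum(w * (model - obs) ** 2))

rng = np.random.default_rng(0)
volume = rng.uniform(0.5, 2.0, 1000)                              # grid-box volumes (synthetic)
obs = {t: rng.uniform(0.0, 1.0, 1000) for t in ("po4", "no3", "o2")}   # observed climatologies (synthetic)
model = {t: obs[t] + rng.normal(0, 0.05, 1000) for t in obs}           # simulated tracers (synthetic)

misfit = sum(volume_weighted_rmse(model[t], obs[t], volume) for t in obs)
print("total misfit:", misfit)
```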


Water ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 1484
Author(s):  
Dagmar Dlouhá ◽  
Viktor Dubovský ◽  
Lukáš Pospíšil

We present an approach for calibrating the parameters of simplified evaporation models by optimizing them against the most complex model for evaporation estimation, i.e., the Penman–Monteith equation. This model computes evaporation from several input quantities, such as air temperature, wind speed, heat storage and net radiation. However, not all of these values are always available, so simplified models must be used. Our interest in free-water-surface evaporation stems from the ongoing hydric reclamation of the former Ležáky–Most quarry, i.e., the restoration of the mined land to a natural and economically usable state. For emerging pit lakes, the prediction of evaporation and of the water level plays a crucial role. We examine the methodology on several popular models using standard statistical measures. The presented approach can be applied in a general model calibration process against any theoretical or measured evaporation.
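Below is a minimal sketch, not the authors' code, of this calibration idea: the parameters of a hypothetical simplified model (here a linear function of air temperature) are fitted by least squares against values standing in for Penman–Monteith output. The model form and all data are illustrative assumptions.

```python
# Calibrating a hypothetical simplified evaporation model against
# Penman-Monteith estimates by least squares (illustrative sketch).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
temp = rng.uniform(5.0, 30.0, 200)                        # air temperature [deg C]
e_pm = 0.12 * temp + 0.8 + rng.normal(0, 0.2, 200)        # stand-in for Penman-Monteith evaporation [mm/day]

def simple_model(params, t):
    """Hypothetical simplified model: E = a * T + b."""
    a, b = params
    return a * t + b

def residuals(params):
    return simple_model(params, temp) - e_pm

fit = least_squares(residuals, x0=[0.1, 0.0])
rmse = np.sqrt(np.mean(fit.fun ** 2))
print("calibrated a, b:", fit.x, " RMSE vs Penman-Monteith:", rmse)
```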


2018 ◽  
Vol 22 (8) ◽  
pp. 4565-4581 ◽  
Author(s):  
Florian U. Jehn ◽  
Lutz Breuer ◽  
Tobias Houska ◽  
Konrad Bestian ◽  
Philipp Kraft

Abstract. The ambiguous representation of hydrological processes has led to the formulation of the multiple hypotheses approach in hydrological modeling, which requires new ways of model construction. However, most recent studies focus only on the comparison of predefined model structures or building a model step by step. This study tackles the problem the other way around: we start with one complex model structure, which includes all processes deemed to be important for the catchment. Next, we create 13 additional simplified models, where some of the processes from the starting structure are disabled. The performance of those models is evaluated using three objective functions (logarithmic Nash–Sutcliffe; percentage bias, PBIAS; and the ratio between the root mean square error and the standard deviation of the measured data). Through this incremental breakdown, we identify the most important processes and detect the restraining ones. This procedure allows constructing a more streamlined, subsequent 15th model with improved model performance, less uncertainty and higher model efficiency. We benchmark the original Model 1 and the final Model 15 with HBV Light. The final model is not able to outperform HBV Light, but we find that the incremental model breakdown leads to a structure with good model performance, fewer but more relevant processes and fewer model parameters.
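For reference, the sketch below implements common formulations of the three objective functions named above (logarithmic Nash–Sutcliffe efficiency, PBIAS and RSR); exact definitions and sign conventions vary between studies, so this is an assumption-laden illustration rather than the authors' code.

```python
# Common formulations of the three objective functions (illustrative).
import numpy as np

def log_nse(obs, sim, eps=1e-6):
    """Logarithmic Nash-Sutcliffe efficiency (eps guards against log(0))."""
    lo, ls = np.log(obs + eps), np.log(sim + eps)
    return 1.0 - np.sum((ls - lo) ** 2) / np.sum((lo - lo.mean()) ** 2)

def pbias(obs, sim):
    """Percentage bias; the sign convention differs between studies."""
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def rsr(obs, sim):
    """Ratio of RMSE to the standard deviation of the observations."""
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return rmse / np.std(obs)

obs = np.array([1.2, 0.9, 2.4, 3.1, 1.7])   # observed discharge (synthetic)
sim = np.array([1.0, 1.1, 2.2, 2.9, 1.9])   # simulated discharge (synthetic)
print(log_nse(obs, sim), pbias(obs, sim), rsr(obs, sim))
```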


2017 ◽  
Vol 12 (4) ◽  
Author(s):  
Yousheng Chen ◽  
Andreas Linderholt ◽  
Thomas J. S. Abrahamsson

Correlation and calibration using test data are natural ingredients in the process of validating computational models. Model calibration for the important subclass of nonlinear systems which consists of structures dominated by linear behavior with the presence of local nonlinear effects is studied in this work. The experimental validation of a nonlinear model calibration method is conducted using a replica of the École Centrale de Lyon (ECL) nonlinear benchmark test setup. The calibration method is based on the selection of uncertain model parameters and the data that form the calibration metric together with an efficient optimization routine. The parameterization is chosen so that the expected covariances of the parameter estimates are made small. To obtain informative data, the excitation force is designed to be multisinusoidal and the resulting steady-state multiharmonic frequency response data are measured. To shorten the optimization time, plausible starting seed candidates are selected using the Latin hypercube sampling method. The candidate parameter set giving the smallest deviation to the test data is used as a starting point for an iterative search for a calibration solution. The model calibration is conducted by minimizing the deviations between the measured steady-state multiharmonic frequency response data and the analytical counterparts that are calculated using the multiharmonic balance method. The resulting calibrated model's output corresponds well with the measured responses.
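The seeding strategy described above can be illustrated with the hedged sketch below: candidate parameter sets are drawn by Latin hypercube sampling, the candidate with the smallest deviation from the reference data is selected, and a local search is started from it. The toy response model and all values are placeholders, not the ECL benchmark or the multiharmonic balance method.

```python
# Latin hypercube seeding followed by a local search (illustrative sketch).
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

true_params = np.array([1.5, 0.3])
freq = np.linspace(0.1, 2.0, 50)

def model_response(p, f):
    return p[0] * np.sin(f) + p[1] * f ** 2      # hypothetical stand-in response model

measured = model_response(true_params, freq)     # synthetic "test data"

def deviation(p):
    return np.sum((model_response(p, freq) - measured) ** 2)

sampler = qmc.LatinHypercube(d=2, seed=1)
candidates = qmc.scale(sampler.random(n=64), l_bounds=[0.0, 0.0], u_bounds=[3.0, 1.0])
start = candidates[np.argmin([deviation(c) for c in candidates])]   # best seed candidate
result = minimize(deviation, start, method="Nelder-Mead")
print("calibrated parameters:", result.x)
```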


2017 ◽  
Vol 18 (8) ◽  
pp. 2215-2225 ◽  
Author(s):  
Andrew J. Newman ◽  
Naoki Mizukami ◽  
Martyn P. Clark ◽  
Andrew W. Wood ◽  
Bart Nijssen ◽  
...  

Abstract The concepts of model benchmarking, model agility, and large-sample hydrology are becoming more prevalent in hydrologic and land surface modeling. As modeling systems become more sophisticated, these concepts can help improve modeling capabilities and understanding. In this paper, their utility is demonstrated with an application of the physically based Variable Infiltration Capacity model (VIC). The authors implement VIC for a sample of 531 basins across the contiguous United States, incrementally increase model agility, and perform comparisons to a benchmark. The use of a large-sample set allows for statistically robust comparisons and subcategorization across hydroclimate conditions. The benchmark is a calibrated, time-stepping, conceptual hydrologic model. This model is constrained by physical relationships such as the water balance; it complements purely statistical benchmarks through increased physical realism and permits physically motivated benchmarking using metrics that relate one variable to another (e.g., runoff ratio). The authors find that increasing model agility along the parameter dimension, as measured by the number of model parameters available for calibration, does increase model performance for calibration and validation periods relative to less agile implementations. However, as agility increases, transferability decreases, even for a complex model such as VIC. The benchmark outperforms VIC in even the most agile case when evaluated across the entire basin set. However, VIC meets or exceeds benchmark performance in basins with high runoff ratios (greater than ~0.8), highlighting the ability of large-sample comparative hydrology to identify hydroclimatic performance variations.
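The runoff ratio used above for subcategorization is simply runoff divided by precipitation; a minimal illustration with synthetic basin values follows.

```python
# Subcategorizing basins by runoff ratio (synthetic values, for illustration).
import numpy as np

precip = np.array([900.0, 1500.0, 600.0, 2100.0])   # mean annual precipitation [mm]
runoff = np.array([300.0, 1300.0, 120.0, 1800.0])   # mean annual runoff [mm]
runoff_ratio = runoff / precip
high_rr_basins = np.where(runoff_ratio > 0.8)[0]    # basins where the ratio exceeds ~0.8
print("runoff ratios:", runoff_ratio, " high-ratio basins:", high_rr_basins)
```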


Author(s):  
Sead Ahmed Swalih ◽  
Ercan Kahya

Abstract It is a challenge for hydrological models to capture complex processes in a basin with limited data when estimating model parameters. This study aims to contribute to this field by assessing the impact of incorporating the spatial dimension on the improvement of model calibration. Hence, the main objective of this study was to evaluate the impact of multi-gauge calibration for the Ikizdere basin in the Black Sea Region of Turkey. In addition, we incorporated a climate change impact assessment for the study area. Four calibration scenarios were tested: (1) using downstream flow data (DC), (2) using upstream data (UC), (3) using upstream and downstream data simultaneously (multi-gauge calibration, MGC), and (4) using upstream and then downstream data (UCDC). The results show that using individual gauges for calibration (scenarios 1 and 2) improves the local predictive capacity of the model. Unlike scenarios 1 and 2, MGC significantly improved model performance for the whole basin. However, the statistical performance of the local gauge calibrations was better than that of MGC for local areas. UCDC yielded the best model performance and a much improved predictive capacity. Regarding climate change, we did not observe agreement amongst the future climate projections for the basin towards the end of the century.
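A hedged sketch of the multi-gauge idea: per-gauge Nash–Sutcliffe efficiencies are combined into a single calibration objective instead of using the downstream or upstream gauge alone. The data, equal weights and NSE formulation are illustrative assumptions, not the study's setup.

```python
# Combining per-gauge NSE scores into a multi-gauge objective (illustrative).
import numpy as np

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs_up = np.array([3.0, 2.5, 4.2, 5.1, 3.8])     # upstream gauge (synthetic)
sim_up = np.array([2.8, 2.7, 4.0, 5.4, 3.5])
obs_dn = np.array([7.9, 6.4, 9.8, 12.1, 8.7])    # downstream gauge (synthetic)
sim_dn = np.array([8.3, 6.0, 9.1, 12.8, 9.2])

nse_dn_only = nse(obs_dn, sim_dn)                 # scenario 1 (DC)
nse_up_only = nse(obs_up, sim_up)                 # scenario 2 (UC)
nse_multi = 0.5 * nse_up_only + 0.5 * nse_dn_only # scenario 3 (MGC), equal weights as a placeholder
print(nse_dn_only, nse_up_only, nse_multi)
```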


Water ◽  
2020 ◽  
Vol 12 (8) ◽  
pp. 2211
Author(s):  
Charlotte Love ◽  
Brian Skahill ◽  
John England ◽  
Gregory Karlovits ◽  
Angela Duren ◽  
...  

Extreme precipitation events are often localized and difficult to predict, and the available records are often sparse. Improving frequency analysis and describing the associated uncertainty are essential for regional hazard preparedness and infrastructure design. Our primary goal is to evaluate incorporating Bayesian model averaging (BMA) within a spatial Bayesian hierarchical model (BHM) framework. We compare results from two distinct regions in Oregon with different dominating rainfall generation mechanisms, and a region of overlap. We consider several Bayesian hierarchical models from relatively simple (location covariates only) to rather complex (location, elevation, and monthly mean climatic variables). We assess model predictive performance and selection through the application of leave-one-out cross-validation; however, other model assessment methods were also considered. We additionally conduct a comprehensive assessment of the posterior inclusion probability of covariates provided by the BMA portion of the model and the contribution of the spatial random effects term, which together characterize the pointwise spatial variation of each model's generalized extreme value (GEV) distribution parameters within a BHM framework. Results indicate that while using BMA may improve analysis of extremes, model selection remains an important component of tuning model performance. The most complex model, containing geographic and climatic information, was among the top-performing models in western Oregon (with a relatively wetter climate), while it performed among the worst in eastern Oregon (with a relatively drier climate). Based on our results from the region of overlap, site-specific predictive performance improves when the site and the model have a similar annual maxima climatology (winter storm dominated versus summer convective storm dominated). The results also indicate that regions with greater temperature variability may benefit from the inclusion of temperature information as a covariate. Overall, our results show that the BHM framework with BMA improves spatial analysis of extremes, especially when relevant (physical and/or climatic) covariates are used.
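The pointwise GEV building block mentioned above can be illustrated as follows; the study's spatial Bayesian hierarchical model with BMA goes well beyond this single-site maximum-likelihood fit, and the data here are synthetic annual maxima.

```python
# Single-site GEV fit and a 100-year return level (illustrative sketch).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
annual_maxima = genextreme.rvs(c=-0.1, loc=40.0, scale=8.0, size=60, random_state=rng)  # synthetic maxima

shape, loc, scale = genextreme.fit(annual_maxima)
# 100-year return level: quantile with annual exceedance probability 1/100
rl_100 = genextreme.ppf(1.0 - 1.0 / 100.0, shape, loc=loc, scale=scale)
print("GEV fit (shape, loc, scale):", shape, loc, scale, " 100-yr return level:", rl_100)
```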


1988 ◽  
Vol 15 (1) ◽  
pp. 30-35 ◽  
Author(s):  
G. D. Grosz ◽  
R. L. Elliott ◽  
J. H. Young

Abstract Growth simulation models provide potential benefit in the study of peanut (Arachis hypogaea L.) production. Two physiologically-based peanut simulation models of varying complexity were adapted and calibrated to simulate the growth and yield of Spanish peanut under Oklahoma conditions. Field data, including soil moisture measurements and sequential yield samples, were collected at four sites during the 1985 growing season. An automated weather station provided the necessary climatic data for the models. PNUTMOD, the simpler model originally developed for educational purposes, requires seven varietal input parameters in addition to temperature and solar radiation data. The seven model parameters were calibrated using data from two of the four field sites, and model performance was evaluated using the remaining two data sets. The more complex model, PEANUT, simulates individual plant physiological processes and utilizes a considerably larger set of input parameters. Since PEANUT was developed for the Virginia type peanut, several input parameters required adjustment for the Spanish type peanut grown in Oklahoma. PEANUT was calibrated using data from all four study sites. Both models performed well in simulating pod yield. PNUTMOD, which does not allow for leaf senescence, did not perform as well as PEANUT in predicting vegetative growth.


2021 ◽  
Author(s):  
David C. Finger ◽  
Anna E. Sikorska-Senoner

Environmental models, such as hydrological or water quality models, incorporate numerical algorithms that describe, either empirically or physically, the large variety of natural processes that govern the flow of water (or other variables) and its components. The purposes of these models range from improving our understanding of hydrological processes at the catchment scale to making predictions about how anthropogenic activities will influence future water resources. To be applicable, these models require calibration with observed output data, which for hydrological models is most often streamflow. Yet the complex nature of hydrological processes on the one hand, and the limited observed data available to inform model parameters on the other, evoke the unavoidable equifinality issue in the calibration of these models. This equifinality issue manifests as several optimal model parameter sets that have different values but lead to similar model performance. One way of dealing with this issue is to provide a parameter ensemble of optimal solutions instead of a single parameter set, often reported as parametric model uncertainty.

However, this equifinality issue is far from being solved, as also highlighted by one of the 23 Unsolved Problems in Hydrology (UPH). This is particularly the case if more variables than only streamflow are of interest. Our hypothesis is that using more than one dataset for calibrating an environmental model helps reduce the equifinality issue during model calibration and thus improves the identifiability of model parameters. In this review-based study, we present recent examples from the literature of hydrological (and water quality) models that have been calibrated within a multiple-dataset framework to reduce the equifinality issue. We demonstrate that multi-dataset calibration yields better model performance regardless of the complexity of the model. Finally, we show that coupling multi-dataset model calibration with metaheuristics (such as Monte Carlo or genetic algorithms) can help reduce the equifinality of model parameters and improve the Pareto frontier. We conclude by outlining how such a multi-dataset calibration can lead to better model predictions and how it can help address emerging water resources problems arising from the climate crisis.

This work contributes to one of the seven major themes of the 23 UPH, i.e., modelling methods. It paves a way forward towards reducing parameter uncertainty in hydrological predictions (UPH question #20) and thus towards improving the modelling of hydrologic responses in the extrapolation phase, i.e., under changed catchment conditions (UPH question #19).
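A minimal sketch of the equifinality argument, using a hypothetical two-parameter toy model: Monte Carlo sampling is first constrained by a single dataset and then by two, which typically narrows the set of acceptable parameter combinations. The model forms, acceptance thresholds and data are illustrative assumptions, not the review's examples.

```python
# Monte Carlo illustration of equifinality and multi-dataset constraint.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 100)
true_a, true_b = 2.0, 0.5

obs_flow = true_a * np.exp(-true_b * t) + rng.normal(0, 0.05, t.size)                 # dataset 1 (synthetic)
obs_store = (true_a / true_b) * (1 - np.exp(-true_b * t)) + rng.normal(0, 0.2, t.size) # dataset 2 (synthetic)

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

samples = rng.uniform([0.5, 0.1], [4.0, 1.5], size=(5000, 2))   # Monte Carlo draws of (a, b)
err_flow = np.array([rmse(obs_flow, a * np.exp(-b * t)) for a, b in samples])
err_store = np.array([rmse(obs_store, (a / b) * (1 - np.exp(-b * t))) for a, b in samples])

behavioral_one = samples[err_flow < 0.1]                          # constrained by dataset 1 only
behavioral_two = samples[(err_flow < 0.1) & (err_store < 0.4)]    # constrained by both datasets
print("acceptable parameter sets:", len(behavioral_one), "->", len(behavioral_two))
```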

