The Multiensemble Approach: The NAEFS Example

2009 ◽  
Vol 137 (5) ◽  
pp. 1655-1665 ◽  
Author(s):  
Guillem Candille

Abstract The North American Ensemble Forecasting System (NAEFS) is the combination of two Ensemble Prediction Systems (EPS) coming from two operational centers: the Canadian Meteorological Centre (CMC) and the National Centers for Environmental Prediction (NCEP). This system provides forecasts of up to 16 days and should improve the predictive skill of the probabilistic system, especially for the second week. First, a comparison between the two components of the NAEFS is performed for several atmospheric variables with “objective” verification tools developed at CMC [i.e., the continuous ranked probability score (CRPS) and its reliability-resolution decomposition, the reduced centered random variable, and confidence intervals estimated with bootstrap methods]. The CMC system is more reliable, especially because of a better ensemble dispersion, while the NCEP system has better probabilistic resolution. The NAEFS, compared to the CMC and NCEP EPSs, shows significant improvements in terms of both reliability and resolution. The predictability has been improved by 1–2 forecast days in the second week. That improvement is due not only to the increased ensemble size in the EPS—from 20 members to 40 in the present case—but also to the combination of different models and initial condition perturbations. By randomly mixing members from the CMC and NCEP systems in a 20-member EPS, an intrinsic skill improvement of the system is observed.
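As an illustration of the verification metric used throughout these studies, the sketch below evaluates the sample-based (kernel) form of the CRPS for a single 20-member multiensemble built by pooling two 10-member sets. All member values are synthetic and the CMC/NCEP labels are placeholders, not operational data.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS for one ensemble forecast and one observation.

    Uses the kernel form CRPS = E|X - y| - 0.5 * E|X - X'|,
    where X and X' are independent draws from the ensemble.
    """
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# Hypothetical 20-member multiensemble: 10 "CMC-like" plus 10 "NCEP-like" members,
# mimicking the mixed EPS described in the abstract (values are placeholders).
rng = np.random.default_rng(0)
cmc = rng.normal(0.0, 1.2, 10)
ncep = rng.normal(0.3, 0.8, 10)
mixed = np.concatenate([cmc, ncep])
print(crps_ensemble(mixed, obs=0.1))
```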

2010 ◽  
Vol 138 (11) ◽  
pp. 4268-4281 ◽  
Author(s):  
Guillem Candille ◽  
Stéphane Beauregard ◽  
Normand Gagnon

Abstract Previous studies have shown that the raw combination (i.e., the combination of the direct model output without any postprocessing procedure) of the National Centers for Environmental Prediction (NCEP) and Meteorological Service of Canada (MSC) ensemble prediction systems (EPS) improves the probabilistic forecast in terms of both reliability and resolution. This combination palliates both the lack of reliability of the NCEP EPS, caused by the overly small dispersion of its predicted ensemble, and the lack of probabilistic resolution of the MSC EPS. Such a multiensemble, called the North American Ensemble Forecast System (NAEFS), especially shows bias reductions and dispersion improvements that could only come from the combination of different forecast errors. It is then legitimate to wonder whether these improvements in terms of biases and dispersions, and by extension the skill improvements, are only due to the balancing between opposite model errors. In the NAEFS framework, bias corrections “on the fly,” where the bias is updated over time, are applied to the operational EPSs. Each model of the EPS components (NCEP/MSC) is individually bias corrected against its own analysis with the same process. The bias correction improves the reliability of each EPS component. It also slightly improves the accuracy of the predicted ensembles and thus the probabilistic resolution of the forecasts. Once the EPSs are combined, the improvements due to the bias correction are not so obvious, tending to show that the success of the multiensemble method does not come only from the cancellation of different biases. This study also shows that the combination of the raw EPS components (NAEFS) is generally better than either the bias-corrected NCEP or MSC ensemble.
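The "on the fly" bias correction described here amounts to a decaying-average estimate of the ensemble-mean error against each centre's own analysis, which is then removed from every member. A minimal sketch with synthetic data follows; the 2% update weight and the imposed systematic error of 1.5 are illustrative choices, not the NAEFS settings.

```python
import numpy as np

def update_bias(prev_bias, ens_mean, analysis, weight=0.02):
    """Decaying-average bias: blend the latest ensemble-mean error into the
    running estimate with a small weight (2% here, purely illustrative)."""
    return (1.0 - weight) * prev_bias + weight * (ens_mean - analysis)

rng = np.random.default_rng(1)
bias = 0.0
true_bias = 1.5  # synthetic systematic model error
for cycle in range(200):
    analysis = rng.normal(0.0, 1.0)
    members = analysis + true_bias + rng.normal(0.0, 0.7, size=20)
    bias = update_bias(bias, members.mean(), analysis)
    corrected = members - bias  # bias-corrected ensemble for this forecast cycle
print(round(bias, 2))  # drifts toward the imposed systematic error of 1.5
```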


2019 ◽  
Vol 147 (6) ◽  
pp. 1967-1987 ◽  
Author(s):  
Minghua Zheng ◽  
Edmund K. M. Chang ◽  
Brian A. Colle

Abstract Empirical orthogonal function (EOF) and fuzzy clustering tools were applied to generate and validate scenarios in operational ensemble prediction systems (EPSs) for U.S. East Coast winter storms. The National Centers for Environmental Prediction (NCEP), European Centre for Medium-Range Weather Forecasts (ECMWF), and Canadian Meteorological Centre (CMC) EPSs were validated in their ability to capture the analysis scenarios for historical East Coast cyclone cases at lead times of 1–9 days. The ECMWF ensemble has the best performance for the medium- to extended-range forecasts. During this time frame, NCEP and CMC did not perform as well, but a combination of the two models helps reduce the missing rate and alleviates the underdispersion. All ensembles are underdispersed at all ranges, with combined ensembles being less underdispersed than the individual EPSs. The number of outside-of-envelope cases increases with lead time. For a majority of the cases beyond the short range, the verifying analysis does not lie within the ensemble mean group of the multimodel ensemble or within the same direction indicated by any of the individual model means, suggesting that all possible scenarios need to be taken into account. Using the EOF patterns to validate the cyclone properties, the NCEP model tends to show smaller intensity and displacement biases at 1–3-day lead times, while the ECMWF model has the smallest biases during 4–6 days. Nevertheless, the ECMWF forecast position tends to be biased toward the southwest of the other two models and the analysis.
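A minimal sketch of the EOF step behind the scenario generation: compute EOFs of the ensemble perturbations about the multimodel ensemble mean and place each member in the space spanned by the two leading EOFs, where a clustering step (fuzzy c-means in the paper) would then group members into scenarios. The member count, grid, and fields below are synthetic.

```python
import numpy as np

# Hypothetical multimodel ensemble of pressure fields: 60 members on a small grid.
rng = np.random.default_rng(2)
n_members, ny, nx = 60, 20, 30
fields = rng.normal(size=(n_members, ny, nx))

# EOF analysis of the ensemble perturbations about the ensemble mean.
anoms = (fields - fields.mean(axis=0)).reshape(n_members, ny * nx)
u, s, vt = np.linalg.svd(anoms, full_matrices=False)
pcs = u[:, :2] * s[:2]            # each member's coordinates in EOF1/EOF2 space
explained = s**2 / np.sum(s**2)   # fraction of ensemble variance carried by each EOF

# Members close together in (PC1, PC2) form a scenario; a clustering method such as
# fuzzy c-means would group them at this point.
print(explained[:2], pcs.shape)
```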


2011 ◽  
Vol 139 (9) ◽  
pp. 3052-3068 ◽  
Author(s):  
Dominik Renggli ◽  
Gregor C. Leckebusch ◽  
Uwe Ulbrich ◽  
Stephanie N. Gleixner ◽  
Eberhard Faust

Abstract The science of seasonal predictions has advanced considerably in the last decade. Today, operational predictions are generated by several institutions, especially for variables such as (sea) surface temperatures and precipitation. In contrast, few studies have been conducted on the seasonal predictability of extreme meteorological events such as European windstorms in winter. In this study, the predictive skill of extratropical wintertime windstorms in the North Atlantic/European region is explored in sets of seasonal hindcast ensembles from the Development of a European Multimodel Ensemble System for Seasonal-to-Interannual Prediction (DEMETER) and the ENSEMBLE-based predictions of climate changes and their impacts (ENSEMBLES) projects. The observed temporal and spatial climatological distributions of these windstorms are reasonably well reproduced in the hindcast data. Using hindcasts starting on 1 November, significant predictive skill is found for the December–February windstorm frequency in the period 1980–2001, but also for the January–April storm frequency. Specifically, the model suite run at Météo France shows consistently high skill. Some aspects of the variability of skill are discussed. Predictive skill in the 1980–2001 period is usually higher than for the 1960–2001 period. Furthermore, the level of skill turns out to be related to the storm frequency of a given winter. Generally, winters with high storm frequency are better predicted than winters with medium storm frequency. Physical mechanisms potentially leading to such a variability of skill are discussed.
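Seasonal predictive skill of the kind reported here is commonly summarized by correlating the ensemble-mean hindcast storm frequency with the observed frequency over the hindcast winters. The sketch below does this with synthetic counts for 22 winters; the member count and noise levels are arbitrary assumptions.

```python
import numpy as np

# Correlation skill of hindcast winter storm frequency: synthetic storm counts
# for 22 winters and a 9-member hindcast ensemble (all numbers are invented).
rng = np.random.default_rng(3)
n_years, n_members = 22, 9
obs_counts = rng.poisson(12, size=n_years)                         # observed DJF storm counts
hindcasts = obs_counts[None, :] + rng.normal(0, 3, (n_members, n_years))
ens_mean = hindcasts.mean(axis=0)                                  # ensemble-mean hindcast

r = np.corrcoef(ens_mean, obs_counts)[0, 1]
print(round(r, 2))  # a positive correlation over the hindcast period indicates skill
```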


2018 ◽  
Vol 18 (8) ◽  
pp. 2183-2202 ◽  
Author(s):  
Ekrem Canli ◽  
Martin Mergili ◽  
Benni Thiebes ◽  
Thomas Glade

Abstract. Landslide forecasting and early warning have a long tradition in landslide research and are primarily carried out with empirical and statistical approaches, e.g., landslide-triggering rainfall thresholds. In the last decade, flood forecasting began to run so-called ensemble prediction systems operationally, following the success of ensembles in weather forecasting. These probabilistic approaches acknowledge the presence of unavoidable variability and uncertainty when larger areas are considered and explicitly introduce them into the model results. Now that highly detailed numerical weather predictions and high-performance computing are becoming more common, physically based landslide forecasting for larger areas is becoming feasible, and the landslide research community could benefit from the experiences that have been reported from flood forecasting using ensemble predictions. This paper reviews and summarizes concepts of ensemble prediction in hydrology and discusses how these could facilitate improved landslide forecasting. In addition, a prototype landslide forecasting system utilizing the physically based TRIGRS (Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability) model is presented to highlight how such forecasting systems could be implemented. The paper concludes with a discussion of challenges related to parameter variability and uncertainty, calibration and validation, and computational concerns.
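To illustrate how parameter variability enters an ensemble of physically based slope-stability runs, the sketch below samples uncertain soil parameters and evaluates a much simpler infinite-slope factor-of-safety model than TRIGRS; it is a conceptual stand-in, not the prototype system described in the paper, and all parameter values are invented.

```python
import numpy as np

# Toy parameter ensemble for slope stability: sample uncertain soil properties,
# evaluate an infinite-slope factor of safety (FS) for each member, and report
# the fraction of members predicting failure (FS < 1).
rng = np.random.default_rng(4)
n_members = 500
slope = np.radians(30.0)                               # slope angle
depth = 2.0                                            # depth of the failure plane [m]
gamma_w, gamma_s = 9.81, 19.0                          # unit weights of water and soil [kN/m^3]
cohesion = rng.normal(8.0, 2.0, n_members)             # effective cohesion [kPa], sampled
phi = np.radians(rng.normal(32.0, 3.0, n_members))     # friction angle, sampled
m = rng.uniform(0.3, 1.0, n_members)                   # saturated fraction of the soil column, sampled

numerator = cohesion + (gamma_s - m * gamma_w) * depth * np.cos(slope) ** 2 * np.tan(phi)
denominator = gamma_s * depth * np.sin(slope) * np.cos(slope)
fs = numerator / denominator
print("fraction of members with FS < 1:", np.mean(fs < 1.0))
```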


2016 ◽  
Vol 29 (3) ◽  
pp. 995-1012 ◽  
Author(s):  
Stefan Siegert ◽  
David B. Stephenson ◽  
Philip G. Sansom ◽  
Adam A. Scaife ◽  
Rosie Eade ◽  
...  

Abstract Predictability estimates of ensemble prediction systems are uncertain because of limited numbers of past forecasts and observations. To account for such uncertainty, this paper proposes a Bayesian inferential framework that provides a simple 6-parameter representation of ensemble forecasting systems and the corresponding observations. The framework is probabilistic and thus allows for quantifying uncertainty in predictability measures, such as correlation skill and signal-to-noise ratios. It also provides a natural way to produce recalibrated probabilistic predictions from uncalibrated ensemble forecasts. The framework is used to address important questions concerning the skill of winter hindcasts of the North Atlantic Oscillation for 1992–2011 issued by the Met Office Global Seasonal Forecast System, version 5 (GloSea5), climate prediction system. Although there is much uncertainty in the correlation between ensemble mean and observations, there is strong evidence of skill: the 95% credible interval of the correlation coefficient of [0.19, 0.68] does not overlap zero. There is also strong evidence that the forecasts are not exchangeable with the observations: with over 99% certainty, the signal-to-noise ratio of the forecasts is smaller than the signal-to-noise ratio of the observations, which suggests that raw forecasts should not be taken as representative scenarios of the observations. Forecast recalibration is thus required, which can be coherently addressed within the proposed framework.
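The sketch below is not the paper's 6-parameter Bayesian model; it is a rough bootstrap stand-in that makes the same point about sampling uncertainty: with only about 20 winters, the correlation between ensemble mean and observations carries a wide interval. All numbers are synthetic.

```python
import numpy as np

# Bootstrap interval for the ensemble-mean/observation correlation over 20 winters.
# Synthetic data: a weak common signal plus independent noise in members and observations.
rng = np.random.default_rng(5)
n_years, n_members = 20, 24
signal = rng.normal(0, 1, n_years)
obs = signal + rng.normal(0, 1, n_years)
ens = 0.3 * signal[None, :] + rng.normal(0, 1, (n_members, n_years))
ens_mean = ens.mean(axis=0)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n_years, n_years)            # resample the 20 winters with replacement
    boot.append(np.corrcoef(ens_mean[idx], obs[idx])[0, 1])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(float(np.corrcoef(ens_mean, obs)[0, 1]), 2), (round(lo, 2), round(hi, 2)))
```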


2007 ◽  
Vol 135 (7) ◽  
pp. 2688-2699 ◽  
Author(s):  
G. Candille ◽  
C. Côté ◽  
P. L. Houtekamer ◽  
G. Pellerin

Abstract A verification system has been developed for the ensemble prediction system (EPS) at the Canadian Meteorological Centre (CMC). This provides objective criteria for comparing two EPSs, necessary when deciding whether or not to implement a new or revised EPS. The proposed verification methodology is based on the continuous ranked probability score (CRPS), which provides an evaluation of the global skill of an EPS. Its reliability/resolution partition, proposed by Hersbach, is used to measure the two main attributes of a probabilistic system. Also, the characteristics of the reliability are obtained from the first two moments of the reduced centered random variable (RCRV), which define the bias and the dispersion of an EPS. Resampling bootstrap techniques have been applied to these scores. Confidence intervals are thus defined, expressing the uncertainty due to the finiteness of the number of realizations used to compute the scores. All verifications are performed against observations to provide more independent validations and to avoid any local systematic bias of an analysis. A revised EPS, which has been tested at the CMC in a parallel run during the autumn of 2005, is described in this paper. This EPS has been compared with the previously operational one with the verification system presented above. To illustrate the verification methodology, results are shown for the temperature at 850 hPa. The confidence intervals are computed by taking into account the spatial correlation of the data and the temporal autocorrelation of the forecast error. The revised EPS performs significantly better for all the forecast ranges, except for the resolution component of the CRPS, where the improvement is no longer significant from day 7. The significant improvement of the reliability is mainly due to a better dispersion of the ensemble. Finally, the verification system correctly indicates that variations are not significant when two theoretically similar EPSs are compared.
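A minimal sketch of the RCRV diagnostics: its mean estimates the bias and its standard deviation the dispersion of the EPS (1 indicates correct dispersion, values above 1 underdispersion, values below 1 overdispersion), with a bootstrap over cases providing a confidence interval. The synthetic ensemble below is deliberately overdispersive, and the sketch ignores the spatial and temporal correlations accounted for in the paper.

```python
import numpy as np

def rcrv(members, obs, obs_err=0.0):
    """Reduced centered random variable for one case:
    (obs - ensemble mean) / sqrt(ensemble variance + observation error variance)."""
    return (obs - members.mean()) / np.sqrt(members.var(ddof=1) + obs_err ** 2)

# Synthetic set of 1000 cases with a deliberately overdispersive 20-member ensemble.
rng = np.random.default_rng(6)
truth = rng.normal(0, 1, 1000)
obs = truth + rng.normal(0, 0.2, 1000)                    # observations with known error 0.2
ens = truth[:, None] + rng.normal(0, 1.5, (1000, 20))     # spread larger than the actual error

y = np.array([rcrv(e, o, obs_err=0.2) for e, o in zip(ens, obs)])
bias, dispersion = y.mean(), y.std(ddof=1)                # targets: 0 (no bias) and 1 (well dispersed)

# Bootstrap confidence interval on the dispersion, resampling cases with replacement.
boot = [np.std(y[rng.integers(0, y.size, y.size)], ddof=1) for _ in range(1000)]
print(round(bias, 2), round(dispersion, 2), np.percentile(boot, [2.5, 97.5]).round(2))
```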


2007 ◽  
Vol 135 (7) ◽  
pp. 2545-2567 ◽  
Author(s):  
Lizzie S. R. Froude ◽  
Lennart Bengtsson ◽  
Kevin I. Hodges

Abstract The prediction of extratropical cyclones by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP) ensemble prediction systems (EPSs) has been investigated using an objective feature tracking methodology to identify and track the cyclones along the forecast trajectories. Overall the results show that the ECMWF EPS has a slightly higher level of skill than the NCEP EPS in the Northern Hemisphere (NH). However, in the Southern Hemisphere (SH), NCEP has higher predictive skill than ECMWF for the intensity of the cyclones. The results from both EPSs indicate a higher level of predictive skill for the position of extratropical cyclones than their intensity and show that there is a larger spread in intensity than position. Further analysis shows that the ECMWF EPS generally propagates the cyclones too slowly and has a slight tendency to overpredict their intensity. This is also true for the NCEP EPS in the SH. For the NCEP EPS in the NH the intensity of the cyclones is underpredicted. There is a small bias in both EPSs for the cyclones to be displaced toward the poles. For each ensemble forecast of each cyclone, the predictive skill of the ensemble member that best predicts the cyclone’s position and intensity was computed. The results are very encouraging, showing that the predictive skill of the best ensemble member is significantly higher than that of the control forecast in terms of both the position and intensity of the cyclones. The prediction of cyclones before they are identified as 850-hPa vorticity centers in the analysis cycle was also considered. It is shown that an indication of extratropical cyclones can be given by at least one ensemble member 7 days before they are identified in the analysis. Further analysis of the ECMWF EPS shows that the ensemble mean has a higher level of skill than the control forecast, particularly for the intensity of the cyclones, from day 3 of the forecast. There is a higher level of skill in the NH than the SH, and the spread in the SH is correspondingly larger. The difference between the ensemble mean error and spread is very small for the position of the cyclones, but the spread of the ensemble is smaller than the ensemble mean error for the intensity of the cyclones in both hemispheres. Results also show that the ECMWF control forecast has ½ to 1 day more skill than the perturbed members, for both the position and intensity of the cyclones, throughout the forecast.
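For the position verification discussed above, the basic quantities are the great-circle distance between the ensemble-mean cyclone centre and the analysed centre (the ensemble-mean error) and the mean distance of the member centres from the ensemble mean (the spread). A sketch with hypothetical centre positions:

```python
import numpy as np

def gc_dist(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance in km (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * np.arcsin(np.sqrt(a))

# Hypothetical cyclone-centre positions at one verification time:
# 50 ensemble members plus the analysed position (all values invented).
rng = np.random.default_rng(7)
mem_lat = 45.0 + rng.normal(0, 1.5, 50)
mem_lon = -60.0 + rng.normal(0, 2.0, 50)
ana_lat, ana_lon = 46.0, -58.0

mean_lat, mean_lon = mem_lat.mean(), mem_lon.mean()
error = gc_dist(mean_lat, mean_lon, ana_lat, ana_lon)          # ensemble-mean position error
spread = gc_dist(mem_lat, mem_lon, mean_lat, mean_lon).mean()  # mean distance to the ensemble mean
print(round(error, 0), round(spread, 0))  # a well-tuned EPS has spread comparable to error
```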


2005 ◽  
Vol 133 (5) ◽  
pp. 1076-1097 ◽  
Author(s):  
Roberto Buizza ◽  
P. L. Houtekamer ◽  
Gerald Pellerin ◽  
Zoltan Toth ◽  
Yuejian Zhu ◽  
...  

Abstract The present paper summarizes the methodologies used at the European Centre for Medium-Range Weather Forecasts (ECMWF), the Meteorological Service of Canada (MSC), and the National Centers for Environmental Prediction (NCEP) to simulate the effect of initial and model uncertainties in ensemble forecasting. The characteristics of the three systems are compared for a 3-month period between May and July 2002. The main conclusions of the study are the following: (i) the performance of ensemble prediction systems strongly depends on the quality of the data assimilation system used to create the unperturbed (best) initial condition and the numerical model used to generate the forecasts; (ii) a successful ensemble prediction system should simulate the effect of both initial and model-related uncertainties on forecast errors; and (iii) for all three global systems, the spread of ensemble forecasts is insufficient to systematically capture reality, suggesting that none of them is able to simulate all sources of forecast uncertainty. The relative strengths and weaknesses of the three systems identified in this study can offer guidelines for the future development of ensemble forecasting techniques.
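The conclusion that ensemble spread is insufficient to systematically capture reality is typically checked with an outlier statistic: for a reliable N-member ensemble, the verifying value should fall outside the ensemble envelope with frequency 2/(N+1). A synthetic, deliberately underdispersive example:

```python
import numpy as np

# Outlier check: how often does the verifying value fall outside the ensemble envelope?
# Synthetic example where the ensemble spread (0.7) is smaller than the forecast error (1.0).
rng = np.random.default_rng(8)
n_cases, n_members = 5000, 50
obs = rng.normal(0.0, 1.0, n_cases)                    # verifying anomaly about the ensemble centre
ens = rng.normal(0.0, 0.7, (n_cases, n_members))       # ensemble spread deliberately too small

outside = np.mean((obs < ens.min(axis=1)) | (obs > ens.max(axis=1)))
print(round(outside, 3), "expected for a reliable ensemble:", round(2 / (n_members + 1), 3))
```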

