A novel approach to surface reanalysis

2021 ◽  
Author(s):  
Sabrina Wahl ◽  
Clarissa Figura ◽  
Jan D. Keller

<p>Reanalysis is a procedure that merges numerical model integrations and observations to obtain a synergetic representation of the past climatological state of a system, e.g., of the atmosphere. An alternative to running a full reanalysis scheme is a so-called surface reanalysis, in which an existing reanalysis is used as prior information for the near-surface state. This first guess is then corrected in a data assimilation step, preferably by applying observations not used in the original assimilation. In such a scheme, an additional downscaling is often performed to enhance the spatial representation of the surface reanalysis.</p><p>We present here the development of a new approach aiming to establish such a data set based on the COSMO-REA6 regional reanalysis of the Hans-Ertel-Centre and Deutscher Wetterdienst (DWD). The data assimilation step is based on the operational Local Ensemble Transform Kalman Filter (LETKF) of DWD. While the data assimilation in such surface reanalysis schemes is often performed univariately, here we apply it to various parameters at once, thus conserving the covariances among the parameters and allowing for a consistent multivariate utilization of the data. Further, this reanalysis will not be restricted to the ground level and near-surface parameters; instead, it will be extended to the lower part of the boundary layer, aiming at an improved representation of wind speeds at wind turbine hub heights, which is especially relevant for renewable energy applications. The envisaged resolution of approximately 1 km enables an enhanced representation of spatial variability and heterogeneity on small scales. In addition, the LETKF is an ensemble-based data assimilation scheme and therefore provides uncertainty estimates through an ensemble of the re-analyzed parameters, which can also serve as input for downstream applications.</p>
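The key point of the multivariate assimilation described above is that observing one parameter also corrects covarying, unobserved parameters. The sketch below illustrates this with a minimal stochastic ensemble Kalman update on synthetic data; the operational scheme is an LETKF, which differs in its transform formulation, and all numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """Stochastic ensemble Kalman update.

    X : (n_state, n_ens) first-guess ensemble
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    Returns the analysis ensemble with the same shape as X.
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pf = A @ A.T / (n_ens - 1)                       # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # perturbed observations, one realisation per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)

# Two correlated near-surface parameters; only the first is observed.
X = rng.multivariate_normal([2.0, 5.0], [[1.0, 0.8], [0.8, 1.0]], 50).T
y = np.array([3.0])
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
Xa = enkf_update(X, y, H, R)
```

Because the forecast covariance couples the two parameters, the analysis shifts the unobserved second parameter as well, which is exactly the benefit of a multivariate over a univariate update.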

Author(s):  
James B. Elsner ◽  
Thomas H. Jagger

Hurricane data originate from careful analysis of past storms by operational meteorologists. The data include estimates of the hurricane position and intensity at 6-hourly intervals. Information related to landfall time, local wind speeds, damages, and deaths, as well as cyclone size, is included. The data are archived by season. Some effort is needed to make the data useful for hurricane climate studies. In this chapter, we describe the data sets used throughout this book. We show you a workflow that includes importing, interpolating, smoothing, and adding attributes. We also show you how to create subsets of the data. The code in this chapter is more involved and can take longer to run. You can skip this material on first reading and continue with model building in Chapter 7. You can return here when you have an updated version of the data that includes the most recent years. Most statistical models in this book use the best-track data. Here we describe these data and provide original source material. We also explain how to smooth and interpolate them. Interpolations are needed for regional hurricane analyses. The best-track data set contains the 6-hourly center locations and intensities of all known tropical cyclones across the North Atlantic basin, including the Gulf of Mexico and Caribbean Sea. The data set is called HURDAT for HURricane DATa. It is maintained by the U.S. National Oceanic and Atmospheric Administration (NOAA) at the National Hurricane Center (NHC). Center locations are given in geographic coordinates (in tenths of degrees); the intensities, representing the one-minute near-surface (∼10 m) wind speeds, are given in knots (1 kt = 0.5144 m s−1); and the minimum central pressures are given in millibars (1 mb = 1 hPa). The data are provided in 6-hourly intervals starting at 00 UTC (Coordinated Universal Time). The version of the HURDAT file used here contains cyclones over the period 1851 through 2010 inclusive.
Information on the history and origin of these data is found in Jarvinen et al. (1984). The file has a logical structure that makes it easy to read with a FORTRAN program. Each cyclone contains a header record, a series of data records, and a trailer record.
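The interpolation and unit-conversion steps described above can be sketched in a few lines. The track values below are hypothetical, not from HURDAT, and the linear interpolation is a stand-in for the smoothing methods the chapter develops:

```python
import numpy as np

# Hypothetical 6-hourly best-track records for one storm:
# hours since first fix, latitude (deg), longitude (deg), wind speed (kt)
hours = np.array([0, 6, 12, 18, 24])
lat   = np.array([25.0, 25.8, 26.9, 28.2, 29.8])
lon   = np.array([-75.0, -76.1, -77.0, -77.6, -78.0])
wind  = np.array([65.0, 75.0, 90.0, 100.0, 95.0])

KT_TO_MS = 0.5144  # 1 kt = 0.5144 m/s, as used for HURDAT intensities

# Interpolate the 6-hourly fixes to hourly resolution (linear here;
# spline methods are often preferred for smoother tracks)
h_hourly = np.arange(0, 25)
lat_h  = np.interp(h_hourly, hours, lat)
lon_h  = np.interp(h_hourly, hours, lon)
wind_h = np.interp(h_hourly, hours, wind) * KT_TO_MS  # convert to m/s
```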


2015 ◽  
Vol 12 (1) ◽  
pp. 187-198 ◽  
Author(s):  
A. K. Kaiser-Weiss ◽  
F. Kaspar ◽  
V. Heene ◽  
M. Borsche ◽  
D. G. H. Tan ◽  
...  

Abstract. Near-surface wind fields from multiple reanalyses are potentially an important information source for wind energy applications. Inter-comparing reanalyses by employing independent observations can help guide users to useful spatio-temporal scales. Here we compare the statistical properties of wind speeds observed at 210 traditional meteorological stations over Germany with the reanalyses' near-surface fields, confining the analysis to recent years (2007 to 2010). In this period, the station time series in Germany can be expected to be mostly homogeneous. We compare with a regional reanalysis (COSMO-REA6) and two global reanalyses, ERA-Interim and ERA-20C. We show that for the majority of the stations, the Weibull parameters of the daily mean wind speed frequency distribution match remarkably well with those derived from the reanalysis fields. High correlations (larger than 0.9) are found between station and reanalysis monthly mean wind speeds all over Germany. Generally, the correlation between the higher-resolution COSMO-REA6 wind fields and station observations is highest, for both assimilated and non-assimilated (i.e., independent) observations. As expected from the lower spatial resolution and the reduced amount of data assimilated into ERA-20C, its correlation of monthly means decreases somewhat relative to the other reanalyses (in our investigated period of 2007 to 2010). Still, the inter-annual variability connected to the North Atlantic Oscillation (NAO) found in the reanalysis surface wind anomalies is in accordance with the anomalies recorded by the stations. We discuss some typical examples where differences are found, e.g., where the mean wind distributions differ (probably related to either height or model topography differences) and where the correlations break down (because of unresolved local topography), which applies to a minority of stations.
We also identified stations with homogeneity problems in the reported station values, demonstrating how reanalyses can be applied to support quality control for the observed station data. Finally, as a demonstration of concept, we discuss how comparing feedback files of the different reanalyses can guide users to useful scales of variability.
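The two headline comparisons of the abstract, Weibull parameters of daily mean wind speeds and correlation of monthly means, can be reproduced on synthetic data as follows. The series below are randomly generated stand-ins for a station/reanalysis pair, and 30-day blocks stand in for calendar months:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic daily-mean wind speeds (m/s) for one location; real use would
# read co-located station and reanalysis series for the same period.
shape_true, scale_true = 2.0, 7.0
station = scale_true * rng.weibull(shape_true, 4 * 365)
reana   = station * rng.lognormal(0.0, 0.15, 4 * 365)  # correlated "reanalysis"

# Fit two-parameter Weibull distributions (location fixed at zero)
k_st, _, c_st = stats.weibull_min.fit(station, floc=0)
k_re, _, c_re = stats.weibull_min.fit(reana, floc=0)

# Correlation of monthly means (48 blocks of 30 days as pseudo-months)
monthly_st = station[:48 * 30].reshape(48, 30).mean(axis=1)
monthly_re = reana[:48 * 30].reshape(48, 30).mean(axis=1)
r = np.corrcoef(monthly_st, monthly_re)[0, 1]
```

Comparing `(k_st, c_st)` with `(k_re, c_re)`, and inspecting `r`, mirrors the station-versus-reanalysis checks described above.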


2021 ◽  
Author(s):  
Hannah Livingston ◽  
Nicola Bodini ◽  
Julie K. Lundquist

Abstract. Hub-height turbulence is essential for a variety of wind energy applications, ranging from wind plant siting to wind turbine control strategies. Because deploying hub-height meteorological towers can be a challenge, alternative ways to estimate hub-height turbulence are desired. In this paper, we assess to what degree hub-height turbulence can be estimated from other hub-height variables or from ground-level atmospheric measurements in complex terrain, using observations from three meteorological towers at the Perdigão and WFIP2 field campaigns. We find a large variability across the three considered towers when trying to model hub-height turbulence intensity (TI) and turbulence kinetic energy (TKE) from hub-height or near-surface measurements of either wind speed, TI, or TKE. Moreover, we find that, depending on the characteristics of the specific site, atmospheric stability and upwind fetch either drive significant variability in hub-height TI and TKE or are not a main driver of that variability. Our results highlight how hub-height turbulence is simultaneously sensitive to numerous factors, so that no simple and universal relationship can be determined to vertically extrapolate turbulence from near-surface measurements, or to model it from other hub-height variables when considering univariate relationships. We suggest that a multivariate approach should instead be considered, possibly leveraging the capabilities of nonlinear machine-learning algorithms.
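The two turbulence metrics compared in this study follow standard definitions: TI is the standard deviation of the horizontal wind speed normalized by its mean, and TKE is half the sum of the velocity-component variances. A minimal computation on synthetic high-frequency data (all numbers invented, roughly mimicking 10 min of sonic-anemometer output) looks like this:

```python
import numpy as np

def turbulence_metrics(u, v, w):
    """TI and TKE from high-frequency wind components.

    TI  = sigma_U / mean(U), with U the horizontal wind speed
    TKE = 0.5 * (var(u') + var(v') + var(w'))
    """
    U = np.hypot(u, v)                      # horizontal wind speed
    ti = U.std(ddof=1) / U.mean()
    tke = 0.5 * (u.var(ddof=1) + v.var(ddof=1) + w.var(ddof=1))
    return ti, tke

rng = np.random.default_rng(2)
n = 12000  # e.g., 10 min of 20 Hz sonic-anemometer samples
u = 8.0 + rng.normal(0, 0.8, n)   # streamwise component with 8 m/s mean
v = rng.normal(0, 0.6, n)         # crosswise component
w = rng.normal(0, 0.4, n)         # vertical component
ti, tke = turbulence_metrics(u, v, w)
```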


2021 ◽  
Vol 18 ◽  
pp. 115-126
Author(s):  
Sebastian Brune ◽  
Jan D. Keller ◽  
Sabrina Wahl

Abstract. A correct spatio-temporal representation of retrospective wind speed estimates is of large interest for the wind energy sector. In this respect, reanalyses provide an invaluable source of information. However, the quality of the various reanalysis estimates of wind speed is difficult to assess. Therefore, this study compares wind measurements at hub heights from 14 locations in Central Europe with two global (ERA5, MERRA-2) and one regional reanalysis (COSMO-REA6). Employing metrics such as bias, RMSE and correlation, we evaluate the performance of the reanalyses with respect to (a) the local surface characteristics (offshore, flat onshore, hilly onshore), (b) various height levels (60 to 200 m) and (c) the diurnal cycle. As expected, we find that the reanalyses show the smallest errors relative to observations at offshore sites. Over land, MERRA-2 generally overestimates wind speeds, while COSMO-REA6 and ERA5 represent the average wind speed more realistically. At sites with flat terrain, ERA5 correlates better with observations than COSMO-REA6. In contrast, COSMO-REA6 performs slightly better over hilly terrain, which can be explained by its higher horizontal resolution. In terms of diurnal variation, ERA5 outperforms both other reanalyses. While the overestimation by MERRA-2 is consistent throughout the day, COSMO-REA6 significantly underestimates wind speed at night over flat and hilly terrain due to a misrepresentation of nocturnal low-level jets and mountain and valley breezes. Regarding the representation of wind turbine downtime due to low/high wind speeds, we find that MERRA-2 consistently underperforms relative to the other two reanalyses. Here COSMO-REA6 performs better over the ocean, while ERA5 shows the best results over land.
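The three verification metrics named above (bias, RMSE, Pearson correlation) have standard definitions and can be computed in a few lines. The series below are hypothetical hub-height values, not data from the study:

```python
import numpy as np

def verify(reanalysis, observed):
    """Bias, RMSE and Pearson correlation of a reanalysis wind-speed
    series against hub-height observations (aligned time series)."""
    err = reanalysis - observed
    bias = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    corr = np.corrcoef(reanalysis, observed)[0, 1]
    return bias, rmse, corr

# Hypothetical aligned 6-value series (m/s)
obs = np.array([6.1, 7.4, 5.2, 9.0, 8.3, 4.8])
rea = np.array([6.5, 7.9, 5.0, 9.6, 8.8, 5.3])
bias, rmse, corr = verify(rea, obs)
```

A positive bias with high correlation, as in this toy case, is the signature the abstract describes for MERRA-2 over land: the right variability, shifted to higher speeds.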


2012 ◽  
Vol 5 (1) ◽  
pp. 61-96
Author(s):  
P. N. den Outer ◽  
A. van Dijk ◽  
H. Slaper ◽  
A. V. Lindfors ◽  
H. De Backer ◽  
...  

Abstract. The Lambertian Equivalent Reflection (LER) produced by satellite-carried instruments is used to determine cloud effects on ground-level UltraViolet (UV) radiation. The focus is on the use of data from consecutively operating instruments: the Total Ozone Mapping Spectrometers (TOMS) flown on Nimbus 7 from 1979 to 1992, TOMS on Earth Probe from 1996 to 2005, and the Ozone Monitoring Instrument (OMI) flown on Aura since 2004. The LER data produced by TOMS on Earth Probe are only included until 2002. The possibility of using the Radiative Cloud Fraction (RCF) product of OMI is also investigated. A comparison is made with cloud effects inferred from ground-based pyranometer measurements at over 83 World Radiation Data Centre stations. Modelled UV irradiances utilizing LER data are compared with measurements of UV irradiances at eight European low-elevation stations. The LER data sets of the two TOMS instruments show consistent agreement, and the required corrections are small, i.e., 2–3%. In contrast, the LER data of OMI require a correction of 7–10%, and a solar-angle dependency therein is more pronounced. These corrections were inferred from a comparison with pyranometer data, and tested using the UV measurements. The RCF product of OMI requires a large correction but can then be implemented as a cloud effect proxy. However, a major drawback of RCF is the large fraction of clipped data, i.e., 18%, and the results are not better than those obtained with the corrected LER product of OMI. The average reduction of UV radiation due to clouds for all sites together indicates a small trend towards diminishing cloudiness, in line with ground-based UV observations. Uncorrected implementation of LER would have indicated the opposite. An optimal field of view of 1.25° was established for the LER data to calculate UV radiation levels. The corresponding area can be traversed within 5–7 h at the average wind speeds found for the West European continent.


2013 ◽  
Vol 10 (1) ◽  
pp. 51-58 ◽  
Author(s):  
P. E. Bett ◽  
H. E. Thornton ◽  
R. T. Clark

Abstract. We present initial results of a study on the variability of wind speeds across Europe over the past 140 yr, making use of the recent Twentieth Century Reanalysis data set, which includes uncertainty estimates from an ensemble method of reanalysis. Maps of the means and standard deviations of daily wind speeds, and of the Weibull-distribution parameters, show the expected features, such as the strong, highly variable wind in the north-east Atlantic. We do not find any clear, strong long-term trends in wind speeds across Europe, and the variability between decades is large. We examine how different years and decades are related in the long-term context by looking at the ranking of annual mean wind speeds. Picking a region covering eastern England as an example, our analyses show that the wind speeds there over the past ~ 20 yr are within the range expected from natural variability, but do not span the full range of variability of the 140-yr data set. The calendar year 2010 is, however, found to have the lowest mean wind speed on record for this region.


2020 ◽  
Vol 20 (24) ◽  
pp. 15617-15633
Author(s):  
Wenjie Wang ◽  
David D. Parrish ◽  
Xin Li ◽  
Min Shao ◽  
Ying Liu ◽  
...  

Abstract. In the past decade, average PM2.5 concentrations decreased rapidly under the strong pollution control measures in major cities in China; however, ozone (O3) pollution emerged as a significant problem. Here we examine a unique (for China) 12-year data set of ground-level O3 and precursor concentrations collected at an urban site in Beijing (PKUERS, campus of Peking University), where the maximum daily 8 h average (MDA8) O3 concentration and daytime Ox (O3+NO2) concentration in August increased by 2.3±1.2 ppbv (+3.3±1.8 %) yr−1 and 1.4±0.6 ppbv (+1.9±0.8 %) yr−1, respectively, from 2005 to 2016. In contrast, daytime concentrations of nitrogen oxides (NOx) and the OH reactivity of volatile organic compounds (VOCs) both decreased significantly. Over this same time, the decrease of particulate matter (and thus of the aerosol optical depth) led to enhanced solar radiation and photolysis frequencies, with near-surface J(NO2) increasing at a rate of 3.6±0.8 % yr−1. We use an observation-based box model to analyze the combined effect of solar radiation and ozone precursor changes on the ozone production rate, P(O3). The results indicate that the ratio of the rates of decrease of VOCs and NOx (about 1.1) is inefficient in reducing ozone production in Beijing. P(O3) increased during the decade due to more rapid atmospheric oxidation caused to a large extent by the decrease of particulate matter. This elevated ozone production was driven primarily by increased actinic flux due to the PM2.5 decrease and to a lesser extent by reduced heterogeneous uptake of HO2. Therefore, the influence of PM2.5 on actinic flux, and thus on the rate of oxidation of VOCs and NOx to ozone and to secondary aerosol (i.e., the major contributor to PM2.5), is important for determining the atmospheric effects of controlling the emissions of the common precursors of PM2.5 and ozone when attempting to control these two important air pollutants.


Geophysics ◽  
2016 ◽  
Vol 81 (5) ◽  
pp. D503-D518 ◽  
Author(s):  
Jeremy Maurer ◽  
Rosemary Knight

Nuclear magnetic resonance (NMR) logging provides a relatively new approach for estimating the hydraulic conductivity K of unconsolidated aquifers. We have evaluated results from model validation and uncertainty quantification using direct-push measurements of NMR mean relaxation times and K in sands and gravels at three field sites. We have tested four models that have been proposed for predicting K from NMR data, including the Schlumberger-Doll Research, Seevers, and sum-of-echoes equations, all of which use empirically determined constants, as well as the Kozeny-Godefroy model, which predicts K from several physical parameters. We have applied four methods of analysis to reanalyze NMR and K data from the three field sites to quantify how well each model predicted K from the mean log NMR relaxation time, given the uncertainties in the data. Our results show that NMR-estimated porosity does not improve prediction of K in our data set for any model and that all of the models can predict K to within an order of magnitude using the calibrated constants we have found. We have shown the value of rigorous uncertainty quantification using the methods we used for analyzing K-NMR data sets, and we have found that incorporating uncertainty estimates in our analysis gives a more complete understanding of the relationship between NMR-derived parameters and hydraulic conductivity than can be obtained through simple least-squares fitting. Given the uncertainty present in the data, there is little variability in the calibrated constants across our data set, and therefore we suggest that these constants could be used to obtain first-order estimates of hydraulic conductivity in unconsolidated sands and gravels at new sites with NMR data available.


2017 ◽  
Author(s):  
Steffen Tietsche ◽  
Magdalena Alonso-Balmaseda ◽  
Patricia Rosnay ◽  
Hao Zuo ◽  
Xiangshan Tian-Kunze ◽  
...  

Abstract. L-band radiance measurements such as those from the SMOS satellite can be used to distinguish thin from thick ice under cold surface conditions. However, uncertainties can be large due to assumptions in the forward model that converts brightness temperatures into ice thickness, and due to uncertainties in ancillary fields which need to be independently modelled or observed. It is therefore advisable to perform a critical assessment with independent observational and model data before using these data for model validation or data assimilation. Here, we discuss version 3.1 of the University of Hamburg L3C SMOS sea-ice thickness data set (SMOS-SIT) from autumn 2010 to spring 2017, and compare it to the results of the global ocean-sea ice analysis ORAS5. It is concluded that SMOS-SIT provides valuable and unique information on thin sea ice during winter, both in terms of the seasonal evolution and the interannual variability. Overall, there is a promising match between SMOS-SIT and ORAS5 early in the freezing season (October–December), while later in winter, sea ice is consistently modelled thicker than observed. This seems to be mostly due to deficiencies of the model in simulating polynyas and fracture zones. However, comparison with independent observational data suggests that there are regions where biases in the observational data also play a role. Both the reanalysis and the observations are provided with uncertainty estimates. While the reanalysis uncertainty estimates for the thickness of thin sea ice are probably too small and do not include the structural uncertainty of the simulation, those of SMOS-SIT are often large and do not seem to adequately characterise the complex uncertainties of the retrieval model. Therefore, careful manual assessment of the data is advisable when using them for model evaluation and data assimilation.
Interannual variability and trends of the large-scale distribution of thin sea ice are in good agreement between SMOS-SIT and ORAS5. In summary, SMOS-SIT presents a unique source of information about thin sea ice in the winter-time Arctic, and its use in sea-ice modelling, assimilation and forecasting applications is nascent and promising.


2020 ◽  
Author(s):  
Marc Philipp Bahlke ◽  
Natnael Mogos ◽  
Jonny Proppe ◽  
Carmen Herrmann

Heisenberg exchange spin coupling between metal centers is essential for describing and understanding the electronic structure of many molecular catalysts, metalloenzymes, and molecular magnets for potential application in information technology. We explore the machine-learnability of exchange spin coupling, which has not yet been studied. We employ Gaussian process regression since it can potentially deal with small training sets (as likely associated with the rather complex molecular structures required for exploring spin coupling) and since it provides uncertainty estimates ("error bars") along with predicted values. We compare a range of descriptors and kernels for 257 small dicopper complexes and find that a simple descriptor based on chemical intuition, consisting only of copper-bridge angles and copper-copper distances, clearly outperforms several more sophisticated descriptors when it comes to extrapolating towards larger, experimentally relevant complexes. Exchange spin coupling is similarly easy to learn as the polarizability, while learning dipole moments is much harder. The strength of the sophisticated descriptors lies in their ability to linearize structure-property relationships, to the point that a simple linear ridge regression performs just as well as the kernel-based machine-learning model for our small dicopper data set. The superior extrapolation performance of the simple descriptor is unique to exchange spin coupling, reinforcing the crucial role of choosing a suitable descriptor and highlighting the interesting question of the role of chemical intuition vs. systematic or automated selection of features for machine learning in chemistry and materials science.
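The workflow described above, a Gaussian process trained on a two-feature geometric descriptor and returning predictions with error bars, can be sketched as follows. The data here are synthetic (the smooth target function and all numerical ranges are invented for illustration, not the authors' data set):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Synthetic stand-in for the intuition-based descriptor:
# column 0 ~ Cu-bridge-Cu angle (deg), column 1 ~ Cu-Cu distance (Angstrom)
X = np.column_stack([rng.uniform(80, 110, 60), rng.uniform(2.8, 3.6, 60)])
# Toy coupling constant with a smooth dependence on both features plus noise
y = 50 * np.cos(np.radians(X[:, 0])) - 30 * (X[:, 1] - 3.2) \
    + rng.normal(0, 0.5, 60)

# RBF kernel with one length scale per feature; WhiteKernel absorbs noise
kernel = 1.0 * RBF(length_scale=[10.0, 0.3]) + WhiteKernel(noise_level=0.25)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predictions come with uncertainty estimates ("error bars")
X_new = np.array([[95.0, 3.2], [105.0, 3.0]])
J_pred, J_std = gpr.predict(X_new, return_std=True)
```

The per-prediction standard deviation `J_std` is what distinguishes this approach from plain ridge regression: it grows for queries far from the training data, flagging risky extrapolations.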

