HydroGFD3.0: a 25 km global near real-time updated precipitation and temperature data set


2021 ◽  
Vol 13 (4) ◽  
pp. 1531-1545
Author(s):  
Peter Berg ◽  
Fredrik Almén ◽  
Denica Bozhinova

Abstract. HydroGFD3 (Hydrological Global Forcing Data) is a data set of bias-adjusted reanalysis data for daily precipitation and minimum, mean, and maximum temperature. It is mainly intended for large-scale hydrological modelling but is also suitable for other impact modelling. The data set has an almost global land area coverage, excluding the Antarctic continent and small islands, at a horizontal resolution of 0.25°, i.e. about 25 km. It is available for the complete ERA5 reanalysis time period, currently 1979 until 5 d ago. This period will be extended back to 1950 once the back catalogue of ERA5 is available. The historical period is adjusted using global gridded observational data sets, and to acquire real-time data, a collection of several reference data sets is used. Consistency in time is attempted by relying on a background climatology and only making use of anomalies from the different data sets. Precipitation is adjusted for mean bias as well as the number of wet days in a month. The latter relies on a calibrated statistical method with input only of the monthly precipitation anomaly such that no additional input data about the number of wet days are necessary. The daily mean temperature is adjusted toward the monthly mean of the observations and applied to 1 h time steps of the ERA5 reanalysis. Daily mean, minimum, and maximum temperature are then calculated. The performance of the HydroGFD3 data set is on par with other similar products, although there are significant differences in different parts of the globe, especially where observations are uncertain. Further, HydroGFD3 tends to have higher precipitation extremes, partly due to its higher spatial resolution. In this paper, we present the methodology, evaluation results, and how to access the data set at https://doi.org/10.5281/zenodo.3871707 (Berg et al., 2020).
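The multiplicative mean-bias adjustment described in the abstract can be illustrated with a minimal sketch. The helper name `adjust_monthly_mean` is an assumption, and the actual HydroGFD3 procedure additionally adjusts wet-day counts and works with anomalies against a background climatology:

```python
import numpy as np

def adjust_monthly_mean(daily_precip, obs_monthly_mean):
    """Scale daily reanalysis precipitation so its monthly mean
    matches an observed monthly mean (multiplicative adjustment).

    daily_precip: 1-D array of daily totals for one month (mm/day)
    obs_monthly_mean: target monthly mean (mm/day)
    """
    model_mean = daily_precip.mean()
    if model_mean <= 0:
        return daily_precip.copy()  # nothing to scale in a fully dry month
    return daily_precip * (obs_monthly_mean / model_mean)

# Example: a wet-biased model month scaled toward a drier observed mean
month = np.array([0.0, 5.0, 2.0, 0.0, 8.0, 1.0])
adjusted = adjust_monthly_mean(month, obs_monthly_mean=2.0)
```

Note that the scaling preserves dry days (zeros stay zero) while forcing the monthly mean to match the target.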


2014 ◽  
Vol 571-572 ◽  
pp. 497-501 ◽  
Author(s):  
Qi Lv ◽  
Wei Xie

Real-time log analysis on large-scale data is important for many applications; here, real-time refers to UI latency within 100 ms. Techniques that efficiently support real-time analysis over large log data sets are therefore desirable. MongoDB provides good query performance, an aggregation framework, and a distributed architecture, which make it suitable for real-time data query and massive log analysis. In this paper, a novel implementation approach for an event-driven file log analyzer is presented, and the performance of query, scan, and aggregation operations over MongoDB, HBase, and MySQL is compared. Our experimental results show that HBase delivers the most balanced performance across all operations, while MongoDB provides query speeds under 10 ms in some operations, making it the most suitable for real-time applications.
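An event-driven analyzer of this kind parses log lines into structured documents before storing and aggregating them. A minimal, database-free sketch of that parse-and-group step (the paper's system uses MongoDB's aggregation framework; the log format and the helpers `parse_line`/`aggregate_levels` are assumptions for illustration):

```python
import re
from collections import Counter

# Hypothetical log format: "<LEVEL> <message>"
LOG_PATTERN = re.compile(r'(?P<level>INFO|WARN|ERROR)\s+(?P<msg>.*)')

def parse_line(line):
    """Parse one log line into a document-like dict (as would be
    inserted into a MongoDB collection); returns None if malformed."""
    m = LOG_PATTERN.match(line.strip())
    return m.groupdict() if m else None

def aggregate_levels(lines):
    """Count events per severity level, mimicking a MongoDB $group
    aggregation over the 'level' field."""
    counts = Counter()
    for line in lines:
        doc = parse_line(line)
        if doc:
            counts[doc['level']] += 1
    return dict(counts)

sample = ["INFO started", "ERROR disk full", "INFO done", "garbage"]
summary = aggregate_levels(sample)  # malformed lines are skipped
```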


2020 ◽  
Author(s):  
Banafsheh Abdollahi ◽  
Rolf Hut ◽  
Nick van de Giesen

<p>Irrigation is crucial for sustaining food security for the growing population around the world. Irrigation affects the hydrological cycle both directly, during the process of water abstraction and irrigation, and indirectly, through the infrastructure built in support of irrigation, such as canals, dams, reservoirs, and drainage systems. For evaluating the availability of freshwater resources in the light of growing food demand, modeling the global hydrological cycle is vital. The GlobWat model is one of the models designed for large-scale hydrological modeling, with a specific focus on water use by irrigated agriculture. Both the model's underlying assumptions and the global input data sets used to feed it can be sources of uncertainty in the output. One of the most challenging inputs to global hydrological models is the climate forcing data set. Several climate forcings are available at the global scale, such as ERA5 and ERA-Interim. In this study, we assess the sensitivity of the GlobWat model to these climate forcings. Pre-processing climate data at a large scale used to be difficult; recently, this has become much easier thanks to the data and scripts provided by the eWaterCycle team at the eScience Center in Amsterdam, the Netherlands. We will use eWaterCycle's freely available data sources for our assessment and then compare the model results with observed data at a local scale.</p>


2002 ◽  
Vol 20 (7) ◽  
pp. 1039-1047 ◽  
Author(s):  
P. T. Newell ◽  
T. Sotirelis ◽  
J. M. Ruohoniemi ◽  
J. F. Carbary ◽  
K. Liou ◽  
...  

Abstract. The location of the auroral oval and the intensity of the auroral precipitation within it are basic elements in any adequate characterization of the state of the magnetosphere. Yet despite the many ground-based and spacecraft-borne instruments monitoring various aspects of auroral behavior, there are no clear and consistent answers available to those wishing to locate the auroral oval or to quantify its intensity. The purpose of OVATION is to create a tool which does so. OVATION is useful both for archival purposes and for space weather nowcasting. The long-running DMSP particle data set, which covers both hemispheres, has operated since the early 1980s, and will continue to operate well into the next decade, is chosen as the calibration standard. Other data sets, including global images from Polar UVI, SuperDARN boundaries, and meridian scanning photometer images, are cross-calibrated to the DMSP standard. Each incorporated instrument has its average offset from the DMSP standard determined as a function of MLT, along with the standard deviations. The various data can therefore be combined in a meaningful manner, with the weight attached to a given boundary measurement varying inversely with the variance (square of the standard deviation). OVATION currently spans December 1983 through the present, including real-time data. Participation of additional experimenters is highly welcomed; the only prerequisites are a willingness to conduct the prescribed cross-calibration procedure and to make the data available online. The real-time auroral oval location can be found at http://sd-www.jhuapl.edu/Aurora/ovation live/northdisplay.html.

Key words. Magnetospheric physics (auroral phenomena; energetic particles, precipitating; magnetosphere-ionosphere interactions)
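The inverse-variance weighting described above can be sketched as follows. The helper `combine_boundaries` is hypothetical, and the offsets and standard deviations stand in for OVATION's per-instrument, per-MLT calibration tables:

```python
def combine_boundaries(measurements):
    """Inverse-variance weighted mean of auroral boundary latitudes
    from different instruments: each value has its instrument's mean
    offset from the DMSP standard removed, then is weighted by
    1/sigma^2 (weight inversely proportional to the variance).

    measurements: list of (value_deg, mean_offset_deg, sigma_deg)
    """
    num = 0.0
    den = 0.0
    for value, offset, sigma in measurements:
        w = 1.0 / sigma**2
        num += w * (value - offset)  # cross-calibrated boundary estimate
        den += w
    return num / den

# Two instruments observing the same boundary; the more precise one dominates
combined = combine_boundaries([(65.0, 0.5, 1.0), (66.0, 0.0, 2.0)])
```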


Author(s):  
Lior Shamir

Abstract Several recent observations using large data sets of galaxies showed a non-random distribution of the spin directions of spiral galaxies, even when the galaxies are too far from each other to have gravitational interaction. Here, a data set of $\sim8.7\cdot10^3$ spiral galaxies imaged by the Hubble Space Telescope (HST) is used to test and profile a possible asymmetry between galaxy spin directions. The asymmetry between galaxies with opposite spin directions is compared to the asymmetry of galaxies from the Sloan Digital Sky Survey (SDSS). The two data sets contain different galaxies at different redshift ranges, and each data set was annotated using a different annotation method. The two data sets exhibit a similar asymmetry in the COSMOS field, which is covered by both telescopes. Fitting the asymmetry of the galaxies to a cosine dependence yields a dipole axis with probabilities of $\sim2.8\sigma$ and $\sim7.38\sigma$ in HST and SDSS, respectively. The most likely dipole axis identified in the HST galaxies is at $(\alpha=78^{\circ},\delta=47^{\circ})$ and is well within the $1\sigma$ error range of the most likely dipole axis in the SDSS galaxies with $z>0.15$, identified at $(\alpha=71^{\circ},\delta=61^{\circ})$.
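The cosine (dipole) fit can be illustrated with a one-coefficient least-squares sketch on synthetic data. The paper additionally scans candidate axes over the sky and estimates significance statistically; only the per-axis fitting step is shown here:

```python
import numpy as np

def fit_dipole(angles, asymmetry):
    """Least-squares fit of a cosine (dipole) dependence
    a(theta) = d * cos(theta) to measured spin-direction asymmetries,
    where theta is the angle from a candidate dipole axis.
    Returns the fitted dipole amplitude d (closed form for a
    single coefficient: d = <c, a> / <c, c>)."""
    c = np.cos(angles)
    return float(c @ asymmetry / (c @ c))

# Synthetic check: data generated from a known dipole amplitude d = 0.05
theta = np.linspace(0.0, np.pi, 50)
d_fit = fit_dipole(theta, 0.05 * np.cos(theta))
```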


2015 ◽  
Vol 8 (1) ◽  
pp. 421-434 ◽  
Author(s):  
M. P. Jensen ◽  
T. Toto ◽  
D. Troyan ◽  
P. E. Ciesielski ◽  
D. Holdridge ◽  
...  

Abstract. The Midlatitude Continental Convective Clouds Experiment (MC3E) took place during the spring of 2011, centered in north-central Oklahoma, USA. The main goal of this field campaign was to capture the dynamical and microphysical characteristics of precipitating convective systems in the US Central Plains. A major component of the campaign was a six-site radiosonde array designed to capture the large-scale variability of the atmospheric state with the intent of deriving model forcing data sets. Over the course of the 46-day MC3E campaign, a total of 1362 radiosondes were launched from the enhanced sonde network. This manuscript provides details on the instrumentation used as part of the sounding array, the data processing activities, including quality checks and humidity bias corrections, and an analysis of the impacts of bias correction and algorithm assumptions on the determination of convective levels and indices. It is found that corrections for known radiosonde humidity biases and assumptions regarding the characteristics of the surface convective parcel result in significant differences in the derived values of convective levels and indices in many soundings. In addition, the impact of including the humidity corrections and quality controls on the thermodynamic profiles that are used in the derivation of a large-scale model forcing data set is investigated. The results show a significant impact on the derived large-scale vertical velocity field, illustrating the importance of addressing these humidity biases.


2010 ◽  
Vol 2010 ◽  
pp. 1-14 ◽  
Author(s):  
Stefan Polanski ◽  
Annette Rinke ◽  
Klaus Dethloff

The regional climate model HIRHAM has been applied over the Asian continent to simulate the Indian monsoon circulation under present-day conditions. The model is driven at the lateral and lower boundaries by European reanalysis (ERA40) data for the period from 1958 to 2001. Simulations with a horizontal resolution of 50 km are carried out to analyze the regional monsoon patterns. The focus in this paper is on the validation of the long-term summer monsoon climatology and its variability concerning circulation, temperature, and precipitation. Additionally, the monsoonal behavior in simulations for wet and dry years has been investigated and compared against several observational data sets. The simulations reproduce the observed monsoon features well, owing in part to a realistic representation of topography. The simulated precipitation agrees better with a high-resolution gridded precipitation data set than ERA40 does over the central land areas of India and in the higher-elevation Tibetan and Himalayan regions.
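The kind of comparison used in such a validation, mean bias plus spatial pattern correlation against a gridded reference, can be sketched as follows (illustrative only; the function name and the toy fields are assumptions, not the study's evaluation code):

```python
import numpy as np

def validation_scores(sim, obs):
    """Mean bias and spatial pattern correlation between a simulated
    and an observed precipitation field on the same grid."""
    bias = float(np.mean(sim - obs))                         # systematic offset
    r = float(np.corrcoef(sim.ravel(), obs.ravel())[0, 1])   # pattern agreement
    return bias, r

# Toy fields: a uniformly wet-biased simulation with a perfect spatial pattern
obs = np.array([[1.0, 2.0], [3.0, 4.0]])
sim = obs + 1.0
bias, r = validation_scores(sim, obs)
```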


Author(s):  
Sepehr Fathizadan ◽  
Feng Ju ◽  
Kyle Rowe ◽  
Alex Fiechter ◽  
Nils Hofmann

Abstract Production efficiency and product quality need to be addressed simultaneously to ensure the reliability of large-scale additive manufacturing. Specifically, print surface temperature plays a critical role in determining the quality characteristics of the product. Moreover, heat transfer via conduction, a result of spatial correlation between locations on the surface of large and complex geometries, necessitates more robust methodologies to extract and monitor the data. In this paper, we propose a framework for real-time data extraction from thermal images as well as a novel method for controlling layer time during the printing process. A FLIR™ thermal camera captures and stores the stream of images of the print surface temperature while the Thermwood Large Scale Additive Manufacturing (LSAM™) machine prints components. A set of digital image processing tasks was performed to extract the thermal data. Separate regression models based on real-time thermal imaging data are built for each location on the surface to predict the associated temperatures. Subsequently, a control method is proposed to find the best time for printing the next layer given the predictions. Finally, several scenarios based on the cooling dynamics of the surface structure were defined and analyzed, and the results were compared to the current fixed-layer-time policy. The proposed method can significantly increase efficiency by reducing overall printing time while preserving quality.
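The layer-time control step can be sketched as follows, assuming a simplified linear cooling model in place of the paper's per-location regression models; the function name, units, and target temperature are illustrative assumptions:

```python
def next_layer_time(temps_now, cooling_rates, target):
    """Earliest wait time (s) at which every surface location has
    cooled to the target print temperature, given current
    temperatures and per-location linear cooling rates (deg/s).
    The slowest-cooling hot spot dictates when the next layer
    can be printed."""
    waits = []
    for temp, rate in zip(temps_now, cooling_rates):
        if temp <= target:
            waits.append(0.0)                    # already cool enough
        else:
            waits.append((temp - target) / rate) # time to reach target
    return max(waits)

# Three monitored locations; the first is hot and cools slowly
wait = next_layer_time([210.0, 190.0, 205.0], [0.5, 0.5, 1.0], target=200.0)
```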


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jiawei Lian ◽  
Junhong He ◽  
Yun Niu ◽  
Tianze Wang

Purpose Current popular image processing technologies based on convolutional neural networks involve heavy computation, high storage cost, and low accuracy for tiny defect detection, which conflicts with the high real-time performance, high accuracy, and limited computing and storage resources required by industrial applications. Therefore, an improved YOLOv4, named YOLOv4-Defect, is proposed to solve the above problems. Design/methodology/approach On the one hand, this study performs multi-dimensional compression on the feature extraction network of YOLOv4 to simplify the model and improves its feature extraction ability through knowledge distillation. On the other hand, a prediction scale with a more detailed receptive field is added to optimize the model structure, which improves detection performance for tiny defects. Findings The effectiveness of the method is verified on the public data sets NEU-CLS and DAGM 2007, and on a steel ingot data set collected in an actual industrial setting. The experimental results demonstrate that the proposed YOLOv4-Defect method can greatly improve recognition efficiency and accuracy while reducing the size and computational cost of the model. Originality/value This paper proposes an improved YOLOv4, named YOLOv4-Defect, for surface defect detection, which is conducive to application in industrial scenarios with limited storage and computing resources and meets requirements of high real-time performance and precision.
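The knowledge-distillation step relies on matching the student's temperature-softened output distribution to the teacher's. A generic sketch of that objective (the standard distillation cross-entropy, not the paper's exact loss; temperature value is an assumption):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax: higher temperature flattens
    the distribution, exposing the teacher's 'dark knowledge'."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between the teacher's and student's softened
    distributions; minimized when the student matches the teacher."""
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# A matching student achieves a lower loss than a mismatched one
matched = distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
mismatched = distillation_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0])
```

In practice this term is weighted against the ordinary detection loss on ground-truth labels.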


2020 ◽  
Vol 223 (2) ◽  
pp. 1378-1397
Author(s):  
Rosemary A Renaut ◽  
Jarom D Hogue ◽  
Saeed Vatankhah ◽  
Shuang Liu

SUMMARY We discuss the focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid. For the uniform grid, the model sensitivity matrices have a block-Toeplitz Toeplitz-block (BTTB) structure for each block of columns related to a fixed depth layer of the subsurface. All forward operations with the sensitivity matrix, or its transpose, can then be performed using the 2-D fast Fourier transform. Simulations show that the implementation of the focusing inversion algorithm using the fast Fourier transform is efficient, and that the algorithm can be realized on standard desktop computers with sufficient memory for storage of volumes up to size n ≈ 10^6. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub–Kahan bidiagonalization or randomized singular value decomposition algorithms. These two algorithms are contrasted for their efficiency when used to solve large-scale problems, with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and for data sets of size m it is sufficient to use projected spaces of size approximately m/8. For the inversion of magnetic data sets, we show that it is more efficient to use Golub–Kahan bidiagonalization, and that it is again sufficient to use projected spaces of size approximately m/8. Simulations support the presented conclusions and are verified for the inversion of a magnetic data set obtained over the Wuskwatim Lake region in Manitoba, Canada.
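The FFT-based forward operation exploits the fact that, for a uniform grid, the BTTB matrix-vector product for one depth layer is a 2-D convolution. A small sketch verifying the FFT route against direct convolution (illustrative of the idea, not the authors' implementation; the kernel stands in for one depth layer's sensitivity stencil):

```python
import numpy as np

def conv2d_direct(kernel, model):
    """Full 2-D linear convolution by accumulation of shifted copies
    (reference implementation, O(n^2))."""
    kr, kc = kernel.shape
    mr, mc = model.shape
    out = np.zeros((kr + mr - 1, kc + mc - 1))
    for i in range(kr):
        for j in range(kc):
            out[i:i + mr, j:j + mc] += kernel[i, j] * model
    return out

def conv2d_fft(kernel, model):
    """Same convolution via zero-padded 2-D FFTs, O(n log n):
    this is how forward products with a BTTB sensitivity block can
    be applied without ever forming the matrix."""
    shape = (kernel.shape[0] + model.shape[0] - 1,
             kernel.shape[1] + model.shape[1] - 1)
    K = np.fft.rfft2(kernel, shape)   # zero-padding avoids wrap-around
    M = np.fft.rfft2(model, shape)
    return np.fft.irfft2(K * M, shape)

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 4))
model = rng.standard_normal((5, 6))
```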

