Identification of models of nonlinear dynamic processes in mining on the basis of Volterra nuclei

2020 ◽  
Vol 201 ◽  
pp. 01028
Author(s):  
Natalia Morkun ◽  
Iryna Zavsiehdashnia ◽  
Oleksandra Serdiuk ◽  
Iryna Kasatkina

Improving the efficiency of the technological processes of mineral concentration is essential to the sustainability of mining enterprises, and the optimization of these processes currently receives special attention. This approach requires high-quality data on the process, the formation of corresponding databases, and their subsequent processing to build adequate and efficient mathematical models of processes and systems. To improve the quality of the mathematical description of how the fractional characteristics of ore are formed by the technological aggregates used in concentration, the authors suggest using Volterra power series, which characterize a controlled object (its condition) as a sequence of multidimensional weight functions invariant to the type of input signal – Volterra kernels. Application of Volterra structures reduces the modelling error to 0.039, against a root-mean-square error of 0.0594.
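For readers unfamiliar with the formalism, the expansion in question is the standard Volterra series; truncated at second order (a textbook form, not taken from the paper itself) it reads:

```latex
y(t) = h_0 + \int_{0}^{\infty} h_1(\tau)\, x(t-\tau)\, d\tau
     + \int_{0}^{\infty}\!\!\int_{0}^{\infty} h_2(\tau_1,\tau_2)\, x(t-\tau_1)\, x(t-\tau_2)\, d\tau_1\, d\tau_2 + \cdots
```

where \(x\) is the input signal, \(y\) the output, and the kernels \(h_n\) are the multidimensional weight functions (Volterra kernels) referred to in the abstract.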

2021 ◽  
Vol 14 (1) ◽  
pp. 89-116
Author(s):  
Camille Yver-Kwok ◽  
Carole Philippon ◽  
Peter Bergamaschi ◽  
Tobias Biermann ◽  
Francescopiero Calzolari ◽  
...  

Abstract. The Integrated Carbon Observation System (ICOS) is a pan-European research infrastructure which provides harmonized and high-precision scientific data on the carbon cycle and the greenhouse gas budget. All stations have to undergo a rigorous assessment before being labeled, i.e., receiving approval to join the network. In this paper, we present the labeling process for the ICOS atmosphere network through the 23 stations that were labeled between November 2017 and November 2019. We describe the labeling steps, as well as the quality controls, used to verify that the ICOS data (CO2, CH4, CO and meteorological measurements) attain the expected quality level defined within ICOS. To ensure the quality of the greenhouse gas data, three to four calibration gases and two target gases are measured: one target two to three times a day, the other gases twice a month. The data are verified on a weekly basis, and tests on the station sampling lines are performed twice a year. From these high-quality data, we conclude that regular calibrations of the CO2, CH4 and CO analyzers used here (twice a month) are important, in particular for carbon monoxide (CO) due to the analyzer's variability, and that reducing the number of calibration injections (from four to three) in a calibration sequence is possible, saving gas and extending the calibration gas lifespan. We also show that the on-site water vapor correction test does not currently deliver quantitative results, possibly due to environmental factors; thus, the use of a drying system is strongly recommended. Finally, the mandatory regular intake line tests prove useful in detecting artifacts and leaks, as illustrated by three examples from the stations.
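As a back-of-the-envelope illustration of the gas savings claimed above, the sketch below compares cylinder lifespans under four- and three-injection calibration sequences; the per-injection volume and cylinder capacity are hypothetical placeholders, not ICOS figures:

```python
# Back-of-the-envelope estimate of calibration gas savings when a
# calibration sequence uses 3 injections per gas instead of 4.
# LITERS_PER_INJECTION and CYLINDER_LITERS are hypothetical placeholders,
# not ICOS specifications; the twice-monthly frequency is from the abstract.

INJECTIONS_OLD = 4           # injections per gas per calibration sequence
INJECTIONS_NEW = 3
SEQUENCES_PER_MONTH = 2      # calibrations twice a month (per the abstract)
LITERS_PER_INJECTION = 15.0  # hypothetical consumption per injection
CYLINDER_LITERS = 10_000.0   # hypothetical usable gas per cylinder

def cylinder_lifespan_months(injections_per_sequence: int) -> float:
    """Months until a calibration cylinder is exhausted."""
    monthly_use = injections_per_sequence * SEQUENCES_PER_MONTH * LITERS_PER_INJECTION
    return CYLINDER_LITERS / monthly_use

old = cylinder_lifespan_months(INJECTIONS_OLD)
new = cylinder_lifespan_months(INJECTIONS_NEW)
print(f"4 injections: {old:.0f} months; 3 injections: {new:.0f} months "
      f"(+{100 * (new / old - 1):.0f}% lifespan)")
```

Whatever the actual volumes, dropping one of four injections extends cylinder lifespan by a third, which is why the saving is worthwhile at network scale.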


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4486 ◽  
Author(s):  
Mohan Li ◽  
Yanbin Sun ◽  
Yu Jiang ◽  
Zhihong Tian

In sensor-based systems, the data for an object are often provided by multiple sources. Since the data quality of these sources may differ, it is necessary, when querying observations, to select sources carefully so that high-quality data are accessed. One solution is to perform a quality evaluation in the cloud and select a set of high-quality, low-cost data sources (i.e., sensors or small sensor networks) that can answer the queries. This paper studies the problem of min-cost quality-aware queries, which aim to find high-quality results from multiple sources at minimal cost. A measure for the quality of query results is provided, and two methods for answering min-cost quality-aware queries are proposed. How to obtain a reasonable parameter setting is also discussed. Experiments on real-life data verify that the proposed techniques are efficient and effective.
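To make the problem setting concrete, here is a minimal greedy sketch of quality-aware source selection; it illustrates the cost/quality trade-off only and is not one of the two methods proposed in the paper (the additive quality model and the example numbers are assumptions):

```python
# Minimal greedy sketch of min-cost quality-aware source selection:
# choose sources whose combined quality reaches a threshold at low total
# cost. Illustrative only; not the paper's algorithm.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    quality: float  # e.g., estimated accuracy of the sensor, in [0, 1]
    cost: float     # e.g., access/energy cost of querying it

def select_sources(sources: list[Source], required_quality: float) -> list[Source]:
    """Greedily pick sources by quality-per-cost until the requirement is met.

    Assumes qualities combine additively, a simplification chosen for
    illustration; real quality models are usually more involved.
    """
    chosen, total_quality = [], 0.0
    for s in sorted(sources, key=lambda s: s.quality / s.cost, reverse=True):
        if total_quality >= required_quality:
            break
        chosen.append(s)
        total_quality += s.quality
    return chosen

sensors = [Source("s1", 0.9, 5.0), Source("s2", 0.6, 1.0), Source("s3", 0.7, 2.0)]
print([s.name for s in select_sources(sensors, required_quality=1.2)])  # ['s2', 's3']
```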


2012 ◽  
Vol 2012 ◽  
pp. 1-8 ◽  
Author(s):  
Janet E. Squires ◽  
Alison M. Hutchinson ◽  
Anne-Marie Bostrom ◽  
Kelly Deis ◽  
Peter G. Norton ◽  
...  

Researchers strive to optimize data quality in order to ensure that study findings are valid and reliable. In this paper, we describe a data quality control program designed to maximize the quality of survey data collected using computer-assisted personal interviews. The program comprised three phases: (1) software development, (2) an interviewer quality control protocol, and (3) a data cleaning and processing protocol. To illustrate the value of the program, we assess its use in the Translating Research in Elder Care Study, drawing on data collected annually for two years from computer-assisted personal interviews with 3004 healthcare aides. Data quality was assessed using both survey and process data. Missing data and data errors were minimal. Means, medians, and standard deviations were within acceptable limits. Process data indicated that in only 3.4% and 4.0% of cases (in Years 1 and 2, respectively) was the interviewer unable to conduct interviews in accordance with the program. Interviewers' perceptions of interview quality also improved significantly between Years 1 and 2. While this data quality control program was demanding in terms of time and resources, we found that the benefits clearly outweighed the effort required to achieve high-quality data.
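As an illustration of what a data cleaning and processing protocol of this kind might automate, the sketch below flags missing and out-of-range survey values; the field names and valid ranges are hypothetical, not taken from the study:

```python
# Illustrative data-cleaning checks of the kind a survey quality-control
# protocol might include (the study's actual rules are not reproduced here).
import pandas as pd

# Hypothetical field definitions: allowed value ranges per survey item.
VALID_RANGES = {"age": (18, 90), "job_satisfaction": (1, 5)}

def quality_report(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize missingness and out-of-range rates per checked column."""
    rows = []
    for col, (lo, hi) in VALID_RANGES.items():
        missing = df[col].isna().mean()
        out_of_range = (~df[col].between(lo, hi) & df[col].notna()).mean()
        rows.append({"column": col, "pct_missing": 100 * missing,
                     "pct_out_of_range": 100 * out_of_range})
    return pd.DataFrame(rows)

surveys = pd.DataFrame({"age": [34, None, 250], "job_satisfaction": [4, 5, 2]})
print(quality_report(surveys))  # flags one missing age and one impossible age
```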


2005 ◽  
Vol 42 ◽  
pp. 389-394 ◽  
Author(s):  
Per Holmlund ◽  
Peter Jansson ◽  
Rickard Pettersson

Abstract. The use of glacier mass-balance records to assess the effects of climate change on glacier volume requires high-quality data. The methods for measuring glacier mass balance have been developed in tandem with the measurements themselves, which implies that the quality of the data may change over time. We have investigated such effects in the mass-balance record of Storglaciären, Sweden, by re-analyzing the record using a better map base and applying successive maps over appropriate time periods. Our results show that errors of <0.8 m occur during the first decades of the time series; errors decrease with time, consistent with improvements in measurement methods. Comparison between the old and new datasets also shows improved relationships between net balance, equilibrium-line altitude and summer temperature. A time-series analysis further indicates that the record does not contain longer-term (>10 year) oscillations. The pseudo-cyclic signal must thus be explained by factors other than cyclically occurring phenomena, although the record may still be too short to establish significant signals. We strongly recommend the re-analysis of long mass-balance records in order to improve the records used for other analyses.
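One simple way to check an annual series for longer-term (>10 year) oscillations is a periodogram; the sketch below does this on synthetic data and is not a reproduction of the authors' analysis:

```python
# Sketch of one way to look for long-period (>10 yr) oscillations in an
# annual net mass-balance series, via a simple periodogram on synthetic
# data; the paper's actual time-series analysis may differ.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1946, 2004)                    # roughly the span of the record
balance = rng.normal(0.0, 0.5, size=years.size)  # synthetic net balance, m w.e.

detrended = balance - balance.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(detrended.size, d=1.0)   # cycles per year

# Periods longer than 10 years correspond to frequencies below 0.1 /yr.
long_period = (freqs > 0) & (freqs < 0.1)
print("fraction of variance at >10 yr periods:",
      power[long_period].sum() / power[freqs > 0].sum())
```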


Author(s):  
A Cecile JW Janssens ◽  
Gary W Miller ◽  
K Venkat Narayan

The US National Institutes of Health (NIH) recently announced that they would limit the number of grants per scientist and redistribute their funds across a larger group of researchers. The policy was withdrawn a month later after criticism from the scientific community. Even so, the basis of this defunct policy was flawed and it merits further examination. The amount of grant support would have been quantified using a new metric, the Grant Support Index (GSI), and limited to a maximum of 21 points, the equivalent of three R01 grants. This threshold was decided based upon analysis of a new metric of scientific output, the annual weighted Relative Citation Ratio, which showed a pattern of diminishing returns at higher values of the GSI. In this commentary, we discuss several concerns about the validity of the two metrics and the quality of the data that the NIH had used to set the grant threshold. These concerns would have warranted a re-analysis of new data to confirm the legitimacy of the GSI threshold. Data-driven policies that affect the careers of scientists should be justified by nothing less than a rigorous analysis of high-quality data.
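For concreteness, the cap described above can be expressed as a simple point budget. The abstract fixes only one weight (21 points = three R01s, i.e., 7 points per R01); the other weights in the sketch below are hypothetical placeholders:

```python
# Sketch of the proposed Grant Support Index cap. Only the R01 weight is
# implied by the text (21-point cap = three R01s); the other mechanism
# weights here are hypothetical placeholders.
GSI_POINTS = {"R01": 7, "R21": 4, "P01_PI": 10}  # only R01 is sourced from the text
GSI_CAP = 21

def over_cap(portfolio: dict[str, int]) -> bool:
    """Return True if a grant portfolio exceeds the proposed GSI cap."""
    total = sum(GSI_POINTS[mech] * n for mech, n in portfolio.items())
    return total > GSI_CAP

print(over_cap({"R01": 3}))               # False: exactly at the 21-point cap
print(over_cap({"R01": 2, "P01_PI": 1}))  # True under these assumed weights
```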


2021 ◽  
pp. 193896552110254
Author(s):  
Lu Lu ◽  
Nathan Neale ◽  
Nathaniel D. Line ◽  
Mark Bonn

As the use of Amazon’s Mechanical Turk (MTurk) has increased among social science researchers, so, too, has research into the merits and drawbacks of the platform. However, while many endeavors have sought to address issues such as generalizability, the attentiveness of workers, and the quality of the associated data, there has been relatively less effort concentrated on integrating the various strategies that can be used to generate high-quality data using MTurk samples. Accordingly, the purpose of this research is twofold. First, existing studies are integrated into a set of strategies/best practices that can be used to maximize MTurk data quality. Second, focusing on task setup, selected platform-level strategies that have received relatively less attention in previous research are empirically tested to further enhance the contribution of the proposed best practices for MTurk usage.
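As a small example of one widely used worker-side quality strategy (not necessarily among the platform-level settings the paper tests), the sketch below drops respondents who fail an embedded attention check; the column names and pass criteria are hypothetical:

```python
# Example of one common MTurk data-quality strategy: dropping respondents
# who fail embedded attention checks. Column names and criteria are
# hypothetical, not drawn from the paper.
import pandas as pd

def screen_attention(df: pd.DataFrame, check_cols: list[str],
                     expected: dict[str, object]) -> pd.DataFrame:
    """Keep rows whose attention-check answers all match the expected values."""
    mask = pd.Series(True, index=df.index)
    for col in check_cols:
        mask &= df[col] == expected[col]
    return df[mask]

responses = pd.DataFrame({
    "worker_id": ["w1", "w2", "w3"],
    "attn_1": ["agree", "disagree", "agree"],  # instructed answer: "agree"
})
print(screen_attention(responses, ["attn_1"], {"attn_1": "agree"}))
```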


2021 ◽  
Vol 2 ◽  
Author(s):  
Julia Adelöf ◽  
Jaime M. Ross ◽  
Madeleine Zetterberg ◽  
Malin Hernebring

Lifespan analyses are important for advancing our understanding of the aging process. There are two major issues in performing lifespan studies: (1) late-stage lifespan analysis may include animals with non-terminal yet advanced illnesses, which can emphasize indirect processes of aging rather than the aging process per se, and (2) such studies often involve challenging welfare considerations. Herein, we present an alternative to the traditional way of performing lifespan studies: a novel method that generates high-quality data and allows for the inclusion of otherwise-excluded animals, even animals removed at early signs of disease. This Survival-span method is designed to be feasible with simple means for any researcher and strives to improve the quality of aging studies while increasing animal welfare.
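The abstract does not detail the Survival-span method itself, but the standard statistical device for including animals removed early is right-censoring in a survival estimate. A minimal sketch with invented data, using the lifelines library:

```python
# Standard device for including animals removed early (e.g., at first
# disease signs): treat them as right-censored rather than excluding them.
# Data below are invented for illustration; this is not the authors' code.
from lifelines import KaplanMeierFitter

ages_days = [710, 802, 655, 740, 690, 820, 600]
# 1 = died (event observed), 0 = removed early and right-censored
observed = [1, 1, 0, 1, 0, 1, 1]

kmf = KaplanMeierFitter()
kmf.fit(ages_days, event_observed=observed, label="cohort A")
print(kmf.median_survival_time_)     # median lifespan with censoring handled
print(kmf.survival_function_.head())
```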


1994 ◽  
Vol 8 (4) ◽  
pp. 883-886 ◽  
Author(s):  
Janet L. Andersen

The Environmental Protection Agency (EPA) is required by law to ensure that the use of pesticides does not pose unreasonable risks to humans or the environment when risks are weighed against benefits. Weed scientists conduct hundreds of comparative efficacy tests each year, but the results are often of little use to the Agency in benefit assessments: the tests are unpublished or otherwise unavailable to the Agency, conducted in a manner unusable for regulatory purposes, or inconsistent from year to year or between sites. Despite the lack of high-quality data, the Agency is compelled to make the best regulatory decision possible with the information at hand, and it may appear to some that decisions are based more on policy than on science. EPA is looking for experimental methods that will improve the quality of benefits data available to the Agency.

