Application of Neural Networks in the Problem of Quantitative Analysis of Air Composition

2020 ◽  
Vol 24 (1) ◽  
pp. 159-174
Author(s):  
O. G. Bondar ◽  
E. O. Brezhneva ◽  
R. E. Chernyshov

Purpose of research: to develop a method for generating training data that enables the use of artificial neural networks (ANNs) in gas analyzer systems. The problem of increasing the accuracy of separate determination of gas concentrations in multicomponent mixtures under changing environmental conditions is considered. It is proposed to increase the accuracy of determining target gas concentrations by using an ANN for joint processing of sensor signals.

Methods: Training data for the neural network were generated using numerical experiments and mathematical simulation methods. To assess the accuracy of training, the standard deviation (SD) and the relative error were calculated. ANN training and research were conducted in the MATLAB environment (the Neural Networks Toolbox application). When developing mathematical models of gas sensors, the theory of electrical circuits, the electronic theory of chemisorption, and the adsorption theory of heterogeneous catalysis were applied.

Results: A method for generating training data sets using mathematical models of gas sensors is described. The proposed training method has been tested on a specific task: a decision-making device based on an ANN for a four-component gas analyzer has been developed. The efficiency of using neural networks to compensate for the mutual cross-sensitivity of the sensors was evaluated.

Conclusion: A method for generating training data using simulation models is proposed, which allows the process of training, research, selection of the ANN architecture and structure, and testing to be automated. The method was tested. Based on the analysis of the obtained errors, conclusions are drawn about the efficiency of using neural networks to reduce errors caused by cross-sensitivity at different concentrations of the main and interfering gases.
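The core idea of the abstract can be illustrated with a minimal sketch: sensor responses to a gas mixture are simulated from a model with cross-sensitivity, the synthetic pairs serve as training data, and the joint inverse mapping back to the individual concentrations is then learned from them. The cross-sensitivity matrix, noise level, and the linear least-squares "learner" below are all stand-ins invented for illustration, not the paper's actual sensor models or ANN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cross-sensitivity matrix for a 4-gas analyzer:
# entry [i, j] is the response of sensor i to a unit concentration of gas j.
S = np.array([
    [1.00, 0.15, 0.05, 0.02],
    [0.10, 1.00, 0.08, 0.03],
    [0.04, 0.12, 1.00, 0.06],
    [0.02, 0.05, 0.09, 1.00],
])

# Generate a synthetic training set from the sensor model.
concentrations = rng.uniform(0.0, 100.0, size=(2000, 4))
responses = concentrations @ S.T + rng.normal(0.0, 0.1, size=(2000, 4))

# Learn the joint inverse mapping from the synthetic pairs
# (a linear least-squares stand-in for the trained ANN).
W, *_ = np.linalg.lstsq(responses, concentrations, rcond=None)

# Separate determination of each gas in the presence of interferents.
c_true = np.array([30.0, 70.0, 10.0, 50.0])
c_est = (c_true @ S.T) @ W
```

Because all four sensor signals are processed jointly, the learned mapping undoes the cross-sensitivity that would corrupt any single-sensor reading.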

2005 ◽  
Vol 9 (4) ◽  
pp. 313-321 ◽  
Author(s):  
R. R. Shrestha ◽  
S. Theobald ◽  
F. Nestmann

Abstract. Artificial neural networks (ANNs) provide a quick and flexible means of developing flood flow simulation models. An important criterion for the wider applicability of ANNs is the ability to generalise to events outside the range of the training data sets. With respect to flood flow simulation, the ability to extrapolate beyond the range of calibrated data sets is of crucial importance. This study explores methods for improving the generalisation of ANNs using three different flood event data sets from the Neckar River in Germany. An ANN-based model is formulated to simulate flows at certain locations in the river reach, based on the flows at upstream locations. Network training data sets consist of time series of flows from observation stations. Simulated flows from a one-dimensional hydrodynamic numerical model are integrated for network training and validation at a river section where no measurements are available. Network structures with different activation functions are considered for improving generalisation. The training algorithm used backpropagation with the Levenberg-Marquardt approximation. The ability of the trained networks to extrapolate is assessed using flow data beyond the range of the training data sets. The results of this study indicate that an ANN in a suitable configuration can extend forecasting capability to a certain extent beyond the range of calibrated data sets.
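Why the choice of activation function limits extrapolation can be seen in a toy sketch. The upstream-downstream flow relation, the random-feature network with a least-squares readout, and all coefficients below are invented for illustration; the paper trains its networks with Levenberg-Marquardt backpropagation instead.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rating-curve-like relation: downstream flow from upstream flow.
def downstream(q_up):
    return 0.8 * q_up + 5.0

q_train = rng.uniform(10.0, 100.0, 500)   # the "calibrated" range
y_train = downstream(q_train)

# Random tanh hidden layer with a linear least-squares readout
# (a simple stand-in for a trained feed-forward ANN).
W_h = rng.normal(0.0, 0.05, (1, 20))
b_h = rng.normal(0.0, 1.0, 20)

def features(q):
    return np.tanh(np.atleast_2d(q).T @ W_h + b_h)

w_out, *_ = np.linalg.lstsq(features(q_train), y_train, rcond=None)

def predict(q):
    return features(q) @ w_out

# Compare the fit inside the training range with extrapolation beyond it:
# the bounded tanh units saturate, so predictions flatten out.
err_in = np.abs(predict(55.0) - downstream(55.0)).item()
err_out = np.abs(predict(200.0) - downstream(200.0)).item()
```

Inside the calibrated range the fit is close; well beyond it, the saturating hidden units can no longer track the growing flows, which mirrors the extrapolation limits the study assesses.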


2021 ◽  
Vol 25 (1) ◽  
pp. 138-161
Author(s):  
O. G. Bondar ◽  
E. O. Brezhneva ◽  
O. G. Dobroserdov ◽  
K. G. Andreev ◽  
N. V. Polyakov

Purpose of research: search for and analysis of existing models of gas-sensitive sensors; development of mathematical models of gas-sensitive sensors of various types (semiconductor, thermocatalytic, optical, electrochemical) for subsequent use in training artificial neural networks (ANNs); investigation of the main physicochemical patterns underlying the principles of sensor operation, considering the influence of environmental factors and cross-sensitivity on the sensor output signal; and comparison of simulation results with the actual characteristics of industrially produced sensors. The concept of creating the mathematical models is described; their parameterization, research, and assessment of adequacy are carried out.

Methods: Numerical methods, computer modeling methods, electrical circuit theory, the theory of chemisorption and heterogeneous catalysis, the Freundlich and Langmuir equations, the Bouguer-Lambert-Beer law, and the foundations of electrochemistry were used in creating the mathematical models. The standard deviation (SD) and relative error were calculated to assess the adequacy of the models.

Results: The concept of creating mathematical models of sensors based on physicochemical patterns is described. This concept allows the generation of data for training artificial neural networks, used in multi-component gas analyzers for joint information processing, to be automated. Models of semiconductor, thermocatalytic, optical, and electrochemical sensors were obtained and upgraded, considering the influence of additional factors on the sensor signal. Parameterization and assessment of the adequacy and extrapolation properties of the models were carried out using the graphical dependencies presented in the technical documentation of the sensors. The relative and RMS errors between real data and the simulation results for the basic parameters of the gas-sensitive sensors were determined. The standard error of reproduction of the main characteristics of the sensors did not exceed 0.5%.

Conclusion: Multivariable mathematical models of gas-sensitive sensors are synthesized, considering the influence of the main gas and external factors (pressure, temperature, humidity, cross-sensitivity) on the output signal and allowing training data to be generated for sensors of various types.
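Two of the physicochemical relations named in the Methods section can be sketched directly. The Langmuir isotherm gives the fractional surface coverage of an adsorbing gas, and the Bouguer-Lambert-Beer law gives the transmitted intensity through an absorbing gas cell; the constants below (adsorption equilibrium constant, absorptivity, path length) are illustrative values, not parameters from the paper's sensor models.

```python
import math

# Langmuir isotherm: fractional surface coverage vs. gas concentration c.
# K is a hypothetical adsorption equilibrium constant.
def langmuir_coverage(c, K=0.05):
    return K * c / (1.0 + K * c)

# Bouguer-Lambert-Beer law: transmitted intensity through a gas cell of
# path length L; eps (absorptivity) and L are illustrative values.
def transmitted_intensity(c, I0=1.0, eps=0.01, L=10.0):
    return I0 * math.exp(-eps * c * L)

# Coverage rises monotonically and saturates below 1 (full monolayer);
# transmitted intensity falls exponentially with concentration.
coverages = [langmuir_coverage(c) for c in (0.0, 10.0, 100.0, 1000.0)]
intensities = [transmitted_intensity(c) for c in (0.0, 1.0, 10.0)]
```

Feeding concentration sweeps through such closed-form models is what makes automated generation of training data practical: each synthetic sample is cheap, and environmental factors can be added as extra model inputs.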


2021 ◽  
pp. 1-17
Author(s):  
Luis Sa-Couto ◽  
Andreas Wichert

Abstract Convolutional neural networks (CNNs) evolved from Fukushima's neocognitron model, which is based on the ideas of Hubel and Wiesel about the early stages of the visual cortex. Unlike other branches of neocognitron-based models, the typical CNN is based on end-to-end supervised learning by backpropagation and removes the focus from built-in invariance mechanisms, using pooling not as a way to tolerate small shifts but as a regularization tool that decreases model complexity. These properties of end-to-end supervision and flexibility of structure allow the typical CNN to become highly tuned to the training data, leading to extremely high accuracies on typical visual pattern recognition data sets. However, in this work, we hypothesize that there is a flip side to this capability, a hidden overfitting. More concretely, a supervised, backpropagation-based CNN will outperform a neocognitron/map transformation cascade (MTCCXC) when trained and tested inside the same data set. Yet if we take both trained models and test them on the same task but on another data set (without retraining), the overfitting appears. Other neocognitron descendants, like the What-Where model, go in a different direction. In these models, learning remains unsupervised, but more structure is added to capture invariance to typical changes. Knowing that, we further hypothesize that if we repeat the same experiments with this model, the lack of supervision may make it worse than the typical CNN inside the same data set, but the added structure will make it generalize even better to another one. To put our hypothesis to the test, we choose the simple task of handwritten digit classification and take two well-known data sets of it: MNIST and ETL-1. To make the two data sets as similar as possible, we experiment with several types of preprocessing. Regardless of the type in question, the results align exactly with expectation.
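The evaluation protocol behind the hypothesis, train on one data set, then test both in-set and on a second data set without retraining, can be sketched with a toy classifier. The two synthetic "data sets" below (Gaussian blobs with a systematic shift standing in for MNIST-vs-ETL-1 style differences) and the nearest-centroid classifier are invented stand-ins, not the paper's models or data.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_dataset(shift, n=200):
    """Two-class toy data; `shift` mimics a systematic difference
    between data sets (e.g. different digit collection styles)."""
    x0 = rng.normal(0.0 + shift, 0.5, (n, 2))
    x1 = rng.normal(3.0 + shift, 0.5, (n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

def fit_centroids(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((np.argmin(dists, axis=1) == y).mean())

X_a, y_a = make_dataset(shift=0.0)   # "training" data set
X_b, y_b = make_dataset(shift=2.0)   # same task, different data set
centroids = fit_centroids(X_a, y_a)

acc_same = accuracy(centroids, X_a, y_a)    # trained and tested in-set
acc_cross = accuracy(centroids, X_b, y_b)   # no retraining: hidden overfit
```

The in-set accuracy is near perfect while the cross-set accuracy collapses, which is exactly the kind of gap the paper's cross-data-set experiments are designed to expose.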


Author(s):  
Pramit Ghosh ◽  
Debotosh Bhattacharjee ◽  
Mita Nasipuri ◽  
Dipak Kumar Basu

This chapter presents low-cost solutions for the development of intelligent biomedical devices that not only help people live better but also assist physicians in diagnosis. Two such devices, helpful for the prevention and diagnosis of diseases, are discussed here. Statistical analysis reveals that cold and fever are the main culprits for the loss of man-hours throughout the world, and early pathological investigation can reduce the vulnerability to disease and the sick period. To reduce the cold and fever problem, an adaptive and intelligent household cooling system controller is designed. It is able to control the speed of a household cooling fan or an air conditioner based on real-time data collected from the environment, namely room temperature, humidity, and the time for which the system is active. To control the speed in an adaptive and intelligent manner, an associative memory neural network (Kramer) has been used. This embedded system is able to learn from a training set; i.e., the user can teach the system about his/her preferences through training data sets. When the system starts up, it allows the fan to run freely at full speed, and after a certain interval, it takes the environmental parameters of room temperature, humidity, and time as inputs. The system then makes a decision and controls the speed of the fan.
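The user-teachable control loop can be sketched as a simple associative recall: the system stores (temperature, humidity, active-time) patterns paired with the fan speeds the user taught it, and recalls the speed of the closest stored pattern. The stored associations and the nearest-pattern rule below are invented stand-ins for the chapter's Kramer associative memory network.

```python
import math

# Hypothetical user-taught associations:
# (temperature degC, relative humidity %, hours active) -> fan speed (0-10).
training_set = [
    ((22.0, 40.0, 0.5), 2),
    ((26.0, 55.0, 1.0), 5),
    ((30.0, 70.0, 2.0), 8),
    ((34.0, 85.0, 3.0), 10),
]

def recall_speed(temp, humidity, hours):
    """Nearest stored pattern wins -- a minimal stand-in for the
    hetero-associative memory recall described in the chapter."""
    def dist(key):
        t, h, hr = key
        # Scale each input to a roughly comparable range before comparing.
        return math.sqrt(((temp - t) / 10.0) ** 2
                         + ((humidity - h) / 40.0) ** 2
                         + (hours - hr) ** 2)
    _, speed = min(training_set, key=lambda item: dist(item[0]))
    return speed

speed = recall_speed(29.0, 68.0, 1.8)   # close to the (30, 70, 2) pattern
```

A warm, humid evening close to a stored pattern recalls that pattern's speed; adding more taught pairs refines the behaviour without reprogramming the controller.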


2021 ◽  
Vol 10 (1) ◽  
pp. 19
Author(s):  
Yosra Bahri ◽  
Sebastian A. Schober ◽  
Cecilia Carbonelli ◽  
Robert Wille

Chemiresistive gas sensors are a crucial tool for monitoring gases on a large scale. For the estimation of gas concentrations based on the signals provided by such sensors, pattern recognition tools such as neural networks are widely used after training them on data measured by sample sensors and reference devices. However, in the production process of low-cost sensor technologies, small variations in their physical properties can occur, which can alter the measuring conditions of the devices and make them less comparable to the sample sensors, leading to less well-adapted algorithms. In this work, we study the influence of such variations, focusing in particular on changes in the operating and heating temperature of graphene-based gas sensors. To this end, we trained machine learning models on synthetic data provided by a sensor simulation model. By varying the operating temperatures between −15% and +15% of the original values, we observed a steady decline in algorithm performance once the temperature deviation exceeded 10%. Furthermore, we were able to substantiate the effectiveness of training the neural networks with several temperature parameters by conducting a second, comparative experiment. A well-balanced training set was shown to improve the prediction accuracy metrics significantly in the scope of our measurement setup. Overall, our results provide insights into the influence of different operating temperatures on algorithm performance and into how the choice of training data can increase the robustness of the prediction algorithms.
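The benefit of a temperature-balanced training set can be illustrated with a toy sensor model: a model trained only at the nominal temperature cannot learn the temperature dependence at all, while one trained across several temperatures can compensate for it. The sensor equation, the temperature-sensitivity coefficient, and the linear least-squares models below are invented for illustration, not the paper's simulation model or neural networks.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sensor model: the signal depends on concentration C and on
# the relative deviation dT of the heater temperature from its nominal value.
a = 0.5  # invented temperature-sensitivity coefficient

def sensor_signal(C, dT):
    return C * (1.0 + a * dT)

def make_set(dTs, n=300):
    C = rng.uniform(1.0, 10.0, n)
    dT = rng.choice(dTs, n)
    s = sensor_signal(C, dT)
    X = np.column_stack([s, s * dT])   # features: signal, signal x dT
    return X, C

# Model trained at a single nominal temperature vs. a balanced set
# covering -10%, nominal, and +10% heater temperatures.
X_single, C_single = make_set([0.0])
X_multi, C_multi = make_set([-0.1, 0.0, 0.1])

w_single, *_ = np.linalg.lstsq(X_single, C_single, rcond=None)
w_multi, *_ = np.linalg.lstsq(X_multi, C_multi, rcond=None)

# Evaluate both on a sensor running 10% hot.
C_test = rng.uniform(1.0, 10.0, 300)
s_test = sensor_signal(C_test, 0.1)
X_test = np.column_stack([s_test, s_test * 0.1])
err_single = np.abs(X_test @ w_single - C_test).mean()
err_multi = np.abs(X_test @ w_multi - C_test).mean()
```

In the single-temperature training set the temperature feature is constant, so its coefficient cannot be identified and the model inherits the full temperature error at test time; the balanced set lets the fit absorb the dependence.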


2020 ◽  
Vol 12 (20) ◽  
pp. 3358
Author(s):  
Vasileios Syrris ◽  
Ondrej Pesek ◽  
Pierre Soille

Automatic supervised classification with complex modelling such as deep neural networks requires the availability of representative training data sets. While there exists a plethora of data sets that can be used for this purpose, they are usually very heterogeneous and not interoperable. In this context, the present work has a twofold objective: (i) to describe procedures of open-source training data management, integration, and data retrieval, and (ii) to demonstrate the practical use of varying source training data for remote sensing image classification. For the former, we propose SatImNet, a collection of open training data, structured and harmonized according to specific rules. For the latter, two modelling approaches based on convolutional neural networks have been designed and configured to deal with satellite image classification and segmentation.


Author(s):  
Frank Padberg

The author uses neural networks to estimate how many defects are hidden in a software document. Inputs for the models are metrics that are collected when applying a standard quality assurance technique to the document, a software inspection. For inspections, the empirical data sets typically are small. The author identifies two key ingredients for a successful application of neural networks to small data sets: adapting the size, complexity, and input dimension of the networks to the amount of information available for training; and using Bayesian techniques instead of cross-validation for determining model parameters and selecting the final model. For inspections, the machine learning approach is highly successful and outperforms the previously existing defect estimation methods in software engineering by a factor of 4 in accuracy on the standard benchmark. The author’s approach is readily applicable in other contexts that are subject to small training data sets.
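The role of a Bayesian prior on small data sets can be sketched in the linear case: with few observations and several input metrics, an unregularized fit chases noise, while the posterior mean under a Gaussian prior on the weights (the MAP, equivalently ridge, estimate) shrinks the solution toward zero. The data set sizes and the prior precision below are invented for illustration; they stand in for, and are much simpler than, the Bayesian model selection the author describes.

```python
import numpy as np

rng = np.random.default_rng(4)

# Tiny inspection-style data set: few observations, several metrics.
n, p = 12, 6
X = rng.normal(size=(n, p))
w_true = rng.normal(scale=0.5, size=p)
y = X @ w_true + rng.normal(scale=1.0, size=n)

# Ordinary least squares: no prior, free to chase the noise.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Posterior mean under a Gaussian prior w ~ N(0, (1/lam) I): the MAP /
# ridge estimate. lam is an invented prior precision; in a full Bayesian
# treatment it would itself be chosen from the data via the evidence.
lam = 2.0
w_map = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

The prior shrinks every component of the solution, which is exactly the protection against overfitting that cross-validation cannot reliably deliver when the data set is this small.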


2019 ◽  
Vol 17 (2) ◽  
pp. 6-14
Author(s):  
V. N. Gridin ◽  
V. V. Doenin ◽  
V. V. Panishchev ◽  
I. S. Razzhivaykin

In today’s world, many processes and events depend on forecasting. As mathematical models develop, an increasing number of factors influencing the final result of the forecast are taken into account, which in turn leads to the use of neural networks. But training a neural network requires source data sets, which are often insufficient or may not exist at all. The article describes a method of obtaining information as close to reality as possible. The proposed approach is to generate input data using simulation models of an object. The solution of the problem of generating data sets and training a neural network is shown using the example of a typical marshalling railway station and a simulation of the operation of a shunting hump. The considered examples confirmed the validity of the proposed methodological approach to generating source data for neural networks using simulation models of a real object based on a digital mathematical model. This makes it possible to obtain a simulation model of the movement of transport objects that is reliable for forecasting transport processes and creating relevant control algorithms.
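The "simulate, then train" pipeline can be sketched with a toy hump-yard model: a simulation produces (input features, outcome) pairs that would otherwise have to be measured at a real station, and those pairs become the neural network's training set. The processing-time formula, its coefficients, and the weather factors below are all invented for illustration, not the article's simulation model.

```python
import random

random.seed(5)

# Toy simulation of a shunting hump: cuts of cars roll down one by one;
# processing time grows with cut length and worsens in bad weather.
def simulate_cut(n_cars, weather_factor):
    base = 0.6 * n_cars                       # minutes to hump the cut
    jitter = random.gauss(0.0, 0.3)           # stochastic yard conditions
    return max(0.5, base * weather_factor + jitter)

def generate_training_data(n_samples=1000):
    """Run the simulation repeatedly to build an ANN training set:
    each row pairs input features with the simulated outcome."""
    rows = []
    for _ in range(n_samples):
        n_cars = random.randint(1, 20)
        weather = random.choice([1.0, 1.1, 1.3])  # clear / rain / snow
        t = simulate_cut(n_cars, weather)
        rows.append(((n_cars, weather), t))
    return rows

data = generate_training_data()
```

Because the simulator is cheap to run, the training set can cover rare operating conditions (long cuts in snow, say) that real station logs might contain only a handful of times.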

