Robustness in Neural Networks

Author(s):  
Cesare Alippi ◽  
Manuel Roveri ◽  
Giovanni Vanini

The robustness analysis of neural networks aims at evaluating the influence on accuracy induced by perturbations affecting the computational flow; as such, it allows the designer to estimate the resilience of the neural model with respect to perturbations. In the literature, the robustness analysis of neural networks generally focuses on the effects of perturbations affecting biases and weights. The study of the network’s parameters is relevant both from the theoretical and the application point of view, since free parameters characterize the “knowledge space” of the neural model and, hence, its intrinsic functionality. A robustness analysis must also be taken into account when implementing a neural network (or the intelligent computational system into which a neural network is embedded) in a physical device or in intelligent wireless sensor networks. In these contexts, perturbations affecting the weights of a neural network abstract uncertainties such as finite-precision representations, fluctuations of the parameters representing the weights in analog solutions (e.g., associated with the production process of a physical component), ageing effects, or more complex and subtle uncertainties in mixed implementations.
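The weight-perturbation analysis described above can be illustrated with a minimal sketch (an illustrative toy, not the authors' procedure): a single-neuron classifier whose weights are perturbed multiplicatively — abstracting, e.g., finite-precision or analog parameter drift — and whose accuracy is averaged over random perturbation draws. All names and data here are hypothetical.

```python
import random

# Hypothetical single-neuron classifier; weights w and bias b are illustrative.
def predict(w, b, x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

def accuracy(w, b, data):
    return sum(predict(w, b, x) == y for x, y in data) / len(data)

def perturbed_accuracy(w, b, data, rel_noise, trials=200, seed=0):
    """Average accuracy when each parameter is perturbed multiplicatively,
    abstracting finite-precision effects or analog parameter fluctuations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        wp = [wi * (1 + rng.uniform(-rel_noise, rel_noise)) for wi in w]
        bp = b * (1 + rng.uniform(-rel_noise, rel_noise))
        total += accuracy(wp, bp, data)
    return total / trials

# Toy linearly separable data: label is 1 when x0 + x1 > 1.
data = [((x0, x1), 1 if x0 + x1 > 1 else 0)
        for x0 in (0.0, 0.5, 1.0) for x1 in (0.0, 0.5, 1.0)]
w, b = [1.0, 1.0], -1.0  # s > 0  <=>  x0 + x1 > 1
nominal = accuracy(w, b, data)
robust = perturbed_accuracy(w, b, data, rel_noise=0.3)
```

The gap between `nominal` and `robust` is exactly the "influence on accuracy induced by perturbations" that the analysis quantifies.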

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 745 ◽  
Author(s):  
Malathy Emperuman ◽  
Srimathi Chandrasekaran

Sensor devices in wireless sensor networks are vulnerable to faults during their operation in unmonitored and hazardous environments. Though various methods have been proposed by researchers to detect sensor faults, only very few research studies have reported on capturing the dynamics of the inherent states in sensor data during fault occurrence. The continuous density hidden Markov model (CDHMM) is proposed in this research to determine the dynamics of the state transitions due to fault occurrence, while neural networks are utilized to classify the faults based on the state transition probability density generated by the CDHMM. Therefore, this paper focuses on fault detection and classification using the hybridization of the CDHMM and various neural networks (NNs), namely the learning vector quantization, probabilistic neural network, adaptive probabilistic neural network, and radial basis function. The hybrid models of each NN are used for the classification of sensor faults, namely bias, drift, random, and spike. The proposed methods are evaluated using four performance metrics, which include detection accuracy, false positive rate, F1-score, and the Matthews correlation coefficient. The simulation results show that the learning vector quantization NN classifier achieves a higher detection accuracy rate than the other classifiers. In addition, an ensemble NN framework based on the hybrid CDHMM classifier is built with a majority voting scheme for decision making and classification. The results of the hybrid CDHMM ensemble classifiers clearly indicate the efficacy of the proposed scheme in capturing the dynamics of change of states, which is the vital aspect in determining rapidly evolving instant faults that occur in wireless sensor networks.
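The majority-voting decision layer of the ensemble can be sketched as follows (a minimal stand-in: the base classifiers and their votes are hypothetical placeholders for the hybrid CDHMM+NN models, which the abstract does not specify in code):

```python
from collections import Counter

# Fault classes named in the paper.
FAULTS = ["bias", "drift", "random", "spike"]

def majority_vote(predictions):
    """Ensemble decision: the fault class predicted by most base classifiers.
    Ties resolve to the label that first reached the top count."""
    return Counter(predictions).most_common(1)[0][0]

# Illustrative votes from four hybrid classifiers on one sensor reading:
# three of the four agree on a "drift" fault.
votes = ["drift", "drift", "spike", "drift"]
decision = majority_vote(votes)
```

In the paper's framework each vote would come from one hybrid CDHMM+NN classifier applied to the state-transition probability density of the sensor data.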


Author(s):  
Cesare Alippi ◽  
Giovanni Vanini

A robustness analysis for neural networks, namely the evaluation of the effects induced by perturbations affecting the network weights, is a relevant theoretical aspect since weights characterise the “knowledge space” of the neural model and, hence, its inner nature.


2022 ◽  
Author(s):  
Md. Sarkar Hasanuzzaman

Abstract Hyperspectral imaging is a versatile and powerful technology for gathering geo-data. Planes and satellites equipped with hyperspectral cameras are currently the leading contenders for large-scale imaging projects. Aiming at the shortcomings of traditional methods for sparse representation of multi-spectral images, this paper proposes a wireless sensor network (WSN)-based single-hyperspectral-image super-resolution method built on deep residual convolutional neural networks. We propose a different strategy that involves merging cheaper multispectral sensors to achieve hyperspectral-like spectral resolution while maintaining the WSN's spatial resolution. The method studies and mines the nonlinear relationship between low-resolution and high-resolution remote sensing images, constructs a deep residual convolutional neural network, connects multiple residual blocks in series, and removes some unnecessary modules. For this purpose, a decision support system is used that provides the outcome to the next layer. Finally, the paper fully explores the similarities between natural images and hyperspectral images, uses natural image samples to train the convolutional neural networks, and further applies transfer learning to carry the trained network model over to the super-resolution problem of high-resolution remote sensing images, thereby addressing the lack of training samples. A comparison between different algorithms for processing data on datasets collected in situ and via remote sensing is used to evaluate the proposed approach. The experimental results show that the method performs well and obtains better super-resolution effects.
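The residual idea at the heart of such networks — blocks connected in series, each adding its input back to its output, y = x + F(x) — can be shown in miniature. The sketch below uses a hypothetical 1-D "convolution" as a stand-in for the paper's 2-D convolutional layers; it is not the paper's architecture.

```python
def conv1d(x, kernel):
    """'Same'-length 1-D convolution with zero padding (odd kernel size)."""
    k = len(kernel)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * xp[i + j] for j in range(k)) for i in range(len(x))]

def relu(x):
    return [max(0.0, v) for v in x]

def residual_block(x, kernel):
    """y = x + F(x): the skip connection preserves a clean gradient path,
    which is what lets many blocks be stacked in series."""
    return [xi + fi for xi, fi in zip(x, relu(conv1d(x, kernel)))]

def stack(x, kernels):
    for k in kernels:
        x = residual_block(x, k)
    return x

signal = [0.0, 1.0, 2.0, 1.0, 0.0]
# With identity kernels, F(x) = relu(x) = x here, so each block doubles x.
out = stack(signal, [[0.0, 1.0, 0.0]] * 3)
```

The same skip-connection pattern, with learned 2-D kernels, is what allows the deep residual network to model the nonlinear low-to-high-resolution mapping without vanishing gradients.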


Author(s):  
C. Alippi

This chapter presents a general methodology for evaluating the loss in performance of a generic neural network once its weights are affected by perturbations. Since weights represent the “knowledge space” of the neural model, the robustness analysis can be used to study the weights/performance relationship. The perturbation analysis, which is closely related to sensitivity issues, relaxes all assumptions made in the related literature, such as the small perturbation hypothesis, specific requirements on the distribution of perturbations and neural variables, the number of hidden units and a given neural structure. The methodology, based on Randomized Algorithms, allows reformulating the computationally intractable problem of robustness/sensitivity analysis in a probabilistic framework characterised by a polynomial time solution in the accuracy and confidence degrees.
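The probabilistic reformulation rests on sampling random perturbations and bounding the estimation error. A standard additive Chernoff/Hoeffding bound — consistent with the randomized-algorithms framework the chapter names, though the exact formula is assumed here, not quoted from it — gives a sample size polynomial in the inverse accuracy and logarithmic in the inverse confidence:

```python
import math

def chernoff_samples(eps, delta):
    """Number of i.i.d. perturbation samples sufficient to estimate a
    probability (e.g., P(performance loss <= bound)) within accuracy eps
    and confidence 1 - delta:  N >= ln(2/delta) / (2 * eps^2)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps * eps))

# Accuracy 0.05, confidence 99%: a fixed, tractable number of samples,
# independent of network size or perturbation distribution.
n = chernoff_samples(eps=0.05, delta=0.01)
```

This is the sense in which the otherwise intractable robustness/sensitivity problem becomes solvable in polynomial time in the accuracy and confidence degrees.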


Author(s):  
A. E. Khaytbaev ◽  
A. M. Eshmuradov

The purpose of the article is to study the possibilities of improving the efficiency of sensor network management techniques using the neural network method. The presented model of the wireless sensor network takes the changing environment into account. The article also tests the hypothesis that distributed computing can be organized in wireless sensor networks. To achieve this goal, a number of tasks are addressed: a review and analysis of existing methods for managing WSN nodes; the definition of the simulation model components and their properties, and of the neural networks and their features; and testing of the results of using the developed method. The article explores the major historical insights of the application of neural network technologies in wireless sensor networks in the following practical fields: engineering, farming, utility communication networks, manufacturing, emergency notification services, oil and gas wells, forest fire prevention equipment systems, etc. The relevant applications for the continuous monitoring of security and safety measures are critically analyzed in the context of the relevancy of specific decisions to be implemented within the system architecture. The study is focused on the modernization of control and management methods for wireless sensor networks, considering the environmental factors to be captured by sensor systems for data maintenance, including information on temperature, humidity, motion, radiation, etc. The article contains a comparative analysis of the updated versions of node control protocols, the components of the simulation model, and the control method based on neural networks, identified and tested within practical organizational settings.


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3445 ◽  
Author(s):  
Jianlin Liu ◽  
Fenxiong Chen ◽  
Jun Yan ◽  
Dianhong Wang

Data compression is a useful method to reduce the communication energy consumption in wireless sensor networks (WSNs). Most existing neural network compression methods focus on improving the compression and reconstruction accuracy (i.e., increasing parameters and layers), ignoring the computation consumption of the network and its application ability in WSNs. In contrast, we pay attention to the computation consumption and application of neural networks, and propose an extremely simple and efficient neural network data compression model. The model combines the feature extraction advantages of the Convolutional Neural Network (CNN) with the data generation ability of the Variational Autoencoder (VAE) and Restricted Boltzmann Machine (RBM); we call it CBN-VAE. In particular, we propose a new efficient convolutional structure, the Downsampling-Convolutional RBM (D-CRBM), and use it to replace the standard convolution to reduce parameters and computational consumption. Specifically, we use the VAE model composed of multiple D-CRBM layers to learn the hidden mathematical features of the sensing data, and use these features to compress and reconstruct the sensing data. We test the performance of the model using various real-world WSN datasets. Under the same network size, compared with the CNN, the parameters of the CBN-VAE model are reduced by 73.88% and the floating-point operations (FLOPs) are reduced by 96.43%, with negligible accuracy loss. Compared with traditional neural networks, the proposed model is more suitable for application on nodes in WSNs. For the Intel Lab temperature data, the average Signal-to-Noise Ratio (SNR) value of the model can reach 32.51 dB, and the average reconstruction error value is 0.0678 °C. The node communication energy consumption can be reduced by 95.83%. Compared with traditional compression methods, the proposed model has better compression and reconstruction accuracy.
At the same time, the experimental results show that the model has good fault detection performance and anti-noise ability. When reconstructing data, the model can effectively avoid fault and noise data.
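The SNR figure reported for the reconstructed temperature data can be computed in the usual way. The sketch below uses the standard definition (assumed here — the abstract does not spell out its formula) on illustrative temperature values, not the Intel Lab dataset:

```python
import math

def snr_db(original, reconstructed):
    """Reconstruction SNR in decibels:
    10 * log10( sum(x^2) / sum((x - x_hat)^2) )."""
    signal = sum(x * x for x in original)
    noise = sum((x - r) ** 2 for x, r in zip(original, reconstructed))
    return 10.0 * math.log10(signal / noise)

# Illustrative readings and a reconstruction with a uniform 0.05 °C error.
temps = [20.0, 20.5, 21.0, 20.8, 20.3]
recon = [t + 0.05 for t in temps]
val = snr_db(temps, recon)
```

A higher SNR means the decompressed signal tracks the original more closely, which is the quantity the paper's 32.51 dB figure summarizes.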


2021 ◽  
Vol 26 (1) ◽  
pp. 32-41
Author(s):  
Bodyanskiy Y ◽  
Antonenko T

Modern approaches in deep neural networks have a number of issues related to the learning process and computational costs. This article considers an architecture grounded on an alternative approach to the basic unit of the neural network. This approach achieves optimization in the calculations and offers an alternative way to address the problems of the vanishing and exploding gradient. The main topic of the article is the use of a deep stacked neo-fuzzy system, which employs a generalized neo-fuzzy neuron to optimize the learning process. This approach is non-standard from a theoretical point of view, so the paper presents the necessary mathematical calculations and describes all the intricacies of using this architecture from a practical point of view. The network learning process is fully disclosed, and all calculations necessary for applying the backpropagation algorithm to network training are derived. A feature of the network is the rapid calculation of the derivative of the neurons' activation functions, achieved through the use of fuzzy membership functions. The paper shows that the derivative of such a function is a constant, which supports the claim of an increased optimization rate compared with neural networks whose neurons use more common activation functions (ReLU, sigmoid). The paper highlights the main points that can be improved in further theoretical developments on this topic; in general, these issues are related to the calculation of the activation function. The proposed methods cope with these points and allow approximation using the network, and the authors already have theoretical justifications for improving the speed and approximation properties of the network. The results of comparing the proposed network with standard neural network architectures are shown.
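The key computational claim — that fuzzy membership functions make the derivative a constant — can be illustrated with a one-input neo-fuzzy neuron over triangular membership functions. This is an illustrative reduction, not the paper's generalized neuron: the centers, weights, and input are hypothetical.

```python
def triangular_memberships(x, centers):
    """Uniform triangular partition: each mu_j is piecewise linear, so its
    derivative is piecewise constant (+/- 1/step or 0) -- the property that
    makes backpropagation through the neuron cheap. At most two mu_j are
    nonzero at any x inside the grid, and they sum to 1."""
    step = centers[1] - centers[0]
    mus = [0.0] * len(centers)
    for j, c in enumerate(centers):
        d = abs(x - c)
        if d < step:
            mus[j] = 1.0 - d / step
    return mus

def neo_fuzzy_neuron(x, centers, weights):
    """y = sum_j w_j * mu_j(x): a weighted mix of membership activations."""
    return sum(w * m for w, m in
               zip(weights, triangular_memberships(x, centers)))

centers = [0.0, 0.5, 1.0]
weights = [0.0, 1.0, 4.0]  # illustrative synaptic weights
y = neo_fuzzy_neuron(0.75, centers, weights)  # mu = [0, 0.5, 0.5]
```

Between grid points the output is linear in x and linear in the weights, so both the input gradient and the weight gradients needed by backpropagation are constants on each segment — unlike sigmoid or even ReLU compositions, nothing nonlinear has to be re-evaluated.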

