APPLICATION OF NEURAL NETWORKS FOR SELECTING TOOLS FOR PENETRATION TESTING OF WEB APPLICATIONS

Author(s):  
Артём Григорьевич Тецкий

Penetration testing is conducted to detect and subsequently fix security problems in a Web application. During testing, tools are actively used that allow the tester to avoid performing a large number of monotonous operations. The problem with selecting tools is that several similar tools exist for testing the same class of security problems, and it is not known which one is most suitable for a particular case. This problem most often affects novice testers; more experienced testers use their own sets of tools to find specific security problems. Such kits are formed in the course of work, as each tester finds the tools that suit them best. The goal of the paper is to create a method that helps to choose a tool for a particular case, based on the experience of experts in security testing of Web applications. To achieve this goal, it is proposed to create a Web service that uses a neural network to solve the selection problem. Data for training the neural network, in the form of a matrix of tools and their criteria, are provided by experts in the field of Web application security testing. To find the most suitable tool, a vector of requirements should be formed, i.e., the user of the service must specify the search criteria. As a result of the search, the several tools most suitable for the request are shown to the user. The user can also save the result of their own choice if it differs from the proposed one; in this way, the set of training examples can be extended. It is advisable to have two neural networks: the first is trained only on data from experts; the second is trained on data from experts and on data from users who have saved their choices. The use of neural networks makes it possible to map several input data sets to a single output data set. The described method can be used for software selection in various applications.
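
A minimal sketch of the selection idea, assuming a scikit-learn MLP stands in for the paper's neural network; the tool names and criteria columns below are invented for illustration and are not taken from the article.

```python
# Sketch: map a binary vector of requirement criteria to a recommended tool.
from sklearn.neural_network import MLPClassifier
import numpy as np

# Rows: expert-provided examples; columns: hypothetical criteria such as
# "SQL injection", "XSS", "CLI support", "free license".
X = np.array([
    [1, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
])
y = ["sqlmap", "XSSer", "Burp Suite", "sqlmap"]  # expert-chosen tools

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# A user's requirement vector: SQL injection checks, CLI support, free license.
request = np.array([[1, 0, 1, 1]])
probs = model.predict_proba(request)[0]
# Show the few most suitable tools, ranked by predicted probability.
ranking = sorted(zip(model.classes_, probs), key=lambda p: -p[1])
print(ranking[:3])
```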

2020 ◽  
Vol 22 ◽  
pp. 18-22
Author(s):  
M.-V. Lyba ◽  
L. Uhryn

With the development of information technology, humanity is delving ever deeper into the world of gadgets, cloud technology, virtual reality, and artificial intelligence. Through web applications we receive and distribute information, including confidential information. During the pandemic, most people switched to online work and study. As a result, much of the data stored on personal computers, company servers, and cloud storage needs protection from cyberattacks. The problem of cybersecurity is extremely relevant at the moment, given the hacking of cryptocurrencies, ministry websites, bitcoin wallets, and social network accounts. High-quality testing of developed applications is necessary to detect cyber threats and to ensure reliable protection of information. The article states that application testing checks for vulnerabilities that could arise from incorrect system configuration or from shortcomings in software products. The use of innovation is necessary to improve quality. Modern realities have become a challenge for the development of cybersecurity products; improving technology requires modern companies to update their IT systems and conduct regular security audits. The research is devoted to the analysis of modern OWASP testing tools that contribute to data security, with a view to their further use. The Open Web Application Security Project (OWASP) is an open security project. The research identified a list of the most dangerous attack vectors against Web applications; in particular, OWASP ZAP analyzes sent and received data and performs baseline security scanning of the system, while MSTG covers security testing of mobile applications for iOS and Android devices. The practical result of the work is the testing of a specially developed web application and the identification of vulnerabilities of different levels of criticality.
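
As a concrete illustration of the kind of automated scanning discussed above, here is a hedged sketch of driving OWASP ZAP from Python. It assumes a ZAP daemon already running on localhost:8080 and the python-owasp-zap-v2.4 client; the call names follow ZAP's commonly published examples rather than anything specific to this article.

```python
# Spider the target, run an active scan, then list the alerts by risk level.
import time
from zapv2 import ZAPv2

target = "http://localhost:3000"          # the web application under test
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

scan_id = zap.spider.scan(target)          # crawl the application first
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(target)           # then run the active scan
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

# Report the alerts ZAP raised for this target.
for alert in zap.core.alerts(baseurl=target):
    print(alert["risk"], alert["alert"], alert["url"])
```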


2020 ◽  
Vol 6 ◽  
Author(s):  
Jaime de Miguel Rodríguez ◽  
Maria Eugenia Villafañe ◽  
Luka Piškorec ◽  
Fernando Sancho Caparrini

Abstract This work presents a methodology for the generation of novel 3D objects resembling wireframes of building types. These result from the reconstruction of interpolated locations within the learnt distribution of variational autoencoders (VAEs), a deep generative machine learning model based on neural networks. The data set used features a scheme for geometry representation based on a ‘connectivity map’ that is especially suited to express the wireframe objects that compose it. Additionally, the input samples are generated through ‘parametric augmentation’, a strategy proposed in this study that creates coherent variations among data by enabling a set of parameters to alter representative features on a given building type. In the experiments that are described in this paper, more than 150 k input samples belonging to two building types have been processed during the training of a VAE model. The main contribution of this paper has been to explore parametric augmentation for the generation of large data sets of 3D geometries, showcasing its problems and limitations in the context of neural networks and VAEs. Results show that the generation of interpolated hybrid geometries is a challenging task. Despite the difficulty of the endeavour, promising advances are presented.
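
For readers unfamiliar with VAEs, the following minimal PyTorch sketch shows the encode-sample-decode loop and the latent interpolation that yields "hybrid" geometries; the flat 1024-dimensional input standing in for a connectivity map and all layer sizes are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class WireframeVAE(nn.Module):
    def __init__(self, input_dim=1024, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decoder(z), mu, logvar

# Interpolating between two encoded samples gives the hybrid reconstructions
# discussed in the abstract: decode a point between their latent codes.
model = WireframeVAE()
a, b = torch.rand(1, 1024), torch.rand(1, 1024)
za, zb = model.to_mu(model.encoder(a)), model.to_mu(model.encoder(b))
hybrid = model.decoder(0.5 * (za + zb))
```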


2021 ◽  
Vol 4 (1) ◽  
pp. 251524592092800
Author(s):  
Erin M. Buchanan ◽  
Sarah E. Crain ◽  
Ari L. Cunningham ◽  
Hannah R. Johnson ◽  
Hannah Stash ◽  
...  

As researchers embrace open and transparent data sharing, they will need to provide information about their data that effectively helps others understand their data sets’ contents. Without proper documentation, data stored in online repositories such as OSF will often be rendered unfindable and unreadable by other researchers and indexing search engines. Data dictionaries and codebooks provide a wealth of information about variables, data collection, and other important facets of a data set. This information, called metadata, provides key insights into how the data might be further used in research and facilitates search-engine indexing to reach a broader audience of interested parties. This Tutorial first explains terminology and standards relevant to data dictionaries and codebooks. Accompanying information on OSF presents a guided workflow of the entire process from source data (e.g., survey answers on Qualtrics) to an openly shared data set accompanied by a data dictionary or codebook that follows an agreed-upon standard. Finally, we discuss freely available Web applications to assist this process of ensuring that psychology data are findable, accessible, interoperable, and reusable.
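
A minimal sketch of how such a data dictionary might be generated programmatically, assuming pandas and illustrative variable names; real codebooks should follow one of the agreed-upon standards the Tutorial describes.

```python
import pandas as pd

df = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "rt_ms": [532.1, 610.4, 489.9],
    "condition": ["control", "treatment", "control"],
})

descriptions = {           # hand-written variable descriptions
    "participant_id": "Anonymous participant identifier",
    "rt_ms": "Response time in milliseconds",
    "condition": "Experimental condition label",
}

codebook = pd.DataFrame({
    "variable": df.columns,
    "dtype": [str(t) for t in df.dtypes],
    "n_missing": df.isna().sum().values,
    "description": [descriptions.get(c, "") for c in df.columns],
})
codebook.to_csv("codebook.csv", index=False)  # share alongside the data set
```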


2018 ◽  
Vol 30 (1) ◽  
pp. 116-128 ◽  
Author(s):  
Stephanie M. Smith ◽  
Ian Krajbich

When making decisions, people tend to choose the option they have looked at more. An unanswered question is how attention influences the choice process: whether it amplifies the subjective value of the looked-at option or instead adds a constant, value-independent bias. To address this, we examined choice data from six eye-tracking studies (Ns = 39, 44, 44, 36, 20, and 45, respectively) to characterize the interaction between value and gaze in the choice process. We found that the summed values of the options influenced response times in every data set and the gaze-choice correlation in most data sets, in line with an amplifying role of attention in the choice process. Our results suggest that this amplifying effect is more pronounced in tasks using large sets of familiar stimuli, compared with tasks using small sets of learned stimuli.
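
A conceptual sketch of how an amplifying (multiplicative) versus additive role of gaze can be tested, using simulated placeholder data and a logistic regression with a value-by-gaze interaction; this is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
value_diff = rng.normal(size=n)                 # value(left) - value(right)
gaze_diff = rng.normal(size=n)                  # gaze-time difference (s)
logit = 1.5 * value_diff + 0.8 * gaze_diff + 0.6 * value_diff * gaze_diff
choice_left = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame(dict(choice_left=choice_left,
                       value_diff=value_diff, gaze_diff=gaze_diff))
# A positive value_diff:gaze_diff coefficient is the signature of an
# amplifying (multiplicative) rather than additive role of attention.
fit = smf.logit("choice_left ~ value_diff * gaze_diff", data=df).fit(disp=0)
print(fit.params)
```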


Author(s):  
Tu Renwei ◽  
Zhu Zhongjie ◽  
Bai Yongqiang ◽  
Gao Ming ◽  
Ge Zhifeng

Unmanned aerial vehicle (UAV) inspection has become one of the main methods for transmission line inspection, but it still suffers from shortcomings such as slow detection speed, low efficiency, and poor performance in low-light environments. To address these issues, this paper proposes a deep learning detection model based on You Only Look Once (YOLO) v3. On the one hand, the neural network structure is simplified: the three feature maps of YOLO v3 are pruned to two to meet the specific detection requirements. Meanwhile, the k-means++ clustering method is used to calculate the anchor values for the data set to improve detection accuracy. On the other hand, 1,000 power tower and insulator samples are collected, which are inverted and scaled to expand the data set and further augmented with different illumination conditions and viewing angles. The experimental results show that the model using the improved YOLO v3 effectively improves detection accuracy by 6.0%, FLOPs by 8.4%, and detection speed by about 6.0%.
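
A hedged sketch of the anchor step described above: cluster ground-truth box sizes with k-means++ initialization and use the centres as anchors. Note that YOLO's original anchor clustering uses an IoU-based distance, whereas this short sketch uses Euclidean distance on (width, height); the box values below are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# (width, height) of ground-truth boxes, normalized to the input resolution.
boxes_wh = np.array([
    [0.08, 0.22], [0.10, 0.28], [0.12, 0.30], [0.11, 0.26],
    [0.25, 0.40], [0.28, 0.45], [0.30, 0.42], [0.27, 0.38],
    [0.45, 0.60], [0.50, 0.65], [0.48, 0.58], [0.52, 0.70],
])

# Two retained feature maps with three anchors each would need six anchors.
kmeans = KMeans(n_clusters=6, init="k-means++", n_init=10, random_state=0)
kmeans.fit(boxes_wh)
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_[:, 0])]
print(anchors)  # sorted (w, h) anchor pairs for the model configuration
```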


2020 ◽  
Vol 34 (04) ◽  
pp. 5620-5627 ◽  
Author(s):  
Murat Sensoy ◽  
Lance Kaplan ◽  
Federico Cerutti ◽  
Maryam Saleki

Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions. Some recent approaches quantify classification uncertainty directly by training the model to output high uncertainty for data samples close to class boundaries or from outside the training distribution. These approaches use an auxiliary data set during training to represent out-of-distribution samples. However, selection or creation of such an auxiliary data set is non-trivial, especially for high-dimensional data such as images. In this work we develop a novel neural network model that is able to express both aleatoric and epistemic uncertainty to distinguish decision-boundary and out-of-distribution regions of the feature space. To this end, variational autoencoders and generative adversarial networks are incorporated to automatically generate out-of-distribution exemplars for training. Through extensive analysis, we demonstrate that the proposed approach provides better estimates of uncertainty for in-distribution, out-of-distribution, and adversarial samples on well-known data sets than state-of-the-art approaches, including recent Bayesian approaches for neural networks and anomaly detection methods.
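
The following toy computation illustrates the Dirichlet-based uncertainty idea used by evidential models of this kind: the network outputs non-negative per-class evidence, and total uncertainty shrinks as evidence accumulates. It is a conceptual sketch, not the paper's model.

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """evidence: non-negative per-class outputs of the network."""
    alpha = evidence + 1.0                 # Dirichlet parameters
    strength = alpha.sum()
    k = len(alpha)
    probs = alpha / strength               # expected class probabilities
    uncertainty = k / strength             # high when evidence is scarce
    return probs, uncertainty

# Confident in-distribution sample vs. an out-of-distribution-like sample.
print(dirichlet_uncertainty(np.array([40.0, 1.0, 0.5])))   # low uncertainty
print(dirichlet_uncertainty(np.array([0.2, 0.3, 0.1])))    # high uncertainty
```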


2021 ◽  
pp. 1-17
Author(s):  
Luis Sa-Couto ◽  
Andreas Wichert

Abstract Convolutional neural networks (CNNs) evolved from Fukushima's neocognitron model, which is based on the ideas of Hubel and Wiesel about the early stages of the visual cortex. Unlike other branches of neocognitron-based models, the typical CNN is based on end-to-end supervised learning by backpropagation and removes the focus from built-in invariance mechanisms, using pooling not as a way to tolerate small shifts but as a regularization tool that decreases model complexity. These properties of end-to-end supervision and flexibility of structure allow the typical CNN to become highly tuned to the training data, leading to extremely high accuracies on typical visual pattern recognition data sets. However, in this work, we hypothesize that there is a flip side to this capability, a hidden overfitting. More concretely, a supervised, backpropagation-based CNN will outperform a neocognitron/map transformation cascade (MTCCXC) when trained and tested inside the same data set. Yet if we take both trained models and test them on the same task but on another data set (without retraining), the overfitting appears. Other neocognitron descendants, like the What-Where model, go in a different direction. In these models, learning remains unsupervised, but more structure is added to capture invariance to typical changes. With this in mind, we further hypothesize that if we repeat the same experiments with this model, the lack of supervision may make it worse than the typical CNN inside the same data set, but the added structure will make it generalize even better to another one. To put our hypothesis to the test, we choose the simple task of handwritten digit classification and take two well-known data sets for it: MNIST and ETL-1. To make the two data sets as similar as possible, we experiment with several types of preprocessing. However, regardless of the type in question, the results align exactly with expectations.
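
The cross-data-set protocol is easy to express in code. In the sketch below, sklearn's digits data set and a pixel-shifted copy stand in for MNIST and ETL-1 (which is not freely packaged), and a small MLP stands in for the CNN; the point is the train-once, test-elsewhere measurement, not the specific models.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("same data set:", clf.score(X_test, y_test))

# Stand-in "other" data set: the same digits shifted by one pixel column,
# evaluated without retraining.
X_other = np.roll(X_test.reshape(-1, 8, 8), shift=1, axis=2).reshape(-1, 64)
print("other data set:", clf.score(X_other, y_test))   # typically drops
```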


Author(s):  
James B. Elsner ◽  
Thomas H. Jagger

Hurricane data originate from careful analysis of past storms by operational meteorologists. The data include estimates of the hurricane position and intensity at 6-hourly intervals. Information related to landfall time, local wind speeds, damages, and deaths, as well as cyclone size, is included. The data are archived by season. Some effort is needed to make the data useful for hurricane climate studies. In this chapter, we describe the data sets used throughout this book. We show you a workflow that includes importing, interpolating, smoothing, and adding attributes. We also show you how to create subsets of the data. Code in this chapter is more complicated, and it can take longer to run. You can skip this material on first reading and continue with model building in Chapter 7. You can return here when you have an updated version of the data that includes the most recent years. Most statistical models in this book use the best-track data. Here we describe these data and provide original source material. We also explain how to smooth and interpolate them. Interpolations are needed for regional hurricane analyses. The best-track data set contains the 6-hourly center locations and intensities of all known tropical cyclones across the North Atlantic basin, including the Gulf of Mexico and Caribbean Sea. The data set is called HURDAT, for HURricane DATa. It is maintained by the U.S. National Oceanic and Atmospheric Administration (NOAA) at the National Hurricane Center (NHC). Center locations are given in geographic coordinates (in tenths of degrees); intensities, representing the one-minute near-surface (∼10 m) wind speeds, are given in knots (1 kt = 0.5144 m/s); and minimum central pressures are given in millibars (1 mb = 1 hPa). The data are provided at 6-hourly intervals starting at 00 UTC (Coordinated Universal Time). The version of the HURDAT file used here contains cyclones over the period 1851 through 2010 inclusive. Information on the history and origin of these data is found in Jarvinen et al. (1984). The file has a logical structure that makes it easy to read with a FORTRAN program. Each cyclone contains a header record, a series of data records, and a trailer record.
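
As a small illustration of the interpolation step mentioned above, the pandas sketch below resamples an invented 6-hourly track to hourly values and converts knots to m/s; it does not parse the actual HURDAT file format.

```python
import pandas as pd

# Invented 6-hourly best-track segment: latitude, longitude, wind in knots.
track = pd.DataFrame(
    {"lat": [25.0, 25.6, 26.3, 27.1],
     "lon": [-80.0, -81.2, -82.5, -83.9],
     "wind_kt": [65, 70, 80, 75]},
    index=pd.date_range("2010-09-01 00:00", periods=4, freq="6H"))

# Resample to hourly positions and intensities by linear interpolation in time.
hourly = track.resample("1H").asfreq().interpolate(method="time")
hourly["wind_ms"] = hourly["wind_kt"] * 0.5144   # 1 kt = 0.5144 m/s
print(hourly.head())
```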


2019 ◽  
Vol 67 (5) ◽  
pp. 383-401
Author(s):  
Steffen Pfrang ◽  
Anne Borcherding ◽  
David Meier ◽  
Jürgen Beyerer

Abstract Industrial automation and control systems (IACS) play a key role in modern production facilities. On the one hand, they provide real-time functionality to the connected field devices. On the other hand, they are increasingly connected to local networks and the internet in order to facilitate use cases promoted by “Industrie 4.0”. Many IACS are equipped with web servers that provide web applications for configuration and management purposes. If an attacker gains access to such a web application operated on an IACS, he can exploit vulnerabilities and possibly interrupt the critical automation process. Cyber security research for web applications is well established in office IT, where many best practices and tools exist for testing web applications for different kinds of vulnerabilities. Security testing aims at discovering those vulnerabilities before they can be exploited. In order to enable IACS manufacturers and integrators to perform security tests for their devices, ISuTest, a modular security testing framework for IACS, was developed. This paper provides a classification of known types of web application vulnerabilities based on the worst direct impact of each vulnerability. Based on this analysis, a subset of open-source vulnerability scanners able to detect such vulnerabilities is selected for integration into ISuTest. Subsequently, the integration is evaluated. This evaluation is twofold: first, deliberately vulnerable web applications are used; in a second step, seven real IACS, such as a programmable logic controller, industrial switches, and cloud gateways, are used. Both evaluation steps start with a manual examination of the web applications for vulnerabilities and conclude with an automated test of the web applications using the vulnerability scanners automated by ISuTest. The results show that the vulnerability scanners detected 53 % of the existing vulnerabilities. In a former study using commercial vulnerability scanners, 54 % of the security flaws could be found. While performing the analysis, 45 new vulnerabilities were detected. Some of them not only broke the web server but crashed the whole IACS, stopping the critical automation process. This shows that security testing is crucial in the industrial domain and needs to cover all services provided by the devices.
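
A purely hypothetical sketch of the kind of evaluation loop described above (not ISuTest's actual API): run each scanner against a device's web interface, record its result, and afterwards check whether the device survived, since a crashed IACS is itself a critical finding. The scanner commands and target address are placeholders.

```python
import subprocess
import requests

TARGET = "http://192.168.0.10"          # placeholder device address
SCANNERS = [                            # placeholder scanner commands
    ["nikto", "-h", TARGET],
    ["wapiti", "-u", TARGET],
]

def device_alive(url, timeout=5):
    """Return True if the device's web interface still answers."""
    try:
        return requests.get(url, timeout=timeout).status_code < 500
    except requests.RequestException:
        return False

for cmd in SCANNERS:
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(cmd[0], "finished with exit code", result.returncode)
    if not device_alive(TARGET):
        # A crashed web server or controller must be reported and the
        # remaining scans stopped until the device is recovered.
        print(cmd[0], "appears to have crashed the device; stop and report")
        break
```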


Author(s):  
Peter Grabusts

This paper describes a method of rule extraction from trained artificial neural networks. The statement of the problem is given, and the aim of the rule extraction procedure and the neural networks suitable for rule extraction are outlined. The RULEX rule extraction algorithm, which is based on the radial basis function (RBF) neural network, is discussed. The extracted rules can help discover and analyze the rule set hidden in data sets. The paper contains an implementation example, demonstrated on the standard IRIS data set.
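
The interval-rule idea behind RULEX can be illustrated very roughly as follows: treat a per-class prototype as an RBF-like unit and read off one interval per input feature around it. This is a drastic simplification for illustration only, not the RULEX algorithm itself.

```python
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data, iris.target

for cls in np.unique(y):
    Xc = X[y == cls]
    centre = Xc.mean(axis=0)   # class prototype, standing in for an RBF centre
    width = Xc.std(axis=0)     # spread, standing in for the unit's radius
    conditions = [
        f"{lo:.1f} <= {name} <= {hi:.1f}"
        for name, lo, hi in zip(iris.feature_names, centre - width, centre + width)
    ]
    print(f"IF {' AND '.join(conditions)} THEN class = {iris.target_names[cls]}")
```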

