Highly available systems
Latest Publications


Published by Publishing House "Radiotekhnika"

ISSN 2072-9472

Author(s): A.A. Kuzmitsky, M.I. Truphanov, O.B. Tarasova, D.V. Fedosenko

One of the key tasks associated with the rapid identification of powerful tropical hurricanes and the assessment of the growth of their power is the formation of an input dataset built from data that can be easily and accurately recorded and computed from existing, openly accessible sources. The presented work relies on the analysis of satellite images as the main data source and on weather data as a supplementary one. An obvious advantage of satellite images over other sources of data on weather conditions is their high spatial resolution, as well as the ability to obtain data from various satellites, which increases the timeliness and accuracy of retrieving primary information. The developed approach consists in performing the following main interconnected, iteratively executed groups of subtasks: calculation of feature points describing the location of individual cloud areas at different points in time using different descriptors; matching of the same cloud areas at specified times to analyze the local directions of cloud movement; tracking of cloudiness over specified time intervals; calculation of local features for selected cloud points to recognize the origin of turbulence and analyze it; formation of the dynamics of changes in the local area near the trajectory of a point; recognition of primary characteristic features describing the transformation of local turbulences into a stable vortex formation; identification of signs of a growing hurricane and assessment of the primary dynamics of the increase in its power; generalization and refinement of a priori given features by analyzing similar features of known cyclones. To detect feature points, a modified point-detection algorithm is introduced. To describe the points, additional descriptors are introduced, based on the normalized gradient measured over the neighborhood of adjacent points and varying cyclically in the polar coordinate system. A comparative analysis of the created method and algorithm against known similar solutions revealed the following distinctive features: introduction of additional invariant feature orientations when describing characteristic points and greater stability of characteristic point detection when analyzing cloudiness; identification of cloud turbulences and analysis of changes in their local characteristics and movement parameters; formation of a set of generalizing distributions over a set of moving points for the subsequent recognition of the signs of a hurricane at the initial stages of its formation. The developed approach was tested experimentally on video recordings of hurricanes and their movement in the Atlantic region for the period from 2010 to 2020. The developed general approach and a specific algorithm for estimating hurricane parameters based on cloud analysis are presented. The approach is suitable for practical implementation and allows accumulating data for detecting hurricanes in real time from publicly available data for the development of a physical and mathematical model.
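As a rough illustration of the first two groups of subtasks (detecting feature points on cloud imagery and matching the same cloud areas between two moments in time to obtain local motion directions), the sketch below uses the stock ORB detector and brute-force matching from OpenCV. The authors' modified detector and the additional polar-gradient descriptors are not reproduced; function and parameter names are illustrative.

```python
# Illustrative sketch: match cloud feature points between two satellite frames
# and derive local motion vectors. Uses stock ORB features, not the authors'
# modified detector or polar-gradient descriptors.
import cv2
import numpy as np

def cloud_motion_vectors(frame_t0_path, frame_t1_path, max_matches=200):
    img0 = cv2.imread(frame_t0_path, cv2.IMREAD_GRAYSCALE)
    img1 = cv2.imread(frame_t1_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp0, des0 = orb.detectAndCompute(img0, None)
    kp1, des1 = orb.detectAndCompute(img1, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)

    # Each match gives the displacement of a cloud fragment between the frames.
    vectors = []
    for m in matches[:max_matches]:
        p0 = np.array(kp0[m.queryIdx].pt)
        p1 = np.array(kp1[m.trainIdx].pt)
        vectors.append((p0, p1 - p0))   # (location at t0, local motion vector)
    return vectors
```

Accumulating such vectors over successive frames gives the cloud tracks from which the turbulence and vortex features described above can be derived.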


Author(s): T.K. Biryukova

In classic neural networks, the trainable parameters are just the weights of the neurons. This paper proposes parabolic integro-differential splines (ID-splines), developed by the author, as a new kind of activation function (AF) for neural networks, in which the ID-spline coefficients are also trainable parameters. The parameters of the ID-spline AF, together with the weights of the neurons, vary during training in order to minimize the loss function, thus reducing the training time and increasing the operation speed of the neural network. The newly developed algorithm enables a software implementation of the ID-spline AF as a tool for neural network construction, training and operation. It is proposed to use the same ID-spline AF for all neurons in the same layer, but different ones for different layers. In this case, the parameters of the ID-spline AF for a particular layer change during the training process independently of the activation functions (AFs) of the other network layers. In order to satisfy the continuity condition for the derivative of the parabolic ID-spline on the interval (x_0, x_n), its parameters f_i (i = 0, ..., n) should be calculated from a tridiagonal system of linear algebraic equations. To solve the system, two more equations arising from the boundary conditions of the specific problem are needed; for example, the values of the grid function (if they are known) at the points x_0 and x_n may be used to close the system: f_0 = f(x_0), f_n = f(x_n). The parameters I_{i,i+1} (i = 0, ..., n−1) are used as trainable parameters of the neural network. The grid boundaries and the spacing of the nodes of the ID-spline AF are best chosen experimentally; an optimal selection of grid nodes improves the quality of the results produced by the neural network. The formula for the parabolic ID-spline is such that the complexity of the calculations does not depend on whether the grid of nodes is uniform or non-uniform. An experimental comparison was carried out of image classification results on the popular FashionMNIST dataset by convolutional neural networks with ID-spline AFs and with the well-known ReLU AF (ReLU(x) = 0 for x < 0, ReLU(x) = x for x ≥ 0). The results reveal that the use of ID-spline AFs provides better neural network accuracy than the ReLU AF. The training time for a network with two convolutional layers and two ID-spline AFs is only about 2 times longer than with two instances of the ReLU AF. Doubling the training time due to the complexity of the ID-spline formula is an acceptable price for the significantly better accuracy of the network, while the difference in operation speed between networks with ID-spline and ReLU AFs is negligible. The use of trainable ID-spline AFs makes it possible to simplify the architecture of neural networks without losing their efficiency. Modifying well-known neural networks (ResNet etc.) by replacing traditional AFs with ID-spline AFs is a promising approach to increasing neural network accuracy. In the majority of cases, such a substitution does not require training the network from scratch, because it allows using neuron weights pre-trained on large datasets and supplied by standard software libraries for neural network construction, thus substantially shortening the training time.
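To make the idea of an activation function with trainable coefficients concrete, the sketch below implements a much simpler stand-in in PyTorch: a piecewise-linear activation on a fixed grid whose node values are ordinary trainable parameters. The parabolic ID-spline itself, its tridiagonal coefficient system and the I_{i,i+1} parameterization are not reproduced; the module, grid and parameter names are illustrative.

```python
# Illustrative stand-in for a trainable spline activation function (not the
# author's parabolic ID-spline): piecewise-linear interpolation on a fixed
# grid, with the node values trained together with the network weights.
import torch
import torch.nn as nn

class TrainableSplineActivation(nn.Module):
    def __init__(self, x_min=-3.0, x_max=3.0, n_nodes=16):
        super().__init__()
        self.register_buffer("grid", torch.linspace(x_min, x_max, n_nodes))
        # Node values f_i are ordinary trainable parameters, initialised as ReLU(x_i).
        self.values = nn.Parameter(torch.relu(self.grid.clone()))

    def forward(self, x):
        g = self.grid
        x_clamped = x.clamp(g[0], g[-1])
        # Index of the left grid node for every input element.
        idx = (torch.bucketize(x_clamped, g) - 1).clamp(0, len(g) - 2)
        x0, x1 = g[idx], g[idx + 1]
        f0, f1 = self.values[idx], self.values[idx + 1]
        t = (x_clamped - x0) / (x1 - x0)
        return f0 + t * (f1 - f0)
```

Such a module can simply replace an nn.ReLU() instance, with a separate instance per layer so that each layer's activation is trained independently, as the abstract proposes.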


Author(s): V.G. Belenkov, V.I. Korolev, V.I. Budzko, D.A. Melnikov

The article discusses the features of using cryptographic information protection means (CIPM) in an environment of distributed processing and storage of data in large information and telecommunication systems (LITS). A brief characterization is given of the properties of the cryptographic protection control subsystem, the key system (KS). A description is given of the symmetric and asymmetric cryptographic systems required to state the problem of using a KS in an LITS. Functional and structural models of the use of KS and CIPM in LITS are described, and generalized information about the features of using a KS in an LITS is given. The obtained results form the basis for further work on the development of the architecture and principles of KS construction in LITS that implement distributed data processing and storage technologies. They can be used both as a methodological guide and when carrying out specific work on the creation and development of systems that implement these technologies, as well as when forming technical specifications for work on the creation of such systems.


Author(s): O.P. Arkhipov, M.V. Tsukanov

The development of automatic methods for comparing panoramas of the same area obtained at different times during UAV inspection flights is currently an urgent and popular task. In this connection, a new algorithmic model for detecting anomalies on multi-temporal panoramas is proposed, based on comparing the detected feature points and descriptors, establishing their mutual correspondence across the panoramas, and highlighting the found differences in non-overlapping anomaly areas. A strategy aimed at bringing the panoramas to a single view and then synchronizing them is proposed. The results of the algorithm are presented using the example of multi-temporal panoramas of a selected inspected area. The panoramas taken at different times were synchronized so as to minimize differences in shooting angles and illumination; anomalies were then searched for on the multi-temporal panoramas, excluding "noise"-type anomalies and minor deviations in the color and geometric coordinates of feature points; finally, the found anomalies were sorted into importance groups.
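A minimal sketch of the general pipeline described here (bring the two multi-temporal panoramas to a single view, then highlight the differences) using standard OpenCV building blocks is given below. The authors' specific noise filtering and importance grouping are not reproduced, and the thresholds are illustrative assumptions.

```python
# Illustrative sketch: align a newer panorama to an older one via feature-based
# homography, then flag regions that changed between the two acquisitions.
import cv2
import numpy as np

def find_change_regions(pano_old, pano_new, min_area=500):
    """pano_old, pano_new: BGR images of the same inspected area."""
    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(pano_old, None)
    kp2, des2 = orb.detectAndCompute(pano_new, None)

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Warp the newer panorama into the coordinate frame of the older one.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = pano_old.shape[:2]
    aligned = cv2.warpPerspective(pano_new, H, (w, h))

    # Simple change map: threshold the absolute difference, keep large blobs.
    diff = cv2.absdiff(cv2.cvtColor(pano_old, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(aligned, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]
```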


Author(s): A.V. Solovyev

When developing information systems aimed at long-term storage of digital data, it is necessary to design solutions that ensure not only storage security and storage reliability, but also the safety of the digital data in the event of a possible destabilizing effect, including a catastrophic one. That is, there is a need to ensure the stability of digital data under natural, man-made, anthropogenic or other destabilizing influences. The first step towards digital data resilience is to assess data resilience. The aim of the work is to develop a methodological apparatus for assessing the stability of digital data in information systems operating under conditions of disasters and destabilizing factors. The article presents a methodological approach to assessing the stability of digital data, including data in long-term storage. The stability of digital data to destabilizing influences is understood here as the ability to recover, in a minimum period of time, both the data itself and the operability of the applications responsible for interpreting this data, as well as the operability of the other software and hardware without which the use of the digital data is not possible. The author proposes a methodology for creating a mathematical model for assessing stability and presents a model for assessing the stability indicator in general form. The main steps in the development of a mathematical model of stability are described, and areas of further research on the development of a methodological and algorithmic apparatus for modeling the stability of digital data are identified. The methodological approach proposed in the article can be used to solve the problems of digital data stability for a fairly wide class of applied information systems operating under conditions of disasters and destabilizing factors. The proposed approach presupposes redundancy of the software and hardware of the information systems, additional time spent at the design stage on compiling models, and additional costs for storing the "history" of the functioning of the information systems, descriptions of destabilizing factors, etc. However, according to the author, this is necessary and justified to ensure the safety of valuable digital data with a long storage period.
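The abstract does not give the model itself, so the following is only a hypothetical illustration of the kind of indicator its definition suggests: stability expressed through the time needed to restore the data, the interpreting applications and the supporting software and hardware after a destabilizing event, relative to an allowed recovery window. The function, names and formula are assumptions for orientation, not the author's model.

```python
# Hypothetical illustration only: a recovery-time-based stability indicator,
# not the model proposed in the article.
def stability_indicator(recovery_times_hours, allowed_window_hours):
    """Return 1.0 when every component (data, applications, software/hardware)
    recovers within the allowed window, decreasing towards 0 as the slowest
    component exceeds it."""
    worst = max(recovery_times_hours.values())
    return min(1.0, allowed_window_hours / worst)

# Example: applications recover quickly, but hardware replacement dominates.
print(stability_indicator(
    {"data": 2.0, "applications": 6.0, "software_hardware": 48.0},
    allowed_window_hours=24.0))   # -> 0.5
```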


Author(s): I.N. Sinitsyn, V.I. Sinitsyn, E.R. Korepanov, T.D. Konashenkova

The article continues the thematic cycle dedicated to software tools for stochastic systems with high availability (StSHA) functioning under shock disturbances (ShD), and is devoted to wavelet synthesis according to complex statistical criteria (CsC). A short survey of the corresponding works for mean square criteria (MSC) is given. In Sect. 1, basic CsC definitions and approaches are given. Sect. 2 is dedicated to the necessary and sufficient CsC wavelet optimality conditions for scalar non-stationary shock StSHA (ShStSHA). The methodological support is based on Haar wavelets. Sect. 4 and Sect. 5 are devoted to CsC optimization of ShStSHA (basic wavelet equations, algorithms, software tools and an example). Several advantages of the wavelet algorithms and tools for complex ShD are described and stated. A generalization of the CsC algorithms based on the wavelet canonical expansion of stochastic processes (StP) in ShStSHA is mentioned.
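Since the methodological support rests on Haar wavelets, a brief sketch of a single-level orthonormal Haar decomposition and reconstruction is given below for orientation; it illustrates only the basis used, not the CsC synthesis equations or the wavelet canonical expansion themselves.

```python
# Single-level orthonormal Haar wavelet decomposition and reconstruction.
import numpy as np

def haar_decompose(x):
    """x must have even length; returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_decompose(signal)
assert np.allclose(haar_reconstruct(a, d), signal)   # exact reconstruction
```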


Author(s): A.A. Nistratov

With the widespread adoption and development of the process approach, it became clear that the standard processes used in the life cycle of highly available systems have a cumulative impact on the risks that arise. However, the possibilities for predicting risks in practice are significantly limited: particular and integral risks of violating the acceptable performance of the implemented processes, when estimated by simplified methods, do not reflect the real picture, while specialized models of specific systems and processes require painstaking and long-term scientific and methodological study. Thus, a critical methodological contradiction has arisen between objective needs and real capabilities in predicting particular and integral risks. In a scientific search for ways to eliminate this contradiction, the main goal of this work (in two parts) is to create scientifically grounded methodological and software-technological solutions for the analytical prediction of the integral risk of violating the acceptable performance of a given set of standard processes in the life cycle of systems. In the first part of the work, for the 30 standard processes defined by GOST R 57193 (characterized by typical actions and real or hypothetical input data for modeling, and linked to possible scenarios of their use in the creation and/or operation and/or disposal of systems), mathematical models and methods are proposed for predicting the integral risk of violating the acceptable performance of a given set of standard processes, with a traceable analytical dependence on the influencing factors. The second part of the work is devoted to describing the proposed software-technological solutions for risk prediction using the models and methods of the first part to solve practical problems of systems engineering.
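As a hedged illustration of what an integral risk over a set of processes can mean numerically, the sketch below combines per-process risks of violating acceptable performance under an independence assumption. This is a textbook simplification for orientation only; the models actually proposed for the GOST R 57193 processes, with their traceable dependence on influencing factors, are more elaborate, and the process names in the example are illustrative.

```python
# Illustrative only: integral risk of a set of standard processes assuming the
# processes violate acceptable performance independently of each other.
def integral_risk(process_risks):
    """process_risks: mapping from process name to its risk over the period."""
    no_violation = 1.0
    for r in process_risks.values():
        no_violation *= (1.0 - r)
    return 1.0 - no_violation

risks = {"project planning": 0.03, "risk management": 0.02,
         "verification": 0.05, "maintenance": 0.04}
print(f"integral risk: {integral_risk(risks):.3f}")   # ~0.133
```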


Author(s): A.I. Belozubova, A.V. Epishkina, K.G. Kogos

Lampson was the first to define a covert channel as a channel that was not designed for information transmission. The problem of information leakage via network covert channels is large-scale because the IP protocol is widely used and has many features that can be exploited for hidden information transmission. Covert channels are usually divided into two groups by transmission technique: storage and timing covert channels. In the paper, the authors provide a brief survey of network timing and storage covert channels, as well as of methods for counteracting information leakage. According to best practices, information systems and infrastructure have an information security policy with requirements on the allowable level of covert channel capacity. However, to decide whether any countermeasure should be activated, it is important not to underestimate the covert channel capacity. For the effective prevention of information leakage via network covert channels, the authors propose a way to assess timing covert channel capacity. Two binary timing channels are investigated: an on/off channel and a channel based on inter-packet interval modulation. In the on/off covert channel, the sender sends a packet during a preliminarily agreed time interval to transmit the bit «1» and does not send one to transmit the bit «0». In the covert channel based on inter-packet interval modulation, the sender sends packets with different time intervals encoding different bits. The scientific novelty consists in taking network load conditions into account when assessing the maximum amount of information that can be stealthily transmitted from a secure infrastructure to an illegitimate receiver beyond the secure perimeter. The authors investigated the cases when the packet transfer time from the sender to the receiver in the network (PTT) follows a normal or an exponential distribution, the most common distributions according to current research. The covert channel capacity is evaluated as a function of the covert channel parameters and the parameters of the PTT distribution (DPTT). The conducted research shows that when the security officer does not take into account the typical load of the network and the type of DPTT, the maximum covert channel capacity will most likely be underestimated. If an allowable level of covert channel capacity is set, the obtained results make it possible to take the right decision about activating countermeasures to prevent information leakage.
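A simplified sketch of this kind of estimate for the on/off channel is given below, assuming a slot length T and a normally distributed PTT: a bit is decoded incorrectly when the delay jitter pushes a packet out of its slot, and the channel is then treated as a binary symmetric channel. This only illustrates how the capacity depends on the PTT distribution parameters; it is not the authors' exact derivation, and the parameter values are illustrative.

```python
# Illustrative estimate of on/off timing covert channel capacity under a
# normally distributed packet transfer time (PTT). Binary-symmetric-channel
# approximation: an error occurs when PTT jitter moves a packet out of its slot.
import math

def binary_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def onoff_capacity_bits_per_sec(slot_seconds, ptt_sigma_seconds):
    # Probability that the jitter exceeds half a slot in either direction.
    p_err = math.erfc(slot_seconds / (2.0 * ptt_sigma_seconds * math.sqrt(2.0)))
    return (1.0 - binary_entropy(p_err)) / slot_seconds

# Larger network-load jitter (sigma) sharply reduces the stealth channel rate.
for sigma in (0.005, 0.02, 0.05):
    print(sigma, onoff_capacity_bits_per_sec(0.1, sigma))
```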


Author(s): I.N. Sinitsyn, A.P. Karpenko, M.K. Sakharov

The paper presents a new multi-memetic modification of the Mind Evolutionary Computation (MEC) algorithm with incorporated landscape analysis (LA) for solving global optimization problems in complex highly available systems (HAS). The proposed landscape analysis is based on the concept of the Lebesgue integral and allows one to divide objective functions into three categories. Each category suggests the use of specific hyper-heuristics for adaptive meme selection. The new algorithm and its software tools were used to solve an optimal control problem for an epidemic propagation model based on the SEIR model with pulse vaccination. The results of the numerical experiments demonstrate a significant influence of the vaccination start time, frequency and intensity on the maximum number of infected individuals. The proposed algorithm helped to find the optimal vaccination schedule that minimizes the number of infected individuals while also keeping the volume of the vaccine used at a low level.
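For orientation, a minimal simulation of a SEIR model with pulse vaccination is sketched below (forward Euler integration, with a fixed fraction of susceptibles vaccinated at regular intervals). The parameter values are illustrative assumptions; the actual optimal control problem was solved with the proposed multi-memetic MEC algorithm, which is not reproduced here.

```python
# Illustrative SEIR model with pulse vaccination (forward Euler integration).
def simulate_seir_pulse(beta=0.4, sigma=0.2, gamma=0.1, n=1_000_000,
                        pulse_period=30.0, pulse_fraction=0.2, t_start=60.0,
                        days=365, dt=0.1):
    s, e, i, r = n - 10.0, 0.0, 10.0, 0.0
    next_pulse = t_start
    peak_infected = 0.0
    t = 0.0
    while t < days:
        if t >= next_pulse:                      # pulse vaccination event
            vaccinated = pulse_fraction * s
            s -= vaccinated
            r += vaccinated
            next_pulse += pulse_period
        new_exposed = beta * s * i / n * dt
        new_infectious = sigma * e * dt
        new_recovered = gamma * i * dt
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        peak_infected = max(peak_infected, i)
        t += dt
    return peak_infected

# Earlier vaccination pulses lower the peak number of infected individuals.
for start in (30.0, 60.0, 120.0):
    print(start, round(simulate_seir_pulse(t_start=start)))
```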

