Stochastic Approximation
Recently Published Documents


TOTAL DOCUMENTS

1440
(FIVE YEARS 146)

H-INDEX

57
(FIVE YEARS 3)

2021 ◽  
Vol 19 (6) ◽  
pp. 575-583
Author(s):  
Rasha Atwa ◽  
Rasha Abd-El-Wahab ◽  
Ola Barakat

The stochastic approximation procedure with delayed groups of delayed customers is investigated. The Robbins-Monro stochastic approximation procedure is adjusted to be usable in the presence of delayed groups of delayed customers. Two loss systems are introduced to give an accurate description of the proposed procedure. Customers arrive at fixed time intervals, the stage of each customer being determined by the outcome of the preceding one, and the service time of a customer is assumed to be a discrete random variable. Some applications of the procedure are given and their results are analyzed. The analysis shows that the efficiency of the procedure can be increased by minimizing the number of customers in a group, irrespective of their service times, which may take maximum values. Efficiency depends on the maximum service time of a customer and on the number of customers in the group. The most important result is that the efficiency of the procedure is increased by increasing the service-time distributions as well as the service times of customers. This can be exploited to increase the number of served customers, in which case the number of served groups also increases. The results obtained seem to be acceptable. In general, our proposal can be applied to other stochastic approximation procedures to increase productivity in many fields, such as medicine, computer science, industry, and the applied sciences.
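
The procedure above builds on the classical Robbins-Monro recursion. As a point of reference, the following is a minimal sketch of that basic recursion for a one-dimensional root-finding problem; the regression function, noise model, and step sizes are illustrative assumptions, and the authors' adjustment for delayed groups of delayed customers is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_observation(x):
    """Noisy measurement of an unknown regression function f(x) = x - 2.
    The true root is x* = 2 (an illustrative assumption, not the paper's model)."""
    return (x - 2.0) + rng.normal(scale=0.5)

def robbins_monro(x0=0.0, n_steps=10_000):
    """Classical Robbins-Monro recursion x_{n+1} = x_n - a_n * Y_n with step
    sizes a_n = 1/n, which satisfy sum a_n = infinity and sum a_n^2 < infinity."""
    x = x0
    for n in range(1, n_steps + 1):
        x -= (1.0 / n) * noisy_observation(x)
    return x

print(robbins_monro())   # converges to approximately 2.0
```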


2021 ◽  
pp. 233-276
Author(s):  
Joachim Gwinner ◽  
Baasansuren Jadamba ◽  
Akhtar A. Khan ◽  
Fabio Raciti

2021 ◽  
Vol 32 (1) ◽  
Author(s):  
Yang Liu ◽  
Robert J. B. Goudie

Bayesian modelling enables us to accommodate complex forms of data and make a comprehensive inference, but the effect of partial misspecification of the model is a concern. One approach in this setting is to modularize the model and prevent feedback from suspect modules, using a cut model. After observing data, this leads to the cut distribution which normally does not have a closed form. Previous studies have proposed algorithms to sample from this distribution, but these algorithms have unclear theoretical convergence properties. To address this, we propose a new algorithm called the stochastic approximation cut (SACut) algorithm as an alternative. The algorithm is divided into two parallel chains. The main chain targets an approximation to the cut distribution; the auxiliary chain is used to form an adaptive proposal distribution for the main chain. We prove convergence of the samples drawn by the proposed algorithm and present the exact limit. Although SACut is biased, since the main chain does not target the exact cut distribution, we prove this bias can be reduced geometrically by increasing a user-chosen tuning parameter. In addition, parallel computing can be easily adopted for SACut, which greatly reduces computation time.
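
For readers unfamiliar with cut models, the toy sketch below illustrates the cut distribution that SACut targets, not the SACut algorithm itself. It uses two conjugate normal modules (an illustrative assumption, not the paper's example) so that both conditionals can be sampled exactly; in general the cut distribution has no closed form, which is what motivates the algorithm in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-module model (illustrative assumption):
#   Module 1:  Y_i ~ N(phi, 1),          phi   ~ N(0, 10^2)
#   Module 2:  Z_j ~ N(phi + theta, 1),  theta ~ N(0, 10^2)
# The cut distribution p_cut(phi, theta) = p(phi | Y) p(theta | phi, Z)
# stops the (possibly misspecified) Z-module from feeding back into phi.

Y = rng.normal(1.0, 1.0, size=50)
Z = rng.normal(3.0, 1.0, size=50)   # generated with a shift, so module 2 is "suspect"

def sample_cut(n_draws=10_000, prior_var=100.0):
    draws = np.empty((n_draws, 2))
    for k in range(n_draws):
        # phi | Y only (conjugate normal posterior of module 1)
        post_var1 = 1.0 / (1.0 / prior_var + len(Y))
        phi = rng.normal(post_var1 * Y.sum(), np.sqrt(post_var1))
        # theta | phi, Z (conjugate normal posterior of module 2, phi held fixed)
        post_var2 = 1.0 / (1.0 / prior_var + len(Z))
        theta = rng.normal(post_var2 * (Z - phi).sum(), np.sqrt(post_var2))
        draws[k] = phi, theta
    return draws

draws = sample_cut()
print(draws.mean(axis=0))   # phi stays near the Y-only estimate; theta absorbs the shift
```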


2021 ◽  
Vol 10 ◽  
pp. 13-32
Author(s):  
Petro Kravets ◽  
◽  
Volodymyr Pasichnyk ◽  
Mykola Prodaniuk ◽  
◽  
...  

This paper proposes a new application of the stochastic game model to the problem of self-organization of a Hamiltonian cycle in a graph. Game agents are placed at the vertices of an undirected graph, and their pure strategies are the choices of one of the incident edges. A random selection of strategies by all agents forms a set of local paths that begin at each vertex of the graph. Current player payments are defined as loss functions that depend on the strategies of neighboring players controlling adjacent vertices of the graph. These functions combine a penalty for choosing strategies opposed to those of neighboring players and a penalty for strategies that reduce the length of the local path. The random selection of players' pure strategies is aimed at minimizing their average loss functions. Sequences of pure strategies are generated from a discrete distribution built on dynamic vectors of mixed strategies. The elements of the mixed-strategy vectors are the probabilities of choosing the corresponding pure strategies, which adaptively take the current losses into account. The mixed-strategy vectors are formed by a Markov recurrent method constructed with the gradient method of stochastic approximation. During the game, the method increases the probabilities of choosing those pure strategies that decrease the average loss functions. For the given ways of forming current payments, the result of the stochastic game is the formation of self-organization patterns in the form of cyclically oriented strategies of the game agents. Convergence of the recurrent method to collectively optimal solutions is ensured by observing the fundamental conditions of stochastic approximation. The game formulation is extended to random graphs: vertices are assigned probabilities of recoverable failures, which change the structure of the graph at each step of the game, and realizations of the random graph are adaptively taken into account when searching for Hamiltonian cycles. Increasing the probability of failure slows the convergence of the stochastic game. Computer simulation of the stochastic game produced patterns of self-organization of the agents' strategies in the form of several local cycles or a global Hamiltonian cycle of the graph, depending on how the players' current losses are formed. The reliability of the experimental studies is confirmed by the repeatability of the self-organization patterns across different sequences of random variables. The results of the study can be used in practice for the game-based solution of NP-complete problems, for transport and communication problems, for building authentication protocols in distributed information systems, and for collective decision-making under uncertainty.
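
The core of the method is a recurrent update of each agent's mixed-strategy vector driven by current losses. The sketch below is not the authors' projection-based Markov recurrent method; it uses a standard linear reward-inaction (L_R-I) learning-automaton update for a single agent with an assumed loss profile, purely to illustrate how probability mass shifts toward pure strategies with lower average loss.

```python
import numpy as np

rng = np.random.default_rng(2)

def reward_inaction_update(p, action, reward, gamma):
    """Linear reward-inaction step: move the mixed-strategy vector p toward the
    unit vector of the chosen action, proportionally to the received reward.
    The update keeps p on the probability simplex."""
    e = np.zeros_like(p)
    e[action] = 1.0
    return p + gamma * reward * (e - p)

# One game agent with three pure strategies; strategy 0 has the lowest expected
# loss (an illustrative assumption).  Rewards are 1 - loss, kept in [0, 1].
expected_loss = np.array([0.2, 0.6, 0.8])

p = np.full(3, 1.0 / 3.0)                 # start from the uniform mixed strategy
for _ in range(50_000):
    action = rng.choice(3, p=p)           # random pure strategy drawn from p
    loss = float(rng.random() < expected_loss[action])   # Bernoulli current loss
    p = reward_inaction_update(p, action, 1.0 - loss, gamma=0.001)

print(p)   # probability mass shifts toward strategy 0, the lowest-loss strategy
```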


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 567
Author(s):  
Julien Gacon ◽  
Christa Zoufal ◽  
Giuseppe Carleo ◽  
Stefan Woerner

The Quantum Fisher Information matrix (QFIM) is a central metric in promising algorithms, such as Quantum Natural Gradient Descent and Variational Quantum Imaginary Time Evolution. Computing the full QFIM for a model with d parameters, however, is computationally expensive and generally requires O(d²) function evaluations. To remedy these increasing costs in high-dimensional parameter spaces, we propose using simultaneous perturbation stochastic approximation techniques to approximate the QFIM at a constant cost. We present the resulting algorithm and successfully apply it to prepare Hamiltonian ground states and train Variational Quantum Boltzmann Machines.
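
The constant-cost idea rests on a simultaneous-perturbation estimate of a Hessian-like matrix from only four function evaluations per sample, independent of the dimension d. The sketch below applies such an estimator to a classical quadratic test function (an assumption for illustration); the quantum part of the paper, where an estimator of this kind is used to approximate the QFIM itself, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def spsa_hessian_estimate(f, theta, eps=1e-2, n_samples=2000):
    """Average of rank-2 simultaneous-perturbation Hessian estimates.
    Each sample needs only four evaluations of f, regardless of the dimension."""
    d = theta.size
    H = np.zeros((d, d))
    for _ in range(n_samples):
        d1 = rng.choice([-1.0, 1.0], size=d)   # Rademacher perturbation directions
        d2 = rng.choice([-1.0, 1.0], size=d)
        dF = (f(theta + eps * (d1 + d2)) - f(theta + eps * d1)
              - f(theta + eps * (-d1 + d2)) + f(theta - eps * d1))
        # dF is approximately 2 * eps^2 * d1^T H d2; symmetrizing the outer
        # product gives an (approximately) unbiased sample of the Hessian H.
        H += (dF / (2 * eps**2)) * (np.outer(d1, d2) + np.outer(d2, d1)) / 2
    return H / n_samples

# Quadratic test function with a known Hessian A (illustrative assumption).
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])
f = lambda x: 0.5 * x @ A @ x - x.sum()

print(spsa_hessian_estimate(f, np.zeros(3)))   # approaches A as n_samples grows
```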


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Takashi Goda ◽  
Yuki Yamada

The concept of probabilistic parameter threshold analysis has recently been introduced as an approach to probabilistic sensitivity analysis for decision-making under uncertainty, in particular for health economic evaluations that compare two or more alternative treatments while accounting for uncertainty in outcomes and costs. In this paper we formulate the probabilistic threshold analysis as a root-finding problem involving conditional expectations, and propose a pairwise stochastic approximation algorithm to search for the threshold value below and above which the choice of conditionally optimal decision options changes. Numerical experiments for both a simple synthetic test case and a chemotherapy Markov model illustrate the effectiveness of our proposed algorithm, without any need for accurate estimation or approximation of conditional expectations, which existing approaches rely upon. Moreover, we introduce a new measure called the decision switching probability for probabilistic sensitivity analysis.
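
The root-finding idea can be illustrated with a Robbins-Monro style recursion on paired samples of the two options' net benefits. The toy model below is an assumption for illustration only; it is not the paper's pairwise algorithm or its chemotherapy Markov model, but it shows how a switching threshold can be located from noisy paired observations without estimating the conditional expectations themselves.

```python
import numpy as np

rng = np.random.default_rng(4)

def paired_net_benefit_difference(x):
    """Noisy observation of g(x) = E[NB_A - NB_B | parameter = x].
    Both options are evaluated on the same random draw (paired sampling),
    which cancels the shared noise.  Toy model with true threshold x* = 3."""
    shared = rng.normal(scale=2.0)            # uncertainty common to both options
    nb_a = x + shared + rng.normal(scale=0.2)
    nb_b = 3.0 + shared + rng.normal(scale=0.2)
    return nb_a - nb_b

def threshold_search(x0=0.0, n_steps=20_000):
    """Robbins-Monro root search for the parameter value at which the
    conditionally optimal option switches (g crosses zero)."""
    x = x0
    for n in range(1, n_steps + 1):
        x -= (1.0 / n) * paired_net_benefit_difference(x)
    return x

print(threshold_search())   # converges to approximately 3.0
```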


2021 ◽  
Vol 34 (5) ◽  
pp. 1681-1702
Author(s):  
Shuhang Chen ◽  
Adithya Devraj ◽  
Andrey Bernstein ◽  
Sean Meyn
