TOPSIS Application to Fuzzy Game Problem

2019 ◽  
Vol 10 (7) ◽  
pp. 1426-1434
Author(s):  
M. Thirucheran ◽  
E. R. Meena Kumari
Author(s):  
K. Selvakumari ◽  
S. Lavanya

The main aim of this paper is to deal with a two-person zero-sum game whose fuzzy payoff matrix comprises heptagonal and hendecagonal fuzzy numbers. Ranking fuzzy numbers is a hard task, and many methods have been proposed to rank fuzzy numbers of different shapes, such as triangular, trapezoidal, hexagonal, and octagonal. In this paper, a matrix game whose payoffs are heptagonal and hendecagonal fuzzy numbers is considered, and a ranking method is used to solve it. With the proposed approach, the fuzzy game problem is converted into a crisp problem and then solved by the usual game-theoretic techniques. The validity of the proposed method is illustrated with two practical examples: one in which two companies venture into the online restaurant business, and another in which two political parties with conflicting interests compete during an election.
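As a rough illustration of the approach described above, the sketch below defuzzifies a small fuzzy payoff matrix with a simple average-based ranking of each fuzzy number's defining points and then applies the usual maximin/minimax check to the resulting crisp game. The ranking function and the payoff values are assumptions for illustration, not the paper's own ranking method or data.

```python
# Hedged sketch: convert a fuzzy payoff matrix to a crisp one via a simple
# ranking function, then treat it as an ordinary two-person zero-sum game.
# The averaging-based ranking below is illustrative only; the paper's own
# ranking of heptagonal/hendecagonal fuzzy numbers may differ.
import numpy as np

def rank(fuzzy_number):
    """Rank a fuzzy number given by its defining points
    (7 points for heptagonal, 11 for hendecagonal) by their mean."""
    return sum(fuzzy_number) / len(fuzzy_number)

# Illustrative 2x2 fuzzy payoff matrix of heptagonal fuzzy numbers.
fuzzy_payoffs = [
    [(1, 2, 3, 4, 5, 6, 7), (2, 3, 4, 5, 6, 7, 8)],
    [(0, 1, 2, 3, 4, 5, 6), (3, 4, 5, 6, 7, 8, 9)],
]

# Defuzzify: the fuzzy game becomes a crisp matrix game.
A = np.array([[rank(x) for x in row] for row in fuzzy_payoffs])

# Usual game-theoretic treatment: check for a saddle point.
row_minima = A.min(axis=1)      # worst case for each row strategy (maximin)
col_maxima = A.max(axis=0)      # worst case for each column strategy (minimax)
lower_value, upper_value = row_minima.max(), col_maxima.min()
if lower_value == upper_value:
    print("saddle point, game value =", lower_value)
else:
    print("no saddle point; solve for mixed strategies, value lies in",
          (lower_value, upper_value))
```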


2021 ◽  
pp. 2150011
Author(s):  
Wei Dong ◽  
Jianan Wang ◽  
Chunyan Wang ◽  
Zhenqiang Qi ◽  
Zhengtao Ding

In this paper, the optimal consensus control problem is investigated for heterogeneous linear multi-agent systems (MASs) under a spanning-tree condition, based on game theory and reinforcement learning. First, the graphical minimax game algebraic Riccati equation (ARE) is derived by converting the consensus problem into a zero-sum game between each agent and its neighbors. The asymptotic stability and the minimax property of the closed-loop systems are proved theoretically. Then, a data-driven off-policy reinforcement learning algorithm is proposed to learn the optimal control policy online without knowledge of the system dynamics. A rank condition is established to guarantee the convergence of the proposed algorithm to the unique solution of the ARE. Finally, the effectiveness of the proposed method is demonstrated through a numerical simulation.
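For readers unfamiliar with game AREs, the sketch below solves a small zero-sum linear-quadratic game by model-based policy iteration (repeated Lyapunov solves). It is only a simplified stand-in: the paper's algorithm is graphical (defined per agent and its neighbors), data-driven, and off-policy, whereas this sketch assumes known dynamics, and the system matrices, weights, and attenuation level are invented for illustration.

```python
# Hedged sketch: model-based policy iteration for a zero-sum (minimax)
# linear-quadratic game ARE, as a simplified stand-in for the paper's
# graphical game ARE and data-driven off-policy algorithm.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # open-loop stable dynamics (assumed)
B = np.array([[0.0], [1.0]])               # control input (minimizing player)
D = np.array([[0.0], [0.5]])               # adversarial input (maximizing player)
Q, R, gamma = np.eye(2), np.array([[1.0]]), 5.0

P = np.zeros((2, 2))                       # start from the zero value matrix
for _ in range(50):
    K = np.linalg.solve(R, B.T @ P)        # minimizer gain, u = -K x
    L = (D.T @ P) / gamma**2               # maximizer gain, w = +L x
    Acl = A - B @ K + D @ L
    Qcl = Q + K.T @ R @ K - gamma**2 * (L.T @ L)
    # Policy evaluation: solve Acl' P + P Acl + Qcl = 0
    P = solve_continuous_lyapunov(Acl.T, -Qcl)

# Residual of the game ARE:
# A'P + PA + Q - P B R^{-1} B' P + P D D' P / gamma^2 = 0
res = (A.T @ P + P @ A + Q
       - P @ B @ np.linalg.solve(R, B.T) @ P
       + P @ D @ D.T @ P / gamma**2)
print("game ARE residual norm:", np.linalg.norm(res))
```

In the paper's off-policy setting, the Lyapunov solve above is replaced by a least-squares problem built from measured state and input trajectories, which is where the stated rank condition on the collected data comes in.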


2007 ◽  
Vol 03 (02) ◽  
pp. 259-269 ◽  
Author(s):  
AREEG ABDALLA ◽  
JAMES BUCKLEY

In this paper, we consider a two-person zero-sum game with fuzzy payoffs and fuzzy mixed strategies for both players. We define the fuzzy value of the game for each player and an optimal fuzzy mixed strategy for each player. We then employ our fuzzy Monte Carlo method to produce, for an example fuzzy game, approximate solutions for the fuzzy values for Player I and Player II, together with approximate optimal fuzzy mixed strategies for both players. Finally, we compare the two fuzzy values to see whether a Minimax theorem (equality of the two values) holds for this fuzzy game.
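The sketch below illustrates the Monte Carlo search idea on a crisp matrix game: random mixed strategies are sampled for each player, and the best guaranteed payoffs are tracked as lower and upper estimates of the game value. The payoff matrix, sample count, and crisp setting are assumptions; the authors' fuzzy Monte Carlo method instead samples fuzzy mixed strategies and compares the resulting fuzzy values.

```python
# Hedged sketch: Monte Carlo approximation of the lower and upper values of
# a crisp matrix game. This only mirrors the search idea, not the fuzzy
# arithmetic used by Abdalla and Buckley.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[3.0, -1.0], [-2.0, 4.0]])    # illustrative payoff matrix

def random_mixed_strategy(n):
    """Sample a point uniformly from the probability simplex."""
    return rng.dirichlet(np.ones(n))

best_I, best_II = -np.inf, np.inf
for _ in range(20000):
    x = random_mixed_strategy(A.shape[0])   # Player I mixes over rows
    y = random_mixed_strategy(A.shape[1])   # Player II mixes over columns
    best_I = max(best_I, (x @ A).min())     # payoff I can guarantee with x
    best_II = min(best_II, (A @ y).max())   # payoff II can hold I to with y

# If a Minimax theorem holds, the two estimates approach a common value.
print("Player I  (lower) value estimate:", best_I)
print("Player II (upper) value estimate:", best_II)
```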


2020 ◽  
Vol 5 (6) ◽  
pp. 7467-7479
Author(s):  
Jamilu Adamu ◽  
Kanikar Muangchoo ◽  
Abbas Ja’afaru Badakaya ◽  
Jewaidu Rilwan ◽  
...  

Aerospace ◽  
2021 ◽  
Vol 8 (10) ◽  
pp. 299
Author(s):  
Bin Yang ◽  
Pengxuan Liu ◽  
Jinglang Feng ◽  
Shuang Li

This paper presents a novel and robust two-stage pursuit strategy for incomplete-information impulsive space pursuit-evasion missions under the J2 perturbation. The strategy first divides the impulsive pursuit-evasion game into a far-distance rendezvous stage and a close-distance game stage according to the perception range of the evader. The far-distance rendezvous stage is transformed into a rendezvous trajectory optimization problem, and a new objective function is proposed to obtain the pursuit trajectory with the optimal terminal pursuit capability. For the close-distance game stage, a closed-loop pursuit approach based on a reinforcement learning algorithm, the deep deterministic policy gradient (DDPG) algorithm, is proposed to solve and update the pursuit trajectory under incomplete information. The feasibility of the strategy and its robustness to different initial states of the pursuer and evader and to different evasion strategies are demonstrated for sun-synchronous orbit pursuit-evasion scenarios. Monte Carlo tests show that the successful pursuit ratio of the proposed method exceeds 91% for all the given scenarios.
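As background on the learning machinery of the close-distance stage, the following sketch performs one DDPG critic/actor update on synthetic transition data. The state and action dimensions, network sizes, and hyperparameters are assumptions for illustration; the paper's actual formulation (J2-perturbed relative dynamics, impulsive delta-v actions, and reward design) is not reproduced here.

```python
# Hedged sketch: a single DDPG critic/actor update on synthetic transitions.
# All dimensions and hyperparameters below are assumptions, not the paper's.
import copy
import torch
import torch.nn as nn

state_dim, action_dim, batch, gamma, tau = 6, 3, 64, 0.99, 0.005

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

# Synthetic replay batch (in practice: sampled pursuit-evasion transitions).
s = torch.randn(batch, state_dim)
a = torch.randn(batch, action_dim).clamp(-1, 1)
r = torch.randn(batch, 1)
s2 = torch.randn(batch, state_dim)
done = torch.zeros(batch, 1)

# Critic update: regress Q(s, a) toward the bootstrapped target.
with torch.no_grad():
    target_q = r + gamma * (1 - done) * critic_t(torch.cat([s2, actor_t(s2)], dim=1))
critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), target_q)
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

# Actor update: ascend the critic's estimate of Q(s, actor(s)).
actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Soft (Polyak) update of the target networks.
for net, net_t in ((actor, actor_t), (critic, critic_t)):
    for p, p_t in zip(net.parameters(), net_t.parameters()):
        p_t.data.mul_(1 - tau).add_(tau * p.data)
```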

