Analysis and mathematical computation of some dynamic function for strontium stannate
2021, Vol 583 (1), pp. 243-251. Author(s): Ali Riza Askun

1950, Vol 1 (1), pp. 429-438. Author(s): M. V. Wilkes

2019, Vol 23 (1), pp. 28-40. Author(s): Yong Dou, Kiran Dhatt-Gauthier, Kyle J.M. Bishop

2019, Vol 157, pp. 1-13. Author(s): Meysam Asgari-Chenaghlu, Mohammad-Ali Balafar, Mohammad-Reza Feizi-Derakhshi

2021. Author(s): Agnieszka Tymula, Yuri Imaizumi, Takashi Kawai, Jun Kunimatsu, Masayuki Matsumoto, ...

Research in behavioral economics and reinforcement learning has given rise to two influential theories describing human economic choice under uncertainty. The first, prospect theory, assumes that decision-makers use static mathematical functions, utility and probability weighting, to calculate the values of alternatives. The second, reinforcement learning theory, posits that dynamic mathematical functions update the values of alternatives based on experience through reward prediction error (RPE). To date, these theories have been examined in isolation, without reference to one another. It therefore remains unclear whether RPE affects a decision-maker's utility and/or probability weighting functions, or whether these functions are indeed static, as prospect theory assumes. Here, we propose a dynamic prospect theory model that combines prospect theory and RPE, and test this combined model using choice data on the gambling behavior of captive macaques. We found that under standard prospect theory, monkeys, like humans, had a concave utility function. Unlike humans, monkeys exhibited a concave, rather than inverse-S-shaped, probability weighting function. Our dynamic prospect theory model revealed that probability distortions, not the utility of rewards, varied solely and systematically with RPE: after a positive RPE, the estimated probability weighting functions became more concave, suggesting a more optimistic belief about receiving rewards and over-weighted subjective probabilities at all probability levels. Thus, probability perceptions in laboratory monkeys are not static even after extensive training, and are governed by a dynamic function well captured by the algorithmic features of reinforcement learning. This evidence supports combining these two major theories to capture choice behavior under uncertainty.
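The mechanism the abstract describes, a prospect-theory valuation whose probability weighting is nudged trial by trial by reward prediction errors, can be sketched in a few lines of Python. Everything below is illustrative: the power-form utility and weighting functions, the parameter values, and the RPE-coupled update rule are assumptions chosen for exposition (a power weighting with exponent below 1 is concave and over-weights all probabilities, matching the pattern reported), not the authors' fitted model.

```python
def utility(x, alpha=0.7):
    """Concave utility, u(x) = x**alpha with alpha < 1 (assumed form)."""
    return x ** alpha

def prob_weight(p, gamma=0.8):
    """Power weighting, w(p) = p**gamma. With gamma < 1 the curve is
    concave and w(p) > p for all p in (0, 1), i.e. every probability
    is over-weighted."""
    return p ** gamma

def subjective_value(x, p, alpha=0.7, gamma=0.8):
    """Prospect-theory value of a gamble: w(p) * u(x)."""
    return prob_weight(p, gamma) * utility(x, alpha)

def update_gamma(gamma, rpe, eta=0.1):
    """Illustrative RPE-coupled update (an assumption): a positive RPE
    lowers gamma, making the weighting function more concave, i.e.
    the belief about receiving rewards more optimistic."""
    return max(0.05, gamma - eta * rpe)

# One gambling trial: offered reward x with probability p.
x, p, gamma = 1.0, 0.5, 0.8
v = subjective_value(x, p, gamma=gamma)   # expected subjective value
reward = 1.0                              # the gamble paid out
rpe = reward - v                          # reward prediction error > 0
gamma = update_gamma(gamma, rpe)          # weighting grows more concave
```

Under these assumptions a win produces a positive RPE, which in turn shifts the next trial's weighting function toward optimism; a static prospect-theory model would keep gamma fixed across trials.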


2018, pp. 1-27. Author(s): Pavel Vyacheslavovich Kurakin, Georgii Gennadyevich Malinetskii, Nikolay Alexeevich Mitin

2018, Vol 12 (8), pp. 965-969. Author(s): Peng Zan, Yankai Liu, Meihan Chang
