Dynamic programming principle and associated Hamilton-Jacobi-Bellman equation for stochastic recursive control problem with non-Lipschitz aggregator

2018, Vol 24 (1), pp. 355-376
Author(s): Jiangyan Pu, Qi Zhang

In this work we study the stochastic recursive control problem in which the aggregator (or generator) of the backward stochastic differential equation describing the running cost is continuous and monotonic with respect to the first unknown variable, but not necessarily Lipschitz with respect to the first unknown variable or the control. In this setting, the dynamic programming principle and the connection between the value function and the viscosity solution of the associated Hamilton-Jacobi-Bellman equation are established by means of a generalized comparison theorem for backward stochastic differential equations and the stability of viscosity solutions. Finally, we take the control problem of continuous-time Epstein–Zin utility with a non-Lipschitz aggregator as an example to demonstrate the application of our results.
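For orientation, the objects involved can be sketched as follows; this is a schematic rendering under standard conventions, not the paper's exact formulation. The recursive running cost is the solution of a controlled backward stochastic differential equation,
\[
Y_t^{u} = \Phi(X_T^{u}) + \int_t^T f\big(s, X_s^{u}, Y_s^{u}, Z_s^{u}, u_s\big)\,ds - \int_t^T Z_s^{u}\,dW_s ,
\]
the value function is $V(t,x) = \sup_u Y_t^{u}$ with $X_t^{u} = x$, and the associated Hamilton-Jacobi-Bellman equation reads
\[
\partial_t V(t,x) + \sup_{u \in U}\Big\{ \mathcal{L}^{u} V(t,x) + f\big(t, x, V(t,x), \sigma^{\top}(t,x,u) D_x V(t,x), u\big) \Big\} = 0, \qquad V(T,x) = \Phi(x),
\]
where $\mathcal{L}^{u}$ denotes the generator of the controlled diffusion. A standard example of a continuous, monotone, non-Lipschitz aggregator is the continuous-time Epstein–Zin aggregator, in one common Duffie–Epstein normalization,
\[
f(c, v) = \frac{\delta}{1 - 1/\psi}\,(1-\gamma)\,v \left[ \left( \frac{c}{\big((1-\gamma)v\big)^{1/(1-\gamma)}} \right)^{1 - 1/\psi} - 1 \right],
\]
with discount rate $\delta$, relative risk aversion $\gamma$, and elasticity of intertemporal substitution $\psi$; the power of $v$ is what destroys the Lipschitz property in the value variable.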

1984, Vol 16 (1), pp. 16-16
Author(s): Domokos Vermes

We consider the optimal control of deterministic processes with countably many (non-accumulating) random jumps. A necessary and sufficient optimality condition can be given in the form of a Hamilton-Jacobi-Bellman equation, which in the case considered is a functional differential equation with boundary conditions. Its solution, the value function, is continuously differentiable along the deterministic trajectories if only the random jumps are controlled; in the general case, i.e. when both the deterministic motion and the random jumps are controlled, it can be represented as a supremum of smooth subsolutions (cf. the survey by M. H. A. Davis (p. 14)).
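For such piecewise-deterministic dynamics the Hamilton-Jacobi-Bellman equation takes, schematically (a generic form for controlled jump-deterministic processes, not the precise equation of the abstract), the shape
\[
\sup_{a \in A} \Big\{ g(x,a) \cdot \nabla V(x) + \lambda(x,a) \int_E \big[ V(y) - V(x) \big]\, Q(dy \mid x, a) + \ell(x,a) \Big\} = 0,
\]
where $g$ drives the deterministic flow between jumps, $\lambda$ is the jump intensity, $Q$ the post-jump distribution, and $\ell$ the running reward; the nonlocal integral term is what makes the equation a functional differential equation rather than a pure PDE.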


2021, Vol 2021, pp. 1-17
Author(s): J. Y. Li, M. N. Tang

In this paper, we study a two-player zero-sum stochastic differential game with regime switching in the framework of forward-backward stochastic differential equations on a finite time horizon. By means of backward stochastic differential equation methods, in particular the notion of stochastic backward semigroups, we prove a dynamic programming principle for both the upper and the lower value functions of the game. Based on the dynamic programming principle, the upper and the lower value functions are shown to be the unique viscosity solutions of the associated upper and lower Hamilton–Jacobi–Bellman–Isaacs equations.
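In outline (a generic formulation under standard conventions for such games, not quoted from the paper): with nonanticipative strategies $\beta$ for one player and controls $u$ for the other, the lower and upper value functions are of the form
\[
W(t,x) = \operatorname*{ess\,inf}_{\beta} \, \operatorname*{ess\,sup}_{u} \, Y_t^{t,x;\,u,\beta(u)}, \qquad U(t,x) = \operatorname*{ess\,sup}_{\alpha} \, \operatorname*{ess\,inf}_{v} \, Y_t^{t,x;\,\alpha(v),v},
\]
and under regime switching with rate matrix $(q_{ij})$ the lower Hamilton–Jacobi–Bellman–Isaacs system couples one equation per regime $i$,
\[
\partial_t W_i(t,x) + \sup_{u}\inf_{v} H_i\big(t,x,W_i,D_x W_i, D_x^2 W_i; u,v\big) + \sum_{j \ne i} q_{ij}\big[ W_j(t,x) - W_i(t,x) \big] = 0,
\]
with the upper system obtained by exchanging $\sup$ and $\inf$; the two value functions coincide whenever the Isaacs condition $\sup_u \inf_v H = \inf_v \sup_u H$ holds.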


Author(s): Juan Li, Wenqiang Li, Qingmeng Wei

By introducing a stochastic differential game whose dynamics and multi-dimensional cost functionals form a multi-dimensional coupled forward-backward stochastic differential equation with jumps, we give a probabilistic interpretation to a system of coupled Hamilton-Jacobi-Bellman-Isaacs equations. For this, we generalize the definition of the lower value function, initially defined only for deterministic times $t$ and states $x$, to stopping times $\tau$ and random variables $\eta \in L^2(\Omega, \mathcal{F}_\tau, P; \mathbb{R})$. This generalization plays a key role in the proof of a strong dynamic programming principle, which in turn allows us to show that the lower value function is a viscosity solution of our system of multi-dimensional coupled Hamilton-Jacobi-Bellman-Isaacs equations. Uniqueness is obtained for a particular but important case.
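Schematically (a plausible form of the extension described, under the usual conventions for such games, not a quotation from the paper): the classical lower value
\[
W(t,x) = \operatorname*{ess\,inf}_{\beta}\, \operatorname*{ess\,sup}_{u}\, Y_t^{t,x;\,u,\beta(u)}
\]
is extended by replacing the deterministic initial pair $(t,x)$ with a stopping time $\tau$ and an $\mathcal{F}_\tau$-measurable square-integrable state $\eta$,
\[
W(\tau,\eta) = \operatorname*{ess\,inf}_{\beta}\, \operatorname*{ess\,sup}_{u}\, Y_\tau^{\tau,\eta;\,u,\beta(u)},
\]
so that the strong dynamic programming principle can be stated at random intermediate times: for stopping times $\tau \le \sigma \le T$, the value $W(\tau,\eta)$ is recovered by running the stochastic backward semigroup from $\sigma$ with terminal data $W(\sigma, X_\sigma^{\tau,\eta})$.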

