A modified pseudospectral method for indirect solving a class of switching optimal control problems

Author(s):  
Mohammad A Mehrpouya

In the present paper, an efficient pseudospectral method for solving the Hamiltonian boundary value problems arising from a class of switching optimal control problems is presented. For this purpose, the first-order necessary conditions of optimality are derived from Pontryagin's minimum principle. Then, by partitioning the time interval of the problem under study into subintervals, the states (and costates) are approximated on each subinterval by piecewise interpolating polynomials based on Legendre–Gauss–Radau points, and the control is approximated by a piecewise constant function. As a result, the solution of the problem is reduced to the solution of a system of algebraic equations in which the values of the states (and costates) and controls at the Legendre–Gauss–Radau points, as well as the switching and terminal points, are treated as unknowns. Numerical examples are presented at the end to show the efficiency of the proposed method.
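
The collocation machinery behind such a scheme can be sketched compactly. The following Python snippet is a minimal illustration, not the paper's implementation: it computes the Legendre–Gauss–Radau points, their quadrature weights, and a Lagrange differentiation matrix, which are the building blocks a pseudospectral discretization assembles into algebraic collocation equations. The function names and the checks at the end are illustrative choices.

```python
# Minimal sketch (not the paper's implementation) of the Legendre-Gauss-Radau
# (LGR) ingredients of a pseudospectral collocation scheme: the N LGR points
# on [-1, 1), their quadrature weights, and a differentiation matrix.
import numpy as np
from numpy.polynomial.legendre import Legendre, legval

def lgr_points_weights(N):
    """LGR points: zeros of P_{N-1}(x) + P_N(x); the left endpoint -1 is included."""
    poly = Legendre.basis(N - 1) + Legendre.basis(N)
    x = np.sort(poly.roots().real)
    # Quadrature weights: w_i = (1 - x_i) / (N^2 * P_{N-1}(x_i)^2), with w_0 = 2/N^2.
    PNm1 = legval(x, [0] * (N - 1) + [1])          # P_{N-1} evaluated at the nodes
    w = (1.0 - x) / (N**2 * PNm1**2)
    w[0] = 2.0 / N**2
    return x, w

def differentiation_matrix(x):
    """Differentiation matrix of the Lagrange interpolant through the nodes x."""
    N = len(x)
    c = np.array([np.prod([x[i] - x[j] for j in range(N) if j != i]) for i in range(N)])
    D = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                D[i, j] = c[i] / (c[j] * (x[i] - x[j]))
    np.fill_diagonal(D, -D.sum(axis=1))            # rows sum to zero (derivative of a constant)
    return D

if __name__ == "__main__":
    x, w = lgr_points_weights(5)
    print("quadrature of x^4 on [-1,1]:", w @ x**4, "(exact 0.4)")
    D = differentiation_matrix(x)
    print("max error of d/dx x^3 at the nodes:", np.abs(D @ x**3 - 3 * x**2).max())
```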

Author(s):  
Mahmood Dadkhah
Kamal Mamehrashi

In this paper, a numerical technique based on the Hartley series for solving a class of time-delayed optimal control problems (TDOCPs) is introduced. The main idea is to convert such TDOCPs into a system of algebraic equations. Thus, we first expand the state and control variables in terms of the Hartley series with undetermined coefficients; the delay terms in the problem under consideration are likewise expanded in terms of the Hartley series. By applying the operational matrices of the Hartley series (integration, differentiation, dual, product, and delay) and substituting the estimated functions into the cost function, the given TDOCP is reduced to a system of algebraic equations to be solved. The convergence of the proposed method is investigated in detail. Finally, the precision and applicability of the proposed method are studied through different types of numerical examples.
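
As a rough illustration of two of the ingredients the method rests on, the sketch below (an assumption-laden toy, not the authors' code) expands a function in a truncated Hartley (cas) series and realizes a time delay as a matrix acting on the coefficient vector. The basis size, the test function, and the periodic-extension treatment of the delay are assumptions made only for this example.

```python
# Minimal sketch: truncated Hartley (cas) expansion and a delay matrix that maps
# the coefficients of f(t) to those of f(t - tau), assuming 1-periodicity on [0, 1).
import numpy as np

def cas(x):
    """Hartley kernel cas(x) = cos(x) + sin(x)."""
    return np.cos(x) + np.sin(x)

FREQS = np.arange(-8, 9)                       # frequencies n of the basis cas(2*pi*n*t)

def hartley_coeffs(f, n_quad=4000):
    """Coefficients c_n = integral_0^1 f(t) cas(2*pi*n*t) dt (uniform-grid quadrature)."""
    t = np.linspace(0.0, 1.0, n_quad, endpoint=False)
    ft = f(t)
    return np.array([np.mean(ft * cas(2 * np.pi * n * t)) for n in FREQS])

def hartley_eval(c, t):
    """Evaluate the truncated series sum_n c_n cas(2*pi*n*t)."""
    return sum(cn * cas(2 * np.pi * n * t) for cn, n in zip(c, FREQS))

def delay_matrix(tau):
    """D with (D @ c) the coefficients of f(t - tau); uses cas(a - b) = cos(b)cas(a) - sin(b)cas(-a)."""
    D = np.zeros((len(FREQS), len(FREQS)))
    pos = {int(n): i for i, n in enumerate(FREQS)}
    for n in FREQS:
        D[pos[int(n)], pos[int(n)]] = np.cos(2 * np.pi * n * tau)
        D[pos[int(n)], pos[int(-n)]] += np.sin(2 * np.pi * n * tau)
    return D

if __name__ == "__main__":
    f = lambda t: np.sin(2 * np.pi * t) ** 2          # smooth 1-periodic test function
    c = hartley_coeffs(f)
    t = np.linspace(0.0, 1.0, 200)
    print("expansion error:", np.abs(hartley_eval(c, t) - f(t)).max())
    tau = 0.2
    d = delay_matrix(tau) @ c
    print("delay error:", np.abs(hartley_eval(d, t) - f(t - tau)).max())
```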


Author(s):  
Hossein Hassani
Zakieh Avazzadeh
José António Tenreiro Machado

This paper studies two-dimensional variable-order fractional optimal control problems (2D-VFOCPs) whose dynamic constraints contain partial differential equations such as the convection–diffusion, diffusion-wave, and Burgers' equations. The variable-order time fractional derivative is described in the Caputo sense. To overcome computational difficulties, a novel numerical method based on transcendental Bernstein series (TBS) is proposed. In fact, we generalize the Bernstein polynomials to a larger class of functions that can provide more accurate approximate solutions. We introduce the TBS and their properties, and subsequently demonstrate the advantages and effectiveness of these functions. Furthermore, we describe the approximation procedure, showing how the basis functions needed for solving 2D-VFOCPs can be determined. To do this, we first derive a number of new operational matrices of the TBS. Second, the state and control functions are expanded in terms of the TBS with unknown free coefficients and control parameters. Then, based on these operational matrices and the method of Lagrange multipliers, an optimization method is presented to obtain an approximate solution of the state and control functions. Additionally, the convergence of the proposed method is analyzed. The results for several illustrative examples show that the proposed method is efficient and accurate.
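
For orientation, the snippet below sketches only the classical Bernstein side of this construction: it builds the Bernstein basis on [0, 1] and fits free coefficients to a target function by least squares. The transcendental Bernstein series of the paper enriches this basis with additional control parameters, which are not reproduced here; the degree and the target function are illustrative assumptions.

```python
# Minimal sketch (not the paper's TBS implementation): classical Bernstein basis
# on [0, 1] and a least-squares fit of the free expansion coefficients.
import numpy as np
from math import comb

def bernstein_basis(n, t):
    """Columns are B_{i,n}(t) = C(n,i) t^i (1-t)^(n-i), i = 0..n."""
    t = np.asarray(t, dtype=float)
    return np.column_stack([comb(n, i) * t**i * (1.0 - t)**(n - i) for i in range(n + 1)])

def fit_coefficients(f, n, n_pts=200):
    """Least-squares coefficients of the degree-n Bernstein expansion of f."""
    t = np.linspace(0.0, 1.0, n_pts)
    B = bernstein_basis(n, t)
    coeffs, *_ = np.linalg.lstsq(B, f(t), rcond=None)
    return coeffs

if __name__ == "__main__":
    f = lambda t: np.exp(-t) * np.sin(3 * t)       # illustrative smooth target
    c = fit_coefficients(f, n=10)
    t = np.linspace(0.0, 1.0, 500)
    err = np.abs(bernstein_basis(10, t) @ c - f(t)).max()
    print("max approximation error with a degree-10 basis:", err)
```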


Author(s):  
Inseok Hwang
Jinhua Li
Dzung Du

A novel numerical method based on the differential transformation is proposed in this paper for solving nonlinear optimal control problems. The differential transformation is a linear operator that transforms a function from the original time and/or space domain into another domain in order to simplify the differential calculations. The optimality conditions for the optimal control problems can be represented by algebraic and differential equations. Using the differential transformation, these algebraic and differential equations with their boundary conditions are first converted into a system of nonlinear algebraic equations. The numerical optimal solutions are then obtained in the form of finite-term Taylor series by solving this system. The differential transformation algorithm is similar to spectral element methods in that the computational region is split into several subregions, but it uses high-degree polynomials while keeping the number of subregions small. The algorithm can solve finite- (or infinite-) time horizon optimal control problems formulated either as algebraic and ordinary differential equations using Pontryagin's minimum principle or as the Hamilton–Jacobi–Bellman partial differential equation using dynamic programming, within one unified framework. In addition, it can efficiently solve optimal control problems with piecewise continuous dynamics and/or nonsmooth controls. The performance is demonstrated through illustrative examples.
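
The core transform rules (derivatives become index shifts, products become convolutions of the Taylor coefficients) can be seen on a one-line ODE. The sketch below is a minimal illustration, not the authors' framework: it applies the differential transformation to the scalar problem x' = -x^2, x(0) = 1, whose exact solution 1/(1+t) makes the accuracy of the recovered Taylor series easy to check. The test problem and the truncation order are assumptions made for the example.

```python
# Minimal sketch of the differential transformation: X(k) is the k-th Taylor
# coefficient of x(t) about t0 = 0, x'(t) -> (k+1) X(k+1), and x*x -> convolution,
# so the ODE x' = -x^2, x(0) = 1 becomes an algebraic recurrence.
import numpy as np

def differential_transform_riccati(n_terms):
    """Taylor coefficients of the solution of x' = -x^2, x(0) = 1."""
    X = np.zeros(n_terms)
    X[0] = 1.0                                   # initial condition x(0) = 1
    for k in range(n_terms - 1):
        conv = sum(X[m] * X[k - m] for m in range(k + 1))
        X[k + 1] = -conv / (k + 1)               # from (k+1) X(k+1) = -(x*x)(k)
    return X

def taylor_eval(X, t):
    """Inverse transform: x(t) = sum_k X(k) t^k."""
    return sum(Xk * t**k for k, Xk in enumerate(X))

if __name__ == "__main__":
    X = differential_transform_riccati(30)
    for t in (0.1, 0.5):
        print(f"t={t}: series={taylor_eval(X, t):.8f}, exact={1 / (1 + t):.8f}")
```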


Author(s):  
Martin Gugat

In this paper the turnpike phenomenon is studied for optimal control problems in which both pointwise-in-time state and control constraints can appear. We assume that the objective function contains a tracking term, given as an integral over the time interval $[0, T]$, that measures the distance to a desired stationary state. In the optimal control problem, both the initial state and the desired terminal state are prescribed. We assume that the system is exactly controllable, in an abstract sense, if the time horizon is long enough. We show that the corresponding optimal control problems on the time intervals $[0, T]$ give rise to a turnpike structure in the sense that, for natural numbers $n$, if $T$ is sufficiently large, the contribution to the objective function from subintervals of $[0, T]$ of the form $[t - t/2^n,\; t + (T-t)/2^n]$ is of the order $1/\min\{t^n, (T-t)^n\}$. We also show that a similar result holds for $\epsilon$-optimal solutions of the optimal control problems if $\epsilon > 0$ is chosen sufficiently small. At the end of the paper we present both systems governed by ordinary differential equations and systems governed by partial differential equations to which the results can be applied.
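
As a concrete and much simpler illustration of turnpike behaviour (a toy scalar LQ problem, not the abstract constrained setting of the paper), the sketch below solves min of the integral of (x^2 + u^2) over [0, T] subject to x' = u with prescribed x(0) and x(T). Pontryagin's conditions reduce this to x'' = x, and the optimal trajectory stays exponentially close to the optimal steady state x = 0 in the middle of [0, T] as T grows.

```python
# Toy illustration of the turnpike structure (an assumption-laden example, not
# the paper's setting): for min \int_0^T (x^2 + u^2) dt, x' = u, x(0) = x0,
# x(T) = xT, the optimality system gives x'' = x, so x(t) = A e^t + B e^{-t}.
import numpy as np

def optimal_state(T, x0, xT):
    """Return the optimal state trajectory t -> x(t) of the toy LQ problem."""
    M = np.array([[1.0, 1.0], [np.exp(T), np.exp(-T)]])
    A, B = np.linalg.solve(M, np.array([x0, xT]))
    return lambda t: A * np.exp(t) + B * np.exp(-t)

if __name__ == "__main__":
    x0, xT = 2.0, 1.0
    for T in (5.0, 10.0, 20.0, 40.0):
        x = optimal_state(T, x0, xT)
        print(f"T={T:5.1f}: |x(T/2)| = {abs(x(T / 2)):.3e}")   # decays like e^{-T/2}
```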

