A Fast Solver for an H1-Regularized PDE-Constrained Optimization Problem

2016 ◽  
Vol 19 (1) ◽  
pp. 143-167 ◽  
Author(s):  
Andrew T. Barker ◽  
Tyrone Rees ◽  
Martin Stoll

Abstract: In this paper we consider PDE-constrained optimization problems which incorporate an H1 regularization control term. We focus on a time-dependent PDE and consider both distributed and boundary control. The problems we consider include bound constraints on the state, which we handle with a Moreau-Yosida penalty function. We propose Krylov solvers and Schur complement preconditioning strategies for the different problems and illustrate their performance with numerical examples.
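For orientation, a schematic of this problem class and the resulting linear algebra is sketched below in LaTeX. The symbols (y, u, y_d, y_b, beta, epsilon) and the heat-equation constraint are illustrative assumptions, not the authors' exact formulation.

```latex
% Schematic of the problem class described in the abstract (symbols and the
% heat-equation constraint are illustrative, not the authors' formulation):
\min_{y,\,u}\; \tfrac12\,\|y - y_d\|_{L^2}^2
  \;+\; \tfrac{\beta}{2}\,\|u\|_{H^1}^2
  \;+\; \tfrac{1}{2\varepsilon}\,\|\max(0,\, y - y_b)\|_{L^2}^2
\quad\text{subject to}\quad y_t - \Delta y = u .

% Discretizing the first-order optimality conditions typically yields a
% saddle-point system; the Schur complement S is what a block preconditioner
% must approximate:
\begin{pmatrix} A & B^{\mathsf T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} z \\ \lambda \end{pmatrix}
= \begin{pmatrix} b \\ c \end{pmatrix},
\qquad S = B A^{-1} B^{\mathsf T}.
```

Krylov methods suited to symmetric indefinite systems (e.g. MINRES) are typically applied to such saddle-point matrices, with the preconditioner built from cheap approximations of A and S.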

2013 ◽  
Vol 479-480 ◽  
pp. 861-864
Author(s):  
Yi Chih Hsieh ◽  
Peng Sheng You

In this paper, an artificial-evolution-based two-phase approach is proposed for solving nonlinear constrained optimization problems. In the first phase, an immune-based algorithm solves the nonlinear constrained optimization problem approximately. In the second phase, a procedure is presented to improve the solutions obtained in the first phase. Numerical results for two benchmark problems are reported and compared; the solutions obtained by the proposed approach are superior to the best solutions reported by typical approaches in the literature.
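A minimal Python sketch of a generic two-phase scheme of this kind (penalized global search followed by local refinement) is given below. The penalty form, population update, and test problem are illustrative assumptions, not the authors' immune-based algorithm.

```python
# A minimal sketch of a generic two-phase scheme for nonlinear constrained
# optimization: population-based global search, then local refinement.
# Illustrates the structure only; not the authors' immune-based algorithm.
import numpy as np

def penalized(f, gs, x, rho=1e3):
    """Objective plus a quadratic penalty on violated constraints g_i(x) <= 0."""
    viol = np.array([max(0.0, g(x)) for g in gs])
    return f(x) + rho * np.sum(viol ** 2)

def phase_one(f, gs, bounds, pop=50, gens=200, rng=np.random.default_rng(0)):
    """Crude evolutionary search: mutate the population, keep the best half."""
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        kids = np.clip(X + rng.normal(0, 0.1 * (hi - lo), X.shape), lo, hi)
        both = np.vstack([X, kids])
        scores = np.array([penalized(f, gs, x) for x in both])
        X = both[np.argsort(scores)[:pop]]
    return X[0]

def phase_two(f, gs, x0, steps=500, rng=np.random.default_rng(1)):
    """Local improvement: accept small perturbations that lower the penalized value."""
    x, best = x0.copy(), penalized(f, gs, x0)
    for _ in range(steps):
        cand = x + rng.normal(0, 1e-3, x.shape)
        val = penalized(f, gs, cand)
        if val < best:
            x, best = cand, val
    return x

# Example: minimize (x0-1)^2 + (x1-2)^2 subject to x0 + x1 <= 2.
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2
gs = [lambda x: x[0] + x[1] - 2]
bounds = (np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
x = phase_two(f, gs, phase_one(f, gs, bounds))
print(x)  # close to the constrained optimum (0.5, 1.5)
```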


2021 ◽  
Vol 13 (2) ◽  
pp. 90
Author(s):  
Bouchta RHANIZAR

We consider the constrained optimization problem defined by
$$f(x^*) = \min_{x \in X} f(x), \eqno(1)$$
where the function $f : \mathbb{R}^{n} \to \mathbb{R}$ is convex on a closed, bounded convex set $X$. To solve problem (1), most methods transform it into an unconstrained problem, either by introducing Lagrange multipliers or by using a projection method. The purpose of this paper is to give a new method for solving some constrained optimization problems, based on the definition of a descent direction and a step size that keep the iterates in the convex domain $X$. A convergence theorem is proved. The paper ends with some numerical examples.
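The following Python sketch illustrates one generic feasible-descent iteration of this flavor, where a step along a descent direction is shrunk until the trial point stays in X and decreases f. The membership test, backtracking rule, and example are assumptions for illustration, not the method proposed in the paper.

```python
# A minimal sketch of feasible descent on a closed convex set X: move along a
# descent direction and shrink the step until the trial point remains in X and
# improves f. Generic illustration only; not the paper's algorithm.
import numpy as np

def feasible_descent(f, grad, in_X, x0, step0=1.0, tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = -grad(x)                       # descent direction
        if np.linalg.norm(d) < tol:
            break
        t = step0
        # Backtrack until the trial point is feasible and decreases f.
        while t > 1e-12 and (not in_X(x + t * d) or f(x + t * d) >= f(x)):
            t *= 0.5
        if t <= 1e-12:
            break
        x = x + t * d
    return x

# Example: f(x) = ||x - (2, 2)||^2 on the unit ball X = {x : ||x|| <= 1}.
f = lambda x: np.sum((x - np.array([2.0, 2.0])) ** 2)
grad = lambda x: 2.0 * (x - np.array([2.0, 2.0]))
in_X = lambda x: np.linalg.norm(x) <= 1.0
print(feasible_descent(f, grad, in_X, np.zeros(2)))  # approaches (1/sqrt(2), 1/sqrt(2))
```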


Author(s):  
Xinghuo Yu ◽  
Weixing Zheng ◽  
Baolin Wu ◽  
Xin Yao ◽  
...  

In this paper, a novel penalty function approach is proposed for constrained optimization problems with linear and nonlinear constraints. It is shown that by using a mapping function to "wrap" the constraints, a constrained optimization problem can be converted into an unconstrained optimization problem. It is also proved that the best solution of the converted unconstrained problem approaches the best solution of the constrained problem as the tuning parameter of the wrapping function approaches zero. A tailored genetic algorithm incorporating an adaptive tuning method is then used to search for the global optima of the converted unconstrained problems. Four test examples demonstrate the effectiveness of the approach.
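The classical exterior-penalty version of this idea can be written as follows; the quadratic penalty form below is a standard illustration, not the paper's specific wrapping function.

```latex
% Classical exterior-penalty conversion (illustrative; the paper uses its own
% "wrapping" function): for a tuning parameter r > 0, define
F_r(x) \;=\; f(x)
  \;+\; \frac{1}{r}\sum_i \bigl[\max\bigl(0,\, g_i(x)\bigr)\bigr]^2
  \;+\; \frac{1}{r}\sum_j h_j(x)^2 ,
% so that, under standard assumptions,
\arg\min_{x} F_r(x) \;\longrightarrow\; x^* \quad\text{as } r \to 0^+,
% where x^* solves \min f(x) subject to g_i(x) \le 0 and h_j(x) = 0.
```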

