An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem

2019, Vol. 98, pp. 74-80
Author(s): Qun Li, Bing Zheng, Yutao Zheng
2021, Vol. 2021, pp. 1-9
Author(s): Ahmad Alhawarat, Thoi Trung Nguyen, Ramadan Sabra, Zabidin Salleh

To solve unconstrained optimization problems, a conjugate gradient (CG) method is commonly used, since it does not require the memory or storage for second derivatives demanded by Newton's method or the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. Recently, a modification of the Polak–Ribière method with a new restart condition, called the AZPRP method, was proposed. In this paper, we propose a new modification of the AZPRP CG method for solving large-scale unconstrained optimization problems, based on a modified restart condition. The new parameter satisfies the descent property, and global convergence is established under the strong Wolfe–Powell line search. The numerical results show that the new CG method is highly competitive with the CG_Descent method. The comparisons, covering the number of iterations and CPU time, are made on a set of more than 140 standard test functions from the CUTEst library.
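The abstract describes a Polak–Ribière-type CG iteration combined with a strong Wolfe line search and a restart condition. As a rough illustration, here is a minimal Python sketch of such a loop; the AZPRP parameter and the paper's exact restart condition are not reproduced, so the PRP+ formula and the Powell-style orthogonality restart below are stand-in assumptions, and SciPy's strong-Wolfe line search stands in for the authors' implementation.

    # Sketch of a PRP-type CG loop with a strong Wolfe line search.
    import numpy as np
    from scipy.optimize import line_search

    def prp_cg(f, grad, x0, tol=1e-6, max_iter=1000):
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g                                   # start with steepest descent
        for _ in range(max_iter):
            if np.linalg.norm(g, np.inf) <= tol:
                break
            # Strong Wolfe line search (c2 = 0.1 is customary for CG).
            alpha = line_search(f, grad, x, d, gfk=g, c2=0.1)[0]
            if alpha is None:                    # search failed: restart
                d, alpha = -g, 1e-4
            x_new = x + alpha * d
            g_new = grad(x_new)
            # PRP+ parameter: nonnegative Polak-Ribiere-Polyak beta.
            beta = max(g_new @ (g_new - g) / (g @ g), 0.0)
            # Powell-style restart when successive gradients lose
            # orthogonality (a stand-in for the paper's restart condition).
            if abs(g_new @ g) >= 0.2 * (g_new @ g_new):
                beta = 0.0
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

Setting beta to zero at every step recovers steepest descent; the contribution of methods in this family lies precisely in the choice of beta and of the restart test, which this sketch only approximates.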


2017, Vol. 13 (4), pp. 588-592
Author(s): Muhammad Imza Fakhri, Mohd Rivaie Mohd Ali, Ibrahim Jusoh

Conjugate gradient (CG) methods are well-known methods for solving unconstrained optimization problems and are popular for their low memory requirements. Much research effort has been devoted to improving the efficiency of CG methods. In this paper, a new inexact line search based on the bisection line search is proposed. The bisection method is one of the simplest methods for finding a root of a function, which makes it a natural candidate for use within a CG method. The new modification, which generalizes the two-way split of bisection to n sections, is named the n-th section method. In short, the proposed method is promising and more efficient than the original bisection line search.
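Since the abstract names the method after splitting the bracket into n sections, a minimal sketch of such a derivative-based bracketing search is given below. It assumes the method shrinks the bracket by a factor of 1/n per iteration by scanning the section points for a sign change of phi'(t), where phi(t) = f(x + t*d); the paper's exact acceptance rule is not reproduced.

    # Sketch of an "n-th section" line search generalizing bisection.
    import numpy as np

    def nth_section_search(phi_prime, a=0.0, b=1.0, n=4, tol=1e-6, max_iter=100):
        """Locate a step t in [a, b] with phi'(t) close to 0.

        Assumes phi'(a) < 0 < phi'(b), i.e. a descent direction whose
        directional derivative changes sign inside the bracket.
        """
        for _ in range(max_iter):
            if b - a <= tol:
                break
            # Split [a, b] into n subintervals and keep the first one
            # across which the directional derivative changes sign.
            ts = np.linspace(a, b, n + 1)
            for lo, hi in zip(ts[:-1], ts[1:]):
                if phi_prime(lo) <= 0.0 <= phi_prime(hi):
                    a, b = lo, hi
                    break
        return 0.5 * (a + b)

With n = 2 this reduces to classical bisection on phi'; larger n shrinks the bracket faster per iteration at the cost of more derivative evaluations per step.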


2020, Vol. 34 (09), pp. 13620-13621
Author(s): Sören Laue, Matthias Mitterreiter, Joachim Giesen

Most problems from classical machine learning can be cast as optimization problems. We introduce GENO (GENeric Optimization), a framework that lets the user specify a constrained or unconstrained optimization problem in an easy-to-read modeling language. GENO then generates a solver, i.e., Python code, that can solve this class of optimization problems. The generated solver is usually as fast as hand-written, problem-specific, and well-engineered solvers. Often the solvers generated by GENO are faster by a large margin than recently developed solvers tailored to a specific problem class. An online interface to our framework can be found at http://www.geno-project.org.
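The abstract does not reproduce the modeling language itself. As an illustration only, a GENO-style specification of a ridge-regression problem might look like the sketch below; the section keywords and operator names are assumptions inferred from the project description and should be checked against http://www.geno-project.org.

    parameters
      Matrix X
      Vector y
    variables
      Vector w
    min
      norm2(X*w - y)^2 + norm2(w)^2

From such a declarative specification, GENO generates a standalone Python solver for the whole problem class, i.e., for any conforming values of the parameters X and y.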


Author(s): K. J. Kachiashvili

There are different methods of statistical hypothesis testing [1-4]; among them is the Bayesian approach. A generalization of the Bayesian rule for testing many hypotheses is given below. It increases the dimensionality of the decision rule with respect to the number of tested hypotheses, which allows decisions to be made in a more differentiated way than in the classical case and allows the problem to be stated as a constrained optimization problem rather than an unconstrained one. This makes it possible to give guaranteed bounds on the errors of rejecting true hypotheses, which is the key point in a number of practical problems. These generalizations are given both for sets of simple hypotheses, each containing a single point of the space, and for hypotheses containing a finite set of separated points.
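To make the contrast with the classical case concrete: the standard Bayesian rule minimizes an unconditional average risk, while the constrained statement bounds the averaged probability of rejecting true hypotheses. One plausible variant, written here as an assumption since the abstract does not state the problem explicitly (S simple hypotheses H_1, ..., H_S with priors p(H_i) and acceptance regions Gamma_i), is:

    % Illustrative constrained formulation (an assumption; the exact
    % statement is not given in the abstract): maximize the averaged
    % probability of accepting true hypotheses subject to a bound on the
    % averaged probability of rejecting them.
    \max_{\Gamma_1,\dots,\Gamma_S}\;
      \sum_{i=1}^{S} p(H_i) \int_{\Gamma_i} p(x \mid H_i)\, dx
    \quad \text{subject to} \quad
      \sum_{i=1}^{S} p(H_i) \int_{\mathbb{R}^n \setminus \Gamma_i} p(x \mid H_i)\, dx \le \alpha .

Solving such a constrained problem yields decision regions that guarantee the prescribed level alpha on the error of rejecting true hypotheses, which is consistent with the more differentiated decisions described above.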

