Abstract We introduce an algorithm for unconstrained optimization based on the transformation of the Newton method with line search into a gradient descent method. The main idea behind the construction of the algorithm is the approximation of the Hessian by an appropriate diagonal matrix. The steplength is computed from the Taylor expansion at two successive iterative points combined with a backtracking line search procedure. Linear convergence of the algorithm is proved for uniformly convex functions and for strictly convex quadratic functions satisfying specified conditions.
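The abstract describes the method only at a high level. The sketch below illustrates, under stated assumptions, how such a scheme can fit together: a gradient step scaled by a scalar (diagonal) Hessian surrogate, a curvature estimate obtained from the second-order Taylor expansion at two successive iterates, and a backtracking line search for the steplength. The function names, the Armijo parameters, and the particular update formula for the surrogate `gamma` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Illustrative sketch, not the paper's exact algorithm: a gradient method whose
# Hessian is replaced by a scalar surrogate gamma, refreshed at each step from
# the Taylor expansion written at two successive iterates.

def backtracking(f, x, g, d, t0=1.0, beta=0.5, sigma=1e-4, t_min=1e-12):
    """Backtracking line search: shrink t until the Armijo condition holds."""
    t = t0
    while t > t_min and f(x + t * d) > f(x) + sigma * t * np.dot(g, d):
        t *= beta
    return t

def diag_scaled_gradient_descent(f, grad, x0, tol=1e-6, max_iter=1000):
    """Gradient descent with a scalar Hessian surrogate gamma (i.e. gamma * I)."""
    x = np.asarray(x0, dtype=float)
    gamma = 1.0                          # initial Hessian surrogate
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g / gamma                   # quasi-Newton direction with diagonal approximation
        t = backtracking(f, x, g, d)
        x_new = x + t * d
        # Second-order Taylor expansion at the two successive iterates x and x_new:
        #   f(x_new) ~ f(x) + t g.d + 0.5 * gamma_new * t^2 ||d||^2,
        # solved for gamma_new as the new curvature estimate.
        num = 2.0 * (f(x_new) - f(x) - t * np.dot(g, d))
        den = t ** 2 * np.dot(d, d)
        gamma = num / den if num > 0 else 1.0   # keep the surrogate positive definite
        x = x_new
    return x

# Example: minimizing a strictly convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = diag_scaled_gradient_descent(lambda x: 0.5 * x @ A @ x - b @ x,
                                      lambda x: A @ x - b,
                                      np.zeros(2))
```

For strictly convex quadratics, as in the example above, the curvature estimate stays positive, so the scalar surrogate never needs the safeguarding reset shown in the code; the reset only matters for general nonconvex test functions.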
- Content Type Journal Article
- Category Original Paper
- DOI 10.1007/s11075-009-9350-8
- Authors
- Predrag S. Stanimirović, University of Niš, Department of Mathematics, Faculty of Science, Višegradska 33, 18000 Niš, Serbia
- Marko B. Miladinović, University of Niš, Department of Mathematics, Faculty of Science, Višegradska 33, 18000 Niš, Serbia
- Journal Numerical Algorithms
- Online ISSN 1572-9265
- Print ISSN 1017-1398