We consider trust region methods for seeking the unconstrained minimum of an objective function F(x), x ∈ R^n, when the gradient ∇F(x), x ∈ R^n, is available. The methods are iterative, with x_1 being given. The new vector of variables x_{k+1} is derived from a quadratic approximation to F that interpolates F(x_k) and ∇F(x_k), where k is the iteration number. The second derivative matrix of the quadratic approximation, B_k say, can be indefinite, because the approximation is employed only if the vector of variables x satisfies ||x − x_k|| ≤ Δ_k, where Δ_k is a "trust region radius" that is adjusted automatically. Thus the approximation is useful if ||∇F(x_k)|| is sufficiently large and if ||B_k|| and Δ_k are sufficiently small. It is proved under mild assumptions that the condition ||∇F(x_k)|| ≤ ε is achieved after a finite number of iterations, where ε is any given positive constant, and then it is usual to end the calculation. The assumptions include a Lipschitz condition on ∇F, and F also has to be bounded below. The termination property is established in a single theorem that applies to a wide range of trust region methods that force the sequence F(x_k), k = 1, 2, 3, ..., to decrease monotonically. Any choice of each symmetric matrix B_k is allowed, provided that ||B_k|| is bounded above by a constant multiple of k.
On the convergence of a wide range of trust region methods for unconstrained optimization
Friday, January 22, 2010