For large-scale problems, the CPU time of NTR is much longer than that of NLMTR, and on some problems NTR fails altogether.
In Section 3, a Newton-like trust region method for large-scale unconstrained nonconvex minimization is proposed, and its convergence is proved under reasonable assumptions. In this section, we derive a straightforward limited memory quasi-Newton update based on the modified quasi-Newton equation; it employs both gradients and function values to construct the approximate Hessian, compensating for the information discarded by limited memory techniques.
We then apply the derived formula within a trust region method.
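To make the idea concrete, the following is a minimal sketch (not the paper's exact formula) of a modified quasi-Newton secant condition of Zhang–Deng–Chen type, in which function values enter through a scalar correction to the gradient difference; the choice u = s in the correction term, and all function names, are illustrative assumptions. A nice sanity property is that the correction term vanishes on quadratic objectives, so the update reduces to the standard one there.

```python
import numpy as np

def modified_y(s, y, f_k, f_kp1, g_k, g_kp1):
    """Modified difference vector for the modified quasi-Newton
    (secant) equation  B_{k+1} s = y_hat, with
        theta = 6*(f_k - f_{k+1}) + 3*(g_k + g_{k+1})^T s,
        y_hat = y + (theta / s^T s) * s      (assumed choice u = s).
    For exact quadratics theta = 0 and y_hat = y."""
    theta = 6.0 * (f_k - f_kp1) + 3.0 * ((g_k + g_kp1) @ s)
    return y + (theta / (s @ s)) * s

def bfgs_update(B, s, y_hat):
    """Standard BFGS update of the Hessian approximation. With y_hat
    from modified_y it satisfies the modified secant equation
    B_new @ s = y_hat, provided s @ y_hat > 0."""
    Bs = B @ s
    return (B - np.outer(Bs, Bs) / (s @ Bs)
              + np.outer(y_hat, y_hat) / (s @ y_hat))

# Toy check on a separable exponential objective f(x) = sum(exp(x)).
f = lambda x: float(np.sum(np.exp(x)))
g = lambda x: np.exp(x)

x_k = np.zeros(3)
s = np.array([0.1, -0.2, 0.3])
x_kp1 = x_k + s
y = g(x_kp1) - g(x_k)

y_hat = modified_y(s, y, f(x_k), f(x_kp1), g(x_k), g(x_kp1))
B_new = bfgs_update(np.eye(3), s, y_hat)

print(np.allclose(B_new @ s, y_hat))  # modified secant equation holds
```

In a limited memory implementation one would store only the most recent few pairs (s, y_hat) and reconstruct Hessian–vector products from them, rather than forming B explicitly; the dense update above is kept only for clarity.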
All tests are implemented in Matlab R2008a on a PC with a 2.00 GHz CPU and 2.00 GB of RAM. All numerical results are listed in Table 2, where iter is the number of iterations (equal to the number of gradient evaluations), nf is the number of objective function evaluations, Prob is the problem label, Dim is the number of variables of the tested problem, and cpu is the CPU time for solving the problem. From Table 2 we can see that for small-scale problems, the optimal values and gradient norms obtained by NTR are more accurate than those of NLMTR.
The test problems for nonconvex unconstrained minimization are taken from Moré et al. For medium-scale problems, NTR is more accurate, but NLMTR requires less CPU time.
It is shown that the matrices generated have some desirable properties.
The resulting algorithms are tested numerically and compared with several well-known methods.
A new, straightforward limited memory quasi-Newton update based on the modified quasi-Newton equation is derived to construct the trust region subproblem, in which information from both function values and gradients is used to build the approximate Hessian. Numerical results indicate that the proposed method is competitive and efficient on some classical large-scale nonconvex test problems. Trust region methods [1–14] are robust, can be applied to ill-conditioned problems, and have strong global convergence properties.
Another advantage of trust region methods is that the approximate Hessian of the trust region subproblem need not be positive definite.
So, trust region methods are important and efficient for nonconvex optimization problems [6–8, 10, 12, 14]. Trust region methods that guarantee at least a Cauchy (steepest-descent-like) decrease at each iteration satisfy an evaluation complexity bound of the same order under identical conditions.
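The Cauchy decrease mentioned above can be illustrated with a short sketch. The following computes the classical Cauchy point for the quadratic trust region model (textbook formula; the specific test data are illustrative assumptions) and checks the standard model-decrease bound m(0) − m(p_C) ≥ ½‖g‖ min(Δ, ‖g‖/‖B‖). Note that B is deliberately indefinite here, matching the earlier remark that the approximate Hessian need not be positive definite.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Cauchy (steepest-descent) step for the trust region subproblem
        min_p  g @ p + 0.5 * p @ B @ p   s.t.  ||p|| <= delta.
    B may be indefinite. Along -g, if curvature g@B@g <= 0 the model
    decreases all the way to the boundary; otherwise the unconstrained
    minimizer along -g is clipped to the trust region."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    if gBg <= 0:
        tau = 1.0                       # negative curvature: step to the boundary
    else:
        tau = min(gnorm**3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

# Illustrative data: an indefinite model Hessian.
g = np.array([1.0, -2.0])
B = np.array([[2.0, 0.0],
              [0.0, -1.0]])
delta = 0.5

p = cauchy_point(g, B, delta)
decrease = -(g @ p + 0.5 * p @ B @ p)   # m(0) - m(p)
bound = 0.5 * np.linalg.norm(g) * min(delta,
        np.linalg.norm(g) / np.linalg.norm(B, 2))

print(decrease >= bound)                # Cauchy decrease bound holds
print(np.linalg.norm(p) <= delta + 1e-12)  # step stays in the trust region
```

Any trial step whose model decrease is at least a fixed fraction of this Cauchy decrease inherits the global convergence and complexity guarantees, which is why practical subproblem solvers only need to match the Cauchy point, not solve the subproblem exactly.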