On the optimization of constrained functions: comparison of sequential gradient-restoration algorithm and gradient-projection algorithm
Authors:
Abstract
The problem of minimizing a function f(x) subject to the nonlinear constraint ϕ(x) = 0 is considered, where f is a scalar, x is an n-vector, and ϕ is a q-vector, with q < n. The sequential gradient-restoration algorithm (SGRA: Miele, [1, 2]) and the gradient-projection algorithm (GPA: Rosen, [3, 4]) are considered. These algorithms have one common characteristic: both are composed of an alternating succession of gradient phases and restoration phases. However, they differ in several aspects, namely, (a) problem formulation, (b) structure of the gradient phase, and (c) structure of the restoration phase. First, a critical summary of SGRA and GPA is presented. Then, a comparison is undertaken by considering the speed of convergence and, above all, robustness (that is, the capacity of an algorithm to converge to a solution). The comparison is carried out on 16 numerical examples. In order to isolate the separate effects of characteristics (a), (b), (c), six new experimental algorithms are generated by combining parts of Miele's algorithm with parts of Rosen's algorithm, so that eight algorithms are investigated in total. The numerical results show that Miele's method is on the average faster than Rosen's method. More importantly, regarding robustness, Miele's method compares favorably with Rosen's method. The examples show that Miele's advantage in robustness becomes more prominent as the curvature of the constraint increases. While this advantage is due to the combined effect of characteristics (a), (b), (c), characteristic (c) plays the dominant role: Miele's restoration provides a better search direction as well as better step-size control than Rosen's restoration.
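The SGRA and GPA procedures themselves are given in the cited references. As a rough, hypothetical illustration of the alternating gradient-phase/restoration-phase structure described in the abstract, the following Python sketch minimizes f(x) subject to ϕ(x) = 0 by alternating a projected-gradient step with Newton-type restoration corrections. The function names, the fixed step size alpha, the iteration limits, and the toy example are assumptions made for illustration only; they do not reproduce Miele's or Rosen's actual algorithms.

```python
import numpy as np

def gradient_restoration_sketch(grad_f, phi, jac_phi, x,
                                alpha=0.1, tol=1e-8, max_iter=200):
    """Illustrative alternation of a gradient phase and a restoration phase
    for minimizing f(x) subject to phi(x) = 0 (not SGRA or GPA proper)."""
    for _ in range(max_iter):
        J = jac_phi(x)                 # q x n constraint Jacobian
        g = grad_f(x)                  # gradient of the objective
        # Projector onto the tangent space of the constraint
        # (assumes J has full row rank).
        P = np.eye(len(x)) - J.T @ np.linalg.solve(J @ J.T, J)
        pg = P @ g
        if np.linalg.norm(pg) < tol and np.linalg.norm(phi(x)) < tol:
            break                      # first-order optimality and feasibility
        # Gradient phase: descend along the projected gradient.
        x = x - alpha * pg
        # Restoration phase: minimum-norm Newton corrections until
        # phi(x) = 0 is re-established (up to the tolerance).
        for _ in range(20):
            r = phi(x)
            if np.linalg.norm(r) < tol:
                break
            J = jac_phi(x)
            x = x - J.T @ np.linalg.solve(J @ J.T, r)
    return x

# Toy example (hypothetical): minimize x0^2 + x1^2 subject to
# x0 + x1 - 1 = 0; the solution is x = [0.5, 0.5].
grad_f = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])
phi = lambda x: np.array([x[0] + x[1] - 1.0])
jac_phi = lambda x: np.array([[1.0, 1.0]])
print(gradient_restoration_sketch(grad_f, phi, jac_phi, np.array([2.0, -1.0])))
```

In this simplified form the gradient phase preserves feasibility only to first order, so the restoration phase is what keeps the iterates on the constraint; the paper's finding that the structure of the restoration phase (characteristic (c)) dominates robustness is consistent with that division of labor.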
Keywords:
Review process: Available online 22 March 2002.
Paper URL: https://doi.org/10.1016/0096-3003(76)90006-0