Teach Nonlinear Programming Add-in

 Conjugate Gradient Direct Search

 The Conjugate Gradient method uses a different procedure to compute search directions. In most cases, the number of function evaluations needed to reach a stationary point is much smaller than for the gradient search method. For a quadratic function, the method theoretically terminates after N+1 line searches, where N is the dimension of the decision vector. In this section we illustrate the method as it is implemented in the add-in, leaving the details of the algorithm to the textbook.

 The method is initiated by clicking the Conj. Gradient button on the Optimize dialog. We again use the G(Y) function as an illustration. The search results are shown below. The process immediately moves into the negative quadrant, and the objective becomes a large negative value. We have discovered a region of the variable space in which the objective value is unbounded. The minimum found by the gradient method was only a local minimum; there is no global minimum, because the objective function is unbounded from below. The cubic terms in the function definition cause this result. The program recognizes the unboundedness when the search coordinates go beyond the limits specified in the parameters, and the termination message reports this.

 The previously discovered local minimum can be found by the conjugate gradient method by reducing the maximum step size of the Golden Section line search. The following result was obtained by reducing the maximum step size from 5 to 1. Note that the solution is obtained with only four line searches and far fewer function evaluations than the gradient method required.
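 The essential ideas above, a golden-section line search with a bounded maximum step combined with conjugate search directions, can be sketched as follows. This is a minimal illustration, not the add-in's code: the Fletcher-Reeves formula is one common choice of conjugate-direction update, and the quadratic test function is a stand-in for G(Y) (a quadratic has no unbounded cubic region, so this sketch converges for any reasonable maximum step size).

```python
import numpy as np

def golden_section(phi, a, b, tol=1e-6):
    """Golden-section search: minimize a one-dimensional function phi on [a, b]."""
    r = (np.sqrt(5) - 1) / 2  # golden ratio factor, about 0.618
    x1 = b - r * (b - a)
    x2 = a + r * (b - a)
    f1, f2 = phi(x1), phi(x2)
    while b - a > tol:
        if f1 < f2:          # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = phi(x1)
        else:                # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = phi(x2)
    return (a + b) / 2

def conjugate_gradient(f, grad, x0, max_step=5.0, n_iter=20, tol=1e-8):
    """Nonlinear conjugate gradient (Fletcher-Reeves) with a golden-section
    line search limited to step sizes in [0, max_step]."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:         # stationary point reached
            break
        alpha = golden_section(lambda t: f(x + t * d), 0.0, max_step)
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves update
        d = -g_new + beta * d               # new conjugate direction
        g = g_new
    return x

# Example: a quadratic in two variables with its minimum at (1, 2).
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] - 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] - 2)])
x_star = conjugate_gradient(f, grad, [0.0, 0.0])
```

 Reducing `max_step` shrinks the interval the golden-section search explores along each direction, which is how a bounded step keeps the search near a local minimum instead of escaping into an unbounded region.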


Operations Research Models and Methods
by Paul A. Jensen and Jon Bard, University of Texas, Copyright by the Authors