A new conjugate gradient method for finding the minimum of nonlinear functions

This paper presents the development and implementation of a new numerical algorithm for solving nonlinear optimization problems. The algorithm is implemented with inexact line searches. Powell's restarting criterion is applied to all the versions considered and gives a dramatic saving in computational efficiency. The results obtained, both theoretically and experimentally, indicate that the new algorithm is in general superior to standard algorithms on seven nonlinear test functions with twenty different dimensions.


Introduction
Conjugate gradient (CG) methods are iterative methods which generate a sequence of approximations to the minimizer of a function $f(x)$. The basis of all CG methods is to determine new search directions using information related only to the gradient of a quadratic objective function, in such a way that successive search directions are conjugate with respect to the matrix $A$ of that quadratic form. At each stage $i$ the direction $d_i$ is obtained by combining the gradient at $x_i$ linearly with the previous conjugate directions $d_1, d_2, \ldots, d_{i-1}$, where the coefficients of the linear combination are chosen in such a way that $d_i$ is conjugate to all previous directions. Therefore, the direction vector $d_i$ is computed recursively by the rule:

$$d_{i+1} = -G_{i+1} + \beta_i d_i, \qquad d_1 = -G_1,$$

where $G_i$ denotes the gradient at $x_i$ and $\beta_i$ is the conjugacy coefficient, for which several values have been introduced by several authors, among them:

$$\beta_i^{F/R} = \frac{G_{i+1}^T G_{i+1}}{G_i^T G_i}, \qquad \beta_i^{P/R} = \frac{G_{i+1}^T (G_{i+1}-G_i)}{G_i^T G_i}, \qquad \beta_i^{H/S} = \frac{G_{i+1}^T (G_{i+1}-G_i)}{d_i^T (G_{i+1}-G_i)}, \qquad \beta_i^{DX} = \frac{G_{i+1}^T G_{i+1}}{-d_i^T G_i}.$$

We can define the classical CG algorithm as follows (Beale [3]):
Step (1): Set $i = 1$, choose an initial point $x_1$, and set $d_1 = -G_1$.
Step (2): Compute $x_{i+1} = x_i + \lambda_i d_i$, where $\lambda_i$ is obtained from a line search procedure.
Step (3): Compute the new gradient $G_{i+1}$.
Step (4): Check for convergence: if $\|G_{i+1}\| < \varepsilon$, then stop.
Step (5): Compute the new direction $d_{i+1} = -G_{i+1} + \beta_i d_i$, where $\beta_i$ is the conjugacy coefficient.
Step (6): Check for the restarting criterion; if it is satisfied, restart with $d_{i+1} = -G_{i+1}$. Set $i = i + 1$ and go to Step (2).
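The loop above is easy to state in code. The following is a minimal sketch of Steps (1)-(6) with the F/R coefficient; the backtracking (Armijo) line search, its constants, the tolerance, and the steepest-descent safeguard are illustrative assumptions of this sketch, not part of the paper.

```python
import numpy as np

def cg_classical(f, grad, x, eps=1e-6, max_iter=1000):
    """Classical CG loop of Steps (1)-(6) with the F/R conjugacy coefficient."""
    g = grad(x)
    d = -g                                   # Step (1): first direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:          # Step (4): convergence test
            return x
        if g @ d >= 0:                       # safeguard: force a descent direction
            d = -g
        lam = 1.0                            # Step (2): inexact (Armijo) line search
        while f(x + lam * d) > f(x) + 1e-4 * lam * (g @ d):
            lam *= 0.5
        x = x + lam * d
        g_new = grad(x)                      # Step (3): new gradient
        beta = (g_new @ g_new) / (g @ g)     # Step (5): F/R coefficient
        d = -g_new + beta * d                # new conjugate direction
        g = g_new
    return x

# Usage: minimize Rosenbrock's two-dimensional test function.
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                 200 * (x[1] - x[0]**2)])
print(cg_classical(rosen, rosen_grad, np.array([-1.2, 1.0])))
```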
The CG method was developed in order to overcome the zigzagging behaviour of the steepest descent method, which is very slow [2].
In order to improve the local rate of convergence and the efficiency of the classical CG method, several established algorithms have been proposed, such as Dixon's gradient prediction method [4], Nazareth's three-term formula [8], and Nazareth and Nocedal's multi-step method [9]. They have all shown that such algorithms are able to generate conjugate directions for a quadratic function without performing exact line searches; they satisfy the quadratic termination property by using an error vector. Other important algorithms of this type, developed by Sloboda [11] and by Al-Assady and Al-Bayati [1], satisfy the quadratic termination property without the use of an error vector; modifications of such algorithms were developed by several authors, such as Touati-Ahmed and Storey [13].

Development of a New CG-Algorithm for the Quadratic Function
In this section a new general approach for CG methods of F/R type is presented.
This new approach has the quadratic termination property even if the line search is not exact. It is based on a modified gradient term $G_{i+1}^{*}$, defined in eq. (3), where $G_{i+1}$ is the gradient of the quadratic function at $x_{i+1}$. Then the following lemma holds.

Lemma (1)
For the quadratic function, the term $G^{*}$ coincides with the one obtained by Hestenes and Stiefel [6].

Proof:
Define the vectors $a_i$ and $b_i$ in terms of the search directions $d_i$ and the gradients $G_i$ as in eq. (4a). The biorthogonalization process of Hestenes and Stiefel is then carried out for exact line searches, with $A$ a symmetric positive definite matrix. Rewriting eq. (4) from these definitions, and multiplying and dividing its second term by $\lambda_i$, the resulting expression for $G^{*}$ is identical to the one defined in eq. (1), and the gradients are orthogonal as in the H/S method for a quadratic function. From the above argument we have the following two corollaries:
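For reference, these are the standard relations of the Hestenes and Stiefel theory that the proof relies on, stated here for a quadratic $f(x) = \tfrac{1}{2}x^T A x - b^T x$ with symmetric positive definite $A$ and exact line searches (a summary of known facts, not a reproduction of the paper's eq. (4)):

```latex
\begin{align*}
  d_i^T A\, d_j   &= 0 \quad (i \neq j) && \text{conjugacy of the search directions}\\
  G_i^T G_j       &= 0 \quad (i \neq j) && \text{orthogonality of the gradients}\\
  G_{i+1}^T d_i   &= 0                  && \text{exact line search condition}\\
  G_{i+1} - G_i   &= \lambda_i A\, d_i  && \text{since } G(x) = Ax - b
\end{align*}
```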

Corollary (1)
The search direction obtained from the term $G_{i+1}^{*}$ defined in eq. (3) is parallel to the search direction given by the H/S algorithm for a quadratic function.

Corollary (2)
The search direction defined in eq. (5) is a descent direction even for a non-quadratic function, i.e. $d_{i+1}^T G_{i+1}^{*} < 0$.

Proof:
Rewrite the direction given in eq. (5) and multiply it by $G_{i+1}^{*}$: the leading term contributes $-\|G_{i+1}^{*}\|^2$, which is negative, while the remaining term vanishes because $d_i^T G_{i+1}^{*} = 0$, which is easy to prove.
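The mechanism can be illustrated with the classical update of eq. (1); under the assumption that the paper's eq. (5) has the analogous form with $G_{i+1}^{*}$ in place of $G_{i+1}$, the same computation applies. With an exact line search the previous direction is orthogonal to the new gradient, so:

```latex
d_{i+1}^T G_{i+1}
  = \left(-G_{i+1} + \beta_i d_i\right)^T G_{i+1}
  = -\|G_{i+1}\|^2 + \beta_i \underbrace{d_i^T G_{i+1}}_{=\,0}
  = -\|G_{i+1}\|^2 < 0 .
```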

The New Algorithm
Step (1): Set $i = 1$, choose an initial point $x_1$, and set $d_1 = -G_1$.
Step (2): Compute $x_{i+1} = x_i + \lambda_i d_i$, where $\lambda_i$ is obtained from the line search procedure.
Step (3): Check for convergence: if $\|G_{i+1}\| < \varepsilon$, then stop; otherwise go to Step (4).
Step (4): Compute $G_{i+1}^{*}$ as defined in eq. (3).
Step (5): Compute the conjugacy coefficient $\beta_i$.
Step (6): Compute the new search direction as defined in eq. (5).
Step (7): Check for the restarting criterion of [10]: if $|G_{i+1}^T G_i| \geq 0.8\,\|G_{i+1}\|^2$, then set $i = 0$ and go to Step (1); else go to Step (2).
This algorithm is identical to the original CG methods for a quadratic function, because the orthogonality property and Lemma (1) hold. For a general function the algorithm reduces to the P/R algorithm even when inexact line searches are used, as shown in Corollary (2) and as sketched below.
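Since the algorithm reduces to the P/R method on general functions, the loop structure of Steps (1)-(7) can be sketched as follows. The P/R coefficient and the 0.8 restart threshold are taken from the text above; the Armijo line search constants, the descent safeguard, and the use of the plain gradient in place of $G_{i+1}^{*}$ (the P/R reduction) are assumptions of this sketch.

```python
import numpy as np

def new_cg(f, grad, x, eps=1e-6, max_iter=1000):
    """Steps (1)-(7) in their P/R reduction, with NOI/NOF counters
    matching the performance measures reported in the tables."""
    noi = nof = 0
    g = grad(x)
    d = -g                                        # Step (1)
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:               # Step (3): convergence test
            break
        if g @ d >= 0:                            # safeguard: force a descent direction
            d = -g
        fx = f(x); nof += 1
        lam = 1.0                                 # Step (2): inexact (Armijo) line search
        while True:
            f_new = f(x + lam * d); nof += 1
            if f_new <= fx + 1e-4 * lam * (g @ d):
                break
            lam *= 0.5
        x = x + lam * d
        g_new = grad(x)                           # Step (4): new gradient
        if abs(g_new @ g) >= 0.8 * (g_new @ g_new):   # Step (7): restart criterion
            d = -g_new                            # restart with steepest descent
        else:
            beta = g_new @ (g_new - g) / (g @ g)  # Step (5): P/R coefficient
            d = -g_new + beta * d                 # Step (6): new search direction
        g = g_new
        noi += 1
    return x, noi, nof
```

Returning NOI and NOF directly makes the sketch comparable with the performance measures discussed next.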
The comparative performance of the algorithms is evaluated by considering both the total number of function evaluations and the total number of iterations. The stopping criterion is taken to be $\|G_{i+1}\| < \varepsilon$ for a small tolerance $\varepsilon$. For all the algorithms we record the number of function evaluations (NOF) and the number of iterations (NOI); overall totals of NOF and NOI are also given for each algorithm.
Table (1) contains the numerical results for the new algorithm with the (DX) formula and for the standard CG algorithm with the same formula. The table shows that the new algorithm is more efficient than the standard CG algorithm, as measured by the NOF and NOI of both algorithms.
Table (2) contains the numerical results for the new algorithm with the (P/R) formula and for the standard CG algorithm with the same formula. The table shows that the new algorithm is more efficient than the standard CG algorithm, as measured by the NOF and NOI of both algorithms.

Table (1): Comparative performance of the two algorithms on the group of test functions using the (DX) $\beta$ formula
Conclusions
This paper has presented a new form for the gradient estimations with the use of inexact line searches and error terms. The new algorithm satisfies the quadratic termination property with ELS, and it is very promising for general functions.