New Secant Hyperbolic Model for Conjugate Gradient Method

Abbas Y. Al-Bayati (profabbasalbayati@yahoo.com) and Ban Ahmed Mitras (dr.banah.mitras@gmail.com)
College of Computer Sciences and Mathematics, University of Mosul, Iraq
Received on: 28/06/2003    Accepted on: 13/12/2004

ABSTRACT

A new hyperbolic model, different from the quadratic ones, is proposed for solving unconstrained optimization problems; it modifies the classical conjugate gradient method. The new model was compared with established methods over a variety of standard non-linear test functions. The numerical results show that the use of a non-quadratic model is beneficial in most of the problems considered, especially when the dimension of the problem increases.

Keywords: CG method, hyperbolic model, unconstrained optimization


Introduction:
Hestenes and Stiefel proposed the conjugate gradient (CG) method in 1952 for solving systems of linear equations. The method carries over to unconstrained optimization, where CG methods are applied to quadratic functions. Fletcher and Reeves first extended the CG method to the solution of general unconstrained minimization problems in 1964.
The basis of all conjugate gradient (CG) methods is to determine a new search direction using information related only to the gradient of a quadratic objective function. In this way the successive search directions are conjugate with respect to the positive definite Hessian matrix G of the quadratic form. At each stage k the direction d_k is obtained as a linear combination of the negative gradient -g_k at the point x_k and the previous directions d_0, ..., d_{k-1}, such that d_k is conjugate to all previous directions through the conjugacy coefficient. The search direction d_{k+1} is therefore obtained by the following rule (Bazaraa, 2000):

d_{k+1} = -g_{k+1} + beta_k d_k,

where beta_k is the conjugacy coefficient; for the Hestenes-Stiefel (H/S) formula, beta_k = g_{k+1}^T y_k / d_k^T y_k with y_k = g_{k+1} - g_k.
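For a quadratic objective q(x) = 0.5 x^T A x - b^T x this rule can be sketched in code. The following minimal Python sketch uses the Hestenes-Stiefel conjugacy coefficient with an exact line search along each direction (function and variable names are illustrative):

```python
import numpy as np

def cg_hestenes_stiefel(A, b, x0, tol=1e-10, max_iter=100):
    """Minimize q(x) = 0.5*x^T A x - b^T x (A symmetric positive definite)
    by the CG rule d_{k+1} = -g_{k+1} + beta_k * d_k (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                      # gradient of q at x
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        Ad = A @ d
        alpha = -(g @ d) / (d @ Ad)    # exact minimizing step along d
        x = x + alpha * d
        g_new = A @ x - b
        y = g_new - g
        beta = (g_new @ y) / (d @ y)   # Hestenes-Stiefel conjugacy coefficient
        d = -g_new + beta * d          # new direction, conjugate w.r.t. A
        g = g_new
    return x
```

In exact arithmetic the iterates reach the minimizer of an n-dimensional quadratic in at most n steps, which is the property the non-quadratic extensions below try to preserve.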

Extension of the CG-Methods:
Among the existing conjugate direction algorithms the updating process rarely takes into account any deviation of the objective function from quadratic behaviour. However, quadratic models may not always be adequate to incorporate all the information which might be needed to represent the objective function f(x) successfully.
In order to improve the rate of convergence of minimization algorithms when applied to non-quadratic functions rather than quadratic ones, we introduce in this paper a new algorithm which is invariant to non-linear scaling of a quadratic function.
If q(x) is a quadratic function, then a function f is defined as a non-linear scaling of q(x) if the following condition holds (Spedicato, 1976):

f(x) = F(q(x)),   dF/dq > 0.
Many authors have proposed special models of this form, in which F is an increasing monotonic function of q, q > 0, chosen so that F(q) may better represent the objective function f, where q is a quadratic function. Fried (1971) described a CG-method capable of minimizing the function F(q) = (q(x))^p, p > 0, x in R^n, in at most n iterations.
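Fried's model illustrates the key invariance property: because F(q) = q^p is strictly increasing for q > 0, F(q(x)) and q(x) share the same minimizer x*. A small numerical check (illustrative values only):

```python
import numpy as np

# q(x) > 0 is a one-dimensional quadratic with minimizer x* = 3;
# F(q) = q^p (here p = 3) is a strictly increasing scaling of q,
# so argmin F(q(x)) = argmin q(x).
x = np.linspace(-2.0, 8.0, 100001)
q = (x - 3.0) ** 2 + 1.0
F = q ** 3                       # Fried's model F(q) = q^p with p = 3
assert np.argmin(q) == np.argmin(F)
assert abs(x[np.argmin(F)] - 3.0) < 1e-9
```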
Several further special cases followed. Al-Assady (1991) developed the logarithmic model

F(q(x)) = ln(q(x))   ...(3)

Al-Bayati (1993) developed new rational models, and Al-Bayati (1995) developed an extended conjugate gradient algorithm based on a general logarithmic model. Al-Assady and Huda (1997) described an extended CG-algorithm based on the natural log function of a rational q(x) function. Al-Assady and Al-Ta'ai (2002) developed a further extended conjugate gradient algorithm and also suggested another new model.

New Hyperbolic Model for the CG-Methods:
In this paper, a new hyperbolic secant model is investigated and tested on a set of standard test functions. If q(x) is a quadratic function, then a function f is defined as a non-linear scaling of q(x) if f(x) = F(q(x)) with dF/dq > 0 (see Biggs, 1973; Tassopolus and Storey, 1984).
An extended conjugate pair algorithm is developed based on this new model, which scales q(x) by the hyperbolic secant (sech) function of the rational function q(x) ...(10). We first observe that q(x) and F(q(x)) given by eq. (10) have identical contours, though with different function values, and share the same unique minimum point, denoted by x*.
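The identical-contours observation rests only on the scaling being strictly monotone in q. Since eq. (10) is not fully legible in the source, the exact form of F below is an assumption: we take F(q) = 1 - sech(q), which is strictly increasing for q > 0 (F'(q) = sech(q) tanh(q) > 0), and check numerically that it preserves the minimizer:

```python
import numpy as np

def sech(t):
    return 1.0 / np.cosh(t)

# Monotonicity check: F(q) = 1 - sech(q) is increasing on q > 0
# (assumed form; the exact model of eq. (10) may differ).
q = np.linspace(0.1, 5.0, 1000)
Fq = 1.0 - sech(q)
assert np.all(np.diff(Fq) > 0)

# Same-minimizer check on a quadratic q(x) with minimizer x* = 3.
x = np.linspace(-2.0, 8.0, 10001)
qx = (x - 3.0) ** 2 + 1.0
assert np.argmin(qx) == np.argmin(1.0 - sech(qx))
```

Any strictly increasing F gives the same level sets, so a CG-type method driven by F(q(x)) can inherit the finite-termination behaviour of CG on q(x).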

The Outlines of the New Algorithm:
Step (1): k=1.
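An iteration of an extended CG-algorithm of this general kind can be sketched as follows. This is an illustrative reconstruction using a restarted Hestenes-Stiefel scheme with an Armijo backtracking line search, not necessarily the authors' exact steps:

```python
import numpy as np

def extended_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Illustrative reconstruction of a restarted H/S CG iteration with an
    Armijo backtracking line search (not necessarily the authors' routine)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    n = x.size
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:       # convergence test
            break
        alpha, fx = 1.0, f(x)              # Armijo backtracking line search
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        if (k + 1) % n == 0 or abs(d @ y) < 1e-12:
            beta = 0.0                     # periodic restart (steepest descent)
        else:
            beta = (g_new @ y) / (d @ y)   # Hestenes-Stiefel formula
        d = -g_new + beta * d
        if g_new @ d >= 0.0:               # safeguard: enforce descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```

The restart and descent safeguard are the standard devices for carrying CG from quadratic to general non-linear objectives.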

Numerical Results:
To test the effect of the suggested method, a sample of six problems was solved, comparing the new algorithm with the original CG-algorithm based on the H/S formula.
For all cases the same stopping criterion was used. The line search routine was a cubic interpolation using function and gradient values, as described in Bunday (1984).
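A single step of such a cubic interpolation line search fits a cubic to the function and gradient values at two trial points and moves to that cubic's minimizer. The following sketch shows the standard interpolation formula (illustrative, not necessarily Bunday's exact routine):

```python
import math

def cubic_minimizer(a, b, fa, fb, ga, gb):
    """Minimizer of the cubic interpolating f(a)=fa, f'(a)=ga, f(b)=fb,
    f'(b)=gb.  Assumes a valid bracket so that d1*d1 - ga*gb >= 0."""
    d1 = ga + gb - 3.0 * (fa - fb) / (a - b)
    d2 = math.copysign(1.0, b - a) * math.sqrt(d1 * d1 - ga * gb)
    return b - (b - a) * (gb + d2 - d1) / (gb - ga + 2.0 * d2)
```

Because a cubic fit reproduces a quadratic exactly, for f(x) = (x - 1)^2 with trial points a = 0 and b = 3 the formula returns the exact minimizer x = 1 in one step; in a full line search the step is repeated on the refined bracket.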
The following table compares the results of the suggested algorithm with the standard CG-algorithm. The results indicate that the new algorithm is superior to the standard CG-algorithm with respect to the total number of iterations (NOI) and the total number of function evaluations (NOF).