New Hybrid CG Algorithm Based on PR and FR-CG Steps

In this paper, a new hybrid conjugate gradient algorithm is proposed for unconstrained optimization. The algorithm combines the desirable computational aspects of Polak-Ribière steps with the useful theoretical features of Fletcher-Reeves CG steps. Computational results for this algorithm are given and compared with those of the standard Fletcher-Reeves and Polak-Ribière CG methods, showing a considerable improvement over the latter two methods.

Abbas Y. Al-Bayati, Khalil K. Abbo & Asma M. Abdalah


Introduction
The problem of interest can be stated as that of finding a local solution x* to the problem

minimize f(x), x ∈ R^n . . . (1)

Usually x* exists and is locally unique. Two particular caveats must be made. One is that these methods are not guaranteed to find a global solution of equation (1). The other is that the objective function f(x) must be sufficiently smooth in some sense; for more detail see Fletcher (1993). There are some basic theoretical results on non-quadratic models (see Al-Bayati, 1993).
Methods for unconstrained optimization differ according to how much information the user is able to provide. The most desirable situation, from the point of view of providing useful information, is that the user supplies subroutines from which f(x), g(x) (where g(x) = ∇f(x)) and G(x) (where G(x) = ∇²f(x)) can be evaluated for any x. These methods are generally iterative: the user provides an initial estimate x_1 of x* and possibly some additional information, and at each step the search for the minimum is carried out along a descent direction d_k, i.e.

d_k^T g_k < 0 . . . (2)
The new point is then

x_{k+1} = x_k + α_k d_k . . . (3)

If the line search is exact, the step size α_k is defined by

α_k = arg min_{α > 0} f(x_k + α d_k) . . . (4)

In practice, however, an exact line search is not usually possible, and any value of α_k that satisfies certain standard conditions is accepted. Fletcher (1980) suggests that α_k be chosen so that x_{k+1} satisfies the conditions

f(x_{k+1}) ≤ f(x_k) + δ α_k g_k^T d_k . . . (5)

|g_{k+1}^T d_k| ≤ -σ g_k^T d_k . . . (6)

where δ ∈ (0, 1/2), σ ∈ (0, 1) and δ < σ. The conjugate gradient (CG) method is one of the few practical methods for solving large-dimensional problems, because it does not require matrix storage and its iteration cost is very low. Normally the initial direction is the steepest descent direction,

d_1 = -g_1 . . . (7)
The search direction for the next iteration has the form (see Al-Bayati and Al-Assady, 1994)

d_{k+1} = -g_{k+1} + β_k d_k . . . (8)

where β_k is a scalar parameter defined by either

β_k^{FR} = ||g_{k+1}||² / ||g_k||² . . . (9a)

or

β_k^{PR} = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||² . . . (9b)

The definition of β_k in (9a) is due to Fletcher and Reeves (1964), and that in (9b) is due to Polak and Ribière (1969). Extensive numerical experience has shown that the PR algorithm is more efficient than the original FR algorithm.
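The two update parameters (9a) and (9b) are direct to compute from consecutive gradients; the following sketch (function names are ours, not the paper's) makes the formulas concrete:

```python
import numpy as np

def beta_fr(g_new, g_old):
    """Fletcher-Reeves parameter (9a): ||g_{k+1}||^2 / ||g_k||^2."""
    return (g_new @ g_new) / (g_old @ g_old)

def beta_pr(g_new, g_old):
    """Polak-Ribiere parameter (9b): g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2."""
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)
```

Note that when g_{k+1} = g_k (the stalled case discussed below), beta_pr returns 0 while beta_fr returns 1, which is exactly the behavioral difference between the two methods.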
There is a theoretical explanation of why the PR formula is better than the FR formula. On general non-quadratic functions it can happen (see Fletcher, 1987) that the search direction d_k becomes almost orthogonal to -g_k, so that little progress can be made. In this event x_{k+1} ≈ x_k and g_{k+1} ≈ g_k, so β_k^{FR} ≈ 1 and the FR method gives

d_{k+1} ≈ -g_{k+1} + d_k ≈ d_k . . . (10a)

while β_k^{PR} ≈ 0 and the PR method gives

d_{k+1} ≈ -g_{k+1} . . . (10b)

So, in this circumstance, the PR algorithm tends to restart automatically along the steepest descent direction. Thus it seems that this formula should be preferred when solving large problems. Many extensions and modifications have been proposed in this field (see Al-Bayati and Ahmed, 1996).

Theoretical results on CG methods:
Various formulas for β_k have been suggested for equation (8), but for the purposes of this paper attention will be focused on (9a) and (9b). We shall assume that the level set

S = {x : f(x) ≤ f(x_1)} . . . (11)

is bounded. This assumption ensures that α_k is well defined for all k. It is clear that d_1^T g_1 = -g_1^T g_1 < 0, so the descent property in eq. (2) holds on the first iteration for any conjugate gradient method. Moreover, if the line search is exact, then

g_{k+1}^T d_k = 0 . . . (12)

and therefore from equations (8) and (12) it follows that

d_{k+1}^T g_{k+1} = -||g_{k+1}||² < 0 . . . (13)

This shows that the descent property holds on all iterations for any conjugate gradient method with exact line search, and in particular for both the FR and PR methods. Powell (1983) shows that if the level set (11) is bounded, if α_k is defined so that (12) holds for all k, and if f(x) is twice continuously differentiable, then the FR method achieves the limit

lim inf_{k→∞} ||g_k|| = 0 . . . (14)

Furthermore, Al-Baali (1985) extends this result to show that even for an inexact line search satisfying (5) and (6), the descent property holds for all k and global convergence is achieved for the Fletcher-Reeves method.
Although in numerical computations formula (9b) is generally far more successful than formula (9a) (see Powell, 1977, 1985, for a theoretical explanation), it has not been possible to establish these global convergence results for the Polak-Ribière method unless the additional condition is imposed that the step lengths ||x_{k+1} - x_k|| tend to zero (see Powell, 1977). In fact, Powell (1983) shows that if β_k is chosen to satisfy (9b) rather than (9a), then even with exact line search and exact arithmetic there exist twice continuously differentiable functions with bounded level set (11) for which the gradient norms ||g_k||, k = 1, 2, . . ., are bounded away from zero. This has consequently led to thoughts on how to combine the desirable computational aspects of formula (9b) with the useful theoretical features of formula (9a).

New hybrid CG algorithm:
In this new hybrid algorithm we assume that an inexact line search is used on a non-quadratic objective function. It can be shown that if at every iteration of the Polak-Ribière algorithm (see Touati-Ahmed and Storey, 1990) we have

0 ≤ β_k^{PR} ≤ β_k^{FR} . . . (15)

then the convergence proofs given by Powell (1983) and Al-Baali (1985) for the Fletcher-Reeves method apply to the Polak-Ribière method also. Writing

g_{k+1}^T g_k = ||g_{k+1}|| ||g_k|| cos θ . . . (16)

where θ is the angle between g_{k+1} and g_k, and supposing without loss of generality that θ ∈ (0, π/2), we have

g_{k+1}^T g_k > 0, and hence β_k^{PR} ≤ β_k^{FR} . . . (17)

Consequently we consider a hybrid conjugate gradient method that uses formula (9b) whenever condition (17) is satisfied and formula (9a) otherwise. A descent property holds for all k, and global convergence is achieved for this new hybrid algorithm when either an exact or an inexact line search is used. The algorithm was tested on several test functions, and the results obtained show, in many cases, a significant improvement on the Fletcher-Reeves and Polak-Ribière methods. There were also cases, however, where the Polak-Ribière algorithm performed better than the new hybrid algorithm.
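The hybrid selection rule can be written as a small function. This is our own sketch of the switching test used in Step 8 of the algorithm below (use β^{PR} when 0 < β^{PR} ≤ β^{FR}, otherwise fall back to β^{FR}); the function name is illustrative.

```python
import numpy as np

def beta_hybrid(g_new, g_old):
    """Hybrid choice of beta: Polak-Ribiere (9b) when 0 < beta_PR <= beta_FR,
    Fletcher-Reeves (9a) otherwise."""
    g_new, g_old = np.asarray(g_new), np.asarray(g_old)
    fr = (g_new @ g_new) / (g_old @ g_old)            # formula (9a)
    pr = (g_new @ (g_new - g_old)) / (g_old @ g_old)  # formula (9b)
    return pr if 0.0 < pr <= fr else fr
```

When consecutive gradients are orthogonal the two formulas coincide; when g_{k+1} = g_k (the stalled case) β^{PR} = 0 fails the test and the FR value is used, preserving the FR convergence theory.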

Algorithm (New hybrid CG):
Step 1: Let x_1 be an initial estimate of the minimizer x* of f.
Step 2: Set k = 1 and set d_k = -g_k.
Step 3: Do a line search: set x_{k+1} = x_k + α_k d_k.
Step 4: If ||g_{k+1}|| < ε, where ε = 5 × 10^{-5}, take x* = x_{k+1} and stop; otherwise go to Step 5.
Step 5: If k + 1 > n, go to Step 11 (restart); otherwise go to Step 6.
Step 6: Form the vectors g_{k+1}, g_k and d_k needed for the update.
Step 7: Compute β_k^{FR} and β_k^{PR} from (9a) and (9b).
Step 8: If 0 < β_k^{PR} ≤ β_k^{FR}, set β_k = β_k^{PR} and go to Step 9; otherwise set β_k = β_k^{FR} and go to Step 9.
Step 9: Set the search direction at iteration (k + 1): d_{k+1} = -g_{k+1} + β_k d_k.
Step 10: Set k = k + 1 and go to Step 3.
Step 11: Set x_1 = x_{k+1} and go to Step 2.
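The steps above can be sketched in code. This is a minimal illustration, not the authors' implementation: the Armijo backtracking line search stands in for the inexact Wolfe-type search of conditions (5)-(6), the steepest-descent safeguard is our own addition (an inexact search does not guarantee that the CG direction is a descent direction), and all parameter values are illustrative.

```python
import numpy as np

def hybrid_cg(f, grad, x1, eps=5e-5, max_iter=10000):
    """Sketch of the new hybrid CG algorithm (Steps 1-11)."""
    x = np.asarray(x1, dtype=float)      # Step 1
    n = x.size
    g = grad(x)
    d = -g                               # Step 2
    k = 1
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:      # Step 4
            break
        # Step 3: Armijo backtracking, a stand-in for the paper's line search.
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        if k + 1 > n:                    # Steps 5 and 11: restart
            d = -g_new
            k = 1
        else:
            fr = (g_new @ g_new) / (g @ g)              # Step 7, formula (9a)
            pr = (g_new @ (g_new - g)) / (g @ g)        # Step 7, formula (9b)
            beta = pr if 0.0 < pr <= fr else fr         # Step 8
            d = -g_new + beta * d                       # Step 9
            if d @ g_new >= 0:           # safeguard (ours): force descent
                d = -g_new
            k += 1                       # Step 10
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic such as f(x) = (x_1 - 1)² + 10(x_2 + 2)², the routine drives the gradient norm below ε and returns a point close to the minimizer (1, -2).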

Conclusions:
A newly proposed hybrid algorithm which combines PR and FR steps has been investigated both theoretically and experimentally, and robust numerical results were obtained.