A New Family of Spectral CG-Algorithms

A new family of CG-algorithms for large-scale unconstrained optimization is introduced in this paper, using spectral scaling for the search directions; it generalizes the spectral gradient method proposed by Raydan [14]. Two modifications of the method are presented, one of which uses the Barzilai line search.


Introduction
Unconstrained optimization is one of the fundamental problems of numerical analysis with numerous applications.
The problem is the following: given a function $f : \mathbb{R}^n \to \mathbb{R}$ and an initial point $x_0$, find a point $x^*$ (the minimizer of $f$) which solves
$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$
where the minimizer is assumed to exist and to be locally unique. It is assumed that $f$ is continuously differentiable at every iterate $x_k$, where $k$ is the iteration number.
Methods for unconstrained optimization are generally iterative: the user typically provides an initial estimate $x_0$ of $x^*$, possibly with some additional information, and a sequence of iterates $\{x_k\}$ is then generated according to some algorithm, usually from function values $f(x_k)$ and gradients $g_k = \nabla f(x_k)$. A well-known algorithm for solving the problem in equation (1) is the Steepest Descent method, first proposed by Cauchy in 1847. The iterations are made according to
$$x_{k+1} = x_k - \alpha_k g_k, \qquad (2)$$
where $g_k = \nabla f(x_k)$ and $\alpha_k$ is a step-size obtained by carrying out an exact line search. It is well known that the negative gradient direction has the following optimal property (see [7]):
$$-\frac{g_k}{\|g_k\|} = \arg\min_{\|d\|=1} g_k^T d. \qquad (3)$$
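For illustration, the scheme above can be sketched in a few lines of Python (this is an illustrative snippet, not the paper's FORTRAN code); it uses the fact that on a convex quadratic $f(x) = \tfrac12 x^T A x - b^T x$ the exact line search gives $\alpha_k = g_k^T g_k / g_k^T A g_k$. The matrix $A$, vector $b$, and tolerances are our own choices:

```python
# Illustrative sketch of Steepest Descent x_{k+1} = x_k - alpha_k g_k on a
# convex quadratic f(x) = 0.5 x^T A x - b^T x, where the exact line search
# step-size is alpha_k = g_k^T g_k / g_k^T A g_k.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def steepest_descent(A, b, x0, tol=1e-10, max_iter=500):
    x = list(x0)
    for _ in range(max_iter):
        g = [gi - bi for gi, bi in zip(matvec(A, x), b)]  # g_k = A x_k - b
        if dot(g, g) < tol * tol:
            break
        alpha = dot(g, g) / dot(g, matvec(A, g))          # exact line search
        x = [xi - alpha * gi for xi, gi in zip(x, g)]     # x_{k+1} = x_k - alpha g_k
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite example
b = [1.0, 2.0]
x_star = steepest_descent(A, b, [0.0, 0.0])   # converges toward A^{-1} b
```

Printing the iterates makes the characteristic zig-zagging of the method visible, which is the slow-convergence behaviour discussed next.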
Despite the simplicity of the method and the optimal property (3), the Steepest Descent method converges slowly and is badly affected by ill-conditioning (see [9] or [15]).
In 1988, a paper by Barzilai and Borwein [5] proposed a Steepest Descent method (the BB method) that uses a different strategy for choosing the step-size $\alpha_k$ along the negative gradient direction, obtained from a two-point approximation to the secant equation underlying Quasi-Newton methods. Considering $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$, this yields (see [2] or [5])
$$\alpha_k^{BB} = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T y_{k-1}}. \qquad (4)$$
With this, the method of Barzilai and Borwein is given by the iterative scheme
$$x_{k+1} = x_k - \alpha_k^{BB} g_k. \qquad (5)$$
The scalar $\alpha^{BB}$ has already been used as a scaling factor in Quasi-Newton and Conjugate Gradient algorithms (see [4] and [11]).
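A minimal sketch of the BB scheme follows; again this is illustrative Python rather than the paper's code, and the quadratic test function is our own choice:

```python
# Sketch of the Barzilai-Borwein method: alpha_k = s^T s / s^T y with
# s = x_k - x_{k-1}, y = g_k - g_{k-1}, applied as x_{k+1} = x_k - alpha_k g_k.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bb_minimize(grad, x0, x1, n_iter=100):
    x_prev, x = list(x0), list(x1)
    g_prev, g = grad(x_prev), grad(x)
    for _ in range(n_iter):
        s = [a - b for a, b in zip(x, x_prev)]
        y = [a - b for a, b in zip(g, g_prev)]
        sy = dot(s, y)
        if abs(sy) < 1e-30:            # step undefined or converged: stop
            break
        alpha = dot(s, s) / sy         # the BB step-size
        x_prev, x = x, [xi - alpha * gi for xi, gi in zip(x, g)]
        g_prev, g = g, grad(x)
    return x

# Strictly convex quadratic f(x) = 2 x1^2 + 0.5 x2^2, minimizer at the origin.
grad = lambda x: [4.0 * x[0], 1.0 * x[1]]
x_min = bb_minimize(grad, [1.0, 1.0], [0.9, 0.9])
```

Note that the method needs two starting points (or one point plus an initial step) because the step-size is built from consecutive differences; the iteration is nonmonotone in $f$ yet converges on strictly convex quadratics.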
The BB method has been shown to converge [14] and its convergence is linear [13]. Despite these advances of the BB method on quadratic functions, many open questions remain about its behaviour on non-quadratic functions, and Fletcher [9] shows that the method can be very slow on some test functions.
In a recent paper, Abbo [1] proposed a modification of the BB method obtained by expanding $g(x)$ in a Taylor series about the current iterate.

Conjugate Gradient Methods (CG-Methods)
Conjugate Gradient methods depend on the fact that, for a quadratic function, if we search along a set of $n$ mutually conjugate directions, then we find the minimum in at most $n$ steps provided the line searches are exact. Moreover, if we generate this set of directions from known gradients, then each direction can be expressed simply as
$$d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad d_0 = -g_0,$$
where $\beta_k$ is a scalar; classical choices include
$$\beta_k^{FR} = \frac{g_{k+1}^T g_{k+1}}{g_k^T g_k} \qquad (10)$$
and
$$\beta_k^{PR} = \frac{g_{k+1}^T (g_{k+1} - g_k)}{g_k^T g_k}. \qquad (11)$$
All these $\beta_k$'s are equivalent on quadratic functions with exact line searches and a steepest-descent starting direction, but when extended to general non-linear functions, conjugate gradient algorithms with different $\beta_k$ differ considerably in efficiency. Formula (11) gives better algorithms than (10) in practice; a reason for this is given by Powell [13]. One reason for the inefficiency of CG methods is that neither the $\beta_k$ in (10) nor that in (11) takes into account the effect of inexact line searches [10]. To overcome this drawback, some authors have proposed the so-called spectral conjugate gradient methods (see for example [3], [6]).
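The finite-termination property can be checked numerically. The following illustrative Python sketch (our own example, not the paper's code) runs the CG recurrence with the FR choice (10) and the PR choice (11) on a 2-D convex quadratic; with exact line searches both reach the minimizer in at most $n = 2$ steps:

```python
# CG recurrence d_{k+1} = -g_{k+1} + beta_k d_k on f(x) = 0.5 x^T A x - b^T x,
# with the Fletcher-Reeves (FR) and Polak-Ribiere (PR) choices of beta_k.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def cg_quadratic(A, b, x0, beta_rule="FR"):
    x = list(x0)
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]   # g = A x - b
    d = [-gi for gi in g]                              # d_0 = -g_0
    for _ in range(len(b)):                            # at most n steps
        Ad = matvec(A, d)
        alpha = -dot(g, d) / dot(d, Ad)                # exact line search
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi - bi for gi, bi in zip(matvec(A, x), b)]
        if beta_rule == "FR":                          # formula (10)
            beta = dot(g_new, g_new) / dot(g, g)
        else:                                          # formula (11)
            y = [a - c for a, c in zip(g_new, g)]
            beta = dot(g_new, y) / dot(g, g)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x_fr = cg_quadratic(A, b, [0.0, 0.0], "FR")
x_pr = cg_quadratic(A, b, [0.0, 0.0], "PR")
```

On a quadratic the two rules produce identical iterates, consistent with the equivalence noted above; the practical difference between (10) and (11) appears only on general non-linear functions.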
Birgin and Martinez [6] introduced a spectral conjugate gradient (SCG) method, in which the search directions are generated by
$$d_{k+1} = -\theta_{k+1} g_{k+1} + \beta_k d_k, \qquad (12)$$
where the spectral parameter is
$$\theta_{k+1} = \frac{s_k^T s_k}{s_k^T y_k}, \qquad (13)$$
with $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$, and $\beta_k$ is the choice introduced by Perry in [12],
$$\beta_k = \frac{(\theta_{k+1} y_k - s_k)^T g_{k+1}}{s_k^T y_k}. \qquad (14)$$
Finally, if we assume that successive gradients are orthogonal, we obtain the generalization of the FR formula:
$$\beta_k = \frac{\theta_{k+1}\, g_{k+1}^T g_{k+1}}{\theta_k\, g_k^T g_k}. \qquad (15)$$
In fact, the SCG algorithm is a generalization of the spectral gradient algorithm of Raydan [14], defined by
$$x_{k+1} = x_k - \theta_k g_k, \qquad (16)$$
with $\theta_k$ as in (13).
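The SCG direction update can be sketched as a single function. This illustrative Python snippet assumes the standard Birgin-Martinez forms of the spectral parameter $\theta$ and the Perry-type $\beta$ described above; the sample vectors are hypothetical data chosen only to exercise the formula:

```python
# Spectral CG direction: d_{k+1} = -theta_{k+1} g_{k+1} + beta_k d_k, with
# theta_{k+1} = s^T s / s^T y and the Perry-type
# beta_k = (theta_{k+1} y - s)^T g_{k+1} / (s^T y).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def scg_direction(g_new, d, s, y):
    sy = dot(s, y)
    theta = dot(s, s) / sy                       # spectral parameter
    beta = dot([theta * yi - si for yi, si in zip(y, s)], g_new) / sy
    return [-theta * gi + beta * di for gi, di in zip(g_new, d)]

# Hypothetical sample data (not from the paper), just to exercise the formula.
g_new = [0.3, -0.1]    # g_{k+1}
d = [-1.0, -0.5]       # previous direction d_k
s = [0.2, 0.1]         # x_{k+1} - x_k
y = [0.8, 0.3]         # g_{k+1} - g_k
d_new = scg_direction(g_new, d, s, y)
```

For this sample the returned direction satisfies $d_{k+1}^T g_{k+1} < 0$, i.e. it is a descent direction; in a full implementation one would safeguard against $s^T y \le 0$.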

New family of SCG methods (NSCG say)
In [10] Birgin gives a nice comparison by asking the following questions:
1. Is the choice (13) better than (15) and (16)?
3. Which is the best choice of $\beta_k$?
According to these inquiries, let us consider the following. Taking the last term in (7) and substituting it into (6), we obtain the new search direction, where $I$ is the $n \times n$ identity matrix, taken as an approximation of the Hessian matrix $G_k$ and obtained from a convex combination of the forward and backward Euler schemes. To answer the 2nd inquiry, the choice in (14) is very effective, since the line search used in this paper is not exact. To answer the 3rd inquiry, we suggest a new hybrid computation for the scalar $\beta_k$, as shown in Step (2) of the new algorithm.
We now outline the new proposed algorithm (NSCG).
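Since the paper's exact NSCG steps (including the hybrid rule for $\beta_k$ in Step 2) are not reproduced here, the following Python sketch is only a stand-in showing the overall shape of such a method: a spectral CG loop with an inexact (backtracking Armijo) line search, a safeguard on the spectral parameter, and a restart when the direction fails to be descent. All parameter values are our own assumptions:

```python
# Generic spectral-CG loop (a stand-in for NSCG, whose Step-2 hybrid rule is
# not reproduced in this text): inexact Armijo line search, safeguarded
# spectral parameter theta, and a steepest-descent restart.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def armijo(f, x, d, g, alpha=1.0, rho=0.5, c1=1e-4):
    fx, gd = f(x), dot(g, d)                    # gd < 0 for a descent direction
    while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + c1 * alpha * gd:
        alpha *= rho                            # backtrack until sufficient decrease
    return alpha

def nscg_like(f, grad, x0, tol=1e-8, max_iter=200):
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(max_iter):
        if dot(g, g) < tol * tol:
            break
        alpha = armijo(f, x, d, g)
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        s = [alpha * di for di in d]
        y = [a - b for a, b in zip(g_new, g)]
        sy = dot(s, y)
        if sy > 1e-12:                          # safeguard the spectral parameter
            theta = dot(s, s) / sy
            beta = dot([theta * yi - si for yi, si in zip(y, s)], g_new) / sy
        else:
            theta, beta = 1.0, 0.0
        d = [-theta * gi + beta * di for gi, di in zip(g_new, d)]
        if dot(d, g_new) >= 0:                  # restart if not a descent direction
            d = [-gi for gi in g_new]
        x, g = x_new, g_new
    return x

# Convex quadratic test problem (our own choice), minimizer at (1, -0.5).
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)]
x_min = nscg_like(f, grad, [0.0, 0.0])
```

The restart and the $s^T y > 0$ safeguard keep every direction a descent direction, which is what makes the inexact line search well defined.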

Theorem:
Suppose that $f$ is bounded below in $\mathbb{R}^n$ and that $f$ is continuously differentiable in a neighborhood $N$ of the level set $L = \{x : f(x) \le f(x_0)\}$. Assume also that the gradient is Lipschitz continuous, i.e. there exists a constant $c > 0$ such that
$$\|g(x) - g(\bar{x})\| \le c\,\|x - \bar{x}\| \quad \text{for all } x, \bar{x} \in N.$$
If the step-size satisfies the Wolfe conditions defined by (22) and (23), then the search direction is a descent direction, i.e. $g_k^T d_k < 0$.

Proof: From (24) and (25) we get

Consider any iteration of the form $x_{k+1} = x_k + \alpha_k d_k$. Using equations (22) and (26), equation (27) can be rewritten; summing the resulting expression in equation (28) and using the fact that $f$ is bounded below, we obtain
$$\sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty.$$
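The boundedness argument can be illustrated numerically: along descent iterations on a function that is bounded below, the partial sums of $(g_k^T d_k)^2 / \|d_k\|^2$ stay bounded. The Python sketch below (our own example, using steepest descent $d_k = -g_k$ with exact steps on a 2-D quadratic) accumulates these terms:

```python
# Numerical illustration of the boundedness (Zoutendijk-type) argument on a
# 2-D convex quadratic f(x) = 0.5 x^T A x - b^T x, which is bounded below.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite
b = [1.0, 2.0]
x = [0.0, 0.0]
zoutendijk_sum = 0.0
for _ in range(100):
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]
    if dot(g, g) < 1e-30:      # converged; avoid dividing by a zero direction
        break
    d = [-gi for gi in g]      # steepest-descent direction
    zoutendijk_sum += dot(g, d) ** 2 / dot(d, d)   # term (g_k^T d_k)^2 / ||d_k||^2
    alpha = dot(g, g) / dot(g, matvec(A, g))       # exact line search step
    x = [xi + alpha * di for xi, di in zip(x, d)]
```

Because the iterates converge, the accumulated sum settles at a finite value, in agreement with the theorem.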

Numerical results
The comparative test involves eleven well-known standard test functions (given in the appendix) with different dimensions. The results are given in Table (1), which specifically reports the number of function evaluations (NOF). All programs are written in FORTRAN 90, and the same stopping criterion is used in all cases. Table (2) summarizes the relative performance (New SCG: 64%, 56%). From Table (2) it is clear that the new proposed algorithm, in both versions, achieves an improvement of about 33-36% in NOF on our selected set of test functions.

Appendix:
All the test functions used in this paper are taken from the general literature.




Table (1): Comparison results between the new (NSCG) and the Birgin standard spectral SCG (function evaluations).