Automatic Self-Scaling Strategies for VM Updates

In this paper, a class of self-scaling variable metric (VM) algorithms for unconstrained optimization is investigated. Theoretical results are given on the scaling strategies that guarantee the global convergence of the newly proposed algorithm.


Introduction
Consider the unconstrained optimization problem $\min_{x \in \mathbb{R}^n} f(x)$, where $f$ is a nonlinear differentiable function.
Assume that an exact line search is used at the beginning of each iteration $k$, and that for an estimate vector $x_k$ there is a symmetric and positive definite matrix $B_k$. The new iterate is computed by
$$x_{k+1} = x_k + \alpha_k d_k, \qquad d_k = -B_k^{-1} g_k,$$
where $g_k$ is the gradient of the objective function at $x_k$ and $\alpha_k$ is a steplength satisfying the exact line search condition, i.e.
$$f(x_k + \alpha_k d_k) = \min_{\alpha > 0} f(x_k + \alpha d_k).$$
See Al-Bayati [1, 2] for more details and properties of this algorithm.

New Suggestion
In this section we describe the prototype of the newly suggested class of algorithms with self-scaling strategies:

Algorithm:
1- For a starting point $x_1$ and a nonsingular matrix $V_1$, set $k = 1$.

Abbas Y. AL-Bayati & Maha S. Al -Salih
Note that the update (6) is performed directly on $V_k$.

1- In the above algorithm, $\sigma_k$ and $\mu_k$ denote the self-scaling parameters.
2- It will be shown that one has considerable freedom in choosing $\sigma_k$ and $\mu_k$ at every iteration while still maintaining global convergence of the above algorithm; nevertheless, the choice of these values must be made carefully.
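For orientation only, the classical self-scaling BFGS update with Oren-Luenberger scaling gives a concrete example of what a scaled VM update looks like. The function name and test data below are our own, and this is not the paper's new update on $V_k$:

```python
import numpy as np

def self_scaled_bfgs_update(B, s, y):
    """Classical self-scaling BFGS update (Oren-Luenberger scaling).
    Illustrative only: the paper's new algorithm updates V_k directly."""
    sBs = s @ (B @ s)
    gamma = (y @ s) / sBs                   # self-scaling factor
    Bs = B @ s
    return gamma * (B - np.outer(Bs, Bs) / sBs) + np.outer(y, y) / (y @ s)

# Sanity check: the updated matrix satisfies the secant equation B+ s = y.
B = np.eye(3)
s = np.array([1.0, 0.0, 1.0])
y = np.array([2.0, 1.0, 1.0])               # y^T s = 3 > 0 preserves definiteness
B_new = self_scaled_bfgs_update(B, s, y)
print(np.allclose(B_new @ s, y))            # True
```

The scaling factor $\gamma_k = y_k^T s_k / s_k^T B_k s_k$ rescales $B_k$ before the rank-two correction, which is the mechanism the self-scaling parameters $\sigma_k$ and $\mu_k$ generalize.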

Global Convergence of the New Algorithm
In this section, we prove that the new algorithm suggested in Section 2, with an appropriate choice of the scaling parameters, is globally convergent on strictly convex objective functions.
where $\mathrm{tr}(\cdot)$ denotes the trace of a matrix; see [7].

Proof: For any two compatible matrices $A$ and $B$,
$$\mathrm{tr}(AB) = \mathrm{tr}(BA).$$
Eq. (7) follows directly from this equality. #
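The trace identity used in the proof holds even for rectangular factors, which a quick numerical check confirms (the test matrices below are our own):

```python
import numpy as np

# Numerical check of tr(AB) = tr(BA) for rectangular A (3x5) and B (5x3).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # True
```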

Lemma 3.2: Let $h(u) = \ln u - u$ for $u > 0$. Then
$$h(u) \le h(1) = -1 \quad \text{for all } u > 0. \qquad (8)$$
Proof: To prove eq. (8), we first note that $h(u)$ is strictly concave and its maximum occurs at $u = 1$. Eq. (9) can be proved along similar lines; details and explanations can be found in [3, 4]. Now let $G(x)$ denote the Hessian matrix of $f$ at $x$.
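The concavity argument behind eq. (8) can be written out in one line:

```latex
\[
h'(u) = \frac{1}{u} - 1 = 0 \iff u = 1, \qquad
h''(u) = -\frac{1}{u^{2}} < 0,
\]
```

so $h$ is strictly concave with global maximum $h(1) = -1$; equivalently, $\ln u \le u - 1$ for all $u > 0$, with equality only at $u = 1$.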
This result has been used by Byrd and Nocedal [4] and Griewank [5] in their analysis of QN methods.
The assumption on $f$ also implies that the relevant series is geometric and therefore converges to a finite sum. This proves the global convergence of our new proposed algorithm. #
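For completeness, the geometric series bound invoked here is (with the ratio $r \in (0,1)$ standing in for the constants determined by the assumption on $f$):

```latex
\[
\sum_{k=1}^{\infty} r^{k} = \frac{r}{1-r} < \infty, \qquad 0 < r < 1.
\]
```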

Final Remarks
We have described in this paper the conditions under which a new automatic self-scaling algorithm based on the direct form of Al-Bayati's VM update [2] can be proven to be globally convergent. It should be noted that, with additional theoretical results, the superlinear convergence of this new algorithm may be established.
Also, numerical experiments are needed to confirm the effectiveness of the new proposed algorithm; this will be done in our next research paper.
It is also possible to describe a similar algorithm based on the inverse scaled-BFGS algorithm. Moreover, the column-scaling algorithm proposed by Siegel [9] may be modified and combined with this family of algorithms following Nocedal [8].
However, optimal values of $\sigma_k$ and $\mu_k$ for the new algorithm may be described in further work; for this proposed algorithm we let $\sigma_k = 0.5$ and $\mu_k = 1$. It might occasionally be better to increase $\sigma_k$ and to decrease $\mu_k$. In any case, the theory developed in this paper will prove useful for analyzing the global convergence of the algorithm, and it may also help in proving the superlinear convergence of the new proposed algorithm in a subsequent research paper.