Self-Scaling Variable Metric in Constrained Optimization

In this paper, we investigate a new self-scaling algorithm that combines the quasi-Newton method and the conjugate gradient method. The new algorithm satisfies a quasi-Newton condition, generates mutually conjugate search directions, and has proved its efficiency in practice when compared with the well-known algorithms in this domain, using as efficiency measures the number of function evaluations (NOF), the number of iterations (NOI), and the number of constraint evaluations (NOC).


Interior Point Method (Barrier Method)
Sequential minimization techniques available to solve constrained optimization problems are known as barrier function methods. This approach was first proposed by Carroll in 1961 [3] under the name "created response surface technique". The approach was also used to solve nonlinear inequality constrained problems by Box, Davies, and Swann [1969] and Kowalik [1966]. The barrier function approach has been thoroughly investigated and popularized by Fiacco and McCormick [1964, 1968]. Himmelblau [1972] also discussed effective unconstrained optimization algorithms for solving barrier methods. Similar to penalty functions, barrier functions are used to transform a constrained problem into an unconstrained problem, or into a sequence of unconstrained problems.
The function φ(x, r) is defined so that it becomes infinite at the boundary of the feasible region R, i.e., barriers are constructed on each constraint, and the solution x(r) ∈ R; then the solution x* is approached from the interior of R in a sequence defined by the controlling parameter r. The barrier function method is suitable only for inequality constraints [11].

The logarithmic barrier method (Frisch, 1955)
The logarithmic barrier function is defined as

φ(x, r) = f(x) − r Σᵢ₌₁ᵐ log gᵢ(x) … (1)

The logarithmic barrier function is well defined on the interior {x : gᵢ(x) > 0, i = 1, …, m} of the feasible set, but because of the singularity of the logarithm at zero, the logarithmic barrier function grows to +∞ as x approaches a boundary point of the feasible set [7].
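As a small illustration of (1), a minimal Python sketch follows; the objective `f`, the constraint `g`, and the evaluation points are hypothetical and not from the paper:

```python
import math

def log_barrier(f, gs, x, r):
    """phi(x, r) = f(x) - r * sum_i log(g_i(x)); +inf on or outside the boundary."""
    vals = [g(x) for g in gs]
    if any(v <= 0 for v in vals):      # singularity of log at zero: barrier blows up
        return math.inf
    return f(x) - r * sum(math.log(v) for v in vals)

# Hypothetical problem: minimize f(x) = x^2 subject to g(x) = x - 1 >= 0.
f = lambda x: x * x
gs = [lambda x: x - 1.0]
print(log_barrier(f, gs, 2.0, 0.1))    # finite in the interior: 4.0
print(log_barrier(f, gs, 0.5, 0.1))    # infeasible point: inf
```

The guard makes the barrier infinite at and beyond the boundary, which is exactly the mechanism that keeps iterates in the interior.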

The inverse barrier function (Carroll, 1961)
The inverse barrier function is defined as [4]

φ(x, r) = f(x) + r Σᵢ₌₁ᵐ 1/gᵢ(x)
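A minimal Python sketch of the inverse barrier, with the same hypothetical objective and constraint as above (not taken from the paper):

```python
import math

def inverse_barrier(f, gs, x, r):
    """phi(x, r) = f(x) + r * sum_i 1/g_i(x); +inf on or outside the boundary."""
    vals = [g(x) for g in gs]
    if any(v <= 0 for v in vals):      # 1/g_i blows up at the boundary
        return math.inf
    return f(x) + r * sum(1.0 / v for v in vals)

# Hypothetical problem: minimize f(x) = x^2 subject to g(x) = x - 1 >= 0.
f = lambda x: x * x
gs = [lambda x: x - 1.0]
print(inverse_barrier(f, gs, 2.0, 0.1))   # 4.0 + 0.1/1.0 = 4.1
```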

Properties of the Barrier Function Methods
The function in (1) exists and is well defined on the interior R⁰ of the feasible region. Among its properties: iv) if {rₖ} is a sequence such that rₖ ↓ 0 and {xₖ} is a sequence in R⁰ that converges to a point x̄, then limₖ φ(xₖ, rₖ) exists and equals f(x̄).

The SUMT Method by Using Barrier Function Methods
For the sequential unconstrained minimization technique (SUMT) with the inverse barrier function, we solve the constrained problem defined as

min f(x), x ∈ Rⁿ … (7)

subject to the constraints gᵢ(x) ≥ 0, i = 1, …, m. Now, our aim is to minimize the function φ(x, r) by starting from a feasible point x₀ and with an initial value r₀, which is derived as follows. The gradient of φ(x, r) is

∇φ(x, r) = ∇f(x) + r ∇B(x) … (12)

where B(x) denotes the barrier term. The squared magnitude of this vector is

∇f(x)ᵀ∇f(x) + 2r ∇f(x)ᵀ∇B(x) + r² ∇B(x)ᵀ∇B(x) … (13)

and this is a minimum when

r₀ = −∇f(x₀)ᵀ∇B(x₀) / ∇B(x₀)ᵀ∇B(x₀)

This initial value for r, as suggested by Fiacco and McCormick [5], appears to give good results. In general, the method of reducing r is a simple iterative rule rₖ₊₁ = rₖ/c, where c is a constant equal to 10. The search direction in this case can be defined as

dᵢ = −Hᵢ gᵢ … (16)

where Hᵢ is the n×n positive definite symmetric approximation to the inverse Hessian G⁻¹, and gᵢ is the gradient vector of the function φ(x, r), i.e., gᵢ = ∇φ(xᵢ, r). At the i-th iteration, given the current iterate xᵢ and the search direction dᵢ, the next iterate is obtained by

xᵢ₊₁ = xᵢ + αᵢ dᵢ … (17)

where αᵢ is the optimal step size obtained by cubic interpolation. We start with α = 2 (twice the Newton step length) and test whether xᵢ₊₁ is feasible: we test gᵢ(xᵢ₊₁) to see that it is positive for all i, but if a constraint is violated we replace α by α/2, form a new point xᵢ₊₁, and test again.
Eventually, we find a feasible xᵢ₊₁ and can then proceed with the interpolation; our choice α = 2 becomes close to the distance to the nearest constraint boundary. The matrix Hᵢ is then updated by a correction matrix Cᵢ to get

Hᵢ₊₁ = Hᵢ + Cᵢ … (18)

where Cᵢ is a correction matrix which satisfies a quasi-Newton-like condition (namely Hᵢ₊₁yᵢ = ρvᵢ), where vᵢ and yᵢ are the differences between two successive points and gradients respectively, and ρ is any scalar. The initial matrix H₁ can be any positive definite matrix; however, H₁ is usually chosen to be the identity matrix I. Hᵢ is updated through the formula of the BFGS update (Fletcher, 1970) [6].
Hᵢ₊₁ = Hᵢ + (1 + yᵢᵀHᵢyᵢ/vᵢᵀyᵢ)(vᵢvᵢᵀ/vᵢᵀyᵢ) − (vᵢyᵢᵀHᵢ + Hᵢyᵢvᵢᵀ)/vᵢᵀyᵢ … (19)

Omitting the subscript i and defining x* = xᵢ₊₁, the minimization of φ(x, r) is carried out until two successive function values φ₁ and φ₂ are found such that |φ₁ − φ₂| < ε₁, where ε₁ is a small positive number (0.000001), and terminates when the same kind of tolerance, ε₂ = 0.000001, is met, with

rₖ₊₁ = rₖ/10 … (25)

Now, we give the outline of the well-known barrier function algorithm in the following section.
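The SUMT loop described above can be sketched in Python as follows. This is a deliberately simplified, hypothetical illustration for a single inequality constraint: the inner minimization here is plain gradient descent with step halving for feasibility and decrease, standing in for the quasi-Newton direction and cubic interpolation used in the paper, and the test problem and tolerances are illustrative only:

```python
import math

def sumt_inverse_barrier(f, df, g, dg, x, r=1.0, c=10.0, eps=1e-6, r_min=1e-8):
    """SUMT with the inverse barrier phi(x, r) = f(x) + r/g(x) for one
    inequality constraint g(x) >= 0 (simplified inner solve)."""
    def phi(x, r):
        return f(x) + r / g(x) if g(x) > 0 else math.inf

    def dphi(x, r):
        return df(x) - r * dg(x) / g(x) ** 2

    while r > r_min:
        prev = math.inf
        # inner loop: minimize phi(., r) until successive values agree to eps
        while abs(prev - phi(x, r)) > eps:
            prev = phi(x, r)
            grad = dphi(x, r)
            alpha = 1.0
            # halve the step until the trial point is feasible and phi decreases
            while phi(x - alpha * grad, r) >= prev and alpha > 1e-15:
                alpha /= 2.0
            x = x - alpha * grad
        r /= c  # r_{k+1} = r_k / 10, as in (25)
    return x

# Hypothetical test problem: min f(x) = x subject to g(x) = x - 1 >= 0 (x* = 1).
x_star = sumt_inverse_barrier(lambda x: x, lambda x: 1.0,
                              lambda x: x - 1.0, lambda x: 1.0, x=3.0)
print(x_star)   # approaches the constrained minimizer x* = 1 from the interior
```

As r shrinks, the unconstrained minimizer of φ(·, r) moves toward the boundary, so the iterates approach x* = 1 while staying strictly feasible throughout.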

Self-Scaling Quasi-Newton Methods
The general strategy of the self-scaling quasi-Newton method is to scale the Hessian approximation matrix before it is updated at each iteration. This avoids a large difference in the eigenvalues of the approximated Hessian of the objective function. Self-scaling variable metric algorithms were introduced by Oren (see [9] and [10]).
The Hessian approximation matrix can be updated according to a self-scaling BFGS update of the form

Hₖ₊₁ = σₖ (I − vₖyₖᵀ/vₖᵀyₖ) Hₖ (I − yₖvₖᵀ/vₖᵀyₖ) + vₖvₖᵀ/vₖᵀyₖ … (26)

where

σₖ = vₖᵀyₖ / yₖᵀHₖyₖ … (27)

is the self-scaling factor. For a general convex objective function, Nocedal and Yuan [8] prove global convergence of the self-scaling BFGS in (26) with a Wolfe line search. They also present results indicating that the unscaled BFGS method is in general superior to the self-scaling BFGS with the scaling factor of Oren and Luenberger.
A suggestion of Al-Baali (see [2]) is to modify the self-scaling factor to

σₖ = min{ vₖᵀyₖ / yₖᵀHₖyₖ , 1 } … (28)

This modification of σₖ gives a globally convergent self-scaling BFGS method which is competitive with the unscaled BFGS method.
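A compact numerical sketch of one self-scaling BFGS update with Al-Baali's clamped factor (28) follows; the vectors v and y below are arbitrary illustrative data, not from the paper. Whatever the value of σₖ, the updated matrix satisfies the quasi-Newton condition Hₖ₊₁yₖ = vₖ:

```python
import numpy as np

def self_scaling_bfgs_update(H, v, y):
    """Self-scaling BFGS update of the inverse-Hessian approximation H,
    with Al-Baali's factor sigma = min{v'y / y'Hy, 1} as in (28)."""
    vy = float(v @ y)
    yHy = float(y @ (H @ y))
    sigma = min(vy / yHy, 1.0)                 # Al-Baali's clamp
    A = np.eye(len(v)) - np.outer(v, y) / vy
    return sigma * (A @ H @ A.T) + np.outer(v, v) / vy

# Arbitrary illustrative data: v = step between points, y = gradient difference.
H = np.eye(3)
v = np.array([1.0, 0.5, -0.2])
y = np.array([2.0, 1.0, 0.1])
Hn = self_scaling_bfgs_update(H, v, y)
print(np.allclose(Hn @ y, v))   # True: quasi-Newton condition holds
```

The clamp only rescales the H-dependent part of the update, so symmetry, positive definiteness (given vᵀy > 0), and the quasi-Newton condition are all preserved.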
In order to reduce truncation and rounding errors, the new scalar parameter σ is added to keep the sequence stable and efficient as the problem dimension increases. Poor scaling is an imbalance between the values of the function and the change in x: the function values may change very little even though x changes significantly. This difficulty can sometimes be removed by a good scaling factor for updating H, and the performance of self-scaling methods is undoubtedly favorable in some cases, especially when the number of variables is large [3].

Derivation of the New Self-Scaling Parameter
Suppose the search direction in the quasi-Newton method is defined by

dᵢ₊₁ = −Hᵢ₊₁ gᵢ₊₁ … (29)

and the search direction in the conjugate gradient method is defined by

dᵢ₊₁ = −gᵢ₊₁ + βᵢ dᵢ … (30)

where Hᵢ₊₁ is the BFGS update and βᵢ is the conjugacy coefficient. Equating the two search directions raises the question of whether the resulting directions remain mutually conjugate.
To answer this question, we suggest the following theorem:

Theorem:
The search directions generated by dᵢ₊₁ = −σ* Hᵢ₊₁ gᵢ₊₁ are mutually conjugate when the objective function is quadratic.

Proof:
Let f(x) = ½ xᵀGx + bᵀx be a quadratic function, and choose an initial approximation matrix H₁ = I, which is symmetric and positive definite. We have to prove that, for an exact line search (ELS), the search directions d must satisfy dᵢᵀG dⱼ = 0 for i ≠ j.
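The conjugacy claim can be checked numerically. The sketch below uses a hypothetical 3×3 symmetric positive definite quadratic (G, b chosen for illustration, not from the paper), runs the unscaled BFGS update with an exact line search on f(x) = ½xᵀGx + bᵀx, and verifies that the generated directions are mutually G-conjugate:

```python
import numpy as np

def bfgs_exact_ls_directions(G, b, x, iters):
    """Unscaled BFGS with exact line search on f(x) = 0.5 x'Gx + b'x.
    Returns the list of generated search directions."""
    n = len(b)
    H = np.eye(n)                          # H_1 = I, symmetric positive definite
    g = G @ x + b                          # gradient of the quadratic
    dirs = []
    for _ in range(iters):
        d = -H @ g                         # quasi-Newton direction, as in (29)
        dirs.append(d)
        alpha = -(g @ d) / (d @ G @ d)     # exact line search on a quadratic
        x_new = x + alpha * d
        g_new = G @ x_new + b
        v, y = x_new - x, g_new - g
        rho = 1.0 / float(v @ y)
        A = np.eye(n) - rho * np.outer(v, y)
        H = A @ H @ A.T + rho * np.outer(v, v)   # BFGS inverse-Hessian update
        x, g = x_new, g_new
    return dirs

# Hypothetical SPD quadratic data for the check.
G = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 2.0]])
b = np.array([1.0, -2.0, 0.5])
d1, d2, d3 = bfgs_exact_ls_directions(G, b, np.array([1.0, 1.0, 1.0]), 3)
print(abs(d1 @ G @ d2) < 1e-8, abs(d2 @ G @ d3) < 1e-8, abs(d1 @ G @ d3) < 1e-8)
```

With H₁ = I and exact line searches, BFGS on a quadratic generates the same directions as the conjugate gradient method, so all pairwise products dᵢᵀG dⱼ vanish to machine precision.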
Step (5): if ‖gᵢ₊₁‖ < ε is satisfied, then stop the algorithm. Step (8): otherwise set rₖ₊₁ = rₖ/10, take x = x* as a new starting point, set i = i + 1, and go to Step (2).

Numerical Results:
Several standard nonlinear constrained test functions were minimized to compare the new algorithm with the standard algorithm (see the appendix), with dimensions 1 ≤ n ≤ 10, and with between 1 and 10 inequality constraints gᵢ(x) and between 1 and 10 equality constraints hⱼ(x). All programs are written in Fortran, and for all cases the stopping criterion is the one given above. The new algorithm has proven its efficiency in practice; the comparative performance of all of these algorithms is evaluated by considering NOF, NOI, and NOC, where NOF is the number of function evaluations, NOI is the number of iterations, and NOC is the number of constraint evaluations.