An Investigation of a Non-Conic Model for Constrained Optimization

ABSTRACT. In this work, we develop a method for nonlinear constrained optimization by investigating a new solution approach that extends the conic model to a non-conic model using the conjugate gradient method. The new method is highly effective when compared with other established algorithms on standard constrained optimization problems; its performance is evaluated by the number of function evaluations (NOF), the number of iterations (NOI), and the number of constraint evaluations (NOC).

To solve this problem we use a method based on the combination of interior and exterior point methods, see [4]. This section explains both interior and exterior point methods in the context of their development. It illustrates how the two methods are related and why their integration is a reasonable approach for solving nonlinear programming problems. In the past two decades interior point methods have proven to be efficient and are widely used for solving linear and nonlinear programming problems. The interior point methods are closely related to the sequential unconstrained minimization technique (SUMT) developed by Fiacco and McCormick for solving constrained optimization problems with inequality constraints, see [8].
Barrier and penalty methods are designed to solve P by instead solving a sequence of specially constructed unconstrained optimization problems.
In a penalty method, the feasible region of P is expanded from F to all of R^n, but a large cost or "penalty" is added to the objective function for points that lie outside of the original feasible region F, see [3].
In a barrier method, we presume that we are given a point x_0 that lies in the interior of the feasible region F, and we impose a very large cost on feasible points that lie ever closer to the boundary of F, thereby creating a "barrier" to exiting the feasible region.
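The distinction can be made concrete with a minimal sketch. The one-dimensional problem below (minimize f(x) = (x - 2)^2 subject to x <= 1) is a hypothetical illustration, not taken from the paper, and the function names are our own:

```python
import numpy as np

# Illustrative 1-D problem (assumption, not from the paper):
# minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0, so F = (-inf, 1].

def f(x):
    return (x - 2.0) ** 2

def penalty_objective(x, mu):
    # Exterior penalty: add mu * max(0, g(x))^2, which is nonzero only
    # for points outside the original feasible region F.
    return f(x) + mu * max(0.0, x - 1.0) ** 2

def barrier_objective(x, mu):
    # Logarithmic barrier: -mu * log(-g(x)) grows without bound as a
    # feasible x approaches the boundary x = 1 from inside.
    return f(x) - mu * np.log(1.0 - x)
```

For a feasible point such as x = 0.5 the penalty term vanishes and the penalized objective equals f; for an infeasible point such as x = 2 it adds the full cost mu.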

Penalty methods
Penalty methods transform constrained problems into unconstrained ones, to which appropriate unconstrained methods can then be applied. We distinguish so-called interior and exterior penalty methods, see [6]. With exterior penalty methods, some function is added to the objective to penalize infeasible solutions. With interior penalty methods, on the other hand, such a function prevents the iterates from leaving the feasible domain; this is why the interior penalty method is also called the barrier function method. In both cases the penalty factor adjusts the effect: as the factor approaches infinity, the solution is increasingly better approximated, while the conditioning of the problem worsens. As a consequence, numerical problems occur if the factor is too large.

The advantage of the interior method is that intermediate solutions are always feasible; the disadvantage is that equality constraints cannot be handled. The latter is no problem for exterior methods; these, however, converge from the exterior, i.e. the infeasible domain, towards the solution. Intermediate solutions can only be used if they are scaled back to the boundary of the feasible domain. The main advantage of both methods is that they can be handled very easily and make only little use of complicated theory, see [11]. They are very successful in practical application for all kinds of problems.

A well-known procedure is SUMT, the sequential unconstrained minimization technique, which generates a series of solutions with an increasing penalty factor, allowing a controlled approach to the optimum until the solution is accepted or the conditioning of the problem collapses. In this context, the penalty factor can be understood as a continuation parameter, a feature which this rather old method has in common with interior point methods. These approach the solution from the interior of the domain, as interior penalty function methods do, but rest on a strong mathematical basis, in contrast to the more intuitive motivation of SUMT, see [9].
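The SUMT continuation idea described above can be sketched as follows. The test problem, the closed-form inner minimizer, and the growth factor are all illustrative assumptions made for this sketch, not the paper's setup:

```python
def sumt_exterior(mu0=1.0, growth=10.0, iters=6):
    # Exterior-point SUMT sketch: minimize f(x) = (x - 2)^2 subject to
    # x <= 1 via the quadratic penalty P(x; mu) = f(x) + mu*max(0, x-1)^2.
    # For this 1-D problem the inner unconstrained minimizer is available
    # in closed form: x(mu) = (2 + mu) / (1 + mu), which tends to 1 as
    # mu -> infinity (the constrained solution).
    mu = mu0
    xs = []
    for _ in range(iters):
        xs.append((2.0 + mu) / (1.0 + mu))  # solve the inner subproblem
        mu *= growth                         # continuation: raise the penalty factor
    return xs
```

Note that every iterate x(mu) lies slightly outside the feasible region x <= 1, illustrating the point made above that exterior methods approach the solution from the infeasible side.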
There are two approaches for such a problem:

-Indirect Methods: They transform the constrained problem into an unconstrained problem and use unconstrained numerical techniques to solve it. They are very popular in industry. These methods are called Sequential Unconstrained Minimization Techniques (SUMT). The following are some of the families of these methods:
1-Exterior penalty function methods.
2-Interior penalty function methods.
3-Mixed penalty function methods.
See [4] and [10].

-Direct Methods:
These methods handle the complete problem as formulated. They are based on linearizing the functions involved; in some cases, however, the objective function is expanded as a quadratic. They are currently popular, see [5]. There are several such methods, and many of them differ only in minor details.

Conjugate Gradient Method for Conic Model
The conic model method for constrained optimization has been studied by Davidon and Sun & Yuan, see [2], [12]. Conic model methods are generalizations of quadratic methods; they have more freedom in the model, and they can choose the model to approximate the original problem better, for example by using more interpolation conditions. A typical conic model for constrained optimization is

c(x_k + s) = f_k + (g_k^T s) / (1 - a_k^T s) + (1/2) (s^T G s) / (1 - a_k^T s)^2,   ...(3)

where G is an n x n positive definite and symmetric matrix. The vector a_k is the vector associated with the collinear scaling in the k-th iteration, and it is normally called the horizon vector. Since the denominator 1 - a_k^T s appears once in the second term on the right-hand side of (3), and twice in the third term, it is clear that by letting

w = s / (1 - a_k^T s),

the conic function becomes a quadratic in the variable w. We can express s in terms of w:

s = w / (1 + a_k^T w).

To simplify the formulas, we define gamma(x) = 1 / (1 - a_k^T (x - x_k)), and the singular hyperplane E = { x : a_k^T (x - x_k) = 1 }, and note that if gamma(x) gamma(y) < 0, then x and y lie on opposite sides of E, see [7].
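The change of variables above can be verified numerically. The sketch below (our own illustration, with the model written in the Davidon / Sun & Yuan form assumed in the reconstruction of (3)) checks that substituting s = w / (1 + a_k^T w) makes the conic model agree with a plain quadratic in w:

```python
import numpy as np

def conic(s, f_k, g_k, G, a_k):
    # Conic model (3): c(x_k + s) with collinear scaling factor gamma.
    gamma = 1.0 / (1.0 - a_k @ s)
    return f_k + gamma * (g_k @ s) + 0.5 * gamma**2 * (s @ G @ s)

def quadratic_in_w(w, f_k, g_k, G):
    # After w = s / (1 - a_k^T s), the model is quadratic in w.
    return f_k + g_k @ w + 0.5 * (w @ G @ w)

# Random instance: SPD matrix G, small horizon vector a_k.
rng = np.random.default_rng(0)
n = 3
f_k = 1.0
g_k = rng.standard_normal(n)
A = rng.standard_normal((n, n))
G = A @ A.T + n * np.eye(n)          # symmetric positive definite
a_k = 0.1 * rng.standard_normal(n)

w = rng.standard_normal(n)
s = w / (1.0 + a_k @ w)              # inverse of the w-substitution
assert np.isclose(conic(s, f_k, g_k, G, a_k), quadratic_in_w(w, f_k, g_k, G))
```

The identity holds because 1 - a_k^T s = 1 / (1 + a_k^T w), so the scaling factor gamma exactly cancels the denominators introduced by the substitution.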
We will need to relate the derivative of f to that of c; it follows from the chain rule that the two gradients are related through the collinear scaling, where g denotes the gradient of f. The conic function (3) is useful as a model function for minimization only if it has a unique minimizer. This is ensured by the conditions in (10); if eq. (10) holds, f will be called a normal conic function, see [2].
The horizon vector a can be computed using function and gradient values at any three collinear points. In particular, consider the iterates x_k and x_{k+1}, and let x_t be a third collinear point, with the quantities defined as before. For more detail see [7].

3-1 Outline of Conic Model
Step 3: Set x_{i+1} = x_i + lambda_i d_i, where lambda_i is obtained from the line search procedure.
Step 4: Compute the new gradient; if the convergence test is satisfied, go to Step 2, otherwise set i = i + 1 and go to Step 3.
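The iteration pattern of Steps 3-4 can be sketched as a generic conjugate gradient loop with an exact line search. The quadratic objective and the Fletcher-Reeves update below are stand-ins chosen for the sketch, not the paper's conic-model algorithm:

```python
import numpy as np

def cg_with_line_search(Q, b, x0, tol=1e-8, max_iter=100):
    # Minimize f(x) = 0.5 x^T Q x - b^T x (Q symmetric positive definite)
    # by conjugate gradients with an exact line search.
    x = x0.copy()
    g = Q @ x - b                        # gradient at the current iterate
    d = -g                               # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:      # convergence test (Step 4)
            break
        lam = -(g @ d) / (d @ Q @ d)     # exact line search along d
        x = x + lam * d                  # Step 3: x_{i+1} = x_i + lambda_i d_i
        g_new = Q @ x - b
        beta = (g_new @ g_new) / (g @ g) # Fletcher-Reeves coefficient
        d = -g_new + beta * d            # new conjugate search direction
        g = g_new
    return x
```

For a strictly convex quadratic this loop terminates at the minimizer in at most n iterations, which is the finite-termination property the conic and non-conic extensions generalize.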

New Conjugate Gradient Method for Extended Conic Models (Non-Conic Method)
The original conjugate direction methods were developed in such a way that they find the minimum of a conic function after a finite number of steps, making use of perfect line searches. This paper describes conjugate direction methods which minimize extended conic functions after a finite number of steps.
A new broad class of globalization approaches for solving constrained optimization is the class of non-conic models. The main idea of the non-conic model is, at a current iterate x_k, to compute a new point from an extended model. Conic models may not always be adequate to incorporate all the information which might be needed to represent the objective function f(x) successfully. In order to obtain a better global rate of convergence for minimization algorithms applied to more general functions than conic ones, several new algorithms are developed in this paper which are invariant to nonlinear scaling of conic functions. There is some precedent for this approach: if c(x) is a conic function, then a function f is defined as a nonlinear scaling of c(x) if the following condition holds:

f(x) = F(c(x)),  with dF/dc > 0,   ...(14)

where x* is the minimizer of c(x) with respect to x.
The following properties are immediately derived from the above condition:
i- Every contour line of c(x) is a contour line of f;
iii- the fact that x* is a global minimum of c(x) does not necessarily mean that it is a global minimum of f.
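These properties can be checked numerically. The sketch below is our own illustration, assuming the scaling form f(x) = F(c(x)) with F' > 0 from the reconstruction of (14); the particular c and F are invented for the example:

```python
import numpy as np

def c(x):
    # Simple model function with minimizer x* = (1, -2) (illustrative).
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def f(x):
    # Nonlinear scaling f = F(c) with F(t) = exp(t), so F'(t) > 0 everywhere.
    return np.exp(c(x))

x_star = np.array([1.0, -2.0])

# Property i: points on one contour of c lie on one contour of f.
p, q = np.array([2.0, -2.0]), np.array([1.0, -1.0])   # c(p) == c(q) == 1
assert np.isclose(f(p), f(q))

# The minimizer of c is a local minimizer of f (nearby points cost more).
for dx in ([0.1, 0.0], [0.0, 0.1], [-0.2, 0.3]):
    assert f(x_star + np.array(dx)) > f(x_star)
```

With this particular monotone F the global minimizer of c also happens to minimize f; property iii states that for a general nonlinear scaling this need not be the case.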

Derivation of the New Non-Conic Model
The implementation of the extended method has been performed for a general function F(c(x)) of the form (14). The unknown scaling quantities were expressed in terms of quantities available to the algorithm (i.e., function and gradient values of the objective function), using the expression for gamma defined by eq. (9); we then obtain the required update, since for any exact line search g_{i+1}^T d_i = 0.

A Sine Function as a Conic Functional Model
In this paper a new trigonometric (sine) function model is investigated and tested on a set of standard non-linear constrained test functions; it is assumed that condition (3) holds. The extended conjugate gradient algorithm developed from this new model is as follows, expressing F'_{i-1} and F'_i using the derivative of the sine function:
Step 3: Set x_{i+1} = x_i + lambda_i d_i, where lambda_i is obtained from the line search procedure.
Step 4: Compute the new gradient.
Step 6: Compute the new search direction d_{i+1}. If the convergence test is satisfied, go to Step 2; otherwise set i = i + 1 and go to Step 3.

Numerical Results
In order to demonstrate that the new algorithm is highly effective when compared with other standard optimization algorithms, two minimization algorithms are tested over nine non-linear equality constrained problems, see the Appendix. All the algorithms in this paper use the same exact line search (ELS) strategy, namely the cubic fitting technique fully described by Bunday, see [1]. The comparative performance of all of these algorithms is evaluated by considering NOF, NOI and NOC, where NOF is the number of function evaluations, NOI is the number of iterations and NOC is the number of constraint evaluations. The algorithms compared are:
1- Conic model
2- Non-conic model

Our numerical results, presented in Table (1), confirm that the non-conic model is superior to the quadratic model, the non-quadratic model and the conic model with respect to the totals of NOF, NOI and NOC. The surviving excerpt of Table (1) is:

Problem | Conic model (NOF, NOI, NOC) | Non-conic model (NOF, NOI, NOC)
        | 33, 15, 18                  | 36, 17, 19
8       | 58, 22, 25                  | 43, 18, 21
9       | 74, 21, 22                  | 60, 20, 21
Total   | 1620, 411, 434              | 1327, 307,