A Modified Augmented Lagrange Multiplier Method for Non-Linear Programming

In this paper, we investigate a new algorithm that employs an Augmented Lagrangian Method (ALM). It overcomes many of the difficulties associated with the penalty function method. The new incorporated algorithm has proved very effective, with an efficient convergence criterion.


Introduction
The class of general constrained optimization problems seeks the solution by replacing the original constrained problem with a sequence of unconstrained subproblems in which the objective function is formed from the original objective function of the constrained problem plus additional 'penalty' terms. The 'penalty' terms are made up of constraint functions multiplied by a positive coefficient. By making this coefficient larger and larger over the sequence of unconstrained subproblems, we force the minimizer of the objective function closer and closer to the feasible region of the original constrained problem.
However, as the penalty coefficient grows too large, the objective function of the unconstrained subproblem may become ill conditioned, making the optimization of the subproblem difficult. This issue is avoided, after the proof of convergence, by the so-called 'Augmented Lagrangian Method' (ALM), in which an explicit estimate of the Lagrange multipliers is included in the objective function. Hence, the optimality condition is incorporated into the objective function in order to improve the method's efficiency and reliability. The technique is based on solid theoretical considerations, and methods are commonly recommended for the initial choice of the Lagrange multipliers [3]. It has the following attractive features: 1. Its acceleration is achieved by updating the Lagrange multipliers. 2. Its starting point may be either feasible or infeasible. 3. At the optimum, it will automatically identify the active constraint set [1].
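In standard notation (a sketch; we write $\sigma$ for the penalty coefficient and $\lambda_i$ for the multiplier estimates), the two objectives discussed above take the following forms for equality constraints $c_i(x) = 0$:

$P(x, \sigma) = f(x) + \frac{\sigma}{2} \sum_i c_i(x)^2$ (penalty function),

$A(x, \lambda, \sigma) = f(x) - \sum_i \lambda_i\, c_i(x) + \frac{\sigma}{2} \sum_i c_i(x)^2$ (augmented Lagrangian),

so the multiplier term carries the constraint information and allows convergence without driving $\sigma$ to infinity.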

Quasi-Newton Methods
We use a quasi-Newton updating scheme to define the matrices $H_k$ in our quadratic model. Consider the quadratic function $q(x) = a + b^T x + \tfrac{1}{2} x^T G x$, where $a$ is a scalar, $b$ is a constant vector, and $G$ is a positive definite symmetric matrix. Quasi-Newton methods use the curvature information from the current iteration, and possibly the matrix $H_k$, to define $H_{k+1} = H_k + E_k$, where $E_k$ is a matrix of low rank, usually one or two. By using a rank-two update, we may also arrange that $H_k$ is always a positive definite, symmetric matrix. Many rank-two formulas may be used, but probably the most famous is the BFGS update (see, for instance, Fletcher [4]):
$H_{k+1} = H_k - \dfrac{H_k s_k s_k^T H_k}{s_k^T H_k s_k} + \dfrac{y_k y_k^T}{y_k^T s_k}$,
where $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$. Usually, one also requires that $c_1 < 1/2$, so that the Wolfe condition (5) is met by the exact minimizer of a quadratic function, and one then takes the full step $\alpha_k = 1$. If $x_{k+1}$ satisfies the Wolfe condition on the gradient (6), then $y_k^T s_k > 0$ and the update is well defined. Obviously, their method will not satisfy the quasi-Newton condition (4), $H_{k+1} s_k = y_k$, at each iteration, but it will keep $H_k$ positive definite [2].
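A minimal sketch of this rank-two update (Python with NumPy; the function name and the safeguard of skipping the update when $y_k^T s_k \le 0$ are our own illustrative choices):

import numpy as np

def bfgs_update(H, s, y, eps=1e-10):
    # Rank-two BFGS update of the Hessian approximation H.
    # s = x_{k+1} - x_k,  y = g_{k+1} - g_k.
    ys = y @ s
    if ys <= eps:
        # Curvature condition violated: skip the update. H stays
        # positive definite, but the quasi-Newton condition (4) is not met.
        return H
    Hs = H @ s
    return (H
            - np.outer(Hs, Hs) / (s @ Hs)  # subtract rank-one term
            + np.outer(y, y) / ys)         # add rank-one term

# Example: one update starting from the identity matrix
H = bfgs_update(np.eye(2), np.array([1.0, 0.5]), np.array([2.0, 1.5]))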

Inequality Constraints
We first consider the inequality constrained minimization problem: minimize $f(x)$ subject to $c_i(x) \ge 0$, $i \in L \cup G$, where $x$ is an $n$-dimensional vector and $L$ and $G$ are nonintersecting index sets. It is assumed throughout that $f$ and $c$ are twice-continuously differentiable, i.e., they possess continuous second partial derivatives. The constraints in eq. (9) are referred to as functional constraints. The classical method of solving this problem is due to Lagrange. The method removes the inequality constraints by considering the Lagrangian function
$L(x, \lambda) = f(x) - \sum_i \lambda_i\, c_i(x)$
and reduces the problem to the unconstrained case; $\lambda$ denotes the set of Lagrange multipliers for this problem.
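At a solution $x^*$, the first-order (Karush-Kuhn-Tucker) conditions couple the multipliers to the constraints; in standard form:

$\nabla f(x^*) - \sum_i \lambda_i^* \nabla c_i(x^*) = 0, \qquad \lambda_i^* \ge 0, \qquad \lambda_i^*\, c_i(x^*) = 0 \ \text{for all } i.$

The complementary slackness condition $\lambda_i^* c_i(x^*) = 0$ is what allows the method to automatically identify the active constraint set, as noted in the Introduction.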

Outlines of the Augmented Lagrangian Multiplier Method (Interior Penalty)
Consider the Augmented Lagrangian Multiplier Method obtained by minimizing the Augmented Lagrangian Function as a pseudo-objective function with an interior penalty function (see the sketch after the steps below).
Step 1: Set $x_0$ (initial point) and $\varepsilon$ (scalar termination); start with an arbitrary but small $\sigma_i$ (or, alternatively, take a value that starts the iteration in the right direction).
Step 2: Call BFGS to minimize $A(x, \lambda, \sigma)$.
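The interior pseudo-objective is left implicit in the outline above; one concrete possibility, assuming the classical inverse-barrier penalty (a logarithmic barrier $-\sigma \sum_i \log c_i(x)$ fits the same pattern), is

$A(x, \lambda, \sigma) = f(x) - \sum_i \lambda_i\, c_i(x) + \sigma \sum_i \frac{1}{c_i(x)}, \qquad c_i(x) > 0,$

where $\sigma$ is decreased between outer iterations so that the barrier fades while the multiplier term retains the constraint information.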

Equality Constraints
We consider the equality constrained minimization problem: minimize $f(x)$ subject to $h_j(x) = 0$, $j = 1, \dots, m$, where $f$ and the $h_j$ are continuous and usually assumed to possess continuous second partial derivatives. The constraints in eq. (13) are referred to as functional constraints. The problem can be converted into an unconstrained minimization problem by constructing a function of the form
$L(x, \lambda) = f(x) + \sum_{j=1}^{m} \lambda_j\, h_j(x)$,
where $\lambda$ is an $m$-vector of Lagrange multipliers, one for each constraint [4]. In general, we can set the partial derivatives to zero to find the minimum:
$\nabla_x L(x, \lambda) = 0, \qquad \nabla_\lambda L(x, \lambda) = 0.$
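A one-line illustration: for minimizing $x_1^2 + x_2^2$ subject to $x_1 + x_2 - 1 = 0$, the function is $L(x, \lambda) = x_1^2 + x_2^2 + \lambda (x_1 + x_2 - 1)$; setting the partial derivatives to zero gives $2x_1 + \lambda = 0$, $2x_2 + \lambda = 0$, and $x_1 + x_2 = 1$, whence $x_1 = x_2 = 1/2$ and $\lambda = -1$.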

Outlines of the Augmented Lagrangian Multiplier Method (Exterior penalty):
This method is represented in the following form: minimize the Augmented Lagrangian Function as a pseudo-objective function with the Exterior Penalty Function Method (a sketch of this pseudo-objective follows the steps below). One approach is to treat the $\lambda_i$'s as design variables, but this increases the number of unknown design variables. The other approach is normally taken, as described below.
Step 1: Set $x_0$ (initial point) and $\varepsilon$ (scalar termination); start with $\lambda_i = 0$, $i = 1, \dots, L$, and an arbitrary but small $\sigma_i$ (or, alternatively, take a value that starts the iteration in the right direction).
Step 2: Minimize $A(x, \lambda, \sigma)$ and iterate until convergence is obtained.
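Writing the constraints as $g_i(x) \le 0$, a standard choice for this pseudo-objective (a sketch of the usual exterior-penalty form; the paper's own equation numbers are not reproduced here) is

$A(x, \lambda, \sigma) = f(x) + \sum_i \left[ \lambda_i\, \psi_i + \sigma\, \psi_i^2 \right], \qquad \psi_i = \max\!\left( g_i(x),\, -\frac{\lambda_i}{2\sigma} \right),$

with the update $\lambda_i \leftarrow \lambda_i + 2\sigma\, \psi_i$ between outer iterations, which is consistent with starting from $\lambda_i = 0$ in Step 1.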

Features of Augmented Lagrangian Multiplier Methods:
In the Augmented Lagrangian Multiplier Method, the proper value of the parameter is taken to be zero or one [6].

General Introduction to Nonlinear Constrained Optimization:
Consider the general constrained minimization problem: minimize $f(x)$ subject to $c_i(x) \ge 0$, $i = 1, \dots, m$, where $x$ is an $n$-dimensional vector and the functions $f(x)$ and $c_i(x)$ are continuous and usually assumed to possess continuous second partial derivatives. The constraints in eq. (8) are referred to as functional constraints [2].
There exists an important class of methods to solve the general constrained optimization problem. As described in the Introduction, this class replaces the original constrained problem with a sequence of unconstrained subproblems whose objective combines $f(x)$ with penalty terms, and the ill conditioning caused by a growing penalty coefficient is avoided by the so-called 'Augmented Lagrange method', in which an explicit estimate of the Lagrange multipliers $\lambda$ is included in the objective. Hence, the objective function becomes the augmented Lagrangian $A(x, \lambda, \sigma)$ given above [7].

Outlines of the general Augmented Lagrangian Multiplier Method:
The general optimization problem in eq. (8) is transformed as: Minimize $A(x, \lambda, \sigma)$. Now follow these steps:
Step 1: Set $x_0$ (initial point) and $\varepsilon$ (scalar termination); start with an arbitrary but small $\sigma_i$ (or, alternatively, take a value that starts the iteration in the right direction), and iterate until convergence.
Step 4: Convergence test for the ALM: if all side constraints are satisfied and $i = n$ (where $i$ is the iteration counter and $n$ is the number of variables), then the method has converged; stop. Otherwise, continue.
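A compact sketch of the outer loop just outlined (Python; the inner minimizer, the multiplier update, and all tolerances are illustrative assumptions, with SciPy's BFGS standing in for the BFGS routine called in Step 2):

import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, g, x0, n_outer=20, sigma=1.0, tol=1e-6):
    # Outer ALM loop for: minimize f(x) subject to g_i(x) <= 0.
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(g(x)))                    # Step 1: lambda_i = 0, small sigma

    def A(z):                                    # pseudo-objective A(z, lam, sigma)
        psi = np.maximum(g(z), -lam / (2.0 * sigma))
        return f(z) + np.sum(lam * psi + sigma * psi**2)

    for _ in range(n_outer):
        x = minimize(A, x, method="BFGS").x      # Step 2: unconstrained BFGS solve
        psi = np.maximum(g(x), -lam / (2.0 * sigma))
        if np.all(np.abs(psi) < tol):            # Step 4: convergence test
            break
        lam = lam + 2.0 * sigma * psi            # multiplier update
        sigma = min(10.0 * sigma, 1e8)           # grow the exterior penalty
    return x, lam

# Example: minimize (x1 - 2)^2 + (x2 - 1)^2 subject to x1 + x2 - 1 <= 0
f = lambda z: (z[0] - 2.0)**2 + (z[1] - 1.0)**2
g = lambda z: np.array([z[0] + z[1] - 1.0])
x_star, lam_star = augmented_lagrangian(f, g, [0.0, 0.0])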

New Incorporated Augmented Lagrangian Multiplier Method
Infeasible sub-optimums are not practical for solving the problem, because the objective function is not defined outside the feasible region and is discontinuous on its boundary; the path of feasible sub-optimums, by contrast, is continuous everywhere. Prasad presented a formulation which offers a general class of penalty functions and avoids the occurrence of extremely large numerical values for the penalty associated with large constraint violations. We include the optimality condition in the algorithm in order to improve its efficiency and reliability. Because the way this penalty is imposed often leads to a numerically ill-conditioned problem, a method is devised whereby only a moderate penalty is applied in the initial stages and this penalty is increased as the optimization progresses. The New Incorporated Augmented Lagrangian Multiplier Method based on the system defined in (16)-(18) may be modified further as outlined below.

Outlines of the New Proposed Algorithm:
Step 1: Set $x_0$ (initial point) and $\varepsilon$ (scalar termination).
Step 5: Check the stopping criteria.
Step 6: Go to Step 2.

Convergence Analysis of the New Proposed Algorithm:
The convergence analysis of the augmented Lagrangian method is similar to that of the quadratic penalty method, but significantly more complicated because there are two parameters, $\lambda$ and $\sigma$, instead of just one. As a straightforward generalization of the previous method, we can define
$F(x; \lambda, \sigma) = \nabla f(x) - \sum_i \left[ \lambda_i - \sigma\, c_i(x) \right] \nabla c_i(x)$
and solve $F(x; \lambda, \sigma) = 0$ for $x$, with $(\lambda, \sigma)$ as parameters. First of all, assuming as usual that the constraints vanish at the solution, $F(x^*; \lambda^*, \sigma) = 0$ for all $\sigma \ge 0$. Moreover, the Jacobian of $F$ (with respect to the variables $x$) at $(x^*, \lambda^*)$ is $\nabla_x F(x^*; \lambda^*, \sigma) = \nabla^2_{xx} L(x^*, \lambda^*) + \sigma\, \nabla c(x^*)^T \nabla c(x^*)$. Assuming $x^*$ is a nonsingular point of the NLP, and using the sufficient condition, this matrix is positive definite, and hence nonsingular, once $\sigma$ is large enough. Therefore, there exists $\bar{\sigma} > 0$ such that $\nabla_x F(x^*; \lambda^*, \sigma)$ is nonsingular for all $\sigma \ge \bar{\sigma}$. The implicit function theorem then implies that there exists a neighborhood $N$ of $\lambda^*$ such that, for each $\sigma \ge \bar{\sigma}$, there exists a differentiable function $x(\lambda, \sigma)$ with $x(\lambda^*, \sigma) = x^*$ and $F(x(\lambda, \sigma); \lambda, \sigma) = 0$ on $N$. Substituting this into (17) and rearranging the last equation shows that the corresponding multiplier estimates converge to $\lambda^*$ as well. In other words, it is straightforward to show that $x(\lambda, \sigma) \to x^*$ as $\lambda \to \lambda^*$ for every fixed $\sigma \ge \bar{\sigma}$, once $\|\lambda - \lambda^*\|$ is sufficiently small. We have therefore proved the following theorem.
Using the triangle inequality for integrals, it follows that the distance $\|x(\lambda, \sigma) - x^*\|$ is bounded in terms of $\|\lambda - \lambda^*\|$. The functions $x(\lambda, \sigma)$ are defined by the equations $F(x(\lambda, \sigma); \lambda, \sigma) = 0$. Differentiating these equations with respect to $\lambda$ and simplifying the results yields:
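Under the form of $F$ sketched above, this differentiation is a standard implicit-function computation: since $\partial F / \partial \lambda_i = -\nabla c_i(x)$, the chain rule gives

$\nabla_x F\, \frac{\partial x}{\partial \lambda_i} - \nabla c_i(x) = 0, \qquad \text{so} \qquad \frac{\partial x}{\partial \lambda_i} = \left[ \nabla_x F \right]^{-1} \nabla c_i(x),$

which quantifies how sensitively the subproblem minimizer responds to the multiplier estimates.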

Numerical Results
In order to assess its performance, the new algorithm is tested over (10) test problems. All the algorithms in this paper use the same ELS strategy, namely the quadratic interpolation technique.
The comparative performance of all of these algorithms is evaluated by considering NOF, NOI, and NOC, where NOF is the number of function evaluations, NOI is the number of iterations, and NOC is the number of constraint evaluations. NOF in particular is the best measure of the actual work done, since it depends on the line search and the accuracy required.