The methods described below can be used to solve an optimization problem with constraints.

Graphical optimization method

This is the simplest and most convenient method for solving an optimization problem involving one or two design variables. If the optimization problem is a function of a single design variable, the optimum can be obtained by plotting the objective function and locating its maximum or minimum. For problems involving two design variables, the feasible region is first identified by plotting the constraint functions. Then contours of the objective function (a contour is the curve comprising points with the same objective function value) are drawn, and the optimum is found by visual inspection.
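The graphical idea can be approximated numerically: evaluate the objective on a grid, mask out the infeasible region, and read off the minimum. A minimal sketch with invented data (objective and constraint below are illustrative, not from the text):

```python
import numpy as np

# Hypothetical two-variable problem:
#   minimize  f(x, y) = (x - 3)^2 + (y - 2)^2
#   subject to  x + y <= 3,  with 0 <= x, y <= 4 covered by the grid
x = np.linspace(0.0, 4.0, 401)
y = np.linspace(0.0, 4.0, 401)
X, Y = np.meshgrid(x, y)

F = (X - 3.0) ** 2 + (Y - 2.0) ** 2
feasible = X + Y <= 3.0            # constraint defining the feasible region

# Mask infeasible points, then locate the best feasible grid point,
# mimicking the visual assessment of objective contours.
F_masked = np.where(feasible, F, np.inf)
i, j = np.unravel_index(np.argmin(F_masked), F_masked.shape)
print(X[i, j], Y[i, j], F_masked[i, j])   # optimum near (2, 1), f near 2
```

Here the unconstrained minimum (3, 2) is infeasible, so the constrained optimum lies on the boundary x + y = 3, which the grid search recovers approximately.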

Optimality criteria method

Optimality criteria are the conditions an objective function must satisfy at its minimum point. Only the minimum of an objective function can be found using optimality criteria; they cannot be used directly to find a maximum. To find the maximum of a function, multiply it by -1 and minimize the result.

For unconstrained problems

The first-order optimality condition for a local minimum of an unconstrained problem f(x) at the optimum point x* is

    ∇f(x*) = 0

where ∇f(x*) is the gradient of f(x) evaluated at x*.

Points that satisfy this condition are called stationary points. This condition is necessary but not sufficient for a minimum, as it only indicates that the function value does not change to first order in the proximity of x*. A sufficient condition for a minimum of f(x) is that the Hessian matrix H(x*) be positive definite. The cases are:

· If the Hessian matrix is positive definite, the stationary point is a local minimum.

· If the Hessian matrix is negative definite, the stationary point is a local maximum.

· If the Hessian matrix is indefinite, the stationary point is a saddle (inflection) point.

· If the Hessian matrix is semi-definite, the sufficient conditions are inconclusive.

For constrained problems

The Lagrangian function combines the constraints (equality and inequality) with the objective function. It is formulated such that the necessary conditions for its minimum are the same as those for the constrained problem.
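The Hessian classification above can be checked numerically from eigenvalues; a minimal sketch (function name is illustrative):

```python
import numpy as np

def classify_stationary_point(H, tol=1e-9):
    """Classify a stationary point from the eigenvalues of the Hessian H.

    Positive definite  -> local minimum
    Negative definite  -> local maximum
    Indefinite         -> saddle (inflection) point
    Semi-definite      -> second-order test inconclusive
    """
    eig = np.linalg.eigvalsh(np.asarray(H, dtype=float))
    if np.all(eig > tol):
        return "local minimum"
    if np.all(eig < -tol):
        return "local maximum"
    if np.any(eig > tol) and np.any(eig < -tol):
        return "saddle point"
    return "inconclusive"

# f(x, y) = x^2 + y^2 has Hessian 2*I at its stationary point (0, 0):
print(classify_stationary_point([[2.0, 0.0], [0.0, 2.0]]))  # local minimum
```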

Before the function is defined, all inequality constraints g_i(x) ≤ 0, if any, are converted to equality constraints:

    g_i(x) + s_i² = 0

where g_i represents the i-th inequality constraint and s_i is known as a slack variable. It is squared to avoid adding a negative number to the inequality constraint. The Lagrangian function is then written as

    L(x, u, v, s) = f(x) + Σ_i u_i (g_i(x) + s_i²) + Σ_j v_j h_j(x)

Here x is the vector of design variables, u ≥ 0 is the vector of Lagrange multipliers for the inequality constraints, v is the vector of Lagrange multipliers for the equality constraints h_j(x) = 0, and s is the vector of slack variables. As the above function is unconstrained, the necessary conditions for its minimum are the Karush-Kuhn-Tucker (KKT) conditions, the first-order necessary conditions for minimizing an objective function with constraints on the design variables. The gradient condition for optimality of the constrained minimization problem is

    ∇f + Σ_i u_i ∇g_i + Σ_j v_j ∇h_j = 0

where u_i and v_j are the Lagrange multipliers associated with the constraints g_i(x) ≤ 0 (inequality constraints) and h_j(x) = 0 (equality constraints). The last set of conditions, u_i g_i = 0, is called complementary slackness. It shows that either u_i = 0 or g_i = 0.

If the slack variable s_i is zero, the corresponding constraint is active, with a non-negative Lagrange multiplier u_i. If the Lagrange multiplier u_i is zero, the corresponding constraint is inactive. This shows that the gradient condition involves only the active inequality constraints. For a point x* to be a minimum of a function subject to equality constraints h_j(x) = 0 and inequality constraints g_i(x) ≤ 0, the following conditions are necessary:

· x* should be a regular point.

· The gradient condition ∇f(x*) + Σ_i u_i ∇g_i(x*) + Σ_j v_j ∇h_j(x*) = 0 must be satisfied.

· The constraints should be satisfied at the optimum.

· The switching (complementary slackness) conditions u_i g_i = 0 must be met.

· The slack variables must satisfy the feasibility conditions, i.e. s_i² ≥ 0 (equivalently g_i(x*) ≤ 0).
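The gradient condition can be verified numerically at a candidate point. A sketch on a hypothetical problem (data invented for illustration):

```python
import numpy as np

# Hypothetical problem:
#   minimize  f(x) = x1^2 + x2^2   subject to  h(x) = x1 + x2 - 1 = 0
# The gradient condition  grad f + v * grad h = 0  gives
#   (2*x1, 2*x2) + v * (1, 1) = 0, and with x1 + x2 = 1 the candidate
# point is x* = (0.5, 0.5) with multiplier v = -1.
x_star = np.array([0.5, 0.5])
v = -1.0

grad_f = 2.0 * x_star                  # gradient of the objective at x*
grad_h = np.array([1.0, 1.0])          # gradient of the equality constraint

residual = grad_f + v * grad_h         # should vanish at the optimum
print(residual)                        # -> [0. 0.]

h_value = x_star.sum() - 1.0           # constraint must hold at the optimum
print(h_value)                         # -> 0.0
```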

· The Lagrange multipliers for the inequality constraints must be non-negative, u_i ≥ 0.

Linear programming

An optimization problem that has a linear cost/objective function and linear constraint equations is called a linear programming problem. The constraints can be equalities, inequalities, or both. Most real-life optimization problems, however, are non-linear. Such problems can be handled by solving a succession of linear programs, or by linearizing the objective function and constraints about a known point.
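As an illustration, a small LP with invented data can be solved directly with SciPy's linprog, which expects constraints in ≤ form:

```python
from scipy.optimize import linprog

# Hypothetical LP:
#   minimize    2*x1 + 3*x2
#   subject to  x1 + x2 >= 4      (rewritten as -x1 - x2 <= -4)
#               x1, x2 >= 0
c = [2.0, 3.0]
A_ub = [[-1.0, -1.0]]   # >= constraint negated into the <= form linprog expects
b_ub = [-4.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # optimum puts all weight on the cheaper variable
```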

The standard form of a linear programming (LP) problem can be stated as

    minimize c^T x
    subject to A x = b, x ≥ 0

where A is an m×n matrix of constraint coefficients, x is the vector of design variables, c is the vector of objective function coefficients, and b is the vector of right-hand-side (R.H.S.) constants of the constraint equations. Note that the standard form of an LP problem is a minimization, i.e. the goal is to minimize the objective/cost function. All constraints are linear equalities with non-negative right-hand sides, and all design variables are restricted to be non-negative.

Conversion to standard LP form

To transform a given optimization problem into the standard LP form, the following steps should be taken:

· If the objective/cost function is of maximization type, convert it to minimization by multiplying it by -1.

· The standard LP form requires a non-negative constant on the right-hand side of each constraint. If a constant is negative, multiply the constraint by -1.

· For less-than type constraints, a non-negative slack variable is added to convert the constraint to an equality.

· To convert greater-than type constraints to equalities, a non-negative surplus variable is subtracted.

· Unrestricted (free) variables are written as the difference of two new non-negative variables. If the difference is positive, the original variable is positive, and vice versa.

· If a constant is present in the objective function, it can simply be ignored while solving, since the location of the optimum is not affected by a constant; after obtaining the solution, the optimum value is adjusted to account for the constant. Alternatively, a dummy design variable multiplied by the constant is introduced, with a corresponding constraint fixing that variable at the value 1.

Linear programming problems are convex problems. When an LP problem is in its standard form, let m be the number of constraint equations and n the number of design variables. The following cases may arise:

· If m = n, a solution can be obtained by solving the constraint equations alone, without considering the objective function.

· If m > n, some constraints are linearly dependent on other constraints (or the system is inconsistent).

· If m < n, the system has infinitely many solutions, and optimization is needed to select the best one. In this case, n − m of the variables (called non-basic variables) are set to zero and the constraint equations are solved for the remaining m variables (called basic variables); the result is a basic solution.
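The conversion steps above can be sketched on a hypothetical problem, checking the result against the known optimum:

```python
from scipy.optimize import linprog

# Hypothetical problem to convert:
#   maximize  3*x1 + 2*x2   subject to  x1 + x2 <= 4,  x1 <= 3,  x >= 0
# Step 1: maximization -> minimization (multiply the objective by -1).
# Step 2: add slack variables s1, s2 >= 0 to turn the <= rows into equalities:
#   x1 + x2 + s1 = 4
#   x1      + s2 = 3
c = [-3.0, -2.0, 0.0, 0.0]            # slack variables get zero cost
A_eq = [[1.0, 1.0, 1.0, 0.0],
        [1.0, 0.0, 0.0, 1.0]]
b_eq = [4.0, 3.0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
max_value = -res.fun                  # undo the sign flip for the answer
print(res.x[:2], max_value)           # x = (3, 1), maximum value 11
```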

The constraint equations are solved for the basic variables in terms of the non-basic variables. The non-basic variables are assigned the value zero, and the values of the basic variables are then obtained by solving the resulting system. A basic solution is feasible if all the variables are non-negative; otherwise it is infeasible.
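This enumeration of basic solutions can be sketched for a small standard-form system (the data are hypothetical):

```python
import itertools
import numpy as np

# Standard-form system A x = b, x >= 0 with m = 2 constraints and n = 4
# variables, so there are at most C(4, 2) = 6 basic solutions.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])
m, n = A.shape

basic_feasible = []
for cols in itertools.combinations(range(n), m):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                       # singular basis: no basic solution
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)   # non-basic variables stay zero
    if np.all(x >= -1e-12):            # feasible if all components non-negative
        basic_feasible.append(x)

print(len(basic_feasible))             # number of basic feasible solutions
```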

The value of the objective function can be found by evaluating it at the resulting design variables. The number of basic solutions that can be obtained by the procedure discussed above is given by

    C(n, m) = n! / (m! (n − m)!)

Simplex method

The simplex method for LP problems allows the optimum solution of a design problem to be found without probing all of these possibilities. The operating principle of the simplex method is to find, at each step, a basic feasible solution with a lower objective function value than the previous basic feasible solution.

At each iteration, a new basic variable is introduced into the basic variable set and a current basic variable is moved to the set of non-basic variables. The optimum solution is obtained when no non-basic variable can be moved into the basic set to further reduce the objective.

For less-than type constraints

If a design problem has only less-than type constraints, solving it by the simplex method involves the steps below:

· Convert the objective function to standard LP form.

· Convert the constraints to standard LP form by adding non-negative slack variables.

A starting basic feasible solution is obtained by treating the slack variables as basic variables. For the next iteration, select as the entering basic variable the one with the largest negative coefficient in the objective function row (it causes the greatest decrease in the objective function value). To move a basic variable out to the non-basic set, take the ratios of the right-hand-side constants of the constraints to the coefficients of the entering variable. The smallest ratio identifies the current basic variable to be removed to the non-basic set. The constraint row containing the smallest ratio becomes the pivot row, and the coefficient of the new basic variable in it is made 1.

The new basic variable is then eliminated from the other constraints by row operations with the pivot row, and the objective function is re-expressed in terms of the non-basic variables. This process continues until all the coefficients in the objective function row are non-negative, which indicates that the function has reached its lowest possible value and the current basic feasible solution is optimum.
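The pivoting procedure above can be sketched as a minimal tableau simplex for ≤-type constraints with b ≥ 0 (an illustrative sketch, not a production solver; it does not handle unbounded or degenerate problems):

```python
import numpy as np

def simplex(c, A, b, max_iter=100):
    """Minimize c^T x subject to A x <= b, x >= 0, assuming b >= 0 so the
    slack variables provide a starting basic feasible solution."""
    m, n = A.shape
    # Tableau: constraint rows [A | I | b] with the cost row [c | 0 | 0] below.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[m, :n] = c
    basis = list(range(n, n + m))          # slacks start as the basic variables

    for _ in range(max_iter):
        j = int(np.argmin(T[m, :-1]))      # entering column: most negative cost
        if T[m, j] >= -1e-12:
            break                          # all costs non-negative: optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))         # leaving row: smallest ratio
        T[i, :] /= T[i, j]                 # make the pivot element 1
        for r in range(m + 1):             # eliminate the entering column
            if r != i:                     # from all other rows
                T[r, :] -= T[r, j] * T[i, :]
        basis[i] = j

    x = np.zeros(n + m)
    for i, j in enumerate(basis):
        x[j] = T[i, -1]
    return x[:n], -T[m, -1]                # cost-row RHS holds -objective

# Hypothetical example: minimize -3*x1 - 2*x2 s.t. x1 + x2 <= 4, x1 <= 3.
x, z = simplex(np.array([-3.0, -2.0]),
               np.array([[1.0, 1.0], [1.0, 0.0]]),
               np.array([4.0, 3.0]))
print(x, z)   # x = (3, 1), objective -11
```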

The objective function value corresponding to the optimum solution is obtained by evaluating the objective function with the non-basic variables set to 0.

For greater-than and equality type constraints

For greater-than (GE) and equality (EQ) type constraints, no obvious variable can be selected as a basic variable to form an initial basic feasible solution. GE-type constraints have a surplus variable, but it enters with negative polarity and would give an infeasible solution if treated as a basic variable. To overcome this problem, additional non-negative variables called artificial variables are added to the GE and EQ type constraints.

An artificial objective function φ(x), defined as the sum of the artificial variables, is introduced. The simplex method for GE and EQ type constraints involves two phases. In phase 1, the artificial objective function is minimized until φ(x) = 0. This means the artificial variables have left the basis, and the phase-1 optimum is a basic feasible solution of the problem before the artificial variables were introduced.
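Phase 1 can itself be posed as an LP and solved with any LP solver; a sketch with hypothetical data:

```python
import numpy as np
from scipy.optimize import linprog

# Phase 1: to find a basic feasible solution of A x = b, x >= 0, append
# artificial variables a >= 0 and minimize their sum.  If the minimum is 0,
# the x-part of the solution is feasible for the original constraints.
A = np.array([[1.0, 1.0, -1.0],       # e.g. x1 + x2 - s1 = 2  (a >= constraint
              [2.0, 1.0, 0.0]])       #      with surplus s1) and 2*x1 + x2 = 5
b = np.array([2.0, 5.0])
m, n = A.shape

c_phase1 = np.concatenate([np.zeros(n), np.ones(m)])  # cost only on artificials
A_eq = np.hstack([A, np.eye(m)])                      # [A | I]

res = linprog(c_phase1, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (n + m))
x_feasible = res.x[:n]
print(res.fun)          # ~0: artificial variables driven out of the basis
print(A @ x_feasible)   # ~b: x_feasible satisfies the original constraints
```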

Phase 2 can now be started, and it proceeds in the footsteps of a standard simplex while ignoring the artificial objective function. The process of introducing new basic variables continues until all the coefficients in the objective function row are non-negative, indicating that the function has reached its lowest possible value and the current basic feasible solution is optimum. The objective function value at the optimum is obtained by evaluating the objective function with the non-basic variables set to 0. Since the optimization problem under consideration is non-linear, i.e. it has a non-linear objective function and non-linear constraints, the graphical optimization and linear programming methods cannot be used to solve it.
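For such non-linear constrained problems, a general-purpose NLP solver is needed instead; a hypothetical sketch using SciPy's SLSQP method (the objective and constraint are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical non-linear problem:
#   minimize  (x1 - 1)^2 + (x2 - 2)^2   subject to  x1^2 + x2^2 <= 4
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# SciPy's "ineq" constraints require fun(x) >= 0, so the circle constraint
# is written as 4 - x1^2 - x2^2 >= 0.
constraints = [{"type": "ineq", "fun": lambda x: 4.0 - x[0] ** 2 - x[1] ** 2}]

res = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP",
               constraints=constraints)
print(res.x, res.fun)   # optimum on the circle boundary, toward (1, 2)
```

The unconstrained minimum (1, 2) lies outside the circle of radius 2, so the solver returns the boundary point closest to it.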