WIT Press


Search Direction Improvement for Gradient-Based Optimization Problems

Price: Free (open access)

Volume: 80

Pages: 10

Published: 2005

Size: 428 kb

Paper DOI: 10.2495/OP050011

Copyright: WIT Press

Author(s): S. Ganguly & W. L. Neu

Abstract

Most gradient-based optimization algorithms calculate the search vector from the gradient or the Hessian of the objective function. This causes the optimization algorithm to perform poorly in cases where the dimensionality of the objective function is less than that of the problem. Although some methods, such as the Modified Method of Feasible Directions, tend to overcome this shortcoming, they in turn perform poorly in situations with competing constraints. This paper introduces a simple modification to the calculation of the search vector that not only provides significant improvements in the solutions of optimization problems but also helps to reduce or, in some cases, eliminate the problem of competing constraints.

1 Introduction

An optimization problem can be defined, in terms of a set of decision variables, $\mathbf{X}$, as:

Minimize the objective function: $F(\mathbf{X})$,
subject to:
$g_j(\mathbf{X}) \le 0 \quad \forall\, j = 1, \dots, m$ (inequality constraints),
$h_k(\mathbf{X}) = 0 \quad \forall\, k = 1, \dots, l$ (equality constraints),
$X_i^l \le X_i \le X_i^u \quad \forall\, i = 1, \dots, n$ (side constraints). (1)

A number of numerical algorithms have been devised to solve this problem. Gradient-based algorithms are based on the following recursive equation:

$\mathbf{X}^q = \mathbf{X}^{q-1} + \alpha\,\mathbf{S}$ (2)

where $\mathbf{X}^{q-1}$ and $\mathbf{X}^q$ are the vectors of the decision variables in the $(q-1)$th and the $q$th iterations respectively. Considering the optimization problem in an
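As a concrete illustration of the recursive update in eq. (2), the sketch below applies it with the common steepest-descent choice of search vector, $\mathbf{S} = -\nabla F(\mathbf{X})$, to an arbitrary unconstrained quadratic test function. The objective, step size $\alpha$, iteration limit, and stopping tolerance are illustrative assumptions, not taken from the paper, which concerns a different (modified) search-vector calculation.

import numpy as np

def objective(x):
    # Hypothetical test function F(X) = (x0 - 1)^2 + (x1 + 2)^2,
    # chosen only to demonstrate the update; not from the paper.
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def gradient(x):
    # Analytic gradient of the test function above.
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])

x = np.zeros(2)           # X^0: initial decision variables
alpha = 0.1               # assumed fixed step size (a line search is typical)
for q in range(1, 101):
    s = -gradient(x)      # search vector from the gradient alone
    x = x + alpha * s     # X^q = X^{q-1} + alpha * S, as in eq. (2)
    if np.linalg.norm(s) < 1e-8:
        break

print(x)  # iterates converge toward the unconstrained minimum at (1, -2)

In a constrained problem such as eq. (1), the search vector must additionally account for active inequality and equality constraints, which is where feasible-direction methods, and the modification proposed in this paper, come into play.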

Keywords: optimization, multidisciplinary design optimization, gradient-based algorithm, method of feasible directions, search direction