Methods (Electrical Machine) Part 1


In general, numerical optimisation algorithms are iterative methods, constructed to reach the desired optimum in successive steps. This is performed following particular rules to vary the object variables and to determine the search direction. The various algorithms differ only in the choice of step length, the determination of the search direction and the choice of a stopping criterion. A general form of an optimisation algorithm is given by the following steps:


step 0: Choose a start vector x^(0) in the admissible space and set the iteration counter k = 0.

step 1: Evaluate the solution vector according to a quality function.

step 2: Check whether a stopping criterion is fulfilled. If yes, stop the optimisation; if not, set k = k + 1.

step 3: Generate a new solution vector by variation of the object variables using a suitable step length and search direction. Continue with step 1.
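
As a rough illustration, a minimal Python skeleton of this generic scheme might look as follows; the helper functions evaluate, stopping_criterion and generate_new_solution are hypothetical placeholders for the algorithm-specific rules described above.

import numpy as np

def optimise(x0, evaluate, stopping_criterion, generate_new_solution):
    x = np.asarray(x0, dtype=float)                # step 0: start vector in the admissible space
    k = 0                                          # step 0: iteration counter
    quality = evaluate(x)                          # step 1: quality function evaluation
    while not stopping_criterion(x, quality, k):   # step 2: stop or continue
        k += 1
        x = generate_new_solution(x, quality)      # step 3: vary the object variables
        quality = evaluate(x)                      # back to step 1
    return x, quality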

With the given properties of the electromagnetic optimisation problems, the requirements of the optimisation algorithms can be formulated. Numerical methods have to be examined with regard to the following criteria:

• reliability

• robustness

• insensitivity to stochastic disturbances

• application range

• accuracy

• stable solutions

• performance.

Optimisation algorithms can be classified into:

• deterministic or stochastic and

• direct or indirect methods.

Deterministic methods are basically local optimisation methods, often based on the construction of derivatives or approximations of the derivative of the objective function (Fletcher 40, Bertsekas 10, Rao). Such gradient-based methods, e.g. Conjugate Gradient (CG), Newton, Quasi-Newton, Broyden-Fletcher-Goldfarb-Shanno (BFGS), etc., are very popular, as they are effective and converge to the local optimum in a small number of steps. This low number of quality function evaluations would be ideal when applying a computationally rather expensive FEM analysis to evaluate the objective function. If no analytical objective function exists or the derivative is difficult to obtain, however, the use of these methods is not appropriate. Furthermore, these methods are very sensitive to stochastic disturbances, which are especially present in the derivative information they are based upon. Most deterministic methods additionally require the transformation of a constrained optimisation problem into an unconstrained one. In the case of a multimodal objective function, as is often encountered in multiobjective optimisations, these methods are unable to find the global minimum (optimum).
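
As a hedged illustration (the test function and the use of SciPy's BFGS implementation are assumptions, not part of the source): started on the slope of one basin, a gradient-based method converges in a few iterations, but only to the nearest local minimum of a multimodal function.

import numpy as np
from scipy.optimize import minimize

def objective(x):
    # hypothetical multimodal test function with several local minima
    return float(np.sum(x**2) + 10.0 * np.sum(np.cos(2.0 * x)))

result = minimize(objective, x0=np.array([1.0, 1.0]), method="BFGS")
print(result.x, result.fun, result.nit)   # few iterations, nearest local minimum only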

An effective approach to compute the sensitivity information during an FE-analysis was introduced to field computation by Park et al. 86, 87: the method of adjoint variables. This method had previously been applied successfully in electronic circuit optimisation (Director & Rohrer, Vandewalle et al. 117). Here, the sensitivity of the objective function with respect to a set of design parameters can be computed with only two solutions. This basically requires the development of a mesh generator and/or solver specialised for a particular optimisation task (Dappen 29, Ramirez & Freeman).
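
As a hedged illustration of the two-solution property, the following sketch assumes a generic linear discretised model K(p) u = f and a scalar objective J = c^T u; this is not the specific finite element formulation of the cited papers, and all names are illustrative only.

import numpy as np

def adjoint_sensitivity(K, dK_dp, f, c):
    # forward (state) solution:   K u = f
    u = np.linalg.solve(K, f)
    # adjoint solution:           K^T lambda = c
    lam = np.linalg.solve(K.T, c)
    # dJ/dp_i = -lambda^T (dK/dp_i) u : one matrix-vector product per design
    # parameter, no additional system solves are required
    return np.array([-lam @ (dKi @ u) for dKi in dK_dp])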

In general, the human interaction involved in formulating an optimisation problem, in particular in finding the derivatives, is a considerable economic factor when evaluating the efficiency of any optimisation method. The preparation for such an optimisation task might require weeks, while the execution of the actual optimisation run is a matter of minutes. Over the past years, research has been carried out on the automatic differentiation of computer code. The idea is to provide first and higher order derivatives of coded vector functions without human interaction. A variety of automatic differentiation software is already available, such as ADIC (Bischof et al. 14), ADOL-C (Coleman & Jonsson 26), PCOMP (Dobmann et al. 33), etc. At present, software code contained in a single file with up to 10,000 lines of code can be automatically differentiated.
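
The principle can be sketched in a few lines of Python: a forward-mode "dual number" carries a value together with its derivative, so any coded function built from supported operations is differentiated without hand-derived expressions. This sketch is illustrative only and is unrelated to the internals of ADIC, ADOL-C or PCOMP.

class Dual:
    # value together with its derivative with respect to the chosen input
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1      # any coded function of x

x = Dual(2.0, 1.0)                    # seed derivative dx/dx = 1
y = f(x)
print(y.val, y.der)                   # value f(2) = 17, derivative f'(2) = 14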

Stochastic optimisation methods, on the contrary, such as simulated annealing, evolution strategy and genetic algorithms, do not require derivative information. Any kind of design constraint can be implemented in a simple manner, by just rejecting a design that violates any constraint or by using penalty terms in combination with the objective function. These methods are capable of handling large-dimensional optimisation problems and are less sensitive to stochastic disturbances of the objective function value (Kasper 61). The major drawback of these methods is the large number of function evaluations required when compared to deterministic methods. At first sight, this has an even greater impact when FEM-based objective function evaluations are considered. The first combinations of the finite element technique and stochastic optimisation methods considered partial models only (Preis & Ziegler 89, Mohammad). Since then, a large variety of optimisation problems have been solved using the combination of stochastic methods and finite element function evaluation (Palko 85). Although the plain execution time of such optimisations is large when compared to deterministic approaches, the simplified set-up of the optimisation task and the ability to find the global optimum make such an overall optimisation procedure attractive.
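
A hedged sketch of the two constraint-handling options mentioned above (rejection and penalty terms) in a plain random search; the bounds, the penalty weight and the placeholder objective are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
lower, upper = np.array([-5.0, -5.0]), np.array([5.0, 5.0])   # assumed side constraints

def objective(x):
    return float(np.sum(x**2) + 10.0 * np.sum(np.cos(2.0 * x)))   # placeholder objective

def penalty(x):
    # illustrative penalty term, proportional to the amount of violation
    return 1e3 * float(np.sum(np.maximum(0.0, lower - x) + np.maximum(0.0, x - upper)))

best_x, best_f = None, np.inf
for _ in range(1000):
    x = rng.uniform(-6.0, 6.0, size=2)            # candidate design
    if np.any(x < lower) or np.any(x > upper):
        continue                                  # option 1: reject, never evaluated
    f = objective(x)                              # option 2 would use: objective(x) + penalty(x)
    if f < best_f:
        best_x, best_f = x, f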

The rather high computational expense of the FEM has always resulted in attempts to reduce the number of function evaluations by applying statistical methods to sample the search space efficiently. A variety of methods can be classified as indirect, as the optimisation algorithm is executed on an approximation of the real objective function. The combination of the Response Surface Methodology (RSM) and Design of Experiments offers a whole set of statistical tools not only to optimise a design, but also to evaluate the main and interactive effects of the design parameters (Box & Draper). Only a few applications of this method have been reported in electromagnetics in conjunction with FEM function evaluations (e.g. Brandiski et al. 20). A major drawback of these methods is that, due to the use of first or second order (global) polynomials, there is only a remote possibility of finding the global optimum in a search space with several local optima. This problem has recently been relaxed by the application of radial basis functions for the approximation of objective functions. The first applications, employing the so-called General Response Surface Method (GRSM), were introduced to the electromagnetics community by Alotto et al. 2, 3, 4. Experience has shown, however, that these methods are applicable to rather low-dimensional problems only, as their practical efficiency deteriorates with a high number of design variables. Other methods utilise the derivative information made available by the approximation based on radial basis functions, as reported in Suykens & Vandewalle 109. These methods increase the probability of finding the global optimum present in the approximation.
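
A hedged sketch of the surrogate idea behind such response-surface and radial-basis-function methods: a Gaussian RBF approximation is fitted to a handful of expensive objective evaluations, and the optimiser is then run on the cheap approximation instead of the FEM model. The kernel, its width and the data handling are assumptions for illustration.

import numpy as np

def rbf_fit(X, y, eps=1.0):
    # X: (n, d) sampled designs, y: (n,) expensive objective values
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    weights = np.linalg.solve(np.exp(-(eps * r) ** 2), y)
    return weights

def rbf_eval(x, X, weights, eps=1.0):
    # cheap approximation of the objective at a new design x
    r = np.linalg.norm(X - x, axis=-1)
    return np.exp(-(eps * r) ** 2) @ weights

# an optimiser (direct or stochastic search) is then run on rbf_eval, and only
# its promising candidates are re-evaluated with the expensive model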

To be able to choose the appropriate algorithm, various methods are discussed in this section. Based upon the most likely classification of the electromagnetic optimisation problems discussed in the previous topic, a pre-selection of optimisation algorithms has been derived: direct search algorithms are selected, none of which requires derivative information of the objective function.

Non-stochastic direct search algorithms

In non-stochastic direct search algorithms, the search direction (parameter variation) and the step lengths are determined by a predefined scheme rather than chosen in an optimal way. The advantage is that only the value of the objective function has to be available. No derivative information is required, nor does it need to be constructed from possibly erroneous objective function values.

Strategy of Hooke and Jeeves

The strategy of Hooke and Jeeves (Schwefel 100) is a direct pattern search method. This unconstrained optimisation method is characterised by a sequence of two kinds of moves:

• First, an exploratory move in each iteration, consisting of a sequence of single discrete steps per design variable (co-ordinate direction)

• This is followed by a pattern move, being an extrapolation towards an assumed favourable search direction (defined along the line from the initial design of the iteration to the best design encountered by the exploratory moves). An exact description of the extrapolation can be found in Schwefel 100.

Such an extrapolation does not necessarily lead to an improvement of the design. It is merely a guess, based on the pattern of the previously successful moves; this extrapolation gives the method its name: pattern search. The success of this last move is tested only after the following exploratory move. The parameter variation step length s_i of the exploratory moves is decreased in each iteration by multiplication with a step length adjustment factor α. Due to this sequential structure, a parallel implementation is impossible. The Hooke and Jeeves algorithm is characterised by the following properties:

• derivative free, unconstrained optimisation method

• dependence of global convergence on the choice of the starting step length s_i and the step length factor α.

The value of the objective function is required only for detecting the best exploratory move per iteration. A constrained optimisation task might be transformed into an unconstrained one, but this is not always possible. If only side constraints are defined in the optimisation task, the problem can be transformed into a "quasi-unconstrained" optimisation problem. This is achieved by setting the objective function to a very high value in case of violation of the constraints; such an infeasible design does not need to be evaluated at all. This should not be confused with the penalty method. In the penalty method, a violation-dependent value (determined by a function) is added after the design has been evaluated. If the violation of the constraints leads to a design that cannot be evaluated (e.g. an invalid geometry), the penalty method cannot be applied.
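
A hedged sketch of the Hooke and Jeeves scheme described above, including the "quasi-unconstrained" treatment of side constraints (a violating design receives a very large objective value and is never actually evaluated). The step length, adjustment factor and stopping tolerance are illustrative choices, not values from the source.

import numpy as np

def hooke_jeeves(f, x0, lower, upper, step=1.0, alpha=0.5, tol=1e-6):
    big = 1e30

    def value(x):
        if np.any(x < lower) or np.any(x > upper):
            return big                        # constraint violation: not evaluated
        return f(x)

    def explore(base, fbase, s):
        # exploratory move: single discrete steps per coordinate direction
        x, fx = base.copy(), fbase
        for i in range(len(x)):
            for trial in (x[i] + s, x[i] - s):
                cand = x.copy()
                cand[i] = trial
                fc = value(cand)
                if fc < fx:
                    x, fx = cand, fc
                    break
        return x, fx

    x, s = np.asarray(x0, dtype=float), step
    fx = value(x)
    while s > tol:
        xe, fe = explore(x, fx, s)
        if fe < fx:
            xp = xe + (xe - x)                # pattern move: extrapolate along the successful direction
            x, fx = xe, fe
            xe2, fe2 = explore(xp, value(xp), s)
            if fe2 < fx:                      # success is tested only after the next exploration
                x, fx = xe2, fe2
        else:
            s *= alpha                        # reduce the step length and try again
    return x, fx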

The Hooke and Jeeves algorithm is favoured over supposedly better algorithms such as Rosenbrock's algorithm (Schwefel 100) and the Nelder and Mead Simplex method for the following reasons:

• Nelder and Mead's Simplex method (an unconstrained method as well) requires the ranking of the evaluated designs in the n-dimensional simplex according to their objective function value. This ranking decides on the new search direction. The "quasi-unconstrained" mode described above must fail here, as all infeasible designs are valued equally.

• Rosenbrock's algorithm may include inequality constraints. However, the approach towards a constraint boundary must be detected beforehand, which is not always possible, especially in the case of nonlinear constraints.

A theoretical optimisation example

To illustrate the convergence of the selected optimisation methods, and also to allow a comparison of the different algorithms, a two-dimensional optimisation problem is chosen as described in Dappen 29 and Alotto et al. 4. This particular optimisation problem is defined by the objective function (7.5), subject to the constraints (7.6).

This function is chosen as an example as its objective function can be visualised (Fig. 7.4), including the search paths of the different algorithms. There are four local minima of approximately equal value. The global minimum is located at (-4.454, -4.454). Such an objective function resembles to a large extent typical objective functions found in engineering applications. The starting point of the visually presented optimisations is always (0,0). This point is located on the slope towards the local optimum O3 (Fig. 7.4). Any gradient-based method would converge to O3. The stochastic algorithms, however, are restarted from different starting points.


Fig. 7.4. Visualisation of the objective function surface of the optimisation problem defined in (7.5), (7.6) with the optimal solution at (-4.454, -4.454).

Fig. 7.5 illustrates a successful optimisation run, with an initial step size of 6.0 and a fixed step length factor α. The path connects the best trials per iteration.

Fig. 7.6 illustrates the problem with the Hooke and Jeeves algorithm: there is no guarantee of global convergence, as it depends entirely on a "good" choice of the strategy parameters and the starting point.


Fig. 7.5. Path of a successful (globally convergent) optimisation using the Hooke and Jeeves algorithm and convergence of the error.


Fig. 7.6. Influence of the choice of the initial step length on finding the global optimum: one choice of starting step length converges to the local optimum O2, another to the global optimum O1.

Stochastic direct search algorithms

Stochastic search algorithms have steadily gained interest over the past years due to the increase in available computing power. Various algorithms have been developed and applied to a wide variety of problems in different fields of science, technology and economics. Simulated annealing serves mainly as a basis for comparison, while the two evolution strategy algorithms are valuable optimisers in combination with finite element function evaluations.