# Methods (Electrical Machine) Part 4


### General response surface methodology

The inherent difficulty of the response surface methodology in providing "good"-fitting response surfaces has led to the introduction of the general response surface methodology (GRSM), capable of finding the global optimum of a multimodal design space (Alotto et al.3). The basic idea is the same as in the RSM: find a response surface function and apply an optimisation algorithm to find the optimum of this function. The new feature of the GRSM is the use of multiquadrics (a special form of radial basis functions) to achieve a response surface including multiple minima. Because multiple minima may be present, gradient-based optimisation algorithms are not suitable; Alotto proposes the use of simulated annealing instead.

GRSM uses approximations of the objective function at any point of the form:

$$\tilde f(\mathbf{x}) = \sum_{j=1}^{M} c_j\, g\!\left(\lVert \mathbf{x} - \mathbf{x}_j \rVert\right) \qquad (7.24)$$

with $c_j$ the approximation coefficients, M the number of experiments and g a radial basis function, here chosen to be the multiquadric:

$$g(r) = \sqrt{r^2 + s^2}$$

(In geometric terms, $r = \lVert \mathbf{x} - \mathbf{x}_j \rVert$ denotes the distance (radius) between the two points in the N-dimensional space defined by the vectors x and x_j. This is the origin of the term radial basis functions.) The shift factor s is a parameter defining the curvature of the approximated N-dimensional surface. Alotto presents a statistical method (bootstrapping; Alotto et al.) to define s in a near-optimal way. However, experiments have shown that choosing s smaller than the average spacing of the sample points is sufficient for most applications.
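This heuristic for s can be sketched in a few lines of Python. Note that "average spacing" is interpreted here as the mean nearest-neighbour distance of the sample points; the helper name is an illustration, not the book's code:

```python
import numpy as np

def average_spacing(X):
    """Mean nearest-neighbour distance of the sample points X (M x N).

    The text suggests choosing the shift factor s smaller than this value.
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    return d.min(axis=1).mean()

# four corners of the unit square: every nearest neighbour is 1 away
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
s = 0.5 * average_spacing(X)             # a conservative choice of s
```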

If one substitutes the interpolation condition $\tilde f(\mathbf{x}_i) = f(\mathbf{x}_i)$, $i = 1, \ldots, M$, in (7.24), the matrix equation for the unique coefficients $c_j$ is obtained:

$$\mathbf{H}\,\mathbf{c} = \mathbf{f} \qquad (7.26)$$

with the coefficients of the matrix given by $H_{ij} = g(\lVert \mathbf{x}_i - \mathbf{x}_j \rVert)$. The matrix H is a full matrix whose diagonal elements all equal $g(0) = s$, and are therefore small for a small shift factor. As long as the number of sample points is relatively small (up to a few hundred points) and singularities due to duplicate points are avoided, this system can be solved by any pivoting method. The approximation function fits all sample points exactly. This is in contrast to the response surface methodology based on second-order polynomials, in which the least-squares difference is minimised. However, the GRSM does not give any information on the main and interaction effects as the RSM does, because the coefficients $c_j$ are no longer related to a particular design parameter.
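The construction and solution of this linear system can be sketched as follows (a hypothetical helper, assuming the standard multiquadric $g(r) = \sqrt{r^2 + s^2}$; the sample points and values are made up for illustration):

```python
import numpy as np

def multiquadric_interpolant(X, f, s):
    """Fit a multiquadric RBF surface through all samples.

    X : (M, N) array of M sample points in N dimensions
    f : (M,) array of objective values at the samples
    s : shift factor controlling the curvature of the surface
    Returns a callable approximation f_tilde(x).
    """
    # H[i, j] = g(||x_i - x_j||) with g(r) = sqrt(r**2 + s**2)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    H = np.sqrt(d**2 + s**2)
    c = np.linalg.solve(H, f)        # unique coefficients; distinct points assumed

    def f_tilde(x):
        r = np.linalg.norm(X - x, axis=-1)
        return float(np.sum(c * np.sqrt(r**2 + s**2)))
    return f_tilde

# toy usage: the interpolant reproduces the sample values exactly
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
f = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
f_hat = multiquadric_interpolant(X, f, s=0.4)
```

In contrast to a least-squares polynomial fit, the surface passes exactly through every sample, which is the interpolation property discussed above.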

The attraction of the classical RSM is based mainly on the statistical methods that provide interpretable fitted first- or second-order polynomials. The experimental designs developed for this purpose provide the tools to construct these curve fits efficiently for a well-defined polynomial order. The curve fit based on radial basis functions does not have such a well-defined global polynomial order. Alotto et al.4 propose to combine the multiquadrics approximation with the theory of design of experiments with the goal of reducing function calls in comparison to the RSM. This can be successful only if higher-level designs are used, resulting in a larger number of experiments. The same number of experiments as designed for the RSM will not deliver much more information with the GRSM (Fig. 7.17).

The advantage of the multiquadrics approximation (on a multiminima objective function) arises with higher factorial designs (Fig. 7.18) or by accumulating the sample points in successive zooming steps. More detail of the original curve is present in the approximation, though at the expense of significantly more sample points. Such a resolution cannot be achieved by a global second-order polynomial approximation.

Fig. 7.17. Multiquadrics response surface based on a full 3²-factorial; a) shows a contour plot of the response surface and indicates the positions where samples are taken, while b) visualises the agreement between response surface and original curve. The result is similar to Fig. 7.16.

Comparing RSM and GRSM leads to the conclusion that the advantage of GRSM is the more accurate approximation that can be achieved when accumulating the sample information over the course of the optimisation or by higher level factorial designs. Additionally, the approximated surface passes through all sample points taken. The accuracy of the fit depends on the number of sample points if no prior information about the curvature of the design space is available. A global optimum might still be missed if the sample grid is chosen unfavourably or the successive zooming steps narrow the search region too fast. Furthermore, the statistical methods introduced by the design of experiments method, to evaluate main and interaction effects, are mostly not applicable to radial basis functions.

Fig. 7.18. Multiquadric approximation based on a full 5²-factorial (25 samples); a) shows a contour plot of the response surface and indicates the positions at which samples are taken, while b) visualises the agreement between response surface and original curve.

### Adaptive coupling of evolution strategy and multiquadrics approximation

Arriving at this point, the question arises whether the interesting features of the presented methods can be combined. One of the most significant features of evolution strategies is their ability to converge quickly to the region of interest; the fine tuning of the design, on the other hand, requires a high number of evaluations of the objective function. The presented approximation method based on radial basis functions, in contrast, is able to approximate multiminima functions with high accuracy. The accuracy of the fit is not determined by a predefined spacing of the sample grid as in the RSM, but rather by sampling "in the right place".

Here, a new approach is introduced that could be characterised as an adaptive coupling of evolution strategy and multiquadrics approximation. The idea is to combine the above-mentioned features of both methods to reduce the overall optimisation time when using computationally expensive function calls (e.g. FEM analysis). The basic steps of this method are:

step 1: Start an evolution strategy optimisation using the objective function (direct search iteration).

step 2: Construct a multiquadrics approximation f̃(x) after each iteration, including only points within a defined radius from the active optimum. This radius is a function of the active step length, ensuring an automatic contraction of the active approximation space.

step 3: Compute the next evolutionary iteration using the objective function, but determine the predicted value from the approximation f̃(x) as well.

step 4: Record each point within a maximal radius k·δ from the active optimum and construct an updated approximation f̃(x). If more than a defined ratio of predicted experiments is accepted during an iteration, go to step 5; otherwise return to step 3.

step 5: Start a new evolutionary iteration, but now using the approximation f̃(x) only (indirect search iteration).

step 6: Depending on the acceptance ratio determined in step 4, continue an adaptive number of evolutionary iterations using the approximation f̃(x) only.

step 7: Stop if the stopping criterion is fulfilled; if not, return to step 3 and start sampling on the real objective function again.
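The steps above can be sketched as a toy implementation. Everything here is a hypothetical illustration, not the author's code: the test function stands in for an expensive FEM call, a minimal DE variant replaces the full strategy of the text, and the contraction factor k, the number of indirect iterations and the 80% switching threshold are assumed constants:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    """Toy multiminima test function standing in for an expensive FEM call."""
    return float(np.sum(x**2) + np.sin(5.0 * x[0]))

def surrogate(X, f, s):
    """Multiquadric fit through all samples, cf. eqns. (7.24) and (7.26)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    c = np.linalg.solve(np.sqrt(d**2 + s**2), f)
    return lambda x: float(c @ np.sqrt(np.linalg.norm(X - x, axis=-1)**2 + s**2))

def de_step(pop, fit, evaluate, F=0.5, CR=0.9):
    """One greedy iteration of a minimal DE variant."""
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.where(rng.random(pop.shape[1]) < CR, a + F * (b - c), pop[i])
        ft = evaluate(trial)
        if ft < fit[i]:                       # greedy selection
            pop[i], fit[i] = trial, ft
    return pop, fit

dim, lam, k = 2, 10, 10.0                     # k: assumed contraction factor
pop = rng.uniform(-2.0, 2.0, (lam, dim))
fit = np.array([objective(x) for x in pop])
init_best = fit.min()
pool_X, pool_f = list(pop.copy()), list(fit)  # pool of recorded experiments

for it in range(20):
    # steps 1/3: direct search iteration on the true objective
    pop, fit = de_step(pop, fit, objective)
    pool_X += list(pop.copy()); pool_f += list(fit)

    # steps 2/4: fit the surrogate on points near the active optimum
    best = pop[np.argmin(fit)]
    delta = float(np.mean(np.linalg.norm(pop - best, axis=1))) + 1e-9
    X, fv = np.array(pool_X), np.array(pool_f)
    near = np.linalg.norm(X - best, axis=1) < k * delta
    # drop duplicates so the interpolation matrix stays non-singular
    Xn, idx = np.unique(np.round(X[near], 4), axis=0, return_index=True)
    f_hat = surrogate(Xn, fv[near][idx], s=delta)

    # step 4: acceptance test -- surrogate error vs. 10% of the variance
    err = np.array([abs(f_hat(x) - fx) for x, fx in zip(pop, fit)])
    if np.mean(err < 0.1 * np.var(fit)) > 0.8:
        # steps 5/6: indirect search iterations on the surrogate only
        for _ in range(2):
            pop, fit = de_step(pop, fit, f_hat)
        # "mortal" members: re-evaluate on the true objective before continuing
        fit = np.array([objective(x) for x in pop])
        pool_X += list(pop.copy()); pool_f += list(fit)

best_value = min(pool_f)
```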

This algorithm can be applied to any basic evolution strategy. Here, it has been applied to the differential evolution strategy (DE). There are three levels of adaptivity determining the algorithm:

1. The contraction or zooming of the approximated region is adaptive to the progress of the optimisation by considering a search space with a maximum radius of k·δ. The factor k is empirically chosen as:

with a the step length factor of the evolution strategy. For the DE-strategy, this choice ensures that the approximation region is wide enough for any progress during the indirect iterations (Fig. 7.21). If the step length decreases rapidly, a smaller approximation region is permitted. If a is larger, the probability that the evolution strategy reaches the boundary of the approximation region in the case of a bad fit is higher; therefore the chosen region must be large enough. The reason for this special measure is the poor approximation quality near the boundary of the active region. The sample points are determined by the evolution strategy instead of by a regular grid as in the RSM or GRSM. This yields a higher accuracy of the approximation towards the centre of the search space when using the evolution strategy. At the same time, the shift factor s for the multiquadrics approximation, eqn.(7.24), is defined by the average step length δ of the last iteration.

2. The acceptance of the approximation is determined by the variance of the objective function values of the iteration underlying the active approximation. Accepting predicted experiments with an error of less than 10% of this variance has given good results. The error is computed by:

$$e = \left|\,\tilde f(\mathbf{x}) - f(\mathbf{x})\,\right|$$

with the trial being accepted based on the variance of the objective function during the last iteration:

$$e < 0.1\,\operatorname{var}(f)$$
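In code, the criterion reads as follows (a sketch; the 10% threshold is the value quoted above, and the helper name is an illustration):

```python
import numpy as np

def accept(predicted, actual, last_iteration_values, tol=0.10):
    """Accept a predicted experiment if its error is below `tol` times the
    variance of the objective values of the last iteration."""
    return abs(predicted - actual) < tol * np.var(last_iteration_values)

# variance of [0, 2, 4] is 8/3, so errors below ~0.267 are accepted
assert accept(1.05, 1.00, [0.0, 2.0, 4.0])
assert not accept(2.00, 1.00, [0.0, 2.0, 4.0])
```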

3. The number of indirect search iterations depends only on the acceptance ratio achieved with the active approximation. A higher acceptance ratio allows a larger number of iterations on the approximation. Tests indicate that a non-linear dependency between the acceptance ratio and the number of indirect search iterations n_h is required to ensure a high accuracy of the optimisation throughout the approximation period:

where the acceptance ratio is given by the number of accepted trials per iteration divided by the population size λ.

Due to the choice of the DE-strategy, particular attention has to be paid to avoiding singularity of the matrix H when solving the approximation equation (7.26). Such a singularity is caused by duplicate points in the pool of recorded experiments. In DE this is possible if the crossover probability is high: in that case an identical replicate of a previous sample can be generated for the new population. Such points are not taken into the sample pool used to construct the approximation. The contraction of the approximation region is necessary primarily to reduce the number of unknowns in the matrix equation (7.26).
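A pool update that discards duplicates might look like this (a hypothetical helper; the tolerance `eps` is an assumption, since the text does not specify how equality of points is tested):

```python
import numpy as np

def add_sample(pool_X, pool_f, x, fx, eps=1e-12):
    """Append (x, fx) to the sample pool unless x duplicates an existing
    point, which would make the matrix H in eqn. (7.26) singular."""
    if len(pool_X) and np.any(np.all(np.abs(pool_X - x) < eps, axis=1)):
        return pool_X, pool_f                # duplicate -> discard
    return np.vstack([pool_X, x]), np.append(pool_f, fx)

pool_X = np.array([[0.0, 0.0]])
pool_f = np.array([1.0])
pool_X, pool_f = add_sample(pool_X, pool_f, np.array([0.0, 0.0]), 1.0)  # rejected
pool_X, pool_f = add_sample(pool_X, pool_f, np.array([1.0, 0.0]), 2.0)  # accepted
```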

A further problem, inherent to all evolution strategies, appears when returning from an approximation step: selection and mutation generate new population members entirely based on members originating from an approximated function. Their approximated function values are erroneous, possibly even lower than the global minimum of the objective function. The greedy selection criterion inherent to all evolutionary algorithms could cause these approximated results to survive. To prevent this wrong development, all population members of the last approximated iterations must be mortal, i.e. not allowed to survive the first iteration using the objective function evaluation. This causes the new method to require slightly more steps than the classical DE alone would need to converge to the optimum. However, a large percentage of these steps is taken on the approximated function.

The performance of the method is demonstrated on the test example in eqns.(7.5) and (7.6). Using the same DE-strategy settings as in section 7.3.2.3 (Fig. 7.13), only 60 objective function evaluations are required, whereas 80 function calls are performed on the approximated function. In Fig. 7.19, the convergence history of the error is shown.

Fig. 7.19. Convergence of the error using the new coupled method based on a (6/10,0.5,0.9)-DE-strategy.

In Fig. 7.20 to Fig. 7.21 the approximation surfaces during the progress of the optimisation are shown. Only 5 multiquadric approximations are calculated. After three iterations (2, 3 and 4) the approximation is accurate enough to accept 80% of the predicted function values. Four iterations using this approximation follow; during this period the approximations are naturally not updated. These are followed by two direct search iterations, 10 and 11. After these two steps, the updated approximation is accurate enough to return to the approximation again. It must be noted that the probability of finding the global optimum depends entirely on the basic strategy parameters of the evolution strategy. However, choosing the population size larger than when using the classical DE-strategy offers the advantage of finding a better approximation with fewer direct, and computationally expensive, iterations.

Fig. 7.20. Updated approximations during the initial direct search iteration a) after 2 and b) after 4 iterations of differential evolution. The approximations span approximately 10 to 20 times this active search region.

Fig. 7.21. Updated approximations during the final two direct search iterations, a) after 10, and b) after 11 iterations of the differential evolution algorithm.

The reason for choosing the approximated region considerably larger than the active search region is visible in Fig. 7.21. The approximation accuracy outside the active region is poor. If the evolution strategy reached this region during the indirect sampling, the optimisation might fail. Taking the approximation region wider than the active sampling region of the evolution strategy, as proposed in eq.(7.27), is essential and effectively prevents failure of the method.

A remarkable feature of this proposed algorithm is that the efficiency depends on the curvature of the objective function. Tests with simple second-order surfaces have shown reductions of objective function calls of up to 80%.

It must be mentioned at this point that this algorithm performs well for low dimensional optimisation problems (tested up to 5 on the above objective function and other test functions). Further studies are required to improve and extend the algorithm, especially the adaptation criteria, to provide equal performance for higher dimensional multimodal problems.