# Adaptive refinement for 2D triangular meshes (Electrical Machine) Part 1

To obtain a high accuracy of the approximated solution using the FEM, the number of elements has to be high. Uniform refinement, however, places many finite elements in regions where they are not required. The idea of automated adaptive mesh refinement is to refine the discretisation locally over several iteration steps, starting from a minimum mesh and terminating with a quality mesh.

The difficulty in this approach is to know where more elements are required and where fewer elements suffice to obtain the desired global accuracy. Here, error estimation plays an important role.

To enhance the quality of a finite element discretisation, different strategies can be followed. Errors in the solution can be identified at different stages of the field analysis:

• a priori, before the field is computed

• a posteriori using the field solution to estimate the error.

Adaptive mesh refinement is often a combination of a-priori and a-posteriori error estimation with a refinement algorithm. Error estimators indicate which elements to refine. Based on a-priori mesh quality indicators, operations such as node movement or edge swapping are performed. Mesh refinement based on an error estimator improves the convergence when compared to a uniform refinement of all elements. The slope of the global error in Fig. 5.48 is an indication for the convergence rate.

$$\eta = \frac{\left| W(N) - W_\infty \right|}{W_\infty}$$

with $W(N)$ the energy stored in the model, $N$ the number of nodes and $W_\infty$ the energy stored in the model for the exact solution. In practice, $W_\infty$ is calculated with a sufficiently high number of nodes.
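As an illustrative sketch (Python; the function and variable names are my own, not from the text), the global error and its decrease over successive refinement steps could be evaluated as:

```python
def global_energy_error(W_n, W_ref):
    """Relative deviation of the stored energy W_n (current mesh) from the
    reference energy W_ref, obtained with a sufficiently fine mesh."""
    return abs(W_n - W_ref) / W_ref

# hypothetical energies from successively refined meshes, W_ref = 2.0
energies = [1.6, 1.8, 1.9, 1.95, 1.99]
errors = [global_energy_error(W, 2.0) for W in energies]
```

Plotting `errors` against the number of nodes on a log-log scale gives the slope discussed above.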

Fig. 5.48. Global error versus the number of nodes.

The proper combination of the various error estimators with the appropriate element refinement algorithm ensures acceptable computational efforts and efficient use of the available memory resources. In the following sections, some notations are introduced to explain the described refinement techniques.

## Type of element refinement

Two different types of isotropic refinements are considered here. It is assumed that the problem solution is unknown and thus no field information is available. Therefore, an anisotropic mesh refinement is not discussed here.

Red I, the first method of refinement, is element-based. Here, a new node is inserted in the centre of an element (Fig. 5.49).

Fig. 5.49. Element and edge-based refinement.

For an element-based refinement, the number of new elements per refined element and the total number of new elements per adaptation step is low because the refinement has no influence on the neighbouring elements. The initial aspect ratio of the new elements is worse than that of the original element.
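The element-based (Red I) splitting can be sketched as follows (illustrative Python; the representation of a triangle as three vertex tuples is my own):

```python
def refine_element(tri):
    """Element-based (Red I) refinement: a new node is inserted in the
    centre of the element and the triangle is split into three children.
    Neighbouring elements are not affected."""
    a, b, c = tri
    m = ((a[0] + b[0] + c[0]) / 3.0, (a[1] + b[1] + c[1]) / 3.0)
    return [(a, b, m), (b, c, m), (c, a, m)]

def area(tri):
    """Unsigned area of a triangle given as three (x, y) tuples."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    return 0.5 * abs((x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1))
```

The three children cover the parent exactly, so their areas sum to the parent area.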

The boundaries of the studied domain must be treated in a special way using this strategy (Fig. 5.50).

Fig. 5.50. Element-based refinement at boundaries.

The second type is edge-based refinement, i.e. a number of new nodes is inserted along the edges of an element. Using an edge-based refinement approach, the number of new elements per refined element is high and neighbouring elements are influenced: adjacent elements must be refined with Green I and II as well. Three different types of edge refinement can be distinguished, indicated by the number of marked edges (Fig. 5.49).

No special treatment of boundaries is required here. The initial aspect ratio of the new elements is equal to the aspect ratio of the original element.
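For the case with all three edges marked, inserting the edge midpoints splits the triangle into four congruent children, which is why the aspect ratio is preserved. A minimal sketch (illustrative Python):

```python
def refine_edges(tri):
    """Edge-based refinement with all three edges marked: a new node at
    each edge midpoint splits the triangle into four similar children,
    each with the same aspect ratio as the parent."""
    a, b, c = tri
    mid = lambda p, q: ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    mab, mbc, mca = mid(a, b), mid(b, c), mid(c, a)
    # three corner children plus the inner (inverted) child
    return [(a, mab, mca), (mab, b, mbc), (mca, mbc, c), (mab, mbc, mca)]
```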

## A priori error estimation

This type of error estimation is performed before a numerical computation to enhance the quality of the FEM mesh. Because no solution is known yet, geometrical properties of the mesh, such as the shape or the angles of a finite element, are used to estimate the quality of the discretisation. Such a priori error estimation does not permit a general statement about the accuracy of the field solution, because the overall accuracy of the field approximation also depends on an adequate discretisation of the geometry in regions with large changes of the field quantities. Therefore, an a posteriori error estimator is necessary to estimate the relative error of the field solution. Following a particular meshing strategy (minimum or quality mesh), various actions can be taken to minimise the numerical approximation error of the overall field solution before starting the field computation itself.

Fig. 5.51. Geometrical definitions for a standard triangular finite element.

Some general rules to generate a uniformly distributed discretisation with triangular elements can be given. Using standard triangular finite elements, the quadratic deviation of the angles $\alpha_i$ of the triangle from the standard angle $\pi/3$ should tend to a minimum:

$$\sum_{i=1}^{3} \left( \alpha_i - \frac{\pi}{3} \right)^2 \rightarrow \min$$

The ratio of largest and smallest angle in the mesh should tend to unity:

$$\frac{\alpha_{\max}}{\alpha_{\min}} \rightarrow 1$$

The ratio of maximum to minimum side length of a triangle should tend to unity:

$$\frac{l_{\max}}{l_{\min}} \rightarrow 1$$

Further details can be found in (Babuska & Aziz).
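The three measures above can be computed directly from the vertex co-ordinates of a triangle. A sketch (illustrative Python; the helper name is my own):

```python
import math

def triangle_quality(tri):
    """A-priori quality measures of a single triangle:
    - quadratic deviation of the three angles from pi/3 (-> minimum)
    - ratio of largest to smallest angle (-> 1)
    - ratio of longest to shortest side (-> 1)"""
    p1, p2, p3 = tri
    a = math.hypot(p2[0] - p3[0], p2[1] - p3[1])   # side opposite p1
    b = math.hypot(p1[0] - p3[0], p1[1] - p3[1])   # side opposite p2
    c = math.hypot(p1[0] - p2[0], p1[1] - p2[1])   # side opposite p3
    # interior angles via the law of cosines
    alpha = math.acos((b * b + c * c - a * a) / (2.0 * b * c))
    beta = math.acos((a * a + c * c - b * b) / (2.0 * a * c))
    gamma = math.pi - alpha - beta
    angles = (alpha, beta, gamma)
    deviation = sum((ang - math.pi / 3.0) ** 2 for ang in angles)
    return deviation, max(angles) / min(angles), max(a, b, c) / min(a, b, c)
```

An equilateral triangle yields zero deviation and both ratios equal to one.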

**Aspect ratio.** The aspect ratio $\sigma$ of a triangular element is calculated as the ratio of the radius $R$ of the circumscribed circle to twice the radius $r$ of the inscribed circle (Fig. 5.52). An equilateral triangle has an aspect ratio of one. Therefore, within the discretisation, the ratio of outer to twice the inner radius of a triangle should tend to unity to generate a uniform mesh:

$$\sigma = \frac{R}{2r} = \frac{abc}{8(s-a)(s-b)(s-c)} \rightarrow 1$$

with $a$, $b$ and $c$ the lengths of the three edges and $s = (a+b+c)/2$ the semi-perimeter of the triangle.

Fig. 5.52. Inscribed and circumscribed circle and edge definition of a triangle.
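Using $R = abc/(4A)$ and $r = A/s$ with Heron's formula $A^2 = s(s-a)(s-b)(s-c)$, the aspect ratio reduces to the closed form above, which is cheap to evaluate per element. A sketch (illustrative Python):

```python
def aspect_ratio(a, b, c):
    """Aspect ratio sigma = R / (2r) of a triangle with edge lengths
    a, b, c.  With R = abc / (4A) and r = A / s, where A is the area and
    s the semi-perimeter, sigma reduces to abc / (8 (s-a)(s-b)(s-c))."""
    s = (a + b + c) / 2.0
    return a * b * c / (8.0 * (s - a) * (s - b) * (s - c))
```

An equilateral triangle gives exactly one; a 3-4-5 right triangle gives 1.25.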

### Delaunay triangulation

The quality of a generated mesh is characterised by

• the size of the elements referred to the outer dimensions of the area of interest

• an average element aspect ratio close to one

• a low worst element aspect ratio

• the ability to restore the original geometry.

A Delaunay triangulation is a first step to a high quality mesh. Delaunay triangulation is defined in the following way:

For each pair of two adjacent triangles, the minimum of the six angles in the two triangles is larger than it would have been if the diagonal of the quadrilateral had been swapped.

In practice this definition means avoiding small angles and thus long, thin elements with a large aspect ratio. Swapping changes only the diagonal; the outline of the quadrilateral formed by the two triangles remains unchanged.

Fig. 5.53. Swapping the diagonals of a quadrilateral, formed by two adjacent triangles.

Each pair of adjacent triangles holding the same region label is tested. The circumscribed circle of one of the triangles is determined from the co-ordinates of its nodes, and the diagonal is swapped if the fourth node lies inside this circle (Fig. 5.53). In practice, the circumscribed circle is never constructed explicitly, because this is too time consuming.

Fig. 5.54. Test for a Delaunay triangulation.
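A common way to carry out this test without constructing the circle is the in-circle determinant, which only needs the four node co-ordinates. A sketch (illustrative Python; the triangle is assumed to be ordered counter-clockwise):

```python
def in_circumcircle(a, b, c, p):
    """True if point p lies strictly inside the circumscribed circle of
    the counter-clockwise triangle (a, b, c).  Standard in-circle
    determinant; the circle itself is never computed."""
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0.0
```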

### Cline and Renka test

If node b lies in the circumscribed circle of the triangle formed by the nodes a, c and d, then, with $\alpha$ the angle at node b and $\beta$ the angle at node d, this yields according to Fig. 5.55:

$$\alpha + \beta > \pi, \qquad \text{i.e.} \qquad \sin(\alpha + \beta) < 0$$

Fig. 5.55. Cline and Renka test.

Round-off errors may cause problems if $\sin(\alpha + \beta)$ is close to zero.

This happens when:

• $\alpha + \beta$ is near $\pi$

• $\alpha$ and $\beta$ are both near $0$

• $\alpha$ and $\beta$ are both near $\pi$.

Node b lies almost on the circumscribed circle in the first case and swapping the diagonal has no bad effects. Swapping the diagonal may result in a wrong triangulation in the last two cases. Therefore, additional tests are included in the algorithm.

Table 5.6. Cline and Renka algorithm.
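The test with these safeguards can be sketched as follows (illustrative Python; unnormalised dot and cross products are used, since only the sign of $\sin(\alpha+\beta)$ matters). The quadrilateral (a, b, c, d) has the diagonal a-c, $\alpha$ is the angle at node b and $\beta$ the angle at node d:

```python
def swap_test(a, b, c, d, eps=1e-12):
    """Cline and Renka test: return True if the diagonal a-c of the
    quadrilateral (a, b, c, d) should be swapped to b-d, i.e. if
    alpha + beta > pi (equivalently sin(alpha + beta) < 0)."""
    # unnormalised cosines of the angles at b and d
    cos_a = (a[0] - b[0]) * (c[0] - b[0]) + (a[1] - b[1]) * (c[1] - b[1])
    cos_b = (a[0] - d[0]) * (c[0] - d[0]) + (a[1] - d[1]) * (c[1] - d[1])
    if cos_a >= 0.0 and cos_b >= 0.0:   # both angles <= pi/2: no swap
        return False
    if cos_a < 0.0 and cos_b < 0.0:     # both angles > pi/2: swap
        return True
    # mixed case: evaluate sin(alpha + beta) via unnormalised sines
    sin_a = abs((a[0] - b[0]) * (c[1] - b[1]) - (a[1] - b[1]) * (c[0] - b[0]))
    sin_b = abs((a[0] - d[0]) * (c[1] - d[1]) - (a[1] - d[1]) * (c[0] - d[0]))
    sin_ab = sin_a * cos_b + cos_a * sin_b   # ~ sin(alpha + beta)
    return sin_ab < -eps                     # tolerance against round-off
```

The two early exits are exactly the extra tests that avoid evaluating the near-zero expression.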

### Lawson’s Delaunay algorithm

This swapping algorithm can be used for an element-based refinement strategy. Fig. 5.56 shows the initial triangulation with 11 nodes and 12 elements. When a new node is inserted, the existing triangulation is updated to a new Delaunay triangulation. In practice this means that the new nodes are inserted on a node-by-node basis, with a number of tests performed after each insertion.

In the Delaunay algorithm of Lawson, all the triangles adjacent to the edges opposite the new node are placed on a LIFO stack (last-in, first-out). A maximum of three elements is placed on the stack in the first step. The elements are taken one by one from the stack and a test is made as to whether the new node lies inside the circumscribed circle. If it does, the diagonal of the quadrilateral is swapped to generate two better-shaped elements. The triangles that are now adjacent to the edges opposite the new node are added to the stack.

Fig. 5.56. Initial triangulation.

Fig. 5.57 illustrates Lawson’s algorithm for the initial triangulation. According to an a posteriori error estimator, element number 6 is chosen to be refined. The new node receives the number 11 and is inserted in the centre of the element (Fig. 5.57a). Fig. 5.57b shows the three new elements. One of the three new elements gets the original element number and the other two get the numbers 12 and 13. For each inserted node the number of elements increases by two. The three elements adjacent to the edges opposite the new node are put on the stack (Fig. 5.57c).

The element that is put last on the stack is taken first from the stack and tested as to whether the new node lies inside the circumscribed circle of the element (Fig. 5.57d). If so, the diagonal of the quadrilateral is swapped (Fig. 5.57e). Because there are no elements that are now adjacent to the edges opposite the new node, no elements are added to the stack. The next element is taken from the stack and is tested for whether the new node lies inside the circumscribed circle (Fig. 5.57f). The diagonal of the quadrilateral is swapped (Fig. 5.57g). Two elements are now adjacent to the edges opposite the new node and are added to the stack (Fig. 5.57h). Three elements are now on the stack.

The next element is taken from the stack and a test is performed (Fig. 5.57i). No swapping is necessary and the same applies for the next element on the stack (Fig. 5.57j). The last element from the stack is tested (Fig. 5.57k). The new node does not lie inside the circumscribed circle of this last element. Fig. 5.57l shows the resulting Delaunay triangulation.

Fig. 5.57. Lawson’s Delaunay algorithm.
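Assuming a triangulation stored as a point list plus vertex-index triples, one insertion step of Lawson's algorithm can be sketched as follows (illustrative Python; the adjacency search is deliberately naive, and the in-circle test is made orientation-independent):

```python
def orient(a, b, c):
    # twice the signed area of the triangle (a, b, c)
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_circle(a, b, c, p):
    # orientation-independent test: p inside circumcircle of (a, b, c)
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det * orient(a, b, c) > 0.0

def insert_node(pts, tris, t_idx, p):
    """Insert p in triangle t_idx, split it into three elements and
    restore the Delaunay property with Lawson's swapping (LIFO stack)."""
    i, j, k = tris[t_idx]
    pts.append(p)
    n = len(pts) - 1                        # number of the new node
    tris[t_idx] = [i, j, n]                 # one child keeps the old number
    tris.extend([[j, k, n], [k, i, n]])     # two new element numbers
    stack = [(i, j), (j, k), (k, i)]        # edges opposite the new node
    while stack:
        u, v = stack.pop()                  # last in, first out
        # triangle on the far side of edge (u, v); None on the boundary
        outer = next((t for t in tris
                      if u in t and v in t and n not in t), None)
        if outer is None:
            continue
        w = next(x for x in outer if x not in (u, v))
        if in_circle(pts[u], pts[v], pts[w], pts[n]):
            inner = next(t for t in tris if u in t and v in t and n in t)
            inner[:] = [u, w, n]            # swap diagonal u-v for w-n
            outer[:] = [w, v, n]
            stack.extend([(u, w), (w, v)])  # new opposite edges
    return n
```

Inserting one node increases the number of elements by two, as described above; the swaps then propagate outwards only as far as the in-circle test demands.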

## Reconstruction of the original geometry

Another characteristic of a refinement algorithm to obtain high quality meshes is the ability to restore the original geometry when elements are refined. For each edge of an element on an outline, the primitive (arc, circle or line) to which it belongs is stored. If an edge on an arc or circle is refined, the new node is not inserted in the middle of the straight edge; instead, it is inserted in the middle in terms of polar co-ordinates (Fig. 5.58).

Fig. 5.58. Restoring the geometry during refinement.
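The polar midpoint of an arc edge can be sketched as follows (illustrative Python; the arc is assumed to be given by its centre and the two edge nodes):

```python
import math

def arc_midpoint(centre, p1, p2):
    """New node on an arc: the point halfway between p1 and p2 in polar
    co-ordinates around the arc centre, so the refined edge stays on the
    original circle instead of on the chord."""
    r1 = math.hypot(p1[0] - centre[0], p1[1] - centre[1])
    r2 = math.hypot(p2[0] - centre[0], p2[1] - centre[1])
    t1 = math.atan2(p1[1] - centre[1], p1[0] - centre[0])
    t2 = math.atan2(p2[1] - centre[1], p2[0] - centre[0])
    dt = math.remainder(t2 - t1, math.tau)  # shorter angular span
    r = 0.5 * (r1 + r2)
    t = t1 + 0.5 * dt
    return (centre[0] + r * math.cos(t), centre[1] + r * math.sin(t))
```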

## Moving nodes

Moving the nodes results in a lower average aspect ratio. A node is moved to the centre of the surrounding nodes (Fig. 5.59). A test is performed so that no elements with a negative area are introduced.

Fig. 5.59. Moving the nodes.

An edge-based refinement is easily extended with the moving-node approach. A node is moved to the centre of the surrounding nodes and all pairs of adjacent surrounding elements are tested. If swapping of a diagonal was necessary, the node is moved again to the new centre of the surrounding nodes. The loop is repeated until the number of swaps is sufficiently low. Fig. 5.61a shows the triangulation before moving the nodes, Fig. 5.61b after one loop and Fig. 5.61c after two loops of moving and swapping. No diagonals were swapped after the two loops.

Fig. 5.60. Extended test algorithm.

Fig. 5.61. Triangulation after moving the nodes.
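A single node movement with the negative-area safeguard can be sketched as follows (illustrative Python; counter-clockwise element orientation is assumed, so a non-positive signed area indicates an inverted element):

```python
def move_node(pts, node, neighbours, tris):
    """Move one node to the centre of its surrounding nodes, but reject
    the move if any adjacent element would get a non-positive area."""
    old = pts[node]
    cx = sum(pts[m][0] for m in neighbours) / len(neighbours)
    cy = sum(pts[m][1] for m in neighbours) / len(neighbours)
    pts[node] = (cx, cy)

    def signed_area(t):
        (x1, y1), (x2, y2), (x3, y3) = (pts[i] for i in t)
        return 0.5 * ((x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1))

    if any(node in t and signed_area(t) <= 0.0 for t in tris):
        pts[node] = old                 # undo: an element would flip
        return False
    return True
```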

## Starting solution for the next adaptation step

An iterative solver for the system of linear equations needs a starting solution. The first starting solution is often the zero solution. The starting solution of the next adaptation step is determined by the interpolated solution of the previous step, taken before the movement of the nodes. When an edge is refined, the average of the solution at the two nodes forming the edge is taken as the solution for the new node. A starting solution obtained in this way can reduce the number of iterative steps by 25 %. For strongly saturated problems, a reduction of one or more Newton steps is possible.
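The interpolation step can be sketched as follows (illustrative Python; a scalar degree of freedom per node and one new node per refined edge are assumed):

```python
def start_solution(old_sol, new_edges):
    """Starting solution for the next adaptation step: nodes of the
    previous mesh keep their solution; each new node on a refined edge
    gets the average of the two edge nodes (values taken before node
    movement)."""
    sol = list(old_sol)
    for (i, j) in new_edges:           # one new node per refined edge
        sol.append(0.5 * (sol[i] + sol[j]))
    return sol
```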