Many would argue that your CFD solution is only as good as the mesh behind it. Many aspects of the mesh contribute decisively to simulation accuracy, among them the type of physics models simulated, the details of the particular solution, the chosen discretization scheme, and geometric mesh properties having to do with cell growth, smoothness, proximity and curvature attributes, stretching, featured angles, and so on.
Before delving into the domain of mesh quality and classifying a mesh as “good” or “bad”, we need a concise and well-posed benchmark (or several of them) against which a specific mesh can be compared, so that we are able to place a particular mesh on a quality spectrum.
The general consensus is that a good quadrilateral mesh is formed by two families of orthogonal, or at least nearly orthogonal, curves, with a smooth gradation from a coarse mesh in the far field to a fine mesh near solid boundaries; and that mesh quality concerns the characteristics of a mesh that permit a particular numerical PDE simulation to be performed efficiently, with fidelity to the underlying physics, and with the accuracy required for the problem.
Let us expand a bit on the above:
- We first note that mesh quality depends on the particular simulation being undertaken. It is decidedly not obvious that there is a single best high-quality mesh we should strive for.
- Nevertheless, a meshing methodology should, at the very least, not create difficulties in the calculation.
- We should also stress that efficiency can have multiple meanings: time to produce the mesh, time to compute, memory usage, and so forth. Different meanings apply in different situations, so one should weigh the diverse attributes of efficiency before concluding.
- Accuracy is, of course, first and foremost the most important issue for most practitioners, who choose a higher-order discretization scheme with the aim of increasing calculation accuracy. Seldom does one consider that the truncation error expected from the chosen order of discretization may increase due to non-alignment of the mesh with the flow gradients, for example.
MESH QUALITY METRICS
Commercial software codes tend to quantify mesh quality in terms of criteria that measure the element quality and the gradation in mesh element size, such that at the very least one can identify “bad” quality locations and decide where a careful visual inspection may be needed. Such quantification is presented in the form of mesh quality metrics.
This post has no intention of presenting an exhaustive list of metrics, but names a few (orthogonality, skewness, aspect ratio) as conceptual means for evaluating mesh quality and its impact on obtaining an accurate solution.
The concept of mesh orthogonality relates to how close the angles between adjacent element faces (or adjacent element edges) are to some optimal angle (depending on the relevant topology). An example of orthogonality as presented in ANSYS Fluent is shown in the figure below.
The orthogonality measure ranges from 0 (bad) to 1 (good).
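As a concrete illustration, the following is a minimal sketch of a Fluent-style orthogonal quality for a single cell. The data layout (`faces` as tuples of face area vector, face centroid and neighbor centroid) is a hypothetical convention chosen for this example; the metric shown — the minimum, over all faces, of the cosine of the angle between the face area vector and the vectors from the cell centroid to the face centroid and to the neighbor centroid — is one common form of the measure.

```python
import math

def orthogonal_quality(cell_centroid, faces):
    """Sketch of an orthogonal-quality metric for one cell, in [0, 1].

    faces: list of (face_area_vector, face_centroid, neighbor_centroid)
    tuples (hypothetical layout). The quality is the minimum over faces
    of the cosines of the angles between the face area vector and the
    centroid-to-face and centroid-to-neighbor vectors.
    """
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    quality = 1.0
    for area_vec, f_centroid, n_centroid in faces:
        a = unit(area_vec)
        e_f = unit(tuple(f - c for f, c in zip(f_centroid, cell_centroid)))
        e_c = unit(tuple(n - c for n, c in zip(n_centroid, cell_centroid)))
        quality = min(quality, dot(a, e_f), dot(a, e_c))
    return quality
```

For a perfect cube with neighbors straight across each face, every cosine is 1 and the cell scores a quality of 1; any tilt of a face normal or neighbor centroid pulls the score below 1.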
Skewness in tetrahedral elements is best captured by the deviation from an optimal (equilateral) volume:
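The formula that originally accompanied this statement is not reproduced here; one common form of this deviation (used, e.g., in ANSYS, where the “optimal cell size” is the size of an equilateral cell with the same circumradius) is:

```latex
\mathrm{skewness} \;=\; \frac{\text{optimal cell size} - \text{cell size}}{\text{optimal cell size}}
```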
For all kinds of cells it may be captured, according to the normalized angle deviation concept, by finding the minimum angle between two lines joining opposite mid-sides of the element. Ninety degrees (60 for tets) minus the minimum angle found is reported as the skew of the element.
The skewness metric ranges from 0 (good) to 1 (bad):
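As an assumed, concrete example of an angle-based skewness, here is a sketch of the normalized equiangular skewness (the form used in ANSYS-style codes), where θ_e is the equiangular ideal (60° for triangles/tets, 90° for quads/hexes):

```python
def equiangular_skewness(angles_deg, theta_e):
    """Normalized equiangular skewness: 0 = perfectly equiangular, 1 = degenerate.

    angles_deg: the element's face/edge angles in degrees.
    theta_e: the ideal angle for the topology (60 for tris/tets, 90 for quads/hexes).
    """
    t_max, t_min = max(angles_deg), min(angles_deg)
    return max((t_max - theta_e) / (180.0 - theta_e),
               (theta_e - t_min) / theta_e)
```

An equilateral triangle scores 0; a 30-60-90 right triangle scores 0.5, reflecting its departure from the equiangular ideal.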
The aspect ratio metric is defined as the length-to-height ratio in 2D, or as the ratio of the circumscribed to the inscribed circle (or sphere) radius in 3D (sometimes also as an area ratio):
It should be noted that high-aspect-ratio cells are frequently required for orthogonal layers near solid boundaries, so the limitation is relaxed in such instances.
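To make the radius-ratio definition concrete, here is a minimal 2D sketch for a triangle, using the classical formulas R = abc/4K and r = K/s (K the area from Heron’s formula, s the semi-perimeter). The normalization by 2, so that an equilateral triangle scores exactly 1, is a convention assumed for this example:

```python
import math

def triangle_aspect_ratio(a, b, c):
    """Circumradius-to-inradius aspect ratio of a triangle with sides a, b, c,
    normalized so an equilateral triangle scores 1."""
    s = 0.5 * (a + b + c)                              # semi-perimeter
    K = math.sqrt(s * (s - a) * (s - b) * (s - c))     # area (Heron's formula)
    R = a * b * c / (4.0 * K)                          # circumradius
    r = K / s                                          # inradius
    return (R / r) / 2.0                               # equilateral: R/r = 2 -> 1.0
```

A nearly degenerate sliver (e.g. sides 1, 1, 1.9) scores far above 1, which is exactly the kind of cell that, away from boundary layers, should raise a flag.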
As mentioned above, there are many metrics out there, not necessarily independent of one another; there is no intention of providing an exhaustive review of them in this post, only of mentioning the ones commonly followed in most commercial codes.
CHOOSING A MESHING METHODOLOGY AND TOPOLOGY
Oh yes… the ongoing lengthy and heated debate… Which mesh has the best kung fu?
Structured Vs. Unstructured Mesh
In general, many people dislike unstructured meshes due to the lack of direct control over the mesh and the fact that many more data points and cells are produced than in their structured counterparts.
On the other hand, unstructured meshes are mostly automated, much easier to produce, and on many occasions are the only ones possible (especially for large-scale, highly complex geometry for industrial applications).
Before discussing the subject, we need a brief review of the special attributes of structured and unstructured meshes, without giving much detail on either, as the main goal here is to elucidate their advantages and disadvantages, allowing us a basic understanding of the notions involved when deciding between methodologies.
One main advantage of an unstructured mesh is its ease of generation. As CFD became more heavily used in industry, it was recognized that typically as much as 80% of the human time required to solve a given fluid dynamics problem was spent generating the mesh. I am not sure this is far too disproportionate, as it also seems that typically 80% of the simulation error originates from a bad mesh; but first and foremost, the intention was to give (perceived) more important issues, such as solution validation and results interpretation, the lion’s share of the engineer’s attention.
A second advantage of an unstructured mesh is that the approximated partial differential equation (PDE) to be solved remains in its original coordinate system, hence no derivation or processing of metric information is necessary during the numerical solution procedure (as is the case for a structured mesh), which is supposed to reduce the arithmetic overhead on computation resources. On the other hand, the points at which the discretization must be done now fill the domain in an irregular manner, making the discretization of the approximated PDE much more complicated. Specifically, the main adverse effect manifests itself in the approximation of derivatives to any desired formal order of accuracy.
It is fairly straightforward to show that, in general, in 2D a total of the point and its five nearest-neighbor grid-function values are needed to produce a second-order-accurate first-derivative approximation on a structured mesh, while for unstructured, irregularly spaced points the accuracy generally deteriorates to first order unless more neighboring points are involved.
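The 1D analogue of this order degradation can be demonstrated numerically: a central difference on equally spaced points is second-order accurate (halving the spacing quarters the error), while the same naive two-point difference on unequally spaced points drops to first order (halving the spacing only halves the error). A minimal sketch:

```python
import math

def central_uniform(f, x, h):
    # Second-order central difference on equally spaced points.
    return (f(x + h) - f(x - h)) / (2.0 * h)

def central_nonuniform(f, x, h_plus, h_minus):
    # Naive two-point difference on unequally spaced points:
    # only first-order accurate when h_plus != h_minus.
    return (f(x + h_plus) - f(x - h_minus)) / (h_plus + h_minus)

f, x = math.sin, 1.0
exact = math.cos(x)  # exact derivative of sin at x = 1

h = 1e-3
err_u = [abs(central_uniform(f, x, hh) - exact) for hh in (h, h / 2)]
err_n = [abs(central_nonuniform(f, x, hh, hh / 2) - exact) for hh in (h, h / 2)]

ratio_u = err_u[0] / err_u[1]   # ~4: second-order convergence
ratio_n = err_n[0] / err_n[1]   # ~2: first-order convergence
```

The error ratios under mesh refinement (~4 vs. ~2) are exactly what a mesh-convergence study measures, and they show why irregular point distributions demand larger stencils to recover formal accuracy.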
A major disadvantage of unstructured meshes is that many highly efficient time-splitting methods for solving the systems of equations resulting from implicit schemes cannot be implemented; instead, methods for elliptic PDEs such as incomplete LU (ILU) decomposition must be used, which increase the arithmetic overhead even when complemented by multigrid procedures, especially when time-dependent problems are concerned. Consequently, it is not so clear that a net reduction in the arithmetic overhead on computation resources is actually achieved.
Three major aspects should always tilt the scale from one methodology to the other:
- Complexity of geometry: the rise in computing power allows industrial applications involving highly complex geometry to be simulated. In such cases there may be no option but to use an unstructured mesh, especially if time is of the essence.
- Convergence time: structured meshes are much more efficient as far as solving the actual PDEs is concerned (as explained above).
- Accuracy: this is not entirely decisive, although, as mentioned above, a structured mesh may in general achieve better accuracy with far fewer neighboring cells.
Tet Vs. Hex Vs. Poly
Turning to the topology point of view, there is much more than meets the eye, though the general conception is that a hexahedral mesh may be placed so that it is much more aligned with the flow gradients than a tetrahedral mesh, which is obviously impossible to place with a constant face-normal direction, and therefore produces more truncation error (see the simple illustration below).
Tet meshes do present several geometric assets, such as planar faces and well defined face and volume centroids. Tet meshes can approximate almost any arbitrarily shaped geometry in great detail.
Furthermore, automatic volume-fill methods such as Delaunay and advancing-front have been well studied, developed and suited for tets, currently providing a very robust solution for meshing ultra-complex geometries in 3D, especially when mesh morphing (e.g. if the effect of ice accretion on a wing shape is to be simulated) and adaptivity are concerned.
With the advent of the poly mesh, the debate over which mesh topology is preferable became much fiercer. Proponents of the poly mesh (author included) and its successor, the polyhexcore (hybrid) mesh, point to the many merits such a topology offers.
A blog post “Nature’s Answer to Meshing” (by Stephen Ferguson), details the merits of poly mesh and also relates beautifully history and the life sciences to these merits: “Apart from the obvious benefits of economy, polyhedral meshes provide other advantages too. Because each polyhedral cell has more faces, it also has more neighbors than traditional cell types. A tetrahedral cell communicates with only four neighbor cells, and a hexahedral just six. In both cases this limits the influence of each cell to just a few neighbors. By contrast each polyhedral cell has on average 12 or 14 neighbors. The net result of this is that information propagates much more quickly through a polyhedral mesh, ultimately leading to an increased rate of convergence”.
The number of neighboring cells is nothing less than crucial (especially in complex flows) for the approximation of gradients in gradient calculation methods such as the Green-Gauss and least-squares approaches.
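To see why face count matters, recall the Green-Gauss cell gradient, grad(φ) ≈ (1/V) Σ_f φ_f S_f, where S_f is the outward face area vector: every face contributes one term, so more faces means more sampled directions. A minimal sketch (assuming the face-centroid values φ_f have already been interpolated):

```python
def green_gauss_gradient(face_values, face_area_vectors, volume):
    """Green-Gauss cell gradient: grad(phi) ~ (1/V) * sum_f phi_f * S_f.

    face_values: interpolated phi at each face centroid.
    face_area_vectors: outward face area vectors S_f (3-component tuples).
    volume: cell volume V.
    """
    g = [0.0, 0.0, 0.0]
    for phi_f, S in zip(face_values, face_area_vectors):
        for i in range(3):
            g[i] += phi_f * S[i]
    return [gi / volume for gi in g]
```

On a unit cube with a linear field φ = 2x + 3y − z evaluated at the six face centroids, the reconstruction recovers the exact gradient (2, 3, −1); with the scattered faces of a poly cell, each extra face adds one more independent sample to this sum.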
Another important topic on which there is a consensus among CFD practitioners is the impact of the alignment of flow gradients on phenomena like numerical diffusion and dispersion.
Not to delve too deep into the topic, consider the pure convection equation:
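The equation originally embedded here is not reproduced; assuming the standard 1D linear form with constant convection speed $c$, it reads:

```latex
\frac{\partial u}{\partial t} + c\,\frac{\partial u}{\partial x} = 0
```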
The above PDE’s modified equation (the PDE which the exact solution to the discretized equations satisfies) is:
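The equation originally embedded here is not reproduced; assuming, as a concrete example, a first-order upwind discretization with CFL number $\nu = c\,\Delta t/\Delta x$, the modified equation takes the well-known form

```latex
\frac{\partial u}{\partial t} + c\,\frac{\partial u}{\partial x}
  = \frac{c\,\Delta x}{2}\,(1 - \nu)\,\frac{\partial^2 u}{\partial x^2}
  + \text{(higher-order dispersive terms)}
```

where the second-derivative term on the right is the numerical (dissipative) diffusion introduced by the scheme.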
Meaning that the numerical behavior of the discretization scheme largely depends on the relative importance of dispersive and dissipative effects; specifically, a low-order scheme combined with a mesh that is not aligned with the flow’s gradients will tend to smear the solution all over the place:
There is of course no benefit in this aspect when there is no dominant flow gradient direction:
A poly mesh would not be as aligned with the flow gradients for very simple flows such as the above, but due to its multiplicity of faces it has six optimal directions (if it has 12 faces), in contrast with a hex, which has three; it may therefore be more accurate than a hex mesh for recirculating flows, even for cases that seem optimal for hex because they have a Cartesian domain, such as the cubic lid-driven cavity:
Fully automated workflow Vs. Practitioner controlled improved algorithm meshing
There has lately been another debate, between the “fully automated workflow” proponents and the “improved algorithms, albeit practitioner-controlled” proponents.
I think it is possible to enjoy both as long as you understand the range of validity of a workflow and you are aware of your own limitations (you can trust smart workflows to not create a foolish mesh the same way you can trust a foolish practitioner to fail in creating a smart mesh… 😉).
Moreover, specifically targeting the verification and validation process, I would say that verification is of utmost importance when an automated workflow is to be implemented. For instance, the automation cannot be considered verified if the generated mesh compromises the formal truncation error the practitioner expects later on, due to non-alignment with flow gradients, even for a mesh that scores high on metrics such as skewness, orthogonality, Jacobian and aspect ratio. This means that the verification of an automated mesh should amount to an optimization problem, with the metrics as the objective, constrained by the requirement that, at the very least, no expected formal accuracy is compromised by the mesh.

Such a process should be carried out very often anyway (automation or not), but I am a strong proponent of implementing even this kind of quality optimization as an automated part of the workflow. The reason I support that notion is that most of the time practitioners tend to ignore it (due to time overhead or a simple lack of understanding of the effect of an unverified mesh) and settle for the obvious representative metrics (e.g. skewness and orthogonality), subsequently not realizing that the validation has now been unnecessarily compromised. This means that the automated optimization need not be perfect; it just has to be better and faster than the average practitioner. While this level of automation is far from being effectively implemented, I have no doubt that it shall be in the very near future, such that it would seem wiser to trust the workflow algorithm more often than our own propensity to detect, correct and achieve a better result for the optimization problem of our mesh.
In sum, it seems there is no decisive answer, no single all-inclusive “best practice”, when it comes to mesh generation.
Trade-offs have to be made between factors such as time to produce and memory, resolution, alignment with flow gradients, applicability to morphing, robustness and maturity of the mesh generation method, convergence, geometry complexity, and accuracy. And then some more…
Stay tuned for “Know Thy Mesh – Mesh Quality – Part II” where specific issues such as: surface mesh, volume mesh, boundary layer mesh and improvement of mesh quality shall be addressed.