Transforming Chaos, the Law of Nature, into Order, the Law of Man

By Vyssion and jjn9128

“Big whorls have little whorls, which feed on their velocity. And little whorls have lesser whorls. And so on to viscosity.” - Lewis Fry Richardson, 1922

Turbulence is the most complicated kind of fluid motion. It is so complicated, in fact, that even coming up with a precise definition of it can be quite difficult. One popular definition was proposed by Markatos back in 1986:
“A fluid motion is described as turbulent if it is three-dimensional, rotational, intermittent, highly disordered, diffusive and dissipative.”

Turbulent flows are three-dimensional, non-linear, and time-dependent, with chaotic changes in the fluid’s pressure and flow velocity. Loosely stated, turbulence arises when there is an excess of kinetic energy that is able to overcome the fluid’s natural viscosity. In other words, the fluid has a higher proportion of inertial forces present than viscous forces; the ratio of the two defines a dimensionless quantity which can be used to predict flow patterns, and thus the onset of any potential turbulence: the Reynolds number, Re = ρUL/μ, where ρ is the fluid’s density, U its velocity, L a characteristic length, and μ its dynamic viscosity.
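To make the ratio concrete, here is a minimal sketch of the Reynolds number calculation. The specific numbers below (air at sea level over a ~0.3 m wing chord at ~90 m/s) are illustrative assumptions, not figures from this article:

```python
def reynolds_number(rho, velocity, length, mu):
    """Reynolds number Re = rho * U * L / mu (all quantities in SI units)."""
    return rho * velocity * length / mu

# Assumed illustrative values: air at sea level flowing over an
# F1 front wing chord of ~0.3 m at ~90 m/s (~320 km/h).
rho = 1.225    # air density, kg/m^3
mu = 1.81e-5   # dynamic viscosity of air, Pa.s
U = 90.0       # freestream velocity, m/s
L = 0.3        # characteristic length (wing chord), m

print(f"Re = {reynolds_number(rho, U, L, mu):.3g}")
```

At these values Re works out to roughly two million, comfortably in the turbulent regime for external aerodynamics.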
Wind tunnel testing is expensive because of the requirement to design, build, and then power the sometimes multi-megawatt fans, so the ability to simulate within a computer what happens to a fluid flowing around an object is extremely attractive. A tool which makes it possible to verify the performance of aerodynamic designs (against validated models), without needing to spend the time and/or money to manufacture a part and test it in the real world, is a powerful one to have at your disposal. This is particularly true within the world of Formula 1 which, apart from the complexities of turbulence in and of itself, also involves extremely complex geometry with numerous parameters defining its shape, each requiring optimization.

Complex flow field around a Formula 1 car’s front wing [endplate] in ground effect; lifted from J Pegrum “Experimental Study of the Vortex System Generated by a Formula 1 Front Wing”, Imperial College London, 2006

The complexity of directly simulating turbulence (which we will explore further shortly, in all its daunting and unfathomable complexity) stems from the fundamental principle that turbulent kinetic energy is dissipated and, in addition to that, the momentum of a fluid is exchanged through the interactions of small-scale fluctuations. Unlike laminar flows (i.e. smooth and uniform), which can be modelled with relative ease, turbulent flows follow a cascade process:
“Big whorls have little whorls, which feed on their velocity. And little whorls have lesser whorls. And so on to viscosity.” -- Lewis Fry Richardson, 1922

These eddies, or whorls, can be thought of as a sort of tangle of vortex elements which are stretched in some characteristic direction by the average flow as well as by one another in some other random direction. It is this mechanism, often called “vortex stretching” which ultimately leads to them being broken down into smaller and smaller versions of themselves. Because these eddies can only interact with one another, kinetic energy is extracted from the main bulk fluid motion by the larger eddies which then transfer it to smaller ones, and so on until the amount of energy exchanged becomes less than that of what would be required to resist the influence of the viscous stresses within the flow, and so they dissipate into a small amount of thermal energy. This process is called the “energy cascade”. The higher the Reynolds number of the flow, the more steps it takes the largest eddies to cascade down to dissipation.
However, more turbulent flows (and none in F1 is more turbulent than those behind and around the rotating tyres) pose quite a difficult challenge, given the extreme difference in scale between the larger eddies present within the bulk fluid motion and the tiniest eddies, which dissipate out to nothing. It is this incredibly large spectrum of scales which gives rise to some mind-boggling numbers when resolving even the simplest flows.
Direct Numerical Simulation (DNS) is the most daunting method of solving fluid motion within the CFD arsenal. DNS involves solving a set of equations which define fluid motion in terms of velocity, pressure, density, and viscosity numerically, without the need for any “modelling” of turbulence in order to do so. The equations required for this numerical solve are known as the Navier-Stokes equations, which were originally derived in the 1840s using the concepts of conservation laws and first-order approximations. Until the late 1970s, this process was impossible due to the computational requirements of obtaining a solution by solving all spatial and temporal scales of turbulence. The accuracy of DNS is unrivalled by any other solving method. However, DNS must satisfy two important constraints:  
  1. The dimensions of the computational domain must be large enough to encompass/contain the largest turbulence scales
  2. Grid resolution must be fine enough to capture the dissipation length scale - specific to that flow regime
To give an idea of the computational requirements of a DNS case: if we were to take a 100 millimeter cube with some sort of fluid motion within it, some of the eddies present may be less than 100 microns in length, with each “event” within the motion occurring up to 10,000 times a second. The worst case scenario would require on the order of a trillion cells in total to resolve around 100 microseconds of actual time passing. Getting any meaningful capture of how a particular flow structure propagates over an object would therefore be nigh on impossible using current technology. It is for this reason that most CFD simulations within F1 are performed using Reynolds-Averaged Navier-Stokes (RANS) methods.
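The growth in cell count can be sketched with the classical scaling laws for homogeneous turbulence, where the smallest (Kolmogorov) eddies shrink relative to the domain as Re^(-3/4), so the total cell count grows as roughly Re^(9/4). This is a back-of-envelope estimate under those assumed scaling laws, not a sizing rule for a real F1 case:

```python
# Back-of-envelope DNS cost estimate, assuming the homogeneous-turbulence
# scaling eta/L ~ Re^(-3/4), which implies cells ~ (L/eta)^3 ~ Re^(9/4).

def dns_cell_estimate(reynolds):
    eta_over_L = reynolds ** -0.75     # Kolmogorov length / domain length
    cells_per_side = 1.0 / eta_over_L  # grid points needed along one edge
    return cells_per_side ** 3         # total cells ~ Re^(9/4)

for Re in (1e4, 1e5, 1e6):
    print(f"Re = {Re:.0e}: ~{dns_cell_estimate(Re):.2e} cells")
```

Even at a modest Re of a million, the estimate lands in the tens of trillions of cells, which is why DNS remains out of reach for full-car aerodynamics.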
(Note: This article will only introduce the concept of turbulent flow modelling as it pertains to single-phase, incompressible flow with a constant viscosity - any equations which have been included are meant purely to inform the information presented; you will not need to fully understand them!!)

Within the area of RANS modelling, there exist many equation-based models for the approximation of turbulence. In the interest of brevity, only a select few have been chosen for summary in order to establish their prominent parameters. Two equations can be derived by describing the turbulent motion of a fluid as random variations about some average value; that is, defining the instantaneous flow characteristics as a function of the mean flow characteristics plus or minus some fluctuating value. This method is known as “Reynolds decomposition”, and through some numerical manipulation the following continuity and Navier-Stokes equations are derived:
Continuity Equation

∂ū_i/∂x_i = 0

Reynolds-Averaged Navier-Stokes Equation

ρ ( ∂ū_i/∂t + ū_j ∂ū_i/∂x_j ) = −∂p̄/∂x_i + ∂/∂x_j ( μ ∂ū_i/∂x_j − ρ ⟨u′_i u′_j⟩ )

(the overbar denotes a time-averaged quantity, and ⟨u′_i u′_j⟩ the time average of the product of the fluctuating velocity components)

The key term in this equation is −ρ⟨u′_i u′_j⟩, the time-averaged product of the fluctuating velocity components, which is called the Reynolds-stress tensor. This is the term which is the bane of turbulence equation solving!
The Reynolds-stress tensor is a symmetric tensor with six independent components. If we look again at the RANS equation above, you can see that the pressure term, three velocity components, and six stress components add up to ten unknowns, with only four equations (continuity plus the three components of the momentum equation) to solve for them. This is a problem: solving them in their current state would require knowing the finished solution beforehand. This shortcoming of the equations is known as the “closure problem”, and so CFD engineers currently employ methods of approximating the Reynolds-stress tensor in order to solve the RANS equations.
The most popular approach to defining the Reynolds-stress tensor is the use of something called the “Boussinesq eddy-viscosity approximation”. Essentially, what this approximation does is relate molecular motions to turbulent motions within a flow; that is, it correlates the Reynolds stresses to the rate of strain within the average flow characteristics. In layman's terms, imagine that you have a turbulent eddy within a flow. This eddy can be described as a “packet” of the flow which moves through the fluid like a single molecule of air. As it moves, it collides with other “packets” and exchanges momentum, in much the same manner that the kinetic theory of gases describes.
What comes out of this approximation is an “eddy viscosity” term, which is added to the molecular viscosity (the isotropic part of the Reynolds stress is absorbed into the average pressure term) within the RANS equations, which can then be used to approximate the flow solution. However, this eddy viscosity is not constant!! In fact, it may vary quite significantly from flow to flow and, as such, must be solved for.
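For reference, the standard incompressible form of the Boussinesq approximation described above can be written out as follows (this is the textbook form, reproduced here rather than taken from the article's own figures):

```latex
% Boussinesq eddy-viscosity approximation (standard incompressible form):
% the Reynolds stresses are modelled via an eddy viscosity \mu_t acting on
% the mean rate of strain, with the isotropic part (2/3 \rho k) absorbed
% into the mean pressure term.
-\rho \overline{u_i' u_j'} =
    \mu_t \left( \frac{\partial \bar{u}_i}{\partial x_j}
               + \frac{\partial \bar{u}_j}{\partial x_i} \right)
  - \frac{2}{3}\rho k \,\delta_{ij}
```

The entire family of models below amounts to different strategies for computing the single scalar field μ_t.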
If you perform a dimensional analysis on this eddy viscosity term, you will find that it must be proportional to the product of some velocity and a length scale. Determining how to calculate these two values becomes the key method in defining what they are; and the following models seek to do just that:
    Zero Equation Models
      Both the characteristic velocity and length scale are defined as algebraic equations
    One Equation Models
      The characteristic velocity is defined as the square-root of the turbulent kinetic energy of the flow, with the characteristic length scale defined algebraically
    Two Equation Models
      Use differential equations in order to calculate both the characteristic length and velocity, and then use a supplementary “estimation equation” depending on which two equation model is being used
Zero equation models only use partial differential equations to calculate the mean flow, with algebraic equations, as mentioned above, defining the turbulence quantities. The most well known zero equation model is the “Prandtl mixing length model”. This model defines the eddy viscosity via an algebraically defined mixing length, which is squared and then multiplied by the magnitude of the rate of change of streamwise velocity with respect to height away from some surface or wall. This means that while it performs moderately well when used to predict boundary layer flows or mixing layers, it breaks down when predicting flows with any sort of recirculation and/or separation.
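The mixing length formula above is simple enough to sketch directly. The near-wall form l_m = κy (κ being the von Kármán constant) and the example shear rate are standard textbook assumptions, not values from this article:

```python
# Sketch of the Prandtl mixing-length model: nu_t = l_m^2 * |du/dy|,
# with the mixing length near a wall commonly taken as l_m = kappa * y.

KAPPA = 0.41  # von Karman constant

def eddy_viscosity(y, dudy):
    """Kinematic eddy viscosity nu_t = (kappa*y)^2 * |du/dy| (m^2/s)."""
    l_m = KAPPA * y            # algebraic mixing length near a wall
    return l_m**2 * abs(dudy)

# Example: 5 mm from the wall, with an assumed shear rate of 2000 1/s.
print(eddy_viscosity(0.005, 2000.0))
```

Note that everything here is algebraic: no extra transport equation is solved, which is exactly why the model cannot “remember” upstream history and fails in recirculating or separated flows.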
One equation models comprise an additional transport equation, specifically for the turbulent kinetic energy (k). To do this, a length scale still has to be defined, and this is usually done based on experimental data and associated algebraic equations. However, experimental data for separated or recirculating flows is difficult to obtain, so for flows dominated by these phenomena, the use of one equation models is typically not advised. The most commonly known one equation model is the Spalart-Allmaras model, which was developed and optimized for flow past wings and aerofoils - a job it does quite well. That being said, the model does have one major flaw: numerical diffusion, which arises from discretizing the Navier-Stokes equations and causes the flow medium within the computational world to have a higher rate of diffusion than the real-world medium would exhibit.
Two equation models are what most people will be familiar with. These methods involve solving two transport equations for the characteristic length scale and velocity. The first equation is usually used to calculate the turbulent kinetic energy (k), with the second equation calculating one of several alternative variables, including:
Dissipation Rate of Turbulent Kinetic Energy (ε)

Specific Dissipation Rate (ω)

Length Scale (l)

Time Scale (τ)

The k-ε and k-ω models are the most widely used, and so we will touch briefly on these.
The k-ε model was developed by Launder and Sharma in 1974, and is able to model turbulent shear flows relatively well. However, it struggles to predict flows with strong adverse pressure gradients, and therefore separated flows, because of the difficulty of integrating it through the viscous sub-layer of a flow over a given surface. There have been a few revisions to the basic model; however, they still tend to struggle with flows exhibiting any sort of acceleration.
The k-ω model was originally proposed by Kolmogorov in 1942; however, it wasn’t until 2006 that the most important development, by Wilcox, was added. It differs from k-ε in that it utilizes a few additional relationships and closure coefficients in order to better define the two equations, including a few which denote vortex stretching. This model is generally able to achieve a higher accuracy than the k-ε method for several reasons. For one thing, it achieves a much greater level of accuracy when solving boundary layers with adverse pressure gradients, and it can be integrated into the viscous sub-layer without the need for any additional damping functions. It does, however, still suffer when applied to flows akin to a jet in a freestream.
There is one other type of two equation model which is of particular interest, and that is the Shear Stress Transport (SST) model created by Menter in 1994. The SST model has been validated for several different applications with quite accurate results. This is because it combines both the k-ε and k-ω models via a blending function which biases towards the k-ω model when close to a wall within a boundary layer flow, and towards the k-ε model within the far-field.
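The blending idea can be illustrated with a toy sketch. Note this is a deliberate simplification: Menter's actual F1 blending function depends on wall distance, k, ω, and viscosity, whereas the tanh-based switch below is just an assumed stand-in to show the structure of the blend:

```python
# Toy illustration of SST-style blending: a weight F1 that tends to 1
# near the wall (use k-omega coefficients) and to 0 in the far field
# (use k-epsilon coefficients). This shape is an assumption for
# illustration, NOT Menter's actual blending function.
import math

def blend(phi_komega, phi_keps, f1):
    """Generic SST-style blend of a model coefficient phi."""
    return f1 * phi_komega + (1.0 - f1) * phi_keps

def f1_weight(wall_distance, boundary_layer_thickness):
    """Assumed smooth switch: ~1 inside the boundary layer, ~0 outside."""
    ratio = wall_distance / boundary_layer_thickness
    return 0.5 * (1.0 - math.tanh(4.0 * (ratio - 1.0)))

print(f1_weight(0.001, 0.01))  # deep in the boundary layer -> near 1
print(f1_weight(0.05, 0.01))   # far field -> near 0
```

Every model coefficient in SST is blended in this fashion, so the solver smoothly transitions from one closure to the other rather than switching abruptly.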
There are more types of equations such as Low-Reynolds-Number modifications and Non-Linear Eddy Viscosity Models, however, because they are less commonly utilized they will not be looked into here.
We have summarized both the purely numerical method of solving turbulence, DNS, along with the time-averaged approximation methods, RANS; however, there is quite a gap in accuracy between the two. Where DNS would solve a flow as it would exist in the real world, it is computationally impossible with modern computing for all but a very few cases. While RANS is less computationally intensive, we have explored the ways in which the zero, one, and two equation RANS models rely on approximations of the flow in order to solve it. The main thing which results in this massive leap in computational expense is the need to resolve all scales of turbulence; even the teeny tiny ones… So what if we were to find a way of hybridizing that necessity?
Large Eddy Simulations (LES) do just that. They operate by fully resolving all of the important large scale flow features and only using turbulence modelling for the smallest, sub-grid scale, eddies (RANS methods model the entire spectrum). RANS methods operate via time-averaged equations, whereas the premise of LES is to “spatially filter” the domain into the regions where larger eddies are located and the areas of smaller eddies for modelling. A mathematical filtering operation is applied to the Navier-Stokes equations and, instead of a Reynolds stress tensor coming out at the end, we end up with a Sub-Grid Scale (SGS) tensor instead: τ_ij = F[u_i u_j] − F[u_i] F[u_j], where F[·] denotes the spatial filter.
Recall how we derived the Reynolds-Averaged Navier-Stokes equations by treating any instantaneous value as a function of some average value plus or minus a fluctuation. If we substitute this back (i.e. instantaneous = average +/- fluctuation) within this SGS tensor, writing F[·] for the spatial filter and ū_i = F[u_i], you end up with the following equation: τ_ij = (F[ū_i ū_j] − ū_i ū_j) + (F[ū_i u′_j] + F[u′_i ū_j]) + F[u′_i u′_j]
This equation highlights the key difference between LES and RANS models as far as the actual numerical equations being solved are concerned. If we break it down into parts, similar to how Clark et al. did back in 1979, there are three key terms:
    Leonard Term - The first two terms combined make up this term; it denotes interactions between the scales of the flow which have been fully resolved, and since it can be calculated directly from the LES-filtered field, it doesn’t require modelling
    Cross Term - The third and fourth terms together form this term; it is a measure of how much interaction there is between the large scale flow features and what is “filtered” out as being SGS
    Reynolds Term - The final term denotes how much interaction the SGS has with itself
Within RANS methods, the Leonard and Cross terms are equal to zero - which would result in just the RANS Reynolds stress tensor remaining!! With LES, both these terms are typically non-zero, and are more or less equal to one another. This tensor is formulated by averaging over what is “filtered” as being large scale vs. SGS, and so the accuracy of an LES simulation is only as good as its model for the SGS stresses. This is extremely important because of how energy is transferred between the resolved and unresolved turbulent eddies.
Beyond this point, there exist many hybrid models which take pieces of RANS, LES, and other time-averaging methods in order to try and increase accuracy while also reducing computational time. These will not be looked into here.
In summary, RANS methods are the most widely used in industry, and to date are accurate enough for most applications. Where they fall down is with applications that involve large curvature of geometry, low Reynolds numbers (which may or may not be buoyancy dominated), and flows which transition from laminar to turbulent (sometimes repeatedly - called intermittency). LES methods are currently the most accurate methods available for most practical applications. While DNS would obviously provide the most detailed and accurate solutions, it remains out of reach for any simulation beyond the most simple of flows - though these are extremely useful for validating new and revised models.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *  
N.C. Markatos, “The Mathematical Modelling of Turbulent Flows”, Appl. Math. Modell 10, 1986, pp. 190–220
B.E. Launder, “Heat and Mass Transport”, Chapter 6 in Turbulence, Topics in Applied Physics, 1978, pp. 231–287
C.G. Speziale, “Analytical Methods for the Development of Reynolds-Stress Closures in Turbulence”, 1991, pp. 107–157
R.A. Clark, J.H. Ferziger, W.C. Reynolds, “Evaluation of subgrid-scale models using an accurately simulated turbulent flow”, 1979, pp. 1–16

Join the discussion of the article at its dedicated topic! viewtopic.php?f=6&t=27397