The technological jump from no functional aeroplane to the first serious military fighter occurred in little more than a decade. The Wright brothers conducted their first flight in late 1903, and by 1914 WWI had broken out with an associated expansion in military flying. This expansion occurred almost entirely without the benefits of organised science in formal institutions and universities, and was led predominantly by tinkering aviators. Aircraft pioneers were often gifted flying buffs or sporting daredevils, but very few of them had any real theoretical knowledge. This proved to be sufficient for the early developments, when flying was mostly a matter of strapping a powerful and lightweight engine to a basic flying design, and having the skills to keep the aircraft aloft and stable. Many pioneers, like Charles Rolls, paid with their lives for this mindset, and it took many accidents from stalls and spins to figure out that something was amiss.

The specifications and operating environment of aeroplanes were, of course, entirely different from those of cars or trains. In particular, the design requirement of reliable yet lightweight construction posed a conundrum for early aerospace engineers. To make something stronger, the rule of thumb is to add more material. For aircraft this means increasing the wall thickness of the beams, frames and plates that comprise the aircraft. Of course, by making components thicker, the structure becomes heavier and less likely to fly. Furthermore, thicker structures are stiffer, which redirects loads within the structure and, rather counter-intuitively, can make the aircraft more likely to fail.

This counter-intuitive finding played out during the discovery of wing twisting. Wings are predominantly subject to bending forces due to the aerodynamic lift that keeps the aircraft aloft. As this is entirely obvious, and since there was a great deal of acquired expertise in bridge building, wing bending loads were supported quite reliably by beams (spars) running along the length of the wing. The wing is, however, also subject to large twisting forces, and if these are not accounted for, the wing will twist off the fuselage.

Spars running along the length of the wing and connected by a series of ribs

By 1917, the Allies had developed a certain degree of air superiority over the Western Front of WWI by means of better biplane construction. Out of necessity, the Dutch engineer Anthony Fokker, working in Germany at the time, developed a more advanced monoplane design with performance specifications better than anything the Allies had to offer. While biplanes are very light and were the preferred type of construction up to that point, their flying performance in terms of nimbleness and speed is limited by the high drag induced by the aerodynamic interference of the two separate wings. There was thus a strong incentive to build faster monoplane aeroplanes. But since the fateful crash of Samuel Langley’s Aerodrome into the Potomac River in 1903, monoplanes had a reputation for being entirely unreliable.

Fokker D8

And indeed, as soon as Fokker’s new D8 aeroplane flew in combat, the wings started to snap off as pilots pulled out of dives during dogfights. Because Fokker was pressed for time, the D8 had not gone through an extensive series of flight tests, and this cost many of Germany’s best pilots their lives. As a result, the German Air Force ordered a series of structural tests on the D8. As in the more standard biplanes of the time, the wings of the D8 were entirely covered by a thin fabric whose only purpose was to provide an aerodynamic profile for lift creation. The fabric itself did not carry any of the aerodynamic loads; all wing-bending loads were carried by two spars projecting from the fuselage and running along the length of the wing. The spars were connected by a series of ribs which served as attachment points for the stretched fabric. According to the testing standards of the time, the D8 was mounted upside down with weights suspended from the wings to simulate aerodynamic loads six times the weight of the aircraft. When tested this way, the wings showed absolutely no sign of weakness. Only when the load was increased beyond this factor of six did the wings begin to fail in the aft spar, and so the German authorities ordered all rear spars to be replaced by thicker and stronger ones. Unfortunately for the German military command, the accidents of the D8 became more frequent as a result of this intervention. Germany’s engineers now faced the perplexing conundrum that adding more material to the wings seemingly made them weaker!

At this point Fokker took matters into his own hands and repeated the tests in his own factory. What he found was that the wings would not only rise under aerodynamic load, but twist too, even though no obvious twisting loads were being applied. Particularly important was the direction of twisting: the leading edge twisted upwards, thereby increasing the angle of attack and the lift created by the wings, which further increased the wing twisting, and so on in a detrimental feedback loop. As a pilot pulled up out of a dive, the extra lift needed to pull off the manoeuvre was sufficient to initiate this catastrophic feedback loop, until the wings eventually twisted off. Fokker had discovered the phenomenon now known as “divergence”.

But why did this divergent behaviour occur in the first place?

Imagine two identical horizontal beams placed side by side and connected along their length by a number of ribs that bridge the gap between them. One end of this assemblage is free and the other is rigidly supported (clamped). This simple construction is basically the fundamental structure of even the most modern aircraft wing. If a vertical load is applied exactly halfway between the two beams at the free end, then both beams simply bend upwards without any twist. However, if the vertical load is biased towards one of the beams, then the assemblage bends and twists at the same time, because the load carried by one beam is greater than the load carried by the other. The point where a load must be applied such that the structure bends without twisting is known as the flexural centre.

If a load is applied at the flexural centre (for a wing, roughly halfway between the two spars), the wing will only bend. But because the centre of pressure is located at the quarter-chord position, i.e. not at the flexural centre, the wing bends and twists at the same time.

Of course, if there are more than two beams or if the beams are of different stiffness, then the flexural centre will not lie halfway between the beams. In fact, the aerodynamic lift forces are distributed across the wing and do not really act at a single point. However, the distribution of aerodynamic pressure can be summed up and represented mathematically as a point load acting somewhere between the front and rear spars. This point is known as the centre of pressure, and its position may shift with the flight condition. One might assume that the centre of pressure of a wing profile sits nicely in the middle between the two spars, but this is not what happens. The centre of pressure for most wing profiles is in fact just behind the front spar, in the vicinity of the quarter-chord position, that is, 25% of the chord length behind the leading edge. It therefore follows quite simply that if the flexural centre and the centre of pressure do not coincide, the wing must twist and bend at the same time. The extent of twisting naturally depends on this mismatch and on the torsional stiffness of the construction. It is the designer’s role to minimise it as much as possible, and in fact, the thick quill of a bird’s feather is located at about the quarter-chord to minimise twisting.
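
To make this concrete, here is a minimal numerical sketch (in Python, with entirely made-up spar properties rather than any real aircraft data) of a two-spar section idealised as two cantilever spars joined by a rigid rib: the flexural centre falls at the stiffness-weighted position between the spars, and a load applied near the quarter-chord therefore deflects the front spar more than the rear one, i.e. the section bends and twists nose-up.

```python
# Minimal sketch (hypothetical numbers): flexural centre of a two-spar wing section
# idealised as two parallel cantilever spars joined by a rigid rib at the free end.

E = 70e9                        # Young's modulus, Pa (assumed aluminium-like value)
L = 3.0                         # cantilever length (semi-span), m (assumed)
I_front, I_rear = 2e-5, 1e-5    # spar second moments of area, m^4 (made up)
x_front, x_rear = 0.15, 0.65    # chordwise spar positions as a fraction of chord (assumed)

# Tip stiffness of each cantilever spar: k = 3EI/L^3
k_front = 3 * E * I_front / L**3
k_rear  = 3 * E * I_rear  / L**3

# The flexural centre is the stiffness-weighted average of the spar positions:
# a load applied there deflects both spars equally, so the section bends without twisting.
x_fc = (k_front * x_front + k_rear * x_rear) / (k_front + k_rear)

# Now apply a vertical load at the quarter chord (the approximate centre of pressure).
P = 10e3       # N (assumed)
x_cp = 0.25    # quarter-chord position

# Load shares on each spar from static equilibrium of the rigid rib:
P_rear  = P * (x_cp - x_front) / (x_rear - x_front)
P_front = P - P_rear

w_front = P_front / k_front    # tip deflection of the front spar
w_rear  = P_rear / k_rear      # tip deflection of the rear spar

print(f"flexural centre at {x_fc:.2f} of the chord, centre of pressure at {x_cp:.2f}")
print(f"front spar deflection {1000*w_front:.1f} mm, rear spar deflection {1000*w_rear:.1f} mm")
print("unequal deflections mean the section bends AND twists (leading edge up)")
```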

Wing lift distribution with centre of pressure at the quarter-chord. A feather features a reinforcing “spar” at the quarter-chord to prevent twisting of the feathers

In the simple fabric-covered D8 monoplane, the flexural centre and torsional stiffness of the wing depended entirely on the two wing spars. In early designs of the D8, the flexural centre was pretty much bang in the middle between the two spars, and the fruitless attempts at beefing up the rear spar only moved the flexural centre further to the rear and away from the centre of pressure at the quarter chord. So Fokker decided to reduce the thickness of the rear spar, thereby not only solving the problem of divergence but also making the aircraft lighter and a serious menace to the British and French biplanes.

Fokker also came up with a second design evolution that enabled monoplanes. In the early fabric-covered monoplanes, the torsional stiffness of the wing is provided entirely by differential bending of the two spars, and not much can be done to improve it by tinkering with the design of these spars. This was part of the reason why monoplanes were forbidden in the early days of flying. It was a safety precaution, and not a particularly unpopular one, because in practice many biplanes were not much slower than monoplanes and considerably more reliable.

An example of the shear flow around a wing box due to a vertically applied load

As a structure is sheared it develops what is called shear flow – the shearing force divided by the length of material over which it acts. Because the fabric does not carry any significant load, the early fabric-covered monoplane construction is considered an “open” cross-section, as shear cannot flow from one spar to the other. The strutted and braced construction of the biplane, however, has the advantage of creating a closed “torsion box”. The torsion box of a biplane creates a closed cross-section around which the shearing forces can flow to resist torsion optimally. Torsion is therefore ideally resisted by any box or tube whose sides are continuous. The second breakthrough of monoplane construction was therefore to replace the fabric with load-carrying sheet metal. The closed aerodynamic surface of the wing could now do the job of resisting shear loads efficiently, while the two spars predominantly served to resist bending loads. In effect, this is an efficient division of labour, even though it requires a thicker and heavier skin than fabric.
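
To get a feel for just how much better a closed cell is in torsion, here is a small sketch (again with hypothetical dimensions, not those of the D8) that compares the torsional stiffness of a closed thin-walled box, using the standard Bredt-Batho shear flow result q = T/(2A), with the same four walls treated as an open section.

```python
# Sketch (made-up dimensions): torsional stiffness of a closed wing-box cell versus
# the same walls as an "open" section, using standard thin-walled torsion results.

# Rectangular single-cell box (skin panels plus two spar webs), dimensions assumed:
width  = 0.60   # m, distance between spars
height = 0.12   # m, spar/web depth
t      = 0.002  # m, wall thickness (uniform)

A = width * height              # enclosed cell area
perimeter = 2 * (width + height)

# Closed section (Bredt-Batho): shear flow q = T / (2A), and torsion constant
# J = 4 A^2 / (integral of ds/t), which is 4 A^2 t / perimeter for uniform thickness.
T = 1000.0                      # applied torque, N*m (assumed)
q = T / (2 * A)                 # shear flow around the cell, N/m
J_closed = 4 * A**2 * t / perimeter

# Open section (the same box with a slit): each wall only contributes b*t^3/3.
J_open = sum(b * t**3 / 3 for b in (width, width, height, height))

print(f"shear flow around the closed cell: q = {q:.0f} N/m")
print(f"J_closed / J_open = {J_closed / J_open:.0f}  (the closed box is vastly stiffer in torsion)")
```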

References
[1] J.E. Gordon. Structures: Or Why Things Don’t Fall Down. Da Capo Press, 2nd edition, 2003.


This post is a first. Up to now, all content on this blog has been written exclusively by me. But recently Nick Mehlig and Ben Names from Structural Design and Analysis, Inc. (SDA), a team of stress engineers who design lightweight and load-efficient structures, contacted me with a proposal for a guest post. The reason I agreed is that the guys at SDA have a unique perspective on a fascinating real-world engineering problem — re-designing the wing of a Fokker D-VII. The Fokker D-VII was a German fighter aircraft in World War I that was also used by many other countries after the Great War. This post is a look at some of the details of how aircraft components are designed.


Within the aerospace engineering community, there is an entire sub-discipline devoted to understanding the dynamics of a system and how loads are generated (propulsive, inertial, aerodynamic, etc.). In smaller companies, engineers often need to wear multiple hats, and the lines between classical stress analysis and aerodynamic loads analysis begin to blur. Recently, Structural Design and Analysis, Inc. (SDA) worked with a local resident who had taken it upon himself to build a Fokker D-VII biplane from scratch and wanted to know how much weight he could save by using an aluminium spar for the main wings instead of the original wooden spar design.


Our engineers had to develop a finite element model (FEM) and conduct the basic loads and dynamics analysis to define the load cases for the vehicle. Generating aerodynamic loads is relatively straightforward for aircraft with more conventional designs. Typically, a combination of 2D aerodynamic theory and corrections for wings of finite span is used to generate the loads in the early stages of the design phase. These loads are then applied to the structure at the quarter-chord location of the wing. For the Fokker, the analysis is slightly more complicated because the biplane construction creates interference effects between the upper and lower wings, which must be considered when determining the loads that act on the aircraft.

The goal of this case study is to show that various approaches can be taken to solve this loads-generation problem, and that the “best” approach for an engineer depends on his/her technical expertise, available resources (time and/or money), and the desired accuracy of the results. Three different methods were selected to calculate the aerodynamic loading on the Fokker D-VII biplane, and they are listed in increasing order of required technical expertise and accuracy:

  1. Assuming that each wing can be analysed separately. This type of solution is best suited to an aircraft enthusiast or an engineer without much background in theoretical aerodynamics.
  2. Accounting for the interaction between the upper and lower wing using correction factors. This type of solution is best suited for an engineer with a level of understanding comparable to an undergraduate education in aerospace engineering.
  3. Using an advanced FEM analysis suite such as NX Nastran’s Static Aeroelastic SOL 144. This solution technique requires the least amount of effort from the user, since the loads are calculated internally by NX Nastran, but is best suited to an engineer with some postgraduate education in aerospace engineering.

Let’s compare the efficacy of these three methods and the accuracy of their respective results.

Method 1
The first step in calculating the aerodynamic loads on the aircraft is to get the airfoil data. The Fokker D-VII uses a modified Goettingen GOE 418 airfoil for the upper wing. The airfoil data points (see the diagram below) were imported into XFOIL, a popular open-source 2D potential flow code, and the lift coefficients were extracted for various angles of attack (AOA). Using the XFOIL data, a plot relating the wing’s AOA to the lift coefficient  C_L  is constructed. A trendline is added to the data to estimate the lift curve slope for the airfoil of  C_{L_\alpha} = 0.095/^\circ , and the zero-lift AOA is  -8^\circ  (the airfoil must be pitched nose-down to produce zero lift).
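
In code, the trendline step amounts to a simple linear fit. The sketch below uses illustrative (alpha, C_l) pairs that are merely consistent with the quoted slope and zero-lift angle, not actual XFOIL output for the GOE 418; in practice these points would be exported from XFOIL.

```python
import numpy as np

# Sketch: estimating the lift-curve slope and zero-lift angle from 2D airfoil data.
# The (alpha, Cl) pairs below are placeholders consistent with the values quoted in
# the text (slope ~0.095/deg, zero-lift AOA ~ -8 deg); real values would come from XFOIL.
alpha_deg = np.array([-8.0, -4.0, 0.0, 4.0, 8.0])
Cl        = np.array([0.00, 0.38, 0.76, 1.14, 1.52])

slope, intercept = np.polyfit(alpha_deg, Cl, 1)   # linear trendline: Cl = slope*alpha + intercept
alpha_0L = -intercept / slope                      # zero-lift angle of attack

print(f"lift-curve slope  C_L_alpha = {slope:.3f} per degree")
print(f"zero-lift AOA     alpha_0L  = {alpha_0L:.1f} degrees")
```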

To calculate the lift on the upper and lower wings, a simple approximation from Prandtl Lifting Line Theory is used which relates the 3D lift coefficient to the 2D lift curve slope, the wing aspect ratio (AR) and the AOA. The lower wing of the Fokker D-VII has a  1^\circ AOA while the upper wing has  0^\circ  AOA.

 C_{L,upper} = C_{L_\alpha} \left( \frac{AR}{AR+2} \right) \left( \alpha - \alpha_{0L} \right) = 0.095 \left( \frac{5.14}{5.14+2} \right) \left( 0-(-8)\right) = 0.55

 C_{L,lower} = C_{L_\alpha} \left( \frac{AR}{AR+2} \right) \left( \alpha - \alpha_{0L} \right) = 0.095 \left( \frac{5.67}{5.67+2} \right) \left( 1-(-8)\right) = 0.63

The lift equation was then used to calculate the lifting force on the wings.

 L_{upper} = qS_{upper}C_{L,upper} = (0.277\ psi)(20,418.3\ in^2)(0.55) = 3,110.73\ lb

 L_{lower} = qS_{lower}C_{L,lower} = (0.277\ psi)(12,630.72\ in^2)(0.63) = 2,204.19\ lb

where  S is the wing area and  q = 1/2 \rho V^2 is the dynamic pressure that depends on the density  \rho and the airspeed  V of the particular manoeuvre.
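
These two steps are easy to reproduce in a few lines of code; the sketch below uses the values quoted above (dynamic pressure in psi and wing areas in square inches, so the lift comes out in pounds) and rounds the lift coefficients to two decimal places, as in the text.

```python
# Sketch: reproducing the Method 1 lift estimates from the values quoted in the text.
Cl_alpha = 0.095     # 2D lift-curve slope, per degree (from the XFOIL trendline)
alpha_0L = -8.0      # zero-lift angle of attack, degrees
q = 0.277            # dynamic pressure, psi

wings = {
    # name: (aspect ratio, rigging angle of attack in degrees, wing area in in^2)
    "upper": (5.14, 0.0, 20418.30),
    "lower": (5.67, 1.0, 12630.72),
}

for name, (AR, alpha, S) in wings.items():
    # Prandtl-style finite-span correction applied to the 2D lift-curve slope,
    # rounded to two decimal places as in the text above.
    CL = round(Cl_alpha * (AR / (AR + 2.0)) * (alpha - alpha_0L), 2)
    lift = q * S * CL                      # lift equation: L = q * S * C_L
    print(f"{name} wing: C_L = {CL:.2f}, lift = {lift:,.2f} lb")
```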

To balance the aircraft, the moment created by the wing lift about the centre of gravity of the Fokker needs to be balanced by the tail wing lift force,  F_{tail}. Each moment is equal to the lift force multiplied by the distance of the point of action  x from the centre of gravity of the aircraft. Given the relative positions of the two wings and the tail plane, we solve the following equation

 F_{tail} = \frac{L_{lower} x_{lower} - L_{upper} x_{upper}}{x_{tail}} = \frac{3110.73 \times 11.60 - 2239.17 \times 10.41}{190.12} = 79.40\ lb

The sum of these three loads is  3,110.73+2,204.19+79.40 = 5,394.32 lb, or 4.31 g. Since we are analysing a 4.0g load case here, the lift on the wings needs to be reduced. As the lift on the wings is reduced, the pitching moment will change which, in turn, changes the tail force required to balance the aircraft. Excel’s Goal Seek function was used to reduce the wing loading and balance the aircraft such that the total lift (including the tail) is equal to 4.0g. The final loads are shown below.

 L_{upper} = 2,894.41 lb

 L_{lower} = 2,078.78 lb

 F_{tail} = 62.81 lb

These final loads are applied to the quarter chord location of the wings. Here, a rectangular spanwise lift distribution is applied to the upper and lower wings.

Method 2
With two wings immersed in the same flow, each wing interacts with the other’s vortex system such that the upper wing experiences an increase in lift and the lower wing a decrease in lift, denoted by  \Delta C_{L,upper} and  \Delta C_{L,lower} respectively. The following method uses simple biplane theory, which is detailed in NACA Technical Report No. 458 [1]. It is shown there that the change in lift coefficient  \Delta C_{L} follows a linear relation with the overall vehicle lift coefficient  C_{L}  of the form:

 \Delta C_{L,upper} = K_1 + K_2 C_L

where  K_1 and  K_2 are constants relating to the gap between the two wings, the wing stagger (the relative fore-aft position of the two wings), the decalage (the angle difference between the upper and lower wings of the biplane), the overhang (the extension of one wing span over the other), and the wing thickness. The change in lift for the lower wing is related to the change in lift of the upper wing by the ratio of the wing areas.

 \Delta C_{L,lower} = -\Delta C_{L,upper} \frac{S_{upper}}{S_{lower}}

The values of  K_1 and  K_2 are found graphically using the biplane ratios of wing gap = 55 in., wing stagger = 25 in., wing overhang = 17.4%, and decalage = 1 deg. Using these values and the method described in NACA Report 458, the following constants are obtained:  K_1 = -0.090 and  K_2 = 0.195 . The final lift coefficients for the upper and lower wings are found by adding these corrections to the uncorrected vehicle lift coefficient. This uncorrected value is calculated from the maximum weight of the aircraft, which naturally determines the lift required from the wings. The maximum weight of the Fokker D-VII is 1,259 lb, and for a 4.0g manoeuvre (n = 4 in the equation below), the aircraft lift coefficient is:

 C_L = \frac{nW}{0.5 \rho V^2 S} = \frac{nW}{qS} = \frac{4 \times 1259}{0.277 \times 33049} = 0.55

Plugging the values of  K_1 ,  K_2 and  C_L into the  \Delta C_{L,upper} and  \Delta C_{L,lower} equations gives the following values:

 \Delta C_{L,upper} = 0.015 and  \Delta C_{L,lower} = -0.025

Using the new corrected values for the wing coefficients  C_{L,upper} = C_L + \Delta C_{L,upper} and  C_{L,lower} = C_L + \Delta C_{L,lower}, the total load can be calculated for the upper and lower wings. A moment balance is performed and the following loads are calculated for the aircraft:

 L_{upper} = 3,139.88 lb

 L_{lower} = 1,803.24 lb

 F_{tail} = 92.87 lb

Again, the loads are applied to the quarter-chord position of the aircraft wings. Two different spanwise lift distributions are applied to the model for this comparison study. The first assumes an elliptical lift distribution. The second uses Schrenk’s Approximation to estimate a more accurate spanwise lift distribution. These two distributions are shown below along with a reference to a rectangular distribution.
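
For reference, Schrenk’s approximation simply averages the actual planform chord distribution with an elliptical distribution of the same area. The sketch below shows one way to generate and scale such a distribution; the span and chord values are placeholders rather than the actual Fokker D-VII dimensions.

```python
import numpy as np

# Sketch of Schrenk's approximation: the spanwise lift per unit span is taken as the
# average of a distribution proportional to the local chord and an elliptical
# distribution of the same total area. Span and chord below are placeholders.
b = 340.0            # wing span, in (placeholder, not the actual Fokker value)
c = 60.0             # constant chord, in (placeholder; rectangular planform assumed)
L_total = 3139.88    # total wing lift to be distributed, lb (upper wing, Method 2)

y = np.linspace(-b / 2, b / 2, 201)            # spanwise stations
S = b * c                                      # planform area for the constant chord

c_planform = np.full_like(y, c)                # chord distribution of the actual planform
c_elliptic = (4 * S / (np.pi * b)) * np.sqrt(1 - (2 * y / b) ** 2)   # elliptical, same area
c_schrenk  = 0.5 * (c_planform + c_elliptic)   # Schrenk chord distribution

# Convert the Schrenk chord distribution into lift per unit span scaled to L_total.
lift_per_span = L_total * c_schrenk / np.trapz(c_schrenk, y)

print(f"lift per unit span at the root: {lift_per_span[len(y) // 2]:.1f} lb/in")
print(f"integrated lift check: {np.trapz(lift_per_span, y):,.1f} lb")
```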

Method 3
The third and final method is to use NX Nastran’s static aeroelastic SOL 144 analysis to generate the loads using a vortex lattice formulation. A potential flow model is created in FEMAP to generate the aerodynamic loading for the Fokker. One of the powerful features of a SOL 144 trim analysis is that, given high-level information about any flight condition, Nastran can not only calculate the aerodynamic forces, but can also ensure that the vehicle is stable. With just a few clicks, the load case can be modified to model any corner of the flight envelope by changing the dynamic pressure and load factor of the aircraft. Since Nastran calculates all the loads internally from this high-level flight condition information, it can save an incredible amount of time that would otherwise be devoted to calculating loads externally, bringing them in, and applying them to the structure accurately. This solution therefore requires the least amount of effort from the user: the loads are calculated internally by NX Nastran and then applied to the FEM through the points defined in the model. NX Nastran automatically calculates the trim condition of the aircraft and the loading on the upper wing, lower wing, and horizontal tail. The resulting loads are shown below:

 L_{upper} = 3,223.21 lb

 L_{lower} = 1,697.10 lb

 F_{tail} = 122.75 lb

Results
As the complexity increases with each of the methods discussed above, so does the accuracy of the results. However, not every stage of the design requires the same precision. Since this discussion focuses on how the analysis approach impacts the design of the aircraft, let us first compare the calculated loads for all three methods. Below is a table outlining the loads for each method and the percentage difference when compared to the aeroelasticity finite element model (the most accurate loads generation approach).

When comparing the lifting forces on the upper and lower wings of the aircraft, the aerodynamic loading from Method 1 underestimates the lift on the upper wing by 10.2% and overestimates the lift on the lower wing by 22.5%. Applying simple biplane theory in Method 2 captures the interference effects and estimates the wing loading much more accurately, with the upper wing lift only 2.6% less and the lower wing lift 6.3% greater than in the aeroelastic model. A more detailed way to compare the design impact of the three load-generation methods is to look at the internal shear force and bending moment diagrams within the spars. Below is the shear force diagram for the upper wing leading edge spar.
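
The percentage differences quoted above follow directly from the loads listed for each method; a few lines of code make the comparison explicit.

```python
# Sketch: comparing the wing and tail loads of each method against the SOL 144
# aeroelastic solution (all values in lb, taken from the text above).
loads = {
    "Method 1 (non-interfering)": {"upper": 2894.41, "lower": 2078.78, "tail": 62.81},
    "Method 2 (biplane theory)":  {"upper": 3139.88, "lower": 1803.24, "tail": 92.87},
    "Method 3 (SOL 144)":         {"upper": 3223.21, "lower": 1697.10, "tail": 122.75},
}

reference = loads["Method 3 (SOL 144)"]
for method, vals in loads.items():
    # Percentage difference of each load component relative to the aeroelastic model.
    diffs = {k: 100 * (vals[k] - reference[k]) / reference[k] for k in vals}
    print(f"{method}: " + ", ".join(f"{k} {d:+.1f}%" for k, d in diffs.items()))
```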

Starting with the simplest method, the 2D non-interfering lift generates the highest shear force. As one would expect, the variation in the maximum internal shear force is small (at most a 7% difference) across all the models, since the total lift generated by the aircraft was set to be constant. The differences are partially due to how the lift is distributed between the upper and lower wings. Furthermore, the spanwise distribution clearly has an impact on the internal shear force. Interestingly, the model that matches the shear force of the aeroelastic model most closely is the biplane theory model using Schrenk’s approximation. The differences produced by these aerodynamic models become even more apparent when inspecting the internal bending moment.

The internal bending moment clearly shows how differences in the aerodynamic models can propagate. The most basic model (2D Non-interfering Lift) produces the highest bending moment, 33% higher than the aeroelastic solution. While it is safer to be on the conservative side, this kind of inaccuracy will lead to a substantially heavier structure, thus limiting performance.

The shear and bending moment diagrams are often excellent indicators of the internal stress state within simple structures. As a stress engineer, comparing stress plots is the most meaningful way to assess how the different aerodynamic models affect the stress throughout the vehicle. Below is a picture of the Von Mises stress in the upper wing leading and trailing edge spars under a 4g pull-up when using the aeroelasticity model.

The max spar cap Von Mises stress is 11.4 ksi. In comparison, the same stress contour is presented below for the case of the biplane theory model with Schrenk’s approximation, which exhibited a maximum Von Mises stress of 11.5 ksi, a 0.8% difference from the aeroelastic aerodynamic model.

In contrast, the Von Mises stress state for the upper wing under the 2D non-interfering lift can be seen below as a gross overprediction, predicting a maximum Von Mises stress of 15.2 ksi, 33% higher than the aeroelastic aerodynamic model.

Conclusion
Having exhaustively explored the impact of the different aerodynamic models on the final stress results, we can draw several clear conclusions. First and foremost, as laid out at the beginning, none of these approaches is inherently bad; however, their mileage does vary significantly. Requiring the least technical background, the 2D non-interfering lift model provides a good approximation of the stress state in the leading and trailing edge spars, but is over-conservative in predicting internal stresses. As expected, including the interference effects between the upper and lower wings via simple biplane theory and applying finite-span effects has the potential to predict stresses within 0.8% of the most accurate model. Unfortunately, this relies on the user correctly calculating the spanwise loading and interference effects, which often requires complex analytical methods or a potential flow code. Furthermore, there are ample opportunities for an engineer to make a mistake when taking this approach, and it could be difficult to detect these errors without the results of a more accurate model to compare against.

Now, given that fairly accurate stress distributions using semi-analytical methods can be achieved, you might be asking yourself, why might anyone want to spend the money to use the NX Nastran Aeroelasticity module? First, it removes substantial uncertainty in the accuracy of the aerodynamic model. The Nastran Aeroelasticity module can account for interfering lifting surfaces, slender fuselage effects, ground effect, compressibility, wing sweep and taper, as well as a number of other factors. Additionally, once implemented, the Nastran Aeroelasticity module is more flexible than generating the loads from an outside source and then applying those loads within FEMAP (the pre/post environment). Nastran can generate the loads for any flight condition such as steady level flight, a 4g pull up, or a 3g coordinated turn, requiring only high level information from the user.

Finally, the user is also provided with additional information such as the trim angle of attack, control surface deflection angles, and vehicle stability derivatives. As with most problems, there is rarely a single correct approach, but when high accuracy and case-generation flexibility are desired, then using NX Nastran’s Static Aeroelastic Solution 144 is the way to go. However, if you are working on a budget, can take on additional mass, or do not have the technical background to employ Solution 144, then using an analytical method or generating the loads some other way externally is probably the way to go.

References

[1] Diehl, W. S., “Relative Loading on Biplane Wings,” NACA TR-458, January 1934


A very important parameter when designing jet engines is specific power – the power output divided by the mass of the engine. In general, a good heuristic to keep in mind when designing anything that moves is that maximising the power output per unit mass leads to a more efficient design. Afterburning is an exception to this rule: yes, it provides more thrust and therefore more bang for every gram of jet engine, but it is terribly fuel-inefficient.

Afterburning, sometimes also called reheat, is a means of increasing the thrust of a jet engine in order to improve aircraft take-off and climb performance, to accelerate beyond the sound barrier, or, in a military setting, to improve combat performance. Of course, we could simply increase the thrust by building a bigger and more powerful engine, but this naturally leads to a greater frontal area that impedes the oncoming flow and therefore increases drag. Even though afterburning is incredibly fuel-inefficient, it is the best solution for providing massive amounts of additional thrust at the flick of a switch. This means an engine can be run in two modes – a fuel-efficient, low-thrust configuration and a fuel-inefficient, high-thrust configuration.

Effect of afterburning during take-off and climbing [1]

From conservation of momentum we know that the thrust of a jet engine is governed by the mass flow rate, \dot{m}, and the difference between the velocity of the air entering the engine, U_a, and the velocity of the jet leaving it, U_j.

\text{Thrust} = F=\dot{m}(U_j-U_a)

For a fixed airspeed U_a, this means that the level of thrust depends on both the exit jet velocity of the gases and the mass of air flowing through the engine per second. So to produce high levels of thrust we can either accelerate the exhaust gases to a greater velocity, or increase the amount of air that is being sucked into the engine. Early turbojets attempted to maximise the exit jet velocity in order to create more and more thrust. The downside of this approach is that it decreases the efficiency of the engine. The propulsive or Froude efficiency N_p of a jet engine is defined as the power output divided by the rate of change of kinetic energy of the air; the kinetic energy of the air represents the power input to the system. The power output P is the product of the force output, i.e. the thrust F, and the flight speed U_a. Although this is an approximation, this equation summarises the essential terms that define aircraft propulsion. So, the power output is

P=F U_a = \dot{m} U_a (U_j - U_a)

and the rate of change in kinetic energy is

\Delta KE = 0.5 \dot{m} (U_j^2 - U_a^2)

such that the propulsive efficiency is

N_p = \frac{\dot{m} U_a (U_j-U_a)}{0.5\dot{m} (U_j^2 - U_a^2)} = \frac{2U_a}{U_a + U_j}

This means that for a fixed airspeed  U_a,  the efficiency can be increased by reducing the jet exit velocity U_j. However, decreasing the jet exit velocity U_j decreases the thrust unless the mass flow rate \dot{m} is increased as well. Note that the advantage of increasing the mass flow rate is that it does not have an effect on the propulsive efficiency.
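
To make this trade-off concrete, the short sketch below evaluates the thrust and propulsive efficiency equations above for two illustrative engines producing the same thrust at the same airspeed – one with a small mass flow and a high jet velocity, and one with a large mass flow and a low jet velocity (all numbers are purely illustrative).

```python
# Sketch: thrust and propulsive (Froude) efficiency for illustrative flight conditions.
def thrust(m_dot, U_j, U_a):
    """F = m_dot * (U_j - U_a)"""
    return m_dot * (U_j - U_a)

def propulsive_efficiency(U_j, U_a):
    """N_p = 2*U_a / (U_a + U_j)"""
    return 2 * U_a / (U_a + U_j)

U_a = 250.0   # airspeed, m/s (illustrative)

# Two ways of producing the same thrust: a high jet velocity with a small mass flow
# (turbojet-like), or a lower jet velocity with a larger mass flow (turbofan-like).
for m_dot, U_j in [(50.0, 900.0), (325.0, 350.0)]:
    F = thrust(m_dot, U_j, U_a)
    eta = propulsive_efficiency(U_j, U_a)
    print(f"m_dot = {m_dot:5.1f} kg/s, U_j = {U_j:5.0f} m/s  ->  "
          f"thrust = {F / 1000:5.1f} kN, propulsive efficiency = {eta:.2f}")
```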

Schematic Diagram of Turbofan Engine (Photo credit: Wikipedia)

High bypass-ratio turbofan engines, which are the most common in modern airliners, are designed around this second principle – the big fan at the front sucks in tons of air, but because this flow bypasses the combustion chamber, it is not accelerated to a high exit velocity. The advantage of this design is that increasing the bypass ratio yields better fuel efficiency, which means that turbofans can be operated over long periods (great for long-haul commercial passenger flights).

The downside, of course, is increased engine size and drag, which is a nightmare for nimble fighter aircraft. In fighter aircraft you want a small and compact engine that provides tons of thrust; fuel efficiency is typically of secondary concern. An afterburner therefore addresses the first principle we discussed above – accelerating the exhaust gases to higher velocity. This generally means that we can shrink the size of the engine and decrease the bypass ratio to provide better aerodynamic performance.

Schematic of afterburning components and functionality at the tail end of a jet engine [1]

As shown in the figure above, afterburning is achieved by injecting extra fuel into the hot exhaust gases that are expelled by the turbine stage. The gas temperature inside the jet engine is highest just before entering the turbine (just after combustion), and this turbine entry temperature is often the limiting design driver of the entire jet engine. Today, the turbine entry temperature is actually greater than the melting point of the metal used to make the turbine blades, but clever single-casting manufacturing methods and intricate cooling ducts inside the turbine blades guarantee that the blades do not creep excessively. To limit the turbine entry temperature, the combustion just prior to the turbine stage typically occurs at an oxygen-rich mixture ratio, such that sufficient oxygen remains in the hot gases flowing through and exiting the turbine. The hot jet exiting the turbine therefore contains enough uncombusted oxygen that spraying more fuel into the jet pipe and igniting the ensuing fuel-oxygen mix with a little spark raises the temperature to about 1700°C, and the related increase in pressure forces the gases through the exhaust nozzle at increased velocity.

The hot jet from the turbine flows into the jet pipe at a velocity of around 250 m/s to 400 m/s, which is far too high to guarantee stable combustion in the jet pipe. Just prior to the jet pipe, the cross-sectional area of the turbine exit duct therefore increases to diffuse the flow to lower velocities. However, because the standard injection rate of kerosene at a good fuel-to-oxygen mixture is only around 1-2 m/s, the kerosene would be rapidly blown away even by the diffused jet stream. To prevent this, a vapour gutter is placed just prior to the fuel injection nozzles, which spins the jet into turbulent eddy currents, thereby further slowing down the hot turbine exhaust gases and allowing for a better mixture of fuel and jet stream. A common misconception is that, due to the high temperature of the gases exiting the turbine (around 700°C), the fuel-oxygen mixture in the jet pipe would combust spontaneously. Cooler combustion flames can develop at these temperatures, but because of the atmospheric pressure differences between ground level and altitude, a design that spontaneously combusts at ground level would never do so at altitude. To guarantee a stable and smooth reaction over a wide range of mixture ratios and flying altitudes, a high-intensity spark is needed.

Two-position nozzle [1]

Variable-area nozzle [1]

To allow the jet engine to operate without afterburning, the jet pipe is fitted with a two-position or a variable-area propelling nozzle, as shown above. When afterburning is not being used, the nozzle remains in its closed configuration, but it opens when afterburning is initiated to increase the exit area and prevent pressure from building up in the jet pipe, which could adversely affect the operation of the turbine. A two-position nozzle has two “eyelids” that can be moved to open or close the nozzle area. A variable-area nozzle consists of multiple flaps situated side by side in a ring around the exit nozzle and hinged to the outer casing. The flaps can be rotated into or out of the flow by rollers running on a camtrack, driven by a linear actuator (operating ram). When afterburning is initiated, a fuel control unit determines the correct amount of fuel to flow into the jet pipe to provide the correct balance between increased jet pipe pressure and the pressure ratio across the turbine. The pressure ratio across the turbine is crucial for efficient operation of the jet engine as it provides the energy to drive the compressor stages. The control system can therefore automatically vary the nozzle exit area in order to maintain the correct pressure ratio across the turbine – the higher the degree of afterburning, the greater the build-up of pressure in the jet pipe, and thus the greater the required nozzle area to reduce the load on the turbine.

Thrust and fuel consumption

The increase in thrust is a function of the increase in jet pipe temperature as a result of afterburning. For a perfectly efficient system, the relationship between the temperature ratio before and after the extra fuel is burnt and the thrust increase is nearly linear in the typical operating range, with temperature ratios of 1.4 to 2.2. Within this range we can expect roughly a 40% increase in thrust for a doubling of the temperature in the jet pipe. Thus, if afterburning raises the jet pipe temperature from 700°C (973 K) to 1500°C (1773 K), this results in a thrust increase of around 36%.

 \frac{1773}{973} \times \frac{40\%}{2} = 36\%

In a static test bed, thrust increases of up to 70% can be obtained at the top end, and at high forward speeds, several times this can be achieved. The lower the temperature exiting the turbine and the greater the extent of uncombusted oxygen, the greater the temperature increase in the jet pipe due to afterburning.

As is to be expected, afterburning naturally incurs a fuel consumption penalty, and this is why afterburning is typically constrained to short bursts. The aim of the compressor in a classic jet engine is to raise the pressure of the incoming air to the optimal pressure for efficient combustion. After expansion by the turbine stage, the gases are at a lower degree of compression, and therefore the fuel is not burnt as efficiently as in the combustion chamber between compressor and turbine. For a 70% increase in thrust the fuel consumption can easily double, but of course this increased fuel consumption is balanced by an improved performance in terms of take-off and climb. This means that the increased fuel consumption is balanced by the time saved to cover a desired distance or operating manoeuvre.

If you are interested in jet engine design, check out my posts on its history, compressor and turbine design, jet engine optimisation and turbine cooling.

References

The inspiration for this post and all of the diagrams have been taken or adapted from [1] Rolls-Royce (1996). The Jet Engine. Rolls-Royce Technical Publications, 5th edition (Amazon link). For anyone interested in jet engine design this is a beautiful book, describing lots of intricate details of jet engine design and presenting the information in an intuitive and visually pleasing manner using diagrams like those used throughout this post. I cannot recommend this book enough.

Aeroelasticity is the study of the interactions between elastic, inertial and aerodynamic forces that arise when a body is immersed in an airflow. The unique challenge of aeroelasticity is to analyse how vibrations, static deflections, and lift and drag forces combine, and to make sure that no interaction of these three forces leads to inferior aircraft performance or even failure.

The triangle in the figure below is known as Collar’s triangle and each vertex represents one of the forces mentioned above. When all three forces interact simultaneously we are in the realm of aeroelasticity, and common failure modes include wing flutter and buffeting. When inertial and elastic forces combine in the absence of aerodynamic forces, we are in the classical domain of structural dynamics, essentially dealing with the sort of mechanical vibration that you would experience on any piece of moving machinery. The interaction of inertial and aerodynamic forces gives rise to flight dynamic stability problems: how does an aircraft react to small disturbances – do the oscillations damp out or do they grow over time? Finally, the interaction of aerodynamic and elastic forces can give rise to a phenomenon known as divergence, in which the twisting of the wing becomes theoretically infinite and can cause the wings to twist off.

The Collar Triangle defining aeroelasticity as “the study of the mutual interaction that takes place within the triangle of the inertial, elastic, and aerodynamic forces acting on structural members exposed to an airstream, and the influence of this study on design.”

The two most dramatic aeroelastic effects are flutter and divergence. Flutter is a dynamic instability, often of the wing, caused by positive feedback between the wing’s deflection and the aerodynamic lift and drag forces. The flutter speed is the airspeed at which the wing, or any other part of the structure, starts to undergo simple harmonic motion – much like the to-and-fro motion of a pendulum – with zero net damping. Zero net damping means that there is no dissipation of energy (think of a pendulum swinging for eternity), and so any further decrease in net damping will result in self-oscillation – the structure essentially forces itself to vibrate more and more, which at some point will naturally lead to failure.

As we all know, the lift force acting on a wing will tend to bend it upwards, but what is less well known is that this lift force can also cause the wing to twist. This is because the centre of pressure, the point where the total sum of the lift pressure field is assumed to act on an airfoil, is not necessarily coincident with the shear centre, the point through which a bending load needs to be applied to get pure bending without any twisting. Imagine holding a ruler in one hand and pushing up on it with your other hand. If you apply the load along the central axis of the ruler, the ruler will only bend, but if you apply the load towards one of the two sides you can see the ruler bend and twist ever so slightly. Most of the time, the shear centre of an airfoil is not coincident with the centre of pressure, and so a lift force produces both bending and twisting. A critical phenomenon called divergence can occur when this twisting of a wing increases the angle of attack, which consequently increases the lift force further or creates further mismatch between shear centre and centre of pressure, so that a feedback loop ensues until the wing diverges and essentially twists off. In fact, one of the Wright brothers’ main rivals in the race to achieve the first heavier-than-air flight was Samuel Langley, whose prototype plane crashed into the Potomac River in Washington D.C., and this is now believed to have occurred as a result of torsional divergence. Furthermore, torsional divergence was a big problem in many WWI fighter planes and required a lot of additional stiffening of the wings.

Divergence of wings in action

Forward-swept wings

One of the domains where divergence is particularly pernicious is forward-swept wings. Simply put, wing sweep delays the onset of shock waves over the wings and therefore reduces the associated rise in aerodynamic drag caused by boundary layer separation. In slightly more detail, as air flows over a curved object, such as an aircraft wing, it accelerates due to centripetal forces, and this means that an aircraft travelling slightly slower than Mach 1.0 (the speed of sound) can develop pockets of supersonic flow over areas with high local curvature, typically the wings or the canopy. For thermodynamic reasons, supersonic flows terminate in a shock wave, which results in a sudden increase in the density of the air. This effect disturbs the smooth flow over the wing and creates vortices behind the aircraft, and is therefore a form of parasitic drag. Sweeping the wing reduces the curvature of the body as seen by the airflow by the cosine of the angle of sweep. For example, a 45-degree sweep reduces the effective curvature to around 70% of the straight-wing value (\cos 45^\circ = 0.71). As a result, the airspeed at which supersonic pockets start to form increases by about 30%, such that the aircraft can reach speeds much closer to Mach 1 before shocks occur.

Another way to think about the effect of sweep is to imagine the airflow over the wing as shown in the figure below. The effect of sweep is to break the airflow into a component acting along the wing chord, normal to the leading edge (the “normal component”), and one along the span of the wing (the “spanwise component”). The maximum curvature of the wing occurs along the wing chord, and the normal velocity component for the swept wing ( V \cos \psi ) is always less than the normal component for a straight wing (V).

The figure above highlights another critical aspect of swept wings: the spanwise component. On a backward-swept wing the spanwise flow is outwards and towards the tip, while on a forward-swept wing it is inwards towards the root (see the figure below). Firstly, with the air flowing inwards towards the fuselage, wingtip vortices and the accompanying drag are reduced. Wingtip vortices form when the higher pressure air underneath the wing is sucked up onto the lower pressure top surface of the wing, thereby reducing the effective lift-generating surface of the wing. On most modern backward-swept airliners, winglets and sharklets prevent this phenomenon from occurring. Forward-swept wings similarly minimise this effect by re-routing some of the flow towards the wing root, and therefore allow for a smaller wing at the same lift performance. The second advantage of forward-swept wings is that shockwaves tend to develop first at the root of the wing, rather than towards the tips, and this helps to reduce tip stall. Aerodynamic control surfaces such as ailerons are typically located near the tips of the wings, because the further outboard, the greater their effect on controlling the rolling action of the plane. Tip stall essentially renders these ailerons useless, and therefore jeopardises the pilot’s control over the aircraft. As a result, the dangerous tip stall condition of a backward-swept design becomes a safer and more controllable root stall on a forward-swept design, providing better manoeuvrability at high angles of attack.

Airflow over forward- and backward-swept aircraft

For all their merits, forward-swept wings suffer from one detrimental flaw – divergence. In a forward-swept wing configuration, the aerodynamic lift causes a twisting force that rotates the leading edge upward, causing a higher angle of attack, which in turn increases lift, and twists the wing further. With conventional metallic construction, additional torsional stiffening is typically required which adds weight, and is therefore sub-optimal in terms of aircraft performance.

Enter the Grumman X-29

The Grumman X-29 was an experimental aircraft developed by Grumman in the 1980s and flown by NASA and the US Air Force. The X-29 tested a forward-swept wing, canard control surfaces, and computerised fly-by-wire control to counterbalance the various aerodynamic instabilities created by its airframe. From my perspective, the most important innovation, however, was the novel use of composite materials to control the aeroelastic divergence of the forward-swept wings. At the time, composite materials were popular in the high-performance aircraft community as a means of creating stiff and strong structures at very low weight. In fact, composites were mainly used to save weight. However, the X-29 showcased a second advantage of this new material over classic metallic structures – multi-functionality.

X-29 at High Angle of Attack with Smoke Generators

Metals are isotropic materials, meaning that their stiffness is the same in all directions. The relationship between stress and strain along one direction of an aluminium panel is the same as in any other direction. Because composite materials are a union of stiff fibres held together by a resin matrix, we can manufacture panels that are stiffer in one direction than in another. This is because the composite material will be very stiff along the fibre direction but relatively compliant perpendicular to the fibre direction. In most fibre-reinforced composite materials, such as fibreglass and carbon fibre, this variation in stiffness is restricted to the plane of a single sheet of material known as an orthotropic lamina.

Consider one such layer of a continuous fibre-reinforced composite in the figure above. The material axes 1-2 denote the stiffer fibre direction (1) and the weaker resin direction (2). If we align the fibres with the global x-axis and apply a load in the x-direction, the layer will stretch along the fibres and contract in the resin direction (or vice versa). However, if the fibres are aligned at an angle to the x-direction (of say 45°) and a load is applied in the x-direction, then the layer will not only stretch in the x-direction and contract in the y-direction but also shear. This is because the layer stretches less in the fibre direction than in the resin direction. This effect can be precluded if the number of +45° layers is balanced by an equal number of -45° layers stacked on top of each other to form a laminate, e.g. a [45,-45,-45,45] laminate. However, this [45,-45,-45,45] laminate will exhibit bend-twist coupling because the +45° layers are placed further away from the mid-plane than the -45° layers. The bending stiffness contribution of a layer scales with the layer thickness cubed and with the square of its distance from the axis of bending (here the mid-plane). Thus, even if the +45° and -45° layers have the same thickness, the outer +45° layers contribute more to the bending stiffness of the [45,-45,-45,45] laminate than the -45° layers do. Therefore, stretching-shearing coupling is eliminated in a [45,-45,-45,45] laminate because the number of +45° and -45° layers is the same, but bend-twist coupling will occur because the +45° layers are further from the mid-plane than the -45° layers.
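
A compact way to see this bend-twist coupling is to compute the bending stiffness matrix D of the [45,-45,-45,45] laminate using classical lamination theory: the coupling shows up as non-zero D16 and D26 terms. The sketch below uses generic carbon/epoxy ply properties as placeholders (they are not the X-29 material data).

```python
import numpy as np

# Sketch: classical lamination theory D-matrix for a [45,-45,-45,45] laminate,
# using generic carbon/epoxy ply properties (placeholders, not X-29 data).
E1, E2, G12, nu12 = 135e9, 10e9, 5e9, 0.3      # Pa, Pa, Pa, -
t_ply = 0.125e-3                               # ply thickness, m
layup = [45, -45, -45, 45]                     # ply angles in degrees, bottom to top

# Reduced stiffnesses of a single ply in its material axes.
nu21 = nu12 * E2 / E1
den = 1 - nu12 * nu21
Q11, Q22, Q12, Q66 = E1 / den, E2 / den, nu12 * E2 / den, G12

def Qbar(theta_deg):
    """Transformed reduced stiffness matrix of a ply rotated by theta."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    Qb = np.zeros((3, 3))
    Qb[0, 0] = Q11*c**4 + 2*(Q12 + 2*Q66)*c**2*s**2 + Q22*s**4
    Qb[1, 1] = Q11*s**4 + 2*(Q12 + 2*Q66)*c**2*s**2 + Q22*c**4
    Qb[0, 1] = Qb[1, 0] = (Q11 + Q22 - 4*Q66)*c**2*s**2 + Q12*(c**4 + s**4)
    Qb[0, 2] = Qb[2, 0] = (Q11 - Q12 - 2*Q66)*c**3*s + (Q12 - Q22 + 2*Q66)*c*s**3
    Qb[1, 2] = Qb[2, 1] = (Q11 - Q12 - 2*Q66)*c*s**3 + (Q12 - Q22 + 2*Q66)*c**3*s
    Qb[2, 2] = (Q11 + Q22 - 2*Q12 - 2*Q66)*c**2*s**2 + Q66*(c**4 + s**4)
    return Qb

# Ply interface coordinates measured from the laminate mid-plane.
n = len(layup)
z = np.linspace(-n * t_ply / 2, n * t_ply / 2, n + 1)

# Bending stiffness matrix: D = (1/3) * sum over plies of Qbar_k * (z_k^3 - z_{k-1}^3)
D = sum(Qbar(theta) * (z[k + 1]**3 - z[k]**3) / 3 for k, theta in enumerate(layup))

print(f"D16 = {D[0, 2]:.3f} N*m, D26 = {D[1, 2]:.3f} N*m")
print("non-zero D16 and D26 mean the laminate twists when it bends")
```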

Let’s now apply this effect at a wing level, i.e. a [+\theta,-\theta] layup is used for the top wing surface and a [-\theta,+\theta] layup for the bottom wing surface. At the global wing level, the layup  is balanced because we have an equal number of +\theta and -\theta layers, but the +\theta layers are further away from the wing mid-plane than the -\theta layers. This means that the bending stiffness is dominated by the +\theta layers, and the wing will twist when it bends.

In the Grumman X-29, this bend-twist coupling was successfully exploited to prevent divergence in the forward-swept wings. As aerodynamic lift forces the wing tips to bend upward, the forward-swept wing wants to twist to higher angles of attack, but the inherent bend-twist coupling of the composite laminates forces the wing to twist in the opposite direction and thereby counters an increase in the angle of attack – divergence is avoided!

Bend-twist coupling in Grumman X-29 wings. Both top and bottom wing skin may have the same number of +theta and -theta fibre angles, but if the +theta angles are further from the wing mid-plane then they will dominate the bending behaviour and cause the leading edge to twist down as the wing bends up.

The Grumman X-29 is an excellent example of an efficient, autonomous and passively activated control system. Rather than adding more material to the wing to make it stiffer (but also heavier) an alternative solution is to use the bend-twist coupling capability of composite laminates. This capability is an example of elastic tailoring, and remains one of the most under-exploited advantages of composite materials. As the big aircraft manufacturers overcome the initial hurdles of using composites on a large scale with the 787 Dreamliner and A350-XWB, expect more and more of these multi-functional capabilities of composites to find their way onto aircraft components.

J.E. Gordon, a leading engineer at the Royal Aircraft Establishment at Farnborough and holder of the British Silver Medal of the Royal Aeronautical Society, wrote two brilliant books on engineering: “The New Science of Strong Materials” and “Structures – Or Why Things Don’t Fall Down”. Elon Musk has recommended the latter of the two books, and I can only encourage you to read both. In my eyes, the role of a good non-fiction writer is to explain the intricacies of a non-trivial topic that we can see all around us but nevertheless rarely fully appreciate. Something interesting hidden in plain sight, if you will.

With this in mind, let’s discuss an underappreciated topic from the world of materials science.

First of all, what do we mean by a material’s stiffness and strength?

To be able to compare the load and deformation acting on components of different sizes, engineers prefer to use the quantities of stress and strain over load and deformation. Imagine a solid rod of a certain diameter and length which is being pulled apart in tension. Naturally, two rods of the same material but of different diameters and lengths will deform by different amounts. However, if both rods are stressed by the same amount, then they will experience the same amount of strain. In our simple one-dimensional rod example, the stress  \sigma is given by

 \sigma = \frac{P}{A}

where P is the tensile force and A = \pi d^2 / 4 is the cross-sectional area for a diameter  d , i.e. force normalised by cross-sectional area.

The engineering strain  \epsilon is given by

 \epsilon = \frac{\Delta L}{L}

where \Delta L is the change in length (deformation) of the rod and L is its original length, i.e. the deformation normalised by original length.

For an elastic material deforming linearly (i.e. no plastic deformation), the ratio of stress to strain is constant, and for our simple one-dimensional example the constant of proportionality is equal to the stiffness of the material.

 E = \frac{\sigma}{\epsilon}  (Hooke’s Law).

This stiffness  E  is known as the Young’s modulus of the material.

These two definitions of stress and strain illustrate a simple point. By dividing force by cross-sectional area and change in length (deformation) by original length, the role of geometry is eliminated entirely. This means we can deal purely in terms of material properties, i.e.  Young’s modulus (stiffness), stress to failure (strength), etc., and can therefore compare the degree of loading (stress) and deformation (strain) in components of different sizes, shapes, dimensions, etc.
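
As a quick worked example, the short script below applies these definitions to a hypothetical rod in tension (all input values are assumed for illustration; the extension is chosen to give a steel-like modulus).

```python
import math

# Sketch: stress, strain and Young's modulus for a hypothetical rod in tension.
P = 10_000.0          # tensile force, N (assumed)
d = 0.01              # rod diameter, m (assumed)
L = 0.5               # original length, m (assumed)
delta_L = 0.000318    # measured extension, m (assumed)

A = math.pi * d**2 / 4       # cross-sectional area
sigma = P / A                # stress = force / area
epsilon = delta_L / L        # strain = extension / original length
E = sigma / epsilon          # Young's modulus (Hooke's law, linear-elastic range)

print(f"stress  = {sigma / 1e6:.1f} MPa")
print(f"strain  = {epsilon * 100:.3f} %")
print(f"E       = {E / 1e9:.0f} GPa")
```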


We can all appreciate that metals are incredibly strong and stiff. But why are some materials stronger and stiffer than others? Why don’t all materials have the same strength and stiffness? Aren’t all materials just an assemblage of molecules and atoms whose molecular bonds stretch and eventually separate upon fracture? If this is so, why don’t all materials break at the same value of stress and strain?

The stiffness and strength of a material does indeed depend on the relative stiffness and strength of the underlying chemical bonds, and these do vary from material to material. But this difference is not sufficient to explain the large variations in strength that we observe for materials such as steel and glass – that is, why does glass break so easily and steel does not?

In the 1920s, a British engineer called A.A. Griffith explained for the first time why different materials have such vastly different strengths. To calculate the theoretical maximum strength of a material, we need to use the concept of strain energy. When we stretch a rod by 1 mm with a force that ramps up to 1,000 N, the 0.5 J of work we exert (half of 0.001 m times 1,000 N, because the force builds up gradually from zero) is stored within the material as strain energy, because individual atomic bonds are essentially stretched like mechanical springs. Written in terms of stresses and strains, the strain energy stored within a unit volume of material is simply half the product of stress and strain:

 \text{Strain Energy per unit volume} = \frac{1}{2} \sigma \times \epsilon

Griffith’s brilliant insight was to equate the strain energy stored in the material just before fracture to the surface energy of the two new surfaces created upon fracture.

Surface energy??

It is probably not immediately obvious why a surface would possess energy. But from watching insects walk over water we can observe that liquids must possess some form of surface tension that stops the insect from breaking through the surface. When the surface of a liquid is extended, say by inflating a soap bubble, work is done against this surface tension and energy is stored within the surface. Similarly, when an insect is perched on the surface of a pond, its legs form small dimples on the surface of the water and this deformation causes an increase in the surface energy. In fact, we can calculate how far the insect sinks into the surface by equating the increase in surface energy to the decrease in gravitational potential energy as the insect sinks. Furthermore, liquids tend to minimise their surface energy under the geometrical and thermodynamic constraints placed upon them, and this is precisely why raindrops are spherical and not cubic.

When a liquid freezes into a solid, the underlying molecular structure changes, but the overall surface energy remains largely the same. Because the molecular bonds in solids are so much stronger than those in liquids, we can’t actually see the effect of surface tension in solids (an insect landing on a block of ice will not visibly dimple the external surface). Nevertheless, the physical concept of surface energy is still valid for solids.

So, back to our fracture problem. What we want to calculate is the stress which will separate two adjacent rows of molecules within a material. If the rows of molecules are initially  d metres apart then a stress  \sigma causing a strain  \epsilon will lead to the following strain energy per square metre

 \text{Strain Energy per unit area} = \frac{1}{2} \sigma \times \epsilon \times d

From Hooke’s law we know that

 \epsilon = \frac{\sigma}{E}

and therefore replacing \epsilon in the first equation we have

 \text{Strain Energy per unit area} = \frac{d\sigma^2}{2E}

Now, if the surface energy per square metre of the solid is equal to G, then the separation of the two rows of molecules will lead to an increase in surface energy of 2G (two new surfaces are created). By assuming that all of the strain energy is converted to surface energy:

 \frac{d\sigma^2}{2E} = 2G \Rightarrow \sigma = 2 \sqrt{\frac{G E}{d}}

There is typically a considerable amount of plastic deformation in the material before the atomic bonds rupture. This means that the Young’s modulus decreases once the plastic regime is reached and the strain energy is roughly half of the ideal elastic case. Hence, we can simply drop the 2 in front of the square root above to get a simple, yet approximate, expression for the strength of a material

 \sigma = \sqrt{\frac{G E}{d}}

As the values of E and G vary from material to material, the theoretical strengths will be different as well. The surface energy of a material is roughly proportional to its Young’s modulus, because the same chemical bonds give rise to both properties. In fact, the relationship between surface energy and Young’s modulus can be approximated as

 G \approx \frac{Ed}{20}

such that the strength of a material is approximately proportional to the Young’s modulus by the following relation

 \sigma \approx \sqrt{\frac{E^2}{20}} \approx \frac{E}{5}

Given the relationship between stress and strain, we can conclude that the theoretical failure strain of most materials ought to be approximately

 \epsilon = \frac{\sigma}{E} \approx \frac{1}{5}

or 20% for basically all materials.


In everyday practice, most materials have failure strengths far below this theoretical maximum and also vary widely in their failure strains. To explain why, Griffith conducted some simple experiments on glass. After calculating the Young’s modulus E from a simple tensile test and assuming a molecular spacing of d = 3 Angstroms, Griffith arrived at a theoretical strength for glass of 14,000 MPa. He then tested a number of 1 mm diameter glass rods in tension and found their strength to be, on average, around 170 MPa, i.e. roughly 1/100th of the theoretical value.
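
To put numbers on the derivation above, here is a minimal Python sketch of the calculation. The molecular spacing d = 3 Angstroms is the value Griffith used; the Young’s modulus of E = 70 GPa is an assumed, typical value for glass that is not quoted in the text, chosen because it makes E/5 come out at roughly the 14,000 MPa figure.

```python
# Rough estimate of the theoretical strength of glass, following Griffith's argument.
# Assumption: E = 70 GPa is a typical Young's modulus for glass (not quoted in the text);
# d = 3 Angstroms is the molecular spacing used by Griffith.

E = 70e9    # Young's modulus [Pa] (assumed typical value for glass)
d = 3e-10   # molecular spacing [m]

G = E * d / 20                     # surface energy per unit area [J/m^2], using G ~ Ed/20
sigma_theory = (G * E / d) ** 0.5  # theoretical strength [Pa], sigma = sqrt(G*E/d)
strain_theory = sigma_theory / E   # theoretical failure strain, sigma / E

print(f"surface energy G     ~ {G:.2f} J/m^2")              # ~ 1.05 J/m^2
print(f"theoretical strength ~ {sigma_theory/1e6:.0f} MPa") # ~ 15,700 MPa, i.e. E/sqrt(20)
print(f"theoretical strain   ~ {strain_theory:.2f}")        # ~ 0.22, i.e. roughly 20%
```

Rounding the factor of √20 ≈ 4.5 up to 5 gives the E/5 approximation, and hence the 14,000 MPa figure quoted above.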

The drawing process used to create the glass rods allowed Griffith to pull thinner and thinner fibres, and as the diameter decreased, the failure stress of the fibres started to increase – slowly at first, but then very rapidly. Glass fibres of 2.5 μm in diameter showed strengths of 6,000 MPa when newly drawn, but dropped to about half that after a few hours. Griffith was not able to manufacture thinner fibres, so he fitted a curve to his experimental data and extrapolated to much smaller diameters. And lo and behold, the fitted curve converged to a failure strength of 11,000 MPa – much closer to the 14,000 MPa predicted by his theory.

Variation of tensile strength with fibre diameter. From W.H. Otto (1955). Relationship of Tensile Strength of Glass Fibers to Diameter. Journal of the American Ceramic Society 38(3): 122-124. DOI: 10.1111/j.1151-2916.1955.tb14588.x.



Griffith’s next goal was to explain why the strength of thicker glass rods fell so far below the theoretical value. Griffith surmised that, as the volume of a specimen increases, some form of weakening mechanism must become active, because the underlying chemical structure of the material remains the same. This weakening mechanism must somehow increase the actual stress around a future failure site, i.e. act as a stress concentration. Luckily, the idea of stress concentrations had previously been introduced in the naval industry, where the weakening effects of hatchways and other openings in the hull had to be accounted for. Griffith decided to apply the same concept at a much smaller scale and consider the effects of molecular “openings” in a series of chemical bonds.

The idea of a stress concentration is quite simple. Any hole or sharp notch in a material causes an increase in the local stress around the feature. Rather counter-intuitively, the increase in local stress is solely a function of the shape of the notch and not of its size. A tiny hole will weaken the material just as much as a large one will. This means a shallow cut in a branch will lower the load-carrying capacity just as much as a deep one – it is the sharpness of the cut that increases the stress.

We can visualise quite easily what must happen at a molecular scale when we introduce a notch in a series of molecules. A single strand of molecules must reach the maximum theoretical strength. Similarly, placing a number of such strands side by side should not affect the strength. However, if we cut a number of adjacent strands at a specific location perpendicular to the loading direction, then the flow of stress from molecule to molecule is interrupted and the load has to be redistributed elsewhere. Naturally, the extra load simply goes around the notch and therefore has to pass through the first intact bond. As a result, this bond will fail much earlier than any of the other bonds because the stress is concentrated in this single bond. As this overloaded bond breaks, the situation becomes slightly worse, because the next bond down the line has to carry the extra load of all the broken bonds.

Stress concentration at a notch


The stress concentration factor of a notch of half-length  a  and radius of curvature at the crack tip  R is given by

 1 + 2 \sqrt{\frac{a}{R}}

If we now consider a crack about 2 μm long with a tip radius of 1 Angstrom, this produces a stress concentration factor of

 1 + 2 \sqrt{\frac{1 \times 10^{-6}}{1 \times 10^{-10}}} = 201

and therefore this would lower the theoretical strength of glass from 14,000 MPa to around 70 MPa, which is very close to the average strength of typical domestic glass.
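
The arithmetic is easy to verify in a couple of lines of Python, using the crack half-length, tip radius and theoretical strength quoted above:

```python
# Stress concentration factor K = 1 + 2*sqrt(a/R) for a notch of half-length a and
# tip radius R, applied to the 14,000 MPa theoretical strength of glass quoted above.

a = 1e-6   # crack half-length [m] (i.e. a 2 micrometre crack)
R = 1e-10  # crack tip radius [m] (1 Angstrom)

K = 1 + 2 * (a / R) ** 0.5
sigma_theoretical = 14_000             # theoretical strength [MPa]
sigma_effective = sigma_theoretical / K

print(f"stress concentration factor K ~ {K:.0f}")                    # ~ 201
print(f"effective strength            ~ {sigma_effective:.0f} MPa")  # ~ 70 MPa
```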

As a result, Griffith conjectured that glass and all other materials are full of tiny cracks that are too small to be seen but nevertheless significantly reduce the theoretical maximum strength. Griffith did not give an explanation for why these cracks appeared in the first place or why they were rarer in thinner glass rods. As it turns out, Griffith was correct about the mechanism of stress concentrations, but wrong about their origins.

It took quite some time until a more satisfactory explanation was provided, dispelling the notion that the reduction in strength could be attributed to inherent defects within the material. After WWII, experiments showed that even thick glass rods could approach the theoretical upper limit of strength when carefully manufactured. It was also noticed that stronger fibres would weaken over time, probably as a result of handling, and that weakened fibres could consequently be strengthened again by chemically removing the top surface. By depositing sodium vapour on the external surface of glass, the density of cracks could be visualised and was found to be inversely proportional to the strength of the glass – the more cracks, the lower the strength, and vice versa.

These cracks are a simple result of scratching when the exterior surface comes in contact with other objects. Larger pieces of glass are more likely to develop surface cracks due to the larger surface area. Furthermore, thin glass fibres are much more likely to bend when in contact with other objects, and are therefore less likely to scratch. This means that there is nothing special about thin fibres of glass – if the surface of a thick fibre can be kept just as smooth as that of a thin fibre then it will be just as strong.

This means that an airplane cast from a single piece of pristine glass could theoretically sustain all flight loads. In reality, of course, the idea is ludicrous, because the likelihood of inducing surface cracks during service is essentially 100%.


At this point you might be asking, what is different about metals – why are they used on aircraft instead?

The difference boils down to the atomic structure of glasses and metals. When liquids freeze they typically crystallise into a densely packed array and form a solid that is denser than the liquid. Glasses, on the other hand, do not arrange themselves into a nicely packed crystalline structure but rather cool into a solidified liquid. Glasses can crystallise under some circumstances via a process known as devitrification, but the glass is often weakened as a result. When a solid crystallises, it gains access to a new mode of deformation: it can flow in shear, just like Plasticine or moulding clay does when it is formed.

There is no clear demarcation line between a brittle (think glass) and ductile (think metal) material. The general rule of thumb is that a brittle material does not visibly deform before failure and failure is caused by a single crack that runs smoothly through the entire material. This is why it’s often possible to glue a broken vase back together.

In ductile materials, there is permanent plastic deformation before ultimate failure and so these materials behave more like moulding clay. Before a ductile material, like mild steel, finally snaps in two, there is considerable plastic deformation which can be imagined along the lines of flowing honey or treacle. This plastic flowing is caused by individual layers of atoms sliding over each other, rather than coming apart directly. As this shearing of atomic bonds takes place, the material is not significantly weakened because the atomic bonds have the ability to re-order, and the material may even be strengthened by a process known as cold working (atomic bonds align with the direction of the applied load). The amount of shearing before final failure depends largely on the type of metal alloy and always increases as a metal is heated; hence a blacksmith heats metal before shaping it.

Generally, these two fracture mechanisms, brittle cracking and plastic flowing, are always competing in a solid. The material fails by whichever mechanism is weaker: it yields before cracking if it is ductile, or cracks directly if it is brittle.

On December 17, 1903, the bicycle mechanic Orville Wright completed the first successful flight in a heavier-than-air machine – a flight that lasted a mere 12 seconds, reached an altitude of 10 feet and ended 120 feet from the starting point. The Wright Flyer was made of wood and canvas, powered by a 12 horsepower internal combustion engine and endowed with the first, albeit basic, mechanisms for controlling pitch, yaw and roll. Only 66 years later, Neil Armstrong walked on the moon, and another 12 years after that the first reusable space transportation system, the Space Shuttle, made its way into orbit.

Even though the means of providing lift and attitude control in the Wright Flyer and the Space Shuttle were nearly identical, the operational conditions could not be more different. The Space Shuttle re-entered the atmosphere at an orbital velocity of 8 km/s (around 28 times the speed of sound), which meant that the Shuttle literally collided with the atmosphere, creating a hypersonic shock wave with gas temperatures close to 12,000°C – hotter than the surface of the sun. How was such unprecedented progress – from Wright Flyer to Space Shuttle – possible in a mere 78 years? This blog post chronicles this technological evolution by telling the story of five iconic aircraft.

Orville Wright and the Wright Flyer, 1909

The Wright brothers were the first to successfully fly what we now consider a modern airplane but, as the brothers would adamantly confirm, they did not invent the airplane. Rather, they stood on the shoulders of a century of keen interest in aeronautical research. The story of the modern airplane goes back to about 100 years before the Wright brothers, to a relatively unknown British scientist, philosopher, engineer and member of parliament, Sir George Cayley. Although Leonardo da Vinci had dreamt up flying machines 300 years earlier, his inventions have relatively little in common with modern designs. In 1799 Cayley proposed the first three-part concept that, to this day, represents the fundamental operating principles of flying:

  • A fixed wing for creating lift.
  • A separate mechanism using paddles to provide propulsion.
  • And a cruciform tail for horizontal and vertical stability.

Many of the flying enthusiasts of the 18th century based their designs on the biomimicry of birds, combining lift, propulsion and control functions in a single oversized wing contraption that could not provide sufficient lift or forward propulsion, let alone a means of control. During a decade of intensive study of the aerodynamics of birds and fish from 1799-1810, Cayley constructed a series of rotating-arm apparatuses to test the lift and drag of different airfoil shapes. In 1852, Cayley published his most famous work, “Sir George Cayley’s Governable Parachutes”, which detailed the blueprint of a large glider with almost all of the features we take for granted on a modern aircraft. A prototype of this glider was built in 1853 and flown by Cayley’s coachman, who was accelerated off the rooftop of Cayley’s house in Yorkshire.

The distinctive characteristic of the Wright brothers was their incessant persistence and never-ending scepticism of the research conducted by scientists of authority. By single-handedly revising the historic textbook data on airfoils and building all of their inventions themselves, they developed into the most experienced aeronautical engineers of their day. Engineering often requires a certain intuitive knowledge of what works and what doesn’t, typically acquired through first-hand experience, and the Wright brothers had developed this knack in abundance. In this sense, they were best-equipped to refine the concepts of their peers and develop them into something that superseded everything that came before.


One of the most potent symbols of British defiance in WWII is the Supermarine Spitfire. In the summer of 1940, during the Battle of Britain, the Spitfire presented the last bulwark between tyranny and democracy. Between July and October 1940, 747 Spitfires were built, of which 361 were destroyed and 352 were damaged; just 34 Spitfires built during the summer of 1940 made it through the war unscathed. Unsurprisingly, the Spitfire is one of the most famous airplanes of all time, and the aerodynamic beauty of its elliptical wings and narrow body makes it one of the most iconic aircraft ever built.

A Supermarine Spitfire at the Flying Legends airshow, 2005

The Spitfire was designed by the chief engineer of Supermarine, R.J. Mitchell. Before WWII, Mitchell led the construction of a series of racing seaplanes that won the Schneider Trophy three times in a row in 1927, 1929 and 1931. The Schneider Trophy was the most important aviation competition between WWI and WWII – initially intended to promote technical advances in civil aviation, it quickly morphed into a pure speed contest over a triangular course of around 300 km. As competitions so often do, the Schneider Trophy became an impetus for advancing aeroplane technology, particularly in aerodynamics and engine design, and in this regard it had a direct impact on many of the best fighters of WWII. The low-drag profile and liquid-cooled engine pioneered during the Schneider Trophy races were features of both the Supermarine Spitfire and the P-51 Mustang. The winning airplane in 1931 was the Supermarine S6.B, which set a new airspeed record of 655.8 km/h (407.4 mph). The S6.B was powered by the supercharged Rolls-Royce R engine with 1,900 bhp, which presented such insurmountable cooling problems that surface radiators had to be fitted to the buoyancy floats used to land on water.

In March 1936, Mitchell evolved the S6.B into the Spitfire with a new Rolls-Royce Merlin engine. The Spitfire also featured a radical elliptical wing design that promised to minimise lift-induced drag. Theoretically, an infinitely long wing of constant chord and airfoil section produces no induced drag. A rectangular wing of finite length, however, produces very strong wingtip vortices, and as a result almost all modern wings are tapered towards the tips or fitted with wingtip devices. The advantage of an elliptical planform (tapered but with curved leading and trailing edges) over a tapered trapezoidal planform is that the effective angle of attack can be kept constant along the entire wingspan. Elliptical wings are probably a remnant of the past, as they are much more difficult to manufacture and the benefit over a trapezoidal wing is negligible for the long wing spans of commercial jumbo jets. However, the design will forever live on in one of the most iconic fighters of all time, the Supermarine Spitfire.


Captain Chuck Yeager, an American WWII fighter ace, became the first supersonic pilot in 1947 after the chief test pilot for the Bell Corporation refused to fly the rocket-powered Bell X-1 experimental aircraft without additional danger pay. The X-1 closely resembled a large bullet with short stubby wings for higher structural efficiency and less drag at high speeds. The X-1 was strapped to the belly of a B-29 bomber and then dropped at 20,000 feet, at which point Yeager fired his rocket motors, propelling the aircraft to Mach 0.85 as it climbed to 40,000 feet. Here Yeager fully opened the throttle, pushing the aircraft into a flow regime for which there was no available wind tunnel data, and ultimately reaching a new airspeed record of Mach 1.06. Yeager had just achieved something that had eluded Europe’s aircraft engineers throughout all of WWII.

The Bell X-1 (46-062) in flight

The limit that European aircraft designers ran into during the air speed competitions prior to WWII was the sound barrier. The problem with flying faster, or in fact merely approaching the speed of sound, is that shock waves start to form at certain locations over the aircraft. A shock wave is a very thin front (on the order of micrometres) in which molecules are squashed together to such a degree that it is energetically favourable for the fluid to undergo a sudden increase in density, temperature and pressure. As an aircraft approaches the speed of sound, small pockets of sonic or supersonic flow develop on the top surface of the wing due to the acceleration of the airflow over the curved upper skin. These supersonic pockets terminate in a shock wave, drastically slowing the airflow and increasing the fluid pressure. Even in the absence of shock waves, the airflow runs into an adverse pressure gradient towards the trailing edge of the wing, slowing the airflow and threatening to separate the boundary layer from the wing. This condition drastically increases the drag and reduces lift, and in the worst case can lead to aerodynamic stall. In the presence of a shock wave, this scenario is exacerbated by the sudden increase in pressure and drop in airflow velocity across the shock. For this precise reason, commercial aircraft are limited to speeds of around Mach 0.87-0.88, as any further increase in speed would induce strong shock waves over the wings, increasing drag and requiring a disproportionate amount of additional engine power.

It was precisely this problem that aircraft designers ran into in the 1930’s and 1940’s. To make their airplanes approach the speed of sound they needed incredible amounts of extra power, which the internal combustion engines of the time could not provide. Quite fittingly, this seemingly insurmountable speed limit was dubbed the sound barrier, and it was not until the advent of refined jet engines after WWII that it was broken. However, exceeding the sound barrier does not mean things get any easier. The ratios of upstream to downstream airflow speed and pressure across a shock wave are simple functions of the upstream Mach number (airspeed divided by the local speed of sound). Unfortunately for aircraft designers, these ratios grow with the square of the upstream Mach number, which means that the drag becomes worse and worse the further the speed of sound is exceeded. This is why the Concorde needed such powerful engines and why its fuel costs were so exorbitant.


The North American X-15 rocket plane was one of NASA’s most daring experimental aircraft intended to test flight conditions at hypersonic speeds (Mach 5+) at the edge of space. Three X-15s made 199 flights from 1960-1968 and the data collected and knowledge gained directly impacted the design of the Space Shuttle. Initially designed for speeds up to Mach 6 and altitudes up to 250,000 feet, the X-15 ultimately reached a top speed of Mach 6.72 (more than one mile a second) and a maximum altitude of 354,200 feet (beyond the official demarcation line of space). As of this writing, the X-15 still holds the world record for the highest speed recorded by a manned aircraft. Given the awesome power required to overcome the induced drag of flying at these velocities, it is no surprise that the X-15 was not powered by a traditional turbojet engine but rather a full-fledged liquid-propellant rocket engine, gulping down 2,000 pounds of ethyl alcohol and liquid oxygen every 10 seconds.

North American X-15

The X-15 was dropped from a converted B-52 bomber and then flew one of two experimental flight profiles. High-speed flights were conducted at altitudes below 100,000 feet using conventional aerodynamic control surfaces. For high-altitude flights, the X-15 initiated a steep climb at full throttle, followed by engine shut-down once the aircraft left Earth’s atmosphere. What followed was a ballistic coast that carried the aircraft to the peak of an arc before it plummeted back to Earth. Beyond Earth’s atmosphere the aerodynamic control surfaces of the X-15 were obviously useless, and so the aircraft relied on small rocket thrusters for attitude control.

The hypersonic speeds far beyond the conventional sound barrier discussed previously created a new problem for the X-15. In any medium, sound is transmitted by vibrations of the medium’s molecules. As an aircraft slices through the air, it disturbs the molecules around it, and this results in a pressure wave as molecules bump into adjacent molecules, sequentially passing on the disturbance. Flying faster than the speed of sound means that the aircraft is moving faster than this pressure wave can propagate: the aircraft keeps creating new disturbances further upstream, and Nature simply cannot keep up. At hypersonic speeds the aircraft is literally smashing into the surrounding stationary air molecules, and the ensuing compression of the air around the aircraft skin leads to fluid temperatures above the melting point of steel. Hence, one of the major challenges of the X-15 was guaranteeing structural integrity at these incredibly high temperatures. As a result, the X-15 was constructed from Inconel X, a high-temperature nickel alloy that is also used in the very hot turbine stages of a jet engine.

The wedge tail visible at the back of the aircraft was specifically required to guarantee attitude stability at hypersonic speeds. At lower speeds this thick wedge created considerable amounts of drag – in fact, as much as some entire fighter aircraft of the day. The area of the tail wedge was around 60% of the entire wing area, and additional side panels could be extended to further increase the overall surface area.


12 April 1981 marked a new era in manned spaceflight: Space Shuttle Columbia lifted off for the first time from Cape Canaveral. The Shuttle capped an incredibly fruitful period in aerospace engineering. The groundwork laid by the original Wright Flyer, the Spitfire, the X-1 and the X-15 is all part of the technological arc that led to the Shuttle. The fundamentals didn’t change, but their orders of magnitude did.

“Like bolting a butterfly onto a bullet” — Story Musgrave, Columbia astronaut, 1996

Story Musgrave’s description of the Space Shuttle is not far off the mark. On the launch pad the Shuttle sat on two solid rocket boosters and three main engines that together produced some 37 million horsepower, accelerating the Shuttle beyond the speed of sound in about 30 seconds. Eight minutes and 500,000 gallons of fuel later, the Shuttle was travelling at 17,500 mph at the edge of space. The Space Shuttle was not only powerful but possessed a grace that the Wright brothers would have appreciated. After smashing into the atmosphere upon re-entry at Mach 28 (8 km/s), the piloting astronaut had to slow the Shuttle down to around 200 mph via a series of gliding twists and turns, using the surrounding air as an aerodynamic brake.

Shuttle profiles

The ultimate mission of the Shuttle was to serve as a cost-effective means of travelling to space for professional astronauts and civilians alike. That vision never came to fruition, partly due to the high maintenance costs between flights and partly due to the Challenger and Columbia disasters, which shattered all hopes that space travel would become routine.

Perhaps the Space Shuttle is one of humanity’s greatest inventions because it reminds us that, for all its power, grace and genius, it is still the brainchild of fallible men.


After Germany and its allies lost WWI, motor flying became strictly prohibited under the Treaty of Versailles. Creativity often springs from constraints, and so, paradoxically, the ban imposed by the Allies encouraged precisely what they had actually wanted to thwart: the growth of the German aviation industry. As all military flying was prohibited under the Treaty, the innovation in German aviation throughout the 1920’s took an unlikely path via unmotorised gliders built by student associations at universities.

Before and during WWI, Germany had been one of the leading countries in terms of the theoretical development of aviation and the actual construction of novel aircraft. The famous aerodynamicist Ludwig Prandtl and his colleagues developed the theory of the boundary layer which later led to wing theory. The close relationship of research laboratories and industrial magnates, like Fokker and Junkers, meant that many of the novel ideas of the day were tested on actual aircraft during WWI. Part of the reason why Baron von Richthofen, the Red Baron, became the most decorated fighter pilot of his day, was because his equipment was more technologically advanced than that of his opponents; a direct result of a thicker cambered wing that Prandtl had tested in his wind tunnels.

Given this heritage, it comes as no surprise that German students and professors soon found a way around the ban imposed by the Treaty of Versailles. For example, a number of enthusiastic students from the University of Aachen formed the Flugwissenschaftliche Vereinigung Aachen (FVA, Aachen Association for Aeronautical Sciences). These students loved the art and science of flying and intended to continue their passion despite the ban. Theodore von Kármán, of vortex street and Tacoma Narrows bridge fame, was a professor at the Technical University of Aachen at the time and remembers the episode as follows:

One day an FVA member approached me with a bright idea.
“Herr Professor,” he said. “We would like your help. We wish to build a glider.”
“A glider? Why do you wish to build a glider?”
“For sport,” the student said.
I thought it over. Constructing a glider would be more than sport. It would be an interesting and useful aerodynamic project, quite in keeping with German traditions, but in view of postwar turmoil it could be politically quite risky … On the other hand, motorised flight was specifically outlawed in the Treaty of Versailles, and sport flying was not military flying. So rationalizing in this way, I told the boys to go ahead.

What von Kármán was not aware of at the time was that he was helping to lay the foundation for an important part of the German air force during WWII. The lessons learned in improving glider design would be directly applicable to military aeronautics later on.

Glider development is in itself a topic worth studying. The French sailor Le Bris constructed a functional glider in 1870, but the most famous gliders of the 19th century were all built by Otto Lilienthal. Lilienthal was the first aviator to realise the superiority of curved wings over flat surfaces for providing lift. He conducted some rudimentary wing testing to tabulate the air pressure and lift for different wing sections; data which inspired, but was later superseded by, the Wright brothers’ experiments with their own wind tunnel. In the USA, Octave Chanute is famous for his work on gliders, and for many years he served as a direct mentor to the Wright brothers, who themselves built a number of successful gliders to optimise wing shapes and control mechanisms.

After the first successful motor-powered flight in 1903, interest in gliders largely subsided, but it was then revived by collegiate sporting competitions organised by German universities. Oskar Ursinus, the editor of the aeronautics journal Flugsport (Sport Flying), organised an intercollegiate gliding competition in the Rhön mountains, a spot renowned for its strong updrafts. So work began behind closed doors in many university labs and sheds. Von Kármán’s school, the University of Aachen, built a 6 m (20 foot) wing-span glider called the Black Devil, the first cantilever monoplane glider built at the time. Thanks to the cantilever wing construction, the design abandoned any form of external wire bracing to stabilise the wing and relied purely on internal wing bracing, as had been pioneered by Junkers in 1915. In this regard, the glider was already more advanced than most of the fighters of WWI, which were based on the classical biplane or even triplane design held together by wires and struts.

The Black Devil sailplane, designed by Wolfgang Klemperer

By early 1920 the Black Devil was ready to compete. At this point the students faced a new logistical challenge — how were they going to transport the glider 150 miles south through three military zones (British, French and American), when shipping aircraft components was strictly forbidden?

As reckless students they of course operated in secret. The Black Devil was dismantled into its components, packed into a tarpaulin freight car and then driven through the night. Of this episode von Kármán recounts that,

On one occasion during the journey we almost lost the Black Devil to a contingent of Allied troops. Fortunately the engineer and student guard received advance notice of the movement, disengaged the car holding the glider, and silently transferred it to a dark siding until the troops rode past.

Overall, the trip took six hours, and by the time the Aachen group arrived, the teams from Stuttgart, Göttingen and Berlin were already in attendance.

As there were no motorised aircraft to launch the gliders, two rubber ropes were attached to the nose of each glider and used as a catapult to launch it off the edge of a hill. Once in the air, it was the role of the pilot to manoeuvre the plane purely by shifting his or her body weight to balance the glider in the wind. In 1920, Aachen managed to win the competition with a flight time of 2 minutes and 20 seconds – not a revolution in glider design, but proof of the aerodynamics of their concept nevertheless. A year later, an improved version of the Black Devil, the Blue Mouse, flew for 13 minutes, breaking the long-held record of 9 minutes and 45 seconds set by Orville Wright. Some great videos of the early flights at the Wasserkuppe in the Rhön mountains exist to this day.

The Blue Mouse glider flying at the Wasserkuppe in the Rhön mountains.

In the following years, von Kármán and his scientific mentor, the aerodynamics pioneer Ludwig Prandtl, gave a series of seminars on the aerodynamics of gliding, which were attended by students and flying enthusiasts from all over the country. Among the attendees was Willy Messerschmitt, an engineering student at the time, whose fighters and bombers later formed the core of the Nazi air force during WWII. Even established industrialists, German royalty and other university professors attended the talks. As a result of this broad and democratic dissemination of knowledge and the collaborative spirit of the time, innovations began to sprout quickly. One of the main innovations was the efficient use of thermal updrafts in combination with topographic (slope) updrafts to extend the flying time. In 1922, a collaborative design team from the University of Hannover built the Hannover H 1 Vampyr glider, which first extended the gliding record to 3 hours and then to 6 hours in 1923. The Vampyr was one of the first heavier-than-air aircraft to use the stressed-skin “monocoque” design philosophy and is the forerunner of all modern gliders.


The Vampyr glider, one of the first aircraft ever to use the stressed-skin “monocoque” concept.

The collegiate sporting competitions continued until the early 1930’s. The Nazis soon realised that the technical knowledge gained in these sporting competitions could be used in rebuilding the German air force. Due to the lack of a power unit and limited control surfaces, the student engineers and industrialists had been forced to design efficient lightweight structures and wings that provided the best compromise between lift, drag and attitude control. Most importantly, the gliders proved the superiority of single long cantilevered wings over the double- and triple-wing configurations used during WWI. The internal structure of the wing allowed for much lighter construction as the size of the aircraft grew, the parasitic drag induced by wires and struts was eliminated, and the lift-to-drag ratio was dramatically improved by the long glider wings. Tragically, some pioneers took these concepts too far and lost their lives as a result. While the lift efficiency of a wing increases with aspect ratio (the ratio of span to chord), so do the bending stresses at the root of the wing due to lift. As a result, there were a number of incidents in which insufficiently stiffened wings literally twisted off the fuselage.

On the importance of glider developments von Kármán reflects that,

I have always thought that the Allies were shortsighted when they banned motor flying in Germany … Experiments with gliders in sport sharpened German thinking in aerodynamics, structural design, and meteorology … In structural design gliders showed how best to distribute weight in a light structure and revealed new facts about vibration. In meteorology we learned from gliders how planes could use the jet stream to increase speed; we uncovered the dangers of hidden turbulence in the air, and in general opened up the study of meteorological influences on aviation. It is interesting to note that glider flying did more to advance the science of aviation than most of the motorised flying in World War I.

We can only speculate how von Kármán must have felt after leaving Germany in the 1930’s, partly due to his Jewish heritage, and then watching from afar as the machines he had helped to develop wreaked havoc in Europe during WWII.

References

The quotes in this post are taken from von Kármán’s excellent biography The Wind and Beyond: Theodore von Karman, Pioneer in Aviation and Pathfinder in Space by Theodore von Kármán and Lee Edson.


On November 8, 1940 newspapers across America opened with the headline “TACOMA NARROWS BRIDGE COLLAPSES”. The headline caught the eye of a prominent engineering professor who, from reading the news story, intuitively realised that a specific aerodynamical phenomenon must have led to the collapse. He was correct, and became publicly famous for what is now known as the von Kármán vortex street.

Theodore von Kármán was one of the most pre-eminent aeronautical engineers of the 20th century. Born and raised in Budapest, Hungary he was a member of a club of 20th century Hungarian scientists, including mathematician and computer scientist John von Neumann and nuclear physicist Edward Teller, who made groundbreaking strides in their respective fields. Von Kármán was a PhD student of Ludwig Prandtl at the University of Göttingen, the leading aerodynamics institute in the world at the time and home to many great German scientists and mathematicians.


Theodore von Kármán jotting down a plan on a wing before a rocket-powered aircraft test

Although brilliant at mathematics from an early age, von Kármán preferred to boil complex equations down to their essentials, attempting to find simple solutions that would provide the most intuitive physical insight. At the same time, he was a big proponent of using practical experiments to tease out novel phenomena that could then be explained using straightforward mathematics. During WWI he took a leave of absence from his role as professor of aeronautics at the University of Aachen to fulfil his military duties, overseeing the operations of a military research facility in Austria. In this role he developed a helicopter that was to replace hot-air balloons for battlefield surveillance. Due to his combined expertise in aerodynamics and structural design he became a consultant to the Junkers aircraft and Zeppelin airship companies, helping to design the first all-metal cantilever-wing aircraft, the Junkers J 1, and the Zeppelin Los Angeles.

Furthermore, von Kármán developed an unusual expertise in building wind tunnels — a suitable one did not exist when he first started his professorship in Aachen, and one was desperately needed for his research. As a result, he became a sought-after expert in designing and overseeing the construction of wind tunnels in the USA and Japan. Von Kármán’s broad skill set and unique combination of theoretical and experimental expertise soon placed him on the radar of the physicist Robert Millikan, who was setting up a new technical university in Pasadena, California: the California Institute of Technology. Millikan believed that the year-round temperate climate would attract the major aircraft companies of the burgeoning aerospace industry to Southern California, and he hired von Kármán to head CalTech’s aerospace institute. Millikan’s wager paid off when companies such as Northrop, Lockheed, Douglas and Consolidated Aircraft (later Convair) all settled in the greater Los Angeles area. Von Kármán thus became a consultant on iconic aircraft such as the Douglas DC-3 and the Northrop Flying Wing, and later on the rockets developed by NACA (now NASA).

Von Kármán is renowned for many concepts in structural mechanics and aerodynamics, e.g. the non-linear behaviour of cylinders in buckling and a mathematical theory describing turbulent boundary layers. His most famous piece of work, the von Kármán vortex street, tragically reached public notoriety when it explained the collapse of a suspension bridge over the Puget Sound in 1940.

The von Kármán vortex street is a direct result of boundary layer separation over bluff bodies. Immersed in fluid flow, any body of finite thickness will force the surrounding fluid to flow in curved streamlines around it. Towards the leading edge this causes the flow to speed up in order to balance the centripetal forces created by the curved streamlines. This creates a region of falling fluid pressure, also called a favourable pressure gradient. Further along the body, where the streamlines straighten out, the opposite occurs and the fluid flows into a region of rising pressure, an adverse pressure gradient. The increasing pressure gradient pushes against the flow and causes the slowest parts of the flow, those immediately adjacent to the surface, to reverse direction. At this point the boundary layer has separated from the body and the combination of flow in two directions induces a wake of turbulent vortices (see diagram below).

Boundary layer separation over cylinder


The type of flow in the wake depends on the Reynolds number of the flow impinging on the body,

 Re = \frac{\rho V d}{\mu}

where \rho is the density of the fluid, V is the impinging free stream velocity, d is a characteristic length of the body, e.g. the diameter for a sphere or cylinder, and \mu is the viscosity, or inherent stickiness, of the fluid. The Reynolds number essentially expresses the ratio of inertial to viscous forces in the flow, and captures whether the flow is laminar (layered flow with little mixing) or turbulent (flow with strong mixing via vortices).
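
To get a feel for the orders of magnitude involved, here is a minimal Python sketch that evaluates the Reynolds number for a small cylinder in water. The fluid properties are standard values for water at room temperature, but the cylinder diameter and flow speed are hypothetical values chosen purely for illustration.

```python
# Reynolds number Re = rho * V * d / mu for flow past a cylinder.
# The cylinder diameter and flow speed are hypothetical values for illustration only.

rho = 1000.0  # density of water [kg/m^3]
mu = 1.0e-3   # dynamic viscosity of water [Pa s]
V = 0.1       # free stream velocity [m/s] (assumed)
d = 0.01      # cylinder diameter [m] (assumed)

Re = rho * V * d / mu
print(f"Re ~ {Re:.0f}")  # ~ 1000, well beyond the onset of periodic vortex shedding
```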

Flow around a cylinder for different Reynolds numbers


For example, consider the flow past an infinitely long cylinder protruding out of your screen (as shown in the figure above). For very low Reynolds number flow (Re < 10) the inertial forces are negligible and the streamlines connect smoothly behind the cylinder. As the Reynolds number is increased into the range of Re = 10-40 (by, for example, increasing the free stream velocity V), the boundary layer separates symmetrically from either side of the cylinder, and two eddies form that rotate in opposite directions. These eddies remain fixed and do not “peel away” from the cylinder. Behind the vortices the flow from either side rejoins and the size of the wake is limited to a small region behind the cylinder. As the Reynolds number is further increased into the region Re > 40, the symmetric eddy formation is broken and two asymmetric vortices form. Such an instability is known as a symmetry-breaking bifurcation in stability theory and the ensuing asymmetric vortices undergo periodic oscillations by constantly interchanging their position with respect to the cylinder. At a specific critical value of Reynolds number (Re ~ 100) the eddies start to peel away, alternately from either side of the cylinder, and are then washed downstream. As visualised below, this can produce a rather pretty effect…

Animation of a von Kármán vortex street

This condition of alternately shedding vortices from the sides of the cylinder is known as the von Kármán vortex street. At a certain distance from the cylinder the behaviour obviously dissipates, but close to the cylinder the oscillatory shedding can have profound aeroelastic effects on the structure. Aeroelasticity is the study of how fluid flow and structures interact dynamically. For example, there are two very important locations on an aircraft wing:
– the centre of pressure, i.e. an idealised point of the wing where the lift can be assumed to act as a point load
– the shear centre, i.e. the point of any structural cross-section through which a point load must act to cause pure bending and no twisting

The problem is that the centre of pressure and shear centre are very rarely coincident, and so the aerodynamic lift forces will typically not only bend a wing but also cause it to twist. Twisting in a manner that forces the leading edge upwards increases the angle of attack and thereby increases the lift force. This increased lift force produces more twisting, which produces more lift, and so on. This phenomenon is known as divergence and can cause a wing to twist-off the fuselage.

A different, yet equally pernicious, aeroelastic instability can occur as a result of the von Kármán vortex street. Each time an eddy is shed from the cylinder, the symmetry of the flow pattern is broken and a difference in pressure is induced between the two sides of the cylinder. The vortex shedding therefore produces alternating sideways forces that can cause sideways oscillations. If the frequency of these oscillations is the same as the natural frequency of the cylinder, then the cylinder will undergo resonant behaviour and start vibrating uncontrollably.

So, how does this relate to the fated Tacoma Narrows bridge?

Upon completion, the first Tacoma Narrows suspension bridge, costing $6.4 million and the third-longest bridge of its kind, was described as the fanciest single-span bridge in the world. With its narrow towers and thin stiffening trusses the bridge was valued for its grace and slenderness. On the morning of November 7, 1940, only four months into its service, the bridge broke apart in a light gale and crashed into the Puget Sound 190 feet below. From its inaugural day on July 1, 1940, something had seemed not quite right. The span of the bridge would start to undulate up and down in light breezes, earning the bridge the nickname “Galloping Gertie”. Engineers tried to stabilise the bridge using heavy steel cables fixed to steel blocks on either side of the span, but to no avail – the galloping continued.

On the morning of the collapse, Gertie was bouncing around in its usual manner. As the winds intensified to around 60 km/h (40 mph), the rhythmic up-and-down motion of the bridge suddenly morphed into a violent twisting motion spiralling along the deck. At this point the authorities closed the bridge to any further traffic, but the deck continued to writhe like a corkscrew. The twisting became so violent that the two sides of the bridge deck separated vertically by 9 m (28 feet), with the deck tilted at 45° to the horizontal. For half an hour the bridge resisted these oscillatory stresses until, at one point, the deck buckled, girders and steel cables broke loose, and the bridge collapsed into the Puget Sound.

After the collapse, the Governor of Washington, Clarence Martin, announced that the bridge had been built correctly and that another one would be built using the same basic design. At this point von Kármán started to feel uneasy, and he asked technicians at CalTech to build a small rubber replica of the bridge for him. Von Kármán then tested the replica at home using a small electric fan. As he varied the speed of the fan, the model started to oscillate, and the oscillations grew larger when the rhythm of the air movement induced by the fan synchronised with the natural oscillations of the model.

Indeed, Galloping Gertie had been constructed using cylindrical cable stays and these shed vortices in a periodic manner when a cross-wind reached a specific intensity. Because the bridge was also built using a solid sidewall, the vortices impinged immediately onto a solid section of the bridge, inducing resonant vibrations in the bridge structure.

Von Kármán then contacted the governor and wrote a short piece for the Engineering News Record describing his findings. Later, von Kármán served on the committee that investigated the cause of the collapse and to his surprise the civil engineers were not at all enamoured with his explanation. In all of the engineers’ training and previous engineering experience, the design of bridges had been governed by “static forces” of gravity and constant maximum wind load. The effects of “dynamic loads”, which caused bridges to swing from side to side, had been observed but considered to be negligible. Such design flaws, stemming from ignorance rather than the improper application of design principles, are the most harrowing as the mode of failure is entirely unaccounted for. Fortunately, the meetings adjourned with agreements in place to test the new Tacoma Narrows bridge in a wind tunnel at CalTech, a first at the time. As a result of this work, the solid sidewall of the bridge deck was perforated with holes to prevent vortex shedding, and a number of slots were inserted into the bridge deck to prevent differences in pressure between the top and bottom surfaces of the deck.

The one person who did suffer irreparable damage to his reputation was the insurance agent who initially underwrote the $6 million insurance policy for the state of Washington. Figuring that something as big as the Tacoma Narrows bridge would never collapse, he pocketed the insurance premium himself without actually setting up a policy, and ended up in jail…

If you would like to learn more about Theodore von Kármán’s life, I highly recommend his autobiography, which I have reviewed here.

The material we covered in the last two posts (skin friction and pressure drag) allows us to consider a fun little problem:

How quickly do the small bubbles of gas rise in a pint of beer?

To answer this question we will use the concept of aerodynamic drag introduced in the last two posts – namely,

  • skin friction drag – frictional forces acting tangential to the flow that arise because of the inherent stickiness (viscosity) of the fluid.
  • pressure drag – the difference between the fluid pressure upstream and downstream of the body, which typically occurs because of boundary layer separation and the induced turbulent wake behind the body.

The most important thing to remember is that both skin friction drag and pressure drag are influenced by the shape of the boundary layer.

What is this boundary layer?

As a fluid flows over a body it sticks to the body’s external surface due to the inherent viscosity of the fluid, and therefore a thin region exists close to the surface where the velocity of the fluid increases from zero to the mainstream velocity. This thin region of the flow is known as the boundary layer and the velocity profile in this region is U-shaped as shown in the figure below.

Velocity profile of laminar versus turbulent boundary layer


As shown in the figure above, the flow in the boundary layer can either be laminar, meaning it flows in stratified layers with no to very little mixing between the layers, or turbulent, meaning there is significant mixing of the flow perpendicular to the surface. Due to the higher degree of momentum transfer between fluid layers in a turbulent boundary layer, the velocity of the flow increases more quickly away from the surface than in a laminar boundary layer. The magnitude of skin friction drag at the surface of the body (y = 0 in the figure above) is given by

 \tau_w = \mu \frac{\mathrm{d}u}{\mathrm{d}y}_w

where  \mathrm{d}u/\mathrm{d}y is the so-called velocity gradient, or how quickly the fluid increases its velocity as we move away from the surface. As this velocity gradient at the surface (y = 0 in the figure above) is much steeper for turbulent flow, this type of flow leads to more skin friction drag than laminar flow does.
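
As a small numerical illustration of this expression, the sketch below approximates the velocity gradient at the wall with a one-sided finite difference. The velocity samples and the use of air as the fluid are hypothetical assumptions, not measured data.

```python
# Estimate the wall shear stress tau_w = mu * du/dy at y = 0 from a near-wall velocity
# profile, using a one-sided finite difference. The profile samples are hypothetical.

mu_air = 1.8e-5              # dynamic viscosity of air [Pa s] (approximate)
y = [0.0, 1e-4, 2e-4, 4e-4]  # distance from the wall [m]
u = [0.0, 1.5, 2.8, 5.0]     # streamwise velocity [m/s]

dudy_wall = (u[1] - u[0]) / (y[1] - y[0])  # velocity gradient at the wall [1/s]
tau_w = mu_air * dudy_wall                 # wall shear stress [Pa]

print(f"velocity gradient at the wall ~ {dudy_wall:.0f} 1/s")  # ~ 15,000 1/s
print(f"wall shear stress tau_w       ~ {tau_w:.3f} Pa")       # ~ 0.27 Pa
```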

Skin friction drag is the dominant form of drag for objects whose surface area is aligned with the flow direction. Such shapes are called streamlined and include aircraft wings at cruise, fish and low-drag sports cars. For these streamlined bodies it is beneficial to maintain laminar flow over as much of the body as possible in order to minimise aerodynamic drag.

Conversely, pressure drag is the difference between the fluid pressure in front of (upstream) and behind (downstream) the moving body. Right at the tip of any moving body, the fluid comes to a standstill relative to the body (i.e. it sticks to the leading point) and as a result obtains its stagnation pressure.

The stagnation pressure is the pressure attained by a fluid brought to rest and, for thermodynamic reasons, it is the highest pressure the fluid can attain under a set of pre-defined conditions. This is consistent with Bernoulli’s law, which tells us that the fluid pressure decreases as the fluid accelerates and increases as it decelerates.

At the trailing edge of the body (i.e. immediately behind it) the pressure of the fluid is naturally lower than this stagnation pressure because the fluid is either flowing smoothly at some finite velocity, hence lower pressure, or is greatly disturbed by large-scale eddies. These large-scale eddies occur due to a phenomenon called boundary layer separation.


Boundary layer separation over a cylinder

 

Why does the boundary layer separate?

Any body of finite thickness will force the fluid to flow in curved streamlines around it. Towards the leading edge this causes the flow to speed up in order to balance the centripetal forces created by the curved streamlines. This creates a region of falling fluid pressure, also called a favourable pressure gradient. Further along the body, the streamlines straighten out and the opposite phenomenon occurs – the fluid flows into a region of rising pressure, also known as an adverse pressure gradient. This adverse pressure gradient decelerates the flow and causes the slowest parts of the boundary layer, i.e. those parts closest to the surface, to reverse direction. At this point, the boundary layer “separates” from the body and the combination of flow in two directions induces a wake of turbulent vortices; in essence a region of low-pressure fluid.

The reason why this is detrimental for drag is because we now have a lower pressure region behind the body than in front of it, and this pressure difference results in a force that pushes against the direction of travel. The magnitude of this drag force greatly depends on the location of the boundary layer separation point. The further upstream this point, the higher the pressure drag.

To minimise pressure drag it is beneficial to have a turbulent boundary layer. This is because the higher velocity gradient at the external surface of the body in a turbulent boundary layer means that the fluid has more momentum to “fight” the adverse pressure gradient. This extra momentum pushes the point of separation further downstream. Pressure drag is typically the dominant type of drag for bluff bodies, such as golf balls, whose surface area is predominantly perpendicular to the flow direction.

So to summarise: laminar flow minimises skin-friction drag, but turbulent flow minimises pressure drag.

Given this trade-off between skin friction drag and pressure drag, we are of course interested in the total amount of drag, known as the profile drag. The propensity of a specific shape to induce profile drag is captured by the dimensionless drag coefficient C_D

 C_D = \frac{D}{1/2 \rho U_0^2A}

where D is the total drag force acting on the body, \rho is the density of the fluid, U_0 is the undisturbed mainstream velocity of the flow, and A represents a characteristic area of the body. For bluff bodies A is typically the frontal area of the body, whereas for aerofoils and hydrofoils A is the product of wing span and mean chord. For a flat plate aligned with the flow direction, A is the total surface area of both sides of the plate.

The denominator of the drag coefficient is the dynamic pressure of the fluid (1/2 \rho U_0^2) multiplied by the reference area A, and is therefore equal to a force. As a result, the drag coefficient is the ratio of two forces, and because the units of numerator and denominator cancel, it is a dimensionless number that remains constant for two dynamically similar flows. This means C_D is independent of the absolute size of the body and depends only on its shape (and, as we will see below, on the Reynolds number of the flow). As discussed in the wind tunnel post, this mathematical property is why we can create smaller scaled versions of real aircraft and test them in a wind tunnel.
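
The definition translates directly into a few lines of Python. In the sketch below the flow speed, sphere diameter and measured drag force are hypothetical values, used only to illustrate how the drag coefficient is formed.

```python
from math import pi

# Drag coefficient C_D = D / (0.5 * rho * U0**2 * A) for a bluff body (here a sphere).
# The flow speed, diameter and drag force are hypothetical values for illustration.

rho = 1.2          # density of air [kg/m^3]
U0 = 20.0          # free stream velocity [m/s] (assumed)
d = 0.05           # sphere diameter [m] (assumed)
A = pi / 4 * d**2  # frontal area of the sphere [m^2]
D = 0.25           # measured drag force [N] (assumed)

q = 0.5 * rho * U0**2  # dynamic pressure [Pa]
C_D = D / (q * A)

print(f"dynamic pressure ~ {q:.0f} Pa")  # ~ 240 Pa
print(f"drag coefficient ~ {C_D:.2f}")   # ~ 0.53, a plausible value for a sphere
```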

Skin friction drag versus pressure drag for differently shaped bodies

Looking at the diagram above we can start to develop an appreciation for the relative magnitude of pressure drag and skin friction drag for different bodies. The “worst” shape for boundary layer separation is a plate perpendicular to the flow as shown in the first diagram. In this case, drag is clearly dominated by pressure drag with negligible skin friction drag. The situation is similar for the cylinder shown in the second diagram, but in this case the overall profile drag is smaller due to the greater degree of streamlining.

The degree of boundary layer separation, and therefore the wake of eddies behind the cylinder, depends to a large extent on the surface roughness of the body and the Reynolds number of the flow. The Reynolds number is given by

 R = \frac{\rho U_0 d}{\mu}

where \rho is the density of the fluid, U_0 is the free-stream velocity, d is the characteristic dimension of the body and \mu is the dynamic viscosity of the fluid. The Reynolds number influences boundary layer separation because it is the dominant factor in determining whether the boundary layer is laminar or turbulent. The Reynolds number at which the boundary layer transitions from laminar to turbulent differs from problem to problem, but as a general rule of thumb a value of  R = 10^5 can be used.

This influence of the Reynolds number can be observed by comparing the second diagram to the bottom diagram. The Reynolds number of the flow over the cylinder in the bottom diagram has increased by a factor of 100 ( R = 10^7), thereby increasing the extent of turbulent flow and delaying the onset of boundary layer separation (smaller wake). Hence, the drag coefficient of the bottom cylinder is half the drag coefficient of the cylinder in the second diagram ( R = 10^5) even though the diameter has remained unchanged. Remember though that only the drag coefficient has been halved; the overall drag force will naturally be higher at  R = 10^7 because the drag force scales with  C_D U_0^2 and the velocity U_0 has increased by a factor of 100.
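A short sketch makes both points concrete. Here we assume, as in the diagram, that the Reynolds number increase comes entirely from a 100-fold increase in velocity; the fluid properties (water) and the cylinder diameter are chosen purely for illustration.

```python
# Sketch: Reynolds number R = rho * U0 * d / mu, and the scaling of the drag
# force with C_D * U0^2. Fluid properties and dimensions are assumed.

def reynolds(rho, u0, d, mu):
    return rho * u0 * d / mu

rho, mu, d = 1000.0, 1.0e-3, 0.1     # water flowing over a 10 cm cylinder (assumed)
u_1 = 1.0                            # m/s -> roughly R = 10^5
u_2 = 100.0 * u_1                    # 100-fold velocity increase -> roughly R = 10^7

print(f"R_1 = {reynolds(rho, u_1, d, mu):.1e}, R_2 = {reynolds(rho, u_2, d, mu):.1e}")

# Drag scales with C_D * U0^2: even though C_D halves, the drag force grows.
cd_ratio = 0.5                       # drag coefficient halves (as in the diagram)
drag_ratio = cd_ratio * (u_2 / u_1) ** 2
print(f"Drag force increases by a factor of {drag_ratio:.0f}")   # 5000
```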

Notice also that the streamlined aircraft wing shown in the third diagram has a much lower drag coefficient. This is because the aircraft wing is essentially a “drawn-out” cylinder of the same “thickness” d as the cylinder in the second diagram, but by streamlining (drawing out) its shape, boundary layer separation occurs much further downstream and the size of the wake is much reduced.

Terminal velocity of rising beer bubbles

The terminal velocity is the speed at which the forces accelerating a body are exactly balanced by the forces decelerating it. For example, the aerodynamic drag acting on a sky diver is proportional to the square of the falling velocity. This means that at some point the sky diver reaches a velocity at which the drag force equals the force of gravity and no further acceleration is possible; this speed is the terminal velocity.
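For the sky diver, this balance can be written as m g = \frac{1}{2} C_D \rho U_T^2 A and solved for U_T. The short Python sketch below does exactly that; the mass, frontal area and drag coefficient are rough assumed values, not figures from this post.

```python
from math import sqrt

# Sketch: terminal velocity of a sky diver from the balance of weight and drag,
# m * g = 1/2 * C_D * rho * U_T^2 * A. All input values are rough assumptions.
m = 80.0      # mass of sky diver plus equipment, kg (assumed)
g = 9.81      # gravitational acceleration, m/s^2
rho = 1.225   # air density, kg/m^3
c_d = 1.0     # drag coefficient, belly-to-earth posture (assumed)
A = 0.5       # frontal area, m^2 (assumed)

u_t = sqrt(2 * m * g / (c_d * rho * A))
print(f"Terminal velocity: {u_t:.0f} m/s ({u_t * 3.6:.0f} km/h)")
```

We will now apply exactly the same force balance to a gas bubble rising in a glass of beer, with buoyancy taking the place of gravity as the accelerating force.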

Beer bubbles rising to the surface

Turbulent wake behind a moving sphere. We will model a gas bubble rising to the top of a beer as a sphere moving through a liquid.

The net accelerating force on a bubble of gas in a liquid is the buoyancy force, which arises from the difference in density between the liquid and the gas. This buoyancy force  F_B is given by

 F_B = \frac{\pi}{6} d^3 \left( \rho_l-\rho_g \right)g

where  d is the diameter of the spherical gas bubble,  \rho_g is the density of the gas,  \rho_l is the density of the liquid and  g is the gravitational acceleration of 9.81 m/s^2. The buoyancy force is essentially the weight of the liquid displaced by the spherical bubble of volume  \frac{\pi}{6} d^3, less the weight of the gas inside it.

At terminal velocity the buoyancy force is balanced by the total drag acting on the gas bubble. Using the equation for the drag coefficient above we know that the total drag  D is

 D = \frac{1}{2} C_D \rho_l U_T^2 \left( \frac{\pi}{4} d^2\right)

where  U_T is the terminal velocity and we have replaced  A with the frontal area of the gas bubble  \frac{\pi}{4} d^2 , i.e. the area of a circle. Thus, equating  D and  F_B

 \frac{\pi}{6} d^3 \left( \rho_l-\rho_g \right)g = \frac{1}{2} C_D \rho_l U_T^2 \left( \frac{\pi}{4} d^2\right)

and re-arranging for terminal velocity gives us

 U_T^2 = \frac{4d\left(\rho_l-\rho_g\right)g}{3C_D\rho_l}

At this point we can calculate the terminal velocity of a spherical gas bubble driven by buoyancy forces for a certain drag coefficient. The problem now is that the drag coefficient of a sphere is not constant; it changes with the flow velocity. Fortunately, the drag coefficient of a sphere plateaus at around 0.5 for Reynolds numbers  10^3-10^5 (see diagram below) and it is reasonable to assume that the flow considered here falls within this range. Some good old engineering judgement at work!

Drag coefficient of a sphere as a function of Reynolds number

Drag coefficient as a function of Reynolds number. The curve flattens out between 10^3 and 10^5.

Hence, for our simplified calculation we will assume a drag coefficient of 0.5, a gas bubble 3 mm in diameter, a gas density of 1.2 kg/m^3 and a liquid density of 989 kg/m^3 (beer at 5% alcohol by volume).

Therefore, the terminal velocity of gas bubbles rising in a beer is approximately

 U_T^2 = \frac{4 \times 0.003 \times \left(989-1.2\right) \times 9.81}{3 \times 0.5 \times 989} = 0.0784 \ m^2/s^2

and taking the square root

 U_T = 0.280 \ m/s = 28.0 \ cm/s \left( \approx 11 \ inches/s \right)

Given that the viscosity of the fluid is around \mu = 0.001 Ns/m^2 we can now check that we are in the right Reynolds number range:

 R = \frac{\rho_l U_T d}{\mu} = \frac{989 \times 0.280 \times 0.003}{0.001} \approx 831

which is right at the lower end of the  R = 10^3-10^5 range, so our assumed drag coefficient of 0.5 is reasonable!
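The entire calculation, including the Reynolds number sanity check, can be reproduced in a few lines of Python. This is a sketch using the same assumed inputs as above (3 mm bubble, C_D = 0.5, \rho_l = 989 kg/m^3, \rho_g = 1.2 kg/m^3, \mu = 0.001 Ns/m^2):

```python
from math import sqrt

# Assumed inputs, as in the text
d     = 0.003    # bubble diameter, m
rho_l = 989.0    # density of the beer, kg/m^3
rho_g = 1.2      # density of the gas, kg/m^3
mu    = 0.001    # dynamic viscosity of the liquid, Ns/m^2
g     = 9.81     # gravitational acceleration, m/s^2
c_d   = 0.5      # sphere drag coefficient, assumed constant for R = 10^3 - 10^5

# Terminal velocity from balancing buoyancy against drag
u_t = sqrt(4 * d * (rho_l - rho_g) * g / (3 * c_d * rho_l))

# Reynolds number check to confirm the assumed drag coefficient is plausible
R = rho_l * u_t * d / mu

print(f"Terminal velocity: {u_t:.3f} m/s ({u_t * 100:.1f} cm/s)")
print(f"Reynolds number:   {R:.0f}")
```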

So there you have it: Beer bubbles rise at around a foot per second.

Perhaps the next time you gaze pensively into a glass of beer after a hard day’s work, this little fun-fact will give you something else to think (or smile) about.

Acknowledgements

This post is based on a fun little problem that Prof. Gary Lock set his undergraduate students at the University of Bath. Prof. Lock was probably the most entertaining and effective lecturer I had during my undergraduate studies and has influenced my own lecturing style. If I can pass on even a fraction of the passion for engineering and teaching that Prof. Lock instilled in me, I will consider my job well done.

At the start of the 19th century, after studying the highly cambered thin wings of many different birds, Sir George Cayley designed and built the first modern aerofoil, later used on a hand-launched glider. This biomimetic, highly cambered and thin-walled design remained the predominant aerofoil shape for almost 100 years, mainly due to the fact that the actual mechanisms of lift and drag were not understood scientifically but were explored in an empirical fashion. One of the major problems with these early aerofoil designs was that they experienced a phenomenon now known as boundary layer separation at very low angles of attack. This significantly limited the amount of lift that could be created by the wings and meant that bigger and bigger wings were needed to allow for any progress in terms of aircraft size. Lacking the analytical tools to study this problem, aerodynamicists continued to advocate thin aerofoil sections, as there was plenty of evidence in nature to suggest their efficacy. The problem was considered to be more one of degree, i.e. incrementally iterating the aerofoil shapes found in nature, rather than of type, that is designing an entirely new shape of aerofoil in accord with fundamental physics.

During the pre-WWI era, the misguided notions of designers were compounded by the ever-increasing use of wind-tunnel tests. The wind tunnels used at the time were relatively small and ran at very low flow speeds. This meant that the performance of the aerofoils was being tested under conditions of laminar flow (smooth flow in layers, no mixing perpendicular to flow direction) rather than the turbulent flow (mixing of flow via small vortices) present over the wing surfaces. Under laminar flow conditions, increasing the thickness of an aerofoil increases the amount of skin-friction drag (as shown in last month’s post), and hence thinner aerofoils were considered to be superior.

The modern plane – born in 1915

The situation in Germany changed dramatically during WWI. In 1915 Hugo Junkers pioneered the first practical all-metal aircraft with a cantilevered wing – essentially the same semi-monocoque wing box design used today. The most popular design up to then was the biplane configuration held together by wires and struts, which introduced considerable amounts of parasitic drag and thereby limited the maximum speed of aircraft. Eliminating these supporting struts and wires meant that the flight loads needed to be carried by other means. Junkers cantilevered a beam from either side of the fuselage, the main spar, at about 25% of the chord of the wing to resist the up and down bending loads produced by lift. Then he fitted a smaller second spar, known as the trailing edge spar, at 75% of the chord to assist the main spar in resisting fore and aft bending induced by the drag on the wing. The two spars were connected by the external wing skin to produce a closed box-section known as the wing box. Finally, a curved piece of metal was fitted to the front of the wing to form the “D”-shaped leading edge, and two pieces of metal were run out to form the trailing edge. This series of three closed sections provided the wing with sufficient torsional rigidity to sustain the twisting loads that arise because the aerodynamic centre (the point where the lift force can be considered to act) is offset from the shear centre (the point where a vertical load will only cause bending and no twisting). Junkers’ ideas were all combined in the world’s first practical all-metal aircraft, the Junkers J 1, which, although much heavier than other aircraft of the time, established the form of construction that would become predominant for the larger and faster aircraft of the coming generation.

Junkers J 1 at Döberitz in 1915

Structures + Aerodynamics = Superior Aircraft

Junkers’ construction naturally resulted in a much thicker wing due to the room required for internal bracing, and this design provided the impetus for novel aerodynamics research. Junkers’ ideas were supported by Ludwig Prandtl who carried out his famous aerodynamics work at the University of Göttingen. As discussed in last month’s post, Prandtl had previously introduced the notion of the boundary layer; namely the existence of a U-shaped velocity profile with a no-flow condition at the surface and an increasing velocity field towards the main stream some distance away from the surface. Prandtl argued that the presence of a boundary layer supported the simplifying assumption that fluid flow can be split into two non-interacting portions; a thin layer close to the surface governed by viscosity (the stickiness of the fluid) and an inviscid mainstream. This allowed Prandtl and his colleagues to make much more accurate predictions of the lift and drag performance of specific wing-shapes and greatly helped in the design of German WWI aircraft. In 1917 Prandtl showed that Junkers’ thick and less-cambered aerofoil section produced much more favourable lift characteristics than the classic thinner sections used by Germany’s enemies. Furthermore, the thick aerofoil could be flown at a much higher angle of attack without stalling and hence improved the manoeuvrability of a plane during dog fighting.

Skin Friction versus Pressure Drag

The flow in a boundary layer can be either laminar or turbulent. Laminar flow is orderly and stratified without interchange of fluid particles between individual layers, whereas in turbulent flow there is significant exchange of fluid perpendicular to the flow direction. The type of flow greatly influences the physics of the boundary layer. For example, due to the greater extent of mass interchange, a turbulent boundary layer is thicker than a laminar one and also features a steeper velocity gradient close to the surface, i.e. the flow speed increases more quickly as we move away from the wall.

Velocity profile of laminar versus turbulent boundary layer

Velocity profile of laminar versus turbulent boundary layer. Note how the turbulent flow increases velocity more rapidly away from the wall.

Just like your hand experiences friction when sliding over a surface, so do layers of fluid in the boundary layer, i.e. the slower regions of the flow are holding back the faster regions. This means that the velocity gradient throughout the boundary layer gives rise to internal shear stresses that are akin to friction acting on a surface. This type of friction is aptly called skin-friction drag and is predominant in streamlined flows where the majority of the body’s surface is aligned with the flow. As the velocity gradient at the surface is greater for turbulent than laminar flow, a streamlined body experiences more drag when the boundary layer flow over its surfaces is turbulent. A typical example of a streamlined body is an aircraft wing at cruise, and hence it is no surprise that maintaining laminar flow over aircraft wings is an ongoing research topic.

Over flat surfaces we can suitably ignore any changes in pressure in the flow direction. Under these conditions, the boundary layer remains stable but grows in thickness in the flow direction. This is, of course, an idealised scenario and in real-world applications, such as curved wings, the flow is most likely experiencing an adverse pressure gradient, i.e. the pressure increases in the flow direction. Under these conditions the boundary layer can become unstable and separate from the surface. The boundary layer separation induces a second type of drag, known as pressure drag. This type of drag is predominant for non-streamlined bodies, e.g. a golf ball flying through the air or an aircraft wing at a high angle of attack.

So why does the flow separate in the first place?

To answer this question consider fluid flow over a cylinder. Right at the front of the cylinder fluid particles must come to rest. This point is aptly called the stagnation point and is the point of maximum pressure (to conserve energy the pressure needs to fall as fluid velocity increases, and vice versa). Further downstream, the curvature of the cylinder causes the flow lines to curve, and in order to equilibrate the centripetal forces, the flow accelerates and the fluid pressure drops. Hence, an area of accelerating flow and falling pressure occurs between the stagnation point and the poles of the cylinder. Once the flow passes the poles, the curvature of the cylinder is less effective at directing the flow in curved streamlines due to all the open space downstream of the cylinder. Hence, the curvature in the flow reduces and the flow slows down, turning the previously favourable pressure gradient into an adverse pressure gradient of rising pressure.

Boundary layer separation over cylinder

Boundary layer separation over a cylinder (axis out of the page).

To understand boundary layer separation we need to understand how these favourable and adverse pressure gradients influence the shape of the boundary layer. From our discussion on boundary layers, we know that the fluid travels slower the closer we are to the surface due to the retarding action of the no-slip condition at the wall. In a favourable pressure gradient, the falling pressure along the streamlines helps to urge the fluid along, thereby overcoming some of the decelerating effects of the fluid’s viscosity. As a result, the fluid is not decelerated as much close to the wall leading to a fuller U-shaped velocity profile, and the boundary layer grows more slowly.

The opposite occurs for an adverse pressure gradient, i.e. the mainstream pressure increases in the flow direction and retards the flow in the boundary layer. In this case the pressure forces reinforce the retarding viscous friction forces close to the surface. As a result, the difference between the flow velocity close to the wall and the mainstream is more pronounced and the boundary layer grows more quickly. If the adverse pressure gradient acts over a sufficiently extended distance, the deceleration will be sufficient to reverse the direction of flow closest to the wall. The flow then detaches from the surface at the point of boundary layer separation, beyond which a region of recirculating flow is established.

For aircraft wings, boundary layer separation can lead to very significant consequences ranging from an increase in pressure drag to a dramatic loss of lift, known as aerodynamic stall. The shape of an aircraft wing is essentially an elongated and perhaps asymmetric version of the cylinder shown above. Hence the airflow over the top convex surface of a wing follows the same basic principles outlined above:

  • A point of stagnation at the leading edge.
  • A region of accelerating mainstream flow (favourable pressure gradient) up to the point of maximum thickness.
  • A region of decelerating mainstream flow (adverse pressure gradient) beyond the point of maximum thickness.

These three points are summarised in the schematic diagram below.

Boundary layer separation over the top surface of a wing

Boundary layer separation over the top surface of a wing.

Boundary layer separation is an important issue for aircraft wings as it induces a large wake that completely changes the flow downstream of the point of separation. Skin-friction drag arises from the inherent viscosity of the fluid, i.e. the fluid sticks to the surface of the wing and the associated frictional shear stress exerts a drag force. When the boundary layer separates, an additional drag force is induced as a result of the difference in pressure upstream and downstream of the wing. The overall dimensions of the wake, and therefore the magnitude of the pressure drag, depend on the point of separation along the wing. The velocity profiles of turbulent and laminar boundary layers (see image above) show that the velocity of the fluid increases much more slowly away from the wall for a laminar boundary layer. As a result, the flow in a laminar boundary layer will reverse direction much earlier in the presence of an adverse pressure gradient than the flow in a turbulent boundary layer.

To summarise, we now know that the inherent viscosity of a fluid leads to the presence of a boundary layer that has two possible sources of drag. Skin-friction drag due to the frictional shear stress between the fluid and the surface, and pressure drag due to flow separation and the existence of a downstream wake. As the total drag is the sum of these two effects, the aerodynamicist is faced with a non-trivial compromise:

  • Skin-friction drag is reduced by laminar flow due to a lower shear stress at the wall, but this increases pressure drag when boundary layer separation occurs.
  • Pressure drag is reduced by turbulent flow by delaying boundary layer separation, but this increases the skin-friction drag due to higher shear stresses at the wall.

As a result, neither laminar nor turbulent flow can be said to be preferable in general and judgement has to be made regarding the specific application. For a blunt body, such as a cylinder, pressure drag dominates and therefore a turbulent boundary layer is preferable. For more streamlined bodies, such as an aircraft wing at cruise, the overall drag is dominated by skin-friction drag and hence a laminar boundary layer is preferable. Dolphins, for example, have very streamlined bodies to maintain laminar flow. Early golfers, on the other hand, realised that worn rubber golf balls flew further than pristine ones, and this led to the innovation of dimples on golf balls. Fluid flow over golf balls is predominantly laminar due to the relatively low flight speeds. Dimples are therefore nothing more than small imperfections that transform the predominantly laminar flow into a turbulent one that delays the onset of boundary layer separation and therefore reduces pressure drag.

Aerodynamic Stall

The second, and more dramatic, effect of boundary layer separation on aircraft wings is aerodynamic stall. At relatively low angles of attack, for example during cruise, the adverse pressure gradient acting on the top surface of the wing is benign and the boundary layer remains attached over the entire surface. As the angle of attack is increased, however, so does the severity of the adverse pressure gradient. At some point the boundary layer will start to separate near the trailing edge of the wing, and this separation point will move further upstream as the angle of attack is increased. If the aerofoil is positioned at a sufficiently large angle of attack, separation will occur very close to the point of maximum thickness of the aerofoil and a large wake will develop behind the point of separation. This wake redistributes the flow over the rest of the aerofoil and thereby seriously reduces the lift generated by the wing, a condition known as aerodynamic stall. Due to the high pressure drag induced by the wake, the aircraft can lose further airspeed, pushing the separation point further upstream and creating a deleterious feedback loop in which the aircraft literally starts to fall out of the sky in an uncontrolled spiral. To prevent total loss of control, the pilot needs to reattach the boundary layer as quickly as possible, which is achieved by reducing the angle of attack and pointing the nose of the aircraft down to gain speed.

The lift produced by a wing is given by

L = \frac{1}{2}C_L \rho V^2 S

where \rho is the density of the surrounding air, V is the flight velocity, S is the wing area and C_L is the lift coefficient of the aerofoil shape. The lift coefficient of a specific aerofoil increases roughly linearly with the angle of attack up to a maximum value C_{Lmax}. For a typical aerofoil this maximum lift coefficient is around 1.4 at an angle of attack of around 16^\circ, the critical angle of attack beyond which the wing stalls.

During cruise the angle of attack is relatively small (\approx 2^\circ) as sufficient lift is guaranteed by the high flight velocity V. Furthermore, we actually want to maintain a small angle of attack as this minimises the pressure drag induced by boundary layer separation. At takeoff and landing, however, the flight velocity is much smaller, which means that the lift coefficient has to be increased by setting the wings at a more aggressive angle of attack (\approx 15^\circ). The issue is that even with a near maximum lift coefficient of 1.4, large jumbo jets have a hard time achieving the necessary lift force at safe speeds for landing. While it would also be possible to increase the wing area, such a solution would have a detrimental effect on aircraft weight and therefore fuel efficiency.
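To get a feel for the numbers, we can rearrange the lift equation for the speed needed to support the aircraft weight in level flight (L = W). The sketch below uses rough, assumed figures for a large jumbo jet (landing weight, wing area); they are illustrative only and not taken from this post.

```python
from math import sqrt

# Sketch: speed required for L = W, from L = 1/2 * C_L * rho * V^2 * S.
# The aircraft figures below are rough assumptions for a large jumbo jet.
def required_speed(weight, rho, c_l, wing_area):
    """Flight speed in m/s at which the wing generates lift equal to the weight."""
    return sqrt(2 * weight / (rho * c_l * wing_area))

weight = 350e3 * 9.81   # ~350 t landing weight (assumed), N
rho = 1.225             # sea-level air density, kg/m^3
S = 520.0               # wing area (assumed), m^2
c_l_max = 1.4           # maximum lift coefficient of the bare aerofoil

v = required_speed(weight, rho, c_l_max, S)
print(f"Required speed at C_L = {c_l_max}: {v:.0f} m/s ({v * 3.6:.0f} km/h)")
```

With these assumed numbers the required speed comes out at close to 90 m/s (over 300 km/h), which is uncomfortably fast for touching down on a runway, and it is exactly this problem that high-lift devices address.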

High-lift Devices

A much more elegant solution is the use of leading-edge slats and trailing-edge flaps. A slat is a thin, curved aerofoil that is fitted to the front of the wing and is intended to induce a secondary airflow through the gap between the slat and the leading edge. The air accelerates through this gap and thereby injects high-momentum fluid into the boundary layer on the upper surface, delaying the onset of flow reversal. Similarly, one or two curved aerofoils (flaps) may be placed at the rear of the wing, where the high-momentum fluid re-energises the flow that has been slowed down by the adverse pressure gradient. These devices can typically double the maximum lift coefficient and therefore allow big jumbo jets to take off and land at relatively low runway speeds, as illustrated below.

Leading edge slats and trailing edge flaps on an aircraft wing
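Since, for a fixed weight and wing area, the required flight speed scales with 1/\sqrt{C_L}, doubling the maximum lift coefficient reduces the minimum flying speed by a factor of \sqrt{2} \approx 1.4. Continuing the assumed jumbo-jet sketch from above:

```python
from math import sqrt

# Sketch: the required speed scales with 1/sqrt(C_L), so doubling C_Lmax with
# slats and flaps cuts the minimum flying speed by a factor of sqrt(2) ~ 1.4.
v_clean = 88.0                       # required speed at C_L = 1.4 from the sketch above, m/s
v_high_lift = v_clean / sqrt(2.0)    # C_L doubled to 2.8 by high-lift devices
print(f"Required speed drops from {v_clean:.0f} m/s to {v_high_lift:.0f} m/s")
```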

The next time you are sitting close to the wings, observe how these devices are retracted after take-off and deployed again before landing. In fact, birds and bats have evolved similar devices on their wings. The wings of bats, for example, are composed of thin and flexible membranes reinforced by small bones, which roughen the membrane surface, help to transition the flow from laminar to turbulent and thereby delay boundary layer separation. As is so often the case in engineering design, a lot of inspiration can be taken from nature!
