At the start of the 19th century, after studying the highly cambered thin wings of many different birds, Sir George Cayley designed and built the first modern aerofoil, later used on a hand-launched glider. This biomimetic, highly cambered and thin-walled design remained the predominant aerofoil shape for almost 100 years, mainly because the actual mechanisms of lift and drag were not understood scientifically but were explored empirically. One of the major problems with these early aerofoil designs was that they experienced a phenomenon now known as boundary layer separation at very low angles of attack. This significantly limited the amount of lift the wings could create and meant that ever bigger wings were needed for any progress in aircraft size. Lacking the analytical tools to study this problem, aerodynamicists continued to advocate thin aerofoil sections, as there was plenty of evidence in nature to suggest their efficacy. The problem was considered to be one of degree, i.e. incrementally iterating the aerofoil shapes found in nature, rather than one of type, i.e. designing an entirely new aerofoil shape from fundamental physics.
During the pre-WWI era, the misguided notions of designers were compounded by the ever-increasing use of wind-tunnel tests. The wind tunnels of the time were relatively small and ran at very low flow speeds. This meant that the performance of the aerofoils was being tested under conditions of laminar flow (smooth flow in layers, with no mixing perpendicular to the flow direction) rather than the turbulent flow (mixing of flow via small vortices) actually present over full-scale wing surfaces. Under laminar flow conditions, increasing the thickness of an aerofoil increases the amount of skin-friction drag (as shown in last month’s post), and hence thinner aerofoils were considered to be superior.
The modern plane – born in 1915
The situation in Germany changed dramatically during WWI. In 1915 Hugo Junkers pioneered the first practical all-metal aircraft with a cantilevered wing – essentially the same semi-monocoque wing box design used today. The most popular design up to then was the biplane configuration held together by wires and struts, which introduced considerable amounts of parasitic drag and thereby limited the maximum speed of aircraft. Eliminating these supporting struts and wires meant that the flight loads needed to be carried by other means. Junkers cantilevered a beam from either side of the fuselage, the main spar, at about 25% of the chord of the wing to resist the up and down bending loads produced by lift. He then fitted a smaller second spar, known as the trailing edge spar, at 75% of the chord to assist the main spar in resisting the fore and aft bending induced by drag on the wing. The two spars were connected by the external wing skin to produce a closed box-section known as the wing box. Finally, a curved piece of metal was fitted to the front of the wing to form the “D”-shaped leading edge, and two pieces of metal were run out to form the trailing edge. This series of three closed sections provided the wing with sufficient torsional rigidity to sustain the twisting loads that arise because the aerodynamic centre (the point where the lift force can be considered to act) is offset from the shear centre (the point where a vertical load will cause only bending and no twisting). Junkers’ ideas were all combined in the world’s first practical all-metal aircraft, the Junkers J 1, which, although much heavier than other aircraft of the time, developed into the predominant form of construction for the larger and faster aircraft of the coming generation.

Structures + Aerodynamics = Superior Aircraft
Junkers’ construction naturally resulted in a much thicker wing due to the room required for internal bracing, and this design provided the impetus for novel aerodynamics research. Junkers’ ideas were supported by Ludwig Prandtl, who carried out his famous aerodynamics work at the University of Göttingen. As discussed in last month’s post, Prandtl had previously introduced the notion of the boundary layer; namely, the existence of a U-shaped velocity profile with a no-flow condition at the surface and an increasing velocity field towards the mainstream some distance away from the surface. Prandtl argued that the presence of a boundary layer supported the simplifying assumption that fluid flow can be split into two non-interacting portions: a thin layer close to the surface governed by viscosity (the stickiness of the fluid) and an inviscid mainstream. This allowed Prandtl and his colleagues to make much more accurate predictions of the lift and drag performance of specific wing shapes and greatly helped in the design of German WWI aircraft. In 1917 Prandtl showed that Junkers’ thick and less-cambered aerofoil section produced much more favourable lift characteristics than the classic thinner sections used by Germany’s enemies. Furthermore, the thick aerofoil could be flown at a much higher angle of attack without stalling, and hence improved the manoeuvrability of a plane during dogfighting.
Skin Friction versus Pressure Drag
The flow in a boundary layer can be either laminar or turbulent. Laminar flow is orderly and stratified without interchange of fluid particles between individual layers, whereas in turbulent flow there is significant exchange of fluid perpendicular to the flow direction. The type of flow greatly influences the physics of the boundary layer. For example, due to the greater extent of mass interchange, a turbulent boundary layer is thicker than a laminar one and also features a steeper velocity gradient close to the surface, i.e. the flow speed increases more quickly as we move away from the wall.
Just like your hand experiences friction when sliding over a surface, so do layers of fluid in the boundary layer, i.e. the slower regions of the flow are holding back the faster regions. This means that the velocity gradient throughout the boundary layer gives rise to internal shear stresses that are akin to friction acting on a surface. This type of friction is aptly called skin-friction drag and is predominant in streamlined flows where the majority of the body’s surface is aligned with the flow. As the velocity gradient at the surface is greater for turbulent than laminar flow, a streamlined body experiences more drag when the boundary layer flow over its surfaces is turbulent. A typical example of a streamlined body is an aircraft wing at cruise, and hence it is no surprise that maintaining laminar flow over aircraft wings is an ongoing research topic.
Over flat surfaces we can suitably ignore any changes in pressure in the flow direction. Under these conditions, the boundary layer remains stable but grows in thickness in the flow direction. This is, of course, an idealised scenario and in real-world applications, such as curved wings, the flow is most likely experiencing an adverse pressure gradient, i.e. the pressure increases in the flow direction. Under these conditions the boundary layer can become unstable and separate from the surface. The boundary layer separation induces a second type of drag, known as pressure drag. This type of drag is predominant for non-streamlined bodies, e.g. a golfball flying through the air or an aircraft wing at a high angle of attack.
So why does the flow separate in the first place?
To answer this question consider fluid flow over a cylinder. Right at the front of the cylinder fluid particles must come to rest. This point is aptly called the stagnation point and is the point of maximum pressure (to conserve energy the pressure needs to fall as fluid velocity increases, and vice versa). Further downstream, the curvature of the cylinder causes the flow lines to curve, and in order to equilibrate the centripetal forces, the flow accelerates and the fluid pressure drops. Hence, an area of accelerating flow and falling pressure occurs between the stagnation point and the poles of the cylinder. Once the flow passes the poles, the curvature of the cylinder is less effective at directing the flow in curved streamlines due to all the open space downstream of the cylinder. Hence, the curvature in the flow reduces and the flow slows down, turning the previously favourable pressure gradient into an adverse pressure gradient of rising pressure.
To understand boundary layer separation we need to understand how these favourable and adverse pressure gradients influence the shape of the boundary layer. From our discussion on boundary layers, we know that the fluid travels slower the closer we are to the surface due to the retarding action of the no-slip condition at the wall. In a favourable pressure gradient, the falling pressure along the streamlines helps to urge the fluid along, thereby overcoming some of the decelerating effects of the fluid’s viscosity. As a result, the fluid is not decelerated as much close to the wall leading to a fuller U-shaped velocity profile, and the boundary layer grows more slowly.
By analogy, the opposite occurs for an adverse pressure gradient, i.e. the mainstream pressure increases in the flow direction, retarding the flow in the boundary layer. So in the case of an adverse pressure gradient the pressure forces reinforce the retarding viscous friction forces close to the surface. As a result, the difference between the flow velocity close to the wall and the mainstream is more pronounced and the boundary layer grows more quickly. If the adverse pressure gradient acts over a sufficiently extended distance, the deceleration will eventually reverse the direction of flow in the boundary layer. The velocity profile then develops a point of inflection, and the location where the flow at the wall first reverses is known as the point of boundary layer separation, beyond which a recirculating flow pattern is established.
For aircraft wings, boundary layer separation can lead to very significant consequences ranging from an increase in pressure drag to a dramatic loss of lift, known as aerodynamic stall. The shape of an aircraft wing is essentially an elongated and perhaps asymmetric version of the cylinder shown above. Hence the airflow over the top convex surface of a wing follows the same basic principles outlined above:
- There is a point of stagnation at the leading edge.
- A region of accelerating mainstream flow (favourable pressure gradient) up to the point of maximum thickness.
- A region of decelerating mainstream flow (adverse pressure gradient) beyond the point of maximum thickness.
These three points are summarised in the schematic diagram below.
Boundary layer separation is an important issue for aircraft wings as it induces a large wake that completely changes the flow downstream of the point of separation. Skin-friction drag arises from the inherent viscosity of the fluid, i.e. the fluid sticks to the surface of the wing and the associated frictional shear stress exerts a drag force. When a boundary layer separates, an additional drag force is induced as a result of the pressure difference upstream and downstream of the wing. The overall dimensions of the wake, and therefore the magnitude of the pressure drag, depend on the point of separation along the wing. The velocity profiles of turbulent and laminar boundary layers (see image above) show that the fluid velocity increases much more slowly away from the wall for a laminar boundary layer. As a result, the flow in a laminar boundary layer will reverse direction much earlier in the presence of an adverse pressure gradient than the flow in a turbulent boundary layer.
To summarise, we now know that the inherent viscosity of a fluid leads to the presence of a boundary layer that has two possible sources of drag. Skin-friction drag due to the frictional shear stress between the fluid and the surface, and pressure drag due to flow separation and the existence of a downstream wake. As the total drag is the sum of these two effects, the aerodynamicist is faced with a non-trivial compromise:
- skin-friction drag is reduced by laminar flow due to a lower shear stress at the wall, but this increases pressure drag when boundary layer separation occurs.
- pressure drag is reduced by turbulent flow by delaying boundary layer separation, but this increases the skin-friction drag due to higher shear stresses at the wall.
As a result, neither laminar nor turbulent flow can be said to be preferable in general and judgement has to be made regarding the specific application. For a blunt body, such as a cylinder, pressure drag dominates and therefore a turbulent boundary layer is preferable. For more streamlined bodies, such as an aircraft wing at cruise, the overall drag is dominated by skin-friction drag and hence a laminar boundary layer is preferable. Dolphins, for example, have very streamlined bodies to maintain laminar flow. Early golfers, on the other hand, realised that worn rubber golf balls flew further than pristine ones, and this led to the innovation of dimples on golf balls. Fluid flow over golf balls is predominantly laminar due to the relatively low flight speeds. Dimples are therefore nothing more than small imperfections that transform the predominantly laminar flow into a turbulent one that delays the onset of boundary layer separation and therefore reduces pressure drag.
The second, and more dramatic, effect of boundary layer separation on aircraft wings is aerodynamic stall. At relatively low angles of attack, for example during cruise, the adverse pressure gradient acting on the top surface of the wing is benign and the boundary layer remains attached over the entire surface. As the angle of attack is increased, however, so does the strength of the adverse pressure gradient. At some point the boundary layer will start to separate near the trailing edge of the wing, and this separation point will move further upstream as the angle of attack is increased. If an aerofoil is positioned at a sufficiently large angle of attack, separation will occur very close to the point of maximum thickness of the aerofoil and a large wake will develop behind the point of separation. This wake redistributes the flow over the rest of the aerofoil and thereby seriously impairs the lift generated by the wing; this dramatic loss of lift is known as aerodynamic stall. Due to the high pressure drag induced by the wake, the aircraft can lose further airspeed, pushing the separation point further upstream and creating a deleterious feedback loop in which the aircraft literally starts to fall out of the sky in an uncontrolled spiral. To prevent total loss of control, the pilot needs to reattach the boundary layer as quickly as possible, which is achieved by reducing the angle of attack and pointing the nose of the aircraft down to gain speed.
The lift produced by a wing is given by

$$L = \frac{1}{2} \rho V^2 S C_L$$
where $\rho$ is the density of the surrounding air, $V$ is the flight velocity, $S$ is the wing area and $C_L$ is the lift coefficient of the aerofoil shape. The lift coefficient of a specific aerofoil shape increases linearly with the angle of attack up to a maximum point $C_{L,max}$. The maximum lift coefficient of a typical aerofoil is around 1.4 at an angle of attack of around 15°, which is bounded by the critical angle of attack where the stall condition occurs.
During cruise the angle of attack is relatively small (just a few degrees) as sufficient lift is guaranteed by the high flight velocity $V$. Furthermore, we actually want to maintain a small angle of attack as this minimises the pressure drag induced by boundary layer separation. At takeoff and landing, however, the flight velocity is much lower, which means that the lift coefficient has to be increased by setting the wings at a more aggressive angle of attack. The issue is that even with a near-maximum lift coefficient of 1.4, large jumbo jets have a hard time achieving the necessary lift force at safe landing speeds. While it would also be possible to increase the wing area $S$, such a solution would have a detrimental effect on the aircraft weight and therefore fuel efficiency.
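To put rough numbers on this trade-off, the lift equation can be evaluated for a cruise-like and a landing-like case. All values below (wing area, speeds, lift coefficients, densities) are illustrative assumptions, not figures from this post:

```python
# Sketch of the lift equation L = 1/2 * rho * V^2 * S * C_L.
# All numbers are illustrative assumptions for a large airliner.

def lift(rho, V, S, C_L):
    """Lift force in newtons."""
    return 0.5 * rho * V**2 * S * C_L

S = 510.0   # assumed wing area, m^2

# Cruise: fast flight in thin air, modest lift coefficient
L_cruise = lift(0.38, 250.0, S, 0.5)    # rho ~0.38 kg/m^3 at ~11 km altitude

# Landing: slow flight at sea level; slats and flaps push C_L
# well above the plain-aerofoil maximum of ~1.4
L_landing = lift(1.225, 70.0, S, 2.4)

print(f"Cruise lift:  {L_cruise/1e6:.2f} MN")
print(f"Landing lift: {L_landing/1e6:.2f} MN")
```

Both cases deliver lift of the same order of magnitude, which is exactly the point: the high-lift devices and aggressive angle of attack at landing compensate for the much lower flight speed.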
A much more elegant solution is leading-edge slats and trailing-edge flaps. A slat is a thin, curved aerofoil fitted to the front of the wing, intended to induce a secondary airflow through the gap between the slat and the leading edge. The air accelerates through this gap and thereby injects high-momentum fluid into the boundary layer on the upper surface, delaying the onset of flow reversal in the boundary layer. Similarly, one or two curved aerofoils may be placed at the rear of the wing, where the high-momentum fluid passing through the gaps reinvigorates the flow that has been slowed down by the adverse pressure gradient. These devices can typically double the maximum lift coefficient, allowing big jumbo jets to land and take off at relatively low runway speeds.
The next time you are sitting close to the wings, observe how these devices are retracted after take-off and deployed before landing. In fact, birds have similar devices on their wings. The wings of bats, too, are composed of thin and flexible membranes reinforced by small bones, which roughen the membrane surface, help to transition the flow from laminar to turbulent, and prevent boundary layer separation. As is so often the case in engineering design, a lot of inspiration can be taken from nature!
In the early 20th century, a group of German scientists led by Ludwig Prandtl at the University of Göttingen began studying the fundamental nature of fluid flow and subsequently laid the foundations for modern aerodynamics. In 1904, just a year after the first flight by the Wright brothers, Prandtl published the first paper on a new concept, now known as the boundary layer. In the following years, Prandtl worked on supersonic flow and spent most of his time developing the foundations for wing theory, ultimately leading to the famous red triplane flown by Baron von Richthofen, the Red Baron, during WWI.
Prandtl’s key insight in the development of the boundary layer was that as a first-order approximation it is valid to separate any flow over a surface into two regions: a thin boundary layer near the surface where the effects of viscosity cannot be ignored, and a region outside the boundary layer where viscosity is negligible. The nature of the boundary layer that forms close to the surface of a body significantly influences how the fluid and body interact. Hence, an understanding of boundary layers is essential in predicting how much drag an aircraft experiences, and is therefore a mandatory requirement in any first course on aerodynamics.
Boundary layers develop due to the inherent stickiness or viscosity of the fluid. As a fluid flows over a surface, the fluid sticks to the solid boundary; this is the so-called “no-slip condition”. As sudden jumps in flow velocity are not physically possible, there must exist a small region within the fluid, close to the body over which the fluid is flowing, where the flow velocity increases from zero to the mainstream velocity. This region is the so-called boundary layer.
The U-shaped profile of the boundary layer can be visualised by suspending a straight line of dye in water and allowing fluid flow to distort the line of dye (see below). The distance of a distorted dye particle to its original position is proportional to the flow velocity. The fluid is stationary at the wall, increases in velocity moving away from the wall, and then converges to the constant mainstream value at a distance equal to the thickness of the boundary layer.
To further investigate the nature of the flow within the boundary layer, let’s split the boundary layer into small regions parallel to the surface and assume a constant fluid velocity within each of these regions (essentially the arrows in the figure above). We have established that the boundary layer is driven by viscosity. Therefore, adjacent regions within the boundary layer that move at slightly different velocities must exert a frictional force on each other. This is analogous to running your hand over a table-top surface and feeling a frictional force on the palm of your hand. The shear stresses inside the fluid are a function of the viscosity or stickiness of the fluid, $\mu$, and the velocity gradient, $\mathrm{d}u/\mathrm{d}y$:

$$\tau = \mu \frac{\mathrm{d}u}{\mathrm{d}y}$$

where $y$ is the coordinate measuring the distance from the solid boundary, also called the “wall”.
Prandtl first noted that shearing forces are negligible in mainstream flow due to the low viscosity of most fluids and the near uniformity of flow velocities in the mainstream. In the boundary layer, however, appreciable shear stresses driven by steep velocity gradients will arise.
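As a minimal numerical sketch of Newton’s law of viscosity, the wall shear stress can be estimated from a finite-difference approximation of the near-wall velocity gradient; the sampled velocity and distance below are illustrative assumptions:

```python
# Sketch: Newton's law of viscosity, tau = mu * du/dy.
# The near-wall velocity sample is an illustrative assumption.

mu_air = 1.81e-5   # dynamic viscosity of air at ~15 C, Pa*s

# Approximate the near-wall velocity gradient by a finite difference:
# assume the flow speed rises from 0 at the wall to 3 m/s at y = 0.5 mm.
u_near_wall = 3.0    # m/s (assumed)
dy = 0.5e-3          # m

du_dy = (u_near_wall - 0.0) / dy      # 6000 1/s
tau_wall = mu_air * du_dy             # shear stress at the wall
print(f"Wall shear stress: {tau_wall:.3f} Pa")
```

The resulting stress is tiny per unit area, but it acts over the entire wetted surface of a wing, which is why skin-friction drag matters for streamlined bodies.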
So the pertinent question is: Do these two regions influence each other or can they be analysed separately?
Prandtl argued that for flow around streamlined bodies the thickness of the boundary layer is an order of magnitude smaller than the characteristic dimensions of the mainstream flow, and that the pressure and velocity fields around a streamlined body may therefore be analysed disregarding the presence of the boundary layer.
Eliminating the effect of viscosity in the free flow is an enormously helpful simplification in analysing the flow. Prandtl’s assumption allows us to model the mainstream flow using Bernoulli’s equation or the equations of compressible flow that we have discussed before, and this was a major impetus in the rapid development of aerodynamics in the 20th century. Today, the engineer has a suite of advanced computational tools at hand to model the viscous nature of the entire flow. However, the idea of partitioning the flow into an inviscid mainstream and a viscous boundary layer is still essential for gaining fundamental insight into basic aerodynamics.
Laminar and turbulent boundary layers
One simple example that nicely demonstrates the physics of boundary layers is the problem of flow over a flat plate.
The fluid streams in from the left with a free-stream velocity $U_\infty$ and, due to the no-slip condition, slows down close to the surface of the plate. Hence, a boundary layer starts to form at the leading edge. As the fluid proceeds further downstream, large shearing stresses and velocity gradients develop within the boundary layer. Proceeding further downstream, more and more fluid is slowed down and therefore the thickness, $\delta$, of the boundary layer grows. As there is no sharp line splitting the boundary layer from the free stream, the assumption is typically made that the boundary layer extends to the point where the fluid velocity reaches 99% of the free-stream velocity. At all times, and at any distance $x$ from the leading edge, the thickness of the boundary layer is small compared to $x$.
Close to the leading edge the flow is entirely laminar, meaning the fluid can be imagined to travel in strata, or lamina, that do not mix. In essence, layers of fluid slide over each other without any interchange of fluid particles between adjacent layers. The flow speed within each imaginary lamina is constant and increases with the distance from the surface. The shear stress within the fluid is therefore entirely a function of the viscosity and the velocity gradients.
Further downstream, the laminar flow becomes unstable and fluid particles start to move perpendicular to the surface as well as parallel to it. Therefore, the previously stratified flow starts to mix up and fluid particles are exchanged between adjacent layers. Due to this seemingly random motion this type of flow is known as turbulent. In a turbulent boundary layer, the thickness increases at a faster rate because of the greater extent of mixing within the main flow. The transverse mixing of the fluid and exchange of momentum between individual layers induces extra shearing forces known as the Reynolds stresses. However, the random irregularities and mixing in turbulent flow cannot occur in the close vicinity of the surface, and therefore a viscous sublayer forms beneath the turbulent boundary layer in which the flow is laminar.
An excellent example contrasting the differences in turbulent and laminar flow is the smoke rising from a cigarette.
As smoke rises it transforms from a region of smooth laminar flow to a region of unsteady turbulent flow. The nature of the flow, laminar or turbulent, is captured very efficiently in a single parameter known as the Reynolds number

$$Re = \frac{\rho U l}{\mu}$$

where $\rho$ is the density of the fluid, $U$ the local flow velocity, $l$ a characteristic length describing the geometry, and $\mu$ the viscosity of the fluid.
There exists a critical Reynolds number in the region of $5 \times 10^5$ at which the flow transitions from laminar to turbulent. For the plate example above, the characteristic length is the distance $x$ from the leading edge. Therefore $x$ increases as we proceed downstream, increasing the Reynolds number $Re_x$ until at some point the flow transitions from laminar to turbulent. The faster the free-stream velocity $U_\infty$, the shorter the distance from the leading edge at which this transition occurs.
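The transition location on a flat plate follows directly from this critical Reynolds number. A short sketch, using the commonly quoted $Re_{crit} \approx 5 \times 10^5$ and assumed sea-level air properties:

```python
# Sketch: where does the flat-plate boundary layer trip to turbulence?
# Re_x = rho * U * x / mu reaches Re_crit at x = Re_crit * mu / (rho * U).
# Fluid properties are illustrative assumptions for air at sea level.

rho = 1.225       # density, kg/m^3
mu = 1.81e-5      # dynamic viscosity, Pa*s
Re_crit = 5.0e5   # typical flat-plate transition Reynolds number

def transition_point(U):
    """Distance from the leading edge (m) where Re_x reaches Re_crit."""
    return Re_crit * mu / (rho * U)

# The faster the free stream, the earlier the boundary layer trips:
for U in (10.0, 50.0, 100.0):
    print(f"U = {U:5.1f} m/s -> transition at x = {transition_point(U)*100:5.1f} cm")
```

Note how the transition point moves towards the leading edge as the free-stream speed increases, exactly as stated above.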
Due to the different degrees of fluid mixing in laminar and turbulent flows, the shape of the two boundary layers is different. The increase in fluid velocity moving away from the surface (the y-direction) must be continuous in order to guarantee a unique value of the velocity gradient $\mathrm{d}u/\mathrm{d}y$. For a discontinuous change in velocity, the velocity gradient, and therefore the shearing forces, would be infinite, which is obviously not feasible in reality. Hence, the velocity increases smoothly from zero at the wall in some form of parabolic distribution. The further we move away from the wall, the smaller the velocity gradient becomes and the retarding action of the shearing stresses decreases.
In the case of laminar flow, the shape of the boundary layer is indeed quite smooth and does not change much over time. For a turbulent boundary layer however, only the average shape of the boundary layer approximates the parabolic profile discussed above. The figure below compares a typical laminar layer with an averaged turbulent layer.
In the laminar layer, the kinetic energy of the free-flowing fluid is transmitted to the slower moving fluid near the surface purely by means of viscosity, i.e. frictional shear stresses. Hence, an imaginary fluid layer close to the free stream pulls along an adjacent layer closer to the wall, and so on. As a result, significant portions of fluid in the laminar boundary layer travel at a reduced velocity. In a turbulent boundary layer, the kinetic energy of the free stream is also transmitted via Reynolds stresses, i.e. momentum exchanges due to the intermingling of fluid particles. This leads to a more rapid rise of the velocity away from the wall and a more uniform fluid velocity throughout the entire boundary layer. Due to the presence of the viscous sublayer in the close vicinity of the wall, the wall shear stress in a turbulent boundary layer is governed by the usual equation $\tau_w = \mu \left(\mathrm{d}u/\mathrm{d}y\right)_{y=0}$. This means that, because of the greater velocity gradient at the wall, the frictional shear stress in a turbulent boundary layer is greater than in a purely laminar boundary layer.
Skin friction drag
Fluids can only exert two types of forces on a body: normal forces due to pressure and tangential forces due to shear stress. Pressure drag predominates when a body is oriented perpendicular to the direction of fluid flow, whereas skin friction drag is the frictional shear force exerted on a body aligned parallel to the flow, and is therefore a direct result of the viscous boundary layer.
Due to the greater shear stress at the wall, the skin friction drag is greater for turbulent boundary layers than for laminar ones. Skin friction drag is predominant in streamlined aerodynamic profiles, e.g. fish, airplane wings, or any other shape where most of the surface area is aligned with the flow direction. For these profiles, maintaining a laminar boundary layer is preferable. For example, the crescent-shaped lunate tail of many sea mammals and fish has evolved to maintain a relatively constant laminar boundary layer when oscillating from side to side.
One of Prandtl’s PhD students, Paul Blasius, developed an analytical expression for the shape of a laminar boundary layer over a flat plate without a pressure gradient. Blasius’ expression has been verified by experiments many times over and is considered a standard in fluid dynamics. The two important quantities of interest to the designer are the boundary layer thickness $\delta$ and the shear stress at the wall $\tau_w$ at a distance $x$ from the leading edge. The boundary layer thickness is given by

$$\delta = \frac{5.0\,x}{\sqrt{Re_x}}$$
with $Re_x = \rho U_\infty x / \mu$ the Reynolds number at a distance $x$ from the leading edge. Due to the presence of $x$ in the numerator and $\sqrt{x}$ (via $\sqrt{Re_x}$) in the denominator, the boundary layer thickness scales in proportion to $\sqrt{x}$, and hence increases rapidly at first before settling down.
Next, we can use a similar expression to determine the shear stress at the wall. To do this we first define another non-dimensional number known as the skin-friction drag coefficient

$$c_f = \frac{\tau_w}{\frac{1}{2}\rho U_\infty^2}$$
which is the value of the shear stress at the wall normalised by the dynamic pressure of the free flow. According to Blasius, the skin-friction drag coefficient is simply governed by the Reynolds number

$$c_f = \frac{0.664}{\sqrt{Re_x}}$$
This simple example reiterates the power of dimensionless numbers we mentioned before when discussing wind tunnel testing. Even though the shear stress at the wall is a dimensional quantity, we have been able to express it merely as a function of two non-dimensional quantities, $c_f$ and $Re_x$. By combining the two equations above, the shear stress can be written as

$$\tau_w = \frac{0.332\,\rho U_\infty^2}{\sqrt{Re_x}}$$
and therefore scales in proportion to $1/\sqrt{x}$, tending to zero as the distance from the leading edge increases. The value of $\tau_w$ is the frictional shear stress at a specific point $x$ from the leading edge. To find the total amount of drag exerted on the plate we need to sum up (integrate) all contributions of $\tau_w$ over the length of the plate

$$D_f = \int_0^L \tau_w \,\mathrm{d}x = \frac{0.664\,\rho U_\infty^2 L}{\sqrt{Re_L}}$$
where $Re_L$ is now the Reynolds number of the free stream calculated using the total length of the plate $L$. Similar to the local skin-friction coefficient we can define a total skin-friction drag coefficient

$$C_f = \frac{D_f}{\frac{1}{2}\rho U_\infty^2 L} = \frac{1.328}{\sqrt{Re_L}}$$
Hence, $c_f$ can be used to calculate the local amount of shear stress at a point $x$ from the leading edge, whereas $C_f$ is used to find the total amount of skin friction drag acting on the surface.
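The Blasius flat-plate relations (boundary layer thickness, local and total skin-friction coefficient) can be collected into a short numerical sketch. The flow speed and plate size below are assumptions, chosen so that the flow stays below the transition Reynolds number and the laminar relations remain valid:

```python
import math

# Sketch of the laminar (Blasius) flat-plate relations:
#   delta(x) = 5.0 * x / sqrt(Re_x)     boundary layer thickness
#   c_f(x)   = 0.664 / sqrt(Re_x)       local skin-friction coefficient
#   C_f      = 1.328 / sqrt(Re_L)       total skin-friction coefficient
# Fluid properties, speed and plate size are illustrative assumptions,
# chosen so the flow stays laminar (Re < 5e5 everywhere on the plate).

rho, mu = 1.225, 1.81e-5      # air at sea level: kg/m^3, Pa*s
U, L_plate = 10.0, 0.5        # free-stream speed m/s, plate length m

def Re(x):
    """Reynolds number at a distance x from the leading edge."""
    return rho * U * x / mu

def delta(x):
    """Laminar boundary layer thickness at x (Blasius)."""
    return 5.0 * x / math.sqrt(Re(x))

def c_f(x):
    """Local skin-friction coefficient at x (Blasius)."""
    return 0.664 / math.sqrt(Re(x))

# Local quantities at mid-plate
x = 0.25
tau_w = c_f(x) * 0.5 * rho * U**2   # shear stress recovered from c_f
print(f"delta({x} m) = {delta(x)*1000:.2f} mm, tau_w = {tau_w:.4f} Pa")

# Total skin-friction drag on one side of the plate (assumed width 1 m)
C_f = 1.328 / math.sqrt(Re(L_plate))
drag = C_f * 0.5 * rho * U**2 * (L_plate * 1.0)
print(f"Total skin-friction drag: {drag:.4f} N")
```

The boundary layer turns out to be only a few millimetres thick and the total drag a small fraction of a newton, which illustrates just how thin the viscous region is compared to the plate itself.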
Unfortunately, due to the chaotic nature of turbulent flow, the boundary layer thickness and skin-friction coefficient of a turbulent boundary layer cannot be determined as easily from theory. Instead, we have to rely on experimental results to define empirical approximations of these quantities. The generally accepted relations are

$$\delta = \frac{0.37\,x}{Re_x^{1/5}} \qquad \text{and} \qquad C_f = \frac{0.074}{Re_L^{1/5}}$$
Therefore the thickness of a turbulent boundary layer grows in proportion to $x^{4/5}$ (faster than the $x^{1/2}$ relation for laminar flow), while the total skin-friction drag coefficient varies as $1/Re_L^{1/5}$, which falls off more slowly with Reynolds number than the laminar $1/\sqrt{Re_L}$ relation. Hence, the total skin-friction drag coefficient confirms the qualitative observation we made before that the frictional shear stresses in a turbulent boundary layer are greater than those in a laminar one.
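A quick sketch comparing the laminar and turbulent flat-plate relations at the same plate Reynolds number makes the difference concrete; the chosen Reynolds number and plate length are illustrative assumptions:

```python
# Comparison of the laminar (Blasius) and turbulent (empirical) flat-plate
# relations at the same Reynolds number:
#   laminar:   delta = 5.0 * x / Re_x**(1/2),   C_f = 1.328 / Re_L**(1/2)
#   turbulent: delta = 0.37 * x / Re_x**(1/5),  C_f = 0.074 / Re_L**(1/5)
# The plate length and Reynolds number are illustrative assumptions.

Re_L = 1.0e6   # plate Reynolds number (high enough to be turbulent in practice)
x = 1.0        # plate length in metres

delta_lam = 5.0 * x / Re_L**0.5
delta_turb = 0.37 * x / Re_L**0.2
C_f_lam = 1.328 / Re_L**0.5
C_f_turb = 0.074 / Re_L**0.2

print(f"delta: laminar {delta_lam*1000:.1f} mm vs turbulent {delta_turb*1000:.1f} mm")
print(f"C_f:   laminar {C_f_lam:.5f}  vs turbulent {C_f_turb:.5f}")
print(f"turbulent/laminar drag ratio: {C_f_turb/C_f_lam:.1f}x")
```

At this Reynolds number the turbulent boundary layer is several times thicker and produces roughly three to four times the skin-friction drag of its (hypothetical) laminar counterpart, in line with the qualitative argument above.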
Skin friction drag and wing design
The unfortunate fact for aircraft designers is that turbulent flow is much more common in nature than laminar flow. The tendency of flow to become random rather than layered can be interpreted in a similar way to the second law of thermodynamics: just as the entropy of a closed system only ever increases, a flow left to its own devices will tend from order to disorder. And so it is with fluid flow.
However, the shape of a wing can be designed in such a manner as to encourage the formation of laminar flow. The P-51 Mustang WWII fighter was the first production aircraft designed to operate with laminar flow over its wings. The problem back then, and to this day, is that laminar flow is incredibly unstable. Protruding rivet heads or splattered insects on the wing surface can easily “trip” a laminar boundary layer into turbulence, and preempt any clever design the engineer concocted. As a result, most of the laminar flow wings that have been designed based on idealised conditions and smooth wing surfaces in a wind tunnel have not led to the sweeping improvements originally imagined.
For many years NASA conducted a series of experiments to design a natural laminar flow (NLF) aircraft. Some of their research suggested wrapping a glove around the leading edge of a Boeing 757 just outboard of the engine. The modified shape of this wing promotes laminar flow at the high altitudes and almost sonic flight conditions of a typical jet airliner. To prevent the build-up of insect splatter at take-off, a sheath of paper was wrapped around the glove, which was then torn away at altitude. Even though the range of such an aircraft could be increased by almost 15%, this rather elaborate scheme never made it into production.
In the mid 1990s NASA fitted active test panels to the wings of two F-16s in order to test the possibility of achieving laminar flow on swept delta wings flying at supersonic speed; in NASA's view a likely wing configuration for future supersonic cruise aircraft. The active test panels essentially consisted of titanium covers perforated with millions of microscopic holes, which were attached to the leading edge and the top surface of the wing. The role of these panels was to suck most of the boundary layer off the top surface through the perforations using an internal pumping system. Removing air from the boundary layer decreased its thickness and thereby promoted the stability of the laminar boundary layer over the wing. This Supersonic Laminar Flow Control (SLFC) project successfully maintained laminar flow over a large portion of the wing during supersonic flight of up to Mach 1.6.
While these elaborate schemes have not quite found their way into mass production (probably due to their cost, maintenance problems and risk), laminar flow wings remain a very viable future technology for meeting the greenhouse-gas reductions stipulated by environmental legislation. An important driver in reducing greenhouse gases is maximising the lift-to-drag ratio of the wings, and therefore I would expect research to continue in this field for some time to come.
Despite growing computer power and the increasing sophistication of computational models, any design meant to operate in the real world requires some form of experimental validation. The idealist modeller, me included, wants to believe that computer simulation will replace all forms of experimental testing and thereby allow for much faster design cycles. The issue with this is that random imperfections, and most importantly their concurrence, are very hard to account for robustly, especially when operating in nonlinear domains. As a result, the quantity and quality of both computational and experimental validation have increased in lockstep over the last few decades.
In “The Wind and Beyond”, the autobiography of Theodore von Kármán, one of the pre-eminent aerospace engineers and scientists of the 20th century, von Kármán recounts a telling episode regarding the role of wind tunnel testing in the development of the Douglas DC-3, one of the first successful American commercial airliners. Early versions of the DC-3 faced a problem with aerodynamic instabilities that could throw the airplane out of control. A similar problem had been noticed earlier on the Northrop Alpha airplane, which, like the DC-3, featured a wing that was attached to the underside of the fuselage. When two of von Kármán’s assistants, Major Klein and Clark Millikan, subjected a model of the Alpha to high winds in a wind tunnel, the model aircraft started to sway and shake violently. In the ensuing investigation, Klein and Millikan found that the sharp corner at the connection between the wing and fuselage decelerated the air as it flowed past, causing boundary layer separation and a wake of eddies. As these eddies broke away from the trailing edge of the wing, they adversely impacted the flow over the horizontal stabiliser and vertical tail fin at the rear of the aircraft and resulted in uncontrollable vibrations.

Fortunately, Theodore von Kármán was world-renowned, among other things, for his work on eddies and especially the so-called von Kármán Vortex Street. Von Kármán therefore intuitively realised what had to be done to eliminate the creation of these eddies. Von Kármán and his colleagues fitted a small fairing, a filling if you like, to the connection between the wing and the fuselage to smooth out the eddies. This became one of the textbook examples of how wind tunnel findings could be applied in a practical way to iron out problems with an aircraft. When French engineers learned of the device from von Kármán at a conference a few years later, they were so enamoured that such a simple idea could solve such a big problem that they named the fillet a “Kármán”.
When testing the aerodynamics of aircraft, the wind tunnel is indispensable. The Wright brothers built their own wind tunnel to validate the research data on airfoils that had been recorded throughout the 19th century. One of the most important pieces of equipment in the early days of NACA (now NASA) was a variable-density wind tunnel, which, by pressurising the air, allowed realistic operating conditions to be simulated on 1/20th geometrically-scaled models.

This brings us to an important point: How do you test the aerodynamics of an aircraft in a wind-tunnel?
Do you need to build individual wind-tunnels big enough to fit a particular aircraft? Or can you use a smaller multi-purpose wind tunnel to test small-scale models of the actual aircraft? If this is the case, how representative is the collected data of the actual flying aircraft?
Luckily we can make use of some clever mathematics, known as dimensional analysis, to make our life a little easier. The key idea behind dimensional analysis is to define a set of dimensionless parameters that govern the physical behaviour of the phenomenon being studied, purely by identifying the fundamental dimensions (time, length and mass in aerodynamics) that are at play. This is best illustrated by an example.
The United States developed the atomic bomb during WWII under the greatest security precautions. Even many years after the first test in 1945 in the desert of New Mexico, the total amount of energy released during the explosion remained a closely guarded secret. The British scientist G.I. Taylor then famously estimated the total amount of energy released by the explosion simply by using publicly available pictures showing the explosion plume at different time stamps after detonation.
By assuming that the shock wave could be modelled as a perfect sphere, Taylor posited that the size of the plume, i.e. its radius $R$, should depend on the energy of the explosion $E$, the time after detonation $t$ and the density of the surrounding air $\rho$.
In dimensional analysis we proceed to define the fundamental units or dimensions that quantify our variables. So in this case:
- Radius is defined by a distance, and therefore the units are length, i.e. $[R] = L$
- The units of time are, you guessed it, time, i.e. $[t] = T$
- Energy is force times distance, where a force is mass times acceleration, and acceleration is distance divided by time squared, i.e. $[E] = \frac{ML^2}{T^2}$
- Density is mass divided by volume, where volume is a distance cubed, i.e. $[\rho] = \frac{M}{L^3}$
Having determined all our variables in the fundamental dimensions of distance, time and mass, we now attempt to relate the radius of the explosion to the energy, density and time. If we assume that the radius is proportional to these three variables, then dividing the radius by the product of the other three variables must result in a dimensionless number $C$. Hence,

$$C = \frac{R}{E^a \rho^b t^c}$$

Or alternatively, all fundamental dimensions in the above fraction must cancel:

$$\frac{L}{\left(\frac{ML^2}{T^2}\right)^a \left(\frac{M}{L^3}\right)^b \left(T\right)^c} = 1$$

For all units to disappear we need:

$$M:\; a + b = 0 \qquad L:\; 1 - 2a + 3b = 0 \qquad T:\; 2a - c = 0$$

and solving this system gives:

$$a = \frac{1}{5} \qquad b = -\frac{1}{5} \qquad c = \frac{2}{5}$$

Therefore the shock wave radius is given by

$$R = C\left(\frac{E t^2}{\rho}\right)^{1/5}$$

and by re-arranging

$$E = \frac{\rho R^5}{C^5 t^2}$$
So, we have an expression that relates the energy of the explosion to the radius, the density of air and time after detonation, which were all available to Taylor from the individual time stamps (these provided a diameter estimate and the time after detonation. The density of the air was known).
In the example above, specific calculations of $E$ also require an estimate of the constant $C$. In aerodynamics, we are typically interested in quantifying the constant itself using the variables at hand. Hence, by analogy with the above example, we would know the energy, density, radius and time, and then calculate a value for the constant under these conditions. As the constant is dimensionless, it allows us to make an unbiased judgement of the flow conditions for entirely different and unrelated problems.
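Taylor's estimate is easy to reproduce numerically. The sketch below assumes $C \approx 1$ (a good approximation for air, as Taylor showed) and uses illustrative frame values of the kind read off the published photographs, not figures from this post:

```python
# Taylor's blast-wave relation, R = C * (E * t**2 / rho)**(1/5), rearranged to
# E = rho * R**5 / (C**5 * t**2). C is close to 1 for air.

def blast_energy(radius_m, time_s, rho=1.25, C=1.0):
    """Estimate the energy release (J) from one photo frame of the fireball."""
    return rho * radius_m ** 5 / (C ** 5 * time_s ** 2)

# Illustrative frame: fireball radius ~140 m at t = 25 ms after detonation.
E = blast_energy(radius_m=140.0, time_s=0.025)
print(E)             # on the order of 1e14 J
print(E / 4.184e12)  # expressed in kilotons of TNT (1 kt = 4.184e12 J)
```

The answer lands in the tens of kilotons, the same order of magnitude as Taylor's famous estimate.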
The most famous dimensionless number in aerodynamics is probably the Reynolds number which quantifies the nature of the flow, i.e. is it laminar (nice and orderly in layers that do not mix), or is it turbulent, or somewhere in between?
In determining aerodynamic forces, two of the important variables we want to understand and quantify are the lift and drag. Particularly, we want to determine how the lift and drag vary with independent parameters such as the flight velocity, wing area and the properties of the surrounding air.
Using a similar method as above, it can be shown that the two primary dimensionless variables are the lift coefficient ($C_L$) and the drag coefficient ($C_D$), which are defined in terms of lift ($L$), drag ($D$), flight velocity ($V$), static fluid density ($\rho$) and wing area ($S$):

$$C_L = \frac{L}{\frac{1}{2}\rho V^2 S} \qquad C_D = \frac{D}{\frac{1}{2}\rho V^2 S}$$

where $q = \frac{1}{2}\rho V^2$ is known as the dynamic pressure of a fluid in motion. When the dynamic pressure is multiplied by the wing area $S$, we are left with units of force, which cancel the units of lift ($L$) and drag ($D$), thus making $C_L$ and $C_D$ dimensionless.
As long as the geometry of our vehicle remains the same (scaling up and down at a constant ratio of relative dimensions, e.g. length, width, height, wing span, chord etc.), these two parameters only depend on two other dimensionless variables: the Reynolds number

$$Re = \frac{\rho V l}{\mu}$$

where $V$ and $l$ are a characteristic flow velocity and length (usually aerofoil chord or wingspan) and $\mu$ is the dynamic viscosity of the fluid, and the Mach number

$$M = \frac{V}{a}$$

which is the ratio of aircraft speed $V$ to the local speed of sound $a$.
Let’s recap what we have developed until now. We have two dimensionless parameters, the lift and drag coefficients, which measure the amount of lift and drag an airfoil or flight vehicle creates, normalised by the conditions of the surrounding fluid (the dynamic pressure $\frac{1}{2}\rho V^2$) and the geometry of the lifting surface (the wing area $S$). Hence, these dimensionless parameters allow us to make a fair comparison of the performance of different airfoils regardless of their size. Comparing the $C_L$ and $C_D$ of two different airfoils requires that the operating conditions be comparable. They do not have to be exactly the same in terms of air speed, density and temperature, but their dimensionless counterparts, namely the Mach number and Reynolds number, need to be equal.
As an example consider a prototype aircraft flying at altitude and a scaled version of the same aircraft in a wind tunnel. The model and prototype aircraft have the same geometrical shape and only vary in terms of their absolute dimensions and the operating conditions. If the values of Reynolds number and Mach number of the flow are the same for both, then the flows are called dynamically similar, and as the geometries of the two aircraft are scaled versions of each other, it follows that the lift and drag coefficients must be the same too. This concept of dynamic similarity is crucial for wind-tunnel experiments as it allows engineers to create small-scale models of full-sized aircraft and reliably predict their aerodynamic qualities in a wind tunnel.
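The bookkeeping behind dynamic similarity is easily sketched in a few lines of code. The numbers are illustrative, not from the post; the point is that running a pressurised tunnel at the same speed and temperature matches both Re and M on a 1/20-scale model:

```python
import math

GAMMA, R_AIR = 1.4, 287.0  # heat-capacity ratio and specific gas constant, dry air

def reynolds(rho, V, length, mu):
    """Reynolds number Re = rho * V * l / mu."""
    return rho * V * length / mu

def mach(V, T):
    """Mach number M = V / sqrt(gamma * R * T)."""
    return V / math.sqrt(GAMMA * R_AIR * T)

# Illustrative sea-level values: full-scale wing vs a 1/20-scale model
# tested at the same speed and temperature.
rho, V, chord, mu, T = 1.225, 100.0, 4.0, 1.8e-5, 288.0
Re_full, M_full = reynolds(rho, V, chord, mu), mach(V, T)

# Same V and T keeps the Mach number equal; matching Re on a 1/20-scale model
# then requires 20x the density -- exactly what a pressurised
# variable-density tunnel provides.
Re_model, M_model = reynolds(20 * rho, V, chord / 20, mu), mach(V, T)
assert math.isclose(Re_model, Re_full) and math.isclose(M_model, M_full)
```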
This of course means that the wind tunnel needs to be operated at entirely different temperatures and pressures than the operating conditions at altitude. As long as the dimensions of the model remain in proportion upon scaling up or down, the model wing area $S$ scales with the square of the wing chord $c$, i.e. $S \propto c^2$. We know from the explanation above that for a certain combination of Mach number and Reynolds number the lift and drag coefficients are fixed.
Using the definitions of $C_L$ and $C_D$, the lift is given by

$$L = \frac{1}{2}\rho V^2 S\, C_L$$

and the drag by

$$D = \frac{1}{2}\rho V^2 S\, C_D$$
The lift and drag created by an aircraft or model under constant Mach number and Reynolds number therefore scale with the wing area, or equivalently the wing chord squared. Rearranging the equation for the Reynolds number, the wing chord can in fact be shown to depend on the operating temperature and pressure of the fluid flow. So by rearranging the Reynolds number equation:

$$c = \frac{Re\,\mu}{\rho V}$$

and from the fundamental gas equation

$$p = \rho R T \quad \Rightarrow \quad \rho = \frac{p}{RT}$$

and the Mach number we have

$$V = M a = M\sqrt{\gamma R T}$$

such that we can reformulate the chord length as follows:

$$c = \frac{Re\,\mu\, R T}{p\, M \sqrt{\gamma R T}} = \frac{Re\,\mu}{p\,M}\sqrt{\frac{RT}{\gamma}}$$
Hence, the chord of the model is inversely proportional to the fluid pressure, and because the viscosity of air grows roughly as $\mu \propto T^{3/2}$, the chord is approximately proportional to the square of the fluid temperature. Thus, maximising the pressure and reducing the temperature (i.e. maximising the fluid density) reduces the required size of the model and the overall aerodynamic forces. This was the concept behind NACA’s early variable-density tunnel and is still exploited in modern cryogenic wind tunnels.
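This scaling can be sketched numerically. The helper below is illustrative: it combines the chord formula above with Sutherland's law for the viscosity of air (an assumption introduced here, not discussed in the post) to show how pressurisation and cryogenic cooling shrink the required model:

```python
import math

def mu_sutherland(T):
    """Dynamic viscosity of air (Pa*s) from Sutherland's law."""
    return 1.716e-5 * (T / 273.15) ** 1.5 * (273.15 + 110.4) / (T + 110.4)

def required_chord(Re, M, p, T, gamma=1.4, R=287.0):
    """Model chord that matches Re and M at tunnel pressure p (Pa) and temperature T (K).

    From Re = rho * V * c / mu with rho = p / (R * T) and V = M * sqrt(gamma * R * T):
    c = Re * mu / (p * M) * sqrt(R * T / gamma)
    """
    return Re * mu_sutherland(T) * math.sqrt(R * T / gamma) / (p * M)

# Illustrative target: Re = 2e7 at M = 0.8. Pressurising to 20 atm shrinks the
# model 20-fold; cooling to cryogenic temperatures shrinks it further still.
c_ambient = required_chord(2e7, 0.8, p=101325.0, T=288.0)
c_pressurised = required_chord(2e7, 0.8, p=20 * 101325.0, T=288.0)
c_cryogenic = required_chord(2e7, 0.8, p=20 * 101325.0, T=120.0)
print(c_ambient, c_pressurised, c_cryogenic)
```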
(Caveat: There is a little bit more maths in this post than usual. I have tried to explain the equations as well as possible using diagrams. In any case, the real treat is at the end of the post where I go through the design of rocket nozzles. However, understanding this design methodology is naturally easier having first read what comes before.)
One of the most basic equations in fluid dynamics is Bernoulli’s equation: the relationship between pressure and velocity in a moving fluid. It is so fundamental to aerodynamics that it is often cited (incorrectly!) when explaining how aircraft wings create lift. The fact is that Bernoulli’s equation is not a fundamental equation of aerodynamics at all, but a particular case of the conservation of energy applied to a fluid of constant density.
The underlying assumption of constant density is only valid for low-speed flows, but does not hold for high-speed flows, where the kinetic energy causes changes in the gas density. As the speed of a fluid approaches the speed of sound, the properties of the fluid undergo changes that cannot be modelled accurately using Bernoulli’s equation. This type of flow is known as compressible. As a rule of thumb, the demarcation line for compressibility is around 30% of the speed of sound, or around 100 m/s for dry air close to Earth’s surface. This means that air flowing over a normal passenger car can be treated as incompressible, whereas the flow over a modern jumbo jet cannot.
The fluid dynamics and thermodynamics of compressible flow are described by five fundamental equations, of which Bernoulli’s equation is a special case under the conditions of constant density. For example, let’s consider an arbitrary control volume of fluid and assume that any flow of this fluid is
- adiabatic, meaning there is no heat transfer out of or into the control volume.
- inviscid, meaning no friction is present.
- at constant energy, meaning no external work (for example by a compressor) is done on the fluid.
This type of flow is known as isentropic (constant entropy), and includes fluid flow over aircraft wings, but not fluid flowing through rotating turbines.
At this point you might be wondering how we can possibly increase the speed of a gas without passing it through some machine that adds energy to the flow.
The answer is the fundamental law of conservation of energy. The temperature, pressure and density of a fluid at rest are known as the stagnation temperature, stagnation pressure and stagnation density, respectively. These stagnation values are the highest values that the gas can possibly attain. As the flow velocity of a gas increases, the pressure, temperature and density must fall in order to conserve energy, i.e. some of the internal energy of the gas is converted into kinetic energy. Hence, expansion of a gas leads to an increase in its velocity.
The isentropic flow described above is governed by five fundamental conservation equations that are expressed in terms of density ($\rho$), pressure ($p$), velocity ($V$), area ($A$), mass flow rate ($\dot{m}$), temperature ($T$) and entropy ($s$). This means that at two stations of the flow, 1 and 2, the following expressions must hold:
– Conservation of mass: $\dot{m} = \rho_1 V_1 A_1 = \rho_2 V_2 A_2$
– Conservation of linear momentum (for a stream tube of constant area): $p_1 + \rho_1 V_1^2 = p_2 + \rho_2 V_2^2$
– Conservation of energy: $c_p T_1 + \frac{1}{2}V_1^2 = c_p T_2 + \frac{1}{2}V_2^2$
– Equation of state: $p_1 = \rho_1 R T_1 \quad \text{and} \quad p_2 = \rho_2 R T_2$
– Conservation of entropy (in adiabatic and inviscid flow only): $s_1 = s_2$
where $R$ is the specific gas constant (the universal gas constant normalised by the molar mass of the gas) and $c_p$ is the specific heat at constant pressure.
The Speed of Sound
Fundamental to the analysis of supersonic flow is the concept of the speed of sound. Without knowledge of the local speed of sound we cannot gauge where we are on the compressibility spectrum.
As a simple mind experiment, consider the plunger in a plastic syringe. The speed of sound describes the speed at which a pressure wave is transmitted through the air chamber by a small movement of the piston. As a very weak wave is being transmitted, the assumptions made above regarding no heat transfer and inviscid flow are valid here, and any variations in the temperature and pressure are small. Under these conditions it can be shown from only the five conservation equations above that the local speed of sound within the fluid is given by:

$$a = \sqrt{\gamma R T}$$
The term $\gamma$ is the heat capacity ratio, i.e. the ratio of the specific heat at constant pressure ($c_p$) to the specific heat at constant volume ($c_v$), and is independent of temperature and pressure. The specific gas constant $R$, as the name suggests, is also a constant and is given by the difference of the specific heats, $R = c_p - c_v$. As the above equation shows, the speed of sound of a gas therefore only depends on its temperature. The speed of sound in dry air ($R = 287$ J/(kg K), $\gamma = 1.4$) at the freezing point of 0° C (273 Kelvin) is 331 m/s.
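As a quick check of the formula (the cruise-altitude temperature is an illustrative standard-atmosphere value):

```python
import math

GAMMA, R_AIR = 1.4, 287.0  # heat-capacity ratio and specific gas constant, dry air

def speed_of_sound(T):
    """Local speed of sound a = sqrt(gamma * R * T), with T in kelvin."""
    return math.sqrt(GAMMA * R_AIR * T)

print(speed_of_sound(273.15))  # ~331 m/s at the freezing point
print(speed_of_sound(216.65))  # ~295 m/s at ~11 km cruise altitude
```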
Why is the speed of sound purely a function of temperature?
Well, the temperature of a gas is a measure of the gas’ kinetic energy, which essentially describes how much the individual gas molecules are jiggling about. As the air molecules are moving randomly with differing instantaneous speeds and energies at different points in time, the temperature describes the average kinetic energy of the collection of molecules over a period of time. The higher the temperature the more ferocious the molecules are jiggling about and the more often they bump into each other. A pressure wave momentarily disturbs some particles and this extra energy is transferred through the gas by the collisions of molecules with their neighbours. The higher the temperature, the quicker the pressure wave is propagated through the gas due to the higher rate of collisions.
This visualisation is also helpful in explaining why the speed of sound is a special property in fluid dynamics. One possible source of an externally induced pressure wave is the disturbance of an object moving through the fluid. As the object slices through the air it collides with stationary air particles upstream of the direction of motion. This collision induces a pressure wave which is transmitted via the molecular collisions described above. Now imagine what happens when the object is travelling faster than the speed of sound. This means the moving object is creating new disturbances upstream of its direction of motion at a faster rate than the air can propagate the pressure waves through the gas by means of molecular collisions. The rate of pressure wave creation is faster than the rate of pressure wave transmission. Or put more simply, information is created more quickly than it can be transmitted; we have run out of bandwidth. For this reason, the speed of sound marks an important demarcation line in fluid dynamics which, if exceeded, introduces a number of counter-intuitive effects.
Given the importance of the speed of sound, the relative speed of a body with respect to the local speed of sound is described by the Mach number:

$$M = \frac{V}{a}$$
The Mach number is named after Ernst Mach who conducted many of the first experiments on supersonic flow and captured the first ever photograph of a shock wave (shown below).
As described previously, when an object moves through a gas, the molecules just ahead of the object are pushed out of the way, creating a pressure pulse that propagates in all directions (imagine a spherical pressure wave) at the speed of sound relative to the fluid. Now let’s imagine a loudspeaker emitting three sound pulses at equal intervals, $t_1$, $t_2$ and $t_3$.
If the object is stationary, then the three sound pulses emitted at times $t_1$, $t_2$ and $t_3$ are concentric (see figure below).
However, if the object starts moving in one direction, the centres of the spheres shift to the side and the sound pulses bunch up in the direction of motion and spread out in the opposite direction. A bystander upstream of the loudspeaker would therefore hear a higher-pitched sound than a downstream bystander, as the frequency of the sound waves reaching him is higher. This is known as the Doppler effect.
If the object now accelerates to the local speed of sound, then the centres of the sound pulse spheres will be travelling just as fast as the sound waves themselves and the spherical waves all touch at one point. This means no sound can travel ahead of the loudspeaker and consequently an observer ahead of the loudspeaker will hear nothing.
Finally, if the loudspeaker travels at a uniform speed greater than the speed of sound, then the loudspeaker will in fact overtake the sound pulses it is creating. In this case, the loudspeaker and the leading edges of the sound waves form a locus known as the Mach cone. An observer standing outside this cone is in a zone of silence and is not aware of the sound waves created by the loudspeaker.
The half angle of this cone is known as the Mach angle $\mu$ and is equal to

$$\mu = \arcsin\left(\frac{1}{M}\right)$$

and therefore $\mu = 90°$ when the object is travelling at the speed of sound and decreases with increasing velocity.
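In code, the Mach angle relation is a one-liner:

```python
import math

def mach_angle_deg(M):
    """Half-angle of the Mach cone, mu = arcsin(1/M), valid for M >= 1."""
    return math.degrees(math.asin(1.0 / M))

print(mach_angle_deg(1.0))  # 90 degrees: the cone degenerates to a flat wavefront
print(mach_angle_deg(2.0))  # 30 degrees: the cone narrows as speed increases
```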
As mentioned previously, the temperature, pressure and density of the gas all fall as the flow speed of the gas increases. The relation between Mach number and temperature can be derived directly from the conservation of energy (stated above) and is given by:

$$\frac{T_0}{T} = 1 + \frac{\gamma - 1}{2} M^2$$

where $T_0$ is the maximum total temperature, also known as the stagnation temperature, and $T$ is called the static temperature of the gas moving at velocity $V$.
An intuitive way of explaining the relationship between temperature and flow speed is to return to the description of the vibrating gas molecules. Previously we established that the temperature of a gas is a measure of the kinetic energy of the vibrating molecules. Hence, the stagnation temperature is a measure of the kinetic energy of the random motion of the air molecules in a stationary gas. However, if the gas is moving in a certain direction at speed $V$, then there will be a real net movement of the air molecules. The molecules will still be vibrating about, but with a net movement in a specific direction. If the total energy of the gas is to remain constant (no external work), some of the kinetic energy of the random vibrations must be converted into kinetic energy of directed motion, and hence the energy associated with random vibration, i.e. the temperature, must fall. Therefore, the gas temperature falls as some of the thermal internal energy is converted into kinetic energy.
In a similar fashion, for flow at constant entropy, both the pressure and density of the fluid can be expressed in terms of the Mach number:

$$\frac{p_0}{p} = \left(1 + \frac{\gamma - 1}{2} M^2\right)^{\gamma/(\gamma-1)} \qquad \frac{\rho_0}{\rho} = \left(1 + \frac{\gamma - 1}{2} M^2\right)^{1/(\gamma-1)}$$

In this regard the Mach number can simply be interpreted as the degree of compressibility of a gas. For small Mach numbers ($M < 0.3$), the density changes by less than 5%, and this is why the assumption of constant density underlying Bernoulli’s equation is applicable.
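These stagnation ratios are easy to tabulate; the function below is a direct transcription of the relations above, and the $M = 0.3$ check shows where the incompressibility rule of thumb comes from:

```python
GAMMA = 1.4  # dry air

def stagnation_ratios(M, gamma=GAMMA):
    """Isentropic stagnation-to-static ratios (T0/T, p0/p, rho0/rho) at Mach M."""
    t = 1.0 + 0.5 * (gamma - 1.0) * M ** 2
    return t, t ** (gamma / (gamma - 1.0)), t ** (1.0 / (gamma - 1.0))

# At M = 0.3 the density changes by under 5%, justifying the incompressible
# (Bernoulli) assumption; by M = 1 the change is already ~58%.
_, _, rho_ratio = stagnation_ratios(0.3)
print(rho_ratio - 1.0)  # ~0.046
```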
An Application: Convergent-divergent Nozzles
In typical engineering applications, compressible flow occurs in ducts, e.g. engine intakes, or through the exhaust nozzles of afterburners and rockets. This latter type of flow typically features changes in area. If we consider a differential, i.e. infinitesimally small, control volume in which the cross-sectional area changes by $dA$, then the velocity of the flow must also change by a small amount $dV$ in order to conserve the mass flow rate. Under these conditions we can show that the change in velocity is related to the change in area by the following equation:

$$\frac{dA}{A} = \left(M^2 - 1\right)\frac{dV}{V}$$
Without solving this equation for a specific problem we can reveal some interesting properties of compressible flow:
- For $M < 1$, i.e. subsonic flow, $dA/A = -k\,dV/V$ with $k = 1 - M^2$ a positive constant. This means that increasing the flow velocity is only possible with a decrease in cross-sectional area and vice versa.
- For $M = 1$, i.e. sonic flow, $dA/A = 0$. As $dV/V$ has to be finite this implies that $dA = 0$, and therefore the area must be a minimum for sonic flow.
- For $M > 1$, i.e. supersonic flow, $dA/A = k\,dV/V$ with $k = M^2 - 1$ a positive constant. This means that increasing the flow velocity is only possible with an increase in cross-sectional area and vice versa.
Hence, because of the term $M^2 - 1$, area changes in subsonic and supersonic flow have opposite effects. This means that if we want to expand a gas from subsonic to supersonic speeds, we must first pass the flow through a convergent nozzle to reach Mach 1, and then expand it in a divergent nozzle to reach supersonic speeds. Therefore, at the point of minimum area, known as the throat, the flow must be sonic and, as a result, rocket engines always have a large bell-shaped nozzle in order to expand the exhaust gases into supersonic jets.
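Integrating the area-velocity relation along the nozzle yields the standard isentropic area-Mach relation, $A/A^* = \frac{1}{M}\left[\frac{2}{\gamma+1}\left(1+\frac{\gamma-1}{2}M^2\right)\right]^{(\gamma+1)/(2(\gamma-1))}$, which follows from the same conservation equations (a standard result, not derived in the post). A short numerical check confirms that the area ratio has its minimum of 1 at the sonic throat:

```python
GAMMA = 1.4  # dry air

def area_ratio(M, gamma=GAMMA):
    """Isentropic area ratio A/A*, where A* is the throat area at which M = 1."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M ** 2)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

# A/A* > 1 on both sides of M = 1: the duct must converge to accelerate
# subsonic flow and diverge to accelerate supersonic flow.
print(area_ratio(0.5), area_ratio(1.0), area_ratio(2.0))
```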
The flow through such a bell-shaped convergent-divergent nozzle is driven by the pressure difference between the combustion chamber and the nozzle outlet. In the combustion chamber the gas is basically at rest and therefore at stagnation pressure. As it exits the nozzle, the gas is typically moving and therefore at a lower pressure. In order to create supersonic flow, the first important condition is a high enough pressure ratio between the combustion chamber and the throat of the nozzle to guarantee that the flow is sonic at the throat. Without this critical condition at the throat, there can be no supersonic flow in the divergent section of the nozzle.
We can determine this exact pressure ratio for dry air ($\gamma = 1.4$) from the relationship between pressure and Mach number given above. Setting $M = 1$:

$$\frac{p_0}{p^*} = \left(1 + \frac{\gamma - 1}{2}\right)^{\gamma/(\gamma-1)} = \left(\frac{\gamma + 1}{2}\right)^{\gamma/(\gamma-1)} = 1.893$$
Therefore, a pressure ratio greater than or equal to 1.893 is required to guarantee sonic flow at the throat. The temperature at this condition would then be:

$$\frac{T_0}{T^*} = 1 + \frac{\gamma - 1}{2} = \frac{\gamma + 1}{2} = 1.2$$

i.e. the throat temperature is 1.2 times smaller than the temperature in the combustion chamber (as long as there is no heat loss or work done in the meantime, i.e. isentropic flow).
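Both critical ratios follow in two lines:

```python
GAMMA = 1.4  # dry air

# Critical (sonic-throat) conditions follow from the stagnation relations at M = 1.
p_ratio = ((GAMMA + 1.0) / 2.0) ** (GAMMA / (GAMMA - 1.0))  # p0/p* at the throat
T_ratio = (GAMMA + 1.0) / 2.0                               # T0/T* at the throat

print(p_ratio)  # ~1.893
print(T_ratio)  # 1.2
```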
The term “shock wave” implies a certain sense of drama; the state of shock after a traumatic event, the shock waves of a revolution, the shock waves of an earthquake, thunder, the cracking of a whip, and so on. In aerodynamics, a shock wave describes a thin front of energy, approximately $10^{-7}$ m in thickness (that’s 0.1 microns, or 0.0001 mm), across which the state of the gas changes abruptly. The gas density, temperature and pressure all significantly increase across the shock wave. A specific type of shock wave that lends itself nicely to straightforward analysis is called a normal shock wave, as it forms at right angles to the direction of motion. The conservation laws stated at the beginning of this post still hold and these can be used to prove a number of interesting relations that are known as the Prandtl relation and the Rankine equations.
The Prandtl relation provides a means of calculating the speed of the fluid flow after a normal shock, given the flow speed before the shock:

$$V_1 V_2 = a^{*2}$$

where $a^*$ is the speed of sound at the critical (sonic) condition, which is fixed by the stagnation temperature of the flow. Because we are assuming no external work or heat transfer across the shock wave, the internal energy of the flow must be conserved across the shock, and therefore the stagnation temperature does not change across the shock wave either. This means that the critical speed of sound $a^*$ must also be conserved, and therefore the Prandtl relation shows that the product of upstream and downstream velocities is always a constant. Hence, they are inversely proportional.
We can further extend the Prandtl relation to express all flow properties (speed, temperature, pressure and density) in terms of the upstream Mach number $M_1$, and hence the degree of compressibility before the shock wave. In the Prandtl relation we replace the velocities with their Mach numbers ($V = Ma$) and divide both sides of the equation by $a^{*2}$:

$$\frac{M_1 a_1}{a^*} \cdot \frac{M_2 a_2}{a^*} = 1$$

and because we know the relationship between temperature, stagnation temperature and Mach number from above:

$$\frac{T_0}{T} = \frac{a_0^2}{a^2} = 1 + \frac{\gamma - 1}{2} M^2$$

substituting for states 1 and 2, the Prandtl relation is transformed into:

$$M_2^2 = \frac{1 + \frac{\gamma - 1}{2} M_1^2}{\gamma M_1^2 - \frac{\gamma - 1}{2}}$$
This equation looks a bit clumsy, but it is actually quite straightforward given that the terms involving $\gamma$ are constants. For clarity, a graphical representation of the equation is shown below.
It is clear from the figure that for $M_1 > 1$ we necessarily have $M_2 < 1$. Therefore a shock wave automatically turns the flow from supersonic into subsonic. In the case of $M_1 = 1$ we have reached the limiting case of a sound wave, for which there is no change in the gas properties. Similar expressions can also be derived for the pressure, temperature and density, which all increase across a shock wave, and these are known as the Rankine equations.
Both the temperature and pressure ratios increase with higher Mach number, such that both $T_2/T_1$ and $p_2/p_1$ tend to infinity as $M_1$ tends to infinity. The density ratio $\rho_2/\rho_1$, however, does not tend to infinity but approaches an asymptotic value of $(\gamma + 1)/(\gamma - 1) = 6$ (for air) as $M_1$ increases. In isentropic flow, the relationship $p/\rho^\gamma = \text{constant}$ between pressure and density must hold. Given that $p_2/p_1$ tends to infinity with increasing $M_1$ but $\rho_2/\rho_1$ does not, this implies that the above relation between pressure and density must be broken with increasing $M_1$, i.e. the flow can no longer conserve entropy. In fact, in the limiting case of a sound wave, where $M_1 = 1$, there is an infinitesimally weak shock wave and the flow is isentropic with no change in the gas properties. When a shock wave forms as a result of supersonic flow, the entropy always increases across the shock.
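The normal-shock relations are straightforward to evaluate; the functions below transcribe the downstream Mach number and density ratio formulae discussed above:

```python
import math

GAMMA = 1.4  # dry air

def shock_downstream_mach(M1, gamma=GAMMA):
    """Mach number behind a normal shock, from the relation derived above."""
    return math.sqrt((1.0 + 0.5 * (gamma - 1.0) * M1 ** 2)
                     / (gamma * M1 ** 2 - 0.5 * (gamma - 1.0)))

def shock_density_ratio(M1, gamma=GAMMA):
    """rho2/rho1 across a normal shock; asymptotes to (gamma+1)/(gamma-1) = 6."""
    return (gamma + 1.0) * M1 ** 2 / (2.0 + (gamma - 1.0) * M1 ** 2)

print(shock_downstream_mach(1.0))  # 1.0 -- the limiting sound-wave case
print(shock_downstream_mach(2.0))  # ~0.577, always subsonic behind the shock
print(shock_density_ratio(100.0))  # ~6, the asymptotic density ratio for air
```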
Even though the Rankine equations are valid mathematically for subsonic flow, the predicted fluid properties lead to a decrease in entropy, which contradicts the Second Law of Thermodynamics. Hence, shock waves can only be created in supersonic flow and the pressure, temperature and density always increase across it.
With our new-found knowledge on supersonic flow and nozzles we can now begin to intuitively design a convergent-divergent nozzle to be used on a rocket. Consider two reservoirs connected by a convergent-divergent nozzle (see figure below).
The gas within the upstream reservoir is stagnant at a specific stagnation temperature $T_0$ and stagnation pressure $p_0$. The pressure in the downstream reservoir, called the back pressure $p_b$, can be regulated using a valve. The pressure at the exit plane of the divergent section of the nozzle is known as the exit pressure $p_e$, and the pressure at the point of minimum area within the nozzle is known as the throat pressure $p_t$. Changing the back pressure influences the variation of the pressure throughout the nozzle, as shown in the figure above. Depending on the back pressure, eight different conditions are possible at the exit plane.
- The no-flow condition: In this case the valve is closed and $p_b = p_e = p_t = p_0$. This is the trivial condition where nothing interesting happens. No flow, nothing, boring.
- Subsonic flow regime: The valve is opened slightly and the flow is entirely subsonic throughout the entire nozzle. The pressure decreases from the stagnant condition in the upstream reservoir to a minimum at the throat, but because the pressure ratio $p_0/p_t$ does not reach the critical value of 1.893, the flow does not reach Mach 1 at the throat. Hence, the flow cannot accelerate further in the divergent section and slows down again, thereby increasing the pressure. The exit pressure $p_e$ is exactly equal to the back pressure $p_b$.
- Choking condition: The back pressure has now reached a critical condition and is low enough for the flow to reach Mach 1 at the throat, i.e. $p_0/p_t = 1.893$. However, the exit flow pressure is still equal to the back pressure ($p_e = p_b$) and therefore the divergent section of the nozzle still acts as a diffuser; the flow does not go supersonic. However, as the flow cannot go faster than Mach 1 at the throat, the maximum mass flow rate has been achieved and the nozzle is now choked.
- Non-isentropic flow regime: Lowering the back pressure further means that the flow now reaches Mach 1 at the throat and can then accelerate to supersonic speeds within the divergent portion of the nozzle. The flow in the convergent section of the nozzle remains the same as in condition 3) as the nozzle is choked. Due to the supersonic flow, a shock wave forms within the divergent section, turning the flow from supersonic to subsonic. Downstream of the shock the divergent nozzle diffuses the flow further to equalise the back pressure and exit pressure (p_e = p_b). The further the back pressure is lowered, the further the shock wave travels downstream towards the exit plane, increasing the severity of the shock at the same time. The location of the shock wave within the divergent section will always be such as to equalise the exit and back pressures.
- Exit plane shock condition: This is the limiting condition where the shock wave in the divergent portion has moved exactly to the exit plane. There is an abrupt increase in pressure across the shock at the exit plane, and therefore the exit pressure and back pressure are still the same (p_e = p_b).
- Overexpansion flow regime: The back pressure is now low enough that the flow is subsonic throughout the convergent portion of the nozzle, sonic at the throat and supersonic throughout the entire divergent portion. This means that the exit pressure is now lower than the back pressure (p_e < p_b, the flow is overexpanded), causing the jet to suddenly contract once it exits the nozzle. These sudden compressions occur through non-isentropic oblique shock waves which cannot be modelled using the simple 1D flow assumptions we have made here.
- Nozzle design condition: At the nozzle design condition the back pressure is low enough to exactly match the pressure of the supersonic flow at the exit plane (p_e = p_b). Hence, the flow is entirely isentropic within the nozzle and inside the downstream reservoir. As described in a previous post on rocketry, this is the ideal operating condition for a nozzle in terms of efficiency.
- Underexpansion flow regime: Contrary to the overexpansion regime, the back pressure is now lower than the exit pressure of the supersonic flow (p_e > p_b), such that the exit flow must expand further to equilibrate with the reservoir pressure. In this case, the flow is again governed by oblique pressure waves, which this time expand outward rather than contract inward.
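The choking threshold and the design exit pressure in the list above follow directly from the isentropic flow relations. The sketch below assumes a perfect gas with γ = 1.4 and a hypothetical design exit Mach number of 3; it computes the critical pressure ratio and crudely sorts a few back pressures into the overexpanded, design and underexpanded regimes (the subsonic and shock-in-nozzle sub-regimes between conditions 2 and 6 are ignored for brevity):

```python
import math

def critical_pressure_ratio(gamma=1.4):
    """Throat-to-stagnation pressure ratio p*/p0 at which the flow
    chokes, i.e. reaches Mach 1 at the throat."""
    return (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))

def isentropic_p_ratio(M, gamma=1.4):
    """Static-to-stagnation pressure ratio p/p0 at Mach number M."""
    return (1.0 + 0.5 * (gamma - 1.0) * M**2) ** (-gamma / (gamma - 1.0))

print(f"choking at p_t/p_0 = {critical_pressure_ratio():.3f}")  # ~0.528 for air

# Hypothetical nozzle designed for an exit Mach number of 3
p_design = isentropic_p_ratio(3.0)  # supersonic design exit pressure ratio
for pb_over_p0 in (0.9, p_design, 0.005):
    if math.isclose(pb_over_p0, p_design):
        regime = "design condition (perfectly expanded)"
    elif pb_over_p0 > p_design:
        regime = "overexpanded (shocks/oblique waves raise p_e to p_b)"
    else:
        regime = "underexpanded (expansion waves outside the nozzle)"
    print(f"p_b/p_0 = {pb_over_p0:.3f}: {regime}")
```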
Thus, as we have seen, the flow inside and outside of the nozzle is driven by the back pressure and by the requirement that the exit pressure and back pressure equilibrate once the flow exits the nozzle. In some cases this occurs as a result of shocks inside the nozzle and in others as a result of pressure waves outside. In terms of the structural mechanics of the nozzle, we obviously do not want shocks to occur inside the nozzle in case they damage its structural integrity. Ideally, we would want to operate a rocket nozzle at the design condition, but as the atmospheric pressure changes throughout a flight into space, a rocket nozzle is typically overexpanded at take-off and underexpanded in space. To account for this, variable area nozzles and other clever ideas have been proposed to operate as close as possible to the design condition.
This is the fourth and final part of a series of posts on rocket science. Part I covered the history of rocketry, Part II dealt with the operating principles of rockets and Part III looked at the components that go into the propulsive system.
One of the most important drivers in rocket design is the mass ratio, i.e. the ratio of fuel mass to dry mass of the rocket. The greater the mass ratio the greater the change in velocity (delta-v) the rocket can achieve. You can think of delta-v as the pseudo-currency of rocket science. Manoeuvres into orbit, to the moon or any other point in space are measured by their respective delta-v’s and this in turn defines the required mass ratio of the rocket.
For example, at an altitude of 200 km an object needs to travel at 7.8 km/s to inject into low earth orbit (LEO). Accounting for frictional losses and gravity, the actual requirement rocket scientists need to design for when starting from rest on a launch pad is just shy of delta-v = 10 km/s. Using Tsiolkovsky's rocket equation and assuming a representative average exhaust velocity of 3,500 m/s, this translates into a mass ratio of 17.4:

m_0 / m_f = e^(delta-v / v_e) = e^(10,000 / 3,500) ≈ 17.4
A mass ratio of 17.4 means that the rocket needs to be 94% fuel!
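The arithmetic can be verified in a few lines. The 10 km/s delta-v requirement and 3,500 m/s exhaust velocity are the representative values quoted above, not data for any particular rocket:

```python
import math

delta_v = 10_000.0  # m/s, approx. LEO requirement including losses
v_e = 3_500.0       # m/s, representative average exhaust velocity

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / mf)
mass_ratio = math.exp(delta_v / v_e)    # m0 / mf
fuel_fraction = 1.0 - 1.0 / mass_ratio  # propellant share of lift-off mass

print(f"mass ratio    = {mass_ratio:.1f}")     # ~17.4
print(f"fuel fraction = {fuel_fraction:.1%}")  # ~94%
```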
This simple example explains why the mass ratio is a key indicator of a rocket's structural efficiency. The higher the mass ratio, the greater the ratio of delta-v producing propellant to non-delta-v producing structural mass. The simple example also explains why staging is such an effective strategy. Once a certain amount of fuel within the tanks has been used up, it is beneficial to shed the unnecessary structural mass that was previously used to contain the fuel but is no longer contributing to delta-v.
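To see the benefit of staging concretely, here is a toy comparison. All masses are invented round numbers and the same exhaust velocity is assumed for both configurations; splitting the same 90 t of propellant and 10 t of structure into two stages, and dropping the empty first-stage structure between burns, buys roughly an extra kilometre per second:

```python
import math

V_E = 3500.0  # m/s, assumed exhaust velocity for all burns

def burn_dv(m0, mf, ve=V_E):
    """Tsiolkovsky delta-v for one burn from mass m0 down to mf."""
    return ve * math.log(m0 / mf)

payload = 5.0  # tonnes (all masses below are illustrative)

# Single stage: 10 t structure, 90 t propellant
single = burn_dv(payload + 10 + 90, payload + 10)

# Two stages: 5 t structure + 45 t propellant each; the empty
# first-stage structure is dropped between the two burns
dv1 = burn_dv(payload + 10 + 90, payload + 10 + 45)
dv2 = burn_dv(payload + 5 + 45, payload + 5)

print(f"single stage: {single:.0f} m/s, two stages: {dv1 + dv2:.0f} m/s")
```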
At the same time we need to ask ourselves: how do we best minimise the mass of the rocket structure?
So in this post we will turn to my favourite topic of all: Structural design. Let’s dig in…
The role of the rocket structure is to provide some form of load-bearing frame while simultaneously serving as an aerodynamic profile and container for propellant and payload. In order to maximise the mass ratio, the rocket designer wants to minimise the structural mass that is required to safely contain the propellant. There are essentially two ways to achieve this:
- Using lightweight materials.
- And/or optimising the geometric design of the structure.
When referring to “lightweight materials” what we mean is that the material has high values of specific stiffness, specific strength and/or specific toughness. In this case “specific” means that the classical engineering properties of elastic modulus (stiffness), yield or ultimate strength, and fracture toughness are weighted by the density of the material. For example, if a design of given dimensions (fixed volume) requires a certain stiffness and strength, and we can achieve these specifications with a material of superior specific properties, then the resulting structure will be lighter than one built from a material with inferior specific properties. In the rocket industry the typical materials are aerospace-grade titanium and aluminium alloys as their specific properties are much more favourable than those of other metal alloys such as steel.
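To illustrate, the sketch below compares ballpark handbook values for three common alloys (illustrative figures only, not design allowables). Interestingly, the specific stiffnesses come out nearly identical; it is chiefly in specific strength that the titanium and aluminium alloys pull ahead of steel:

```python
# Ballpark handbook values (not design allowables):
# E in GPa, yield strength in MPa, density in kg/m^3
materials = {
    "Al 7075":    dict(E=71.0,  sigma_y=500.0, rho=2810.0),
    "Ti-6Al-4V":  dict(E=114.0, sigma_y=880.0, rho=4430.0),
    "Steel 4340": dict(E=200.0, sigma_y=710.0, rho=7850.0),
}

for name, m in materials.items():
    spec_stiffness = m["E"] * 1e9 / m["rho"]       # (N/m^2)/(kg/m^3) = J/kg
    spec_strength = m["sigma_y"] * 1e6 / m["rho"]  # J/kg
    print(f"{name:11s} E/rho = {spec_stiffness/1e6:5.1f} MJ/kg, "
          f"sigma_y/rho = {spec_strength/1e3:5.1f} kJ/kg")
```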
However, over the last 30 years there has been a drive towards increasing the proportion of advanced fibre-reinforced plastics in rocket structures. One of the issues with composites is that the polymer matrices that bind the fibres together become rather brittle (think of shattering glass) under the cryogenic temperatures of outer space or when in contact with liquid propellants. The second issue with traditional composites is that they are more flammable; obviously not a good thing when sitting right next to liquid hydrogen and oxygen. Third, it is harder to seal composite rocket tanks, and bolted joints in particular are prone to leaking. Finally, the high-performance characteristics that are needed for space applications require the use of massive high-pressure, high-temperature ovens (autoclaves) and tight-tolerance moulds which significantly drive up manufacturing costs. For these reasons the use of composites is mostly restricted to payload fairings. NASA is currently working hard on its out-of-autoclave and automated fibre placement technologies, while RocketLabs have announced that they will be designing a carbon-composite rocket too, and I would expect this technology to mature over the next decade.
The load-bearing structure in a rocket is very similar to the fuselage of an airplane and is based on the same design philosophy: semi-monocoque construction. In contrast to early aircraft that used frames of discrete members braced by wires to sustain flight loads and flexible membranes as lift surfaces, the major advantage of semi-monocoque construction is that the functions of aerodynamic profile and load-carrying structure are combined. Hence, the visible cylindrical barrel of a rocket serves to contain the internal fuel as a pressure vessel, sustains the imposed flight loads and also defines the aerodynamic shape of the rocket. Because the external skin is a working part of the structure, this type of construction is known as stressed skin or monocoque. The even distribution of material in a monocoque means that the entire structure is at a more uniform and lower stress state with fewer local stress concentrations that can be hot spots for crack initiation.
Second, curved shell structures, as in a cylindrical rocket barrel, are one of the most efficient forms of construction found in nature, e.g. eggs, sea-shells and nut-shells. In thin-walled curved structures the external loads are reacted internally by a combination of membrane stresses (uniform stretching or compression through the thickness) and bending stresses (linear variation of stresses through the thickness with tension on one side, compression on the other side, and zero stress somewhere in the interior of the thickness known as the neutral axis). As a rule of thumb, membrane stresses are more efficient than bending stresses, as all of the material through the thickness is contributing to reacting the external load (no neutral axis) and the stress state is uniform (no stress concentrations).
In general, flat structures, such as your typical credit card, resist tensile and compressive external loads via uniform membrane stresses, and bending via linearly varying stresses through the thickness. The efficiency of curved shells stems from the fact that membrane stresses are induced to react both uniform stretching/compressive forces and bending moments. The presence of a membrane component reduces the peak stress that occurs through the thickness of the shell, and ultimately means that a thinner wall thickness and associated lower component mass will safely resist the externally applied loads. This is important as the bending stiffness of thin-walled structures is typically at least an order of magnitude smaller than the stretching/compressive stiffness (e.g. you can easily bend your credit card, but try stretching it).
Alas, as so often in life, there is a compromise. Optimising a structure for one mode of deformation typically makes it more fragile in another. This means that if the structure fails in the deformation mode that it has been optimised for, the ensuing collapse is most likely sudden and catastrophic.
As described above, reducing the wall-thickness in a monocoque construction greatly helps to reduce the mass of the structure. However, the bending stiffness scales with the cube of the thickness, whereas the membrane stiffness only scales linearly. Hence, in a thin-walled structure we ideally want all deformation to be in a membrane state (uniform squashing or stretching), and curved shell structures help to guarantee this. However, due to the large mismatch between membrane stiffness and bending stiffness in a thin-walled structure, the structure may at some point energetically prefer to bend and will transition to a bending state.
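The scaling argument can be made concrete: for a unit-width strip of wall, the membrane stiffness is proportional to Et while the bending stiffness is proportional to Et³. A quick sketch with representative aluminium properties (assumed values):

```python
E, nu = 70e9, 0.33  # representative aluminium modulus and Poisson's ratio

for t in (2e-3, 1e-3):  # wall thickness in metres
    membrane = E * t                             # N/m, scales linearly with t
    bending = E * t**3 / (12.0 * (1.0 - nu**2))  # N*m, scales with t^3
    print(f"t = {t*1e3:.0f} mm: membrane {membrane:.2e}, bending {bending:.2e}")

# Halving t halves the membrane stiffness but cuts the bending
# stiffness by a factor of 8 -- thin walls "prefer" to bend.
```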
This phenomenon is known as buckling and is the bane of thin-walled construction.
One of the principles of physics is that the deformation of a structure is governed by the proclivity to minimise the strain energy. Hence, a structure can at some point bifurcate into a different deformation shape if this represents a lower energy state. As a little experiment, form a U-shape with your hand, thumb on one side and four fingers on the other. Hold a credit card between your thumb and the four fingers and start to compress it. Initially, the structure reacts this load by compressing internally (membrane deformation) in a flat state, but very soon the credit card will snap one way to form a U-shape (bending deformation).
This works because compressing the credit card reduces the distance between the two edges held by the thumb and four fingers. The credit card can satisfy these new externally imposed constraints either by compressing uniformly, i.e. squashing up, or by maintaining its original length and bending into an arc. At some critical point of compression the bending state is energetically more favourable than the squashed state and the credit card bifurcates. Note that this explanation should also convince you that this behaviour is not possible under tension, as bifurcating into a bending state would not return the credit card to its original length.
The advantage of curved monocoques is that their buckling loads are much greater than those of flat plates. For example, you can safely stand on a soda can even though it is made out of relatively cheap aluminium. However, once the soda can does buckle, all hell breaks loose and the whole thing collapses in one big heap. What is more, curved structures are very susceptible to initial imperfections, which drastically reduce the load at which buckling occurs. Flick the side of a soda can to initiate a little dent and stand back on the can to feel the difference.
This problem is exacerbated by the fact that the shape of the tiny initial imperfections, typically of the order of the thickness of the shell, can lead to vastly different failure modes. Thus, the behaviour of the shell emerges from its initial imperfections. In this domain of complexity it is very difficult to make precise, repeatable predictions of how the structure will behave. For this reason, curved shells are often called the “prima donna” of structures and we need to be very careful in how we go about designing them.
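One widely used way of handling this imperfection sensitivity is to multiply the classical buckling stress of the perfect cylinder by an empirical knockdown factor, such as the one from NASA's SP-8007 design guideline. A sketch with roughly soda-can-like dimensions (all numbers illustrative):

```python
import math

def axial_buckling_stress(E, t, R, nu=0.3):
    """Classical critical stress of a perfect, axially compressed
    cylinder: sigma_cr = E*t / (R*sqrt(3*(1 - nu^2)))."""
    return E * t / (R * math.sqrt(3.0 * (1.0 - nu**2)))

def sp8007_knockdown(t, R):
    """Empirical NASA SP-8007 knockdown factor accounting for the
    imperfection sensitivity of real shells."""
    phi = math.sqrt(R / t) / 16.0
    return 1.0 - 0.901 * (1.0 - math.exp(-phi))

# Thin aluminium shell, soda-can-like proportions (illustrative)
E, t, R = 70e9, 0.1e-3, 33e-3
perfect = axial_buckling_stress(E, t, R)
design = perfect * sp8007_knockdown(t, R)
print(f"perfect shell: {perfect/1e6:.0f} MPa, with knockdown: {design/1e6:.0f} MPa")
```

The knockdown factor here is well below one, which is exactly the "dented soda can" effect described above.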
A rocket is naturally exposed to compressive forces as a result of gravity and inertia while accelerating. In order to increase the critical buckling loads of the cylindrical rocket shell, the skin is stiffened by internal stiffeners. This type of construction is known as semi-monocoque to describe the discrete discontinuities of the internal stiffeners. A rocket cylinder typically has internal stringers running top to bottom and internal hoops running around the circumference of the cylindrical skin.
The purpose of these stringers and hoops is twofold:
- First, they help to resist compressive loading and therefore remove some of the onus on the thin skin.
- Second, they break the thin skin into smaller sections which are much harder to buckle. To convince yourself, find an old out-of-date credit card, cut it in half and repeat the previously described experiment.
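The second point follows from classical plate buckling theory, where the critical stress of a panel scales with (t/b)² for a panel of width b. A sketch with a hypothetical 600 mm skin bay subdivided by stringers into 150 mm panels (simply supported edges assumed):

```python
import math

def plate_buckling_stress(E, t, b, k=4.0, nu=0.3):
    """Critical compressive stress of a simply supported plate:
    sigma_cr = k * pi^2 * E / (12*(1 - nu^2)) * (t/b)^2."""
    return k * math.pi**2 * E / (12.0 * (1.0 - nu**2)) * (t / b) ** 2

E, t = 70e9, 1.5e-3  # representative aluminium skin
wide = plate_buckling_stress(E, t, b=0.60)    # unstiffened 600 mm bay
narrow = plate_buckling_stress(E, t, b=0.15)  # stringers every 150 mm
print(f"stiffeners raise the buckling stress {narrow / wide:.0f}x")  # 16x
```

Quartering the panel width raises the buckling stress sixteen-fold, which is why adding stringers is so effective.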
The cylindrical rocket shell has a second advantage in that it acts as a pressure vessel to contain the pressurised propellants. The internal pressure of the propellants increases the circumference of the rocket shell and, like blowing up a balloon, imparts tensile stretching deformations into the skin which counteract the compressive gravitational and inertial loads. In fact, this pressure-stabilisation effect is so helpful that some old rockets that you see on display in museums, most notoriously the Atlas 2E rocket, need to be pressurised artificially by external air pumps at all times to prevent them from collapsing under their own weight. If you look at the diagram below you can see little diamond-shaped dimples spread all over the skin. These are buckling waveforms.
NASA Langley Research Center has been, and continues to be, a leader in studying the complex failure behaviour of rocket shells. To find out more, check out the video by some of the researchers that I have worked with who are developing new methods of designing the next generation of composite rocket shells.
This is the third in a series of posts on rocket science. Part I covered the history of rocketry and Part II dealt with the operating principles of rockets. If you have not checked out the latter post, I highly recommend you read this first before diving into what is to follow.
We have established that designing a powerful rocket means suspending a bunch of highly reactive chemicals above an ultralight means of combustion. In terms of metrics this means that a rocket scientist is looking to
- Maximise the mass ratio to achieve the highest amounts of delta-v. This translates to carrying the maximum amount of fuel with minimum supporting structure to maximise the achievable change in velocity of the rocket.
- Maximise the specific impulse of the propellant. The higher the specific impulse of the fuel the greater the exhaust velocity of the hot gases and consequently the greater the momentum thrust of the engine.
- Optimise the shape of the exhaust nozzle to produce the highest amounts of pressure thrust.
- Optimise the staging strategy to reach a compromise between the upside of staging in terms of shedding useless mass and the downside of extra technical complexity involved in joining multiple rocket engines (such complexity typically adds mass).
- Minimise the dry mass costs of the rocket either by manufacturing simple expendable rockets at scale or by building reusable rockets.
These operational principles set the landscape of what type of rocket we want to design. In designing chemical rockets some of the pertinent questions we need to answer are
- What propellants to use for the most potent reaction?
- How to expel and direct the exhaust gases most efficiently?
- How to minimise the mass of the structure?
Here, we will turn to the propulsive side of things and answer the first two of these questions.
In a chemical rocket an exothermic reaction of typically two different chemicals is used to create high-pressure gases which are then directed through a nozzle and converted into a high-velocity directed jet.
From the Tsiolkovsky rocket equation we know that the momentum thrust depends on the mass flow rate of the propellants and the exhaust velocity:

F_momentum = m_dot * v_e

where m_dot is the mass flow rate of the exhaust gases and v_e their exit velocity.
The most common types of propellant are:
- Monopropellant: a single pressurised gas or liquid fuel that dissociates when a catalyst is introduced. Examples include hydrazine, nitrous oxide and hydrogen peroxide.
- Hypergolic propellant: two liquids that spontaneously react when combined and release energy without requiring external ignition to start the reaction.
- Fuel and oxidiser propellant: a combination of two liquids or two solids, a fuel and an oxidiser, that react when ignited. Combinations of solid fuel and liquid oxidiser are also possible as a hybrid propellant system. Typical fuels include liquid hydrogen and kerosene, while liquid oxygen and nitric acid are often used as oxidisers. In liquid propellant rockets the oxidiser and fuel are typically stored separately and mixed upon ignition in the combustion chamber, whereas solid propellant rockets are designed premixed.
Rockets can of course be powered by sources other than chemical reactions. Examples of this are smaller, low-performance rockets, such as attitude control thrusters, that use escaping pressurised fluids to provide thrust. Similarly, a rocket may be powered by heated steam that escapes through a propelling nozzle. However, the focus here is purely on chemical rockets.
Solid propellants are made of a mixture of different chemicals that are blended into a liquid, poured into a cast and then cured into a solid. At their simplest, these chemical blends or “composites” comprise four different functional ingredients:
- Solid oxidiser granules.
- Flakes or powders of exothermic compounds.
- Polymer binding agent.
- Additives to stabilise or modify the burn rate.
Gunpowder is an example of a solid propellant that does not use a polymer binding agent to hold the propellant together. Rather the charcoal fuel and potassium nitrate oxidiser are compressed to hold their shape. A popular solid rocket fuel is ammonium perchlorate composite propellant (APCP) which uses a mixture of 70% granular ammonium perchlorate as an oxidiser, with 20% aluminium powder as a fuel, bound together using 10% polybutadiene acrylonitrile (PBAN).
Solid propellant rockets have been used much less frequently than liquid fuel rockets. However, there are some advantages which can make solid propellants preferable to liquid propellants in some military applications (e.g. intercontinental ballistic missiles, ICBMs). Some of the advantages of solid propellants are that:
- They are easier to store and handle.
- They are simpler to operate with.
- They have fewer components. There is no need for a separate combustion chamber and turbo pumps to pump the propellants into the combustion chamber. The solid propellant (also called “grain”) is ignited directly in the propellant storage casing.
- They are much denser than liquid propellants and therefore reduce the fuel tank size (lower mass). Furthermore, solid propellants can be used as a load-bearing component, which further reduces the structural weight of the rocket. The cured solid propellant can readily be encased in a filament-wound composite rocket shell, which has more favourable strength-to-weight properties than the metallic rocket shells typically used for liquid rockets.
Apart from their use as ICBMs, solid rockets are known for their role as boosters. The simplicity and relatively low cost compared with liquid-fuel rockets mean that solid rockets are a better choice when large amounts of cheap additional thrust are required. For example, the Space Shuttle used two solid rocket boosters to complement the onboard liquid propellant engines.
The disadvantage of solid propellants is that their specific impulse, and hence the amount of thrust produced per unit mass of fuel, is lower than for liquid propellants. The mass ratio of solid rockets can actually be greater than that of liquid rockets as a result of the more compact design and lower structural mass, but the exhaust velocities are much lower. The combustion process in solid rockets depends on the surface area of the fuel, and as such any air bubbles, cracks or voids in the solid propellant cast need to be prevented. Therefore, quite expensive quality assurance measures such as ultrasonic inspection or x-rays are required to assure the quality of the cast. The second problem with air bubbles in the cast is that the amount of oxidiser is increased (via the oxygen in the air), which results in local temperature hot spots and an increased burn rate. Such local imbalances can spiral out of control to produce excessive temperatures and pressures, and ultimately lead to catastrophic failure. Another disadvantage of solid propellants is their binary operation mode. Once the chemical reaction has started and the engines have been ignited, it is very hard to throttle back or control the reaction. The propellant can be arranged in a manner to provide a predetermined thrust profile, but once this has started it is much harder to make adjustments on the fly. Liquid propellant rockets on the other hand use turbo pumps to throttle the propellant flow.
Liquid propellants have more favourable specific impulse measures than solid rockets. As such they are more efficient at propelling the rocket for a unit mass of reactant. This performance advantage is due to the superior oxidising capabilities of liquid oxidisers. For example, traditional liquid oxidisers such as liquid oxygen or hydrogen peroxide result in higher specific impulse measures than the ammonium perchlorate in solid rockets. Furthermore, as the liquid fuel and oxidiser are pumped into the combustion chamber, a liquid-fuelled rocket can be throttled, stopped and restarted much like a car or a jet engine. In liquid-fuelled rockets the combustion process is restricted to the combustion chamber, such that only this part of the rocket is exposed to the high pressure and temperature loads, whereas in solid-fuelled rockets the propellant tanks themselves are subjected to high pressures. Liquid propellants are also cheaper than solid propellants as they can be sourced from the atmosphere and require relatively little refinement compared to the composite manufacturing process of solid propellants. However, the cost of the propellant only accounts for around 10% of the total cost of the rocket and therefore these savings are typically negligible. Incidentally, the high proportion of costs associated with the structural mass of the rocket is why re-usability of rocket stages is such an important factor in reducing the cost of spaceflight.
The main drawback of liquid propellants is the difficulty of storage. Traditional liquid oxidisers are highly reactive and very toxic, such that they need to be handled with care and properly insulated from other reactive materials. Second, the most common oxidiser, liquid oxygen, needs to be stored at very low cryogenic temperatures, and this increases the complexity of the rocket design.
What is more, additional components such as turbopumps and the associated valves and seals are needed that are entirely absent from solid-fuelled rockets.
Modern spaceflight is dominated by two liquid propellant mixtures:
- Liquid oxygen (LOX) and kerosene (RP-1): As discussed in the previous post this mix of oxidiser and fuel is predominantly used for lower stages (i.e. to get off the launch pad), due to the higher density of kerosene compared to liquid hydrogen. Kerosene, as a higher density fuel, allows for better ratios of propellant to tankage mass which is favourable for the mass ratio. Second, high density fuels work better in an atmospheric pressure environment. Historically, the Atlas V, Saturn V and Soyuz rockets have used LOX and RP-1 for the first stages and so does the SpaceX Falcon rocket today.
- Liquid oxygen and liquid hydrogen: This combination is mostly used for the upper stages that propel a vehicle into orbit. The lower density of liquid hydrogen requires higher expansion ratios and therefore works more efficiently at higher altitudes, where the ambient pressure is lower. The Atlas V, Saturn V and modern Delta family of rockets all used this propellant mix for the upper rocket stages.
The choice of propellant mixture for different stages requires certain tradeoffs. Liquid hydrogen provides higher specific impulse than kerosene, but its density is around 7 times lower and therefore liquid hydrogen occupies much more space for the same mass of fuel. As a result, the required volume and associated mass of tankage, fuel pumps and pipes is much greater. Both the specific impulse of the propellant and the tankage mass influence the potential delta-v of the rocket, and hence liquid hydrogen, chemically the more efficient fuel, is not necessarily the best option for all rockets.
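This tradeoff can be caricatured in a few lines. Fixing the tank volume (as a proxy for tankage and structural mass) and using representative bulk densities and vacuum specific impulses for the two mixed propellants, the denser kerosene stage delivers more delta-v despite its lower specific impulse; if the propellant mass were fixed instead, hydrogen would win. All numbers here are illustrative assumptions, not data for any real stage:

```python
import math

G0 = 9.81  # m/s^2, standard gravity

def stage_dv(isp, prop_density, volume, payload, tank_kg_per_m3=30.0):
    """Delta-v of a stage whose tank volume is fixed; tankage mass is
    assumed to scale with tank volume (purely illustrative model)."""
    m_prop = prop_density * volume
    m_dry = payload + tank_kg_per_m3 * volume
    return isp * G0 * math.log((m_dry + m_prop) / m_dry)

V, payload = 100.0, 5000.0  # m^3 of tank, kg of payload (assumed)
# Representative bulk densities and vacuum Isp of the mixed propellants
kerolox = stage_dv(isp=340.0, prop_density=1030.0, volume=V, payload=payload)
hydrolox = stage_dv(isp=450.0, prop_density=360.0, volume=V, payload=payload)
print(f"same tank volume: LOX/RP-1 {kerolox:.0f} m/s, LOX/LH2 {hydrolox:.0f} m/s")
```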
Although the exact choice of fuel is not straightforward I will propose two general rules of thumb that explain why kerosene is used for the early stages and liquid hydrogen for the upper stages:
- In general, the denser the fuel the heavier the rocket on the launch pad. This means that the rocket needs to provide more thrust to get off the ground and it carries this greater amount of thrust throughout the entire duration of the burn. As fuel is being depleted, the greater thrust of denser fuel rockets means that the rocket reaches orbit earlier and as a result minimises drag losses in the atmosphere.
- Liquid hydrogen fuelled rockets generally produce the lightest design and are therefore used on those parts of the spacecraft that actually need to be propelled into orbit or escape Earth’s gravity to venture into deep space.
Engine and Nozzle
In combustive rockets, the chemical reaction between the fuel and oxidiser creates a high temperature, high pressure gas inside the combustion chamber. If the combustion chamber were closed and symmetric, the internal pressure acting on the chamber walls would cause equal force in all directions and the rocket would remain stationary. For anything interesting to happen we must therefore open one end of the combustion chamber to allow the hot gases to escape. As a result of the hot gases pressing against the wall opposite the opening, a net force in the direction of the closed end is induced.
Rocket pioneers, such as Goddard, realised early on that the shape of the nozzle is of crucial importance in creating maximum thrust. A converging nozzle accelerates the escaping gases by means of the conservation of mass. However, converging nozzles are fundamentally limited to fluid flows of Mach 1, the speed of sound, and this is known as the choke condition. In this case, the nozzle provides relatively little thrust and the rocket is purely propelled by the net force acting on the closed combustion chamber wall.
To further accelerate the flow, a divergent nozzle is required downstream of the choke point. A convergent-divergent nozzle can therefore be used to create faster fluid flows. Crucially, the Tsiolkovsky rocket equation (conservation of momentum) indicates that the thrust produced is directly proportional to the exit velocity of the hot gases. A second advantage is that the escaping gases also provide a force in the direction of flight by pushing on the divergent section of the nozzle.
The exit static pressure of the exhaust gases, i.e. the pressure exerted by the moving exhaust jet at the exit plane, is a function of the pressure created inside the combustion chamber and the ratio of throat area to exit area of the nozzle. If the exit static pressure of the exhaust gases is greater than the surrounding ambient air pressure, the nozzle is said to be underexpanded. On the other hand, if the exit static pressure falls below the ambient pressure, then the nozzle is said to be overexpanded. In this case two scenarios are possible. The supersonic flow exiting the nozzle will induce a shock wave at some point along the flow. As the exhaust gas particles travel at speeds greater than the speed of sound, gas particles upstream cannot “get out of the way” quickly enough before the rest of the flow arrives. Hence, the pressure progressively builds until at some point the properties of the fluid (density, pressure, temperature and velocity) change almost instantaneously. Thus, across the shock wave the gas pressure of an overexpanded nozzle will instantaneously shift from lower than ambient to exactly ambient pressure. If the shock waves, visible as shock diamonds, form outside the nozzle, the nozzle is known as simply overexpanded. However, if the shock waves form inside the nozzle, this is known as grossly overexpanded.
In an ideal world a rocket would continuously operate at peak efficiency, the condition where the nozzle is perfectly expanded throughout the entire flight. This can intuitively be explained using the rocket thrust equation introduced in the previous post:

F = m_dot * v_e + (p_e - p_0) * A_e

where m_dot is the propellant mass flow rate, v_e the exhaust velocity, p_e the nozzle exit pressure, p_0 the ambient pressure and A_e the nozzle exit area.
Peak efficiency of the rocket engine occurs when the exit pressure equals the ambient pressure, p_e = p_0, such that the pressure thrust contribution is equal to zero. This is the condition of peak efficiency as the contribution of the momentum thrust is maximised without any penalties from over- or underexpanding the nozzle. An underexpanded nozzle means that p_e > p_0, and while this condition provides extra pressure thrust, the exhaust velocity v_e is lower and some of the energy that has gone into combusting the gases has not been converted into kinetic energy. In an overexpanded nozzle the pressure differential is negative, p_e - p_0 < 0. In this case, v_e is fully developed but the overexpansion induces a drag force on the rocket. If the nozzle is grossly overexpanded such that a shock wave occurs inside the nozzle, p_e may still be greater than p_0, but the supersonic jet separates from the divergent nozzle prematurely (see diagram below) such that the effective exit area A_e decreases. As the rocket climbs, p_0 decreases and therefore the thrust created by the nozzle increases. However, while the flow remains separated from the divergent nozzle, the effective exit area A_e stays reduced, so some of the increased efficiency of the reduced ambient pressure is negated.
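Plugging illustrative engine numbers into the thrust equation shows the effect of the pressure term in the three regimes (all values are invented and roughly first-stage scale):

```python
def thrust(m_dot, v_e, p_e, p_0, A_e):
    """Rocket thrust equation: momentum thrust plus pressure thrust."""
    return m_dot * v_e + (p_e - p_0) * A_e

# Illustrative engine numbers (assumed, roughly first-stage scale)
m_dot, v_e, A_e, p_e = 500.0, 3000.0, 1.0, 70e3  # kg/s, m/s, m^2, Pa

for label, p0 in (("sea level", 101.3e3), ("design altitude", 70e3), ("vacuum", 0.0)):
    print(f"{label:15s}: {thrust(m_dot, v_e, p_e, p0, A_e)/1e3:.0f} kN")

# The pressure term penalises the overexpanded sea-level case and boosts
# the underexpanded vacuum case; at the design altitude it vanishes.
```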
A perfectly expanded nozzle is only possible using a variable throat area or variable exit area nozzle to counteract the decrease in ambient pressure with increasing altitude. As a result, fixed area nozzles become progressively underexpanded as the ambient pressure decreases during flight, and this means most nozzles are grossly overexpanded at takeoff. Various exotic nozzles such as plug nozzles, stepped nozzles and aerospikes have been proposed to adapt to changes in ambient pressure and increase thrust at higher altitudes. The extreme scenario obviously occurs once the rocket has left the Earth's atmosphere. The nozzle is now so grossly underexpanded that the extra weight of a longer divergent section would outweigh any performance gained from expanding the exhaust further.
Thus we can see that, just as in the case of the propellants, the design of individual components is not a straightforward matter and requires detailed tradeoffs between different configurations. This is what makes rocket science such a difficult endeavour.
In a previous post we covered the history of rocketry over the last 2000 years. By means of the Tsiolkovsky rocket equation we also established that the thrust produced by a rocket is equal to the mass flow rate of the expelled gases multiplied by their exit velocity. In this way, chemically fuelled rockets are much like traditional jet engines: an oxidising agent and fuel are combusted at high pressure in a combustion chamber and then ejected at high velocity. So the means of producing thrust are similar, but the mechanism varies slightly:
- Jet engine: A multistage compressor increases the pressure of the air impinging on the engine nacelle. The compressed air is mixed with fuel and then combusted in the combustion chamber. The hot gases are expanded in a turbine and the energy extracted from the turbine is used to power the compressor. The mass flow rate and velocity of the gases leaving the jet engine determine the thrust.
- Chemical rocket engine: A rocket differs from the standard jet engine in that the oxidiser is also carried on board. This means that rockets work in the absence of atmospheric oxygen, i.e. in space. The rocket propellants can be in solid form ignited directly in the propellant storage tank, or in liquid form pumped into a combustion chamber at high pressure and then ignited. Compared to standard jet engines, rocket engines have much higher specific thrust (thrust per unit weight), but are less fuel efficient.
In this post we will have a closer look at the operating principles and equations that govern rocket design. An introduction to rocket science if you will…
The fundamental operating principle of rockets can be summarised by Newton’s laws of motion. The three laws:
- Objects at rest remain at rest and objects in motion remain at constant velocity unless acted upon by an unbalanced force.
- Force equals mass times acceleration (or $F = ma$).
- For every action there is an equal and opposite reaction.
are known to every high school physics student. But how exactly do they relate to the motion of rockets?
Let us start with the two qualitative laws (the first and third), and then return to the more quantitative second law.
Well, the first law simply states that to change the velocity of the rocket, either from rest or from a finite non-zero velocity, we require the action of an unbalanced force. Hence, the thrust produced by the rocket engines must be greater than the forces slowing the rocket down (friction) or pulling it back to earth (gravity). Fundamentally, Newton's first law also applies to the expulsion of the propellants: the internal pressure of the combustion gases inside the rocket must be greater than the outside atmospheric pressure in order for the gases to escape through the rocket nozzle.
A more interesting implication of Newton’s first law is the concept escape velocity. As the force of gravity reduces with the square of the distance from the centre of the earth (), and drag on a spacecraft is basically negligible once outside the Earth’s atmosphere, a rocket travelling at 40,270 km/hr (or 25,023 mph) will eventually escape the pull of Earth’s gravity, even when the rocket’s engines have been switched off. With the engines switched off, the gravitational pull of earth is slowing down the rocket. But as the rocket is flying away from Earth, the gravitational pull is simultaneously decreasing at a quadratic rate. When starting at the escape velocity, the initial inertia of the rocket is sufficient to guarantee that the gravitational pull decays to a negligible value before the rocket comes to a standstill. Currently, the spacecraft Voyager 1 and 2 are on separate journeys to outer space after having been accelerated beyond escape velocity.
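The 40,270 km/hr figure follows from equating kinetic energy with the gravitational potential energy that must be overcome, $v_{esc} = \sqrt{2GM/r}$. A quick Python check using standard values for Earth's mass and mean radius:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def escape_velocity(mass: float, radius: float) -> float:
    """Speed at which kinetic energy equals the gravitational binding energy."""
    return math.sqrt(2 * G * mass / radius)

v_esc = escape_velocity(M_EARTH, R_EARTH)
print(f"{v_esc:.0f} m/s = {v_esc * 3.6:.0f} km/h")  # ~11,190 m/s, i.e. roughly the 40,270 km/h quoted above
```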
At face value, Newton’s third law, the principle of action and reaction, is seemingly intuitive in the case of rockets. The action is the force of the hot, highly directed exhaust gases in one direction, which, as a reaction, causes the rocket to accelerate in the opposite direction. When we walk, our feet push against the ground, and as a reaction the surface of the Earth acts against us to propel us forward.
So what does a rocket “push” against? The molecules in the surrounding air? But if that’s the case, then why do rockets work in space?
The thrust produced by a rocket is a reaction to mass being hurled in one direction (i.e. to conserve momentum, more on that later) and not a result of the exhaust gases interacting directly with the surrounding atmosphere. As the rocket's exhaust is entirely composed of propellant originally carried on board, a rocket essentially propels itself by expelling parts of its mass at high speed in the opposite direction of the intended motion. This “self-cannibalisation” is why rockets work in the vacuum of space, when there is nothing to push against. So the rocket doesn't push against the air behind it at all, even when inside the Earth's atmosphere.
Newton’s second law gives us a feeling for how much thrust is produced by the rocket. The thrust is equal to the mass of the burned propellants multiplied by their acceleration. The capability of rockets to take-off and land vertically is testament to their high thrust-to-weight ratios. Compare this to commercial jumbo or military fighter jets which use jet engines to produce high forward velocity, while the upwards lift is purely provided by the aerodynamic profile of the aircraft (fuselage and wings). Vertical take-off and landing (VTOL) aircraft such as the Harrier Jump jet are the rare exception.
At any time during the flight, the ratio of net thrust to the instantaneous mass of the rocket is equal to the acceleration of the rocket. From Newton's second law,

$$a = \frac{F}{m}$$
where $F$ is the net thrust of the rocket (engine thrust minus drag) and $m$ is the instantaneous mass of the rocket. As propellant is burned, the mass of the rocket decreases such that the highest accelerations of the rocket are achieved towards the end of a burn. On the flipside, the rocket is heaviest on the launch pad, such that the engines have to produce maximum thrust to get the rocket away from the launch pad quickly (determined by the net acceleration $a = F/m - g$).
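As a sketch of this effect, the snippet below applies $a = F/m$ to a hypothetical rocket with made-up thrust and mass figures; the same thrust produces far more acceleration near burnout than on the pad:

```python
def acceleration(thrust: float, mass: float) -> float:
    """Newton's second law rearranged: a = F / m."""
    return thrust / mass

# Hypothetical figures: constant net thrust, masses at ignition and burnout.
F = 7.6e6                                # net thrust, N (made-up, first-stage scale)
m_full, m_empty = 550_000.0, 150_000.0   # rocket mass in kg, full vs empty tanks

print(acceleration(F, m_full))   # lowest acceleration, on the launch pad
print(acceleration(F, m_empty))  # highest acceleration, at the end of the burn
```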
However, Newton’s second law only applies to each instantaneous moment in time. It does not allow us to make predictions of the rocket velocity as fuel is depleted. Mass is considered to be constant in Newton’s second law, and therefore it does not account for the fact that the rocket accelerates more as fuel inside the rocket is depleted.
The rocket equation
The Tsiolkovsky rocket equation, however, takes this into account. The motion of the rocket is governed by the conservation of momentum. When the rocket and internal gases are moving as one unit, the overall momentum is the product of mass and velocity. Thus, for a total mass of rocket and gas $m$ moving at velocity $v$

$$p_1 = m v$$
As the gases are expelled through the rear of the rocket, the overall momentum of the rocket and fuel has to remain constant as long as no external forces act on the system. Thus, if a very small amount of gas $\mathrm{d}m$ is expelled at velocity $u$ relative to the rocket (either in the direction of $v$ or in the opposite direction), the overall momentum of the system (sum of rocket and expelled gas) is

$$p_2 = (m - \mathrm{d}m)(v + \mathrm{d}v) + \mathrm{d}m\,(v + u)$$
As $p_2$ has to equal $p_1$ to conserve momentum (and neglecting the second-order term $\mathrm{d}m\,\mathrm{d}v$)

$$m\,\mathrm{d}v + u\,\mathrm{d}m = 0$$
and by isolating the change in rocket velocity

$$\mathrm{d}v = -u\,\frac{\mathrm{d}m}{m}$$
The negative sign in the equation above indicates that the rocket always changes velocity in the opposite direction of the expelled gas, as intuitively expected. So if the gas is expelled in the opposite direction of the rocket motion (so $u$ is negative), then the change in the rocket velocity will be positive and it will accelerate.
At any time the quantity $m$ is equal to the residual mass of the rocket (dry mass + propellant) and $\mathrm{d}m$ denotes the small amount of mass expelled. If we assume that the expulsion velocity $u$ of the gas remains constant throughout, we can easily integrate the expression above to find the total change in velocity as the rocket changes from an initial mass $m_0$ to a final mass $m_f$. So,

$$\Delta v = -u \ln\frac{m_0}{m_f}$$
This equation is known as the Tsiolkovsky rocket equation and is applicable to any body that accelerates by expelling part of its mass at a specific velocity. Writing $v_e = -u$ for the (positive) exhaust speed, it takes the familiar form $\Delta v = v_e \ln(m_0/m_f)$. Even though the expulsion velocity may not remain constant during a real rocket launch, we can refer to an effective exhaust velocity that represents a mean value over the course of the flight.
The Tsiolkovsky rocket equation shows that the change in velocity attainable is a function of the exhaust jet velocity $v_e$ and the ratio of original take-off mass (structural mass + fuel $= m_0$) to final mass (structural mass + residual fuel $= m_f$). If all of the propellant is burned, the mass ratio $m_0/m_f$ expresses how much of the total mass is structural mass, and therefore provides some insight into the efficiency of the rocket.
In a nutshell, the greater the ratio of fuel to structural mass, the more propellant is available to accelerate the rocket and therefore the greater the maximum velocity of the rocket.
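The rocket equation is a one-liner in code. The numbers below are purely illustrative — a 10:1 mass ratio at a kerosene-class exhaust velocity:

```python
import math

def delta_v(v_exhaust: float, m0: float, mf: float) -> float:
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / mf)."""
    return v_exhaust * math.log(m0 / mf)

# Hypothetical vehicle: 3500 m/s exhaust velocity, 500 t at lift-off, 50 t dry.
print(delta_v(3500.0, 500_000.0, 50_000.0))  # ~8059 m/s
```

Note that doubling the propellant does not double the delta-v; the gain only grows with the logarithm of the mass ratio.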
So in the ideal case we want a bunch of highly reactive chemicals magically suspended above an ultralight means of combusting said fuel.
In reality this means we are looking for a rocket propelled by a fuel with high efficiency of turning chemical energy into kinetic energy, contained within a lightweight tankage structure and combusted by a lightweight rocket engine. But more on that later!
Often, we are more interested in the thrust created by the rocket and its associated acceleration $a$. By dividing the rocket equation above by a small time increment $\mathrm{d}t$ and again assuming $u$ to remain constant

$$m\,\frac{\mathrm{d}v}{\mathrm{d}t} = -u\,\frac{\mathrm{d}m}{\mathrm{d}t}$$
and the associated thrust acting on the rocket is

$$F = m\,\frac{\mathrm{d}v}{\mathrm{d}t} = \dot{m}\, v_e$$
where $\dot{m} = \mathrm{d}m/\mathrm{d}t$ is the mass flow rate of gas exiting the rocket and $v_e = -u$ is the effective exhaust velocity. If the differences in exit pressure of the combustion gases and surrounding ambient pressure are accounted for, this becomes:

$$F = \dot{m}\, v_e + (p_e - p_a)\, A_e$$
where $v_e$ is the jet velocity at the nozzle exit plane, $A_e$ is the flow area at the nozzle exit plane, i.e. the cross-sectional area of the flow where it separates from the nozzle, $p_e$ is the static pressure of the exhaust jet at the nozzle exit plane and $p_a$ the pressure of the surrounding atmosphere.
This equation provides some additional physical insight. The term $\dot{m} v_e$ is the momentum thrust, which is constant for a given throttle setting. The difference in gas exit and ambient pressure multiplied by the nozzle exit area provides additional thrust known as pressure thrust, $(p_e - p_a) A_e$. With increasing altitude the ambient pressure decreases, and as a result, the pressure thrust increases. So rockets actually perform better in space because the ambient pressure around the rocket is negligibly small. However, if the exhaust jet is overexpanded and separates from the divergent nozzle prematurely, the effective exit area $A_e$ decreases and some of this benefit is negated. For now it will suffice to say that pressure thrust typically increases thrust by around 30% from launchpad to leaving the atmosphere, but we will return to the physics behind this in the next post.
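The two thrust terms are easy to tabulate. The sketch below uses invented engine figures (not any real engine's data) to show how the same engine gains roughly 30% thrust between sea level and vacuum purely through the pressure term:

```python
def thrust(m_dot: float, v_e: float, p_e: float, p_a: float, A_e: float) -> float:
    """Momentum thrust plus pressure thrust: F = m_dot*v_e + (p_e - p_a)*A_e."""
    return m_dot * v_e + (p_e - p_a) * A_e

# Hypothetical engine: 500 kg/s at 3000 m/s, 4 m^2 exit area, 40 kPa exit pressure.
sea_level = thrust(500.0, 3000.0, 40_000.0, 101_325.0, 4.0)  # p_a = 1 atm
vacuum = thrust(500.0, 3000.0, 40_000.0, 0.0, 4.0)           # p_a = 0

print(sea_level, vacuum)  # the same engine produces noticeably more thrust in vacuum
```

At sea level the pressure term is negative (the nozzle is overexpanded); in vacuum it is always positive.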
Impulse and specific impulse
The overall amount of thrust is typically not used as an indicator for rocket performance. Better indicators of an engine's performance are the total and specific impulse figures. Ignoring any external forces (gravity, drag, etc.) the impulse is equal to the change in momentum of the rocket (mass times velocity) and is therefore a better metric to gauge how much mass the rocket can propel and to what maximum velocity. For a change in momentum $\Delta(mv)$ the impulse is

$$I = \int F\,\mathrm{d}t = \Delta(mv)$$
So to maximise the impulse imparted on the rocket we want to maximise the amount of thrust acting over the burn interval. If the burn period is broken into a number of finite increments $\Delta t_i$, then the total impulse is given by

$$I_{tot} = \sum_i F_i\,\Delta t_i$$
Therefore, impulse is additive and the total impulse of a multistage rocket is equal to the sum of the impulse imparted by each individual stage.
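Because impulse is additive, the total impulse of a multi-stage burn is just a sum of thrust × burn-time products. A minimal sketch with hypothetical stage figures:

```python
def total_impulse(stages: list[tuple[float, float]]) -> float:
    """Sum of thrust * burn-time contributions, one (F, dt) pair per stage."""
    return sum(F * dt for F, dt in stages)

# Hypothetical two-stage burn: 1 MN for 120 s, then 200 kN for 300 s.
print(total_impulse([(1.0e6, 120.0), (2.0e5, 300.0)]))  # 1.8e8 N*s
```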
By specific impulse we mean the net impulse imparted per unit weight of propellant. It measures the efficiency with which combustion of the propellant is converted into impulse. The specific impulse is therefore a metric related to a specific propellant system (fuel + oxidiser) and essentially normalises the effective exhaust velocity by the standard acceleration of gravity:

$$I_{sp} = \frac{v_e}{g_0}$$
where $v_e$ is the effective exhaust velocity and $g_0 = 9.81\ \mathrm{m/s^2}$. Different fuel and oxidiser combinations have different values of $I_{sp}$ and therefore different exhaust velocities.
A typical liquid hydrogen/liquid oxygen rocket will achieve an $I_{sp}$ of around 450 s with exhaust velocities approaching 4500 m/s, whereas kerosene and liquid oxygen combinations are slightly less efficient with $I_{sp}$ around 350 s and $v_e$ around 3500 m/s. Of course, a propellant with higher values of $I_{sp}$ is more efficient as more thrust is produced per unit of propellant.
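The conversion between exhaust velocity and specific impulse is a single division by $g_0$, and the two propellant classes quoted above check out:

```python
G0 = 9.81  # standard gravity, m/s^2

def specific_impulse(v_exhaust: float) -> float:
    """Isp in seconds: effective exhaust velocity normalised by g0."""
    return v_exhaust / G0

print(specific_impulse(4500.0))  # ~459 s, hydrogen/oxygen class
print(specific_impulse(3500.0))  # ~357 s, kerosene/oxygen class
```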
Delta-v and mass ratios
The Tsiolkovsky rocket equation can be used to calculate the theoretical upper limit in total velocity change, called delta-v, for a certain amount of propellant mass burned at a constant exhaust velocity $v_e$. At an altitude of 200 km an object needs to travel at 7.8 km/s to inject into low earth orbit (LEO). If we start from rest, this means a delta-v equal to 7.8 km/s. Accounting for frictional losses and gravity, the actual requirement rocket scientists need to design for is just shy of $\Delta v = 10$ km/s. So assuming a lower-bound effective exhaust velocity of 3500 m/s, we require a mass ratio of

$$\frac{m_0}{m_f} = e^{\Delta v / v_e} = e^{10{,}000/3500} \approx 17.4$$
to reach LEO. This means that the original rocket on the launch pad is 17.4 times heavier than when all the rocket fuel is depleted!
Just to put this into perspective, this means that the mass of fuel inside the rocket is SIXTEEN times greater than the dry structural mass of tanks, payload, engine, guidance systems etc. That's a lot of fuel!

The ratio of the rocket's initial mass to its final mass

$$R = \frac{m_0}{m_f}$$

is known as the mass ratio. In some cases, the reciprocal of the mass ratio is used to calculate the mass fraction:

$$\zeta = 1 - \frac{1}{R} = \frac{m_0 - m_f}{m_0}$$

The mass fraction is necessarily always smaller than 1, and in the above case is equal to $1 - 1/17.4 \approx 0.94$.
So 94% of this rocket’s mass is fuel!
Such figures are by no means out of the ordinary. In fact, the Space Shuttle had a mass ratio in this ballpark (15.4 = 93.5% fuel) and Europe’s Ariane V rocket has a mass ratio of 39.9 (97.5% fuel).
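These mass ratio and mass fraction figures can be reproduced in a few lines, using the delta-v of 10 km/s and the 3500 m/s exhaust velocity assumed above:

```python
import math

def mass_ratio(delta_v: float, v_exhaust: float) -> float:
    """Required m0/mf from the rearranged rocket equation: R = exp(dv / v_e)."""
    return math.exp(delta_v / v_exhaust)

def propellant_fraction(ratio: float) -> float:
    """Share of take-off mass that is propellant: 1 - mf/m0."""
    return 1.0 - 1.0 / ratio

R = mass_ratio(10_000.0, 3500.0)
print(R, propellant_fraction(R))  # ~17.4 and ~0.94, as in the text
```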
If anything, flying a rocket means being perched precariously on top of a sea of highly explosive chemicals!
The reason for the incredibly high amount of fuel is the exponential term in the above equation. The required mass ratio grows exponentially with the delta-v we demand: each additional increment of delta-v multiplies, rather than adds to, the amount of propellant we must carry. Conversely, for every piece of extra equipment, e.g. payload, we stick into the rocket, an exponentially larger amount of propellant is needed to maintain the same delta-v.
In reality, the situation is obviously more complex. The point of a rocket is to carry a certain payload into space and the distance we want to travel is governed by a specific amount of delta-v (see figure to the right). For example, getting to the Moon requires a delta-v of approximately 16.4 km/s which implies a whopping mass ratio of 108.4. Therefore, if we wish to increase the payload mass, we need to simultaneously increase propellant mass to keep the mass ratio at 108.4. However, increasing the amount of fuel increases the loads acting on the rocket, and therefore more structural mass is required to safely get the rocket to the Moon. Of course, increasing structural mass similarly increases our fuel requirement, and off we go on a nice feedback loop…
This simple example explains why the mass ratio is a key indicator of a rocket’s structural efficiency. The higher the mass ratio the greater the ratio of delta-v producing propellant to non-delta-v producing structural mass. All other factors being equal, this suggests that a high mass ratio rocket is more efficient because less structural mass is needed to carry a set amount of propellant.
The optimal rocket is therefore propelled by a high specific impulse fuel mixture (for high exhaust velocity), with minimal structural mass required to contain the propellant and resist flight loads, and minimal additional auxiliary components such as guidance systems, attitude control, etc.
For this reason, early rocket stages typically use high-density propellants. The higher density means the propellants take up less space per unit mass. As a result, the tank structure holding the propellant is more compact as well. For example, the Saturn V rocket used the slightly lower specific impulse combination of kerosene and liquid oxygen for the first stage, and the higher specific impulse propellants liquid hydrogen and liquid oxygen for later stages.
Closely related to this is the idea of staging. Once a certain amount of fuel within the tanks has been used up, it is beneficial to shed the unnecessary structural mass that was previously used to contain the fuel but is no longer contributing to delta-v. In fact, for high delta-v missions, such as getting into orbit, the total dry mass of the rockets we use today is too great to be able to accelerate to the desired delta-v. Hence the idea of multi-stage rockets: we connect multiple rockets in stages, incrementally discarding those parts of the structural mass that are no longer needed, thereby increasing the mass ratio and delta-v capacity of the residual pieces of the rocket.
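A toy comparison shows why staging pays off. The masses below are entirely invented, but the same total propellant and structure delivers more delta-v when the empty first-stage casing is dropped mid-flight:

```python
import math

def stage_delta_v(v_e: float, m0: float, mf: float) -> float:
    """Tsiolkovsky's equation for a single burn; masses in any consistent unit."""
    return v_e * math.log(m0 / mf)

v_e = 3500.0  # m/s, assumed the same for both vehicles

# Two-stage vehicle (hypothetical masses in tonnes): stage 1 burns 100 t -> 40 t,
# then 30 t of empty casing is discarded, leaving stage 2 to burn 10 t -> 4 t.
two_stage = stage_delta_v(v_e, 100.0, 40.0) + stage_delta_v(v_e, 10.0, 4.0)

# Single-stage vehicle carrying the identical propellant (66 t) and structure
# (34 t) all the way to burnout: 100 t -> 34 t.
single_stage = stage_delta_v(v_e, 100.0, 34.0)

print(f"two-stage: {two_stage:.0f} m/s, single-stage: {single_stage:.0f} m/s")
```

Discarding dead weight partway through the burn raises the effective mass ratio of what remains, which is exactly the advantage described above.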
The cost of getting a rocket on to the launch pad can roughly be split into three components:
- Propellant cost.
- Cost of dry mass, i.e. rocket casing, engines and auxiliary units.
- Operational and labour costs.
As we saw in the last section, more than 90% of a rocket's take-off mass is propellant. However, the specific cost (cost per kg) of the propellants is multiple orders of magnitude smaller than the specific cost of the rocket's dry mass, i.e. the raw material costs plus the operational costs required to manufacture and test the components. A typical propellant combination of kerosene and liquid oxygen costs around $2/kg, whereas the dry mass cost of an unmanned orbital vehicle is at least $10,000/kg. As a result, the propellant cost of flying into low earth orbit is basically negligible.
The incredibly high dry mass costs have two sources. First, the raw materials, predominantly high-grade aerospace metals, are not prohibitively expensive in themselves, but they cannot be bought at scale because of the limited number of rockets being manufactured. Second, the criticality of reducing structural mass for maximising delta-v means that very tight safety factors are employed. Operating a tight safety-factor design philosophy while ensuring sufficient safety and reliability under the extreme load conditions exerted on the rocket means that manufacturing standards and quality control measures are by necessity state-of-the-art. Such highly specialised procedures significantly drive up costs.
To clear these economic hurdles, some have proposed manufacturing simple expendable rockets at scale, while others are focusing on reusable rockets. The former approach will likely only work for smaller unmanned rockets and is being pursued by companies such as Rocket Lab Ltd. The Space Shuttle was an attempt at the latter approach that did not live up to its potential: the servicing costs associated with the reusable heat shield were unexpectedly high and ultimately contributed to the retirement of the Shuttle. Most recently, Elon Musk and SpaceX have picked up the ball and have successfully designed a reusable first stage.
The principles outlined above set the landscape of the type of rocket we want to design. Ideally: high specific impulse propellants suspended in a lightweight yet strong tankage structure above an efficient means of combustion.
Some of the more detailed questions rocket engineers are faced with are:
- What propellants to use to do the job most efficiently and at the lowest cost?
- How to expel and direct the exhaust gases most efficiently?
- How to control the reaction safely?
- How to minimise the mass of the structure?
- How to control the attitude and accuracy of the rocket?
We will address these questions in the next part of this series.
 Rolls-Royce plc (1996). The Jet Engine. Fifth Edition. Derby, England.
Rocket technology has evolved for more than 2000 years. Today's rockets are a product of a long tradition of ingenuity and experimentation, and combine technical expertise from a wide array of engineering disciplines. Very few, if any, of humanity's inventions are designed to withstand equally extreme conditions. Rockets are subjected to awesome g-forces at lift-off, experience extreme hot spots in places where aerodynamic friction acts most strongly, and extreme cold where liquid hydrogen and oxygen are stored at cryogenic temperatures. Operating a rocket is a balancing act, and the line between a successful launch and a catastrophic blow-up is often razor thin. No other engineering system rivals the complexity and hierarchy of technologies that need to interface seamlessly to guarantee sustained operation. It is no coincidence that “rocket science” is the quintessential cliché for the mind-blowingly complicated.
Fortunately for us, we live in a time where rocketry is undergoing another golden period. Commercial rocket companies like SpaceX and Blue Origin are breathing fresh air into an industry that has traditionally been dominated by government-funded space programs. But even the incumbent companies are not resting on their laurels, and are developing new powerful rockets for deep-space exploration and missions to Mars. Recent blockbuster movies such as Gravity, Interstellar and The Martian are an indication that space adventures are once again stirring the imagination of the public.
What better time than now to look back at the past 2000 years of rocketry, investigate where past innovation has taken us and look ahead to what is on the horizon? It’s certainly impossible to cover all of the 51 influential rockets in the chart below but I will try my best to provide a broad brush stroke of the early beginnings in China to the Space Race and beyond.
The history of rocketry can be loosely split into two eras: first, early pre-scientific tinkering, and second, the post-Enlightenment scientific approach. The underlying principle of rocket propulsion has largely remained the same, whereas the detailed means of operation and our approach to developing rocketry have changed a great deal.

The fundamental principle of rocket propulsion, spewing hot gases through a nozzle to induce motion in the opposite direction, is nicely illustrated by two historic examples. The Roman writer Aulus Gellius tells a story of Archytas, who, sometime around 400 BC, built a flying pigeon out of wood. The pigeon was held aloft by a jet of steam or compressed air escaping through a nozzle. Three centuries later, Hero of Alexandria invented the aeolipile based on the same principle of using escaping steam as a propulsive fluid. In the aeolipile, a hollow sphere was connected to a water bath via tubing, which also served as a primitive type of bearing, suspending the sphere in mid-air. A fire beneath the water basin created steam which was subsequently forced to flow into the sphere via the connected tubing. The only way for the gas to escape was through two L-shaped outlets pointing in opposite directions. The escaping steam induced a moment about the hinged support, effectively rotating the sphere about its axis.
In both these examples, the motion of the device is governed by the conservation of momentum. When the rocket and internal gases are moving as one unit, the overall momentum is the product of mass and velocity. Thus, for a total mass of rocket and gas, $m$, moving at velocity $v$

$$p_1 = m v$$
As the gases are expelled through the rear of the rocket, the overall momentum of the rocket and fuel has to remain constant as long as no external forces are acting on the system. Thus, if a very small amount of gas $\mathrm{d}m$ is expelled at velocity $u$ relative to the rocket (either in the direction of $v$ or in the opposite direction), the overall momentum of the system is

$$p_2 = (m - \mathrm{d}m)(v + \mathrm{d}v) + \mathrm{d}m\,(v + u)$$
As $p_2$ has to equal $p_1$ to conserve momentum

$$m\,\mathrm{d}v + u\,\mathrm{d}m = 0$$
and by isolating the change in rocket velocity

$$\mathrm{d}v = -u\,\frac{\mathrm{d}m}{m}$$
The negative sign in the equation above indicates that the rocket always changes velocity in the opposite direction of the expelled gas. Hence, if the gas is expelled in the opposite direction of the motion (i.e. $u$ is negative), then the change in the rocket velocity will be positive (i.e. it will accelerate).
At any time the quantity $m$ is equal to the residual mass of the rocket (dry mass + propellant) and $\mathrm{d}m$ denotes the small amount of mass expelled. If we assume that the expulsion velocity $u$ of the gas remains constant throughout, we can easily integrate the above expression to find the incremental change in velocity as the total rocket mass (dry mass + propellant) changes from an initial mass $m_0$ to a final mass $m_f$. Hence,

$$\Delta v = -u\,\ln\frac{m_0}{m_f}$$
This equation is known as the Tsiolkovsky rocket equation (more on him later) and is applicable to any body that accelerates by expelling part of its mass at a specific velocity.
Often, we are more interested in the thrust created by the rocket and its associated acceleration $a$. Hence, by dividing the equation for $\mathrm{d}v$ by a small time increment $\mathrm{d}t$

$$m\,\frac{\mathrm{d}v}{\mathrm{d}t} = -u\,\frac{\mathrm{d}m}{\mathrm{d}t}$$
and the associated thrust acting on the rocket is

$$F = \dot{m}\, v_e$$
where $\dot{m}$ is the mass flow rate of gas exiting the rocket and $v_e = -u$ is the effective exhaust velocity. This simple equation captures the fundamental physics of rocket propulsion. A rocket creates thrust either by expelling more of its mass at a higher rate (greater $\dot{m}$) or by increasing the velocity $v_e$ at which the mass is expelled. In the ideal case that's it! (By idealised we mean a constant exhaust velocity and no external forces, e.g. aerodynamic drag in the atmosphere or gravity. In actual calculations of the required propellant mass these forces and other efficiency-reducing factors have to be included.)
A plot of the rocket equation highlights one of the most pernicious conundrums of rocketry: the amount of fuel required (i.e. the mass ratio $m_0/m_f$) to accelerate the rocket through a velocity change $\Delta v$ at a fixed effective exhaust velocity increases exponentially as we increase the demand for greater $\Delta v$. As the cost of a rocket is closely related to its mass, this explains why it is so expensive to propel anything of meaningful size into orbit ($\approx$ 28,800 km/hr (18,000 mph) for low-earth orbit).

The early beginnings

The wood pigeon and aeolipile do not resemble anything that we would recognise as a rocket. In fact, the exact date when rockets first appeared is still unresolved. Records show that the Chinese developed gunpowder, a mixture of saltpetre, sulphur and charcoal dust, at around 100 AD. Gunpowder was used to create colourful sparks, smoke and explosive devices out of hollow bamboo sticks, closed off at one end, for religious festivals. Perhaps some of these bamboo tubes started shooting off or skittering along the ground; in any case, the Chinese started tinkering with the gunpowder-filled bamboo sticks and attached them to arrows. Initially the arrows were launched in the traditional way using bows, creating a form of early incendiary bomb, but later the Chinese realised that the bamboo sticks could launch themselves just by the thrust produced by the escaping hot gases.
The first documented use of such a “true” rocket was during the battle of Kai-Keng between the Chinese and Mongols in 1232. During this battle the Chinese managed to hold the Mongols at bay using a primitive form of a solid-fueled rocket. A hollow tube was capped at one end, filled with gunpowder and then attached to a long stick. The ignition of the gunpowder increased the pressure inside the hollow tube and forced some of the hot gas and smoke out through the open end. As governed by the law of conservation of momentum, this created thrust to propel the rocket in the direction of the capped end of the tube, with the long stick acting as a primitive guidance system, very much reminiscent of the firework “rockets” we use today.
According to a Chinese legend, Wan Hu, a local official during the 16th century Ming dynasty, constructed a chair with 47 gunpowder bamboo rockets attached, and in some versions of the legend supposedly fitted kite wings as well. The rocket chair was launched by igniting all 47 bamboo rockets simultaneously, and apparently, after the commotion was over, Wan Hu was gone. Some say he made it into space, and is now the “Man in the Moon”. Most likely, Wan Hu suffered the first ever launch pad failure.
One theory is that rockets were brought to Europe via the 13th century Mongol conquests. In England, Roger Bacon developed a more powerful gunpowder (75% saltpetre, 15% carbon and 10% sulfur) that increased the range of rockets, while Jean Froissart improved aiming accuracy by launching rockets through tubes, an early form of launch pad. By the Renaissance, the use of rockets for weaponry fell out of fashion and experimentation with fireworks increased instead. In the late 16th century, a German tinkerer, Johann Schmidlap, experimented with staged rockets, an idea that is the basis for all modern rockets. Schmidlap fitted a smaller second-stage rocket on top of a larger first-stage rocket; once the first stage burned out, the second stage continued to propel the rocket to higher altitudes. At about the same time, Kazimierz Siemienowicz, a Polish-Lithuanian commander in the Polish Army, published a manuscript that included designs for multi-stage rockets and delta-wing stabilisers intended to replace the long rods then in use.
The scientific method meets rocketry
The scientific groundwork of rocketry was laid during the Enlightenment by none other than Sir Isaac Newton. His three laws of motion,
1) In a particular reference frame, a body will stay in a state of constant velocity (moving or at rest) unless a net force is acting on the body
2) The net force acting on a body causes an acceleration that is proportional to the body’s inertia (mass), i.e. F=ma
3) A force exerted by one body on another induces an equal and opposite reaction force on the first body
are known to every student of basic physics. In fact, these three laws were probably intuitively understood by early rocket designers, but formalising the principles meant they could consciously be used as design guidelines. The first law explains why rockets move at all: without propulsive thrust the rocket remains stationary. The second quantifies the amount of thrust produced by a rocket at a specific instant in time, i.e. for a specific mass $m$. (Note that Newton's second law is only valid for constant-mass systems and is therefore not equivalent to the conservation of momentum approach described above. When mass varies, an equation that explicitly accounts for the changing mass has to be used.) The third law explains that the expulsion of mass produces, in reaction, a thrust force on the rocket.
In the 1720s, at around the time of Newton's death, researchers in the Netherlands, Germany and Russia started to use Newton's laws as tools in the design of rockets. The Dutch professor Willem Gravesande built rocket-propelled cars by forcing steam through a nozzle. In Germany and Russia rocket designers started to experiment with larger rockets. These rockets were powerful enough that the hot exhaust flames burnt deep holes into the ground before launching. The British colonial wars of 1792 and 1799 saw the use of Indian rocket fire against the British army. Hyder Ali and his son Tipu Sultan, the rulers of the Kingdom of Mysore in India, developed the first iron-cased rockets in 1792 and then used them against the British in the Anglo-Mysore Wars.
Casing the propellant in iron, which extended range and thrust, was more advanced technology than anything the British had seen until then, and inspired by this technology, the British Colonel William Congreve began to design his own rocket for the British forces. Congreve developed a new propellant mixture and fitted an iron tube with a conical nose to improve aerodynamics. Congreve’s rockets had an operational range of up to 5 km and were successfully used by the British in the Napoleonic Wars and launched from ships to attack Fort McHenry in the War of 1812. Congreve created both carbine ball-filled rockets to be used against land targets, and incendiary rockets to be used against ships. However, even Congreve’s rockets could not significantly improve on the main shortcomings of rockets: accuracy.
At the time, the effectiveness of rockets as a weapon rested not on their accuracy or explosive power, but on the sheer number that could be fired simultaneously at the enemy. Congreve’s rockets had achieved some basic attitude control by attaching a long stick to the explosive, but they still had a tendency to veer sharply off course. In 1844, the British designer William Hale developed spin stabilisation, the same principle exploited by the rifling in gun barrels, which removed the need for the rocket stick. Hale forced the escaping exhaust gases at the rear of the rocket to impinge on small vanes, causing the rocket to spin and stabilise (for the same reason that a gyroscope remains upright when spun on a table top). The use of rockets in war soon took a back seat once again when the Prussian army developed the breech-loading cannon with exploding warheads, which proved far superior to the best rockets.
The era of modern rocketry
Soon, new applications for rockets were being imagined. Jules Verne, always the visionary, put the dream of space flight into words in his science-fiction novel “De la Terre à la Lune” (From the Earth to the Moon), in which a projectile named Columbiad, carrying three passengers, is shot at the Moon from a giant cannon. The Russian schoolteacher Konstantin Tsiolkovsky (of rocket equation fame) proposed the idea of using rockets as vehicles for space exploration, but acknowledged that achieving such a feat would require significant improvements in the range of rockets. Tsiolkovsky understood that the speed and range of rockets were limited by the exhaust velocity of the propellant gases. In a 1903 report, “Research into Interplanetary Space by Means of Rocket Power”, he suggested the use of liquid propellants and formalised the rocket equation derived above, relating the rocket engine exhaust velocity to the change in velocity of the rocket itself (now known as the Tsiolkovsky rocket equation in his honour, although it had been discovered previously).
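The rocket equation is short enough to evaluate directly: the change in velocity is the exhaust velocity times the natural logarithm of the ratio of initial to final mass. A minimal Python sketch with illustrative numbers (the 90% propellant fraction and 2,500 m/s exhaust velocity are assumptions, not figures from Tsiolkovsky’s report):

```python
import math

def delta_v(v_exhaust, m_initial, m_final):
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / m1)."""
    return v_exhaust * math.log(m_initial / m_final)

# Illustrative case (assumption): propellant is 90% of the
# initial mass, exhaust velocity 2,500 m/s.
dv = delta_v(2500.0, m_initial=100.0, m_final=10.0)
print(f"delta-v: {dv:.0f} m/s")  # delta-v: 5756 m/s
```

The logarithm is what makes rocketry hard: doubling the achievable velocity change requires squaring the mass ratio, which is why Tsiolkovsky pressed for higher exhaust velocities rather than simply bigger rockets.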
Tsiolkovsky also advocated the development of orbital space stations, solar energy and the colonisation of the Solar System. One of his quotes is particularly prescient considering Elon Musk’s plans to colonise Mars:
“The Earth is the cradle of humanity, but one cannot live in the cradle forever” — In a letter written by Tsiolkovsky in 1911.
The American scientist Robert H. Goddard, now known as the father of modern rocketry, was equally interested in extending the range of rockets, especially in reaching higher altitudes than the gas balloons used at the time. In 1919 he published a short manuscript entitled “A Method of Reaching Extreme Altitudes”, which summarised his mathematical analysis and practical experiments in designing high-altitude rockets. Goddard proposed three ways of improving the solid-fuel technology of the day. First, combustion should be confined to a small chamber so that the propellant container would be subjected to much lower pressures. Second, Goddard advocated the use of multi-stage rockets to extend range, and third, he suggested the use of a supersonic de Laval nozzle to increase the exhaust speed of the hot gases.
Goddard started to experiment with solid-fuel rockets, trying various compounds and measuring the velocity of the exhaust gases. As a result of this work, Goddard became convinced of Tsiolkovsky’s earlier insight that a liquid propellant would work better. The problem Goddard faced was that liquid-propellant rockets were an entirely new field of research: no one had ever built one, and the required system was much more complex than for a solid-fuelled rocket. Such a rocket would need separate tanks and pumps for the fuel and oxidiser, a combustion chamber to combine and ignite the two, and a turbine to drive the pumps (much like the turbine in a jet engine drives the compressor at the front). Goddard also added a de Laval nozzle, which cooled and accelerated the hot exhaust gases into a supersonic, highly directed jet, more than doubling the thrust and increasing engine efficiency from 2% to 64%! Despite these technical challenges, Goddard designed the first successful liquid-fuelled rocket, propelled by a combination of gasoline as fuel and liquid oxygen as oxidiser, and tested it on March 16, 1926. The rocket remained lit for 2.5 seconds and reached an altitude of 12.5 metres. Just like the Wright brothers’ first 40-yard flight in 1903, this feat seems unimpressive by today’s standards, but Goddard’s achievement put rocketry on an exponential growth curve that led to radical improvements over the next 40 years. Goddard himself continued to innovate: his rockets flew to higher and higher altitudes, he added a gyroscope system for flight control, and he introduced parachute recovery systems.
On the other side of the Atlantic, German scientists were beginning to play a major role in the development of rockets. Inspired by Hermann Oberth’s ideas on rocket travel, the mathematics of spaceflight and the practical design of rockets, published in his book “Die Rakete zu den Planetenräumen” (The Rocket into Planetary Space), a number of rocket societies and research institutes were founded in Germany. The German bicycle and car manufacturer Opel (now part of GM) began developing rocket-powered cars, and in 1928 Fritz von Opel drove the Opel-RAK.1 on a racetrack. In 1929 this design was extended to the Opel-Sander RAK 1-airplane, which crashed during its first flight in Frankfurt. In the Soviet Union, the Gas Dynamics Laboratory in Leningrad under the directorship of Valentin Glushko built more than 100 different engine designs, experimenting with different fuel injection techniques.
Under the directorship of Wernher von Braun and Walter Dornberger, the Verein für Raumschiffahrt, or Society for Space Travel, played a pivotal role in the development of the Vergeltungswaffe 2, also known as the V-2 rocket, the most advanced rocket of its time. The V-2 burned a mixture of alcohol as fuel and liquid oxygen as oxidiser, and it achieved great amounts of thrust by raising the mass flow rate of propellant to about 150 kg (330 lb) per second. The V-2 featured much of the technology we see on rockets today, such as turbopumps and guidance systems, and with a range of around 300 km (190 miles) it could be launched from the European mainland to bomb London during WWII. The 1,000 kg (2,200 lb) explosive warhead fitted in the tip of the V-2 was capable of devastating entire city blocks, but it still lacked the accuracy to reliably hit specific targets. Towards the end of WWII, German scientists were already planning much larger rockets, today known as Intercontinental Ballistic Missiles (ICBMs), that could be used to attack the United States, and were strapping rockets to aircraft either to power them or for vertical take-off.
With the fall of the Third Reich in April 1945, much of this technology fell into the hands of the Allies. The Allies’ own rocket programmes were far less sophisticated, so a race ensued to capture as much of the German technology as possible. The Americans alone captured 300 train loads of V-2 rocket parts and shipped them back to the United States. Furthermore, the most prominent German rocket scientists emigrated to the United States, partly because of the much better opportunities to develop rocketry there, and partly to escape the repercussions of having played a role in the Nazi war machine. The V-2 essentially evolved into the American Redstone rocket, which was used during the Mercury project.
The Space Race – to the moon and beyond
After WWII both the United States and the Soviet Union began heavily funding research into ICBMs, partly because these had the potential to carry nuclear warheads over long distances, and partly due to the allure of being the first to travel to space. In 1948, the US Army combined a captured V-2 rocket with a WAC Corporal rocket to build the largest two-stage rocket yet launched in the United States. This two-stage rocket was known as the “Bumper-WAC”, and over the course of six flights reached a peak altitude of 400 km (250 miles), roughly the altitude at which the International Space Station (ISS) orbits today.

Despite these developments, the Soviets were the first to put a man-made object, i.e. an artificial satellite, into orbit. Under the leadership of chief designer Sergei Korolev, the V-2 was copied and then improved upon in the R-1, R-2 and R-5 missiles. At the turn of the 1950s the German designs were abandoned in favour of the inventions of Aleksei Mikhailovich Isaev, which served as the basis for the first Soviet ICBM, the R-7. The R-7 was further developed into the Vostok rocket, which launched the first satellite, Sputnik I, into orbit on October 4, 1957, a mere 12 years after the end of WWII. The launch of Sputnik I was the first major news story of the space race. Only a couple of weeks later the Soviets successfully launched Sputnik II into orbit with the dog Laika onboard.
One of the problems that the Soviets did not solve was atmospheric re-entry. Any object wishing to orbit a planet requires enough speed such that the gravitational pull towards the planet is continuously offset by the curvature of the planet’s surface falling away beneath it. During re-entry, however, this speed causes the orbiting body to smash into the atmosphere, creating incredible amounts of heat. In 1951, H.J. Allen and A.J. Eggers discovered that a high-drag blunted shape, not a low-drag teardrop, counter-intuitively minimises re-entry heating by redirecting 99% of the energy into the surrounding atmosphere. Allen and Eggers’ findings were published in 1958 and were used in the Mercury, Gemini, Apollo and Soyuz manned space capsules. The design was later refined for the Space Shuttle, which flew re-entry at an extremely high angle of attack to induce a shock wave on the heat shield that deflected most of the heat away from it.
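The balance between speed and gravity can be made concrete with the circular orbit condition, v = sqrt(GM/r): at that speed, gravity supplies exactly the centripetal acceleration needed to keep the body falling around the planet rather than into it. A minimal Python sketch, with physical constants rounded and the altitude chosen to match the ISS orbit mentioned above:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def circular_orbital_velocity(altitude_m):
    """Speed for a circular orbit at the given altitude:
    v = sqrt(G * M / r), with r measured from Earth's centre."""
    r = R_EARTH + altitude_m
    return math.sqrt(G * M_EARTH / r)

# At roughly ISS altitude (~400 km):
v = circular_orbital_velocity(400e3)
print(f"{v / 1000:.1f} km/s")  # 7.7 km/s
```

Shedding those ~7.7 km/s is precisely what makes re-entry so violent, and why the blunt-body shape of Allen and Eggers mattered so much.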
The United States’ first satellite, Explorer I, would not follow until January 31, 1958. Explorer I weighed about 30 times less than the Sputnik II satellite, but the Geiger radiation counters it carried were used to make the first scientific discovery in outer space: the Van Allen radiation belts. Explorer I had originally been developed under the US Army, and in October 1958 the National Aeronautics and Space Administration (NASA), the successor to the National Advisory Committee for Aeronautics (NACA), was officially formed to oversee the space programme. Simultaneously, the Soviets developed the Vostok, Soyuz and Proton families of rockets from the original R-7 ICBM for use in the human spaceflight programme. In fact, the Soyuz rocket is still flying today, is the most frequently used and most reliable rocket system in history, and after the Space Shuttle’s retirement in 2011 became the only viable means of transport to the ISS. Similarly, the Proton rocket, also developed in the 1960s, is still being used to haul heavier cargo into low-Earth orbit.
Shortly after these initial satellite launches, NASA developed the experimental X-15 air-launched rocket-propelled aircraft, which, in 199 flights between 1959 and 1968, broke numerous flying records, including records for speed (7,274 km/h or 4,520 mph) and altitude (108 km or 67 miles). The X-15 also provided NASA with data on the optimal re-entry angles from space into the atmosphere.
The next milestone in the space race once again belonged to the Soviets. On April 12, 1961, the cosmonaut Yuri Gagarin became the first human to travel into space. Over a period of just under two hours, Gagarin orbited the Earth inside the Vostok 1 space capsule at around 300 km (190 miles) altitude, and after re-entry into the atmosphere ejected at an altitude of 6 km (20,000 feet) and parachuted to the ground. Gagarin instantly became the most famous Soviet on the planet, travelling around the world as a beacon of Soviet success and superiority over the West.
Shortly after Gagarin’s successful flight, the American astronaut Alan Shepard reached a suborbital altitude of 187 km (116 miles) in the Freedom 7 Mercury capsule. The Redstone rocket used to launch Shepard from Cape Canaveral did not quite have the power to send the Mercury capsule into orbit, and had suffered a series of embarrassing failures prior to the launch, increasing the pressure on the US rocket engineers. Nevertheless, a few weeks after Shepard’s flight, President John F. Kennedy delivered the now famous words before a joint session of Congress:
“This nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth.”
Despite the bold nature of this challenge, NASA’s Mercury project was already well underway in developing the technology that would eventually put the first human on the Moon. In February 1962, the more powerful Atlas missile propelled John Glenn into orbit, thereby restoring some form of parity between the USA and the Soviet Union. The last of the Mercury flights was flown in 1963, with Gordon Cooper orbiting the Earth for nearly 1.5 days. The family of Atlas rockets remains one of the most successful to this day. Apart from launching a number of astronauts into space during the Mercury project, the Atlas has been used to carry commercial, scientific and military satellites into orbit.
Following the Mercury missions, the Gemini project made significant strides towards a successful Moon flight. The Gemini capsule was propelled by an even more powerful ICBM, the Titan, and allowed astronauts to remain in space for up to two weeks, during which they gained their first experience of space-walking and of rendezvous and docking procedures with the Gemini spacecraft. An incredible ten Gemini missions were flown throughout 1965-66. The high success rate of the missions was testament to the improving reliability of NASA’s rockets and spacecraft, and allowed NASA engineers to collect invaluable data for the coming Apollo Moon missions. The Titan missile itself remains one of the most successful and long-lived rockets (1959-2005), having carried the Viking spacecraft to Mars, the Voyager probes to the outer solar system, and multiple heavy satellites into orbit. At about the same time, in the early 1960s, an entire family of versatile rockets, the Delta family, was being developed. The Delta family became the workhorse of the US space programme, achieving more than 300 launches with a reliability greater than 95%! Its versatility was based on the ability to tailor the lifting capability using different interchangeable stages and external boosters that could be added for heavier lifting.
At this point, the tide had mostly turned. The United States had been off to a slow start but had used the data from their early failures to improve the design and reliability of their rockets. The Soviets, while being more successful initially, could not achieve the same rate of launch success and this significantly hampered their efforts during the upcoming race to the moon.
To get to the Moon, a much more powerful rocket than the Titan or Delta would be needed. This now famous rocket, the 110.6 m (363 feet) tall Saturn V, consisted of three separate main rocket stages; the Apollo capsule with a small fourth propulsion stage for the return trip; and a two-staged lunar lander, with one stage for descending onto the Moon’s surface and the other for lifting back off it. The Saturn V was largely the brainchild and crowning achievement of Wernher von Braun, the original lead developer of the V-2 rocket in WWII Germany, and was capable of launching 140,000 kg (310,000 lb) into low-Earth orbit and 48,600 kg (107,100 lb) to the Moon. This launch capability dwarfed all previous rockets, and the Saturn V remains to this day the tallest, heaviest and most powerful rocket ever flown to operational status (last on the chart at the start of the piece). NASA’s efforts reached their glorious climax with the Apollo 11 mission on July 20, 1969, when astronaut Neil Armstrong became the first man to set foot on the Moon, a mere 11.5 years after the first successful launch of the Explorer I satellite. The Apollo 11 mission became the first of six successful Moon landings throughout the years 1969-1972. A smaller version of the Moon rocket, the Saturn IB, was also developed and used for some of the early Apollo test missions and later to transport three crews to the US space station Skylab.
The Space Shuttle
NASA’s final major innovation was the Space Shuttle. The idea behind the Space Shuttle was to design a reusable rocket system for carrying crew and payload into low-Earth orbit. The rationale was that manufacturing the rocket hardware is a major contributor to the overall launch costs, and that destroying the rocket stages after each launch is not cost effective. Imagine having to throw away your Boeing 747 or Airbus A380 every time you flew from London to New York; ticket prices would be many times what they are now. The Shuttle consisted of a winged, airplane-like orbiter that was boosted into orbit by liquid-propellant engines on the orbiter itself, fed from a massive orange external tank, with two solid rocket boosters attached to either side. After launch, the solid rocket boosters and external fuel tank were jettisoned, and the boosters recovered for future use. At the end of a mission, the orbiter re-entered Earth’s atmosphere, followed a tortuous zig-zag course, and glided unpowered to land on a runway like any other aircraft. NASA promised that the Shuttle would reduce launch costs by 90%. However, crash landings of the solid rocket boosters in water often damaged them beyond repair, and the effort required to service the orbiter heat shield, inspecting each of the 24,300 unique tiles separately, ultimately pushed the cost of putting a kilogram of payload into orbit above that of the Saturn V rocket that preceded it. The five Shuttles (Endeavour, Discovery, Challenger, Columbia and Atlantis) completed 135 missions between 1981 and 2011, with the tragic loss of the Challenger in 1986 and the Columbia in 2003. While the Shuttle facilitated the construction of the International Space Station and the installation of the Hubble space telescope in orbit, the ultimate goal of economically sustainable space travel was never achieved.
However, this goal is now on the agenda of commercial space companies such as SpaceX, Reaction Engines, Blue Origin, Rocket Lab and the Sierra Nevada Corporation.
After the demise of the Space Shuttle programme in 2011, the United States’ capability of launching humans into space was heavily restricted. NASA is currently working on a new Space Launch System (SLS), the aim of which is to extend NASA’s reach beyond low-Earth orbit and further out into the Solar System. Although the SLS is being designed and assembled by NASA, partners such as Boeing, United Launch Alliance, Orbital ATK and Aerojet Rocketdyne are co-developing individual components. The SLS specification as it stands would make it the most powerful rocket in history, and the SLS is therefore being developed in two stages (reminiscent of the Saturn IB and Saturn V rockets). First, a rocket with a payload capability of 70 metric tons (175,000 lb) is being developed from components of previous rockets. The goal of this heritage SLS is to conduct two lunar flybys with the Orion spacecraft, one unmanned and the other with a crew. Second, a more advanced version of the SLS with a payload capability of 130 metric tons (290,000 lb) to low-Earth orbit, about the same payload capacity as, and 20% more thrust than, the Saturn V rocket, is intended to carry scientific equipment, cargo and the manned Orion capsule into deep space. The first flight of an unmanned Orion capsule on a trip around the Moon is planned for 2018, while manned missions are expected by 2021-2023. By 2026 NASA plans to send a manned Orion capsule to an asteroid previously placed into lunar orbit by a robotic “capture-and-place” mission.

However, with the commercialisation of space travel, new entrants are now working on even more daunting goals. The SpaceX Falcon 9 rocket has proven to be a very reliable launch system (with a current success rate of 20 out of 22 launches). Furthermore, SpaceX was the first private company to successfully launch and recover an orbital spacecraft, the Dragon capsule, which regularly delivers supplies and new scientific equipment to the ISS.
Currently, the US relies on the Russian Soyuz rocket to bring astronauts to the ISS, but manned missions with the Dragon capsule are planned for the near future. The Falcon 9 rocket is a two-stage-to-orbit launch vehicle comprising nine SpaceX Merlin rocket engines fuelled by liquid oxygen and kerosene, with a payload capacity of 13 metric tons (29,000 lb) to low-Earth orbit. There have been three versions of the Falcon 9: v1.0 (retired), v1.1 (retired) and most recently the partially reusable full-thrust version, which on December 22, 2015 used propulsive recovery to land the first stage safely at Cape Canaveral. Efforts are now being made to extend the landing capabilities from land to sea barges. Furthermore, the Falcon Heavy with 27 Merlin engines (a central Falcon 9 rocket with two Falcon 9 first stages strapped to the sides) is expected to extend SpaceX’s lifting capacity to 53 metric tons to low-Earth orbit, making it the second most powerful rocket in use after NASA’s SLS. First flights of the Falcon Heavy are expected for late this year (2016). Of course, the ultimate goal of SpaceX’s CEO Elon Musk is to make humans a multi-planetary species, and to achieve this he is planning to send a colony of a million humans to Mars via the Mars Colonial Transporter, a space launch system of reusable rocket engines, launch vehicles and space capsules. SpaceX’s Falcon 9 rocket already has the lowest launch costs at $60 million per launch, and reliable re-usability should bring these costs down further over the next decade, such that a flight ticket to Mars could become enticing for at least a million of the richest people on Earth (or perhaps we could sell spots on “Mars – A Reality TV show“).
Blue Origin, the rocket company of Amazon founder Jeff Bezos, is taking a similar approach of vertical take-off and landing to achieve re-usability and lower launch costs. The company is on an incremental trajectory to extend its capabilities from suborbital to orbital flight, led by its motto “Gradatim Ferociter” (Latin for “step by step, ferociously”). Blue Origin’s New Shepard rocket underwent its first test flight in April 2015. In November 2015 the rocket landed successfully after a suborbital flight to 100 km (330,000 ft) altitude, and this was extended to 101 km (333,000 ft) in January 2016. Blue Origin hopes to extend its capabilities to human spaceflight by 2018.
Reaction Engines is a British aerospace company conducting research into space propulsion systems, focused on the Skylon reusable single-stage-to-orbit spaceplane. The Skylon would be powered by the SABRE engine, a rocket-based combined cycle, i.e. a combination of an air-breathing jet engine and a rocket engine in which both share the same flow path, reusable for about 200 flights. Reaction Engines believes that with this system the cost of carrying one kg (2.2 lb) of payload into low-Earth orbit can be reduced from around $1,500 today (early 2016) to around $900. The hydrogen-fuelled Skylon is designed to take off from a purpose-built runway and accelerate to Mach 5 at 28.5 km (85,500 feet) altitude using the atmosphere’s oxygen as oxidiser. This air-breathing part of the SABRE engine works on the same principles as a jet engine: a turbo-compressor raises the pressure of the incoming atmospheric air, preceded by a pre-cooler that cools the hot air impinging on the engine at hypersonic speeds. The compressed air is fed into a rocket combustion chamber, where it is ignited with liquid hydrogen. As in a standard jet engine, a high pressure ratio is crucial to pack as much of the oxidiser as possible into the combustion chamber and increase the thrust of the engine. As the natural source of oxygen runs out at high altitude, the engine switches to the internally stored liquid oxygen supplies, transforming it into a closed-cycle rocket and propelling the Skylon spacecraft into orbit. The theoretical advantages of the SABRE engine are its high fuel efficiency and low mass, which facilitate the single-stage-to-orbit approach. Reminiscent of the Shuttle, after deploying its payload of up to 15 tons (33,000 lb), the Skylon spacecraft would re-enter the atmosphere protected by a heat shield and land on a runway. The first ground tests of the SABRE engine are planned for 2019 and the first unmanned test flights are expected for 2025.
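One way to see the appeal of air-breathing is through the rocket equation: while the engine takes oxygen from the atmosphere, no oxidiser mass is carried, which acts like a large boost in effective exhaust velocity and shrinks the propellant fraction needed to reach orbit. A rough back-of-the-envelope sketch in Python; every number below is an illustrative assumption, not a SABRE or Skylon specification:

```python
import math

def propellant_fraction(delta_v, v_exhaust):
    """Propellant mass fraction from the rocket equation:
    m_prop / m0 = 1 - exp(-dv / v_e)."""
    return 1.0 - math.exp(-delta_v / v_exhaust)

DV_TO_ORBIT = 9400.0  # m/s, a typical total delta-v to low-Earth orbit

# Pure rocket all the way, hydrogen/oxygen-class v_e ~ 4,400 m/s:
f_pure = propellant_fraction(DV_TO_ORBIT, 4400.0)

# Fly the first ~1,500 m/s air-breathing at a much higher assumed
# effective v_e (no oxidiser carried), then pure rocket for the rest:
f_air = propellant_fraction(1500.0, 30000.0)
f_rocket = propellant_fraction(DV_TO_ORBIT - 1500.0, 4400.0)
combined = 1.0 - (1.0 - f_air) * (1.0 - f_rocket)

print(f"pure rocket:        {f_pure:.0%}")      # ~88% of lift-off mass
print(f"air-breathing start: {combined:.0%}")   # ~84% of lift-off mass
```

Even a few percentage points of lift-off mass freed from propellant translate into a much larger relative gain in structure and payload, which is the margin a single-stage-to-orbit vehicle lives or dies by.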
Sierra Nevada Corporation is working alongside NASA to develop the Dream Chaser spacecraft for transporting cargo and up to seven people to low-Earth orbit. The Dream Chaser is designed to launch on top of the Atlas V rocket (in place of the nose cone) and land conventionally by gliding onto a runway. The Dream Chaser looks a lot like a smaller version of the Space Shuttle, so intuitively one would expect the same cost inefficiencies as for the Shuttle. However, the engineers at Sierra Nevada say that two changes have been made to the Dream Chaser that should reduce the maintenance costs. First, the thrusters used for attitude control are ethanol-based, and therefore non-toxic and a lot less volatile than the hydrazine-based thrusters used by the Shuttle. This should allow maintenance of the Dream Chaser to begin immediately after landing and reduce the time between flights. Second, the thermal protection system is based on an ablative tile that can survive multiple flights and can be replaced in larger groups rather than tile-by-tile. The Dream Chaser is planned to undergo orbital test flights in November 2016.

Finally, the New Zealand-based firm Rocket Lab is developing the all-carbon-composite liquid-fuelled Electron rocket with a payload capability to low-Earth orbit of 110 kg (240 lb). Rocket Lab is thus focusing on high-frequency launches to transport low-mass payloads, e.g. nano-satellites, into orbit. The goal of Rocket Lab is to make access to space frequent and affordable, such that the rapidly evolving small-scale satellites that provide us with scientific measurements and high-speed internet can be launched reliably and quickly. The Rocket Lab system is designed to cost $5 million per launch at 100 launches a year and to use less fuel than a Boeing 737 flight from San Francisco to Los Angeles.
A special challenge that Rocket Lab faces is the development of all-carbon-composite liquid oxygen tanks to provide the mass efficiency required for this fuel efficiency. To date, the containment of cryogenic (super-cold) liquid fuels, such as liquid hydrogen and liquid oxygen, remains the domain of metallic alloys. Concerns still exist about potential leaks due to micro-cracks developing in the resin of the composite at cryogenic temperatures. In composites, there is a mismatch between the thermal expansion coefficients of the reinforcing fibre and the resin, which induces thermal stresses as the composite is cooled to cryogenic temperatures from its high-temperature/high-pressure curing process. The temperature and pressure cycles during the liquid oxygen/hydrogen fill-and-drain procedures then induce additional fatigue loading that can lead to cracks permeating the structure, through which hydrogen or oxygen molecules can easily pass. Such leaks pose a real explosion risk.
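The scale of the problem can be estimated with the standard expression for thermal stress in a constrained layer, σ ≈ E·Δα·ΔT. A minimal Python sketch with representative values for a carbon-fibre/epoxy system; the modulus, expansion mismatch and temperature swing below are generic textbook-order assumptions, not figures for the Electron tank:

```python
def thermal_stress(youngs_modulus_pa, delta_alpha_per_k, delta_t_k):
    """Approximate thermal stress in a fully constrained layer:
    sigma = E * delta_alpha * delta_T."""
    return youngs_modulus_pa * delta_alpha_per_k * delta_t_k

# Representative values (assumptions): epoxy modulus ~3.5 GPa,
# fibre/resin expansion mismatch ~50e-6 /K, cooling from a
# ~180 C cure down to liquid oxygen at -183 C (delta_T ~ 363 K).
sigma = thermal_stress(3.5e9, 50e-6, 363.0)
print(f"~{sigma / 1e6:.0f} MPa")  # ~64 MPa
```

A stress of this order is comparable to the tensile strength of typical epoxy resins, which is why micro-cracking under repeated thermal cycling is such a persistent concern.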
Where do we go from here?
As we have seen, over the last 2000 years rockets have evolved from simple toys and military weapons to complex machines capable of transporting humans into space. To date, rockets are the only viable gateway to places beyond Earth. Furthermore, we have seen that the development of rockets has not always followed a uni-directional path towards improvement. Our capability to send heavier and heavier payloads into space peaked with the development of the Saturn V rocket. This great technological leap was fuelled, to a large extent, by the competitive spirit of the Soviet Union and the United States: unprecedented funds were available to rocket scientists on both sides during the 1950s-1970s. Furthermore, dreamers and visionaries such as Jules Verne, Konstantin Tsiolkovsky and Gene Roddenberry sparked the imagination of the public and garnered support for the space programmes. After the 2003 Columbia disaster, public support for spending taxpayer money on often over-budget programmes understandably waned. However, the successes of the private space companies, their fierce competition and their visionary goals of colonising Mars are once again inspiring a younger generation. This is, once again, an exciting time for rocketry.
Tom Benson (2014). Brief History of Rockets. NASA URL: https://www.grc.nasa.gov/www/k-12/TRC/Rockets/history_of_rockets.html
NASA. A Pictorial History of Rockets. URL: https://www.nasa.gov/pdf/153410main_Rockets_History.pdf
John Partridge is the founder of the deep-sea instrumentation company Sonardyne, and graduated from the University of Bristol, my alma mater, with a degree in Mechanical Engineering in 1962. Since its founding in 1971, Sonardyne has developed into one of the leading instrumentation companies in oceanography, oil drilling, underwater monitoring and tsunami warning systems.
During my PhD graduation ceremony last week John Partridge received an honorary doctorate in engineering for his contributions to the field. His acceptance speech was shorter than most but packed a punch. Among others, he discussed the current state of engineering progress, the three essential characteristics an engineer should possess and his interests in engineering education.
The last topic is one close to my heart and one of the reasons this blog exists at all. I have transcribed Dr Partridge’s speech and you can find the full copy here, or alternatively listen to the speech here. What follows are excerpts from his speech that I found particularly interesting with some additional commentary on my part. All credit is due to Dr Partridge and any errors in transcribing are my own.
Straight off the bat Dr Partridge reminds us of the key skills required in engineering, namely inventiveness, mathematical analysis and decision making:
Now I am going to get a bit serious about engineering education. According to John R. Dixon, a writer on engineering education, the key skills required in engineering are inventiveness, analysis and decision making. Very few people have all three of Dixon’s specified skills, which is why engineering is best done as a group activity, and it is why I am totally indebted to my engineering colleagues in Sonardyne, particularly in compensating for my poor skills at mathematical analysis. Some of my colleagues joined Sonardyne straight from university and stayed until their retirement. But the really difficult part of running a business is decision making, which applies at all stages and covers a wide variety of subjects: technical, commercial, financial, legal. One incorrect decision can spell the end of a substantial company. In recent decades, bad decisions by chief executives have killed off large successful British companies, some of which had survived and prospered for over a century.
The key tenet of John R. Dixon’s teachings is that engineering design is essentially science-based problem solving with social human awareness. Hence, the character traits often attributed to successful engineering, for example intelligence, creativity and rationality (i.e. inventiveness and analysis), which are typically the focus of modern engineering degrees, are not sufficient in developing long-lasting engineering solutions. Rather, engineering education should focus on distilling a “well-roundedness”, in the American liberal arts sense of the word.
As Dr Partridge points out in his speech, this requires a basic understanding of decision making under uncertainty, as pioneered by Kahneman and Tversky, and of how to deal with randomness or mitigate the effects of Black Swan events (see Taleb). Second, Dr Partridge acknowledges that combining these characteristics in a single individual is difficult, if not impossible, such that companies are essential in developing good engineering solutions. This means that soft skills, such as teamwork and leadership, need to be developed simultaneously, and a basic understanding of business (commercial, financial and legal) is required to operate as an effective and valuable member of an engineering company.
Next, Dr Partridge turns his attention to the current state of technology and engineering. He addresses the central question: has the progress of technology since the development of the transistor, the moon landings and the widespread use of the jet engine been quantitative or qualitative?
I remember a newspaper article by [Will] Hutton, [political economist and journalist, now principal of Hertford College, Oxford], decades ago entitled “The familiar shape of things to come”, a pun on H.G. Wells’ futuristic novel “The Shape of Things to Come”. Hutton’s article explained how my parents’ generation, not my generation, not your generation, my parents’ generation had experienced the fastest rate of technological change in history. They grew up in the era of gas light, but by the end of their days man had walked on the moon, and jet airliners and colour television were a common experience. But, Hutton argued, since the 1960s the subsequent progress of technology has been quantitative rather than qualitative.
But how about the dramatic improvements in microelectronics and communications, much of which has occurred since Hutton’s article was written? Are they quantitative or qualitative improvements? I think they are quantitative because so much of the groundwork had already been completed long before the basic inventions could be turned into economical production. […] The scientific foundation for present microelectronic technology was laid back in the 1930s. This work in solid state physics provided the underpinning theory that enabled the invention of the transistor in the 1950s. Now we harbour millions of these tiny devices inside the mobile phones in our pockets. That is quantitative progress from bytes to gigabytes.
This reminds me of a statement by Peter Thiel, co-founder of the Silicon Valley venture capital firm Founders Fund: “We wanted flying cars, instead we got 140 characters”. On the Founders Fund website the firm has published a manifesto, “What happened to the future?”. In the aerospace sector alone, the manifesto addresses two interesting case studies: namely, that the cost of sending 1 kg of cargo into orbit has barely decreased since the Apollo program of the 1960s (of course, Elon Musk is on a mission to change this), and that, since the retirement of Concorde, the time for crossing the Atlantic has actually gone up.
While I don’t fundamentally agree with Thiel’s overall assessment of the state of technology, I believe there is abundant evidence that the technologies around us are, to a large extent, more powerful, faster and generally improved versions of technology that already existed in the 1960s, and hence quantitative improvements. On the other hand, the accumulation of incremental changes over long periods of time can lead to dramatic changes. The best example of this is Moore’s Law, i.e. the observation that the number of transistors on an integrated circuit chip doubles every 18 to 24 months.
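As a back-of-the-envelope illustration of how such quantitative accumulation compounds, here is a minimal sketch; the starting transistor count and dates are rough, commonly cited figures for the Intel 4004, not numbers from the speech:

```python
# Compound-growth sketch of Moore's Law: transistor count
# doubling every ~24 months.

def transistors(start_count, start_year, year, months_per_doubling=24):
    """Estimate transistor count assuming steady doubling."""
    doublings = (year - start_year) * 12 / months_per_doubling
    return start_count * 2 ** doublings

# The Intel 4004 (1971) is commonly quoted at ~2,300 transistors.
estimate_2021 = transistors(2_300, 1971, 2021)
print(f"{estimate_2021:.2e}")  # on the order of 10^10 transistors
```

Fifty years of steady doubling turns thousands of transistors into tens of billions, which is exactly the "bytes to gigabytes" progress Dr Partridge describes.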
At face value, this is clearly quantitative progress, but what about the new technologies that our newfound computational power has facilitated? Without this increase in computational power, the finite element method would not have taken off in the 1950s and engineers would not be able to model complex structural and fluid dynamic phenomena today. Similarly, computers allow chemists to develop new materials specifically designed for a predefined purpose. Digital computation facilitated the widespread use of control theory, which is now branching into new fields such as 3D printing and self-assembly of materials at the nano-scale (both control problems applied to chemistry). Are these new fields not qualitative?
The pertinent philosophical question seems to be: what qualifies as qualitative progress? As a guideline we can turn to Thomas Kuhn’s work on scientific revolutions. Kuhn challenged the notion of scientific progress on a continuum, i.e. by accumulation, and proposed a more discrete view of scientific progress by “scientific revolutions”. In Kuhn’s view the continuity of knowledge accumulation is interrupted by spontaneous periods of revolutionary science driven by anomalies, which subsequently lead to new ways of thinking and a roadmap for new research. Kuhn defined the characteristics of a scientific revolution as follows:
- It must resolve a generally accepted problem with the current paradigm that cannot be reconciled in any other way
- It must preserve, and hence agree with, a large part of previously accrued scientific knowledge
- It must solve more problems, and hence open up more questions, than its predecessor.
With regard to this definition, I would say that nanotechnology, 3D printing and shape-adaptive materials, to name a few, are certainly revolutionary technologies in that they allow us to design and manufacture products that were completely unthinkable before. In fact, I would argue that the quantitative accumulation of computational power has facilitated a revolution towards more optimised and multifunctional structures akin to the designs we see in nature. To name another, more banal example, the modern state of manufacturing has been transformed through globalisation. Thirty years ago, products were almost exclusively manufactured in one country and then consumed there. The reality today is that different factories in different countries manufacture small components, which are then brought together for final assembly at a central plant. This assembly process has two fundamental enablers: IT and the modern logistics system. This engineering progress is certainly revolutionary, but perhaps not as sexy as flying cars, and therefore not as present in the media or our minds.
The problem that Dr Partridge sees is that the tradition of engineering philosophy is not as well developed as that of science.
So what is engineering? Is it just a branch of applied science, or does it have a separate nature? What is technology? These questions were asked by Gordon Rogers in his 1983 essay “The Nature of Engineering: A Philosophy of Technology”. […] The philosophy of science has a large corpus of work, but the philosophy of technology is still an emerging subject, and very relevant to engineering education.
In this regard, I agree with David Blockley (whom I have written about before) that engineering is too broad to be defined succinctly. In its most general sense, it is the act of using technical and scientific knowledge to turn an idea supporting a specific human endeavour, hence a tool, into reality. Of course, the act of engineering involves all forms of art, science and craft through conception, design, analysis and manufacturing. As Homo sapiens, our ingenuity in designing tools played a large part in our anthropological development, and as the saying often attributed to Marshall McLuhan goes, “we shape our tools and thereafter our tools shape us”.
So perhaps another starting point in addressing the quantitative/qualitative dichotomy of engineering progress is to consider how much humans have changed as a result of recent technological inventions. Are the changes in human behaviour due to social media and information technology of a fundamental kind or rather of degree? In terms of aerospace engineering, the last revolution of this kind was indeed the commercialisation of jet travel, and until affordable space travel becomes a reality, I see no revolutions of this kind in the recent past or near future.
So it seems more inventiveness is crucial for further progress in the aerospace industry. As a final thought, Dr Partridge ends with an interesting question:
Can one teach inventiveness or is it a gift?
Let me know your thoughts in the comments.
One of the key factors in the Wright brothers’ achievement of building the first heavier-than-air aircraft was their insight that a functional airplane would require a mastery of three disciplines:
- Lift
- Propulsion
- Control
Whereas the first two had been studied with some success by earlier pioneers such as Sir George Cayley, Otto Lilienthal, Octave Chanute, Samuel Langley and others, the question of control seemed to have fallen by the wayside in the early days of aviation. Even though the Wright brothers built their own little wind tunnel to experiment with different airfoil shapes (mastering lift) and also built their own lightweight engine (improving propulsion) for the Wright Flyer, a bigger innovation was the control system they installed on the aircraft.
Fundamentally, an aircraft manoeuvres about its centre of gravity, and there are three unique axes about which the aircraft can rotate:
- The longitudinal axis from nose to tail, also called the axis of roll, i.e. rolling one wing up and one wing down.
- The lateral axis from wing tip to wing tip, also called the axis of pitch, i.e. nose up or nose down.
- The normal axis from the top of the cabin to the bottom of the landing gear, also called the axis of yaw, i.e. nose rotates left or right.
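The three rotations above can be sketched as rotation matrices; this is a minimal illustration with a hypothetical 90-degree yaw, using the axis naming from the list rather than any particular flight-dynamics convention:

```python
import math

def roll(phi):
    """Rotation about the longitudinal (nose-to-tail, x) axis."""
    c, s = math.cos(phi), math.sin(phi)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def pitch(theta):
    """Rotation about the lateral (wing-tip-to-wing-tip, y) axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def yaw(psi):
    """Rotation about the normal (top-to-bottom, z) axis."""
    c, s = math.cos(psi), math.sin(psi)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(R, v):
    """Apply a 3x3 rotation matrix to a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

# A 90-degree yaw swings the nose vector (along x) onto the y axis.
nose = [1.0, 0.0, 0.0]
print(apply(yaw(math.pi / 2), nose))  # approximately [0, 1, 0]
```

Composing these matrices in different orders gives different final attitudes, which is one reason flight dynamics fixes a convention for the rotation sequence.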
Each rotation is controlled by a dedicated control surface: the elevator for pitch, the rudder for yaw and the ailerons for roll.
- Moving the elevator down increases the effective camber across the horizontal tail plane, thereby increasing the aerodynamic lift at the rear of the aircraft and causing a nose-down moment about the aircraft’s centre of gravity. Conversely, an upward deflection of the elevator induces a nose-up moment.
- In the case of the rudder, deflecting the rudder to one side increases the lift in the opposite direction and hence rotates the aircraft nose in the direction of the rudder deflection.
- In the case of ailerons, one side is being depressed while the other is raised to produce increased lift on one side and decreased lift on the other, thereby rolling the aircraft.
Today, many other control surfaces are used in addition to, or instead of, the conventional arrangement outlined above. Some of these are:
- Elevons – combined ailerons and elevators.
- Tailerons – two differentially moving tailplanes.
- Leading edge slats and trailing edge flaps – mostly for increased lift at takeoff and landing.
But ultimately the principle of operation is fundamentally the same: the lift over a certain portion of the aircraft is changed, causing a moment about the centre of gravity.
Special Aileron Conditions
Two special conditions arise in the operation of the ailerons.
The first is known as adverse yaw. As the ailerons are deflected, one up and one down, the aileron pointing down induces more aerodynamic drag than the aileron pointing up. This induced drag is a function of the amount of lift created by the airfoil. In simplistic terms, an increase in lift causes more pronounced vortex shedding activity, and therefore a high-pressure area behind the wing, which acts as a net retarding force on the aircraft. As the wing with the downward-pointing aileron produces more lift, its induced drag is correspondingly greater. This increased drag on the side of the downward aileron (the upward-moving wing) yaws the aircraft towards this wing, which must be counterbalanced by the rudder. Aerodynamicists can counteract the adverse yawing effect by requiring that the downward-pointing aileron deflects less than the upward-pointing one. Alternatively, Frise ailerons are used, whose offset hinge causes the leading edge of the upward-deflected aileron to protrude into the airflow beneath the wing, increasing the drag on the upward-pointing aileron and thereby helping to counteract the induced drag on the downward-pointing aileron of the other wing. The problem with Frise ailerons is that they can lead to dangerous flutter vibrations, and therefore differential aileron movement is typically preferred.
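The lift-dependence of induced drag behind adverse yaw can be illustrated with the classical finite-wing relation C_Di = C_L² / (π e AR); the lift coefficients, aspect ratio and Oswald factor below are hypothetical:

```python
import math

def induced_drag_coeff(cl, aspect_ratio, e=0.8):
    """Classical induced-drag estimate for a finite wing:
    C_Di = C_L^2 / (pi * e * AR), with e the Oswald efficiency factor."""
    return cl ** 2 / (math.pi * e * aspect_ratio)

# Hypothetical aileron deflection: the wing with the down-going aileron
# runs at a higher lift coefficient than the wing with the up-going one.
cd_down = induced_drag_coeff(cl=0.8, aspect_ratio=8)
cd_up = induced_drag_coeff(cl=0.5, aspect_ratio=8)
print(cd_down > cd_up)  # True: the rising wing drags more, yawing the nose toward it
```

Because the drag grows with the square of the lift coefficient, even a modest lift asymmetry between the two wings produces a noticeable yawing moment.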
The second effect is known as aileron reversal, which occurs under two different scenarios.
- At very low speeds with high angles of attack, e.g. during takeoff or landing, the downward deflection of an aileron can stall a wing, or at the least reduce the lift across the wing, by increasing the effective angle of attack past sustainable levels (boundary layer separation). In this case, the downward aileron produces the opposite of the intended effect.
- At very high airspeeds, the upward or downward deflection of an aileron may produce large torsional moments about the wing, such that the entire wing twists. For example, a downward aileron will twist the trailing edge up and leading edge down, thereby decreasing the angle of attack and consequently also the lift over that wing rather than increasing it. In this case, the structural designer needs to ensure that the torsional rigidity of the wing is sufficient to minimise deflections under the torsional loads, or that the speed at which this effect occurs is outside the design envelope of the aircraft.
What do we mean by the stability of an aircraft? Fundamentally, we have to distinguish between the response of the aircraft to an external disturbance with and without the pilot reacting to the perturbation. Here we will limit ourselves to the inherent stability of the aircraft. Hence, the aircraft is said to be stable if it returns to its original equilibrium state after a small perturbing displacement, without the pilot intervening. Thus, the aircraft’s response arises purely from its inherent design. At level flight we tend to refer to this as static stability. In effect, the airplane is statically stable when it returns to the original steady flight condition after a small disturbance; statically unstable when it continues to move away from the original steady flight condition upon a disturbance; and neutrally stable when it remains steady in a new condition upon a disturbance. The second, and more pernicious, type of stability is dynamic stability. The airplane may converge continuously back to the original steady flight state; it may overcorrect and then converge to the original configuration in an oscillatory manner; or it can diverge completely and behave uncontrollably, in which case the pilot is well advised to intervene. Static instability naturally implies dynamic instability, but static stability does not generally guarantee dynamic stability.
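The distinction can be sketched with a toy second-order pitch model, θ'' + c·θ' + k·θ = 0, where the stiffness k plays the role of static stability and the damping c governs the dynamic response; the coefficients below are illustrative, not from any real aircraft:

```python
def simulate(stiffness, damping, theta0=0.1, dt=0.001, steps=20000):
    """Forward-Euler integration of theta'' = -damping*theta' - stiffness*theta.
    Returns the magnitude of the pitch perturbation after steps*dt seconds."""
    theta, rate = theta0, 0.0
    for _ in range(steps):
        acc = -damping * rate - stiffness * theta
        rate += acc * dt
        theta += rate * dt
    return abs(theta)

stable = simulate(stiffness=4.0, damping=0.5)     # oscillates, then converges
unstable = simulate(stiffness=-4.0, damping=0.5)  # statically unstable: diverges
print(stable < 0.1 < unstable)  # True
```

With positive stiffness and positive damping the perturbation decays in an oscillatory manner; flipping the sign of the stiffness (static instability) makes the response diverge no matter the damping, mirroring the statement that static instability implies dynamic instability.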
By longitudinal stability we refer to the stability of the aircraft around the pitching axis. The characteristics of the aircraft in this respect are influenced by three factors:
- The position of the centre of gravity (CG). As a rule of thumb, the further forward (towards the nose) the CG, the more stable the aircraft with respect to pitching. However, far-forward CG positions make the aircraft difficult to control, and in fact the aircraft becomes increasingly nose heavy at lower airspeeds, e.g. during landing. The further back the CG is moved the less statically stable the aircraft becomes. There is a critical point at which the aircraft becomes neutrally stable and any further backwards movement of the CG leads to uncontrollable divergence during flight.
- The position of the centre of pressure (CP). The centre of pressure is the point at which the aerodynamic lift forces are assumed to act if discretised onto a single point. Thus, if the CP does not coincide with the CG, pitching moments will naturally be induced about the CG. The difficulty is that the CP is not static, but can move during flight depending on the angle of incidence of the wings.
- The design of the tailplane, and particularly the elevator. As described previously, the role of the elevator is to control the pitching rotations of the aircraft. Thus, the elevator can be used to counter any undesirable pitching rotations. During the design of the tailplane, and the aircraft as a whole, it is crucial that the engineers take advantage of the inherent passive restoring capabilities of the tailplane. For example, assume that the angle of incidence of the wings increases (nose moves up) during flight as a result of a sudden gust, which gives rise to increased wing lift and a change in the position of the CP. Therefore, the aircraft experiences an incremental change in the pitching moment about the CG given by ΔM_CG = ΔL_wing × x, where ΔL_wing is the incremental wing lift and x is the distance between the CP and the CG.
At the same time, the tailplane angle of attack also increases due to the nose-up/tail-down perturbation. Hence, the designer has to make sure that the incremental lift of the tailplane multiplied by its distance from the CG is greater than the effect of the wings, i.e. ΔL_tail × l_tail > ΔL_wing × x, where l_tail is the distance of the tailplane from the CG.
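This tail-sizing condition, that the tail's restoring moment ΔL_tail · l_tail must exceed the wing's destabilising moment ΔL_wing · x, can be checked numerically; every number below is hypothetical:

```python
def is_pitch_restoring(dl_wing, x_wing, dl_tail, l_tail):
    """True if the tail's restoring moment about the CG exceeds the
    destabilising moment of the wing's extra lift.
    dl_*: incremental lift (N); x_wing, l_tail: moment arms to the CG (m)."""
    return dl_tail * l_tail > dl_wing * x_wing

# Hypothetical gust response: the wing gains 2000 N of lift acting
# 0.5 m from the CG, while the tail gains 400 N acting 12 m behind it.
print(is_pitch_restoring(2000.0, 0.5, 400.0, 12.0))  # True: nose pitches back down
```

The tail's long moment arm is what lets a comparatively small tailplane overpower the much larger lift change on the wing.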
As a result of the interplay between CP and CG, tailplane design greatly influences the degree of static pitching stability of an aircraft. Similarly, in the directional (yawing) sense, due to the general tear-drop shape of an aircraft’s fuselage, the centre of pressure of the sideways aerodynamic forces typically lies ahead of the CG, so these side loads will always contribute some form of destabilising moment about the CG. It is mainly the job of the vertical tailplane (the fin) to provide directional stability, and without the fin most aircraft would be incredibly difficult to fly, if not outright unstable.
By lateral stability we are referring to the stability of the aircraft when rolling one wing down and the other up. As an aircraft rolls and the wings are no longer perpendicular to the direction of gravitational acceleration, the lift force, which acts perpendicular to the surface of the wings, is also no longer parallel with gravity. Hence, rolling an aircraft creates both a vertical lift component in the direction of gravity and a horizontal side load component, thereby causing the aircraft to sideslip. If these sideslip loads contribute towards returning the aircraft to its original configuration, then the aircraft is laterally stable. Two of the more popular methods of achieving this are:
- Upward-inclined wings, which take advantage of the dihedral effect. As an aircraft is disturbed laterally, the rolling action to one side results in a greater angle of incidence on the downward-facing wing than the upward-facing one. This occurs because the forward and downward motion of the wing is equivalent to a net increase in angle of attack, whereas the forward and upward motion of the other wing is equivalent to a net decrease. Therefore, the lift acting on the downward wing is greater than on the upward wing. This means that as the aircraft starts to roll sideways, the lateral difference in the two lift components produces a moment imbalance that tends to restore the aircraft back to its original configuration. This is in effect a passive controlling mechanism that does not need to be initiated by the pilot or any electronic stabilising control system onboard. The opposite destabilising effect can be produced by downward pointing anhedral wings, but conversely this design improves manoeuvrability.
- Swept back wings. As the aircraft sideslips, the downward-pointing wing has a shorter effective chord length in the direction of the airflow than the upward-pointing wing. The shorter chord length increases the effective camber (curvature) of the lower wing and therefore leads to more lift on the lower wing than on the upper. This results in the same restoring moment discussed for dihedral wings above.
It is worth mentioning that anhedral and swept-back wings can be combined to reach a compromise between stability and manoeuvrability. For example, an aircraft may be over-designed with heavily swept wings, with some of the stability then removed by an anhedral design to improve manoeuvrability.
Interaction of Longitudinal/Directional and Lateral Stability
As described above, movement of the aircraft in one plane is often coupled to movement in another. The yawing of an aircraft causes one wing to move forwards and the other backwards, and thus alters the relative velocities of the airflow over the wings, thereby resulting in differences in the lift produced by the two wings. The result is that yawing is coupled to rolling. These interaction and coupling effects can lead to secondary types of instability.
For example, in spiral instability the directional stability of yawing and lateral stability of rolling interact. When we discussed lateral stability, we noted that the sideslip induced by a rolling disturbance produces a restoring moment against rolling. However, due to directional stability it also produces a yawing effect that increases the bank. The relative magnitude of the lateral and directional restoring effects define what will happen in a given scenario. Most aircraft are designed with greater directional stability, and therefore a small disturbance in the rolling direction tends to lead to greater banking. If not counterbalanced by the pilot or electronic control system, the aircraft could enter an ever-increasing diving turn.
Another example is the dutch roll, an intricate back-and-forth between yawing and rolling. If a swept-wing aircraft is perturbed by a yawing disturbance, the now slightly more forward-pointing wing generates more lift, for exactly the same reason as in the sideslip case: a shorter effective chord and a larger effective area presented to the airflow. As a result, the aircraft rolls towards the side of the slightly more backward-pointing wing. However, the same forward-pointing wing with higher lift also creates more induced drag, which tends to yaw the aircraft back in the opposite direction. Under the right circumstances this sequence of events can perpetuate to create an uncomfortable wobbling motion. In most aircraft today, dampers in the automatic control system are installed to prevent this oscillatory instability.
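The job of such a yaw damper can be sketched with a toy, lightly damped oscillator for the yaw rate: feeding rudder back in proportion to the yaw rate simply raises the effective damping of the oscillatory mode. All coefficients below are made up for illustration:

```python
def peak_after(damping, gain=0.0, r0=1.0, omega2=4.0, dt=0.001, steps=10000):
    """Toy dutch-roll mode: r'' + (damping + gain)*r' + omega2*r = 0.
    gain models rudder feedback proportional to yaw rate.
    Returns the peak amplitude seen in the second half of the run."""
    r, rdot = r0, 0.0
    peak = 0.0
    for i in range(steps):
        acc = -(damping + gain) * rdot - omega2 * r
        rdot += acc * dt
        r += rdot * dt
        if i > steps // 2:  # only record the late-time amplitude
            peak = max(peak, abs(r))
    return peak

undamped = peak_after(damping=0.1)          # lightly damped wobble persists
damped = peak_after(damping=0.1, gain=1.0)  # yaw damper engaged
print(damped < undamped)  # True
```

The oscillation still occurs with the damper engaged, but it dies out far more quickly, which is all that is required to keep the wobble imperceptible to passengers.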
In this post I have only described a small number of control challenges that engineers face when designing aircraft. Most aircraft today are controlled by highly sophisticated computer systems that make loss of control or stability highly unlikely. Free, unassisted hand-flying is getting rarer and is mostly limited to takeoff and landing manoeuvres. In fact, it is more likely that the interface between human and machine is what will cause most system failures in the future.