Fear of Flying & the Production of Comforting Statistics

The capacity of some individuals in the aviation industry to indulge in self-delusion is nothing short of astonishing. The latest example comes from Robert Sturgell, Acting Administrator of the Federal Aviation Administration (FAA). In a 2 December 2008 speech before the International Safety Forum titled “A Risk Averse Society,” Sturgell all but blamed the flying public for having an unfounded anxiety about flying. “The data tells us that the fear of aviation accidents does not match the facts,” Sturgell asserted. “What I see instead is a society that has grown, and continues to grow, more and more averse to risk.”

Not to put too fine a point on it, but Sturgell stopped just a metaphorical hair short of calling the flying public wimps.

“Aviation is safe … The numbers prove it, and … statistically, any bar of soap in the bathtub is potentially far more lethal than a ride in an airplane,” Sturgell claimed.

Thus, in one sweeping declaration, Sturgell equated a slippery bar of soap to flammable vapors in fuel tanks, faulty wiring, software errors lurking in avionics, the risk of collisions with other aircraft at airports, the potential for pilot error, maintenance mistakes, and on and on.

Sturgell was on a roll, going on to compare airline travel to trips by automobile:

“People fear what they cannot control, which is why you’re led to believe that flying at flight level three zero zero at Mach .82 is somehow more dangerous than doing 75 on the Washington Beltway. For those of you from out of town, the Beltway is the highway circling Washington DC where cars at high speed and drivers making angry gestures converge. The Beltway’s safety record is so tangled that local radio stations talk about it every 10 minutes …

“On a 500 mile trip, the risk of fatal injury on the safest road system in the country [the Interstate, of which the Beltway is a part] is about 50 times greater than when traveling on a commercial airline, depending on exactly what period one chooses to compare.”

The relative risk, though, depends on how safety is measured. It could be assessed in terms of hours in the vehicle (or hours exposed to the activity, such as taking a shower), or in terms of the number of miles traveled.

Consider first the measure by hours. Based on 1992 data presented at a Royal Institution Discourse in London by the late Edmund Hambly, a distinguished consulting civil engineer, the fatal accident rates per 100 million hours of activity actually favor the bathtub over the airplane:

Activity                               Fatal accidents per 100 million hours
Accident at home, able-bodied              1
Factory work                               4
Travel by train                            5
Travel by car                             30
Travel by airliner                        40
Travel by helicopter                     500
Fireman in London air raids, 1940      1,000
Rock climbing, while on rock face      4,000

There is “face credibility” to Hambly’s table, only a portion of which appears here. Compare helicopters to airliners, for example: the difference in safety is a factor of about 12, which accords with the relative number of fatal helicopter and airline crashes in recent years.

For an hour in an airplane or an hour in a car, one is safer in the car, although the distance traveled is far less.

A somewhat different picture emerges when safety is measured in terms of distance traveled. Taking Hambly’s data as a starting point, Trevor Kletz, a highly respected British risk analyst, assumed an average speed of 500 mph for an airliner and 40 mph for an automobile. An aircraft therefore travels 12.5 times (500/40 = 12.5) as many miles in an hour as a car.

The greater risk of flying for an hour, as opposed to driving for an hour, is offset by the far greater distance covered in that hour. When the risk is spread over 12.5 times the distance, Kletz believes it is fair to argue that, on average, air travel is about ten times safer per mile than travel by car.
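The arithmetic is easy to reproduce. Below is a minimal sketch of the conversion, using Hambly’s per-hour figures and Kletz’s assumed speeds; the variable names are illustrative choices of mine, not anything from either source:

```python
# Convert Hambly's per-hour fatality rates to per-mile rates using Kletz's
# assumed average speeds (500 mph airliner, 40 mph car). Illustrative only.

HOURS_BASIS = 100_000_000  # Hambly's rates are per 100 million hours of exposure

fatal_per_100m_hours = {"car": 30, "airliner": 40}   # from Hambly's table above
avg_speed_mph = {"car": 40, "airliner": 500}         # Kletz's assumptions

# Fatal accidents per mile = (rate per hour) / (miles per hour)
fatal_per_mile = {
    mode: (fatal_per_100m_hours[mode] / HOURS_BASIS) / avg_speed_mph[mode]
    for mode in fatal_per_100m_hours
}

print(fatal_per_mile["car"] / fatal_per_mile["airliner"])
# ≈ 9.4, i.e., roughly the "ten times safer per mile" Kletz cites, even though
# the airliner looks worse than the car per hour of exposure (40 vs. 30).
```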

It is an interesting but irrelevant comparison. We are contrasting private auto travel with a highly regulated form of public transport; the two do not equate. Consider just one small example: people drive drunk in the wee hours of the morning – a mix that guarantees a peak in fatal auto accidents on the so-called “back side of the clock,” the hours between midnight and 6 a.m. In the airline industry, pilots hew to the 8-hour “bottle to throttle” rule, the mandatory interval between drinking and occupying the cockpit. Pilots also live under mandatory rest periods and a 16-hour limit on the duty day.

Comparisons of airline flying to automobile travel or to slippery bars of soap at home are not germane. What is germane, and what Sturgell alluded to, is the public’s perception of risk. According to David Ropeik of Harvard University’s Center for Risk Analysis, public anxiety about flying derives from a number of factors:

  • Control vs. no control. People feel in control when they drive. Not so when they are in an airplane as a passenger, bumping through turbulence at 30,000 feet. When one feels in control, one is less fearful. In airline travel, the loss of control manifests itself in a number of ways: flight delays, cramped seating, and so forth.
  • Catastrophic vs. chronic. People tend to be more afraid of what can kill wholesale, suddenly and violently, like a plane crash, than, say, lung cancer, which causes hundreds of thousands of deaths, but one at a time, over time.
  • Natural vs. human-made. People are less afraid of radiation from the sun than of radiation from power lines and cell phone towers. Yet the risk from the sun is immensely greater.
  • Trust vs. distrust. If the officials informing the public about risk are trusted, fears subside.

The FAA, often known to the public as the “tombstone agency” because of its reactive approach to safety – usually after a fiery crash – may not be the best source of unbiased information about risk. Ropeik offered this modest but trenchant proposal:

“Why not create … (a)n independent, nongovernmental agency – let’s call it the Risk Analysis Institute – to provide us with credible, trustworthy guidance on risks? The institute would rank the hazards we face, so we would know which ones are the most likely to occur; classify risks according to which ones have the most serious consequences; and conduct cost-benefit studies to help us rank mitigation choices by cost and effectiveness, so we know which options will maximize resources to protect the most people.”

Excellent idea. In the alternative, the National Transportation Safety Board (NTSB) could be tasked by Congress to provide this type of risk analysis, at least for transportation and pipeline systems. The NTSB is the repository for all of the lessons learned from the investigation of actual aircraft, railroad, highway, maritime and pipeline accidents, and this type of comparative analysis would flow smoothly into the NTSB’s mandate to formulate safety recommendations.

At the same time, the FAA needs to reconsider one of its basic measurements of risk: the probability of catastrophic failure. As Sturgell explained in his speech:

“In the case of commercial aviation, the low level of risk is unprecedented. Consider that each system on a commercial airliner is designed for a failure rate of no worse than 10 to the minus 9 – one in a billion. Currently, the total experience for U.S. commercial aviation, including operational issues as well as the airplane itself, is less than 10 to the minus 8. So the total is within a stone’s throw of the required risk level for an individual system. A person boarding a commercial flight is basically at negligible risk.”

Well, not quite. The B737, with a single-panel rudder controlled by a single rudder power control unit (RPCU), had accumulated about 93 million flights as a fleet (equating to roughly a quarter billion flight hours) when USAir Flight 427 crashed from an uncommanded rudder reversal in 1994. As it turns out, about 100 incidents of uncommanded rudder reversal preceded this tragedy, in most of which the pilots were able to recover the aircraft. In terms of the one-in-a-billion standard extolled by Sturgell, the B737 was off by a factor of ten.

Consider also the supersonic Concorde, which suffered a fiery crash on takeoff from Paris in July 2000. The airplane was felled by electrical arcing, a fuel leak, and the resulting ignition that turned it into a fireball. At that time, the fleet had accumulated about 80,000 flights, so the Concorde’s safety record was roughly 12,000 times worse than the one-in-a-billion standard.
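The arithmetic behind both comparisons is straightforward to reproduce. The sketch below is a back-of-the-envelope illustration only – it treats the benchmark as a per-flight probability and counts a single catastrophic event per fleet – not an official FAA or NTSB computation:

```python
# Compare each fleet's observed catastrophic-failure experience with the
# one-in-a-billion benchmark, treated here (as a simplification) per flight.

BENCHMARK = 1e-9  # "one in a billion"

fleets = {
    "B737 rudder (through USAir 427, 1994)": {"flights": 93_000_000, "events": 1},
    "Concorde (through Paris, 2000)":        {"flights": 80_000,     "events": 1},
}

for name, f in fleets.items():
    observed = f["events"] / f["flights"]
    print(f"{name}: observed ≈ {observed:.1e}, "
          f"about {observed / BENCHMARK:,.0f} times the benchmark")

# B737:     roughly 1.1e-08, about 11 times the benchmark (the "factor of ten" above)
# Concorde: roughly 1.25e-05, about 12,500 times the benchmark ("about 12,000 times")
```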

In both cases, the certification of the airplane design was questioned. The B737 RPCU was challenged by a mid-level FAA official during certification as not providing sufficient independent redundancy. He was overridden, the airplane was pressed into revenue service, and after the USAir tragedy the RPCU had to be redesigned and retrofitted to the entire fleet.

The Concorde was certified even though it fell short of the one-in-a-billion standard, and the fleet never recovered from the Paris crash. The surviving airplanes are museum pieces today.

Behind Concorde certification lay a host of waivers, exemptions and deviations, which at the time were deemed essential to put the aircraft into service.

The late Lu Zuckerman, a reliability and maintainability engineer, explained to me a few years ago why he believed the FAA definition is bogus:

“The mythical failure rate of 10⁻⁹ can be addressed two ways. The FARs [Federal Aviation Regulations] require that a single point failure that can contribute to the loss of an aircraft can occur no more frequently than 10⁻⁹ and if at all possible should be designed out. The 10⁻⁹ figure that most people quote does not apply at the aircraft level but, instead, it applies to the system failure that can cause loss of the aircraft ….

“Here is the kicker. The FTAs [Fault Tree Analysis] are for systems and not the aircraft … This process should be carried one step further by making a FTA with an ‘OR’ gate representing the aircraft, with each of the systems feeding into that gate. Having a final ‘OR’ gate will provide a truer picture of the catastrophic failure rate at the very top level. Because it is an ‘OR’ gate [as opposed to an ‘AND’ gate], one would most likely come up with a catastrophic loss rate in the area of 10⁻⁸ (one in 100 million hours of exposure) or possibly lower – not 10⁻⁹ – which more truly reflects the crash rate of commercial aircraft. People fixate on the 1 × 10⁻⁹ failure rate thinking it is at the aircraft level when it is in fact at the system level.

“However, the FAA does not require this assessment at the aircraft level. So much for safety.”
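Zuckerman’s top-level ‘OR’ gate is easy to sketch numerically. The example below is illustrative only: the number of systems and their failure rates are assumptions of mine, not figures drawn from the FARs or from any actual fault tree analysis:

```python
# Roll individual system fault trees up through a single top-level OR gate.
# Assumes (for illustration) 20 independent systems, each certified to a
# catastrophic-failure rate of 10^-9 per flight hour.

from math import prod

per_system_rate = 1e-9   # assumed per-system catastrophic failure rate
n_systems = 20           # assumed number of systems whose failure could down the aircraft

system_rates = [per_system_rate] * n_systems

# OR gate over independent events: P(at least one fails) = 1 - product of (1 - p_i)
aircraft_level_rate = 1 - prod(1 - p for p in system_rates)

print(f"{aircraft_level_rate:.1e}")
# ≈ 2.0e-08 per hour: the aircraft-level figure lands in the 10^-8 range,
# an order of magnitude worse than the per-system 10^-9 most people quote.
```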

While we are on the subject of using statistics to assess safety risks, recall the FAA’s Air Transportation Oversight System (ATOS) and its “systems safety” program, which does none of this kind of analysis. ATOS has never uncovered a causal factor lurking behind an accident before the fact. Rather, the risks to aviation safety have been revealed by post-accident investigations – after the damage is done – and by mid-level FAA whistleblowers who brought their safety concerns to Congress.

Sturgell said, “We must try to educate people to understand that the system is risk based and that our focus should be on the high risks and the high-consequence events.”

All well and good, but note that Sturgell talked about risk in terms of individual systems. For a more accurate and realistic assessment of safety, the FAA could end its self-delusion by assessing failures at the aircraft level, not at the level of individual systems. If the public knew how the FAA stops short in assessing safety, the latent anxiety about flying would be replaced by genuine alarm.