The limits of almost totally safe systems: Cockpit automation and the loss of Air France 447

[Image: Airbus A330 cockpit]

by Nick Oliver, Thomas Calvard and Kristina Potočnik

The interactions between technology and human beings are a source of fascination to many social scientists, from the impact of technology on individual well-being, to power relations in the workplace, to technology’s transformative potential. Technology has the ability to empower and to deskill, to enhance human capability and to subjugate humans to its requirements.

Science fiction plays on the fear that technology may one day ‘take over’ its human creators. A more immediate concern is that advances in technology lure us into designing and building systems that exceed the capacity of their operators to understand them, especially in the face of unusual, non-routine situations.  

Such concerns notwithstanding, humans have succeeded in developing ways to manage and operate very complex, sometimes high-risk technologies with remarkably few mishaps. Such “high reliability organizations” are of great interest to organizational scholars, perhaps because they represent a kind of organization-technology frontier in terms of what is possible. Yet scholars of safety science observe that even extremely safe, nearly error-free systems seem to have accident rates which, although very low, are remarkably persistent.

Our recently published study explores some of these issues by examining the interplay between human cognition and system design with reference to the final minutes of Air France flight 447 (AF447), which disappeared over the Atlantic in 2009. Data from the cockpit voice and flight data recorders, retrieved from the ocean floor two years after the crash, revealed a situation in which interactions between pilots and aircraft technology caused an initially relatively benign situation to escalate rapidly into a catastrophe.

AF447 was around three and a half hours into a night flight from Rio to Paris, cruising on autopilot at 35,000 feet. Due to the length of the flight, there were three flight crew aboard: the captain (Marc Dubois) and two first officers (Pierre-Cédric Bonin and David Robert).

Dubois was out of the cockpit on a routine break when ice crystals briefly obstructed the three sensor tubes that provided airspeed data to the flight control computer. As it was programmed to do when faced with inconsistent data inputs, the flight computer disengaged the autopilot and handed control to the (startled) pilots. The pilots now had to fly the plane, an Airbus A330, by hand. The flight control system also withdrew the automatic protection that normally prevents manoeuvres that might endanger the aircraft – again, as it was programmed to do when inconsistencies in data inputs were detected.
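To illustrate the kind of logic at work, the sketch below shows how a simple consistency check across redundant airspeed sources might trigger a downgrade from full automation. It is a minimal illustration only: the tolerance value, the mode labels and the function name are our assumptions, not the A330’s actual software.

```python
# Minimal illustrative sketch (not the A330's actual flight software):
# a disagreement check across redundant airspeed sources triggering a
# downgrade from full automation, as described in the text above.

MAX_DISAGREEMENT_KT = 20  # assumed tolerance between redundant airspeed sources


def select_control_mode(airspeeds_kt: list[float]) -> str:
    """Downgrade from full automation when redundant sensors disagree."""
    spread = max(airspeeds_kt) - min(airspeeds_kt)
    if spread > MAX_DISAGREEMENT_KT:
        # Airspeed data cannot be trusted: disengage the autopilot and relax
        # the flight-envelope protections that depend on valid airspeed.
        return "manual control, reduced protections"
    return "autopilot engaged, full envelope protection"


print(select_control_mode([272, 274, 273]))  # consistent readings: stay automated
print(select_control_mode([272, 140, 268]))  # one iced-over probe: hand control to the pilots
```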

At this point, all Bonin and Robert had to do was maintain the flightpath manually whilst they worked out what was happening. However, Bonin, the more junior of the two first officers and the one who took manual control, was clearly startled by this sudden development and unpracticed at flying by hand at high altitude. In his efforts to correct a slight roll he made overly aggressive control inputs with his sidestick, causing the plane to roll violently from side to side. He also put the plane into a climb and persisted with this action despite warnings from his colleague Robert to descend.

Due to Bonin’s actions, within 60 seconds of autopilot disconnection the aircraft had gained so much altitude and lost so much airspeed that it entered an aerodynamic stall and began a rapid, uncontrolled descent to the ocean below.  

Once the aircraft had stalled, actions and responses, both technological and human, fed off each other in an increasingly toxic cycle.

The icing of the speed sensors cleared quite quickly and within about a minute all three speed indicators were working normally. By then the aircraft had lost so much speed that the crew may not have believed the now-valid speed readings. Immediately after the autopilot disconnection, Robert, who initially had a better appreciation of the situation than Bonin, started to read out the system messages that had appeared on his electronic display. This was an important step towards diagnosing what was happening to the aircraft.

However, Robert became distracted by the aircraft movements induced by Bonin and switched his attention to monitoring the flightpath and to calling the captain back to the cockpit. This meant that a crucial piece of information on the display – that some automatic protection had been withdrawn – was not recognized.

Neither Bonin nor Robert, nor Dubois when he returned to the cockpit, realized that the aircraft had stalled. This was despite a barrage of cues, including a synthetic voice that announced “stall” 75 times; buffeting as the aircraft approached the point of stall; and tremendous aerodynamic noise due to turbulence around the wings as the aircraft descended, belly first, with its nose raised.

Of course, normally the flight control system would have rendered a stall impossible. Features that helped the pilots under normal circumstances were now hindrances to their attempts to make sense of the situation and regain control of the aircraft. For example, the stall warning is designed to shut off when indicated airspeed falls below a certain level, to prevent distracting false alarms. But due to AF447’s extreme angle of descent, the indicated airspeed fell below this threshold, causing the stall warning to fall silent.

When the pilots briefly took appropriate recovery action by putting the plane’s nose down, the airspeed increased and the stall warning re-activated, sending the wrong cue in response to the right action. Dubois and Robert eventually realized what was happening but by then the aircraft had insufficient altitude to recover from the stall.
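The sketch below illustrates how such a validity threshold can invert the cue. It is a deliberate simplification, not Airbus’s actual warning logic: the cutoff speed and the stall angle of attack used here are assumed placeholder values.

```python
# Minimal illustrative sketch (not Airbus's actual logic): a validity
# threshold on indicated airspeed suppressing the stall warning in a deep
# stall, then re-arming it when the pilots take the correct recovery action.

SPEED_FLOOR_KT = 60    # assumed: below this, airspeed readings are treated as invalid
STALL_AOA_DEG = 10.0   # assumed stall angle of attack for illustration


def stall_warning_active(indicated_airspeed_kt: float, angle_of_attack_deg: float) -> bool:
    """Return True if the (simplified) stall warning should sound."""
    if indicated_airspeed_kt < SPEED_FLOOR_KT:
        # Readings this low are treated as spurious, so the warning is suppressed.
        return False
    return angle_of_attack_deg >= STALL_AOA_DEG


# Deep stall with very low indicated airspeed: the warning stays silent.
print(stall_warning_active(indicated_airspeed_kt=50, angle_of_attack_deg=40))  # False

# Nose lowered, airspeed rises above the floor while still stalled: the warning
# returns, sending an alarming cue in response to the correct action.
print(stall_warning_active(indicated_airspeed_kt=90, angle_of_attack_deg=35))  # True
```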

Throughout the whole episode, apart from the transitory icing of the speed sensors, the technology functioned as it was programmed to do.

Commercial aviation is a complex human-technological system that is incredibly safe nearly all of the time, which means that events like the loss of AF447 are all the more shocking when they do occur. As social scientists interested in technology, we see two key messages in the AF447 disaster.

The first concerns the nature of human-technology interactions in automated, complex environments. The aviation community has started to appreciate that the very advances in cockpit automation that make flying so safe, most of the time, may also lead to a subtle erosion of pilots’ cognitive skills, an erosion that only becomes apparent under extreme, unusual conditions, when such skills are most needed.

Second, AF447 demonstrates the limits of system design in eliminating errors and accidents in complex systems. Airbus designers had apparently not conceived of a situation in which a crew would be unable to recognize that their aircraft was in a prolonged stall and unable to process the multiple cues telling them that this was indeed what was happening. Gaps of conception such as this will always leave holes in organizational defences.

The implication of AF447 is that as long as human intervention is the last resort in complex, automated systems, careful thought needs to be given to strategies to maintain the cognitive capabilities necessary to deal with rare, extreme events.

Nick Oliver, Thomas Calvard and Kristina Potočnik are faculty members at the University of Edinburgh Business School. This article summarizes “Cognition, Technology, and Organizational Limits: Lessons from the Air France 447 Disaster” in Organization Science.  

Image via Wikimedia
