14 October 2016

Crash: how computers are setting us up for disaster


11 October 2016  
Source: https://www.theguardian.com/technology/2016/oct/11/crash-how-computers-are-setting-us-up-disaster


We increasingly let computers fly planes and carry out security checks. Driverless cars are next. But is our reliance on automation dangerously diminishing our skills?

When a sleepy Marc Dubois walked into the cockpit of his own aeroplane, he was confronted with a scene of confusion. The plane was shaking so violently that it was hard to read the instruments. An alarm was alternating between a chirruping trill and an automated voice: “STALL STALL STALL.” His junior co-pilots were at the controls. In a calm tone, Captain Dubois asked: “What’s happening?”

Co-pilot David Robert’s answer was less calm. “We completely lost control of the aeroplane, and we don’t understand anything! We tried everything!”

The crew were, in fact, in control of the aeroplane. One simple course of action could have ended the crisis they were facing, and they had not tried it. But David Robert was right on one count: he didn’t understand what was happening.

As William Langewiesche, a writer and professional pilot, described in an article for Vanity Fair in October 2014, Air France Flight 447 had begun straightforwardly enough – an on-time take-off from Rio de Janeiro at 7.29pm on 31 May 2009, bound for Paris. With hindsight, the three pilots had their vulnerabilities. Pierre-Cédric Bonin, 32, was young and inexperienced. David Robert, 37, had more experience but he had recently become an Air France manager and no longer flew full-time. Captain Marc Dubois, 58, had experience aplenty but he had been touring Rio with an off-duty flight attendant. It was later reported that he had only had an hour’s sleep.


Fortunately, given these potential fragilities, the crew were in charge of one of the most advanced planes in the world, an Airbus A330, legendarily smooth and easy to fly. Like any other modern aircraft, the A330 has an autopilot to keep the plane flying on a programmed route, but it also has a much more sophisticated automation system called fly-by-wire. A traditional aeroplane gives the pilot direct control of the plane’s control surfaces – its rudder, elevators and ailerons. This means the pilot has plenty of latitude to make mistakes. Fly-by-wire is smoother and safer. It inserts itself between the pilot, with all his or her faults, and the plane’s mechanics. A tactful translator between human and machine, it observes the pilot tugging on the controls, figures out how the pilot wanted the plane to move and executes that manoeuvre perfectly. It will turn a clumsy movement into a graceful one.

This makes it very hard to crash an A330, and the plane had a superb safety record: there had been no crashes in commercial service in the first 15 years after it was introduced in 1994. But, paradoxically, there is a risk to building a plane that protects pilots so assiduously from even the tiniest error. It means that when something challenging does occur, the pilots will have very little experience to draw on as they try to meet that challenge.

The complication facing Flight 447 did not seem especially daunting: thunderstorms over the Atlantic Ocean, just north of the equator. These were not a major problem, although perhaps Captain Dubois was too relaxed when at 11.02pm, Rio time, he departed the cockpit for a nap, leaving the inexperienced Bonin in charge of the controls.

Bonin seemed nervous. The slightest hint of trouble produced an outburst of swearing: “Putain la vache. Putain!” – the French equivalent of “Fucking hell. Fuck!” More than once he expressed a desire to fly at “3-6” – 36,000 feet – and lamented the fact that Air France procedures recommended flying a little lower. While it is possible to avoid trouble by flying over a storm, there is a limit to how high a plane can go. The atmosphere becomes so thin that it can barely support the aircraft. Margins for error become tight. The plane will be at risk of stalling. An aerodynamic stall occurs when the nose is raised so steeply that the wings meet the oncoming air at too great an angle. At that angle the wings no longer function as wings and the aircraft no longer behaves like an aircraft. It loses airspeed and falls gracelessly in a nose-up position.

Fortunately, a high altitude provides plenty of time and space to correct the stall. This is a manoeuvre fundamental to learning how to fly a plane: the pilot pushes the nose of the plane down and into a dive. The diving plane regains airspeed and the wings once more work as wings. The pilot then gently pulls out of the dive and into level flight once more.
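For readers who want the physics, a rough sketch using the textbook lift equation – an editorial aside, not part of the accident account – shows why both airspeed and nose angle matter:

```latex
% Lift produced by the wings, in the standard textbook form:
%   \rho  = air density (thin at high altitude)
%   v     = airspeed
%   S     = wing area
%   C_L   = lift coefficient, which rises with the angle of attack \alpha
%           up to a critical angle, then collapses
L \;=\; \tfrac{1}{2}\,\rho\, v^{2}\, S\, C_L(\alpha)
```

Thin air (small ρ) and low airspeed (small v) force the wings to work at a higher angle of attack just to hold the plane up; push past the critical angle and the lift coefficient collapses, lift vanishes, and the aircraft stalls. Lowering the nose reduces the angle of attack and, as the dive rebuilds airspeed, the v² term restores lift – which is why the recovery manoeuvre works.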

As the plane approached the storm, ice crystals began to form on the wings. Bonin and Robert switched on the anti-icing system to prevent too much ice building up and slowing the plane down. Robert nudged Bonin a couple of times to pull left, avoiding the worst of the weather.

And then an alarm sounded. The autopilot had disconnected. An airspeed sensor on the plane had iced over and stopped functioning – not a major problem, but one that required the pilots to take control. But something else happened at the same time and for the same reason: the fly-by-wire system downgraded itself to a mode that gave the pilot less help and more latitude to control the plane. Lacking an airspeed sensor, the plane was unable to babysit Bonin.
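To make that mode change concrete, here is a deliberately simplified sketch in Python – the logic and numbers are invented for illustration, not Airbus’s actual control laws – of how a protection system can quietly fall back to a less protective mode when a sensor input becomes invalid:

```python
# Hypothetical illustration of a fly-by-wire "mode downgrade" - not Airbus's real logic.
# In the normal mode the computer caps the commanded pitch to keep the plane away from
# a stall; when airspeed data is invalid it drops to an alternate mode without that cap.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sensors:
    airspeed_knots: Optional[float]  # None means the reading is invalid (e.g. iced over)

def control_mode(sensors: Sensors) -> str:
    """Choose the control law based on whether the airspeed data can be trusted."""
    return "normal" if sensors.airspeed_knots is not None else "alternate"

def commanded_pitch(pilot_input_deg: float, sensors: Sensors) -> float:
    """Translate the pilot's stick input into a pitch command."""
    if control_mode(sensors) == "normal":
        return min(pilot_input_deg, 15.0)  # envelope protection: cap the nose-up angle
    return pilot_input_deg                 # alternate mode: the input passes through unprotected

# With valid airspeed, a panicked full-back stick (say 30 degrees nose-up) is capped.
print(commanded_pitch(30.0, Sensors(airspeed_knots=270.0)))  # -> 15.0
# With the sensor iced over, the same input goes straight through: the plane will let you stall.
print(commanded_pitch(30.0, Sensors(airspeed_knots=None)))   # -> 30.0
```

The danger is not the fallback itself but that, from the pilot’s seat, the two modes feel almost identical until the moment the protection is needed.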

The first consequence was almost immediate: the plane began rocking right and left, and Bonin overcorrected with sharp jerks on the stick. And then Bonin made a simple mistake: he pulled back on his control stick and the plane started to climb steeply.

As the nose of the aircraft rose and it started to lose speed, the automated voice barked out in English: “STALL STALL STALL.” Despite the warning, Bonin kept pulling back on the stick, and in the black skies above the Atlantic the plane climbed at an astonishing rate of 7,000 feet a minute. But the plane’s air speed was evaporating; it would soon begin to slide down through the storm and towards the water, 37,500 feet below. Had either Bonin or Robert realised what was happening, they could have fixed the problem, at least in its early stages. But they did not. Why?

The source of the problem was the system that had done so much to keep A330s safe for 15 years, across millions of miles of flying: the fly-by-wire. Or more precisely, the problem was not fly-by-wire, but the fact that the pilots had grown to rely on it. Bonin was suffering from a problem called mode confusion. Perhaps he did not realise that the plane had switched to the alternate mode that would provide him with far less assistance. Perhaps he knew the plane had switched modes, but did not fully understand the implication: that his plane would now let him stall. That is the most plausible reason Bonin and Robert ignored the alarm – they assumed this was the plane’s way of telling them that it was intervening to prevent a stall. In short, Bonin stalled the aircraft because in his gut he felt it was impossible to stall the aircraft.

Aggravating this confusion was Bonin’s lack of experience in flying a plane without computer assistance. While he had spent many hours in the cockpit of the A330, most of those hours had been spent monitoring and adjusting the plane’s computers rather than directly flying the aircraft. And of the tiny number of hours spent manually flying the plane, almost all would have been spent taking off or landing. No wonder he felt so helpless at the controls.

The Air France pilots “were hideously incompetent”, wrote William Langewiesche, in his Vanity Fair article. And he thinks he knows why. Langewiesche argued that the pilots simply were not used to flying their own aeroplane at altitude without the help of the computer. Even the experienced Captain Dubois was rusty: of the 346 hours he had been at the controls of a plane during the past six months, only four were in manual control, and even then he had had the help of the full fly-by-wire system. All three pilots had been denied the ability to practise their skills, because the plane was usually the one doing the flying.

This problem has a name: the paradox of automation. It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators. The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face. The psychologist James Reason, author of Human Error, wrote: “Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills … when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions.”

The paradox of automation, then, has three strands to it. First, automatic systems accommodate incompetence by being easy to operate and by automatically correcting mistakes. Because of this, an inexpert operator can function for a long time before his lack of skill becomes apparent – his incompetence is a hidden weakness that can persist almost indefinitely. Second, even if operators are expert, automatic systems erode their skills by removing the need for practice. Third, automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skilful response. A more capable and reliable automatic system makes the situation worse.

There are plenty of situations in which automation creates no such paradox. A customer service webpage may be able to handle routine complaints and requests, so that staff are spared repetitive work and may do a better job for customers with more complex questions. Not so with an aeroplane. Autopilots and the more subtle assistance of fly-by-wire do not free up the crew to concentrate on the interesting stuff. Instead, they free up the crew to fall asleep at the controls, figuratively or even literally. One notorious incident occurred late in 2009, when two pilots let their autopilot overshoot Minneapolis airport by more than 100 miles. They had been looking at their laptops.

When something goes wrong in such situations, it is hard to snap to attention and deal with a situation that is very likely to be bewildering.

His nap abruptly interrupted, Captain Dubois arrived in the cockpit 1min 38secs after the airspeed indicator had failed. The plane was still above 35,000 feet, although it was falling at more than 150 feet a second. The de-icers had done their job and the airspeed sensor was operating again, but the co-pilots no longer trusted any of their instruments. The plane – which was now in perfect working order – was telling them that they were barely moving forward at all and were slicing through the air down towards the water, tens of thousands of feet below. But rather than realising the faulty instrument was fixed, they appear to have assumed that yet more of their instruments had broken. Dubois was silent for 23 seconds – a long time, if you count them off. Long enough for the plane to fall 4,000 feet.

It was still not too late to save the plane – if Dubois had been able to recognise what was happening to it. The nose was now so high that the stall warning had stopped – it, like the pilots, simply rejected the information it was getting as anomalous. A couple of times, Bonin did push the nose of the aircraft down a little and the stall warning started up again – STALL STALL STALL – which no doubt confused him further. At one stage he tried to engage the speed brakes, worried that they were going too fast – the opposite of the truth: the plane was clawing its way forwards through the air at less than 60 knots, about 70 miles per hour – far too slow. It was falling twice as fast. Utterly confused, the pilots argued briefly about whether the plane was climbing or descending.

Bonin and Robert were shouting at each other, each trying to control the plane. All three men were talking at cross-purposes. The plane was still nose up, but losing altitude rapidly.

Robert: “Your speed! You’re climbing! Descend! Descend, descend, descend!”

Bonin: “I am descending!”

Dubois: “No, you’re climbing.”

Bonin: “I’m climbing? OK, so we’re going down.”

Nobody said: “We’re stalling. Put the nose down and dive out of the stall.”

At 11.13pm and 40 seconds, less than 12 minutes after Dubois first left the cockpit for a nap, and two minutes after the autopilot switched itself off, Robert yelled at Bonin: “Climb … climb … climb … climb …” Bonin replied that he had had his stick back the entire time – the information that might have helped Dubois diagnose the stall, had he known.

Finally the penny seemed to drop for Dubois, who was standing behind the two co-pilots. “No, no, no … Don’t climb … no, no.”

Robert announced that he was taking control and pushed the nose of the plane down. The plane began to accelerate at last. But he was about one minute too late – that’s 11,000 feet of altitude. There was not enough room between the plummeting plane and the black water of the Atlantic to regain speed and then pull out of the dive.

In any case, Bonin silently retook control of the plane and tried to climb again. It was an act of pure panic. Robert and Dubois had, perhaps, realised that the plane had stalled – but they never said so. They may not have realised that Bonin was the one in control of the plane. And Bonin never grasped what he had done. His last words were: “But what’s happening?”

Four seconds later the aircraft hit the Atlantic at about 125 miles an hour. Everyone on board, 228 passengers and crew, died instantly.

Earl Wiener, a cult figure in aviation safety, coined what are known as Wiener’s Laws of aviation and human error. One of them was: “Digital devices tune out small errors while creating opportunities for large errors.” We might rephrase it as: “Automation will routinely tidy up ordinary messes, but occasionally create an extraordinary mess.” It is an insight that applies far beyond aviation.

Victor Hankins, a British citizen, received an unwelcome gift for Christmas: a parking fine. The first Hankins knew of the penalty was when a letter from the local council dropped on to his doormat. At 14 seconds after 8.08pm on 20 December 2013, his car had been blocking a bus stop in Bradford, Yorkshire, and had been photographed by a camera mounted in a passing traffic enforcement van. A computer had identified the number plate, looked it up in a database and found Mr Hankins’s address. An “evidence pack” was automatically generated, including video of the scene, a time stamp and location. The letter from Bradford city council demanding that Hankins pay a fine or face court action was composed, printed and mailed by an automatic process.
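As a sketch of how such a pipeline can misfire when no human reviews the edge cases, here is a simplified, hypothetical version in Python – the names, fields and rules are invented, not Bradford’s actual system:

```python
# Hypothetical sketch of a fully automated penalty pipeline, loosely modelled on the
# process described above. Every name and rule here is invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sighting:
    plate: str
    location: str
    timestamp: str
    in_restricted_zone: bool  # the camera saw the car within the bus stop markings

# Stand-in for the vehicle-registration database lookup.
OWNER_ADDRESSES = {"YX13 ABC": "1 Example Street, Bradford"}

def issue_penalty(sighting: Sighting) -> Optional[str]:
    """Generate and 'post' a penalty notice with no human review step."""
    if not sighting.in_restricted_zone:
        return None
    address = OWNER_ADDRESSES.get(sighting.plate)
    if address is None:
        return None
    # Evidence pack, letter, printing and posting all happen automatically.
    return (f"Penalty notice for {sighting.plate}, photographed at {sighting.location} "
            f"on {sighting.timestamp}. Posted to {address}.")

# The failure mode: a car stationary in a traffic queue looks, to the camera, exactly
# like a car parked at the bus stop. Nothing in the pipeline can tell the difference,
# and nobody looks before the letter goes out.
print(issue_penalty(Sighting("YX13 ABC", "bus stop, Bradford", "2013-12-20 20:08", True)))
```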

There was just one problem: Hankins had not been illegally parked at all. He had been stuck in traffic.

In principle, such technology should not fall victim to the paradox of automation. It should free up humans to do more interesting and varied work – checking the anomalous cases, such as the complaint Hankins immediately registered, which are likely to be more intriguing than simply writing down yet another licence plate and issuing yet another ticket.

But the tendency to assume that the technology knows what it is doing applies just as much to bureaucracy as it does to pilots. Bradford city council initially dismissed Hankins’s complaint, admitting its error only when he threatened them with the inconvenience of a court case.

The rarer the exception gets, as with fly-by-wire, the less gracefully we are likely to deal with it. We assume that the computer is always right, and when someone says the computer made a mistake, we assume they are wrong or lying. What happens when private security guards throw you out of your local shopping centre because a computer has mistaken your face for that of a known shoplifter? (This technology is now being modified to allow retailers to single out particular customers for special offers the moment they walk into the store.) When your face, or name, is on a “criminal” list, how easy is it to get it taken off?

We are now on more lists than ever before, and computers have turned filing cabinets full of paper into instantly searchable, instantly actionable banks of data. Increasingly, computers are managing these databases, with no need for humans to get involved or even to understand what is happening. And the computers are often unaccountable: an algorithm that rates teachers and schools, Uber drivers or businesses on Google’s search, will typically be commercially confidential. Whatever errors or preconceptions have been programmed into the algorithm from the start, it is safe from scrutiny: those errors and preconceptions will be hard to challenge.

For all the power and the genuine usefulness of data, perhaps we have not yet acknowledged how imperfectly a tidy database maps on to a messy world. We fail to see that a computer that is a hundred times more accurate than a human, and a million times faster, will make 10,000 times as many mistakes. This is not to say that we should call for death to the databases and algorithms. There is at least some legitimate role for computerised attempts to investigate criminal suspects, and keep traffic flowing. But the database and the algorithm, like the autopilot, should be there to support human decision-making. If we rely on computers completely, disaster awaits.
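The arithmetic behind that claim is worth spelling out. Using round numbers: if the computer’s error rate is a hundredth of the human’s but it handles a million times as many cases, then

```latex
% mistakes = (number of decisions) x (error rate)
% Human:    N decisions at error rate p              ->  N p        mistakes
% Computer: 10^6 N decisions at error rate p / 10^2  ->  10^4 N p   mistakes
\frac{\text{computer's mistakes}}{\text{human's mistakes}}
  \;=\; \frac{10^{6} N \cdot p / 10^{2}}{N \cdot p}
  \;=\; 10^{4}
```

Greater accuracy per decision is swamped by the sheer volume of decisions.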

Gary Klein, a psychologist who specialises in the study of expert and intuitive decision-making, summarises the problem: “When the algorithms are making the decisions, people often stop working to get better. The algorithms can make it hard to diagnose reasons for failures. As people become more dependent on algorithms, their judgment may erode, making them depend even more on the algorithms. That process sets up a vicious cycle. People get passive and less vigilant when algorithms make the decisions.”

Decision experts such as Klein complain that many software engineers make the problem worse by deliberately designing systems to supplant human expertise by default; if we wish instead to use them to support human expertise, we need to wrestle with the system. GPS devices, for example, could provide all sorts of decision support, allowing a human driver to explore options, view maps and alter a route. But these functions tend to be buried deeper in the app. They take effort, whereas it is very easy to hit “Start navigation” and trust the computer to do the rest.

It is possible to resist the siren call of the algorithms. Rebecca Pliske, a psychologist, found that veteran meteorologists would make weather forecasts first by looking at the data and forming an expert judgment; only then would they look at the computerised forecast to see if the computer had spotted anything that they had missed. (Typically, the answer was no.) By making their manual forecast first, these veterans kept their skills sharp, unlike the pilots on the Airbus A330. However, the younger generation of meteorologists are happier to trust the computers. Once the veterans retire, the human expertise to intuit when the computer has screwed up could be lost.

Many of us have experienced problems with GPS systems, and we have seen the trouble with autopilot. Put the two ideas together and you get the self-driving car. Chris Urmson, who runs Google’s self-driving car programme, hopes that the cars will soon be so widely available that his sons will never need to have a driving licence. There is a revealing implication in the target: that unlike a plane’s autopilot, a self-driving car will never need to cede control to a human being.

Raj Rajkumar, an autonomous driving expert at Carnegie Mellon University, thinks completely autonomous vehicles are 10 to 20 years away. Until then, we can look forward to a more gradual process of letting the car drive itself in easier conditions, while humans take over at more challenging moments.

“The number of scenarios that are automatable will increase over time, and one fine day, the vehicle is able to control itself completely, but that last step will be a minor, incremental step and one will barely notice this actually happened,” Rajkumar told the 99% Invisible podcast. Even then, he says, “There will always be some edge cases where things do go beyond anybody’s control.”

If this sounds ominous, perhaps it should. At first glance, it sounds reasonable that the car will hand over to the human driver when things are difficult. But that raises two immediate problems. If we expect the car to know when to cede control, then we are expecting the car to know the limits of its own competence – to understand when it is capable and when it is not. That is a hard thing to ask even of a human, let alone a computer.

Also, if we expect the human to leap in and take over, how will the human know how to react appropriately? Given what we know about the difficulty that highly trained pilots can have figuring out an unusual situation when the autopilot switches off, surely we should be sceptical about the capacity of humans to notice when the computer is about to make a mistake.

“Human beings are not used to driving automated vehicles, so we really don’t know how drivers are going to react when the driving is taken over by the car,” says Anuj K Pradhan of the University of Michigan. It seems likely that we’ll react by playing a computer game or chatting on a video phone, rather than watching like a hawk how the computer is driving – maybe not on our first trip in an autonomous car, but certainly on our hundredth.

And when the computer gives control back to the driver, it may well do so in the most extreme and challenging situations. The three Air France pilots had two or three minutes to work out what to do when their autopilot asked them to take over an A330 – what chance would you or I have when the computer in our car says, “Automatic mode disengaged” and we look up from our smartphone screen to see a bus careening towards us?

Anuj Pradhan has floated the idea that humans should have to acquire several years of manual experience before they are allowed to supervise an autonomous car. But it is hard to see how this solves the problem. No matter how many years of experience a driver has, his or her skills will slowly erode if he or she lets the computer take over. Pradhan’s proposal gives us the worst of both worlds: we let teenage drivers loose in manual cars, when they are most likely to have accidents. And even when they have learned some road craft, it will not take long being a passenger in a usually reliable autonomous car before their skills begin to fade.

It is precisely because the digital devices tidily tune out small errors that they create the opportunities for large ones. Deprived of any awkward feedback, any modest challenges that might allow us to maintain our skills, when the crisis arrives we find ourselves lamentably unprepared.

Some senior pilots urge their juniors to turn off the autopilots from time to time, in order to maintain their skills. That sounds like good advice. But if the junior pilots only turn off the autopilot when it is absolutely safe to do so, they are not practising their skills in a challenging situation. And if they turn off the autopilot in a challenging situation, they may provoke the very accident they are practising to avoid.

An alternative solution is to reverse the role of computer and human. Rather than letting the computer fly the plane with the human poised to take over when the computer cannot cope, perhaps it would be better to have the human fly the plane with the computer monitoring the situation, ready to intervene. Computers, after all, are tireless, patient and do not need practice. Why, then, do we ask people to monitor machines and not the other way round?

When humans are asked to babysit computers, for example, in the operation of drones, the computers themselves should be programmed to serve up occasional brief diversions. Even better might be an automated system that demanded more input, more often, from the human – even when that input is not strictly needed. If you occasionally need human skill at short notice to navigate a hugely messy situation, it may make sense to artificially create smaller messes, just to keep people on their toes.
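One way to picture that suggestion – a hypothetical sketch in Python, not any real drone or cockpit system – is a planner that hands routine segments back to the human at random, purely to keep skills and attention warm:

```python
# Hypothetical sketch of deliberately "messy" automation: the system occasionally asks
# the operator to fly a routine segment by hand, even though the autopilot could manage,
# so that manual skills and vigilance do not decay. The probability is an invented figure.
import random

PRACTICE_PROBABILITY = 0.1  # roughly one segment in ten is flown manually

def plan_segment(segment_id: int, rng: random.Random) -> str:
    """Decide who flies this segment of an otherwise routine mission."""
    if rng.random() < PRACTICE_PROBABILITY:
        return f"segment {segment_id}: MANUAL - operator flies this leg (practice, not necessity)"
    return f"segment {segment_id}: AUTO - computer flies, operator monitors"

rng = random.Random(42)  # fixed seed so the example output is reproducible
for segment in range(1, 11):
    print(plan_segment(segment, rng))
```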

In the mid-1980s, a Dutch traffic engineer named Hans Monderman was sent to the village of Oudehaske. Two children had been killed by cars, and Monderman’s radar gun showed right away that drivers were going too fast through the village. He pondered the traditional solutions – traffic lights, speed bumps, additional signs pestering drivers to slow down. They were expensive and often ineffective. Control measures such as traffic lights and speed bumps frustrated drivers, who would often speed dangerously between one measure and another.

And so Monderman tried something revolutionary. He suggested that the road through Oudehaske be made to look more like what it was: a road through a village. First, the existing traffic signs were removed. (Signs always irritated Monderman: driving through his home country of the Netherlands with the writer Tom Vanderbilt, he once railed against their patronising redundancy. “Do you really think that no one would perceive there is a bridge over there?” he would ask, waving at a sign that stood next to a bridge, notifying people of the bridge.) The signs might ostensibly be asking drivers to slow down. However, argued Monderman, because signs are the universal language of roads everywhere, on a deeper level the effect of their presence is simply to reassure drivers that they were on a road – a road like any other road, where cars rule. Monderman wanted to remind them that they were also in a village, where children might play.

So, next, he replaced the asphalt with red brick paving, and the raised kerb with a flush pavement and gently curved guttering. Where once drivers had, figuratively speaking, sped through the village on autopilot – not really attending to what they were doing – now they were faced with a messy situation and had to engage their brains. It was hard to know quite what to do or where to drive – or which space belonged to the cars and which to the village children. As Tom Vanderbilt describes Monderman’s strategy in his book Traffic, “Rather than clarity and segregation, he had created confusion and ambiguity.”

Perplexed, drivers took the cautious way forward: they drove so slowly through Oudehaske that Monderman could no longer capture their speed on his radar gun. By forcing drivers to confront the possibility of small errors, the chance of them making larger ones was greatly reduced.

Monderman, who died in 2008, was the most famous of a small group of traffic planners around the world who have been pushing against the trend towards an ever-tidier strategy for making traffic flow smoothly and safely. The usual approach is to give drivers the clearest possible guidance as to what they should do and where they should go: traffic lights, bus lanes, cycle lanes, left- and right-filtering traffic signals, railings to confine pedestrians, and of course signs attached to every available surface, forbidding or permitting different manoeuvres.

Laweiplein in the Dutch town of Drachten was a typical such junction, and accidents were common. Frustrated by waiting in jams, drivers would sometimes try to beat the traffic lights by blasting across the junction at speed – or they would be impatiently watching the lights, rather than watching for other road users. (In urban environments, about half of all accidents happen at traffic lights.) With a shopping centre on one side of the junction and a theatre on the other, pedestrians often got in the way, too.

Monderman wove his messy magic and created the “squareabout”. He threw away all the explicit efforts at control. In their place, he built a square with fountains, a small grassy roundabout in one corner, pinch points where cyclists and pedestrians might try to cross the flow of traffic, and very little signposting of any kind. It looks much like a pedestrianisation scheme – except that the square has as many cars crossing it as ever, approaching from all four directions. Pedestrians and cyclists must cross the traffic as before, but now they have no traffic lights to protect them. It sounds dangerous – and surveys show that locals think it is dangerous. It is certainly unnerving to watch the squareabout in operation – drivers, cyclists and pedestrians weave in and out of one another in an apparently chaotic fashion.

Yet the squareabout works. Traffic glides through slowly but rarely stops moving for long. The number of cars passing through the junction has risen, yet congestion has fallen. And the squareabout is safer than the traffic-light crossroads that preceded it, with half as many accidents as before. It is precisely because the squareabout feels so hazardous that it is safer. Drivers never quite know what is going on or where the next cyclist is coming from, and as a result they drive slowly and with the constant expectation of trouble. And while the squareabout feels risky, it does not feel threatening; at the gentle speeds that have become the custom, drivers, cyclists and pedestrians have time to make eye contact and to read one another as human beings, rather than as threats or obstacles. When showing visiting journalists the squareabout, Monderman’s party trick was to close his eyes and walk backwards into the traffic. The cars would just flow around him without so much as a honk on the horn.

In Monderman’s artfully ambiguous squareabout, drivers are never given the opportunity to glaze over and switch to the automatic driving mode that can be so familiar. The chaos of the square forces them to pay attention, work things out for themselves and look out for each other. The square is a mess of confusion. That is why it works.

This article is adapted from Tim Harford’s book Messy, published by Little, Brown.
