Perfecting self-driving cars — can it be done?

Robotic vehicles have been used in dangerous environments for decades, from decommissioning the Fukushima nuclear power plant to inspecting underwater energy infrastructure in the North Sea. More recently, autonomous vehicles from boats to grocery delivery carts have made the gentle transition from research centres into the real world with very few hiccups. Yet self-driving cars, despite their long-promised arrival, have not progressed beyond the testing stage.

In 2018, a pedestrian was struck and killed by an Uber self-driving car during a test drive. Although fatal accidents happen every day when humans are behind the wheel, the public holds driverless cars to far higher safety standards, interpreting one-off accidents as proof that these vehicles are too unsafe to unleash on public roads.

Programming a perfect self-driving car that will always make the safest decision is a huge technical challenge. Unlike other autonomous vehicles, which are generally rolled out in tightly controlled environments, self-driving cars must function on an endlessly unpredictable road network, rapidly processing many complex variables to remain safe.

Professor Ekaterina Komendantskaya and research associate Matthew Daggitt from the National Robotarium, together with Luca Arnaboldi from the University of Edinburgh, are working on a set of rules, inspired by the Highway Code, that will help self-driving cars make the safest decisions in every conceivable scenario. Verifying that these rules work is the final roadblock to getting trustworthy self-driving cars safely onto our roads. Writing in The Conversation, the team outlines how their research intends to address these very serious safety concerns.

AI software is good at generalising to scenarios it has never faced. Using ‘neural networks’ that take their inspiration from the layout of the human brain, such software can spot patterns in data, like the movements of cars and pedestrians, and then recall these patterns in novel scenarios. But our scientists still need to prove that any safety rules taught to self-driving cars will work in these new scenarios. To do this, they are using formal verification: the method that computer scientists use to prove that a rule works in all circumstances.
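To give a flavour of what that means in practice, the sketch below checks a toy braking rule with the Z3 SMT solver. It is our illustration only, not the AISEC team's tooling: the controller, the two-second rule, the speed and distance ranges and all variable names are invented for the example, and a real system would be verifying properties of a trained neural network rather than a hand-written condition.

```python
# Illustrative only: a toy braking rule checked with the Z3 SMT solver
# (requires the z3-solver package). All thresholds here are invented.
from z3 import Real, Solver, And, Implies, Not, sat

distance = Real("distance")  # metres to the vehicle ahead
speed = Real("speed")        # metres per second

# A deliberately naive controller: brake only when closer than 50 metres.
controller_brakes = distance < 50

# The safety rule to verify: whenever the car is under two seconds from the
# vehicle ahead (distance < 2 * speed), the controller must be braking.
safety_rule = Implies(distance < 2 * speed, controller_brakes)

solver = Solver()
# Consider every physically sensible input, not a handful of test cases.
solver.add(And(distance > 0, distance < 200, speed > 0, speed < 40))
# Ask the solver for any input at all that violates the rule.
solver.add(Not(safety_rule))

if solver.check() == sat:
    print("Counterexample found:", solver.model())  # e.g. high speed, just over 50 m
else:
    print("The rule holds for every distance and speed in the range.")
```

Because the naive controller only reacts to distance, the solver immediately produces a high-speed counterexample; tightening the braking condition to match the two-second rule makes the same check pass across the whole range. That exhaustive, whole-range guarantee is what formal verification offers and what testing a finite list of scenarios cannot.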

One of the more notable recent successes in the field is the verification of an AI system that uses neural networks to avoid collisions between autonomous aircraft. Researchers have successfully formally verified that the system will always respond correctly, regardless of the horizontal and vertical manoeuvres of the aircraft involved.

Human drivers follow the Highway Code to keep all road users safe, which relies on the human brain learning these rules and applying them sensibly in innumerable real-world scenarios. Self-driving cars can be taught the code too, but doing so requires unpicking each rule, teaching the vehicles’ neural networks to understand how to obey it, and then verifying that they can be relied upon to obey it safely in all circumstances. The challenge of verifying that these rules will be safely followed becomes clear when examining the consequences of the phrase ‘must never’ in the Highway Code.
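To see why that phrase is troublesome, consider a toy encoding of one such rule. This is our illustration, not the researchers' formalism: the states, field names and trace are made up. Read literally, ‘must never’ becomes an invariant that has to hold in every state the car can ever reach, with no room for the exceptions a human driver would naturally make.

```python
# Illustrative only: a literal "must never" rule versus the reading a human
# driver actually applies. The states, field names and trace are invented.
from dataclasses import dataclass

@dataclass
class State:
    crosses_solid_white_line: bool
    avoiding_stationary_obstacle: bool  # a legitimate exception in practice

def literal_rule(state: State) -> bool:
    """'Must never cross a solid white line', read with no exceptions."""
    return not state.crosses_solid_white_line

def rule_with_exception(state: State) -> bool:
    """The same rule, allowing the crossing a human driver would make."""
    return (not state.crosses_solid_white_line) or state.avoiding_stationary_obstacle

# A short trace in which the car briefly crosses the line to pass a
# broken-down vehicle blocking its lane.
trace = [
    State(crosses_solid_white_line=False, avoiding_stationary_obstacle=False),
    State(crosses_solid_white_line=True, avoiding_stationary_obstacle=True),
    State(crosses_solid_white_line=False, avoiding_stationary_obstacle=False),
]

print(all(literal_rule(s) for s in trace))         # False: literal rule "violated"
print(all(rule_with_exception(s) for s in trace))  # True: sensible behaviour passes
```

Verifying the literal reading would brand perfectly sensible driving as a violation, which is why each rule has to be unpicked so carefully before it can be taught and verified.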

To make a self-driving car as reactive as a human driver in any given scenario, it must be programmed in a way that accounts for nuance, weighted risk and the occasional scenario in which different rules are in direct conflict, requiring the car to ignore one or more of them. Such a task cannot be left solely to programmers and will require input from lawyers, security experts, system engineers and policymakers.
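One way such conflicts might be handled, sketched below purely for illustration, is to give each rule a priority weight and let the planner choose the action that violates the least important rules. The rule set, the weights and the candidate actions are all hypothetical and are not taken from the AISEC project.

```python
# Illustrative only: weighted rules resolving a conflict between staying in
# lane and avoiding a collision. All names, weights and actions are invented.
from typing import Callable, List, Tuple

# (rule name, weight, predicate saying whether a candidate action obeys it)
Rule = Tuple[str, float, Callable[[str], bool]]

rules: List[Rule] = [
    ("avoid collision",        100.0, lambda a: a != "stay_in_lane_and_hit_obstacle"),
    ("keep to marked lane",     10.0, lambda a: a != "swerve_across_solid_line"),
    ("maintain smooth driving",  1.0, lambda a: a != "emergency_brake"),
]

def total_violation_weight(action: str) -> float:
    """Sum the weights of every rule the action breaks."""
    return sum(weight for _, weight, obeys in rules if not obeys(action))

candidates = ["stay_in_lane_and_hit_obstacle", "swerve_across_solid_line", "emergency_brake"]
best = min(candidates, key=total_violation_weight)
print(best)  # "emergency_brake": breaking the lowest-weight rule is the least bad option
```

What the weights should be, and whether trading rules off against each other like this is acceptable at all, is exactly the kind of decision that cannot be left to programmers alone.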

Within the newly formed AISEC project, our team of researchers is designing a tool to facilitate the kind of interdisciplinary collaboration needed to create ethical and legal standards for self-driving cars. Teaching self-driving cars to be perfect will be a dynamic process: dependent upon how legal, cultural and technological experts define perfection over time.

The first experimental prototype of the AISEC tool is expected to be delivered by 2024, but the adaptive verification methods needed to address the remaining safety and security concerns will likely take years to build and embed into self-driving cars.

Accidents involving self-driving cars always create headlines. A self-driving car that recognises a pedestrian and stops before hitting them 99 per cent of the time is a cause for celebration in research labs, but a killing machine in the real world. By creating robust, verifiable safety rules for self-driving cars, the team is attempting to make that one per cent of accidents a thing of the past.
