Autonomous cars and the road to the future

A Google self-driving car. Courtesy of Wikimedia.

BY SIDDHARTH VODNALA

SAN JOSE, California – A runaway trolley is rumbling down a track. A few hundred yards ahead, five people are tied to the track, and the trolley is heading straight toward them. You see a lever in the station that can shift the trolley onto a different set of tracks – but on those tracks stands an innocent pedestrian, blissfully unaware of either the trolley or the moral conundrum you are in. Do you pull the lever, actively choosing death for one person, or stand by and let the trolley rumble on to its – and the unfortunate five victims’ – gory destiny?

This question, long known as the “trolley problem” in philosophy, is surprisingly relevant to the next frontier in automobile technology and artificial intelligence: autonomous cars.

Chris Gerdes, a researcher from Stanford University, expounded on this issue in the session “Road to Autonomous Cars” at the American Association for the Advancement of Science conference on February 14.

“We want to make cars that have not only good driving skills but also human judiciousness,” Gerdes said.

When an autonomous car faces any of the multitude of moral choices we make on the road – do we give that bicyclist more room to ride safely even though he’s on the wrong side of the road? – the question becomes how to design cars that are not only safe but, more importantly, wise.

“The question I’m most interested in is how we can program ethical structures into autonomous cars,” Gerdes said.

Gerdes, who has worked with the Revs Program at Stanford, talked about the computational and philosophical issues that arise in designing autonomous cars. The Revs Program’s stated mission is to “forge new scholarship and student experiences around the past, present and future of the automobile.”

“We examined race car drivers in order to devise methods to make cars safe for everyone,” Gerdes said. While seeing a car drive itself might be disconcerting at first, he added, autonomous cars have gotten to the point where they are fairly comparable to expert race car drivers.

Shelly, an autonomous car tested by Stanford researchers, was offered as an example. “Shelly is really good at using all her computational ability to know how fast she’s going, how far ahead the curve is, when exactly to begin braking. All of this thinking happens in computational ways,” Gerdes said.
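As a rough illustration of the kind of arithmetic Gerdes describes – not Stanford’s actual software – the decision of “when exactly to begin braking” can be sketched from basic kinematics. The speeds, distances, and deceleration limit below are invented values.

```python
# Minimal sketch of the braking arithmetic a car like Shelly must do.
# Illustrative only; all numbers are made up.

def braking_distance(speed_mps: float, target_speed_mps: float, max_decel_mps2: float) -> float:
    """Distance needed to slow from speed to target_speed at constant deceleration,
    using the kinematic relation v_target^2 = v^2 - 2*a*d."""
    return (speed_mps**2 - target_speed_mps**2) / (2.0 * max_decel_mps2)

def should_start_braking(distance_to_curve_m: float, speed_mps: float,
                         curve_entry_speed_mps: float, max_decel_mps2: float = 4.0) -> bool:
    """Begin braking once the remaining distance is no more than the distance
    required to reach the curve-entry speed, plus a small safety margin."""
    margin_m = 10.0  # arbitrary safety buffer
    needed = braking_distance(speed_mps, curve_entry_speed_mps, max_decel_mps2)
    return distance_to_curve_m <= needed + margin_m

# Example: travelling 40 m/s with a curve 155 m ahead that should be entered at 20 m/s.
print(should_start_braking(155.0, 40.0, 20.0))  # True: about 150 m of braking distance is needed
```

A real vehicle folds in far more – tire friction, road grade, sensor uncertainty – but the core of the computation is this kind of continuous bookkeeping of speed and distance.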

Humans are more flexible, weighing many more things as drivers: how should we drive next to, say, an obviously old and hesitant driver? “Our focus,” said Gerdes, “is on more than just ‘Stay within the lines.’”

On the oft-asked question of hacking autonomous vehicles, Gerdes emphasized that no technology can be immune to abuse. “It is often said that autonomous cars will eliminate human error. They don’t; they merely shift driving error to programming error.”

“It all comes down to rules versus reasonableness,” said Bryant Walker Smith, assistant professor of law at the University of South Carolina.
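To make the distinction concrete, here is a toy sketch of how a hard rule differs from a “reasonableness” trade-off in a driving decision. This is an invented example, not a framework presented at the session; the maneuvers, weights, and thresholds are arbitrary.

```python
# Toy illustration of "rules versus reasonableness": a hard rule forbids a
# maneuver outright, while a soft cost weighs competing concerns against each other.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    crosses_double_yellow: bool    # rule: normally forbidden
    clearance_to_cyclist_m: float  # reasonableness: more room is better

def rule_based_allowed(m: Maneuver) -> bool:
    """A strict rule: never cross the double yellow line."""
    return not m.crosses_double_yellow

def reasonableness_cost(m: Maneuver) -> float:
    """A soft score: penalize crossing the line a little, penalize
    crowding the cyclist a lot (weights are arbitrary)."""
    return 1.0 * m.crosses_double_yellow + 5.0 * max(0.0, 1.5 - m.clearance_to_cyclist_m)

options = [
    Maneuver("hold lane", crosses_double_yellow=False, clearance_to_cyclist_m=0.5),
    Maneuver("edge across the line", crosses_double_yellow=True, clearance_to_cyclist_m=2.0),
]

print([rule_based_allowed(o) for o in options])    # [True, False] -- the rule keeps the car in its lane
print(min(options, key=reasonableness_cost).name)  # "edge across the line" -- the cost prefers giving the cyclist room
```

The two approaches can disagree, which is exactly the tension Smith and Gerdes describe: a strictly rule-following car stays in its lane, while a “reasonable” one briefly crosses the line to give the cyclist space.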

Smith outlined some of the main implications of the arrival of autonomous cars:

  1. Consumers expect more.
  2. Decisions shift from consumers to companies.
  3. Companies get closer to their systems.
  4. Data management becomes more complicated.

“There are many types of legal liability, but what makes manufacturers nervous is civil liability,” Smith said. Civil liability means that the party at fault is responsible for paying damages to those who are harmed, whether through negligence or a breach of contract.

In such cases, courts may ask whether a reasonable change to the automation system could have made the vehicle safer. But since the technology is in a perpetual state of improvement, car manufacturers are likely to have a tough time defending themselves in court if it comes to that.

Smith said that one aspect of his research is determining where, legally and morally, the system’s responsibility ends and human responsibility begins.

“How can we create mechanisms where we can determine where responsibility lies?” Smith asked. “That will be the defining question for future research.”
