Many drivers at one time or another have encountered this problem: I’m going to hit someone or something, so what should I do right now? Do I hit the pedestrian? Or swerve into the truck? Do I steer into the ditch, or slam on the brakes and accept a likely rear-end collision? In the split second a driver has to decide, a whole range of ethical, tactical, legal and personal considerations must be weighed and resolved. How will driverless vehicle technology address these issues?
Highly automated vehicles, or HAVs, will depend on probabilistic algorithms and artificial intelligence to make the life-or-death decisions that most drivers will eventually confront over a lifetime behind the wheel. While the assumptions and priorities fed into the algorithm will reflect the values of the programming team, the real-world actions taken by a driverless vehicle in an emergency will reflect its artificial intelligence capacity: its ability to sift through dozens or hundreds of factors and data inputs in order to minimize the damage of a crash. How should an HAV resolve a situation that has no ideal outcome?
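The kind of damage-minimizing choice described above can be pictured with a deliberately simplified sketch. Every name and number below — the candidate maneuvers, the outcome probabilities, and the harm scores — is invented for illustration only; a real HAV would weigh far more factors, in real time, under uncertainty.

```python
# Hypothetical illustration: choose the maneuver with the lowest
# expected harm. The maneuvers, probabilities, and harm scores
# below are invented for this sketch, not real HAV data.

def expected_harm(outcomes):
    """Sum of (probability of outcome) * (estimated harm score)."""
    return sum(p * harm for p, harm in outcomes)

# Each maneuver maps to a list of (probability, harm_score) outcomes.
maneuvers = {
    "brake_hard":      [(0.7, 2), (0.3, 5)],   # likely rear-end collision
    "swerve_left":     [(0.5, 1), (0.5, 8)],   # ditch vs. oncoming truck
    "continue_course": [(0.9, 9), (0.1, 0)],   # likely pedestrian impact
}

best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(best)  # brake_hard: expected harm 0.7*2 + 0.3*5 = 2.9
```

The sketch makes the ethical question concrete: whoever assigns the harm scores decides, in advance, whose safety counts for how much.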
Driver Opinion as to Ethical Priorities Is Mixed
A good starting point might be to find out how the general public wants autonomous vehicles to resolve the ethical dilemmas that can arise when a crash is imminent. In 2015, an academic research team surveyed American drivers about how they would like an automated vehicle to behave in emergency situations, but the results were ambiguous. About 50 percent of respondents preferred that their own car place passenger safety as its highest priority, while only 19 percent would buy a car that maximized overall safety even at the possible expense of the car and its passengers. In general, respondents preferred that HAVs protect the most lives possible, but for their own cars they wanted vehicles that would protect them first.
A Mercedes-Benz official told Car and Driver magazine in October 2016 that the German automaker’s driverless technology would place the highest priority on passenger safety, not the safety of other road users. A few days later, however, Mercedes-Benz backed away from this statement, pointing out that preferring any particular life over another’s would violate German law.
Federal Guidance on Ethical Considerations Is Vague
In the United States, there is no specific requirement that a driverless vehicle’s artificial intelligence point toward any particular outcome in an emergency. The only current direction is found in the Federal Automated Vehicles Policy, issued in September 2016, which is a non-binding guidance for manufacturers and software designers that will likely undergo further refinement as HAV technology approaches commercial deployment.
For the moment, ethical considerations are just one of fifteen separate safety and performance issues that are to be addressed in a Safety Assessment Letter prepared for the review of the National Highway Traffic Safety Administration (NHTSA). The Federal Policy observes that safety, mobility and legality are three broad objectives of driving in general, and all three can be satisfied easily most of the time. But what about choosing between a traffic violation and an accident? Or between hitting a deer and hitting a tree? The Policy does not require manufacturers to satisfy any particular standard with respect to ethical considerations like these. Instead, manufacturers are asked to describe how their HAV technology will resolve the occasional conflicts that can arise among these sometimes competing driving objectives.
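One hedged way to picture the conflict the Policy describes is as a weighted trade-off among the three named objectives. The weights, scenario, and scores below are invented for illustration; nothing in the Federal Policy prescribes numbers like these, and real systems would not reduce the problem to a single scalar this crudely.

```python
# Hypothetical sketch: scoring candidate actions against the three
# driving objectives named in the Federal Policy (safety, mobility,
# legality). All weights and scores are invented for illustration.

WEIGHTS = {"safety": 0.6, "mobility": 0.1, "legality": 0.3}

def score(action_scores):
    """Weighted sum of per-objective scores (higher is better)."""
    return sum(WEIGHTS[obj] * s for obj, s in action_scores.items())

# Example dilemma: cross a double yellow line (a traffic violation)
# to pass a stopped obstacle, or wait behind it (legal but immobile).
actions = {
    "cross_double_yellow":  {"safety": 0.9, "mobility": 0.9, "legality": 0.0},
    "wait_behind_obstacle": {"safety": 0.7, "mobility": 0.1, "legality": 1.0},
}

best = max(actions, key=lambda a: score(actions[a]))
print(best)  # wait_behind_obstacle: 0.73 vs. 0.63
```

The point of the sketch is that the outcome hinges entirely on the weights, which is exactly the value judgment the Policy asks manufacturers to make transparent.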
The Federal Policy also states rather vaguely: “Algorithms for resolving these conflicts should be developed transparently using input from federal and state regulators, drivers, passengers and vulnerable road users.” To the extent that artificial intelligence in driverless technology will eventually be covered in a Federal Motor Vehicle Safety Standard, it’s obvious that a good deal of work remains to be done before a given set of emergency priorities is codified into a regulation.
Even gathering the public input necessary to inform a manufacturer’s emergency priorities and ethical decision-making could prove difficult. As the survey mentioned above found, drivers seem to want HAV technology to protect as many lives as possible as a general matter, but want their own cars to protect the drivers themselves.
If a given manufacturer presents its vehicles as placing the highest priority on self-preservation, it might do well in the market, but it might undermine the overall safety of an automated driving environment as other carmakers do the same. Prioritizing the safety of the general public will probably need to become standard practice in the HAV industry, because individual carmakers and drivers otherwise have little incentive to adopt this value on their own.
Will HAV Technology Doom Private Vehicle Ownership?
It’s possible, however, that as driverless technology takes hold, people’s attitudes towards cars and driving might change. Today, people own cars as prized possessions and it makes sense for drivers to emphasize their own safety and convenience over that of others.
In the future, private ownership of cars and trucks could substantially decline. Driverless vehicles could become a quasi-public utility, with many vehicles in nearly constant use by many different people. Especially in cities, people will probably prefer to have a car available on demand rather than assume the expense and responsibility of driving, fueling, maintaining and parking a car they might only need for an hour or so a day. Under that scenario, people might be more aware of traffic as a public system rather than a personal annoyance or threat, and it might make better sense to maximize the safety of everyone involved in the system.
Truly driverless vehicle options remain at least a decade away from daily reality. It just might take that long for the ethical implications of artificial intelligence in motor vehicles to be fully considered and resolved.
About the Author
Matthew Wright began his career representing insurance companies but quickly became disillusioned by how little many of those companies cared about the people who suffered catastrophic or life-altering losses. He now fights aggressively on behalf of plaintiffs who have been injured as a result of unsafe practices by trucking companies. Matt’s goal is to encourage safer practices within the industry, and ultimately arrive at a point where only safe and compliant companies remain. He has written numerous articles on the future of self-driving trucks, which can be found on his website discussing Truck Injury Law.