Driverless cars may
function completely or partly without the assistance of a human driver.
Driverless automobiles, like other AI products, raise difficult questions of liability, responsibility, data protection, and consumer privacy.
Driverless cars have the potential to eliminate human
carelessness while also providing safe transportation for passengers.
Despite this potential, they have been involved in accidents.
In a well-publicized 2016 accident, the Autopilot software on a Tesla Model S may have failed to detect a tractor-trailer crossing the highway.
In 2018, an Uber self-driving test vehicle struck and killed a 49-year-old woman in Tempe, Arizona.
Litigation followed both incidents: Uber settled with the victim's family, and a class action lawsuit filed against Tesla over Autopilot was resolved out of court.
Additional worries about autonomous cars have arisen as a
result of bias and racial prejudice in machine vision and face recognition.
Researchers at the Georgia Institute of Technology have found that the object detection systems used in current driverless cars may be better at spotting pedestrians with lighter skin.
Product liability provides some much-needed solutions to
such problems.
In the United Kingdom, product liability claims are governed by the Consumer Protection Act 1987 (CPA).
This act implements the European Union's Product Liability Directive 85/374/EEC, which holds manufacturers liable for defective products, that is, products that are not as safe as they should be when purchased.
This contrasts with U.S. law addressing product liability, which is fragmented and largely controlled by common law and a succession of state acts.
The Uniform Commercial Code (UCC) offers remedies where a product fails to conform to an express warranty, is not merchantable, or is unfit for its particular purpose.
In general, manufacturers are held accountable for injuries caused by their defective products, and this liability may be framed in terms of negligence or strict liability.
A defect in this context could be a manufacturing defect, where the driverless vehicle does not satisfy the manufacturer's specifications and standards; a design defect, where an alternative design would have prevented an accident; or a warning defect, where there is a failure to provide adequate warning regarding a driverless car's operation.
To evaluate product liability, the six levels of automation specified by the Society of Automotive Engineers (SAE) International should be taken into account: Level 0, full control of a vehicle by a human driver; Level 1, a human driver assisted by an automated system; Level 2, an automated system partially conducting the driving while a human driver monitors the environment and performs most of the driving; Level 3, an automated system doing the driving and monitoring the environment, with the human driver taking back control when signaled; Level 4, a driverless vehicle conducting the driving and monitoring the environment but restricted to certain environments; and Level 5, a driverless vehicle that, without any restrictions, does everything a human driver would.
At Levels 1–3, which involve human-machine interaction, the manufacturer will be liable under product liability if it is found that the driverless vehicle did not communicate with or signal the human driver, or that the autopilot software did not work.
At Levels 4 and 5, liability for defective products will fully apply.
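This allocation of liability across automation levels can be summarized schematically. The following Python sketch is purely illustrative: the names SAELevel and primary_liability_focus are invented for this example, and the mapping from level to liable party is a simplification of the discussion above, not a statement of law.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # Level 0: human driver in full control
    DRIVER_ASSISTANCE = 1       # Level 1: automated system assists the driver
    PARTIAL_AUTOMATION = 2      # Level 2: system drives in part; human monitors
    CONDITIONAL_AUTOMATION = 3  # Level 3: system drives; human takes over on signal
    HIGH_AUTOMATION = 4         # Level 4: system drives, restricted environments only
    FULL_AUTOMATION = 5         # Level 5: system does everything, anywhere

def primary_liability_focus(level: SAELevel) -> str:
    """Toy rule of thumb, not legal advice: whom liability analysis centers on."""
    if level == SAELevel.NO_AUTOMATION:
        return "driver (ordinary negligence)"
    if level <= SAELevel.CONDITIONAL_AUTOMATION:
        # Shared human-machine control: fact-specific inquiry into whether the
        # system failed to signal the driver or the driver was negligent.
        return "driver and manufacturer (fact-specific)"
    return "manufacturer (product liability fully applies)"

for lvl in SAELevel:
    print(f"Level {int(lvl)}: {primary_liability_focus(lvl)}")
```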
Manufacturers have a duty of care to ensure that any
driverless vehicle they manufacture is safe when used in any foreseeable
manner.
Failure to exercise this duty will make them liable for
negligence.
In other cases, even when manufacturers have exercised all reasonable care, they will still be liable for unintended defects under the principle of strict liability.
The driver's liability, especially at Levels 1–3, may also be based on tort principles.
The requirement of article 8 of the 1949 Geneva Convention on Road Traffic, which states that "[e]very vehicle or combination of vehicles proceeding as a unit shall have a driver," may not be fulfilled where a vehicle is fully automated.
Some U.S. states, namely Nevada and Florida, have replaced the word driver with controller, defined as any person who causes the autonomous technology to engage; the controller need not be present in the vehicle.
A driver or controller becomes liable if it is proved that he or she failed to exercise reasonable care.
In certain other cases, victims will be compensated only by their own insurance companies under no-fault liability.
Victims may also base their claims for damages on strict liability, without having to present proof of the driver's fault.
In this situation, the driver or controller may demand that the manufacturer be joined in a lawsuit for damages if he or she believes that the accident was the consequence of a product defect.
In any case, proof of the driver's or controller's
negligence will reduce the manufacturer's liability.
Under product liability, third parties may sue manufacturers directly for injuries caused by defective products.
In MacPherson v. Buick Motor Co. (1916), the court found that an automobile manufacturer's duty for a defective product extends beyond the initial purchaser, so no privity of contract between the victim and the manufacturer is required.
The question of product liability for self-driving vehicles
is complex.
The transition from manual to smart automated control
transfers responsibility from the driver to the manufacturer.
The complexity of driving modes, as well as the interaction
between the human operator and the artificial agent, is one of the primary
challenges concerning accident responsibility.
In the United States, the law of motor vehicle product
liability relating to flaws in self-driving cars is still in its infancy.
While the Department of Transportation and, especially, the National Highway Traffic Safety Administration provide some basic guidance on automation in driverless vehicles, Congress has yet to enact self-driving car legislation.
In the United Kingdom, the Automated and Electric Vehicles Act 2018 makes insurers liable by default for accidents involving automated vehicles that result in death, bodily injury, or property damage, provided the vehicles were in self-driving mode and insured at the time of the accident.
~ Jai Krishna Ponnappan
See also:
Accidents and Risk Assessment; Product Liability and AI; Trolley Problem.
Further Reading:
Geistfeld, Mark A. 2017. “A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation.” California Law Review 105: 1611–94.
Hevelke, Alexander, and Julian Nida-Rümelin. 2015. “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis.” Science and Engineering Ethics 21, no. 3 (June): 619–30.
Karanasiou, Argyro P., and Dimitris A. Pinotsis. 2017. “Towards a Legal Definition of Machine Intelligence: The Argument for Artificial Personhood in the Age of Deep Learning.” In ICAIL ’17: Proceedings of the 16th edition of the International Conference on Artificial Intelligence and Law, edited by Jeroen Keppens and Guido Governatori, 119–28. New York: Association for Computing Machinery.
Luetge, Christoph. 2017. “The German Ethics Code for Automated and Connected Driving.” Philosophy & Technology 30 (September): 547–58.
Rabin, Robert L., and Kenneth S. Abraham. 2019. “Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era.” Virginia Law Review 105, no. 1 (March): 127–71.
Wilson, Benjamin, Judy Hoffman, and Jamie Morgenstern. 2019. “Predictive Inequity in Object Detection.” https://arxiv.org/abs/1902.11097.