Navigating the Legal Landscape: Liability in Autonomous Vehicle Collisions
In recent years, our roads have begun to see the introduction of Autonomous Vehicles (AVs). Interest in AVs has, in many ways, been driven by their promise of greater road safety.[1] Car accidents have long been a frequent occurrence with Conventional Vehicles (CVs) operated by human drivers.[2] The frequency of car accidents involving CVs is primarily a result of human error.[3] Data from the NHTSA indicates that 94% of car accidents result from human error.[4] AVs are a possible solution to the problem of human error, as they can provide safety assistance to drivers, mitigating that error.[5] In some cases, AVs can remove the driver from operating the vehicle entirely, eliminating the possibility of human error altogether.[6]
The term AV is broad, encompassing vehicles that vary in technology and degree of human involvement. AVs are currently classified by SAE International, which defines six levels of driving automation, ranging from Level 0 to Level 5.[7] At Levels 0 to 2, the vehicle does not control the driving task entirely; instead, it provides driver assistance.[8] An example of technology within these lower levels of automation is Autopilot, which is classified as Level 2 automation.[9] At Levels 0 to 2, the driver remains in complete control of the vehicle and is responsible for monitoring road conditions and responding to them adequately.[10] At Levels 3 to 5, the vehicle fully controls the driving function.[11] In the current landscape, the most common vehicles within these levels operate at Level 4, primarily in the form of robotaxis.[12] In sum, what separates Levels 0 to 2 from Levels 3 to 5 is that at Levels 0 to 2 the driver remains in control of the vehicle, whereas at Levels 3 to 5 the vehicle can operate without human intervention.
As mentioned earlier, the rise of AVs carries the prospective promise of reducing car collisions. Some reports have predicted that AVs could reduce car accident fatalities by as much as 90%.[13] Further, Waymo, a robotaxi company operating at Level 4, has reported data indicating that its AVs are substantially safer than CVs.[14] However, despite these optimistic predictions and data, several high-profile accidents involving AVs have occurred.[15] The point of tension in many AV collisions is evaluating liability, specifically in collisions that occur without a technological defect in the AV itself.
In recent Level 2 cases involving AVs using Autopilot, Tesla has escaped liability by pointing to drivers who acted negligently when using the system.[16] Examples include a driver using Autopilot on roads Tesla cautions against, or an accident occurring because a driver failed to monitor the road while using Autopilot.[17] However, placing liability on drivers in Autopilot cases may be misguided: many drivers tend to over-rely on Autopilot features, suggesting that the allocation of liability to drivers in Level 2 accidents is unfair.[18]
On the other hand, commentators have begun to question how liability should be assigned for crashes at higher levels of automation, absent design defects and human intervention. Although data indicate that AVs operating at Levels 3 to 5 are safer than CVs, accidents will still occur at these higher levels, raising difficult questions of liability. Part of the complication is that because AVs will seldom be found to have acted recklessly, victims of AV crashes may go uncompensated for the damages they incur.
As a result of this tension over assigning liability in AV collisions, some scholars have proposed strict liability systems for the manufacturers of AVs in the event of an accident. Two notable approaches are those offered by Matthew Wansley and Kyle Logue. Though they do not label their proposals as strict liability regimes, both propose systems that would hold manufacturers responsible for AV collisions regardless of fault.[19] Additionally, Wansley's and Logue's proposals focus on the pitfalls of a typical negligence system, arguing that the deterrence principles of negligence fall short of encouraging drivers to engage in safe driving practices.[20] Further, they argue that the burden of deterrence should be placed on manufacturers, who will then be encouraged to develop AVs that reach optimal safety levels.[21]
A concern with opting for a strict liability system is that such systems tend to group all forms of AVs together rather than distinguishing them based on the actors involved, i.e., weighing the varying degrees of human involvement at Levels 0 to 2 and Levels 3 to 5. Additionally, the strict liability regimes typically employed in tort law are reserved for inherently dangerous activities.[22] This concept is at odds with current data suggesting that AVs would be better classified as inherently safe rather than dangerous. Lastly, the argument that strict liability will incentivize manufacturers to build the safest AVs possible could have the opposite effect: the threat of strict liability may deter manufacturers from building these vehicles in the first place.
AVs have the potential to make our roads safer. At the same time, though AVs will reduce the number of crashes on our streets, they will not eliminate them altogether, leading to complex questions of liability and the need to establish new liability frameworks for AV collisions. The current discourse points toward a strict liability regime. However, such a system may need to be narrower and more manageable for manufacturers. A more balanced approach to AV liability may be needed, perhaps one focused on the varying levels of human involvement at different levels of automation.
Footnotes