
Autonomous vehicles will not prevent half of real-world crashes

Alan Thomas of CAVT looks at the reality behind the safety claims fuelling the drive towards autonomous vehicles
April 5, 2017 Read time: 8 mins
Figure: How can an algorithm be written to deal with this situation?

The case for autonomous vehicles (AVs) is usually made by saying 90% of crashes are caused by driver error, so remove the driver and you avoid 90% of crashes. However, this simplistic approach ignores the real-world causes of collisions and fails to acknowledge the challenges facing the developers of smart systems.

Work being undertaken by CAVT includes researching how a range of road and traffic scenarios can produce atypical conditions in which drivers, and particularly autonomous vehicles and advanced driver assistance systems (ADAS), are faced with instantaneous choices. The outcomes of these choices range from the inconsequential to a fatal collision, and the research supports the development of ADAS which take account of the real world - not the world we would like it to be.

AVs and ADAS can avert collision scenarios coded into their algorithms plus others generated through the vehicle’s deep learning capability or ‘taught’ by transfer from an as-yet non-existent external body of accumulating AI wisdom. Human drivers are also good within their training and experience and, in addition, have the capacity to learn and intuitively interpret novel and ambiguous situations. This ability is vitally important as infrastructure and traffic do not conform to standards and are subject to huge variations locally, nationally and internationally.

One example is Europe’s pictograms, which have no US equivalent. And where pictograms do not suffice, text is used and may be bilingual. In the absence of 100% connected vehicle coverage, delivering a clear message in a few easily-readable words is a challenge, and any automated traffic sign recognition system will require complex linguistic abilities.

This problem is amplified with variable message signs, where the wording can vary and strobe effects may render the message unreadable to a digital camera.

In figures

Despite recent moderation, the mantra persists that ‘AVs are expected to eliminate most of the 93% of collisions that currently involve human error’. This comes from a summary by the US DoT focussing on Critical Reasons for Crashes Investigated in the National Motor Vehicle Collision Causation Survey (NMVCCS), which attributes 94% (±2.2%) of Critical Reasons to the driver. However, it defines the Critical Reason as ‘the immediate reason for the critical pre-crash event - often the last failure in the causal chain’.

NMVCCS defines the Critical Pre-crash Event as the ‘circumstance that led to this vehicle’s first impact in the crash sequence and made the crash imminent’ - in other words, given a situation in which a collision is almost inevitable, the driver failed to avoid the impact.

Inevitable crash

This raises questions about what led up to the situation in which the impact was inevitable. The full NMVCCS Report to Congress, and its contributory Crash-Associated Factors, highlights that there are almost always multiple factors: 30.4% of those contributing factors were attributed to road and weather and 16.4% to the vehicle, with 0.7% unknown. This means the driver is solely responsible for 43.4% of crashes - less than half the often-stated rate.

Looking at the details of the Drivers’ Critical Reasons for the Critical Pre-Crash Events, the largest factor was errors of recognition (40.6%), followed by the wrong or no decision (34.1%), performance (10.3%), non-performance (7.1%) and other/unknown (7.9%). Again, more than one error may be involved.

Similar analyses of French and UK data classify driver functional failures differently, but show a similar pattern.
 

Fields of view

In an attempt to clarify some of this confusion and determine whether automated systems could really counter these problems, CAVT undertook testing to understand what happens in these circumstances. A test vehicle was fitted with a road-scene video camera using a 1920x1080 HD sensor running at 30fps in full colour with a wide dynamic range. Its f/1.9 lens maximises light transmission and provides a 140° field of view with a horizontal angular resolution of 0.073°/pixel (compared with 0.051°/pixel for a typical ADAS VGA camera with a 38° field of view; comparable information about the cameras used in AVs is not readily available). The extra-wide angle is important because many hazards arise from the sides. LIDAR, it should be noted, covers about 170°.
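The per-pixel figures above follow from dividing the field of view by the sensor's horizontal pixel count. A minimal sketch (assuming, for simplicity, that the field of view is spread evenly across the sensor - real lenses distort towards the edges):

```python
def angular_resolution_deg_per_px(fov_deg: float, pixels: int) -> float:
    """Approximate horizontal angular resolution of a camera, assuming
    the field of view is spread evenly across the pixel columns
    (ignores lens distortion, which widens per-pixel angles at the edges)."""
    return fov_deg / pixels

# CAVT test camera: 140 degrees across a 1920-pixel-wide HD sensor
print(round(angular_resolution_deg_per_px(140, 1920), 3))  # -> 0.073
```

A smaller number means finer detail: each pixel subtends a narrower slice of the scene, so a distant cyclist covers more pixels and is detectable sooner.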

 

These tests suggest that a VGA camera in a vehicle travelling at 80km/h (50mph) cannot reliably detect an oncoming cyclist moving at 30km/h (20mph) until the cyclist is 50m away – or 1.6 seconds before a collision – sufficient under ideal conditions, but scary.
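The 1.6-second figure follows directly from the closing speed of the two road users. A minimal sketch, assuming constant speeds and straight-line, head-on motion:

```python
def time_to_collision_s(distance_m: float, v1_kmh: float, v2_kmh: float) -> float:
    """Time to collision for two road users closing head-on,
    assuming constant speeds and straight-line motion."""
    closing_ms = (v1_kmh + v2_kmh) / 3.6  # combined closing speed, km/h -> m/s
    return distance_m / closing_ms

# Vehicle at 80 km/h meets an oncoming cyclist at 30 km/h, first detected at 50 m
print(round(time_to_collision_s(50, 80, 30), 2))  # -> 1.64
```

At a 110km/h closing speed the gap shrinks by roughly 30m every second, which is why a 50m detection range leaves so little margin.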

The recorded road and traffic scenarios (excluding dangerous driving manoeuvres) reveal a vast array of circumstances which cause human confusion and would be hard to program into, or train, an autonomous vehicle to handle - many relating to infrastructure and, in particular, traffic sign recognition. These could be rectified given time and money, but there is little sign of that beginning. Other examples are not as ‘easy’ to counter, and include sensors being compromised by flooded motorways, wet tramlines at night and low sun.

Conflicting signs

In England this is compounded by a confusing system of road sign regulation, which has been loosened to give local authorities more discretion. While this causes consternation among drivers’ organisations and highway engineers (who remain legally liable for compliance), it presents an even greater challenge for machine recognition and interpretation.
 

X2V communications would help but are not infallible. In a long-term construction zone, the traffic sign recognition (TSR) system picked up the site limit while the satnav display showed the default 70mph (112km/h) limit - a potentially dangerous error for Intelligent Speed Adaptation. Away from the construction zone, the satnav correctly displayed 70mph but the TSR showed an illegal 90mph (145km/h) – and inspection of the video footage could not explain the error.
 

Complex situation

There are also many examples of signage displaying access or stopping restrictions that vary by vehicle type, time or day of the week. Rarely does a map database contain this data, meaning the sign’s text must be optically recognised, converted, read (and possibly translated), understood and cross-referenced with the vehicle, its occupants/cargo, and the time and date.
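Even once a conditional restriction has been read off the sign, the cross-referencing step is itself non-trivial. The sketch below shows what that final check might look like for a restriction such as "No stopping Mon-Fri 7am-7pm except buses"; the data structure and field names are purely illustrative, not from any real map or sign schema:

```python
from datetime import datetime

# Hypothetical encoding of "No stopping Mon-Fri 7am-7pm except buses".
# The structure and names are illustrative only.
RESTRICTION = {
    "action": "no_stopping",
    "days": {0, 1, 2, 3, 4},           # Monday=0 .. Friday=4
    "hours": range(7, 19),             # 07:00-18:59
    "exempt_vehicle_types": {"bus"},
}

def restriction_applies(r: dict, vehicle_type: str, when: datetime) -> bool:
    """True if the restriction binds this vehicle at this moment."""
    if vehicle_type in r["exempt_vehicle_types"]:
        return False
    return when.weekday() in r["days"] and when.hour in r["hours"]

# Monday 08:30: binds a car, but not an exempt bus
print(restriction_applies(RESTRICTION, "car", datetime(2017, 4, 3, 8, 30)))  # True
print(restriction_applies(RESTRICTION, "bus", datetime(2017, 4, 3, 8, 30)))  # False
```

Note that this check only starts once the optical recognition and translation steps have already succeeded - which, as the article argues, is the harder part.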

There are also examples where AI could learn incorrectly. An 8km (5-mile) stretch of the A447 has three sets of temporary warning signs but no lights, whereas a few miles further on there were active lights. The AI could have learned to ignore these signs - as did a human driver who overtook our test vehicle, was surprised by the lights and caused our driver to brake abruptly.

Many such evasive actions (both resolved and failed) can be viewed on the internet and it is often apparent that the well-known Collision Timeline is under way.
 

Traffic conflict

The stages of the collision timeline blend, and the time-scale can be variable but, in relation to Ekman’s Traffic Conflict Pyramid, NMVCCS generally defines ‘Crash Associated Factors’ in the ‘normal traffic’ (green) zone as: overloaded, snow/sleet, dark but lit, not physically divided, view obstructions, non-driving activities, fatigue, and tyre, wheel or brake deficiencies. The potential and slight conflict zones (yellow-to-orange) cover ‘Motions before the Critical Crash Envelope’, including changing lane and avoidance manoeuvres (due to a previous critical event).

The red (serious conflict) zone houses the ‘Critical Reasons for the Critical Pre-Crash Event’, which often include a continuation of the factors initiating the hazard. These include distraction, inattention, driving too fast for conditions, illegal manoeuvres, misjudgement of a gap, overcompensation, panic and sleep, as well as tyre, brake or other system failures and signage, road layout, obstructed view, fog and glare.
 

Near-miss analysis

Analysis of near-miss events (often called the Traffic Conflict Technique) is now widely used because more can be understood by observing the frequency and nature of conflicts (where one road user changes trajectory due to another user’s action) than by waiting for a collision to occur. This technique can also distinguish the small marginal changes between the different levels of event, by which a slight conflict can become a serious conflict or a crash, through almost random minor differences in circumstance.
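A conflict-grading scheme of this kind can be sketched as a simple classifier over the time remaining when evasive action begins. The thresholds below are illustrative only - real traffic-conflict studies calibrate severity against speed as well as time:

```python
def classify_conflict(tte_s: float) -> str:
    """Grade a traffic conflict by the time-to-event (in seconds) at which
    evasive action begins. Thresholds are illustrative, not calibrated."""
    if tte_s <= 0:
        return "crash"
    if tte_s < 1.0:
        return "serious conflict"
    if tte_s < 1.5:
        return "slight conflict"
    return "potential conflict"

for t in (2.0, 1.2, 0.8, 0.0):
    print(t, "->", classify_conflict(t))
```

The point the article makes is visible in the thresholds: a fraction of a second either way moves an event from one severity band to the next, which is why near-identical circumstances can yield anything from a non-event to a crash.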

 

In order to remove human errors, ADAS and AV systems would need to be immune to all contributory factors - a nirvana that may be impossible to achieve. Furthermore, the introduction of new contributory factors (due to system hardware and software shortcomings and failures - even with ISO 26262) is almost inevitable because of the number of unknowns.

Human parallels

More fundamentally, there are parallels to consider between ADAS/AV and human failures, including perception (inadequate range, field of view, resolution or contrast, or degradation by ghosting, reflections, artefacts and sensor obscuration by snow, rain or litter). The evaluation and interpretation of anything the sensors detect is also fraught with difficulties, including conflicting inputs and false positives. Then the system has to decide what to do (which could be to hand over to the human) and action that decision. Every stage can be subject to error.

Accepting that sensor fusion will help overcome some of these problems and that LIDAR can play a major role in providing a ground-truth reference for other sensors, system performance must not be compromised by vehicle price point.

Due to the many limitations in perception, and particularly when the driver is treated as a fall-back in Level 3 and 4 vehicles, the system must always behave as well as an experienced driver in the green/yellow zone. To delay intervention or response until the situation is in the yellow-to-orange zone would be uncomfortable or alarming for occupants. To delay further, until the driver has failed to respond to a conflict that is already in the orange/red critical pre-crash event domain, is a recipe for failure.
 

Infrastructure maintenance

Assuming (rashly) that full integrity and reliability can be achieved, it is in principle possible to provide good enough physical infrastructure and I2V to mitigate some of the risks of ADAS and AV systems - but given today’s budgets and the inability of authorities to maintain existing standards, this is impossible. Interestingly, many of those measures, particularly the lower-cost ones, would also remove or minimise the hazards for human drivers.

While a 43% safety gain would be more than welcome, the ‘93%’ claim must be viewed in the context of what advanced technology can actually deliver, and addressing the 50%+ of crash-associated factors that are not down to the driver will keep everyone busy for some time.

  • ABOUT THE AUTHOR: Following a career in vehicle manufacture, safety, regulations and driver performance R&D, Alan Thomas is now director of research and consultancy at CAVT.
