Designing engineering solutions requires a bounded problem statement that can be reduced to an unambiguous specification, which the engineered machine can adhere to within defined and agreed tolerances. Those performance tolerances are scoped against a variety of considerations. In the case of automated driving systems, we are faced with the challenge of defining the system performance specification in an unambiguous manner – and therein lies the long tail of conditional automation.
The Society of Automotive Engineers (SAE) taxonomy of levels of driving automation is a good starting point and makes it easy for everyone to understand the problem at the lowest common denominator, but it fails to address the challenge of providing the engineering design guidance to circumscribe the performance specification of any automated driving system. In this article, I will attempt to reduce the complexity surrounding increasing levels of automation to enable a clear interpretation of technologies that deliver automated driving. I will restrict myself to discussing this challenge with respect to driving automation systems sold by car manufacturers in series production passenger vehicles only. All other types of vehicles or systems are beyond the scope of this write-up.
Level 5 – Undefined by definition:
In my view, a truly unconditional automated driving system (defined as SAE Level 5 driverless operation, where a vehicle can perform automated driving anytime, anywhere, under all conditions, without any form of human supervision whatsoever) is only a theoretical construct. The reason is rather simple – when the design problem is unbounded, it cannot be specified, and nothing can be engineered without a specification. The ability of the automated driving system to operate safely has to be bounded within a specific set of constrained conditions under which the system can be designed to operate. This is a system’s operational specification, also known as the Operational Design Domain (ODD). Defining the ODD of any automated driving system is an essential first step in providing an engineering specification and defining technology bounds. Since the ODD for Level 5 is undefined by definition, only an unlimited performance envelope could be derived from it, and it therefore cannot be engineered.
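To make the idea concrete, an ODD can be thought of as a machine-checkable specification: a bounded set of conditions against which the current environment can be tested. The following is a minimal sketch only; all field names, road types, and thresholds are illustrative assumptions, not any manufacturer's actual ODD.

```python
from dataclasses import dataclass

@dataclass
class Conditions:
    """Snapshot of the current operating environment (illustrative fields)."""
    road_type: str          # e.g. "motorway", "urban"
    speed_kph: float        # current vehicle speed
    visibility_m: float     # estimated visibility in metres
    raining_heavily: bool

@dataclass
class ODD:
    """A bounded operational specification the system is designed for."""
    allowed_road_types: set
    max_speed_kph: float
    min_visibility_m: float
    allow_heavy_rain: bool

    def contains(self, c: Conditions) -> bool:
        """True only if every bound of the specification is satisfied."""
        return (c.road_type in self.allowed_road_types
                and c.speed_kph <= self.max_speed_kph
                and c.visibility_m >= self.min_visibility_m
                and (self.allow_heavy_rain or not c.raining_heavily))

# A hypothetical motorway-only ODD: clear weather, good visibility, <= 130 km/h
odd = ODD({"motorway"}, 130.0, 100.0, False)
print(odd.contains(Conditions("motorway", 110.0, 500.0, False)))  # engageable
print(odd.contains(Conditions("urban", 50.0, 500.0, False)))      # out of ODD
```

The point of the sketch is the Level 5 argument itself: for an unconditional system, every bound in the `ODD` class would have to be removed, leaving nothing to check against and nothing to engineer to.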
Classifying Artificial Intelligence (AI) based technologies:
SAE driving automation levels fall short of describing or classifying the vast array of automated driving technologies that are emerging, each being driven by different components, many of which are based on AI. In biological sciences, a tiered taxonomic rank (comprising kingdom, phylum, class, order, family, genus, and species) is used for classifying life forms. The complexity of automated driving technologies may perhaps benefit from somewhat similar thinking, in which a tiered categorisation could be used for interpreting driving automation, with fewer, more general categories at the top and a variety of specific sub-categories at the lower tiers. At the top of the hierarchy of such a classification, automated driving technologies could be sorted into three basic categories: (1) Warning technologies, (2) Assistive technologies, and (3) Driving technologies.
Defining the Top-tier categories:
Warning technologies can be defined as systems that provide audible or visual warnings to a human driver but do not take over the driving task at any time. The human driver is always responsible.
Assistive technologies can be defined as systems which perform corrective, momentary interventions to the human’s driving task: for example, nudging the car back into lane if the human driver is drifting out of it, or applying automatic emergency braking if the human driver fails to brake, or brakes insufficiently. In the context of assistive technologies, the human driver is responsible, except that the system could be held responsible if it intervenes incorrectly, but not if it fails to intervene.
Driving technologies can be defined as systems that fully operate the vehicle (speed and steering) in a defined ODD. A human being may perform discretionary interventions to the system’s driving tasks. In the context of driving technologies, the system is responsible when it is engaged within its ODD; the human being is responsible when the system is not engaged, or when the system is activated outside the bounds of its ODD.
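The responsibility logic of the three top-tier categories above can be written down as a small decision procedure. This is a sketch of the article's definitions only, not a legal test; the function and argument names are my own.

```python
from enum import Enum

class Category(Enum):
    WARNING = "warning"      # alerts only, never controls the vehicle
    ASSISTIVE = "assistive"  # momentary corrective interventions
    DRIVING = "driving"      # full speed + steering control within an ODD

def responsible_party(category, system_engaged=False, within_odd=False,
                      system_intervened=False, intervention_correct=True):
    """Who carries responsibility under the top-tier definitions above."""
    if category is Category.WARNING:
        # Warning systems never take over the driving task.
        return "human"
    if category is Category.ASSISTIVE:
        # Human is responsible, except for an incorrect system intervention;
        # a failure to intervene does not shift responsibility to the system.
        if system_intervened and not intervention_correct:
            return "system"
        return "human"
    # Driving technology: the system owns the task when engaged inside its ODD.
    if system_engaged and within_odd:
        return "system"
    return "human"

print(responsible_party(Category.DRIVING, system_engaged=True, within_odd=True))
```

Writing it out this way makes the ambiguity in the next paragraph visible: for a system marketed as assistive but operating as a driving technology, the `within_odd` branch can never be evaluated, because the ODD is never stated.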
We all know that the current generation of vehicle automation systems, such as Tesla’s Autopilot, GM’s SuperCruise or Nissan’s ProPilot, are sold as assistive technologies when they are in fact a type of driving technology. Such systems should be categorised as driving technologies because, when activated, they take over both speed and steering control. Ambiguity occurs because such systems do not explicitly state the bounds of the ODD, i.e. under what conditions they can safely operate the vehicle and under what conditions they cannot. Instead of defining the ODD, such systems impose an obligation upon the human being to intervene in any condition of system failure. This ambiguity is reinforced by such systems being sold as assistive technologies. As a result, all responsibility gets improperly devolved upon the human driver when the system fails. If these vehicle automation systems were instead clearly positioned as driving technologies, their manufacturers would have to assume full responsibility for any system failure state in the context of a clearly articulated ODD – which is not currently happening.
Driving technologies marketed as “Pilots” today have a variety of performance limitations, but with the ODD not being clearly defined, it understandably becomes extremely difficult to interpret what such systems are. Not only are many of the currently marketed “Pilots” inherently unsafe, they are made more unsafe by consumers’ misinterpretation of what these vehicle automation systems can or cannot do, since the ODD bounds are unknown. The manufacturers of currently marketed driving technologies do not provide a clear definition of the ODD to either the end customer or the regulators, and the reasons may become apparent from the following discussion.
Longitudinal control:
Let’s first discuss longitudinal control. This requires the ability of a driving technology to detect frontal obstacles and apply braking to maintain a safe distance and avoid a collision. In most circumstances, where other vehicles are in motion, radar systems are able to detect and track moving obstacles with a high level of fidelity. The problem occurs when a stationary obstacle is encountered on the road, because radar data is filtered to eliminate some categories of static obstacles. When a static obstacle appears in the path of an automated vehicle (e.g. a stopped car or a jersey barrier), the driving technology has to rely primarily on its camera or laser sensors to detect it. Many OEMs do not employ lidar, and given the gaps in the camera-based neural network obstacle detection systems that most if not all OEMs have deployed, many driving technologies fail suddenly, with potentially catastrophic outcomes (please refer to my last article on this topic, ‘AI in Self Driving - It the Cake’ - https://selfdriving.substack.com/p/ai-in-self-driving-it-the-cake).
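A rough intuition for why radar filtering creates this gap: to a forward-facing radar, a stopped car in-lane closes at exactly the ego vehicle's own speed, just like harmless roadside clutter (signs, barriers, overhead gantries). A common mitigation is therefore to discount returns whose over-ground speed is near zero. The sketch below is a deliberately crude illustration of that trade-off; the function names and threshold are assumptions, not any production tracker's logic.

```python
def over_ground_speed(ego_speed_mps, closing_speed_mps):
    """Radar measures closing (relative) speed directly; subtracting it
    from the ego speed recovers the target's own speed over the ground."""
    return ego_speed_mps - closing_speed_mps

def keep_target(ego_speed_mps, closing_speed_mps, min_abs_speed_mps=1.0):
    """Crude stationary-clutter filter: drop returns whose over-ground
    speed is near zero. A stopped car dead ahead is discarded by the
    same rule that discards a roadside sign -- the core of the problem."""
    return abs(over_ground_speed(ego_speed_mps, closing_speed_mps)) > min_abs_speed_mps

# Ego at 30 m/s: a lead car closing at 5 m/s (moving at ~25 m/s) is kept...
print(keep_target(30.0, 5.0))
# ...but a stopped car ahead closes at the full 30 m/s and is dropped.
print(keep_target(30.0, 30.0))
```

Once the radar track is dropped, detection of the stopped obstacle falls entirely to the cameras (or lidar, where fitted), which is exactly where the paragraph above locates the failure mode.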
Lateral Control:
The challenge of lateral control is even more complex than longitudinal control. Some people suggest there is no major challenge in detecting lanes and then maintaining a centre-line. For most roads where lane markings are bright and fresh, not occluded by traffic, and appear straight, neural networks can in fact do a reasonably good job. However, when you start analysing real world situations, the enormity of the challenge becomes more apparent. Lanes could be faded or absent, or replaced by road reflectors (cat eyes) with older lane lines still apparent. Lane markings may differ across geographies, with varying curvature (from modest bends to sharper curves), and lanes could be closed or narrowed due to road works with traffic-guiding infrastructure like cones or jersey barriers. For camera-based systems, these challenges are compounded in adverse weather like heavy rain or fog, in low-light conditions, or when passing through tunnels or beneath bridges. Detecting a safe driving corridor along a lane all the time is not a simple problem, and to achieve robust lane centring control, High Definition (HD) maps can be very useful if they are accurate and up-to-date (please refer to my previous article on the topic of mapping, ‘When self-driving cars cannot self-drive’ - https://selfdriving.substack.com/p/when-self-driving-cars-cannot-self).
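It is worth separating the two halves of the task: the centring step itself is conceptually simple; everything hard in the paragraph above sits in producing reliable lane-boundary estimates to feed it. A minimal sketch of the final step follows (a proportional steering correction toward the lane centre; the gain and function names are illustrative assumptions, and real systems use far more sophisticated control).

```python
def lane_centre_offset(left_boundary_m, right_boundary_m):
    """Lateral offset of the vehicle from the lane centre, given perceived
    distances to the left and right lane boundaries in metres.
    Positive means the vehicle sits right of centre."""
    return (left_boundary_m - right_boundary_m) / 2.0

def steering_correction(offset_m, gain=0.1):
    """Proportional controller: steer against the offset. Note that this
    step silently trusts its inputs -- faded paint, rain, roadworks, or
    stale markings corrupt the boundary estimates, not this arithmetic."""
    return -gain * offset_m

offset = lane_centre_offset(1.2, 2.0)   # 0.4 m left of centre
print(steering_correction(offset))       # small positive correction (steer right)
```

This is why the argument for HD maps follows naturally: a map supplies an independent estimate of where the corridor should be when the perceived boundaries cannot be trusted.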
Unless a vehicle automation system can robustly detect all objects (static or moving) and keep centred in its driving corridor within its expressed ODD, preferably with access to a high definition map of the roads as a back-up, it is not prudent to consider it a safe driving technology.
Any driving technology that performs satisfactorily for only a large part of its ODD (whether bounded geographically, by time of day, or by road conditions) gives an illusion of competence until it fails. For example, until Tesla released a “cone detector” in Autopilot V10 in late 2019, there was no way for an average customer to know that Autopilot could not detect traffic cones on a highway, until they found themselves crashing into them. Perhaps consumers still do not know which types of traffic cones Tesla’s Autopilot can currently detect, or whether it detects them robustly all the time. Compare cone shapes in the US with those more commonly found in the UK and they look quite different: US cones are taller and cylindrical, whereas UK cones are shorter and pointed (actually conical).
GM’s SuperCruise system comes with a set of roads for which it has an HD map, and the system can only be engaged when a customer happens to be driving on one of the mapped roads. This may help SuperCruise navigate difficult lane-centring scenarios (lateral control), but consumers would still not know if SuperCruise reliably detects all obstacles at all times on those roads (longitudinal control).
The lack of specificity about the known limitations of driving technologies (though no one is generally expected to specify what their products cannot do), exuberant messaging around self-driving technologies, confusion arising from the nomenclature of SAE levels, and the absence of clear technology definitions that are easy for all to understand result in consumers paying the ultimate price. Together, these factors make such driving technologies unsafe in use, even if they are not unsafe by design.
In the interest of consumer safety, regulators need to demand clarity on whether a system is intended to function as a driving technology or as an assistive technology, and then ensure that the difference between the two is defined in a manner that is easily accessible to consumers. The marketing of driving technologies as assistive technologies should also be carefully scrutinised by regulators around the world, keeping in view its impact not only upon consumer safety but also upon the assignment of incident liability. This may not be a simple, one-size-fits-all, top-down solution and would likely require industry-wide and global regulatory consensus, but it is essential to think and speak clearly about vehicle automation products called “Pilots” when they result in death or injury.
Extending the performance envelope of driving technologies:
The second and more substantive pillar of addressing this challenge is to extend the performance envelope of driving technologies, at least to the extent that a reasonable and measurable assurance of safety can be provided within the ODD. This is a genuinely complex challenge that requires provably safe obstacle detection and robust lateral control capabilities. Any new driving technology that can competently drive on any structurally separated roadway (no matter the state of lane markings, road works, etc.), day and night, around the world, in fair weather conditions, without disengagement, would be a big leap over the current state of the art. Such a system would be competently in charge of the driving task in all conditions.
The very purpose of a driving technology is to allow a safe and convenient experience and not a ‘probably-somewhat-safe driving’ experience that is ‘convenient-for-some-of-the-time’ but ‘really-inconvenient’ at other times. If a car manufacturer tried to sell a vehicle automation system with that marketing tag line, I am sure there wouldn’t be many enthusiastic takers.
It is imperative for developers of automated driving technologies, and for car manufacturers who integrate such products into their series production vehicles, to address these challenges head-on before there is a societal backlash on account of system failures, or governmental handcuffs in the form of stricter regulations. Neither outcome would be helpful for the industry. When these systems perform well, the designers are quick to take credit for having enhanced driver safety; but when a system fails, responsibility is parsed out to a variety of confounding factors and then ultimately left at the doorstep of the human occupant of the vehicle.