Reading: Testing Models of Driver Behaviour – Ben Lewis-Evans
Most driver behavior models are criticized for being descriptive rather than predictive, offering few
testable predictions. Fuller addressed this by introducing the Task-Capability Interface (TCI) model
and Task Difficulty Homeostasis (TDH) to explain driver behavior.
Task-Capability Interface (TCI)
• Capability: A driver’s ability, built hierarchically, starting with innate features (e.g., reaction
time), followed by training, and influenced by human factors like fatigue or emotions.
• Task Demands: Environmental factors such as road conditions and vehicle properties. When
demands exceed capability, loss of control occurs.
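A minimal sketch of this capability-versus-demand comparison can make the model concrete. All names
and numbers below are illustrative assumptions, not values from the reading:

    # Minimal sketch of Fuller's Task-Capability Interface: control is lost
    # whenever task demand exceeds the driver's momentary capability.
    def tci_outcome(capability: float, task_demand: float) -> str:
        return "control maintained" if capability >= task_demand else "loss of control"

    # Capability builds hierarchically: innate features plus training,
    # degraded by transient human factors such as fatigue.
    innate, training, fatigue_penalty = 0.5, 0.4, 0.2
    capability = innate + training - fatigue_penalty  # 0.7

    # Task demand reflects the environment: road conditions, vehicle, speed.
    task_demand = 0.8

    print(tci_outcome(capability, task_demand))  # -> loss of control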
Task Difficulty Homeostasis (TDH)
TDH suggests drivers prefer a certain range of task difficulty, adjusting their behavior when
perceived difficulty falls outside this range. It includes both immediate (proximal) and long-term
(distal) influences and introduces a "risk threshold," focusing on a "feeling of risk" rather than
the objective likelihood of a crash.
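A toy feedback loop illustrates the homeostasis idea. The difficulty proxy, the preferred band, and
every constant here are made-up assumptions for illustration only:

    # Toy Task Difficulty Homeostasis loop: the driver changes speed only
    # while perceived difficulty sits outside a preferred band.
    PREFERRED_BAND = (0.3, 0.6)  # hypothetical acceptable difficulty range

    def perceived_difficulty(speed: float, capability: float) -> float:
        # Assumed proxy: difficulty grows with speed relative to capability.
        return min(1.0, speed / (capability * 100.0))

    def adjust_speed(speed: float, capability: float) -> float:
        low, high = PREFERRED_BAND
        difficulty = perceived_difficulty(speed, capability)
        if difficulty > high:   # feels too risky -> slow down
            return speed - 5.0
        if difficulty < low:    # feels too easy -> speed up
            return speed + 5.0
        return speed            # inside the band -> no change

    speed = 90.0
    for _ in range(10):
        speed = adjust_speed(speed, capability=1.2)
    print(speed)  # settles at 70.0, where difficulty is inside the band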
Risk Allostasis Theory (RAT)
Evolving from TDH, RAT emphasizes continuous adjustment to perceived risk, influenced by emotions
and unconscious responses. It highlights dynamic, constant monitoring of risk rather than reacting
to set thresholds.
Commentary on TDH and RAT
Both models emphasize ongoing monitoring of perceived risk and task difficulty but are criticized
for focusing too heavily on speed and vehicle trajectory, which limits their predictive power.
Critics also argue that task difficulty is not continuously monitored.
Risk Monitor Model (RMM)
RMM builds on earlier models and suggests that driver decisions are influenced by unconscious
emotional markers. However, it has been criticized for oversimplifying risk and lacking feedback
loops.
Multiple Comfort Zone Model
This model categorizes driving into strategic, tactical, and operational levels, focusing on
maintaining safety margins (e.g., time to collision). Comfort zones arise when drivers stay within
these margins. Critics note its overlap with RAT and question its risk-monitoring assumptions.
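Time to collision is the simplest of these safety margins to compute: the current gap divided by
the closing speed. A minimal sketch, where the 4-second comfort threshold is a made-up example:

    # Time to collision (TTC): seconds until impact if neither vehicle
    # changes speed. The comfort threshold below is an invented value.
    def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
        if closing_speed_mps <= 0.0:
            return float("inf")  # gap is not closing: no collision course
        return gap_m / closing_speed_mps

    COMFORT_TTC_S = 4.0

    ttc = time_to_collision(gap_m=30.0, closing_speed_mps=10.0)
    print(ttc, ttc >= COMFORT_TTC_S)  # 3.0 False -> outside the comfort zone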
Week 2 – Readings, Automated driving: Safety blind spots
Driver assist technologies have reached the tipping point and are poised to take control of most, if
not all, aspects of the driving task. Proponents of automated driving (AD) are enthusiastic about its
promise to transform mobility and realize impressive societal benefits. This paper is an attempt to
carefully examine the potential of AD to realize safety benefits, to challenge widely-held assumptions
and to delve more deeply into the barriers that have hitherto been largely overlooked. As automated vehicle
(AV) technologies advance and emerge within a ubiquitous cyber-physical world, they raise additional
issues that have not yet been adequately defined, let alone researched. Issues around automation,
sociotechnical complexity and systems resilience are well known in the context of aviation and space.
There are important lessons that could be drawn from these applications to help inform the
development of automated driving. This paper argues that for the foreseeable future, regardless of
the level of automation, a driver will continue to have a role. It seems clear that the benefits of
automated driving, safety and otherwise, will accrue only if these technologies are designed in
accordance with sound cybernetics principles, promote effective human-systems integration and gain
the trust of operators and the public.
1. Introduction
Automated driving (AD) involves vehicles performing some or all dynamic driving tasks without
human input. The technology has advanced rapidly, and its potential societal benefits—like reducing
accidents—are promising. However, the authors argue that several safety challenges and "blind
spots" need to be addressed.
2. Driver-vehicle Interaction & Automation Levels
The Society of Automotive Engineers (SAE) defines six levels of vehicle automation (Levels 0–5),
ranging from no automation to full automation. The paper critiques these classifications, arguing that they do not fully
consider the human driver’s role and the complexities of human-system interaction.
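For reference, the six SAE J3016 levels can be written out as a Python enum. The enum is merely a
convenient representation for these notes, not something the paper itself provides:

    from enum import IntEnum

    class SAELevel(IntEnum):
        """The six SAE J3016 driving-automation levels."""
        NO_AUTOMATION = 0           # human performs the whole driving task
        DRIVER_ASSISTANCE = 1       # one assist: steering OR speed control
        PARTIAL_AUTOMATION = 2      # combined assists; driver always monitors
        CONDITIONAL_AUTOMATION = 3  # system drives; driver takes over on request
        HIGH_AUTOMATION = 4         # no takeover needed within a limited domain
        FULL_AUTOMATION = 5         # system drives under all conditions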
3. Safety Concerns & Human Error
Historically, human error has been identified as a major cause of traffic accidents—often cited as the
cause of up to 90% of crashes. Proponents of AD believe that by removing human drivers, accidents
will decrease dramatically. However, this assumption ignores two factors:
1. Non-driver-related causes: Accidents aren’t always due to driver error. Factors like road
design, vehicle issues, or poor weather also contribute to accidents.
2. Technology errors: Automated systems themselves can fail, as they rely on complex software
and sensors that are not flawless. Examples of accidents involving self-driving cars, such as
the Tesla Autopilot crash, demonstrate that automation isn’t foolproof.
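A back-of-the-envelope calculation shows why these two factors matter. Every number below is
invented purely for illustration:

    # Why "human error causes 90% of crashes" does not imply a 90% reduction:
    # non-driver causes remain, and automation adds failures of its own.
    baseline_crashes = 100.0    # crashes per unit of exposure today
    human_error_share = 0.90    # share attributed to driver error
    prevention_rate = 0.80      # fraction of those crashes automation avoids
    automation_induced = 5.0    # new crashes from software/sensor failures

    remaining = (
        baseline_crashes * (1 - human_error_share)  # road, weather, vehicle
        + baseline_crashes * human_error_share * (1 - prevention_rate)
        + automation_induced
    )
    print(remaining)  # 33.0 -> a 67% reduction, not 90%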
4. Ironies of Automation
The "ironies of automation" refer to situations where automation, instead of reducing the driver’s
workload, can introduce new challenges:
• Task allocation: Automated systems often handle simpler tasks, leaving humans with more
difficult or unexpected tasks, which can increase cognitive load.
• Deskilling: Over-reliance on automation can lead to a reduction in driver skills, making it
harder for them to take over when needed.
• Monitoring: Drivers must monitor complex systems, but without full understanding of how
the system works, they may not react appropriately in emergencies.
5. Trust in Automation
For AD to succeed, drivers and the public must trust these systems, and that trust must be well calibrated:
• Overtrust: Drivers may place too much faith in the system, disengage from driving, and fail to
monitor the road.
• Undertrust: Conversely, if drivers don’t trust the system, they may intervene too often or
avoid using it altogether.
Building trust requires that systems be reliable and transparent and that drivers understand the
limitations of AD technologies.