|This is an abbreviated, edited excerpt from Roadway Human Factors: From Science To Application, 2nd edition, 2022 (hopefully). To aid the casual reader, most references are not included. The original text contains the fully referenced version.|
Cell phone distraction has become a research cottage industry. A Google Scholar search, for example, of driver+distraction+phone returns 65,900 hits, which is up 20% from only three years ago. What has been learned from all this research? The answer is: surprisingly little.
This might come as a shock to most people who read the popular press and safety advocate blogs, not to mention much of the experimental and epidemiological literature, all of which are full of dire predictions about the mayhem that would/could/should be caused by drivers talking on their cell phones. The evidence for believing that conversing on a cell phone will increase accident risk falls into three general categories:
- Common sense. It has long been known that attention is a limited mental resource. Attention paid to one task leaves less for others. This is so intuitively obvious that it hardly requires scientific evidence. Common sense then suggests that attention paid to conversing on a cell phone should leave less for the driving task and cause increased collision risk;
- Experimental psychology evidence. Thousands of studies have compared the behavior of “drivers” who are using a cell phone (or performing some other substitute task, such as counting backward) with drivers who are not engaged in a simultaneous cognitive task. The results generally show a performance decrement across a broad spectrum of driving tasks; and
- Data mining evidence. A large number of studies have mined existing data (accident statistics, hospital admissions or, more recently, naturalistic data) to determine whether cell phones increase the “odds ratio,” the relative likelihood of having an accident or a “near miss.” They usually employ the same type of case-control method that is used in epidemiological studies. I prefer the term “data mining,” which better reflects the methodology: the use of archival behavioral data that the authors did not themselves collect.
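For readers unfamiliar with the case-control arithmetic behind these studies, the odds ratio can be sketched in a few lines. All counts below are hypothetical illustrations of the method, not data from any study:

```python
# Sketch of the case-control odds ratio used in data mining studies.
# All counts here are hypothetical illustrations, not data from any study.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from a 2x2 case-control table: (a*d) / (b*c)."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Suppose 24 of 300 crash-involved drivers were on the phone, versus
# 8 of 300 matched control drivers.
print(round(odds_ratio(24, 276, 8, 292), 2))  # 3.17
```

An odds ratio of 1 means the exposure is unrelated to crash involvement; values below 1, as some naturalistic studies report for phone conversation, indicate lower risk among the exposed.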
Researchers continue to churn out these experimental and data mining studies with dire predictions despite a very simple and basic observation: in the last 20+ years, the use of cell phones has skyrocketed, yet road accidents have fallen (at least up until the last couple of years). The explanation for this disconnect is a cautionary tale of the problems inherent in using research to identify and solve real-world problems and allowing the “White Hat” zero harm lobby to determine public policy.
In this article I explain the reasons why the research has so badly failed to predict cell phone safety risk. The discussion centers on what is known about the most researched and presumably most common type of cell phone distraction: having a conversation. This is often termed “cognitive distraction” as opposed to two other possible sources of distraction. “Visual distraction” is looking away from the road. “Physical distraction” is removing the hand(s) from the steering wheel to reach for or to hold some object, i.e., holding the phone, reaching for the phone, dialing, etc. In sum, the term “cell phone distraction” without a qualifier is meaningless because it conflates different tasks which, as explained below, have very different effects on road safety. To foreshadow the conclusion, there is little compelling evidence that conversing on a cell phone while driving is a risky behavior. On the other hand, there is much evidence to suggest that visual distraction is the only distraction type that is highly significant.
In the Beginning…
Almost from the invention of the automobile, the intuitive notion of limited attention raised concerns about the use of in-car technology. As early as 1906, highway authorities denounced a new technology that would increase mishap rates by hypnotizing and distracting drivers from the road. Despite the protests, the new technology, called “windshield wipers,” became a standard automobile feature in 1913. A similar concern over driver distraction arose in 1923 when the Springfield Sedan introduced car radios. Nicholas Trott, for example, wrote in the 1930 Farmer’s Almanac that some authorities believed that radios would “distract the driver and disturb the peace.” His solution was to ban radio use except when the vehicle was parked. (Clearly, he was a man ahead of his time.) Again, technology triumphed over caution and radios became standard within a few years. Except for some mild concern about CB radios in the ’60s, the issue of driver distraction from in-car technology lay dormant for 40 years until the dramatic spread of cell phone-wielding drivers reawakened fears that new technology would cause mayhem on the roadways.
Data Mining Evidence
The first major piece of evidence against cell phone use was Violanti (1997; 1998), which reported that cell phone use increased collision risk by a factor of nine, i.e., an odds ratio of 9:1, or 9 for short. This research had a limited sample size as well as other obvious methodological limitations, so it is not often cited these days. The real Patient Zero for the condemnation of cell phones is a statistical data mining study (Redelmeier & Tibshirani, 1997) that used data from a collision reporting center. It found an odds ratio of 4, lower than Violanti but significant because it conveniently allowed demonization of cell phones by comparison to drinking and driving, which produces the same odds ratio. If it is as dangerous as the ultimate bogeyman for road safety, then cell phones must be a menace indeed. Interestingly, less dramatic studies published about this time were and are seldom cited. Laberge-Nadeau, Maag, Bellavance, Lapierre, Desjardins, Messier, & Saidi (2003) found an odds ratio of only 1.38 or less, while Min & Redelmeier (1998), using a different methodology, found no effect of cell phones.
The methodological problems with Redelmeier & Tibshirani (1997) are extensive. The list is too long to explain in detail (see Green, 2022), so I’ll just note four of the obvious issues. First, there was no conclusive evidence that the driver was using the phone at the time of the collision. Second, and more significantly, the sample was biased. The study only examined drivers who had collisions, possibly a population of the worst drivers. Further, the reporting center dealt only with minor, property-only collisions. There is other evidence (see below) suggesting that if cell phone conversations have any effect, it is likely to be only on minor fender-benders. Third, the collision reports contained nothing about the conditions at the time of the collision: no data on speed, traffic conditions, etc. In short, the study was weakly controlled. Lastly, a statistical re-analysis of the Redelmeier & Tibshirani (1997) data concluded that they did not find an increased risk from cell phone conversation while driving.
Newer data mining studies using the more detailed naturalistic data do not support the Redelmeier & Tibshirani (1997) odds ratio of 4. The common finding is that drivers talking on a cell phone had the same (Klauer, Dingus, Neale, Sudweeks, & Ramsey, 2006) or even lower collision and near-miss rates than non-cell phone users (Olson, Hanowski, Hickman, & Bocanegra, 2009; Hickman, Hanowski, & Bocanegra, 2010; Young & Schreiner, 2009). That’s right – talking on a cell phone resulted in safer driving. In the case of handsfree sets, it was much safer driving.
However, other recent studies found an odds ratio somewhat above 1. Owens, Dingus, Guo, Fang, Perez, & McClafferty (2018) found an odds ratio of 1.83 combined across all classes of cell phone use. When examining only talking on a handheld cell phone, the odds ratio was only 1.16, with an even lower 1.05 for moderate and severe collisions. Recall that Redelmeier & Tibshirani (1997) examined only minor collisions. Drivers using handsfree sets had so few collisions/near misses that no odds ratio could be computed, suggesting a small manual distraction effect. However, studies that examined the effects of laws banning handheld phones find that they have no effect on crash rates, while some experimental evidence finds no difference. The issue is not yet settled.
Another recent naturalistic study (Dingus, Guo, Lee, Antin, Perez, Buchanan-King, & Hankey, 2016) did find a moderate odds ratio of 2.2 for talking on a handheld cell phone. There are no data for talking on handsfree phones. Like Stutts, Reinfurt, Staplin, & Rodgman (2001) before them, however, they found that other generally ignored distractions caused about the same degree of risk: using another in-vehicle device (4.6), using the climate control (2.3), manipulating the radio (1.9), eating (1.8), personal hygiene (1.4), and passenger interaction (1.4). Visual distractions that caused extended looking away from the road for any reason produced much higher odds ratios, e.g., 6.2 for texting.
The reasons for the discrepancies between early and later research, and even between different newer naturalistic studies, reveal the limitations of data mining. Data miners must decide what variables to include and what to ignore. They are creating a model of the real world, which is not the real world itself. The accident reports provide insufficient information to build a reliable model. Still, naturalistic studies have their own problems. Different studies create different models. They must select operational definitions for terms such as “distraction,” “attention,” PRT, and “critical event.” Different studies also use different operational definitions, sometimes even within the same article. This is most obvious in the rather arbitrary category called “critical event.” The base data for early studies were actual accident occurrences. Since actual crashes are so rare, however, the naturalistic literature has created the “critical event” category, which combines near-misses with collisions. In most cases, the number of near-misses is many multiples of the number of collisions, so the research’s conclusions depend heavily on 1) the definition of near-miss and 2) the assumption that collisions and near-misses are interchangeable. There is no reason to make this assumption. Close examination of operational definitions is critical in science. They allow researchers a means of altering the statistical significance of their results. Two researchers with different operational definitions looking at the same data could come to opposite conclusions.
There are even deeper issues in the entire enterprise of performing statistical data mining to draw conclusions about road safety. Some relate to the general backward-looking nature of case-control studies. The entire concept of odds ratio as a risk measure can be misleading because it expresses relative risk and not absolute risk. To say that talking on a cell phone creates a critical event odds ratio of two says nothing about how many more crashes it would cause. Does it add one crash for every thousand miles driven? Ten thousand? Million? How many extra crashes is it likely to cause? By how many micromorts1 does it increase the risk? Since actual crashes are so rare, the number is likely very small, especially given the evidence that talking on a cell phone has little or no negative safety effect at all.
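The gap between relative and absolute risk can be made concrete with a little arithmetic. Every number below is an assumption chosen purely for illustration, not an empirical estimate:

```python
# Hypothetical arithmetic: how many extra crashes would a doubling of
# relative risk actually produce? All inputs are illustrative assumptions.

baseline_rate = 2.0e-6   # assumed crashes per mile driven (hypothetical)
relative_risk = 2.0      # "odds ratio of two", treated as a rate multiplier
phone_fraction = 0.05    # assumed fraction of miles driven while conversing

extra_per_million_miles = baseline_rate * (relative_risk - 1) * phone_fraction * 1_000_000
print(round(extra_per_million_miles, 3))  # about 0.1 extra crashes per million miles
```

Even a doubled relative rate translates into a tiny absolute increment when the baseline event is this rare, which is exactly the point that bare odds ratios obscure.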
Several bias factors are also likely at work. The drivers must volunteer to have their vehicles instrumented. It seems unlikely that those who drive aggressively, take risks, drink and drive, etc., would want their behavior closely monitored. Naturalistic studies likely reflect a biased sample of safer and more conservative drivers. Other problems lie in the many biases that affect the scientific literature. “Publication bias” refers to the fact that negative results are less likely to be published than positive ones. This means that data mining studies failing to find odds ratios greater than 1 and experimental studies finding no performance decrement with cell phone use are much less likely to see the light of day. “White Hat” bias means skewing the data for a righteous end, such as promoting safety. And of course, all researchers are biased by the need to churn out positive, publishable results in order to obtain grants, tenure and fame. Negative results are career killers. No researcher, safety authority, grant agency or news medium benefits in any way by concluding that talking on a cell phone is safe. Creating hysteria over a public safety hazard is good for all interested parties. Such considerations must be kept in mind when evaluating any scientific data.
Objective data do not exist because a human must decide what to measure, how to measure it and how to interpret it. These decisions originate to some extent in previously held beliefs, theories, values and cultural identity.
Lastly, even if data mining were to demonstrate a correlation between phone conversations and collision risk, it would not prove causation. For example, there is evidence that drivers who use a cell phone while driving are a more aggressive population. Given the negative publicity, it also is reasonable to suppose that people who drive and converse are greater risk takers. (The same issue holds true for intoxicated driving.) Ironically, Violanti (1997), who was the first to publish a correlation between cell phone use and risk, was sufficiently circumspect to warn:
This analysis implies a statistical, but not necessarily a causal, relationship. A multitude of factors are involved in any traffic collision, and the exact cause of an accident and its severity level is difficult to disentangle. (Violanti, 1997).
Experimental Evidence
Experimental research studies typically find that cognitive distraction causes performance losses, such as impaired detection and longer perception-response time, which the authors claim will create mayhem on the roadway. Yet the data mining research provides no compelling confirmation, and the mayhem has not occurred. Controlled research studies have not generalized to the real world or predicted real-world events. What is the problem? There is a broad array of reasons, mostly due to the limitations of controlled research.
Statistical vs. practical significance. There is a distinction between statistical significance and practical significance. Not all effects are meaningful because statistical significance does not necessarily imply practical significance. Strayer, Drews, and Crouch (2003) called the drunk drivers more “aggressive” because they followed more closely, 26.0 m compared to 27.4 m for control drivers. Statistically, the result was significant at the 0.05 level. The difference was only five percent. It hardly seems likely that such a small effect would have much real-world practical importance. It certainly does not warrant labeling drivers as “aggressive.” This is a classic example of taking a questionable statistic, giving it a verbal label, and using the label to overstate and to mislead. Further, a 0.05 level of significance is a borderline result. (Perhaps more importantly, using p-values to determine what is and is not a real effect is highly problematic.)
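The statistical-versus-practical point can be illustrated numerically: with a fixed, practically trivial difference, the t statistic grows with the square root of the sample size, so significance can be bought with enough subjects. The 1.4 m effect is the headway difference quoted above; the standard deviation is an assumed, hypothetical value:

```python
import math

effect = 1.4   # m difference in mean headway (26.0 vs. 27.4, from the text)
sd = 8.0       # assumed within-group standard deviation (hypothetical)

# Two-sample t statistic for equal group sizes n: t = effect / (sd * sqrt(2/n)).
# The effect stays constant; only n grows.
for n in (10, 50, 200):
    t = effect / (sd * math.sqrt(2 / n))
    print(n, round(t, 2))
```

Under these assumptions, the t statistic for the same 1.4 m difference grows from about 0.4 at n = 10 to about 1.75 at n = 200, approaching the conventional cutoff without the effect becoming any more meaningful on the road.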
Demand characteristics. Experimental demand characteristics can create artificial results. The subjects cannot decide to drive slowly, to pull off to the side of the road to make their phone calls, or to postpone their phone calls. The importance of self-regulation was demonstrated in a study that compared drivers performing a secondary task under two conditions. In the first, they acted at a pace dictated by the system. Although there was some risk compensation through reduced speed, drivers exhibited large performance decrements. In the second condition, drivers self-paced their behavior and showed little performance loss.
Task cognitive load. Not all cognitive loads are the same. Cell phone conversations of high “intensity” produce longer PRT and presumably more impairment. Research studies often use intense pseudo-conversations, such as performing complex mental mathematics, and pseudo-tasks, such as counting backward by non-prime numbers, that perhaps create an artificially high cognitive load. There is no obvious way to compare typical research intensity levels with typical real driver conversation intensities.
Automatic behavior. Humans reduce focal attention’s limited capacity by learning to automate behavior. Much of normal driving is controlled by automatic processes that run under minimal attentional supervision. As behavior becomes more automated, there is less need for the attention that might be diverted by cell phone conversations. The consumption of attention by a cell phone conversation does not much disrupt automated behavior. For example, cognitive load can slow responses on tasks that require cognitive (i.e., attentional) control but not slow a driver’s response to a braking lead vehicle. Another study supports this conclusion by finding no difference in brake light detection in drivers with light or heavy cognitive load. In contrast, most controlled research studies put drivers in novel situations for short periods and often have them performing novel tasks (counting backward), so the subjects don’t have time to adapt and to automate much of their behavior. Laboratory studies thus greatly overestimate real-world interference from cell phone conversations by artificially loading attention. This creates a bigger, more publishable effect.
Baseline behavior when not using the cell phone. Fisher (2015) suggested that experimental (big effect) and naturalistic data (little or no effect) are at odds because they are measuring different attentional baselines. Research subjects are on abnormally high alert because they are consciously aware of being tested in an experiment. Research conversations divert attention from a very high level of vigilance. In contrast, naturalistic studies use real drivers, who are likely less attentive anyway. A study found that drivers on a daily commute reported mind wandering in 63 percent of their responses. Other data showed that 52 percent of patients brought to emergency rooms after collisions admitted to mind wandering before the crash. Both studies suggest that controlled, focal attention to driving is not necessarily normal behavior, at least in familiar driving conditions. Compared to this naturally low level of attention, cell phone use may increase awareness. More specifically, cell phone conversations may keep drivers alert during long, monotonous travel when they would otherwise drift into low arousal and, at nighttime, into circadian rhythm troughs. Lastly, this criticism may apply to naturalistic studies, too: drivers of instrumented vehicles may themselves be more alert than normal because their behavior is being monitored.
Drivers may also have other “distractions” when not on the phone. For example, they may be conversing with passengers. A laboratory study found that the presence of passengers produced many “look but fail to see” (LBFS) errors, where drivers failed to detect road users such as pedestrians and motorcycles. In the worst case, female drivers with female passengers detected only 17 percent of motorcycles. A supporting study of hospital admissions claims that the presence of passengers correlates highly with increased collision risk. Having two passengers “is associated” with a doubling of risk. Is it time to ban passengers from cars?
The locus of attention. Cell phone conversations may also improve driver safety by reducing eye movements. Normal drivers spend some time glancing sideways at roadside objects. In contrast, drivers talking on cell phones concentrate gaze more intently on the center of the road ahead and exhibit less lateral variation in lane position. That is, cell phones combat visual distraction. Cell phone users are ironically doing more of what drivers are supposed to do – looking where they are going. Of course, there is much more to attention than looking in the right direction (Green, 2022).
Risk Compensation. Perhaps the biggest factor missing from the experimental research is risk compensation. Most views of driver behavior treat it as a skill-based task, and this is what experimental studies typically measure. However, driving is not simply based on skill, as the higher number of crashes and traffic tickets among professional race car drivers attests. Instead, driving is a self-regulated behavior that changes with task demands, so drivers’ pacing may be even more important than their skill. Drivers moderate their driving to compensate for high demands and risk.
The amount of compensation may depend on testing conditions. Compensation in a simulator where there is no real risk may underestimate compensation in the real-world. Still, research has demonstrated many types of compensation.
· Less phone use. The simplest compensation strategy is to avoid using the cell phone. As might be expected, older drivers, who have reduced attentional capacity, are most prone to performance decrements from cell phones. However, according to self-report data, they are also the demographic most likely to avoid using a phone;
· Use in low demand situations. Drivers prefer making phone calls in low demand situations, such as when stopped at intersections;
· Reduce speed. Drivers conversing on cell phones also travel at reduced speed, although this may not be due to risk compensation. Distracted pedestrians also walk at a slower pace; ironically, this increases rather than decreases their risk exposure;
· Leave longer headway. Many studies contradict the claim that cell phone users are more “aggressive.” On the contrary, many found that they allow greater headway. The slower PRT of cell phone users is often cast as their major performance decrement, but the slower speed and longer headway would provide compensatory offsets. Moreover, the PRT-headway tradeoff presents a possible chicken-and-egg quandary. Do cell phone drivers leave more headway because they know that they need more time to respond, or do drivers respond more slowly when they have more headway? The second scenario is supported by research showing that even non-distracted drivers respond more slowly when the headway is greater. Drivers are simply slower to respond in less urgent situations;
· Restrict attention to relevant objects. Other evidence suggests that drivers also compensate by using strategies that conserve attention. While drivers on cell phones are allocating attention to a non-driving task, they conserve attention by first ceasing to attend to irrelevant objects, exhibiting no attentional loss to roadway objects and hazards. They also cease relatively minor tasks such as checking speedometers and mirrors;
· Withdraw attention from the secondary task. They also conserve attention by withdrawing it from the distracting task. Driver ability to relate and remember stories declined while conducting a cell phone conversation, which suggests that they reduced attention to the call;
· Scale compensation to secondary task demands. When the secondary task demands more attention, compensation is greater. For example, drivers who are texting have greater headway compensation than those who are only conversing. Drivers similarly compensate more when using a handheld than a handsfree phone; and
· Scale compensation to driving task demands. When drivers encounter a mentally demanding road situation, they decrease attention to the secondary task. Drivers compensate more in complex urban roads than on simpler rural ones.
What really causes driver distraction?
The overall evidence that the cognitive “distraction” of talking on cell phones constitutes a general driving hazard is not compelling. This does not mean that cell phone conversations never contribute to collisions. Impairment is a joint function of dose and task. From what is known about attention, cell phone conversations should most likely contribute to collisions when they are most intense and when the situation is most complex and demanding. However, these are the situations where drivers exhibit the greatest compensatory behavior. Cell phone conversing drivers are similar to older drivers: there may be extreme cases where they create “unsystematic” risk, but overall they do not constitute a systematic risk.
Some believe that driver risk compensation is not “adequate.” This presumably means that risk homeostasis does not occur: the cell phone risk is not brought back down to zero or to some other target level. The failure to compensate perfectly is unsurprising because humans are satisficers, not optimizers. It might be better to say that drivers perform risk “hedging” rather than risk “compensation.” They hedge the risk to a “good enough” level for the circumstances, even if it is not always zero. Even if cell phones increase risk slightly, they have benefits, such as allowing drivers to perform tasks while driving, providing entertainment, reducing boredom, and supplying information. The reasonable question has never been whether talking on a cell phone creates zero harm. Every human activity creates some risk. The proper question is whether talking and driving creates an acceptable risk, given its benefits and the cost of preventing it. As in most safety initiatives, such as Vision Zero, such benefits (and costs) are completely ignored in the name of achieving zero harm. Maybe they should remember the axiom:
If your world is just about safety, then your world is too small. (Long, 2014)
Despite what has been said, driver distraction is a real phenomenon and a serious concern. The conclusion that cell phone conversations are relatively benign does not extend to all potential distractions. Visual distraction produces greatly increased collision risk. The existing research strongly suggests that looking away from the road for an extended period is the main driver distraction risk. A behavior such as texting/emailing on a smartphone is undoubtedly a public safety concern. It doesn’t just draw the eyes away from the road; it 1) causes intense focus of attention on a small target for a possibly extended time and 2) changes accommodation and convergence to short distances. One naturalistic study found that texting produced an odds ratio of 163(!) for creating a safety-critical event, compared to only 0.089 for talking on a hand-held cell phone and 0.65 on a handsfree cell phone (Olson, Hanowski, Hickman, & Bocanegra, 2009). (Another study found a texting odds ratio of only 2.1. Welcome to the world of data mining research.) Even looking at the phone to dial had an odds ratio of only 3.5. It is hard to imagine a much more dangerous activity during driving than texting and emailing.
The bottom line on all this distracted driver research is that there is at best weak evidence that simply talking on a cell phone while driving is a particularly risky real-world behavior. Looking away from the road for any reason, be it dialing, texting, viewing the GPS navigation screen, adjusting the temperature, or opening a sandwich or water bottle likely poses a higher risk. One study suggests that tuning a radio increases crash risk by a factor of three to five. Perhaps Nicholas Trott was right all along.
The story of talking on a cell phone as risky behavior is a cautionary tale, especially for those “White Hats” who wish to save society from itself by adding rules and regulations that restrict behavior to avoid absolutely all harm. There is little compelling evidence that talking on a cell phone, especially with a handsfree set, increases collision risk. The “common sense” notion that cell phone conversations are risky because they divert critical attention from the driving task created a strong “confirmation bias” that has failed in the face of the real-world evidence. The sharp negative correlation between accident rate and cell phone use argues heavily against the common sense belief. Of course, this is just a correlation, and there may be some strong countervailing factor that has lowered the collision rate in spite of the cell phone menace. (Fatal collisions related to alcohol are way down.) However, this argument receives no support from the data mining literature. Although early studies using accident data suggested that the risk is high, more recent and better studies employing detailed naturalistic data find little or no effect, especially on absolute risk. Close examination of the experimental evidence shows that it has little ecological validity since it omits many factors operating in the real world. If the White Hats were really serious, they would be calling for bans on car radios, passengers, in-car eating, etc. Moreover, studies typically find that while distraction is indeed a major crash cause, the large majority of distractions lie outside the vehicle. If the White Hats believed in zero harm, they would also call for bans on advertising signs, roadside flowers, and skimpy clothing.
Conversely, visual distraction is a real and dangerous risk. For example, a study that directly compared cognitive and visual distraction found that only visual distraction produced a significant performance decrement. Amazingly, while cell phone condemnation continues, vehicles increasingly come packaged with new visual distractors in the form of map displays and complex infotainment systems that are certain to consume some visual attention while driving.
However, visual distraction is not so easy to define. There is some dispute in the literature about when and how long the driver must look away from the road before he can be said to be distracted. After all, driving does not require full attention in most circumstances. The term distraction only applies when the competing behavior intrudes into the driving task. When exactly is that? It depends on the context.
So what is the sum total that the world has definitively learned about distraction from the 65,900 Google hits? If you don’t look down the road for a bit, you won’t see what is there and might have an accident. Who would have guessed!
1A micromort is a measure of risk equal to a one-in-a-million chance of death. For example, every 250 miles driven adds about one micromort. Climbing Mt. Everest is worth about 38,000 micromorts.