Dark Reading is part of the Informa Tech Division of Informa PLC


IoT/Embedded Security

Joe Stanganelli
News Analysis-Security Now

My Cybersecurity Predictions for 2018, Part 3: Protecting Killer Cars

Death by autonomous auto is coming unless the industry gets security very right. The question is really whether it's already too late.

Death at the cyber-hands of a computer is coming in 2018. But more on that later.

I began this series of cybersecurity predictions for 2018 for Security Now because there aren't enough good "new year" cybersecurity predictions. There are some decent ones, to be sure -- but they are far, far outnumbered by the bad ones. I have found the vast majority of next-year InfoSec predictions to be too broad, too bet-hedging, and/or too grounded in obvious trends.

So in my own cybersecurity predictions for next year, I have tried to go the opposite route -- offering specifics on what I consider to be merely slight likelihoods.

Previously, in Part 1 of this prediction series, I predicted that the full force of the gradually building wrath of the FTC will be visited upon an IoT device maker next year. (See: My Cybersecurity Predictions for 2018, Part 1: Following Trends & the FTC.) Then, in Part 2, I predicted that -- for all of the talk and fear about GDPR -- relatively little will wind up happening next year after GDPR comes into effect. (See: My Cybersecurity Predictions for 2018, Part 2: GDPR Hype Is Hype.)

Now, in Part 3 of my 2018 cybersecurity prediction series, I turn away from regulations and regulators -- and instead issue a forecast of a matter of life and death.

2018 Prediction No. 3: An operator or passenger of a traditional, non-autonomous vehicle will be killed in an accident involving a self-driving car. The self-driving car will be deemed by the insurance companies to be "not at fault."

I really hope this prediction does not come true -- in 2018 or ever. But I have a bad feeling about self-driving cars and society's approach to them.

Autonomous-vehicle technology is far from perfect. At an AI-technology event this past spring in Boston, the MassIntelligence Conference, MIT computer science professor Sam Madden spoke about how machine-learning models misinterpret images when small distortions are applied to individual pixels -- referring to research demonstrating that machine learning can be fooled into thinking, say, that pictures of a dog are pictures of an ostrich. More relevantly, Madden told the audience that self-driving cars are similarly fooled. For instance, he stated, various hand gestures made by people on the street can fool a self-driving car into thinking that a squirrel ran in front of it.
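The pixel-distortion trick Madden describes is the well-known adversarial-example result: nudging each pixel by a small, bounded amount in the direction that most hurts the model's score can flip its label. As a rough illustration only -- a toy linear "dog vs. ostrich" classifier over fake pixels, not a real vision network, and with made-up numbers -- a fast-gradient-sign-style perturbation might look like this:

```python
import numpy as np

# Toy illustration of an adversarial example, NOT a real vision model:
# a linear "classifier" over 64 fake pixels. Scores > 0 are labeled
# "dog"; scores <= 0 are labeled "ostrich".
rng = np.random.default_rng(0)
w = rng.normal(size=64)              # classifier weights, one per pixel
x = 0.1 * w / np.linalg.norm(w)      # an input the model labels "dog"

def classify(img):
    return "dog" if w @ img > 0 else "ostrich"

# Fast-gradient-sign-style attack: move every pixel by at most eps in
# the direction that most decreases the "dog" score. The gradient of
# (w @ x) with respect to x is just w, so the step is -eps * sign(w).
eps = 0.05
x_adv = x - eps * np.sign(w)

print(classify(x))      # "dog"
print(classify(x_adv))  # "ostrich" -- the bounded nudge flips the label
```

Real attacks on deep networks work the same way in spirit -- each pixel moves only slightly, yet the prediction changes completely -- which is why a car's vision system can be fooled without anything in the scene looking obviously wrong to a human.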

Something similar appears to have happened in the case of Joshua Brown, the apparently carefree operator who last year became the first fatality involving a self-driving Tesla after -- the way the automaker puts it -- "[n]either autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied." Moreover, Brown was not the only operator of an autonomous Tesla to be killed after a problem with the self-driving technology.

In both incidents, the operator was loudly alleged to be more at fault than the technology. Even in non-fatal accidents involving self-driving cars, the other -- human -- parties have routinely been deemed at fault for insurance-company purposes. But insurance companies don't necessarily operate in the real world. Yes, it is technically lawful to power through a yellow light at 38mph at a dangerous 40mph intersection with imperfect visibility without paying too close attention -- but most human drivers know better. Yes, it is legally mandated to immediately stop at a crosswalk when a pedestrian is standing there patiently waiting to cross -- but from a practical perspective, most human drivers who are going close to the speed limit on a fast and busy thoroughfare know better than to slam on their brakes in traffic -- risking injury or death to themselves and others behind them. (When it comes to pedestrians near crosswalks, human drivers probably also don't detect quite so many false positives in identifying pedestrians waiting to cross.)

Nevertheless, technophiles are crying full speed ahead with self-driving technology. They espouse the message that "we need to be okay with self-driving cars that crash," arguing that technology cannot be expected to be perfect because humans aren't perfect. This, however, misses the point -- that a world full of self-driving cars is a world lacking in individual agency.

Worse, "smart" cars -- even the non-autonomous ones -- are already rife with vulnerabilities that could cause a black-hat attacker to wet him- or herself with glee. (See Law Comes to the Self-Driving Wild West, Part 2 and Autonomous Cars Must Be Secure to Be Safe.) Time and again, security researchers have exposed grossly dangerous vulnerabilities in the modern connected car -- and, time and again, the industry has pooh-poohed such findings. Self-driving cars are an even bigger target -- particularly because they can be so demonstrably fooled via their machine-learning avenues.

So-called progress, however, does not tend to slow. Death and destruction are coming the way of the self-driving car -- and, inevitably, the way of those in the way of the self-driving car.

Ask not for whom the autonomous car honks; it honks for thee.


Joe Stanganelli, principal of Beacon Hill Law, is a Boston-based attorney, corporate-communications and data-privacy consultant, writer, and speaker. Follow him on Twitter at @JoeStanganelli.
