Dark Reading is part of the Informa Tech Division of Informa PLC


IoT/Embedded Security

12/28/2017
10:00 PM
Joe Stanganelli
News Analysis-Security Now

My Cybersecurity Predictions for 2018, Part 3: Protecting Killer Cars

Death by autonomous auto is coming unless the industry gets security very right. The question is really whether it's already too late.

Death at the cyber-hands of a computer is coming in 2018. But more on that later.

I began this series of cybersecurity predictions for 2018 for Security Now because there aren't enough good "new year" cybersecurity predictions. There are some decent ones, to be sure -- but they are far, far outnumbered by the bad ones. I have found the vast majority of next-year InfoSec predictions to be too broad, too bet-hedging, and/or too grounded in obvious trends.

So in my own cybersecurity predictions for next year, I have tried to go the opposite route -- offering specific predictions even on outcomes I consider only slightly likely.

Previously, in Part 1 of this prediction series, I predicted that the full force of the gradually building wrath of the FTC will be visited upon an IoT device maker next year. (See: My Cybersecurity Predictions for 2018, Part 1: Following Trends & the FTC.) Then, in Part 2, I predicted that -- for all of the talk and fear about GDPR -- relatively little will wind up happening next year after GDPR comes into effect. (See: My Cybersecurity Predictions for 2018, Part 2: GDPR Hype Is Hype.)

Now, in Part 3 of my 2018 cybersecurity prediction series, I turn away from regulations and regulators -- and instead issue a forecast of a matter of life and death.

2018 Prediction No. 3: An operator or passenger of a traditional, non-autonomous vehicle will be killed in an accident involving a self-driving car. The self-driving car will be deemed by the insurance companies to be "not at fault."

I really hope this prediction does not come true -- in 2018 or ever. But I have a bad feeling about self-driving cars and society's approach to them.

Autonomous-vehicle technology is far from perfect. At an AI-technology event this past spring in Boston, the MassIntelligence Conference, MIT computer science professor Sam Madden spoke about how machine-vision systems can be fooled by small light distortions applied to individual pixels -- referring to research demonstrating that machine learning can be tricked into thinking, say, that pictures of a dog are pictures of an ostrich. More relevantly, Madden told the audience that self-driving cars are similarly fooled. For instance, he stated, various hand gestures made by people on the street can fool a self-driving car into thinking that a squirrel ran in front of it.
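The dog-to-ostrich trick Madden referenced rests on adversarial perturbations: tiny, targeted changes to pixel values that flip a model's prediction while the input looks essentially unchanged to a human. A minimal sketch of the idea, using a toy linear classifier rather than any real vision system (all names and values here are illustrative assumptions, not Madden's demo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": 16 pixel values, scored by a linear model (a stand-in
# for a trained neural network's decision function).
w = rng.normal(size=16)       # model weights
x = -0.1 * np.sign(w)         # an input the model scores as class 0

def predict(v):
    """Class 1 if the model's score is positive, else class 0."""
    return int(w @ v > 0)

# FGSM-style step: nudge every pixel slightly in the direction that
# raises the score. For a linear model, the gradient of (w @ x) with
# respect to x is just w, so the sign of the gradient is sign(w).
eps = 0.2                     # per-pixel perturbation budget
x_adv = x + eps * np.sign(w)

print(predict(x))             # original input: class 0
print(predict(x_adv))         # near-identical input: class 1
```

Each pixel moves by at most 0.2, yet the label flips -- the same failure mode, scaled up, that lets carefully crafted distortions turn "dog" into "ostrich" for a deep network.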

Something similar appears to have happened last year in the case of Joshua Brown, an apparently carefree autonomous-car operator who became the first fatality involving a self-driving Tesla after -- as the automaker put it -- "[n]either autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied." Moreover, Brown has not been the only operator of an autonomous Tesla to be killed after a problem with the self-driving technology.

In both incidents, the operator was loudly alleged to be more at fault than the technology. Even in non-fatal accidents involving self-driving cars, the other -- human -- parties have routinely been deemed at fault for insurance-company purposes. But insurance companies don't necessarily operate in the real world. Yes, it is technically lawful to power through a yellow light at 38mph in a dangerous 40mph intersection with imperfect visibility without paying too close attention -- but most human drivers know better. Yes, it is legally mandated to stop immediately at a crosswalk when a pedestrian is standing there patiently waiting to cross -- but from a practical perspective, most human drivers going close to the speed limit on a fast, busy thoroughfare know better than to slam on their brakes in traffic, risking injury or death to themselves and others behind them. (When it comes to pedestrians near crosswalks, human drivers probably also don't register quite so many false positives in identifying people waiting to cross.)

Nevertheless, technophiles are crying "full speed ahead" for self-driving technology. They espouse the message that "we need to be okay with self-driving cars that crash," arguing that the technology cannot be expected to be perfect because humans aren't perfect. This, however, misses the point -- that a world full of self-driving cars is a world lacking in individual agency.

Worse, "smart" cars -- even the non-autonomous ones -- are already rife with vulnerabilities that could cause a black-hat attacker to wet him- or herself with glee. (See Law Comes to the Self-Driving Wild West, Part 2 and Autonomous Cars Must Be Secure to Be Safe.) Time and again, security researchers have exposed grossly dangerous vulnerabilities in the modern connected car -- and, time and again, the industry has pooh-poohed such findings. Self-driving cars are an even bigger target -- particularly because they can be so demonstrably fooled via their machine-learning avenues.

So-called progress, however, does not tend to slow. Death and destruction are coming the way of the self-driving car -- and, inevitably, the way of those in the way of the self-driving car.

Ask not for whom the autonomous car honks; it honks for thee.


Joe Stanganelli, principal of Beacon Hill Law, is a Boston-based attorney, corporate-communications and data-privacy consultant, writer, and speaker. Follow him on Twitter at @JoeStanganelli.
