IoT Security: Onus On Developers, Security Researchers
Security teams and DevOps need to team up on 'lean security' processes that make safety a top priority before a product reaches the market.
Earlier this year, security researcher Troy Hunt gave developers behind the Nissan Leaf companion app an ultimatum: Plug the security holes in the app, or I’m making them public.
When Nissan, which has said the vulnerability poses no safety threats, failed to address the flaws, Hunt delivered on his promise. Using public screenshots of the NissanConnect app and Burp Suite screen grabs, Hunt showed that nefarious hackers need only a car’s vehicle identification number to manipulate its climate controls.
Hunt, who claims that a patch “would not be hard to do,” commented that the problem isn’t that the app’s developers did a sloppy job with authorization: The problem is they didn’t do it at all, a mistake he called “bizarre.”
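To be concrete about what was missing, the sketch below illustrates the kind of server-side ownership check an API like this needs. It is a hypothetical Python illustration, not Nissan’s actual code; the session and registry objects are stand-ins for whatever identity and vehicle-lookup services the real back end uses.

    # Hypothetical sketch: accept a climate-control command only when it arrives
    # with an authenticated session whose user actually owns the target VIN.
    class AuthorizationError(Exception):
        pass

    def set_climate(session, vin, registry, hvac_on):
        if session is None or not session.is_authenticated:
            raise AuthorizationError("a valid user session is required, not just a VIN")
        if vin not in registry.vins_owned_by(session.user_id):
            raise AuthorizationError("authenticated user does not own this vehicle")
        registry.send_hvac_command(vin, on=hvac_on)  # only now talk to the car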
A Chronic Concern
Although the Leaf security snafu isn’t life-threatening, it’s the most recent in a string of embarrassing and dangerous security flaws in connected consumer products.
Security researchers have poked security holes in everything from connected Barbie dolls to baby monitors to 2-ton SUVs. And Rob Joyce, chief of the National Security Agency’s Tailored Access Operations unit, has said those types of vulnerabilities keep him awake at night. According to Joyce, hundreds of thousands of internet-connected devices — from power plants to apps to automobiles — have been haphazardly connected without proper protections in recent years.
The problem has marred the reputation of the developer community, and the whole community must take responsibility. The fix starts with a set of essential principles for device safety many know but few practice: a methodology my firm calls “lean security.”
How Can Lean Security Help?
Lean security principles — awareness, simplification, automation, and measurement — can keep consumer devices safe and secure.
Awareness means that everyone, from engineers and line-of-business managers to C-suite executives, must understand the impact their product has on consumers’ safety and constantly look for ways to improve security. Just as drivers buckle seat belts and lock doors without thinking, this must become second nature for technology companies. The recent Jeep Cherokee hack shows what happens when it isn’t: attackers who reached the vehicle’s infotainment system should never have been able to cut the brakes or kill the engine. Even nontechnical employees who create product catalogs at Chrysler could have anticipated the danger of interconnecting those systems.
Then there’s simplicity. According to McCabe Software, the average car had 100,000 lines of source code in the 1970s; today’s vehicles contain more than 100 million. Each line represents a new opportunity for bugs and vulnerabilities. And the consequences aren’t limited to infotainment systems: Toyota Camry driver Jean Bookout was gravely injured and her passenger killed when the vehicle accelerated unexpectedly and crashed. In the ensuing lawsuit, experts compared the source code underpinning Toyota’s electronic throttle system to “spaghetti.”
Automation can reduce the role of human error in such vulnerabilities. Organizations should build an automated toolchain that pushes code into production through continuous integration and deployment pipelines, with security tests and controls embedded at every stage. Perhaps more important, developers must automate failure. Netflix created Chaos Monkey, an automated failure service that randomly disables production systems; because Netflix’s developers are constantly handling unexpected failures, they’re ready for real trouble when it arrives.
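To make the pattern concrete, here is a minimal Chaos-Monkey-style sketch in Python, assuming a list of instance identifiers and a termination hook supplied by the caller. It is an illustration of automated failure injection, not Netflix’s actual tool.

    # Minimal failure-injection sketch: during a weekday "chaos window," randomly
    # pick one instance and terminate it so recovery paths get exercised routinely.
    import datetime
    import random

    def pick_victim(instances):
        now = datetime.datetime.now()
        if now.weekday() >= 5 or not (9 <= now.hour < 15):  # weekdays, 9am-3pm only
            return None
        return random.choice(instances) if instances else None

    def run_chaos(instances, terminate):
        victim = pick_victim(instances)
        if victim is not None:
            terminate(victim)  # caller supplies the real termination hook
            print(f"chaos: terminated {victim}; recovery should be automatic")

Restricting the chaos window to business hours is a deliberate choice: failures surface while the whole team is at hand to learn from them.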
Finally, measurement is crucial. Developers and security teams must measure device security and eliminate gaps before a product reaches the market. According to a recent Coverity study, less than half of third-party code is tested for security. With more than 90% of respondents using third-party code or outsourced teams, this is a systemic problem, and one highlighted by Nissan’s third-party Leaf app. To catch vulnerabilities early, developers must build automated security review into their pipelines. Every automaker crash-tests its vehicles with dummies, yet few test the security of the devices they ship with comparable rigor.
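One way to build that measurement into the pipeline is a pre-merge security gate. The sketch below is a hedged Python example: the Bandit and pip-audit commands are placeholders for whatever scanners a team actually runs, and they assume those tools are installed; the only hard requirement is that a nonzero exit code fails the CI job and blocks the merge.

    # Pre-merge security gate sketch: run each configured scanner and fail the
    # build if any of them reports findings. Swap in your own tools as needed.
    import subprocess
    import sys

    SECURITY_CHECKS = [
        ["bandit", "-r", "src"],  # example: Python static analysis (assumes Bandit is installed)
        ["pip-audit"],            # example: audit dependencies for known vulnerabilities
    ]

    def run_security_gate(checks=SECURITY_CHECKS):
        failed = []
        for cmd in checks:
            if subprocess.run(cmd).returncode != 0:
                failed.append(" ".join(cmd))
        return failed

    if __name__ == "__main__":
        failures = run_security_gate()
        if failures:
            print("security gate failed:", ", ".join(failures))
            sys.exit(1)  # nonzero exit fails the CI job and blocks the merge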
Security Is Non-negotiable
Every insecure access point represents a new risk to consumers’ lives, and the burden is on the developer community to root out insecure coding practices.
Technology leaders must update internal controls and educate developers about security responsibilities, and developers must prioritize users’ safety. This means being aware of how consumers use products, simplifying code to eliminate insecurities, automating development and instructive failures, and measuring vulnerabilities prior to release.
What happens if the community can’t meet its obligations? Expect the Federal Trade Commission or Department of Homeland Security to step in, implementing new regulations and stifling innovation.
Back in 2014, the National Security Telecommunications Advisory Committee issued a report to the president on the Internet of Things, warning that insecure consumer devices could disrupt national security by 2025. Barely two years later, that prediction already looks conservative, and Congress has held numerous hearings on the subject since.
It’s up to all of us to innovate with safety as our top priority, lest the government step in and ruin our party. Developers, the buck must stop with us.