7 New Rules For IoT Safety & Vuln Disclosure
In the Internet of Things, even the lowliest smart device can be used for a malicious purpose. Manufacturers take heed!
If you've ever been irked by someone who has spent an entire shared meal staring at their phone, you know that social norms around technology are slow to catch up with our actual use of it. It's no secret that most "smart" devices haven’t kept up with the state of the art in security knowledge, but manufacturers of the Internet of Things are also failing to keep up to date when it comes to safety notification and vulnerability disclosure. Here are seven new rules to safeguard users from the unknown "things" that could do them harm.
Rule #1 – Notify Users of Significant Changes
If a device is designed to be interacted with several times a day, repeated actions quickly become muscle memory. Once that memory is in place, you'll probably always interact with the device in that way. People need to be clearly (maybe even repeatedly) notified of significant changes. Any feature change that removes or alters a safety feature, or that would introduce a safety hazard, should not be considered a feasible option.
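What that could look like in practice is sketched below; the manifest format, flag name, and prompt are purely illustrative, not any vendor's actual implementation.

```python
# Illustrative sketch only: a hypothetical update manifest where each change
# declares whether it affects safety-relevant behavior, so nothing safety-
# related can be applied silently.
manifest = {
    "version": "2.4.0",
    "changes": [
        {"summary": "Faster boot time", "safety_impact": False},
        {"summary": "Auto-lock now off by default", "safety_impact": True},
    ],
}

def requires_acknowledgment(manifest: dict) -> bool:
    # Any safety-relevant change means the update cannot be applied silently.
    return any(change["safety_impact"] for change in manifest["changes"])

def notify_and_confirm(manifest: dict) -> bool:
    # Surface each safety-relevant change and require an explicit decision.
    for change in manifest["changes"]:
        if change["safety_impact"]:
            print(f"Safety-relevant change in {manifest['version']}: {change['summary']}")
    return input("Install this update anyway? [y/N] ").strip().lower() == "y"

if not requires_acknowledgment(manifest) or notify_and_confirm(manifest):
    print("Proceeding with update")
else:
    print("Update deferred until the user reviews the changes")
```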
Rule #2 – Be Thorough with Vulnerability Reports
Device manufacturers should have a protocol for handling vulnerability reports, and a responsible disclosure policy posted in a prominent place on their website. Because vulnerabilities in medical devices may literally be a matter of life and death, it is a good idea – especially if a vendor does not yet have a publicly posted responsible disclosure policy – to send vulnerability reports to the attention of the Food and Drug Administration (FDA) via MedWatcher.
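One increasingly common way to make a disclosure policy easy to find is a security.txt file (RFC 9116) served at /.well-known/security.txt on the vendor's website. A minimal example, with placeholder addresses and dates:

```text
# https://example-vendor.com/.well-known/security.txt
Contact: mailto:security@example-vendor.com
Expires: 2026-12-31T23:59:59.000Z
Encryption: https://example-vendor.com/pgp-key.txt
Policy: https://example-vendor.com/responsible-disclosure
Preferred-Languages: en
```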
Rule #3 – Give Humans Veto Power
While no one would argue that humans are infallible, there are times when it is imperative to let a human expert make the final decision. If reputations, livelihoods, health, or lives are on the line, software makers have a moral obligation to give the most qualified available human the ability to weigh in on the decision. In the case of automated medical diagnosis, that will most likely be a doctor. In the case of an auto-piloted car, that should be a driver who is still responding to what's on the road even if they're not pressing pedals or steering. The more serious the potential outcome, the more strongly human decision-making (or at least active human interaction) should tend toward being mandatory.
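A rough sketch of how such a "human veto" might be wired into software, with hypothetical names and a deliberately simplified approval step:

```python
# Illustrative sketch: automated actions above a risk threshold are routed
# to a qualified human for sign-off instead of executing directly.
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class ProposedAction:
    description: str
    risk: Risk

def request_human_approval(action: ProposedAction) -> bool:
    # In a real system this would alert the most qualified available person
    # (doctor, driver, operator) and wait for an explicit yes or no.
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"

def decide(action: ProposedAction) -> None:
    # The veto: high-risk actions never run without human sign-off.
    if action.risk is Risk.HIGH and not request_human_approval(action):
        print(f"Vetoed: {action.description}")
        return
    print(f"Executing: {action.description}")

decide(ProposedAction("Adjust insulin dose", Risk.HIGH))
```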
Rule #4 – Provide a Method for Prompt and Easy Updates
The code behind just about any software application is long and complex, and with complexity come errors. Devices need the ability to update their software quickly and easily when an error is identified. Whatever method is used, it should make it easy for customers to spot fraudulent updates, and it should not require a trip to a service center that may be hundreds of miles away from rural users.
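One widely used way to help a device (and its owner) reject fraudulent updates is for the vendor to sign every firmware image and for the device to refuse anything that does not verify against a pinned public key. The sketch below assumes Python with the cryptography package; the surrounding update flow is hypothetical.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_update(image: bytes, signature: bytes, vendor_pubkey: bytes) -> bool:
    """Return True only if the update image was signed by the vendor's key."""
    public_key = Ed25519PublicKey.from_public_bytes(vendor_pubkey)
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        # A bad signature means the update is corrupted or fraudulent.
        return False

# Hypothetical usage inside an over-the-air update routine:
# if verify_update(downloaded_image, downloaded_sig, PINNED_VENDOR_KEY):
#     install(downloaded_image)
```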
Rule #5 – Provide a Method for Audit Logging
While we might like to think our Internet-connected washing machine is too uninteresting for criminals to bother with, that line of thought is simply naïve. Even the lowliest devices can hold some utility for malicious purposes such as spamming or DDoS attacks. And without logging functions to keep track of what's happening on the device, we can't know when that's occurring. When limited storage dictates a small log file, users should have the option to export or sync that file to another device.
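As a rough illustration of what bounded, exportable audit logging could look like on a small device (the paths and size limits here are assumptions, not a standard):

```python
import logging
import shutil
from logging.handlers import RotatingFileHandler

# Keep a small, capped audit log on the device itself.
audit = logging.getLogger("device.audit")
audit.setLevel(logging.INFO)
handler = RotatingFileHandler("/var/log/device/audit.log",
                              maxBytes=256 * 1024,  # cap each file at 256 KB
                              backupCount=3)        # keep a few rotations
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
audit.addHandler(handler)

def export_audit_log(destination: str) -> None:
    """Let the user copy the log to external storage or a paired device."""
    shutil.copy("/var/log/device/audit.log", destination)

audit.info("settings_changed user=owner field=spin_speed old=800 new=1200")
```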
Rule #6 – Authenticate Input
I'm sure we've all had the experience of a child or a pet pressing a button that made unexpected changes to some setting or other. When the change is something benign like enabling foreign-language subtitles, the stakes are fairly low. But when devices have the ability or information to affect our lives, our health, or our reputations, even simple changes can have powerful effects. This being the case, we need to be extra sure that changes are made by the authorized user or a designated representative, rather than by a malicious individual or a meandering pet.
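A minimal sketch of one common approach: the device only accepts a settings change if the request carries a valid message authentication code computed with a secret shared with the owner's authorized app. The secret handling and wire format here are simplified assumptions.

```python
import hashlib
import hmac

DEVICE_SECRET = b"per-device-secret-provisioned-at-setup"  # illustrative only

def is_authentic(payload: bytes, received_mac: bytes) -> bool:
    expected = hmac.new(DEVICE_SECRET, payload, hashlib.sha256).digest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, received_mac)

def apply_settings_change(payload: bytes, received_mac: bytes) -> None:
    if not is_authentic(payload, received_mac):
        raise PermissionError("Rejected unauthenticated settings change")
    print(f"Applying change: {payload.decode()}")
```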
Rule #7 – Have an Exit Plan
Many IoT devices require cloud-based services to operate. If a company is discontinuing a device, going out of business, or otherwise ending support for the cloud-based service, it should provide a mechanism that allows users to transition the service. This could include selecting an alternate cloud-based service or publishing enough technical information for users to create their own replacement.
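What an exit plan can mean in code terms, sketched here with a hypothetical file layout: rather than hard-coding the vendor's cloud endpoint, the device reads it from a user-editable configuration file, so owners can point it at an alternate or self-hosted service if the original one shuts down.

```python
import json
from pathlib import Path

DEFAULT_ENDPOINT = "https://cloud.example-vendor.com/api/v1"  # placeholder
CONFIG_PATH = Path("/etc/device/cloud.json")  # hypothetical location

def get_service_endpoint() -> str:
    # If the owner has supplied an alternate endpoint, honor it.
    if CONFIG_PATH.exists():
        config = json.loads(CONFIG_PATH.read_text())
        return config.get("endpoint", DEFAULT_ENDPOINT)
    return DEFAULT_ENDPOINT
```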
It takes time for appropriate social mores to coalesce for new things, and while we might like for certain norms and rules to be obvious right out of the gate where security and safety are concerned, that is not the case. Until we get an official Miss Manners guide dictating etiquette for new technology, vendors and users will likely create some awkward situations. Hopefully, in higher-risk areas this will be hashed out more quickly so there will be little loss of life and limb and the lessons learned can then trickle down to lower-risk devices.