The Y2K Boomerang: InfoSec Lessons Learned from a New Date-Fix Problem
We all make assumptions. They rarely turn out well. A new/old date problem offers a lesson in why that's so.
January 20, 2020
Twenty years ago, the IT world collectively thought it had dodged a millennium-sized bullet when years of preparation saw the dawn of Jan. 1, 2000, without a worldwide computer catastrophe. For some, though, the Y2K bullet has turned out to be a boomerang. And it's a boomerang that carries lessons for security professionals.
The date-change problem that was dodged in 2000 has reared its ugly head in 2020. Why? Because the software developers who, back in 1999, were desperately trying to keep computer systems with two-digit year fields from becoming confused about the century made some assumptions.
Their solution was called "windowing" or relying on a "pivot year": A simple flag meant that two-digit dates in a certain range (typically 20 to 99) were assumed to be in the 20th century, while those in the range 00 to 19 were assumed to be in the 21st.
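To make the mechanics concrete, here is a minimal sketch of how a pivot-year routine might have looked. The pivot value of 20, the function name, and the sample values are illustrative assumptions for this article, not code taken from any real system:

```python
# Minimal sketch of Y2K "windowing" (pivot-year) logic, assuming a pivot of 20:
# two-digit years 20-99 are treated as 19xx, while 00-19 are treated as 20xx.

PIVOT = 20  # the 1999-era assumption: "nothing will still be running past 2019"

def expand_two_digit_year(yy: int) -> int:
    """Expand a two-digit year into a four-digit year using a fixed pivot window."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 1900 + yy if yy >= PIVOT else 2000 + yy

print(expand_two_digit_year(99))  # 1999, correct when the code was written
print(expand_two_digit_year(5))   # 2005, still correct
print(expand_two_digit_year(20))  # 1920, the 2020 boomerang
```

The routine works flawlessly right up until the calendar catches up with the pivot, at which point a date like "20" silently turns into 1920 instead of 2020.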
It was further assumed, in those heady days when programmers were partying like it was 1999, that any system in use at the turn of the millennium would have been replaced (by a system with new, more date-inclusive software) within 20 years.
And now you can see where the problem comes in.
Long-Lasting Systems
Assumptions like "surely they won't be using this system 20 years from now" almost always turn around to bite IT professionals, because they ignore the fact that companies will undoubtedly choose the cheapest option — which might very well mean forcing the IT department to keep scotch-taping together the same old system year after year after year.
In this particular case, not only are many of those millennial systems still in service, but time-squeezed programmers have also grabbed those two-digit date routines and used them, unaltered, in newer systems. All of this meant that on Jan. 1, 2020, more than a few pieces of business-critical software stopped working, or stopped working correctly.
In some cases, the issue was discovered and patched before the end of 2019. In others, emergency patches were rushed into service at the new year.
And in still others, companies spent days (or more) waiting to print receipts, accept money from customers, or carry on other critical business processes while software developers worked to swat all of the date-related issues.
This is far from the first case of assumptions turning around to bite industry professionals. Some of us can remember "no one will ever need more than 64k of memory" as a particular set of teeth, and I have no interest in refighting the battle of IPv4 address space.
For security professionals, the real lesson in all of this is that architects, system engineers, and software developers are going to make assumptions in the work they do. Most of them will seem reasonable at the time they're made. Some will continue to seem so, but many will ultimately turn out to be the cause of a headache — or worse. So what can a security professional do to reduce the assumption-based risk?
Always Ask
The first is to have a seat at the table with those architects and developers so that you can understand some of those pesky assumptions. Another reason for the seat is to make sure that as many assumptions as possible are documented. Your successors will thank you for that.
Next, you really should go through existing systems (especially those that have been on the job longer than you have) and get some serious visibility into what they're made of and how they work, with a particular emphasis on identifying the assumptions that earlier developers might have made. It's easy to be lulled into believing that a regular patching and updating process will have cleared out the irrational underbrush, but it takes a serious update to solve an architectural limit designed into the system.
Finally, don't be afraid to look at your systems with beginners' eyes. When we're in a hurry, under pressure, trying to show colleagues how knowledgeable we are, or all three, we tend to make our own assumptions about how things work. ("I don't see a cable — all the network access MUST be over Wi-Fi.") Don't be afraid to ask the dumb-sounding, most basic questions in order to verify precisely how things work. And when you ask those questions, don't let systems or people give rushed, unexamined answers. Make sure they've verified the obvious facts they're handing you.
The assumptions your organization makes may not have the headline-worthy impact of millennial issues, but they can still create serious vulnerabilities and major system resilience problems. We've all heard the old saw, "When you assume you make an a** out of u and me." Just because it's hoary and clichéd doesn't mean it's not true.