Dark Reading is part of the Informa Tech Division of Informa PLC


Vulnerabilities / Threats

4/5/2021
09:00 AM
Andrew Bolster, Ph.D., Data Science Group, WhiteHat Security
Sponsored Article

Active Learning: Adding a Human Element to Artificial Intelligence, Machine Learning & Cybersecurity

Beneath the cynicism, hyperbole, market-making, and FUD, the strategic importance of AI in cybersecurity is constrained by us "meatbags."

Being a data science practitioner in the cybersecurity space has been a double-edged sword for several years. On the one hand, with the proliferation of automated security testing, network IDS advances, the sheer growth in traffic, and the threat surface of our increasingly complex, interconnected application development practices, these roiling oceans of flotsam and datum are everything our data-hungry little hearts desire. Related innovations in data engineering in the past decade mean that questions that previously had lived only in the craven dreams of executive officers and deranged analysts are now the kinds of tasks that we hand off to interns to make sure they have correctly set up their workstations.

But this glut of "big data" and computational wizardry leads inevitably to the other side of that coin — the zombie-esque re-emergence of casualties from the last "AI winter," proselytizing that "now is the time." Revolutions in highly specific fields like natural language processing and computer vision previously imagined only in big-budget sci-fi tentpole movie franchises were now accessible with URLs like ThisCatDoesNotExist and QuickChat.ai with links to the code on GitHub for all to emulate.

"This isn't your parents' AI," was the rallying call of the entire B2B software engineering industry. "This time it's different," and AI would make it all better, and "no-code" AI/ML deep-learning adversarial recurrent network solutions on the blockchain were the proverbial white whales that just needed to be chased through these oceans of data. And finally, after years of promising research, Captain Ahab would have his prize of human-like intelligence, able to take "meatbag" expertise, judgment, and wisdom, and scale indefinitely, or as much as your cloud compute budget could tolerate.

"Powered by AI" has become an albatross across many parts of the software engineering industry, no more so than in cybersecurity. Considering the fundamental premise of our industry is "computer systems can be bent to induce unintended behavior," the magic wand of "AI" often ends up being relegated to a square in our socially distanced buzzword bingo cards.

The real opportunity for the techniques pioneered in the "big data" and "artificial intelligence" research spaces is already well voiced: "joining the best of human and machine intelligence." But the question of how this is accomplished remains unclear at best and, at worst, misleading.

At WhiteHat Security, we have pioneered an Active Learning approach to developing machine learning models that opportunistically takes tasks off our security experts' work queues when a model is confident in its assessment of a piece of evidence. Those items are either actioned directly and invisibly on behalf of our security team or, on a probabilistic basis, still routed to our security teams along with the model's assessment, so that we can cross-verify the ongoing performance of the models under test. This ensures both that our security teams have the most "unboring" experience possible and that our models receive continual feedback: performance or accuracy deviations can be quickly identified, and any models with reduced accuracy can be retrained and the old ones decommissioned rapidly, without any loss of security oversight.
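The routing logic described above can be sketched in a few lines of Python. This is a minimal illustration, not our production system: the threshold, spot-check rate, and `model.assess` interface are all hypothetical stand-ins.

```python
import random

# Hypothetical values; real thresholds would be tuned per model and finding class.
AUTO_ACTION_CONFIDENCE = 0.95   # above this, the model may act on its own
SPOT_CHECK_RATE = 0.10          # fraction of confident items still sent to humans

def triage(evidence, model, human_queue, auto_actions):
    """Route a piece of evidence to automatic action or to a human analyst."""
    verdict, confidence = model.assess(evidence)
    if confidence >= AUTO_ACTION_CONFIDENCE and random.random() > SPOT_CHECK_RATE:
        # Confident and not sampled for review: actioned invisibly
        # on the security team's behalf.
        auto_actions.append((evidence, verdict))
    else:
        # Low confidence, or probabilistically sampled for cross-verification:
        # the analyst sees the model's verdict alongside the evidence.
        human_queue.append((evidence, verdict, confidence))
```

The probabilistic spot check is what keeps the feedback loop closed: even highly confident models keep generating labeled comparisons against human judgment.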

Behind this standardized deployment and interaction architecture sits a core approach: a "decision support system" built on mutual trust, in which the Data Science group analyzes and models data using the optimal techniques for each scenario context, and our partners in the rest of the product organization can understand and rely on the systems we release. Fundamental to this "decision support system" approach is that whatever techniques, tools, strategies, technologies, or technomancy are used to preprocess, clean, analyze, and train models, their integration is as simple as possible: a decision support system is fed some evidence, and it responds with a set of recommendations and related confidences.
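That "evidence in, recommendations and confidences out" contract can be expressed as a simple interface. The sketch below is illustrative only: the `Recommendation` type, the `assess` function, and the stand-in scoring rule are assumptions for this example, not WhiteHat's actual API.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    verdict: str        # e.g. "exploitable", "false_positive"
    confidence: float   # 0.0 - 1.0, exposed so consumers can calibrate trust

def assess(evidence: dict) -> list[Recommendation]:
    """Fed some evidence, respond with recommendations and related confidences."""
    # Stand-in scoring rule for illustration only; a real system would sit
    # behind this signature with arbitrary preprocessing and models.
    score = 0.9 if evidence.get("pattern_matched") else 0.2
    return [
        Recommendation("exploitable", score),
        Recommendation("false_positive", 1.0 - score),
    ]
```

The value of the narrow interface is that models, features, and pipelines can change freely behind it without consumers needing to know or care.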

Expressing and exposing these specific confidences as part of the system fosters a form of "trust" between the decision support system and the security practitioners who then make decisions based on that data. And finally, when the decision support systems themselves have conflicting or low confidence in their assessments, these borderline or edge cases are not only raised with the security teams but also collated by our Data Science team and analyzed separately; if any patterns can be observed in the "confusing" evidence, these are raised with our R&D and security teams, and new models are trained against the novel finding.
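The collation step above can be sketched as a filter over ambiguous assessments plus a simple pattern check. The confidence band, `category` field, and cluster threshold here are hypothetical choices for illustration.

```python
from collections import Counter

def retraining_candidates(assessments, low=0.4, high=0.6, min_cluster=3):
    """Collate 'confusing' evidence (ambiguous confidence) and flag any
    category with enough borderline items to suggest a pattern worth
    retraining a model against."""
    borderline = [a for a in assessments if low <= a["confidence"] <= high]
    counts = Counter(a["category"] for a in borderline)
    flagged = sorted(cat for cat, n in counts.items() if n >= min_cluster)
    return borderline, flagged
```

In practice, a flagged category would prompt human review of the clustered evidence before any retraining decision.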

The intent is not to somehow replace or supplant the contextually informed human expert but rather to provide cognitive shortcuts and contextual evidence to empower them to make heuristic decisions on the edges.

AI, ML, bots, black boxes, decision support systems: Whatever the phrasing, the place of these technologies in the modern cybersecurity landscape is simple — answer the easy questions for me and get out of the way, or give me enough contextual information and trusted advice to take on the hard questions myself.  

About the Author
Andrew Bolster, Ph.D., leads the Data Science group in WhiteHat Security. His professional and academic experience spans from teaching autonomous submarines to collaborate on port protection, establishing guidelines for military application of AI, using biosensors to monitor and communicate human emotions, establishing IEEE standards for applying ethics in AI, and curating data playgrounds for cybersecurity researchers and professionals to experiment with multiterabyte streaming datasets for product innovation. In his "spare time," he is a founding trustee of the Farset Labs hackerspace, and on the board of Vault Artist Studios, both in Belfast, Northern Ireland.
