3/23/2018 08:05 AM
Joe Stanganelli

Cybersecurity AI: Addressing the 'Artificial' Talent Shortage

As AI becomes increasingly important to cybersecurity, the industry's complaints about talent shortages in both areas have grown louder. But is there really a lack of qualified experts?

As hackers get smarter, artificial intelligence and machine learning are increasingly seen as vital skills on the cybersecurity battlefield.

Accordingly, the tech industry has taken to kvetching about a skills shortage in AI the same way it howls about a talent shortage in cybersecurity -- while at the same time threatening to cut traditional cybersecurity jobs in favor of AI. (See: AI Is Stealing These IT Security Jobs Now.)

But do these shortages exist? Do they matter?

In the case of AI, the tech industry largely blames the autonomous-vehicle boom. In a panel session at last year's MassIntelligence Conference, Windstream product-management executive Mike Frane pointed to an AI "talent exodus … specifically to the self-driving car industry."

"The [autonomous-vehicle] industry is draining the talent… which makes it more difficult for small startups to compete," commented Frane. "I have to think that, in time, it's going to generate more demand in the marketplace for [AI skills]."

Sure enough, small companies are offshoring their AI needs, while large companies are shelling out $300,000 or more in salaries to engineers with modest AI experience and expertise.

Industry cannot even agree on how dire its putative AI talent shortage is. As Bloomberg tech analyst Jeremy Kahn points out, Chinese tech heavyweight Tencent estimates there are 200,000 to 300,000 AI researchers and practitioners globally, while Montreal startup Element AI claims there are only "about 22,000 Ph.D.-level computer scientists" qualified for in-demand AI jobs -- declining to count the mere "contributors" to projects that Tencent counted.

What talent gap?
By counting only Ph.D.-level computer scientists (from self-reported LinkedIn data, no less), however, Element AI's tally is completely at odds with the way software development works in 2018. Compared to the proprietary-everything, me-first 1980s, 21st-century tech innovation is driven by open source, open standards and collaboration. There is no room for diploma-reliant elitism when universities no longer hold a monopoly on skills -- particularly in InfoSec AI.

Indeed, some companies have taken to hiring physicists, astronomers and others from disciplines requiring exceptional mathematical ability -- knowing that such ability translates well to AI work. On the cybersecurity side of AI, however, organizations remain unimaginative -- and thereby shoot themselves in the foot as candidates become similarly unimaginative.

"Because demand has been so high, aspirant security engineers have not decided to start in a different area of IT -- and head straight into security, which leaves them without the context to have as productive conversations as they otherwise would be able to have with their colleagues in IT," Steve Athanas, Associate CIO at the University of Massachusetts at Lowell, related in an email interview. "These folks may have a conceptual and practical mastery of security frameworks, but [they] can struggle to understand the real-world applications in different areas of infrastructure or applications in the organization."

Moreover, Kahn hints, Element AI has too much skin in the game to be blindly trusted.

"Element has an incentive to highlight scarcity," writes Kahn. "The more companies despair of hiring their own experts, the more they’ll need vendors such as Element to do the work for them."

Yet even honest-to-God AI experts are hardly immune to failure, as demonstrated by a self-driving Uber car striking and killing an Arizona woman on Monday -- a catastrophe under any threat model. (See: My Cybersecurity Predictions for 2018, Part 3: Protecting Killer Cars.)

The future of the AI workforce
In other words, the law of diminishing returns applies to security AI hiring.

"The reality is a $1,000,000-a-year employee and a $60,000-a-year employee have the same energy; there is still a limit to how much energy they can spend," Andy Ellis, CSO of Akamai Technologies and frequent critic of these "tech talent shortage" myths, recently told an audience at Next-Gen InfoSec Live. "In many cases, you're better off building that team [that is] growing and developing than finding that singular talent."

The use of AI and machine learning in cybersecurity remains fairly limited -- mostly to helping human IT workers triage the voluminous security alerts they face each day (and would otherwise be likely to ignore), and to identifying suspicious network activity. These use cases, however, hardly require the cream of the AI crop to implement. Even when black-hat attackers use AI to infiltrate a system, a big part of the solution can be as simple as banning non-whitelisted bots from the network entirely.
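
To put that second use case in perspective, here is a minimal Python sketch of the kind of commodity anomaly detection that "identifying suspicious network activity" often boils down to. It is illustrative only: the flow features, sample values and bot allowlist are hypothetical, not drawn from any particular vendor's product.

# Illustrative only: unsupervised anomaly detection over simple
# network-flow features. Feature choices and values are hypothetical.
from sklearn.ensemble import IsolationForest
import numpy as np

# Each row: [bytes_sent, bytes_received, connections_per_minute]
baseline_traffic = np.array([
    [1200, 800, 4],
    [1500, 950, 5],
    [1100, 700, 3],
    [1300, 900, 4],
])

detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline_traffic)

new_flows = np.array([
    [1250, 850, 4],      # in line with the baseline
    [90000, 200, 300],    # unusually chatty host -- likely flagged
])
print(detector.predict(new_flows))  # 1 = normal, -1 = anomalous

# The article's simpler countermeasure -- banning non-whitelisted bots --
# needs no machine learning at all (hypothetical allowlist):
ALLOWED_BOTS = {"Googlebot", "bingbot"}
def is_allowed_bot(user_agent: str) -> bool:
    return any(bot in user_agent for bot in ALLOWED_BOTS)

An off-the-shelf isolation forest and a hard-coded allowlist are not production security tooling, but they show why these workhorse use cases sit comfortably within reach of a capable generalist engineer rather than a $300,000 specialist.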

The punchline? This all may be a moot point soon enough.

This year, Google began offering AutoML -- a service that automatically designs and trains machine-learning models -- to cloud customers. If AutoML and services like it take off, these supposed talent shortages will become undeniable talent surpluses.

—Joe Stanganelli, principal of Beacon Hill Law, is a Boston-based attorney, corporate-communications and data-privacy consultant, writer, and speaker. Follow him on Twitter at @JoeStanganelli.
