Dark Reading is part of the Informa Tech Division of Informa PLC


Operational Security // AI

3/23/2018 08:05 AM
Joe Stanganelli

Cybersecurity AI: Addressing the 'Artificial' Talent Shortage

As AI becomes increasingly important to cybersecurity, the industry's complaints about talent shortages in both fields have grown louder. But is there really a lack of qualified experts?

As hackers get smarter, artificial intelligence and machine learning are increasingly seen as vital skills on the cybersecurity battlefield.

Accordingly, the tech industry has taken to kvetching about a skills shortage in AI the same way it howls about a talent shortage in cybersecurity -- while at the same time threatening to cut traditional cybersecurity jobs in favor of AI. (See: AI Is Stealing These IT Security Jobs Now.)

But do these shortages exist? Do they matter?

In the case of AI, the tech industry largely blames the autonomous-vehicle boom. In a panel session at last year's MassIntelligence Conference, Windstream product-management executive Mike Frane pointed to an AI "talent exodus … specifically to the self-driving car industry."

(Source: Flickr)

"The [autonomous-vehicle] industry is draining the talent… which makes it more difficult for small startups to compete," commented Frane. "I have to think that, in time, it's going to generate more demand in the marketplace for [AI skills]."

Sure enough, small companies are offshoring their AI needs, while large companies are shelling out $300,000 or more in salaries to engineers with modest AI experience and expertise.

The industry cannot even agree on how dire its putative AI talent shortage is. As Bloomberg tech analyst Jeremy Kahn points out, Chinese tech heavyweight Tencent estimates there are 200,000 to 300,000 AI researchers and practitioners globally, while Montreal startup Element AI claims there are only "about 22,000 Ph.D.-level computer scientists" qualified for in-demand AI jobs -- declining to count the mere "contributors" to projects that Tencent included.

What talent gap?
By counting only Ph.D.-level computer scientists (from self-reported LinkedIn data, no less), however, Element AI's methodology is at odds with the way software development works in 2018. Compared with the proprietary-everything, me-first 1980s, 21st-century tech innovation is driven by open source, open standards, and collaboration. There is no room for diploma-reliant elitism when universities no longer hold a monopoly on skills -- particularly in InfoSec AI.

Indeed, some companies have taken to hiring physicists, astronomers, and others from disciplines requiring exceptional mathematical ability -- knowing that that ability can be translated to AI. On the cybersecurity side of AI, however, organizations remain unimaginative -- and thereby shoot themselves in the foot as candidates become similarly unimaginative.

"Because demand has been so high, aspirant security engineers have not decided to start in a different area of IT -- and head straight into security, which leaves them without the context to have as productive conversations as they otherwise would be able to have with their colleagues in IT," Steve Athanas, Associate CIO at the University of Massachusetts at Lowell, related in an email interview. "These folks may have a conceptual and practical mastery of security frameworks, but [they] can struggle to understand the real-world applications in different areas of infrastructure or applications in the organization."

Moreover, Kahn hints, Element AI has too much skin in the game to be blindly trusted.


"Element has an incentive to highlight scarcity," writes Kahn. "The more companies despair of hiring their own experts, the more they’ll need vendors such as Element to do the work for them."

Yet even honest-to-God AI experts are hardly immune to failure, as demonstrated by a self-driving Uber car striking and killing an Arizona woman on Monday -- a catastrophe under any threat model. (See: My Cybersecurity Predictions for 2018, Part 3: Protecting Killer Cars.)

The future of the AI workforce
Indeed, the law of diminishing returns applies to security-AI hiring.

"The reality is a $1,000,000-a-year employee and a $60,000-a-year employee have the same energy; there is still a limit to how much energy they can spend," Andy Ellis, CSO of Akamai Technologies and frequent critic of these "tech talent shortage" myths, recently told an audience at Next-Gen InfoSec Live. "In many cases, you're better off building that team [that is] growing and developing than finding that singular talent."

The use of AI and machine learning in cybersecurity remains fairly limited -- mostly to (1) augment human IT workers' ability to contend with the voluminous security alerts they face each day (and would otherwise be likely to ignore), and (2) identify suspicious network activity. These use cases, however, hardly require the cream of the AI crop to implement. Even in cases where black-hat attackers use AI to infiltrate a system, a big part of the solution can be as simple as banning all non-whitelisted bots from the network.
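That deny-by-default approach to bot traffic needs no machine learning at all. As a rough illustration only -- the `ALLOWED_BOTS` list and `is_allowed_client` helper below are hypothetical, not drawn from any particular product -- an allowlist check can be sketched in a few lines of Python:

```python
# Minimal sketch of deny-by-default bot filtering: any client that
# self-identifies as a bot but is not on the allowlist gets blocked.
ALLOWED_BOTS = {"googlebot", "bingbot"}  # illustrative allowlist


def is_allowed_client(user_agent: str) -> bool:
    """Return False for any bot-like user agent not explicitly allowlisted."""
    ua = user_agent.lower()
    bot_markers = ("bot", "crawler", "spider")
    if not any(marker in ua for marker in bot_markers):
        return True  # does not self-identify as a bot; pass through
    return any(name in ua for name in ALLOWED_BOTS)


print(is_allowed_client("EvilScraperBot/1.0"))           # False: unknown bot
print(is_allowed_client("Mozilla/5.0 (Windows NT 10)"))  # True: ordinary browser
print(is_allowed_client("Googlebot/2.1"))                # True: allowlisted
```

A production filter would also verify bot identity (e.g., by reverse-DNS lookup) rather than trusting the self-reported user-agent string, but the policy itself is simple pattern matching, not cutting-edge AI.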

The punchline? This all may be a moot point soon enough.

This year, Google began offering AutoML -- an automated AI service that creates its own AI -- to cloud customers. If AutoML and services like it take off, these supposed talent shortages will become undeniable talent surpluses.


—Joe Stanganelli, principal of Beacon Hill Law, is a Boston-based attorney, corporate-communications and data-privacy consultant, writer, and speaker. Follow him on Twitter at @JoeStanganelli.
