As hackers get smarter, artificial intelligence and machine learning are increasingly seen as vital skills on the cybersecurity battlefield.
Accordingly, the tech industry has taken to kvetching about a skills shortage in AI the same way it howls about a talent shortage in cybersecurity -- while at the same time threatening to cut traditional cybersecurity jobs in favor of AI. (See: AI Is Stealing These IT Security Jobs – Now.)
But do these shortages exist? Do they matter?
In the case of AI, the tech industry largely blames the autonomous-vehicle boom. In a panel session at last year's MassIntelligence Conference, Windstream product-management executive Mike Frane pointed to an AI "talent exodus … specifically to the self-driving car industry."
Sure enough, small companies are offshoring their AI needs, while large companies are shelling out $300,000 or more in salaries to engineers with modest AI experience and expertise.
Industry cannot even agree on how dire its putative AI talent shortage is. As Bloomberg tech analyst Jeremy Kahn points out, Chinese tech heavyweight Tencent estimates there are 200,000 to 300,000 AI researchers and practitioners globally, while Montreal startup Element AI claims there are only "about 22,000 Ph.D.-level computer scientists" qualified for in-demand AI jobs -- declining to count the mere "contributors" to projects that Tencent counted.
What talent gap?
By counting only Ph.D.-level computer scientists (from self-reported LinkedIn data, no less), however, Element AI takes a stance completely at odds with the way software development works in 2018. Compared to the proprietary-everything, me-first 1980s, 21st-century tech innovation is driven by open source, open standards and collaboration. There is no room for diploma-reliant elitism when universities no longer hold a monopoly on skills -- particularly in InfoSec AI.
Indeed, some companies have taken to hiring physicists, astronomers, and others from disciplines requiring exceptional mathematical ability -- knowing that such ability translates readily to AI work. On the cybersecurity side of AI, however, organizations remain unimaginative -- and thereby shoot themselves in the foot as candidates become similarly unimaginative.
"Because demand has been so high, aspirant security engineers have not decided to start in a different area of IT -- and head straight into security, which leaves them without the context to have as productive conversations as they otherwise would be able to have with their colleagues in IT," Steve Athanas, Associate CIO at the University of Massachusetts at Lowell, related in an email interview. "These folks may have a conceptual and practical mastery of security frameworks, but [they] can struggle to understand the real-world applications in different areas of infrastructure or applications in the organization."
Moreover, Kahn hints, Element AI has too much skin in the game to be blindly trusted.
"Element has an incentive to highlight scarcity," writes Kahn. "The more companies despair of hiring their own experts, the more they’ll need vendors such as Element to do the work for them."
Yet even honest-to-God AI experts are hardly immune to failure, as demonstrated by a self-driving Uber car striking and killing an Arizona woman on Monday -- a catastrophe under any threat model. (See: My Cybersecurity Predictions for 2018, Part 3: Protecting Killer Cars.)
The future of the AI workforce
For one thing, the law of diminishing returns applies to security AI hiring.
"The reality is a $1,000,000-a-year employee and a $60,000-a-year employee have the same energy; there is still a limit to how much energy they can spend," Andy Ellis, CSO of Akamai Technologies and frequent critic of these "tech talent shortage" myths, recently told an audience at Next-Gen InfoSec Live. "In many cases, you're better off building that team [that is] growing and developing than finding that singular talent."
The use of AI and machine learning in cybersecurity remains fairly limited -- mostly to (1) augment human IT workers' ability to contend with the voluminous security alerts they face each day (and would be likely to ignore otherwise), and (2) identify suspicious network activity. These use cases, however, hardly require the cream of the AI crop to implement. Even in cases where black-hat attackers use AI to infiltrate a system, a big part of the solution can be as simple as banning non-whitelisted bots from the network entirely.
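That whitelist-first approach needs no machine learning at all. The sketch below is a minimal, hypothetical illustration of the idea -- the bot identifiers and function names are invented for this example, not drawn from any particular product:

```python
# Hypothetical sketch: default-deny filtering of automated clients.
# Any bot identifier not explicitly on the allowlist is rejected.

ALLOWED_BOTS = {"monitoring-agent", "backup-runner"}  # example allowlist


def is_bot_allowed(client_id: str) -> bool:
    """Return True only if the client is explicitly whitelisted."""
    return client_id in ALLOWED_BOTS


# A known-good bot passes; anything unrecognized -- including an
# AI-driven attacker's crawler -- is blocked by default.
print(is_bot_allowed("monitoring-agent"))  # True
print(is_bot_allowed("unknown-crawler"))   # False
```

The design choice is deliberate: rather than trying to out-model an attacker's AI, the network simply refuses any automation it has not pre-approved.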
The punchline? This all may be a moot point soon enough.
This year, Google began offering AutoML -- an automated machine-learning service that designs AI models on its own -- to cloud customers. If AutoML and services like it take off, these supposed talent shortages will become undeniable talent surpluses.
—Joe Stanganelli, principal of Beacon Hill Law, is a Boston-based attorney, corporate-communications and data-privacy consultant, writer, and speaker. Follow him on Twitter at @JoeStanganelli.