7 Ways to Bring AI to Cybersecurity
Academic researchers are developing projects to apply artificial intelligence to detect and stop cyberattacks and keep critical infrastructure secure, thanks to grants from the C3.ai Digital Transformation Institute.
[Illustration: an Earth model sitting in a data flow pattern. Source: Yingyaipumi via Adobe Stock]
New ransomware variants and deceptive techniques such as "living off the land" and "store now, decrypt later" are sidestepping heuristic analysis and signature-based malware detection. Behavior-based tools can compare network activity against an established norm and flag unusual or suspicious actions and patterns. Powered by artificial intelligence (AI) and machine learning (ML), such tools represent hope in a post-Colonial Pipeline world.
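The article does not name a specific product, but the behavior-based idea is easy to sketch. The following Python snippet is a minimal illustration, with invented flow features and synthetic data: it trains scikit-learn's IsolationForest on a historical baseline, then flags traffic that deviates from the established norm.

```python
# Minimal sketch of behavior-based anomaly detection: fit a model on
# "normal" network-flow features, then flag traffic that deviates.
# Feature choices and values are illustrative, not from any real tool.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend historical baseline: [bytes_sent, packets, distinct_ports] per host-hour
baseline = rng.normal(loc=[5e4, 400, 3], scale=[1e4, 80, 1], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: one typical, one exfiltration-like burst
new_flows = np.array([
    [5.2e4, 410, 3],    # looks like the established norm
    [9.0e5, 5200, 45],  # unusual volume and port fan-out
])

for flow, verdict in zip(new_flows, model.predict(new_flows)):
    label = "suspicious" if verdict == -1 else "normal"
    print(f"flow {flow} -> {label}")
```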
C3 AI, a company specializing in applying AI to enterprise-scale digital transformation, recently awarded 24 grants under its Digital Transformation Institute (DTI) initiative. This year, DTI awarded grants to candidates who submitted proposals on applying AI to detect and stop cyberattacks and keep critical infrastructure secure. The work is timely, especially since several US government agencies recently released a joint warning about malware deployed by foreign adversaries for the express purpose of disabling essential services.
Since its founding in March 2020, DTI has focused on supporting collaborative research that furthers work in AI, ML, and related subdisciplines. Its participants comprise 10 industry and academic organizations, among them Microsoft, Princeton, and MIT.
During this grant round, DTI funded proposals on AI and critical infrastructure security. The priorities for previous years were arguably just as essential: in 2021 the focus was on using AI for climate and energy security, while 2020's emphasis was on applying AI to mitigate COVID-19 and future pandemics.
Candidates go through a selection process that includes peer review of their proposals and weighs factors such as past accomplishments and how each project uses emerging technologies. Proposals are also examined for their potential to scale and to benefit society.
The research grant program is only one of several offered through the DTI. A visiting scholars initiative will provide up to $750,000 annually to the University of California, Berkeley, and the University of Illinois at Urbana-Champaign so those universities can bring six to 10 industry experts to their campuses for teaching, research, and publishing work in the public domain. An industry partner program and curriculum development support are among the other efforts under the DTI umbrella.
Here's what seven of the grant recipients hope to achieve.
Explainable AI, built so that humans can understand why an algorithm took a particular action, could help organizational leaders feel more confident about using AI to stop cyberattacks. If they can see how an algorithm reached its conclusions, they can verify it is working as intended. That confidence could encourage late adopters to try AI-based cybersecurity, making it harder for increasingly advanced attacks to succeed.
This project, a partnership between the University of Illinois at Urbana-Champaign and KTH Royal Institute of Technology, takes inspiration from the safety cages that surround industrial robots to keep them from harming humans. The researchers want to apply the same concept to network security using explainable AI.
The team believes explainability will allow them to verify that the network protection is working as it should and to see how the safety cage affects network traffic. They'll focus on a lightweight build to embed within programmable networks or operating system kernels. The ML algorithms for this safety cage will get trained on behavior models that are inherently readable by humans.
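The team's actual behavior models are not public, but a shallow decision tree conveys the idea of a model humans can read and verify directly. In this sketch, the features, data, and labels are all invented for illustration.

```python
# Sketch of a human-readable behavior model: a shallow decision tree whose
# rules an analyst can read and audit. Data and features are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# [requests_per_min, avg_payload_bytes, failed_auths] -> 0 benign, 1 malicious
X = [[10, 800, 0], [12, 750, 1], [300, 64, 20],
     [450, 32, 35], [15, 900, 0], [380, 48, 28]]
y = [0, 0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the learned rules as nested if/else a human can verify
print(export_text(tree, feature_names=["req_per_min", "payload_bytes", "failed_auths"]))
```

Unlike a large neural network, every decision path in such a model can be printed and checked, which is the property the researchers are after.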
Cyrille Artho, associate professor at Stockholm's KTH Royal Institute of Technology, says that the size of large neural networks makes them a "black box" that even experts cannot fully understand.
"We need to provide users also with models that are simpler and based on approaches where a human can design or modify a model [that may be created by AI or a human], so it is small enough to be understood," Artho says. "[It] is key that we do not just begrudgingly accept AI as something that is 'smarter than us and probably right,' but that we can follow and understand its decisions."
Increasing the overall accuracy of alerts from an AI-based cybersecurity tool makes it more useful and helps build trust in the system. Increased productivity is one of the main drivers pushing decision-makers to adopt advanced cybersecurity tools, especially when IT teams must secure critical infrastructure. However, teams are unlikely to keep using such products if they raise too many false alarms.
In this proposal, researchers from the University of Illinois at Urbana-Champaign, MIT, and KTH Royal Institute of Technology propose using ML to identify the sources of attacks against infrastructure that contains both physical and cyber elements. More specifically, their approach will use causal reasoning based on learned dependency models.
One of the main goals is to allow people to get details on the state of a physical infrastructure, any associated communication protocols, and the online infrastructure without suffering from the "alarm fatigue" that can so often frustrate today's security teams.
Additionally, the researchers hope to learn things that will improve organizations' ability to get real-time situational information while achieving low false-positive and false-negative rates. They also want to use information about the IT infrastructure to become more successful in detecting anomalies.
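The proposal's dependency models are not described in detail, but the root-cause idea can be illustrated with a toy graph: instead of surfacing every downstream alarm, the system traces alarms back through learned dependencies to a single probable source. Everything in this sketch, including the components, edges, and selection heuristic, is hypothetical.

```python
# Toy root-cause localization over a learned dependency model. Alarms are
# traced back through dependency edges so one probable source is reported
# instead of every downstream alarm. All names here are invented.
import networkx as nx

# Edge u -> v means "a fault in u can propagate to v"
deps = nx.DiGraph([
    ("plc_sensor", "scada_gateway"),
    ("scada_gateway", "historian"),
    ("scada_gateway", "hmi_display"),
    ("ntp_server", "historian"),
])

alarms = {"historian", "hmi_display"}

# Candidate sources: nodes whose downstream components cover every alarm
candidates = [
    n for n in deps.nodes
    if alarms <= (nx.descendants(deps, n) | {n})
]

# Prefer the deepest candidate: the most specific cause explaining all alarms
root = max(candidates, key=lambda n: len(nx.ancestors(deps, n)))
print(f"{len(alarms)} alarms collapsed to one probable source: {root}")
```

Here two separate alarms collapse into a single report naming the gateway they share, which is exactly the kind of consolidation that reduces alarm fatigue.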
Digital transformation is happening incredibly quickly. Microsoft expects 500 million new apps to be created within five years, more than came on the market over the past four decades. Many of those additions will spur the development of AI and other emerging technologies. However, organizational leaders must feel confident that they can keep their increasingly high-tech infrastructures safe from attacks.
A proposal from researchers at the University of Chicago, Stanford University, and Princeton University might help. The team will use fingerprinting techniques to automatically create identifiers linked to security vulnerabilities, then design, implement, and deploy large-scale network scanning to automatically identify and address likely threats.
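The researchers' fingerprinting method is not spelled out, but a toy version shows the general shape: normalize an observed service banner, hash it into a stable identifier, and match it against a table of fingerprints known to correspond to vulnerable builds. The banners and vulnerability mapping below are made up.

```python
# Toy vulnerability fingerprinting: normalize a service banner, hash it
# into a stable identifier, and match it against known-vulnerable builds.
# Banners and the vulnerability mapping are invented for illustration.
import hashlib

def fingerprint(banner: str) -> str:
    # Normalize so trivial formatting differences don't change the ID
    canonical = " ".join(banner.lower().split())
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical table built from scans of known-vulnerable builds
known_vulnerable = {
    fingerprint("SSH-2.0-ExampleSSHd 1.0"): "EXAMPLE-CVE-0001",
}

scanned = ["SSH-2.0-ExampleSSHd  1.0", "SSH-2.0-ExampleSSHd 2.3"]
for banner in scanned:
    cve = known_vulnerable.get(fingerprint(banner))
    status = f"matches {cve}" if cve else "no known issue"
    print(f"{banner!r}: {status}")
```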
New vulnerabilities emerge regularly, and cybercriminals try to stay ahead of defense mechanisms. Technology such as this proposal describes could thwart attackers' efforts to orchestrate increasingly damaging and complex campaigns.
Whereas the previous proposal focused on keeping threats out of networks, this one, from researchers at the University of Chicago and the University of Illinois at Urbana-Champaign, aims to help people discover what network weaknesses were exploited and then strengthen them against future malicious efforts.
More specifically, the group will build cybersecurity forensics tools for ML systems deployed within networks. The work centers on two main types of cyberattacks commonly launched on ML models. The first type is where corrupted training data gets embedded into a model to make it misbehave. The second is where cyberattackers augment input data to interfere with the model's performance. The team plans to build post-attack analysis tools to make models more resilient against such attacks.
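As a rough illustration of the first attack type, the sketch below flags training samples whose labels disagree with their feature-space neighbors. This is a drastically simplified stand-in for real poisoning forensics; the data is synthetic and the disagreement threshold is arbitrary.

```python
# Very simplified forensic check for label-flip poisoning: flag training
# points whose label disagrees with most of their nearest neighbors.
# Real poisoning forensics are far more involved; this data is synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[7] = 1  # simulate one poisoned (flipped) label in cluster 0

nn = NearestNeighbors(n_neighbors=6).fit(X)
_, idx = nn.kneighbors(X)  # idx[:, 0] is the point itself

for i, neighbors in enumerate(idx):
    votes = y[neighbors[1:]]  # labels of the 5 true neighbors
    if (votes != y[i]).mean() > 0.8:
        majority = np.bincount(votes).argmax()
        print(f"sample {i} looks poisoned: labeled {y[i]}, neighbors mostly {majority}")
```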
Bo Li, one of the researchers behind the project, notes that these tools could recognize threats beyond the two kinds the proposal focuses on. She says it is "possible to achieve certain robustness given the constraints of adversarial behaviors, as well as some properties of the neural networks." Because adversaries operate under such constraints, tools like these could help provide ongoing protection against future attack methods.
In the shadow of the Colonial Pipeline ransomware attack, concerns about cyberattacks severely affecting vital energy infrastructure are no longer theoretical. Shortly after that breach shut down the oil pipeline, the US Department of Homeland Security and other agencies instituted new rules for pipeline operators, even though the US Department of Energy had kicked off a push to improve utilities' cybersecurity just a couple of weeks earlier. That crisis, along with fallout from the Russian invasion of Ukraine, showed just how important it is to protect energy infrastructure.
To help address this, a team from Carnegie Mellon University envisions creating an ML and AI cybersecurity stack that can help professionals in the energy industry. If successful, it would be another example of AI's potential to transform how people work.
The stack would include three main components that collectively aid personnel in recognizing normal and potentially harmful network traffic, sharing that data with other relevant parties, and developing and deploying defenses against possible dangers. The researchers assert that applying AI and ML to secure critical infrastructure will help professionals in the industry by automating some of their security tasks and allowing them to detect new threats sooner.
Vyas Sekar, co-director of the Future Enterprise Security Initiative at Carnegie Mellon, emphasizes the importance of usability for this tool.
"We need to think about usability in a broad sense beyond UI/UX – in particular, understanding and trust in AI," he says.
The toolkit will meet these concerns in three ways: easily shareable but privacy-preserving workflows, easy-to-use AI pipelines, and software-defined infrastructure to automatically implement AI-driven playbooks in response to threats.
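None of the toolkit's code is public, so the following is only a schematic of the third component: a playbook that maps a detector's verdict to an automated response. The verdict names and actions are hypothetical; a real system would call SDN or firewall APIs instead of returning strings.

```python
# Sketch of an automated response playbook: a detector verdict is mapped to
# a predefined action on software-defined infrastructure. Verdict names and
# actions are hypothetical stand-ins for real SDN/firewall calls.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    verdict: str  # produced upstream by the detection model

def quarantine(host: str) -> str:
    return f"moved {host} to an isolated VLAN"  # stand-in for an SDN call

def rate_limit(host: str) -> str:
    return f"rate-limited traffic from {host}"

PLAYBOOK = {
    "ransomware_beacon": quarantine,
    "scan_behavior": rate_limit,
}

def respond(alert: Alert) -> str:
    action = PLAYBOOK.get(alert.verdict)
    if action is None:
        return f"no automated play for {alert.verdict}; escalating to analyst"
    return action(alert.host)

print(respond(Alert("10.0.3.17", "ransomware_beacon")))
print(respond(Alert("10.0.8.101", "unknown_anomaly")))
```

Note the fallback: anything the playbook does not cover escalates to a human, which keeps automation from acting on verdicts it was never designed for.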
Whereas the previous proposal centered on securing critical energy infrastructure, this one aims to improve security in the emerging world of decentralized finance (DeFi). The sector holds much promise, but poor security could limit its momentum: DeFi security issues cost the sector well over $1 billion in 2021 alone, yet the cybersecurity resources devoted to addressing them remain disproportionately small.
A group of researchers from the University of California, Berkeley and the University of Illinois at Urbana-Champaign want to pay much-needed attention to the DeFi industry.
The team wants to build the first DeFi cybersecurity platform and will develop it to work with data from both on and off the blockchain. They anticipate using the tool to create a dynamic DeFi knowledge graph that minimizes threats and strengthens security in future DeFi applications.
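The platform itself has not been released, but a knowledge graph mixing on-chain entities with off-chain facts might look roughly like this toy example, where the entities, relations, and risk rule are all invented.

```python
# Toy DeFi knowledge graph mixing on-chain entities (addresses, contracts)
# with off-chain facts (an audit finding). Entities and the risk rule are
# invented to illustrate the knowledge-graph idea, not the actual platform.
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_edge("wallet_0xabc", "pool_contract_A", relation="deposited_into")
kg.add_edge("pool_contract_A", "oracle_X", relation="reads_price_from")
kg.add_edge("audit_report_17", "oracle_X", relation="flags_vulnerability")

# Simple risk query: which contracts depend on an entity flagged off-chain?
flagged = {v for u, v, d in kg.edges(data=True) if d["relation"] == "flags_vulnerability"}
for node in kg.nodes:
    risky_deps = flagged & set(kg.successors(node))
    if risky_deps and node.startswith("pool_contract"):
        print(f"{node} depends on a flagged component: {risky_deps}")
```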
This proposal and similar work could go a long way toward keeping DeFi systems secure, making people more willing to explore the sector's positive societal impacts.
A company may have a skilled and well-resourced IT team, but it'll likely be unable to stay sufficiently protected against attacks without educating the workforce about good cybersecurity hygiene. That's particularly true when social engineering comes into play.
Social-engineering attacks can be devastating to the affected people and organizations, especially since the cybercriminals orchestrating them use increasingly convincing tactics. Researchers from the University of Illinois at Urbana-Champaign and the University of Chicago want to address this by developing a system that encourages good cybersecurity practices through positive reinforcement rather than fines or mandates.
They will research the connections between social-engineering threats and a lack of cyber hygiene, then use ML to develop "nudges" that help people learn and retain better behaviors and increase their overall protection from cybercriminals.
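The project's methods are not public; one plausible framing of ML-driven nudging, shown below purely as an assumption, is an epsilon-greedy bandit that learns which nudge phrasing users actually act on. The message variants and simulated response rates are invented.

```python
# Sketch of learning which security "nudge" works best, framed as an
# epsilon-greedy bandit over message variants. The variants and simulated
# response rates are invented; the real project's method isn't public.
import random

random.seed(7)

nudges = {
    "praise":   0.30,  # simulated chance a user acts on this phrasing
    "reminder": 0.15,
    "social":   0.25,  # e.g., "90% of your team enabled 2FA"
}
shown = {n: 0 for n in nudges}
acted = {n: 0 for n in nudges}

for _ in range(2000):
    if random.random() < 0.1:  # explore a random variant
        choice = random.choice(list(nudges))
    else:                      # exploit the best-performing variant so far
        choice = max(nudges, key=lambda n: acted[n] / shown[n] if shown[n] else 1.0)
    shown[choice] += 1
    acted[choice] += random.random() < nudges[choice]

for n in nudges:
    rate = acted[n] / shown[n] if shown[n] else 0.0
    print(f"{n}: shown {shown[n]}, observed response rate {rate:.2f}")
```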
Using AI to remind people how to stay safe while using the Internet is not a total solution, but it could make users less vulnerable to hackers' efforts.