
Fight AI With AI

By developing new tools to defend against adversarial AI, companies can help ensure that artificial intelligence is developed and used in a responsible and safe manner.

On Wednesday, KPMG Studios, the consulting giant's incubator, launched Cranium, a startup focused on securing artificial intelligence (AI) applications and models. Cranium's "end-to-end AI security and trust platform" straddles two areas, MLOps (machine learning operations) and cybersecurity, and provides visibility into AI security and supply chain risks.

"Fundamentally, data scientists don't understand the cybersecurity risks of AI, and cyber professionals don't understand data science the way they understand other topics in technology," says Jonathan Dambrot, former KPMG partner and founder and CEO of Cranium. He says there is a wide gulf of understanding between data scientists and cybersecurity professionals, similar to the gap that often exists between development teams and cybersecurity staff.

With Cranium, key AI life-cycle stakeholders will have a common operating picture across teams to improve visibility and collaboration, the company says. The platform captures both in-development and deployed AI pipelines, including associated assets involved throughout the AI life cycle. Cranium quantifies the organization's AI security risk and establishes continuous monitoring. Customers will be able to establish an AI security framework, providing data science and security teams with a foundation for building a proactive and holistic AI security program.

To keep data and systems secure, Cranium maps the AI pipelines, validates their security, and monitors for adversarial threats. The technology integrates with existing environments to allow organizations to test, train, and deploy their AI models without changing workflow, the company says. In addition, security teams can use Cranium's playbook alongside the software to protect their AI systems and adhere to existing US and EU regulatory standards.

With Cranium's launch, KPMG is tapping into growing concerns about adversarial AI: attacks in which AI systems are intentionally manipulated to produce incorrect or harmful results. For example, an autonomous vehicle that has been manipulated could cause a serious accident, or a facial recognition system that has been attacked could misidentify individuals and lead to false arrests. These attacks can come from a variety of sources, including malicious actors, and could be used to spread disinformation, conduct cyberattacks, or commit other types of crimes.

Cranium is not the only company working to protect AI applications from adversarial attacks. Competitors such as HiddenLayer and Picus are already building tools to detect and prevent them.

Opportunities for Innovation

The entrepreneurial opportunities in this area are significant, as the risks of adversarial AI are likely to increase in the coming years. There is also incentive for the major players in the AI space — OpenAI, Google, Microsoft, and possibly IBM — to focus on securing the AI models and platforms that they are producing.

Businesses can focus their AI security efforts on detection and prevention, adversarial training, explainability and transparency, or post-attack recovery. Software companies can develop tools and techniques to identify and block adversarial inputs, such as images or text that have been intentionally modified to mislead an AI system. Companies can also develop techniques to detect when an AI system is behaving abnormally or in an unexpected manner, which could be a sign of an attack.
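One illustrative (and deliberately simple) detection signal is prediction uncertainty: adversarial inputs often push a model into unusually uncertain territory. The sketch below is a generic heuristic, not any vendor's method, and the function names and the threshold value are assumptions for illustration:

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def prediction_entropy(logits):
    """Shannon entropy (in nats) of the model's predicted distribution.
    High entropy means the model is unusually uncertain about this input."""
    probs = softmax(logits)
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_suspicious(logits, threshold=1.0):
    """Flag an input whose prediction is abnormally uncertain -- one cheap
    signal (among many) that the input may be adversarial."""
    return prediction_entropy(logits) > threshold
```

In practice a system like this would combine several such signals (input statistics, output drift, known attack signatures) rather than rely on entropy alone.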

Another approach to protecting against adversarial AI is to "train" AI systems to be resistant to attacks. By exposing an AI system to adversarial examples during the training process, developers can help the system learn to recognize and defend against similar attacks in the future. Software companies can develop new algorithms and techniques for adversarial training, as well as tools to evaluate the effectiveness of these techniques.
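A minimal sketch of that idea, using the well-known Fast Gradient Sign Method (FGSM) against a toy logistic-regression model: each training example is replaced by a worst-case perturbed copy before the gradient step. The data, hyperparameters, and function names here are assumptions for illustration:

```python
import numpy as np

def fgsm_perturb(x, w, y, eps=0.1):
    """Fast Gradient Sign Method for a linear classifier (score = w.x):
    nudge each feature in the direction that increases the loss, bounded
    by eps. For labels y in {-1, +1}, the loss gradient wrt x is
    proportional to -y * w, so its sign gives the attack direction."""
    return x + eps * np.sign(-y * w)

def train(xs, ys, adversarial=False, eps=0.1, lr=0.1, epochs=200):
    """Logistic-regression SGD loop; when adversarial=True, each example
    is swapped for its FGSM-perturbed version before the update."""
    w = np.zeros(xs.shape[1])
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            if adversarial:
                x = fgsm_perturb(x, w, y, eps)
            # Gradient step on the logistic loss log(1 + exp(-y * w.x))
            margin = y * w.dot(x)
            grad = -y * x / (1.0 + np.exp(margin))
            w -= lr * grad
    return w
```

The adversarially trained model pays a small cost on clean accuracy in exchange for holding up against perturbed inputs, which is the core trade-off this line of tooling has to measure.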

With AI, it can be difficult to understand how a system is making its decisions. This lack of transparency can make it difficult to detect and defend against adversarial attacks. Software companies can develop tools and techniques to make AI systems more explainable and transparent so that developers and users can better understand how the system is making its decisions and identify potential vulnerabilities.
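One of the simplest explainability techniques is perturbation-based attribution: occlude each feature in turn and measure how much the model's score changes. The sketch below treats the model as a black box; the function names and baseline choice are assumptions for illustration:

```python
def feature_importance(model, x, baseline=0.0):
    """Perturbation-based attribution: score how much the model's output
    drops when each feature is replaced by a baseline value. Works on any
    black-box callable model(x) -> float, with no access to gradients."""
    base_score = model(x)
    importances = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline  # knock out one feature at a time
        importances.append(base_score - model(occluded))
    return importances
```

Large, unexpected attributions on features a human would consider irrelevant can itself be a tell that an input has been tampered with.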

Even with the best prevention techniques in place, it's possible that an AI system could still be breached. In these cases, it's important to have tools and techniques to recover from the attack and restore the system to a safe and functional state. Software companies can develop tools to help identify and remove any malicious code or inputs, as well as techniques to restore the system to a "clean" state.
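A basic building block for that kind of recovery is a hash-verified snapshot of known-good model weights: detect tampering by comparing content hashes, then roll back to the trusted copy. This is a generic sketch (class and function names are assumptions), not a description of any particular product:

```python
import hashlib
import pickle

def fingerprint(weights):
    """Content hash of serialized model weights."""
    return hashlib.sha256(pickle.dumps(weights)).hexdigest()

class ModelVault:
    """Hold a trusted snapshot of a model's weights; detect tampering by
    comparing hashes, and roll back to the clean copy on demand."""

    def __init__(self, weights):
        self._clean = pickle.dumps(weights)
        self._hash = hashlib.sha256(self._clean).hexdigest()

    def is_tampered(self, weights):
        return fingerprint(weights) != self._hash

    def restore(self):
        """Return a fresh copy of the trusted snapshot."""
        return pickle.loads(self._clean)
```

Real deployments would also need to verify the snapshot itself (signed, stored out of band) and to retrain or re-validate if the clean copy predates the compromise.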

However, protecting AI models can be challenging. It can be difficult to test and validate the effectiveness of AI security solutions, since attackers can constantly adapt and evolve their techniques. There is also the risk of unintended consequences, where AI security solutions could themselves introduce new vulnerabilities.

Overall, the risks of adversarial AI are significant, but so are the entrepreneurial opportunities for software companies to innovate in this area. In addition to improving the safety and reliability of AI systems, protecting against adversarial AI can help build trust and confidence in AI among users and stakeholders. This, in turn, can help drive adoption and innovation in the field.

Kelly Jackson Higgins, Editor-in-Chief, Dark Reading