Enterprises Don't Know What to Buy for Responsible AI

Organizations are struggling to procure the technical tools needed for responsible AI, such as consistent bias detection in AI applications.

Dark Reading Staff, Dark Reading

January 27, 2023

2 Min Read

The potential of artificial intelligence (AI) is growing, but technology that relies on real-life personal data must be used responsibly, says the International Association of Privacy Professionals (IAPP).

"It is clear frameworks enabling consistency, standardization, and responsible use are key elements to AI's success," the IAPP wrote in its recent "Privacy and AI Governance Report."

The use of AI is predicted to grow by more than 25% each year for the next five years, according to PricewaterhouseCoopers. Responsible AI is a practice centered on privacy, human oversight, robustness, accountability, security, explainability, and fairness. However, according to the IAPP report, 80% of surveyed organizations have yet to formalize their choice of tools for assessing the responsible use of AI. Organizations find it difficult to procure appropriate technical tools to address the privacy and ethical risks stemming from AI, the IAPP states.

While organizations have good intentions, they do not have a clear picture of which technologies will get them to responsible AI. In 80% of surveyed organizations, guidelines for ethical AI are limited to high-level policy declarations and strategic objectives, the IAPP says.

"Without a clear understanding of the available categories of tools needed to operationalize responsible AI, individual decision makers following legal requirements or undertaking specific measures to avoid bias or a black box cannot, and do not, base their decisions on the same premises," the report states.

When asked to specify "tools for privacy and responsible AI," 34% of respondents mentioned responsible AI tools, 29% mentioned processes, 24% listed policies, and 13% cited skills.

  • Skills and policies include checklists, using the ICO Accountability Framework, developing and following playbooks, and using Slack and other internal communication tools. Governance, risk, and compliance (GRC) tools were also mentioned in these two categories.

  • Processes include privacy impact assessments, data mapping/tagging/segregation, access management, and records of processing activities (RoPA).

  • Responsible AI tools included Fairlearn, InterpretML, LIME, SHAP, model cards, Truera, and questionnaires filled out by users (a brief bias-detection sketch follows this list).
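
To make the first category concrete, the following is a minimal sketch of bias detection with Fairlearn, one of the tools named in the report. The synthetic dataset, logistic-regression model, and binary sensitive attribute are illustrative assumptions, not details from the IAPP report.

    # A hedged sketch of per-group bias checking with Fairlearn.
    # The data and model here are synthetic stand-ins.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import MetricFrame, demographic_parity_difference

    rng = np.random.default_rng(0)

    # Hypothetical data: 1,000 records, 5 features, a binary label,
    # and a binary sensitive attribute (e.g., a demographic group).
    X = rng.normal(size=(1000, 5))
    y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
    sensitive = rng.integers(0, 2, size=1000)

    model = LogisticRegression().fit(X, y)
    pred = model.predict(X)

    # Accuracy broken down per group: large gaps flag potential bias.
    frame = MetricFrame(metrics=accuracy_score,
                        y_true=y, y_pred=pred,
                        sensitive_features=sensitive)
    print(frame.by_group)

    # Demographic parity difference: 0 means both groups receive
    # positive predictions at the same rate.
    dpd = demographic_parity_difference(y, pred, sensitive_features=sensitive)
    print(f"Demographic parity difference: {dpd:.3f}")

Checks like these are what "consistent bias detection" looks like in practice: the same metric, computed the same way, across every group a model affects.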

While organizations are aware of new technologies such as privacy-enhancing technologies (PETs), they have likely not yet deployed them, according to the IAPP. PETs offer new opportunities for privacy-preserving collaborative data analytics and privacy by design. However, 80% of organizations say they do not deploy PETs, citing concerns about implementation risks.
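
The report does not single out specific PETs, but differential privacy is one widely cited example of the category. Below is a minimal, hypothetical sketch of the Laplace mechanism for releasing a private count; the epsilon value and salary figures are illustrative assumptions, not material from the IAPP report.

    # A hedged sketch of one common PET: differential privacy via
    # the Laplace mechanism. All values below are illustrative.
    import numpy as np

    def dp_count(values, threshold, epsilon=1.0, rng=None):
        """Differentially private count of values above a threshold.
        A count query has sensitivity 1, so Laplace noise with
        scale 1/epsilon suffices."""
        rng = rng or np.random.default_rng()
        true_count = int(np.sum(np.asarray(values) > threshold))
        noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Example: release how many salaries exceed 100,000 without
    # exposing any individual's exact record.
    salaries = [72_000, 134_000, 98_000, 156_000, 88_000, 121_000]
    print(dp_count(salaries, threshold=100_000, epsilon=0.5))

The noisy answer lets collaborators analyze shared data in aggregate while bounding what any single record can reveal, which is the trade-off that makes PETs attractive for privacy by design.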

About the Author(s)

Dark Reading Staff

Dark Reading

Dark Reading is a leading cybersecurity media site.

Keep up with the latest cybersecurity threats, newly discovered vulnerabilities, data breach information, and emerging trends. Delivered daily or weekly right to your email inbox.
