The AI Treasure Chest

AI ethics audits: Trends, limitations, and alternative

Research overview and alternative

Ravit Dotan
Mar 24, 2025
Dear AI ethics enthusiasts,

AI ethics audits assess whether an organization or AI system aligns with established AI ethics standards. An excellent paper, Schiff et al. (2024), explains the most common types and their limitations based on interviews with 34 auditors.

Today, I review the main types of audits and their limitations, as outlined by Schiff et al., and briefly explain my alternative approach, which is based on academic research I’m conducting with organizational psychologists.

🎁My top six suggested audit questions are available to paid subscribers.

For dessert, an AI-generated take on this post!

Let’s dive in!

The three types of AI ethics audits

Schiff et al. identify three main types of AI ethics audits:

  • Algorithmic audit - Focused on the data, performance, or outcomes of one or more AI systems. These audits generally center on the technical aspects of AI, examining specific models, their inputs, and their outputs. According to the Schiff paper, algorithmic audits often involve disparate impact analysis and other forms of algorithmic fairness testing.

  • Governance audit - Focused on organizational processes and organizational structures around a larger set of AI systems. These audits take a broader view, examining how AI development is managed within an organization, including risk assessment processes, oversight mechanisms, and documentation practices. The Schiff study found that governance auditors sometimes incorporate algorithmic audits as part of their more comprehensive approach.

  • SaaS (Software-as-a-Service) audit - Provides technical tools and associated services for assessing AI ethics principles. These providers are typically external to the audited organization and focus on subscription-based tools that support assessments of specific principles like bias, privacy, or explainability. The research identifies these as a distinct category offering ongoing technical support rather than one-time audits.

The limitations of AI ethics audits

Schiff et al. reveal that organizations are generally not ready for AI ethics audits, with several key limitations identified through their interviews with auditors:

  • Insufficient data and model governance: The data and models within organizations are often poorly organized, making them difficult to audit effectively. The study shows that auditors often spend a significant amount of their time trying to convince companies to establish a basic documentation infrastructure necessary for the audit.

  • Limited access to people and information: Auditors often struggle to access the necessary individuals and sufficient information within organizations. When they do gain access, they often encounter competing perspectives and priorities across different teams.

  • Lack of standardized methods: Clear standards are lacking for testing even common topics like algorithmic bias. The research highlights that organizations themselves often don't know how principles should be measured in their specific context.

  • Unclear success metrics: Many audits lack well-defined success measures. According to the paper, some auditors look for changes in organizational plans or KPIs, but many haven't developed specific metrics to evaluate audit effectiveness.

Alternative: Proactive AI ethics behavior

Given these limitations, I’m developing an assessment based on employee surveys and some additional components. The surveys will ask employees involved in AI development about what is called “proactive AI ethics behavior” - how they handle AI ethics issues in their day-to-day work. Measuring the behaviors of relevant employees is an important proxy for an organization’s alignment with AI ethics, given that most organizations are not ready for AI ethics audits.

To choose questions, I am drawing inspiration from similar work in cyber-security. For example, this great paper measured proactive cyber-security behavior by having employees rank their agreement with the following statements:

  1. I try to stay informed about recent trends in software security (e.g., secure coding).

  2. Part of my job is to think of ways to improve the security of the product(s) I develop.

  3. I frequently make suggestions to improve the security of the product(s) I develop.
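Items like these are typically rated on an agreement (Likert) scale and averaged into a single proactive-behavior score per respondent. A minimal sketch of that scoring step, using hypothetical responses and assuming a 5-point scale (the scale and data are illustrative, not from the cited paper):

```python
from statistics import mean

# Hypothetical 5-point Likert ratings (1 = strongly disagree, 5 = strongly agree)
# for the three proactive-security items above, one list per respondent.
responses = [
    [4, 5, 3],
    [2, 3, 2],
    [5, 5, 4],
]

def proactive_score(ratings: list[int]) -> float:
    """Average a respondent's item ratings into one proactive-behavior score."""
    return mean(ratings)

scores = [proactive_score(r) for r in responses]
print(scores)        # per-respondent scores, e.g. [4.0, 2.33..., 4.66...]
print(mean(scores))  # sample-level average
```

In practice, validated scales also check internal consistency across items before averaging, but the basic measure is this per-respondent mean.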

But what should we ask in the case of AI ethics? Here are some ideas based on the cyber-security paper, the NIST AI RMF, and previous research with my organizational psychology colleagues (which will be public soon!).

This post is for paid subscribers

Already a paid subscriber? Sign in
© 2025 Ravit Dotan
Privacy ∙ Terms ∙ Collection notice
Start writingGet the app
Substack is the home for great culture

Share