Hi AI ethics enthusiasts,
Today, I'm excited to share some eye-opening findings from our latest research paper on AI governance. We've taken a deep dive into how companies are handling AI ethics, and the results are both surprising and concerning. I'll walk you through what we discovered, why it matters, and what it means for the future of responsible AI.
You can find the full paper and a video walkthrough here.
For dessert: an AI-generated take on this post in the style of minimalist infographic art!
The Study: A Unique Look Behind the Corporate Curtain
We analyzed public disclosures from over 250 companies, looking at their AI ethics practices. This approach is novel - most previous studies relied on self-reported surveys, which can be less reliable. Our data comes from corporate documents for which executives are personally accountable, potentially offering a more accurate picture.
Our goals were to understand:
1. How prevalent are AI ethics practices?
2. Is there a connection between governance signals (like having AI ethics principles) and actual implementation?
3. How have these practices evolved over time?
What We Found: The Good, The Bad, and The Concerning
🔍 The Volume Issue: The overall volume of reported AI ethics activities is very low. Many companies are talking about AI ethics, but few are walking the walk.
🔗 The Correlation Problem: Here's where it gets interesting - and even more concerning. We found that typical governance signals - things like having AI ethics principles - don't necessarily correlate with implementation. In other words, just because a company has an AI ethics policy, published principles, or a dedicated ethics executive doesn't mean it's actively working to mitigate AI risks. Far from it.
📉 The Trend Concern: Perhaps most worryingly, during 2022 more companies declined in their AI ethics efforts than improved. The majority stayed the same, but any decline in this crucial area is alarming.
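To make the correlation finding concrete, here is a minimal sketch of the kind of check involved. The data below is entirely hypothetical (not the study's dataset), and the plain Pearson correlation is just one illustrative choice of statistic: we pair a binary governance signal (does the company publish AI ethics principles?) with a count of reported implementation activities, and ask whether the two move together.

```python
# Illustrative sketch only: hypothetical toy data, NOT the paper's dataset.
# Compares a binary governance signal (has published AI ethics principles)
# against a count of reported implementation activities.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical companies: (has_principles, implementation_activity_count)
companies = [(1, 0), (1, 1), (1, 0), (0, 0), (1, 2), (0, 1), (1, 0), (0, 0)]
signal = [c[0] for c in companies]
activity = [c[1] for c in companies]

r = pearson(signal, activity)
print(f"correlation between signal and implementation: {r:.2f}")
```

A value of `r` near zero in data like this is what "governance signals don't necessarily correlate with implementation" would look like numerically; the paper's actual analysis is, of course, more involved than this toy check.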
Why This Matters: The Implications Are Huge
1. Risk of Mass Harm: Most companies in our study are large, publicly traded entities. Their AI products affect many people, and if something goes wrong, the impact could be widespread. The low volume of implementation activities suggests a significant risk of AI systems causing harm at scale.
2. Ethics Washing Alert: The lack of correlation between governance signals and implementation activities is a red flag. It suggests that many companies might be engaging in "ethics washing" - talking about AI ethics without taking substantive action.
3. Voluntary Commitments May Not Be Enough: Our findings call into question the effectiveness of voluntary AI ethics commitments, which many governments are currently promoting, including recent initiatives by the US, Canada, and the UK.
4. Investment and Consumer Choices: For investors, consumers, and procurement teams, this research highlights the need to look beyond surface-level commitments when evaluating a company's AI ethics stance. Governance signals alone are not reliable indicators of responsible AI practices.
5. Underinformed Risk Mitigation: There's a concerning lack of activities related to mapping and measuring AI risks. This suggests that many companies may be implementing risk mitigation practices without fully understanding or quantifying the risks they're addressing.
What's Next: Moving Towards Accountable AI Governance
Our research suggests that we need to:
1. Encourage or require companies to report on their active risk mitigation efforts, not just their principles.
2. Develop better ways to evaluate companies based on their actual implementation of AI ethics practices.
3. Investigate why companies struggle to move from AI ethics commitments to concrete actions.
4. Consider the qualifications of those driving AI ethics initiatives within companies.
5. Explore how to better integrate AI ethics into business models to ensure it's not sidelined as a 'nice-to-have' project.
As we navigate the rapidly evolving AI landscape, these insights are crucial for shaping effective governance strategies. They call for a more rigorous approach to AI ethics that goes beyond principles and focuses on measurable, impactful actions.
Dessert
For dessert, an AI-generated take on this post! This one is in the style of minimalist infographic art.
Acknowledgments
This project was graciously supported by the Notre-Dame IBM Tech Ethics Lab and conducted in collaboration with EthicsAnswer, which provided the data.
Huge thanks to:
❤️Tess Buckley who spearheaded the project on the EthicsGrade side
❤️Gil Rosenthal who spearheaded the statistical analysis
❤️Dr Joshua Scarpino, Thorin Bristow, and Luke Patterson who contributed to the analysis and writing
Ready for More?
Check out our comprehensive resources, workshops, and consulting services at www.techbetter.ai, and follow us on LinkedIn: Ravit's page, TechBetter's page.