AI Governance in Startups: A new survey sheds light on industry trends
Plus: Why do startups neglect reliability standards?
Hi AI ethics enthusiasts,
A new preliminary survey sheds light on AI ethics implementation in startups. It’s the first survey of its kind that I know of!
The results show low performance all around. The most shocking finding is that only 20% of respondents actively ensured the accuracy, consistency, and trustworthiness of AI-generated outcomes and predictions.
Yes, dwell on this thought for a minute. In the world of rampant GenAI inaccuracies and misinformation, only 20% of respondents address the validity and reliability of AI-generated outcomes and predictions.
I am less surprised, but still very concerned, about the overall trends, which show low levels of best practices implementation.
Today, I give an overview of the results. The point is hope: We can do better!
For dessert, an AI-generated take on this hope!
About the Survey
The survey is led by Tracy Barba from the Lucas Institute for Venture Ethics (at the Markkula Center at Santa Clara University). I am proud to share that I was part of the council that supported the development of this survey (the Responsible AI Venture Council)!
This preliminary survey includes 45 startups:
57% at the seed stage, 29% at pre-seed, and 14% at Series A.
The top industries are Healthcare (36%), Finance and Banking (14%), and Retail and E-commerce (14%).
The primary business functions for AI systems are Data Analysis and Insights (71%), Product or Service Innovation (50%), and Operational Efficiency and Automation (43%).
This survey is unique in its focus on startups. Other surveys, like McKinsey's State of AI reports and my own past work, don't distinguish between different types of companies.
The team is looking for additional participants. Share the participation link with your startup friends to help us understand the trends better!
You can read the full preliminary results here:
Formal Risk Management Activities
The survey asked companies about their level of implementation of RAI practices, ranging from no implementation to established guidelines and formal policies. These are the results:
80% No Validity and Reliability: No formal or informal policies, practices, and activities in place to manage the accuracy, consistency, and trustworthiness of AI-generated outcomes and predictions.
80% No Fairness and Bias: No formal or informal policies, practices, and activities in place to identify, prevent, and correct biases in AI algorithms and datasets.
78% No Environmental: No formal or informal policies, practices, and activities in place to manage the ecological impact of AI operations and initiatives to minimize carbon footprint and promote sustainability.
70% No Safety: No formal or informal policies, practices, and activities in place to ensure that AI applications do not harm users or society and comply with safety standards.
60% No Explainability and Interpretability: No formal or informal policies, practices, and activities in place to ensure the transparency of AI decision-making processes and users' ability to understand and trust AI outputs.
57% No Robust Data Privacy Management: No robust measures in place to protect personal and sensitive information from unauthorized access or disclosure.
43% No Data Rights Management: No formal or informal policies, practices, and activities in place to define, enforce, and respect individuals' and entities' data ownership and usage rights.
General Governance
The survey also asked about companies’ general AI governance. The results are a bit more encouraging:
Regulation: 71% of the companies have reviewed regulations related to AI use and application.
Advisors: 50% of the companies have advisors with RAI expertise.
Feedback: 64% of the companies engage with stakeholders to solicit feedback while designing, developing, and deploying AI products and services.
Clear values: 43% of the companies have a clearly defined set of values that guide their decision-making.
Reflections
This survey is small and preliminary (remember to share the participation link with your startup friends!). However, the results are jarring.
As I mentioned earlier, the most shocking finding to me is the low activity in validity and reliability. AI’s accuracy problems are well known, and companies that ignore them shoot themselves in the foot. Their products are just going to be lower quality.
I am puzzled by this gap, but my sense is that AI hype explains it. I think many are so overconfident about the performance of AI tools that they don’t bother checking. That is the cost of overblown hype.
Moreover, the overall trend is concerning. By and large, respondents didn’t address prevalent AI ethics themes such as fairness and safety. These results are not surprising, given similar trends observed in the industry as a whole (e.g., see the McKinsey survey from earlier this year and my own work from 2022). But it doesn’t make them any less disturbing. The damage tech companies can create is vast.
This reality puts the onus on all of us. If you use, develop, buy, or invest in AI tools, take time to check whether your tools are accurate, safe, and aligned with your values, and adjust accordingly. Imagine what the world would look like if we all did nothing to change the AI landscape.
Getting started is easier than most think. It starts with curiosity about how you can incorporate AI responsibility into your ongoing work and business model, whether you use, develop, buy, or invest in AI.
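For the technically inclined, here is a minimal sketch (in Python) of what a first validity spot-check could look like, the kind of practice the survey found only 20% of respondents doing in any form. Everything in it is a hypothetical placeholder: call_model() stands in for whatever AI tool you actually use, and the tiny labeled sample stands in for questions from your own domain that you have hand-checked yourself.

    # A minimal sketch of a "validity and reliability" spot-check.
    # call_model() and LABELED_SAMPLE are hypothetical placeholders.

    LABELED_SAMPLE = [
        ("Is 17 a prime number?", "yes"),
        ("Is Paris the capital of Spain?", "no"),
        ("Does water boil at 100 degrees Celsius at sea level?", "yes"),
    ]

    def call_model(question: str) -> str:
        # Replace this stub with a real call to your AI tool's API.
        # It returns a fixed answer only so the sketch runs as-is.
        return "yes"

    def spot_check_accuracy(sample) -> float:
        # Fraction of the tool's answers that match your hand-checked labels.
        hits = sum(
            1 for question, expected in sample
            if call_model(question).strip().lower() == expected
        )
        return hits / len(sample)

    if __name__ == "__main__":
        print(f"Spot-check accuracy: {spot_check_accuracy(LABELED_SAMPLE):.0%}")

Even a toy harness like this forces the two decisions that matter: what counts as a correct answer in your domain, and what accuracy level is acceptable before you ship, buy, or invest.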
On this note, come play the AI ethics game to upskill on AI ethics!
Dessert
To end with hope, here is an AI-generated take on all of us working together to make AI responsible!
Ready for More?
Come play the AI ethics game to upskill on AI ethics!
Check out our comprehensive resources, workshops, and consulting services at www.techbetter.ai, and follow us on LinkedIn: Ravit's page, TechBetter's page.