Framing the future: How a one-page toolkit catches bias

3 November 2022

4 minute read


Every minute of every hour of every day, across the world, AI goes right.

It underpins mission critical systems that keep finance flowing, motorways moving and supply chains functioning. It helps busy health professionals detect serious diseases early on, and ensures the complex energy demands of nations are met.

But along the journey to here and now, otherwise effective and well-intended applications have misfired - and grabbed global headlines. From racial bias in facial recognition software, to gender bias in AI-assisted recruitment, each amplified stumble presents an added challenge in building confidence in this transformative technology. 

From the moment we - as punks, geeks and superfreaks - came together inside Rolls-Royce, we placed the focus on asking ourselves hard questions. Is this AI accurate, true and fair? Is it being operated well? Will it bring good outcomes for the people it is supposed to serve? Will it stand up to the same ethical scrutiny we apply to ourselves?

The Aletheia Framework emerged: a practical, day-to-day toolkit for hardwiring gold-standard ethics and trustworthiness into AI solutions deployed across an entire organisation. It’s a breakthrough we’re proud of.

And we soon realised Aletheia - named after the Greek goddess of truth, in case you were interested - was a tool too useful to keep to ourselves. So we made it open source and invited the world to take a look, to help organisations pinpoint and mitigate the risk of bias in training data and AIs.

"Aletheia is about moving from thinking about the really hard questions around AI ethics to a space in which you can actually take practical steps to deploy artificial intelligences in your processes,” says R2 Factory chief executive, Caroline Gorski. “It’s about doing it in a way that you can not only trust, but you can also feel confident is aligning with the ethical position you take in the world.”

“When you systematise human decision making into a machine, you do run the risk of systemising human bias,” concedes Caroline. “But what's very interesting and doesn't get talked about very much is that the bias was always there. Human bias was always present in decisions.”

“Actually making the outcomes clearer by systematising them can be a very valid way of showing the bias that was always inherent in a decision-making process but was previously hidden because you couldn’t see all of the decisions happening.”

The latest cut of our Aletheia Framework guides developers, executives and boards before AI is released and during its use. And if comprehensively applied, it tracks the decisions the AI is making, detects bias and allows human intervention to control and correct it - before it becomes tomorrow’s headline.
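The framework itself is a one-page set of questions and checkpoints rather than code, but to make the idea of in-use monitoring concrete, here is a minimal, purely illustrative sketch of that kind of check: log the decisions a model makes, compare outcome rates across groups, and escalate to a human reviewer when they drift apart. The group names, the 0.8 threshold (the common “four-fifths” rule of thumb) and the data are assumptions made for this example only.

```python
# Illustrative sketch only - not part of the Aletheia Framework itself.
from collections import defaultdict

def disparate_impact(decisions, threshold=0.8):
    """decisions: iterable of (group, approved) pairs logged as the model runs.
    Returns (ratio, flagged): ratio is the lowest group approval rate divided
    by the highest; flagged is True when it falls below the threshold."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < threshold

# Example: decisions logged in use, reviewed periodically by a person.
log = [("group_a", True), ("group_a", True), ("group_a", False),
       ("group_b", True), ("group_b", False), ("group_b", False)]
ratio, needs_review = disparate_impact(log)
if needs_review:
    print(f"Outcome-rate ratio {ratio:.2f} is below 0.8 - escalate for human review")
```

The point of such a check is not the arithmetic but the workflow it supports: the numbers surface a potential problem, and a person decides whether and how to intervene.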

Regulators are still playing catch-up in this fast-moving space, which puts the imperative on businesses to take a strong ethical lead - not to wait and be told what to do.

“We clearly need to think about the future that we are building,” says Caroline. “We need to consider the inputs and whether the data it is based on contains bias. This may reflect the fact that only a very narrow part of the population has previously been involved in making decisions, with those people bringing their own bias to those decisions.”
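As a purely illustrative starting point (not part of the Aletheia Framework, and with column names and reference shares invented for the example), checking whether training data over-represents a narrow slice of the population can be as simple as comparing group shares against a reference:

```python
# Illustrative only: profile how well the training data represents the
# population the model will serve. Column names and reference shares are assumed.
import pandas as pd

def representation_gap(df, column, reference_shares):
    """Compare each group's share of the training data against a reference
    share (e.g. its share of the population the model will be used on)."""
    observed = df[column].value_counts(normalize=True)
    return {group: round(observed.get(group, 0.0) - share, 2)
            for group, share in reference_shares.items()}

# Example with made-up data: an 80/20 split in the data vs a 50/50 reference.
data = pd.DataFrame({"gender": ["male"] * 80 + ["female"] * 20})
print(representation_gap(data, "gender", {"male": 0.5, "female": 0.5}))
# {'male': 0.3, 'female': -0.3} -> the data over-represents one group
```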

Writing in Forbes, gender diversity leader Carmen Niethammer reminds us that the design and use of AI models in different industries can significantly disadvantage people’s lives. If the right questions are not asked when data is collected, she says, gender gaps can actually widen when algorithms are misinformed. “This does not only have negative impacts on women, but also business and economies.”

Data ethics is a complex area - and we know the Aletheia Framework hasn’t solved all its challenges. But - crucially - it helps give organisations, people and communities more confidence that the ethical implications of an AI have been fully considered. Outfits from sectors as diverse as music, oncology and education have put it into action - and we’ve used their feedback for further fine-tuning.

The drive to put ethics at the heart of AI development and deployment is one few would challenge. But does prescription in the process drive out all creativity and deny space for spontaneity of ideas?

“It’s actually the opposite,” says Maria Ivanciu, one of our brilliant AI geeks. “The Aletheia Framework sparks your creativity. It looks at the human perspective, the accuracy perspective and the governance perspective. It gives you the chance to think hard about things you wouldn’t have thought about when you’re creating a model. It provokes you to think outside the box.”