Do we need AI standards?

I gave a short talk this morning at the APPG on Data Analytics, about AI standards. This is more or less what I said.

Like many people here, I've been coming to meetings about AI ethics and standards for many years, and what still surprises me is how often those conversations sweat the small stuff.

As everyone here knows, there is no shortage of sets of AI standards or principles or manifestoes, but on their own those standards are not enough.

So I want to begin by addressing the elephant in the room whenever we talk about AI ethics, which is that AI will not make society fairer.

Those behavioural science studies you read, about judges being fairer after lunch while bots stay consistent, are neither here nor there. Because the work of AI ethics and standards is not done in the product: it's done in the way we build our society.

And to a large extent, the harms that emerge from AI are harms that were designed into the system.

This morning I learnt, via a tweet from Professor Tim Bale, that a British citizen is 23 times more likely to be prosecuted for benefit crime than for tax crime, even though tax fraud and error cost the country 10 times more.

If the same technology, or the same technology standards, are applied to those two systems, the outcomes will not be "fair"; instead they will reflect the unjust principles built into the way the country is governed.

And, given that, we need to stop being surprised when a new use of AI leads to shocking outcomes.

Which brings me to my second point. 

It is not permissible for industry to keep acting first and applying social tests later, or for OpenAI to be lauded for its daring disruption rather than chastised for its carelessness.

This is not the norm in any other domain: we don't build bridges and then see whether they collapse. And yet this is what keeps happening with AI.

If the UK wants to embrace AI, we also need to become a fairer, more equal country: in which there are no poverty penalties and in which no one is discriminated against on the basis of their protected characteristics. We cannot make data better without improving the society it reflects. 

I'm going to close by proposing three tests:

The first is: Should this exist? 

The second is: Should this keep existing?

The third is: How should it change?

These are questions to be asked in government, on corporate boards, and in product teams, but we need to start with the big picture, not with product manifestos.

Without a better, more equitable social contract, AI will only deepen the divisions in society. The hard work lies in agreeing our values and living in accordance with them, not in agreeing which standards library to implement.

