With two full decades having passed since Haley Joel Osment played the digital-age Pinocchio on whom the film centres, we're yet to see human replicants operating beside us. However, AI is very much here, and it's having a transformational impact on industry.
According to the 2022 AI Activity in UK Businesses Report, 68% of large companies and 34% of medium-sized companies are already using at least one form of AI technology, and adoption rates show no signs of slowing.
With such a scale of uptake, we are fast approaching a point at which the decisions made by AI technology will affect us all on a daily basis.
As the algorithms become increasingly sophisticated, the question of ethics grows in its urgency.
How well can these digital processes accommodate diversity, inclusivity, privacy, impartiality, and neutrality… and where will we need to set parameters – to regulate?
What’s the impact of no regulation?
Before we can examine how AI regulation could impact industry, we must first address the impact of its absence.
With no stringent, standardised rulebook in place, some AI providers have gone to market with systems that lack explainable decision-making. This has put them and their customers in a position where bringing the technology up to regulatory standards (once those are established) will likely require significant cost and upheaval.
“As the algorithms become increasingly sophisticated, the question of ethics grows in its urgency”
To avert this risk, many of the companies affected are actively working on mitigation strategies.
Meanwhile, behind the curtains, regulators wrestle with the wording and power of future rules to ensure their enforceability and clarity of interpretation.
Is there a price for a ‘compliance badge’?
Once regulations governing AI are defined and committed to statute, industry will be impacted by a different ripple of tremors altogether.
In order to earn a ‘compliance badge’, companies may well have to navigate a maze of bureaucracy. Where this doesn't deter companies from considering AI adoption altogether, it risks making the process so complex and time-consuming that it invites even greater non-compliance.
The obvious solution is for any regulation to be based on a set of broad ethical principles that are easy to comply with.
However, the broader the definitions, the greater the scope for misinterpretation, and that ushers in a whole new assortment of problems, which is why we created the open-source Aletheia Framework™. [link to Framing the Future blog]
Ultimately, examining the potential impact of AI regulation on industry must be an exercise with the end goal of establishing how that impact can be mitigated: how regulation can address the various risks AI presents, while being affordable to comply with, without suppressing the technology's potential to drive positive social change.
Getting to this place will involve the participation of multiple players, not least SMEs. Small and medium-sized businesses occupy a precarious spot: though they stand to benefit greatly from AI, they are also the most vulnerable to punitive and limiting regulation. Their involvement in shaping clear, easy-to-implement rules for organisations of all sizes is therefore essential.
It’s not all about the regulation
Conversations around the potential impact of AI regulation on industry must also confront the suspicion of AI that remains prevalent. As well as developing rules that strike the right balance between freedom and the rigours of regulation, the ‘black box’ nature of AI must be demystified. To this day, corporate adoption is often held back by a lack of understanding of how a system works, what its purpose is, or the rationale behind its automated, data-driven decisions.
Helping industry understand the impact of AI regulation is crucial, but it must not come at the expense of their understanding of AI.