Trends Forecaster. Studies the intersection of digital breakthroughs, scientific discoveries and brain chemistry
I have been calling for ethical and regulatory oversight in AI for years in my keynote speeches and talks around the world.
According to Fortune Business Insights, business use of AI has grown a whopping 270% over the past four years alone, and the AI market is expected to be worth over $250 billion by 2025 - figures that reflect the potential reach of these regulations. The definition of ‘high-risk AI’ might catch business leaders out. We’re not talking about military-grade AI robots (those are exempt from these regulations anyway) but about many things we currently consider normal.
Recruitment algorithms, credit-check AIs, and educational institution selections, for instance - many core business practices will fall under the new rules.
The latest EU regulations are constructive in theory, but the caveat is that any attempt to regulate AI has to be all-encompassing by its very nature - and that same broadness causes issues.
Take, for example, a ban on AI used to take a ‘decision to one’s detriment’ - does a fast-food manufacturer using AI to target its ideal consumer via marketing fall under this banner, since health problems are technically a detriment to quality of life?
Business leaders need to take full stock of their digital processes and data usage to see whether they’re in line with the new regulations.
The most vital takeaway for business leaders looking at these regulations is that, in the not-so-distant future, if a company is taken to court over data or AI-related issues, a transparent, well-tested, and fair AI with substantial oversight and regulation will form part of the basis of any case. This means it is in an organisation’s best interest to embrace these regulations enthusiastically.
Many are likening the new AI rules to GDPR, but I don't see it that way. These new regulations are like GDPR in the same way that a car is like a ship. Both use engines and many of the same engineering concepts, but in essence, they’re very different vehicles for different purposes.
In essence, this will change things for the everyday person in much the same way GDPR did - more personal control of data and an increased emphasis on rights and fairness.
However, these new regulations cover a much wider theoretical base than GDPR and address burning issues such as deep fakes, police surveillance, and recruitment fairness.
A good indication of how significant this step is: according to NewVantage, 90% of industry-leading businesses have ongoing investments in AI.
This is an extremely significant step. It may contain loopholes and vagueness - much like any attempt to regulate tech - but the fact that, as a society, we’re taking concrete steps to tackle this issue is a huge philosophical and ethical move in the right direction.
Can it deliver on its potential right now? To some extent, yes. There are enough vague terms and loopholes that a business can carefully manoeuvre around them and still do what it wants.
BUT - and this is a big one - it clearly signals to AI developers which way the wind is blowing. It is the first step towards not only forcing AI companies to adhere to the rules but also informing the public of what it is right to expect from AI. Consider how loophole-ridden environmental protection laws were when they first came into effect, or the early law surrounding air travel.
What is important to consider is that the public supports this. Recent legal cases such as Bridges v South Wales Police have shown that the general public, and indeed the judges of the land, share concerns about AI - and in this case were unhappy about how it was being trialled by police forces.
It is vital to remember that although this was a very close case that survived multiple appeals by the skin of its teeth, the biggest positive in South Wales Police’s corner was the fact that it wrote and obeyed very strict rules on how its AI systems could behave and be used.
Organisations need to make sure that they have sufficient expertise to make the most of AI. Hiring an AI advisor or training staff should be a priority.
Once adopted, business leaders will have to stay on top of advancements because it is a safe bet that these regulations will change swiftly.
A lack of trained and experienced staff is an expected constraint on the AI market’s growth. Technology ethics training should be offered to key staff. Organisations such as The Open Data Institute offer courses such as ‘Introduction to Data Ethics and the Data Ethics Canvas’, which can be completed in just a few hours.
The Centre for Data Ethics and Innovation has already started work on a roadmap for AI assurance to ensure public trust in AI. According to PwC, more than half of businesses have noticed a productivity boost after introducing AI, and explainable AI stands to push this even further with more fine-tunable, white-box rules. Many business leaders will be relieved at the added protections and will find it relatively easy to stay on the correct side of the regulations.
However, business leaders who deal entirely in ‘high risk’ areas such as recruitment or credit will find that they have to adapt many of their core business practices, leading to some frustration.
That said, this is a good opportunity for smaller businesses to adapt faster and better, gaining ground on larger, slower competitors and increasing their own market share.
According to IBM, more than 3 in 4 businesses say it is important for them to be able to trust AI’s analysis, results, and recommendations. This means most businesses will be breathing a sigh of relief due to the incoming regulations.
Believe it or not, the leaders in ethical and explainable AI tend to come from academia rather than private research. The Open Ethics Initiative and the Future of Humanity Institute are good examples of this. This is slowly affecting the corporate world as well, with many major social media platforms and other companies beginning to take data privacy and data rights more seriously.
Companies such as Pega are well known for making their software transparent, and their approach to AI has been no different. However, there still remains work to be done on a large scale in the industry.
Transparency in AI may take some time, especially if you’re a tech-heavy organisation, but these principles can begin on an organisational level whenever you like.
Workshops from your data lead, transparency to customers about what you use and how, and similar ethical steps can be a very positive early move. Customers will see this and appreciate it, and it is a good head start towards the regulations.
It is critical to remember that even though these regulations are far from perfect, they are a step towards democracy and fairness. Few can forget the case of Robert Williams in Detroit last year who was arrested in front of his family and neighbours, held for 30 hours at a detention centre without charge, had his mug shot, fingerprints, and DNA taken, and was only released when he borrowed enough money for a $1,000 bond.
He couldn’t afford a lawyer, but luckily the American Civil Liberties Union of Michigan agreed to represent him for free as a human rights case. Why is this relevant? There was no evidence against him except an AI system told the police he looked like a criminal.
Not only was this AI system the sole evidence - it had never been put through any kind of testing or quality assurance. Even its creators described the testing phase as ‘unscientific’. Without a lawyer, Robert could very well have gone to prison on AI evidence alone.
We’ve got to ask: Can we risk AI-powered systems controlling many outcomes, here in Europe or in the wider market? It is easy to consider AI a deeply helpful tool - and in many use cases, it certainly is. However, much like other useful tools, it requires constant and careful oversight. These regulations are undoubtedly a step towards that.