Artificial Intelligence Act (AIA): Europe’s Startups to Restore AI Excellence and Trust?

by Michael Huth, October 29th, 2021

Too Long; Didn't Read

In August 2021, without much fanfare, the European Commission published its proposal for comprehensive regulation on artificial intelligence (AI) and its various applications. The prospective Artificial Intelligence Act (AIA) is predictably expansive, and it is deeply embedded within the wider body of EU law concerning technology, privacy, and citizens’ rights. The Act is also the first legislative effort of its magnitude intended to govern the development and deployment of AI applications in a jurisdiction.

In August 2021, without much fanfare, the European Commission published its proposal for comprehensive regulation on artificial intelligence (AI) and its various applications.

The prospective Artificial Intelligence Act (AIA) is predictably expansive, and it is deeply embedded within the wider body of EU law concerning technology, privacy, and citizens’ rights. The Act is also the first legislative effort of its magnitude intended to govern the development and deployment of AI applications in a jurisdiction.

The challenges – and, consequently, opportunities – for startups and SMEs in the context of AI in the EU lie in the Act’s two central goals: guaranteeing AI excellence and improving the public’s trust in AI. In typical European legislator fashion, the aims are lofty and well-meaning yet hazily defined.

Startups and SMEs, being more agile in every sense of the word, are uniquely positioned to master the challenges and realize the opportunities the AIA presents. Here’s why.

“Excellence in AI” as public service

The most straightforward interpretation of achieving “excellence in AI” is to ensure algorithms deliver the greatest benefit possible to the greatest number of people while maintaining profitability. In other words, it means giving AI a user-centric perspective and a public service orientation.

Big Tech has a track record of crafting algorithms that serve shareholders, advertisers, and data merchants exclusively.

These companies have developed an elaborate business model that hinges on the intrusive, shameless, centralized collection of personal data for profit. This approach has been excellent for a select group of wealthy investors and corporations, but users at large have grown increasingly concerned about its consequences and side effects, which range from uncanny targeted ads to nudging and reality-warping echo chambers.

Startups and SMEs have a decided advantage in building AI that benefits society at large as opposed to private interests. Above all, they are not part of the Big Tech business model that lives or dies with the personal data trade. They are free to develop solutions that are both high-tech and customer-oriented, and they can turn a profit and satisfy investors without selling private information behind their customers’ backs.

One need look no further than the rousing success of private messengers like Telegram and Signal, whose popularity skyrocketed after the mainstream services adopted a questionable privacy-policy update that put the data privacy question front and center in users’ minds.

Similar success stories are happening in the AI-heavy field of online search as well, buttressing the point that small and agile AI companies have the best shot at achieving user-minded excellence.

“Improving trust in AI” means creating algorithms that solve problems

The AIA’s other major goal involves creating greater trust in algorithms, and SMEs have the upper hand here, too. Big Tech’s track record of sneaky data collection and nudging tactics has chipped away at the public’s trust in AI – for good reason.

We are a long way from Google’s pretentious slogan “Don’t be evil,” which the tech giant itself retired quietly, perhaps in a moment of healthy self-awareness.

But it’s not only the tracking and manipulation that give AI a bad public image. It’s the expensive self-driving vehicles that crash inexplicably, the snoopy smart home assistants that order unwanted goods, and the proprietary algorithms that have made their way into our hospitals and courtrooms and discriminate against us based on our sensitive personal data.

AI applications should be solving our problems instead of creating new ones; AI should move us forward, not backward. It’s in the nature of big companies and multinational corporations to preserve the status quo and to propagate it over generations, with little consideration of the changing social context.

This profit drive spawns a number of problems, as shown by the recent revelations that Facebook deliberately targeted users with negative and divisive content, with catastrophic effects on liberal democracies and ethnic minorities across the world.

Startups and SMEs, on the other hand, emerge with the drive to solve concrete problems and with the tools and know-how to do so in lockstep with the times. In doing so, they are not only building their own positive brand recognition and reputation; they are also clearing AI’s good name in the public’s eye and restoring trust in this promising technology. But this is not the end of the story.

The AIA as a moral litmus test

Expansive as it is, the AIA is still a piece of European legislation, which means it has plenty of loopholes and backdoors. The Act’s key component is a thorough risk assessment of individual companies’ AI applications. Painstaking as that evaluation procedure is, it can be softened or largely circumvented through a number of exceptions, such as developing and deploying AI applications outside the European Union’s borders.

After years of making hefty profits and paying zero taxes, Big Tech can reasonably be expected to exploit the AIA’s loopholes to evade proper algorithm oversight, too.

“Move fast and break things” was declared dead a while ago, but when it comes to Big Tech’s AI applications, the principle is alive and well.

The driving force behind their algorithm development is to maximize engagement and data collection, skirting the line of manipulation and robbing users of their free will.

Such effects are hardly the product of “excellent AI,” nor do they do much to improve public trust in AI systems, yet the practices behind them will remain viable under the AIA – corporations simply have to develop and deploy their malicious algorithms beyond EU borders. This is where the projected legislation provides a moral litmus test: the enterprises that choose to continue business as usual will once again confirm that AI excellence and user wellbeing are not on their agenda.

Retire the compliance boogeyman

Startups and SMEs can set themselves apart here yet again. By staying within Europe and committing to a fair and thorough risk assessment, they can send a clear message of transparency and trust. The compliance boogeyman needs to be retired: companies that do the right thing with user data collection and processing are automatically in compliance and beyond the reach of penalties.

A quick look through the heftiest GDPR non-compliance fines to date shows that each one was levied in direct response to a company choosing to nudge or deceive users into sharing personal information, to deny them rightful control over their data, or to violate their civil and personal rights in some significant way. So, if you respect users’ rights, what do you have to fear?

Startups and SMEs should – and will – capitalize on their agility and creativity to develop a new generation of AI that does not rest on data collection but focuses on delivering outstanding service instead. Because they were not born into the data collection mill, they are freer to innovate and to apply the latest privacy-respecting technology, such as edge AI or federated machine learning, which still delivers excellent results.
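
To make the federated learning idea concrete, here is a minimal sketch of federated averaging (FedAvg) in Python. It illustrates the general technique only – it is not Xayn’s or any other company’s actual implementation, and every function name, dataset, and parameter in it is hypothetical. Each simulated client trains on data that never leaves it; only the resulting model weights travel to the server, which averages them.

    # Minimal FedAvg sketch: hypothetical names and data, for illustration only.
    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        # One client's local training: plain gradient descent on a
        # linear model, using only this client's private data.
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
            w -= lr * grad
        return w

    def federated_average(client_weights, client_sizes):
        # Server-side aggregation: a weighted mean of the client models,
        # weighted by how much data each client holds.
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])  # ground truth the clients jointly learn
    global_w = np.zeros(2)

    for _ in range(10):  # ten federated rounds
        updates, sizes = [], []
        for _ in range(3):  # three clients, each with private local data
            X = rng.normal(size=(50, 2))
            y = X @ true_w + rng.normal(scale=0.1, size=50)
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        global_w = federated_average(updates, sizes)  # only weights are shared

    print("learned weights:", global_w)  # close to true_w, raw data never pooled

A production system would layer secure aggregation or differential privacy on top of this, but the core point stands: the server ends up with a model, not with the users’ data.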

Last but not least, small enterprises are eligible for a range of R&D grants and project funding that can further enhance their AI development work and jumpstart them on their way to AI excellence and trust.

For all its lofty goals and vague formulations, the AIA clearly sets up startups and SMEs for algorithmic success. They can – and should – rise to the occasion.

About Michael

Professor Michael Huth (Ph.D.) is Co-Founder and Chief Research Officer of Xayn and teaches at Imperial College London. His research focuses on Cybersecurity, Cryptography, and Mathematical Modeling, as well as security and privacy in Machine Learning. He served as the technical lead of the Harnessing Economic Value theme at the PETRAS IoT Cybersecurity Research Hub in the UK. In 2017, he founded Xayn together with Leif-Nissen Lundbæk and Felix Hahmann. Xayn offers a privacy-protecting search engine that lets users regain control over the algorithms while enjoying a smooth user experience. Winner of the first Porsche Innovation Contest, the AI company has already worked successfully with Porsche, Daimler, Deutsche Bahn, and Siemens.

Professor Huth studied Mathematics at TU Darmstadt and obtained his Ph.D. at Tulane University, New Orleans. He worked at TU Darmstadt and Kansas State University, and spent a research sabbatical at the University of Oxford. Huth has authored numerous scientific publications and is an experienced speaker on international stages.