
AI has escaped the 'sandbox' — can it still be regulated?

Digital / COMMENTARY
Georg Riekeles

Date: 18/04/2023
The stakes for the human race in current AI developments could not be higher. This is no time to cut ethical corners regarding research, regulation, or lobbying.

Recently I was introduced to the concept of "algorithmic gifts" as part of a research interview on tech lobbying in Brussels. The question was how algorithmic favours might be used to sway the direction of debates and policy.

When Twitter released segments of its code a few weeks back, we got a first, perhaps unsurprising, answer: far from being neutral, Twitter's algorithm gives an additional, artificial push to posts from President Joe Biden, Twitter CEO Elon Musk and a few dozen selected luminaries such as basketball player LeBron James, American columnist Ben Shapiro and entrepreneur Marc Andreessen.

It only adds to earlier questions about where Musk's Twitter is heading and, more fundamentally, about the structure and integrity of today's platform-mediated public space.

By now, algorithms create and redistribute power across most aspects of our social, economic, and political life. We live in an algorithmic society and with that come steep ethical questions.

AI and singularity

Nowhere is the acceleration and disruption more evident than in artificial intelligence. The combination of immense data sets, massive computational force, and self-learning algorithms promises to unleash enormous powers — in every sense of the word.

In medical research, to take one example, the use of machine learning and mRNA technology (the same as in the COVID-19 jab) holds tremendous potential. Vaccines against cancer, cardiovascular and auto-immune diseases could be ready by the end of this decade.

Few would want to relinquish this promise. Much more contested is the emergence of so-called General Purpose Artificial Intelligence, or GPAI: self-learning algorithms capable of performing multiple, varied tasks, to the point of giving the impression of thought.

Within two months of its launch, the first application out of the starting blocks, ChatGPT, reached 100 million users, a pace not seen for any other consumer tech application.

As these users now play with prompts, the machine learns. The latest version of the software already boasts spectacular analytical and creative capacities, presaging its future spread into practical human activities from finance (and column writing!) to the arts and sciences.

With the dynamics of exponential advance, the popular scare is that these powerful, 'intelligent' technologies will radically and unpredictably transform our reality — or even develop some form of life of their own.

It's difficult to blame the naysayers and doomsters. In Silicon Valley, sorcerer's apprentices, accoutred with a libertarian philosophy and venture capital, have long been yearning for the moment of 'technological singularity', a future in which technological growth becomes uncontrollable and irreversible.

In the mind of Ray Kurzweil, Google's director of engineering and a key figure in the movement, the process towards singularity has long since begun and will culminate around 2030 (note: the same timeline as for the vaccines).

Computers will have human intelligence, and our (final) choice might be to put them inside our brains, connecting our neocortex to the cloud.

EU's AI Act on the spot

As is often the case, Europeans will be the first to regulate and can, by and large, take some pride in that. An Artificial Intelligence Act has been on the EU lawmakers' table for two years with the aim of setting the guardrails for safe and lawful AI.

Certain AI practices, such as social scoring, will be prohibited. Yet others, categorised as "high risk", will be subject to third-party audits and significant transparency requirements under the legislation, due to be finalised this year.

That is all good, but the response to the commercialisation of GPAI now stands as the decisive test.

In truth, EU lawmakers are very much in the dark about what to do. At a recent lunch with senior Brussels lawmakers and industry representatives, civil society voices raised the question of whether it could all be stopped: the emphatic answer was it could only go faster.

Coincidentally, only days later, more than 1,000 AI experts wrote an open letter asking for a pause in training systems more powerful than GPT-4, saying that "AI systems with human-competitive intelligence can pose profound risks to society and humanity."

On the face of it, the wildfire spread of GPAI already marks a failure of European regulatory efforts. To check the unsupervised real-world testing of AI systems, the EU AI Act had envisaged regulatory 'sandboxes': controlled environments for developing, testing and validating AI innovation.

Regulation is, as ever, reactive and lacks speed compared to the disruptive vitality of technology. Lawmakers must now figure out how to frame this unrestrained acceleration and how to future-proof regulation that will not take effect for more than three years.

Competition policy certainly has a role to play to prevent a few GPAI 'gatekeepers' from becoming single entry points for AI tasks. However, lawmakers must also consider how strict liability regimes can force developers to think twice about out-of-sandbox releases without stifling European domestic developments in the global race ahead.

(Big) Tech ethics

In the contest between Silicon Valley tech mavens and Chinese state-controlled innovation, the window to act meaningfully from Europe can often appear vanishingly small. At the same time, the stakes are lasting and high: we are peeking through an Aventine keyhole into major future ethical and societal dilemmas linked to algorithmic powers.

For something as paradigmatic as AI, one can still hold out hope that all actors will accept the necessity of open democratic debate, control and regulation.

Yet, we should also be wary of illusions: for all the talk of 'doing good' from innovators, most ventures are, in the end, governed by profit-maximising imperatives rather than wider societal interest.

Big Tech's track record, in particular, is appalling. When the EU debated the Digital Services Act in 2022, front groups and other forms of hidden lobbying were swarming all over it.

In a leaked internal memo, Google had then set out a list of by-every-means-possible tactics to fight effective EU regulation, for which Alphabet's CEO Sundar Pichai later had to apologise.

On the other side of the Atlantic, behaviour has been similarly, if not even more, brutal. Ahead of the US midterm elections in 2022, incumbent lawmakers faced arm-twisting threats of being unseated by Big Tech funding going to their political adversaries if congressional bills moved ahead.

Currently, lawmakers in Canada are under fire in the context of regulatory initiatives on online news, broadcasting and online safety. In fact, it has gone so far that the Canadian parliament is undertaking a world-first parliamentary study on the tech giants' use of intimidation and subversion tactics to evade regulation across the globe.

In the end, Europeans are perhaps not alone at the hard edge of regulation. But what is needed is not just regulation but a new and broader paradigm of what I call tech control.

Independence of research is one area where alarm bells are ringing. Meredith Whittaker, a former Google insider and AI ethics researcher who is now president of Signal, has attested to how corporate actors can asphyxiate independence in AI ethics. Even within the EU's AI ethics expert group, it was hard to come by.

The defence of fundamental interests therefore requires not just capable AI agencies and effective liability rules but a wide ecosystem focused and acting on how technology and corporate powers direct our future. As Timnit Gebru, another AI ethics researcher forced out of Google, has pointed out, we should for the time being ascribe agency not to the algorithms but to the organisations building them.

A version of this piece was first published by the EUObserver.

Georg E. Riekeles is an Associate Director and Head of the Europe’s Political Economy programme at the European Policy Centre.

The support the European Policy Centre receives for its ongoing operations, or specifically for its publications, does not constitute an endorsement of their contents, which reflect the views of the authors only. Supporters and partners cannot be held responsible for any use that may be made of the information contained therein.

Photo credits:
AI-generated image by Bing/Dall-E: Peeking through Aventine keyhole onto ethical and societal dilemmas linked to algorithmic powers
