The Progressive Post

Future-proofing AI: regulation for innovation, human rights and societal progress

Professor of Responsible Artificial Intelligence at Umeå University, Sweden, and director of WASP-HS, the Wallenberg Program on Humanities and Society for AI
15/06/2023

How can AI be regulated so that it enhances innovation and competitive power, remains future-proof, protects fundamental human rights, and delivers societal benefits? This is the task the European Union set itself with the introduction of the AI Act, which was approved by the European Parliament on 14 June.

Establishing a comprehensive and future-proof European legal framework of ethical principles for the development, deployment and use of AI is an important task. It has implications not only for Europe but for the whole world, and for how AI applications are developed and used globally. However, even now, AI does not exist in a regulatory vacuum. We are not starting from zero: many laws, directives, standards and guidelines already apply to AI systems, products and results. Supporting a better understanding and integration of these existing frameworks is as important as introducing new regulation. At the same time, an overly narrow understanding of what constitutes an AI system will not improve public trust. From the perspective of those affected by technology, particularly by errors in automated decision-making, it does not matter whether a decision was made by a sophisticated AI algorithm or a simple spreadsheet: if it is wrong, it needs to be corrected, whether AI was involved or not.

The potential impact of AI on society is huge, both in a positive and a negative sense. In particular, it is crucial to address the ethical consequences of AI systems, such as:

  • The lack of transparency of AI tools: AI decisions are not always intelligible to humans,
  • Value alignment: AI is not neutral, and AI-based decisions are therefore susceptible to inaccuracies, discriminatory outcomes, and embedded or inserted bias,
  • Commodification: we and our societies are more than the data that is available about us, and at the same time, data gathering and management raise concerns about the privacy of users,
  • Power imbalances: increasingly, the ability to deploy and benefit from large AI systems is concentrated in the hands of a few (mostly private) organisations. This leads to issues of democratic accountability and the deepening of societal and economic divides.

Most stakeholders agree on the need for some form of regulation in the AI and digital technology landscape. Regulatory efforts must consider both current AI capabilities and future developments, especially in generative AI models, which can be applied in diverse ways with varying risks. For example, the risks of using such systems to summarise a newspaper article are very different from those of using the same technology to give medical advice. Regulatory processes need to monitor these developments, guiding responsible research and innovation rather than inhibiting it.

In attempting to balance scientific progress with ethical considerations and the avoidance of harm, lessons can be learned from existing regulations on genetic manipulation. It is therefore crucial to consider the implications for AI legislation of a study on genetic technology regulation published by the European Commission in 2021, which established that limitations in the capacity of legislation to keep pace with scientific developments can cause implementation challenges as well as legal uncertainties.

Concrete steps (technical, organisational and legal) are needed now to ensure that AI legislation supports continuous innovation and scientific developments aligned with human rights and societal values, rather than becoming a burden in the development process. Moreover, effective AI regulation must involve international cooperation and coordination, given the transnational nature of research and development, the potential societal and environmental impact, and the geopolitical strategies at play. In the case of genetic technology, the Cartagena Protocol on Biosafety aims to establish common guidelines and procedures. In contrast, global coordination on AI regulation is currently less established, although there are ongoing discussions and initiatives to promote international cooperation in this area, notably by UNESCO, the OECD and the Global Partnership on Artificial Intelligence (GPAI).

Contrary to current narratives of existential risk, AI is not out of control; rather, we are losing control over those who develop and exploit AI. However, demanding responsibility and accountability from organisations developing and deploying AI systems must be accompanied by concrete steps in governance, at the national as well as the global level. Not just nice words about which principles AI should follow, but concrete solutions: for instance, preconditions for making such systems available, technical requirements on transparency and explainability, demands on auditing of the organisations deploying these systems, and perhaps also demands on the expertise of those using the systems. After all, we do not let pharmaceutical companies release medicines without extensive testing, nor do we let people drive cars without a driver's license.

It is crucial to avoid embracing an 'arms race' narrative that implies a singular finish line, where winners and losers can be identified, and that there is one direction to follow towards that line. Such a narrative undermines the systematic and long-term research and development necessary to explore consequences, improve AI technologies and their implementations, and refine regulations. The primary role of regulation at this stage is to incentivise such exploration. Responsible innovation, focusing on sustainability and fundamental human rights, should serve as the main support for regulation. Just as exploratory innovations have led to higher standards in various sectors, from green energy to fair trade initiatives and LED lamps, we urgently need responsible AI innovations that demonstrate sustainable, trustworthy alternatives to current AI developments.

Instead of being viewed as a constraint on innovation, regulation should be framed as a powerful stepping stone for it. Like a beacon, regulation signals the ambitions expected of innovation.

