Artificial Intelligence and social rights: a first assessment

07/05/2021

The EC’s recent proposal for AI regulation is the first of its kind in the world. It is an excellent basis on which to shape a domain that opens up enormous opportunities. But the door to abuse and wide interpretation by some member states, as well as by illiberal regimes worldwide, is not yet entirely closed. And against the backdrop of a technology that can be highly intrusive on workers’ rights, better protection is still necessary.

On Wednesday 21 April, the Commission unveiled its long-awaited proposal to regulate AI. It can be considered the first endeavour of its kind in the world, which is why expectations were very high, in Europe as well as abroad. Regulating the use of fast-evolving technologies with such broad implications for all sectors of our economies and societies is undoubtedly no easy task, one that required years of work, study and consultation with a wide range of stakeholders.

The European Parliament actively contributed to this effort from the beginning, calling on the Commission to establish a legislative framework on AI as early as 2017, most recently with its Resolutions on AI from last October and the ongoing work of the Special Committee on Artificial Intelligence. At first glance, the proposal contains a range of undoubtedly positive elements, while some challenges remain.

First of all, despite having announced for months that the focus would be only on high-risk applications, the draft proposal introduces four categories of applications, each with a different degree of obligations: low-risk ones, such as the management of high volumes of non-personal data; those requiring transparency, such as chatbots; high-risk ones, such as robotic surgical devices and predictive tools for granting and revoking social benefits or for making decisions on the termination of employment contracts; and, finally, prohibited applications. This is undoubtedly a response to the concerns that many in the Parliament, as well as civil society organisations, have been voicing in recent months, demanding a more nuanced approach rather than a merely binary one.

In particular, in March we sent a letter to President von der Leyen and the Commissioners involved, co-signed by 116 MEPs from across the political spectrum, demanding that fundamental rights be put first when regulating AI. In her reply, von der Leyen shared our concerns and confirmed the need to “go further” than strict obligations in the case of applications that would be clearly incompatible with fundamental rights. The uses explicitly prohibited in Article 5 of the draft Regulation – subliminal manipulation, exploitation of vulnerable groups for harmful purposes, social scoring and mass surveillance, for example – confirm this commitment. The emphasis on immaterial damage to individuals and society is also to be appreciated, as are the strict requirements on high-quality datasets, robust cybersecurity and human oversight for high-risk applications. However, several aspects remain to be clarified.

The issue that has attracted the most attention from commentators, and on which Commissioners Margrethe Vestager and Thierry Breton spent most of their presentation on 21 April, is that of real-time remote biometric identification systems in public spaces for law enforcement purposes. Compared to the leaked text circulated a week before the presentation of the proposal, which generated much discussion and concern, draft Article 5, point (d) and its subsequent paragraphs provide a more detailed and limited framework for this practice, restricting the cases in which it can be used (now limited to cases such as child abduction, imminent terrorist threats and the localisation of suspects of crimes punishable by more than three years of detention) and making it subject to judicial authorisation and activation by national law. While this constitutes a substantial improvement on the leak, concerns persist about remaining possible abuses and wide interpretation by some member states.

Another area to flag is labour rights: Annex III lists high-risk applications, including those that monitor and evaluate workers’ behaviour and performance, control the time a worker spends in front of the computer when teleworking, or even assess their mood by detecting emotions during calls. According to Article 43, which details the conformity assessment procedures to follow, these and other sensitive applications can undergo an internal conformity assessment (or self-assessment) instead of a third-party one. Regulating a practice so invasive of workers’ rights so loosely can be very dangerous, all the more so considering that these rules will apply to all AI developers targeting the EU market, including non-EU entities that may not share our values.

Affixing the CE mark autonomously would mean that potential violations would only be discovered at a later stage by overburdened market surveillance authorities, once the damage has already occurred. Considering that the allocation of social benefits, illegal migration and crime prevention are on the same list, we can see the risk of undermining constitutional principles such as the presumption of innocence and non-discrimination.

We cannot afford to make mistakes in an era when authoritarian regimes are setting their own, illiberal standards. On the contrary, as with the GDPR, we have a unique opportunity to set a world standard for human-centric, trustworthy AI, allowing our citizens and businesses to make the most of such a promising technology, whose benefits we already experience in a wide variety of sectors. The European Parliament therefore stands positive and ready to improve the text, to ensure appropriate safeguards are in place for high-risk applications, and to stimulate good innovation and the creation of a true Internal Market for AI that serves humanity, not only the interests of the few.
