The Progressive Post
Tightening the guardrails for AI
On 14 June, the European Parliament took a decisive step towards the adoption of common rules on artificial intelligence (AI) for EU member states, approving, by a very large majority of 499 votes in favour, with only 28 against and 93 abstentions, the negotiating mandate to be taken into the trilogue with the Council and the European Commission.
The vote confirmed the text approved on 11 May, also by a very large majority, in the joint Internal Market (IMCO) and Civil Liberties (LIBE) committees. The first negotiation meeting took place on the very evening of the vote, and it is possible that, by the end of this European legislature, the historic goal of the world's first comprehensive regulation of the matter will be reached.
The regulation must hold together several problems and objectives: establishing a governance system, managing the risks of AI, laying down obligations and prohibitions for providers and users, defining protections for citizens (for example in the areas of privacy and non-discrimination), and promoting the development of innovation and technology in line with EU values. The negotiations were particularly challenging because of the breadth and complexity of the issues at stake, but above all because very different positions were pursued even within the same political groups, and because applications such as ChatGPT, which had a great public impact, emerged during the approval process.
A first issue therefore concerns the governance system, which will be very important to 'guide' the first phase, when there will certainly be disagreements over the application of the rules, and then to 'accompany' the regulation of a field that changes very rapidly. A European Office for Artificial Intelligence is to be set up, endowed with legal personality and independent of the European Commission, to which it will give opinions on the implementation of the regulation and on the development of AI. Member states must designate a National Supervisory Authority with the same requirements of independence and adequate resources; the Management Board of the AI Office is composed of representatives of the national authorities, the Commission, the European Data Protection Supervisor, the EU Agency for Cybersecurity and the EU Agency for Fundamental Rights. We have also provided for a Consultative Forum to ensure the permanent involvement of the various stakeholders, economic actors and civil society.
The regulation defines a series of general prohibitions and establishes graduated obligations and responsibilities based on different levels of risk of infringing rights; the second major issue concerns the regulation of so-called high-risk artificial intelligence systems. Compared to the European Commission's proposal, to be classified in this category an artificial intelligence system must now also pose a 'significant risk' of harm to health, safety or fundamental rights. Applying this criterion will be delicate, because different aspects such as severity, intensity, probability of harm and duration of effects will have to be assessed. We have also achieved the inclusion, in some areas, of the risk of harm to the environment, and extended the scope of application to AI systems related to the provision of essential services, to migration and border management, to influencing voters in political campaigns, and to recommending content on major social media platforms.
As Socialists and Democrats, we then managed to obtain a good compromise on a third important controversy, namely video surveillance, where the ultra-securitarian position was strongly supported by some centre-right national governments, and a separate vote on this issue was requested in the joint IMCO and LIBE committees last May. Compared to the draft AI Act presented by the European Commission, we have established that biometric identification software can only be used ex post, and only for investigations of a 'serious crime' for which video recordings need to be analysed, with an authorisation from the judiciary. At the same time, a ban has been placed on any biometric identification carried out in real time in public places. One of the most relevant effects is therefore the impossibility of using forms of facial recognition, for example, to combat illegal immigration or to defend national borders, which clearly fall under 'public places'.
Also very important for our political group is the introduction of the obligation to carry out a fundamental rights impact assessment for those who deploy a high-risk AI system on people. Only the end user can in fact have precise information on the context and on the specific target population to which they intend to apply an AI system. Added to this are two important measures for protecting workers: the obligation, again for the end user, to involve the trade unions in an agreement before introducing an AI system in the workplace, and to inform the workers concerned. Moreover, since employment is a matter of shared competence and thus not suited to full harmonisation by regulation, both the EU and the member states retain the possibility of introducing stricter measures on the subject.
The regulation is long and highly detailed, but I would like to touch briefly on two more issues. We have extended the scope of the regulation to include so-called 'foundation models', which are not yet AI systems in themselves, but models developed and 'trained' on millions of data points that can be integrated into AI systems: the best-known example is GPT, the foundation model on which ChatGPT is based. For these models, manufacturers must identify possible foreseeable risks to health, safety, fundamental rights, the environment, democracy and the rule of law, and define appropriate controls. Finally, each member state must define a regulatory framework to foster innovation and ensure that a company using a regulatory sandbox (a space for experimentation in a controlled environment) has a specific plan, agreed with the responsible authority, to test its innovation for a limited period.