The EU’s dangerous proposal for stopping online child sexual abuse material

05/07/2023

Child sexual abuse material is a horror, causing long-term harm to victims. The numbers are increasing: the US National Center for Missing and Exploited Children (NCMEC), which collects and shares child sexual abuse material evidence with authorised parties, received 29 million reports of online sexual exploitation in 2021 – a tenfold increase over 2011 – and recorded a 40 per cent increase in internet videos of child sexual abuse between 2020 and 2021. Yet most computer security experts and privacy advocates strongly oppose the EU proposal requiring online providers to recognise and remove all known child sexual abuse material, detect new abuse material and detect ‘grooming’ (enticing and luring a minor into a sexually abusive situation). There simply is no technology that can do this.

Feasibility and proportionality are at issue. Though current technology can largely recognise previously identified child sexual abuse material (CSAM), it cannot effectively identify new CSAM or grooming at scale. Current and proposed technologies produce both false negatives – missing a crime – and false positives – mistakenly flagging innocuous content as CSAM. Both kinds of error can be deliberately induced: by someone altering content to evade detection, or by someone targeting a victim with inoffensive-looking content crafted to trigger a false positive.

Even a whiff of a CSAM investigation is enough to make a person and their family outcasts in their community. False accusations have led to suicides. Implementing the EU regulation would greatly increase the number of false positives – and of falsely accused people.

Furthermore – and critically – though the regulation does not explicitly say so, satisfying its requirements would effectively break the security guarantees of end-to-end encryption. Such encryption is the basis for the confidentiality provided by messaging apps like Signal and WhatsApp, and is the only method for ensuring confidentiality of communications. As many in national security and law enforcement have noted, such encryption is critical for protecting industry, national security, and individuals. The EU regulation would prevent its use.

A 2021 study for the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) recommended instead a targeted approach: do not monitor everyone, only those already under suspicion of Child Sexual Abuse and Exploitation (CSAE). The proposed regulation did not adopt this approach. The 2023 report for the LIBE committee concluded that the regulation’s obligations on technology providers “would likely fail the proportionality test”. To understand how investigations might work in the presence of encryption, let us dig into the crimes the proposal seeks to address. CSAE – the more accurate name for these crimes – consists of four separate types of abuse.

First is CSAM, the production and distribution of photographs and videos of child sexual abuse. The number of reported instances is high, but the number of distinct images and videos – and of different children affected – is much lower: 90 per cent of Meta’s reporting to NCMEC in October and November 2020 was effectively the same as, or similar to, previously reported content, and half of all reports concerned the same six videos. Second is Perceived First Person (PFP) material, in which a child shares a nude image of themselves. This is not criminal, but it becomes so when the photo is redistributed without permission. The third form of CSAE is online trafficking of children for sexual purposes, and the fourth is real-time internet video of child sexual abuse. Each crime requires different tools for prevention and investigation.

Though CSAM investigations can be impeded by encryption, there is a large opportunity to stop much of the redistribution. Meta learned that resharing of CSAM often occurs not out of prurient interest but out of outrage or a warped sense of humour. An American University report observed that warning of severe legal consequences for sharing CSAM can have a strong deterrent effect on these sharers.

The American University report proposed other interventions. In PFP cases, the child who shares a nude photo of themselves is not engaging in criminal activity; they may be encouraged to report when the photo is reshared. Sex education, information about online safety and clear routes for reporting abuse can enable this – or even prevent the photo’s creation in the first place.

Meanwhile, abusers offering internet-enabled child sex trafficking and real-time video of child sexual abuse are most likely to be family members or friends. A serious investigatory problem is therefore the child’s unwillingness to see the abuser prosecuted. Interventions that empower the child, including safe community spaces and sex education, can be crucial. And investigators can use online signals – the abusers’ advertisements and the odd communication patterns surrounding real-time video sessions – even when the video itself is encrypted.

Proponents argue the regulation will pressure technology firms to improve their efforts at finding and reporting CSAE. Two techniques are proposed: perceptual hashing and machine learning – the former for recognising previously known CSAM, the latter for discerning new instances and grooming. But the belief that such techniques can work effectively at this task is illusory. As a group of computer security experts documented, the proposal’s supporters ignore technological realities (disclosure: I co-authored that report).

Perceptual hashing, currently responsible for much of CSAM reporting, divides an image into many tiny squares and computes a ‘hash’ – a mapping of a long string of bits to a much smaller, fixed-size one – then matches images whose hashes are ‘close by’. This allows a CSAM image to be recognised even if it has been cropped, blurred or otherwise changed in minor ways. But perceptual hashing can be fooled: researchers have created images of a young girl and of a beagle whose hashes match. Such technical deceit means a perceptual hashing system can be misled into reporting a CSAM image where none exists. (Perceptual hashes also suffer from false negatives, meaning modified CSAM images may be missed.)
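To make the mechanism concrete, here is a minimal sketch of one simple perceptual-hash family, the ‘difference hash’ (dHash), matched by Hamming distance. This is an illustration only: deployed CSAM-detection systems such as Microsoft’s PhotoDNA use more sophisticated, proprietary algorithms, and the hash size and matching threshold below are assumptions chosen for clarity.

```python
# Minimal sketch of a difference hash (dHash); illustrative only.
# Deployed systems (e.g. PhotoDNA) use more robust, proprietary algorithms.
from PIL import Image  # pip install Pillow


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit perceptual hash of an image."""
    # Shrinking to a tiny greyscale grid discards fine detail, so crops,
    # blurs and recompression change the hash only slightly.
    img = Image.open(image_path).convert("L").resize(
        (hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def is_match(h1: int, h2: int, threshold: int = 10) -> bool:
    """Treat two images as 'the same' if their hashes differ in few bits."""
    # The threshold embodies the trade-off described above: too low and
    # modified images slip through (false negatives); too high and
    # unrelated images collide (false positives).
    return bin(h1 ^ h2).count("1") <= threshold
```

The girl-and-beagle collision described above exploits exactly this structure: an image’s pixels are adjusted, often imperceptibly, until its hash falls within the matching threshold of a target image’s hash.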

Scale makes perceptual hashing an inadequate solution for CSAM recognition. Every message that tests positive for CSAM must be examined, and users send many billions of messages daily. If even a tiny fraction of scanned messages are falsely flagged, service providers will be unable to manage the volume. And as criminals develop further ways to fool the technology, high numbers of false positives and false negatives will be a reality. Experimental efforts to use machine learning to unearth grooming face the same problems.
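The base-rate arithmetic is worth spelling out. The figures below are assumptions chosen purely to illustrate the scale problem – a platform scanning 100 billion messages a day with an optimistically low false-positive rate of 0.01 per cent – not the measured error rates of any deployed system.

```python
# Back-of-the-envelope illustration of scale; both numbers are assumptions.
messages_per_day = 100_000_000_000  # assume ~100 billion messages scanned daily
false_positive_rate = 0.0001        # assume an optimistic 0.01% error rate

false_alarms = messages_per_day * false_positive_rate
print(f"{false_alarms:,.0f} innocent messages flagged per day")
# -> 10,000,000 innocent messages flagged per day, each of which would in
#    principle need human review before any report or accusation is made.
```

Even if the error rate were a hundred times lower, reviewers would still face 100,000 false alarms every single day.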

Simply put, the two technologies fail the efficacy test. Claims that these systems can be developed to satisfy the EU proposal’s requirements reflect wishful thinking by policymakers rather than hard-eyed analysis by technologists.

Because storing CSAE material is illegal, academics cannot study the efficacy of current detection techniques. European law enforcement is largely silent, but we have some information. In 2022 the Irish Council for Civil Liberties queried An Garda Síochána, the Irish national police, about the NCMEC referrals it received for investigation in 2020. The police had received 4,192 referrals, of which 409 were actionable; 265 of those cases were completed. A larger number – 471, or eleven per cent of the referrals – were deemed not to be CSAM at all. An Garda Síochána kept the files anyway. So 471 people now have police records because a programme incorrectly flagged them as possessing CSAM.
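Taking the Garda figures at face value, the proportions follow directly from the numbers quoted above (this is simple arithmetic, not additional data):

```python
# Proportions implied by the An Garda Síochána figures quoted above.
referrals = 4_192   # NCMEC referrals received for 2020
actionable = 409    # deemed actionable
not_csam = 471      # deemed not to be CSAM at all

print(f"actionable: {actionable / referrals:.1%}")  # ~9.8% of referrals
print(f"not CSAM:   {not_csam / referrals:.1%}")    # ~11.2% of referrals
```

Fewer than one in ten referrals were actionable, while more than one in ten were not CSAM at all.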

Based on an illusion, the EU CSAM regulation is downright dangerous. It should not pass.

