The Progressive Post
Reclaiming workers’ rights in the age of AI: from data protection to collective justice

As AI and data-driven technologies reshape the workplace, they often undermine workers’ rights, privacy and autonomy. However, regulatory frameworks like the EU’s General Data Protection Regulation, combined with worker mobilisation, can combat algorithmic exploitation by advancing collective data rights and systemic reforms to ensure fairer and more transparent digital labour practices.
Artificial intelligence (AI) and data-driven technologies are redesigning the workplace at an unprecedented pace. From recruitment algorithms to employee monitoring systems, AI tools are increasingly deployed to make decisions that profoundly affect workers’ lives. While these systems promise efficiency and streamlined operations – claims that are not always substantiated – they often come with significant costs to privacy, fairness and dignity. It is urgent to address how these systems are used to redefine power dynamics and undermine workers’ rights.
For many workers, algorithmic management systems are far from neutral tools. Instead, they operate as mechanisms that reinforce existing inequalities, stripping workers of autonomy and agency through opaque metrics and decision-making processes. These systems not only reshape the nature of work but also undermine the capacity of individuals and unions to assert their rights effectively.
The stakes are particularly high for gig economy workers, who are often subject to intensive monitoring and evaluation by AI-driven platforms. These platforms frequently operate under the guise of neutrality – a convenient narrative readily embraced by employers. However, the reality is far more insidious: workers are constantly monitored and reduced to data points, stripped of autonomy and agency by metrics they cannot see or influence. This structural imbalance not only jeopardises workers’ privacy but also perpetuates systemic discrimination, as algorithms replicate biases embedded in their training data. Worse, these systems weaponise data to deepen precarity and penalise any deviation from rigid and often inequitable rules.
Without accountability – and the transparency necessary to achieve it – the gig economy risks entrenching inequality and exploitation within the labour market. Nevertheless, amidst these challenges, there is hope: regulatory frameworks, worker mobilisation and union-led advocacy present tangible opportunities to counter these dynamics and reclaim agency in the digital workplace.
Data protection as a tool for labour justice: lessons from the Glovo case
A recent case illustrates both the challenges and opportunities in addressing these issues. Glovo, a Spain-based delivery app and prominent player in the gig economy, faced a €5 million fine from the Italian data protection watchdog for its misuse of workers’ personal data. Reversing.Works, an initiative exposing abuses within gig economy platforms, had uncovered how the platform’s algorithmic management practices included tracking workers outside their shifts, maintaining hidden performance scores and sharing detailed monitoring data with third parties. These practices violated both Italian labour law and the EU’s General Data Protection Regulation (GDPR), highlighting the dual risks of data protection breaches and exploitative workplace dynamics.
The ruling not only fined Glovo but also mandated corrective measures, marking a significant victory for workers and their rights in the gig economy. This outcome demonstrated the potential of combining the GDPR’s legal tools with advocacy and investigative efforts to hold employers and companies accountable, and highlighted the indispensable role of grassroots organisations and unions in this struggle. The case also shed light on a broader issue: the opacity of AI systems. Proprietary algorithms often operate as black boxes, making it nearly impossible for workers to understand or contest decisions that impact their livelihoods.
Regulatory efforts and shortcomings
The data protection framework is crucial in addressing these challenges. The recent scrutiny of algorithmic management systems under GDPR provisions shows that the regulation has teeth – but only if supported by proactive enforcement and worker mobilisation. By establishing transparency, accountability and purpose limitation principles, the GDPR offers a framework for regulating how AI systems handle personal data. However, enforcement has remained the weak link for years. There are many reasons for this, but a critical one is that workers have lacked clear information about how their data is processed, as well as robust mechanisms to challenge misuse.
Moreover, systemic changes are needed to mitigate the risks associated with AI in the workplace. Certain technologies, such as biometric surveillance and emotion recognition, pose inherent threats to workers’ rights and should be banned outright. In this regard, the EU’s AI Act received mixed evaluations regarding workers’ rights. Although it introduced safeguards on high-risk AI applications such as automated decision-making and recruiting (included in annex 3.4, which will hopefully be expanded in the future), it did not go far enough to address the broader implications of AI for workers’ rights. For instance, it does not adequately address the power imbalances between employers and workers or guarantee the right to challenge algorithmic decisions that affect employment conditions. The seemingly positive ban on emotion recognition in the workplace also suffers from a major loophole in the form of an oft-misused ‘health and safety’ exception.
The adoption of the Platform Work Directive also represented a significant advancement and an additional layer of protection. It will empower workers to challenge the opaque practices of gig economy platforms thanks to clearer regulations on algorithmic management, enhanced data rights and stronger collective labour rights, alongside robust enforcement mechanisms. However, the broad discretion afforded to national legislators means it will be crucial for trade unions, employers and labour advocates to ensure that the directive leads to effective regulations, avoiding fragmented or ineffective enforcement.
There is reason for optimism that future regulatory efforts will address some of the gaps outlined above. A key development is the initiative on algorithmic management outlined in the mission letter to the Vice-President of the European Commission, Roxana Mînzatu. This initiative is set to pave the way for a directive on algorithmic management, which is expected to include crucial provisions such as prohibitions on the most harmful practices, transparency obligations and rights for workers to challenge, monitor and rectify automated decision-making in the workplace. Additionally, it should ensure information and consultation rights for worker representatives, providing a framework that upholds worker rights in the face of emerging digital labour practices.
Shaping the future of work: collective data rights and the need for systemic change
Ultimately, the promise of AI in the workplace must align with principles of fairness, transparency and respect for workers’ rights. Cases like Glovo’s highlight that accountability is achievable when regulators, advocates and workers join forces. By leveraging existing legal frameworks like the GDPR and parts of the AI Act, and amplifying the voices of those most affected, we can challenge the unchecked power of algorithmic management.
But this is not enough: it is essential to move beyond merely enforcing and adapting current legal frameworks. A more transformative approach would involve establishing a public digital infrastructure that prioritises fairness, transparency and accountability. Unlike the surveillance-driven business models of Big Tech, which largely escape accountability in the EU, such public systems would be designed with the collective well-being of workers, and of all individuals and communities, at their core. This includes ensuring that algorithms are developed to promote equal treatment and protect against bias. By fostering an ecosystem that encourages democratic oversight and user control and centres on fundamental rights, we can mitigate the risks of algorithmic discrimination while simultaneously enhancing the power of workers to challenge and shape the systems that govern their labour.
Last but not least, it is crucial for the EU to recognise collective data rights as a fundamental aspect of protecting individuals in the digital age. While the Representative Actions Directive has marked a positive first step in empowering citizens and organisations to take collective legal action against infringements of data protection laws, it is not sufficient on its own. The current framework does not fully address the complex and widespread nature of data harms that affect entire communities, especially when individuals may lack the resources or knowledge to assert their rights individually.
To achieve meaningful protection, the EU must also move beyond piecemeal efforts on data rights and instead establish a comprehensive, legally recognised structure for collective data rights. This would enable groups of affected individuals to collectively claim redress and hold companies accountable, guaranteeing algorithmic transparency, notably in workers’ rights negotiations, and preventing systemic violations of privacy and data protection laws. This approach would ensure that the EU’s commitment to fundamental rights extends to the digital realm in a way that reflects the scale and impact of modern data practices, which have proven incredibly harmful to vulnerable groups like workers.