The Progressive Post
Algorithmic management in Europe: from key features to governance and beyond

In today’s increasingly digitalised workplaces, automated systems orchestrate and monitor tasks, measure performance, and even steer careers – often with little clarity on how they do so. Professionals in almost all sectors can unexpectedly be rated negatively and lose their chances of promotion and professional development. Arbitrary automated decisions can be taken that severely disrupt livelihoods, underscoring the need for more transparent governance of the systems that contribute to algorithmic management.
‘Algorithmic management’ is a concept that encompasses a range of operational practices that use a variety of personal data to make decisions. While not new in academic literature, it is a new concept in EU law, introduced only in 2024 with the Platform Work Directive. Essentially, algorithmic management can be used in an organisation to do two different but interconnected things: monitor workers (health, physiological parameters, stress levels, etc.) and make automated decisions about them (allocation of tasks, suspension, termination, increases or decreases in pay, etc.). It can also be applied, beyond the traditional boundaries of an organisation, to that organisation’s relationship with other actors, including consumers, suppliers and providers.
Researchers, including David Stark and colleagues, have highlighted several distinctive features of algorithmic management. It co-opts and organises both agents and users of algorithms. It is characterised by immediacy, meaning it operates continuously and ubiquitously, fostering a sense of urgency. It is also unmediated, providing direct and unrestricted access to decision-making processes. Furthermore, algorithmic management functions in a synthetic mode, as algorithms synthesise outputs, processes and classification systems. Another key feature is its ‘twisted accountability’, which complicates the assignment of responsibility for decisions made by algorithmic systems.
While algorithmic management promises to enhance efficiency and accelerate decision-making, its power comes from processing workers’ personal data, raising serious concerns regarding accountability, unpredictability, opacity and privacy risks.
In some cases, monitoring can extend beyond what is considered ‘necessary’ and ‘proportional’ and turn into surveillance. Modern surveillance tools now exist that can capture data points related to the worker’s emotional state (anxiety, frustration, boredom, happiness, fear, insecurity, etc.); safety (exposure to hazards, risk levels, movements, fatigue, microsleep episodes, etc.); health (physiological data such as heart rate, blood pressure, breathing rate and temperature, ergonomic data such as ‘good’ or ‘bad’ posture, stress levels, possible burnout, etc.); wellness (sleep patterns, fatigue management, level of physical activity, etc.); brain activity; security (use of company assets, information leaks, risky behaviours, etc.); and productivity (engagement with teammates, working time versus rest time, contents of e-mails, internet use, etc.). Surveillance tools and techniques are often invisible and non-material, embedded within other technologies and devices. This, coupled with the broad range of data points collected, makes surveillance a markedly distinct practice from monitoring: one that can harm workers and has, in several cases, been penalised with hefty fines.
Governance
Algorithmic management, as a core feature of the platform business model, is addressed in the Platform Work Directive. One of the essential aims of the Directive is to promote transparency, fairness, human oversight, safety and accountability in algorithmic management. It does so by establishing limitations on the processing of personal data and by granting specific rights to platform workers.
Those rights emanate from the General Data Protection Regulation (GDPR), but the Platform Work Directive focuses more specifically on algorithmic accountability, transparency, explainability, review, the prevention of profiling, biases and opacity. Its provisions on accountability are key to how workers’ data is processed and the limits of such processing (articles 8 and 12); the transparency provisions aim to prevent opacity by ensuring that national authorities and workers have access to key information about the automated systems, which helps prevent the risk of sudden and unilateral changes that can affect workers negatively (article 9). Additionally, the directive gives workers the right to seek human intervention (article 10) and to challenge decisions and have them reviewed (article 11), thereby providing safeguards against unfair or opaque automated decisions and monitoring.
Also, because algorithmic management can rely on AI systems, it can fall within or overlap with the high-risk categories outlined in the AI Act. According to Annex III of the AI Act, algorithmic management typically appears in two forms of AI use that are classified as high-risk:
- AI systems employed for the recruitment or selection of natural persons (for example, placing targeted job advertisements, analysing and filtering job applications or evaluating candidates).
- AI systems used to make decisions affecting terms of work-related relationships (such as promotion or termination), to allocate tasks based on individual behaviour or characteristics, or to monitor and evaluate performance and behaviour.
As a result, organisations using such systems must comply with the AI Act’s requirements, in addition to other obligations established by the GDPR and, for platforms, the Platform Work Directive.
Final thoughts
Algorithmic management is a complex socio-technological construction, in which those who control data and algorithms possess an outsized influence on the labour market and the working conditions of others.
The von der Leyen Commission II has promised a new initiative on algorithmic management, incorporating the ‘human in control’ principle. Implementing this essential principle, together with a proactive, anticipatory and collaborative form of precautionary governance, will help to reduce the asymmetry of power between ‘controllers’ and workers, limit the risks of further inequality and protect those who have the least power. Social dialogue can also play a key role in protecting workers while, at the same time, encouraging responsible innovation.
Photo credit: Shutterstock / TippaPatt