In today’s increasingly digitalised workplaces, automated systems orchestrate and monitor tasks, measure performance, and even steer careers – often with little clarity on how they do so. Professionals in almost all sectors can unexpectedly be rated negatively and lose their chances of promotion and professional development. Arbitrary automated decisions can be taken that severely disrupt livelihoods, underscoring the need for more transparent governance of the systems that contribute to algorithmic management.
‘Algorithmic management’ is a concept that encompasses a range of operational practices that use a variety of personal data to make decisions. While not new in academic literature, it is a new concept in EU law, introduced only in 2024 with the Platform Work Directive. Essentially, algorithmic management can be used in an organisation to do two different but interconnected things: monitor workers (health, physiological parameters, stress levels, etc.) and make automated decisions about them (allocation of tasks, suspension, termination, increase or decrease of pay, etc.). It can also be applied, beyond the traditional boundaries of an organisation, to that organisation’s relationship with other actors, including consumers, suppliers and providers.
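To make the ‘automated decisions’ side of this definition concrete, the minimal Python sketch below shows how monitored data points might be collapsed into a single score that silently steers task allocation. The field names, weights and thresholds are entirely hypothetical and are not drawn from any real platform’s system; the point is only to illustrate how such a decision can be both consequential and opaque to the worker it affects.

```python
# Hypothetical illustration only: field names, weights and thresholds are
# invented for this sketch and do not describe any real platform's algorithm.
from dataclasses import dataclass


@dataclass
class WorkerSnapshot:
    worker_id: str
    tasks_completed: int        # monitored output
    avg_customer_rating: float  # monitored feedback, on a 1-5 scale
    idle_minutes: int           # monitored "rest" time


def allocate_priority(s: WorkerSnapshot) -> str:
    """Collapse monitored data into one score and return an allocation decision."""
    score = s.tasks_completed + 10 * s.avg_customer_rating - 0.5 * s.idle_minutes
    if score < 20:
        return "deprioritise"  # fewer or worse-paid tasks, no explanation given
    if score < 50:
        return "standard"
    return "priority"          # more or better-paid tasks


print(allocate_priority(WorkerSnapshot("w-42", tasks_completed=12,
                                       avg_customer_rating=4.1, idle_minutes=35)))
# -> "standard"; a small change in any monitored value can flip the outcome
```

A real system would be far more elaborate, but the governance questions discussed below – who sets the weights, who can see them, who can contest the result – are already visible at this scale.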
Researchers, including David Stark and colleagues, have highlighted several distinctive features of algorithmic management: it co-opts and organises both the agents and the users of algorithms; it is characterised by immediacy, operating continuously and ubiquitously and fostering a sense of urgency; and it is unmediated, providing direct and unrestricted access to decision-making processes. Furthermore, algorithmic management functions in a synthetic mode, as algorithms synthesise outputs, processes and classification systems. Another key feature is its ‘twisted accountability’, which complicates the assignment of responsibility for decisions made by algorithmic systems.
While algorithmic management promises to enhance efficiency and accelerate decision-making, its power comes from processing workers’ personal data, raising serious concerns regarding accountability, unpredictability, opacity and privacy risks.
In some cases, monitoring can extend beyond what is considered ‘necessary’ and ‘proportional’ and turn into surveillance. Modern surveillance tools can capture data points related to the worker’s emotional state (anxiety, frustration, boredom, happiness, fear, insecurity, etc.); safety (exposure to hazards, risk levels, movements, fatigue, microsleep episodes, etc.); health (physiological data such as heart rate, blood pressure, breathing rate and temperature, ergonomic data such as ‘good’ or ‘bad’ posture, stress levels, possible burnout, etc.); wellness (sleep patterns, fatigue management, level of physical activity, etc.); brain activity; security (use of company assets, information leaks, risky behaviours, etc.); and productivity (engagement with teammates, working time versus rest time, contents of e-mails, internet use, etc.). Surveillance tools and techniques are often invisible and non-material, embedded within other technologies and devices. This, coupled with the broad range of data points collected, makes surveillance a practice markedly distinct from monitoring, one that can harm workers and that has, in several cases, been penalised with hefty fines.
Governance
Algorithmic management, as a core feature of the platform business model, is addressed in the Platform Work Directive. One of the essential aims of the Directive is to promote transparency, fairness, human oversight, safety and accountability in algorithmic management. It does so by establishing limitations on the processing of personal data and by granting specific rights to platform workers.
Those rights emanate from the General Data Protection Regulation (GDPR), but the Platform Work Directive focuses more specifically on algorithmic accountability, transparency, explainability, review, profiling prevention, bias and opacity. Its provisions on accountability are key in relation to how workers’ data is processed and the limits of such processing (articles 8 and 12); the transparency provisions aim to prevent opacity by ensuring that national authorities and workers have access to key information about the automated systems, which helps prevent the risk of sudden and unilateral changes that can affect workers negatively (article 9). Additionally, the Directive gives workers the right to seek human intervention (article 10) and to have automated decisions reviewed and challenged (article 11), thereby providing safeguards against unfair or opaque automated decisions and monitoring.
Also, because algorithmic management can rely on AI systems, it can fall within or overlap with the high-risk categories outlined in the AI Act. According to annex III of the AI Act, algorithmic management typically appears in two forms of AI use that are classified as high-risk:
AI systems employed for the recruitment or selection of natural persons (for example, placing targeted job advertisements, analysing and filtering job applications or evaluating candidates).
AI systems used to make decisions affecting terms of work-related relationships (such as promotion or termination), to allocate tasks based on individual behaviour or characteristics, or to monitor and evaluate performance and behaviour.
As a result, organisations using such systems must comply with the AI Act’s requirements, in addition to other obligations established by the GDPR and, for platforms, the Platform Work Directive.
Final thoughts
Algorithmic management is a complex socio-technical construct in which those who control data and algorithms wield outsized influence over the labour market and the working conditions of others.
The von der Leyen Commission II has promised a new initiative on algorithmic management, incorporating the ‘human in control’ principle. Implementing this essential principle, together with a proactive, anticipatory and collaborative form of precautionary governance, will help reduce the asymmetry of power between ‘controllers’ and workers, limit the risk of further inequality and protect those with the least power. Social dialogue can also play a key role in protecting workers while encouraging responsible innovation.