The Progressive Post
Regulating AI: workers’ intellect versus Big Tech oligarchs
If there is one compelling realisation to draw from the protracted attempts in the US and the EU to regulate emerging technologies, it is that Big Tech oligarchs do not care about the well-being of the individual, the workers, the socio-economy or democracy unless civil society and public authorities force them to do so.
Disruption is said to be the new normal. Digitalisation, globalisation, and demographic shifts are accelerating at an unprecedented rate, with far-reaching effects on daily lives, societal interaction, consumption patterns, business operations, and the nature of work. A recent disruptive technology like artificial intelligence (AI), built on advances in the internet, computing, information systems, communications and electronic materials, will further alter economic relations, especially working life.
The evolving practice of algorithmic management in the workplace depends on AI’s voracious appetite for data to automate decision-making and remotely control the workforce. Workers are being programmed, motivated, managed, evaluated, or undermined by statistical patterns and algorithms. Yet while workers are the lifeblood of these data and algorithms, both academic theorising and policy debates on today’s platform capitalism and the digital world often overlook the workers’ perspective, agency, and consciousness.
Conventional economic theories and parliamentary enquiries treat workers as passive – rather than active – actors in the evolution of technology. The usual assumption is that multinational corporations are the leading innovators, and that the value they create leads to increased productivity, which, in turn, causes job losses and de-skilling. This perception misses the reality that technological innovation is both a process and a product of collective labour. Indeed, rapidly advancing science and technology are a striking reflection of a fundamental aspect necessary for the reproduction of the capitalist mode of production: the division, fluidity, and mobility of labour. Yet despite digitalised capitalism’s dependence on skilled, flexible, and mobile workers, it is the Tech Giants, not the workers, that are being set free.
During AI’s installation period as a revolutionary technology, there is no doubt that the Big Tech companies are well-positioned to direct its deployment as a commodity to their profit-making advantage. In the context of hyper-competition in the world economy, the Big Tech elites have an interest in leveraging scalable AI solutions to reduce labour costs, increase output and outdo their competitors. Years of scrutiny of Big Tech’s activities by the US Congress and the European Parliament have revealed the oligarchical status of these few powerful corporations – which have been implicated in antitrust violations, tax avoidance, privacy invasions, job displacements, and election interference.
With its data monopoly, significant economies of scale, strategic acquisitions of competing enterprises, market dominance and the sheer wealth accumulated since the advent of the Information Age and the digital economy, the Big Tech oligarchy enjoys first-mover advantages in influencing the innovation and regulation of AI. The threat of regulatory capture for new technologies is real. These Big Tech companies can enlist AI into their service – including the possibility of weaponising this innovative machinery to stifle not only their business competition but also political opposition. The mainstream approach to responsible business conduct, like that of the European Commission (where corporate social responsibility is company-led and the relationship of state regulators with companies is limited to a supportive role through voluntary policy measures), should therefore not be extended to the regulation of AI vis-à-vis the Big Tech companies.
New AI regulatory frameworks must emerge in which rules are emancipatory for workers but disciplinary for Big Tech misbehaviour. Businesses should be made to appreciate that regulation of risky AI systems is not a mere cost and burden on them; rather, it is invaluable for the process of innovation itself. Policies should enable them to innovate with ethical and moral considerations, while preventing them from automating functions that manipulate behaviour at the expense of human dignity and autonomy.
At the same time, a progressive regulatory framework must recognise workers as social individuals with the intellect and sensibility – the cognitive and affective capacities – to make AI a liberating technology rather than a machinery of social control and labour exploitation. Implementing this necessitates the reactivation of working-class consciousness among AI producers and users. The diverse professionals involved in the production and use of AI – from data scientists and robotics engineers to computer programmers – are all workers, and so are the technology consumers and ordinary citizens.
Several ideas about AI regulation from the workers’ point of view should be explored. Workers can actively participate in the design of transparent, accountable, explainable, and human rights-based AI. Workers can bargain collectively with employers over the introduction of AI-enabled techniques. Workers can individually set their work goals and schedules, yet collectively build humane and value-creating conditions in their workplace. Moreover, the concepts of ‘selective automation’ and ‘responsible innovation’ can also be realised from a labour perspective: concerning the former, workers can influence which work activities should be automated and which should not; concerning the latter, workers can weigh both the intended consequences and the unintended outcomes of AI.
Certainly, ongoing attempts at AI regulation suffer from past failures to check, in a timely manner, the modus operandi of the Big Tech companies and their platformised business models. But it is not too late to regulate AI. The current momentum of media discussion and public awareness about generative AI, language models and chatbot software – which undertake tasks increasingly indistinguishable from human work – can help drive effective regulatory approaches. Calibrated state intervention, intergovernmental action and societal vigilance are needed now to mitigate the risks and maximise the general benefits of budding innovations.
Regulating AI is essentially a political and policymaking endeavour. Politically, internationalist labour unions, in coordination with international institutions for global democratic governance, will have to be created as countervailing organisations against the monopolistic nature and transnational reach of Big Tech oligarchs. Policy-wise, to democratise the development of AI, it is of paramount importance to ask: regulation for whom and by whom? Democratising AI should not be reduced to a discourse on risk level classification or to the dilemma of ‘overregulation versus underregulation’ but must seriously embody the question of representation. Workers and their organised trade unions ought to be represented as proactive participants in the processes of technological innovation and public regulation for the common good.