The Progressive Post
A post-Trump world of AI and democracy with Europe in the lead

US President Donald Trump’s approval rating has sunk to around 30 per cent. In Europe, Poland and Hungary have returned to the camp of middle-of-the-road democratic countries. It is no coincidence that, at just this moment, OpenAI has come out with a policy paper on AI and democracy. Silicon Valley is, in this way, starting to prepare for the post-Trump reconstruction of democracy in America. What does this mean for Europe?
A palpable sense of post-Trump reconstruction is taking hold in Western capitals, and, in the same vein, OpenAI has released Industrial policy for the Intelligence Age, a document that reads less like a standard corporate white paper and more like a blueprint for embedding advanced AI within a revitalised democratic compact. For Europe, this paper is not merely an interesting read from across the Atlantic: it is a direct challenge and an unexpected opportunity. It signals that Silicon Valley is preparing for a world where democracy is back in the driving seat. Europe must decide whether it keeps lamenting the costs of regulation and overregulation, catering to neoliberal corporate interests, or whether it returns to acting as an active co-author of a global democratic future in which the law is recognised as the noblest speech act of democracy.
At its core, OpenAI’s paper is a remarkable document precisely because it rejects the techno-libertarian orthodoxy of its birthplace. It argues that AI, and a fortiori superintelligence, if left to raw market forces, will inevitably widen inequality, concentrate wealth and erode democratic agency. The solution OpenAI proposes is a “new industrial policy” built on three pillars: sharing prosperity broadly, mitigating catastrophic risks and democratising access. The proposals range from portable benefits and strengthening workers’ voices in AI deployment to a public wealth fund giving every citizen a stake in AI-driven growth and tax reforms shifting burdens from labour to capital. Crucially, the paper calls for public input, auditing regimes, and “mission-aligned corporate governance” that embeds public-interest accountability into AI firms.
For Europe, this represents a fascinating pivot. For years, the transatlantic AI debate has been framed as a dichotomy: the US’s innovation-first, no-legislation approach versus Europe’s rights-based, precautionary regulation, exemplified by the EU AI Act and GDPR. OpenAI’s paper collapses that binary. It embraces regulation not as a nuisance but as a tool for resilience and for embedding AI in democratic governance. And while the paper, in a nod to the classic corporate preference for non-binding commitments and self-regulation, stops short of saying it clearly, the substantive projects it outlines can only be implemented through the rule of law. It calls for “common-sense AI regulation – not to entrench incumbents but to protect children, mitigate national security risks, and encourage innovation”. There is language in it that a European Commissioner from previous Commissions could have written, before the right-wing coalition took hold in the European Parliament and the European Commission became infected by a virus of deregulation, catering to feelings of ‘overregulation’ rather than facts. More importantly, the paper admits that markets alone cannot handle AI and the transition to superintelligence – a concession that validates the good old European instinct for state-guided economic governance in the social economy.
What does this mean for Europe? First, it is an endorsement of Europe’s regulatory framework as a global asset. When OpenAI speaks of needing “democratic processes that give people real power to shape the AI future”, it implicitly acknowledges that the EU – with its parliamentary law-making and fundamental rights charter – has institutional machinery that the US lacks. The paper’s call for “structured ways for public input” and “representative input processes” sounds remarkably like the EU’s own deliberative democracy experiments, with public consultations and legislative processes that lead to binding laws, not just to debates that end in nothing, as most bills in the US Congress do. Europe should therefore stop apologising for its regulatory approach and recognise it as a competitive advantage: a governance brand that trustworthy AI developers will increasingly seek out as a licence to operate.
Second, the paper offers Europe a chance to escape its strategic paralysis. The self-castigation and fretting about falling behind US and Chinese AI giants, oscillating between neoliberal deregulation and dependency, can now be replaced with an optimistic vision of democratic rule-making for AI coming from the US tech industry itself. OpenAI’s industrial policy suggests a way forward: Europe could become the global standard-setter for democratic AI deployment. The paper’s proposals for “AI-enabled laboratories” distributed across universities and community colleges, for “pathways into human-centred work” in care economies, and for “adaptive safety nets” that trigger automatically during disruption – these are policies on which Europe can take the lead. They leverage Europe’s strengths: strong social safety nets, robust public research systems and a cultural preference for cooperation between social partners rather than raw capitalism. By adopting and adapting these ideas, Europe could demonstrate that democracy can not only survive AI but thrive with it.
Third, OpenAI’s move signals a realignment of Silicon Valley’s political capital. If US tech giants are indeed preparing for a post-Trump democratic reconstruction, Europe must be ready to engage with them in a new way, not just as vendors or adversaries of binding rules. The paper explicitly calls for “public-private collaboration” to “stave off regulatory capture and centralised control”. This is an opening. Europe should respond by hosting a high-level transatlantic convention on democratic AI governance and AI for democracy, inviting OpenAI, Anthropic, Google and Meta to share their visions and what they can contribute to designing the future of democracy: AI applications that revitalise citizens’ engagement in political parties and democratic processes, and the very auditing regimes, incident-reporting systems and international information-sharing networks that go beyond what has already been agreed at UN level. The goal would be to lock in commitments to ‘democracy by design in AI’ and binding rules at both domestic and international levels, thus setting out a vision for the US to return, alongside Europe, as a leader of democracy and the rule of law for AI in the world.
None of this will be easy. Europe remains fragmented, underinvested in frontier AI and vulnerable to regulatory capture by its own incumbents and US corporates alike. The paper’s ideas will face fierce opposition from established interests. Moreover, Europe must avoid the trap of simply importing US corporate-defined agendas under the guise of cooperation. The goal is not to become OpenAI’s European regulatory affiliate, but to assert that democratic governance of AI requires strong public institutions and binding rules that are enforced with rigour, not just well-intentioned corporate pledges.
Ultimately, OpenAI’s paper is an invitation. It admits that technologists alone do not have the answers. For a political bloc that has often defined itself in opposition to US capitalism, this is a rare chance to define itself in partnership with a forward-looking part of it. Europe can either watch from the sidelines as the US debates its democratic reconstruction, or seize the moment to build the very model of democratic AI governance the world will need.
The paper is written. The conversation has started. Europe should answer and insist that words must be followed by real change. We have seen it before: Mark Zuckerberg wrote in 2019, in a much-quoted op-ed in the Washington Post, that “the Internet needs new rules” and called for legislation in four different areas. What followed were years of a lobbying spree against binding rules on the internet and AI in Europe and the US. OpenAI, and its key investor Microsoft, have a similar track record of doublespeak and lobbying against legislation in both Europe and the US. But in politics, people often only learn from bad experiences. Let us test whether at least some of the internet giants are now willing to turn away from Trump and embrace what democracy must be about, also in the age of artificial intelligence: the making of binding rules for the powerful by the people, and by the citizens who contribute to making democracy work.
Photo credits: Shutterstock/dee karen