The Progressive Post
Future-proofing AI: regulation for innovation, human rights and societal progress
How can AI be regulated so that it enhances innovation and competitive power, that it is future-proof, protects fundamental human rights, and brings forward societal benefits? This is the task the European Union set itself to accomplish with the introduction of the AI Act, which was approved by the European Parliament on 14 June.
Establishing a comprehensive and future-proof European legal framework of ethical principles for the development, deployment and use of AI is an important task. It has implications not only for Europe but for the whole world, and for how AI applications are developed and used globally. However, even now, AI does not exist in a regulatory vacuum. We are not starting from zero: many laws, directives, standards and guidelines already apply to AI systems, products and results. Supporting a better understanding and integration of all these existing frameworks is as important as introducing new regulation. At the same time, an overly narrow understanding of what constitutes an AI system will not improve public trust. From the perspective of those affected by technology, particularly by errors in automated decision-making, it does not matter whether the decision was made by a sophisticated AI algorithm or a simple spreadsheet: if it is wrong, it needs to be corrected, whether AI was involved or not.
The potential impact of AI on society is huge, both in a positive and in a negative sense. In particular, it is crucial to address the ethical consequences of AI systems, such as:
- The lack of transparency of AI tools: AI decisions are not always intelligible to humans,
- Value alignment: AI is not neutral, and therefore AI-based decisions are susceptible to inaccuracies, discriminatory outcomes, embedded or inserted bias,
- Commodification: we and our societies are more than the data that is available about us and, at the same time, data gathering and management raise concerns about users' privacy,
- Power imbalances: increasingly, the ability to deploy and benefit from large AI systems is concentrated in the hands of a handful of (mostly private) organisations. This leads to issues of democratic accountability and the deepening of societal and economic divides.
Most stakeholders agree on the need for some form of regulation in the AI and digital technology landscape. Regulatory efforts must consider both current AI capabilities and future developments, especially in generative AI models, which can be applied in diverse ways with varying risks. For example, the risks of using such systems to summarise a newspaper article are very different from those of using the same technology to give medical advice. Regulatory processes need to monitor these developments in ways that guide responsible research and innovation rather than inhibiting new developments.
In balancing scientific progress with ethical considerations and the avoidance of harm, lessons can be learned from existing regulations on genetic manipulation. It is therefore crucial to consider the implications for AI legislation of a study on genetic technology regulation published in 2021 by the European Commission, which established that limitations in the capacity of legislation to keep pace with scientific developments can cause implementation challenges as well as legal uncertainties.
Concrete steps (technical, organisational and legal) are needed now to ensure that AI legislation supports continuous innovation and scientific developments aligned with human rights and societal values, rather than becoming a burden in the development process. Moreover, effective AI regulation must involve international cooperation and coordination, given the transnational nature of research and development, the potential societal and environmental impact, and geopolitical strategies. In the case of genetic technology, the Cartagena Protocol on Biosafety aims to establish common guidelines and procedures. In contrast, global coordination on AI regulation is currently less established, although there are ongoing discussions and initiatives to promote international cooperation in this area, notably by UNESCO, the OECD and the Global Partnership on Artificial Intelligence (GPAI).
Contrary to current narratives of existential risk, AI is not out of control; rather, we are losing control over those who develop and exploit AI. Demanding responsibility and accountability from organisations developing and deploying AI systems must therefore be accompanied by concrete steps in governance, at the national as well as the global level. Not just nice words about which principles AI should follow, but concrete solutions: for instance, preconditions for making such systems available, such as technical requirements on transparency and explainability, demands on auditing of the organisations deploying these systems, and perhaps also demands on the expertise of those using them. After all, we do not let pharmaceutical companies release medicines without extensive testing, nor do we let people drive cars without a driver's license.
It is crucial to avoid embracing an ‘arms race’ narrative that implies a singular finish line, where losers and winners can be identified, and that there is one direction to follow towards that line. Such a narrative undermines the systematic and long-term research and development necessary to explore consequences, improve AI technologies and their implementations, and refine regulations. The primary role of regulation at this stage is to incentivise such exploration. Responsible innovation, focusing on sustainability and fundamental human rights, should serve as the main support for regulation. Just as exploratory innovations have led to higher standards in various sectors, from green energy to fair trade initiatives or LED lamps, we urgently need responsible AI innovations that demonstrate responsible, sustainable, trustworthy alternatives to current AI developments.
Instead of viewing regulation as a constraint on innovation, it should be framed as a powerful stepping stone for innovation. Like a beacon, regulation signals the ambitions that innovation is expected to meet.
Photo credits: Shutterstock.com/MonsterZudio