Societal values and norms should filter through all aspects of our lives. That’s why it is important that regulation keeps up with the development of technology and data use.
The recent pandemic has demonstrated both the benefits and the risks of ubiquitous technologies. It has accelerated and intensified the AI regulatory debate, raising new questions about commercial versus government control of data, and about the invasiveness and potential misuse of powerful technology. The debate also takes in global competitiveness, national security and digital sovereignty, and is driving the start of a race among governments worldwide to regulate AI.
The global picture today
AI’s autonomy, often combined with its opacity and complexity, arguably increases the risks it presents and raises regulatory concerns. Europe, Japan and Canada are like-minded thought leaders in this space. These jurisdictions have recently adopted AI ethics principles and have started regulating specific aspects of AI, such as algorithmic ranking and AI decision-making.
Australia recently joined a wider club of countries in this arena, the Global Partnership on Artificial Intelligence (GPAI). GPAI’s secretariat is hosted by the Organisation for Economic Co-operation and Development (OECD), which has produced important work on AI principles with the consensus of major economies globally.
The first horizontal law with mandatory rules for AI development and use is expected to be proposed in Europe at the end of 2020. The European Union is weighing whether to act big or small this time, against the background of the global success of the General Data Protection Regulation (GDPR), which triggered an ongoing ripple effect of privacy laws in every corner of the world. It will be interesting to observe the approach that China and the US eventually take on AI policy.
The possibility of a common global approach is challenged by diversity in culture and societal values. However, remarkably, all AI principles frameworks today have at least two characteristics in common. Firstly, they call for human-centric AI aiming to protect human dignity, safety and autonomy. Secondly, they pursue AI uptake by promoting trust through principles of transparency, fairness, explicability and accountability.
How should governments navigate this potential turning point in history? How can regulation protect societal values and address the risks, while enabling further technological innovation and the transformative potential of AI to better our world?
Think big: A Charter of AI ethics
We need an AI ethics code to guide the design and implementation of AI: a Charter of AI ethics that safeguards human rights and societal values, and that is global and future-proof. The Charter of Fundamental Rights of the European Union and UNESCO’s Universal Declaration on Bioethics and Human Rights can inspire this endeavour.
Start small: Targeted AI regulation
A one-size-fits-all regulatory approach is not appropriate for AI given the breadth and range of AI technologies. Many AI applications today are trivial and need not be disrupted with prescriptive regulation.
The type and level of risk that specific AI uses pose to individuals’ safety, well-being, rights and freedoms is a useful starting point. For instance, AI rules may need to focus specifically on liability for the use of autonomous cars; physical harm related to the use of diagnostic tools in healthcare; discrimination in the context of law enforcement; privacy protections for AI uses in smart homes; or algorithmic price-fixing and collusion in an antitrust context.
Actionable and enforceable rules
AI regulation should aim to guide organisations on how to operationalise AI ethics principles towards ensuring accountable AI. In practice, this means flexible, principles-based rules on AI models and AI systems governance as well as data quality.
Work together: Avoid the regulatory race
People, planet and the economy will benefit more from AI in the long run if governments do not pursue a regulatory race. Despite the natural barriers to cross-cultural cooperation and the inherent competition between countries, the nature and transformative potential of AI require global regulatory cooperation.