
What global governance of artificial intelligence could be like

On the eve of the AI Impact Summit in India in February, it is clear that most countries still lack a workable model for managing this technology. The US has largely left the issue to market forces, the EU relies on regulatory oversight, and China concentrates power in the state. But none of these options is realistic for a country that, like many others, must manage AI without large regulatory structures or massive computing power. We need a different system, one that embeds the principles of transparency, consent, and accountability directly into the digital infrastructure.

With this approach, governance becomes a choice made when digital systems are designed, built into their foundations. When governance elements are part of the architecture, responsible behavior becomes the default. Regulators gain immediate insight into the landscape of data and automated systems, and users gain clear control over their own information. This method is far more inclusive and easier to scale than one that relies on regulation alone.

But what would all this look like in practice? There are many lessons to be learned from India and its digital public infrastructure. India's platforms for identity (Aadhaar), payments (UPI), air travel (DigiYatra), and digital commerce (ONDC) show how government standards and private innovation can work together at national scale. DigiYatra, for example (a public-private initiative that streamlines airline check-in, security queues, and other stages of travel), demonstrates that identity verification and consent processing can be managed securely and predictably, in real time, for very large numbers of users.

These systems exemplify how digital architecture can expand access, build trust, and help markets thrive. They will not solve the problem of AI governance in one stroke, but they show that technical standards and societal objectives can be aligned even in very large and diverse societies.

To become globally viable, this architectural approach must prioritize sovereignty over computation. Computing power has become a strategic bottleneck in the AI era, which is why America and China spend hundreds of billions of dollars each year on advanced data centers and AI chips. Most countries cannot hope to match that level of investment. We must therefore prevent a scenario in which meaningful AI governance itself requires vast computing power, leaving most countries with little real power over the systems that shape their societies.

Preserving computing sovereignty does not mean that all data centers must be built domestically. It does mean that AI systems operating within a country must obey that country's laws and answer to its authorities, regardless of where the computing takes place. Multinational tech companies will have to maintain clear legal and operational separation, enforced by technical firewalls and verifiable controls. These protections are needed to prevent unauthorized data from crossing borders and to ensure that national data is not included in globally available models without explicit permission. Without such enforced separation, authorities will struggle to oversee the digital systems that shape domestic finance, health care, logistics, and public administration.

This is one of the main advantages of the architectural approach: it lets each country strike its own balance between risk, innovation, and commerce. Societies around the world differ in their views on privacy, experimentation, market openness, and security, so no single regulatory model will ever accommodate everyone's preferences. But a common architectural foundation built on transparent data flows, traceable model behavior, and sovereignty over computation gives each country the flexibility to set its own parameters. The rails are shared, but national configurations remain sovereign.

Compared with current global approaches, the architectural model offers a more balanced and realistic way forward. The American system encourages rapid experimentation but often recognizes harms only after they have occurred. The European system provides strong protections but demands strong oversight capacity. And the Chinese system achieves speed through centralization, making it ill-suited to distributed systems. The architectural approach builds transparency and consent into digital systems from the start, making innovation more predictable and ensuring accountability to society.

The AI Impact Summit in India is an opportune moment for all countries to consider such a system. The world needs a common governance framework embedded at the core of this powerful technology. That is how we will protect users, preserve sovereignty, and enable each country to strike its own balance between risk and innovation. AI will transform every sector of the economy, and in that environment the architectural approach offers the most reliable and fair way forward.

Jayant Sinha, former Minister of State for Finance and Minister of State for Civil Aviation of India, is now president of the private equity firm Everstone Group and a visiting professor of practice at the London School of Economics.

© Project Syndicate, 2025.
www.project-syndicate.org

