AI in Government: Risks to Rights and Accountability Grow

Beware of government by AI

PARIS/LOVELAND - The United Arab Emirates (UAE) announced in early April a plan to deliver half of its government services using agent-based artificial intelligence (AI) within the next two years. According to the plan, AI will play the role of an "executive partner" that "analyzes, decides, executes and improves in real time" without human intervention.

Our experience of working at the intersection of entrepreneurship, science and digital policy allows us to confidently state: this is a reckless plan. But the UAE claims to be a global model for digital policy, so other countries may decide to follow suit.

This danger cannot be ignored. We already know what happens when governments delegate decision-making to algorithms. In 2021, it emerged that a self-learning system in the Netherlands had erroneously accused approximately 35,000 families of child-benefit fraud. Parents were ordered to repay tens of thousands of euros they did not owe the state; people lost their homes; more than two thousand children were taken into state care.

This outcome was in fact programmed. The system treated foreign-sounding names and dual citizenship as risk factors, embedding illegal discrimination directly in the model. The result was a nationwide scandal that eventually brought down the government of then-Prime Minister Mark Rutte.

A similar story unfolded in Australia. Between 2015 and 2019, the Robodebt program pursued 433,000 welfare recipients, demanding repayment of 1.7 billion Australian dollars ($1.2 billion) in allegedly overpaid benefits. The damage was enormous. Mothers described sons who took their own lives after receiving debt notices they had no way to dispute. A Royal Commission later found that the program was "neither fair nor legal."

Meanwhile, in the US, the states of Arkansas and Idaho replaced nurses with algorithms that assessed patients' need for home care and how much they should receive. People with cerebral palsy, quadriplegia, and multiple sclerosis saw their in-home care cut by 20-50% overnight. Courts subsequently ordered the states to stop using these systems, but only after the damage had been done. Some patients were left without adequate support, leading to preventable medical complications.

Each of these cases involved only one system of one agency. Now imagine that such systems will manage half of all public services, as envisioned by the UAE plan.

Think, for example, of a single mother whose child benefits are frozen after an AI agent flags her banking activity as suspicious. Suddenly she must navigate appeals procedures in which one automated system refers her to another, with no human contact, just when her rent is due. Or the migrant worker denied renewal of his residence permit because the system cannot recognize his employer's documents, effectively turning him into an undocumented migrant. Or the elderly widow whose pension payments are suspended because two databases disagree, and who cannot make sense of the program's interface.

Key risks of AI agents

These are not hypothetical situations. They are documented, recurring failures that agentic AI amplifies to a degree no training program can remedy within the two-year timeframe the UAE has set.

Three key risks stand out. The first is scale: when a social worker makes a mistake, one person suffers; when an AI agent makes a mistake, thousands of people can be affected before anyone notices.

The second is opacity: the decision-making process of artificial intelligence is not transparent. Agent systems make decisions sequentially, each step building on the previous one, so by the time harm becomes noticeable, the causal chain is effectively lost. A striking example is Arkansas's algorithmic system for assessing medical benefits: no one, not even the model's creators, could fully explain how it worked, prompting a federal court to call it "wildly irrational." Moreover, this opacity may be built in, because the systems underlying the algorithms are protected by trade secrets or private ownership.

Finally, AI systems shift the burden of proof: they force citizens to prove their innocence rather than requiring the state to justify its actions. The Dutch child-benefit scandal and Australia's Robodebt program made clear that those who suffer most are those least able to fight back: people with little time or money, limited language skills, and no access to legal support.

The flawed “logic of efficiency”

The UAE claims that the principle of “People First” will be the guiding principle for their AI program. However, the approaches chosen tell a different story. A government that rates ministries on “speed of implementation” and level of “AI skills proficiency” is not tracking what really matters; it is copying the very same logic of efficiency that has already done so much damage around the world.

Speed of adoption is a vendor's metric. The primary responsibility of government, by contrast, is to take care of people, on the basis of human judgment.

This approach is consistent with citizens’ expectations that government will be accountable and transparent, explaining decisions that affect their rights and freedoms. When authorities enthusiastically opt for autonomous AI decision-making in the name of efficiency, they are essentially giving up that accountability.

All algorithm scandals in recent years raise the same fundamental questions: who is accountable, and who made the decision? In a government run by agent-based AI, these questions no longer have clear answers. The system decides for itself, updates itself, and moves forward on its own, leaving citizens without help when things go wrong.

Because of the rise of AI, democratic accountability is being eroded, not by openly seizing power, but by government procurement decisions that quietly replace human oversight. By weakening trust in public institutions at a time when it is already dangerously low, these systems end up serving the interests of the tech giants promoting the AI revolution.

It doesn’t have to be this way. The UAE has the resources, talent and political stability needed to build a truly human-centered digital government that can become the global standard, complementing (not replacing) human decision-making.

It is not just the UAE that will have to pay for mistakes. It will be the single mother in another country whose benefits are canceled by an algorithm she doesn’t even know exists, and countless others like her around the world.

Gabriela Ramos

Gabriela Ramos is Co-Chair of the Working Group on Inequality and Socially Relevant Financial Disclosure, former Deputy Director General for Social and Human Sciences at UNESCO where she oversaw the development of the Recommendations on the Ethics of AI, former Chief of Staff at the OECD, and has worked as a Sherpa at the G20, G7 and APEC.

Jan-Werner Müller

Emilija Stojmenova Duh

Emilija Stojmenova Duh is Associate Professor of Electrical Engineering at the University of Ljubljana, member of the board of directors of the Globethics Foundation, and former Minister of Digital Transformation of Slovenia.

© Project Syndicate, 2026.
www.project-syndicate.org


