
This is the view of international economist Dambisa Moyo. In an article published in Logos Press, she notes that the forces that determined technology winners in the past are unlikely to dominate AI adoption. Capital costs are no longer a minor hurdle but a major barrier. In previous technology waves, capital requirements were largely confined to the startup phase and were relatively modest: Facebook, for example, initially raised only $500,000 in seed capital.
But those early innovations were built on top of existing infrastructure such as Linux, Apache, MySQL and PHP (the so-called LAMP stack), which significantly reduced initial costs, the author argues. AI, by contrast, is extremely capital-intensive. Capital investment in the industry is expected to exceed $7 trillion by 2030 as companies build data centers, expand computing capacity and invest in specialized hardware.
Furthermore, unlike in previous technology cycles, these investment needs will not disappear as the industry matures and may even intensify, Moyo states. Nor are these costs likely to fall significantly, since data center hardware lifespans are measured in years, not decades. While cloud computing also required huge investments in general-purpose servers, AI demands an entirely new infrastructure, built on GPUs and tensor processing units (TPUs), to handle the massive simultaneous computation involved in training and running AI models. Such systems are both costly and power-hungry.
By 2027, a single large-scale AI training run is expected to cost more than $1 billion. As a result, only companies that can afford this price of entry will survive, giving today’s tech giants – with their huge cash flows, robust balance sheets and access to capital markets – a decisive advantage, the author concludes.









