
AI without illusions: businesses face the risks of inflated expectations
Experts warn: the era of “easy wins” in AI is ending, and project economics and algorithmic transparency are moving to the fore.
Hype vs. economics
In an article titled “Stupidity, greed and outright marketing deception: the dark side of the ‘AI’ hype”, the pvsm.ru outlet draws attention to the marketing overheating around AI solutions. According to the author, a significant share of products sold under the “AI” (artificial intelligence) label are either refined statistical models or automation without full-fledged machine learning.
The author quotes the founder of Kaspersky Lab, Eugene Kaspersky: “For many years I was categorically against the term ‘artificial intelligence’. I explained that it is not intelligence at all. It is artificial, yes, but it is not intelligence. These are just smart, good algorithms, complex algorithms, but they are algorithms… To put it strictly, AI does not exist, there is machine learning.”
In the detailed article, the author examines the narratives surrounding AI and conveys a simple idea to the reader: artificial intelligence contributes to progress and is certainly worth using. But that does not exempt anyone from thinking critically about AI’s role and capabilities, as well as the risks associated with it.
“Today, everything from template-based chatbots to conventional analytics is sold under the banner of AI,” says the author of the publication, emphasizing the gap between advertising promises and the actual functionality of solutions.
Meanwhile, global investment in AI in 2025 has exceeded $150 billion, and companies are mass-deploying generative models in HR, marketing and customer support. However, according to Gartner, up to 30% of AI projects fail to achieve the stated economic effect due to inflated expectations and a lack of clear business ROI metrics.
The problem is especially acute for SMBs, where AI is often implemented without sufficient expertise. As a result, companies end up with higher infrastructure and subscription costs without a measurable increase in revenue.
Risks for users and companies
A key risk is the conflation of terms and a lack of transparency. Many solutions use large language models but fail to disclose data sources, limitations or potential errors.
Experts have repeatedly emphasized the problem of “hallucinating” models. OpenAI CEO Sam Altman stated, “AI can confidently produce incorrect information. Users need to understand the limitations of the technology” (source: https://openai.com/blog).
For businesses, this means reputational and legal risks – from publishing incorrect data to copyright infringement.
In addition, the problem of dependence on external platforms persists. If a company builds key processes on top of third-party API services, a change in pricing or provider policy can significantly affect the cost of the product.
What this means for the market
The relevance of the topic is related to the market’s transition from the stage of hype to the phase of rationalization. Investors and customers demand proof of efficiency: cost reduction, process acceleration, and conversion growth.
For AI users, the key recommendations are as follows:
– evaluate not the technology, but the economic effect;
– require transparency of architecture and data sources;
– test solutions in pilot projects before scaling;
– factor in long-term infrastructure and licensing costs.
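The first and last recommendations can be made concrete with a back-of-the-envelope ROI calculation. The sketch below is purely illustrative: the function name and every figure in it are hypothetical placeholders, not data from the article; the second call shows how a vendor doubling its subscription price (the platform-dependence risk noted above) can flip a project’s economics.

```python
# Back-of-the-envelope ROI check for an AI pilot.
# All figures are hypothetical; substitute your own measured numbers.

def ai_project_roi(annual_savings: float,
                   annual_revenue_lift: float,
                   infra_cost: float,
                   subscription_cost: float,
                   integration_cost: float,
                   years: int = 3) -> float:
    """Return ROI over the horizon as a fraction (0.25 == 25%)."""
    total_benefit = (annual_savings + annual_revenue_lift) * years
    total_cost = (infra_cost + subscription_cost) * years + integration_cost
    return (total_benefit - total_cost) / total_cost

# Modest savings, recurring API subscription, one-off integration cost.
base = ai_project_roi(annual_savings=40_000, annual_revenue_lift=10_000,
                      infra_cost=12_000, subscription_cost=18_000,
                      integration_cost=30_000)
print(f"3-year ROI at current pricing: {base:.0%}")   # 25%

# Same project after the API vendor doubles its subscription price.
hiked = ai_project_roi(annual_savings=40_000, annual_revenue_lift=10_000,
                       infra_cost=12_000, subscription_cost=36_000,
                       integration_cost=30_000)
print(f"3-year ROI after a price hike: {hiked:.0%}")  # -14%
```

The point of the sketch is not the specific numbers but the discipline: a project that looks attractive at today’s pricing can turn loss-making on a single supplier decision, which is why long-term licensing costs belong in the evaluation from the start.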
The conclusion is clear: artificial intelligence remains a powerful tool, but its value is determined not by loud claims but by measurable results. In 2026, the author concludes, the winners will be not those who talk loudly about AI, but those who can count.
