
Of particular concern is the changing nature of such content: the share of the most violent and perverted material has risen from 13% to 69% in just one year. Experts at the Internet Watch Foundation (IWF) attribute this to the availability of modern neural-network tools that make it possible to create realistic images and videos without serious technical training. Whereas such technology once required significant resources, today even inexperienced users can operate it.
An additional threat is the use of photos of real people, including children, taken from public sources and social networks. This not only violates individual rights but also creates risks of blackmail, cyberbullying, and reputational damage. At the same time, it is often extremely difficult to distinguish a deepfake from a real image without specialized analysis tools.
Experts note that attackers actively use modified versions of open AI models and combine various tools to circumvent built-in restrictions. Online communities where instructions and ready-made solutions are published also contribute to the spread of such material.
Possible countermeasures include deploying “AI vs. AI” technologies for automatic detection of prohibited material, as well as mandatory labeling of legitimate content with digital watermarks. Strengthening international cooperation could also be an important step, for example through the INHOPE initiative, which brings together dozens of countries to combat illegal content online.