Artificial intelligence, social platforms and regulatory gaps are driving the explosion of online financial scams linked to cryptocurrencies and stock trading. The result is a pattern of repeat fraud in which thousands of investors are pressured into putting in more and more money by increasingly aggressive fake advisors.
Franco, a retired professor, sees a fake “government” video online in which politicians and entrepreneurs generated with artificial intelligence invite viewers to invest in cryptocurrencies. He deposits 250 euros and is assigned a “consultant” on a platform that shows a 40 percent profit within days. Convinced, he raises his stake to 12 thousand euros; 120 thousand virtual euros in bitcoin appear on screen. When he asks to withdraw, the account is declared “blocked”, and he is even shown a fake letter from the ECB demanding more money to release the funds. The scammers then change their story several times to keep extorting money from him.
The investment scams targeting European and North American citizens no longer present themselves as improvised operations or low-grade spam: what consistently emerges from hundreds of public testimonies – collected on financial forums, Reddit message boards, social media groups and video platforms – is an industrial, repetitive and scalable model, designed to monetize other people’s trust within a system that lets fraudsters act before the rules can intervene.
The individual stories differ in names and numbers, but they all follow the same script. There are those who start with 100 euros after seeing a glossy video featuring a well-known face forged by artificial intelligence, those who let themselves be guided for weeks by a fake “mentor” who shows charts and promises “guaranteed” returns, and those who admit to losing six figures because every step – deposit, platform, simulated profits – seemed technically credible.
In every account the same pattern recurs: the scam does not land in a single final blow, but grows as the victim is persuaded to feed it step by step. The trick is not to steal the first transfer; it is to convince the victim that the fictitious profit is real, so that the payments continue. In many testimonies, the “profit visible on the screen” is described as the most convincing part of the entire operation: the illusion of immediate success becomes the criminals’ best ally. When the moment of withdrawal arrives, the tone changes: “unlocking fees”, “administrative fees” and “anti-fraud checks” suddenly appear – all pretexts to push for one final payment.
A man who lost his severance pay says the real trauma is not the money but the shame of having let himself be drawn into conversations that, in retrospect, appear precisely calibrated to his vulnerabilities. It was not an improvised scam but a script designed to exploit his weaknesses and earn his trust. Former operators of criminal call centers have told the police that it is an organized structure: targets, scripts, growth tables, penalties tied to the sums extracted, and fake trading platforms. The language is corporate: sales, customer retention, conversion of a contact into a paying user, performance indicators.
Fake online trading scams are today one of the main economic threats to savings in Italy. The Postal Police recorded over 3,400 complaints in 2023 alone, while the Financial Police has conducted dozens of investigations over the last two years, seizing assets worth around 70 million euros and reporting hundreds of people to prosecutors. Consob continues to block sites and issue warnings about unauthorized operators, while the Bank of Italy’s UIF reports a growing flow of suspicious transactions linked to fake investments.
However, there is no national figure for overall losses – a sign of a large but still only partially measured phenomenon. The growth in complaints and seizures indicates that this fraud is not a series of isolated incidents but an expanding structural trend. Moreover, Meta, with over 250 million users in the European Union, is today one of the main channels for fake online trading advertisements. Its business model is almost entirely advertising-based (98 percent of global revenues), with 160 billion dollars generated last year and a claimed European economic impact of 213 billion euros. In terms of economic incentives, platforms have little reason to prevent these ads effectively: the advertising model monetizes volume and profiling without distinguishing between legal and illicit content, as long as there is no concrete legal risk.
Responsibility, under the current framework, is triggered – if at all – only ex post and only in individually proven cases. This is why reports are processed slowly and unevenly, while fraudulent adverts collect millions of views in the window that matters. Since contact with the victim occurs in the first hours, late removal has no deterrent effect. In this framework, fraud is not an accident: it is a product compatible with the social-media economy. Over the last three years, investment scams spread via digital platforms have become the most economically significant form of consumer fraud in Europe.
The European Commission has publicly estimated – through its Digital Commissioner – losses of more than 4 billion euros a year generated by misleading advertisements for financial services. To grasp the scale: this figure exceeds the total recorded for many categories of traditional crime across multiple Member States combined. Europol has characterized the phenomenon as “unprecedented” in both scale and speed of growth, driven by three interlinked factors: the automation of persuasive-content production, the fine-grained profiling capacity of the platforms, and the absence of ex-ante verification obligations for advertisers of financial products in most of the European market.
Pierguido Iezzi, cybersecurity director of Maticmind, explains: «The observatory of Maticmind’s Cyber Defense Center has found that 2025 marks the transition from simple chatbots to “automatic assistants” that use tools (email, payments, social networks), retain memory over time and pursue objectives. They no longer operate by commands but by goals. This is where the new risk arises. The scam is now an industrial chain: AI models sold for illicit purposes (WormGPT/FraudGPT), synthetic-identity “kits” priced between 500 and 10 thousand dollars, and deepfake subscription services with price lists for static, interactive and even real-time videos for video calls.
The result is scams that constantly talk, negotiate and insist: in the first quarter of 2025 alone, deepfakes caused 900 million dollars in losses, ranging from the 499 thousand dollars stolen through a fake board of directors meeting in Singapore, to the fake trading platforms unmasked by Interpol (65 thousand victims for 300 million dollars), up to the Microsoft case last September, an email scam concealed with AI». Iezzi adds: «By the end of the year, non-human identities will exceed 45 billion – roughly 12 times the global workforce – yet only 10 percent of organizations have a strategy, while 80 percent of breaches involve compromised identities: the target is no longer the computer, it is the identity. On the public front we face a critical bottleneck: a reporting system that allows accredited teams to submit only 20 links at a time. They chase the latest ad while the criminal counterpart has automated production and distribution at algorithmic scale».
Google in Ireland and Meta in the UK have already introduced preventive checks for financial services advertisers: where ex-ante checks are mandatory, scams decrease or migrate elsewhere. In the EU this measure has not been adopted for political and economic reasons: imposing it would mean shifting costs and responsibilities between platforms and operators. The Digital Services Act avoids preventive control and makes liability only ex post. In this framework, the persistence of fraud is not an anomaly: it is the outcome the system itself predicts. «The current defense system is not designed for what we already have in front of us», warns Iezzi. «Autonomous attack agents are already in circulation that use tools, retain memory, and pursue operational objectives on their own. From there, genuinely new threats arise: memory poisoning, tool abuse, swarm attacks. It no longer makes sense simply to remove a single post: we need to target the complete supply chain, verify who buys financial advertising, track and remove campaign variants, and block recurring wallets and IBANs upstream».
With the Maticmind Cyber Team, Iezzi has developed an advanced cyber threat intelligence system integrated with the AI Fraud App Detector platform, which uses automatic assistants and proprietary technologies to continuously monitor app stores and autonomously flag potential banking trojans, reducing analysis times and anticipating the risks linked to financial fraud.