
NEUROSCAMMERS ATTACK. NEW LEVELS OF CYBERFRAUD.
16/05/2025
Updated on 25/09/2025
Cybercrime is nothing new, and common scam schemes like Nigerian letters or entering banking details on fake websites are already familiar to many. But inventive criminals don’t sit idle: they keep building new schemes around the latest technologies. Automated fake mailings and conversations with bots no longer surprise anyone, especially since inflexible bots often lose the thread of conversation, the “scheme” falls apart, and the “victim” walks away.
Old schemes with new audio, photo, and video evidence.
The automation of communication with potential “victims” peaked between 2015 and 2020, when messenger bots lured trusting people to fraudulent websites and crypto exchanges with fake gifts and bonuses, coaxing out their data and passwords. Account and mobile number takeovers were very common at the time. Over time the situation stabilized, thanks largely to video bloggers and media figures fighting the problem. Public statements by opinion leaders helped bring some internet users to their senses, and they started trusting unknown numbers and people less.
But what if the person on the other end is someone familiar? In that case, trust is quite natural, since the user believes they are communicating with someone they have known for a long time.
Fake accounts of public, and not so public, people have been forged before. But neural network technologies, which had matured by 2023, expanded the possibilities of forgery and deception, handing everyone the tools to impersonate others and create full-fledged content: audio recordings, photos, and videos of nonexistent subjects and situations.
The Age of Deepfake.
Artificial intelligence keeps sharpening its ability to create realistic content. The principle of training neural networks is simple: take everything available on the internet, feed it to the model, and teach it to reproduce that information accurately.
As a result, what neural networks generate today closely resembles the content that already exists on the web in one form or another. Text generation mimics human speech, sounds replicate the sounds of our world, and photos and videos imitate the stock images and social network posts the artificial intelligence "saw" during training.
If your image and your data have ever been publicly available on the internet, then they have also been added to the artificial intelligence library, and it can generate them upon request.
Of course, owners of public neural networks care about security and censorship. But! Neural network capabilities are expanding, parameters are growing, and with them evolves prompt engineering, which allows bypassing many restrictions and blocks.
Moreover, smaller models are available offline. They can be installed on a home computer, fine-tuned for specific needs, and used as desired. Examples are LoRA and DreamBooth, popular fine-tuning techniques for the Stable Diffusion image generator, which produce quite good photorealism on average hardware. These models and dozens of others can be easily downloaded from Hugging Face.
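To show just how low the barrier is, here is a minimal sketch of running such a model locally, assuming the Hugging Face diffusers library is installed; the base model and the LoRA repository names are illustrative placeholders, not recommendations.

```python
# A minimal sketch of running a local image model with a LoRA add-on,
# assuming the Hugging Face diffusers library (pip install diffusers torch).
# The model and LoRA repository names below are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # base model downloaded from Hugging Face
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                   # an average gaming GPU is enough

# Attach LoRA weights fine-tuned for a specific style or subject (hypothetical repo name).
pipe.load_lora_weights("some-user/example-photoreal-lora")

image = pipe("a photorealistic street scene at dusk", num_inference_steps=30).images[0]
image.save("generated.png")
```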
And where there is an image, there is video; after all, video is just a sequence of images, typically 24 to 30 per second. For example, the neural network VideoCrafter 2 produces quite decent videos on mid-range computers. Yesterday’s cryptocurrency miners didn’t sell off their powerful used video cards but quickly switched to using them for artificial intelligence work. Almost every week, new online interfaces appear, offering users image or video generation for a small fee. Many bots connected to various neural networks via API have spread across messengers and social networks. The number of AI models available for download from platforms like Hugging Face is impressive.
Accessible to Everyone.
Currently, video forgery is the most difficult task; neural technologies still need some time to reach convincing realism. Therefore, scammers more often use voice and photo fakes the old-fashioned way.
Forging photos was possible before with specialized editing programs like Photoshop, but good quality required special knowledge and skills, which limited the opportunities of a wide audience. In 2024 everything changed: now anyone can use neural networks to create high-quality photo fakes.
The same applies to voice forgery. Today, there are dozens of online interfaces and local models capable of replacing your recorded voice with that of a politician or actor. With the right approach and some effort, the differences become imperceptible. Thus, anyone can receive a phone call from a fake mayor of their city or a favorite movie actor with some request or business proposal. We all know such mostly harmless pranks and laugh about them.
There are known cases of deepfake company directors ordering accountants to transfer money to fake accounts via video calls, as well as a funny case where an employee of a well-known company was invited to a supposed work video conference with deepfake employees and management, some of whom just silently sat still.
For example, fake videos of Elon Musk allegedly promoting cryptocurrency, software, and social projects are quite common. Scammers pay no less attention to other famous personalities.
Smaller scammers similarly sell counterfeit products by placing video deepfakes with media personalities as advertisements.
Another common scheme is deception on dating sites. A pretty girl chats confidently online and even promises a meeting, but asks to pay for something like a movie ticket or taxi. A trusting guy happily agrees to help and pays for fake services on fake websites. Or he invests money in cryptocurrency projects of the “brother” of the girl he likes. As a result, he might lose all the money on his card if he enters his data on a scammer’s site. With neural networks, this scheme became much easier. Generating convincing photos with urban backgrounds and a pleasant voice on the phone is no longer difficult. It’s easier to convince a trusting “victim.” Messaging on the site and in messengers is easily automated. Now it can be managed by artificial intelligence in the form of an AI agent. Thanks to automation, scammers scale up their activities, thus increasing overall effectiveness.
Even more dangerous is the forgery of voices of relatives and close ones, as well as their photo images. For such calls, scammers usually choose nighttime to catch the “victim” off guard. They may send fake photos via messenger and then, during the call, use the voice of a relative to report some problems that the “victim” can solve by transferring a certain amount of money to the provided account. The scheme is old, but with photo evidence and a familiar voice, it’s not so hard to persuade. It is enough to show the fake photo of the described situation, and the person may believe it.

Calls pretending to be from bank employees and government agencies have become more frequent again. Network telecom operators block billions of such calls and numbers annually as part of anti-fraud efforts.
Besides experienced scammers, ordinary citizens also attempt fraud. There have been many attempts to obtain insurance payouts using fake photos of damaged cars, accidents, natural disasters, and medical conditions.
People try to deceive colleagues and bosses by posting fake photos of injured limbs or sick pets on social networks. There are also numerous known attempts to deceive government agencies with fake content: people post fake photos of neighbors’ houses in public groups, showing prohibited plants growing in the yard, piles of garbage, and even corpses. There are hundreds of similar examples of malicious actions using neural networks. It should be understood that such jokes or deliberate actions are punishable by law and can end with a real prison sentence, not a fake one.
The American publication FOX5 conducted an investigation, which revealed that "scammers use emails to deceive their recipients. A person receives an ordinary-looking email with no links or attachments. But if they ask Gemini to briefly summarize its content, everything quickly becomes frightening. The summary will contain urgent warnings about a hacked password. They will be prompted to call support to resolve the issue. However, the warnings, the phone number, even the person on the other end of the line — all of this is fake. This is a scam!"
How to Resist.
The situation is not as bleak as it may seem. As technology develops, so does the average user. A person who understands the deepfake situation will not trust a familiar-sounding voice calling from an unknown number, nor photo or video evidence from unfamiliar accounts. A knowledgeable person will not open unknown links and will take care of their banking security, at the very least by using separate SIM cards and bank cards.
From an ethical perspective, I would stress the role of opinion leaders and media resources: educating their communities, developing information literacy and critical thinking, which together can resist the irrational impulse to trust strangers online.
Development of the C2PA Organization and Google's SynthID.
In February 2021, as part of the Joint Development Foundation project, an alliance was formed consisting of Adobe, Arm, Intel, Microsoft, BBC, Meta, Amazon, OpenAI, Publicis Groupe, Sony, and Truepic, a recognized cryptographic verifier of digital media authenticity. It was named C2PA, the Coalition for Content Provenance and Authenticity. By 2022, the alliance had developed an open technical standard for verifying the origin of digital content, including images and videos, even those created using neural networks.
At the heart of this standard is a kind of digital signature called a manifest. When embedded into digital cameras, the manifest signs the digital content (photo or video) at the moment of creation, thus attesting to its authenticity. After signing, the asset (the file together with its signature) is automatically registered over the network in the manifest database, so its uniqueness and authorship can later be confirmed.
The C2PA specification defines a data structure in formats such as CBOR, JSON, JUMBF, and others, which the manifest assigns to the file. In addition to hashes and watermarks, the file owner can add biometric data (such as a fingerprint), which indicate file ownership. Protocols for transmitting this data to the database have been established, along with other technical aspects, including all cases of content editing and rights transfer.
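To make the idea more tangible, here is a conceptual sketch of what a cryptographic signature over a media file buys you: hash the asset, sign the hash with a private key, and later verify it with the public key. It uses the Python cryptography package and illustrates the principle only; it is not the actual C2PA manifest format with its JUMBF containers, certificate chains, and edit history.

```python
# Conceptual sketch of signing and verifying a media file, assuming the
# 'cryptography' package (pip install cryptography). This illustrates the idea
# behind a manifest signature only; it is NOT the actual C2PA data structure.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def file_digest(path: str) -> bytes:
    """Hash the asset so the signature covers its exact byte content."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

# The "camera" signs the photo at the moment of creation.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(file_digest("photo.jpg"))

# Later, anyone holding the public key can check whether the file was modified.
try:
    public_key.verify(signature, file_digest("photo.jpg"))
    print("Signature valid: the file matches what was signed.")
except InvalidSignature:
    print("Signature invalid: the file was altered after signing.")
```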
An interesting and important aspect here is the increased evidentiary value of media materials signed with a manifest in legal proceedings, as their authenticity is already verified.
Google DeepMind has also joined the fight for unique content by integrating its SynthID watermarking technology into its generators of text, images, video, and audio.
Simply put, the technology matches randomly generated key numbers (for example: 2, 45, 28, 598, 724) against a fixed number of elements checked in each block of the file (words in text, pixels, or frames), typically five. During verification these blocks are overlaid to form a matrix, which the neural network compares with the original matrix and returns a result.
Accordingly, the greater the number of elements being verified, the higher the reliability of verification, but also the greater the vulnerability and potential for editing. More about the technology can also be seen in this video.
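The general principle behind such keyed statistical watermarks can be illustrated with a toy sketch (my simplified illustration, not Google’s actual SynthID algorithm): a secret key plus the local context yields a pseudo-random score for each element, the generator quietly favors high-scoring elements, and the detector checks whether a suspicious text scores above chance.

```python
# Toy sketch of a keyed statistical text watermark detector.
# A simplified illustration of the general principle, not SynthID itself.
import hashlib

SECRET_KEY = b"demo-key"  # hypothetical watermarking key

def g_score(prev_word: str, word: str) -> float:
    """Pseudo-random score in [0, 1) derived from the key and the local context."""
    h = hashlib.sha256(SECRET_KEY + prev_word.encode() + word.encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def mean_score(text: str) -> float:
    words = text.lower().split()
    scores = [g_score(a, b) for a, b in zip(words, words[1:])]
    return sum(scores) / len(scores) if scores else 0.0

# Unwatermarked text should average around 0.5; a watermarked generator that
# preferred high-scoring words during sampling would push this average up.
if mean_score(open("suspect.txt").read()) > 0.6:   # illustrative threshold
    print("Statistical trace of the watermark detected.")
else:
    print("No watermark trace found.")
```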
C2PA and Google are not directly fighting the spread of fake content, but within their capabilities, they offer an effective solution — the protection of real and original content.
Google has developed and continuously updates the concept of the Frontier Safety Framework (FSF) — a comprehensive approach to identifying and mitigating serious risks associated with advanced neural network models.
There are also a number of detectors for checking text, though the quality of their work is quite questionable. One of the best-known tools is GPTZero. It analyzes text based on perplexity and burstiness, that is, how predictable the text is and how much its sentences vary. It is used in schools and universities to check the “humanness” of student work.
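To show what “perplexity” means in practice, here is a rough sketch that scores sentences with GPT-2 as a stand-in language model; this shows only the underlying principle, not GPTZero’s actual implementation.

```python
# Rough sketch of the perplexity idea behind tools like GPTZero, using GPT-2
# as a stand-in scoring model (pip install transformers torch). This is not
# GPTZero's actual implementation, only the underlying principle.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity = the model finds the text more predictable."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))

sentences = ["The cat sat quietly on the old wooden porch.",
             "Quantum jellyfish negotiate umbrella treaties on Tuesdays."]
print("per-sentence perplexity:", [round(perplexity(s), 1) for s in sentences])
# "Burstiness" is, roughly, how much these per-sentence scores vary:
# human writing tends to swing more than uniformly smooth AI output.
```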
Another popular service is ZeroGPT. It outputs a percentage estimate of how likely the text was written by AI. However, users often point out its sensitivity to template phrases and its tendency to overestimate.
An AI text detector has also been integrated into the academic plagiarism checking system Turnitin. Although it is actively used in educational institutions, the level of false positives remains high, especially when analyzing short texts.
In the corporate environment, the Sapling AI Detector is popular. It is designed to analyze business correspondence and documents for AI-generated content.
The neural network text detector SMODIN also offers training in working with texts.
The platform for SEO copywriters, bloggers, and agencies Content at Scale AI Detector allows large volumes of text to be checked for signs of AI generation. It also offers a tool for paraphrasing AI text so that it can pass detection. The same is offered by the service BYPASSGPT.
The service QUILLBOT, which specializes in copywriting, also offers text recognition, paraphrasing, plagiarism detection, and more. It provides browser and operating system extensions with free features included.
There are also plenty of tools for evaluating image and video content. For example, the digital safety service Hive offers AI content moderation tools through Hive Moderation, including detection of generated text, images, videos, and audio.
The organization Sensity AI initially focused on deepfake detection. Today, it offers services for detecting synthetic content, including AI-generated text and video. It is used in cybersecurity and by law enforcement agencies.
The Roboflow service focuses on computer vision and trains YOLO models, including through the no-code cloud training platform Ultralytics HUB, to detect generated images, logos, and memes. It is used in commerce and media.
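For a sense of what running such a detector looks like, here is a minimal inference sketch with the ultralytics package; the generic pretrained weights below are a placeholder, since spotting generated logos or memes would require a model trained on that kind of data.

```python
# Minimal sketch of running a YOLO detector with the ultralytics package
# (pip install ultralytics). The weights file below is a generic pretrained
# model; detecting generated logos or memes would need custom-trained weights.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")             # small pretrained model, downloaded automatically
results = model("suspicious_image.jpg")

for result in results:
    for box in result.boxes:
        label = model.names[int(box.cls)]
        print(f"{label}: confidence {float(box.conf):.2f}")
```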
The platform WINSTON AI also offers scanning of texts, images, and websites to detect plagiarism and AI-generated content. It provides a HUMN-1 content certification service — a verification that the text was created by a human. You get 2,000 credits to try the service for free for 14 days.
One of the beacons of current cyber threats across the internet is the FEEDLY service — a platform that monitors and collects information about cyberattacks and cyber threats worldwide. It allows tracking both general information and industry-specific threats.
Stripe, an advanced payment processor for businesses, uses its machine-learning service Radar to detect fraud in online payments. The dashboard gives users the corresponding tools for configuring transaction policies, strong authentication via the 3D Secure protocol (required by PSD2 in Europe), and more.
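As an illustration, here is a minimal sketch of requesting 3D Secure on a payment through the official Stripe Python library; the API key and amount are placeholders, and Radar’s fraud scoring runs automatically on Stripe’s side for every such payment.

```python
# Minimal sketch of requesting strong authentication (3D Secure) for a card
# payment with the official Stripe Python library (pip install stripe).
# The API key and amount are placeholders.
import stripe

stripe.api_key = "sk_test_..."  # placeholder test key

intent = stripe.PaymentIntent.create(
    amount=5000,                     # amount in the smallest currency unit (50.00)
    currency="eur",
    payment_method_types=["card"],
    payment_method_options={
        "card": {"request_three_d_secure": "any"}  # ask for 3DS whenever possible
    },
)
print(intent.id, intent.status)
```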
A team of scientists from Cornell University (USA) - Peter F. Michael and colleagues - have created a technology for encoding very fine, noise-like modulations of scene illumination, which creates an informational asymmetry favorable for verifying images and videos. Simply put, the technology adds a temporal watermark to any video recorded with coded illumination. The watermark encodes the raw scene image, which appears illuminated only by the coded lighting. This illumination is invisible to humans, significantly complicates the creation of such a fake scene by attackers in neural networks, but is easily recognized by the technology.
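The general idea of such a temporal lighting code can be shown with a toy numpy sketch (my simplified illustration, not the Cornell team’s actual method): a tiny keyed brightness modulation is added frame by frame at capture time, and verification correlates the recorded brightness with the known code.

```python
# Toy sketch of a temporal illumination watermark, illustrating the general
# idea only (this is not the Cornell team's actual algorithm).
import numpy as np

rng = np.random.default_rng(seed=42)                # the secret lighting code (key)
num_frames = 300
code = rng.choice([-1.0, 1.0], size=num_frames)     # per-frame modulation pattern
amplitude = 0.005                                    # ~0.5% brightness: invisible to people

def capture(scene_brightness: np.ndarray) -> np.ndarray:
    """Simulate recording a scene lit by the coded illumination."""
    return scene_brightness * (1.0 + amplitude * code)

def verify(frames_brightness: np.ndarray) -> float:
    """Correlate observed brightness fluctuations with the known code."""
    fluctuation = frames_brightness / frames_brightness.mean() - 1.0
    return float(np.corrcoef(fluctuation, code)[0, 1])

noise = rng.normal(0.0, 0.1, num_frames)             # ordinary sensor noise
real = capture(np.full(num_frames, 120.0)) + noise   # genuine footage carries the code
fake = np.full(num_frames, 120.0) + noise            # generated footage does not
print("real footage correlation:", round(verify(real), 2))   # close to 1.0
print("fake footage correlation:", round(verify(fake), 2))   # near 0.0
```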
China has also joined the fight against online disinformation. The Cyberspace Administration of China (CAC) and three other Chinese internet agencies introduced a set of rules starting September 1, 2025, requiring content generation service providers to label materials generated by neural networks as such — either explicitly or through metadata embedded in each file. In addition, China is massively removing content from platforms that published it if it falls under the definition of disinformation.
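One simple way a generation service could attach such a label inside a file is through its metadata; the sketch below uses the piexif package, and both the field and the label text are my illustrative choices rather than the format mandated by any regulation.

```python
# Illustrative sketch of embedding and reading an "AI-generated" label in
# image metadata with the piexif package (pip install piexif). The label text
# and field choice are examples, not a regulatory format.
import piexif

LABEL = b"AI-generated content"

# A generation service could write the label at export time.
exif_bytes = piexif.dump({"0th": {piexif.ImageIFD.ImageDescription: LABEL}})
piexif.insert(exif_bytes, "generated_image.jpg")   # writes EXIF into the JPEG in place

# A platform could then check the label before publishing.
exif = piexif.load("generated_image.jpg")
description = exif["0th"].get(piexif.ImageIFD.ImageDescription, b"")
print("labelled as AI-generated:", description == LABEL)
```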
The company Apollo Research proposes its own vision of language model safety. The concept of "scheming" in language models, when a model covertly pursues a goal of its own, has become a problem for industry leaders. Simply training a model is no longer considered a major achievement in the field; adding optimization settings and ensuring model scalability has become a priority for researchers. Acting in this direction, Google has also created its own model, VaultGemma, trained with differential privacy, which adds noise to protect confidential data.
All of these people — and many others — are trying to create tools for our digital safety. I sincerely hope they succeed.
Even the coldest minds in 2025 have begun to consider implementing this approach to address the looming problem. To solve it at that level, the most difficult task of today’s para-corporate society must be accomplished: uniting not only software developers but also hardware manufacturers around this issue.
But until that happens, it is the consumer, the user, the person who stands alone against the wave of diverse media content flooding social networks. And many people believe what they see. They write about it sincerely in comments and reviews. Which means they are vulnerable to the stream of misinformation now directed at them.
Here's an example of a Google Veo3-generated video that, on May 26, 2025, stunned many not only with its realism, but also because it gained millions of views and was widely perceived as real.
Many people believed this video to be real, which may indicate that technology has reached a critical threshold in human perception—beyond which it becomes nearly impossible to distinguish generated content from reality.
An interesting incident also occurred with the police department of the American city of Westbrook. An employee of the department posted a photo online, not noticing that it had been significantly altered by a neural network. Only the vigilance of Westbrook residents helped uncover the background (the link is available only in the U.S.).
Some customers of the online marketplace eBay were deceived by AI-generated images of nonexistent plants: "Cat’s Eye" flowers that resemble animals. Scammers sold seeds of these supposedly real plants, posting AI-generated images in their listings.

Allow me to offer a few tips that may save your financial and psychological well-being.
1. Always use a separate SIM card for banking services. Don’t give its number to anyone.
2. Don’t open unknown files right away — check them first using a specialized online service like Hybrid Analysis.
3. Don’t immediately click on unknown links someone sent you online. First, check them using special services like VirusTotal or URLVoid (a small script for such a check is sketched after this list).
4. Don’t trust photos or videos you see on social media. Double-check the information on major news websites, or at least wait a day before acting on such content.
5. Always verify information received in calls from unknown numbers. If it’s about your friends or family, check with them directly — even if it’s late at night. Scammers often flood the real person’s phone with spam calls and texts to prevent you from reaching them. Try to contact the person through other means — messaging apps or mutual acquaintances.
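For those who prefer to automate step 3, here is a small sketch of checking a link against VirusTotal’s public v3 API; you need your own free API key, and both the key and the URL below are placeholders.

```python
# A small sketch of checking a link against VirusTotal's public v3 API before
# clicking it (pip install requests). The API key and URL are placeholders.
import base64
import requests

API_KEY = "YOUR_VIRUSTOTAL_API_KEY"
url_to_check = "http://example.com/suspicious-link"

# VirusTotal identifies a URL by its unpadded, URL-safe base64 form.
url_id = base64.urlsafe_b64encode(url_to_check.encode()).decode().rstrip("=")

response = requests.get(
    f"https://www.virustotal.com/api/v3/urls/{url_id}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
if response.ok:
    stats = response.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"malicious: {stats['malicious']}, suspicious: {stats['suspicious']}")
else:
    print("No report yet; submit the URL for scanning or check it manually.")
```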
Always stay calm: you should not torment yourself with fear of every unknown number. Still, standard precautions will soon no longer be enough. Biometric data leaked through harmless apps can become a tool for breaking into your finances. Fun video masks on social networks can capture the biometrics of your face, which can then be used for online registration with banks and government institutions.
Distinguishing realistic generation from real video is becoming more and more difficult. Most institutions simply do not have the resources or specialists capable of quickly detecting fakes. It is hard, but possible. For example, experienced police officers quickly recognize forged documents because they see them often. Minor neural network hallucinations leave barely noticeable traces in generated results, and a specialist can spot them.
We suggest you practice a little in our SAID TEST, learning to determine where generation ends and real photos begin. These skills will help you notice deception in difficult situations and will be very useful in the future that awaits us.
Following these simple rules will help you avoid falling for scammers’ tricks. I sincerely wish that scammers always stay away from you. Take care of yourself and your loved ones.
said-correspondent🌐
Discussion in the topic with the same name in the community.