Stop AI Deception

THE NUMBER OF FAKE AI TOOLS IS GROWING.

14.05.2025

Against the backdrop of the recent half-joking news story about seven hundred hardworking Indian programmers successfully imitating artificial intelligence, alarming reports are coming in from around the world about a new way to make dishonest money with neural networks.

This time, scammers have decided to cash in on the widespread obsession of undiscerning social media users with new video generators. Fun, flashy videos with amazing effects that rack up millions of views look to naive viewers like a miraculous way to make easy money.

Many viewers try to generate similar videos with advanced neural networks like KLING or Veo3, but quickly realize that desire alone is not enough: even producing something watchable takes multiple attempts, each at a fairly hefty price.

Classifieds sites around the world are already flooded with offers of cheap, "high-quality" video generation. After all, many neural networks can run locally, and owners of powerful PCs and GPUs supposedly feel it's their duty to bring accessible video generation to the masses. The landing pages of these services enthusiastically tout a flashy new startup and "revolutionary" image- and video-generation technology, and often claim partnerships with well-known corporations.

But not all of these ads and hastily assembled interfaces actually deliver such services. Most of these pseudo-services are nothing more than redirects to free models on Hugging Face, generating content at a snail's pace and with matching quality.

Mesmerized by the low price and the high quality of the demo videos, a user registers, enters their card details, and receives one or two generations for free. These generations are actually pulled from KLING, Veo2, or an AI aggregator like KREA, where the scammers hold a paid subscription. When the gullible user, hooked by the low price and impressive quality, pays for additional services, at best they get access to a weak Hugging Face model or a budget AI agent; at worst, they are left with no money and no generations.

The gullibility of social media users has already become a meme, and it only encourages shady individuals to invent new ways to deceive. Beautiful demo videos and assurances from not-so-truthful video bloggers about how easy and profitable it all is lead the naive part of their audience into a state of infatuation that often ends in harsh disappointment.

There are hidden pitfalls as well, such as losing the rights to an already-promoted video that was generated from previously created content. If you create a character that looks even remotely like one previously drawn by someone else, the video will be yours only until it gathers enough views to attract the attention of copyright farmers. They will file complaints and claims, and your video will be blocked and handed over to them.

The shadow copyright market is full of such farmers, who buy up the rights to obscure, old images, videos, sounds, tunes, and logos. Their bots constantly crawl YouTube and TikTok in search of even the slightest resemblance. Now that neural networks churn out exactly what they were trained on, namely old, popular content, viral videos are a juicy target not only for the farmers but also for big corporations, which also file frequent copyright claims.

That’s why it’s worth repeating the old truth about free cheese and the vigilance no adult should lose. Check little-known sites with tools like the Google Transparency Report or ScamAdviser.
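For the technically inclined, such a check can also be scripted. Below is a minimal Python sketch that builds a lookup request for the Google Safe Browsing v4 `threatMatches:find` endpoint, the same service behind the Transparency Report's site-status check. The client name and the suspicious URL are placeholders you would replace yourself, and sending the request requires your own API key.

```python
import json

SAFE_BROWSING_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def build_lookup_request(url: str) -> dict:
    """Build the JSON body for a Safe Browsing v4 threatMatches:find lookup."""
    return {
        # "my-scam-checker" is a hypothetical client identifier; use your own.
        "client": {"clientId": "my-scam-checker", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }

# To actually send the request (needs a real API key and network access):
# import urllib.request
# body = json.dumps(build_lookup_request("http://suspicious-site.example")).encode()
# req = urllib.request.Request(
#     f"{SAFE_BROWSING_ENDPOINT}?key=YOUR_API_KEY",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)  # an empty object means no known threat
```

An empty response does not prove a site is honest, only that it is not yet on Google's blocklists, so treat it as one signal among several.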

If your AI-related business runs into such scams, take a look at Google Mandiant, which ranked high in the 2025 edition of the renowned Forrester Wave report. This Google division has uncovered many cases of fake AI services. Mandiant representatives report another dangerous fraud technique currently spreading across social media and affecting about 3 million users on Facebook and LinkedIn: malware disguised as well-known AI sites and apps that steals user data.

Also, use verified neural network links from our article.

No panic or paranoia—stay calm and thoughtful, and take care of yourself.

Visit the SAID-Test to practice identifying fake generations.

said correspondent🌐

To discuss, create a topic in the community.