
The advancement of generative artificial intelligence (AI) is raising the sophistication of fraud against KYC identity verification systems.
The digital security infrastructure that underpins financial institutions and crypto-asset protocols faces a new technical challenge following the emergence of artificial intelligence tools specifically designed to compromise their identity verification (KYC) processes.
A developer operating under the pseudonym Jinkusu has begun marketing, in restricted Dark Web forums, a toolkit capable of generating synthetic biometric representations with a technical precision that compromises current remote authentication standards.
According to reports from Dark Web Informer, a tracker of illicit activity, this development is not an isolated incident but rather an evolution in the architecture of cybercrime.
The platform reports that the software in question uses deep neural networks to create deepfakes and manipulate voice in real time. By integrating open-source libraries such as InsightFace, the attackers manage to synchronize facial movements and human gestures with minimal latency, allowing them to bypass the «liveness» detection tests that many banks and Web3 platforms use to ensure that the user behind the screen is a real physical person.
For its part, the cybersecurity firm Vecert Analyzer has also documented that this sophisticated tool combines facial impersonation with voice modulation adapted to evade traditional biometric models. This technical sophistication, therefore, alters the dynamics of operational risk for financial institutions.
By automating identity theft using AI, attackers drastically reduce the cost and time required to create synthetic accounts or access third-party profiles, shifting manual fraud to an industrial-scale model.
Identity in the age of AI: a battle against synthetic forgery
In the realm of digital security, trust in methods based solely on sight and hearing is now being questioned. For years, platforms have requested photographs or short recordings to confirm the identity of their users. However, cybersecurity specialists warn that this practice is no longer sufficient given the rapid advancement of Artificial Intelligence.
Many systems rely on static verification mechanisms, making them easy targets for advanced spoofing tools such as Jinkusu's software. The tool can infiltrate the camera's data stream and replace the original material in real time with AI-generated synthetic images, creating the illusion of a legitimate capture that the receiving system cannot distinguish from a real one.
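To make the weakness of static checks concrete, here is a minimal, purely illustrative sketch of how a server-side randomized liveness challenge works: a random gesture sequence bound to a session nonce, valid only for a short window. The class and gesture names are hypothetical, not taken from any real KYC product; note that this defeats replay of pre-rendered footage but, as the article explains, not real-time synthetic injection.

```python
import secrets
import time

class LivenessChallenge:
    """Toy sketch of a server-issued liveness challenge.

    A random gesture sequence plus a short validity window raises the
    cost of replaying pre-recorded or pre-rendered deepfake footage.
    It does NOT stop real-time injection tools that synthesize the
    requested gestures on the fly.
    """

    GESTURES = ["turn_left", "turn_right", "blink", "smile", "nod"]

    def __init__(self, n_gestures: int = 3, ttl_seconds: int = 10):
        # Nonce binds the response to this one session.
        self.nonce = secrets.token_hex(16)
        # Unpredictable gesture order: a static recording cannot match it.
        self.sequence = [secrets.choice(self.GESTURES) for _ in range(n_gestures)]
        # Short expiry limits the attacker's reaction window.
        self.expires_at = time.time() + ttl_seconds

    def verify(self, nonce: str, observed_sequence: list) -> bool:
        # Nonce, exact gesture order, and timing must all match.
        return (time.time() <= self.expires_at
                and secrets.compare_digest(nonce, self.nonce)
                and observed_sequence == self.sequence)
```

A correct client response reproduces the exact sequence within the window; any mismatch or stale nonce fails verification.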
Faced with this vulnerability, protocol developers and KYC solution providers are forced to reconsider what it truly means to be “verifiable” in the digital environment. Meanwhile, in the cryptocurrency market, the philosophy of decentralization often clashes with the need for robust registration processes to prevent money laundering. An alternative path therefore emerges from this challenge: decentralized identity solutions use the blockchain structure to immutably record the history of an identity.
Thus, instead of relying on an image or voice susceptible to cloning, verification can be shifted to the user's cryptographic signature. Under this model, authenticity is no longer defined by appearance, but by the integrity of the supporting data and the technology that guarantees its permanence. However, while these solutions are becoming standardized across the crypto industry, the criminal ecosystem led by figures like Jinkusu continues to gain ground.
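The signature-based model described above can be sketched as a Schnorr-style challenge-response: the verifier sends a fresh random challenge, and the user proves control of a private key rather than showing a face or voice. This is an illustrative toy with tiny, insecure demo parameters (my own choices, not any real protocol's); production systems use standardized schemes such as Ed25519.

```python
import secrets

# Toy Schnorr-style identification: authenticity comes from knowledge of
# a private key, not from appearance. Demo parameters only -- NOT secure.
P = 4294967291   # small prime (2**32 - 5), illustration only
G = 5            # public base element

def keygen():
    """Return (private key x, public key y = g^x mod p)."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def prove(x: int, challenge_e: int):
    """Answer the verifier's challenge using the private key x."""
    k = secrets.randbelow(P - 2) + 1      # fresh nonce per proof
    r = pow(G, k, P)                      # commitment
    s = (k + challenge_e * x) % (P - 1)   # response
    return r, s

def verify(y: int, challenge_e: int, r: int, s: int) -> bool:
    # Valid iff g^s == r * y^e (mod p), which only holds when the
    # prover knows x with y = g^x -- no biometrics involved.
    return pow(G, s, P) == (r * pow(y, challenge_e, P)) % P
```

Because the challenge is fresh each time, a cloned face or voice is useless: only the holder of the private key can produce a valid response.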
Artificial intelligence is reshaping cybercrime by 2026
Behind the creation of Jinkusu is a developer previously linked to the Starkiller malware. That tool set a trend by integrating a reverse-proxy system capable of mimicking legitimate browsers and capturing credentials without triggering alerts. This experience paved the way for a new model of automated fraud, in which data manipulation and the generation of synthetic faces combine into a sophisticated formula for digital scams.
According to investigations, 2025 saw a considerable reduction in mass phishing attempts against those who operate with digital assets. Statistics reflect an 83% drop in volume, although this decrease does not imply greater security. Artificial intelligence has made attacks more precise and personalized, with scripts designed to drain digital wallets and evade traditional controls. In many cases, scammers use AI-generated voices and faces that mimic trusted individuals to convince victims to authorize fraudulent transactions, which makes each attack more convincing and harder to trace.
The scheme known as «pig butchering» is one of the most devastating forms of this new wave of crime. It combines social engineering techniques with digital tools capable of sustaining realistic conversations for weeks or even months. In 2024 alone, losses associated with this type of fraud reached approximately $5.5 billion, demonstrating the scope of this global threat.
In 2026, the challenge goes beyond the technological sphere, as AI eliminates language barriers and the need for human operators, forcing a rethinking of regulatory and educational strategies to curb the growth of these types of attacks.

