Hackers use Gemini to steal cryptocurrency: Here's how the new AI trick works

A group linked to the North Korean regime has begun using language models like Gemini to generate malicious code and steal digital assets. Google has responded with blocks and new security measures, but the case marks a turning point in the use of artificial intelligence in cyberattacks.

For years, language models like Gemini were seen as productivity tools, capable of drafting emails, summarizing documents, or writing useful code. But in the wrong hands, that same capability can be weaponized. The case of UNC1069, a North Korean hacker group also known as Masan, demonstrates this: it has started using Gemini to generate phishing scripts and steal digital assets.

According to a report from the Google Threat Intelligence Group (GTIG), this group of hackers has actively integrated AI into its operations. It's no longer just about using AI for research or to automate tasks; now they invoke it during the execution of their attacks to generate malicious code in real time, a technique that Google has dubbed "just-in-time code creation".

For researchers, this represents a significant shift from traditional malware, which typically has its logic coded statically. By generating code on the fly, attackers can adapt to the environment, evade detection systems, and customize their attacks based on the target. In other words, malware now “thinks.”


What are Masan hackers after? Wallets, passwords, and multilingual phishing.

The objective of UNC1069, according to the research, is to steal cryptocurrency. To achieve this, the hackers have used the Gemini AI model to execute very specific tasks that enhance their attacks. These include:

  • Locating crypto wallet data: The attackers asked the model to identify file paths and settings associated with digital asset storage applications.
  • Generating access scripts: Gemini has been used to generate scripts that allow access to encrypted storage or extract private keys, automating the creation of malicious code.
  • Writing phishing emails: Equally concerning is the use of this AI model to draft highly persuasive phishing emails, written in multiple languages and targeted at employees of cryptocurrency exchanges and platforms, with the aim of obtaining credentials or infiltrating internal systems.

These types of attacks represent a significant leap forward in the field of cybersecurity. Artificial intelligence not only facilitates the scalability of these operations but also automates complex tasks and reduces the margin of human error. Furthermore, because the code is dynamic and adaptive, these scripts become much more difficult to detect for traditional antivirus systems, which usually rely on fixed patterns to identify existing threats.

The report prepared by Google also highlights that this is not an isolated case. Other malware families are adopting similar approaches with advanced language models. For example, PROMPTFLUX uses Gemini to rewrite its own VBScript code every hour, while PROMPTSTEAL, linked to the Russian APT28 group, leverages the Qwen2.5-Coder model to generate Windows commands in real time. 

In short, this evolution in AI-based attack techniques marks a new challenge for digital security in the field of cryptocurrencies and beyond.


Google reacts: account blocking and new security barriers

Upon detecting this malicious activity, Google acted quickly. According to the GTIG, the company blocked accounts linked to UNC1069 and strengthened access filters on Gemini to prevent misuse of the tool. In addition, the company implemented new monitoring systems designed to detect suspicious patterns in queries made to its artificial intelligence models.

Among the measures adopted are limits on its APIs' capacity to generate sensitive code or execute commands that could pose a risk, as well as more rigorous filters that make it harder for users to create malicious scripts. Google also incorporated comprehensive audits to track how the models are used, allowing more accurate identification of unusual or abusive behavior.
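To make the idea of screening model queries for suspicious patterns concrete, here is a minimal, purely illustrative sketch of a keyword-based filter. This is not Google's actual system; the patterns, scoring, and threshold below are invented for demonstration, and real abuse-detection pipelines are far more sophisticated (classifiers, account-level signals, human review).

```python
import re

# Hypothetical patterns associated with wallet-theft tooling requests.
# In a real system these would be learned signals, not a hand-written list.
SUSPICIOUS_PATTERNS = [
    r"\bprivate key\b",
    r"\bseed phrase\b",
    r"\bwallet\.dat\b",
    r"\bkeylogger\b",
    r"\bexfiltrat\w*",
]

def risk_score(prompt: str) -> int:
    """Count how many suspicious patterns appear in a prompt."""
    text = prompt.lower()
    return sum(1 for pattern in SUSPICIOUS_PATTERNS if re.search(pattern, text))

def should_flag(prompt: str, threshold: int = 1) -> bool:
    """Flag a prompt for review when its risk score meets the threshold."""
    return risk_score(prompt) >= threshold
```

A keyword filter like this is trivially evaded by rephrasing, which is precisely why the report's "just-in-time" code generation is worrying: defenses that key on fixed patterns, whether in prompts or in malware binaries, struggle against adaptive, dynamically generated content.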

However, Google points out that the problem extends beyond its own technologies. Many open platforms, such as those found on Hugging Face, offer unrestricted access to artificial intelligence systems that can be exploited for malicious purposes. This means that even if large companies strengthen their defenses, malicious actors still have multiple avenues to leverage these increasingly advanced technological tools.


What does this mean for the future of cybersecurity?

The UNC1069 case represents a pivotal moment in the evolution of cybersecurity. For the first time, the use of advanced language models has been confirmed in the operational stage of a real cyberattack, not as a mere experiment, but as an active tactic. Google has detected that at least five malware families have begun implementing this technology, highlighting a trend that could change the dynamics of cybercrime.

Given this scenario, essential questions arise for technology companies and regulators: How can we prevent artificial intelligence from being used for illegal purposes? What is the responsibility of the companies that develop these models? Is it feasible to maintain open access to these tools while simultaneously ensuring global security?

The use of AI in attacks also adds complexity to identifying and attributing those responsible. When malicious code is dynamically generated by algorithms, it becomes more difficult to trace its origin and distinguish between legitimate and malicious use.

In the specific case of UNC1069, the objective appears to be to finance North Korean regime activities through the theft of digital assets, but this technique could be adopted by various actors, from criminal organizations to states with hostile intentions. The ability to generate adapted code in real time makes artificial intelligence an efficient tool, but one that also poses significant risks to global cybersecurity.

However, while the advancement of artificial intelligence is introducing new cybersecurity challenges, AI, like other revolutionary technologies in history such as nuclear energy, fiat money, or cryptocurrencies, is not inherently good or bad. Its impact depends on how different actors use it. In other words, while it can foster innovation, efficiency, and groundbreaking solutions in areas like health and education, it can also be misused. Therefore, the key lies in promoting an ethical and regulatory framework that maximizes its benefits and minimizes its risks, thus ensuring responsible and safe development for all.
