BlockBeats News, November 7th — a recent report from Google's Threat Analysis Group shows that at least five new malware families are exploiting Large Language Models (LLMs) to dynamically generate and conceal malicious code.
Among them, the North Korea-linked hacking group UNC1069 was found to be using Gemini to probe wallet data and craft phishing scripts with the aim of stealing digital assets. These malware families employ "live code creation" techniques, calling external AI models such as Gemini or Qwen2.5-Coder at runtime to evade traditional security detection. Google stated that it has disabled the relevant accounts and strengthened safeguards on model access. (Decrypt)
