2025 Supply Chain Tech Failures: AI & Cloud Lessons

Phucthinh

2025 Supply Chain Tech Failures: AI & Cloud Lessons

The year 2024, as covered extensively by GearTech, witnessed a surge in supply chain attacks, reaching a critical point where thousands, potentially millions, of organizations – including Fortune 500 companies and government agencies – were at risk. This trend continued and escalated into 2025, proving that supply chain vulnerabilities remain a significant threat landscape. These attacks aren't isolated incidents; they represent a systemic weakness in our increasingly interconnected digital world. Threat actors are capitalizing on the leverage offered by compromising single points within complex supply chains, allowing them to inflict widespread damage with minimal effort. This article delves into the most prominent failures of 2025, focusing on the lessons learned from AI and cloud-related incidents, and outlining the critical need for enhanced security measures.

The Gift That Keeps on Giving: Supply Chain Attacks in 2025

For malicious actors, supply chain attacks are exceptionally lucrative. By targeting a single, widely-used component – be it a cloud service, a software maintainer, or an open-source library – attackers can potentially compromise millions of downstream users. The scale and efficiency of these attacks make them particularly appealing, and 2025 saw a dramatic increase in their frequency and sophistication. The interconnected nature of modern software development and deployment means that a vulnerability in one place can quickly cascade into a widespread crisis.

Poisoning the Well: Solana Blockchain Hack

In December 2024, an incident that foreshadowed the challenges of 2025 occurred on the Solana blockchain. Hackers managed to steal up to $155,000 from thousands of smart contract parties. The attack involved injecting a backdoor into a code library used by Solana software developers. Security firm Socket suspects the attackers compromised accounts belonging to the developers of Web3.js, an open-source library, and used this access to introduce the malicious code. This backdoor allowed the attackers to access private keys associated with smart contracts, enabling the theft of funds. This incident highlights the risks associated with relying on third-party libraries and the importance of robust access control measures.

A Rash of Attacks: Notable Examples

The Solana hack was just one example in a relentless wave of supply chain attacks. Other significant incidents included:

  • Go Programming Language Package Seeding: A malicious package, with a name similar to a legitimate one (a “typosquatting” attack), was seeded on a Google-run mirror proxy. Over 8,000 other packages depend on this targeted package, amplifying the potential impact.
  • NPM Repository Flooding: 126 malicious packages were uploaded to the NPM repository, downloaded over 86,000 times. These packages were automatically installed through Remote Dynamic Dependencies, bypassing traditional security checks.
  • E-commerce Backdooring: Over 500 e-commerce companies, including a $40 billion multinational, were compromised through the hacking of three Magento software developers: Tigren, Magesolution (MGS), and Meetanshi.
  • Open Source Package Compromise: Dozens of open-source packages, collectively receiving 2 billion weekly downloads, were updated with code designed to transfer cryptocurrency to attacker-controlled wallets.
  • tj-actions/changed-files Breach: The tj-actions/changed-files component, used by over 23,000 organizations, was compromised.
  • Toptal Package Backdooring: Multiple developer accounts on the npm repository were breached, leading to the backdooring of 10 packages related to talent agency Toptal, downloaded approximately 5,000 times.

Memory Corruption, AI Chatbot Style: The Rise of LLM Exploits

Beyond traditional software supply chains, 2025 saw a new wave of attacks targeting AI chatbots, particularly those powered by Large Language Models (LLMs). The most damaging attacks focused on poisoning the long-term memories of these models. Similar to supply chain attacks, compromising an LLM’s memory can trigger a cascade of malicious actions. This represents a fundamental shift in the attack surface, moving from code vulnerabilities to the manipulation of data and knowledge.

Poisoning LLM Memories: ElizaOS and Google Gemini

One attack successfully instructed a cryptocurrency-focused LLM, ElizaOS, to update its memory with fabricated events. ElizaOS, designed to execute blockchain transactions based on predefined rules, was unable to distinguish between fact and fiction. Researchers demonstrated that they could manipulate the agent’s future behavior by feeding it false information, such as instructing it to redirect funds to attacker-controlled wallets.

Independent researcher Johan Rehberger achieved a similar result with Google Gemini, planting false memories that lowered the chatbot’s security defenses, allowing it to invoke sensitive tools like Google Workspace when processing untrusted data. These false memories proved persistent, enabling repeated exploitation. This highlights the vulnerability of LLMs to data manipulation and the potential for long-term compromise.

Prompt Injection and Code Manipulation

Further attacks leveraged prompt injection to manipulate AI chatbots. A prompt injection attack against GitLab’s Duo chatbot successfully added malicious code to a legitimate code package, and even exfiltrated sensitive user data. Another attack targeted the Gemini CLI coding tool, allowing attackers to execute commands – including destructive ones like wiping hard drives – on developers’ computers. These incidents demonstrate the power of prompt injection to bypass security measures and execute arbitrary code.

AI as Bait and Hacking Assistant

LLMs were also used to *facilitate* attacks. For example, individuals involved in stealing and wiping government data allegedly used an AI tool to learn how to clear system logs, attempting to cover their tracks. In another case, a hacker tricked an employee of The Walt Disney Company into running a malicious AI image-generation tool. A breach at Salesloft Drift AI exposed Google Workspace credentials, leading to data theft. These examples illustrate how attackers are leveraging AI to enhance their effectiveness and stealth.

CoPilot's Data Leak: A Reminder of Vulnerabilities

The vulnerability of LLMs wasn't limited to malicious manipulation. Microsoft’s CoPilot was caught exposing the contents of over 20,000 private GitHub repositories from companies like Google, Intel, Huawei, and even Microsoft itself. Despite attempts to remove the repositories from search results, CoPilot continued to expose them, demonstrating the challenges of controlling data access within LLM-powered tools.

Meta and Yandex: Privacy Breaches and Covert Tracking

GearTech reported on a significant security story involving Meta and Yandex, both of which were found to be exploiting an Android weakness to de-anonymize visitors and track their browsing history for years. This covert tracking bypassed Android’s sandboxing and browser defenses like state and storage partitioning, raising serious privacy concerns. The ability to bypass these security measures demonstrates the lengths to which companies will go to collect user data, and the importance of robust privacy protections.

2025: The Year of Cloud Failures

The original vision of the internet was a decentralized platform resilient to catastrophic events. However, our increasing reliance on a handful of cloud providers has undermined this objective. 2025 witnessed several major cloud outages that disrupted services worldwide.

Amazon Web Services Outage: A Single Point of Failure

The most impactful outage occurred in October, when a software bug within Amazon’s network took out vital services for 15 hours and 32 minutes. A race condition in the software monitoring load balances caused delays in updating DNS configurations, leading to a cascade of errors and ultimately a complete network collapse. This incident underscores the risks associated with centralized cloud infrastructure and the importance of redundancy and fault tolerance.

Cloudflare, Azure, and Beyond: Widespread Disruptions

AWS wasn’t alone. Cloudflare experienced a mysterious traffic spike that slowed down much of the internet, followed by a second major outage. Azure also suffered an outage in October, impacting its customers. These incidents highlight the systemic risks associated with relying on a limited number of cloud providers.

Honorable Mentions and Future Considerations

Other notable security stories from 2025 include:

  • Deepseek iOS App Vulnerability: Code in the Deepseek iOS app sent unencrypted traffic to Bytedance (TikTok’s parent company), exposing data to interception and tampering.
  • Apple Chip Bugs: Bugs were discovered in Apple chips that could potentially leak secrets from services like Gmail, iCloud, and Proton Mail.
  • Signal’s Quantum-Resistant Upgrade: The Signal messaging app underwent a major overhaul to withstand attacks from quantum computers, demonstrating a proactive approach to future security threats.

The failures of 2025 serve as a stark warning. Organizations must prioritize supply chain security, invest in robust AI security measures, and diversify their cloud infrastructure to mitigate risk. The lessons learned this year will be crucial in building a more resilient and secure digital future.

Readmore: