The rapid adoption of AI-powered coding assistants in cryptocurrency and blockchain development is creating an entirely new category of security vulnerabilities. Security researchers have identified a growing threat they call slopsquatting — a technique where malicious actors create software packages with names and descriptions specifically crafted to match the outputs of AI coding agents, tricking both automated tools and their human operators into installing compromised dependencies in crypto projects.
TL;DR
- Slopsquatting exploits AI coding agent outputs to distribute malicious npm and pip packages
- North Korean APT groups are actively crafting packages designed to appeal to AI-assisted development tools
- The technique combines the scalability of AI with the trust developers place in automated suggestions
- Crypto and Web3 projects are prime targets due to their heavy reliance on open source dependencies
- Security firms predict AI-powered attacks will be among the top threats for the remainder of 2026
What Is Slopsquatting?
The term slopsquatting draws a parallel to typosquatting, the well-known attack where malicious packages are published with names similar to popular libraries, relying on typographical errors by developers. Slopsquatting operates on a similar principle but targets a fundamentally different vulnerability: the tendency of large language models and AI coding assistants to confidently suggest packages that do not actually exist.
When a developer asks an AI assistant to help implement a specific feature — say, a crypto wallet integration or a DeFi smart contract interaction — the AI may suggest importing a package that sounds plausible but has never been published. Attackers monitor these AI outputs, identify commonly suggested phantom packages, and then publish malicious versions under those names. When the next developer asks the same AI for help, the suggested package now exists, and the installation proceeds without raising any red flags.
Why Crypto Projects Are in the Crosshairs
Cryptocurrency and Web3 projects are particularly vulnerable to slopsquatting for several reasons. First, the crypto development ecosystem is heavily dependent on open source packages from npm and PyPI registries. A typical DeFi application might depend on hundreds of third-party libraries, creating a large attack surface. Second, the fast-paced nature of crypto development means teams often rely heavily on AI assistants to accelerate coding, increasing the likelihood of accepting AI-suggested packages without thorough verification.
Third, the financial stakes are enormous. With Bitcoin trading around $66,691 and Ethereum at $2,023 at the end of March 2026, the total value locked across DeFi protocols represents billions of dollars in potential loot. A single compromised dependency in a major protocol could provide attackers with access to private keys, transaction signing mechanisms, or user wallet connectivity — the same kind of access that enabled the devastating $292 million KelpDAO exploit and the $285 million Drift Protocol breach that rocked the industry in April.
The North Korea Connection
Security researchers have linked the slopsquatting campaign to North Korean advanced persistent threat groups. These are the same actors responsible for the Axios npm compromise that came to light in late March 2026, where a maintainer was socially engineered into installing malware disguised as a Microsoft Teams diagnostic tool. The Axios attack demonstrated that DPRK operatives are willing and able to exploit the software supply chain at multiple levels simultaneously.
The slopsquatting campaign represents a complementary strategy. While the Axios attack targeted an existing widely-used package through maintainer compromise, slopsquatting creates entirely new malicious packages designed to be discovered and installed through AI-mediated development workflows. Together, these approaches give attackers coverage across both established and nascent dependency paths.
According to data from TRM Labs, North Korean actors are responsible for 76 percent of all cryptocurrency hack losses in 2026, a staggering proportion that reflects both the resources and the strategic sophistication of these operations. Security researcher Taylor Monahan has documented at least 40 DeFi platforms that have been infiltrated by North Korean IT workers, often through supply chain compromises that began months or even years before the actual theft.
The AI Security Paradox
The situation creates a painful irony for the crypto industry. AI tools promise to improve code quality, catch vulnerabilities, and accelerate development — benefits that are desperately needed in a sector that lost over $52 million to exploits in March 2026 alone, according to PeckShield data. Yet these same AI tools are creating new attack vectors that traditional security models were never designed to address.
CertiK, a leading blockchain security firm, has identified AI-powered attacks — including slopsquatting, AI-generated phishing, and automated vulnerability discovery — as among the top emerging threats for 2026. The firm predicts that real-time deepfakes, AI-enhanced social engineering, and supply chain compromises facilitated by AI coding agents will drive some of the largest hacks of the year.
Defending Against AI-Mediated Attacks
Protecting against slopsquatting and related AI-mediated supply chain attacks requires a multi-layered approach. Development teams should implement strict dependency verification policies, requiring manual review of any package before it is added to a project. Automated scanning tools that check package age, download counts, maintainer history, and code content can help identify suspicious packages that AI assistants might suggest.
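Those signals combine naturally into a simple scoring heuristic. The sketch below shows one possible shape: each risk signal named above (package age, download counts, maintainer history, plus install scripts as a common payload vector) adds to a suspicion score, and any dependency above a threshold blocks the build until a human reviews it. The field names and weights are assumptions for illustration, not a real scanner's schema.

```python
def suspicion_score(pkg: dict) -> int:
    """Heuristic risk score for one dependency; higher = more suspicious.

    Field names (`age_days`, `weekly_downloads`, ...) are illustrative --
    in practice they would be populated from registry metadata.
    """
    score = 0
    if pkg.get("age_days", 0) < 30:
        score += 3  # brand-new packages are the slopsquatting sweet spot
    if pkg.get("weekly_downloads", 0) < 500:
        score += 2  # almost nobody else depends on it
    if pkg.get("maintainer_account_age_days", 0) < 60:
        score += 2  # throwaway publisher account
    if pkg.get("has_install_scripts"):
        score += 3  # install hooks are a common malware delivery vector
    return score

def scan_manifest(deps: list[dict], threshold: int = 4) -> list[str]:
    """Return names of dependencies that should block CI until reviewed."""
    return [d["name"] for d in deps if suspicion_score(d) >= threshold]
```

A check like this is cheap enough to run on every dependency change, which matters because slopsquatted packages tend to score high on several of these signals at once.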
For crypto projects specifically, the use of hardware security modules for key management, multi-signature authorization for critical operations, and regular security audits that include dependency analysis are essential. The industry must also invest in building curated, verified package registries for common crypto development tasks, reducing reliance on the unregulated npm and PyPI ecosystems.
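The curated-registry idea can also be enforced at install time. The sketch below is one hypothetical way to fail closed: every dependency must appear in a reviewer-maintained allowlist with a pinned version and a SHA-256 hash of the exact audited tarball, so an AI-suggested package that was never reviewed — or a tampered copy of a reviewed one — is rejected. The allowlist format here is an assumption for illustration, not an existing tool.

```python
import hashlib

def verify_against_allowlist(allowlist: dict, name: str, version: str,
                             tarball: bytes) -> None:
    """Reject any package not pinned in a curated allowlist.

    `allowlist` maps package name -> (pinned version, sha256 hex digest
    of the exact tarball a reviewer audited). Everything else fails
    closed, including never-reviewed AI-suggested packages.
    """
    if name not in allowlist:
        raise ValueError(f"{name!r} is not in the curated registry")
    pinned_version, pinned_sha256 = allowlist[name]
    if version != pinned_version:
        raise ValueError(f"{name} {version} != pinned {pinned_version}")
    if hashlib.sha256(tarball).hexdigest() != pinned_sha256:
        raise ValueError(f"{name} tarball hash mismatch (tampered?)")
```

Hash pinning is the important part: even if an attacker later hijacks the package name on the public registry, the altered artifact no longer matches the digest the security team signed off on.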
Why This Matters
The convergence of AI capabilities and cryptocurrency infrastructure creates a new paradigm of risk that the industry is only beginning to understand. Slopsquatting is not just a theoretical concern — it is an active, ongoing campaign by well-resourced state actors who have demonstrated their ability to execute sophisticated attacks at scale. As AI coding assistants become ubiquitous in crypto development, the attack surface they create will only grow. The projects that survive will be those that treat AI-suggested dependencies with the same skepticism they would apply to a cold email from an unknown sender.
Disclaimer: This article is for informational purposes only and does not constitute financial or investment advice. Always conduct your own research before making investment decisions.
This is exactly why we need better validation tools for AI agents. Slopsquatting is a nightmare because even if you’re a decent dev, you might miss a malicious package suggested by an LLM that sounds perfectly confident. Always audit your dependencies, people! The margin for error in crypto is just too small.
Really insightful read. I’ve been using coding assistants to speed up my Solidity contracts lately, but the thought of an agent hallucinating a package name that leads to a drainer is terrifying. Speed is great, but crypto security is zero-sum. One mistake and the liquidity is gone forever.
solidity specifically is terrifying because one bad import can drain an entire contract. the attack surface from ai agents is massive
speed vs security is the eternal tradeoff. AI agents write code 10x faster but verify dependencies 10x less carefully. the math does not work in your favor
Lol ‘slopsquatting’ is such a perfect name for it. It’s crazy how we are basically training bots to help hackers by trusting them too much. I’ll stick to manually verifying my npm installs for now until these agents get some built-in security layers. Stay safe out there and don’t get lazy!
the name is perfect because it's literally squatting on ai slop. north korean groups already doing this at scale is the scary part
squatting on AI slop is genius in the worst way. the attack surface scales with every dev that trusts their coding assistant blindly