Developers Beware: Slopsquatting and Vibe Coding Can Increase Risk of AI-Powered Attacks

Slopsquatting and vibe coding are fueling a new wave of AI-driven cyberattacks, exposing developers to hidden risks through fake, hallucinated packages.

By Aminu Abdullahi · May 14, 2025 · News

Security researchers and developers are raising alarms over “slopsquatting,” a new form of supply chain attack that leverages AI-generated misinformation, commonly known as hallucinations. As developers increasingly rely on AI coding tools like GitHub Copilot, ChatGPT, and DeepSeek, attackers are exploiting these models’ tendency to invent software packages, tricking users into downloading malicious content.

What Is Slopsquatting?

The term slopsquatting was originally coined by Seth Larson, a developer with the Python Software Foundation, and later popularized by tech security researcher Andrew Nesbitt. It refers to cases where attackers register package names that don’t actually exist but are mistakenly suggested by AI tools; once those packages are live, they can carry harmful code.

If a developer installs one of these without verifying it — simply trusting the AI — they may unknowingly introduce malicious code into their project, giving hackers backdoor access to sensitive environments.

Unlike typosquatting, where malicious actors count on human spelling mistakes, slopsquatting relies entirely on AI’s flaws and developers' misplaced trust in automated suggestions.
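To make the pattern concrete, the snippet below is a purely hypothetical sketch of the risky habit: installing whatever a model names. The package name fastjson-utils is invented for illustration; it is not a real AI suggestion or a known malicious package.

```python
# Hypothetical illustration: installing AI-suggested dependencies without checking
# whether they actually exist or who published them. "fastjson-utils" is an invented
# name standing in for a hallucinated package; if an attacker had registered it,
# this loop would pull the attacker's code without complaint.
import subprocess
import sys

ai_suggested_packages = ["requests", "fastjson-utils"]  # the second name is fictional

for pkg in ai_suggested_packages:
    # Risky: nothing here verifies the package is real, established, or trustworthy.
    subprocess.run([sys.executable, "-m", "pip", "install", pkg], check=True)
```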

AI-Hallucinated Software Packages Are on the Rise

This issue is more than theoretical. A recent joint study by researchers at the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma analyzed more than 576,000 AI-generated code samples from 16 large language models (LLMs). They found that nearly 1 in 5 packages suggested by AI didn’t exist.

“The average percentage of hallucinated packages is at least 5.2% for commercial models and 21.7% for open-source models, including a staggering 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat,” the study revealed.

Even more concerning, these hallucinated names weren’t random. In multiple runs using the same prompts, 43% of hallucinated packages consistently reappeared, showing how predictable these hallucinations can be. As explained by the security firm Socket, this consistency gives attackers a roadmap — they can monitor AI behavior, identify repeat suggestions, and register those package names before anyone else does.

The study also noted differences across models: CodeLlama 7B and 34B had the highest hallucination rates of over 30%; GPT-4 Turbo had the lowest rate at 3.59%.

How Vibe Coding Might Increase This Security Risk

A growing trend called vibe coding, a term coined by AI researcher Andrej Karpathy, may worsen the issue. It refers to a workflow where developers describe what they want, and AI tools generate the code. This approach leans heavily on trust — developers often copy and paste AI output without double-checking everything.

In this environment, hallucinated packages become easy entry points for attackers, especially when developers skip manual review steps and rely solely on AI-generated suggestions.

How Developers Can Protect Themselves

To avoid falling victim to slopsquatting, experts recommend the following (a minimal verification sketch appears after the list):

  • Manually verifying all package names before installation.
  • Using package security tools that scan dependencies for risks.
  • Checking for suspicious or brand-new libraries.
  • Avoiding copy-pasting install commands directly from AI suggestions.
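
One way to act on the first two recommendations is to check a name against the public PyPI JSON API before installing it. The sketch below is a minimal illustration under stated assumptions, not a complete vetting tool: it assumes the dependency comes from PyPI, that the requests library is available, and that the 90-day “too new” threshold is an arbitrary choice for illustration.

```python
# Minimal sketch: confirm a package exists on PyPI and is not brand new before installing.
# Assumptions: dependencies come from PyPI, the "requests" library is installed, and the
# 90-day age threshold is an illustrative choice, not an official recommendation.
import sys
from datetime import datetime, timezone

import requests

PYPI_URL = "https://pypi.org/pypi/{name}/json"
MIN_AGE_DAYS = 90  # treat very young packages as worth a manual review


def check_package(name: str) -> bool:
    resp = requests.get(PYPI_URL.format(name=name), timeout=10)
    if resp.status_code == 404:
        print(f"'{name}' does not exist on PyPI -- possibly a hallucinated name.")
        return False
    resp.raise_for_status()
    data = resp.json()

    # Estimate the project's age from its earliest release upload time.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data.get("releases", {}).values()
        for f in files
    ]
    if not uploads:
        print(f"'{name}' has no released files -- treat with caution.")
        return False

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < MIN_AGE_DAYS:
        print(f"'{name}' is only {age_days} days old -- review it before installing.")
        return False

    print(f"'{name}' exists and has been on PyPI for {age_days} days.")
    return True


if __name__ == "__main__":
    results = [check_package(pkg) for pkg in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)
```

Run it as, for example, python check_deps.py requests some-suggested-name before adding anything an assistant proposed. Dedicated dependency scanners go much further, but even this existence-and-age check blocks the basic slopsquatting pattern.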

Meanwhile, there is good news: some AI models are improving at self-policing. GPT-4 Turbo and DeepSeek, for instance, have shown they can detect and flag hallucinated packages in their own output with over 75% accuracy, according to early internal tests.


Published at DZone with permission of Aminu Abdullahi. See the original article here.

Opinions expressed by DZone contributors are their own.
