Securing the Future: Defending LLM-Based Applications in the Age of AI
Explore research on LLM-based application vulnerabilities and learn how to implement the OWASP Top 10 framework for enhanced AI security.
Join the DZone community and get the full member experience.
Join For FreeAs artificial intelligence and large language models (LLMs) continue to revolutionize the tech landscape, they also introduce new security challenges that developers, engineers, architects, and security professionals must address. At Black Hat 2024, we spoke with Mick Baccio, Global Security Strategist at Splunk, about their research on the exploitation of LLM-based applications and how organizations can implement the OWASP Top 10 framework to better defend these systems.
The Evolving Threat Landscape
The rapid adoption of LLM-based applications has opened up new avenues for potential exploitation. Baccio emphasizes the importance of understanding these emerging threats:
"We're seeing a lot of separating, like the web chat right where, hey, it's AI-powered. Okay, let me just dig in. Tell me more. What does that mean, right? And if you can't answer that, then you're going to lose that. If you don't have a practical use case that benefits the business, who's going to fail?"
This highlights the need for organizations to not only implement AI technologies but to fully understand their implications and potential vulnerabilities.
Prioritizing Threats: Splunk's Approach to the OWASP Top 10 for LLMs
Splunk's research focused on five key areas from the OWASP Top 10 for LLM Applications. When asked about how they prioritized these specific threats, Baccio explained:
"We looked at the top 10, and then out of that top 10, we identified the most applicable based on what we were doing for that specific project. Download an LLM and plug in those things. The others fold into each other."
This approach allowed Splunk to focus on the most critical and relevant threats while still addressing the broader spectrum of potential vulnerabilities.
The Role of AI in Defending Against AI-Based Threats
One of the most intriguing aspects of Splunk's research is its use of machine learning models to detect threats like prompt injection and cross-site scripting (XSS). When discussing the role of AI in defending against AI-based threats, Baccio offered a nuanced perspective:
"On the detection side, I would say generative AI is probably not a good fit ever. It's just not the right technology for detecting things. You don’t want a lot of false or misleading noise, because it's going to take forever to solve those issues. You need accurate detection, otherwise everything else just falls apart."
Instead, Splunk relies on a symbolic engine they've been developing for years, which uses a tree structure to analyze code patterns and identify vulnerabilities. This approach provides greater accuracy and reliability compared to generative AI solutions.
Balancing Public Models and Custom Solutions
The research mentions using public models for some detections, which raises questions about the potential risks and benefits of this approach. Baccio drew an analogy to common development practices:
"It's the same thing when you go to GitHub and download somebody else's repo, or you go to Stack Overflow, and you're copying and pasting something out of there. You are relying on something that you didn't train, and you were not hands-on with."
While using public models can accelerate development and leverage collective knowledge, it's crucial to understand the limitations and potential risks associated with this approach. Organizations must carefully evaluate and validate any public models they incorporate into their security systems.
The Importance of Feedback Loops and Continuous Monitoring
One of the key takeaways from Splunk's research is the critical importance of establishing robust feedback loops and continuous monitoring for LLM-based applications. Baccio emphasizes:
"The feedback becomes critically important after you have an application that's out there: How is it being used? How is it received? Once you build something to put it out, you're not done, right? That feedback loop has to exist because there may have been new inputs or new paradigms that come out where it affects your training model."
This ongoing process of monitoring, evaluation, and refinement is essential for maintaining the security and effectiveness of LLM-based applications over time.
Implementing Multi-Factor Authentication (MFA) as a Default
When discussing urgent next steps for organizations deploying LLM-based applications, Baccio strongly advocates for making multifactor authentication a non-negotiable default:
"MFA should be required. It should be defaulted. It comes down to which MFA for my logins and for each of my systems. Adding the appropriate level of friction is imperative."
By implementing strong authentication measures by default, organizations can significantly reduce the risk of unauthorized access and potential exploitation of LLM-based systems.
Addressing Privacy Concerns in LLM Interactions
The sensitive nature of LLM interactions raises important privacy concerns that organizations must address. Baccio highlights a potential scenario:
"We have an LLM that we built. You come in one day, and you start plugging in some of our proprietary documents into this LLM. You can't just delete those documents. That’s not how LLMs work. We need to educate users on how LLMs work and what is acceptable behavior with proprietary content and data."
This underscores the need for clear policies and technical controls to prevent the inadvertent exposure of sensitive information through LLM interactions.
The Landscape for LLM Security
As organizations increasingly rely on GPUs to accelerate AI workloads, including those related to security, the landscape is evolving. Baccio sees this as an area of potential growth and innovation:
"What trains an LLM is super important. A lot of folks think they’re going to build their own AI with their data. Many companies I’ve seen aren’t really building their own LLM, they’re just carving out a slice of AWS or going Copilot-style. They are beholden to the policies and vulnerabilities of those systems. In addition, there are specific use cases for an LLM that does a specific thing for a specific team. That’s going to explode since everyone has different needs. You build an LLM. You’ll need a nanny for the LLM to make sure it’s being asked the right questions. And, eventually, you’ll need to build a nanny to watch the other nanny and make sure it’s always running."
This suggests that we may see the development of specialized hardware and architectures designed specifically to support secure LLM operations and monitoring.
The Role of Regulatory Bodies and Industry Standards
As the field of LLM security continues to evolve, regulatory bodies and industry standards will play an increasingly important role. Baccio notes:
"We’ve had stellar legislation for the last four years. Whether the legislation is sufficient for the growth of AI is not yet known. Last year an AI bill of rights was developed and a lot of companies shared how they did things. We’re all going to get together and develop on these principles. I am hopeful this turns into something like the browser certificate. A group of folks coming together for the common good. "
This highlights the need for a balanced approach that encourages innovation while also ensuring adequate safeguards are in place to protect users and organizations.
Urgent Next Steps for Organizations
Based on Splunk's research, Baccio outlines several urgent next steps for organizations deploying LLM-based applications:
- Implement robust MFA across all systems, with a focus on simplifying the ecosystem.
- Establish comprehensive feedback loops to monitor how LLMs are being used and received.
- Continuously assess and update training models based on new inputs and emerging paradigms.
- Develop clear policies and technical controls to prevent the exposure of sensitive information.
- Invest in specialized hardware and architectures to support secure LLM operations.
- Stay informed about evolving regulatory requirements and industry standards.
Conclusion: A Call for Vigilance and Collaboration
As we navigate the complex landscape of LLM security, it's clear there are no easy solutions. However, by prioritizing key threats, implementing robust security measures, and fostering a culture of continuous improvement, organizations can significantly enhance their defenses against LLM-based vulnerabilities.
Baccio's final thoughts serve as a call to action for the entire industry:
"Never underestimate how hard someone will work to do their job, like how many shortcuts they will invent just to get their job right. And that's where I think about MFA: How do I make it as fast as possible, but still have that security in there? And I think that's where developers play a critical role."
By working together, sharing knowledge, and remaining vigilant, we can harness the power of LLMs while mitigating the risks they introduce. The future of AI security depends on our collective efforts to stay ahead of emerging threats and build more resilient systems.
Opinions expressed by DZone contributors are their own.
Comments