The Most Dangerous Bug AI Keeps Writing
SQL injection has sat at or near the top of web-vulnerability rankings for over two decades. It's well-documented, easily prevented, and catastrophically dangerous. Yet AI coding assistants generate SQL injection vulnerabilities constantly.
What Is SQL Injection?
SQL injection occurs when user input is inserted directly into a database query, allowing attackers to modify the query's logic.
```javascript
// User enters: ' OR '1'='1
const query = `SELECT * FROM users WHERE email = '${email}'`
// Becomes:
// SELECT * FROM users WHERE email = '' OR '1'='1'
// Returns ALL users
```
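The mechanics are visible even without a database. A minimal sketch, using the attacker input from the comment above, shows that interpolation alone produces a query whose WHERE clause is always true:

```javascript
// Attacker-supplied input (illustrative)
const email = "' OR '1'='1";

// The vulnerable pattern: interpolating directly into the SQL text
const query = `SELECT * FROM users WHERE email = '${email}'`;

console.log(query);
// → SELECT * FROM users WHERE email = '' OR '1'='1'
// The condition '1'='1' is always true, so every row matches
```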
Why AI Tools Keep Making This Mistake
Training Data Problem
Stack Overflow is full of examples like:
```javascript
// Highly upvoted "solution"
const result = db.query(`SELECT * FROM products WHERE name LIKE '%${search}%'`)
```
These examples are upvoted for being concise and "working," not for being secure. AI learns these patterns.
Template Literal Bias
Modern JavaScript encourages template literals for string building. AI models over-apply this pattern:
```javascript
// AI sees template literals as "modern" and uses them everywhere
const query = `INSERT INTO logs (message) VALUES ('${userMessage}')`
```
Context Blindness
AI doesn't distinguish between:
- Trusted internal data
- Untrusted user input
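A sketch of why that distinction matters, with hypothetical names: the template-literal syntax is identical in both queries below, and only the provenance of the interpolated value differs.

```javascript
// Simulated request object (in a real app this comes from Express)
const req = { query: { search: "' OR '1'='1" } };

const SORT_COLUMN = 'created_at';   // trusted: a constant the developer wrote
const search = req.query.search;    // untrusted: attacker-controlled

// Same syntax, very different risk:
const okQuery  = `SELECT * FROM logs ORDER BY ${SORT_COLUMN}`;      // no user input
const badQuery = `SELECT * FROM logs WHERE msg LIKE '%${search}%'`; // injectable

console.log(badQuery);
```

An AI model pattern-matching on syntax alone cannot tell these two lines apart; a human reviewer tracing data provenance can.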
Real Examples from AI Tools
ChatGPT (Prompted: "Write a function to find a user by email")
```javascript
async function findUserByEmail(email) {
  const query = `SELECT * FROM users WHERE email = '${email}'`
  return await db.execute(query)
}
```
Vulnerable. Direct string interpolation.
GitHub Copilot (Autocompleted after typing "const query =")
```javascript
const query = `SELECT * FROM orders WHERE user_id = ${userId} AND status = '${status}'`
```
Vulnerable. Both parameters are injectable.
Cursor (Prompted: "Add search to the products API")
```javascript
app.get('/api/products', (req, res) => {
  const { search, category } = req.query
  const query = `SELECT * FROM products WHERE name LIKE '%${search}%' AND category = '${category}'`
  // ...
})
```
Vulnerable. Classic search injection.
The Correct Patterns
Parameterized Queries (PostgreSQL)
```javascript
const query = 'SELECT * FROM users WHERE email = $1'
const result = await db.query(query, [email])
```
Parameterized Queries (MySQL)
```javascript
const query = 'SELECT * FROM users WHERE email = ?'
const result = await db.execute(query, [email])
```
ORM Methods (Prisma)
```javascript
const user = await prisma.user.findUnique({
  where: { email: email }
})
```
ORM Methods (Drizzle)
```javascript
const user = await db.select().from(users).where(eq(users.email, email))
```
Detection Patterns
Look for these red flags in AI-generated code:
| Pattern | Risk Level |
|---|---|
| `${variable}` inside SQL string | Critical |
| `' + variable + '` in SQL | Critical |
| `.query(...)` with template literal | Critical |
| Raw SQL with any string concatenation | High |
| `LIKE '%${x}%'` pattern | Critical |
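These red flags are mechanically checkable. A rough heuristic, as a sketch (the regex names are mine, and this will miss concatenation and obfuscated cases), that catches the template-literal rows of the table:

```javascript
// Heuristic: a line containing both a SQL keyword and a template-literal
// interpolation is almost certainly building SQL out of variables.
const sqlKeyword = /\b(SELECT|INSERT|UPDATE|DELETE)\b/i;
const interpolation = /\$\{[^}]+\}/;

function looksInjectable(line) {
  return sqlKeyword.test(line) && interpolation.test(line);
}

console.log(looksInjectable('const q = `SELECT * FROM users WHERE id = ${id}`')); // true
console.log(looksInjectable("const q = 'SELECT * FROM users WHERE id = $1'"));    // false
```

A `$1` placeholder does not trip the check, so parameterized queries pass cleanly.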
How to Fix AI-Generated SQL Injection
Step 1: Find All Database Queries
```shell
grep -rnE "query|execute|sql" --include="*.ts" --include="*.js" src/
```
Step 2: Check Each Query for User Input
Trace each variable in the query. If it originates from:
- `req.body`
- `req.query`
- `req.params`
- Any other user-controlled source

then treat it as attacker-controlled: it must never be interpolated into the SQL string.
Step 3: Refactor to Parameterized Queries
Replace every instance of string interpolation with parameter binding for your database driver.
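A before/after sketch of the refactor, using a hypothetical stand-in `db` object so the call shape is visible without a live database (the `$1` placeholder shown is PostgreSQL style; MySQL drivers use `?`):

```javascript
// Stand-in for a real driver, just to show the parameterized call shape
const db = {
  async query(text, params) {
    return { text, params }; // a real driver sends these to the server separately
  }
};

// Before (injectable):
//   db.query(`SELECT * FROM orders WHERE user_id = ${userId}`)
// After (parameterized):
async function getOrders(userId) {
  return db.query('SELECT * FROM orders WHERE user_id = $1', [userId]);
}

getOrders('1 OR 1=1').then(({ text, params }) => {
  console.log(text);   // the SQL text never contains the user value
  console.log(params); // the value travels out-of-band, as data
});
```

Because the value is bound rather than spliced in, `1 OR 1=1` arrives at the database as an inert string, not as query logic.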
Automated Detection
Manual review works, but automated scanning catches patterns consistently. ShipReady scans for:
- Template literals in SQL contexts
- String concatenation in queries
- Missing parameterization
- ORM misuse patterns
The Bottom Line
SQL injection is a solved problem—when you use parameterized queries. AI tools consistently fail to apply this solution because their training data is full of vulnerable examples.
Never trust AI-generated database code without verification.