AI doesn’t think about security unless you tell it to. That’s not a flaw in the technology — it’s just how it works. When you ask it to build a feature, it focuses on making that feature function. Whether the implementation is secure is a separate concern, and the AI won’t raise it on its own.

This matters because a surprising amount of AI-generated code ships to the internet with security vulnerabilities baked in. Most of these aren’t exotic or hard to fix. They’re basic issues that are easy to catch once you know what to look for.

The Real Risks

These aren’t hypothetical. They show up regularly in AI-generated code.

Exposed API Keys and Secrets

This is the most common and most dangerous one. When you ask AI to connect to an external service — a weather API, a database, a payment processor — it often puts the API key directly in the JavaScript code. That code runs in the browser, which means anyone can see it by opening developer tools.

An exposed API key can mean anything from unexpected charges on your account to someone accessing your users’ data. It depends on what the key unlocks, but the safest assumption is that any key visible in client-side code is compromised.

How to catch it: Search your code for strings that look like keys or tokens — long strings of random characters, anything starting with sk_, pk_, api_, or variables named apiKey, secret, token. If they’re in a .js or .html file that runs in the browser, they’re exposed.
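That manual search can be automated with a few lines of code. Below is a rough sketch of a scanner for the patterns described above; the regexes are illustrative, not exhaustive, and real secret scanners use much larger rule sets.

```javascript
// Patterns that tend to indicate a key or token in source code.
// These are illustrative starting points, not a complete rule set.
const SUSPICIOUS_PATTERNS = [
  /\b(sk|pk|api)_[A-Za-z0-9_]{16,}/,                      // common key prefixes
  /\b(apiKey|secret|token)\s*[:=]\s*["'][^"']{12,}["']/,  // suspicious assignments
];

// Return every line of `source` that matches a suspicious pattern,
// with its 1-based line number so it's easy to go fix.
function findPossibleSecrets(source) {
  return source
    .split("\n")
    .map((text, i) => ({ line: i + 1, text }))
    .filter(({ text }) => SUSPICIOUS_PATTERNS.some((p) => p.test(text)));
}
```

A hit from this scan isn't proof of a leak, but anything it flags in a file that ships to the browser deserves a close look.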

Cross-Site Scripting (XSS)

XSS happens when your application takes user input and displays it on the page without sanitizing it first. If someone types a script tag into your bookmark name field and your app renders it as HTML, that script runs in every visitor’s browser.

AI-generated code frequently uses innerHTML to display dynamic content. It works, but it’s one of the most common sources of XSS vulnerabilities in web applications.

How to catch it: Search your code for innerHTML. If it’s being set with any value that came from user input — form fields, URL parameters, localStorage data that users can modify — you likely have an XSS vulnerability. Use textContent instead, or sanitize the input before rendering it.
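When you genuinely need to build an HTML string rather than plain text, escaping the user-provided parts is the fallback. Here is a minimal escaping sketch; for anything beyond simple cases, a maintained sanitization library is the safer choice.

```javascript
// Escape the characters that let user input break out of an HTML
// context. Prefer textContent when you only need plain text; use
// escaping like this only when you must build an HTML string.
function escapeHTML(input) {
  return String(input)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Vulnerable: el.innerHTML = userInput;
// Safer:      el.textContent = userInput;
// If markup is unavoidable: el.innerHTML = `<b>${escapeHTML(userInput)}</b>`;
```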

SQL Injection

If your project has a backend that talks to a database, AI sometimes builds queries by concatenating strings with user input. Something like "SELECT * FROM users WHERE name = '" + userName + "'". This lets an attacker type SQL commands into your form fields and modify or extract data from your database directly.

How to catch it: Look for any database query that builds SQL strings using + or template literals with user-provided values. The fix is parameterized queries — every major database library supports them.
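To see why concatenation fails, it helps to watch the query get mangled. The function below is the pattern to avoid; the parameterized form in the comments (shown with node-postgres syntax as one example) is what to ask the AI for instead.

```javascript
// DON'T do this - userName becomes part of the SQL itself.
function buildUnsafeQuery(userName) {
  return "SELECT * FROM users WHERE name = '" + userName + "'";
}

// A "name" of:  x' OR '1'='1
// produces:     SELECT * FROM users WHERE name = 'x' OR '1'='1'
// ...which matches every row in the table.

// The fix is a parameterized query. With node-postgres (pg), for
// example, the value travels separately from the SQL, so the driver
// can never interpret it as a command:
//
//   await client.query("SELECT * FROM users WHERE name = $1", [userName]);
//
// Every major database library has an equivalent placeholder syntax.
```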

Insecure Dependencies

When AI generates code that uses external libraries, it doesn’t always pick the most secure or up-to-date option. Sometimes it suggests packages with known vulnerabilities, or libraries that are no longer maintained. It might also import a full library when a simpler approach would do.

How to catch it: If your project uses npm packages, run npm audit in your terminal. It flags known vulnerabilities automatically. For projects without a package manager, be cautious about any external script loaded from a CDN — check when it was last updated and whether anyone is still maintaining it.

Hardcoded Credentials

Similar to exposed API keys but broader. AI sometimes generates code with placeholder passwords, default admin credentials, or database connection strings with real-looking usernames and passwords. If these stay in the code when you deploy, they become actual credentials.

How to catch it: Search for strings like password, admin, root, localhost, and any hardcoded connection strings. Replace them with environment variables.
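One way to make the environment-variable habit stick is a small helper that refuses to start without the values it needs, so a missing credential fails loudly at startup instead of silently falling back to a hardcoded default. A minimal sketch (the variable name is illustrative):

```javascript
// Read a credential from the environment, or fail loudly.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage:
//   const dbPassword = requireEnv("DB_PASSWORD");
//
// Never this - it becomes a real credential the moment you deploy:
//   const dbPassword = "admin123";
```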

A Pre-Deploy Security Checklist

Before you put any AI-generated project on the internet, run through these:

  • No API keys or secrets in client-side code. If you need an API key, use a backend proxy or serverless function to make the call.
  • No innerHTML with user input. Use textContent for displaying user-provided text.
  • Form inputs are validated. Check length limits, expected formats, and reject anything that looks like code.
  • External scripts are from trusted sources. Verify CDN links point to well-known, actively maintained libraries.
  • No hardcoded credentials. Passwords, tokens, and connection strings should come from environment variables, not source code.
  • HTTPS is enabled. Most modern hosts (Vercel, Netlify, GitHub Pages) handle this automatically, but verify.
  • If using a database: parameterized queries only. No string concatenation with user input in SQL statements.
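For the "form inputs are validated" item, here is what a basic check might look like for a contact form. The limits and the deliberately simple email check are illustrative; adjust them to your own fields.

```javascript
// Validate a contact form submission. Returns a result object rather
// than throwing, so the UI can show every problem at once.
function validateContactForm({ name, email, message }) {
  const errors = [];
  if (!name || name.length > 100) {
    errors.push("Name is required and must be 100 characters or fewer.");
  }
  // Deliberately simple email check - enough to reject obvious garbage.
  if (!email || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    errors.push("Email must look like name@example.com.");
  }
  if (!message || message.length > 1000) {
    errors.push("Message is required and must be 1000 characters or fewer.");
  }
  return { valid: errors.length === 0, errors };
}
```

Remember that client-side validation improves the user experience but is trivial to bypass; if you have a backend, it must repeat these checks.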

Prompts That Produce More Secure Code

You can reduce security issues significantly by including security requirements in your prompts from the start.

Security-aware prompt

Build a contact form with name, email, and message fields. Validate all inputs on the client side (check email format, limit name to 100 characters, limit message to 1000 characters). Sanitize any output that’s displayed back to the user — don’t use innerHTML with user-provided data. If you need to make any API calls, don’t put API keys in the client-side code — show me how to use a serverless function instead.

Review prompt

Review this code for security vulnerabilities. Specifically check for: exposed API keys or secrets, XSS vulnerabilities (innerHTML with user input), any hardcoded credentials, and insecure data handling. List any issues you find and show me the fixed code.

Asking the AI to review its own code for security issues isn’t foolproof, but it catches a surprising number of problems. It’s essentially a free second pass that takes about ten seconds to request.

The Bottom Line

Most security issues in AI-generated code aren’t sophisticated attacks waiting to happen. They’re basic mistakes — exposed keys, unsanitized input, hardcoded passwords. Knowing what to look for takes about 15 minutes to learn and prevents the majority of real-world problems.

Building the Habit

Security doesn’t have to be a separate, intimidating discipline. Once you know the five or six things AI commonly gets wrong, checking for them becomes quick and routine. Add security requirements to your prompts upfront. Scan your code before deploying. Run npm audit if you’re using packages. These small habits make a real difference.

For more on writing effective prompts that produce cleaner output, see our guide on writing better AI coding prompts.