At some point, AI is going to give you code that doesn’t work. This isn’t a maybe. The code will look reasonable, the AI will sound confident about it, and when you run it, something will be broken.

That’s normal. It doesn’t mean the tool is bad or that you did something wrong. It just means you need to know how to handle it. Most of the time, fixing AI-generated code is straightforward once you know where to look.

Why AI Code Breaks

Before jumping into fixes, it helps to understand why AI output fails in the first place.

It makes things up. AI models sometimes reference functions, APIs, or CSS properties that don’t exist. The code looks syntactically correct, but it’s calling something that was never real. This is called hallucination, and it’s more common with less popular libraries or newer APIs.

It loses context. The longer your conversation goes, the less reliably AI remembers what you built earlier. It might overwrite a function you already defined, use a different variable name, or forget the styling conventions you established three prompts ago.

It makes assumptions. When your prompt is vague about something, AI fills the gap with its best guess. Sometimes that guess is wrong — it picks the wrong data format, assumes a different file structure, or adds dependencies you don’t have installed.

The Debugging Process

Step 1

Read the Error Message

This sounds obvious, but most people skip past it. When something breaks, your browser console or terminal usually tells you exactly what went wrong and where.

To open the browser console:

  • Chrome/Edge: Right-click the page, click Inspect, then the Console tab
  • Firefox: Right-click, click Inspect, then Console
  • Safari: Enable the Develop menu first (Safari Settings, Advanced tab), then Develop menu and Show JavaScript Console

Look for red text. The error message typically includes a description and a line number. You don’t need to understand everything it says — just grab the message and the location.

Common errors you’ll see:

  • ReferenceError: The code references a variable or function that doesn’t exist
  • TypeError: Something is the wrong type (like calling .map() on something that isn’t an array)
  • SyntaxError: A typo or structural problem like a missing bracket or extra comma
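To make these concrete, here is a small snippet that triggers each one deliberately. Every error is caught, so it is safe to run; the function and variable names are invented for illustration:

```javascript
// ReferenceError: calling a function that was never defined
try {
  totallyMadeUpFunction();
} catch (e) {
  console.log(e.name); // ReferenceError
}

// TypeError: calling .map() on something that isn't an array
try {
  const items = null; // e.g. a fetch that returned nothing
  items.map(item => item.id);
} catch (e) {
  console.log(e.name); // TypeError
}

// SyntaxError: a structural typo. eval() is used here only so the
// error surfaces at runtime instead of preventing the script from loading.
try {
  eval('const broken = {;');
} catch (e) {
  console.log(e.name); // SyntaxError
}
```

Note that a real SyntaxError usually stops the whole file from running, which is why one missing bracket can make everything on the page go dead at once.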

Step 2

Feed the Error Back to AI

This is the single most effective debugging technique for vibe coding. Copy the exact error message and give it to your AI tool with context.

Debugging prompt

I’m getting this error in the browser console:

“Uncaught TypeError: Cannot read properties of null (reading ‘addEventListener’)”

on line 47 of my index.html file. The script is trying to add a click handler to the “Add Bookmark” button. Here’s the relevant code: [paste the code around line 47]. What’s causing this and how do I fix it?

The more context you include — the error message, the line number, what you were trying to do, the surrounding code — the better the fix will be. “It doesn’t work” is almost never enough information for a useful response.
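For the specific error in the example above, the usual cause is that the element lookup returned null: either the selector matches nothing, or the script ran before the button existed in the page. A minimal sketch of the defensive pattern an AI will typically suggest (the helper name and the button id are hypothetical):

```javascript
// Attach a click handler only if the element actually exists.
// Returns true when the handler was attached, false otherwise.
function attachClickHandler(element, handler) {
  if (!element) {
    console.warn('Element not found. Check the selector and the script tag position.');
    return false;
  }
  element.addEventListener('click', handler);
  return true;
}

// In the browser, run the lookup after the DOM is ready:
// document.addEventListener('DOMContentLoaded', () => {
//   attachClickHandler(document.querySelector('#add-bookmark'), addBookmark);
// });
```

Moving the script tag to the end of the body, or adding the defer attribute to it, also solves the timing half of this problem without any wrapper.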

Step 3

Check the Obvious Things First

Before going deep, rule out the simple stuff.

Did you save the file? This is the answer more often than anyone wants to admit. If you’re editing locally, make sure you saved before refreshing the browser.

Are you looking at the right file? If you have multiple versions floating around, you might be editing one and running another.

Did the AI change something it shouldn’t have? When you ask AI to add a feature, it sometimes modifies existing code in the process. Scroll through the output and check whether anything you already had working got altered.

Is the script loading at all? If your JavaScript is in a separate file, check that the <script> tag path is correct. A wrong path usually produces no JavaScript error; the code just silently doesn’t run. The Network tab in your browser’s developer tools will show the failed request for the file.
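A quick reference for that last check. The path in the src attribute is relative to the HTML file, and a typo fails silently (the filenames here are examples):

```html
<!-- index.html and script.js in the same folder -->
<script src="script.js" defer></script>

<!-- script.js inside a js/ subfolder -->
<script src="js/script.js" defer></script>

<!-- Wrong: the misspelled file loads nothing and throws no JavaScript error.
     The Network tab in developer tools will show the failed request. -->
<script src="scrpit.js"></script>
```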

Step 4

Use console.log to Trace What’s Happening

If the error message isn’t clear enough, add console.log statements to see what the code is actually doing. This is the oldest debugging technique in web development, and it works just as well on AI-generated code.

Ask the AI to help:

Tracing prompt

The bookmark form submits but the new bookmark doesn’t appear on the page. Nothing shows in the console. Add console.log statements to the addBookmark function so I can see what data is being processed and where it might be failing.

Check your browser console for the output. Usually you’ll spot where things go sideways — a value is undefined when it should have content, or a function returns early before reaching the part that updates the page.
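What the AI comes back with usually looks something like this. The function name and data shape are hypothetical here, a plain-array version of an addBookmark function with a log at each decision point:

```javascript
// addBookmark with tracing added so the console shows exactly how far it gets.
function addBookmark(bookmarks, title, url) {
  console.log('addBookmark called with:', { title, url });

  if (!title || !url) {
    console.log('Early return: missing title or url'); // the likely silent failure
    return bookmarks;
  }

  const updated = [...bookmarks, { title, url }];
  console.log('List after adding:', updated);
  return updated;
}
```

If the first log never appears at all, the function isn’t being called, and the bug is in the form’s event wiring rather than in addBookmark itself.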

Step 5

Know When to Start Fresh

Sometimes the code gets tangled enough that fixing it piecemeal creates more problems than it solves. This is especially common in long AI conversations where the context has drifted.

Signs it might be time to start over on a section:

  • You’re on the fourth or fifth fix attempt and new bugs keep appearing
  • The AI keeps suggesting fixes that undo previous fixes
  • The code has gotten significantly more complex than the feature warrants
  • You can’t tell what half the code is doing anymore

Starting fresh doesn’t mean losing everything. Copy out the parts that work, start a new conversation with your AI tool, and describe what you need with the benefit of knowing what went wrong the first time. The second attempt is almost always cleaner.

Red Flags That Mean AI Is Hallucinating

Watch for these patterns — they usually indicate the AI is generating plausible-sounding code that won’t actually work:

  • Referencing libraries you didn’t install. If you’re writing vanilla JavaScript and the code suddenly imports React or lodash, the AI has drifted.
  • Making up API endpoints. If the code fetches from a URL that doesn’t look like a documented API, check whether that endpoint actually exists.
  • Inventing CSS properties or JavaScript methods. AI sometimes combines real property names into something that looks right but isn’t. If a CSS property doesn’t seem to do anything, look it up.
  • Confident explanations of wrong code. AI can explain incorrect code with total confidence. If the explanation sounds right but the code doesn’t work, trust what you see in the browser over what the AI tells you.
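One quick way to catch an invented method before it wastes your time: paste a typeof check into the browser console. The second method name below is made up for illustration:

```javascript
// Real methods report "function"; hallucinated ones report "undefined".
console.log(typeof [].flatMap);   // a real Array method: "function"
console.log(typeof [].mapFilter); // plausible-sounding but made up: "undefined"

// For CSS, CSS.supports() in the browser console does the same job:
// CSS.supports('display', 'grid')  -> true
// CSS.supports('display', 'gird') -> false
```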

Key mindset

Debugging AI code isn’t fundamentally different from debugging code you wrote yourself. The process is the same: find the error, understand what’s happening, fix the cause. The only difference is you’re collaborating with AI on both the writing and the fixing. Treat it like a conversation, not a magic spell.

Getting Better at This

Every time you debug AI-generated code, you learn something about how the code works. After a few projects, you’ll start catching common issues before they become errors. You’ll write prompts that prevent the bugs in the first place. And you’ll develop an instinct for when AI output looks solid versus when it looks like it’s guessing.

That instinct is worth more than any individual tool, and it only comes from practice.

For more on writing prompts that produce fewer bugs from the start, check out our guide on writing better AI coding prompts.