How to Stop AI Error Loops with a Change Log

Is your AI agent in Replit, Bolt, or Lovable stuck in an error loop and not fixing the problem with your app?

There’s a problem with your app the AI built. You tell the AI ‘there is a problem, fix it.’

The AI reports that ‘yes, there’s a problem, let me fix it.’ It tells you it’s fixed, then it’s not. Before you know it, your agent has gone through tokens faster than a 12-year-old in an arcade.

This happens to a lot of people, but there’s a reliable prompt sequence that gets the agent to finally fix the errors.

Establish the Change Log

If you didn’t use our PRD Refinery Tool to create your product requirements document, now is the time to tell your AI agent:

PROMPT:

“Create a Change Log. Before modifying any file, please update the change log with a summary of your changes. Do not make any other changes.”

The change management instruction that’s built into our PRD Tool creates the rules and instructs the AI agent on how to use a change log. Generally, the AI will follow those instructions about 90-95% of the time, and it will update the change log for you.

If you find that it’s not, or if you’ve just now given your AI the above prompt, the structure will at least be there.
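If you'd rather seed the file yourself, a change log can be as simple as a dated markdown list. The sketch below is just one possible format; the file name `CHANGELOG.md`, the dates, and the entries are our invented example, not something the tools require:

```markdown
# Change Log

<!-- The AI agent appends one entry per change, newest first. -->

## 2025-01-15
- **Attempt 3:** Wrapped the fetch call in a try/catch in `api.js`. Result: login still fails.
- **Attempt 2:** Renamed the `user` prop to `currentUser` in `Header.jsx`. Result: new console warning.
- **Attempt 1:** Added a null check in `renderList`. Result: no visible change.
```

The key is that each entry records both the change and the observed result, so the AI can later rule out attempts that already failed.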

When your AI isn’t following instructions, the workaround (depending on the AI tool you’re using) is this: each time you attempt a fix and the AI makes a change, tell it to write that change down in the change log before you test anything.

Just prompt, “Did you put this in the change log?” and confirm that the AI actually did.

Now We Really Troubleshoot

Then, do your test again.

When it errors again (this tends to be the loop a lot of people get stuck in), right-click the page and choose Inspect, open the Console tab, and look for errors or warnings. Right-click the error or warning and copy the console message.

[Screenshot: the Inspect panel of an app with one warning message in the Console log]
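For reference, a copied console entry usually looks something like the snippet below. The function names, file names, and line numbers here are invented for illustration; yours will differ:

```
Uncaught TypeError: Cannot read properties of undefined (reading 'map')
    at renderList (main.js:142:18)
    at App (main.js:87:5)
```

Paste the whole thing, stack trace included, so the AI can see exactly where the failure happened.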

Then, you’re going to tell the AI:

PROMPT: 

“I tested it. It failed. Here’s the console log. Please look at the code and look at the change log to see what we’ve already tested and tell me exactly why this is still not working. Do not make changes.”

That way, the AI agent now has a record of what it has already tried. It won’t go back and attempt the same thing again, because you recorded it.

What we’ve often seen happen is that the AI agent makes a change, and that change creates a surface error.

When you tell the AI ‘it’s not working,’ it goes back and undoes the change that caused the surface error, but that just reverts you to roughly where you started. Maybe with a slight variation, but basically where you were.

And it will just oscillate back and forth between those two points. Hence, the error loop.

Implement the Change

Now the AI should give you a full description of what that error means; it has reviewed what it’s already done and can rule those changes out. Because AIs are usually tuned to be eager to help, it will likely have presented a plan of changes it hasn’t tried before.

Review the plan it presents, and if all looks good, tell it to proceed and to make no changes outside the plan.

ProTip:

Typically by this point in Replit or Bolt, the context window of the Agent chat has been exhausted.

If you suspect your chat is too long, copy the AI’s plan for new changes and paste it into a new chat for a fresh context window. Tell it to implement that plan and make no other changes.

You might see the AI agent run more smoothly.

Let us know how this works for you!

So: using the change log to record each change, having the AI review the console logs, and telling it to read all the files and the errors without making changes is how you break the loop and get back to a functional set of changes.

Watch our YouTube video AI coding tutorial on the same topic:

https://www.youtube.com/watch?v=XiBTdavc5fE