AI Coding Tip 009 - Compact Your Context and Stop Memory Rot

Written by mcsee | Published 2026/03/03
Tech Story Tags: ai-coding | ai-coding-tips | context-pruning | ai-token-management | prompt-compression | llm-context-window | ai-hallucinations | ai-coding-tools

TL;DR: Long AI coding sessions lead to context decay, hallucinations, and wasted tokens. By restarting chats, summarizing state, pruning logs, and managing memory deliberately, you keep your AI focused, accurate, and aligned with your project constraints.

Stop the memory rot!

TL;DR: You can keep your AI sharp by forcing it to summarize and prune what it remembers (a.k.a. compacting).

Common Mistakes When Coding with AI

  • You keep a single, long conversation open for hours.
  • You feed the AI with every error log and every iteration of your code.
  • Eventually, the AI starts to ignore your early instructions or hallucinate nonexistent functions.

Problems Addressed in this Article 😔

  • Context Decay: The AI loses track of your original goals in the middle of a long chat.
  • Hallucinations: The model fills memory gaps with hallucinations or outdated logic.
  • Token Waste: You pay for the AI to re-read useless error logs from three hours ago.
  • Reduced Reasoning: A bloated context makes the AI less smart and more prone to simple mistakes.

How to Solve this Problem with AI Coding Assistants

  1. Restart often: You can start a new chat once you finish a sub-task.
  2. Request a State Summary: Before you close a conversation, ask the AI to summarize the current decisions and plan.
  3. Add Human Checkpoints: After the summary, confirm you are still on track.
  4. Use Markdown Docs: Keep a small context.md file with your current stack and rules.
  5. Prune the Logs: You should only paste the relevant 5 lines of a stack trace instead of the whole irrelevant 200-line output.
  6. Divide and conquer: Break large tasks into smaller ones, each invoked with its own skills, local tokens, and a fresh context.
  7. Divide the responsibility: A General doesn't need to know what every soldier is doing on the battlefield.
  8. Create and persist a Skill: Once you have taught the AI something, refactor that knowledge and its business rules into a reusable skill.
  9. Keep an Eye on the Context Size: Most tools have visual indicators of context-window consumption.
  10. Use Local Persistence: Some tools allow sharing memory among agents and their sub-agents.
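Step 5's log pruning is easy to automate before you paste anything into the chat. A minimal sketch follows; the function name, the 5-line default, and the sample log are illustrative, not from any specific tool:

```python
# Sketch: keep only the tail of a long build log before pasting it to an AI.
# Assumption: the useful part of a failed build (the traceback) is at the end.

def prune_log(log: str, keep_last: int = 5) -> str:
    """Return only the last `keep_last` non-empty lines of a log."""
    lines = [line for line in log.splitlines() if line.strip()]
    return "\n".join(lines[-keep_last:])

# A fake 200-ish-line build log: lots of noise, then the actual failure.
noisy_log = "\n".join(f"build step {i} ... ok" for i in range(195))
noisy_log += "\nTraceback (most recent call last):\n"
noisy_log += '  File "auth.py", line 12, in login\n'
noisy_log += "KeyError: 'email'"

print(prune_log(noisy_log))  # only the 5 lines the AI actually needs
```

Piping your build output through a filter like this keeps the signal (the traceback) and drops the noise the model would otherwise re-read on every turn.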

Benefits of this Approach

  • You get more accurate code suggestions.
  • You avoid divergence between the AI's output and your intent.
  • You follow the AI's train of thought.
  • You spend less time correcting the AI's hallucinations.
  • The AI follows your project constraints more strictly and stays focused on your tasks.

Additional Context

Large Language Models have limited attention.

Long context windows are a trap.

Many modern models offer a very large context window.

In practice, models ignore much of that window, to your frustration.

Even with large context windows, models prioritize the beginning and end of the prompt; material buried in the middle is the most likely to be lost.
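To make the token-waste point concrete, here is a back-of-the-envelope estimate. The 4-characters-per-token ratio is a rough heuristic (an assumption, not a real tokenizer), and the line count and turn count are illustrative:

```python
# Rough cost of re-sending a stale 500-line log on every conversation turn.
# CHARS_PER_TOKEN = 4 is a common rule of thumb, not an exact tokenizer.

CHARS_PER_TOKEN = 4
log_lines = 500
avg_chars_per_line = 80
turns = 20  # turns in which the model re-reads the same old log

tokens_per_send = (log_lines * avg_chars_per_line) // CHARS_PER_TOKEN
total_wasted = tokens_per_send * turns

print(f"~{tokens_per_send} tokens per send, ~{total_wasted} tokens over {turns} turns")
```

Even under these conservative assumptions, one stale log can burn hundreds of thousands of tokens over a long session, tokens that are both billed and competing for the model's attention.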

Reference Prompts

Bad Prompt

Here is the 500-line log of my failed build.

Also, remember that we changed the database schema three hours ago in this chat.

Add the unit tests as I described above.

Now, refactor the whole component.

Good Prompt

I am starting a new session. Here is the current state: 

We use PostgreSQL with the `Users` table schema [ID, Email].

The `AuthService` interface is [login(), logout()].

Refactor the `LoginComponent` to use these.

Note: You must ensure you don't purge essential context. If you prune too much, the AI might suggest libraries that conflict with your current setup. Review the compacted information.
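Step 4 of the solutions above mentions keeping a small context.md file. A minimal sketch, reusing the stack from the good prompt (every detail below is a placeholder for your own project):

```markdown
# Project Context (keep it short; re-paste at the start of each session)

## Stack
- PostgreSQL, `Users` table schema: [ID, Email]
- `AuthService` interface: login(), logout()

## Rules
- No new dependencies without approval
- Keep components decoupled from the database layer

## Current Task
- Refactor `LoginComponent` to use `AuthService`
```

Because the file is tiny and versioned with your code, it survives chat restarts and gives every new session the same compact starting state.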

More Details About this AI Coding Tip

  • Type: Semi-Automatic
  • Limitations:
    • You can use this tip manually in any chat interface.
    • If you use advanced agents like Claude Code or Cursor, they might handle some of this automatically, but manual pruning is still more reliable.
  • Skill level: Intermediate

https://maximilianocontieri.com/ai-coding-tip-004-use-modular-skills?embedable=true

https://hackernoon.com/ai-coding-tip-005-how-to-keep-context-fresh?embedable=true

AI Coding Tip 010 - Create Skill from Conversation

Conclusion

You are the curator of the AI's memory.

If you let the context rot, the code will rot, too.

Keep it clean and compact. 🧹

Additional Information ℹ️

https://arxiv.org/abs/2307.03172?embedable=true

https://llmlingua.com/?embedable=true

https://www.cursor.com/blog/context?embedable=true

https://www.ibm.com/topics/ai-hallucinations?embedable=true

This AI Coding Tip is Also Known As

  • Context Pruning
  • Token Management
  • Prompt Compression

Tools Used

  • Claude Code
  • Cursor
  • Windsurf

Disclaimer 📢

The views expressed here are my own.

I am a human who writes as well as possible for other humans.

I use AI proofreading tools to improve some texts.

I welcome constructive criticism and dialogue.

I shape these insights through 30 years in the software industry, 25 years of teaching, and writing over 500 articles and a book.


This article is part of the AI Coding Tip series.

https://maximilianocontieri.com/ai-coding-tips?embedable=true


Written by mcsee | Sr. software engineer specialized in Clean Code, Design, and TDD. Author of the book "Clean Code Cookbook", with 500+ articles written.
Published by HackerNoon on 2026/03/03