πŸ’‘ The Plan-Act Loop: How to Coach a Genius Junior Engineer (AKA Claude Code)

Written by rdondeti | Published 2025/10/29
Tech Story Tags: ai-coding | claude-code | product-development | programming | machine-learning | plan-act-loop | software-development | ai-agent

TL;DR: AI coding is fast but unreliable without strict human oversight. I used a "Plan-Act Loop" and a seven-rule CLAUDE.md to treat Claude Code like a genius junior engineer. This process let me launch a production app in 30 days while avoiding common pitfalls like token-burning dependency loops and unchecked scope creep.

This isn't theory. This is the battle-tested, plan-first playbook that took my project, Apparate, from zero to a production-ready app in about 30 days, a project I'd have budgeted eight months for traditionally.

Forget the hype. This "101" is the practical starting point, built solely on my real-world experience and emphasizing scoped workflows, brutal failure modes, and why your discipline is the only thing that makes AI coding reliable. You are the Tech Lead; Claude Code is your genius junior engineer: fast, brilliant, but in need of constant coaching and explicit rules.

⚑️ The Core Truth: Speed is a Trap Without Checks

The biggest 'aha' is the sheer velocity of code generation when Claude is directed precisely. The biggest disappointment? How quickly it can destroy a working codebase when granted unchecked autonomy.

  • The Contrast: Claude is a race car. It will get you there fast, but it will drive off a cliff if you aren't the one steering.
  • The Takeaway: Use the speed. Never grant it control over critical code. Reliability comes from verification, not autonomy.

This contrast is what shaped my core approach: use the speed, but never grant it unchecked autonomy over important sections of the codebase.

⏳ 30-Day App vs. The 8-Month Slog

My timeline: I launched a production app in 30 days with Claude Code and one "vibe coder" (me). The traditional path would have been an eight-month marathon of full-time development.

This acceleration isn't "AI magic." It’s a direct result of process:

  1. Relentlessly scoping tasks to the smallest possible unit.
  2. Approving a plan (the "Plan" in the Plan-Act loop) before any code is written.
  3. Shipping in small slices that are quick to review (the "Act" in the Plan-Act loop).

Your mantra must be: Plan first, ship in slices. Let speed amplify good process, never replace it.
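To make the loop concrete, here is a minimal sketch of the kind of plan I expect back before approving any code. The task, file names, and steps below are illustrative placeholders, not a required template:

```markdown
## Plan (awaiting approval; no code yet)

Task: add a "forgot password" link to the login screen (smallest useful slice).

1. Files to touch: the login screen UI and its view model, nothing else.
2. Approach: a text button that navigates to the existing reset flow; no new dependencies.
3. Out of scope: redesigning the login screen or changing the reset flow itself.
4. Verification: build passes, existing login tests stay green, manual check of the new navigation.

Waiting for an explicit "approved" before writing any code.
```

The exact format matters less than the two properties it enforces: the slice is small enough to review in minutes, and nothing ships until a human says "approved."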

🚨 The Costliest Failure: The Dependency Death Loop

My worst spiral came while integrating an activity-detection API, where the system kept selecting incompatible library versions and got stuck in an infinite loop of upgrading and downgrading without converging. The token burn and wasted time were astronomical.

  • How I Broke It: I stopped the session immediately, reset the context completely, and reasserted the specific, pinned versions and compatibility requirements (see the sketch after this list) before letting it touch another file.
  • The Lesson: If versions oscillate and suggestions repeat, stop immediately and re-verify constraints before you write another line.
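What "reasserting constraints" looks like in practice is a short, explicit block the agent cannot negotiate with. The sketch below is illustrative only; the library coordinates and versions are placeholders for whatever you have actually approved:

```markdown
## Dependency constraints (do not change without my approval)

- Kotlin: 1.9.24 (pinned)
- com.google.android.gms:play-services-location: 21.3.0 (pinned; used for activity detection)
- Never upgrade or downgrade any dependency to "fix" a build error.
- If two libraries appear incompatible, STOP and report the conflict; do not change versions.
```

Writing constraints down like this turns "re-verify constraints" from a vague intention into something the agent can actually be held to.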

The most expensive mistake: skipping a written plan. It led straight to loops, token burn, and code churn. My standing rule now is simple: if there is no plan on the table, there is no code.

πŸ“ My Secret Weapon: The CLAUDE.md Contract

I don't just start coding. I follow a strict, end-to-end spec-writing process that creates a living rules-of-engagement document for the agent:

  1. Concept Refinement: Discuss the raw idea in Claude Desktop until a clear software architecture emerges.
  2. PRD Generation: Create a formal PRD from that refined discussion.
  3. Project Initialization: Give the PRD to Claude Code, initialize the project, and demand it generate a CLAUDE.md file based on the PRD (a sample hand-off prompt follows this list).
  4. Enforcing House Rules: I then add my non-negotiable "house rules" to CLAUDE.md. This file is now the contract for how work gets done. It is the guardrail against drift, rework, and accidental scope creep.
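As a rough illustration of step 3, the hand-off can be a prompt along these lines; the file path and wording are placeholders of mine, not a required incantation:

```markdown
I'm starting a new project. The PRD is in docs/PRD.md.

1. Read the PRD end to end before doing anything else.
2. Initialize the project structure only; no feature code yet.
3. Generate a CLAUDE.md that captures the architecture, conventions, and constraints from the PRD.
4. Stop there. I will add my house rules to CLAUDE.md before any feature work begins.
```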

πŸ›‘ The 7 Non-Negotiable Rules in My CLAUDE.md

These aren't suggestions; they are guardrails that define the coaching model for your AI junior engineer (a sketch of how they read inside CLAUDE.md follows the list):

  1. Plan First: Wait for explicit user confirmation before writing code.
  2. Architecture: For major decisions, always present the top three options with a clear list of pros and cons.
  3. Security Gate: Route all changes through a security review agent before they can be accepted.
  4. Task Tracking: Write all TODO items to a dedicated file and let specialized agents update their status as work progresses.
  5. Efficiency: Use sub-agents for specialized tasks, coordinated by a Project Manager agent.
  6. Platform Authority: Read the official API documentation (e.g., Android API) before implementing platform-specific behaviors.
  7. Compatibility Check: Before checking in, verify all libraries are compatible and match the latest stable versions I have approved.
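For reference, here is a condensed sketch of how these rules might read as a section of CLAUDE.md. The wording and file names (for example, TASKS.md) are illustrative; the seven rules above are the substance:

```markdown
# House Rules (non-negotiable)

1. Plan first: propose a written plan and wait for my explicit "approved" before writing any code.
2. Architecture: for major decisions, present the top three options with pros and cons; I choose.
3. Security: route every change through the security-review agent before it can be accepted.
4. Tasks: write all TODOs to TASKS.md; agents update status there as work progresses.
5. Efficiency: delegate specialized work to sub-agents, coordinated by the project-manager agent.
6. Platform: read the official API docs (e.g., Android) before implementing platform-specific behavior.
7. Compatibility: before check-in, verify all libraries are compatible and match my approved stable versions.
```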

🧘 When to Stop Wrestling and Take a Break

Never accept a suggestion from Claude Code without challenging it; treat the agent like a junior engineer whose steps must be questioned.

  • The Break Signal: You know it’s time to take a break when Claude starts suggesting the same irrelevant changes or minor edits, and your core problem still isn’t solved.
  • My Rule: "STOP, TAKE A BREAK." Your human debugging skills are still the most critical part of the loop; Claude is not autonomous enough to debug safely without active oversight.

Reviewing AI-Generated Code: I use a review agent to generate a report, but I still prefer building in small chunks and reviewing manually. Chunking the work keeps the risk surface small and prevents long sessions from accumulating subtle errors. Even with a review agent, I assume responsibility for final judgment.

πŸš€ Guidance for Managers (A Laddered Rollout)

I didn't pitch this to leadership; I just shipped with it. For organizations, I recommend a laddered rollout to build trust and avoid culture shock:

  1. Phase 1 (Familiarity): Start by using Claude only for minor defects and bug fixes.
  2. Phase 2 (Expansion): Once the team is comfortable, expand to new feature development under the same plan-first/review rules.
  3. Phase 3 (Complex): Only then should you push into larger refactors and more complex features.

This laddered approach builds trust in the workflow and avoids expensive reversals.

Closing: This Is the Starting Point

This is the 101: my real experience, in my own words, expanded into a format others can apply immediately. This is the Plan-Act Loop.

Next up, I'll be publishing the deeper dives that build on these foundations: Advanced Multi-Agent Verification, Security Guardrails, Cost Control, and the full Team Rollout strategy, all coming soon.

