You’re Using AI to Write Code - You’re Not Using It to Review Code

Written by paoloap | Published 2026/01/15
Tech Story Tags: ai-assisted-coding | reviewing-code-with-ai | ai-assistant | ai-prompts-for-coding | ai-coding-techniques | ai-code-review | ai-coding-tips | hackernoon-top-story


7 prompts for the security, architecture, and documentation work you keep skipping

Last month, I shipped a feature that passed code review, passed tests, and broke production within 4 hours.


The bug? A SQL injection vulnerability in code I’d written six months ago. Code that had been reviewed twice. Code that AI helped me write faster.


That’s when I realized: I’d been using AI to write code faster while ignoring the work that actually prevents disasters. Security audits, architecture reviews, and documentation updates. All the stuff that keeps getting pushed to “later.”


AI coding tools have gotten good enough that the “write code faster” problem is basically solved. Copilot, Claude, ChatGPT: pick your tool, paste your prompt, get working code. Most developers have this part figured out.


What they haven’t figured out is what to do next.


Writing code quickly doesn’t help when your architecture is a mess, your auth code hasn’t been security reviewed in months, and your documentation is six sprints out of date. That’s where teams actually lose time: on the work that keeps getting pushed to “later.”


I wrote about the speed techniques in Part 1. This piece is about using AI for security audits, architecture decisions, and documentation — the work that matters but never feels urgent enough to actually do.

Level 1: Session Starters (Use These Every Day)

These three techniques pay off immediately. Context for every session. Docs while the code is fresh. Reviews before your teammates see it.

Technique #1: The Context Dump

Time Saved: 30–60 minutes per session

Every AI conversation starts with amnesia. You re-explain your stack. Redescribe your patterns. Get generic answers that ignore your constraints.


The Context Dump fixes this in 60 seconds.


How it works:

Here's my project context:

Project: [Name] - [One-line description]

Stack: [Frontend] + [Backend] + [Database]

Current focus: [What you're building this session]
Key files:
- [path/to/main/file] - [what it does]
- [path/to/config] - [relevant settings]

Conventions:
- [Naming patterns]
- [Error handling approach]
- [Testing strategy]

Known constraints:
- [Performance requirements]
- [Security considerations]
- [Technical debt to work around]

I'm about to [specific task]. Keep this context for our session.


Why it works:

Every answer becomes tailored to YOUR project. The AI knows your stack, your patterns, your constraints. It becomes a teammate who’s been on the project for months instead of a stranger you just met.
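
For concreteness, here’s what a filled-in dump might look like. Every detail below is invented; swap in your own:

Here's my project context:

Project: Invoicely - B2B invoicing SaaS
Stack: React + Django + PostgreSQL
Current focus: Adding bulk CSV invoice import

Key files:
- api/views/invoices.py - invoice CRUD endpoints
- config/settings.py - feature flags, rate limits

Conventions:
- snake_case everywhere, early returns, no nested ifs
- Errors raised as domain exceptions, caught at the view layer
- pytest with factory fixtures; every endpoint gets a happy-path and a failure test

Known constraints:
- p95 API latency must stay under 300ms
- Invoices are immutable once issued (audit requirement)
- Legacy billing tables can't be altered until Q3

I'm about to build the CSV import endpoint. Keep this context for our session.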


Pro Tips:

  1. Save your Context Dump as a markdown file. Paste it at the start of each session.
  2. Update it weekly as your project evolves.
  3. The “Known constraints” section prevents AI from suggesting solutions that won’t work in your environment.
  4. For long sessions, remind AI of key context mid-conversation: “Remember, we’re using PostgreSQL, not MySQL.”

Technique #2: The Documentation Generator

Time Saved: 2–4 hours per module

Documentation is always “next sprint” until a new hire spends three days figuring out what your auth module does. AI can generate docs that actually help — not the boilerplate kind that restates function names, but the kind that saves that new hire three days.


How it works:

Generate documentation for this code:

[Paste your code]

Include:
1. Overview: What this module does and why it exists
2. Quick Start: How to use it in 3 steps or less
3. API Reference: Every public function with params, returns, and examples
4. Common Patterns: The 3 most common use cases with code
5. Gotchas: Edge cases, limitations, and things that will bite you
6. Related: What other modules this works with

Write for a developer who's new to this codebase but not new to coding.


Why it works:

The “Gotchas” section is the key. AI identifies edge cases and limitations you’ve internalized but never documented. It finds the things that would take a new developer three frustrated hours to discover on their own.
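
For a sense of what you’re aiming for, here’s the kind of entry a good “Gotchas” section contains. The module behaviors below are invented, but the shape is what matters:

Gotchas:
- refresh_session() returns None instead of raising when the refresh
  window has expired. Callers that don't check for None log users out
  silently.
- Tokens are cached per worker process. Revoking a token can take up to
  60 seconds to propagate in multi-worker deployments.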


Pro Tips:

  1. Generate docs immediately after writing code, while your intent is still fresh
  2. Ask for “examples that would make sense to a junior developer”
  3. Include the “Related” section to help devs navigate your codebase
  4. Review and edit the output. AI gets structure right, but you know the nuances.

Technique #3: The Code Review Partner

Time Saved: 1–2 hours per PR

Code reviews are valuable but slow. AI can do the first pass, catching issues before your human reviewers even look at it.


How it works:

Review this code as a senior developer:

[Paste your code or diff]

Check for:
1. Bugs: Logic errors, off-by-one, null handling, race conditions
2. Security: Injection risks, auth issues, data exposure
3. Performance: N+1 queries, unnecessary loops, memory leaks
4. Maintainability: Naming, complexity, duplication
5. Edge cases: What inputs would break this?

For each issue:
- Severity: Critical / High / Medium / Low
- Line number or section
- What's wrong
- How to fix it

Be harsh. I'd rather fix issues now than in production. 


Why it works:

Without “Be harsh,” AI gives you the diplomatic review. You want the brutal one. The review that catches what you missed after staring at the code for three hours.


Pro Tips:

  1. Run this BEFORE pushing for human review. Don’t waste reviewers’ time on obvious issues
  2. Include your project’s conventions: “We use early returns, not nested ifs.”
  3. Ask for a security review separately if the code handles auth or user data
  4. Use the severity ratings to prioritize fixes


Run this before every PR. Your human reviewers will notice.
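
If you want this on every PR without having to remember it, it’s a few lines of scripting. Here’s a minimal sketch using the OpenAI Python SDK; it assumes OPENAI_API_KEY is set in your environment, and the model name is a placeholder you should swap for whatever your team uses:

# review.py - first-pass AI review of staged changes (a sketch, adapt to taste)
import subprocess

from openai import OpenAI

PROMPT = """Review this code as a senior developer:

{diff}

Check for bugs, security, performance, maintainability, and edge cases.
For each issue give severity, location, what's wrong, and how to fix it.
Be harsh. I'd rather fix issues now than in production."""

# Grab the staged diff; swap in "origin/main...HEAD" to review a whole branch.
diff = subprocess.run(
    ["git", "diff", "--cached"], capture_output=True, text=True, check=True
).stdout

if diff.strip():
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model your team prefers
        messages=[{"role": "user", "content": PROMPT.format(diff=diff)}],
    )
    print(response.choices[0].message.content)
else:
    print("Nothing staged to review.")

Wire it into a pre-push hook or a CI job and the first pass happens before any human looks at the PR.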

Level 2: Catch Problems Early (2–5 hours saved weekly)

Security review? “Next sprint.” Architecture check? “After launch.” Performance audit? “When it becomes a problem.”


These prompts turn “later” into “before lunch.” Run them weekly, and you’ll stop firefighting.

Technique #4: The Architecture Advisor

Time Saved: 2–6 hours of design decisions

Before you write code, run your architecture by AI. It won’t make the decision for you, but it’ll surface tradeoffs you hadn’t considered.


How it works:

I'm designing [feature/system]. Help me evaluate my approach.

Context:
- Scale: [Expected users/requests/data volume]
- Team: [Size and experience level]
- Timeline: [Deadline or runway]
- Existing stack: [What we already use]

My current plan:
[Describe your approach]

Evaluate:
1. What are the top 3 risks with this approach?
2. What would break first at 10x scale?
3. What's the simplest version I could ship first?
4. What alternatives should I consider?
5. What would you do differently if you had [more time / less time]?

Be specific. I want tradeoffs, not best practices.


Why it works:

“I want tradeoffs, not best practices” is the line that matters. Without it, you get generic architecture advice. With it, AI analyzes YOUR specific constraints and surfaces what actually matters for your situation.


“What would break first at 10x scale?” forces thinking about your specific system, not theoretical patterns.


“What’s the simplest version?” prevents overengineering.
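
To see what “describe your approach” should look like in practice, here’s a filled-in example. The system and the numbers are invented:

I'm designing a webhook delivery system. Help me evaluate my approach.

Context:
- Scale: ~50k webhooks/day, with bursts around 200/second
- Team: 3 backend devs, strong Python, no Kafka experience
- Timeline: 4 weeks to beta
- Existing stack: Django + PostgreSQL + Redis

My current plan:
Store events in Postgres, enqueue deliveries through Celery on Redis,
retry with exponential backoff, dead-letter after 5 failed attempts.

Evaluate: [same 5 questions as above]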


Pro Tips:

  1. Do this BEFORE writing code — not after you’ve committed to an approach
  2. Include your timeline constraint — it changes everything
  3. Ask follow-up questions about specific tradeoffs
  4. Use this for database schema design, API contracts, and system boundaries

Technique #5: The Security Auditor

Time Saved: 3–5 hours of security review

When was your last real security review? Not the checkbox compliance stuff, but an actual look at your auth code for injection risks and privilege escalation. For most teams, the honest answer is “never” or “we should do that.” AI won’t replace a proper pentest, but it catches the OWASP Top 10 vulnerabilities, the categories behind most real-world breaches, in the time it takes to grab coffee.


How it works:

Security audit this code:
[Paste code that handles auth, user input, or sensitive data]

Check for:
1. Injection: SQL, NoSQL, command, LDAP
2. Auth/AuthZ: Session handling, privilege escalation, token issues
3. Data exposure: Logging secrets, error messages, API responses
4. Input validation: Missing sanitization, type coercion, length limits
5. Cryptography: Weak algorithms, hardcoded secrets, improper key handling

For each finding:
- Severity: Critical / High / Medium / Low
- Attack scenario: How would someone exploit this?
- Fix: Specific code change needed
- Reference: Relevant OWASP/CWE if applicable

Assume an attacker with knowledge of our stack.


Why it works:

“Assume an attacker with knowledge of our stack” shifts AI from theoretical risks to practical exploits. The “Attack scenario” forces it to think like a hacker.
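
To make the injection category concrete, here’s the classic finding this prompt surfaces, shown in hypothetical code rather than anything from my codebase:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

# VULNERABLE: user input is interpolated into the SQL string, so an
# attacker sending email = "' OR '1'='1" gets back every row in the table.
def get_user_unsafe(email):
    return conn.execute(
        f"SELECT id, email FROM users WHERE email = '{email}'"
    ).fetchall()

# FIX: a parameterized query; the driver handles quoting and escaping.
# (? is sqlite3's placeholder; your driver may use %s instead.)
def get_user(email):
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchall()

print(get_user_unsafe("' OR '1'='1"))  # leaks all users
print(get_user("' OR '1'='1"))         # returns []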


Pro Tips:

  1. Run this on the auth code, payment handling, and anything touching user data
  2. Don’t skip “Logging secrets” — it’s the most common issue I see
  3. Ask for both the vulnerability AND the fix
  4. This doesn’t replace penetration testing — it catches the obvious stuff

Technique #6: The Performance Profiler

Time Saved: 2–4 hours of optimization

Last month, I had an endpoint taking 3 seconds to load. I assumed it was the database. Ran this prompt, and AI pointed to a list comprehension two files away — a helper function calling a property getter 400 times per request. Each getter hit the database.


I’d been staring at the wrong file. AI reads everything with fresh eyes.
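
Here’s a hypothetical reconstruction of that pattern, not the actual code, with the fix the prompt suggested:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (item_id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO prices VALUES (?, ?)", [(i, i * 1.5) for i in range(400)])

class Item:
    def __init__(self, item_id):
        self.item_id = item_id

    @property
    def price(self):
        # Looks innocent, but every access is a round trip to the database.
        row = conn.execute(
            "SELECT price FROM prices WHERE item_id = ?", (self.item_id,)
        ).fetchone()
        return row[0]

items = [Item(i) for i in range(400)]

# Slow path: the comprehension triggers 400 separate queries (the N+1 pattern).
total_slow = sum(item.price for item in items)

# Fix: fetch all prices in one query, then work in memory.
prices = dict(conn.execute("SELECT item_id, price FROM prices"))
total_fast = sum(prices[item.item_id] for item in items)

assert total_slow == total_fast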


How it works:

Analyze this code for performance issues:
[Paste code]

Context:
- This runs [how often: per request / batch job / etc.]
- Data size: [typical input size]
- Current pain point: [what feels slow]

Find:
1. Time complexity issues (O(n²) operations, unnecessary loops)
2. Database problems (N+1 queries, missing indexes, over-fetching)
3. Memory issues (large allocations, leaks, caching opportunities)
4. I/O bottlenecks (blocking calls, sequential when could be parallel)
5. Quick wins (simple changes with big impact)

For each issue:
- Impact: High / Medium / Low
- Current behavior
- Suggested fix with code
- Expected improvement

Focus on the 20% of changes that give 80% of the gains.


Why it works:

“Focus on the 20% of changes that give 80% of the gains” prevents AI from giving you a 50-item optimization list. You want the high-impact fixes first.


Pro Tips:

  1. Include your “current pain point” — it helps AI prioritize what matters to you
  2. Always ask about caching opportunities. Often the biggest win.
  3. For database code, ask specifically about indexes.
  4. Validate suggestions with actual profiling before shipping. AI identifies candidates; you confirm they matter.

Level 3: The Big Wins (4–8 hours saved per use)

New codebase. Major migration. The kind of work that usually means a week of context-gathering before you write a single line.


You won’t need these often. But when you do, they compress days into hours.

Technique #7: The Migration Assistant

Time Saved: 4–8 hours per migration

Migrations are tedious. Upgrading frameworks, moving databases, changing APIs. AI can handle the mechanical parts while you focus on the tricky edge cases.


The last time I used this, it saved me six hours on a Rails 6→7 upgrade.


How it works:

Help me migrate from [Old] to [New].

Current setup:
[Describe what you have, paste sample code]

Target:
[Describe where you want to be]

Constraints:
- Must maintain backwards compatibility for [duration]
- Cannot have downtime longer than [limit]
- Must preserve [specific data/behavior]

Generate:
1. Migration checklist (ordered steps)
2. Code transformations for common patterns
3. Breaking changes to watch for
4. Rollback plan
5. Validation tests to confirm migration worked

Start with the riskiest parts first.

Why it works:

“Start with the riskiest parts first” is key. AI will identify what’s most likely to break, so you tackle it early when you have time to fix issues.
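
The validation tests (item 5 in the generated output) are the part most people skip. Here’s a minimal sketch of the kind of check worth asking for, assuming a simple old-table-to-new-table move and a sqlite3-style conn.execute; adapt it to your driver. The table and key names are constants you’d fill in, never user input:

def validate_migration(conn, old_table: str, new_table: str, key: str) -> None:
    """Post-migration sanity checks: row counts plus a key spot-check."""
    # Identifiers below are trusted constants, not user input, so the
    # f-strings are safe here (SQL placeholders can't bind table names).
    old_count = conn.execute(f"SELECT COUNT(*) FROM {old_table}").fetchone()[0]
    new_count = conn.execute(f"SELECT COUNT(*) FROM {new_table}").fetchone()[0]
    assert old_count == new_count, f"row count mismatch: {old_count} != {new_count}"

    # Every key in the old table must have made it into the new one.
    missing = conn.execute(
        f"SELECT COUNT(*) FROM {old_table} o "
        f"LEFT JOIN {new_table} n ON o.{key} = n.{key} "
        f"WHERE n.{key} IS NULL"
    ).fetchone()[0]
    assert missing == 0, f"{missing} rows never arrived in {new_table}"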


Pro Tips:

  1. Include sample code from your actual codebase
  2. Ask for the rollback plan upfront. You’ll need it
  3. Run validation tests before AND after migration
  4. For database migrations, always ask about data integrity checks

Bonus: The Full Codebase Analysis

Time Saved: 1–2 days for new codebases

Joining a new project? Inheriting legacy code? AI can give you a codebase tour in minutes instead of days wandering through folders.


How it works:

Analyze this codebase structure:

[Paste your directory tree or file list]

Tell me:
1. Architecture: What pattern is this? (MVC, microservices, monolith, etc.)
2. Entry points: Where does execution start?
3. Core modules: What are the 5 most important files/folders?
4. Data flow: How does data move through the system?
5. Dependencies: What external services/APIs does this rely on?
6. Red flags: What looks concerning from a maintenance perspective?
7. Where to start: If I need to [specific task], which files should I look at first?

Explain like I'm a senior dev who's never seen this codebase.


Then paste key files and ask:

Now explain [specific file] in detail:
- What does it do?
- What depends on it?
- What does it depend on?
- What are the gotchas?

Why it works:

“Where to start” is the money question. Instead of wandering through folders, you know exactly which files to read for your specific task.
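
To get the directory tree the first prompt asks for, `tree -L 3` works if you have it installed; if not, a few lines of Python do the job. The skip list is a guess at the usual noise; adjust it for your stack:

import os

SKIP = {".git", "node_modules", "__pycache__", ".venv", "dist", "build"}

def print_tree(root: str = ".", max_depth: int = 3) -> None:
    """Print a trimmed directory tree, ready to paste into the prompt."""
    root = os.path.abspath(root)
    base_depth = root.rstrip(os.sep).count(os.sep)
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune noise and hidden directories in place so os.walk skips them.
        dirnames[:] = sorted(
            d for d in dirnames if d not in SKIP and not d.startswith(".")
        )
        depth = dirpath.rstrip(os.sep).count(os.sep) - base_depth
        if depth >= max_depth:
            dirnames[:] = []  # deep enough; stop descending
            continue
        indent = "  " * depth
        print(f"{indent}{os.path.basename(dirpath)}/")
        for name in sorted(filenames):
            print(f"{indent}  {name}")

print_tree()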


Pro Tips:

  1. Start with the directory structure, then drill into specific files
  2. Ask about the “Red flags” — AI spots patterns humans miss after staring at code for years
  3. Use this when onboarding to a new team or inheriting legacy code
  4. Combine with The Documentation Generator to create onboarding docs for future devs

The Real Win

The techniques in Part 1 were about getting faster at what you already do. These are about expanding what you’re willing to take on.


That security audit you’ve been putting off for three sprints? You could run it tomorrow morning before standup. That legacy codebase nobody wants to touch? You could have a working map of it by lunch.


The barrier was never skill. It was the sheer tedium of doing it manually. Remove the tedium, and you start doing work that compounds — cleaner architecture, fewer vulnerabilities, documentation that actually helps the next person.


If any of these surface something nasty in your codebase, I’d like to hear about it.


Part 3 is in the works: debugging, test generation, and techniques for production systems where mistakes cost money. Let me know in the comments if that’s something you’d find useful — it helps me know what to prioritize.


Written by paoloap | No BS AI/ML Content | ML Engineer with a Plot Twist 🥷 70k+ Followers on LinkedIn
Published by HackerNoon on 2026/01/15