How to Write Technical Specs That Actually Ship

Written by danielkov | Published 2025/11/10

TL;DR: Technical specifications are powerful tools for shipping features from idea to production. This guide teaches you how to write specs that validate ideas early, get stakeholder buy-in, and drive implementation. Covers problem statements, requirements, technical approaches, rollout strategies, and testing. Learn how to use LLMs for spec-driven development and build collaborative cultures around documentation.

If you're reading this, chances are you've stared at a half-finished technical spec at 2AM, trying desperately to wrap it up before standup. Or maybe you've been on the receiving end — slogging through someone else's 47-page opus, wondering if unlimited PTO is worth it after all.

But here's the thing: when done well, technical specifications aren't homework from your tech lead. They're one of the most powerful tools you have to drive features from idea to production. I'm going to show you how I approach writing them, based on my experience at Monzo and lessons I've picked up from engineers at AWS, Google, and Meta.

My background is software engineering, but a lot of this transfers to other technical disciplines. Let's dive in.

Why write tech specs anyway?

I get it. Writing documentation feels like busywork when you could be shipping code. But stick with me here, because a good technical spec does way more than satisfy some process requirement.

When used well, technical specs help you:

  • Validate the idea before you waste weeks building it — I've killed more bad ideas during the spec phase than I care to admit. Writing forces you to think through edge cases you'd otherwise discover three sprints in.
  • Get buy-in from people who actually matter — Your staff engineer, your product manager, that principal engineer who's seen it all. Getting their thumbs up early means you won't be defending your approach in a contentious PR review later.
  • Iterate on the high-level plan with smarter people than you — I'm not the smartest person in the room, and if you're honest with yourself, neither are you. A spec is how you tap into collective intelligence.
  • Share knowledge with your team — Onboarding new engineers becomes "read this spec" instead of "let me explain this architecture for the third time this week."
  • Share knowledge with stakeholders outside engineering — Your product manager, compliance team, or finance folks need to understand what you're building and why. A well-written spec bridges that gap.
  • Use as validation when work is complete — Did you actually solve what you set out to solve? The spec is your scorecard.

Spec-driven development and LLMs

Here's where things get interesting. LLMs have changed how I think about technical specifications.

LLMs work best with strict, self-contained technical specifications that don't rely on context living in someone's head. The less you leave up to imagination, the better. A well-defined spec can be fed into an LLM for automatic ticket creation or even technical implementation. You can generate tests from your specifications. You can use LLMs to ideate on parts of your proposal, ask them to do research, or even help identify flaws in your logic.
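
To make that concrete, here's a rough sketch of what "generate tests from your specifications" can look like in practice. It assumes the OpenAI Node SDK with an API key in the environment; the model name, the prompt wording, and the draftTestFromRequirement helper are illustrative, not a prescription.

// Rough sketch: asking an LLM to draft a test skeleton from one spec requirement.
// Assumes the OpenAI Node SDK and OPENAI_API_KEY in the environment; the model
// name and prompt are illustrative.
import OpenAI from "openai";

const client = new OpenAI();

const requirement = "Transaction history must load in under 2 seconds at p95";

async function draftTestFromRequirement(req: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // any capable model will do
    messages: [
      {
        role: "system",
        content:
          "You turn spec requirements into automated test skeletons. Output only code.",
      },
      { role: "user", content: req },
    ],
  });
  return response.choices[0].message.content ?? "";
}

draftTestFromRequirement(requirement).then(console.log);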

I have another article coming in the next few weeks dedicated to spec-driven development with LLMs. Keep an eye out for it — I've been experimenting with this at scale and the results are genuinely impressive.

But even if you never touch an LLM, the discipline of writing a rigorous spec pays dividends. It forces clarity. And clarity is how good software gets built.

Anatomy of a good technical proposal (part 1)

A good technical spec is well-structured, easy to understand, and actionable. It should be skimmable for executives and detailed enough for the engineer implementing it. That's a narrow tightrope to walk, but I've found a pattern that works.

Here's the first half of the anatomy. This is what you write before you get buy-in.

What are you trying to solve?

Start with the problem statement. Not the solution — the problem. I've seen so many specs that jump straight into "we should build a microservice with GraphQL" without ever explaining what's broken.

Your problem statement should be clear enough that someone outside your team can understand it. At my startup, Tandem, I have a simple test: could Jack, my non-technical co-founder, understand the problem just by reading it? If not, rewrite it.

Some techniques that work:

  • Use concrete examples: Not "users experience latency" but "users wait an average of 8 seconds for their transaction history to load, and we're seeing 23% drop-off at this screen."
  • Show the business impact: "This costs us approximately £2.3M annually in lost conversions" is a lot more compelling than "this is slow."
  • Include quotes or data: If you have user research, customer complaints, or monitoring data, drop it in. Evidence beats opinion.

What are you NOT trying to solve?

This is criminally underused and incredibly valuable. Scope creep is real, and if you don't explicitly call out what's out of scope, you'll spend the next month arguing about edge cases that don't matter.

Imagine this: you get a proposal from the platform team to upgrade to their shiny new shared UI library. Seems straightforward, right? Just swap out some imports, update a few components. But then you discover the new library uses a completely different routing system underneath. Now you need to refactor every route in your application. Then you realise your product team's navigation patterns don't quite fit the new model, so you need custom wrappers. Oh, and every other product team that depends on this shared library? They're all blocked on the same changes. What started as a "simple utility update" just turned into a six-month migration effort that's holding up feature work across the entire organisation.

An explicit "what we're NOT solving" section could have caught this early. It gives you permission to say no. It keeps the culture of "why-notism" at bay. And it protects your timeline from those seemingly innocent suggestions that balloon into organisational nightmares.

Why should this problem be solved?

Not all problems need solving. Some can live with workarounds. Some aren't actually problems, just minor inconveniences. Why should this one get attention and engineering time?

Make the case clearly:

  • Impact on users: How many people does this affect? How badly?
  • Impact on business: Revenue, retention, conversion, operational costs?
  • Impact on engineering: Is this tech debt that's slowing down every other project?
  • Impact on compliance or risk: Will this blow up in six months if we don't fix it?

I'm a big believer in showing your work here. Don't just say "this is important." Show the math. We once based a proposal to change the text on a button on an experiment that proved a 23% uptick in conversion, which translated to +£12k ARR. That's a much easier sell than "this button copy could be better."

Why now?

Timing matters more than people think. I've seen perfectly good proposals get shelved because it wasn't the right time. And I've seen mediocre proposals get greenlit because the timing was perfect.

Why is now the best time to solve this problem? Think about:

  • External deadlines: Regulatory changes, marketing campaigns, conference demos
  • Internal readiness: Do you have the right infrastructure in place? The right team capacity?
  • Dependencies: Are there other projects that need to ship first?
  • Opportunity windows: Sometimes, there's a narrow window when solving something is easier than it'll ever be again.

Here's a specific example. We had a proposal to use local time zones on bank account statements instead of UTC. The timing was critical — we wanted to do it right after daylight savings ended in the UK. That gave us six months when UTC and local time were the same. We also had to keep historical statements in their original time zone for legal reasons. Starting right after the clocks changed meant we had the maximum window before the next transition, which greatly reduced the chances of someone needing to generate a statement with multiple logical timezone conversions.

If we'd done it in March instead of November, we'd have introduced way more complexity and edge cases. Timing wasn't just important — it was the difference between a clean migration and a nightmare.

Requirements

What's 100% non-negotiable? What are the absolute bare minimums that can't be excluded?

Think of requirements as the must-haves. Not the nice-to-haves, not the maybe-we-should-also, but the core things that must be true for this to be worth doing at all.

I structure requirements as testable statements:

  • "Transaction history must load in under 2 seconds at p95"
  • "Solution must work for users on iOS 14 and above"
  • "Data must be encrypted at rest and in transit"
  • "Must not require any customer to re-authenticate"

Notice these are specific and verifiable. You can test whether you've met them. Vague requirements like "must be fast" or "should be secure" are useless. Pin them down.

Bird's eye view of the solution

This is where most people screw up. They immediately dive into implementation details — which database, which API, which service mesh. Stop. Not yet.

The bird's eye view should be understandable by non-technical stakeholders. I'm talking product managers, operations folks, maybe even finance. If you can't explain your solution to someone who doesn't know what Kubernetes is, you don't understand it well enough yourself.

Some rules for this section:

No tech talk. Seriously. Not yet. Instead of "we'll deploy a new microservice with a Redis cache and expose it via GraphQL," say "we'll store frequently-accessed data closer to the user so it loads instantly instead of making a slow trip to the database every time."

Think user-centric. What changes from the user's perspective? What does success look like for them?

Anchor on outcomes, work backwards. Start with "users will see their transaction history load in under 1 second" and then explain at a high level how you'll achieve that.

If possible, offer multiple potential solutions. Sometimes there are genuinely different approaches. Lay out two or three options with pros and cons. This shows you've thought it through and gives stakeholders a real choice.

At this point, you should stop and reassess. Here's my checklist before moving forward:

  • [ ] Problem statement is clear and specific
  • [ ] Business impact is quantified
  • [ ] Non-goals are explicitly stated
  • [ ] Requirements are testable
  • [ ] High-level solution makes sense to a non-engineer
  • [ ] I'm confident this is worth doing

If you can't check all these boxes, don't proceed to part 2 yet. Seriously. I've wasted weeks writing detailed implementation plans for ideas that weren't actually worth doing.

Getting buy-in and collaboration

Your next step is to get feedback from key stakeholders. This is not a formality. This is where your spec either gets better or falls apart.

Some hard-won tips for a successful pitch:

Use a platform that makes collaboration easy. I'm a fan of Notion because of real-time comments and collaboration, but Google Docs works too. I've even seen true pioneers use Figma for this. The important thing is that people can leave inline comments and suggest changes. Avoid PDFs or Word docs attached to emails — you want this to be a living document.

Illustrate your points. If you're proposing changes to a user journey, create mockups or wireframes. If you're optimising a service, show current metrics with screenshots from your monitoring. I recommend Figma (high-fidelity) or Excalidraw (low-fidelity) for user journeys and screenshots from Grafana or Datadog for metrics. Visuals aren't decoration — they make your argument concrete.

Getting feedback should be your main objective. Leave your ego at the door. Every piece of criticism you get now is a production issue you won't have later. I've had specs torn apart in review, and it sucks. But you know what's worse? Building something for three months only to have it fail in production because you missed something obvious.

Guide your contributors. If someone leaves a comment like "I don't like this approach," kindly ask them to explain why and suggest an alternative. Vague negativity doesn't help anyone. But genuine concerns with reasoning? That's gold.

Be ready to defend your value proposition. Someone will ask "is this worth the engineering time?" You need an answer. The best answers have numbers. "This can lead to a 27% reduction in database hosting costs, resulting in annual savings of £1.5M" is a lot more compelling than "this will make things more efficient."

A plan worth implementing (part 2)

Do you treat tech specs as a solo effort? Write one alone, share it for review, get some comments, ship it? That's not how the best specs get written.

The best technical specifications are the result of strong collaboration. You write the first draft, sure. But then you invite smart people to make it better. You incorporate their feedback. You adjust based on what you learn. The spec evolves.

This second part of the anatomy is more dynamic. It varies depending on your company's culture, your team's practices, and the type of project. But there are principles that hold across contexts.

Best practices and company standards

Your implementation plan should be an extension of your company's established practices. If you're using React and Next.js for web frontends, your spec should use React and Next.js. If your company has a standard CI/CD pipeline, use that.

The exception is when you're explicitly proposing to change an established practice. Maybe you want to introduce a new framework or migrate to a different database. That's fine — but make your case with evidence.

When I've proposed changes to established practices, I always include:

  • Outside proof of concept: Has another company done this successfully? Show case studies.
  • Prior art from adjacent domains: Have we used similar approaches in different contexts?
  • Clear reasoning for why the current approach doesn't work: Don't just say "this is better." Show where the current approach fails.
  • Migration path: How do we get from here to there without breaking everything?

At Superbet, I led a team that built a dedicated UI library to replace the continuously problematic Ant Design we were using before. We made it a 100% drop-in replacement via a compatibility layer — technically perfect. But I still failed to convince most teams to adopt it, because I couldn't drive the business case home. That's the lesson here: replacing established tech is tough even when your solution is objectively better. You need the numbers, the case studies, the migration path. Technical excellence alone doesn't win the argument.

Prior art and alternatives considered

Is there something similar that already exists? Have you implemented features with similar attributes in the past? All of this is worth mentioning.

Best case scenario: you discover you don't need to build anything new. You can reuse an existing solution. If this happens, don't throw away your spec. Attach it to the knowledge base related to the solution you found. It helps the maintainers understand who their stakeholders are and why people need this. If they want to sunset, transfer ownership, or refactor their software, they'll know who to talk to.

I've saved months of engineering time by finding prior art. We used Statsig to ship experiments, but we'd written our own SDK on top of it instead of using the official one. Our implementation didn't support persisting allocations, which meant we couldn't run sticky experiments. I started speccing out how to add this feature — thinking I'd need to build it from scratch — and went on a bit of a wild goose chase. Then I found an existing, domain-specific service that already solved persistence for a different use case. I just copied the implementation pattern. What I thought would take weeks only took two days.

What other approaches did you evaluate? Why didn't they make the cut?

This shows you've done your homework. It also prevents people from suggesting alternatives you've already considered and rejected. I usually include a short table:

| Approach | Pros | Cons | Why we rejected it |
|---|---|---|---|
| Use existing service X | Fast, proven | Doesn't support feature Y | Missing critical functionality |
| Build custom solution | Perfect fit | High maintenance cost | Not worth the ongoing burden |
| Third-party API | Easy integration | Vendor lock-in, high cost | £250K annually vs £30K to build |

The technical approach

Now — finally — you can get into the details. This is where you explain the actual implementation, architecture decisions, and tech stack choices.

Use diagrams. Architecture diagrams, sequence diagrams, data flow diagrams. I recommend Draw.io (now diagrams.net) because it's free and has decent collaboration features. The disadvantage is there's no live sharing like Figma, so you'll need to export and embed images.

For sequence diagrams, I like Mermaid because you can write them in markdown and they render automatically in most tools. Here's a simple example:

sequenceDiagram
    User->>API: Request transaction history
    API->>Cache: Check cache
    Cache-->>API: Cache miss
    API->>Database: Query transactions
    Database-->>API: Return results
    API->>Cache: Store in cache
    API-->>User: Return results

Explain the "why" behind technical decisions, not just the "what." Don't just say "we'll use Redis for caching." Say "we'll use Redis for caching because our access patterns are heavily read-biased (95% reads vs 5% writes) and we need sub-millisecond latency. We considered Memcached but chose Redis because we need data structures like sorted sets for timeline features."
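
To show what that reasoning can translate into, here's a minimal read-through cache sketch, assuming the ioredis client; the key format, the 5-minute TTL, and the fetchTransactionsFromDb helper are placeholders, not the real service.

// Minimal read-through cache sketch using ioredis. Key format, TTL and the
// fetchTransactionsFromDb stand-in are illustrative.
import Redis from "ioredis";

const redis = new Redis(); // defaults to localhost:6379

async function getTransactionHistory(userId: string): Promise<unknown[]> {
  const cacheKey = `txn-history:${userId}`;

  // The heavily read-biased traffic (95% reads) should be served from here.
  const cached = await redis.get(cacheKey);
  if (cached !== null) {
    return JSON.parse(cached);
  }

  // Cache miss: hit the database, then populate the cache for the next reader.
  const transactions = await fetchTransactionsFromDb(userId);
  await redis.set(cacheKey, JSON.stringify(transactions), "EX", 300); // 5-minute TTL
  return transactions;
}

// Stand-in for the real database query.
async function fetchTransactionsFromDb(userId: string): Promise<unknown[]> {
  return [];
}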

Call out performance characteristics and scalability limits. Be honest about where this will break. "This approach works up to about 100,000 requests per second. Beyond that, we'll need to shard." That's way more useful than pretending your solution scales infinitely.

Mention data models, API contracts, key interfaces. Show what the data looks like. Show what the API requests and responses look like. I usually include JSON examples:

{
  "transaction_id": "tx_123",
  "amount": 1250,
  "currency": "GBP",
  "timestamp": "2025-01-15T14:30:00Z",
  "category": "groceries"
}

A benefit of embedding models in a transferable format like JSON is that they can be copy-pasted straight into a test or mock service later on. That saves time and cuts down on ambiguity.
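
For instance, the payload above can be reused verbatim as a typed fixture; the Transaction interface and the mock helper here are assumptions for illustration.

// The spec's JSON example reused as a typed fixture. The Transaction interface
// and the mock helper are illustrative assumptions.
interface Transaction {
  transaction_id: string;
  amount: number; // assumed to be minor units (pence)
  currency: string;
  timestamp: string; // ISO 8601
  category: string;
}

const exampleTransaction: Transaction = {
  transaction_id: "tx_123",
  amount: 1250,
  currency: "GBP",
  timestamp: "2025-01-15T14:30:00Z",
  category: "groceries",
};

// Seed a mock endpoint or a unit test with the exact object from the spec.
export function mockTransactionHistory(): Transaction[] {
  return [exampleTransaction];
}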

Highlight any new infrastructure or services needed. If this requires spinning up new services, databases, or third-party integrations, call it out. These often have cost implications and procurement delays.

Breaking down the work

How do you decompose this solution into milestones and tasks that can be tracked? This is project management, but it's your job to provide the structure.

Split into phases that deliver incremental value. Ship something usable early. I'm a big fan of the MVP → MLP → Full Product progression:

  • MVP (Minimum Viable Product): The absolute smallest thing you can build to validate the approach. Usually internal-only or limited to a small percentage of users.
  • MLP (Minimum Lovable Product): The smallest version that's actually good enough for real users to use and enjoy.
  • Full Product: All the bells and whistles.

Don't try to ship everything at once. It's tough to admit, but in many cases we ended up keeping the MLP as the end result, because it was "good enough".

Identify critical path items vs nice-to-haves. What must ship for this to work? What's optional? Use a prioritisation framework. I like MoSCoW (Must have, Should have, Could have, Won't have).

Call out what can be parallelised vs must be sequential. Can the backend and frontend teams work simultaneously? Or does the backend need to ship first? Making dependencies explicit helps with planning.

Assign rough t-shirt sizes or story points if your team uses them. I actually hate sizing and I'm bad at it too. I prefer developer-hours, developer-days, and developer-weeks as units when discussing timelines. If you're unsure, go up one unit: not sure whether it's 3 or 4 developer-days? Call it 1 developer-week.

Make dependencies between tasks explicit. Use a simple dependency graph or just list them: "Task B depends on Task A being complete. Task C can start in parallel with Task B."

Success metrics

How will you measure if the implementation actually worked? What are the KPIs?

This is where you define victory conditions. And you need to be specific. "Make it faster" is not a success metric. "Reduce p95 API latency from 500ms to under 200ms" is a success metric.

Define measurable outcomes. Examples:

  • Latency reduction: "p95 latency drops from 800ms to 200ms"
  • Error rate decrease: "5xx errors drop from 0.3% to below 0.1%"
  • Conversion improvement: "checkout completion rate increases from 73% to 80%"
  • Cost savings: "database costs decrease by £40K annually"

Set baseline metrics before implementation. You need to know where you're starting. I take screenshots of dashboards showing current performance and attach them to the spec. It's way too easy to forget what things were like before you started.

Specify how you'll track these metrics. Will you build a dashboard? Set up alerts? Run an A/B test? Be explicit about measurement methodology.

Include both business metrics and technical metrics. Product managers care about revenue and engagement. Engineers care about latency and throughput. Your spec should speak both languages.

Risk assessment, dependencies and mitigation

What could go wrong? What are you worried about? What do you need from other teams?

I've learned to be brutally honest in this section. Pretending risks don't exist doesn't make them go away. It just means you won't have a plan when they materialise.

List external dependencies. Do you need another team to ship something first? Do you rely on a third-party service that might have rate limits or downtime? Call it out. I've had projects delayed by weeks because of dependencies I didn't flag early enough.

Call out single points of failure in your design. Where can this break? What happens if that one critical service goes down? If you're introducing a new database, what happens if it fails?

Identify areas with high uncertainty. Are you using a technology for the first time? Is this an area of the codebase nobody really understands anymore? Are you making assumptions about user behaviour that might be wrong?

For each major risk, propose a mitigation strategy or fallback plan. Don't just list risks — show how you'll handle them. Example:

  • Risk: Third-party API might have downtime
  • Mitigation: Implement circuit breaker with exponential backoff; cache responses for 5 minutes; have a degraded mode that works without the API
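
Here's a minimal sketch of that mitigation in TypeScript. The thresholds, the backoff curve, and the callThirdPartyApi / degradedResponse helpers are illustrative stand-ins, not a finished client.

// Sketch of the mitigation above: circuit breaker with exponential backoff,
// a short-lived cache, and a degraded mode. All numbers and helpers are illustrative.
type CachedEntry = { value: unknown; expiresAt: number };

const cache = new Map<string, CachedEntry>();
const CACHE_TTL_MS = 5 * 60 * 1000; // cache responses for 5 minutes

let consecutiveFailures = 0;
let circuitOpenUntil = 0; // timestamp in ms; 0 means the circuit is closed

async function getWithFallback(key: string): Promise<unknown> {
  const now = Date.now();
  const cached = cache.get(key);

  // Serve from cache while it's fresh.
  if (cached && cached.expiresAt > now) return cached.value;

  // Circuit open: skip the API entirely and degrade gracefully.
  if (now < circuitOpenUntil) return cached ? cached.value : degradedResponse(key);

  try {
    const value = await callThirdPartyApi(key);
    consecutiveFailures = 0;
    cache.set(key, { value, expiresAt: now + CACHE_TTL_MS });
    return value;
  } catch {
    consecutiveFailures += 1;
    // Exponential backoff: 1s, 2s, 4s, ... capped at 5 minutes.
    const backoffMs = Math.min(1000 * 2 ** (consecutiveFailures - 1), 300_000);
    circuitOpenUntil = now + backoffMs;
    // A stale cache entry beats no data; otherwise fall back to degraded mode.
    return cached ? cached.value : degradedResponse(key);
  }
}

// Stand-ins for the real integration and fallback behaviour.
async function callThirdPartyApi(key: string): Promise<unknown> {
  throw new Error("not implemented in this sketch");
}
function degradedResponse(key: string): unknown {
  return { key, degraded: true };
}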

Be honest about what you don't know yet. This builds trust and invites help. "I'm not sure how we'll handle the migration of 500GB of existing data — looking for input here" is way better than pretending you have it all figured out.

Mention compliance, security, or data privacy concerns early. Don't wait until security review to discover you need a whole privacy impact assessment. If you're touching user data, call it out. If this needs SOC2 compliance, mention it.

Rollout strategy

How will this actually get deployed? Phased rollout? Feature flags? Canary deployments?

I never ship big changes to 100% of users on day one. Never.

If you only have two users, ship to just one.

Plan for gradual rollout. My usual progression: 5% → 10% → 50% → 100%. At each stage, you monitor metrics and watch for issues. If something breaks, you're only affecting a small subset of users.

Use feature flags to decouple deploy from release. Deploy the code to production but keep it behind a flag. This lets you control who sees the new feature independent of your deployment process. I've used LaunchDarkly, Statsig, and built homegrown solutions. They all work. Pick one and use it religiously.
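
If you do go homegrown, the core of a percentage rollout is surprisingly small: hash the user ID so each user lands in a stable bucket, then compare that bucket against the current rollout percentage. This sketch assumes Node's built-in crypto module and a flag config that would normally live in a database or config service.

// Minimal homegrown percentage rollout. Hashing gives each user a stable bucket,
// so moving 5% -> 10% -> 50% -> 100% only ever adds users, it never flip-flops them.
import { createHash } from "node:crypto";

interface FlagConfig {
  name: string;
  rolloutPercent: number; // 0 to 100
}

function bucketFor(flagName: string, userId: string): number {
  const digest = createHash("sha256").update(`${flagName}:${userId}`).digest();
  return digest.readUInt32BE(0) % 100; // stable bucket in [0, 100)
}

function isEnabled(flag: FlagConfig, userId: string): boolean {
  return bucketFor(flag.name, userId) < flag.rolloutPercent;
}

// Usage: bump rolloutPercent in config as the rollout progresses.
const flag: FlagConfig = { name: "new-transaction-history", rolloutPercent: 5 };
console.log(isEnabled(flag, "user_123"));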

Define rollback criteria. What conditions trigger a rollback? Be specific:

  • If error rate exceeds 0.5%, roll back
  • If p95 latency exceeds 1 second, roll back
  • If we get more than 10 customer complaints in an hour, roll back

Having these criteria defined in advance means you won't be making emotional decisions during an incident.
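
You can even encode those criteria as an automated guard so nobody is eyeballing dashboards mid-incident. In this sketch the thresholds mirror the list above; fetchErrorRate and fetchP95LatencyMs are stand-ins for queries against whatever monitoring system you use, and the customer-complaint check is omitted for brevity.

// The rollback criteria above, encoded as an automated guard. The fetch helpers
// are stand-ins for real monitoring queries.
interface RollbackCriteria {
  maxErrorRate: number; // fraction, e.g. 0.005 = 0.5%
  maxP95LatencyMs: number;
}

const criteria: RollbackCriteria = { maxErrorRate: 0.005, maxP95LatencyMs: 1000 };

async function shouldRollBack(): Promise<string | null> {
  const [errorRate, p95] = await Promise.all([fetchErrorRate(), fetchP95LatencyMs()]);
  if (errorRate > criteria.maxErrorRate) return `error rate ${errorRate} exceeds 0.5%`;
  if (p95 > criteria.maxP95LatencyMs) return `p95 latency ${p95}ms exceeds 1 second`;
  return null; // all clear, keep the rollout going
}

// Stand-ins for real monitoring queries.
async function fetchErrorRate(): Promise<number> { return 0.001; }
async function fetchP95LatencyMs(): Promise<number> { return 420; }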

Consider dark launching to test in production without user impact. Send real traffic through the new code path but don't show users the results. Compare the output to the old code path (also called shadow testing). This flushes out bugs before users see them.
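
A shadow test can be as simple as this sketch: serve the old path, run the new path on the same input in the background, and log any mismatch. The handler names and the JSON comparison are assumptions for illustration.

// Dark launch / shadow testing sketch: the user only ever sees the old path's
// result; the new path runs on the same input and mismatches are logged.
async function handleRequest(input: string): Promise<unknown> {
  const oldResult = await oldCodePath(input);

  // Fire-and-forget: errors or slowness in the new path never reach the user.
  newCodePath(input)
    .then((newResult) => {
      if (JSON.stringify(newResult) !== JSON.stringify(oldResult)) {
        console.warn("shadow mismatch", { input, oldResult, newResult });
      }
    })
    .catch((err) => console.warn("shadow path failed", { input, err }));

  return oldResult;
}

// Stand-ins for the real implementations.
async function oldCodePath(input: string): Promise<unknown> { return { input }; }
async function newCodePath(input: string): Promise<unknown> { return { input }; }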

Plan for data migrations separately from code deploys. Migrating millions of database records is risky. Do it separately from shipping new features. Ideally, migrate data first, then deploy code that uses the new schema, then clean up the old schema (expand-migrate-contract pattern).
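
As a sketch of expand-migrate-contract, reusing the earlier statement time zone example: each phase ships as its own deploy. Shown with node-postgres; the table and column names are made up for illustration.

// Expand-migrate-contract sketch with node-postgres. Each phase is a separate
// deploy; table and column names are illustrative.
import { Client } from "pg";

async function runMigrationPhase(phase: "expand" | "backfill" | "contract") {
  const client = new Client(); // connection settings come from the environment
  await client.connect();
  try {
    if (phase === "expand") {
      // 1. Expand: add the new column alongside the old one. Old code ignores it.
      await client.query(
        "ALTER TABLE statements ADD COLUMN IF NOT EXISTS local_timestamp timestamptz"
      );
    } else if (phase === "backfill") {
      // 2. Migrate: backfill the new column (batch this on large tables) while
      //    the new code dual-writes to both columns.
      await client.query(
        "UPDATE statements SET local_timestamp = utc_timestamp WHERE local_timestamp IS NULL"
      );
    } else {
      // 3. Contract: once nothing reads the old column any more, drop it.
      await client.query("ALTER TABLE statements DROP COLUMN utc_timestamp");
    }
  } finally {
    await client.end();
  }
}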

Have a communication plan for internal stakeholders and customers. Who needs to know when this ships? Customer support team? Marketing? External partners? Write the comms plan into the spec.

Rollback strategy

What happens if you suddenly become unavailable when everything's on fire? Would your team know how to handle it?

Think of it like a preflight checklist for pilots. When stress is high and adrenaline is pumping, you don't want to be figuring things out. You want a list you can follow mechanically:

  1. Disable feature flag X
  2. Or: roll back to commit abc123 and deploy
  3. Run database script Y to revert schema changes
  4. Clear cache Z to remove stale data
  5. Monitor metrics A, B, C to confirm rollback successful
  6. Post in #incidents channel with status

This keeps stress to a minimum in incident scenarios.

Testing approach

Unit tests, integration tests, load testing, edge cases — how will you validate this actually works?

Define test coverage expectations. Critical paths need extensive tests. Nice-to-have features can have lighter coverage. Be explicit about what needs what level of testing.

Plan load/performance testing for high-traffic features. If this will serve millions of requests, you need to test it at scale. I use tools like k6 or Locust to simulate load. Don't wait until production to discover your database falls over at 10,000 QPS.
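
For reference, a k6 script that enforces the earlier "under 2 seconds at p95" requirement can be this short (k6 scripts are written in JavaScript); the URL, ramp profile, and thresholds are illustrative.

// Minimal k6 load test: ramp up virtual users and fail the run if latency or
// error-rate thresholds are breached. URL and numbers are illustrative.
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  stages: [
    { duration: "2m", target: 100 }, // ramp up to 100 virtual users
    { duration: "5m", target: 100 }, // hold
    { duration: "1m", target: 0 },   // ramp down
  ],
  thresholds: {
    http_req_duration: ["p(95)<2000"], // matches "under 2 seconds at p95"
    http_req_failed: ["rate<0.001"],
  },
};

export default function () {
  const res = http.get("https://staging.example.com/api/transactions");
  check(res, { "status is 200": (r) => r.status === 200 });
  sleep(1);
}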

Consider chaos engineering for critical systems. Kill dependencies. Inject latency. Simulate network partitions. See what breaks. Netflix's Chaos Monkey is the canonical example, but you don't need anything that sophisticated. Just deliberately break things and see what happens.

Test edge cases and failure modes, not just happy paths. What happens when the database is slow? When the cache is empty? When a user sends malformed input? These are where bugs hide.

Plan for manual QA/exploratory testing where needed. Automated tests are great, but they only catch what you thought to test for. A human clicking around can find the weird stuff.

Include security testing for sensitive features. Pen testing, OWASP Top 10 checks, dependency scanning. If you're touching authentication, payments, or personal data, you need security review.

Timeline and milestones

Realistic estimates, buffer for unknowns, key delivery dates.

Here's an uncomfortable truth: your initial estimates will be wrong. They're always wrong. The question is how wrong.

Work backwards from hard deadlines if they exist. If there's a regulatory requirement or a conference demo, start there and work backwards to figure out if it's feasible.

Pad estimates for unknowns. If you're working in familiar territory with known tech, multiply estimates by 1.2x. If you're in unfamiliar territory or using new technology, multiply by 1.5x to 2x. I know it feels excessive, but I've never regretted padding estimates and I've always regretted being too optimistic.

Define clear milestones with demos or checkpoints. Every two weeks, you should be able to show something. It keeps momentum and surfaces problems early.

Call out external constraints. Holidays, conference season, compliance deadlines, marketing launches. If half your team is going to be out for a week, factor that in.

Be realistic about team capacity. Account for on-call rotations, other commitments, the fact that nobody is productive 8 hours a day. I usually assume 6 productive hours per engineer per day, and that's being generous.

Update estimates as you learn more. Initial estimates are educated guesses. As you get into implementation, you learn things. Update your timeline accordingly. A spec should be a living document, not a contract written in stone.

Open questions

What still needs to be figured out? What requires more investigation?

This section should shrink over time as you get answers, but it's crucial to have it. It shows intellectual honesty and invites collaboration.

List unknowns that need research or prototyping before committing. "We need to validate that approach X can handle our query volume" or "We should prototype the migration script on a test dataset first."

Call out decisions that need input from specific people or teams. "Need security team review on encryption approach" or "Need product input on what the fallback behaviour should be."

Highlight areas where you need to validate assumptions with data. "We're assuming users will click this button, but we should A/B test to confirm" or "We think this cache hit rate will be 80%, but we should measure real traffic patterns first."

Be explicit about trade-offs you haven't resolved yet. "We could optimise for latency or throughput but not both — need to decide which matters more for this use case."

Assign owners to each open question with target resolution dates. Don't let open questions languish. Someone needs to be responsible for getting an answer, and there needs to be a deadline.

How culture plays a role

I've worked at companies where specs were treated as bureaucratic box-checking exercises, and I've worked at companies where they were genuinely valued as thinking tools. The difference is night and day.

Technical specifications should be based on a standard template. This reduces cognitive load and makes sure everyone's on the same page. You shouldn't have to reinvent the structure every time. We had a template in Notion. For Tandem, we use Google Docs templates, and at Docler, we had Confluence. The tool doesn't matter as much as the consistency.

Having a standard template also means reviewing specs is easier. You know where to look for the problem statement, the requirements, the risks. You're not hunting through a free-form essay trying to figure out basic information.

Make all proposals publicly available. Don't be afraid to share yours with a broader audience. I know it's scary putting your ideas out there, but transparency has massive benefits. Other teams might discover they're solving similar problems. Someone from a different vertical might have crucial context you're missing. Serendipity happens when information is public.

All specs were in a shared Notion space that anyone in engineering could read. The number of times someone from a completely different squad dropped a comment that changed the whole approach was remarkable.

Schedule regular review sessions of critical proposals. This serves multiple purposes: it helps others learn, it encourages collaboration, it makes people aware of what's being built across the organisation. We had monthly "architecture review" sessions where people would present proposals in progress and get feedback from a cross-functional group. Some of the best ideas came from those sessions.

Tell people they're doing great. Praise great proposals publicly. Acknowledge all participants, even if they just left a single comment. This encourages a culture of collaboration. I've seen teams where people were afraid to comment on specs because they thought they'd be seen as obstructionists. That's poison. You want people to feel like their input is valued, even if it's critical feedback.

When someone writes a particularly good spec, I call it out in a public channel. "This spec from Alex is an excellent example of how to structure a complex proposal — check it out for reference." It takes five seconds and it makes people feel good about the work they're doing.

The spec is the work

Here's my final thought, and it's probably the most important thing in this entire article: writing the spec IS doing the work.

Too many engineers treat the spec as overhead before the "real work" of coding begins. That's backwards. The spec is where you do the hardest thinking. It's where you discover what you don't know. It's where you find the fatal flaws before they become production incidents.

The best specs I've written have been collaborative, iterative, and occasionally contentious. They've been marked up with dozens of comments. They've gone through multiple revisions. They've forced me to reconsider assumptions I didn't even know I was making.

And every single one of them made the implementation smoother, faster, and more successful than it would have been otherwise.

So the next time you're tempted to skip the spec and just start coding, resist. Open up that document. Start with the problem. Work through the details. Invite smart people to poke holes in your logic. Iterate.

Your future self — and your on-call rotation — will thank you.


My product proposal template is available on GitHub. It captures most of the ideas discussed here. If your organisation doesn't have a technical proposal template yet, use this as a starting point.


Written by danielkov | Self-taught software engineer, founder & CTO. Ex-Monzo, building Tandem, TicketPlug and Speakeasy.
Published by HackerNoon on 2025/11/10