Lately, you can’t open YouTube, read the news, or scroll LinkedIn without getting hit by the same narrative. Every day, another CEO confidently declares that software engineers are obsolete and that anyone can prompt their way to the next billion-dollar app. And to be fair, AI has collapsed the barrier to entry for prototyping. With AI integrated into their workflows, a lean team of 10 people can ship faster and better than a firm of 200, provided they have the right enablers in place.
Product Managers are using tools like Lovable and Bolt to test out their ideas, bypassing traditional design hand-offs. Backend folks are writing architecture docs and asking Claude and Copilot to build them (hopefully reviewing the code before pushing to prod).
This MVP approach works wonderfully until you want to check Product-Market Fit. AI is great at pumping out a massive volume of code quickly, but when you want to scale the product, these tools seldom help. In addition, if you are making products in fields that require strict compliance, then, oh boy, you are in for a treat if you rely entirely on AI to develop your product. We can’t solve tomorrow’s scaling problems using yesterday’s thinking.
AI is not a Simplifier, but an Amplifier of your Engineering Culture and Discipline.
If you have a weak foundation with manual releases and chaos everywhere, AI just gives you faster chaos. Since AI is taking care of the heavy lifting of coding to some extent, writing the code is no longer a blocker to iterating fast. What we desperately need in this day and age is a faster FEEDBACK LOOP.
The Two Halves of the Feedback Loop
We’re way past the days of building a massive checklist of features and waiting months for user responses. Now, it’s about pushing a single feature and knowing instantly if it lands.
On the product side, this means leveraging platforms like X (formerly Twitter) to shorten the feedback loop with users. This works perfectly in the D2C SaaS world. In B2B SaaS, the same velocity can be replicated via rigorous dogfooding. This starts by treating your internal users (perhaps product teams) as actual customers, or having dedicated "experience teams" whose entire job is to use the app as an end-user.
But before you can get product feedback, you need system feedback. To understand where our engineering stands and how to improve it, we need a way to qualify and quantify it. Instead of reinventing the wheel, we should lean on an existing standard: the DORA metrics.
- Deployment frequency: How often code is shipped to production. (For context, elite companies like Shopify deploy 40+ times a day, and Amazon averages a deploy every ~11.6 seconds).
- Lead time for changes: The time it takes from an idea to a live feature. For elite teams, this is measured in hours. For low performers, it takes weeks or months.
- Change failure rate: The percentage of deploys that break things in production. Elite teams keep this under 5% using preview environments and end-to-end testing.
- Mean time to recovery (MTTR): How fast broken features are fixed. Elite teams can roll back and recover in under 1 hour.
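Tracking these doesn't require a vendor dashboard to start: all four metrics fall out of a simple deploy log. Here is a minimal sketch in Python (the `Deploy` record and its field names are illustrative, not a standard schema):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Deploy:
    started: datetime                     # when work on the change began
    deployed: datetime                    # when it went live
    failed: bool                          # did it break production?
    recovered: Optional[datetime] = None  # when service was restored, if it failed

def dora_metrics(deploys, window_days=7):
    """Compute the four DORA metrics from a deploy log."""
    failures = [d for d in deploys if d.failed]
    lead_times = sorted(d.deployed - d.started for d in deploys)
    recoveries = [d.recovered - d.deployed for d in failures if d.recovered]
    return {
        "deploys_per_day": len(deploys) / window_days,
        "median_lead_time_hours": lead_times[len(deploys) // 2].total_seconds() / 3600,
        "change_failure_rate": len(failures) / len(deploys),
        "mttr_hours": (sum(recoveries, timedelta()).total_seconds() / 3600 / len(recoveries))
                      if recoveries else 0.0,
    }
```

Even a script this crude, fed from your CI history, tells you which of the four numbers is your actual bottleneck.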
Fixing the Underlying Enablers
For a company or team to achieve these numbers, we need to fix the underlying engineering culture that enables them, which breaks down into four categories:
1. Robust CI/CD Pipelines. You need to know if a push works in under 10 minutes (ideally under 2). You get there with smart caching and incremental builds, so the pipeline only checks what actually changed. Deployments to prod should happen in under 15 minutes via fully automated, zero-touch pipelines. And since we are talking about pushing AI-generated code at lightspeed, integrating security scanning across the 4Cs (Cloud, Cluster, Container, Code) and solid E2E testing into the pipeline is absolutely non-negotiable.
2. Improved Internal Developer Experience (DevEx). Fast developers ship faster. Improving internal tooling, onboarding, and internal workflows drastically reduces setup times. If spinning up a local environment takes 45 minutes, developers will avoid it. If a dedicated DevEx effort turns that into a 5-minute automated script, engineers become productive immediately.
3. Improved Observability. You must understand user feedback by passively observing customer behavior. This means instrumenting your applications with telemetry and session replays, and using gradual roll-outs (feature flags). If something breaks, you shouldn't be digging through logs; you should be getting an alert that points to the exact line of code.
4. Converting Customer Feedback into Fixes and Features. The loop isn't closed until feedback becomes shipped work. This means implementing A/B testing to understand usage, and turning that insight into actionable work within a single sprint cycle (days to weeks), not throwing it into a backlog to die.
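The incremental-build idea in point 1 can be as simple as mapping repo paths to pipeline stages and skipping everything a push didn't touch. A sketch, where the stage names and path patterns are made up for illustration:

```python
import fnmatch

# Hypothetical mapping of repo paths to pipeline stages; only stages
# whose watched paths actually changed get rebuilt and retested.
STAGE_PATHS = {
    "backend-tests":  ["services/api/*", "shared/*"],
    "frontend-tests": ["web/*", "shared/*"],
    "infra-checks":   ["deploy/*", "*.tf"],
}

def stages_to_run(changed_files):
    """Return only the pipeline stages affected by this push."""
    return {
        stage
        for stage, patterns in STAGE_PATHS.items()
        for f in changed_files
        if any(fnmatch.fnmatch(f, p) for p in patterns)
    }
```

In a real setup this mapping lives in your CI config as path filters, but the principle is identical: the pipeline's only job is to answer "did this push work?" as cheaply and quickly as possible.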
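The gradual roll-outs mentioned in point 3 usually boil down to deterministic bucketing: hash the user and the flag together, and enable the feature for the first N% of buckets. A minimal sketch, not modeled on any particular feature-flag library's API:

```python
import hashlib

def is_enabled(flag, user_id, rollout_pct):
    """Deterministically bucket a user into a gradual roll-out.

    Hashing flag and user together gives each user a stable bucket per
    flag, so raising rollout_pct from 5 to 50 only adds users; nobody
    who already has the feature gets flipped back out of it.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct
```

Stable bucketing is what makes the observability story work: when an alert fires at a 5% roll-out, you know exactly which cohort saw the broken code.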
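And for point 4, closing the loop means knowing whether a variant actually moved the needle. A bare-bones tally of conversion per variant (the event schema here is illustrative; a real pipeline would pull this from your analytics store):

```python
from collections import defaultdict

def conversion_rates(events):
    """Per-variant conversion rate from raw A/B test events.

    Each event is a dict like {"variant": "A", "converted": True}.
    """
    seen = defaultdict(int)
    converted = defaultdict(int)
    for e in events:
        seen[e["variant"]] += 1
        converted[e["variant"]] += int(e["converted"])
    return {v: converted[v] / seen[v] for v in seen}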
Structuring Teams and Cross-Team Culture
To move at this speed, the structural culture of the team must fundamentally shift. We can look to companies like Linear for a masterclass in extreme ownership and minimal process:
- Async-first communication: This preserves focus. If it can be written in a doc, it shouldn't be a meeting.
- Protecting deep work: Establish cultural norms that protect engineering time. A 4+ hour no-response window shouldn't just be accepted; it should be expected.
- Extreme ownership: A single engineer owns the entire feature lifecycle. They design it, build it, ship it, and maintain it. There is no throwing code over the wall to a separate operations team.
- Minimal process: Trust engineers to execute rapidly by removing the 3-day code review cycles and weekly sign-off meetings.
Setting the Individual Culture
Whether you are part of a massive organization or building as a solo engineer, your individual habits dictate your velocity.
-
Set up feedback loops first: Before writing your first feature, set up your automated build, test, deploy, and error monitoring. This foundation makes you fast in 6 months.
-
Automate repetitive tasks: If you do something manually twice, automate it the third time. This includes deployments, code formatting, database migrations, and dependency updates.
-
Write documentation: We read code more often than we write. Help your future you and, by extension, your teammate with documentation. Keep decision logs and explain trade-offs so that when you revisit the codebase in 6 months, you aren't reverse-engineering your own logic.
-
Ship small and often: Daily deployments of small changes (50 lines vs 2,000 lines) radically reduce risk, show vulnerabilities early, make debugging trivial, and maintain momentum.
-
Measure your metrics: Track your own deployment frequency, lead time, failure rate, and recovery time to identify exactly where your bottlenecks lie.
Conclusion
Next time you build, either by using AI to write the boilerplate or writing it entirely by hand, ask yourself these 3 questions:
-
How fast do I know if something works?
-
If it does not, how fast do I know where it breaks?
-
How fast can I share it for internal users to test?
If the answer to any of those is measured in days instead of minutes, you don't need a better AI prompt. You need a better feedback loop.
