I Don’t Trust AI to Write My Code—But I Let It Read Everything

Written by capk | Published 2025/12/11
Tech Story Tags: ai | cursor | claude | copilot | vibe-coding | software-engineering | full-stack-development | hackernoon-top-story

TL;DR: I’m a senior full-stack developer who still cringes at AI-generated code in production. But tools like Copilot, Cursor, and Claude already save me hours every week – not by writing code for me, but by reading code, exploring messy open-source projects, and filling gaps where documentation is missing.

I’ll start with a confession: I still don’t really “get” how some people just happily let AI write all their code.


Yes, it’s much better than it was a year ago.

Yes, it keeps improving every month.


But if I look at most of the raw code it generates and ask myself, “Would I be proud to merge this as-is under my name into a serious production repo?” the honest answer is still no. There’s always something a bit off, something that makes me want to refactor, rethink, or at least rewrite parts of it.


So no, I’m definitely not ready to retire and let Cursor or Copilot do my job for me. Not even as a joke. I still like my keyboard way too much.


At the same time, there are things that have genuinely changed how I work. And that’s what I want to talk about: not some magical future where AI replaces senior engineers, but very boring, very real cases where it makes my life easier right now.

Life in startup mode and why I care about open source

Most of my career I’ve worked in what I’d call permanent “startup mode”. There’s almost no budget. Deadlines are always on fire. Everyone is wearing three hats at once. You probably know the vibe.


In that environment, the temptation is strong: just grab a shiny hosted service for everything. Use Firebase for the backend. Use Mailchimp for newsletters. Use Linear for task tracking. You throw in your credit card “just for now” and tell yourself you’ll fix it later.


At first it feels amazing. You ship faster. You don’t worry about infrastructure. The UI is polished, docs are nice, onboarding is smooth.


And then the little annoyances start to pile up:

“Pay to use a custom domain.”

“You’ve hit the free tier limits, please upgrade.”

“That one feature you actually need is only on the expensive plan.”


If you’re reading this, you’re probably not a marketer or a business coach. You know exactly what I’m talking about.


So I ended up with a simple rule for myself: if I can get away with an open-source or self-hosted solution without completely destroying our roadmap, I’ll do that. It’s my personal compromise between “fast enough” and “cheap enough”. In early-stage products, perfection is rarely the first priority anyway.


That’s how I end up replacing SaaS tools with self-hosted ones. Instead of Firebase I might use something like Nhost. Instead of Mailchimp I might use something like Parcelvoy. Instead of Linear I might use something like Plane. The exact products don’t matter here. The pattern does.


This background matters because it explains why I spend so much time in open-source projects that are powerful but not always polished. Some of them are already mature and feel almost like commercial tools: great docs, clear UI, predictable behavior. Others are just as ambitious, but not quite as smooth yet. Documentation lags behind releases. The UI hides settings in strange places. Sometimes a random bug pops up where you’d never expect it.


And this is exactly the kind of environment where AI dev tools turned from “toy” into “daily helper” for me.

Where AI actually shines for me

I’m not going to show you any ten-line super-prompts or “secret prompt frameworks”. I’m not doing AI magic tricks. I use Copilot, Cursor, Claude and similar tools in a very simple, practical way, exactly like I’d talk to a teammate.


Here are real examples from my work.

Real example 1: “Why is there a random null in my UI?”

When I installed Parcelvoy from scratch, I noticed something ugly in the interface. In the bottom left corner, near the “Admin” label, there was a literal null showing up. It didn’t break functionality, but it looked bad. The kind of thing you don’t want to show to a client in a demo.

The traditional “senior engineer” approach would be to dive into the codebase, search through components, chase props and context, and trace the data flow until I found where that null came from. I’ve done that plenty of times in my life.


But now I have another option.


I opened my AI assistant, shared the relevant context, and wrote a simple prompt, nothing fancy:


“Take a look at the bottom left corner. For some reason there is a ‘null’ after the ‘Admin’. Is there a way to fix it without changing the source code?”


I didn’t try to sound clever. I just described the problem like I would in a Slack message to a colleague.


The assistant walked me through what might be happening. It suggested that some data, like a workspace or organization name, was probably missing or misconfigured. It pointed me to configuration settings rather than source code changes. It proposed a way to fix it from the setup side, so I didn’t have to immediately fork the project.
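
To make that concrete, this kind of literal null almost always comes from a nullish value being interpolated into a string somewhere. Here’s a minimal TypeScript sketch of the usual culprit (hypothetical code, not Parcelvoy’s actual source):

// A nullish field interpolated into a template literal is
// stringified to the literal text "null".
type Admin = { role: string; name: string | null };

function adminLabel(admin: Admin): string {
  // On a fresh install `admin.name` is still null,
  // so this renders as "Admin null" in the UI.
  return `${admin.role} ${admin.name}`;
}

// The setup-side fix is to populate the missing data (or default it,
// e.g. `admin.name ?? ""`) instead of patching the source code.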

That gave me two wins at once. First, I had a quick hotfix so the UI looked normal. Second, I had enough understanding to open a helpful issue for the maintainers and link to a concrete problem in their codebase.


All of this took much less time than manually hunting through files. The AI didn’t “code” anything for me. It just read the project faster and more broadly than I would have, and led me to the right place.

Real example 2: “Is this behavior even supported?”

Another real case: I was playing with the Journey Builder in Parcelvoy. It’s a visual tool where you design how users move through email journeys. At some point, I tried to be clever and connect two “Send” nodes to a single entry point at the same level.

In my head, it felt completely valid. A user enters the journey, and then two different email branches start. But in practice, nothing worked. No emails were going out from either branch.


This is the classic moment where you’re not sure if you’ve found a bug, misunderstood the feature, or misconfigured something.


I could have read through the entire implementation of the journey engine, from database schema to job queue handling, step by step. Instead, I asked AI a very direct question:


“Is it supported to attach two ‘Send’ nodes to a single entry point at the same level in Journey Builder?”

The answer was clear and, more importantly, came with pointers. It told me this configuration is not supported with the current logic and pointed me at the parts of the code where this behavior is defined. In other words, it not only said “no”, it told me “here’s exactly where to look if you want to change it”.
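
For illustration, the unsupported behavior usually boils down to engine logic of roughly this shape (a hypothetical TypeScript sketch, not Parcelvoy’s actual implementation):

// If the step processor only ever follows one child edge, two "Send"
// nodes attached to the same parent can never both fire.
interface Step {
  id: string;
  type: 'entrance' | 'send' | 'delay';
  childIds: string[];
}

function nextStep(current: Step, steps: Map<string, Step>): Step | undefined {
  // Only the first child is followed; any sibling branch is dead.
  const firstChildId = current.childIds[0];
  return firstChildId ? steps.get(firstChildId) : undefined;
}

Supporting parallel branches would mean returning a list of steps here and fanning out a job for every child, which is exactly the kind of change you weigh before opening a pull request.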


At that point, the decision was on me as a developer. If I really needed this pattern, I could patch the code, adjust the engine, and open a pull request to the project. If it wasn’t worth it, I could redesign the journey to match the current capabilities.


Again, AI didn’t magically fix anything. It didn’t auto-patch the repository. What it did was answer a very senior question quickly: “Is this supposed to work at all, and if not, where is that behavior coming from?”


That’s the kind of clarity that saves me an hour or two of code spelunking.

Real example 3: “Show me the architecture without reading every file”

Now let’s move from a narrow bug to the big picture.


Whenever I start working with a new open-source project, I want to understand its overall shape. How many services are there? How do they talk to each other? Where is the database? What’s the role of the worker processes? How do requests travel through the system?


In the past, this meant clicking through folders, following imports manually, reading README files, and drawing rough diagrams on a whiteboard or in a notebook. It works, but it’s slow.


With modern AI tools, I can do something much lazier and much more effective. Once I give the assistant access to the codebase, I ask a question like this:


“Create a PlantUML diagram that provides a high-level overview of this application’s architecture, clearly illustrating all the major components and their interactions.”

In response, I get a PlantUML diagram describing the main pieces: a web client, API gateway, authentication, background workers, queues, databases, and so on, along with arrows showing the data flow.
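
To give a feel for the output, it looks something like this (an illustrative sketch with generic component names, not an exact map of any particular project):

@startuml
actor User
component "Web Client" as web
component "API Gateway" as api
component "Auth Service" as auth
component "Background Workers" as workers
queue "Job Queue" as jobs
database "Database" as db

User --> web
web --> api
api --> auth
api --> db
api --> jobs : enqueue work
jobs --> workers
workers --> db
@enduml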

Is it perfect? No. Sometimes it misnames things, or misses a smaller component. But as a starting point, it’s extremely helpful. It gives me a mental model without me spending an hour doing the initial mapping manually. I can then tweak the diagram, correct details, and share it with teammates.


And the really useful part is what happens next. I can ask follow-up questions based on that diagram: where exactly is this “Journey Engine” implemented in the repo? What file handles this queue? Where are retries configured? AI becomes not just a diagram generator, but a guide through the codebase using that diagram as a map.


Again, it’s not replacing me as an architect. It’s giving me a head start.

Other ways I quietly rely on AI every day

The three examples above are very concrete, but the same pattern shows up all over my work.


When I drop into a large or messy codebase, I ask AI to explain individual files or modules to me in simple language. I treat it like a junior engineer who has read everything and is trying to summarize it for me. I still verify, but I don’t have to start from zero.


When I need to understand the impact of a change, I ask what could break if I modify a particular function, or which other modules depend on a certain type or database table. It’s not always perfect, but it often highlights areas I should pay attention to.


When I’m debugging, I paste stack traces, configuration files, and code snippets and ask what’s the most likely cause. It doesn’t always guess right on the first try, but it usually gives me two or three hypotheses worth checking, which is a lot better than staring at the screen hoping for enlightenment.


When I’m moving between ecosystems, I use it as a translator. I’ll ask it to rewrite a small Node script into a Go program, or a Flutter widget into a React one. I never paste the result directly into production, but it gets me most of the way, and I polish the rest.


Even for tests, it’s helpful. I might ask for a list of edge cases to test for a particular function or endpoint. Or I’ll ask it to suggest property-based test scenarios, or invariants I should enforce. The code it generates is rarely perfect, but the ideas often are.
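
As a rough example, here’s the shape of what those suggestions turn into, written as a property-based test with the fast-check library (the function under test is made up for illustration):

import fc from 'fast-check';

// Hypothetical function under test: normalizes an email address.
function normalizeEmail(raw: string): string {
  return raw.trim().toLowerCase();
}

// Two invariants worth enforcing: normalization is idempotent,
// and its output never carries surrounding whitespace.
fc.assert(
  fc.property(fc.string(), (raw) => {
    const once = normalizeEmail(raw);
    return normalizeEmail(once) === once && once === once.trim();
  })
);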


None of this feels like “AI magic”. It feels like having a very fast, slightly naive colleague who is good at reading and summarizing huge amounts of code and text, and who never gets tired of my repetitive questions.

The key shift: let AI read, not write

If I had to compress my whole experience into one sentence, it would be this:


I don’t trust AI to own my code, but I happily let it read everything for me.


Reading is work. Understanding is work. Tracing logic across files is work. Senior developers spend a huge part of their time just absorbing context: old design decisions, third-party APIs, messy migrations, and half-finished refactors. It’s invisible work, but it’s real.


And this is exactly the kind of work AI is already good at today.


It can read through a huge codebase and give me a decent summary. It can scan docs, examples, and configuration files at the same time. It can point me to the right part of the repo when I describe a problem in plain language. It can turn architecture that “kind of lives in the code” into a visual diagram or a short explanation.


I still cringe when I see AI-generated code that someone wants to merge without thinking. But I strongly recommend using these tools as code readers, not just code writers. Let them chew through the boring parts, so you can spend more time making actual decisions.


You might be surprised how much time you save before you ever need to trust it with a single production function.



Written by capk | I'm a senior full-stack developer with a history of building things from scratch since 2001.