I used to spend weeks writing technical design documents. But last month, I had one ready within a day. The difference was not that I got faster, but that my approach to document writing changed.
I've spent a decade building software for users and enterprises, but this year felt different. Not because the problems changed, but because how I approached them did.
AI tools quietly integrated themselves into my workflow, and now everything has shifted.
Document Writing and Reviewing
As an engineering leader, my job requires me to write and review technical and product documentation, ranging from proof-of-concept proposals to full-fledged design documents containing structured API designs, system diagrams and storage layouts. Until last year, writing a solid technical design document would take me weeks. Most of the time went into structuring the document so that reviewers could quickly grasp the proposed solution, critique it and approve it.
That changed this year.
Now, I start by prompting with a few framing questions:
- What is this document for? Is it a system design proposal for a feature, or for an entirely new system?
- What problem is this system solving?
- Does the AI need to refer to existing documents, like product requirements?
Within seconds, I have a well-structured first draft. Recently, for a proof-of-concept proposal, it generated placeholder content with sections like:
- Overview
- Proposed System Design (with a blank Lucidchart link)
- Logic and Value Proposition
- Screenshots
- Success Criteria
At that point, all I had to do was fill in the blanks. More than half the mechanical work was done.
What AI didn't do was tell me whether this was the right system to build. It didn't challenge my assumptions about scale or dependencies. That judgement was still mine to make.
Document reviewing became easier too. Instead of reading entire product requirement documents end to end, I ask Gemini to summarize them and answer targeted questions like "Who is the audience for this feature?". It answers in natural language, using the document as the source of truth.
Writing and Reviewing Code
Writing unit tests was never fun and will never be. But they are essential! What differentiates a good software engineer isn't just writing code, but thinking through edge cases and proving that those cases are handled through thorough testing.
Until recently, writing unit or integration tests was a time-consuming manual process: boilerplate code to initialize mocks, set up factories and fetch test data had to be written by hand. That workflow has changed.
After writing a new controller for a .NET API, along with its business logic, CRUD layer and MongoDB repository, I prompt Copilot to generate unit tests. In seconds, it produces a *Tests.cs file for each new file I have written, with unit tests covering most happy paths and several common error cases.
Most, but not all.
Here's where it gets interesting: Copilot generated tests for a POST API that looked perfect. They had proper assertions, clean setup methods and good naming conventions, and they all passed. But they were all useless. The business logic behind the API wrote to three different collections in a MongoDB database in a single transaction. The tests checked that calling the API wrote to the database, not that the writes happened transactionally, where one failure should roll back the entire operation. They covered the happy path but none of the meaningful failure scenarios.
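To make that concrete, here is a minimal sketch of the kind of test Copilot never wrote, assuming xUnit, Moq and the official MongoDB C# driver. OrderService, IOrderWrites and Order are hypothetical names for illustration, not the actual code:

```csharp
// Minimal sketch (xUnit + Moq + MongoDB C# driver).
// OrderService, IOrderWrites and Order are hypothetical stand-ins.
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Driver;
using Moq;
using Xunit;

public record Order(string Id);

// Hypothetical write layer: three collections behind one logical operation.
public interface IOrderWrites
{
    Task InsertOrderAsync(IClientSessionHandle session, Order order);
    Task InsertAuditEntryAsync(IClientSessionHandle session, Order order);
    Task UpdateInventoryAsync(IClientSessionHandle session, Order order);
}

// Hypothetical service under test: wraps all three writes in one transaction.
public class OrderService
{
    private readonly IMongoClient _client;
    private readonly IOrderWrites _writes;

    public OrderService(IMongoClient client, IOrderWrites writes)
        => (_client, _writes) = (client, writes);

    public async Task CreateOrderAsync(Order order)
    {
        using var session = await _client.StartSessionAsync();
        session.StartTransaction();
        try
        {
            await _writes.InsertOrderAsync(session, order);
            await _writes.InsertAuditEntryAsync(session, order);
            await _writes.UpdateInventoryAsync(session, order);
            await session.CommitTransactionAsync();
        }
        catch
        {
            await session.AbortTransactionAsync();
            throw;
        }
    }
}

public class OrderServiceTransactionTests
{
    [Fact]
    public async Task CreateOrder_AbortsTransaction_WhenOneWriteFails()
    {
        // Mock the session so commit/abort calls can be observed.
        var session = new Mock<IClientSessionHandle>();
        var client = new Mock<IMongoClient>();
        client.Setup(c => c.StartSessionAsync(
                  It.IsAny<ClientSessionOptions>(), It.IsAny<CancellationToken>()))
              .ReturnsAsync(session.Object);

        // First two writes succeed (Moq returns completed tasks by default);
        // the third fails mid-transaction.
        var writes = new Mock<IOrderWrites>();
        writes.Setup(w => w.UpdateInventoryAsync(session.Object, It.IsAny<Order>()))
              .ThrowsAsync(new MongoException("simulated write failure"));

        var service = new OrderService(client.Object, writes.Object);

        // The failure must surface rather than be swallowed...
        await Assert.ThrowsAsync<MongoException>(
            () => service.CreateOrderAsync(new Order("o-1")));

        // ...and the transaction must be aborted, never committed.
        session.Verify(s => s.AbortTransactionAsync(It.IsAny<CancellationToken>()), Times.Once);
        session.Verify(s => s.CommitTransactionAsync(It.IsAny<CancellationToken>()), Times.Never);
    }
}
```

The last two assertions are the point: a generated happy-path test typically stops at verifying that the inserts were called, which passes even when a partial failure leaves the database in an inconsistent state.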
It is still my responsibility to identify the true corner cases, the ones that require product context. And when I give Copilot additional prompts about the new cases to test, it fills in the remaining tests quickly and accurately. I still have to think up these scenarios, but I no longer have to execute them myself.
Code review has also become more focused. Rather than me spending time on mechanical issues like missing error handling or unsafe type casts, automated workflows send the entire diff to models like Claude, Gemini or OpenAI's GPT to generate a first pass of comments on GitHub.
My role is to ensure those comments are resolved correctly, that the business logic makes sense and that the system isn't regressing in subtle ways. The authority to approve a pull request still lies with me, since I'm the one with the product context.
Research
Understanding an unfamiliar codebase is never easy. Every team has its own conventions for structuring code, managing configurations and handling service-to-service communication. Yet this understanding is crucial when designing a new feature, especially when it needs to integrate with existing systems.
Previously, I would spend hours using tools like Sourcegraph or GitHub code search to build a mental model of how a system worked. Tracing how service A interacted with service B took effort and meant manually reading through large portions of code.
Not anymore.
Now, I clone each repository, open it in VSCode and ask Copilot targeted questions. For example:
- Does this service emit events to the Kafka topic XYZ?
- If not, list all the topics that this service emits to.
Using Claude Sonnet 4.5 in Copilot's Agent mode, Copilot scans the relevant source files and responds in natural language within a minute. No more head-scratching code archaeology. I get a clear, high-level understanding almost immediately.
This is where AI delivers disproportionate value. Not in writing code, but in reading it and understanding it quickly enough that I can make architectural decisions fast.
What Actually Changed
Over the past year, my role quietly shifted. I spend more time deciding what to build and less time actually building.
AI gave me confidence, not just raw productivity. I can now explore unfamiliar codebases faster and validate ideas with ChatGPT without second-guessing myself.
What AI has not changed is accountability. Every line of code I ship, every pull request I approve, and every design document I author still carries my name. If something breaks, the responsibility is mine, not ChatGPT's or Claude's or Gemini's. The trap is that it is easier than ever to ship something that looks real but is fundamentally wrong.
The engineers who thrive in 2026 won't be those who use AI the most, but those who know what to ask for and what to verify. Those who understand that AI doesn't replace judgement; it amplifies it, good and bad alike.
