There's a conversation happening right now in every professional services firm, every consultancy, every law office, every policy shop in the country. It goes something like this: "We should be using AI. But what should we be using it for?"
And the answers, at least at an enterprise level, are mostly the same. Summarize this document. Draft this email. Research this topic. The tasks are real, the outputs are fine, and the productivity gains are modest enough that most people try it for a few weeks, find it helpful but not transformative, and quietly go back to mostly doing things the way they've always done them.
I was one of those people at the start. I'd paste a massive policy document into ChatGPT, get a decent summary, use it as a starting point for a client memo, and move on. It was useful in the way that a slightly faster photocopier is useful. It saved me time on individual tasks without changing anything fundamental about how I worked.
It was novel and exciting, but it felt stunted, even though the early models were genuinely capable. They could synthesize information, identify patterns, produce structured output, and anticipate counter-arguments. These are the same cognitive tasks that take up most of my day. So why did the experience feel so underwhelming?
The answer, once I wrapped my head around it, was obvious. The models still had no idea who I was, who my clients were, what I'd been working on, what mattered, or what had already been discussed. Every session started from zero. I was asking a brilliant generalist to do the work of a trusted colleague, and then wondering why the output felt generic.
The AI wasn't the problem. The context was.
What I actually do for a living.
I advise senior leaders across most of the major regulated industries on how to navigate political and regulatory environments. Energy, mining, tech, pharma, financial services. Each client has its own stakeholders, its own government dynamics, its own media landscape, its own history. On a typical day I'll prep a client for a ministerial meeting, write a brief, take a call with a prospect, and review a media monitoring report, all for different clients, all requiring completely different context loaded into my head before I can be useful.
I've always been someone who holds a lot of information in my head. Remembering that what a deputy minister said in passing three weeks ago is directly relevant to what a client just asked me about today. That's the actual skill in this work. Connecting things across conversations, across clients, across time.
But there's a ceiling to that, and I pretty much hit it. Not because the work got harder, but because the volume kept increasing. More clients, more files, more regulatory developments to track, more conversations to remember. The overhead of just staying current, loading context before every meeting, tracking down what was said and when, started eating into the hours I needed for the work itself.
What I tried.
In mid-to-late 2024 I started getting more deliberate about using AI for my actual workflow, not just one-off tasks. I created persistent project spaces for each client. Uploaded the client brief, the last few meeting notes, the relevant policy landscape. Gave the model enough context that it could produce a first draft of a memo that was genuinely close to what I'd write myself.
The quality was a real step forward. A client memo that would normally take me a couple of hours to draft from scratch could be started in twenty minutes and finished with light editing. The model wasn't just summarizing anymore, it was making connections, using appropriate framing, anticipating the right questions.
But the architecture couldn't hold. Each workspace was isolated. Context didn't flow between clients. There was no continuity between sessions. And the reliability was inconsistent enough that I couldn't fully trust the output without checking every detail, which ate back a lot of the time I'd saved.
Then I tried building an autonomous agent. A bot with persistent memory that could monitor my messages, process documents, and maintain its own understanding of my work. The vision was compelling. The execution was a disaster. After six weeks I had a system so complex and fragile that maintaining it became a second job.
What actually worked.
The useful insight came from the failures, not in spite of them. Every approach that didn't work had one thing in common: I was trying to make the AI smarter. Better prompts, longer context windows, more autonomous capabilities. And every approach that got close to working had something else in common: the AI performed well when it had access to well-organized, accurate, specific information about my work.
The technology wasn't what needed to improve. The information layer was.
So I stopped building AI tools and started building a knowledge base. Plain markdown files in Obsidian, organized into a structure that mirrors how I actually think about my work. One master file per client. One note per idea. Daily logs. Media monitoring digests. People notes. Everything tagged, linked, and maintained with enough discipline that any piece of information can be found and connected to any other piece. I called it the Crypt, and it became the foundation for everything else.
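To make the shape of it concrete, here's a small Python sketch that scaffolds the same kind of structure. The folder and file names here are illustrative examples, not the exact layout of the real vault:

```python
from pathlib import Path

# Illustrative vault layout: one master file per client, one note per
# idea, daily logs, media digests, people notes. These names are
# examples, not the real structure.
FOLDERS = ["Clients", "Ideas", "Daily", "Media", "People"]

def scaffold_vault(root: str) -> Path:
    """Create the top-level folders and one tagged client master file."""
    vault = Path(root)
    for name in FOLDERS:
        (vault / name).mkdir(parents=True, exist_ok=True)
    # A master file carries tags and wikilinks, so any note can be
    # found from, and connected to, any other note.
    master = vault / "Clients" / "Acme Energy.md"
    master.write_text(
        "---\ntags: [client, energy]\n---\n"
        "# Acme Energy\n\n"
        "Key contact: [[Jane Doe]]\n"
    )
    return vault
```

The structure matters more than the tooling: anything that produces plain files, consistent tags, and explicit links between notes would serve the same purpose.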
Why markdown.
It's not just a trendy choice; it's a practical one. Every model reads it natively. It's portable: if I switch tools next year, the files come with me. It diffs cleanly, so I can see what changed and when. And it's human-readable, which matters more than people think. If the agent layer disappeared tomorrow, I'd still have a functioning knowledge base that I can read with my own eyes.
Why Obsidian.
Because it gets out of your way and lets you work. It's free, it's lightweight, it stores everything as plain markdown files that you own outright, and the plugin ecosystem means you can extend it in basically any direction without waiting for a product team to build what you need. There's no proprietary database underneath, no server dependency, no moment where you realize your data is trapped behind someone else's login. It's just a fast, clean interface on top of files that are already yours. I've tried most of the alternatives over the years and nothing else comes close for what I need it to do.
Once the knowledge base reached a certain density, the agent layer became useful in a way that felt qualitatively different from anything I'd experienced before. Not because the models got better, but because they finally had something worth reading.
What it does now: I wired an agent layer into the vault that reads the notes and handles most of the operational overhead of my practice.
Before I wake up: An automated issues scan runs against RSS feeds and a set of web searches, filtered against every client's specific interests. By the time I open my laptop there's a digest waiting, colour-coded by urgency, tagged by client.
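The scan itself doesn't need anything exotic. A stripped-down Python version of the filtering step might look like this; the client names and keywords are invented for illustration, and the real system also runs web searches and colour-codes by urgency:

```python
import xml.etree.ElementTree as ET

# Invented client interest map; in practice this would be driven by
# each client's master file in the vault.
CLIENT_INTERESTS = {
    "Acme Energy": ["carbon pricing", "pipeline"],
    "Borealis Pharma": ["drug approval", "patent"],
}

def scan_feed(rss_xml: str) -> list[dict]:
    """Match RSS items against every client's keywords, tagging hits."""
    hits = []
    for item in ET.fromstring(rss_xml).iter("item"):
        title = item.findtext("title") or ""
        summary = item.findtext("description") or ""
        text = f"{title} {summary}".lower()
        for client, keywords in CLIENT_INTERESTS.items():
            matched = [k for k in keywords if k in text]
            if matched:
                hits.append(
                    {"client": client, "title": title, "keywords": matched}
                )
    return hits
```

Each hit becomes a line in the morning digest, tagged by client, which is what makes the same news item show up differently depending on whose file it touches.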
Before my first meeting: I have a brief that pulls together the client file, recent media, open action items, and anything that's changed since the last conversation. I don't spend thirty minutes trying to remember where things stand. It's already there.
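A toy version of that assembly step, assuming a vault where open action items are markdown checkboxes and daily logs are named by date (both conventions are my illustration here, not a spec):

```python
from pathlib import Path

def build_brief(vault: Path, client: str, recent_days: int = 3) -> str:
    """Stitch a pre-meeting brief from the client master file, recent
    daily logs that mention the client, and open checkbox items."""
    sections = [f"# Brief: {client}\n"]
    master = vault / "Clients" / f"{client}.md"
    if master.exists():
        sections.append("## Client file\n" + master.read_text())
    # Daily logs assumed to be named YYYY-MM-DD.md; take the latest
    # few and keep only the ones that mention this client.
    logs = sorted((vault / "Daily").glob("*.md"), reverse=True)[:recent_days]
    mentions = [p.read_text() for p in logs if client in p.read_text()]
    if mentions:
        sections.append("## Recent notes\n" + "\n".join(mentions))
    # Open action items: any unchecked markdown checkbox in scope.
    open_items = [
        line
        for p in [master, *logs] if p.exists()
        for line in p.read_text().splitlines()
        if line.strip().startswith("- [ ]")
    ]
    if open_items:
        sections.append("## Open action items\n" + "\n".join(open_items))
    return "\n\n".join(sections)
```

The point is that the brief is pure retrieval over files that already exist; nothing here requires a model at all, which is part of why it's reliable.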
After a call: The call notes or transcript gets processed into structured notes with proper connections to everything else in the vault. Action items get flagged. The client file gets updated. The next time I look at that client, the conversation is already integrated.
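In miniature, that post-call step looks something like this. I'm inventing the "ACTION:" prefix convention for illustration; in the real pipeline the extraction is done by the model rather than string matching:

```python
from pathlib import Path
from datetime import date

def process_transcript(transcript: str, client: str, vault: Path) -> Path:
    """Turn a raw call transcript into a structured note: pull out
    action items and link the note back to the client's master file."""
    actions, body = [], []
    for line in transcript.splitlines():
        if line.strip().upper().startswith("ACTION:"):
            actions.append("- [ ] " + line.split(":", 1)[1].strip())
        else:
            body.append(line)
    note = (
        f"# Call: [[{client}]] ({date.today().isoformat()})\n\n"
        + "\n".join(body)
        + ("\n\n## Action items\n" + "\n".join(actions) if actions else "")
    )
    path = vault / "Daily" / f"{date.today().isoformat()} {client} call.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(note)
    return path
```

Because the note lands in the vault with a wikilink to the client file, the next brief picks it up automatically; that's the "already integrated" part.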
What I still do: All of the actual work. The strategic thinking, the relationship management, the judgment calls, reading the room, advising on timing and positioning. The agent layer doesn't do any of that. What it does is eliminate the overhead that used to sit between me and the work, the searching, the context loading, the monitoring, the reformatting, the keeping track of what was said and when.
The capture problem.
The system worked well once information was in the vault. The problem was getting it there. I'm on calls for most of the day and I've never been someone who takes notes during a conversation. I've tried. It doesn't work for me. I actually retain less when I'm trying to write things down in real time.
I looked at the available transcription tools. They all had the same issue: they want your audio on their servers, they lock the output in their own format, and none of them produce a plain markdown file that can feed directly into an agent workflow. For someone managing confidential client conversations, the cloud processing alone was a dealbreaker.
So I built Tome. It's a macOS app that captures your calls, transcribes locally on Apple Silicon, and writes a structured .md file into your Obsidian vault. No cloud, no API keys, no audio or video saved anywhere. Just the notes, the way you would have taken them if you could listen, process, and write simultaneously across every conversation you have. Like having the world's best note taker sitting beside you, all day, every day. It was the first piece of software I've ever built, and it took about two weeks with AI handling the parts I didn't know how to do.
It's open source, it lives at gremble.io, and it's not perfect. But it fills a gap that, as far as I can tell, nothing else on the market is interested in filling.
Which is one of the coolest parts of the moment we're in. Even just six months ago, a developer needed a critical mass of paying customers to justify building something like this. Now someone like me, who's tech savvy but not a developer, can, with a ton of trial and error, build something that works for their exact use case.
The compound effect.
The thing that convinced me this is worth writing about is what happens over time. Every processed meeting makes the vault more complete. Every media scan adds current context. Every daily cycle creates new connections between notes. The quality of the output is a function of accumulated knowledge, not model capabilities. The same model, reading a richer vault, produces better work.
After a few months of this, the system regularly surfaces connections between conversations, between clients, between regulatory developments, that I wouldn't have made on my own. Not because it's thinking for me, but because it can cross-reference hundreds of notes in a way that my brain physically can't when I'm managing this many files simultaneously.
And the knowledge is mine. I vet every word in the vault and make sure it matches the knowledge in my head. It's not in someone else's database, it's not dependent on any particular AI company's continued existence, it's not locked behind a subscription. If I switched to a different model or a different agent framework tomorrow, the vault travels with me and the new system picks up exactly where the last one left off. The knowledge layer is independent of the technology layer. That turns out to matter a lot.
What I think this means for everyone else.
There's a term floating around, personal context management, that captures what I think is actually happening here. The idea that in a world where everyone has access to the same AI models, the differentiator isn't the model. It's what you feed it. Your accumulated, structured, verified knowledge about your own work, your own clients, your own industry, is the thing that makes the AI useful rather than just impressive.
The people I talk to about this, the consultants, the lawyers, the policy people, the senior professionals managing complex workloads, they all recognize the problem immediately. They're all managing more context than their brains can hold, they're all losing information between conversations, and they're all underwhelmed by AI tools that don't know anything about their specific work.
I don't think the answer is a product that solves this for everyone. The value comes from the specificity, a knowledge base shaped to how you personally think, work, and communicate. What I do think is that the approach, building a structured knowledge layer first and letting the AI read it, is transferable. The architecture is simple. The tools are available. The hard part is the discipline of organizing information well enough that a machine can make use of it.
I'm not going to pretend I've figured everything out. There are rough edges everywhere. The diarization in Tome isn't perfect. The vault architecture is still evolving. I'm still learning what works and what doesn't. But the core of it, a structured knowledge layer that compounds over time and makes any AI model meaningfully more useful, that part works. And I think it's worth talking about, because most of the conversation around AI and productivity is still focused on the models, and I think the real leverage is somewhere else entirely.
If any of this sounds like a problem you're dealing with, Tome is free and open source, and I'm documenting the rest of the system as I go. Everything lives at gremble.io.
