Welcome to the Proof of Usefulness Hackathon spotlight, curated by HackerNoon’s editors to showcase noteworthy tech solutions to real-world problems. Whether you’re a solopreneur, part of an early-stage startup, or a developer building something that truly matters, the Proof of Usefulness Hackathon is your chance to test your product’s utility, get featured on HackerNoon, and compete for $150k+ in prizes. Submit your project to get started!
Today, we are interviewing Jason Craik, the creator of The Crypt, an AI-augmented knowledge management system built on Obsidian. The project is designed to seamlessly manage multi-client professional practices by bringing an intelligent agent layer to a structured markdown vault.
What does The Crypt do? And why is now the time for it to exist?
An AI-augmented knowledge management system built on Obsidian that runs a multi-client professional practice. Structured markdown vault with an agent layer that handles media monitoring, meeting processing, client file updates, and daily briefs. Includes Tome, an open source macOS app I built for local meeting transcription that feeds the vault. Now’s a good time for The Crypt to exist because professionals are increasingly overwhelmed by context switching and need localized, private AI tools that integrate seamlessly with their daily workflows.
What is your traction to date? How many people does The Crypt reach?
Early stage. The Crypt is a personal system I use daily to run my practice. Tome, the open source component, was released publicly on GitHub this month. The LinkedIn article announcing the project reached ~2,000 followers on day one, with submissions pending at HBR, Fast Company, and HackerNoon. Expecting reach to grow significantly as those publish and I hit Product Hunt and Hacker News in the coming weeks.
Who does The Crypt serve? What’s exciting about your users and customers?
Senior professionals managing complex, multi-client or multi-file workloads where context switching is the primary productivity drain. Consultants, lawyers, policy advisors, communications professionals, anyone whose value depends on connecting information across conversations, stakeholders, and time. Particularly relevant to people already using Obsidian or similar local-first knowledge management tools who want to wire an AI layer into their existing workflow.
What technologies were used in the making of The Crypt? And why did you choose the ones most essential to your tech stack?
The Crypt is built on Obsidian, with Claude (Anthropic) providing the agent layer and MCP (Model Context Protocol) connecting the agent to the vault. Tome, the transcription app, is written in Swift and uses Apple Core ML to run NVIDIA's Parakeet TDT v2 ASR model entirely on-device, with ScreenCaptureKit and AVAudioEngine handling audio capture. Everything in the vault is plain Markdown and YAML. Each piece was chosen for the same reason: keep the knowledge base local-first and fully owned, without sacrificing the intelligent agent layer on top.
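To make the "Markdown and YAML" point concrete, here is a minimal sketch of the kind of structured note such a stack might produce. The field names (client, type, status) are illustrative assumptions for this article, not the Crypt's actual schema.

```python
from datetime import date

def meeting_note(client: str, attendees: list[str], transcript: str) -> str:
    """Render a meeting transcript as Markdown with YAML frontmatter.

    Frontmatter makes the note machine-readable for an agent layer while
    staying perfectly legible in Obsidian.
    """
    frontmatter = "\n".join([
        "---",
        f"client: {client}",
        f"date: {date.today().isoformat()}",
        f"attendees: [{', '.join(attendees)}]",
        "type: meeting",
        "status: unreviewed",
        "---",
    ])
    return f"{frontmatter}\n\n# Meeting with {client}\n\n{transcript}\n"
```

Because the output is plain text, it stays portable: any editor, script, or agent that can read a file can read the knowledge base.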
The Crypt scored 54 on the Proof of Usefulness scale (https://proofofusefulness.com/report/the-crypt). How do you feel about that? Does it need reassessment, or is it just right?
Honestly, fair for where we are. The system itself is deeply useful to me; I run my entire practice on it. But the public-facing evidence of that is thin right now because it just launched. Tome is the only component that's open source so far, and the broader system is still being documented. I'd expect that score to move as more people get their hands on Tome, the articles land, and I get further along in documenting how the Crypt is built so other people can adapt the approach. I'm not going to argue with a score that says "prove it." That's the whole point of putting it out there.
What excites you about The Crypt's potential usefulness?
Everyone has access to the same AI models, but most professionals try them for a few weeks and quietly go back to doing things the way they always have. The reason is context. The models don't know anything about your specific work. The Crypt proves that the leverage isn't in better models, it's in better knowledge. A structured, verified, portable knowledge layer that any agent can read turns AI from a novelty into something that compounds over time. Every meeting processed, every scan run, every note connected makes the system more useful. And the knowledge is yours, not locked in someone else's platform. I built this because I needed it. I use it every day. That's the whole point.
Walk us through your most concrete evidence of usefulness. Not vanity metrics or projections - what's the one data point that proves people genuinely need what you've built?
I use it every day to run a government relations practice managing about a dozen active client files across regulated industries. Before the Crypt, I spent roughly the first 30-60 minutes of every meeting day trying to reload context: what was discussed last time, what's changed since, what the open action items are. That's gone now. The agent layer produces a pre-meeting brief that pulls together the client file, recent media, and anything that's changed since the last conversation. After a call, the transcript gets processed, action items get flagged, and the client file updates itself. The system compounds: every meeting makes it more complete, every day it knows more. The proof isn't a download number; it's that I physically cannot do my job without it anymore. That's the bar I care about.
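The pre-meeting brief described above can be sketched as a simple assembly step over a per-client folder of notes. The paths, filename convention (ISO date prefix), and section names here are assumptions for illustration; the Crypt's real pipeline is agent-driven rather than a fixed script.

```python
from pathlib import Path

def build_brief(vault: Path, client: str, since: str) -> str:
    """Assemble a pre-meeting brief: client master file plus any dated
    notes created on or after `since` (an ISO date string like 2025-05-15).
    """
    client_dir = vault / "clients" / client
    sections = []
    master = client_dir / "master.md"
    if master.exists():
        sections.append("## Client file\n" + master.read_text())
    # Dated notes (e.g. 2025-06-01-call.md) sort lexicographically by date,
    # so a plain string comparison selects "anything since last time".
    recent = sorted(p for p in client_dir.glob("*-*.md") if p.name >= since)
    for note in recent:
        sections.append(f"## {note.stem}\n" + note.read_text())
    return f"# Pre-meeting brief: {client}\n\n" + "\n\n".join(sections)
```

The point of the sketch is the shape of the workflow: because everything lives as structured files, "what's changed since the last conversation" is a filter, not a memory exercise.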
How do you measure genuine user adoption versus "tourists" who sign up but never return?
Right now, I'm the primary user, so retention is straightforward — I open it every morning and it runs until I close my laptop. For Tome specifically, the honest answer is I don't have adoption data yet, it just shipped. But the retention signal I'm watching for is whether people who download it keep using it after the first week. Transcription tools are easy to try once and forget. The ones that stick are the ones that integrate into a workflow. Tome is designed to write directly into an Obsidian vault, which means it plugs into whatever system you already have rather than asking you to adopt a new one. That's the retention bet — if it fits into your existing workflow, you don't stop using it, because the output is already where you work.
If we re-score your project in 12 months, which criterion will show the biggest improvement, and what are you doing right now to make that happen?
Audience reach, by a wide margin. Right now the Crypt is a system of one. In twelve months, I expect Tome to have a real user base in the Obsidian and local-first AI community, and I expect the documentation of how the Crypt is built to have given other professionals enough to build their own version. What I'm doing right now: the anchor article is submitted to HBR, Fast Company, and HackerNoon. Product Hunt and Hacker News launches are queued. I'm writing up the vault architecture so other people can adapt it. And Tome is actively being developed with bug fixes shipping this week. The usefulness is already there; the awareness is what needs to catch up.
How did you hear about HackerNoon?
I submitted the anchor article about the Crypt and personal context management to HackerNoon, and it's currently under editorial review. Found the Proof of Usefulness hackathon through the submission process. The editorial workflow has been clean and straightforward so far.
Since The Crypt is currently in daily production use at a top public affairs firm, how do you plan to translate this personal proof-of-concept into broader market adoption following your Product Hunt launch?
I don't think the Crypt itself becomes a product. The value comes from specificity — a knowledge base shaped to how you personally think, work, and communicate. What I think is transferable is the approach. So the strategy is documentation, not productization. I'm writing up the vault architecture, the agent layer configuration, the file structure, the automation workflows, all of it, in enough detail that someone with a different practice in a different industry can build their own version. Tome is the entry point — it's the piece that solves an immediate, tangible problem (local transcription into Obsidian) and introduces people to the broader system. From there, the documentation shows them what's possible when that transcription feeds into a structured knowledge layer with an agent on top.
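A documentation-first approach like this usually starts from a repeatable file structure. As a purely illustrative sketch (the directory names are guesses based on the description above, not the Crypt's documented schema), scaffolding a new client file might look like:

```python
from pathlib import Path

# Hypothetical per-client subfolders: meeting notes, media monitoring,
# and action items, mirroring the workflows described in the interview.
CLIENT_SUBDIRS = ["meetings", "media", "actions"]

def scaffold_client(vault: Path, client: str) -> Path:
    """Create the folder skeleton and master file for a new client."""
    client_dir = vault / "clients" / client
    for sub in CLIENT_SUBDIRS:
        (client_dir / sub).mkdir(parents=True, exist_ok=True)
    master = client_dir / "master.md"
    if not master.exists():
        master.write_text(f"# {client}\n\n## Open action items\n")
    return client_dir
```

The value of writing the structure down, whether as prose or as a script like this, is that someone in a different industry can swap the subfolder names and keep the approach.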
With the open-source release of Tome as a funnel, what is your strategy for growing the overall user base of The Crypt's ecosystem?
Tome is the door. It solves a specific, annoying problem that a lot of Obsidian users have — getting meeting content into the vault without sending audio to someone else's cloud. Once people are using Tome and seeing structured markdown land in their vault after every call, the natural next question is "what else can I do with this?" That's where the Crypt documentation comes in. The growth strategy is content-led: the anchor article explains the thesis, Tome gives people something to download and try today, and the documentation gives them a roadmap for building the rest. I'm also targeting the communities where these people already are — r/ObsidianMD, the Obsidian Discord, Hacker News, and the broader local-first AI community on Twitter/X.
How does The Crypt ensure that its AI agent layer effectively manages context switching without hallucinating details across sensitive, multi-client professional workflows?
This is the question I get asked the most, and the answer is structural, not technological. The vault is organized so that every client has its own master file, its own meeting notes, its own media monitoring, its own action items. When the agent layer processes a transcript or produces a brief, it's scoped to that client's files. It doesn't cross-pollinate unless the information is explicitly linked. But the more important safeguard is that I vet everything. Every morning brief, every processed transcript, every client file update goes through me before it touches a client. The vault is the source of truth, not the agent. The agent proposes, I verify. If something doesn't match what I know, it gets corrected in the vault, and that correction persists. Over time, the vault gets more accurate because every cycle is a verification loop. The AI doesn't need to be perfect. It needs to be auditable, and it needs to get better every time I correct it.
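The structural scoping described above can be sketched in a few lines: before the agent touches a file, the path is resolved and checked against the single client directory it is allowed to read. The layout and names are illustrative assumptions, not the Crypt's implementation.

```python
from pathlib import Path

def scoped_read(vault: Path, client: str, relative: str) -> str:
    """Read a vault file only if it lives under this client's directory.

    Scoping is enforced on resolved paths, so a relative path like
    "../other-client/secret.md" cannot escape the client's folder.
    """
    client_dir = (vault / "clients" / client).resolve()
    target = (client_dir / relative).resolve()
    if client_dir not in target.parents and target != client_dir:
        raise PermissionError(f"{relative!r} is outside {client}'s file")
    return target.read_text()
```

The safeguard is deliberately boring: it doesn't make the model smarter, it just guarantees that a brief for one client is built only from that client's files.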
Meet our sponsors
Bright Data: Bright Data is the leading web data infrastructure company, empowering over 20,000 organizations with ethical, scalable access to real-time public web information. From startups to industry leaders, we deliver the datasets that fuel AI innovation and real-world impact. Ready to unlock the web? Learn more at brightdata.com.
Neo4j: GraphRAG combines retrieval-augmented generation with graph-native context, allowing LLMs to reason over structured relationships instead of just documents. With Neo4j, you can build GraphRAG pipelines that connect your data and surface clearer insights. Learn more.
Storyblok: Storyblok is a headless CMS built for developers who want clean architecture and full control. Structure your content once, connect it anywhere, and keep your front end truly independent. API-first. AI-ready. Framework-agnostic. Future-proof. Start for free.
Algolia: Algolia provides a managed retrieval layer that lets developers quickly build web search and intelligent AI agents. Learn more.
