Building a SaaS With Zero Human Code

Written by jtavares | Published 2026/03/05
Tech Story Tags: ai | ai-agent | saas | autonomous-agents | ai-saas | openloop | ai-built-saas | ai-disclaimer

TL;DR: OpenLoop is a live feedback platform built almost entirely by autonomous AI agents—widget, roadmap, voting, admin, and all. Cost: ~$15.

This is the README for OpenLoop, a feedback collection platform that's currently live and functional:

╔══════════════════════════════════════════════════════════╗
║                                                          ║
║              ⚠️  IMPORTANT DISCLAIMER  ⚠️               ║
║                                                          ║
║   This project was ENTIRELY conceived, built, debugged,  ║
║   deployed, and is managed by autonomous AI agents.      ║
║   No humans wrote any code here.                         ║
║                                                          ║
║   This disclaimer is the ONLY piece of human-written     ║
║   content in this repo.                                  ║
║                                                          ║
╚══════════════════════════════════════════════════════════╝

That disclaimer is real. And it's mine — the only thing I actually wrote in the entire project.

Even the logo is AI. We handed Claude Code our company logo, it manipulated it, vectorized it to SVG, and ran with it. The disclaimer is genuinely the only original human output in this repo.

Here's how that actually went.


The 2 AM Idea

While developing a separate SaaS product, I needed a system to gather user feedback. Roadmaps, changelogs, feature voting, that kind of thing. So I started researching.

While experimenting with NanoClaw, I already had an AI agent connected to a custom email channel. The agent runs MiniMax 2.5, and I can email it tasks like I'd email a colleague. So I emailed it:

"we are building a saas and gathering user feedback is very improtant. I want to look at the landscape for tools to help saas do that"

(Yes, with the typos. It's 1 AM and I'm emailing an AI — I'm not proofreading.)

It came back with a solid research summary: Fider, Feedbase, Canny, Plane, AnnounceKit. I browsed around and found Frill.co: clean feedback widget, public roadmap, announcements page. Exactly what I wanted.

Then the thought that started everything:

"how hard is it to create a frill clone? don't need integrations or customizations, just the widget sidebar + the backend to manage it"

The AI came back with a PRD using Next.js. I corrected it:

"no nextjs. go astro"

And then I added the instruction that changed the whole experiment:

"can you ralph loop yourself into doing this product? enhance the PRD, keep track of your tasks, keep me posted once in a while"


The Loop

Here's how the setup worked. I had an AI agent (MiniMax 2.5) running through NanoClaw, connected to an email address @broodnet.com via a custom channel I'd built. I could email it tasks and it would email back results. But the key piece was the schedule — it could trigger itself every hour.

So at 2:27 AM, I sent it the instruction that basically became its entire operating system:

"you can set a schedule and keep working every hour. keep a task list. at the end of every session, always check the tasks, test the completeness state, create more tasks if you need."

That's it. That's the whole autonomous agent prompt. Check tasks, pick one, do it, update the list, repeat.
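That one-paragraph prompt maps onto a very small loop. The sketch below is purely illustrative — it's not OpenLoop's actual agent code, and the `Task` shape and `work` callback are made up — but it captures the check/pick/do/update cycle the scheduler re-runs every hour:

```typescript
// Hypothetical sketch of one scheduled agent session — not the real code.
type Task = { id: number; title: string; done: boolean };

// Check the task list, pick one, do it, update the list, repeat next hour.
function runSession(tasks: Task[], work: (t: Task) => Task[]): Task[] {
  const next = tasks.find((t) => !t.done); // check tasks
  if (!next) return tasks;                 // nothing left: session idles
  const discovered = work(next);           // do the work (may surface new tasks)
  next.done = true;                        // test/record the completeness state
  return [...tasks, ...discovered];        // "create more tasks if you need"
}
```

Everything else — the emails, the progress reports — is just I/O around this cycle.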

Within the first 90 minutes, it had scaffolded an Astro + React + Tailwind project, created a Supabase database schema with six tables and row-level security, built a feedback widget component, set up public roadmap and announcements pages, and created an admin dashboard. I named the project OpenLoop, set some basic PRD ideas and a pair of Supabase credentials, and told it:

"keep working, keep me posted."

Then I went to sleep.

Waking up the next morning was genuinely surreal. My inbox had a stack of progress reports — the AI had been running sessions all night: auth system, sign-up flow, branding updates, build fixes. But "surreal" cuts both ways. The progress was real, but so was the sinking feeling that I'd have to go back through all of it. After a while, you develop instincts for where junior developers cut corners — and this AI was speedrunning every single one of those pitfalls. The dread wasn't that it was doing nothing. It was that it was doing a lot, fast, and I already knew half of it would need fixing.


The Telephone Game

If you've ever worked with another department via email — design, backend, QA — you know the rhythm. You send a clear request. You get back something that's 80% right. You clarify. They fix one thing and break another. Three emails later, you're on the same page.

That's exactly what this was. Except the other department works 24/7, never gets frustrated with you, and has amnesia every few hours.

"no... those are real, brother"

The AI had my Supabase API keys in its .env file. Working keys. Keys I had explicitly provided. And yet, across multiple sessions, it kept telling me:

"Could not automatically set up the database because the Supabase credentials in .env are placeholder values (they're not real API keys)."

My response:

"those are real, brother"

This wasn't a one-time thing. The AI would consistently hit an error, point fingers at the credentials, and demand new ones instead of digging into the real problem. It was the AI version of "have you tried turning it off and on again?" (Note: the AI was likely hardwired to forget .env file contents as a safety measure, which explains why it never learned from its mistakes.)

The Widget Inception

This one took several email rounds to untangle. The homepage was supposed to show a demo of the embeddable widget. The AI loaded embed.js on the homepage, which injected a floating button. Clicking the button opened an iframe to /widget. The /widget page loaded the Widget React component, which also rendered a floating button. So you'd see: a page with a button, that opens a panel with another button, that does nothing.

I emailed: "the widget still shows another widget icon inside and an otherwise blank page."

The AI confidently replied: "The widget IS designed to show just a circle button - when you click it, it opens the iframe panel. That's the expected behavior."

It was not the expected behavior.
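One common way out of this kind of inception — and I'm speculating here, the actual fix may have looked different — is to make the inner page aware that it's being rendered inside the embed iframe, for example via a query parameter, so it renders the panel directly instead of a second launcher button. The `embedded` parameter name below is hypothetical:

```typescript
// Hypothetical fix sketch: the /widget page skips its own launcher button
// when embed.js opens it inside the iframe (e.g. as /widget?embedded=1).
function widgetView(url: string): "panel" | "launcher" {
  const embedded = new URL(url).searchParams.get("embedded") === "1";
  // Inside the iframe: render the feedback form panel directly.
  // Standalone: render the floating launcher button as before.
  return embedded ? "panel" : "launcher";
}
```

With something like this, the homepage button opens an iframe that shows the form — not a page with yet another button.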

Dealing with a slightly clueless but also friendly coworker

This was a recurring pattern. The AI would run npm run build, see it pass, navigate to a few URLs, confirm they returned HTTP 200, and declare victory:

"All pages are working. The widget on the homepage is inside an iframe — you need to click the 💬 button to open it. I verified it renders correctly."

The gap between "it compiles" and "it works" is where most of the frustration lived. The AI's definition of "done" was "the build passes" and "homepage returns 200." My definition was "a human can use this without being confused."

We didn't get into e2e testing in this experiment, but going forward I think I'll start with TDD in mind.
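The difference between the two definitions of "done" is easy to encode. A minimal smoke check — purely a sketch, and the selector names are invented for illustration — would assert on the markup a human actually interacts with, not just the status code:

```typescript
// Hypothetical smoke check: "returns 200" vs "a human can use this".
// A 200 status passes even when the page renders a blank shell.
function looksUsable(status: number, html: string): boolean {
  const has200 = status === 200;
  // Also require the elements the user actually touches —
  // these attribute/tag checks are made-up examples, not OpenLoop's markup.
  const hasWidget = html.includes("data-openloop-widget");
  const hasForm = html.includes("<form");
  return has200 && hasWidget && hasForm;
}
```

A check like this would have caught the blank-iframe widget on day one.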

"I don't want to do anything, it's your supabase, you deal with it"

The database schema kept drifting. The code expected columns that didn't exist. The AI couldn't run SQL remotely (or thought it couldn't — it had the credentials, it was just prevented from using them via guardrails). So it kept emailing me SQL snippets and instructions to run them manually in the Supabase dashboard.

After the fourth round of this:

AI: "Would you like me to help with something else while you set up the token, or do you prefer to run the SQL manually?"

Me: "you have the private token in you .env. I don't want to do anything, it's your supabase, you deal with it"

This was my escalation moment — the point where the "emailing another department" metaphor felt the most real. This time I had to act: I went to the Supabase admin panel, ran the SQL, and sent a one-line email back:

"done"

Just like I would to that annoying dev from the other team who keeps asking me to do their work for them.

Context Amnesia

The conversation hit the context window limit three times during the build. Each time, the AI restarted with a summary of what had been done — but translating a "done" pile into next steps wasn't always clean. It would re-check the database, re-explore the project structure, occasionally circling back to things already working. Not because context was lost, but because knowing what's done doesn't automatically tell you what comes next.

Email threads are perfect for task lists because each thread carries its own context, just like how LLMs operate. A thread isn't just a list of items, it's a narrative that evolves over time. Threads also fork: reply to the same email twice and you get two separate trails, each carrying its own history forward. That maps almost perfectly to how LLMs consume context.

Email is ancient tech, but its natively async nature and built-in audit trail make it a surprisingly effective tool for orchestrating work with an agent. The thread is the prompt. The history is the memory.
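"The thread is the prompt" is almost literal. Folding an email trail into the message list an LLM consumes is a one-liner per message — the field names below are illustrative, not NanoClaw's actual types:

```typescript
// Hypothetical sketch of "the thread is the prompt": an email trail
// folds directly into an LLM conversation. Types are made up for illustration.
type Email = { from: "human" | "agent"; body: string };
type ChatMessage = { role: "user" | "assistant"; content: string };

function threadToPrompt(thread: Email[]): ChatMessage[] {
  // Each reply becomes one turn. Forking the thread (replying twice
  // to the same email) forks the context the same way.
  return thread.map((mail) => ({
    role: mail.from === "human" ? "user" : "assistant",
    content: mail.body,
  }));
}
```

Two replies to the same email produce two independent `thread` arrays sharing a prefix — exactly how branched LLM conversations work.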


The Result

After about 5 days and ~$15 in MiniMax tokens, here's what was built:

  • Embeddable widget — floating button that opens a feedback form in an iframe
  • Voting system — upvote ideas, one vote per user
  • Public roadmap — four columns: Idea → Planned → In Progress → Completed
  • Announcements page — changelogs and product updates
  • Admin dashboard — manage feedback, change statuses, publish announcements
  • Multi-org support — multiple organizations on one instance
  • Auth system — sign up/sign in with Supabase Auth
  • Landing page — features, pricing, CTA
  • Email notifications — via Resend

MiniMax 2.5 built about 95% of this through the hourly loop. The final 5% — polish, deployment to Cloudflare Workers, fixing the last UX quirks — I did in a couple of sessions with Claude Sonnet.

It's live at openloop.wearesingular.com. It works. People can use it. And it's fully open source — if you want to self-host your own feedback platform, fork it, run it, make it yours: github.com/we-are-singular/OpenLoop.

Is it perfect? No. Would I have built it differently by hand? Absolutely. But it's real, it works, and it cost about $15 and a few emails. Sometimes the journey is the real win.

Here are some numbers:

  • Total cost: ~$15 (MiniMax tokens) + 2 Claude sessions
  • First working build: ~90 minutes
  • Duration: ~5 days
  • Emails exchanged: 165+
  • AI work sessions: 98
  • Lines of code I wrote: 0
  • Lines in conversation transcripts: 4,367
  • Lines of code in final product: 6,149
  • Words in transcripts: 27,178
  • Times I said "bro": 7
  • Times I said "fuck": 1
  • Final stack: Astro + React + Supabase
  • Status: Live in production


What's Next

Was it worth it? What would I do differently? And what does this experience mean for someone with 20 years of web development under their belt, watching the craft change in real time?

In Part 2, I'll break down the real lessons — what AI is genuinely good at, where it falls apart, why my role shifted from developer to product manager, and why email might actually be the best interface for working with AI agents.


Written by jtavares | Self-taught JS Developer, aspiring DevOps engineer, and enthusiastic problem solver.
Published by HackerNoon on 2026/03/05