Ever gotten married? Ever ended up with a huge album that needs your attention just to send subsets of it to the people in it? Too lazy to do that yourself? Here is Sort Moments - it sorts your moments, locally.
An Intro, maybe?
I am an extremely lazy person. There, I said it. If there is anything out there that can empower my laziness, I will jump on it immediately. However, the advancements in AI since the release of ChatGPT left me hazy, confused, and on edge. I did not like where it was all going, and whenever I actually did try to use these tools, they never felt competent. Most people lack imagination and creativity, and as a result enjoyed their most basic feature - writing. I hated that even more, and rebelled by not using them in any way at all, except to write functions in my own codebases. That was the only use case I was genuinely interested in.
To write, to form art, to write songs, to sing, to make videos, to build narratives - I have always assumed this to be the job of humans. Unfortunately, that really isn't how things panned out. As years passed, more and more content online became synthetic and boring - the same AI-styled 'top 10' posts - then we had revolutions in video generation, then Instagram influencers conjured purely out of AI. And finally, the course sellers.
All of that is a very strong case to keep hating on it, but it is genuinely useful in some ways. Those ways can be many, and different for each of us. But since ChatGPT came out, there has been too much noise - too many course sellers, too many influencers hyping one tech release or another - leaving you in a state of limbo. The noise is deafening, the signal ever so poor. This Substack is my effort to help you navigate this synthetic dead internet toward a more lively one, and to organize thoughts, piece by piece, as I have done over the last few weeks after all that frictionmaxxing.
Without further ado, let's talk about why this app, what it does, how Claude Code helped, and how to use it without going overboard.
Attempt 1 with LLM Assistance: Failure ❌
This one was back at the start of 2025, and it was unpleasant: there was far too much friction to call it a tool worth using if you knew very little of what you were doing. Unfortunately, my strength is not in front-end or mobile applications, so all the gibberish it produced and kept compounding on made me give up.
Attempt 2 with Claude Code: Success ✔
This time around, I decided to give it another go. On every subreddit I visited - r/LocalLLama, r/codex, r/claudecode, r/ClaudeAI, r/AI_Agents, or what have you - everyone was all about automating entire pipelines or getting stuff done end to end.
So I started with Opus 4.5, a CLAUDE.md file, and plan mode, explaining what I already had (the backend) and what I wanted to get done. It asked questions via its AskUserQuestionTool for clarifications (like whether I wanted to use Electron or PyQt or something else, with me responding with whatever I fancied). Over the next few days, I realized it was very much possible. So for planning I used Opus 4.5, and afterwards I switched to Sonnet 4.5 to get the front-end coding done. It ended up looking like this:
This was shockingly good to me. All of the buttons did what they were intended to do without breaking much - I had to watch the logs manually a few times to pick up on issues related to pathing and model downloads, but overall it was much smoother. There was a trick I used, learnt over the course of the last few months, to get it this stable: something called "skills".
What are skills? Skills are a way to empower your model on a specific element of your workflow - maybe web dev, maybe front end, maybe REST APIs. You get the gist.
You can create skills using the skill-builder from Anthropic itself from here, by supplying skill-builder.md and telling it what you want the skill to be about. The model will read the frontmatter, check whether it is relevant to what you are requesting in the prompt, invoke it, and, if the skill contains multiple files or scripts, go deeper and deeper as needed.
An example to illustrate straight from the source:
Read More: This example illustrates what a frontmatter is (this is what the model initially reads to gauge relevance to the current request in the prompt).
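To give you a flavor of what that frontmatter looks like, here is a minimal sketch of a SKILL.md - the skill name and description below are made up for illustration, not from an actual skill of mine:

```markdown
---
name: frontend-styling
description: Conventions for styling React components in this project. Use when editing anything under src/components.
---

# Frontend Styling

Keep components small, colocate styles with components, and reuse the shared theme tokens.
```

The model only reads the `name` and `description` up front; the body below the frontmatter gets pulled in once the skill is deemed relevant.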
I built all my skills using a skill xD (recursion? :D), and the skill is here. You just need to put this folder inside the .claude folder found inside your project if you are using Claude Code. If you are using Gemini or Codex or whatever, they have their equivalent locations, and you can place it there.
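Concretely, the placement looks something like this - the skill name here is purely illustrative, swap in whatever folder you downloaded:

```shell
# Illustrative sketch: a skill lives in its own folder under .claude/skills,
# with a SKILL.md whose frontmatter the model reads first.
mkdir -p .claude/skills/example-skill
printf '%s\n' '---' 'name: example-skill' 'description: Example skill for illustration' '---' \
  > .claude/skills/example-skill/SKILL.md
ls .claude/skills/example-skill/
```

That is really all the "installation" there is - drop the folder in, and the model picks it up on its own when the prompt matches.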
Also, there is a whole website hosting a wide range of skills that you could use (I now realize this could be a separate blog lmao). You can find it here. It's called skillsmp and it is totally worth a visit.
Anyways, circling back.
By this point, most of it just worked. The website ended up looking like this, which I hosted on a Hetzner VPS for how cheap it was, with a domain bought from GoDaddy. After testing locally, I then let Claude Code do all the commits and set up the website.
I did have to plug in some of my own ideas - and I think that is where humans still shine: ideas. For example, around this time, AntiGravity had come out (Google's take on a full environment supporting your LLM, unlike Claude Code, which lives either in your terminal or as an extension, so not really comparable). AntiGravity's website had a really awesome asset in the background that made it come to life, which I absolutely loved, and fortunately I found the perfect site from which you could 'borrow' such components. It's called reactbits.dev and you can go there directly from here.
I then gave it the asset link and told CC to go browse it and make it happen on my site. And there it was.
The website is here for you to browse, and comes with a demo video if you scroll down.
The website exists only to provide the executable download and access to my GitHub code, so people can upgrade it if they like what they see but think more should be done (like integration with GDrive and WhatsApp so people can export the folders directly there).
Anyways, that really is all. I did make use of Factory.ai at the end, as it was offering 10M tokens for free and you could use Opus, Codex, and GLM if you wanted to. So I said, why not. But really, the entire structure around CC is so good that it makes its own models perform as if on steroids. So it's not just models - it's models, the environment they are in, the access they have, and so much more coming together.
Oh, before I completely forget - in case anyone cares or wonders where the logo comes from: it's me querying Nano Banana in the same session about 20 times, repeatedly asking it what it was trying to do, then correcting it where I felt it was failing to grasp the idea, until eventually it just… worked?
So yeah, I did convert. Somewhat. I think LLMs in the right context can do a great deal in areas where you are not the strongest, but whatever it is, you have to be strong at something adjacent. For example, if you are not great at front end but can do back end just fine, build the back-end prototype or lay out exactly what you want there, then let it go into flow state on the front end and decide things for you. You can mostly just comment when it diverges from your desired output, because, let's be frank, there really is not much else you can do - and it is much cheaper than hiring someone for a pet project.
If it weren't for CC, Factory, reactbits.dev, skillsmp, and all the awesome people in those subreddits, this would perhaps have just fizzled out halfway. This is where vibe coding truly shines.
Signing out,
Abdullah
(The next article is on OpenClaw, and a Recommender Systems model that not many know about except perhaps in China - and how it changed my entire world view around the complex species that we think we are)
