Read This Before You Write Another Agent Skill

Written by anson | Published 2026/03/02
Tech Story Tags: ai-agent-skills | context-engineering | ai-benchmarks | claude-code-skills | skillsbench | llm-benchmarking | agentic-systems | self-generated-skills

TL;DR: A paper on self-generated Agent Skills was posted to Hacker News. The concept is great, but one bullet in the paper invalidates the whole thing. Skills are just markdown files with some metadata at the top that helps Agents/tools know when to use them.

The entire ecosystem around Claude Code is pretty confusing: the naming conventions are a mess, and the pace of change is beyond any production tool I've seen. Of everything in it, though, Skills are probably the most misused.


I see it misused at work a ton, but a paper just showed up on Hacker News that got me fired up enough to write this post.


The Hacker News title is editorialized for some reason ("Study: Self-generated Agent Skills are useless"), but it immediately grabbed me: I get massive value from Skills written by Agents, yet I also consistently see them misused by my peers. The concept is great, and I've been looking at benchmarking specific parts of the Agentic ecosystem myself, so this was highly relevant to me. Overall, the paper is decent, but one bullet invalidates the whole thing:


Self-Generated Skills: No Skills provided, but the agent is prompted to generate relevant procedural knowledge before solving the task. This isolates the impact of LLMs’ latent domain knowledge.


So all they are doing is taking a problem that a model can't solve well on its own and asking it to write about the task before attempting it. They just reinvented thinking blocks, but worse!

The Skill Anti-Pattern

What they did is a very common mistake that I see constantly: my Agent is bad at this thing, so I ask the Agent to write a Skill on this thing. I'll reiterate that this is identical to thinking blocks. For your Agent to create something worthwhile, you have to make sure it can see the gaps. It's the classic CS intro exercise where you ask someone to write out the steps to make a PB&J: you don't really understand what makes the problem hard until you've struggled through solving it.


This leads directly to the largest faux pas of the AI era: asking an LLM someone else's question verbatim and pasting the LLM's answer back as your response. If I ask you how you did something cool with an Agent, and you just have a fresh Agent build me a SKILL.md on my question on the fly, I will kill you.

What are Skills?

Before getting into proper usage, I just want to cover what Skills are. As a primitive, they are just markdown files with some metadata at the top that helps Agents/tools know when to use them; the rest of the document is the skill. Each skill has its own folder, so it can not only teach your Agent how to do something but also give it better tools.
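For concreteness, here is the general shape of a SKILL.md. The frontmatter fields (`name`, `description`) follow Claude Code's skill format; the body content below is purely illustrative:

```markdown
---
name: monitor-gitlab-ci
description: Watch a GitLab CI pipeline until a job fails or everything passes. Use when asked to monitor, babysit, or wait on CI.
---

# Monitoring GitLab CI

Run `monitor_ci.sh` instead of writing your own polling script.
If a job fails, consult `references/troubleshooting.md` before retrying.
```

The description is the important part: it's what the Agent reads when deciding whether to load the skill at all.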


.claude/skills/
└── monitor-gitlab-ci/
    ├── SKILL.md # The file mentioned above
    ├── monitor_ci.sh # Complicated command
    └── references/ # Additional references 
        ├── api_commands.md
        ├── log_analysis.md
        └── troubleshooting.md


Above is a Skill I used a ton to let older versions of Claude work on my GitLab CI. It's a folder with a simple markdown Skill that explains the setup and tells the Agent to watch CI until either a job fails or everything passes, a small CLI script that keeps the Agent from writing its own, and additional references for edge cases.
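The actual `monitor_ci.sh` isn't shown in this post, but a minimal sketch of its core might look like this (the function and variable names here are hypothetical; the API endpoint is GitLab's standard single-pipeline endpoint):

```shell
#!/usr/bin/env sh
# Sketch of a CI-watching helper. classify_status maps a GitLab pipeline
# status string to the action the Agent should take next.
classify_status() {
  case "$1" in
    success)                 echo "done: all jobs passed" ;;
    failed|canceled)         echo "done: pipeline failed" ;;
    running|pending|created) echo "wait" ;;
    *)                       echo "unknown status: $1" ;;
  esac
}

# Polling loop (requires GITLAB_URL, PROJECT_ID, PIPELINE_ID, GITLAB_TOKEN):
# while :; do
#   status=$(curl -s --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
#     "$GITLAB_URL/api/v4/projects/$PROJECT_ID/pipelines/$PIPELINE_ID" | jq -r .status)
#   action=$(classify_status "$status")
#   case "$action" in done:*) echo "$action"; break ;; esac
#   sleep 30
# done
```

The point of shipping a script like this inside the skill folder is that the Agent calls it instead of improvising a new polling loop every session.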

Skills for Context

Agents are completely stateless, meaning every new conversation is like meeting the model for the first time; it has no idea what your project is or what you were working on 10 minutes ago. CLAUDE.md does a lot to fix this, but for a large enough project it can't contain everything. If I open up a monorepo and tell Claude to run a SIL test, it is going to have to run around to figure out how to do that. It has to figure out what language the project is in, then look for common test patterns for that language; it's going to see a complicated Docker Compose setup; it's going to see that the containers need x86 while we're running on a Mac; then it's going to look for CI, and so on.


This can all be solved by writing Skills for common, but not universal, patterns. Any time a model struggles to do something in your project that you know is simple and basic, tell it to make a Skill covering the knowledge gaps it had to close to complete that task.
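A context Skill for the monorepo scenario above might capture exactly the gaps the model had to rediscover. Everything in this sketch is hypothetical (paths, compose file, service name), but it shows the shape:

```markdown
---
name: run-sil-tests
description: How to run SIL tests in this monorepo. Use when asked to run, fix, or debug SIL tests.
---

# Running SIL tests

- SIL tests run inside Docker Compose, not directly on the host.
- The containers are x86-only; on Apple Silicon, x86 emulation must be enabled in Docker.
- Run them with: `docker compose -f sil/docker-compose.yml run sil-tests`
```

Ten lines like this save the Agent a full exploratory lap around the repo every session.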

Skills for Repetition

Another simple use for Skills is encoding tasks you do often. For instance, I often tell my Agents to make sure my docs/, MR description, Issue, and codebase are all in alignment, so I made a simple Skill for it to keep me from typing it out every time.

Skills for Hard Problems

Claude can solve some really hard problems, but it might take $500 in tokens, and you might have to yell at it for reward hacking a few times. Almost any time I have to intervene on a problem, once the Agent is unstuck I ask it what gap kept it from figuring things out on its own. Sometimes it's something silly, but sometimes it's genuinely insightful, and then I have Claude make a Skill to fill the gap.

Conclusion

I edited the original benchmark to do Skills my way, and the results were as I suspected: the Agents nailed the test with proper Skills. I don't have the money to fully validate this result, but the first pass was good enough to make me happy. I think this approach essentially doubles the dataset needed for the benchmark, so I assume that's why the authors didn't include it.


Remember, there are two reasons to make a Skill: remembering a novel problem and avoiding repetition. If you are just opening a fresh session with your Agent and asking for a Skill on x, it's probably worthless. The Skill needs to contain something the fresh model doesn't know, which can come from your prompt explaining a common process, a compilation of knowledge gained from a hard problem, or even having the Agent go off and do its own research first.

Happy Hacking.



Published by HackerNoon on 2026/03/02