I reviewed OpenCode's docs, the OpenCode GitHub repo, recent writeups, and a large set of public skills and commands people share for OpenCode, Claude Code, Codex, and related tools.
A clear pattern showed up quickly. The useful ideas are small, concrete workflows that improve routine engineering work: planning, debugging, review, verification, context loading, and memory capture.
So this is not a big theory piece. It is a compact collection of skills and commands worth reusing.
## Start with this rule
The best skill is the one you make yourself to automate your own routine.
Public skills are useful because they give you working patterns. The real payoff comes when you turn your repeated work into a reusable skill or command:
- the way your team reviews migrations
- the checks you run before a release
- the steps you use during incident response
- the handoff format you expect before a PR
That is where skills stop being a demo and start becoming infrastructure.
## Skills worth reusing
These are listed in rough priority order.
| Name | Why it made the list | Source |
|---|---|---|
| `systematic-debugging` | Strong rule set for real debugging. It forces root-cause investigation before fixes. | |
| `verification-before-completion` | High-value finishing skill. No success claim without fresh evidence. | |
| | Useful because agents often skip the failing-test step and then rationalize it later. | |
| | Good before runtime, API, or migration work. It forces design decisions before edits. | |
| `ask-questions-if-underspecified` | Good behavior-shaping skill. It keeps the agent from charging ahead on vague requirements. | |
| | Practical repo skill. Run the exact verification stack when code or build behavior changed. | |
| `writing-plans` | Turns a fuzzy task into a concrete execution plan with files, tests, and checkpoints. | |
| | Good isolation skill for parallel work, experiments, and larger agent-driven changes. | |
| | One of the better public playbooks for splitting independent tasks across agents. | |
| | Teaches the agent to verify review feedback instead of blindly applying it. | |
Good runners-up:

- `docs-sync` for keeping docs aligned with code
- `changeset-validation` for JS monorepos and release hygiene
- `openai-knowledge` as a pattern for "use current docs, not memory"
- `variant-analysis` for security and bug sibling hunting
## Commands worth reusing
These are the commands I would look at first when setting up an OpenCode repo.
| Name | Why it made the list | Source |
|---|---|---|
| `/learn` | Best memory pattern I found. It writes non-obvious lessons into the right `AGENTS.md`. | |
| | Strong pre-commit gate covering code quality, docs, API, DB, cross-layer impact, and manual checks. | |
| | Catches a common failure: one layer changed, another one was missed. | |
| | Small but effective bootstrap command. Load enough repo context before real work starts. | |
| `/careful-review` | Forces a fresh-eyes pass before the work gets called done. | |
| | Good companion to | |
| | Useful when the code works but the safety net is thin. | |
| `race` / `pick` | Run isolated implementations in parallel, then choose a winner or merge ideas. | |
| `/session-summary` | Good handoff command. Capture actions, cost, inefficiency, and next improvements. | |
| | Useful meta-command for checking the agent stack itself: tools, quality gates, memory, security, and cost. | |
Good runners-up:

- `/generate-command-diff` for porting command improvements across repos
- `/catchup` for rebuilding branch context after clearing a session
- `/orchestrate` for multi-agent coordination when the task is large enough to justify it
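For orientation, a command in OpenCode is just a markdown file with an optional frontmatter header; the body becomes the prompt when you run the command. A minimal sketch of a session-summary command (the frontmatter field and body text here are illustrative assumptions, not the published command):

```markdown
---
description: Summarize the session for handoff.
---

Summarize this session for the next person or agent:

- What was changed, and in which files.
- What was verified, and how.
- Where time or tokens were wasted.
- Concrete next improvements to the workflow.
```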
## A small starter pack
Most teams do not need a huge command catalog.
This is a good first pack:
```
AGENTS.md
.opencode/
  commands/
    learn.md
    finish-work.md
    careful-review.md
    session-summary.md
  skills/
    systematic-debugging/SKILL.md
    verification-before-completion/SKILL.md
    ask-questions-if-underspecified/SKILL.md
    writing-plans/SKILL.md
```
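Each skill is a single SKILL.md: a short frontmatter header that tells the agent when to load it, plus the instructions themselves. A minimal sketch of what verification-before-completion could look like (the frontmatter fields follow the common Agent Skills convention; the body text is an illustrative assumption, not the published skill):

```markdown
---
name: verification-before-completion
description: Use before declaring any task done. No success claim without fresh evidence.
---

Before claiming the work is complete:

1. Re-run the relevant tests or build from a clean state.
2. Report the actual output; never summarize results from memory.
3. If anything fails, go back to debugging rather than softening the claim.
```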
Then add a few rules to AGENTS.md:
- use `systematic-debugging` for bugs and failed tests
- use `ask-questions-if-underspecified` before ambiguous implementation work
- use `verification-before-completion` before saying work is done
- run `/finish-work` and `/careful-review` before commit or PR handoff
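Written out as a snippet for AGENTS.md, those rules might read like this (the heading and exact wording are an illustrative sketch, not a prescribed format):

```markdown
## Skill and command rules

- Use the `systematic-debugging` skill for bugs and failed tests.
- Use `ask-questions-if-underspecified` before ambiguous implementation work.
- Use `verification-before-completion` before declaring work done.
- Run `/finish-work` and `/careful-review` before any commit or PR handoff.
```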
That is already enough to improve how an agent behaves inside a real repo.
## Final note
The good part of this ecosystem is that you do not need to invent everything from scratch.
There are already plenty of strong public patterns. Reuse them. Trim them down. Adapt them to your own repo.
And remember the main rule: the best skills are the ones that describe your team's repetitive tasks.
## Sources

- OpenCode docs: opencode.ai/docs
- OpenCode repo: anomalyco/opencode
- OpenCode `/learn`: raw file
- obra/superpowers skills: repo
- OpenAI's skills writeup for the Agents SDK repos: blog post
- Claude Code skills docs: docs
- cook, for `race` and `pick`: project page
