12 OpenCode Skills Every Dev Team Should Steal

Written by bezgin | Published 2026/04/07
Tech Story Tags: ai | claude-code | codex | opencode | opencode-skills | ai-agent-workflow | ai-assisted-code-review | ai-coding-agent

TL;DR: OpenCode becomes powerful when you turn repetitive engineering tasks into reusable skills and commands. This guide highlights the most useful workflows for debugging, planning, verification, and reviews. Start small with a core set of skills, enforce behavior through AGENTS.md, and evolve based on real team routines. The real value isn't public skills; it's customizing them to your workflow.

I reviewed OpenCode's docs, the OpenCode GitHub repo, recent writeups, and a large set of public skills and commands people share for OpenCode, Claude Code, Codex, and related tools.

A clear pattern showed up quickly. The useful ideas are small, concrete workflows that improve routine engineering work: planning, debugging, review, verification, context loading, and memory capture.

So this is not a big theory piece. It is a compact collection of skills and commands worth reusing.

Start with this rule

The best skill is the one you make yourself to automate your own routine.

Public skills are useful because they give you working patterns. The real payoff comes when you turn your repeated work into a reusable skill or command:

  • the way your team reviews migrations
  • the checks you run before a release
  • the steps you use during incident response
  • the handoff format you expect before a PR

That is where skills stop being a demo and start becoming infrastructure.
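To make that concrete, a skill is usually just a markdown file with a short frontmatter followed by instructions. Here is a hypothetical SKILL.md for one of the routines above, the way your team reviews migrations; the frontmatter fields follow the common SKILL.md convention, but the exact format depends on your harness, and every rule below is made up for illustration:

```markdown
---
name: migration-review
description: Use when reviewing a PR that adds or changes a database migration
---

# Migration review

1. Confirm a down migration exists and actually reverses the up migration.
2. Flag any column drop or type change; ask for the table size before approving.
3. Verify the migration runs inside a transaction, or state why it cannot.
4. Require a rollback note in the PR description before calling the review done.
```

The value is not the markdown itself; it is that the checklist your team already runs in their heads now fires automatically every time the agent touches a migration.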

Skills worth reusing

These are listed in rough priority order.

  • systematic-debugging (obra/superpowers). Strong rule set for real debugging; it forces root-cause investigation before fixes.
  • verification-before-completion (obra/superpowers). High-value finishing skill: no success claim without fresh evidence.
  • test-driven-development (obra/superpowers). Useful because agents often skip the failing-test step and then rationalize it later.
  • implementation-strategy (OpenAI blog). Good before runtime, API, or migration work; it forces design decisions before edits.
  • ask-questions-if-underspecified (trailofbits/skills). Good behavior-shaping skill; it keeps the agent from charging ahead on vague requirements.
  • code-change-verification (OpenAI blog). Practical repo skill: run the exact verification stack whenever code or build behavior changes.
  • writing-plans (obra/superpowers). Turns a fuzzy task into a concrete execution plan with files, tests, and checkpoints.
  • using-git-worktrees (obra/superpowers). Good isolation skill for parallel work, experiments, and larger agent-driven changes.
  • dispatching-parallel-agents (obra/superpowers). One of the better public playbooks for splitting independent tasks across agents.
  • receiving-code-review (obra/superpowers). Teaches the agent to verify review feedback instead of blindly applying it.
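The using-git-worktrees pattern is worth trying by hand before handing it to an agent. A minimal sketch, using a throwaway repo; the paths and the agent/experiment branch name are made up for illustration:

```shell
# Sketch of the using-git-worktrees isolation pattern in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# Give the agent its own checkout on a fresh branch, isolated from yours:
wt="$repo-experiment"
git worktree add -q "$wt" -b agent/experiment

# ...agent edits files under $wt without touching your working tree...

# Tear down the worktree; the branch survives for review and merge:
git worktree remove "$wt"
git branch --list "agent/*"
```

Because each worktree is a separate directory sharing one object store, several agents can run in parallel without stepping on each other's uncommitted changes.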

Good runners-up:

  • docs-sync for keeping docs aligned with code
  • changeset-validation for JS monorepos and release hygiene
  • openai-knowledge as a pattern for "use current docs, not memory"
  • variant-analysis for security and bug sibling hunting

Commands worth reusing

These are the commands I would look at first when setting up an OpenCode repo.

  • /learn (OpenCode repo). Best memory pattern I found; it writes non-obvious lessons into the right AGENTS.md.
  • /finish-work (Trellis). Strong pre-commit gate covering code quality, docs, API, DB, cross-layer impact, and manual checks.
  • /check-cross-layer (Trellis). Catches a common failure: one layer changed, another one missed.
  • /context-prime (just-prompt). Small but effective bootstrap command; it loads enough repo context before real work starts.
  • /careful-review (harperreed/dotfiles). Forces a fresh-eyes pass before the work gets called done.
  • /update-spec (Trellis). Good companion to /learn; it captures design decisions and hard-won lessons in living specs.
  • /find-missing-tests (harperreed/dotfiles). Useful when the code works but the safety net is thin.
  • /race and pick (cook). Runs isolated implementations in parallel, then chooses a winner or merges ideas.
  • /session-summary (harperreed/dotfiles). Good handoff command; it captures actions, cost, inefficiencies, and next improvements.
  • /harness-audit (everything-claude-code). Useful meta-command for auditing the agent stack itself: tools, quality gates, memory, security, and cost.

Good runners-up:

  • /generate-command-diff for porting command improvements across repos
  • /catchup for rebuilding branch context after clearing a session
  • /orchestrate for multi-agent coordination when the task is large enough to justify it
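Commands follow the same shape as skills: a markdown file whose body becomes the prompt when you type its name. A hypothetical pre-release command as an illustration; the filename maps to the slash name, but the checklist items are invented, not taken from any of the repos above:

```markdown
<!-- .opencode/commands/release-check.md (hypothetical) -->
Run the release checklist for this repo:

1. Run the full test suite and report any failures verbatim.
2. Diff the public API against the last tagged release and list breaking changes.
3. Confirm CHANGELOG.md has an entry for the pending version.
4. Stop and ask before tagging or publishing anything.
```

A command like this is deliberately boring: the point is that the same gate runs the same way no matter who, or which agent, triggers it.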

A small starter pack

Most teams do not need a huge command catalog.

This is a good first pack:

AGENTS.md
.opencode/
  commands/
    learn.md
    finish-work.md
    careful-review.md
    session-summary.md
  skills/
    systematic-debugging/SKILL.md
    verification-before-completion/SKILL.md
    ask-questions-if-underspecified/SKILL.md
    writing-plans/SKILL.md

Then add a few rules to AGENTS.md:

  • use systematic-debugging for bugs and failed tests
  • use ask-questions-if-underspecified before ambiguous implementation work
  • use verification-before-completion before saying work is done
  • run /finish-work and /careful-review before commit or PR handoff
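Spelled out, those rules can live in AGENTS.md roughly like this; the wording is illustrative, not a fixed syntax:

```markdown
# AGENTS.md (excerpt)

## Required skills
- For bugs and failed tests, follow systematic-debugging before proposing a fix.
- For ambiguous implementation work, apply ask-questions-if-underspecified first.
- Before claiming any task is done, run verification-before-completion.

## Before commit or PR handoff
- Run /finish-work, then /careful-review. Do not open a PR if either fails.
```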

That is already enough to improve how an agent behaves inside a real repo.

Final note

The good part of this ecosystem is that you do not need to invent everything from scratch.

There are already plenty of strong public patterns. Reuse them. Trim them down. Adapt them to your own repo.

And remember the main rule: the best skills are the ones that describe your team's repetitive tasks.

Written by bezgin | Data Engineer
Published by HackerNoon on 2026/04/07