By Nate Herk | AI Automation · 819s · transcript ok · added 2026-05-04 00:11 GMT+8

I Tried 100+ Claude Code Skills. These 6 Are The Best

Video: https://www.youtube.com/watch?v=eRS3CmvrOvA
Video ID: eRS3CmvrOvA
Analysis updated: 2026-05-03

Actionable Insights

  • Use Skill Creator for client-specific SOP skills first: convert a repetitive business process into a reusable Claude skill before selling complex automations.
  • For production software work, test a discipline framework such as Superpowers before accepting one-shot Claude output; require planning, tests, edge-case checks, and self-review.
  • Use GSD (gsd-build/get-shit-done) for larger specs where context rot is likely: split the work into fresh-context subagents and atomic tasks.
  • Run the built-in /review on every meaningful change, and reserve the heavier /ultra review pass for high-risk code if it is available in your Claude Code version/account.
  • For long sessions, evaluate Context Mode mksglu/claude-context-mode and memory tooling such as thedotmack/claude-mem; verify install commands from the repo before use.
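As a concrete sketch of what "packaging an SOP as a skill" means on disk: Claude Code skills are folders containing a SKILL.md with YAML frontmatter. The skill name, frontmatter fields, and SOP steps below are illustrative (the real-estate example comes from the transcript); confirm the exact file format against Anthropic's current skills documentation before shipping to a client.

```shell
# Create a minimal client-specific skill folder. Layout and frontmatter
# fields (name, description) follow the documented SKILL.md shape;
# the skill content itself is a made-up example SOP.
mkdir -p .claude/skills/property-descriptions
cat > .claude/skills/property-descriptions/SKILL.md <<'EOF'
---
name: property-descriptions
description: Draft MLS-ready real-estate property descriptions from listing facts.
---

# Property Descriptions

1. Ask for address, beds/baths, square footage, and standout features.
2. Draft a 120-180 word description in the client's house style.
3. Flag any claim (e.g. "newly renovated") that the listing facts do not support.
EOF
```

Skill Creator automates writing files like this one; the selling point to a client is the encoded SOP, not the folder itself.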

Creator’s main claims

  1. Businesses pay for boring skills that save time, reduce errors, or lower cost.
  2. Skill Creator is the factory for making client-specific skills.
  3. Superpowers improves coding quality through planning, tests, and review.
  4. GSD fights context rot with fresh subagents and quality gates.
  5. /review, Context Mode, ClaudeMem, and frontend-design skills are practical quality/productivity layers.

Deep research verdicts

1. “Boring” client skills are more sellable than flashy demos

Verdict: Strong agree, high confidence. The business framing is sound.

Supporting evidence: the transcript gives concrete business examples such as real-estate property descriptions and dispatch/reporting systems.

Contradicting / limiting evidence: implementation quality, data access, consent, and maintenance matter more than the skill packaging itself.

Practical takeaway: sell outcomes and maintenance, not “a skill.”

2. GSD/context/memory tooling targets real Claude Code pain points

Verdict: Agree, medium confidence. Context rot and repeated onboarding are real problems.

Supporting evidence: GSD describes itself as a spec-driven/context-engineering system for Claude Code. Context Mode and ClaudeMem repos describe MCP/hooks/local DB approaches for reducing context bloat and recovering relevant history. Sources: https://github.com/gsd-build/get-shit-done/, https://github.com/mksglu/claude-context-mode, https://github.com/thedotmack/claude-mem

Contradicting / limiting evidence: these tools add moving parts (hooks, local databases) and raise security considerations. Claims like star counts, token savings, and autonomy should be checked against current repo state before adoption.

Practical takeaway: pilot one context/memory tool on a real project and track failure recovery, cost, and setup friction.

3. Review is the right default quality gate

Verdict: Strong agree, high confidence. The final checkpoint is where many agent workflows fail.

Supporting evidence: the transcript positions /review and /ultra review as post-build checks for bugs, edge cases, logic, security, and performance.

Contradicting / limiting evidence: review tools can produce false positives or miss integration issues; CI and human review still matter.

Practical takeaway: make review a merge gate, not an optional afterthought.
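One way to make "review as a merge gate" mechanical rather than optional: have CI refuse to merge unless a review artifact exists for the commit under test. Everything here (the function name, the reviews/ directory convention, the per-commit log file) is a hypothetical sketch, not something from the video or from Claude Code itself.

```shell
# Hypothetical merge gate: CI calls review_gate and blocks the merge
# on a non-zero exit. A /review (or human review) run is expected to
# write its findings to reviews/<commit>.txt before merge.
review_gate() {
  # Allow the commit id to be passed in; fall back to git, then "local".
  commit=${1:-$(git rev-parse HEAD 2>/dev/null || echo "local")}
  log="reviews/${commit}.txt"
  if [ -f "$log" ]; then
    echo "review gate passed for ${commit}"
    return 0
  fi
  echo "no review recorded for ${commit}; run /review and save the result" >&2
  return 1
}
```

Pairing a convention like this with branch protection (the gate as a required status check) turns review into a hard stop instead of an afterthought.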

Core thesis

The video is a practical sales/operator list: the Claude Code skills worth selling are the ones that turn one-off AI into repeatable business systems.

Comment-derived insights

  • Top comments ask why GSD is needed if Superpowers has subagent behavior, which highlights overlap and the need to choose tools by role.
  • The creator’s own pinned links point to free resources and install commands in the description; the analysis should treat transcript install commands as source-dependent.

Screen-level insights

  • 1:31 frame: branded “Skill Creator” card supports the claim that skill generation is the first/factory step.
  • 4:02 frame: Superpowers UI shows plan/tests/edge-case/gap-check stages, visually supporting the discipline-framework claim.

Verification notes

  • Actionable Insights audit: includes direct links for GSD, Context Mode, and ClaudeMem; unavailable/uncertain commands are not invented.
  • Source/evidence audit: external repo links verified by web search; Superpowers details retained as transcript claims where direct repo was not verified in this pass.
  • Transcript/comment/frame fidelity audit: claims match transcript sections and selected frames.
  • Hallucination/overclaim audit: star counts and token savings are treated cautiously.