
How To De-Slop A Codebase Ruined By AI (with one skill)

By Matt Pocock

Video: https://www.youtube.com/watch?v=3MP8D-mdheA

Video ID: 3MP8D-mdheA

Duration: 11:19

Transcript status: ok

Generated: 2026-05-02T07:00:36Z

Core thesis

AI does not make code architecture irrelevant. It makes architecture debt compound faster. If agents repeatedly change a codebase without understanding its module boundaries, they create duplicated rules, weak seams, and shallow abstractions. The cure is not “use less AI”; it is to make the architecture more legible to both humans and agents through deep modules, explicit interfaces, named seams, adapters, locality, and leverage.
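
To make that vocabulary concrete, here is a minimal TypeScript sketch of the shallow-vs-deep distinction; the pricing module and all names are invented for illustration, not taken from the video.

```ts
// Shallow: a wide surface over thin logic. Every caller must know the
// order of steps, so the pricing rules get re-implemented at each call site.
export const getBasePrice = (sku: string): number => 100; // stub
export const applyDiscount = (price: number): number => price * 0.9; // stub
export const addTax = (price: number, region: string): number =>
  region === "EU" ? price * 1.2 : price; // stub

// Deep: the same logic behind one small, explicit interface. Callers
// (human or agent) get a single handle, and the rules live in one place.
export interface PricingService {
  quote(sku: string, region: string, discountCode?: string): number;
}

export const pricing: PricingService = {
  quote(sku, region, discountCode) {
    let price = getBasePrice(sku);
    if (discountCode !== undefined) price = applyDiscount(price);
    return addTax(price, region);
  },
};
```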

The practical move in this video is to use an `improve-codebase-architecture` skill as a structured architecture-review partner: first teach the agent a shared vocabulary, then have it search for “deepening opportunities,” then let the human choose and refine the refactor before delegating implementation.
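
Pocock's actual skill file is not reproduced here. As a rough sketch, assuming the SKILL.md-with-YAML-frontmatter convention that agent skills commonly use, it might look like this; the wording is illustrative, not the real skill:

```md
---
name: improve-codebase-architecture
description: Read-only architecture review. Finds deepening opportunities; never edits code.
---

Shared vocabulary: module, interface, implementation, seam, adapter,
locality, leverage (define each for this codebase).

Process:
1. Inspect only; do not edit or write files.
2. Search for deepening opportunities: duplicated rules, unclear seams,
   shallow modules (wide interface, thin logic), missing tests.
3. Cite evidence for each candidate: file paths and the duplicated rule
   or weak seam in question.
4. Present candidates and stop. The human ranks and chooses.
```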


Comment-derived insights

The comment section is split between strong agreement, jokes about rediscovering basic software engineering, and skepticism about using yet another AI skill to repair AI-created mess.

Practical workflow to steal

1. Write down your architecture vocabulary. Define what “module,” “interface,” “implementation,” “seam,” “adapter,” “locality,” and “leverage” mean in your codebase.

2. Ask the agent to inspect, not edit. The first pass should only surface deepening opportunities, backed by evidence: files, duplicated rules, unclear seams, shallow modules, missing tests.

3. Rank candidates manually. Prefer refactors that improve locality and create a testable seam around high-change logic.

4. Interrogate the design. Ask the agent to propose the module interface, invariants, adapters, test cases, and migration path (see the interface sketch after this list).

5. Turn the chosen refactor into an issue/PRD. Keep implementation separate from diagnosis so another agent or future session can execute with clear boundaries (a minimal template follows the list).

6. Add tests at the seam. The whole point of finding seams is to create a harness that prevents future AI changes from silently reintroducing drift (a seam-test sketch follows the list).

7. Repeat periodically, but do not outsource judgment. Run architecture review often in fast-moving codebases, but treat the output as candidate strategy, not ground truth.
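
For step 4, the useful artifact is a concrete interface proposal. Here is a TypeScript sketch of what to ask for, built around a hypothetical `entitlements` module (all names invented for illustration): a small interface, its invariants written down where the compiler cannot enforce them, and an adapter that lets call sites migrate gradually.

```ts
// Hypothetical legacy helper the adapter wraps during migration.
import { legacyCanUseFeature } from "./legacy/permissions";

// Proposed interface. Invariants the implementation must keep:
// - check() never rejects; unknown features resolve to { allowed: false }
// - feature names are normalized to lowercase before lookup
export interface Entitlements {
  check(
    userId: string,
    feature: string
  ): Promise<{ allowed: boolean; reason?: string }>;
}

// Adapter: old call sites keep working and migrate one at a time,
// instead of in a single risky sweep.
export function legacyEntitlementsAdapter(): Entitlements {
  return {
    async check(userId, feature) {
      try {
        const allowed = await legacyCanUseFeature(userId, feature.toLowerCase());
        return { allowed };
      } catch {
        return { allowed: false, reason: "legacy lookup failed" };
      }
    },
  };
}
```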
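
For step 5, the issue should carry the boundaries, not the diff. A minimal template, with structure invented for illustration and continuing the hypothetical entitlements example:

```md
## Refactor: deepen the entitlements module

- Why: plan-gating rules are duplicated across N call sites (list them).
- Interface: `Entitlements.check(userId, feature)` as sketched above.
- Invariants: never rejects; unknown features resolve to not allowed.
- Migration: adapter over the legacy helper; move call sites one PR at a time.
- Out of scope: billing logic, plan CRUD.
- Done when: seam tests pass and the duplicated rules are deleted.
```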
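
And for step 6, a test at the seam pins the invariant so a later agent edit cannot silently break it. A minimal sketch, assuming Vitest and the hypothetical adapter above:

```ts
import { describe, expect, it } from "vitest";
// Hypothetical module from the step-4 sketch.
import { legacyEntitlementsAdapter } from "./entitlements";

describe("entitlements seam", () => {
  it("resolves unknown features to not allowed, without throwing", async () => {
    const entitlements = legacyEntitlementsAdapter();
    const result = await entitlements.check("user-1", "no-such-feature");
    expect(result.allowed).toBe(false);
  });
});
```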

My read / why it matters

This is one of the more useful agent-coding patterns because it does not pretend the agent is an architect. It treats the agent as a tireless codebase scout that can find duplicated logic, missing seams, and suspicious module boundaries, then asks the human to make the architectural call.

The strongest lesson is that AI coding raises the value of old-school software design. Deep modules, small interfaces, test seams, and locality are not academic concerns; they are what let agents make changes without wrecking the system. If you want agents to move fast and safely, you need architecture that gives them clear handles.