r/ClaudeCode
by philip_laureano
REMINDER: You can actually ask Claude Code to chain the output of your subagents to each other to form a pipeline that does some very useful things.
9 points
12 comments
81% upvoted
This means you can ask it to do something like:
Investigation → Implementation → Test Gaps → Validation → Tests → Audit in one prompt (thanks to the F# programming language, which it understands, but I'll get to that bit later).
Now, to give you my setup, I have the following agents:
\- **Legacy Code Characterizer:** This one follows a specific flow chart I wrote. It finds seams in the code (classic Michael Feathers style), injects test hooks, writes the tests, and then verifies it didn't change the existing behaviour. Again, nothing magical here--it just automates, as a subagent, what I would normally do by hand.
\- **Investigation Agent:** Give it a task and it turns it into a plan that follows YAGNI+SOLID+DRY+KISS principles, deferring to Occam's Razor/KISS whenever possible.
\- **Refactoring Agent:** Maps out an entire solution, finds the ugliest classes, and splits them into smaller dependencies step by step while trying not to break anything (and it usually does a good job because of the steps I have it follow).
\- **Implementer Agent:** This is a special type of agent that has my custom prompts and follows my coding/working style. It's my standards, embodied in the equivalent of a junior dev.
\- **Auditor Agent:** Takes a finished implementation, looks at the plan that was created for it, compares that plan against what was actually done, and produces a report explaining the gaps and what still needs to be done.
Now here's the fun part--since we all know that Claude Code and its family of LLMs understand functional languages like F#, I can tell it to run an entire dev pipeline just by giving it this prompt:
*"Run the following pipeline on project XYZ and report to me when you're done:*
*Investigation agent (produce plan)*
*|> Implementer agent (execute plan)*
*|> Legacy-code-characterizer (find test gaps, produce test plan)*
*|> Investigation agent (validate test plan)*
*|> Implementer agent (write tests)*
*|> Auditor agent (verify implementation matches intent)"*
For example, in my current Android project, my spec was: "Add a dispatch form to my Android app - hashtag input, repo dropdown, Send button, and loopback/mock mode."
NOTE: The reason I picked the F# programming language as a 'prompt' for Claude is that it's very good at expressing operations that should be chained together. You could use plain English if you prefer, but I'd rather skip all that extra typing when I can just type the '|>' pipe operator.
**Anyway, what it gave me was:**
\- 2 production files created (ViewModel + Screen)
\- 3 files modified (Navigation, SpeedDialFab, ConversationListScreen)
\- 16 validated tests (reduced from 21 proposed - removed redundancy)
\- Audit report: 11/11 outcomes achieved, 0 findings (and if it did find something, it would write it to a file that I could feed back into the pipeline)
\- Build passing, all tests green
**Total wall clock time:** \~15-ish minutes. And fine, I have to admit this counts as "vibe engineering", but what's important here is that \*every\* agent on this list was built on my standards. At every step of the way, each agent checks the output of the one before it and fixes it, which keeps tech debt from piling up. And even if it messes up, it moves so quickly that you can regenerate the entire thing in another 15 minutes flat.
The best part about this whole approach is that you don't have to follow my standards. **You can build your own agents and do exactly the same thing you see here.**
There is absolutely nothing "magical" or non-standard about my Claude Code setup that would prevent you from doing the same on your own CC setup. You can do this today, if you want to spend the extra time building those agents to your specifications.
If you have never written your own subagents, Claude Code makes this easy: you can have it create the prompt file for you just by typing /agents from the CLI itself. It's that easy.
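Under the hood, a subagent ends up as a markdown file with YAML frontmatter saved under `.claude/agents/` in your project (or `~/.claude/agents/` for user-level agents). As a rough sketch, an auditor-style agent could look something like this - the prompt body here is a hypothetical placeholder, not my actual agent:

```
---
name: auditor
description: Compares a finished implementation against the plan it was built from and reports the gaps. Use after the implementer agent has run.
tools: Read, Grep, Glob
---
You are an audit agent. Read the plan file and the implementation you are
given, compare what was planned against what was actually built, and write a
report listing every gap, deviation, and piece of missing test coverage.
Do not modify any code.
```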
I highly recommend spending the 5-10 minutes to set up your custom agents and try chaining them together.
This is one of those features buried in Claude Code that you can easily forget about if you use it in the vanilla sense (like most people).
You can do things like:
"Refactor the UserService class"
|> Investigation (map dependencies)
|> Refactorer (split into focused services)
|> Implementer (update all call sites)
|> Legacy-characterizer (wrap with tests)
|> Auditor (verify no behavior change)
**Parallel Branching:**
"Add authentication to the API"
|> Investigation (produce plan)
|> (Implementer (backend) ||| Implementer (frontend)) // parallel/conceptually--as long as Claude Code gets the intent here
|> Legacy-characterizer (integration test plan)
|> Implementer (write tests)
|> Auditor (verify against security checklist)
**Or tell Claude Code:**
Investigation |> Implementer |> Test |> Audit |> (if findings > 0 then Implementer else Done)
**Translation:** Investigate, implement, test, audit - and if audit finds gaps, automatically fix them.
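If you want that feedback loop to be explicit, you can spell it out in the same pipe style - the findings file name and the iteration cap here are just placeholders:

```
Investigation |> Implementer |> Legacy-characterizer (tests) |> Auditor
If the Auditor writes any findings to audit-findings.md, feed that file back
into the Implementer and re-run the Auditor. Stop when there are no findings
left, or after 3 passes.
```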
The rest I leave to your imagination.
Hope that helps! 😅💪🖖
EDIT: Oh and I almost forgot--you can pass state between agents using plain text files saved to disk, and have each agent read whatever file the previous one wrote when it starts. Again, if you've worked with Claude Code enough, this is pretty standard, but it's worth mentioning for people just starting with CC.
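For example, you can name the handoff files right in the pipeline prompt so each agent knows exactly what to read and write (the paths here are just placeholders):

```
Investigation agent (write plan to docs/plan.md)
|> Implementer agent (read docs/plan.md, execute it, write a summary to docs/impl-summary.md)
|> Auditor agent (read both files, write gaps to docs/audit-report.md)
```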
EDIT 2: If you really want to know how this works, it's because I have Opus 4.5 (the top-level Claude Code session) act as the "orchestrator" and tell it that its responsibility is to manage a smooth handoff between agents. To give you an idea of how efficient this system is, take a look at these stats from a single day of using it with "vanilla" Claude Code, running my legacy test agent in a pipeline that was filling in all my testing gaps and testing tech debt:
Updated Grand Total (Today's Session)
| Metric | Value |
|----------------------|--------|
| New tests created | 982 |
| Total verified tests | 2,538+ |
| Before session | 1,556 |
| After session | 2,538 |
| Increase | +63% |
And it wasn't slop, either--I had my agents map out all the points where the tests weren't covering critical features, then handed that off to other agents that wrote those tests, followed by other agents with strict standards that act in an adversarial setup and call out bad code when they see it.
It certainly has a long way to go, but the 'defense in depth' approach, plus every agent having my standards baked in, makes it easier to manage this much code being created at scale without making me the human bottleneck.
As with everything, YMMV, but JFC. After seeing what these things can do, I can't see myself ever going back to snail-dev.