r/ClaudeAI
by Signal_Question9074
I reverse-engineered the workflow that made Manus worth $2B and turned it into a Claude Code skill
934 points
162 comments
92% upvoted
Content
Meta just acquired Manus for $2 billion. I dug into how their agent actually works and open-sourced the core pattern.
The problem with AI agents: after many tool calls, they lose track of goals. Context gets bloated. Errors get buried. Tasks drift.
Manus's fix is stupidly simple — 3 markdown files:
* `task_plan.md` → track progress with checkboxes
* `notes.md` → store research findings (instead of stuffing them into context)
* `deliverable.md` → the final output
The agent reads the plan before every decision. Goals stay in the attention window. That's it.
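For anyone who wants to see the shape of the loop, here's a minimal Python sketch of what "read the plan before every decision" looks like. This is only an illustration, not the skill itself (the skill drives this behavior through instructions to Claude, not code), and the helper names `next_action`, `execute_step`, and `update_plan` are hypothetical stand-ins for model and tool calls:

```python
from pathlib import Path

PLAN = Path("task_plan.md")
NOTES = Path("notes.md")
DELIVERABLE = Path("deliverable.md")

def next_action(plan: str, notes: str) -> str:
    """Hypothetical model call that picks the next unchecked step."""
    raise NotImplementedError

def execute_step(action: str) -> str:
    """Hypothetical tool call that performs the step and returns findings."""
    raise NotImplementedError

def update_plan(plan: str, action: str) -> str:
    """Hypothetical helper that ticks the checkbox for a finished step."""
    raise NotImplementedError

def run_task(goal: str) -> None:
    # Seed the plan with the goal and a first checkbox.
    if not PLAN.exists():
        PLAN.write_text(f"# Goal\n{goal}\n\n## Steps\n- [ ] break the task into steps\n")

    # Loop until every checkbox in the plan is ticked.
    while "- [ ]" in PLAN.read_text():
        # Re-read the plan before every decision so the goal stays in the
        # attention window even after many tool calls.
        plan = PLAN.read_text()
        notes = NOTES.read_text() if NOTES.exists() else ""

        action = next_action(plan, notes)
        findings = execute_step(action)

        # Persist instead of stuffing context: findings go to notes.md,
        # progress goes back into task_plan.md.
        NOTES.write_text(notes + f"\n- {action}: {findings}")
        PLAN.write_text(update_plan(plan, action))

    # The final output lives in its own file, not in the chat transcript.
    DELIVERABLE.write_text("# Deliverable\n\nSee notes.md for sources.\n")
```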
I packaged this into a **Claude Code skill**. Works with the CLI. Install in 10 seconds:
`cd ~/.claude/skills`
`git clone https://github.com/OthmanAdi/planning-with-files.git`
MIT licensed. First skill to implement this specific pattern.
![Screenshot](https://preview.redd.it/dkvk3d0uc3bg1.png?width=1329&format=png&auto=webp&s=bd3320c687aec02b3a7fa92fc3272485838a36f5)
Curious what you think — anyone else experimenting with context engineering for agents?