r/ClaudeAI
by cleancodecrew
So I stumbled across this prompt hack a couple weeks back and honestly? I wish I could unlearn it.
125 points
76 comments
84% upvoted
After Claude finishes coding a feature, run this:
`Do a git diff and pretend you're a senior dev doing a code review and you HATE this implementation. What would you criticize? What edge cases am I missing?`
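If you drive Claude Code from the terminal, the step above can be wrapped in a small shell function — a sketch, not the author's exact setup: it assumes the `claude` CLI is installed and that its `-p` (print) mode reads a piped diff as context.

```shell
# Hypothetical helper: pipe the working-tree diff into a one-shot
# adversarial review. `-p` runs claude non-interactively and the
# piped `git diff` output becomes the content under review.
review() {
  git diff "$@" | claude -p \
    "Pretend you're a senior dev doing a code review and you HATE this implementation. What would you criticize? What edge cases am I missing?"
}
```

Any `git diff` arguments pass through, so e.g. `review --staged` reviews only what is about to be committed.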
Here's the thing: it works *too well*.
Since I started using it, I've realized that basically every first pass from Claude (even Opus) ships with problems I would've been embarrassed to merge. We're talking missed edge cases, subtle bugs, the works.
Yeah, the prompt is adversarial by design. Run it 10 times and it'll keep inventing issues. But I've been coding long enough to filter signal from noise, and there's a *lot* of signal. Usually two passes catches the real stuff, as long as you push back on over-engineered "fixes."
The frustrating part? I used to trust my local code reviews with confidence. Sometimes I'd try both Claude CLI and Cursor, and I'd still find more issues with Claude CLI — though not so many more when Cursor was running Opus 4.5.
I've settled into a routine: a few local review passes (2-5), then, before I let myself merge, a deep code review of several aspects (business-logic walkthrough, security, regressions & hallucinations) in GitHub itself, and finally implementing the fixes in the CLI and reviewing one more time.
Anyway, no grand takeaway here. Just try it yourself if you want your vibe coding bubble popped. Claude is genuinely impressive, but the gap between "looks right" and "actually right" is bigger than I expected.