CORTEX
r/ClaudeCode by purealgo New

LLM Engineering Skills for AI Agents

3 points 0 comments 100% upvoted

Content

I’ve been building AI agents for real systems, and referencing these skills has proven far more efficient than using MCPs, because only the specific skill needed is loaded into the context window rather than the entire interface.

Agents can follow prompts and call tools, but things like reasoning about prompt design tradeoffs, choosing tools deliberately, running evaluation and iteration loops, and handling failure modes and real-world constraints are usually embedded in ad-hoc logic, docs, or the developer’s head rather than being reusable capabilities.

I recently open sourced a Skills plugin that tries to make this engineering knowledge explicit and composable, as skills that agents can actually invoke and reuse. It is already installable in Claude Code and Codex. What I am aiming to avoid is the pattern where every agent reimplements similar prompt logic, hardcodes evaluation flows, and loses context between iterations.

This project focuses specifically on the practical engineering side of working with LLMs: the kind of knowledge most of us pick up by shipping systems and debugging failures rather than reading examples. I am shaping it based on real usage, not just demos.

I would love input from people building agents in practice. What LLM engineering knowledge do you wish your agents could reuse? What skills would actually be valuable across projects? Where do you think this abstraction breaks down?

Repo here if you want to explore or critique it: https://github.com/itsmostafa/llm-engineering-skills. Fully open source; ideas, criticism, and contributions are welcome.
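
To make the context-efficiency argument concrete, here is a minimal sketch of what one such skill might look like, assuming the SKILL.md convention Claude Code uses for agent skills (a directory per skill with YAML frontmatter giving a name and description). The path, skill name, and instructions below are illustrative only and are not taken from the linked repo:

    .claude/skills/prompt-design-tradeoffs/SKILL.md

    ---
    name: prompt-design-tradeoffs
    description: Guidance for weighing prompt design tradeoffs (few-shot vs.
      zero-shot, structured output, context budget) before editing a prompt.
    ---

    # Prompt design tradeoffs

    When asked to improve a prompt:
    1. Identify the failure mode first (format drift, missed constraint, hallucination).
    2. Prefer the smallest change that addresses it; avoid adding examples that inflate context.
    3. Re-run the existing evaluation loop before and after the change and compare results.

Because the agent pulls a file like this into context only when that specific skill is relevant, the context cost stays proportional to the task at hand, which is the efficiency claim made above compared with loading an entire MCP interface up front.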

Comments

No comments fetched yet
