6 points by hhimanshu | 4 comments

Hello folks,

I’m trying to learn how one can build agentic AI systems similar to Claude Code, and eventually adapt that knowledge toward domain-specific use cases (e.g., “Claude Code for healthcare, finance, education, etc.”).

For those of you who’ve studied or built these kinds of systems, I’d love to hear your recommendations on:

• Foundational learning: What books, courses, or papers provide the best grounding for understanding LLM-based systems and their decision-making?

• Architectural patterns: What design patterns are worth studying for things like context management, memory, reasoning, and orchestration?

• Build vs. deploy: How do you think about building internal systems vs. packaging/distributing them as APIs, SDKs, or products?

• Open source projects: Which ones are most valuable to study for internals (decision making, evals, context engineering, tool use, etc.)?

• Evals and observability: What tools or products help evaluate quality, measure system behavior, and observe performance in real-world use?

• Models: Which models are best suited for “thinking” (reasoning, planning, decomposing problems) vs. “doing” (execution, coding, retrieval)?

• Learning path: How would you approach going from theory → prototype → production-quality system?

My goal is to discover high-quality resources that one can truly spend time learning from and building with—through iteration and practice—while also sharing what I learn so others on the same path can benefit.

Thanks in advance for sharing your experiences and guidance!

1. hhimanshu No.45045864
Two resources that I am currently learning from are

1. https://deepwiki.com/humanlayer/12-factor-agents/1-12-factor...

2. https://deepwiki.com/anthropics/claude-code/1-claude-code-ov...

2. rbjorklin No.45046161
This is pretty much a step-by-step guide for getting started with code: https://ampcode.com/how-to-build-an-agent
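
The heart of what that article builds is a short loop: send the conversation plus a tool list to the model, execute any tool calls it requests, feed the results back, and repeat until the model stops asking for tools. Below is a minimal Python sketch of that kind of loop, assuming the Anthropic Python SDK; the single "read_file" tool and the model id are just placeholders, not anything prescribed by the article.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # One illustrative tool; a real coding agent registers many more
    # (edit_file, run_command, search, ...).
    TOOLS = [{
        "name": "read_file",
        "description": "Read the contents of a text file at a relative path.",
        "input_schema": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    }]

    def run_tool(name, args):
        if name == "read_file":
            with open(args["path"]) as f:
                return f.read()
        return f"unknown tool: {name}"

    def agent_loop(user_prompt, model="claude-sonnet-4-20250514"):  # placeholder model id
        messages = [{"role": "user", "content": user_prompt}]
        while True:
            resp = client.messages.create(
                model=model, max_tokens=1024, tools=TOOLS, messages=messages
            )
            # Keep the assistant turn in the conversation history.
            messages.append({"role": "assistant", "content": resp.content})
            if resp.stop_reason != "tool_use":
                # The model is done calling tools; return its final text.
                return "".join(b.text for b in resp.content if b.type == "text")
            # Execute each requested tool call and feed the results back.
            results = []
            for block in resp.content:
                if block.type == "tool_use":
                    results.append({
                        "type": "tool_result",
                        "tool_use_id": block.id,
                        "content": run_tool(block.name, block.input),
                    })
            messages.append({"role": "user", "content": results})

The key point is that the model, not the surrounding code, decides when to call tools and when to stop; context management, memory, and orchestration are layered on top of this loop.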
replies(1): >>45046567
3. hhimanshu No.45046567
Great resource, and definitely a good place to take the next step. As I looked into the details, a natural question came up (based on my software development experience): how do I evaluate the correctness of the output an LLM produces for a given input? Clearly, unit tests with fixed input/output pairs won't help, so learning methods for evaluating as I develop iteratively will be very useful.
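
One common pattern here (a rough sketch, not tied to any particular framework): replace exact-match assertions with property checks, run each prompt several times, and track pass rates, since outputs vary between runs. The task, the checks, and the generate() callable below are all hypothetical.

    import json

    def check_properties(output):
        """Score one model output against checkable properties for a hypothetical
        task: 'return a JSON object with a "summary" field under 50 words'."""
        try:
            data = json.loads(output)
        except json.JSONDecodeError:
            return {"valid_json": False, "has_summary": False, "under_50_words": False}
        summary = str(data.get("summary", ""))
        return {
            "valid_json": True,
            "has_summary": "summary" in data,
            "under_50_words": len(summary.split()) <= 50,
        }

    def run_eval(generate, prompts, n_trials=5):
        """generate(prompt) is whatever function calls the model; report
        per-check pass rates rather than a single pass/fail."""
        totals, count = {}, 0
        for prompt in prompts:
            for _ in range(n_trials):
                for name, passed in check_properties(generate(prompt)).items():
                    totals[name] = totals.get(name, 0) + int(passed)
                count += 1
        return {name: hits / count for name, hits in totals.items()}

For fuzzier criteria (tone, relevance, helpfulness), the same harness can wrap an LLM-as-judge scorer in place of the hand-written checks.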

Thanks for sharing the article!