
168 points | selvan | 1 comment | source
_pdp_ ◴[] No.44462442[source]
Mildly interesting article - I mean, you can already run a ton of libraries that talk to an inference backend. The only difference here is that the client-side code is in Python, which by itself doesn't make creating agents any simpler - I would argue it complicates things a ton.

Also, connecting a model to a bunch of tools and dropping it into some kind of workflow is maybe 5% of the actual work. The rest is spent on observability, background tasks, queueing systems, multi-channel support for agents, user experience, etc., etc., etc.

Nobody talks about that part, because most of the content out there is just chasing trends - written without much real-world experience running these systems or putting them in front of actual customers with real needs.

replies(1): >>44463146 #
mentalgear ◴[] No.44463146[source]
Agreed. Regarding the other parts of the LLM stack, have a look at what is, IMO, the best LLM coordination/observability platform TS library: https://mastra.ai/
replies(1): >>44463571 #
_pdp_ ◴[] No.44463571[source]
Thanks. We are building chatbotkit.com / cbk.ai (not open source).