
1. _pdp_ No.45400332
This might sound like an oversimplification, but we decided to use the conversations (which we already store) as the means to trace the agent's execution flow, both for automated runs and when it is interacted with directly.
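A minimal sketch of what that could look like, assuming a simple append-only store: each conversation turn (user message, assistant message, tool call, tool result) is one record, so reading the conversation back in order is the same thing as reading the execution trace. All the names here (Turn, Conversation, the role values, latency_ms) are illustrative, not taken from the post.

```python
# Hypothetical sketch: conversation entries that double as trace events.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Optional


@dataclass
class Turn:
    role: str                           # "user", "assistant", "tool_call", or "tool_result"
    content: str
    tool_name: Optional[str] = None     # set for tool_call / tool_result turns
    latency_ms: Optional[float] = None  # how long the step took, if measured
    metadata: dict[str, Any] = field(default_factory=dict)
    ts: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class Conversation:
    conversation_id: str
    turns: list[Turn] = field(default_factory=list)

    def append(self, turn: Turn) -> None:
        self.turns.append(turn)

    def execution_trace(self) -> list[Turn]:
        # Replaying the agent's run is just reading back the tool-related turns.
        return [t for t in self.turns if t.role in ("tool_call", "tool_result")]
```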

It feels more natural in terms of how LLMs operate. Conversations also give us a direct way to capture user feedback and use it to figure out which situations are challenging and might need to be improved. Doing the same with traces, while possible, does not feel as natural.
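Continuing the same sketch: feedback is just more data hanging off the conversation, so surfacing the conversations that need attention is an ordinary query rather than a separate observability pipeline. The Feedback type, the score convention, and the trace-length cut-off below are all assumptions for illustration (this reuses Conversation from the sketch above).

```python
# Hypothetical continuation of the sketch above (reuses Conversation).
from dataclasses import dataclass


@dataclass
class Feedback:
    turn_index: int   # which turn in the conversation the feedback refers to
    score: int        # e.g. +1 / -1 from a thumbs-up / thumbs-down widget
    comment: str = ""


def needs_review(convo: Conversation, feedback: list[Feedback]) -> bool:
    # Flag conversations with any negative feedback, or an unusually long
    # tool trace, as candidates for prompt / tooling improvements.
    if any(f.score < 0 for f in feedback):
        return True
    return len(convo.execution_trace()) > 20  # arbitrary cut-off, an assumption
```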

Now, there is a lot more going on in the background, but the overall architecture is simple and does not require any additional monitoring infrastructure.

That's my $0.02 after building a company in the conversational AI space, where we do this sort of thing all the time.