
224 points azhenley | 1 comment
1. shykes No.45077033
Basically, this is about implementing core AI primitives (chat completion, tool calling, context management) at the system layer, instead of the application or framework layer where they live today.

If you're curious to see one real-life implementation of this (I'm sure there are others), we're pretty far along in doing this with Dagger:

- We already had system primitives for running functions in a sandboxed runtime

- We added the ability for functions to 1) prompt LLMs, and 2) pass other functions to the LLM as callbacks (see the first Go sketch after this list).

- This way, a function can call LLMs and an LLM can call functions, in any permutation.

- This allows exploring the full spectrum from fully deterministic workflows to autonomous agents, and everything in between - without locking yourself into a particular programming language, library, or framework.

- We've also experimented with passing objects to the LLM, and mapping each of the object's methods to a tool call. This opens interesting possibilities, since the objects can carry state - effectively extending the LLM's context from text only to arbitrary structured data, without additional dependencies like a database (second sketch below).
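
Here's a minimal sketch of the function-to-LLM direction in Go, loosely following the patterns in the Dagger LLM docs. The module name "agent" and the function itself are hypothetical, and the exact method signatures (WithEnv, WithPrompt, WithContainerInput, ...) should be treated as assumptions rather than a definitive API reference:

    // main.go of a hypothetical Dagger module named "agent".
    package main

    import "dagger/agent/internal/dagger"

    type Agent struct{}

    // InstallCurl binds a container into the LLM's environment; every
    // method on the Container object is exposed to the model as a tool.
    func (a *Agent) InstallCurl() *dagger.Container {
        env := dag.Env().
            // Input binding: the LLM can call this container's methods.
            WithContainerInput("base", dag.Container().From("alpine:latest"),
                "an Alpine container to modify").
            // Output binding: the LLM must return a container under this name.
            WithContainerOutput("result", "the container with curl installed")

        return dag.LLM().
            WithEnv(env).
            WithPrompt("Install curl in the 'base' container and return it as 'result'.").
            Env().
            Output("result").
            AsContainer()
    }

Because this is just another Dagger function, it composes both ways: other (deterministic) functions can call InstallCurl, and the LLM inside it calls functions in turn via the container's methods.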
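
And a sketch of the stateful-object idea: a hypothetical Workspace type whose exported methods become tool calls, with the directory as state carried across calls. This assumes Dagger's codegen generates matching Env helpers (WithWorkspaceInput / WithWorkspaceOutput) for the type, as it does in the documented examples:

    package main

    import (
        "context"
        "dagger/agent/internal/dagger"
    )

    // Workspace carries state (a directory) across tool calls; each
    // exported method below is mapped to a tool the LLM can invoke.
    type Workspace struct {
        Source *dagger.Directory
    }

    // Read returns the contents of a file in the workspace.
    func (w *Workspace) Read(ctx context.Context, path string) (string, error) {
        return w.Source.File(path).Contents(ctx)
    }

    // Write stores a new version of a file and returns the updated
    // workspace, so subsequent tool calls observe the change.
    func (w *Workspace) Write(path, contents string) *Workspace {
        w.Source = w.Source.WithNewFile(path, contents)
        return w
    }

Bound into an Env the same way as the container above, the model reads and writes files through typed calls instead of raw text - which is the "structured data as context" point in practice.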

Relevant documentation page: https://docs.dagger.io/features/llm