
169 points by constantinum | 1 comment
1. d4rkp4ttern No.40717497
An interesting survey. A couple of important dimensions are missing here:

- Is the structured output obtained via prompts or via logits/probabilities? The latter is more reliable, but it is limited to LLM APIs that expose logits and allow logit_bias specification.

- Does the framework allow you to specify how the tool call is handled?
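The logit-based approach in the first bullet can be illustrated with a toy constrained-decoding step (a minimal sketch in pure Python, no LLM API involved; the token table and bias value are made up for illustration):

```python
def constrained_argmax(logits, allowed, bias=-100.0):
    """Apply a logit_bias-style penalty to disallowed tokens, then pick argmax.

    logits: dict mapping token string -> raw logit
    allowed: set of tokens the output schema permits at this step
    bias: additive penalty for disallowed tokens (OpenAI's logit_bias
          parameter accepts values in [-100, 100]; -100 effectively
          bans a token)
    """
    biased = {tok: (logit if tok in allowed else logit + bias)
              for tok, logit in logits.items()}
    return max(biased, key=biased.get)

# At a point where the JSON grammar only permits '{' or whitespace,
# the model's preferred free-text token is suppressed:
logits = {"Sure": 5.2, "{": 3.1, " ": 1.0}
print(constrained_argmax(logits, allowed={"{", " "}))  # -> {
```

This is why the logit route is more reliable than prompting: the forbidden continuation is ruled out numerically rather than merely discouraged, but it requires an API that accepts per-token biases.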

The list seems to include only libraries that focus on structured-output generation, but there are libraries that do much more in addition to this, such as Langroid[1] (1K installs/week). Langroid is a multi-agent LLM framework from ex-CMU/UW-Madison researchers. It has prompt-based structured-output generation, works with any LLM, and is used by companies in production.

Users can specify the structure using a Pydantic class derived from ToolMessage[2], along with few-shot examples and special instructions, which are transpiled into the system prompt.

A "handle" classmethod can also be defined, to specify how to handle the tool. See example code here: https://imgur.com/a/Qh8aJRB

More examples of tool usage here: https://github.com/langroid/langroid/tree/main/examples/basi...

[1] Langroid: https://github.com/langroid/langroid

[2] Langroid ToolMessage class: https://github.com/langroid/langroid/blob/main/langroid/agen...