
Pydantic Logfire

(pydantic.dev)
146 points by ellieh
serjester ◴[] No.40212723[source]
I love pydantic, but I really have to wonder why they chose this route. There are already a ton of companies that do this really well, and I'm trying to figure out how their solution is any different.

The LLM integration seems promising, but if you care about LLM observability you probably also care about evals, guardrails and a million other things that are very specific to LLMs. Is it possible to build all this under one platform?

I do hope I'm wrong for the sake of pydantic-core.

replies(3): >>40212966 #>>40214112 #>>40214695 #
scolvin ◴[] No.40214695[source]
Pydantic creator here.

I understand why this might be your reaction, but let me just share my thoughts:

Can we build all the LLM things people want? Yes, I think so; early feedback is that Logfire is already much more comprehensive than LLM-specific solutions.

How is our solution any different? AFAIK:

* no one else offers an opinionated wrapper for OTel that makes it nicer to use while keeping all its advantages

* no one else does auto-tracing (basically profiling) using import hooks the way we do

* no one else has dedicated instrumentation for OpenAI or Pydantic

* no one else provides metadata about Python objects the way we do, so we can render a semi-faithful repr of complex objects (Pydantic models, dataclasses, dataframes, etc.) in the web UI

* no one else (except Sentry, who are doing something different) makes it really easy to start using it

* no one else lets you query observability data with SQL
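The "auto-tracing using import hooks" point above refers to a general Python technique: a meta-path finder intercepts imports and wraps a module's functions so every call is recorded. Logfire's actual implementation is its own; this is only a minimal stdlib sketch of the mechanism, with a throwaway `demo_mod` module invented for the demo:

```python
import functools
import importlib.abc
import importlib.util
import sys
import tempfile
import time
import types
from pathlib import Path

SPANS = []  # recorded (function name, duration in seconds) pairs

def traced(fn):
    """Wrap a function so every call is recorded as a span in SPANS."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            SPANS.append((f"{fn.__module__}.{fn.__name__}",
                          time.perf_counter() - start))
    return wrapper

class AutoTraceLoader(importlib.abc.Loader):
    """Delegate to the real loader, then wrap the module's functions."""
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def create_module(self, spec):
        return self.wrapped.create_module(spec)

    def exec_module(self, module):
        self.wrapped.exec_module(module)
        for name, value in list(vars(module).items()):
            if isinstance(value, types.FunctionType):
                setattr(module, name, traced(value))

class AutoTraceFinder(importlib.abc.MetaPathFinder):
    """Meta-path import hook: intercepts imports of watched modules."""
    def __init__(self, prefixes):
        self.prefixes = tuple(prefixes)

    def find_spec(self, fullname, path, target=None):
        if not fullname.startswith(self.prefixes):
            return None
        sys.meta_path.remove(self)          # avoid infinite recursion
        try:
            spec = importlib.util.find_spec(fullname)
        finally:
            sys.meta_path.insert(0, self)
        if spec is None or spec.loader is None:
            return None
        spec.loader = AutoTraceLoader(spec.loader)
        return spec

# Demo: write a throwaway module to disk, import it through the hook,
# and observe that calling its function produces a span.
tmp = tempfile.mkdtemp()
Path(tmp, "demo_mod.py").write_text("def add(a, b):\n    return a + b\n")
sys.path.insert(0, tmp)
sys.meta_path.insert(0, AutoTraceFinder(["demo_mod"]))

import demo_mod
print(demo_mod.add(2, 3))   # 5 — and SPANS now holds ("demo_mod.add", ...)
```

The appeal of doing this at import time is that no code changes are needed in the traced module itself; the hook rewrites whatever it is told to watch.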

In my mind, the obvious alternative would be to do the thing most OSS companies do, and build "Pydantic Cloud", then start adding features to that instead of the open source package. I didn't want to do that, and I don't think our users would like that either.

In the end I decided to build Logfire for two reasons:

1. I wanted it, and have wanted it for years

2. I think building a commercial product like this, then using OSS to spread the word and drive adoption, is a really exciting way to incentivize us to make the open source as good as possible — good as in permissive and good as in well maintained. And it means that our commercial success is not in tension with the adoption of our open source, which has been a recurring issue for companies trying to capitalize on their open source brand.
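To make the "query observability data with SQL" point above concrete: the idea is that spans are rows you can aggregate with ordinary SQL. This is a stdlib sqlite3 sketch with a hypothetical minimal schema and made-up span names — not Logfire's actual schema or query engine:

```python
import sqlite3

# Hypothetical minimal span table; a real product's schema is richer.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE spans (
        trace_id    TEXT,
        name        TEXT,
        duration_ms REAL,
        attributes  TEXT   -- JSON blob of extra metadata
    )
""")
rows = [
    ("t1", "openai.chat", 800.0, '{"model": "gpt-4"}'),
    ("t1", "db.query",     12.3, '{"table": "users"}'),
    ("t2", "openai.chat", 400.0, '{"model": "gpt-4"}'),
]
conn.executemany("INSERT INTO spans VALUES (?, ?, ?, ?)", rows)

# Plain SQL over observability data: average duration per span name.
for name, avg in conn.execute(
    "SELECT name, AVG(duration_ms) FROM spans "
    "GROUP BY name ORDER BY 2 DESC"
):
    print(name, avg)   # openai.chat 600.0, then db.query 12.3
```

The draw compared to a bespoke query UI is that any aggregation you can express in SQL — percentiles, joins against trace metadata, filtering on attributes — works out of the box.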

replies(4): >>40215045 #>>40215146 #>>40215756 #>>40221179 #
nirga ◴[] No.40221179[source]
OpenLLMetry creator here. We've been building the most popular OpenTelemetry instrumentation for LLM providers — including OpenAI, Anthropic, Pinecone, Langchain and >10 others — since last August [1]. We use import hooks (like other OTel instrumentations) and offer an SDK with a one-line install. I'm also aware of other OSS projects doing similar things, so I wouldn't say no one has done what you're doing.

[1] https://github.com/traceloop/openllmetry

replies(1): >>40226179 #
kakaly0403 ◴[] No.40226179[source]
+1 to this. Langtrace core maintainer here. We are building an SDK and a client for automatic instrumentation of LLM-based applications using OTel standards. The OpenLLMetry team, along with the OpenTelemetry working group under the CNCF, has been doing some great work standardizing semantic naming conventions, and we are starting to adopt the same set of standards.

[1] https://github.com/Scale3-Labs/langtrace [2] https://github.com/open-telemetry/semantic-conventions/blob/...