> OpenInference was created specifically for AI applications. It has rich span types like LLM, tool, chain, embedding, agent, etc. You can easily query for "show me all the LLM calls" or "what were all the tool executions." But it's newer, has limited language support, and isn't as widely adopted.
> The tragic part? OpenInference claims to be "OpenTelemetry compatible," but as Pranav discovered, that compatibility is shallow. You can send OpenTelemetry format data to Phoenix, but it doesn't recognize the AI-specific semantics and just shows everything as "unknown" spans.
What is written above is false. OpenInference (or, for that matter, OpenLLMetry and the OTel GenAI conventions) is just a set of semantic conventions for OTel. Semantic conventions specify how a span's attributes should be named. Nothing more, nothing less. If you are instrumenting an LLM call, you need to record the model used; the semantic convention tells you which attribute to save the model name under (e.g. `llm.model_name` in OpenInference). That's it.
Saying OpenInference is not OTel compatible makes no sense.
Saying Phoenix (the vendor) is not OTel compatible because it does not render random spans that do not follow its conventions is... well, unfair to say the least (and I say this as a competitor in the space).
A vendor is OTel compliant if it has a backend that can ingest data in the OTel format. That's it.
Different vendors are compatible with different semconvs. Generalist observability platforms like SigNoz don't care about semantic conventions: they show all spans the same way, as a JSON of attributes. A retrieval span, an LLM call, and a DB transaction all look the same in SigNoz; messages and tool calls aren't rendered any differently.
LLM observability vendors (like Phoenix, mentioned in the article, or Agenta, the one I maintain and am shamelessly plugging) care a lot about semantic conventions. Their UIs are designed to show AI traces as clearly as possible: LLM messages, tool calls, prompt templates, and retrieval results are all rendered in user-friendly ways. As a result, the UI needs to know where each attribute lives, so semantic conventions matter a lot to LLM observability vendors. The point the article is actually making is that Phoenix only understands the OpenInference semconv. That is very different from saying Phoenix is not OTel compatible.
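Here is the distinction in miniature: the same LLM call expressed under two conventions. Both are plain key/value attributes on an ordinary OTel span; only the keys differ, which is why a backend can be fully OTel compliant yet render spans from an unfamiliar semconv as "unknown." The keys are my reading of the OpenInference spec and the OTel GenAI semconv; the call data is made up.

```python
call = {"model": "gpt-4o", "input_tokens": 120, "output_tokens": 48}

# OpenInference attribute keys (what a Phoenix-style UI looks for):
openinference_attrs = {
    "openinference.span.kind": "LLM",
    "llm.model_name": call["model"],
    "llm.token_count.prompt": call["input_tokens"],
    "llm.token_count.completion": call["output_tokens"],
}

# OTel GenAI semconv keys for the same call:
genai_attrs = {
    "gen_ai.operation.name": "chat",
    "gen_ai.request.model": call["model"],
    "gen_ai.usage.input_tokens": call["input_tokens"],
    "gen_ai.usage.output_tokens": call["output_tokens"],
}

# Same data, disjoint key sets -- a UI keyed to one set sees the
# other as just anonymous attributes.
print(set(openinference_attrs) & set(genai_attrs))  # prints set()
```

A generalist backend happily stores either dict; an LLM observability UI can only render richly the one whose keys it was built around.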
I've recorded a video about OTel, semantic conventions, and LLM observability. Worth watching for those interested in the space: https://www.youtube.com/watch?v=crEyMDJ4Bp0