
196 points | vinhnx | 1 comment
siva7 (No.45057063)
These abstractions are nice for avoiding lock-in with a single LLM provider, but, as with LangChain, once you use a more niche feature the bugs shine through. I tried it with structured output for Azure OpenAI but had to give up: something somewhere was broken, and it's difficult to figure out whether the fault is in the abstraction or in the LLM provider's library that the abstraction uses.

Nevertheless, I would strongly recommend not using the AI providers' libraries directly, as you quickly get locked in to an extremely fast-paced market where today's king can change weekly.
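
For reference, this is roughly the kind of setup I was attempting. It's only a sketch: the schema, deployment name, endpoint, and API version are made up, and the exact names (output_type vs. result_type, .output vs. .data, OpenAIModel vs. OpenAIChatModel) depend on the pydantic-ai version.

    # Sketch: structured output from an Azure OpenAI deployment via pydantic-ai.
    # Endpoint, API version, deployment name, and schema are illustrative only.
    from openai import AsyncAzureOpenAI
    from pydantic import BaseModel
    from pydantic_ai import Agent
    from pydantic_ai.models.openai import OpenAIModel
    from pydantic_ai.providers.openai import OpenAIProvider


    class Invoice(BaseModel):
        vendor: str
        total: float
        currency: str


    # Hand pydantic-ai a preconfigured Azure client so requests go to the
    # Azure OpenAI deployment instead of api.openai.com.
    client = AsyncAzureOpenAI(
        azure_endpoint="https://my-resource.openai.azure.com",  # illustrative
        api_version="2024-06-01",                               # illustrative
        api_key="...",
    )
    model = OpenAIModel("gpt-4o", provider=OpenAIProvider(openai_client=client))

    # output_type asks the agent to return a validated Invoice instance
    # (older pydantic-ai releases call this result_type and expose .data).
    agent = Agent(model, output_type=Invoice)
    result = agent.run_sync("Extract the invoice: ACME Corp, total 120.50 EUR")
    print(result.output)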

DouweM (No.45057853)
Pydantic AI maintainer here! Did you happen to file an issue for the problem you were seeing with Azure OpenAI?

The vast majority of bugs we encounter are not in Pydantic AI itself, but rather in having to deal with supposedly OpenAI Chat Completions-compatible APIs that aren't really, and with local models run through e.g. Ollama or vLLM, which tend not to be the best at tool calling.
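
To be concrete, here's a rough sketch of that "OpenAI-compatible" path (the URL, model name, and output schema below are just examples): you point the OpenAI model class at a local Ollama or vLLM server that exposes a Chat Completions-style endpoint, and whether tool calling and structured output then behave well depends largely on that server rather than on the abstraction.

    # Sketch: using an OpenAI-compatible local endpoint (e.g. Ollama) with
    # pydantic-ai. URL, model name, and schema are examples; argument names
    # may vary slightly between pydantic-ai versions.
    from pydantic import BaseModel
    from pydantic_ai import Agent
    from pydantic_ai.models.openai import OpenAIModel
    from pydantic_ai.providers.openai import OpenAIProvider


    class Answer(BaseModel):
        city: str
        country: str


    model = OpenAIModel(
        "llama3.1",  # whatever model the local server is actually running
        provider=OpenAIProvider(
            base_url="http://localhost:11434/v1",  # Ollama's OpenAI-style API
            api_key="ollama",                      # placeholder; not checked locally
        ),
    )

    # Structured output here relies on the local server's tool-calling /
    # JSON support, which is where most of the reported bugs originate.
    agent = Agent(model, output_type=Answer)
    print(agent.run_sync("Where were the 2012 Olympics held?").output)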

The big three model providers (OpenAI, Claude, Gemini) and enterprise platforms (Bedrock, Vertex, Azure) see the vast majority of usage, and our support for them is very stable. It remains a challenge to keep up with their pace of shipping new features and models, but thanks to our 200+ contributors we're usually not far behind the bleeding edge in terms of LLM API feature coverage. As you may have seen, we're also very responsive to issues and PRs on GitHub, and to questions on Slack.

siva7 (No.45060412)
Thanks for working on pydantic-ai. I dug up the issue; it seems to have been fixed in the recent releases related to how strictness is handled.