barrell:
I recently upgraded a large portion of my pipeline from gpt-4.1-mini to gpt-5-mini. The performance was horrible - after some research I decided to move everything to mistral-medium-0525.

Same price, but dramatically better results, way more reliable, and 10x faster. The only downside is that when it does fail, it fails much harder. Where gpt-5-mini would disregard the formatting in the prompt 70% of the time, mistral-medium follows it 99% of the time, but the other 1% of the time it inserts random characters (for whatever reason, normally backticks... which then causes its own formatting issues).

Still, very happy with Mistral so far!

mark_l_watson:
It is such a common pattern for LLMs to surround generated JSON with ```json … ``` that I check for this at the application level and fix it. Ten years ago I would do the same sort of sanity checks on formatting when I used LSTMs to generate synthetic data.
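A minimal sketch of that kind of application-level sanity check, assuming Python (the helper name parse_llm_json is my own, not from the comment): strip a surrounding ```json ... ``` fence, if present, before handing the text to the JSON parser.

```python
import json
import re

def parse_llm_json(raw: str):
    """Parse JSON from an LLM response, tolerating a ```json ... ``` fence.

    Hypothetical helper illustrating the fix described above.
    """
    text = raw.strip()
    # Match a leading ```json (or bare ```) fence and a trailing ``` fence,
    # capturing only the payload between them.
    match = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)

# Usage: works on both fenced and unfenced model output.
print(parse_llm_json('```json\n{"ok": true}\n```'))  # {'ok': True}
print(parse_llm_json('{"ok": true}'))                # {'ok': True}
```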
Alifatisk:
I use backticks a lot when sharing examples in different formats with LLMs, and I have instructed them to do likewise; I also upvote whenever they respond that way.

I picked up this format from writing markdown files; it's a nice way to share examples and also to specify which format they're in.