
397 points by Anon84 | 1 comment
barrell | No.45126116
I recently upgraded a large portion of my pipeline from gpt-4.1-mini to gpt-5-mini. The performance was horrible - after some research I decided to move everything to mistral-medium-0525.

Same price, but dramatically better results, way more reliable, and 10x faster. The only downside is that when it does fail, it seems to fail much harder. Where gpt-5-mini would disregard the formatting in the prompt 70% of the time, mistral-medium follows it 99% of the time, but the other 1% of the time it inserts random characters (for whatever reason, normally backticks... which then causes its own formatting issues).
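For what it's worth, a cheap post-processing guard handles most of those cases. This is just a sketch of one way to do it, assuming stray backticks and code fences are never legitimate output in the pipeline (which may not hold for everyone):

    import re

    def strip_stray_backticks(text: str) -> str:
        """Remove markdown-style code fences and lone backticks that the
        model sometimes inserts around otherwise well-formatted output."""
        # Drop fence markers like ``` or ```json that sit on their own line
        text = re.sub(r"^```[a-zA-Z]*\s*$", "", text, flags=re.MULTILINE)
        # Drop any remaining inline backticks, then trim whitespace
        return text.replace("`", "").strip()

    if __name__ == "__main__":
        raw = "```\n{\"label\": \"positive\"}\n```"
        print(strip_stray_backticks(raw))  # -> {"label": "positive"}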

Still, very happy with Mistral so far!

siva7 | No.45136002
I thought I was the only one experiencing this slowness. I can't comprehend why something called gpt-5-mini is actually slower than its non-mini counterpart.
1. barrell | No.45146830
Nooo, you are definitely not alone. gpt-5-nano is the slowest model I've used since like 2023, second only to gpt-5-mini.