
432 points | tosh | 2 comments
1. bumbledraven
I appreciate @anotherpaulg's continual benchmarking of LLM performance with aider, for example:

> OpenAI just released GPT-4 Turbo with Vision and it performs worse on aider’s benchmark suites than all the previous GPT-4 models. In particular, it seems much more prone to “lazy coding” than the GPT-4 Turbo preview models.

https://aider.chat/2024/04/09/gpt-4-turbo.html

2. pax
+ accompanying HN thread (117 comments): https://news.ycombinator.com/item?id=39985596