
467 points mraniki | 3 comments
bratao (No.43534359):
For my use case, Gemini 2.5 is terrible. I have complex Cython code in a single file (1500 lines) for sequence labeling. Claude and o3 are very good at improving this code and following instructions. Gemini always tries to make unrelated changes. For example, I asked, separately, for small changes such as removing an unused function or caching the array indexes. Every time, it completely refactored the code and was obsessed with removing the GIL. The output code is always broken, because removing the GIL is not easy.
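As a hedged illustration of why "remove the GIL" is not a mechanical refactor in Cython: a `nogil` function may only touch C-level data, so any code path that still handles Python objects has to be rewritten around typed buffers first. A minimal sketch (function names are illustrative, not from the code discussed above):

```cython
# cython: language_level=3

cdef double dot(double[::1] a, double[::1] b) nogil:
    # Typed memoryviews are C-level buffers, so this loop can
    # legally run without holding the GIL.
    cdef double total = 0.0
    cdef Py_ssize_t i
    for i in range(a.shape[0]):
        total += a[i] * b[i]
    return total

def dot_py(a, b):
    # The wrapper must hold the GIL: coercing arbitrary Python
    # objects into typed memoryviews manipulates refcounts and
    # can raise Python exceptions. An automated refactor that
    # wraps object-handling code in `with nogil` produces code
    # that will not even compile.
    cdef double[::1] av = a
    cdef double[::1] bv = b
    cdef double result
    with nogil:
        result = dot(av, bv)
    return result
```

The boundary between object-handling and pure-C code has to be drawn by hand, which is why blanket GIL-removal attempts tend to break.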
redog (No.43534511):
For me, I had to upload the library's current documentation to it, because it was using outdated references, changing code that worked into code that was broken, and not focusing on the parts I was trying to build on.
amarcheschi (No.43534560):
Using outdated references and docs is something I've experienced, from time to time, with more or less every model I've tried.
rockwotj (No.43534603):
I'm hoping MCP will fix this. I'm building an MCP integration with kapa.ai for my company to help devs here. Of course, this doesn't help if you don't add the tool in the first place.
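For context, MCP (Model Context Protocol) lets a server advertise tools that a client can offer to the model; a docs-lookup integration like the one described here would declare a tool roughly like this (a sketch of the `tools/list` response shape only; the tool name, description, and fields are hypothetical, not kapa.ai's actual schema):

```json
{
  "tools": [
    {
      "name": "search_docs",
      "description": "Search the current product documentation and return relevant excerpts.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "query": {
            "type": "string",
            "description": "Natural-language question about the docs"
          }
        },
        "required": ["query"]
      }
    }
  ]
}
```

With such a tool available, the model can pull current documentation on demand instead of relying on its training-time snapshot.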
simonw (No.43535145):
That's expected, because almost all of them have training cut-off dates a year or more in the past.

The more interesting question is whether feeding in carefully selected examples or documentation covering the new library versions helps them get it right. I find that it usually does.
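The "feed in current docs" approach can be as simple as prepending the relevant excerpts to the task prompt before calling whatever model API you use. A minimal sketch (the function name and prompt layout below are made up for illustration, not any particular tool's API):

```python
def build_prompt(task: str, doc_excerpts: list[str]) -> str:
    """Prepend up-to-date library documentation to a coding task,
    so the model works from current references rather than its
    training-time snapshot."""
    docs = "\n\n".join(doc_excerpts)
    return (
        "Use ONLY the following documentation, which reflects the "
        "current library version:\n\n"
        f"{docs}\n\n"
        f"Task: {task}\n"
    )


# Usage: the doc excerpt here is a placeholder, not a real library note.
prompt = build_prompt(
    "Remove the unused helper function from module.py",
    ["somelib 2.0 removed `somelib.legacy_api`; use `somelib.new_api` instead."],
)
```

Selecting a few excerpts that directly cover the APIs in play tends to work better than pasting an entire manual, since it keeps the relevant deltas prominent in the context window.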