
GPT-5.2

(openai.com)
1019 points | atgctg | 2 comments
onraglanroad ◴[] No.46237160[source]
I suppose this is as good a place as any to mention this. I've now met two different devs who complained about the weird responses from their LLM of choice, and it turned out they were using a single session for everything. From recipes for the night, presents for the wife and then into programming issues the next day.

Don't do that. The whole conversation history is sent with every query to the LLM, so start a new chat for each topic. Or you'll start being told what your wife thinks about global variables and how to cook your Go.
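
If it's not obvious why that happens: the chat APIs are stateless, so the client re-sends the entire accumulated message list with every request. A minimal Python sketch of the pattern (the model name and the ask helper are just illustrative, not anyone's actual setup):

    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def ask(prompt: str) -> str:
        history.append({"role": "user", "content": prompt})
        # Every call ships the whole accumulated history back to the model.
        resp = client.chat.completions.create(model="gpt-4o", messages=history)
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    ask("Give me a quick dinner recipe.")
    ask("Gift ideas for my wife?")
    ask("Why are global variables bad in Go?")  # the model still sees the recipe and the gifts

Starting a new chat just means starting with an empty history list.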

I realise this sounds obvious to many people, but it clearly wasn't to those guys, so maybe it's not!

replies(14): >>46237301 #>>46237674 #>>46237722 #>>46237855 #>>46237911 #>>46238296 #>>46238727 #>>46239388 #>>46239806 #>>46239829 #>>46240070 #>>46240318 #>>46240785 #>>46241428 #
1. SubiculumCode ◴[] No.46239829[source]
I constantly switch to a fresh session, even when it's the same topic. It starts forming its own 'beliefs and assumptions' and gets myopic. I also use the big three services in turn to attack ideas from multiple directions.
replies(1): >>46240827 #
2. nrds ◴[] No.46240827[source]
> beliefs and assumptions

Unfortunately, during coding I've found that many LLMs like to encode their beliefs and assumptions into comments; and even when they don't, they're unavoidably feeding them into the code itself. Then future sessions pick up on these.