dolebirchwood (No.45193344):
This is awesome. LLM-powered NPCs are one of the things I'm most excited about in the future of gaming. Characters repeating the same scripted dialog over and over is one of the biggest immersion breakers.

beckthompson (No.45194269):
Honestly, I'm unsure how useful they'll really be! In many situations it's very useful to know when you're "done" talking with an NPC, e.g. when they start repeating lines.

There could be cool uses, but I don't think it will be a pure "upgrade", since the repeating dialog is kind of a feature, honestly.

We'll have to see how it pans out xD

heckelson (No.45194295):
I guess you could just introduce a symbol that marks the NPC as "I've said everything I have", gray out their text, or use some other visual marker.

Terr_ (No.45194675):
> a symbol that marks the NPC

That seems a bit like rearranging deck chairs on the Titanic. The hard part isn't icon design; the hard part is (A) maintaining a clear list of what the NPC is supposed to convey to the player, and (B) determining whether the player actually received those facts.

For example, imagine a mystery/puzzle game where the NPC needs to give the player a clue for the next puzzle, but the LLM layer botches it, either by generating dialogue that phrases it wrong or by failing to fit it into the first response, so the player must always do a few "extra" interactions anyway, "just in case."

I suppose you could... take the output plus a checklist of "did this NPC answer correctly" and feed it to another LLM... but down that path lies [more] madness.
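
For concreteness, a minimal sketch of what (A) and (B) might look like, assuming some completion API behind a stubbed call_llm; the checklist structure, judge prompt, and all names here are illustrative, not from any shipping game. The npc_done check at the end could also drive the "said everything I have" marker suggested upthread:

```python
from dataclasses import dataclass, field

@dataclass
class NpcGoals:
    must_convey: list[str]                    # (A): facts the player needs to learn
    delivered: set[int] = field(default_factory=set)

def call_llm(prompt: str) -> str:
    """Stand-in for whatever completion API the game actually uses."""
    raise NotImplementedError

def check_reply(goals: NpcGoals, npc_reply: str) -> None:
    # (B): ask a judge model, fact by fact, whether the reply conveyed it.
    for i, fact in enumerate(goals.must_convey):
        if i in goals.delivered:
            continue
        verdict = call_llm(
            f"NPC dialogue: {npc_reply!r}\n"
            f"Does this dialogue clearly tell the player that {fact!r}? "
            "Answer YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            goals.delivered.add(i)

def npc_done(goals: NpcGoals) -> bool:
    # Could drive the "I've said everything I have" symbol / gray-out marker.
    return len(goals.delivered) == len(goals.must_convey)
```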

madaxe_again (No.45194795):
No, this is the Einstein/student model that has been proposed for improving LLM output quality.

Basically you have your big clever LLM generating the outputs, and then you have your small dumb LLM reading them and asking "did I understand that? Did it make sense?", emulating the user before the response actually reaches them. If it's good, on it goes to the user; if not, the student queries Einstein with feedback to have another crack.

https://openai.com/index/prover-verifier-games-improve-legib...
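
A rough sketch of that loop in the same stubbed style as the earlier snippet, where einstein is the big model and student is the small one; the function names, prompts, and retry budget are illustrative assumptions, not details from the linked paper:

```python
MAX_ROUNDS = 3  # how many cracks Einstein gets before we give up

def einstein(prompt: str, critique: str | None = None) -> str:
    """Big clever LLM drafts (or redrafts) the NPC reply."""
    raise NotImplementedError  # plug in the strong model here

def student(reply: str) -> tuple[bool, str]:
    """Small dumb LLM emulates the user: did that make sense?
    Returns (understood, critique)."""
    raise NotImplementedError  # plug in the weak model here

def respond(prompt: str) -> str:
    critique = None
    reply = ""
    for _ in range(MAX_ROUNDS):
        reply = einstein(prompt, critique)
        understood, critique = student(reply)
        if understood:
            return reply   # the student got it; on it goes to the user
    return reply           # out of rounds; ship the last draft anyway
```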