
343 points sillysaurusx | 9 comments
v64 ◴[] No.35028738[source]
If anyone is interested in running this at home, please follow the llama-int8 project [1]. LLM.int8() is a recent development allowing LLMs to run in half the memory without loss of performance [2]. Note that at the end of [2]'s abstract, the authors state "This result makes such models much more accessible, for example making it possible to use OPT-175B/BLOOM on a single server with consumer GPUs. We open-source our software." I'm very thankful we have researchers like this further democratizing access to this data and prying it out of the hands of the gatekeepers who wish to monetize it.

[1] https://github.com/tloen/llama-int8

[2] https://arxiv.org/abs/2208.07339
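
The halving comes from storing weights as 8-bit integers instead of 16-bit floats. A minimal sketch of the row-wise absmax quantization that [2] builds on (the paper's mixed-precision outlier decomposition is omitted here; this is an illustration, not the bitsandbytes implementation):

```python
import numpy as np

def quantize_absmax(w: np.ndarray):
    """Row-wise absmax int8 quantization: scale each row so its largest
    absolute value maps to 127, then round to int8."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from int8 values and per-row scales."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float16)   # stand-in for fp16 weights
q, scale = quantize_absmax(w.astype(np.float32))

print(w.nbytes, q.nbytes)  # fp16: 64 bytes, int8: 32 bytes -- half the memory
err = np.abs(dequantize(q, scale) - w.astype(np.float32)).max()
```

The rounding error per weight is bounded by half a quantization step (0.5 × scale), which is why the technique loses so little accuracy on well-behaved weight distributions; the paper's real contribution is handling the outlier feature dimensions where this simple scheme breaks down.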

replies(5): >>35028950 #>>35029068 #>>35029601 #>>35030214 #>>35030868 #
1. causality0 ◴[] No.35030868[source]
I feel like we're less than a decade away from being able to hook LLMs into gaming. How incredible would it be to have NPCs driven by LLM?
replies(5): >>35031124 #>>35031255 #>>35033211 #>>35034447 #>>35058462 #
2. visarga ◴[] No.35031124[source]
We'll soon have LLMs in operating systems, LLMs in browsers and you are right, probably also in games. LLMs will be the platform on which we build almost everything.
replies(1): >>35032463 #
3. SloopJon ◴[] No.35031255[source]
There was an Ask HN post about that idea a couple of months ago:

https://news.ycombinator.com/item?id=34478503

I have long wished for less linear stories in video games, where branching narrative (a la Choose Your Own Adventure) is one possible way to give the player agency. The problem is, true branches are expensive, because you end up writing a bunch of content the player never experiences.

I see a lot of potential, but it's going to take a different kind of craftsmanship, and likely many iterations, to realize something more than a novelty.

replies(2): >>35031908 #>>35035818 #
4. causality0 ◴[] No.35031908[source]
I much prefer handcrafted stories and quests. Characters that respond dynamically to the story and the player's actions, however, are quite tantalizing.
replies(1): >>35043708 #
5. bloaf ◴[] No.35033211[source]
I'd be satisfied plugging a game log/history into a system that generates the epic tale of your victory/defeat.
6. pixl97 ◴[] No.35034447[source]
Honestly I don't think it would be completely impossible now in a limited fashion.

Imagine playing a level and pulling off some particular feats in it. They get presented to GPT in a prompt, and the resulting story gets sent to an AI voice model in-game, where the NPC asks/tells the player character about it.
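
The first half of that pipeline is just prompt assembly. A sketch of what the game might hand to the LLM (all names and the prompt wording are illustrative, not from any real game):

```python
def build_npc_prompt(npc_name: str, feats: list[str]) -> str:
    """Assemble an LLM prompt summarizing the player's feats so the model
    can produce in-character NPC dialogue about them."""
    feat_lines = "\n".join(f"- {f}" for f in feats)
    return (
        f"You are {npc_name}, a tavern keeper in a fantasy village.\n"
        f"The player just completed a level and accomplished the following:\n"
        f"{feat_lines}\n"
        f"Greet the player and comment on these deeds in one or two sentences."
    )

prompt = build_npc_prompt("Brenna", ["slew the cave troll", "found the hidden shrine"])
print(prompt)
```

The model's text response would then be fed to a text-to-speech model for the voiced line, which is the part that already works today in a limited fashion.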

7. bick_nyers ◴[] No.35035818[source]
In general I would say story =/= dialogue (which an LLM can much more easily be used for). I see two main "tricks" that would make the more complicated case (story) possible.

1. You bound the branching in a particular fashion, and provide overall "pressures" toward certain story arcs.

2. You use generative AI in a LOT more places in the game.

What happens when you are playing a sci-fi game, and you get the enemy NPC to somehow hallucinate that he is the King of Dragons, but you don't have dragon models/animations/movesets in your game files? You either constrain the LLM so it can't hallucinate that, or you generate that dragon live. I guess a third option is that your game is a comedy and the King NPC gets labeled a crazy person.
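
The constraint option amounts to clamping whatever the LLM invents to the assets the engine can actually render, with the "crazy person" label as the fallback. A toy sketch (asset names and the fallback string are made up for illustration):

```python
# Models/animations actually shipped with the hypothetical game.
AVAILABLE_ASSETS = {"goblin", "knight", "merchant"}

def resolve_entity(llm_entity: str, fallback: str = "crazy person") -> str:
    """Clamp an LLM-invented entity to something the engine can render.
    If the model hallucinates something we have no assets for, fall back
    to treating the NPC as delusional instead of spawning the entity."""
    key = llm_entity.lower().strip()
    return key if key in AVAILABLE_ASSETS else fallback

print(resolve_entity("Knight"))           # knight
print(resolve_entity("King of Dragons"))  # crazy person
```

Real games would likely do this with a grammar- or schema-constrained decoding step rather than post-hoc filtering, but the principle is the same: the story space is bounded by the asset space.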

8. ElFitz ◴[] No.35043708{3}[source]
We could have handcrafted stories and quests, with LLM-driven dialogue for NPCs' canned responses (i.e. the infamous arrow and the proverbial knee).

And teams with limited resources could still handcraft the stories and quests, but use LLMs to generate the dialogue or add some variety and context awareness to it, at a lower cost.
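
One cheap way to do this is to pre-generate line variants with an LLM offline and select by context at runtime, so there is no inference cost in the shipped game. A sketch (the variant text here is hand-written stand-in for hypothetical LLM output):

```python
import random

CANNED = "I used to be an adventurer like you. Then I took an arrow in the knee."

# Variants pre-generated offline (stand-ins for LLM output), keyed by
# game context, so runtime cost is a dictionary lookup, not inference.
VARIANTS = {
    "rainy": [CANNED, "Arrow in the knee, and it aches something fierce in this rain."],
    "default": [CANNED],
}

def npc_line(context: str, rng: random.Random) -> str:
    """Pick a context-appropriate variant of the canned line."""
    return rng.choice(VARIANTS.get(context, VARIANTS["default"]))

print(npc_line("rainy", random.Random(0)))
```

Live LLM inference per line would be the expensive alternative; this offline approach trades away true open-endedness for predictable cost and content review.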

9. ZunarJ5 ◴[] No.35058462[source]
There are already several plugins for Unreal Engine. I am going to assume the same for Unity.

https://www.youtube.com/watch?v=i-Aw32rgM-w&ab_channel=Kella...