
358 points maloga | 2 comments
starchild3001 ◴[] No.45006027[source]
What I like about this post is that it highlights something a lot of devs gloss over: the coding part of game development was never really the bottleneck. A solo developer can crank out mechanics pretty quickly, with or without AI. The real grind is in all the invisible layers on top; balancing the loop, tuning difficulty, creating assets that don’t look uncanny, and building enough polish to hold someone’s attention for more than 5 minutes.

That's why we're not suddenly drowning in brilliant Steam releases post-LLMs. The tech has lowered one wall, but the taller walls remain. It's like the rise of Unity in the 2010s: the engine democratized making games, but we didn't see a proportional explosion of good games, just more attempts. LLMs are doing the same thing for code, and image models are starting to do it for art, but neither can tell you if your game is actually fun.

The interesting question to me is: what happens when AI can not only implement but also playtest -- running thousands of iterations of your loop, surfacing which mechanics keep simulated players engaged? That’s when we start moving beyond "AI as productivity hack" into "AI as collaborator in design." We’re not there yet, but this article feels like an early data point along that trajectory.
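To make the idea concrete, here is a toy sketch of what "running thousands of iterations of your loop" could look like: simulated players with limited patience are pushed through a simplified gameplay loop, and engagement is tallied per mechanic. The mechanics, appeal scores, and churn model are all invented for illustration, not a real playtesting system.

```python
import random

# Hypothetical per-mechanic "appeal" scores; in a real pipeline these would
# be properties of the actual game, not hand-picked constants.
MECHANICS = {"combat": 0.6, "crafting": 0.4, "exploration": 0.7}

def simulate_player(patience: float, rng: random.Random) -> dict:
    """Play until the simulated player's patience runs out; log engagement."""
    engagement = {name: 0 for name in MECHANICS}
    while patience > 0:
        mechanic = rng.choice(list(MECHANICS))
        if rng.random() < MECHANICS[mechanic]:
            engagement[mechanic] += 1   # the interaction "landed"
        else:
            patience -= 1               # a dull interaction costs patience
    return engagement

def run_playtests(n: int, seed: int = 0) -> dict:
    """Aggregate engagement counts over n simulated playthroughs."""
    rng = random.Random(seed)
    totals = {name: 0 for name in MECHANICS}
    for _ in range(n):
        for name, hits in simulate_player(patience=5.0, rng=rng).items():
            totals[name] += hits
    return totals

totals = run_playtests(1000)
print(totals)  # higher-appeal mechanics accumulate more engagement
```

Even this crude Monte Carlo surfaces which mechanics carry the loop; the hard, unsolved part is making the simulated player behave anything like a real one.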

replies(23): >>45006060 #>>45006124 #>>45006239 #>>45006264 #>>45006330 #>>45006386 #>>45006582 #>>45006612 #>>45006690 #>>45006907 #>>45007151 #>>45007178 #>>45007468 #>>45007700 #>>45007758 #>>45007865 #>>45008591 #>>45008752 #>>45010557 #>>45011390 #>>45011766 #>>45012437 #>>45013825 #
zahlman ◴[] No.45006612[source]
> The interesting question to me is: what happens when AI can not only implement but also playtest -- running thousands of iterations of your loop, surfacing which mechanics keep simulated players engaged?

How is AI supposed to simulate a player, and why should it be able to determine what real people would find engaging?

replies(6): >>45006727 #>>45006729 #>>45006732 #>>45007524 #>>45009348 #>>45011331 #
yonatan8070 ◴[] No.45006727[source]
Game companies already collect heaps of data about players, which mechanics they interact with, which mechanics they don't, retention, play time, etc.

I don't think it's much of a stretch to take this data over multiple games, versions, and genres, and train a model to take in a set of mechanics, stats, or even video and audio to rate the different aspects of a game prototype.

I wouldn't even be surprised if I heard this is already being done somewhere.
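The kind of data this comment describes reduces naturally to per-game features. A minimal sketch, with a fabricated event-log format and invented event names, of turning raw telemetry into the mechanic-usage, play-time, and retention numbers a rating model would train on:

```python
from collections import Counter, defaultdict

# Fabricated raw telemetry: (player_id, day, mechanic) event tuples.
events = [
    ("p1", 0, "combat"), ("p1", 0, "crafting"), ("p1", 1, "combat"),
    ("p2", 0, "combat"), ("p2", 0, "combat"),
    ("p3", 0, "crafting"), ("p3", 2, "crafting"), ("p3", 2, "combat"),
]

def aggregate(events):
    """Collapse an event log into the features a prototype-rating model might use."""
    mechanics = Counter(e for _, _, e in events)
    days_seen = defaultdict(set)
    for player, day, _ in events:
        days_seen[player].add(day)
    players = len(days_seen)
    return {
        # share of all interactions each mechanic accounts for
        "mechanic_share": {m: c / len(events) for m, c in mechanics.items()},
        # fraction of players who came back after day 0
        "day1_retention": sum(1 for d in days_seen.values() if max(d) >= 1) / players,
        "events_per_player": len(events) / players,
    }

features = aggregate(events)
print(features)
```

With features like these computed across many games and versions, the labels (review scores, long-term retention) are what a model would regress against; that modeling step is the speculative part.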

replies(4): >>45006947 #>>45006996 #>>45007564 #>>45007896 #
uncircle ◴[] No.45007564[source]
> Game companies already collect heaps of data about players, which mechanics they interact with, which mechanics they don't, retention, play time, etc.

Yes, that's how games like Concord get made. A very successful approach: creating art based on data about what's popular, plus focus groups.

replies(3): >>45007928 #>>45008853 #>>45018264 #
georgeecollins ◴[] No.45007928[source]
I think you are saying data is no substitute for vision in design. Completely agree! At Playdom (Disney) they once tried to build a game from the ground up based on A/B testing. Do you know what that game was? No, you don't, because it was never released and it was terrible.

I think what the previous comment meant was that there is data on how players play, and that tends to be varied but fairly predictable.

replies(1): >>45010157 #
mlyle ◴[] No.45010157[source]
Yah. I think an AI playtester that could go "hey... this itch that lots of players seem to have doesn't get scratched often in your main gameplay loop" or "there's a valley 1/3rd of the way into the game where progression slows way down" or "that third boss is way too hard" would be genuinely useful.

AI/fuzzers can't yet get far enough in games without a lot of help. But I think that's because we don't have models really well suited to them.
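The "valley where progression slows way down" check, at least, doesn't need a sophisticated model. A hedged sketch, using fabricated per-level completion times in place of real playthrough logs, of flagging levels that take disproportionately long relative to the pace so far:

```python
def find_valleys(level_times, slowdown=2.0):
    """Flag level indices taking `slowdown`x longer than the running median."""
    flagged = []
    for i in range(1, len(level_times)):
        prior = sorted(level_times[:i])
        median = prior[len(prior) // 2]
        if level_times[i] > slowdown * median:
            flagged.append(i)
    return flagged

# Fabricated example: minutes to clear each level; level 4 is a wall.
times = [5, 6, 5, 7, 22, 6, 5]
print(find_valleys(times))  # → [4]
```

The expensive part isn't this analysis; it's getting an agent to produce the playthrough traces in the first place, which is exactly where current models fall short.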