3883 points kuroguro | 23 comments
comboy ◴[] No.26296735[source]
Holy cow. I'm a very casual gamer; I was excited about the game, but when it came out I decided I didn't want to wait that long and would hold off until they sorted it out. Two years later it still sucked, so I abandoned it. But... this?! This is unbelievable. I'm certain that many people left this game because of the waiting time. That's man-years wasted (in a way rather different than desired).

Parsing JSON?! I thought it was some session-finding magic in the network game logic. If this is true, it's the biggest WTF I've seen in the last few years, and we've just finished 2020.

Stunning work, with just the binary at hand. But how could R* not do this? GTAV is so full of great engineering, but if it was a CPU bottleneck, who works there that wouldn't be irked enough to nail it down? Trying to understand what's going on inside when something takes much longer than expected seems like a natural instinct even when performance isn't crucial. It was crucial here; it translates almost directly into profits. Unbelievable.

replies(5): >>26297228 #>>26297263 #>>26297997 #>>26298680 #>>26299917 #
1. dan-robertson ◴[] No.26297228[source]
I don’t think the lesson here is “be careful when parsing json” so much as it’s “stop writing quadratic code.” The quadratic behaviour in the JSON parsing was subtle: I think most people’s mental model of sscanf is that it would be linear in the number of bytes it actually scans, not linear in the length of the entire input. With smaller test data this would have been much harder to catch. The linear search was also an example of bad quadratic code that works fine for small inputs.

Some useful lessons might be:

- try to make tests more like prod.

- actually measure performance and try to improve it

- it’s very easy to write accidentally quadratic code, and the canonical example is this sort of triangular computation, where you do a linear amount of work over all the finished or remaining items for each item you process.

As I read the article, my guess was that it was some terrible synchronisation bug. E.g.: download a bit of data, hand it off to two subtasks in parallel, and each tries to take the same lock (on some shared data, or worse, on a hash bucket where a bad hash function makes collisions frequent); one task takes a while, the other finishes quickly, but more data can’t be downloaded until both are done; the slow task consistently wins the race on some machines, so downloads get blocked and only one CPU is used.
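The triangular pattern is easy to reproduce. A minimal sketch (hypothetical code, not Rockstar's): this loop looks linear, but on a libc whose sscanf calls strlen on its input, every iteration re-scans the whole remaining buffer, so parsing n numbers touches O(n²) bytes.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Parse a whitespace-separated list of integers with repeated sscanf calls.
// Looks linear, but on libcs whose sscanf does strlen(p) internally, each
// call walks the entire remaining buffer: O(n^2) total for n numbers.
std::vector<int> parse_ints_quadratic(const std::string& data) {
    std::vector<int> out;
    const char* p = data.c_str();
    int value = 0, consumed = 0;
    while (std::sscanf(p, "%d%n", &value, &consumed) == 1) {
        out.push_back(value);
        p += consumed;  // advance past the digits just read
    }
    return out;
}
```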

replies(6): >>26297354 #>>26297512 #>>26297996 #>>26298417 #>>26300929 #>>26301783 #
2. petters ◴[] No.26297354[source]
Is there some reason sscanf cannot be implemented without calling strlen?
replies(1): >>26297827 #
3. Nitramp ◴[] No.26297512[source]
- do not implement your own JSON parser (I mean, really?).

- if you do write a parser, do not use scanf (which is complex and subtle) for parsing, write a plain loop that dispatches on characters in a switch. But really, don't.
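The "plain loop" alternative really is only a few lines. A minimal sketch of a hand-rolled integer scanner in that style (hypothetical, with no overflow handling): one forward pass, no strlen, no format-string machinery.

```cpp
#include <cctype>
#include <cstddef>

// Scan a (possibly signed) decimal integer at the start of s.
// Returns the number of characters consumed, or 0 if no digits were found.
// Single forward pass: cost depends only on the characters actually read.
std::size_t scan_int(const char* s, long& out) {
    std::size_t i = 0;
    bool negative = false;
    if (s[i] == '-' || s[i] == '+') {
        negative = (s[i] == '-');
        ++i;
    }
    std::size_t digits_start = i;
    long value = 0;
    while (std::isdigit(static_cast<unsigned char>(s[i]))) {
        value = value * 10 + (s[i] - '0');  // no overflow check in this sketch
        ++i;
    }
    if (i == digits_start) return 0;  // no digits at all
    out = negative ? -value : value;
    return i;
}
```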

replies(2): >>26298311 #>>26301875 #
4. ddulaney ◴[] No.26297827[source]
It could be. The article acknowledges that possibility, and a cursory check of the musl sscanf [0] suggests that it does not call strlen (though I may have missed something). Whichever implementation Rockstar used, however, apparently does.

[0]: https://git.musl-libc.org/cgit/musl/tree/src/stdio/vfscanf.c

replies(3): >>26298967 #>>26301731 #>>26331946 #
5. ant6n ◴[] No.26297996[source]
I thought the lesson is "listen to your customers and fix the issues they complain about".
6. dan-robertson ◴[] No.26298311[source]
I think sscanf is subtle because when you think about what it does (for a given format string), it’s reasonably straightforward. The code in question did sscanf("%d", ...), which you read as “parse the digits at the start of the string into a number,” which is obviously linear. The subtlety is that sscanf doesn’t do what you expect. I think that “don’t use library functions that don’t do what you expect” is impossible advice.

I don’t use my own json parser but I nearly do. If this were some custom format rather than json and the parser still used sscanf, the bug would still happen. So I think json is somewhat orthogonal to the matter.

replies(2): >>26300188 #>>26310769 #
7. acdha ◴[] No.26298417[source]
> actually measure performance and try to improve it

This really rings truest to me: I find it hard to believe nobody ever plays their own game but I’d easily believe that the internal culture doesn’t encourage anyone to do something about it. It’s pretty easy to imagine a hostile dev-QA relationship or management keeping everyone busy enough that it’s been in the backlog since it’s not causing crashes. After all, if you cut “overhead” enough you might turn a $1B game into a $1.5B one, right?

replies(1): >>26299894 #
8. JdeBP ◴[] No.26298967{3}[source]
Slightly less cursory: https://news.ycombinator.com/item?id=26298300
9. Jach ◴[] No.26299894[source]
Lots of possibilities. Another one I imagined is that "only the senior devs know how to use a profiler, and they're stuck in meetings all the time."
replies(1): >>26302629 #
10. Nitramp ◴[] No.26300188{3}[source]
> The code in question did sscanf("%d", ...), which you read as “parse the digits at the start of the string into a number,” which is obviously linear.

I think part of the problem is that scanf has a very broad API and many features via its format string argument. I assume that's where the slowdown comes from here - scanf needs to implement a ton of features, some of which need the input length, and the implementor expected it to be run on short strings.

> The subtlety is that sscanf doesn’t do what you expect. I think that “don’t use library functions that don’t do what you expect” is impossible advice.

I don't know, at face value it seems reasonable to expect programmers to carefully check whether the library function they use does what they want it to do? How would you otherwise ever be sure what your program does?

There might be an issue that scanf doesn't document its performance characteristics well. But using a more appropriate, tighter function (atoi?) would have avoided the issue as well.

Or, you know, don't implement your own parser. JSON is deceptively simple, but there's still enough subtlety to screw things up, qed.
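For what it's worth, strtol is the tighter replacement here: unlike atoi, it reports where parsing stopped, so a caller can walk a buffer with no hidden re-scanning. A hypothetical sketch (not code from the article):

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// Parse a list of integers with strtol: each call reads only the digits it
// needs and reports the end position via `end`, so there is no hidden
// strlen and the whole pass is linear in the size of the input.
std::vector<long> parse_ints_linear(const std::string& data) {
    std::vector<long> out;
    const char* p = data.c_str();
    char* end = nullptr;
    for (;;) {
        long value = std::strtol(p, &end, 10);
        if (end == p) break;  // strtol consumed nothing: no more digits
        out.push_back(value);
        p = end;              // resume exactly where the last parse stopped
    }
    return out;
}
```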

replies(2): >>26300368 #>>26300865 #
11. thaumasiotes ◴[] No.26300368{4}[source]
> I assume that's where the slowdown comes from here - scanf needs to implement a ton of features, some of which need the input length, and the implementor expected it to be run on short strings.

I didn't get that impression. It sounded like the slowdown comes from the fact that someone expected sscanf to terminate when all directives were successfully matched, whereas it actually terminates when either (1) the input is exhausted; or (2) a directive fails. There is no expectation that you run sscanf on short strings; it works just as well on long ones. The expectation is that you're intentionally trying to read all of the input you have. (This expectation makes a little more sense for scanf than it does for sscanf.)

The scanf man page isn't very clear, but it looks to me like replacing `sscanf("%d", ...)` with `sscanf("%d\0", ...)` would solve the problem. "%d" will parse an integer and then dutifully read and discard the rest of the input. "%d\0" will parse an integer and immediately fail to match '\0', forcing a termination.

EDIT: on my xubuntu install, scanf("%d") does not clear STDIN when it's called, which conflicts with my interpretation here.

replies(1): >>26300451 #
12. JdeBP ◴[] No.26300451{5}[source]
No it would not. Think about what the function would see as its format string in both cases.

The root cause here isn't formatting or scanned items. It is C library implementations that implement the "s" versions of these functions by turning the input string into a nonce FILE object on every call, which requires an initial call to strlen() to set up the end of read buffer point. (C libraries do not have to work this way. Neither P.J. Plauger's Standard C library nor mine implement sscanf() this way. I haven't checked Borland's or Watcom's.)

See https://news.ycombinator.com/item?id=26298300 and indeed Roger Leigh six months ago at https://news.ycombinator.com/item?id=24460852 .
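A toy illustration of that implementation strategy (deliberately simplified, not any real libc's code): the string must first be wrapped in a stream-like buffer before the FILE-based scanning machinery can run, and finding the buffer's end means an up-front strlen over the whole input, even when the format only wants a few leading digits.

```cpp
#include <cstring>

// Stand-in for the nonce FILE object a string-backed sscanf builds.
struct toy_stream {
    const char* begin;
    const char* end;  // set via strlen: this is the hidden O(n) cost
};

// Toy sscanf(s, "%d", out): wraps the string in a stream first, the way the
// implementations discussed above do, then parses a handful of characters.
int toy_sscanf_int(const char* s, int* out) {
    toy_stream stream;
    stream.begin = s;
    stream.end = s + std::strlen(s);  // scans the WHOLE string on every call

    int value = 0;
    const char* p = stream.begin;
    while (p < stream.end && *p >= '0' && *p <= '9') {
        value = value * 10 + (*p - '0');
        ++p;
    }
    if (p == stream.begin) return 0;  // matched nothing
    *out = value;
    return 1;  // one item converted
}
```

Calling this in a loop over a 10 MB buffer makes the cost obvious: the `%d` work is constant, but the strlen setup is paid in full on every call.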

replies(1): >>26301721 #
13. dan-robertson ◴[] No.26300865{4}[source]
But sscanf does do what they want it to do by parsing numbers. The problem is that it also calls strlen. I’m still not convinced that it’s realistically possible to have people very carefully understand the performance characteristics of every function they use.

Every programmer I know thinks about performance of functions either by thinking about what the function is doing and guessing linear/constant, or by knowing what the data structure is and guessing (eg if you know you’re doing some insert operation on a binary tree, guess that it’s logarithmic), or by knowing that the performance is subtle (eg “you would guess that this is log but it needs to update some data on every node so it’s linear”). When you write your own library you can hopefully avoid having functions with subtle performance and make sure things are documented well (but then you also don’t think they should be writing their own library). When you use the C stdlib you’re a bit stuck. Maybe most of the functions there should just be banned from the codebase, but I would guess that would be hard.

14. woko ◴[] No.26300929[source]
> actually measure performance and try to improve it

This reminds me that I used to do that all the time when programming in Matlab. I stopped investigating performance bottlenecks after switching to Python; it is as if I traded performance profiling for unit testing in the switch.

I wonder if there are performance profilers which I could easily plug into PyCharm to do what I used to do with Matlab's default IDE (with a built-in profiler) and catch up with good programming practices. Or maybe PyCharm does that already and I was not curious enough to investigate.

15. pja ◴[] No.26301721{6}[source]
Yes, it looks that way. On the unix/linux side of things, glibc also implements sscanf() by converting the string to a FILE* object, as does the OpenBSD implementation.

It looks like this approach is taken by the majority of sscanf() implementations!

I honestly would not personally have expected sscanf() to implicitly call strlen() on every call.

16. pja ◴[] No.26301731{3}[source]
A lot of libc implementations seem to implement sscanf() this way: as well as the Windows libc ones mentioned above, I checked the OpenBSD and glibc implementations, and they worked the same way.
17. simias ◴[] No.26301783[source]
The JSON parsing is forgivable (I actually didn't know that scanf computed the length of the string for every call) but the deduplication code is a lot less so, especially in C++ where maps are available in the STL.

It also comforts me in my decision to never use scanf, instead preferring manual parsing with strtok_r, strtol, and friends. scanf just isn't robust and flexible enough.
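For comparison, the hash-based deduplication is only a few lines of standard C++. A hypothetical sketch (the Item fields are made up, not the game's actual data layout):

```cpp
#include <cstdint>
#include <string>
#include <unordered_set>
#include <vector>

struct Item {               // hypothetical stand-in for the parsed entries
    std::uint64_t hash;     // unique id used for de-duplication
    std::string payload;
};

// Keep the first occurrence of each id. std::unordered_set membership tests
// are O(1) on average, so the whole pass is linear -- versus the quadratic
// "scan everything seen so far for each new item" approach.
std::vector<Item> dedupe(const std::vector<Item>& items) {
    std::vector<Item> out;
    std::unordered_set<std::uint64_t> seen;
    for (const Item& item : items) {
        if (seen.insert(item.hash).second)  // .second is false on duplicates
            out.push_back(item);
    }
    return out;
}
```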

18. thw0rted ◴[] No.26301875[source]
This is probably good advice, but it's not even the relevant lesson; it's one level down from the real problem. When your game spends six minutes on a loading screen, *profile* the process first: you can't optimize what you haven't measured. Once you've identified that JSON parsing is slow, you can start worrying about how to fix it (and the fix, obviously, should be "find and use a performant, well-tested library").
19. acdha ◴[] No.26302629{3}[source]
I could easily imagine variations of that where this is in maintenance mode with a couple of junior programmers because the senior ones either burnt out or moved on to another project. I’ve definitely gotten the impression that most games studios have roughly the same attitude towards their employees as a strip-miner has towards an Appalachian hilltop.
replies(1): >>26303239 #
20. disgruntledphd2 ◴[] No.26303239{4}[source]
If this were anyone else but Rockstar, I'd agree with you.

But Rockstar essentially only have GTA and Red Dead to take care of, it's not like they're making an annual title or something :)

replies(1): >>26303402 #
21. acdha ◴[] No.26303402{5}[source]
True, but they could still be understaffing and have their senior people working on the next big version rather than maintenance. It’s definitely penny wise, pound foolish no matter the exact details.
22. azernik ◴[] No.26310769{3}[source]
> If this were some custom format rather than json and the parser still used sscanf, the bug would still happen. So I think json is somewhat orthogonal to the matter.

What's the point of using standard formats if you're not taking advantage of off-the-shelf software for handling it?

23. SolarNet ◴[] No.26331946{3}[source]
Part of this is that game companies are notorious for re-implementing standard libraries for "performance". I suspect both shitty implementations of sscanf and the not-a-hashmap stem from this.