
3883 points by kuroguro | 7 comments
comboy | No.26296735
Holy cow. I'm a very casual gamer; I was excited about the game, but when it came out I decided I didn't want to wait that long and would hold off until they sorted it out. Two years later it still sucked, so I abandoned it. But... this?! This is unbelievable. I'm certain that many people left this game because of the waiting times. That's man-years wasted (in a way different than desired).

Parsing JSON?! I thought it was some network magic for finding game sessions. If this is true, it's the biggest WTF I've seen in the last few years, and we've just finished 2020.

Stunning work with just the binary at hand. But how could R* not do this themselves? GTA V is so full of great engineering. If it was a CPU bottleneck, who works there who wouldn't be irked enough to nail it down? It seems like a natural thing to look at what's going on inside when something takes much longer than expected, even when performance isn't crucial. Here it was crucial, and it almost directly translates to profits. Unbelievable.

replies(5): >>26297228 #>>26297263 #>>26297997 #>>26298680 #>>26299917 #
dan-robertson | No.26297228
I don’t think the lesson here is “be careful when parsing JSON” so much as it’s “stop writing quadratic code.” The quadratic behaviour in the JSON parsing was subtle. I think most people’s mental model of sscanf is that it’s linear in the number of bytes it actually scans, not in the length of the whole input. With smaller test data this may have been harder to catch. The linear search was also an example of bad quadratic code that works fine for small inputs.

Some useful lessons might be:

- try to make tests more like prod.

- actually measure performance and try to improve it

- it’s very easy to write accidentally quadratic code, and the canonical example is this sort of triangular computation, where you do a linear amount of work over all the finished (or remaining) items for each item you process (see the sketch below).
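For illustration only (not the actual game code, and assuming a made-up item type), the triangular pattern looks something like this in C:

    #include <stddef.h>

    /* Purely illustrative: for each incoming item we scan every item
       accepted so far, so the total work is 1 + 2 + ... + n, a
       triangular sum, i.e. O(n^2). */
    struct item { long id; };

    size_t dedup(const struct item *in, size_t n, struct item *out)
    {
        size_t kept = 0;
        for (size_t i = 0; i < n; i++) {
            int seen = 0;
            for (size_t j = 0; j < kept; j++) {   /* linear scan per item */
                if (out[j].id == in[i].id) { seen = 1; break; }
            }
            if (!seen)
                out[kept++] = in[i];
        }
        return kept;   /* fine for 100 items, painful for 60,000 */
    }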

As I read the article, my guess was that it was some terrible synchronisation bug: e.g. download a bit of data, hand it off to two subtasks in parallel, each of which tries to take the same lock on something (some shared data, or worse, a hash bucket where a bad hash function makes collisions frequent); one task takes a while, the other is quick but more data can’t be downloaded until it’s done; the slow task consistently wins the race on some machines, so downloads get blocked and only one CPU is used.

replies(6): >>26297354 #>>26297512 #>>26297996 #>>26298417 #>>26300929 #>>26301783 #
Nitramp | No.26297512
- do not implement your own JSON parser (I mean, really?).

- if you do write a parser, do not use scanf (which is complex and subtle) for parsing; write a plain loop that dispatches on characters in a switch, something like the sketch below. But really, don't.
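For the integer case, such a character-dispatching loop could be as simple as this (a sketch only; a full JSON tokenizer would switch on '{', '"', digits, and so on):

    #include <ctype.h>

    /* Sketch only: parse an optionally signed decimal integer at *p and
       advance *p past it. One pass over the digits, no hidden strlen. */
    static long parse_int(const char **p)
    {
        const char *s = *p;
        long sign = 1, val = 0;

        if (*s == '-') { sign = -1; s++; }
        while (isdigit((unsigned char)*s)) {
            val = val * 10 + (*s - '0');   /* overflow handling omitted for brevity */
            s++;
        }
        *p = s;
        return sign * val;
    }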

replies(2): >>26298311 #>>26301875 #
dan-robertson | No.26298311
I think sscanf is subtle precisely because what you think it does (for a given format string) is reasonably straightforward. The code in question did sscanf("%d", ...), which you read as “parse the digits at the start of the string into a number,” which is obviously linear. The subtlety is that sscanf doesn’t do what you expect. I think that “don’t use library functions that don’t do what you expect” is impossible advice.
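For concreteness, the pattern looks something like this (a sketch, not the actual game code): each call appears to touch only the digits at the front, but if the implementation does a strlen of the remaining buffer on every call, the loop as a whole is quadratic in the size of the document.

    #include <stdio.h>

    /* Sketch: parse a long buffer of whitespace-separated integers.
       Each sscanf call looks like O(digits), but if sscanf internally
       does strlen(p) to wrap the string in a FILE, each call is
       O(remaining length) and the whole loop is O(n^2). */
    void parse_all(const char *buf)
    {
        const char *p = buf;
        int value, consumed;

        while (sscanf(p, "%d%n", &value, &consumed) == 1) {
            /* ... store value somewhere ... */
            p += consumed;   /* advance past the digits just read */
        }
    }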

I don’t use my own json parser but I nearly do. If this were some custom format rather than json and the parser still used sscanf, the bug would still happen. So I think json is somewhat orthogonal to the matter.

replies(2): >>26300188 #>>26310769 #
Nitramp | No.26300188
> The code in question did sscanf("%d", ...), which you read as “parse the digits at the start of the string into a number,” which is obviously linear.

I think part of the problem is that scanf has a very broad API and many features via its format string argument. I assume that's where the slowdown comes from here - scanf needs to implement a ton of features, some of which need the input length, and the implementor expected it to be run on short strings.

> The subtlety is that sscanf doesn’t do what you expect. I think that “don’t use library functions that don’t do what you expect” is impossible advice.

I don't know; at face value it seems reasonable to expect programmers to carefully check whether the library functions they use do what they want them to do? How else would you ever be sure what your program does?

There might be an issue in that scanf doesn't document its performance characteristics well. But using a more appropriate, tighter function (atoi?) would have avoided the issue as well.
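For instance (a sketch, using strtol rather than the atoi mentioned above so the parse position can be advanced), a bounded parse only does the work you'd expect:

    #include <stdlib.h>

    /* Sketch: strtol only looks at the digits in front of it and reports
       where it stopped, so parsing k digits costs O(k) regardless of how
       long the rest of the buffer is. */
    void parse_numbers(const char *buf)
    {
        const char *p = buf;
        char *end;

        for (;;) {
            long value = strtol(p, &end, 10);
            if (end == p)          /* no number here: stop */
                break;
            /* ... use value ... */
            p = end;               /* continue right after the number */
        }
    }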

Or, you know, don't implement your own parser. JSON is deceptively simple, but there's still enough subtlety to screw things up, qed.

replies(2): >>26300368 #>>26300865 #
thaumasiotes | No.26300368
> I assume that's where the slowdown comes from here - scanf needs to implement a ton of features, some of which need the input length, and the implementor expected it to be run on short strings.

I didn't get that impression. It sounded like the slowdown comes from the fact that someone expected sscanf to terminate when all directives were successfully matched, whereas it actually terminates when either (1) the input is exhausted; or (2) a directive fails. There is no expectation that you run sscanf on short strings; it works just as well on long ones. The expectation is that you're intentionally trying to read all of the input you have. (This expectation makes a little more sense for scanf than it does for sscanf.)

The scanf man page isn't very clear, but it looks to me like replacing `sscanf("%d", ...)` with `sscanf("%d\0", ...)` would solve the problem. "%d" will parse an integer and then dutifully read and discard the rest of the input. "%d\0" will parse an integer and immediately fail to match '\0', forcing a termination.

EDIT: on my xubuntu install, scanf("%d") does not clear STDIN when it's called, which conflicts with my interpretation here.

replies(1): >>26300451 #
JdeBP | No.26300451
No it would not. Think about what the function would see as its format string in both cases.

The root cause here isn't formatting or scanned items. It is C library implementations that implement the "s" versions of these functions by turning the input string into a nonce FILE object on every call, which requires an initial call to strlen() to set up the end-of-read-buffer pointer. (C libraries do not have to work this way. Neither P.J. Plauger's Standard C library nor mine implements sscanf() this way. I haven't checked Borland's or Watcom's.)
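Roughly, and only as a sketch of the idea (not any particular libc's actual code), the "s" variants do something like the following, and that up-front strlen() is what makes repeated calls on a huge buffer quadratic:

    #include <stdarg.h>
    #include <string.h>

    /* Sketch only; real implementations (glibc, the BSDs) differ in detail.
       The point: before any parsing happens, the entire input string is
       measured with strlen() so it can be presented as a fixed-size,
       FILE-like read buffer. */
    struct fake_file {
        const char *cur;
        const char *end;   /* setting this is what costs strlen(s) */
    };

    int sketch_sscanf(const char *s, const char *fmt, ...)
    {
        struct fake_file f;
        f.cur = s;
        f.end = s + strlen(s);   /* O(length of s) on every call */

        va_list ap;
        va_start(ap, fmt);
        /* ... hand 'f' to the shared vfscanf-style core ... */
        va_end(ap);
        return 0;                /* placeholder result */
    }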

See https://news.ycombinator.com/item?id=26298300 and indeed Roger Leigh six months ago at https://news.ycombinator.com/item?id=24460852 .

replies(1): >>26301721 #
dan-robertson | No.26300865
But sscanf does do what they want it to do by parsing numbers. The problem is that it also calls strlen. I’m still not convinced that it’s realistically possible to have people very carefully understand the performance characteristics of every function they use.

Every programmer I know reasons about the performance of a function in one of three ways: by thinking about what it does and guessing linear or constant; by knowing the underlying data structure and guessing (e.g. an insert into a binary tree is probably logarithmic); or by knowing that the performance is subtle (e.g. “you would guess this is logarithmic, but it updates some data on every node, so it's linear”). When you write your own library you can hopefully avoid functions with subtle performance and make sure things are documented well (but then, you also don't think they should be writing their own library). When you use the C stdlib you're a bit stuck. Maybe most of the functions there should just be banned from the codebase, but I'd guess that would be hard.

pja | No.26301721
Yes, it looks that way. On the unix/linux side of things, glibc also implements sscanf() by converting the string to a FILE object, as does the OpenBSD implementation.

It looks like this approach is taken by the majority of sscanf() implementations!

I honestly would not have expected sscanf() to implicitly call strlen() on every call.

azernik | No.26310769
> If this were some custom format rather than json and the parser still used sscanf, the bug would still happen. So I think json is somewhat orthogonal to the matter.

What's the point of using standard formats if you're not taking advantage of off-the-shelf software for handling them?