3883 points kuroguro | 76 comments

breakingcups ◴[] No.26296724[source]
It is absolutely unbelievable (and unforgivable) that a cash cow such as GTA V has a problem like this present for over 6 years and it turns out to be something so absolutely simple.

I do not agree with the sibling comment saying that this problem only looks simple and that we are missing context.

This online game mode made $1 billion in 2017 alone.

Tweaking two functions to go from a load time of 6 minutes to less than two minutes is something any developer worth their salt should be able to do in a codebase like this equipped with a good profiler.

Instead, someone with no source code managed to do this to an obfuscated executable loaded with anti-cheat measures.

The fact that this problem is caused by Rockstar's excessive microtransaction policy (the 10MB of JSON causing this bottleneck are all available microtransaction items) is the cherry on top.

(And yes, I might also still be salty because their parent company unjustly DMCA'd re3 (https://github.com/GTAmodding/re3), the reverse engineered version of GTA III and Vice City. A twenty-year-old game. Which wasn't even playable without purchasing the original game.)

replies(40): >>26296812 #>>26296886 #>>26296970 #>>26297010 #>>26297087 #>>26297123 #>>26297141 #>>26297144 #>>26297184 #>>26297206 #>>26297323 #>>26297332 #>>26297379 #>>26297401 #>>26297448 #>>26297480 #>>26297806 #>>26297961 #>>26298056 #>>26298135 #>>26298179 #>>26298213 #>>26298234 #>>26298624 #>>26298682 #>>26298777 #>>26298860 #>>26298970 #>>26299369 #>>26299512 #>>26299520 #>>26300002 #>>26300046 #>>26301169 #>>26301475 #>>26301649 #>>26301961 #>>26304727 #>>26305016 #>>26311396 #
1. masklinn ◴[] No.26296886[source]
> The fact that this problem is caused by Rockstar's excessive microtransaction policy (the 10MB of JSON causing this bottleneck are all available microtransaction items) is the cherry on top.

For what it's worth, 10MB of JSON is not much. Duplicating the example entry from the article 63000 times (replacing `key` with a uuid4 for uniqueness) yields 11.5MB of JSON.

Deserialising that JSON then inserting each entry in a dict (indexed by key) takes 450ms in Python.
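
For reference, a quick sketch of that measurement (the entry fields here are invented stand-ins for the article's example):

    import json, time, uuid

    # ~63k entries keyed by uuid4; the article's fuller entries push this past 10MB
    entries = [{"key": str(uuid.uuid4()), "price": 42500, "statName": f"ITEM_{i}"}
               for i in range(63000)]
    blob = json.dumps(entries)
    print(f"{len(blob) / 1e6:.1f} MB of JSON")

    start = time.perf_counter()
    items = {e["key"]: e for e in json.loads(blob)}
    print(f"parsed + indexed in {time.perf_counter() - start:.3f}s")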

But as Bruce Dawson oft notes, quadratic behaviour is the sweet spot because it's "fast enough to go into production, and slow enough to fall over once it gets there". Here odds are there were only dozens or hundreds of items during dev so nobody noticed it would become slow as balls beyond a few thousand items.

Plus load times are usually the one thing you start ignoring early on, just start the session, go take a coffee or a piss, and by the time you're back it's loaded. Especially after QA has flagged slow load times half a dozen times, the devs (with fast machines and possibly a smaller development dataset) go "works fine", and QA just gives up.

replies(11): >>26297203 #>>26297314 #>>26298126 #>>26298269 #>>26298511 #>>26298524 #>>26300274 #>>26301081 #>>26302098 #>>26305727 #>>26306126 #
2. hanniabu ◴[] No.26297203[source]
That QA bit is too true, "it works for me!" shrug.
3. ldng ◴[] No.26297314[source]
But is quadratic the real issue? Isn't that a developer answer?

The best algorithms for small, medium, and large sizes are not the same, and each generally behaves poorly in the other cases. And what is small? Medium? Large?

The truth is that there is no one size fits all and assumptions need to be reviewed periodically and adapted accordingly. And they never are... Ask a DBA.

replies(4): >>26297536 #>>26299324 #>>26300073 #>>26300359 #
4. gridspy ◴[] No.26297536[source]
quadratic is a fancy way of saying "this code is super fast with no data, super slow once you have a decent amount"

The problem is that when you double the amount of stuff in the JSON document, you quadruple (or more) the scanning penalty in both the string and the list.

Why quadruple? Because you end up scanning a list which is twice as long. You have to scan that list twice as many times. 2x2 = 4. The larger list no longer fits in the fast (cache) memory, among other issues. The cache issue alone can add another 10x (or more!) penalty.
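
To see the effect concretely, here's a toy Python version of that access pattern (not Rockstar's code, just the same shape: a full scan of a growing list on every insert):

    import time

    def dedup_by_scanning(n):
        seen = []
        for i in range(n):
            if i not in seen:   # linear scan of the whole list, every time
                seen.append(i)

    for n in (5000, 10000):
        start = time.perf_counter()
        dedup_by_scanning(n)
        print(f"n={n}: {time.perf_counter() - start:.2f}s")
    # doubling n roughly quadruples the runtime, before cache effects even kick in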

replies(2): >>26301042 #>>26301492 #
5. hnick ◴[] No.26298126[source]
> Plus load times are usually the one thing you start ignoring early on, just start the session, go take a coffee or a piss, and by the time you're back it's loaded.

In GTA V, when I tried to enjoy multiplayer with my friends the abysmal load times were what killed it for me.

You actually have to load into the game world - which takes forever - before having a friend invite you to their multiplayer world - which takes forever, again.

So both a coffee, and a piss. Maybe they fixed that now?

replies(5): >>26298242 #>>26298478 #>>26298536 #>>26298577 #>>26298673 #
6. simias ◴[] No.26298242[source]
I agree. I played GTA online for a bit and quite enjoyed it but I haven't touched it in a while and the insane loading times are a big reason why.

It kind of baffles me that they haven't bothered to fix this trivial issue when the result is to cut 4 entire minutes of loading time.

replies(1): >>26300141 #
7. nullserver ◴[] No.26298269[source]
Was the new guy at a startup. Soon noticed that Chuck Norris was in our compiled JavaScript. Turned out someone had included the entire test suite in the production deploy.

Had been like that for nearly a year. A few minutes of work brought our client JS file from 12MB down to less than 1MB.

replies(4): >>26298301 #>>26298716 #>>26299932 #>>26300594 #
8. jogjayr ◴[] No.26298301[source]
> Soon noticed that Chuck Norris was in our compiled JavaScript

Is that a library? Or the string "Chuck Norris"?

replies(3): >>26298370 #>>26298499 #>>26300517 #
9. jeffgreco ◴[] No.26298370{3}[source]
Or the actor/martial artist?
replies(1): >>26304936 #
10. agret ◴[] No.26298478[source]
Then when you want to actually do an activity like a deathmatch, you have to wait for matchmaking and then the loading - takes forever. Once you are finally in a match it's okay, but as soon as the match ends you have to wait for the world to load again and then queue again, which takes bloody forever. Spend 2hrs playing the game and get only a few matches; more time spent looking at loading screens than actually playing anything.
replies(2): >>26300300 #>>26301380 #
11. kevincox ◴[] No.26298499{3}[source]
I was assuming that someone used the string as a name in the test?
replies(1): >>26299073 #
12. nicoburns ◴[] No.26298511[source]
You mention quadratic behaviours and there's probably some truth to that, but it seems to me that it's partly a C++ problem. In any other language nobody would even consider hacking up JSON parsing using a string function. They'd use the stdlib function if available or import a library, and this problem wouldn't exist.
replies(3): >>26298626 #>>26300323 #>>26300828 #
13. beached_whale ◴[] No.26298524[source]
It would be interesting to see what JSON library they used that parses numbers with scanf. Nothing like a painter's-algorithm scenario to really slow things down - and JSON numbers are super simple and don't need all that work. That is hundreds of MBs of unneeded scanning for terminating zeroes.
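
The shape of that is easy to mimic: below is a hedged Python sketch where each number parse first walks the entire remaining buffer (the way sscanf's internal strlen pass does, per the article's analysis) before reading a single value:

    import time

    def parse_ints(buf, measure_first):
        out, pos = [], 0
        while pos < len(buf) - 1:        # -1 for the trailing "\0"
            if measure_first:
                buf.index("\0", pos)     # strlen stand-in: walk to the terminator
            end = buf.index(",", pos)
            out.append(int(buf[pos:end]))
            pos = end + 1
        return out

    blob = ",".join(["12345"] * 20000) + ",\0"
    for measure in (False, True):
        start = time.perf_counter()
        parse_ints(blob, measure)
        print(f"measure_first={measure}: {time.perf_counter() - start:.3f}s")
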
replies(1): >>26301172 #
14. zmix ◴[] No.26298536[source]
> So both a coffee, and a piss.

Reminds me of loading "G.I. Joe" (from Epyx) on the C64 with a 1541 floppy drive. However, the long loads came after every death, and meant you also had to swap 4 disks.

replies(1): >>26298585 #
15. ajacksified ◴[] No.26298577[source]
They didn't fix it. I tried a few days ago, because it's a really fun game... except for these seemingly easy-to-fix issues that are huge barriers.
16. hnick ◴[] No.26298585{3}[source]
I remember as a kid I went to someone's birthday party in the 80s and we wanted to play a karate themed game on something that used a cassette tape. It took so long to load we went and played outside!

To be fair to GTA V, I don't think my installation was on an SSD because the game was 50GB or something at the time (now it's 95GB?). But then, when it released, SSDs were not as cheap or widespread as they are now, so that's their problem. The linked article shows the initial load to be much shorter, which did not match my experience.

replies(4): >>26300021 #>>26301730 #>>26302175 #>>26303483 #
17. nitwit005 ◴[] No.26298626[source]
A lot of other languages use the C standard library functions to parse floats (and to do various trigonometric functions), so they may be more similar than you imagine.
replies(1): >>26308398 #
18. thaumasiotes ◴[] No.26298673[source]
> You actually have to load into the game world - which takes forever - before having a friend invite you to their multiplayer world - which takes forever, again.

Is that... the same problem? Is microtransaction data different in your friend's multiplayer world than it is in the normal online world?

replies(1): >>26298870 #
20. hnick ◴[] No.26298870{3}[source]
The article mentions story mode loading as well as online loading, but as I mentioned in another comment the story mode time shown there is much lower than what I experienced, probably because SSDs are now standard and were rarer in 2013 (I could not justify 50GB+ for this one game at the time). So it may be a mixture of factors.
21. hnick ◴[] No.26299073{4}[source]
Yes, I think they meant they saw a weird name that stood out (I've seen Donald Duck at work) and investigated further, finding it was test data.
replies(1): >>26300395 #
23. rsj_hn ◴[] No.26299932[source]
related: Guy Fieri in node https://nodesource.com/blog/is-guy-fieri-in-your-node-js-pac...
24. jcims ◴[] No.26300021{4}[source]
Not to one-up, but we didn't have any storage for our C64 for the first year or so. We would team up to enter a game from Byte or whatever (one reader, one typer, one proofreader) and then protect that thing with our lives all weekend to keep people from unplugging it. The machine-code games were the easiest to type, but if they didn't work you were kind of hosed lol.
replies(1): >>26334676 #
25. oivey ◴[] No.26300073[source]
In the small case here, there is no meaningful difference in speed between parsers. Using a quadratic algorithm has no advantage and is just an incorrect design.
26. jjoonathan ◴[] No.26300141{3}[source]
Back in dialup/DSL days I discovered a texture compression issue in America's Army (the free US Army game) that doubled its download/install size. Typical download times were about a day and the resume mechanism was poor, so this had the potential to save a lot of grief, not to mention hosting costs. I emailed them, they banned me for hacking, and the next version still had the same issue. Shrug.
replies(2): >>26301028 #>>26344626 #
27. Agentlien ◴[] No.26300274[source]
For the online games I worked on (a few of the recent NFS games) the items database was similar to the final set quite early in production and we kept an ongoing discussion about load times.

I really liked this article, but I am a bit surprised that this made it into production. I have seen a few instances of this type of slowdown live on for a very long time, but they tend to be in compile times or development workflow, not in the product itself.

28. ZuLuuuuuu ◴[] No.26300300{3}[source]
Judging from your word choice "deathmatch" and your experience with long loading/matchmaking times I guess you might be a fellow Quake Champions player. Even if you are not, I agree that long loading times are a mood killer when you just want to play a couple of quick matches after work in your limited free time. It is even worse when you know the game's development is abandoned and it will never get fixed, even though you enjoy the game itself.
replies(1): >>26300591 #
29. raverbashing ◴[] No.26300323[source]
But C++ has had hash_set/hash_map since forever (or just set/map, which are still better than this).

I'm sure there are libraries to parse JSON in C++, or at least they should have built something internally if it's critical. Instead they had someone less experienced build it and never stress-tested it?
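
In whatever language, the duplicate check is the same one-container fix. A rough Python sketch of the difference (the uuid keys are invented for illustration; in C++ a std::unordered_set does the same job):

    import time, uuid

    keys = [str(uuid.uuid4()) for _ in range(10000)]

    # the game's pattern: scan the whole array for a duplicate before every insert
    start = time.perf_counter()
    seen_list = []
    for k in keys:
        if k not in seen_list:
            seen_list.append(k)
    print(f"linear scan: {time.perf_counter() - start:.2f}s")

    # what a hash container buys you: O(1) average membership tests
    start = time.perf_counter()
    seen_set = set()
    for k in keys:
        if k not in seen_set:
            seen_set.add(k)
    print(f"hash set:    {time.perf_counter() - start:.4f}s")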

replies(1): >>26301441 #
30. masklinn ◴[] No.26300359[source]
> But is quadratic the real issue?

Yes. That is literally the entirety of the issue: online loading takes 5 minutes because there are two accidentally quadratic loops spinning their wheels.

> The best algorithms for small, medium, and large sizes are not the same, and each generally behaves poorly in the other cases.

“Behaves poorly” tends to have very different consequences: an algorithm for large sizes tends to have significant setup and thus constant overhead for small sizes. This is easy to notice and remediate.

A naive quadratic algorithm will blow up your production unless you dev with production data, and possibly even then (if production data keeps growing long after the initial development).

31. wiz21c ◴[] No.26300395{5}[source]
I've seen "Carmen Electra" myself :-) And other funny (more gross) names in the dark corners of huge databases...

I also had a customer discover a playboy center page at the end of a test we sent to them. One of the dev thought it'd be a nice reward. Things went very bad from him right after the phone call...

replies(1): >>26300533 #
32. nullserver ◴[] No.26300517{3}[source]
The string. Used as factory test data for populating objects in tests.

It certainly caught my attention.

33. nullserver ◴[] No.26300533{6}[source]
Always assume others have the sense of humor of a 99-year-old stick of TNT.

Had someone at work mention they needed to drop off a dog at grooming on the way to pick up Chinese.

I had to walk away for a bit, as I couldn’t hold it in, but didn’t know if that sort of humor would fly with that crowd.

34. franga2000 ◴[] No.26300591{4}[source]
GTA V has a deathmatch mode and the parent comment sounds like it's talking about that. Especially the "once it's over, you need to load into the primary session, then wait for matchmaking, then wait for loading again" sounds exactly like GTA V.
replies(1): >>26300661 #
35. gordaco ◴[] No.26300594[source]
Ha, this is one of the reasons why I also include outlandish and wrong-looking stuff in unit tests. If we see it where it doesn't belong, then we know for sure that we are doing something wrong.

Most often I use unicode strings in unexpected alphabets (i.e. from languages that are not supported by our application and are not the mother tongue of any developer on our team). This includes Chinese, Malayalam, Arabic and a few more. There was a time when I wanted to test the "wrong data" cases for some deserialising function, and I was part annoyed and part amused to discover that Integer.parseInt("٤٣٠٤٦٧٢١") in Java parses the Arabic digits correctly even without specifying any locale.
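
(Python's int() behaves the same way, for what it's worth - any Unicode decimal digit is accepted without a locale:)

    print(int("٤٣٠٤٦٧٢١"))   # Arabic-Indic digits -> 43046721
    print(int("４２"))        # full-width digits -> 42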

replies(1): >>26301179 #
36. ZuLuuuuuu ◴[] No.26300661{5}[source]
Ah I see, thanks for the clarification.
37. secondcoming ◴[] No.26300828[source]
Rapidjson
38. jack_riminton ◴[] No.26301028{4}[source]
That's hilarious. Out of interest who in the company did you email?
replies(1): >>26302363 #
39. jack_riminton ◴[] No.26301042{3}[source]
Great explanation. Thanks
40. john_minsk ◴[] No.26301081[source]
I heard that the Chrome team has had this KPI from very early on - how much time it takes for Chrome to load - and it has stayed the same to date, i.e. they can't make any changes that increase this metric. Very clever if you ask me.
replies(1): >>26301324 #
41. dijit ◴[] No.26301172[source]
Unlikely to be a library; either it's libc or it's homegrown.

The only thing most game companies do when it comes to external libraries is copy the source code into their repo and never update it, ever.

OpenSSL is this way: it's a required installation for PlayStation, but debugging it is seriously hard, and Perforce (the game industry's version control of choice) can't handle external dependencies. Not to mention Visual Studio (the game industry's IDE of choice..) doesn't handle debugging external libraries well either.

So, most game studios throw up their hands, say "fuck it", and practice a heavy amount of NIH.

replies(2): >>26301816 #>>26308490 #
42. rasz ◴[] No.26301324[source]
Google lately "optimized" Chrome "time for the first page to load" by no longer waiting for extensions to initialize properly. First website you load bypasses all privacy/ad blocking extensions.
replies(4): >>26301402 #>>26301490 #>>26304875 #>>26312216 #
43. yholio ◴[] No.26301380{3}[source]
> more time spent looking at loading screens than actually playing anything.

This could easily compete for the most expensive bug in history, up there with the Pentium bug. It might have halved the revenue potential of a billion-dollar franchise.

44. raf42 ◴[] No.26301402{3}[source]
Thank you for confirming this, I thought I was going crazy seeing it happen a bunch recently. I assumed my system was just on the fritz.
45. nicoburns ◴[] No.26301441{3}[source]
>I'm sure there are libraries to parse JSON in C++

There certainly are, but adding a library is much more difficult in C++ than in pretty much any other language, which seems to tempt people into hacky self-built parsing when they really ought to know better.

replies(1): >>26312256 #
46. Cthulhu_ ◴[] No.26301490{3}[source]
Yeah, I think that's the kind of odd behaviour those KPIs end up causing; they 'cheat' the benchmark by pushing work off the measured path, like loading extensions later.

I mean I can understand it, a lot of extensions don't need to be on the critical path.

But at the same time, I feel like Chrome could handle extensions a lot better, such as a stricter review policy and compiling them to wasm in the extension store.

47. ldng ◴[] No.26301492{3}[source]
> quadratic is a fancy way of saying "this code is super fast with no data, super slow once you have a decent amount"

Well, that is an abuse of the term by people who sometimes don't actually know what it really means. Up to a point, quadratic IS faster than linear, after all. Too many developers love to abuse the word blindly.

If it is badly tested with no data, it is badly tested with no data. Period. Not "quadratic".

> The problem is that when you double the amount of stuff in the JSON document, you quadruple (or more) the scanning penalty in both the string and the list.

My point was precisely that it depends on the data, and initial assumptions are to be routinely revised. I was making a general point.

Maybe the guy was pinky-sworn that the JSON would hardly change and that the items were supposed to be ordered, sequential, and no more than 101. For all you know it is even documented, and nobody cared/remembered/checked when updating the JSON. But we don't know; obfuscated code doesn't come with comments and context...

Or, it is actually a real rookie mistake. It probably was, but we don't have all the facts.

replies(2): >>26301691 #>>26313492 #
48. mytherin ◴[] No.26301691{4}[source]
> Well, that is an abuse of the term by people who sometimes don't actually know what it really means. Up to a point, quadratic IS faster than linear, after all. Too many developers love to abuse the word blindly.

There is absolutely no guarantee that a quadratic algorithm has to be faster than a linear algorithm for small N. It can be, in some situations for some algorithms, but the complexity class of the algorithm has nothing to do with that. A quadratic algorithm may well be slower than a linear algorithm for any N.

The only thing the complexity class tells us is that starting from some N the linear algorithm is faster than the quadratic algorithm. That N could be 0, it could be 100, or it could be 1 billion.

In my experience it's usually between 0 and 1000, but again, that depends. The complexity class makes no such guarantees. The complexity class tells us the general shape of the performance graph, but not exactly where the graphs will intersect: that depends on the constants.
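
A throwaway way to see that last point, with completely made-up cost constants:

    # toy cost models: the crossover point is set by the constants, not the class
    linear    = lambda n: 5000 + 200 * n   # heavy setup, e.g. building a hash index
    quadratic = lambda n: 0.05 * n * n     # tiny constant, e.g. a tight in-cache scan

    crossover = next(n for n in range(1, 10**6) if quadratic(n) > linear(n))
    print(crossover)   # 4025 with these constants; change them and it moves anywhere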

> If it is badly tested with no data, it is badly tested with no data. Period. Not "quadratic".

It is both. The problem is that the algorithm has quadratic complexity. The fact that it was badly tested caused this fact to remain hidden while writing the code, and turn into a real problem later on.

49. mst ◴[] No.26301730{4}[source]
Oh gods, now I'm reminded of a cassette based game where there was one enemy where if he got a solid hit on you, you got bumped back to the beginning of the level.

Which meant the game displayed "rewind to mark 500 and then hit play" and you had to do that to restart lolsob.

replies(1): >>26322973 #
50. thw0rted ◴[] No.26301816{3}[source]
In another decade, there's going to be a story here about somebody getting their hands on the original source for this game, and the JSON parser will be a 10-line function with "//TODO: optimize later" at the top.
51. thefz ◴[] No.26302098[source]
> Here odds are there were only dozens or hundreds of items during dev so nobody noticed it would become slow as balls beyond a few thousand items.

Might be, but this particular issue has been raised by thousands of players and ignored for *years*.

replies(1): >>26305481 #
52. vagrantJin ◴[] No.26302175{4}[source]
And yet we can say the install bundle is 50 gigs with a straight face? I remember not long ago when games that were 8 gigs caused mayhem.

I suppose devs don't care about that as long as their QA allows it.

replies(1): >>26309481 #
53. jjoonathan ◴[] No.26302363{5}[source]
I was only able to find contact information for one person who I knew was probably technical (from their posts), so I sent it to them.

I never learned the "other side" of this story, but a few years later the same dev team tried to recruit me at a CS contest, to which I politely declined.

More details: I was young, without a credit card, and gaming on a Mac. AA was free and Mac compatible. For a while, anyway -- apparently Mac ports of Unreal Engine games were approximately all done by a single very productive contractor, and from what I understand the US Army, uhh, stopped paying him at some point. So he stopped releasing the Mac ports. From my point of view, this meant that I could only play with other Mac users and couldn't use any of the fancy new maps.

Logs indicated that the compatibility problems with the new maps were not particularly deep, so I got to parsing the Unreal map files and was able to restore compatibility by deleting the offending objects. I implemented texture decoding/encoding mostly out of curiosity and because textures were well documented in the reverse engineering "literature." I imagined a workflow where someone would bulk export and re-import textures, and aside from the texture names the one piece of metadata I needed was the format: RGBA (uncompressed) or DXT (compressed)? I realized that I could easily identify DXT compression from the image histogram, so I didn't need to store separate metadata. Nifty!

But it didn't work. Lots of textures stored in uncompressed RGBA8888 "erroneously" round-tripped to DXT. After poring over my own code, I eventually realized that this was because on many of the textures someone had enabled DXT compression and then disabled it, dropping the texture quality to that of DXT while bloating the texture size to that of RGBA8888 (other textures were still stored as DXT, so compression itself was still working). I wrote a quick tool to add up the wasted space from storing DXT-compressed textures in uncompressed RGBA format, and it came out to about half the total disk space, both before and after the top-level installer's lossless compression. They could have re-enabled compression on most of the textures where they had disabled it without loss in quality, and if they had wanted a list of such textures I would have been able to provide it, but it didn't go down that way.

When I shared what happened with my father, who had served, his reaction was "Now that's the Army I know!"

replies(1): >>26313818 #
54. rasz ◴[] No.26303483{4}[source]
A C64 tape with turbo loading was actually faster (~500 bytes/s) than a non-turbo floppy (~400 bytes/s).

Many 8-bit Atari owners will have horror memories of aborted tape loads. After over 20 years, someone finally discovered a bug in the original Atari tape-loading ROM routine that resulted in randomly corrupted loads; no amount of sitting motionless while the game loaded would help in that case :)

55. saruken ◴[] No.26304875{3}[source]
Wow, had no idea about this! Can you link me to a writeup or something?
replies(1): >>26305965 #
56. finnh ◴[] No.26304936{4}[source]
bravo! i'll take the downvotes that this content-free comment will garner, but you just made my morning.
57. tekromancr ◴[] No.26305481[source]
Yea, given how easy it was for the author of the post to find it, I would guess that literally nobody in the last decade bothered to run a profiler to see where they were spending time.

The only possible explanation is that management never made it a priority.

I could see this happening. A project/product manager thinking "We could spend $unknown hours looking for potential speedups, or we could spend $known hours implementing new features directly tied to revenue"

Which is kind of ironic since this fix would keep players playing for more time, increasing the chances that they spend more money.

replies(1): >>26311100 #
58. milesward ◴[] No.26305727[source]
hmm.. the entire pricing table for Google Cloud (nearly 100k SKUs and piles of weirdness) was only ~2MB... seems pretty big.
59. rasz ◴[] No.26305965{4}[source]
https://github.com/Tampermonkey/tampermonkey/issues/1083

confirmed by Tampermonkey dev.

60. hinkley ◴[] No.26306126[source]
And because a merchandising push in many games may be another 10-50 items, the first couple of times the % increase is high but the magnitude is low (0.5s to 1s), and by the time you're up to 1000 items, the % increase is too small to notice. Oh, it took 30 seconds last week and now it's 33.

Boiling the frog, as it were. This class of problems is why I want way more charts on the projects I work on, especially after we hit production. I may not notice an extra 500ms a week, but I'm for damn sure going to notice the slope of a line on a 6 month chart.

61. oblio ◴[] No.26308398{3}[source]
Not super relevant, though. The average standard library from another language is a lot more hardened than the function written by Joe Sixpack in C last night.
62. WorldMaker ◴[] No.26308490{3}[source]
Visual Studio keeps toying with the idea of a "NuGet for C++" and it is amazing that it still hasn't happened yet. That may indicate that it isn't the IDE that can fix this, but the users' attitude. How much of the NIH and "just copy that dependency into the tree" is still encouraged for "security" [0] and "control"/"proprietary source"/"management" reasons?

[0] It's an obvious anti-pattern: you aren't going to update dependencies that require copy/paste and manual merge reviews, so security problems should be more rampant there than in systems where updating a dependency to the latest security patch is a single install command (or an update button in a GUI). Yet there still seem to be so many C++ devs who love to chime in on every HN thread about a package-manager vulnerability to say that they don't have those vulnerabilities. They don't "have" dependencies to manage, no matter how many stale 0-days they copied and pasted from outside projects; those don't count as "dependencies" because they are hidden who-knows-where in the source tree.

replies(1): >>26309075 #
63. beached_whale ◴[] No.26309075{4}[source]
I suspect vcpkg is the choice they made; it will/does have support for private and binary repos too.
replies(1): >>26309122 #
64. WorldMaker ◴[] No.26309122{5}[source]
That certainly is the most recent attempt. They've had projects of one sort or another going back at least as far as 2013, judging by mentions in public blog posts, but so far none of them seem to have gotten much traction with the community. Here's hoping it works this time?
65. hnick ◴[] No.26309481{5}[source]
I read that after patches GTA V is now around 95GB.

Call of Duty: Black Ops Cold War is around 200GB fully installed with all features (HD textures etc).

Some people have insinuated this is intentional, to crowd out other games on console hard disks and make moving away from CoD carry an opportunity cost. It's probably just laziness.

I haven't looked into it recently, but some prior offenders wasted a lot of space on uncompressed multilingual audio: instead of letting us choose a language, the game installs them all so you can switch in-game, and leaves them uncompressed to save the CPU for game logic. For CoD the optional HD texture pack is 38GB, so that still leaves a lot unaccounted for.

replies(2): >>26315204 #>>26345841 #
66. ironmagma ◴[] No.26311100{3}[source]
> management never made it a priority

I think we need a new vocabulary to cover situations like this one. It's not just that other issues took priority here, it's that this wasn't even entered into the list of things to give a crap about. It's something like digital squalor.

67. sundvor ◴[] No.26312216{3}[source]
I hope the Edge team never merges this in.
replies(1): >>26313715 #
68. bleachisback ◴[] No.26312256{4}[source]
The first C++ JSON library that appears when you google it is a single header file.

https://github.com/nlohmann/json

I know because I used it years ago in school.

One of the fastest libraries (https://github.com/miloyip/nativejson-benchmark#parsing-time) is also header-only and compares its speed to strlen.

https://github.com/Tencent/rapidjson

69. imtringued ◴[] No.26313492{4}[source]
>Well, that is an abuse of the term by people who sometimes don't actually know what it really means. Up to a point, quadratic IS faster than linear, after all. Too many developers love to abuse the word blindly.

The problem with this argument is that if the data size and constants are sufficiently small, people don't care whether the linear algorithm is slow. In the case of JSON parsing the constants are exactly the same no matter what string-length algorithm you use. Thus when n is small you don't care, since the overall loading time is short anyway. When n is big you benefit from faster loading times.

I honestly don't understand what goal you are trying to accomplish. By your logic it is more important to keep short loading times short and long loading times long, rather than follow the conventional engineering wisdom of lowering the average or median loading time, which will sometimes decrease the duration of long loading screens at the expense of increasing the duration of short loading screens.

70. rasz ◴[] No.26313715{4}[source]
It's been in Chrome since 81; I'd wager it's in Edge too and nobody noticed.
71. jack_riminton ◴[] No.26313818{6}[source]
Huh, I wonder if the technical person sent it to management for a decision?
72. STRML ◴[] No.26315204{6}[source]
Titanfall did this. If I recall correctly, the full install size was about 48GB, 35GB of which was just uncompressed audio. And that was back in the days when 120GB (or smaller) SSDs were common. A total self-own, never fixed or understood.

It's not like decoding audio takes enough time on any modern multi-core processor to disrupt the game loop. It's not even on the radar.

73. da_chicken ◴[] No.26322973{5}[source]
Maybe this is what they really meant by "GOTO considered harmful".
74. zmix ◴[] No.26334676{5}[source]
Oh, wow! This was some real hardcore sh*t! :-)
75. wing-_-nuts ◴[] No.26344626{4}[source]
Oh man, this brings back memories. That game was a great tactical shooter back in the day. Sadly my PC was unable to keep up with the higher requirements of its updates.
76. vagrantJin ◴[] No.26345841{6}[source]
I mean, Jesus Christ. Devs used to be proud they could optimize and run their programs on slow machines. What happened? That's like making a game for casual players when only < 5% of the global population can even afford to play it. How does this make business sense?? I'd understand if it was a game breaking new ground with some hectic VR or something like that, but it isn't.