
3883 points kuroguro | 6 comments
tyingq ◴[] No.26296701[source]
"They’re parsing JSON. A whopping 10 megabytes worth of JSON with some 63k item entries."

Ahh. Modern software rocks.

replies(3): >>26296764 #>>26297102 #>>26297434 #
1. bombcar ◴[] No.26296764[source]
At least parse it into SQLite. Once.
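
Roughly this kind of thing, as a sketch (file names and the "key" field are made up; the real catalog layout isn't shown here):

    import json, sqlite3

    # Parse the ~10 MB document exactly once...
    items = json.load(open("catalog.json"))

    # ...and cache it in a local SQLite file keyed by a (hypothetical) item key.
    db = sqlite3.connect("catalog.db")
    db.execute("CREATE TABLE IF NOT EXISTS items (key TEXT PRIMARY KEY, data TEXT)")
    db.executemany(
        "INSERT OR REPLACE INTO items (key, data) VALUES (?, ?)",
        ((entry["key"], json.dumps(entry)) for entry in items),
    )
    db.commit()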
replies(2): >>26297066 #>>26297149 #
2. brianberns ◴[] No.26297066[source]
They probably add more entries over time (and maybe update/delete old ones), so you’d have to be careful about keeping the local DB in sync.
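
Something like this sketch of an incremental pull, assuming a hypothetical endpoint that can filter by an updated_since timestamp and that each entry carries an id:

    import json, sqlite3, urllib.request

    db = sqlite3.connect("catalog.db")
    db.execute("CREATE TABLE IF NOT EXISTS items"
               " (id TEXT PRIMARY KEY, data TEXT, updated INTEGER)")

    # Newest change we already have locally (0 on first run).
    (since,) = db.execute("SELECT COALESCE(MAX(updated), 0) FROM items").fetchone()

    # Hypothetical endpoint returning only entries added/changed after `since`.
    with urllib.request.urlopen(f"https://example.com/catalog?updated_since={since}") as resp:
        changed = json.loads(resp.read())

    db.executemany(
        "INSERT OR REPLACE INTO items (id, data, updated) VALUES (?, ?, ?)",
        ((e["id"], json.dumps(e), e["updated"]) for e in changed),
    )
    db.commit()
    # Deletions still need handling, e.g. tombstone entries or an occasional full refresh.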
replies(1): >>26297604 #
3. tyingq ◴[] No.26297149[source]
I think just using a length-encoded serialization format would have made this work reasonably fast.
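
For illustration, a toy length-prefixed format in Python: every record is written as a 4-byte length followed by its payload, so a reader can jump from record to record without rescanning the buffer for delimiters:

    import struct

    def write_records(path, records):
        with open(path, "wb") as f:
            for payload in records:
                f.write(struct.pack("<I", len(payload)))  # 4-byte little-endian length prefix
                f.write(payload)

    def read_records(path):
        with open(path, "rb") as f:
            while True:
                header = f.read(4)
                if not header:
                    break
                (size,) = struct.unpack("<I", header)
                yield f.read(size)

    # Made-up records, just to show the round trip.
    write_records("items.bin", [b'{"key": "item_%d"}' % i for i in range(63000)])
    print(sum(1 for _ in read_records("items.bin")))  # 63000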
replies(1): >>26297224 #
4. hobofan ◴[] No.26297224[source]
Or just any properly implemented JSON parser. That's a laughably small amount of JSON, which should easily be parsed in milliseconds.
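
For a rough sense of scale (timings obviously vary by machine), even Python's stock json module, nobody's idea of a fast parser, gets through a synthetic ~10 MB, 63k-entry document in a fraction of a second:

    import json, time

    # Synthetic stand-in: 63k small entries padded to roughly 10 MB of JSON.
    doc = json.dumps([
        {"key": f"item_{i}", "price": i, "desc": "x" * 120}
        for i in range(63000)
    ])
    print(f"{len(doc) / 1e6:.1f} MB of JSON")

    start = time.perf_counter()
    parsed = json.loads(doc)
    print(f"parsed {len(parsed):,} entries in {time.perf_counter() - start:.3f}s")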
5. bombcar ◴[] No.26297604[source]
So just have the client download the entire DB each time. Can’t be that many megabytes.
replies(1): >>26299123 #
6. Twirrim ◴[] No.26299123{3}[source]
I did a very, very ugly quick hack in Python: took the example JSON, made the one list entry a string (lazy hack), and repeated it 56,000 times. That resulted in a JSON doc that weighed in at 10 MB. My initial guess of 60,000 times was a pure fluke!

Dumped it into a very simple SQLite db:

    $ du -hs gta.db
    5.2M    gta.db
Even 10 MB is peanuts for most of their target audience. Stick it in an SQLite db, punt that across the wire, and they'd cut out all of the parsing time too.
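
For the curious, a reconstruction of that hack (the real example entry isn't reproduced here, so a made-up stand-in is used, which is why the sizes won't match exactly):

    import json, sqlite3

    # Stand-in for the article's example entry, treated as an opaque string.
    entry = json.dumps({"key": "example_item", "price": 12345, "statName": "CHAR_SOME_STAT"})

    # Repeat it 56,000 times to fake the full catalog.
    doc = "[" + ",".join([entry] * 56000) + "]"
    print(f"json doc: {len(doc) / 1e6:.1f} MB")

    # Dump the same rows into a trivial SQLite db, then compare with `du -hs gta.db`.
    db = sqlite3.connect("gta.db")
    db.execute("CREATE TABLE IF NOT EXISTS items (data TEXT)")
    db.executemany("INSERT INTO items (data) VALUES (?)", ((e,) for e in [entry] * 56000))
    db.commit()
    db.close()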