
3883 points by kuroguro | 2 comments
tyingq No.26296701
"They’re parsing JSON. A whopping 10 megabytes worth of JSON with some 63k item entries."

Ahh. Modern software rocks.

replies(3): >>26296764, >>26297102, >>26297434
ed25519FUUU No.26297434
Parsing 63k items in a 10 MB JSON string is pretty much a breeze on any modern system, including a Raspberry Pi. I wouldn't even consider JSON an anti-pattern for storing that much data if it's going over the wire (compressed with gzip).
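
As a sanity check of that claim, here's a rough sketch (the item shape is hypothetical, guessed from the article's description, not the game's actual schema):

  import gzip, json

  # ~63k entries with a made-up key/price/hash layout.
  items = [{"key": f"item_{i}", "price": i * 100, "hash": f"{i:040x}"}
           for i in range(63_000)]

  raw = json.dumps(items).encode()
  packed = gzip.compress(raw)
  print(f"raw: {len(raw) / 1e6:.1f} MB, gzipped: {len(packed) / 1e6:.1f} MB")

Exact sizes depend on the real schema, but JSON of this shape typically gzips down severalfold, which is why shipping it over the wire is defensible.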

A little further down in the article you'll see one of the real issues:

> But before it’s stored? It checks the entire array, one by one, comparing the hash of the item to see if it’s in the list or not. With ~63k entries that’s (n^2+n)/2 = (63000^2+63000)/2 = 1984531500 checks if my math is right. Most of them useless.
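
The standard fix is to track seen hashes in a hash set instead of rescanning the array on every insert. A minimal sketch of both approaches (the entry format is made up; this illustrates the technique, not the game's actual code):

  # Quadratic: each insert rescans everything stored so far,
  # ~(n^2 + n) / 2 hash comparisons for n entries.
  def dedup_quadratic(entries):
      stored = []
      for e in entries:
          if all(s["hash"] != e["hash"] for s in stored):
              stored.append(e)
      return stored

  # Linear: a hash set makes each membership check O(1) on average.
  def dedup_linear(entries):
      seen, stored = set(), []
      for e in entries:
          if e["hash"] not in seen:
              seen.add(e["hash"])
              stored.append(e)
      return stored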

replies(2): >>26297496, >>26298129
1. tyingq No.26297496
The JSON parser patch removed more of the elapsed time. Granted, it was a terrible parser, but I still think JSON is a poor choice here: 63k × X checks for colons, balanced quotes/braces, and so on just aren't needed.

  Time with only duplication check patch: 4m 30s
  Time with only JSON parser patch:       2m 50s
replies(1): >>26300402
2. masklinn No.26300402
> But I still think JSON is a poor choice here.

It’s an irrelevant one. The JSON parser in the Python stdlib parses a 10 MB document patterned after the sample in a few dozen milliseconds, and it’s hardly a fast parser.
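
That's easy to reproduce with a rough benchmark sketch (the document shape is a guess at "patterned after the sample"; timings vary by machine):

  import json, time

  # Build a document of 63k entries with made-up fields (several MB).
  doc = json.dumps([{"key": f"item_{i}", "price": i, "hash": f"{i:040x}"}
                    for i in range(63_000)])
  print(f"document size: {len(doc) / 1e6:.1f} MB")

  start = time.perf_counter()
  parsed = json.loads(doc)
  elapsed_ms = (time.perf_counter() - start) * 1e3
  print(f"parsed {len(parsed)} entries in {elapsed_ms:.0f} ms")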