
JSON Patch

(zuplo.com)
299 points by DataOverload | 9 comments
skrebbel No.41881933
I quite like JSON Patch but I've always felt that it's so convoluted only because of its goal of being able to modify every possible JSON document under the sun. If you allow yourself to restrict your data set slightly, you can patch documents much more simply.

For example, Firebase doesn't let you store null values. Instead, for Firebase, setting something to null means the same as deleting it. With a single simple restriction like that, you can implement PATCH simply by accepting a (recursive) partial object of whatever that endpoint serves. E.g. if /books/1 has

    { title: "Dune", score: 9 }
you can add a PATCH /books/1 that takes eg

    { score: null, author: "Frank Herbert" }
and the result will be

    { title: "Dune", author: "Frank Herbert" }
This is way simpler than JSON Patch - there's nothing new to learn, except "null means delete". IMO "nothing new to learn" is a fantastic feature for an API to have.

Of course, if you can't reserve a magic value to mean "delete" then you can't do this. Also, appending things to arrays etc can't be done elegantly (but partially mutating arrays in PATCH is, I'd wager, often bad API design anyway). But it solves a very large % of the use cases JSON Patch is designed for in what is, in my humble opinion, a much more elegant way.
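
For illustration, a rough TypeScript sketch of that "null means delete" merge (just a sketch under those assumptions, i.e. plain JSON values and null never needing to be stored; not battle-tested code):

    type Json = null | boolean | number | string | Json[] | { [key: string]: Json };

    function isPlainObject(value: Json): value is { [key: string]: Json } {
      return typeof value === "object" && value !== null && !Array.isArray(value);
    }

    // Recursive partial-object patch: null deletes a field, objects merge, everything else replaces.
    function applyNullDeletePatch(target: Json, patch: Json): Json {
      if (!isPlainObject(patch)) return patch; // primitives and arrays replace the target wholesale
      const result: { [key: string]: Json } = isPlainObject(target) ? { ...target } : {};
      for (const [key, value] of Object.entries(patch)) {
        if (value === null) {
          delete result[key]; // the reserved "null means delete" marker
        } else {
          result[key] = applyNullDeletePatch(result[key] ?? null, value);
        }
      }
      return result;
    }

    // applyNullDeletePatch({ title: "Dune", score: 9 }, { score: null, author: "Frank Herbert" })
    // => { title: "Dune", author: "Frank Herbert" }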

replies(7): >>41882045 #>>41882475 #>>41883360 #>>41886201 #>>41886934 #>>41887163 #>>41887172 #
1. gregwebs No.41882045
The article has a section at the bottom "Alternatives..." [1]. It links to "JSON Merge Patch" which is what you are describing: https://zuplo.com/blog/2024/10/11/what-is-json-merge-patch

That's the format that people tend to naturally use. The main problem is that arrays can only be replaced.
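
For example (an invented document; the "tags" field is just for illustration):

    // current document
    { title: "Dune", tags: ["sci-fi", "classic"] }

    // merge patch body: to change one element you have to resend the whole array
    { tags: ["sci-fi", "classic", "ecology"] }

    // result: the array is replaced wholesale, never spliced
    { title: "Dune", tags: ["sci-fi", "classic", "ecology"] }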

[1] https://zuplo.com/blog/2024/10/10/unlocking-the-power-of-jso...

replies(4): >>41882087 #>>41882187 #>>41883145 #>>41885477 #
2. skrebbel No.41882087
Nice! I gotta say I didn't expect a thing called "JSON Merge Patch" to be simpler and more concise than a thing called "JSON Patch" :-)
replies(1): >>41882397 #
3. gitaarik No.41882187
Also the first thing I was thinking. The only reason I can see for using JSON Patch is for updating huge arrays. But I never really had such big arrays that I felt the necessity for something like this.
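
For comparison, this is the kind of in-place array splicing JSON Patch (RFC 6902) can express and a merge patch can't; the paths here are made up for illustration:

    [
      { "op": "add", "path": "/tags/0", "value": "ecology" },  // insert before the first element
      { "op": "remove", "path": "/tags/2" },                   // delete whatever is now at index 2
      { "op": "add", "path": "/tags/-", "value": "epic" }      // "-" means append to the end
    ]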
4. DataOverload No.41882397
Same here, I actually made a tool for this called www.jsonmergepatch.com - give it a try
5. jmull No.41883145
json merge patch is pretty good. I think it just needs an optional extension to specify an alternative magical value for “delete”. null is a pretty good default, and comports well with typical database patterns, but is outright bad for some things.

I think it also needs a “replace” option at the individual object update level. Merge is a good default, but the semantics of the data or a particular update could differ.

You’re almost surely doing something wrong if replace doesn’t work for arrays. I think the missing thing is a collection that is both ordered and keyed (often not by the same value). JSON by itself just doesn’t do that.

So maybe what’s missing is a general facility for specifying metadata on an update, which can be used to specify the magical delete value, and the key/ordering field for keyed, ordered collections.

replies(2): >>41883299 #>>41886076 #
6. throwway120385 No.41883299
> You’re almost surely doing something wrong if replace doesn’t work for arrays. I think the missing thing is a collection that is both ordered and keyed (often not by the same value). JSON by itself just doesn’t do that.

Yeah, you could assign an identity value to each element of the array and then use a subresource to manipulate those elements by identity value. Then you could PUT using the same JSON merge mechanism to clear individual fields, and you could DELETE to remove items from the array by subresource.
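
Something like this hypothetical layout (resource names invented for illustration):

    GET    /books/1/chapters        // the array, exposed as a collection of subresources
    PUT    /books/1/chapters/42     // merge-style update of one element, addressed by its identity
    DELETE /books/1/chapters/42     // remove one element without resending the rest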

This just seems like a reinvention of a crufty piece of XML.

replies(1): >>41888784 #
7. gleenn No.41885477
What if you represented arrays as recursively nested triples, like '(1 2 3 4 5) as [[1 null 2] 3 [4 null 5]]? Then you could patch the tree of triples much more succinctly. You might have to disallow nested arrays, but that would be no worse a restriction than disallowing nulls as map values. You could append to and delete array indexes better. Or maybe make it a wider tree, like the 32-way tries Clojure uses for its vector representation, to condense the patch further.
8. skrebbel No.41886076
Once you add all that, it loses "no need to learn something new". At that point, I think I'd just go with JSON Patch which solves all of these, and more.
9. jmull No.41888784
I just mean a mechanism for specifying metadata, and a little metadata.

Off the top of my head, an optional header like "MergeMetadataObjectPropertyName: @mergeMetadata"

Which would cause objects in the merge containing the property "@mergeMetadata" to be treated specially.

The merge metadata could (optionally) specify an alternative to null for the special delete value. Or (optionally) specify the key field for an array representing an ordered, keyed collection. (Or, possibly, specify the order field for an object used to represent a keyed, ordered collection.)
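
To make that concrete, a purely hypothetical request (the header name is the one suggested above; the body shape and field names are made up):

    PATCH /books/1
    MergeMetadataObjectPropertyName: @mergeMetadata

    {
      "@mergeMetadata": { deleteValue: "__DELETE__", arrayKey: "id" },
      chapters: [
        { id: 7, title: "Muad'Dib" },      // matched on "id" and merged in place
        { id: 9, summary: "__DELETE__" }   // the alternative delete marker clears this field
      ]
    }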

I guess you could just do without the header and specify the metadata using magic values (in the same way null is used as a special value meaning delete), but it seems better to opt in to things like that.

(IMO, json merge patch would have been slightly better if it had no special values by default, but it's not bad. "null means delete" is a small thing, you probably need delete regardless, and, anyway, the ship has sailed on that one.)