
428 points | ahamez | 5 comments
1. deathanatos ◴[] No.45011259[source]
Pagination: do not force me to drink through a paginated coffee stirrer. I do not want 640 B of data in a response, and then have to send another request for the next 640 B. And often, pagination means the calls are serialized, so I'm just doing nothing but waiting for round trip latency after round trip latency for the next meager 640 B of data.

Azure I'm looking at you. Many of their services do this, but Blob storage is something else: I've literally gotten information-free responses there. (I.e., 0 B of actual data. I wish I could say 0 B were used to transfer it.)

When you're designing, think about how big a record/object/item is, and return a reasonable number of them in a page. For programmatic consumers who want to walk the dataset, a 640 KiB response is really not that big, yet I've seen responses orders of magnitude smaller so many times, because someone thought "100 items is a good page size, right?" and 100 items came to about 4 KiB of data.
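The latency cost of tiny pages is easy to put numbers on. A minimal sketch (the byte sizes and RTT here are just the figures from the comment, not measurements of any real API): when pages must be fetched serially, total wall time is roughly the page count times the round trip time.

```python
# Walking a paginated API serially: each next-page token comes from the
# previous response, so requests cannot overlap and total time is
# dominated by round trips.
def pages_needed(total_bytes: int, page_bytes: int) -> int:
    return -(-total_bytes // page_bytes)  # ceiling division

def walk_time_seconds(total_bytes: int, page_bytes: int, rtt_s: float) -> float:
    return pages_needed(total_bytes, page_bytes) * rtt_s

# 640 KiB of data in 4 KiB pages at a 50 ms RTT: 160 requests, ~8 s of
# pure waiting. The same data in one 640 KiB response: one request, ~50 ms.
```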

> If you have thirty API endpoints, every new version you add introduces thirty new endpoints to maintain. You will rapidly end up with hundreds of APIs that all need testing, debugging, and customer support.

You version the one thing that's changing.

As much as I hate the /v2/... form of versioning, nobody re-versions all the /v1/... APIs just because one API needed a /v2. /v2 is a ghost town, save for the handful of APIs that actually needed a /v2.
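Concretely, per-endpoint versioning looks something like this (a sketch with hypothetical route names, not any particular framework's routing table):

```python
# Only the endpoint whose contract actually broke gets a /v2 entry; every
# other endpoint stays at /v1, so /v2 stays nearly empty rather than
# duplicating all thirty routes.
ROUTES = {
    "/v1/users":   "list_users_v1",
    "/v1/orders":  "list_orders_v1",
    "/v1/reports": "list_reports_v1",
    "/v2/reports": "list_reports_v2",  # the one breaking change
}
```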

replies(2): >>45011295 #>>45061446 #
2. atoav ◴[] No.45011295[source]
Yeah, pagination is a great option — maybe even a good default. But don't make it the only choice: give developers the ability to make the tradeoff between number of requests and payload size themselves.
replies(1): >>45015211 #
3. rirze ◴[] No.45015211[source]
I'm curious: is there a backend reason to only offer pagination? Is it less work on the backend vs. a user making X calls to get all the resources anyway?
replies(1): >>45016181 #
4. atoav ◴[] No.45016181{3}[source]
From embedded experience, I would say forcing paging is only beneficial if you operate under heavy memory or latency constraints. But most APIs certainly are not under such constraints.

Of course there should be some sort of maximum size, but I have seen APIs that return 1200 lines of text and require me to page through them at 100 per request with no option to turn it off.
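One way to get both properties (a sketch with hypothetical names, assuming the client passes a `limit` parameter): let the caller pick the page size, and clamp it to a server-side maximum so big pages are allowed but unbounded ones are not.

```python
# Clamp a caller-supplied page size between 1 and a server-side cap.
# Clients that want fewer round trips ask for big pages; the server
# still bounds its worst-case response size.
DEFAULT_LIMIT = 100
MAX_LIMIT = 10_000

def effective_limit(requested: int = DEFAULT_LIMIT) -> int:
    return min(max(1, requested), MAX_LIMIT)

# A 1200-row dataset at the default of 100 per page takes 12 requests;
# a client passing limit=1200 gets it in one, and limit=1_000_000 is
# capped at 10_000 rather than rejected or honored.
```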

5. mulholio ◴[] No.45061446[source]
It’s certainly been my experience that page sizes should be bigger than you initially expect. Paginated endpoints are typically iterated all the way through, meaning you’re going to return all that data anyway. You may as well save the overhead of the additional requests.

Not implementing pagination at the outset can be problematic, however: if you later need to paginate (e.g. because your data grows), adding it is a breaking change. Big page sizes, but with pagination, can be a reasonable balance.
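The "paginate from day one, with big pages" compromise can be sketched as cursor-style pagination with a generous default page size. All names here are illustrative, not from any particular API:

```python
# Cursor pagination shipped from the start: the response shape never has
# to change as the dataset grows, and the large default limit keeps
# request counts low for small datasets.
def paginate(items: list, cursor: int = 0, limit: int = 1000) -> dict:
    page = items[cursor:cursor + limit]
    next_cursor = cursor + limit if cursor + limit < len(items) else None
    return {"items": page, "next_cursor": next_cursor}
```

A small dataset comes back in one page with `next_cursor` set to `None`; a grown dataset simply starts returning non-null cursors, with no breaking change to the contract.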