153 points michaelanckaert | 1 comment
WhatIsDukkha No.23485847
I don't understand the attraction to GraphQL. (I do understand it if you actually want the things that gRPC or Thrift etc. give you.)

It seems like exactly the ORM solution/problem, but even more abstract and less under control, since it pushes the ORM out to browser clients and the frontend devs.

ORMs suffer from being beyond arm's length from the query analyzer in the database server.

https://en.wikipedia.org/wiki/Query_optimization

That's a query optimizer that's been tuned over decades by pretty serious people, and sitting far from it gets you bad queries, overfetching, and sudden performance cliffs everywhere.

GraphQL actually adds another query language on top of the normal ORM problem. (Maybe the answer is that GraphQL is so simple by design that it has no dark corners, but that seems like a matter of mathematical proof that I haven't seen alluded to.)

Why is GraphQL not going to have exactly this problem once people actually start to work seriously with it?

I've looked at four or five implementations in JavaScript, Haskell and now Go. From what I could see, none of them mentioned query optimization as an aspiration.

searchableguy No.23485889
That is up to the GraphQL framework and its consumers. GraphQL is just a query language.

You need a data loader (batching) on the backend to avoid N+1 queries, plus some similar caching tricks, to get decent performance.
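The batching idea can be sketched in a few lines. This is a toy stand-in for the real `dataloader` package: loads requested in the same tick are collected and resolved with one backend call instead of N (the `TinyLoader` name and `batchFn` shape are illustrative, not any library's API):

```typescript
// A load function that resolves many keys in a single backend query.
type BatchFn<K, V> = (keys: K[]) => Promise<V[]>;

class TinyLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: BatchFn<K, V>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        // Flush after the current tick, so N sibling resolvers that each
        // call load() become one batched query instead of N queries.
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    const values = await this.batchFn(batch.map((b) => b.key));
    batch.forEach((b, i) => b.resolve(values[i]));
  }
}
```

A resolver would then call `userLoader.load(id)` per row, and the loader turns those into one `WHERE id IN (...)` style query behind the scenes.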

You usually also have caching and batching on the frontend. Apollo Client (the most popular GraphQL client in JS) uses a normalized caching strategy (overkill and a pain).

For rate/abuse limiting, GraphQL requires a completely different approach. It's either point-based on the number of nodes or edges you request, so you can calculate the burden of the query before you execute it, or deep introspection to avoid crashing your database. Query whitelisting is another option.
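The point-based approach can be sketched as a walk over the query tree before execution: each field costs one point, and a list field multiplies its children's cost by the page size requested. The `Field` shape, per-node costs, and the limit are all illustrative assumptions, not any particular server's scheme:

```typescript
// Simplified view of a parsed GraphQL selection (illustrative shape).
interface Field {
  name: string;
  first?: number;      // page size requested on a list field, if any
  children?: Field[];
}

const MAX_COST = 1000; // arbitrary budget for this sketch

function cost(field: Field): number {
  const childCost = (field.children ?? []).reduce((sum, c) => sum + cost(c), 0);
  // A list field amplifies its children's cost by the requested page size.
  const multiplier = field.first ?? 1;
  return 1 + multiplier * childCost;
}

function assertAffordable(query: Field[]): void {
  const total = query.reduce((sum, f) => sum + cost(f), 0);
  if (total > MAX_COST) {
    throw new Error(`query cost ${total} exceeds limit ${MAX_COST}`);
  }
}
```

Under this scheme a query like `users(first: 100) { posts(first: 10) { title } }` costs 1 + 100 × (1 + 10 × 1) = 1101 points and is rejected before the database ever sees it.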

There are a few other pain points you need to handle as you scale up. So yeah, definitely not needed if it's only a small project.

kabes No.23485977
"You have to calculate the burden of the query before you execute it so you don't end up crashing your database."

This sounds like a disaster waiting to happen.

staticassertion No.23490291
It's not nearly as complex as paging, which has a similar purpose of limiting single-query complexity.
tomnipotent No.23492275
Anyone use MSSQL before it got ROW_NUMBER and window functions? Paging was a literal nightmare: if you wanted records 101-110, you had to fetch rows 1-110 and truncate the first 100 yourself (either in the DB via a stored procedure or in your app code). I wish LIMIT/OFFSET were part of the ANSI SQL standard.
Anyone use MSSQL before it got ROW_NUMBER and window functions? Paging was a literal nightmare - if you wanted records 101-110, you had to fetch 1-110 and truncate the first 100 rows yourself (either in the DB via stored procedure or in your app code). I wish LIMIT/OFFSET was SQL ANSI standard.