
153 points michaelanckaert | 1 comment
WhatIsDukkha ◴[] No.23485847[source]
I don't understand the attraction to GraphQL. (I do understand it if what you actually want is the things that gRPC or Thrift etc. give you.)

It seems like exactly the ORM solution/problem, but even more abstract and less under control, since it pushes the ORM out to browser clients and the frontend devs.

ORMs suffer from being more than arm's length from the query optimizer in the database server.

https://en.wikipedia.org/wiki/Query_optimization

A query optimizer that's been tuned over decades by pretty serious people.

Bad queries, overfetching, sudden performance cliffs everywhere.

GraphQL actually adds another query language on top of the normal ORM problem. (Maybe the answer is that GraphQL is so simple by design that it has no dark corners, but that seems like something that would need a proof, and I haven't seen one alluded to.)
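
To make the concern concrete, here's a minimal sketch (TypeScript, with made-up table and field names and an assumed `db.query` helper) of how one nested GraphQL query becomes 1 + N SQL statements under a naive resolver-per-field implementation, none of which the database's optimizer gets to plan as a single join:

    // Client-side query: one nested request.
    const QUERY = /* GraphQL */ `
      {
        posts {
          title
          author {
            name
          }
        }
      }
    `;

    // Server-side resolvers, assuming some db.query(sql, params) helper exists.
    declare const db: { query(sql: string, params?: unknown[]): Promise<any[]> };

    const resolvers = {
      Query: {
        // 1 query for the post list.
        posts: () => db.query("SELECT id, title, author_id FROM posts"),
      },
      Post: {
        // Called once per post returned above: N extra queries for N posts.
        author: (post: { author_id: number }) =>
          db
            .query("SELECT id, name FROM authors WHERE id = $1", [post.author_id])
            .then((rows) => rows[0]),
      },
    };

    export { QUERY, resolvers };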

Why is GraphQL not going to have exactly this problem once people actually start to work with it seriously?

I've looked at four or five implementations in JavaScript, Haskell, and now Go. From what I could see, none of them mentioned query optimization as an aspiration.

replies(19): >>23485889 #>>23485918 #>>23485953 #>>23485962 #>>23486202 #>>23486714 #>>23486794 #>>23487403 #>>23487603 #>>23487611 #>>23487709 #>>23488354 #>>23488907 #>>23489619 #>>23489986 #>>23490334 #>>23491786 #>>23492176 #>>23497167 #
real_ben_michel ◴[] No.23490334[source]
Having seen many product teams implement GraphQL, I've found that the concerns were never about performance, and more about speed of development.

A typical product would require integrations with several existing APIs, and potentially some new ones. These would be aggregated (and normalised) into a single schema built on top of GraphQL. Then the team would build different client UIs and iterate on them.
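
A sketch of what that aggregation layer might look like (Apollo Server in TypeScript; the service URLs, types, and field names are hypothetical): each resolver just wraps one of the existing REST APIs, so clients see a single normalised schema:

    import { ApolloServer, gql } from "apollo-server";
    import fetch from "node-fetch";

    const typeDefs = gql`
      type Order {
        id: ID!
        total: Float!
      }
      type Customer {
        id: ID!
        name: String!
        orders: [Order!]!
      }
      type Query {
        customer(id: ID!): Customer
      }
    `;

    const resolvers = {
      Query: {
        // Wraps the existing "customers" service.
        customer: (_: unknown, { id }: { id: string }) =>
          fetch(`https://customers.internal/api/customers/${id}`).then((r) => r.json()),
      },
      Customer: {
        // Wraps the existing "orders" service, joined in by the gateway.
        orders: (customer: { id: string }) =>
          fetch(`https://orders.internal/api/orders?customerId=${customer.id}`).then((r) => r.json()),
      },
    };

    new ApolloServer({ typeDefs, resolvers }).listen(4000);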

By having a single queryable schema, it's very easy to build and rebuild interfaces as needed. Tools like Apollo and React are particularly well suited for this, as you can inject data directly into components. The team can also reason about the whole domain, rather than a collection of data sources (easier for trying out new things).
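
"Inject data directly into components" usually looks something like this with Apollo Client and React (the component and fields are illustrative, reusing the hypothetical customer/orders schema sketched above):

    import React from "react";
    import { gql, useQuery } from "@apollo/client";

    const CUSTOMER = gql`
      query Customer($id: ID!) {
        customer(id: $id) {
          name
          orders {
            id
            total
          }
        }
      }
    `;

    // The component declares the data it needs; Apollo fetches and caches it.
    function CustomerCard({ id }: { id: string }) {
      const { data, loading, error } = useQuery(CUSTOMER, { variables: { id } });
      if (loading) return <p>Loading…</p>;
      if (error) return <p>Something went wrong.</p>;
      return (
        <div>
          <h2>{data.customer.name}</h2>
          <ul>
            {data.customer.orders.map((o: { id: string; total: number }) => (
              <li key={o.id}>{o.total}</li>
            ))}
          </ul>
        </div>
      );
    }

    export default CustomerCard;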

Of course, this can lead to performance issues, but why would you optimise something without validating it with the user first? Queries might be inefficient, but with just a bit of caching you can ensure an acceptable user experience.
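
One common "bit of caching" on the server side is DataLoader-style batching, which also happens to fix the N+1 pattern; a sketch, assuming the same kind of hypothetical db.query helper and authors table as above:

    import DataLoader from "dataloader";

    declare const db: { query(sql: string, params?: unknown[]): Promise<any[]> };

    // Batches every author lookup made during one request into a single
    // IN (...) query, and caches repeated ids for the life of the loader.
    const authorLoader = new DataLoader(async (ids: readonly number[]) => {
      const rows = await db.query(
        "SELECT id, name FROM authors WHERE id = ANY($1)",
        [ids],
      );
      // DataLoader requires results in the same order as the requested keys.
      const byId = new Map(rows.map((r) => [r.id, r]));
      return ids.map((id) => byId.get(id) ?? null);
    });

    const resolvers = {
      Post: {
        author: (post: { author_id: number }) => authorLoader.load(post.author_id),
      },
    };

    export { resolvers };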

replies(1): >>23490882 #
jayd16 ◴[] No.23490882[source]
I wonder if GraphQL would make more sense as a client-side technology. The goals of dev ease seem better served by a graph the client can build (and thus span multiple remote services). Instead of transforming the backend, you simply get a better UI dev experience, and the middleware handles query aggregation.

You'd want code gen to easily wrap REST services.

You could get some of the pipelined query/subquery stuff back (and lose caching) by setting up a proxy running this service, or fall back to client-side aggregation to span services not backed by the graph system (and maybe keep caching).
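
A rough sketch of that client-side variant (graphql-js executing in the browser, with resolvers wrapping hypothetical REST endpoints; the trade-off is that every resolved field may still cost a client-latency round trip):

    import { graphql, buildSchema } from "graphql";

    const schema = buildSchema(`
      type Order { id: ID! total: Float! }
      type Customer { id: ID! name: String! orders: [Order!]! }
      type Query { customer(id: ID!): Customer }
    `);

    // Root resolvers that call existing REST services directly from the browser.
    const root = {
      customer: async ({ id }: { id: string }) => {
        const customer = await fetch(`/api/customers/${id}`).then((r) => r.json());
        return {
          ...customer,
          // Lazily fetched only if the query actually selects `orders`.
          orders: () => fetch(`/api/orders?customerId=${id}`).then((r) => r.json()),
        };
      },
    };

    // The UI still writes GraphQL queries; they just execute locally.
    graphql({
      schema,
      source: '{ customer(id: "42") { name orders { total } } }',
      rootValue: root,
    }).then((result) => console.log(result.data));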

Maybe we're back to SOAP and WSDLs, though.

replies(2): >>23491107 #>>23493304 #
andrewingram ◴[] No.23493304[source]
I always say that GraphQL is best thought of as frontend code that lives on a backend server. If in your current world you have frontend code that talks to multiple traditional REST endpoints, then the frontend is responsible for managing all the relationships between the API data in order to build a coherent object graph the UI can interpret. GraphQL turns this object graph into a first-class citizen that can be queried consistently, rather than something everyone keeps having to reinvent, and moves it closer to the source of truth (the downstream services) for performance reasons (so that each network hop can be measured in single-digit milliseconds rather than tens or hundreds).
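
For contrast, the "before" picture described here tends to look something like this on the client: hand-written stitching of the object graph across several REST endpoints, each hop paid at client latency (the endpoints and shapes are illustrative):

    // Manually assemble a customer view from three REST services.
    async function loadCustomerView(id: string) {
      const customer = await fetch(`/api/customers/${id}`).then((r) => r.json());
      const orders = await fetch(`/api/orders?customerId=${id}`).then((r) => r.json());
      // Relationship management by hand: one extra call per order for its shipment.
      const withShipments = await Promise.all(
        orders.map(async (order: { id: string }) => ({
          ...order,
          shipment: await fetch(`/api/shipments?orderId=${order.id}`).then((r) => r.json()),
        })),
      );
      return { ...customer, orders: withShipments };
    }

    export { loadCustomerView };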

I've seen GraphQL schemas implemented on the client; it's certainly doable, but the performance is terrible compared to doing it on a server close to the source of truth.