112 points zardinality | 9 comments

pjmlp No.42195953
> Using a full-featured RPC framework for IPC seems like overkill when the processes run on the same machine.

That is exactly what COM/WinRT, XPC, Android Binder, D-BUS are.

Naturally they have several optimisations for local execution.

replies(4): >>42196335 #>>42197280 #>>42201053 #>>42209092 #
1. jeffbee No.42196335
Binder is seriously underappreciated, IMHO. But I think it makes sense to use gRPC or something like it if there is any possibility that in the future an "IPC" will become an "RPC" to a foreign host. You don't want to be stuck trying to change an IPC into an RPC when it was foreseeable that it would eventually become remote due to scale.
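
A rough sketch of that future-proofing in Go with grpc-go, using the stock helloworld Greeter stubs as a stand-in service (the socket path and remote target are made up): the call site is identical whether the peer is a Unix domain socket on the same box or a real remote host; only the dial target and credentials change.

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"

        pb "google.golang.org/grpc/examples/helloworld/helloworld"
    )

    func main() {
        // Today this is an IPC: a Unix domain socket on the same machine.
        // If it ever has to become an RPC, swap the target for something like
        // "dns:///greeter.internal:443" plus real transport credentials;
        // the rest of the code does not change.
        target := "unix:///run/app/greeter.sock"

        conn, err := grpc.NewClient(target,
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial %s: %v", target, err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        reply, err := pb.NewGreeterClient(conn).SayHello(ctx, &pb.HelloRequest{Name: "local"})
        if err != nil {
            log.Fatalf("SayHello: %v", err)
        }
        log.Println(reply.GetMessage())
    }

Nothing above the dial target knows or cares which kind of endpoint it is talking to.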
replies(5): >>42196677 #>>42197248 #>>42198111 #>>42198183 #>>42198692 #
2. pjmlp No.42196677
Kind of, though anyone with CORBA, DCOM, RMI, or .NET Remoting experience has plenty of war stories about distributed computing done with the expectations of local calls.
3. dietr1ch No.42197248
In my mind the abstraction should allow for RPCs, and being on the same machine should let the implementation optimise things a bit. That way you simply build for the general case and lose little to no performance.

Think of the loopback: my programs don't know (or at least shouldn't know) that IPs like 127.0.0.5 are special, but the kernel knows that messages sent there are never going onto any wire and handles them differently.
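
For illustration (the addresses and port here are made up), a plain TCP client in Go is written exactly the same way for loopback and for a remote peer; whether the bytes ever touch a NIC is the kernel's business:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialAndPing neither knows nor cares whether addr is loopback or remote;
    // the kernel short-circuits 127.0.0.0/8 traffic without the program's help.
    func dialAndPing(addr string) error {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return err
        }
        defer conn.Close()
        _, err = fmt.Fprintln(conn, "ping")
        return err
    }

    func main() {
        for _, addr := range []string{
            "127.0.0.5:8080", // same machine: never leaves the loopback interface
            "10.1.2.3:8080",  // remote peer: actually goes out on the wire
        } {
            if err := dialAndPing(addr); err != nil {
                fmt.Println(addr, "error:", err)
            }
        }
    }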

4. bunderbunder No.42198111
This could possibly even be dead simple to accomplish if application-level semantics aren't being communicated by co-opting parts of the communication channel's spec.

I think that this factor might be the ultimate source of my discomfort with standards like REST. Things like using HTTP verbs and status codes, and encoding parameters into the request's URL, mean that there's effectively no option to choose a communication channel that's lighter-weight than HTTP.
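
To make that concrete with a sketch (the envelope shape and field names here are invented, not any standard): if the application carries its own method name and status code inside the message, the same bytes can go over a pipe, a Unix socket, raw TCP, or a single POST endpoint, and nothing in the channel's spec gets co-opted.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // The envelope owns the application-level semantics that REST-style APIs
    // spread across HTTP verbs, URL paths, and HTTP status codes.
    type Request struct {
        Method string          `json:"method"` // e.g. "orders.create"
        Params json.RawMessage `json:"params"`
    }

    type Response struct {
        Code   int             `json:"code"` // application status, not an HTTP status
        Result json.RawMessage `json:"result,omitempty"`
        Error  string          `json:"error,omitempty"`
    }

    func main() {
        req := Request{
            Method: "orders.create",
            Params: json.RawMessage(`{"sku":"A-1","qty":2}`),
        }
        b, _ := json.Marshal(req)
        // These bytes are channel-agnostic: write them to a pipe, a Unix
        // socket, a TCP connection, or the body of one POST endpoint.
        fmt.Println(string(b))
    }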

replies(1): >>42201987 #
5. loeg No.42198183
Binder is totally unavailable outside of Android, right? IIRC it's pretty closely coupled to Android's security model and isn't a great fit outside of that ecosystem.
replies(1): >>42198573 #
6. p_l No.42198573
It's usable outside Android; it's more that there's "some assembly required": you essentially need a nameserver process that also handles permissions, and the main implementation of that is pretty much the Android one. Also, Binder itself doesn't really specify what data it exchanges; the Android userland uses one based on BeOS's (which is where Binder comes from), but there's also at least one other variant, used for drivers.
7. mgsouth No.42198692
One does not simply walk into RPC country. Communication modes are architectural decisions, and they flavor everything. There's as much difference between IPC and RPC as there is between popping open a chat window to ask a question and writing a letter on paper and mailing it. In both cases you can pretend they're equivalent, and it will work after a fashion, but your local communication will be vastly more inefficient and bogged down in minutiae, and your remote comms will be plagued with odd and hard-to-diagnose bottlenecks and failures.

Some generalities:

Function call: The developer just calls it. It blocks until completion; errors are due to bad parameters or a resource-availability problem, and they are handled with exceptions or return-code checks. Tests are also simple function calls. Operationally everything is, to borrow a phrase from aviation regarding non-retractable landing gear, "down and welded".

IPC: Architecturally, and as a developer, you start worrying about your function as a resource. Is the IPC recipient running? It's possible it's not; that's probably treated as fatal, and your code just returns an error to its caller. You're more likely to have an m:n pairing between caller and callee instances, so requests will go into a queue. Your code may still block, but with a timeout, and hitting the timeout will be a fatal error. Or you might treat it as a co-routine, with the extra headaches of deferred errors. You probably won't do retries. Testing has some more headaches, with IPC resource initialization and tear-down. You'll have to test queue failures. Operations is also a bit more involved, with an additional resource that needs to be baby-sat and co-ordinated with multiple consumers.

RPC: IPC headaches, but now you need to worry about lost messages, and about messages that were processed but whose acknowledgements were lost. Temporary failures need to be faced and retried. You will need to think in terms of "best effort", and continually make decisions about how that is managed. You'll be dealing with issues such as at-least-once vs. at-most-once delivery. Consistency issues will need to be addressed much more than with IPC, and they will be thornier problems. Resource-availability awareness will seep into everything; application-level back-pressure measures _should_ be built in. Treating RPC as simple blocking calls will be a continual temptation; if you or less-enlightened team members succumb, then you'll have all kinds of flaky issues. Emergent, system-wide behavior will rear its ugly head, and it will involve counter-intuitive interactions (such as bigger buffers reducing throughput). Testing now involves three non-trivial parts: your code, the called code, and the communication mechanisms. Operations gets to play with all kinds of fun toys to deploy, monitor, and balance usage.
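
A rough sketch of what "best effort" forces on the caller, in Go with grpc-go and the stock helloworld stubs (the deadline, retry count, backoff, and the choice of which codes count as transient are all assumptions, not a recipe):

    package rpcsketch

    import (
        "context"
        "time"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"

        pb "google.golang.org/grpc/examples/helloworld/helloworld"
    )

    // sayHelloBestEffort puts a deadline on each attempt and retries a bounded
    // number of times, but only for codes that plausibly mean "try again"
    // rather than "the request itself is wrong".
    func sayHelloBestEffort(client pb.GreeterClient, name string) (*pb.HelloReply, error) {
        var lastErr error
        for attempt := 0; attempt < 3; attempt++ {
            ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
            reply, err := client.SayHello(ctx, &pb.HelloRequest{Name: name})
            cancel()
            if err == nil {
                return reply, nil
            }
            lastErr = err
            switch status.Code(err) {
            case codes.Unavailable, codes.DeadlineExceeded:
                // Possibly transient. Note this is at-least-once territory:
                // the server may have done the work even though the
                // acknowledgement never made it back.
                time.Sleep(time.Duration(100*(attempt+1)) * time.Millisecond)
            default:
                return nil, err // bad argument, permission denied, etc.: don't retry
            }
        }
        return nil, lastErr
    }

Even this toy version has to take a position on deadlines, idempotency, and back-off, which is exactly the point of the comment above.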

8. imtringued No.42201987
Nobody does REST, because nobody needs the whole HATEOAS shtick. When people say REST, they mean "HTTP API", and I'm not being pedantic here. The difference is very real, because REST doesn't really have a reason to exist.
replies(1): >>42204845 #
9. bunderbunder No.42204845
But most HTTP APIs that are commonly described as REST still use those parts of REST-over-JSON-over-HTTP, even if they aren't truly RESTful.

One thing I like about gRPC is that gRPC services use a single URL. And it doesn't ask you to express application error codes by overloading HTTP status codes, which makes it harder to distinguish application errors from communication-channel errors. Things like that. But it still tightly couples itself to other parts of HTTP, particularly HTTP/2. That makes it impossible to use in a straightforward way in a number of scenarios, which then gives birth to a whole mess of reverse-proxy shenanigans that people get into in an effort to avoid having to implement multiple copies of the same API.
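
To make the status-code point concrete, a small sketch with grpc-go and the helloworld stubs (the specific error conditions and messages are invented): the application error travels as a gRPC status in its own code space, so the caller never has to guess whether a 404 or 503 came from the application or from some proxy in between.

    package statussketch

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"

        pb "google.golang.org/grpc/examples/helloworld/helloworld"
    )

    // Server side: application errors are expressed as gRPC statuses,
    // not by repurposing HTTP status codes.
    type greeter struct {
        pb.UnimplementedGreeterServer
    }

    func (g *greeter) SayHello(ctx context.Context, req *pb.HelloRequest) (*pb.HelloReply, error) {
        if req.GetName() == "" {
            return nil, status.Error(codes.InvalidArgument, "name must not be empty")
        }
        return &pb.HelloReply{Message: "hello " + req.GetName()}, nil
    }

    // Client side: the same code space, cleanly separated from transport trouble.
    func classify(err error) {
        switch status.Code(err) {
        case codes.OK:
            fmt.Println("success")
        case codes.InvalidArgument:
            fmt.Println("application error: fix the request")
        case codes.Unavailable, codes.DeadlineExceeded:
            fmt.Println("communication problem: maybe retry")
        default:
            fmt.Println("other error:", err)
        }
    }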