
228 points | Retro_Dev | 1 comment
thrwyexecbrain No.44464908
Just to start some discussion about the actual API and not the breaking change aspect of it:

I find the `Reader.stream(writer, limit)` and `Reader.streamRemaining(writer)` functions especially elegant for building a push-based data transformation pipeline (like `grep` or compression/encryption). You just implement a Writer interface for your state machine and dump the output into another Writer, and you don't have to care how the bytes arrive or how they leave (be it a socket, shared memory, or a file) -- you just set the buffer sizes (which, as I gather, you can even set to zero!)
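To make the shape of such a pipeline concrete, here is a minimal sketch in Python (not the Zig API -- the class names, the `write` method, and the `grep`-like stage are all illustrative): each stage implements the same small Writer interface and pushes its output into the next stage, without caring how the bytes arrive or where they end up.

```python
class ListWriter:
    """Terminal sink that just collects bytes (stands in for a socket or file)."""
    def __init__(self):
        self.chunks = []

    def write(self, data: bytes) -> None:
        self.chunks.append(data)


class GrepWriter:
    """Line-filtering state machine: forwards only lines containing `needle`.
    Input may arrive in arbitrarily sized pieces; partial lines are buffered."""
    def __init__(self, needle: bytes, downstream):
        self.needle = needle
        self.downstream = downstream
        self._partial = b""

    def write(self, data: bytes) -> None:
        self._partial += data
        # Everything before the last newline is complete; the rest is buffered.
        *lines, self._partial = self._partial.split(b"\n")
        for line in lines:
            if self.needle in line:
                self.downstream.write(line + b"\n")


sink = ListWriter()
grep = GrepWriter(b"err", sink)
# Bytes can be pushed in any-sized pieces -- mid-line splits are fine.
for piece in (b"ok\ner", b"ror: disk\nfine\nerr 2", b"\n"):
    grep.write(piece)
print(b"".join(sink.chunks))  # -> b"error: disk\nerr 2\n"
```

The point is that `GrepWriter` never sees a Reader or a file descriptor at all; it only ever pushes bytes downstream, which is what makes stages freely composable.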

`Writer.sendFile()` is also nice; I don't know of any other stream abstraction that provides this primitive in the "generic interface" -- you usually have to downcast the stream to a "FileStream" and work on the file descriptor directly.

replies(1): >>44465021 #
1. AndyKelley No.44465021
re: sendfile in the interface - that's important because while downcasting the stream to a "FileStream" works if your pipeline looks like A -> B, it falls apart the moment you introduce an item in the middle (A -> B -> C). Meanwhile, I have a demo of File -> tar -> HTTP (Transfer-Encoding: chunked) -> Socket, and the direct fd-to-fd copies make it all the way through the chain!
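The trick that lets the fd-to-fd copy survive a middle stage can be sketched in a few lines of Python (again illustrative, not the Zig API -- the `send_file` method name and the two classes are assumptions): a framing stage like a chunked encoder writes its own framing bytes, but forwards the `send_file` request itself to the stage below, so the actual copy still happens directly between file descriptors at the bottom of the chain.

```python
import os
import tempfile


class FdWriter:
    """Bottom of the chain (stands in for a socket): wraps a real file
    descriptor, so its send_file can do a direct fd-to-fd copy."""
    def __init__(self, fd):
        self.fd = fd

    def write(self, data: bytes) -> None:
        os.write(self.fd, data)

    def send_file(self, src_fd: int, count: int) -> None:
        sent = 0
        try:
            while sent < count:
                # Direct in-kernel copy; offset=None reads from src_fd's
                # current position (works fd-to-fd on Linux).
                n = os.sendfile(self.fd, src_fd, None, count - sent)
                if n == 0:
                    break
                sent += n
        except OSError:
            # Fallback for platforms where sendfile needs a socket out_fd.
            self.write(os.read(src_fd, count - sent))


class ChunkedWriter:
    """Middle stage: HTTP Transfer-Encoding: chunked. It emits the chunk
    framing itself but delegates send_file downstream, so the direct
    fd-to-fd copy makes it all the way through the chain."""
    def __init__(self, downstream):
        self.downstream = downstream

    def write(self, data: bytes) -> None:
        self.downstream.write(b"%x\r\n" % len(data))
        self.downstream.write(data)
        self.downstream.write(b"\r\n")

    def send_file(self, src_fd: int, count: int) -> None:
        self.downstream.write(b"%x\r\n" % count)      # chunk header
        self.downstream.send_file(src_fd, count)      # direct copy below us
        self.downstream.write(b"\r\n")                # chunk trailer


payload = b"hello sendfile"
with tempfile.TemporaryFile() as src, tempfile.TemporaryFile() as dst:
    src.write(payload)
    src.seek(0)  # flushes the buffer and rewinds the underlying fd
    chain = ChunkedWriter(FdWriter(dst.fileno()))
    chain.send_file(src.fileno(), len(payload))
    dst.seek(0)
    result = dst.read()
print(result)  # -> b"e\r\nhello sendfile\r\n"
```

Note that `ChunkedWriter` never touches the payload bytes -- only the framing -- which is exactly why a downcast-to-FileStream approach breaks once a stage like this sits in the middle.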