
123 points eterm | 5 comments
1. cadamsdotcom ◴[] No.43925554[source]
Looks like the framework is just going to keep reading to the end of the random number stream, but of course there isn't an end to it.

Is there some kind of `IClosableStream` you can implement? That’d give you a `Closed` method, which you can then use to let either your server or stream know that it’s time to stop reading (or the stream reached EOF) - even if it’s done with a flag that’s set when the client disconnects.

Maybe there’s already an optional `Close` method you’re not overriding?

replies(2): >>43925653 #>>43925786 #
2. __s ◴[] No.43925653[source]
Stupid idea: throttle the stream by putting a Sleep in Read
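For what it's worth, that idea can be sketched as a read-only wrapper stream that sleeps inside `Read`. The wrapper name and delay are hypothetical, not anything from the thread:

```csharp
using System;
using System.IO;
using System.Threading;

// Crude throttle: delay every Read so the caller can't drain the
// underlying stream faster than roughly one Read per `delayMs`.
public class ThrottledStream : Stream
{
    private readonly Stream _inner;
    private readonly int _delayMs;

    public ThrottledStream(Stream inner, int delayMs)
    {
        _inner = inner;
        _delayMs = delayMs;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        Thread.Sleep(_delayMs); // the "stupid idea": rate-limit inside Read
        return _inner.Read(buffer, offset, count);
    }

    // Read-only wrapper: everything else is pass-through or unsupported.
    public override bool CanRead => _inner.CanRead;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => throw new NotSupportedException();
    public override long Position
    {
        get => throw new NotSupportedException();
        set => throw new NotSupportedException();
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
}
```

It caps the Read rate rather than the byte rate, so large buffers still move a lot of data per call.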
3. eterm ◴[] No.43925786[source]
Thanks for trying to help.

On the client side, randomStream.Close will get called when it's disposed.

On the server side, I'm not sure what I could put into an overridden Close that wouldn't just be base.Close()? RandomStream itself doesn't own any resources that need cleaning up.

I could force WCF to use Session mode, and then add flow-control through a side-channel, so other messages could prepare the stream to internally buffer and then rewrite in requested chunks?

But at that point I might as well just use an appropriately sized GetRandomBlock(ValueWithSequence[]), chunk requests that way, and abandon using a stream for this at all.

I'll experiment with that approach to find the best buffer size, and to see whether streaming the buffer actually helps versus just putting it in the message and letting WCF control the sending.
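The chunked alternative could be sketched like this. The names GetRandomBlock and ValueWithSequence come from the comment above, but their shapes here are guesses, and the WCF [ServiceContract]/[OperationContract] attributes are omitted so the sketch stands alone:

```csharp
using System;

// One entry in a block: the value plus its position in the overall
// sequence, so the client can detect gaps or reordering.
public class ValueWithSequence
{
    public long Sequence;
    public int Value;
}

public class RandomService
{
    private readonly Random _rng = new Random();

    // Returns `count` values starting at `sequence`. The client advances
    // `sequence` between calls, so the server keeps no per-client state
    // and WCF frames each block as an ordinary message.
    public ValueWithSequence[] GetRandomBlock(long sequence, int count)
    {
        var block = new ValueWithSequence[count];
        for (int i = 0; i < count; i++)
        {
            block[i] = new ValueWithSequence
            {
                Sequence = sequence + i,
                Value = _rng.Next()
            };
        }
        return block;
    }
}
```

With this shape the "how much to send" decision moves entirely to the client's choice of `count`, which is exactly the flow control the stream version lacks.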

replies(2): >>43931001 #>>43934073 #
4. cadamsdotcom ◴[] No.43931001[source]
What if your Close implementation sets a flag in the stream that, once set, makes it respond to Read calls with 0 (EOF)?

There'd be some bookkeeping to keep track of the stream (so you can set its flag), then replace it with a new one the next time a client connects, effectively making each stream single-use. But you seem to be optimizing for read throughput, not the number of opens and closes you can do, so that shouldn't be a blocker.
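A minimal sketch of that flag idea, assuming a RandomStream like the one discussed above (only the name comes from the thread; the rest is hypothetical):

```csharp
using System;
using System.IO;

// Infinite random stream whose Close flips a flag so every later Read
// reports EOF (returns 0) instead of more data.
public class RandomStream : Stream
{
    private readonly Random _rng = new Random();
    private volatile bool _closed; // the single-use flag

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (_closed) return 0; // report EOF once the flag is set
        var bytes = new byte[count];
        _rng.NextBytes(bytes);
        Array.Copy(bytes, 0, buffer, offset, count);
        return count;
    }

    public override void Close()
    {
        _closed = true; // bookkeeping hook: mark this stream as spent
        base.Close();
    }

    // Forward-only, read-only stream boilerplate.
    public override bool CanRead => true;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => throw new NotSupportedException();
    public override long Position
    {
        get => throw new NotSupportedException();
        set => throw new NotSupportedException();
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
}
```

Whether this helps depends on whether the framework actually calls Close on the server-side stream when the client disconnects, which is the open question in this thread.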

5. wzdd ◴[] No.43934073[source]
(non-Core) WCF seems to have a concept of "drain on close stream". Could CoreWCF be replicating that? https://stackoverflow.com/questions/1676563/why-is-wcf-readi... . The commenter there suggests Abort() instead (which is apparently a bad idea).

It's weird behaviour and I wouldn't have expected it either, since infinite length streams are pretty common.