Timur Latypoff 15:03:11

Is there a way to make aleph's TCP server (or my own stream handler) detect that a TCP client is too slow, in order to drop the connection and prevent unbounded buffering of stale data? From what I've seen, the TCP socket buffers (server side + client side + intermediary router buffers, I'd guess) can hold minutes of stale data, and I would love to find a way to prevent that.


how do you want it to behave @timur058? would the timeout on connect work for you?

Timur Latypoff 16:03:07

I think it would, if sending outgoing messages actually slowed down. For some reason, I don't feel any back pressure from the system: it seems to swallow all my outgoing messages into its endless internal socket buffers without slowing me down as their producer. At the same time, my consumer receives messages with bigger and bigger delays. Is there a way to reduce the outgoing TCP buffer?
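One way to make back pressure reach the producer sooner is to shrink the buffers that sit between it and the wire. Here is a minimal sketch, assuming aleph's `:bootstrap-transform` server option and Netty's `ChannelOption` constants; the option names and the specific sizes are assumptions to verify against your aleph/Netty versions:

```clojure
(require '[aleph.tcp :as tcp])
(import '[io.netty.channel ChannelOption WriteBufferWaterMark])

;; Smaller kernel send buffer + smaller Netty write buffer mean puts
;; stop completing sooner, so the producer actually feels a slow client.
(defn start-server! [handler]
  (tcp/start-server
    handler
    {:port 9000
     ;; :bootstrap-transform receives Netty's ServerBootstrap
     :bootstrap-transform
     (fn [^io.netty.bootstrap.ServerBootstrap b]
       (.childOption b ChannelOption/SO_SNDBUF (int 16384))
       (.childOption b ChannelOption/WRITE_BUFFER_WATER_MARK
                     (WriteBufferWaterMark. 8192 32768)))}))
```

Even with small buffers there is still some slack in the path (client-side receive buffer, routers), so this bounds the staleness rather than eliminating it.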


sadly beyond my ken, i've not done anything serious with tcp and aleph

Timur Latypoff 13:03:25

This is some kind of rate-limiting solution for manifold streams, right?


It is flow control, which is not quite the same thing as rate limiting, and it uses core.async, not manifold. A similar approach there would let you directly control the rate of flow without having to wait for back pressure from lower-level buffers filling up.
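As a sketch of that core.async flavor of flow control: a sliding buffer keeps only the most recent values, so a slow consumer always reads fresh data instead of an ever-growing backlog. The channel size and `process-quote` are hypothetical stand-ins:

```clojure
(require '[clojure.core.async :as a])

(defn process-quote [q]          ; hypothetical slow consumer
  (Thread/sleep 50)
  (println "processed" q))

;; A sliding buffer retains only the 16 newest items; older, staler
;; quotes are dropped instead of piling up behind the consumer.
(def quotes (a/chan (a/sliding-buffer 16)))

(a/go-loop []
  (when-some [q (a/<! quotes)]
    (process-quote q)
    (recur)))

;; The producer is never blocked: put! completes immediately, evicting
;; the oldest buffered quote if the buffer is full.
(a/put! quotes {:symbol "AAPL" :bid 190.0 :ask 190.1})
```

The trade-off versus a plain bounded buffer is that dropping happens silently at the channel, which suits data where only the latest value matters (like quotes).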

Timur Latypoff 08:03:29

@U0NCTKEV8 I see. Thank you for the tip. I'd still like to see what could be done automatically, based on how slow the network client is. My use case is streaming real-time financial data (e.g. exchange quotes): there's a lot of it, and sometimes the consumer is slow to process it. I would like to detect that situation and start programmatically filtering out non-essential data to reduce the bandwidth requirements, instead of building up delivery delays for real-time data. To be fair, it's not a Clojure- or aleph-specific question per se, more of a general TCP server question. But since I'm using aleph, I thought other aleph users might know the best practices here.
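A hedged sketch of that automatic-degradation idea using manifold, which aleph's TCP handlers already hand you as the connection stream: bound each put with `s/try-put!`, treat a timeout as "client is slow", and forward only essential messages while that flag is set. The `essential?` predicate and the 100 ms window are assumptions:

```clojure
(require '[manifold.stream :as s]
         '[manifold.deferred :as d])

;; Forward messages from src to the client connection, degrading to
;; essential-only traffic while the client cannot keep up.
(defn forward-adaptively [src conn essential?]
  (let [slow? (atom false)]
    (s/consume
      (fn [msg]
        (when (or (not @slow?) (essential? msg))
          (d/chain
            ;; yields ::timeout if the put doesn't complete in 100 ms
            (s/try-put! conn msg 100 ::timeout)
            (fn [r] (reset! slow? (= ::timeout r))))))
      src)))
```

Note this only detects slowness once the lower buffers are full, so it pairs naturally with shrinking the socket/Netty buffers; the two together bound how much stale data a slow client can accumulate.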