So here’s an open topic for discussion: how do you decide on the size of channel buffers? The implicit goal is balance: producers should stay just ahead of consumers, but slow down under back-pressure if they get too far ahead. The channel’s buffer size seems to be the way to tune that … but getting the numbers “right” feels like black magic, and core.async doesn’t have a lot of hooks for monitoring to help with making these determinations.


Also, there’s the vast difference between the development environment and production.


Is it caveman numbers (0, 1, many)?


I think it would be pretty straightforward to wrap your choice of buffer with a logging implementation, to get some idea of what is happening.
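Something along these lines might work as a sketch (the names are hypothetical, and note that it reaches into `clojure.core.async.impl.protocols`, which is an implementation namespace rather than a public API, so it could change between versions):

```clojure
(ns buffer-monitoring
  (:require [clojure.core.async :as a]
            [clojure.core.async.impl.protocols :as impl]))

(defn monitored-buffer
  "Wraps a core.async buffer, delegating every operation to it while
  recording a high-water mark of occupancy in the stats atom."
  [buf stats]
  (reify
    impl/Buffer
    (full? [_] (impl/full? buf))
    (remove! [_] (impl/remove! buf))
    (add!* [this itm]
      (impl/add!* buf itm)
      ;; record the deepest the buffer has ever been
      (swap! stats update :high-water max (count buf))
      this)
    (close-buf! [_] (impl/close-buf! buf))
    clojure.lang.Counted
    (count [_] (count buf))))

;; usage: pass the wrapped buffer to chan as you would a plain one
(def stats (atom {:high-water 0}))
(def ch (a/chan (monitored-buffer (a/buffer 8) stats)))

(dotimes [_ 3] (a/>!! ch :x))
(:high-water @stats)
```

Comparing the high-water mark against the configured capacity over a realistic workload would at least tell you whether a buffer is chronically full (back-pressure is doing the throttling) or mostly empty (the capacity is irrelevant).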


I have gone through and updated a bunch of go blocks to write information to an atom before doing channel operations, and then compared channel reads and writes, but that was more for debugging deadlocks.
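That approach can be factored into a pair of helpers rather than editing each go block by hand. A minimal sketch, assuming blocking ops and a single shared atom (the `logged-` names and `:label` scheme are made up for illustration):

```clojure
(ns channel-debug
  (:require [clojure.core.async :as a]))

;; every put/take is recorded here before the real operation runs
(def ops (atom []))

(defn logged->!!
  "Like >!!, but first records [:put label v] in the ops atom."
  [label ch v]
  (swap! ops conj [:put label v])
  (a/>!! ch v))

(defn logged-<!!
  "Like <!!, but first records [:take label] in the ops atom."
  [label ch]
  (swap! ops conj [:take label])
  (a/<!! ch))

;; usage: tag each channel's operations with a label, then compare
;; put vs take counts per label to find where things are stuck
(def work (a/chan 2))
(logged->!! :work work 1)
(logged->!! :work work 2)
(logged-<!! :work work)
(frequencies (map first @ops))
```

Because the record happens *before* the potentially blocking operation, a label whose put count keeps growing while its take count stays flat points straight at the stuck side of a deadlock.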


pipeline might be a good place to look: it makes its internal buffer size equal to the number of consumers, so maybe that is a good rule of thumb.
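For reference, this is what that looks like in use — `pipeline` takes the parallelism `n` up front, and sizing the output channel to match is the rule of thumb being suggested (the channel sizes here are just illustrative):

```clojure
(ns pipeline-sizing
  (:require [clojure.core.async :as a]))

(def from (a/chan 10))
(def to   (a/chan 4)) ; sized to match the 4 workers, per the rule of thumb

;; fill the input and close it so pipeline knows when to finish
(doseq [i (range 10)] (a/>!! from i))
(a/close! from)

;; 4 parallel workers applying the transducer; pipeline closes `to`
;; when the input is exhausted, and preserves input order
(a/pipeline 4 to (map inc) from)

(a/<!! (a/into [] to))
;; => [1 2 3 4 5 6 7 8 9 10]
```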


My understanding of queueing theory is limited, but it does seem to be exactly the tool for modeling these things.
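Even without the full machinery, Little’s law (L = λW: average occupancy equals arrival rate times time in system) gives a back-of-the-envelope starting point. A sketch with made-up numbers — the rates and the latency budget here are pure assumptions, not measurements:

```clojure
;; Little's law: L = lambda * W
;; Hypothetical workload: 200 items/sec arriving, 10 ms of work per
;; item, 4 consumers. (Ratios keep the arithmetic exact.)
(def lambda 200)            ; arrivals per second
(def service-time 1/100)    ; seconds of work per item
(def consumers 4)

;; per-consumer utilization; above 1 the queue grows without bound
;; and no finite buffer will save you
(def rho (/ (* lambda service-time) consumers)) ; => 1/2

;; if you can tolerate W seconds of queueing latency, Little's law
;; says the buffer should hold about lambda * W items
(def latency-budget 1/20)   ; 50 ms
(def buffer-size (* lambda latency-budget)) ; => 10
```

So under these assumptions a buffer of ~10 bounds queueing delay at 50 ms, and checking ρ < 1 first tells you whether the consumers can keep up at all — which is arguably the more important number.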