
@gowder my suspicion is that if you're maximizing your CPU & I/O, memory won't be an issue with that type of problem. Unless, as you said, the images turn out to be pathologically huge.


The interesting part will be balancing it such that you keep each scaling / I/O thread busy without waiting on the result of the previous step. And that will have to be tuned to the available cores and disk I/O.


If one task gets ahead of another, that's when your memory consumption will grow (e.g. images read but not yet scaled, or scaled but not yet written).


You can easily apply back pressure to tasks with appropriately sized chan buffers, though, such that a particular task cannot produce much more than the next task is ready for.


Personally, #onyx feels like overkill for your task. But, admittedly, I had a 10K+ events/sec stream-processing problem with real-time visibility requirements, and I still looked at #onyx and got scared away. So, take that as you will.


@rwilson I think you meant to ping @gowder - I was the one who suggested he look at #onyx.


Yep, sorry about that.