
Need help implementing a webhook client: I have a Google PubSub topic of data representations of HTTP requests, and I need to execute all of those requests. All of these requests are to the same domain. Assuming all of them are idempotent, how can I implement a subscriber process to achieve maximum throughput? I'm thinking of using an async HTTP client and implementing something akin to TCP flow control, but at the HTTP layer, so that the process "finds" the optimum throughput, using 5xxs and 429s as backpressure. Am I on the right track?


^ As far as throughput is concerned, it could be up to a thousand HTTP requests per second (peak, not sustained). Is this even achievable with one process?


Say each HTTP request takes 100 ms


you have 1 second to do 1000 of them


each thread can do 10 in that time


so you need 100 threads
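That back-of-the-envelope math is just Little's law (concurrency = arrival rate × latency). A minimal sketch, using the hypothetical numbers from the messages above:

```java
// Little's law: requests in flight = arrival rate × per-request latency.
// The 1000 req/s and 100 ms figures are the example numbers from this chat.
public class Concurrency {
    static int requiredConcurrency(int requestsPerSecond, int latencyMillis) {
        // 1000 req/s × 0.1 s = 100 requests in flight at any instant
        return requestsPerSecond * latencyMillis / 1000;
    }

    public static void main(String[] args) {
        System.out.println(requiredConcurrency(1000, 100)); // prints 100
    }
}
```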


OR you need to use a reactive library that yields on IO


that's in JVM land


if you used something like Rust or Node.js for this bit, you can use their async/await stuff to reach your IO throughput easily


Not sure I agree with the dichotomy between threads and reactive libraries. One can use java.nio HTTP from vanilla threads, right?


the number you need to do per time period and the length of time each one takes will tell you how many you need to run concurrently; the number you need to run concurrently will tell you whether a thread per request will be fine or not


100 threads is basically nothing


the backpressure tuning will be at least a little easier to implement with a thread per request, because you will just be tuning the number of threads the thread pool has to make requests up and down
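One way to sketch that tuning, borrowing AIMD from TCP congestion control as the original question suggests (the class and numbers are illustrative, not from the chat): cap the in-flight requests, grow the cap by one on success, and halve it on a 429 or 5xx.

```java
// Illustrative AIMD backpressure at the HTTP layer: a window of in-flight
// requests that grows additively on success and shrinks multiplicatively
// on 429/5xx, like a TCP congestion window.
public class AimdLimiter {
    private int limit;          // current in-flight cap
    private int inFlight = 0;   // requests currently outstanding
    private final int maxLimit;

    public AimdLimiter(int initialLimit, int maxLimit) {
        this.limit = initialLimit;
        this.maxLimit = maxLimit;
    }

    // Worker threads block here until a slot is free.
    public synchronized void acquire() throws InterruptedException {
        while (inFlight >= limit) wait();
        inFlight++;
    }

    // Call with each response status: adapt the window, free the slot.
    public synchronized void onResponse(int status) {
        inFlight--;
        if (status == 429 || status >= 500) {
            limit = Math.max(1, limit / 2);  // multiplicative decrease
        } else if (limit < maxLimit) {
            limit++;                         // additive increase
        }
        notifyAll();
    }

    public synchronized int currentLimit() { return limit; }
}
```

With a thread-per-request design, each worker calls `acquire()` before its blocking HTTP call and `onResponse(status)` after it, so the effective degree of parallelism tracks the window.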


for some non-blocking HTTP client (Java even has one built in these days) you'll need to do your own accounting for how many are in progress
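A sketch of that accounting with the JDK 11+ built-in `java.net.http.HttpClient`: a `Semaphore` tracks how many requests are outstanding and blocks the producer once the cap is hit. The cap of 100 is just the example number from earlier in the thread.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Semaphore;

// Non-blocking sends with manual in-flight accounting via a Semaphore.
public class AsyncSender {
    static final HttpClient CLIENT = HttpClient.newHttpClient();
    static final Semaphore IN_FLIGHT = new Semaphore(100); // illustrative cap

    static void send(URI uri) throws InterruptedException {
        IN_FLIGHT.acquire(); // blocks once 100 requests are outstanding
        HttpRequest req = HttpRequest.newBuilder(uri).GET().build();
        CLIENT.sendAsync(req, HttpResponse.BodyHandlers.ofString())
              .whenComplete((resp, err) -> IN_FLIGHT.release());
    }
}
```

Swapping the fixed `Semaphore` for an adaptive window would give you the 429/5xx-driven backpressure from the original question.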


I've worked with a couple of reactive frameworks on the JVM, but they seemed like much more of a hassle to maintain than a few additional VMs would have been..
