
Thanks, @dominicm, that’s useful information. It would be nice to have a benchmark in the repo to establish a baseline for performance. Is this something you could share (in a gist or something)?


@msolli It was pretty basic. I was technically testing a bunch of other things at the same time (e.g. our auto-scaling setup, our HTTP layer, my local network), so it's not a great test on its own. For my test, I used wrk with a Lua script to set the HTTP body.
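For reference, a wrk body script looks roughly like this (the payload and endpoint here are placeholders, not the ones I actually used):

```lua
-- post.lua: make wrk send a fixed POST body on every request
wrk.method = "POST"
wrk.body   = '{"example": "payload"}'
wrk.headers["Content-Type"] = "application/json"
```

Invoked with something like `wrk -t4 -c64 -d30s -s post.lua http://localhost:8080/endpoint`.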


I suppose the processing side is more interesting, but for that I was measuring with our Datadog setup. So, again, not great for a gist.


Maybe something could be put together with wrk, though. You'll still want to externalize the PG instance somehow, so the JVM doesn't impact it and you get an accurate measure of the latency you'll see in production.
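As a rough sketch of what "externalize" could mean in a benchmark setup (host names, image tag, and password here are placeholders):

```shell
# On a separate machine from the JVM under test,
# start a throwaway Postgres instance for the benchmark:
docker run -d --name bench-pg \
  -e POSTGRES_PASSWORD=bench \
  -p 5432:5432 \
  postgres:16

# Then point the service under test at that host, e.g.:
#   jdbc:postgresql://bench-db-host:5432/postgres
```

That way GC pauses or CPU contention on the benchmarking box don't skew the database's behavior.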
