#ring
2023-12-05
ag 21:12:24

I need to start using QueuedThreadPool with jetty, and I'm not sure where to start. Can someone point at some tutorials, articles, snippets of code? I'm not sure how to determine optimal minThreads/maxThreads values. And in general, I would like to learn more about QueuedThreadPool, what to expect, what to avoid, etc.

weavejester 22:12:39

The standard Ring Jetty adapter uses a QueuedThreadPool by default. The minimum and maximum thread counts can be set with the :min-threads and :max-threads options respectively. If you want to find the optimal values, you'll need to deploy the adapter onto the hardware you intend to run it on, then benchmark it with a tool like ApacheBench (ab).
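For reference, here's a minimal sketch of that kind of benchmark setup with the Ring Jetty adapter (the handler, port, and thread counts are placeholders to vary between benchmark runs, and the ab invocation in the comment is just one example):

(require '[ring.adapter.jetty :refer [run-jetty]])

(defn handler [_request]
  {:status  200
   :headers {"Content-Type" "text/plain"}
   :body    "hello"})

;; Start Jetty with an explicit thread-pool size, then drive it with
;; ApacheBench (e.g. ab -n 10000 -c 100 http://localhost:3000/) and compare
;; throughput and latency across different :min-threads/:max-threads settings.
(def server
  (run-jetty handler
             {:port        3000
              :join?       false   ; return the Server instead of blocking
              :min-threads 8
              :max-threads 50}))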

seancorfield 22:12:53

@U0G75ARHC I'm curious what makes you think you need something different from the default behavior (QueuedThreadPool is already the default)? Also, which version of Jetty are you using?

ag 22:12:25

> what makes you think you need something different
oh, I don't actually know if we do. We have a few things to improve, and some people on my team think we should set sensible min/max thread defaults, let Ops override them with env vars, and expose thread count metrics to Prometheus.

weavejester 22:12:51

Overriding them with environment variables should be straightforward:

(run-jetty handler
           {:min-threads (Integer/parseInt (System/getenv "MINTHREADS"))
            :max-threads (Integer/parseInt (System/getenv "MAXTHREADS"))})
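
If those variables might be unset, here's a slightly more defensive sketch (same hypothetical MINTHREADS/MAXTHREADS names, falling back to the adapter's defaults of 8 and 50 when a variable is missing):

(defn env-int
  "Read an integer from an environment variable, falling back to a default."
  [var-name default]
  (if-let [value (System/getenv var-name)]
    (Integer/parseInt value)
    default))

(run-jetty handler
           {:min-threads (env-int "MINTHREADS" 8)
            :max-threads (env-int "MAXTHREADS" 50)})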

ag 22:12:06

That much I know. I'm just looking for some general insight into how this affects things, etc.

weavejester 22:12:51

It just controls how many threads to use. The default is to always have a minimum of 8 threads, and a maximum of 50. Whether that's good or bad depends on a number of factors, such as how many cores your server has, and if your threads are blocking at all (for instance, in getting the result of a database query).
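As a rough illustration of that reasoning (the numbers below are assumptions for the sketch, not recommendations): mostly CPU-bound handlers gain little beyond roughly one thread per core, while handlers that block on I/O can usefully run many more threads than cores, since blocked threads aren't using the CPU.

(let [cores (.availableProcessors (Runtime/getRuntime))]
  ;; Mostly CPU-bound handlers: little benefit beyond roughly the core count.
  ;; Blocking handlers (database or HTTP calls): threads spend time waiting,
  ;; so a pool several times larger than the core count can keep the CPU busy.
  {:min-threads 8
   :max-threads (max 50 (* 4 cores))})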

ag 22:12:17

Do you think that if we expose thread counts in Prometheus and allow Ops to override the values, it would let them find the optimal values at some point? Or is it a waste of time, where the potential gain isn't worth the effort?

weavejester 22:12:10

Exposing the thread counts will allow ops to benchmark at various settings and find the optimum, assuming their benchmarks are accurate and a good representation of normal usage. But it's not something I would personally bother changing unless I'm having performance issues that I believe are due to blocked threads.
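For the metrics side, here's a minimal sketch of reading the pool's counters off a running server (this assumes run-jetty was called with :join? false so it returns the Jetty Server; how you turn these numbers into Prometheus gauges depends on your metrics library):

(import '(org.eclipse.jetty.util.thread QueuedThreadPool))

(defn thread-pool-stats
  "Return the current QueuedThreadPool counters for a running Jetty Server."
  [^org.eclipse.jetty.server.Server server]
  (let [^QueuedThreadPool pool (.getThreadPool server)]
    {:threads      (.getThreads pool)      ; threads currently in the pool
     :idle-threads (.getIdleThreads pool)  ; threads waiting for work
     :queued-jobs  (.getQueueSize pool)    ; jobs waiting for a free thread
     :min-threads  (.getMinThreads pool)
     :max-threads  (.getMaxThreads pool)}))

;; (thread-pool-stats server) returns a map of the current counters;
;; export each value as a gauge with whatever Prometheus client you use.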

weavejester 22:12:58

But it costs nothing to expose the options, so if someone really wanted to mess around with those values, I'd probably let them rather than argue about it.

seancorfield 22:12:04

FWIW, we've never found it necessary to change those values... because the defaults are "sensible" 🙂

ag 22:12:26

Oh, I'm not even at the point of arguing. I just wanted to first understand what the problem is, and how potentially "solving" it might create ten other issues. Everything you've said makes a lot of sense. I guess I'll expose the values and let them play around if that's what they desire.