#clojure
2021-10-23
Carlo09:10:08

I'm... confused:

(read-string "00007")
;; => 7
(read-string "00009")
;; => EXCEPTION

slipset09:10:30

the leading 0 makes it be interpreted as octal

🙌 1
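(For reference, the Clojure reader follows Java-style integer-literal syntax, so a leading 0 means an octal literal; a quick REPL sketch:)

(read-string "010")   ;; => 8  (octal 10)
(read-string "0x10")  ;; => 16 (hex)
(read-string "2r101") ;; => 5  (arbitrary radix: binary here)
(read-string "00009") ;; => throws NumberFormatException, 9 isn't an octal digit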
Carlo09:10:46

why? I mean, where are the docs for that?

Carlo09:10:40

thanks, so it's just the leading 0 that causes the problem! Is there a standard way to read a number as decimal, or should I manually drop the leading zeros?

p-himik09:10:34

(Long/parseLong "0009" 10)
=> 9

🎯 1
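(Worth noting: the single-argument overload of parseLong already defaults to radix 10, so the explicit radix is optional there; the radix argument matters when parsing other bases:)

(Long/parseLong "00009")
;; => 9
(Long/parseLong "ff" 16)
;; => 255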
Carlo09:10:47

thank you!

👍 1
vemv15:10:25

is there interesting prior art for self-adjusting thread pools? e.g. I have an IO-bound workload and can't know in advance if the ideal number of threads will be 50 or 200 or ... it might even depend on the day I'm running the task at hand, since 3rd-party APIs can be more or less available. So the ideal usage is specifying "I want to optimize for throughput", having such a measurer, and a thread pool that adds/removes threads depending on what it observes. Would be sort of a git bisect

jumar15:10:12

Perhaps don't limit the threadpool but do it higher up the stack? See https://gist.github.com/djspiewak/46b543800958cf61af6efa8e072bfd5c

> you should always push your blocking operations (IO or otherwise) over to a separate thread pool. This separate thread pool should be caching and unbounded with no pre-allocated size.

(emphasis mine) and later in a comment: https://gist.github.com/djspiewak/46b543800958cf61af6efa8e072bfd5c#gistcomment-3158950

> Bounded thread pools contain unbounded task queues that are entirely outside your control. You can't see how many outstanding tasks there are, reschedule them, cancel them, change your semantics, etc. When you start running out of scarce resources, you need to be able to propagate that information back upstream in the form of temporary connection drops, or even better, trigger autoscaling to create more resources. You want to do this at the highest possible level in your stack, because that gives you the greatest semantic control. A thread pool hitting its thread count limit cannot interact with kubernetes to allocate a new pod.

👀 1
Ben Sless15:10:37

the "easy" answer is to have a theoretically unbounded number of threads, where back pressure is reflected by the external resources instead of the thread pool size

👀 1
vemv15:10:02

@U06BE1L6T the problem with unbounded is: how do I use an ideal number of threads? If I use too few I might be underutilizing resources; if I use too many I might be decreasing throughput, hitting rate limits, etc.

vemv15:10:27

@UK0810AQ2 I guess that would work, but it merely shifts the problem from one place to another? 😁

jumar15:10:42

I think Ben is saying the same thing as the blog post

jumar16:10:34

And I don't think that the app itself, without considering the context / world around it, can make reasonable decisions about the optimal number of threads / concurrent operations.

vemv16:10:07

I see. The Gist's writing is sufficiently ambiguous to not be particularly useful tbh: "saturate your pool until you have to scale out" is not universal advice; maybe I just want to run a task from a single node. And again, saturating a pool can decrease throughput, hit rate limits, etc.

jumar16:10:11

It's a long shot but perhaps there's something in https://github.com/resilience4j/resilience4j or a similar library? It's a successor of Hystrix which is meant to be more adaptable - but I haven't used it

👀 1
Ben Sless16:10:22

@U45T93RA6 yes and no; your IO-bound workload is the other peer's CPU-bound workload at some point. Let them throttle you, and apply backpressure based on failed response codes or long response times

vemv16:10:43

Using a specific (but fictional) example, let's say I'm downloading videos from a provider. Maybe at thread count = 1000, I'm getting 2kbps throughput per thread because the 3rd-party API splits throughput evenly. I'm still not getting a long response time or a rate limit, but the throughput will be effectively unbearable, equivalent to never getting done with the tasks. Maybe the "curve" is like this:

Thread count 1 = 1000kbps/thread
Thread count 5 = 1000kbps/thread (doesn't decrease)
Thread count 100 = 700kbps/thread (decreases a little)
Thread count 1000 = 2kbps/thread (decreases a lot)
Thread count 2000 = 1kbps/thread (decreases further)
Thread count 2222 = I start getting rate-limit errors

So the ideal thread count would be around 100, give or take, if optimizing for global throughput.

vemv16:10:59

Hope it's not an overly contrived example; personally I've sensed for a long time that this sort of dynamic allocation makes sense for certain use cases, even if not for all of them. Maybe I should just go and code it next time I encounter this!

lukasz16:10:07

Sounds like you need a rate limiter, but for outgoing requests rather than incoming. We've built something like that based on Redis, so we can have a pool of workers spread across multiple nodes while maximizing throughput per tenant and staying 50% under API request limits (so that we don't disrupt our customers' API usage; in the early days we happened to screw up one of our customers' custom Zendesk extensions, because our sync process would just go all-in and maximize how much data we could fetch). It's tricky to get this right, as you've pointed out: you run into a state where more threads means everything slows down. My advice is to figure out the max req/s (or whatever rate you have to optimize for) and walk backwards from there

👍 1
Ben Sless16:10:48

What you're describing falls under the theory of systems control using negative feedback. In your example, you want to maximize throughput, and your control signal is the number of threads. It's easy to implement using a PID controller (https://en.wikipedia.org/wiki/PID_controller), or using some adaptive control, which is more complicated. But it's feasible to reach a "hands off" solution

✌️ 1
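(For concreteness, here's a minimal discrete PID step in Clojure; a sketch only, with made-up gains and state shape, not from any library:)

(defn pid-step
  "One discrete PID update: returns [output new-state].
   state carries the running error integral and the previous error."
  [{:keys [kp ki kd]} {:keys [integral prev-error] :as state} setpoint measured dt]
  (let [error      (- setpoint measured)
        integral'  (+ integral (* error dt))
        derivative (/ (- error prev-error) dt)
        output     (+ (* kp error) (* ki integral') (* kd derivative))]
    [output (assoc state :integral integral' :prev-error error)]))

;; e.g. feed it the measured throughput each tick and use the output
;; to nudge the thread count up or down
(pid-step {:kp 0.5 :ki 0.1 :kd 0.05}
          {:integral 0.0 :prev-error 0.0}
          1000.0   ;; target rate
          700.0    ;; measured rate
          1.0)     ;; dt
;; => [195.0 {:integral 300.0 :prev-error 300.0}]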
vemv17:10:19

Thanks all! Interesting inputs.

👍 1
Ben Sless17:10:14

A simple way you could implement a negative feedback system is by controlling the number of threads and measuring the rate. As long as increasing the number of threads increases the rate, keep increasing the number of threads. When you add a thread and the overall rate decreases, you flip a "switch" and stop increasing the number; maybe even go one back.

Since this system is inherently noisy, it's usually beneficial to pass the signal through some hysteresis filter. You can easily implement this system with core.async. Keep in mind that discrete-time control is slightly different from continuous-time control, too.

I'd probably implement something like emitting a constant +1 signal for the number of threads as long as the derivative of the overall rate is positive (i.e. rate[t+1] - rate[t] > 0)

💯 1
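(A rough core.async sketch of that loop; resize! and measure-rate are hypothetical hooks you'd supply, and the epsilon band is a crude stand-in for a real hysteresis filter:)

(require '[clojure.core.async :as a])

(defn start-controller!
  "Hill-climb the thread count: keep moving in the current direction
   while the rate holds or improves; reverse when it drops by more than epsilon."
  [{:keys [min-threads max-threads interval-ms epsilon resize! measure-rate]}]
  (a/go-loop [n min-threads, prev-rate 0.0, direction 1]
    (resize! n)
    (a/<! (a/timeout interval-ms))
    (let [rate (measure-rate)
          direction' (if (< (- rate prev-rate) (- epsilon))
                       (- direction)  ;; rate dropped: back off
                       direction)     ;; rate held or improved: keep going
          n' (-> (+ n direction') (max min-threads) (min max-threads))]
      (recur n' rate direction'))))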
Ben Sless17:10:35

I rarely get opportunities to apply this, but I really enjoyed systems control in school

😊 1
Ben Sless17:10:40

So to start with, a "bad" implementation will be:

D threads[t] = sign(D rate[t])

with D being the discrete-time derivative operator
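(For concreteness, that operator is easy to express as a sketch over a seq of rate samples; names are made up:)

(defn thread-deltas
  "Given a seq of rate samples, emit per-step thread-count adjustments:
   the sign of the discrete rate derivative."
  [rates]
  (map (fn [[r0 r1]] (compare r1 r0)) (partition 2 1 rates)))

(thread-deltas [10 12 15 14 14])
;; => (1 1 -1 0)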

didibus21:10:41

That's what a cached thread pool is for. You do a load test to see when your host tips over; that tells you the MAX size of your pool for your given resources. You set that as the MAX size, but otherwise you make it a cached thread pool. It'll grow and shrink as usage goes up/down.

didibus21:10:45

But you need to use the standard constructor to put an upper bound:

ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue)

corePoolSize is like the minimum number of threads; you can probably set it to 0.
maximumPoolSize is the max you've identified before you have too many threads and your application starts to slow down.
keepAliveTime is how long you want to cache idle threads; 60 seconds is a good default.
unit is the unit of time for the number you set as keepAliveTime.
workQueue is the queue to use for pending tasks; this is where you could handle backpressure if you need to.

(import '(java.util.concurrent ThreadPoolExecutor TimeUnit SynchronousQueue))

(ThreadPoolExecutor.
  0                   ;; corePoolSize
  500                 ;; maximumPoolSize: your max threads
  60                  ;; keepAliveTime
  TimeUnit/SECONDS
  (SynchronousQueue.))

didibus21:10:29

Quoting the ThreadPoolExecutor javadoc:

> A good default choice for a work queue is a SynchronousQueue that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies.

If you use SynchronousQueue, it's basically like not having a queue at all. Each time you submit, it will try to reuse one of the cached threads if available; if there isn't any, it will create a new thread and run the task immediately in it. But if your max threads is reached, it will reject the task. You can define the strategy for rejection by passing an additional parameter to ThreadPoolExecutor: a handler that chooses how to handle the rejection. By default it will just throw an exception when rejected. There are a few available choices too, if you look at https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ThreadPoolExecutor.html under "Rejected tasks", but you could also implement your own if you want.
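(For instance, the built-in CallerRunsPolicy runs the rejected task on the submitting thread, which acts as a simple form of backpressure; a sketch:)

(import '(java.util.concurrent ThreadPoolExecutor ThreadPoolExecutor$CallerRunsPolicy
                               TimeUnit SynchronousQueue))

(ThreadPoolExecutor.
  0
  500                 ;; your max threads
  60
  TimeUnit/SECONDS
  (SynchronousQueue.)
  ;; when saturated, run the task on the caller's thread instead of throwing
  (ThreadPoolExecutor$CallerRunsPolicy.))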

slipset14:10:13

Perhaps only tangentially related, but ztellman has a library in this space: https://github.com/ztellman/dirigiste

vemv15:10:44

seems like something Clojure would excel at :)

p-himik15:10:16

If I'm not mistaken, it sounds exactly like Java's ThreadPoolExecutor, given its keepAliveTime parameter.

vemv15:10:25

It doesn't to me; generally, Java's stock IO-oriented executor merely grows/shrinks, but it doesn't have a notion of "optimize for x"

p-himik15:10:51

Ah, so you want not just automatic increasing/decreasing of the pool size according to the current load and some delay, but a sort of learning thread pool that adapts its size while expecting higher load during specific times of the day / days of the week / etc.?

vemv15:10:47

Yes. Even without considering the day of the week, I cannot know in advance whether the ideal thread count will be 50 or 200 (that is, without some trial and error, which I'd rather automate)

vemv18:10:20

that happens to be a transitive dep in my prod app; however, it left me with a mixed impression (https://github.com/ztellman/dirigiste/pull/30, plus some other thing I also had to fix for it to play fine w/ Component). It might have something worth salvaging though!

danielgrosse20:10:10

I have a project where I want to add a middleware to a reitit router only in the dev profile. Currently I use Integrant to set up my system. Is it possible to define the middleware in the dev folder and pass it to the handler function via the Integrant config? Or do you suggest a different solution?

Ben Sless03:10:43

You can use the middleware registry and, in production, register that specific middleware as identity. Then you would manage the registry as a component
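(A minimal sketch of that idea, assuming reitit's keyword-based middleware registry, the ::middleware/registry router option, plus Integrant; the keys and wiring here are hypothetical:)

(ns app.router
  (:require [integrant.core :as ig]
            [reitit.middleware :as middleware]
            [reitit.ring :as ring]))

;; the registry comes in via the Integrant config, so the dev system can
;; supply a real middleware while prod maps the same key to identity
(defmethod ig/init-key ::handler [_ {:keys [registry]}]
  (ring/ring-handler
    (ring/router
      ["/ping" {:middleware [:wrap-debug]   ;; referenced by keyword
                :get (fn [_] {:status 200, :body "pong"})}]
      {::middleware/registry registry})))

;; prod config: {::handler {:registry {:wrap-debug identity}}}
;; dev config:  {::handler {:registry {:wrap-debug dev/wrap-debug}}}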