
Follow-up question: what could go wrong if you replace go blocks with fully threaded core.async operations, so that (go (<! ...)) becomes (async/thread (<!! ...))? In which scenarios is this going to be problematic? Babashka has had this fallback for a while now. I have tested various examples and they all worked the same. My assumption was that using threads causes more overhead, so it will be slower, but the results will be the same. One user on Reddit bumped into this (documented) behavior, so I'm wondering if we should change it.
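For reference, the difference between the two forms is parking vs. blocking takes; a minimal sketch using the standard core.async API (the channel and values here are made up for illustration):

```clojure
(require '[clojure.core.async :as async :refer [go chan <! <!! >!!]])

(let [c (chan)
      ;; go block: parks on <!, holds no OS thread while waiting
      from-go     (go (<! c))
      ;; threaded fallback: blocks a real OS thread on <!!
      from-thread (async/thread (<!! c))]
  (>!! c :a)
  (>!! c :b)
  ;; both forms return a channel that yields the block's result
  (println (<!! from-go) (<!! from-thread)))
```

For a handful of blocks the observable behavior is the same; the difference is only in what each waiting block costs.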


I think it’s fine. Yes it will use more resources (and it might be too expensive in some cases) but I’d prefer it to the alternative of having to change my code to not use go at all


Go blocks are cheap. Threads aren't necessarily so. Spawning 30,000 go blocks is one of the examples for core.async (or maybe just an example I remember from Tim Baldridge). Spawning 30,000 threads seems like an obviously bad idea.


Can you point me to the example? The threads in core.async are still managed by a fixed threadpool.

dpsutton: still looking for what I remember. I thought it was a lot more


Tried that example:

$ bb /tmp/foo.clj
Read 1000 msgs in 709 ms


Upping that number to 10k makes bb crash :)


so I guess that's a good repro
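The /tmp/foo.clj script itself isn't pasted in the thread; a plausible reconstruction matching the "Read N msgs" output above might look like this (the actual original may differ):

```clojure
(require '[clojure.core.async :as async :refer [go chan >! <!!]])

;; Hypothetical reconstruction of /tmp/foo.clj: spawn n go blocks that
;; each put one message on a shared channel, then read them all back.
;; When go is mapped to a real thread, every block costs an OS thread
;; while it waits to put -- which is what blows up around 10k.
(def n 1000)

(let [c     (chan)
      start (System/currentTimeMillis)]
  (dotimes [_ n]
    (go (>! c :msg)))
  (dotimes [_ n]
    (<!! c))
  (println "Read" n "msgs in"
           (- (System/currentTimeMillis) start) "ms"))
```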


Doing the same on the JVM and making it threads:

$ clj /tmp/foo.clj
[5.453s][warning][os,thread] Failed to start thread - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 4k, detached.
Syntax error (OutOfMemoryError) compiling at (/tmp/foo.clj:3:1).
unable to create native thread: possibly out of memory or process/resource limits reached


Maybe there is a way to increase the limit on macOS, but I think this is a good demonstration of the difference


8000 still works:

$ bb /tmp/foo.clj
Read 8000 msgs in 49862 ms


I added this to the README:
- Managing expectations: this text was already in the README, but I've now made it more prominent. It links to:
- Differences with Clojure, which links to:
- an example that shows where mapping go to thread may break down for a high number of threads.


that sounds like a reasonable compromise. i think that reddit comment was too rough in tone but had valid feedback. i also think some documentation is the answer. having this in some form is (to me) better than not having it because of a caveat under extreme usage


Maybe one day we can support the go macro properly. It got me thinking about it anyway.


I haven't really considered it so far, but http-kit also has an HTTP client which is modelled after clj-http, but is probably much lighter on GraalVM, so it becomes a contender in this space. So far clj-http-lite was the only lightweight GraalVM-compatible HTTP client library. Clj-http recently became compatible, but still results in bloated binaries. Is anyone familiar with this client?


@borkdude We use http-kit's client at work in several places. Works great.


We started using it instead of clj-http in some places because we wanted to be able to interrupt an HTTP request if it is taking too long. I like that it is async by default.
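For illustration, the interruptible/async usage looks roughly like this with http-kit's client (standard org.httpkit.client API; the URL and timeouts are placeholders):

```clojure
(require '[org.httpkit.client :as http])

;; http-kit requests return immediately with a promise of the
;; response map, so the caller decides how long to wait for it.
(let [resp   (http/get "https://example.com" {:timeout 5000})
      ;; give up after 1000 ms instead of blocking indefinitely
      result (deref resp 1000 ::timed-out)]
  (if (= result ::timed-out)
    (println "request took too long, moving on")
    (println "status:" (:status result))))
```

The deref-with-timeout pattern is plain Clojure; nothing http-kit-specific is needed to abandon a slow request.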

Kari Marttila:

Babashka is great! I just used it in one personal exercise.


Hmm, http-kit also has HTTP connection reuse, which clj-http-lite doesn't. It also supports PATCH requests. Doesn't seem like a bad thing to include - although we already have babashka.curl, spawning a new process for every request might not be optimal for all use cases.


I'd definitely like to have a built-in http client (server too?) in babashka, rather than rely on babashka.curl


at the cost of binary size


Note that we can already run clj-http-lite as a library, but having a client built in is more convenient. I tested the server too; it was only +0.5MB, which seems great. I made an issue about the server; feel free to respond there.


Awesome, thanks!


(that promised write up about BB + ECS is coming, I swear 🤞 )
