
Are there some interesting future implications for Clojure and possibly core.async given Java 19 introduces Project Loom's green threads, other than "you can use green threads now"? This has always been something that made me a bit jealous of Erlang and Go, so to speak. It can be quite liberating to work with. What do people think? (sorry for the gross simplification but you get the drift)

Cora (she/her)11:05:55

some people have already started working on libraries for using them

👏 1
Cora (she/her)11:05:36

it's not core.async, of course, but it's something. I have no idea about clojure's adoption plans

Cora (she/her)11:05:53

the advisories section of that library says this, which I think in practice means Clojure (or libraries) will need to be modified to some degree in order to be safe:

Cora (she/her)11:05:57

> Clojure's built in locking uses java's native monitors to handle locking. This can prevent a loom fiber from being able to yield - thereby blocking the underlying native thread and potentially lead to starvation under enough contention

matt sporleder12:05:38

yeah exciting stuff


Is it actually slated for release?


Not that it impacts me that much, but I always assumed I would just use the java concurrency utils. e.g. blocking queues, timers, etc.


> This change to LockSupport enables all APIs that use it (`Lock`s, Semaphores, blocking queues, etc.) to park gracefully when invoked in virtual threads. from:
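That quoted behavior is visible from plain Clojure interop. A minimal sketch, assuming JDK 21+ (where virtual threads are final); `q`, `result`, and `vt` are illustrative names:

```clojure
(import '(java.util.concurrent LinkedBlockingQueue))

;; LinkedBlockingQueue blocks via LockSupport, so .take parks the
;; *virtual* thread and frees its carrier instead of pinning it.
(def q (LinkedBlockingQueue.))
(def result (promise))

(def vt (Thread/startVirtualThread
          (fn [] (deliver result (.take q)))))

(.put q :hello)
(.join vt)
@result ; => :hello
```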


locking is still safe, just not maximally performant.


it's also generally rare to use in Clojure so I don't imagine it will matter that much

Cora (she/her)14:05:45

Monitors belong to threads, though, right? in order to be re-entrant? and a fiber might park on one thread and resume on another? I suppose Loom should fix that but I'm guessing that's what the library was running into


fibers/virtual threads are threads

Cora (she/her)14:05:47

I meant OS threads

Cora (she/her)14:05:58

but the library ran into something

Cora (she/her)14:05:34

I'm guessing at an in-between state when monitors hadn't been fixed to support virtual threads


right, they ran into locking stopping other virtual threads from running. That doesn't affect correctness, just performance


if you have 8 threads doing a thread sleep inside of a locking then you've used up the underlying platform thread pool until they are done. That wouldn't be true with ReentrantLock and might not be true in a future iteration of the jvm
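The ReentrantLock alternative can be sketched like this; `with-lock` is a hypothetical helper, not part of clojure.core:

```clojure
(import '(java.util.concurrent.locks ReentrantLock))

;; ReentrantLock parks via LockSupport, so a virtual thread waiting on it
;; yields its carrier thread instead of pinning it the way a monitor can.
(defmacro with-lock [lock & body]
  `(let [l# ~lock]
     (.lock l#)
     (try ~@body
          (finally (.unlock l#)))))

(def l (ReentrantLock.))
(with-lock l (+ 1 2)) ; => 3
```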

Joshua Suskalo14:05:33

I would expect nothing in core to support this for a long while yet. It took a long time for them to drop Java 7 support, and assuming we stick to just supporting LTS releases that means we have 11 and 17 to drop, and even if Java 19 stabilizes Loom the next LTS release is Java 21. So more or less we can only expect to see core support for it when the vast majority of the Clojure community is no longer using Java 17 or older.

Cora (she/her)14:05:16

but we can expect libraries that make use of it, and to be able to use interop ourselves, so that's nice

Joshua Suskalo14:05:47

Yes absolutely. From a community perspective we could expect to see core.async on the JVM transition from a concurrency library to a channel library effectively.


Would it be lazy to publish a library that just re-exports all the core.async stuff leaving out the macros?

Joshua Suskalo14:05:51

Honestly I'd think it'd be better to publish a library that uses the core.async namespaces and exports the macros as thin wrappers around loom primitives, making go and friends start virtual threads while thread starts an os thread


(defmacro go
  [& body]
  `(let [c# (chan)]
     (Thread/startVirtualThread (fn [] (!!> c# (do ~@body))))
     c#))


Thats what i mean

Joshua Suskalo14:05:43

besides that needing to be a put and not a take, yeah, I think that's exactly what a community fork of core.async should be.

Joshua Suskalo14:05:06

Go for it, I'll use it

Joshua Suskalo14:05:41

That way projects can put that at the top level and exclude core.async from everything transitively and include this, and everything should Just Work™ for the most part

Joshua Suskalo14:05:25

The other thing I'd like to see come from loom support is community libraries to replace the default agent/future thread pool with virtual threads.


idk how that should be packaged. You can do (set-agent-send-executor! (Executors/newVirtual....)) but it would feel strange for a library to do that


plus the virtual thread executors only make daemon threads, so their behavior would be slightly different


I'm also unsure how the difference between the normal and send-off executors should be handled


should they both be virtual threads or just one?

Joshua Suskalo14:05:22

I think they should both be virtual threads, but there's an argument to be made that send-off should be the only one of the two that's virtual.

Joshua Suskalo14:05:37

since send-off is about blocking IO and send is about computation
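A sketch of that split: `set-agent-send-off-executor!` is real clojure.core API; pairing it with a virtual-thread-per-task executor (JDK 21+) is the assumption here.

```clojure
(import '(java.util.concurrent Executors))

;; Route send-off (blocking I/O) through one-virtual-thread-per-task,
;; leaving send's bounded computational pool untouched.
(set-agent-send-off-executor!
  (Executors/newVirtualThreadPerTaskExecutor))

(def a (agent 0))
(send-off a inc)
(await a)
@a ; => 1
```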

Joshua Suskalo14:05:40

also just about that correctness point and locking being correct, but potentially just a performance problem: I'm not sure I agree. I'm fairly certain that in the presence of locking with virtual threads you could cause non-deterministic deadlocks that would not occur if all of the threads were "real" threads. I don't think that introducing deadlocks where there were none before is correct.

Joshua Suskalo14:05:27

For example, if I understand correctly, there is a fixed-size pool of OS threads used to back all the virtual threads, and it will not grow if the backing threads are all blocked. If you have a highly-contended resource and one virtual thread paused while it held the monitor, then other threads could all start attempting to lock on that monitor, consuming all the OS threads and causing deadlock.

Joshua Suskalo14:05:02

The only way this wouldn't happen is if loom is designed so that you can't interrupt a virtual thread if it's inside a critical section.

Joshua Suskalo14:05:24

I don't know if that's true.


We can set the thread pool size down to 1 or 2 and try to break it
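Shrinking the carrier pool for that experiment is just a couple of JVM properties (names as used by the JDK 19 preview / Loom EA builds, so they may change):

```
-Djdk.virtualThreadScheduler.parallelism=1
-Djdk.virtualThreadScheduler.maxPoolSize=1
-Djdk.tracePinnedThreads=full    (logs when a virtual thread pins its carrier)
```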


the restriction on monitors (locking in clojure, synchronized in java) means that when a virtual thread holds a monitor, it pins the carrier thread. same thing happens when there is a native frame on the stack


before Loom exits preview status, Java monitors will be rewritten in Java, rather than subtle hand-coded assembly


(however native calls will still pin the carrier thread)

Joshua Suskalo16:05:12

ah, that's good to know

Ben Sless18:05:44

I'm trying to get a REPL going with loom and even though the java process started with --enable-preview it didn't enable virtual threads. What did I miss?

Joshua Suskalo18:05:44

which jdk are you using for that?


> it didn't enable virtual threads
What do you mean?


What happens when you try to use (Thread/startVirtualThread #(println "Hello"))

Ben Sless18:05:49

I downloaded the jdk19 ea20

No matching method startVirtualThread found taking 1 args for class java.lang.Thread


It hasn’t been integrated into 19 yet


Just proposed


It = Loom


There are Loom-specific builds you can use

Ben Sless19:05:29

silly me. Thanks

Arjen van Elteren12:05:59

Is there any way I can add dependencies to a running clojure app? I.e. extend deps.edn and then run "some magic" in the application so that it will be picked up?

Linus Ericsson13:05:40

There is pomegranate. I'm not sure if it relies on Leiningen or would work with deps as well.


You can use this specific branch of tools.deps.alpha:


For more details, read "The :add-libs alias".


Note that all of the solutions in this space break the assumptions of various other tools and libraries, so they may not work reliably in all cases; see the linked articles for details.


Despite those articles, the general case of "add[ing] dependencies to a running clojure app" will work. I use the add-lib3 branch of t.d.a for this all the time and my workflow is to start a REPL, start the app in that REPL, and develop against the live, running app, adding dependencies as needed. I demonstrate that in my London Clojurians talk about REPL-Driven Development (it's on YouTube and it's looonng). You can't update an existing dependency, only add new ones. Also note: I do not use nREPL at all so I don't encounter the cascade of new classloader instances mentioned in that first article.
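For reference, that workflow looks roughly like this (assuming the add-lib3 branch of tools.deps.alpha is on the classpath; namespace and function name are as on that branch at the time, so check the current SHA):

```clojure
;; bring in the REPL helper from the add-lib3 branch of tools.deps.alpha
(require '[clojure.tools.deps.alpha.repl :refer [add-libs]])

;; add a *new* dependency to the running JVM -- existing deps can't be
;; upgraded in place, only additions work
(add-libs '{hiccup/hiccup {:mvn/version "2.0.0-alpha2"}})
```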


@U0K064KQV That's out of date -- referring to an old branch and the old name of the function. See for up-to-date SHA and function name.


Ah, good to know. What do you think about this advice:
> don't get/set ContextClassloader if you don't have to. If you need the classloader to be set in a certain scope then dynamically bind clojure.lang.Compiler/LOADER
From:


I have no opinion on it: I don't use nREPL or CIDER so that issue doesn't impact me at all 🙂


I don't think that recommendation is due to multiple dynamic class-loaders, but I'm not sure why it's made, to be honest -- maybe just that it's the cleaner way to do it.


> Clojure has two places where it looks for a classloader, Compiler/LOADER and the thread's context classloader. Setting the first is easy, setting the second requires some more care.
From:
Not sure I followed everything, but it seems it's due to the JDK9+ module system maybe?


I think there are a number of incorrect assumptions being made in that issue thread, based on subsequent comments from other contributors.


I read that article (it was linked above).


Oh I see, ya possible


It's true that JDK9 broke a number of tools that manipulated classloaders. They've either been fixed or abandoned. vemv talks about some of the changes in that issue thread where they switched away from broken/problematic tooling (or were planning to).


In any case, I do like the deps.edn watch feature of the library. I haven't tried it, but I think it's a cool idea to just watch the deps.edn.


I do not like file watchers that auto-(re)load stuff. I prefer my workflow to be explicit.


I have a hot-key bound in VS Code that calls add-libs (from the add-lib3 branch of t.d.a). My workflow is: edit deps.edn to add a new dependency, select the surrounding hash map, hit the hot-key. That way nothing is fruitlessly trying to do things just because I edited something in a file that might be completely unrelated to what the file watcher is doing.

Arjen van Elteren19:05:07

Thanks all! I just set up add-libs3 and adding a new dependency went very smoothly. (Also, autoload is not needed for me)


I prefer the watcher, because it messes me up when the files and the repl get out of sync


But I'm not currently using add-lib at all, I just restart the repl


dev=> (up-since)
#inst "2022-04-23T20:08:17.614-00:00"
That's my HoneySQL project REPL. My work REPL has been running for slightly less time:
user=> (dev/up-since)
#inst "2022-04-25T21:44:00.802-00:00"


(and that's working with 130k line monorepo across multiple branches with nearly two dozen services)


You never worry it only works in your REPL? because you forgot if you have some extra functions, or dependencies in them, or some old state or fn definition?


CI would catch that if it happened. But my workflow is to eval every change as I make it, often without even saving files.


If I remove stuff from a namespace, I'll generally unmap/unalias it too. Or a quick "save, hot-key to 'clean' a namespace, load-file" sequence to sync things up -- 'clean' does some unmapping/unaliasing but doesn't remove the namespace itself (since that can be the root cause of many of the reload/refresh workflow failures).


Fair enough, to be honest for me it's a bit more of a paranoid thing, similar to how I press Ctrl+s constantly even if I made no edits

Joshua Suskalo18:05:23

When working on a clojure webserver, is clauth the go-to for OAuth implementations? Or what's the current state of the art for identification and authentication?


I'd use buddy-auth but it's in maintenance mode

Joshua Suskalo14:05:14

I dunno why it being in maintenance mode would be a bad thing. Outside of the research area I feel like the basics of auth that buddy provides are pretty stable.

Patrick Brown18:05:03

I'm trying to process a CSV file before I transact it into Datomic. My code is error-free on individual rows, but when I go to process the whole file I get an error message that leads me to think I'm messing up the laziness. Any help would be a cool thing! I'm attaching my error, and the final function I call. CHEERS!

class clojure.lang.LazySeq cannot be cast to class clojure.lang.IPersistentMap (clojure.lang.LazySeq and clojure.lang.IPersistentMap are in unnamed module of loader 'app')

(defn transact-surface-data [well-id]
  (for [row (-> (doall (get-surface-for-well well-id))
                (process-row)
                (ready-row-for-transaction well-id))]
    (d/transact *conn* {:tx-data row})))


the rest of the stacktrace will tell you what function the error is in


but my guess would be process-row


the result of (doall (get-surface-for-well well-id)) is going to be a seq


process-row looks like the kind of thing that handles elements in a seq, not a seq itself


for is also not a loop


it is a lazy seq constructor


so using it to do side effects like inserting into a database is going to be a bad time

Patrick Brown19:05:40

Yeah, doall is definitely in the wrong place, and process-row does one at a time. What should I be doing in place of for, map?

Patrick Brown19:05:01

Yup, it looks like map is going to work. Thanks for the point in the right direction.


map is still the wrong thing to use for side-effects. Look at run! instead.

Patrick Brown19:05:32

Yup, looks like map isn't working as I wanted. BTW, I'm a sponsor of yours, Sean. Keep up all the good work. You're very appreciated!


Thank you for your sponsorship! Much appreciated! map and for are both lazy -- you don't want to be mixing laziness and side effects (so doall is often a "code smell").


doseq is the loop version of for


Yup, doseq or run! are reasonable (eager) ways to deal with side-effects.
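The laziness difference is easy to see with an atom; `seen` is just an illustrative name:

```clojure
(def seen (atom []))

;; map builds a lazy seq; since nothing realizes the result here,
;; the side effects never run when it's discarded
(map #(swap! seen conj %) [1 2 3])

;; run! is eager and returns nil, so the side effects always happen
(run! #(swap! seen conj %) [1 2 3])

@seen ; => [1 2 3]
```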

Patrick Brown19:05:41

I'm on the right track.

(defn transact-surface-data [well-id]
  (let [data (get-surface-for-well well-id)
        ready (doseq [x data]
                (ready-row-for-transaction "001" x))]
    (run! #(d/transact *conn* {:tx-data %}) ready)))

But all I get back is nil and nothing appears transacted. It took a long time and didn't blow up, so I might be close. Any pointer on what I'm doing wrong?


doseq produces nil


It's only for side-effects.


Why not do this instead:

(doseq [row data]
  (d/transact *conn* {:tx-data (ready-row-for-transaction "001" row)}))

Patrick Brown19:05:49

That sure looks simple. Well, that got me a different error, but one that makes it fairly obvious it's in my code. So I'm going to call this thing solved on your side. CHEERS! I'm getting a datomic schema error, so it's probably something simple further up-stream.


extra hint: I think you want to also use partition-all for CSV -> DB.


to add to the DB chunk by chunk instead of the whole very large file. Assuming you want to do a multi-insert in one request to the DB instead of sending a request for each row separately.
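A sketch of just the chunking (pure, so easy to check; `rows` is an illustrative name):

```clojure
;; batch 5000 rows into chunks of at most 2000, one transaction per chunk
(def rows (range 5000))
(def batches (partition-all 2000 rows))

(count batches)         ; => 3
(count (first batches)) ; => 2000
(count (last batches))  ; => 1000
;; then e.g.: (doseq [batch batches]
;;              (d/transact *conn* {:tx-data (vec batch)}))
```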


(dorun (map proc coll)) also works, though I guess run! is more idiomatic, is that right?


you can build something from this

(defn read-csv [file-path]
  (with-open [reader (io/reader file-path)]
    (doall (csv/read-csv reader))))  ; doall so the seq is realized before the reader closes

(doseq [candlestick-chunk (partition-all 2000 candlesticks)]
  (binance-db/create-candlesticks-1m candlestick-chunk))

(defn create-candlesticks-1m [candlesticks]
  (postgres/insert-multi! :binance_candlesticks_1m ...))

Patrick Brown01:05:51

Thanks to everyone! I took a nice walk in the park, came back and found I forgot to put a vector in front of my run. partition-all looks necessary, because yeah, not fast that way, but I've at least got a good start thanks to y'all! I'm just using doseq, it needs to get chunked to do anything non-trivial.


How do you guys contribute to libraries? I'd like to do it, but I want some opinions on how to stay organized and make things easier on myself. My initial thought is to do this:
1. Create a project
2. Fork the repo of the library I want to contribute to
3. Clone the repo in my project
4. Use deps.edn to point to the repo for the library


Roughly that, yeah. If it's a testable functionality and the library already has tests, then you can skip 1 and 3 and just add an extra test for whatever you're contributing. Alternatively, if what you're writing can be used from, say, a single file, I'd just add that file temporarily to the library's project - just to avoid creating a new project.


I have a folder (called workspace) where I clone repos from GitHub and in my own project that depends on such a library, I can switch between git deps and :local/root while I'm working on changes to that library (and back to git deps once I've made a PR).


But, yeah, what @U2FRKM4TW says: if the library is well-covered with tests, you just fork it and clone locally, then branch and work on the update, push and send a PR.


It depends whether I have a need of that new feature or bug fix in an existing project 🙂


Excellent, thank you guys!


Funny story: I'd submitted a PR to Stu Halloway's reflector project and somehow he hadn't noticed. My .clojure/deps.edn file used git deps to depend on that PR directly and I'd carried on using it for ages. At some point, he happened to read over my deps.edn because it had been mentioned several times on Slack and he saw my git dep -- and that's when he realized he'd missed the PR I sent and he went off and merged it 🙂 He told me that in the bar one night at the last Conj we had!

🎉 2
😄 4

The PR had been there from January until he merged it in November:


Lol, nice!


In theory git has had request-pull/`send-email` forever, which is a more decentralized way to contribute (at least compared to forks and PRs) but GitHub doesn't really support it (forks and PRs are definitely a step up in terms of UX for most users but are clearly proprietary - you can't create forks or raise PRs outside GitHub). I think GitLab (kind of) supports send-email - not sure, never tried it - and so do smaller forges like sourcehut and gitea, I think. Call me crazy but Embrace, Extend, Extinguish.


@UEQPKG7HQ forks and PRs can be done from the command-line with git with any repos -- not sure what you mean about them being proprietary?


How can you create a PR with git CLI without using any GitHub tools? Actually, same question about forks. I'm pretty sure neither "PR" nor "fork" (in terms of whole repositories) are coming from Git. Rather, they're coming from GitHub or some of the earlier platforms. Git itself doesn't have PRs - it has patches spread via email. It also doesn't have forks - it has repositories. And they aren't inherently linked as far as I'm aware, as opposed to forks.

☝️ 1

Sending patches via email seems a bit odd, but that's out of the scope of what I was asking.


> forks and PRs can be done from the command-line with git with any repos
I'm pretty sure that both are features of github, the service, that aren't supported by git, the tool. Maybe I'm missing something though!
> Sending patches via email seems a bit odd
Technically speaking it's the way it was intended to work. Git was created to assist the development of the Linux kernel and, if I'm not mistaken, contributions have been made exclusively with send-email until now.
> that's out of the scope of what I was asking.
Yes, you're right, it's out of scope, I just wanted to vent a bit. Sorry for hijacking the thread 🙂


"forking" is just cloning a repo from one place and pushing it up to another -- doable from the command-line.


> "forking" is just cloning a repo from one place and pushing it up to another Makes sense


GitHub's PR UI (and their command-line gh tool) just automate the part that coordinates the equivalent of git request-pull across the two repos so the target owner can review it. There's nothing really magic or proprietary about it, IMO. Is it a nicer UX than dealing with email? Yeah, I guess, for some folks. Other folks prefer a patch file, as we know 🙂


I don't even wanna think about figuring out how to make a workflow with patch files.

Joshua Suskalo22:05:13

way easier than patch files is working with git send-email

👍 1