
I may be missing something, but the mechanism atoms use for safe concurrent access in Clojure seems to perform quite poorly when (a) there is high contention due to many concurrent swap! calls from multiple threads, and (b) those swap! calls return new values, i.e. they do not return the identical object their update function was given. In such cases, a different mechanism that used a lock around the "read, call update function, write new value back" sequence would avoid any caller ever having to call its update function multiple times. The current implementation seems like it must be an explicit design decision, so I was wondering if anyone knew in what situations the current implementation is preferable? e.g. is it higher performance in the low/no-contention case, perhaps?
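For context, the retry behavior being described can be sketched in terms of the public compare-and-set! function. This is an illustrative sketch of the CAS loop, not the actual (Java) implementation of clojure.core/swap!; the name swap-sketch! is made up:

```clojure
;; A sketch of the retry loop behind swap!, written with compare-and-set!.
;; Under contention, f may be called more than once.
(defn swap-sketch! [a f & args]
  (loop []
    (let [oldval @a
          newval (apply f oldval args)]
      (if (compare-and-set! a oldval newval)
        newval        ; CAS succeeded: our update won
        (recur)))))   ; another thread changed a first: recompute from scratch

(def counter (atom 0))
(swap-sketch! counter inc)
;; => 1
```

The cost under contention is exactly the repeated calls to f; the benefit in the uncontended case is that no lock is ever taken.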


Aside: I know agents can handle high contention situations better than atoms, too, but they have an asynchronous API, not a synchronous one, so aren't a drop-in replacement.


I think there is an assumption that a) the function called during swap! is fast and b) contention is usually low


In the uncontended case, swap! should be very fast (CAS on the JVM is, I believe, intrinsified down to a hardware instruction in some/most cases)


I’m not sure what you mean by your b) - atoms rely on operations on immutable values, so a successful op should nearly always return a nonidentical object


A swap! function can return the same value, i.e. no change. I doubt that is a common kind of swap! operation that people would choose to perform, but if it were done often, it would never cause other threads to re-execute their update functions.
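To illustrate the point: an update function can legitimately return the identical object it was given, in which case the successful CAS leaves the atom's contents unchanged and never invalidates other threads' in-flight attempts. A contrived example:

```clojure
;; An update function that returns the identical value when no change
;; is needed.  swap! returns the new value, which here is the same map.
(def a (atom {:x 1}))

(identical? @a
            (swap! a (fn [m] (if (:x m) m (assoc m :x 1)))))
;; => true  -- m already has :x, so the very same map object is returned
```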


Locking for swap! would mean atoms won't comply with the epochal time model, because that way perception would impede action.


Locking need not impede observation, though, right? It only impedes others starting their update function executions.


I'm assuming here a scenario where the atom always contains immutable values. The locking would not be for allowing them to contain mutable values -- it would be to prevent repeated update function execution by contending threads.


There's an assumption of the contained value being an actual value (immutable) in the first place


Atoms state this as a precondition


Because swap! can retry, the value must be immutable (so optimistic changes can be discarded)
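A small demonstration of why retries are harmless when values are immutable and update functions are pure: every logical update lands exactly once, even though the update function may run more than once. The calls atom here is hypothetical instrumentation added for the example, not anything in core:

```clojure
;; Under contention, swap! may invoke the update fn more than once,
;; but with immutable values each discarded attempt has no effect.
(def calls (atom 0))  ; counts update-fn invocations, including retries
(def n     (atom 0))

(defn counted-inc [x]
  (swap! calls inc)   ; side effect purely for instrumentation
  (inc x))

(dorun (pmap (fn [_] (swap! n counted-inc)) (range 100)))

@n      ;; => 100, always: each increment is applied exactly once
@calls  ;; >= 100: any excess over 100 is the number of retries
```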


Locking would enable atoms to contain mutable values, and have mutating update functions, but that would definitely break the ability to observe the atom's value at arbitrary times.


I've always wondered how the problem of starvation is handled with atoms. By starvation, I mean scenarios where a slow swapping fn keeps getting retried forever, while faster fns succeed. Is the answer just "make sure all your swapping fns take more or less the same amount of time"?


Do Java methods marked synchronized guarantee no starvation under contention?


At least a few quick Google search results on terms like "java synchronization starvation" suggest that Java makes no guarantees that a thread calling a synchronized method will eventually be granted the lock. If that is how a JVM implements synchronized, then starvation is possible from calling a highly contended synchronized method.


Java synchronization does not prevent starvation


Biased/fair locking defaults have changed back and forth over time as well, so particular JDK versions differ in this


Java Locks do have options for fairness - you pay a lot in raw latency for that feature


There are no guarantees in the implementation of atoms that a thread will not be starved, that I am aware of. Alex's assumption mentioned above of usually low contention should enable all threads to finish, if "usually low contention" is interpreted as "there are often periods, at least as long as the slowest thread's update function, during which no one else tries to change the atom's contents."


A lock-based implementation of atoms could guarantee no starvation, if the locking mechanism used guaranteed no starvation (e.g. it implemented FIFO queues of waiting threads in its implementation during contention, as opposed to something like spin locks)
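Such a lock-based alternative could be sketched with a fair ReentrantLock. This is a hypothetical construct (the names locking-atom and locking-swap! are invented for illustration), not anything Clojure provides; note that reads never touch the lock, so observation is still unimpeded:

```clojure
;; A hypothetical lock-based "atom": the update fn runs exactly once per
;; swap, and fair (FIFO) lock acquisition prevents starvation, at the cost
;; of fully serializing writers and higher uncontended latency.
(import '(java.util.concurrent.locks ReentrantLock))

(defn locking-atom [init]
  {:lock  (ReentrantLock. true)  ; true => fair, FIFO queue of waiters
   :state (atom init)})          ; atom used here only as a mutable cell

(defn locking-swap! [{:keys [^ReentrantLock lock state]} f & args]
  (.lock lock)
  (try
    (reset! state (apply f @state args))  ; f is called exactly once
    (finally (.unlock lock))))

(def la (locking-atom 0))
(locking-swap! la inc)
;; => 1
;; Reads (@(:state la)) never block on the lock.
```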


If you want to use locking and mutation, then that’s possible now, it’s just not what atoms are


If atoms are a bad fit, then don’t use atoms


Understood. I'm just trying to clarify in my head (and perhaps in an article) when it is a good/bad fit, and why, and realized that an imagined alternate implementation of atoms with locking would be less prone to repeated update function calculation than the current one. Not suggesting a change in Clojure, by the way -- asking because I wanted to ensure I wasn't missing something subtle.


was the "pod" data structure intended to solve this problem: something like an atom but with transients (or other mutable things)?


Atom swap retries are in my experience exceedingly rare


Even STM retries are rare when I’ve used them in anger, unless creating benchmarks that deliberately provoke retries


Atoms are most commonly used with maps - it’s possible to choose granularity by using a map holding multiple atoms. Cgrand explored this in a thing called megaref long ago
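The map-of-atoms granularity trick might look like this sketch (the accounts map is a made-up example). Writers to different keys never contend with each other, but, as discussed below, there is no atomic snapshot across keys:

```clojure
;; Choosing granularity: one atom per key instead of one atom holding a map.
(def accounts {:a (atom 100) :b (atom 100)})

(swap! (:a accounts) - 30)  ; touches only :a; no contention with :b writers
(swap! (:b accounts) + 30)

;; A "snapshot" is only per-key: each deref sees its atom at a possibly
;; different point in time, so the totals are not guaranteed consistent.
(into {} (for [[k a] accounts] [k @a]))
;; => {:a 70, :b 130}  (consistent here only because writes have quiesced)
```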


Re pods, don’t know, predates me using Clojure :)


From a discussion in beginners... are there any known problems running Clojure on a J9 JVM rather than a hotspot JVM?


Alex has confirmed that some test failures occurred trying to run Clojure's test suite against J9 in the past.


I think they’ve all been worked through


Hmm, well, the beginner was getting weird, somewhat random failures trying to run lein repl on J9 and they all went away when he switched to Hotspot so... ¯\_(ツ)_/¯


Discussion starts here and then goes into a long thread ("I got the Java 11...") where the J9 issue was mentioned.

Aaron Cummings 22:02:39

I ran into some weird performance issues around java.nio byte buffers on J9 which just went away after switching to Hotspot. This was probably 4 or 5 years ago, so might not still be true.


I had a problem on j9 about 6 months ago on a cljc/cljs/clj project. I never dug in deep enough, but switching to HS made everything gravy.


The "map holding multiple atoms" approach means giving up the ability to observe a consistent snapshot of the entire state, true? If that is an explicit part of the tradeoff of the megaref approach, versus single atom, then understood.


A “mega atom” behaves like an atom but it allows for more concurrency by offering swap-in! (which has the semantics of swap! update-in)


so what's the trade-off between a "mega atom" and STM/dosync?


Ok, my memory was fuzzy and I mixed up megaref and megaatom. So megarefs participate in the STM (through -in variants of the STM verbs). The path space is partitioned and each partition is guarded by a regular ref, so a megaref holding a big map is less concurrent than one ref per key but more concurrent than a single ref (and the partition count can be fine-tuned). However, you can get a consistent snapshot of a megaref without a transaction (which is good in write-heavy scenarios).


interesting, thanks


@andy.fingerhut my memory of the thing was fuzzy. The thing of interest is megaref, not megaatom. A megaref is a hybrid between an atom and an STM ref. It snapshots like an atom but participates in STM transactions at the path level (it’s more concurrent than a simple ref holding a map – its level of concurrency is tunable).