This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2024-08-05
Channels
- # announcements (7)
- # babashka (9)
- # beginners (47)
- # calva (28)
- # clj-kondo (17)
- # clj-otel (20)
- # clojure (193)
- # clojure-brasil (1)
- # clojure-europe (43)
- # clojure-norway (12)
- # clojure-uk (6)
- # clojurescript (18)
- # datalevin (15)
- # figwheel-main (3)
- # honeysql (3)
- # hyperfiddle (44)
- # introduce-yourself (2)
- # java (10)
- # lsp (19)
- # malli (9)
- # meander (4)
- # off-topic (14)
- # polylith (48)
- # re-frame (21)
- # releases (3)
- # shadow-cljs (6)
- # tools-deps (29)
- # yamlscript (3)
Hi, just got stuck with an exception - Execution error (ClassNotFoundException) at
jakarta.servlet.AsyncContext
when trying to load ring.adapter.jetty
please, I need your help
Those errors usually come from jars missing on the classpath, sometimes because of dependency conflicts. How do you start your application?
clj -M -m app.core
when you do a clj -Stree
or clj -Spath
you should see org.eclipse.jetty.toolchain/jetty-jakarta-servlet-api. That AsyncContext.class is coming from org/eclipse/jetty/toolchain/jetty-jakarta-servlet-api/5.0.2/jetty-jakarta-servlet-api-5.0.2.jar
what ring dependencies and versions are you including in your deps.edn?
@U0739PUFQ thanks! after adding org.eclipse.jetty.toolchain/jetty-jakarta-servlet-api 5.0.2 the exception disappeared, but now I get Execution error (ClassNotFoundException) at jdk.internal.loader.BuiltinClassLoader/loadClass (BuiltinClassLoader.java:641). org.eclipse.jetty.util.Attributes
I don't think you should be adding those by hand. `ring/ring-jetty-adapter 1.12.2` for example will drag org.eclipse.jetty.websocket/websocket-jetty-server 11.0.21
which will drag org.eclipse.jetty.toolchain/jetty-jakarta-servlet-api 5.0.2
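(For reference, a minimal deps.edn along those lines declares only the adapter and lets it pull in the servlet API transitively; the Clojure version here is just an illustration:)
{:deps {org.clojure/clojure     {:mvn/version "1.11.3"}
        ring/ring-jetty-adapter {:mvn/version "1.12.2"}}}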
can you share your ring-related deps.edn stuff?
i already have ring/ring-jetty-adapter {:mvn/version "1.11.0"} in my deps
can you paste the output of clj -Stree
?
or maybe better of clj -Spath
can you also try to run with -Sforce
?
there is something weird there
clj -Sforce just brings me to a repl
it worked before my recent system reinstall
I mean, add -Sforce
to your command
clj -Sforce -M -m app.core
the same result Execution error (ClassNotFoundException) at
jakarta.servlet.AsyncContext
what I find weird is that in the tree you pasted, you have:
...
ring/ring-jetty-adapter 1.11.0
X ring/ring-core 1.11.0 :use-top
. org.ring-clojure/ring-jakarta-servlet 1.11.0
X ring/ring-core 1.11.0 :use-top
. org.eclipse.jetty/jetty-server 11.0.18
. org.eclipse.jetty.websocket/websocket-jetty-server 11.0.18
...
and it doesn't show the rest of the dependencies for websocket-jetty-server which should be there https://central.sonatype.com/artifact/org.eclipse.jetty.websocket/websocket-jetty-server/11.0.18
I guess it should work if you replace ring/ring-jetty-adapter
and ring/ring-core
in your deps.edn by just ring/ring
let me try it locally
what is the output of this in your box clj -Srepro -Stree -Sdeps '{:deps {ring/ring-jetty-adapter {:mvn/version "1.11.0"}}}'
?
does it include the org.eclipse.jetty.toolchain/jetty-jakarta-servlet-api 5.0.2
dep? it does in my box
also, what is your cli version? clj --version
clj -Srepro -Stree -Sdeps '{:deps {ring/ring-jetty-adapter {:mvn/version "1.11.0"}}}' -- https://pastes.dev/CUTDk0t0Yz
clj --version Clojure CLI version 1.11.1.1413
can you run that command in an empty folder? because it is grabbing your project's deps.edn
outside of project
oh, that is very weird, can you update to the latest Clojure CLI?
looks like a bug in the old tools.deps to me
why do you think it looks weird?
because this is the same command on my box :
$ clj -Sforce -Srepro -Stree -Sdeps '{:deps {ring/ring-jetty-adapter {:mvn/version "1.11.0"}}}'
org.clojure/clojure 1.11.3
. org.clojure/spec.alpha 0.3.218
. org.clojure/core.specs.alpha 0.2.62
ring/ring-jetty-adapter 1.11.0
. ring/ring-core 1.11.0
. org.ring-clojure/ring-websocket-protocols 1.11.0
. ring/ring-codec 1.2.0
. commons-io/commons-io 2.15.0
. org.apache.commons/commons-fileupload2-core 2.0.0-M1
X commons-io/commons-io 2.13.0 :older-version
. crypto-random/crypto-random 1.2.1
. commons-codec/commons-codec 1.15
. crypto-equality/crypto-equality 1.0.1
. org.ring-clojure/ring-jakarta-servlet 1.11.0
. ring/ring-core 1.11.0
. org.eclipse.jetty/jetty-server 11.0.18
. org.eclipse.jetty.toolchain/jetty-jakarta-servlet-api 5.0.2
. org.eclipse.jetty/jetty-http 11.0.18
. org.eclipse.jetty/jetty-util 11.0.18
. org.slf4j/slf4j-api 2.0.5
. org.eclipse.jetty/jetty-io 11.0.18
. org.slf4j/slf4j-api 2.0.5
. org.eclipse.jetty/jetty-io 11.0.18
. org.slf4j/slf4j-api 2.0.5
. org.eclipse.jetty/jetty-util 11.0.18
. org.slf4j/slf4j-api 2.0.5
. org.eclipse.jetty.websocket/websocket-jetty-server 11.0.18
. org.eclipse.jetty.websocket/websocket-jetty-api 11.0.18
. org.eclipse.jetty.websocket/websocket-jetty-common 11.0.18
. org.eclipse.jetty.websocket/websocket-jetty-api 11.0.18
. org.eclipse.jetty.websocket/websocket-core-common 11.0.18
. org.eclipse.jetty/jetty-io 11.0.18
. org.eclipse.jetty/jetty-http 11.0.18
. org.slf4j/slf4j-api 2.0.5
. org.eclipse.jetty.websocket/websocket-servlet 11.0.18
. org.eclipse.jetty.websocket/websocket-core-server 11.0.18
. org.eclipse.jetty.websocket/websocket-core-common 11.0.18
. org.eclipse.jetty/jetty-server 11.0.18
. org.eclipse.jetty/jetty-servlet 11.0.18
. org.slf4j/slf4j-api 2.0.5
. org.eclipse.jetty.toolchain/jetty-jakarta-servlet-api 5.0.2
. org.eclipse.jetty/jetty-servlet 11.0.18
. org.eclipse.jetty/jetty-security 11.0.18
. org.eclipse.jetty/jetty-server 11.0.18
. org.slf4j/slf4j-api 2.0.5
. org.slf4j/slf4j-api 2.0.5
. org.eclipse.jetty/jetty-webapp 11.0.18
. org.eclipse.jetty/jetty-servlet 11.0.18
. org.eclipse.jetty/jetty-xml 11.0.18
. org.eclipse.jetty/jetty-util 11.0.18
. org.slf4j/slf4j-api 2.0.5
. org.slf4j/slf4j-api 2.0.5
. org.slf4j/slf4j-api 2.0.5
hm interesting
which clojure version do you have?
I'm running with Clojure CLI version 1.11.3.1463
my clj --version Clojure CLI version 1.11.1.1413
try to upgrade to the latest one, that will also bring new versions of tools.deps
can I just download a jar file for the cli?
download the new cli
that will download everything you need
I couldn't find the download link
what OS are you using?
that should give you the install shell script
that doesn't work in the case of Guix
but I will try it that other way
why not? it is a shell script, and you can run it with :
sudo ./linux-install.sh --prefix /opt/infrastructure/clojure
if you want to install it on a different place
thanks, that works! now the output of clj -Sforce -Srepro -Stree -Sdeps '{:deps {ring/ring-jetty-adapter {:mvn/version "1.11.0"}}}' https://pastes.dev/N7f4nqemq7
Huh, thanks a lot man ) 👍
hey, you are welcome! that was kind of a hard one
Hi, can anyone help me understand this Clojure 1.12 specific detail?
;; Clojure 1.12.0-rc1
foo.core=> (fn? Integer/parseInt)
true
foo.core=> (source fn?)
(defn fn?
"Returns true if x implements Fn, i.e. is an object created via fn."
{:added "1.0"
:static true}
[x] (instance? clojure.lang.Fn x))
nil
foo.core=> (prn Integer/parseInt)
#object[foo.main$eval2781$invoke__Integer_parseInt__2783 0x2125726b "foo.core$eval2781$invoke__Integer_parseInt__2783@2125726b"]
nil
The java.lang.Integer
class does not implement the clojure.lang.Fn
interface, but the source for fn?
only checks for an instance of clojure.lang.Fn
, then how does the fn?
check return true?
user=> (supers (class Integer/parseInt))
#{java.io.Serializable clojure.lang.IMeta clojure.lang.AFunction java.util.concurrent.Callable clojure.lang.IObj java.lang.Runnable clojure.lang.Fn clojure.lang.AFn java.util.Comparator java.lang.Object clojure.lang.IFn}
Whoa! Thank you, @U06PNK4HG
Clojure compiler in 1.12 automatically emits a lambda when it encounters a method reference.
You can see it with clj-java-decompiler:
=> (decompile Integer/parseInt)
...
// Decompiling class: user$foo$invoke__Integer_parseInt__7056
import clojure.lang.*;
public final class user$foo$invoke__Integer_parseInt__7056 extends AFunction
{
@Override
public Object invoke(final Object arg1, final Object arg2, final Object arg3, final Object arg4) {
final CharSequence s = (CharSequence)arg1;
final int intCast = RT.intCast(arg2);
final int intCast2 = RT.intCast(arg3);
final int intCast3 = RT.intCast(arg4);
this = null;
return Integer.parseInt(s, intCast, intCast2, intCast3);
}
@Override
public Object invoke(final Object arg1, final Object arg2) {
final String s = (String)arg1;
final int intCast = RT.intCast(arg2);
this = null;
return Integer.parseInt(s, intCast);
}
@Override
public Object invoke(final Object arg1) {
final String s = (String)arg1;
this = null;
return Integer.parseInt(s);
}
}
nil
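(In other words, in 1.12 a qualified method used as a value compiles to an instance of a generated AFunction subclass, which is why it satisfies fn? and can be passed straight to higher-order functions. A small illustration, assuming a 1.12 REPL:)
(map Integer/parseInt ["1" "2" "3"])
;;=> (1 2 3)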
Hello hello 👋 I am about to start a backend REST Clojure service. What's the current latest and greatest Clojure stack (libraries) while maintaining simplicity (no Polylith)? Any feedback would be very helpful
example repositories would be really helpful as well
> simplicity
Probably still the same - just glue it by hand from small components. Or take a look at Pedestal. I haven't used it myself though.
Hi @UHJAPK5D0 in a recent side project I used the following:
• Ring for the API
• Hiccup for HTML
• Htmx extensions for better UX (i.e. to send chunks of HTML rather than the whole page)
• JDBC for SQL interaction
• Reitit for routing
• I recommend lein for compiling and building, because it is very easy to use. deps.edn is the preference nowadays, however it needs more time to learn
I'd say: Ring, ring-jetty (or http-kit), Reitit (or Compojure), jdbc.next, honey.sql, hiccup2 (or Selmer), htmx is nice for simple interactivity in the UI. With tools.deps and tools.build instead of lein.
Oh, you said rest service. Then definitely use Reitit for your routes, because it has nice support for swagger and open API
I used Reitit ring for a REST API setup. It has support for swagger and open API, as stated above. Personally I used duct to organize my code, but it is complicated. Check out biff — you might be able to restructure that project to serve a REST API and it is a "batteries-included" web framework.
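(For illustration, a minimal Reitit + ring-jetty REST endpoint might look like the following sketch; the namespace, route, and port are made up for the example:)
(ns example.api
  (:require [reitit.ring :as ring]
            [ring.adapter.jetty :as jetty]))

(def app
  (ring/ring-handler
    (ring/router
      [["/ping" {:get (fn [_] {:status 200
                               :headers {"Content-Type" "text/plain"}
                               :body "pong"})}]])
    (ring/create-default-handler)))

(defn -main [& _]
  ;; :join? false keeps the REPL free while the server runs
  (jetty/run-jetty app {:port 3000 :join? false}))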
I think for "reloaded" workflow and stateful resources management use either Component, donut.system, Integrant or Redelay. I personally prefer Redelay, but the choice is really just personal here, they all work fine.
I wasn't aware of Redelay. Its API looks nice!
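(For illustration, a tiny Integrant setup, one of the options mentioned above, might look like this sketch; the key names and jetty handler are assumptions for the example:)
(require '[integrant.core :as ig]
         '[ring.adapter.jetty :as jetty])

(def config
  {:app/handler {}
   :app/server  {:port 3000 :handler (ig/ref :app/handler)}})

(defmethod ig/init-key :app/handler [_ _]
  (fn [_] {:status 200 :body "ok"}))

(defmethod ig/init-key :app/server [_ {:keys [port handler]}]
  (jetty/run-jetty handler {:port port :join? false}))

(defmethod ig/halt-key! :app/server [_ server]
  (.stop server))

(comment
  (def system (ig/init config))
  (ig/halt! system))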
Hey, anyone knows how to make inner classes via insn
? I am generating Java classes and I want to make enums as inner classes. For example this:
public class AnimationNode {
public enum FilterAction {
FILTER_IGNORE = 0,
FILTER_PASS = 1,
FILTER_STOP = 2,
FILTER_BLEND = 3;
}
// ...
}
If enums are too much trouble to make, I could settle with final static int fields.
One trick is to write a Java class that has the shape you want and use a decompiler to see what the instructions should be.
Oh yeah, I was searching for some information and apparently inner classes are just regular classes with some metadata.
If there are no options, I will probably try decompiling class files.
With tools.namespace.dependency
is there a way to add a node to the graph when there aren't any deps?
I am using topo-sort
to get a sorted list of namespaces, but some namespaces don't depend on anything else, so I'm missing those
we look for namespaces not connected here: https://github.com/metabase/metabase/blob/master/bin/build/src/build/uberjar.clj#L82
cool. I was trying to store all the stuff in the graph, but it seems that's not a possibility. thanks for confirming
or actually I'll just make a dependency on nil and I'll filter the nils out later when topo-sorted
or actually, I've used a special keyword like this: (or (seq deps) [::orphan])
and then I filter the ::orphan
keyword, seems a bit safer than nil
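(A rough sketch of that ::orphan trick, assuming a hypothetical ns->deps map of namespace symbol to the set of namespaces it depends on:)
(require '[clojure.tools.namespace.dependency :as dep])

(defn sorted-namespaces [ns->deps]
  (let [graph (reduce (fn [g [n deps]]
                        ;; namespaces with no deps get a dependency on ::orphan
                        ;; so they still show up as nodes in the graph
                        (reduce #(dep/depend %1 n %2)
                                g
                                (or (seq deps) [::orphan])))
                      (dep/graph)
                      ns->deps)]
    (remove #{::orphan} (dep/topo-sort graph))))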
Hi All, I'm trying to reconcile core.cache behavior between what the docstring/code both suggest, and what I observe actually happening. I am referring to this line: https://github.com/clojure/core.cache/blob/99fbd669a429bc800ff32b64b21cf82b6b486f06/src/main/clojure/clojure/core/cache/wrapped.clj#L43
value-fn (and wrap-fn) will only be called (at most) once even in the
case of retries, so there is no risk of cache stampede.
more details in thread
It seems quite trivial to get the value-fn to be called more than once when the cache is under contention. I've emulated this behavior in the repl, as follows
(require '[clojure.core.cache.wrapped :as cache])
=> nil
(def c (cache/fifo-cache-factory {}))
=> #'user/c
(pmap (fn [_] (cache/lookup-or-miss c :foo (fn [k] (println "miss") k))) (range 10))
miss
miss
miss
miss
miss
miss
miss
miss
=> (:foo :foo :foo :foo :foo :foo :foo :foo :foo :foo)
in this case, we see that the miss function was invoked 8 out of the 10 times when there were threads competing. I see this same behavior in the real world, too.
what I don't fully get is why, when the swap! protects the call to c/through-cache… I'm guessing it has to do with how swap! resolves conflicts. But if so, doesn't that mean the docstring is wrong?
Can you put this up on http://ask.clojure.org with a repro? That'll help get it into Jira to be worked on (by me or @U050WRF8X)
@U04V70XH6 sure thing, ty
Is it specific to fifo-cache-factory
or can you repro against other cache types like TTL?
That's useful info. Thanks. Make sure that's in the "ask".
https://ask.clojure.org/index.php/14037/clojure-behavior-lookup-entrancy-misunderstanding-docstring
I think this has to do with the default wrapping function
if you don't provide one, the default wrapping function calls your value-fn, so I think it is always going to be called
I have never been clear on what the wrapping function's purpose is, or strategies to consider for providing one
(please add any insights to the ask so it doesn't get lost in Slack history!)
this doesn't generate the same behavior for me :
(pmap (fn [_] (cache/lookup-or-miss c :foo (fn [vf e] e) (fn [k] (println "miss") k))) (range 10))
@U13AR6ME1 I've just realized your example calls lookup-or-miss
ten times so your value-fn is called at most once in each of those ten times. In fact it is called zero times in two of those cases (and once in all the other eight cases).
For an individual lookup-or-miss
call, the value-fn will either be called once or not called at all. So this seems to be the expected behavior.
So it seems like the docstring needs that clarification added?
Any way that we can make the docstring more clear has a thumbs up from me.
I've added a qualifying clause directly above the line @U13AR6ME1 originally linked to.
It certainly would be nice if the guarantee could be made across multiple concurrent invocations of lookup-or-miss
but I don't believe that will be possible without rethinking through-cache
somewhat?
the wrapping function thing is also not clear to me (which was the thing that confused me)
clj -Sdeps '{:deps {org.clojure/core.cache {:mvn/version "1.1.234"}}}'
Clojure 1.11.3
user=> (require '[clojure.core.cache.wrapped :as cache])
user=> (def c (cache/fifo-cache-factory {}))
user=> (pmap (fn [_] (cache/lookup-or-miss c :foo (fn [vf e] (println "wrapping") e) (fn [k] (println "miss") k))) (range 10))
wrapping
wrapping
wrapping
wrapping
(:foo :foo :foo :foo :foo :foo :foo :foo :foo :foo)
so if you provide a wrapping function it can decide if it calls the value function or not? What is it for?
> I don't believe that will be possible without rethinking through-cache somewhat?
I guess this is part of my question: In looking at the code, I would have expected it to work since it's deciding to call the value-fn based on has? within a swap! context. Shouldn't that have effectively been the gate that made it “at most once”?
I'm referring to https://github.com/clojure/core.cache/blob/99fbd669a429bc800ff32b64b21cf82b6b486f06/src/main/clojure/clojure/core/cache.clj#L63
It seems to me that the first one in should have hit the has? -> miss path, and the others would follow has? -> hit
I suspect I am maybe seeing some kind of dynamic of how swap! resolves conflicts, but it's confusing me
What is the value of c
in your original code?
Apologies, I am on my phone and not in front of a computer. Is it an atom?
@U050WRF8X IIUC what you are asking, it's an instance of the cache from the wrapped ns, which I believe is atom-based, but I didn't look at the internals
Yes, the wrapped
ns provides the "convenience" of wrapping immutable caches in an atom for each one -- and it provides the lookup-or-miss
function with stronger guarantees than the basic API (but clearly not quite as strong as some people would like).
(and, again, please have the salient parts of this discussion on the Ask post so the nuances here are not lost to Slack history if we lose our Pro sponsorship!)
Even as a maintainer of core.cache
, I don't really understand the wrap-fn / value-fn thing -- I just inherited that from @U050WRF8X 😉
it's been a bit since I've been in c.cache, but my memory is that the line between “API to implement a cache” and “consumer API of a cache” is quite blurry.
Yeah, the confusing part of that doc string is "no risk of cache stampede." I happened to be looking at this this weekend and saw that it probably doesn't prevent the stampede that people are thinking about (multiple threads hitting the same key).
It looks like it maybe is trying to do what core.memoize does, but the way core.memoize makes it "work" is by putting the delay in the cache.
I say "work" because I'm pretty sure there's also a subtle-and-very-hard-to-reproduce timing error in core.memoize, but I don't have time to dedicate toward it right now. I took another look at this, and there's no bug here.
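(For anyone following along, the "delay in the cache" idea is roughly the following; this is an illustration against a plain atom-wrapped map, not core.memoize's actual code:)
(defn lookup-or-miss-once
  "Sketch: swap! retries only create cheap delays; whichever delay wins the
   swap! is the only one anyone derefs, so value-fn runs at most once per key."
  [cache-atom k value-fn]
  (let [d (delay (value-fn k))]
    @(-> (swap! cache-atom (fn [m] (if (contains? m k) m (assoc m k d))))
         (get k))))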
I'll take a look at the Ask ASAP and see if I can derive a problem statement. Recommendations welcomed.
@U050WRF8X I’ll see if I can formulate it in that manner
I did learn a bit more, and I do think the current problem I am having is indeed related to the way swap! works. I also found a solution that works (for me). I’ll provide some more details
Heres the problem statement I posted to the Ask thread:
# Problem Statement
I need to implement cache semantics in front of certain IO operations. These IO operations are not mutating, but they are relatively expensive compared to the overhead of cache lookup.
An example would be the OpenID JSON Web Key Set (JWKS) protocol: Parallel HTTP requests coming into a cluster of Clojure HTTP servers may carry a JSON Web Token (JWT), which needs to be verified as signed by a trusted issuer. The JWKS protocol provides a mechanism to retrieve public keys from the trusted issuer for this verification. The ratio of requests to keys can be billions to 1; thus, it is highly conducive to caching in some form.
I have built this caching to date using clojure.cache.wrapped where the cache essentially holds promises to JWKS responses, keyed by JWT key-id. The basic idea is that when a new JWT key-id is encountered, the cache misses and starts an HTTP request to the IDP to retrieve the key, caching a promise. The overhead to kick off the request is lightweight, even if the round-trip response may take a few milliseconds.
Clients of the cache may deref this promise in whatever way makes sense. Long resolved keys deref immediately. Keys in the middle of being resolved may have multiple clients waiting on the promise.
The problem I currently have is related to inefficiency caused by the clojure.cache internal use of (swap!) when the cache is highly contended. In the above scenario, a given key-id typically expires simultaneously. Thus, API clients tend to refresh JWTs around the same time and present a new yet-to-be-cached key to the system.
The net result is a stampede from multiple threads for the same key, thus putting substantial pressure on a specific entry in the cache. Multiple value-fns are called to kick off a JWKS operation for the same key ID, even though the response is stable and we only need one. The system still behaves correctly with the current behavior, but it is less efficient because we issue multiple redundant IO operations to get the same key ID.
One could argue that this IO is "side-effecting" and is thus incompatible with (swap!). While I agree that mutating side effects would certainly be a problem, these "read-only" side effects are not incompatible with the general notion of caching, so it would be ideal if the caching library could support them by eliminating the stampeding re-entrance to the value-fn for the same key.
I have been able to work around the current behavior by wrapping the lookup-or-miss with a construct such as:
(require '[clojure.core.cache.wrapped :as cache])
(defn lookup-or-miss
[cache-atom e value-fn]
(or (cache/lookup cache-atom e)
(locking cache-atom
(cache/lookup-or-miss cache-atom e value-fn))))
However, perhaps it makes sense to consider a contribution to the upstream library, assuming others see value in solving the stated problem.
Here are a few thoughts on this front:
1) Introducing locking either in the current wrapped implementation or as an option/alternative for those who need stronger guarantees.
2) Re-implement the clojure.cache.wrapped internals to something that supports at-most-once semantics by getting away from (swap!) in favor of something like (volatile!) + (locking).
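(A rough sketch of option 2, using the clojure.core.cache CacheProtocol functions; the names here are hypothetical and expiry bookkeeping is omitted:)
(require '[clojure.core.cache :as c])

(defn lookup-or-miss*
  [cache-vol e value-fn]
  (locking cache-vol
    (if (c/has? @cache-vol e)
      (c/lookup (vswap! cache-vol c/hit e) e)
      (let [v (value-fn e)]
        ;; single-threaded under the lock, so value-fn runs at most once per miss
        (vswap! cache-vol c/miss e v)
        v))))

(comment
  (def cv (volatile! (c/fifo-cache-factory {})))
  (lookup-or-miss* cv :foo (fn [k] (println "miss") k)))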
The original goal of c.c was to implement an API for building caches, but I have always appreciated the inclusion of a "common use" namespace. I would like to explore the idea of your recommendations in terms of the builder + common-use model. Maybe there's room for something like a contentious-lookup-or-miss
implementation that could be used instead of the base function when users know that they have that need?
I'm not married to that name BTW 😉
sure, I think it makes sense… I have yet to do the analysis on any negative consequences of something like locking+volatile!, so I think making it opt-in makes total sense
I think that core.memoize might be a drop-in replacement that solves your woes. I also think that a new data structure similar to clojure.lang.Atom, but that had update built in and would compareAndSet not on the whole value but on the key you are swapping, would solve 80% of atom swap contention issues in real life
@U11BV7MTK last time I looked, the problem with memoize was that it essentially meant unbounded memory growth (vs say fifo/ttl). Unless I am misunderstanding you?
clojure.core/memoize
is unbounded. clojure.core.memoize/memoize
allows for pluggable storage. The thing that might work for you is that it is aware if it has already requested the value once and won’t fire a bunch of them but just let the first computation succeed
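(For illustration, a bounded core.memoize setup might look like this; the fetch function and threshold are made up for the example:)
(require '[clojure.core.memoize :as memo])

(defn fetch-jwks [key-id]
  ;; hypothetical expensive IO; with memoization it runs at most once per cached key-id
  (println "fetching" key-id)
  {:key-id key-id})

(def fetch-jwks-cached
  (memo/ttl fetch-jwks :ttl/threshold (* 5 60 1000)))  ; illustrative 5-minute TTL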
@U050WRF8X @U04V70XH6 I wrote a patch. I took a look at the contributing guidelines and saw that it mentions “no PRs, use JIRA”. However, I don't have access to JIRA. Please take this in the spirit that it was intended: putting the legwork in on something I have a vested interest in. Feel free to ignore, merge without attribution, or provide feedback and ask for changes. All fine by me. Thanks for all the work on this lib. I use it every day. https://github.com/ghaskins/core.cache/commit/5e1369e14d28b91c3b8f21448180c082914fedfd
The main issue re PR vs patch on Jira is about the Contributors' License Agreement, which you've signed https://clojure.org/dev/contributors so @U050WRF8X or @U064X3EF3 could set you up on Jira so you could add that commit as a patch there.
Using locking
and a volatile is an interesting approach... It has different tradeoffs than using an atom (and swap!
)...
Yeah, and I am open to suggestions if there is a better way. I was trying to get away from any of the “optimistically try and retry if we collide” type constructs that exist, like atom/ref
OTOH, the very notion of a cache is already mutable and often impure, so maybe it warrants the unusual approach
I can say I've been stubbing my toe on the general problem I've highlighted for quite some time now, so I welcome some kind of solution
one thing I will admit right off the bat is that the locking approach is probably best suited to the async-style approach I generally use, vs some kind of heavy synchronous operation in the value-fn… on the other hand, an expensive synchronous value-fn is probably a bad idea no matter how the concurrency issue is solved
The atom
version could use locking
but that's the tradeoff: use locks (on all actions) and guarantee no contention on the cache via single-threading, or don't use locks and allow for contention to be handled via the atom/swap! mechanism (no locks, but some retries).
Which is right for the user definitely depends on how much contention they might have and how they might want to handle that...
yeah, that's essentially what my wrapper-workaround does: it uses locking on top of the existing wrapped… it does work
I figured at that point, the extra overhead of swap! vs vswap! probably isn't doing anything useful, but it is simpler than two interfaces
by “my wrapper” I am referring to
(require '[clojure.core.cache.wrapped :as cache])
(defn lookup-or-miss
[cache-atom e value-fn]
(or (cache/lookup cache-atom e)
(locking cache-atom
(cache/lookup-or-miss cache-atom e value-fn))))
i.e. I think the IDeref on the atom/volatile! takes care of the read-path operations
It would be nice to abstract the wrapping strategy out of this, to make it easier to provide alternatives like this... Have to give it some thought...
Most of the code in wrapped
uses @
and swap!
so if swap!
was handled via a protocol on the "wrapper" type (and that type implemented IDeref
) then only the factory functions would need updating to accept some sort of (optional) wrapper constructor function -- which would be called with a regular immutable cache object and should return that wrapped in something mutable. But if you want a locking type, I think you'd need to use a record around the mutable wrapper so that you weren't going to collide with protocol extensions on the low-level wrapper type...
FYI I think this ask/issue also touches the same or at least a very similar problem https://ask.clojure.org/index.php/12567/multi-threaded-cache-stampede-in-core-cache
Ah, interesting... that also lands on core.memoize as a potential solution...
Out of curiosity, I ran a benchmark comparing the two fundamental models
(def d (volatile! 0))
=> #'user/d
(criterium.core/quick-bench (locking d (vswap! d inc)))
Evaluation count : 18932700 in 6 samples of 3155450 calls.
Execution time mean : 27.349020 ns
Execution time std-deviation : 4.068904 ns
Execution time lower quantile : 24.090110 ns ( 2.5%)
Execution time upper quantile : 33.691267 ns (97.5%)
Overhead used : 7.167237 ns
Found 1 outliers in 6 samples (16.6667 %)
low-severe 1 (16.6667 %)
Variance from outliers : 47.0587 % Variance is moderately inflated by outliers
=> nil
(def d (atom 0))
=> #'user/d
(criterium.core/quick-bench (swap! d inc))
Evaluation count : 22390512 in 6 samples of 3731752 calls.
Execution time mean : 22.067023 ns
Execution time std-deviation : 3.297516 ns
Execution time lower quantile : 19.648233 ns ( 2.5%)
Execution time upper quantile : 27.265072 ns (97.5%)
Overhead used : 7.167237 ns
Found 1 outliers in 6 samples (16.6667 %)
low-severe 1 (16.6667 %)
Variance from outliers : 47.0844 % Variance is moderately inflated by outliers
=> nil
so an atom is slightly more efficient (in this case, at least) though they are both in the same ballpark
I'm now wondering whether it makes sense to have both: the atom is really only appropriate if we want to limit the wrapped cache value-fn to pure functions. IO isn't really appropriate given the side-effect potential. The volatile! approach solves for both without changing the API. Thoughts?
My instinct says that offering an API that allows opt-in for side-effecting usage (with supporting motivations in docs) is the way to go.
@U13AR6ME1 swap! is faster when there's no contention (which is expected), but probably much slower under high contention
@U07S8JGF7 understood that there would be a difference under contention, but we can't ignore that retrying the swap! on the atom has a cost, too, and it's not easy to quantify them directly
I'd also argue that any retry under contention is really only suitable for pure functions… we could say that this is all that core.cache is meant to support, but that is a bit limiting in things that are useful for a cache to do
in any case, my main point still stands: I know my comparison without contention is a bit incomplete, but I don't know how to reliably compare them together
I was really just making an observation on your measurement. (I was surprised when I first saw it, thought through it, and thought I'd chime in)
I do suspect that swap! retries have the potential to be much worse, but that is only intuition
fwiw ConcurrentHashMap
might actually give you the best semantics and perf for your use case (if I'm understanding you correctly).
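(A sketch of that idea: ConcurrentHashMap.computeIfAbsent runs the mapping function at most once per key and blocks other callers on that key while it runs; note there is no eviction, which is the caveat raised just below:)
(import 'java.util.concurrent.ConcurrentHashMap
        'java.util.function.Function)

(def chm-cache (ConcurrentHashMap.))

(defn chm-lookup-or-miss [^ConcurrentHashMap cache k value-fn]
  (.computeIfAbsent cache k
                    (reify Function
                      (apply [_ k'] (value-fn k')))))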
my gut is that for the use cases that core.cache is meant to support, avoiding a stampede is highly desirable… but I also understand that not everyone may care about the distinction and maybe there are negative consequences in protecting against them
I'd have to look at ConcurrentHashMap, but I'm guessing the lack of eviction control would be a limiting factor for me
oh but you wanted eviction. Yeah, in that case I'd personally reach for caffeine. (But feel free to ignore this. Don't wanna derail the rest of the convo).
I really appreciate this thread. We have also identified this issue for the exact same use case but haven't hit a point where it's a problem for us yet. We have a ticket in our backlog to keep an eye on it.