This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-10-04
Channels
- # announcements (6)
- # babashka (7)
- # beginners (2)
- # biff (5)
- # calva (2)
- # cherry (17)
- # cider (3)
- # clj-kondo (8)
- # clojure (202)
- # clojure-brasil (8)
- # clojure-europe (20)
- # clojure-norway (23)
- # clojure-uk (4)
- # clojuredesign-podcast (5)
- # conjure (1)
- # cursive (9)
- # eastwood (22)
- # events (8)
- # fulcro (3)
- # hyperfiddle (22)
- # introduce-yourself (7)
- # lsp (67)
- # malli (1)
- # matrix (1)
- # meander (6)
- # off-topic (76)
- # pedestal (8)
- # polylith (17)
- # quil (12)
- # re-frame (2)
- # reagent (8)
- # releases (3)
- # shadow-cljs (67)
- # sql (93)
- # squint (39)
- # tools-deps (46)
- # vim (7)
Hello Folks, What’s the best way to achieve the equivalent of this:
(do (something)
;; wait one minute
(something-else))
Where the running context is a Ring handler (so ideally I wouldn't like to block the whole execution of the Ring server if a few handlers happen at the same time)
If the response does not depend on (something-else)
, then yes, core async with a timeout channel would indeed work.
If your plan is to keep the request open for a minute, you might want to look into server sent events and async ring handlers.
thanks. it does not depend indeed, in fact it’s just sending an email, but i want it to be sent slightly later, without resorting to mailqueues if I can (being pragmatic for now :-))
Remember to use thread
instead of go
when performing long-running tasks (like contacting an http endpoint), due to the small number of threads in core.async
and I would highly suggest switching to some sort of queue if you care about the delivery of these emails, once your "for now-pragmatism" runs out. 🙃
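A minimal runnable sketch of that advice (`send-email!` is a made-up placeholder, not from the thread): park in a go block for the delay, then hand the blocking work to `a/thread` so no core.async worker thread is tied up.

```clojure
(require '[clojure.core.async :as a])

(defn send-email! []
  ;; hypothetical placeholder for the blocking IO
  (println "sending email")
  :sent)

;; the go block parks cheaply on the timeout channel, then
;; a/thread runs the blocking call on a dedicated thread
(a/go
  (a/<! (a/timeout 60000))          ;; parks for a minute without holding a thread
  (a/<! (a/thread (send-email!))))  ;; blocking IO off the go-block pool
```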
The simplest and dirtiest would be to
(future
(Thread/sleep 60000)
(something-else))
So @U07FCNURX I currently have this:
(defn after-30-seconds
"executes f after 30s"
[f]
(async/<!! (async/timeout 30000))
(f))
You might want to use a ScheduledExecutorService
if there are lots of these. E.g. manifold has https://cljdoc.org/d/manifold/manifold/0.4.1/api/manifold.time#in for exactly this purpose
Yeah, after-30-seconds
should probably look like this:
(defn after
  [n f]
  (let [ret-chan (a/chan)]
    (a/go
      (a/<! (a/timeout n))
      (a/>! ret-chan (a/<! (a/thread (f))))
      (a/close! ret-chan))
    ret-chan))
This will wait for the timeout without holding up a thread, perform the function on a dedicated thread so that it can perform blocking io, and the returned channel from after can be used as normal from go blocks, as if the whole thing is a standard async function.
@U5NCUG8NR I'm sorry, but this makes very little sense. You are mixing go blocks, explicit threads, and a redundant channel.
(a/go
(a/<! (a/timeout 30000))
(f))
That's it, if the goal is to use core.async for this. Otherwise, it's enough to:
(.schedule
(java.util.concurrent.Executors/newSingleThreadScheduledExecutor)
^Runnable #(println "hello!")
10 java.util.concurrent.TimeUnit/SECONDS)
You would probably want to reuse the executor once you get it working.
So I had specific reasoning for all the choices I made there.
The use of a/thread
for the final execution is to permit blocking io in the function to be deferred, the use of core.async is to prevent the need to spin up a new thread only for it to sit around and be blocked for a long time increasing the likelihood of needing to allocate many threads, and the "redundant" channel there is so that the interface the function provides can be equivalent to a standard core.async async function.
Yes using a scheduled executor is probably better.
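Reusing the executor could look something like this sketch (`scheduler` and `schedule!` are made-up names): one shared scheduler for the whole app instead of a fresh one per call.

```clojure
(import '(java.util.concurrent Executors TimeUnit))

;; one shared single-threaded scheduler, created once
(defonce scheduler (Executors/newSingleThreadScheduledExecutor))

(defn schedule!
  "Run f after ms milliseconds on the shared scheduler."
  [ms f]
  (.schedule scheduler ^Runnable f (long ms) TimeUnit/MILLISECONDS))

;; (schedule! 60000 #(println "hello!"))
```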
I say redundant because both go
and thread
already return the one-off channel that receives the result of the last form in the block.
Right, but it's not redundant because if I left it alone then I'd have a channel returning a channel returning the final object.
If I move it into a single a/thread
block to remove the extra layer then I'm holding up a thread while it blocks for timeout, and if I move it into a/go
then blocking io in the final function results in blocking a core.async executor thread.
I have to have both layers, but to provide an interface with one layer I have to introduce this third channel which has a value sent on it and then is closed.
Sure, that expands to about the same thing as I did
Hey there. Is there a way to express a logical nand between two options natively using clojure.spec, without defining helper-functions?
(s/def ::a int?)
(s/def ::b int?)
(s/def ::a-or-b (s/or ::a ::b)) ; i only want, at most, one of these
(s/def ::nand
(s/keys :opt-un [::a-or-b]))
(s/valid? ::nand {}) ; true
(s/valid? ::nand {:a 1}) ; true
(s/valid? ::nand {:b 2}) ; true
(s/valid? ::nand {:a 1, :b 2}) ; true (want false)
I think you're doing this at the wrong level - you're trying to say something about the map, so ::a-or-b is not sensible here.
but you can add arbitrary predicates to your spec so...
(s/def ::nand (s/and (s/keys :opt-un [::a ::b]) #(not (and (:a %) (:b %)))))
Seems like a neat solution! When I thought about it, I approached it like :a
, :b
and the relationship between them should be defined as a single composable unit, but approaching it from the context of the map seems clean. Thank you!
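Putting the accepted answer together as one runnable sketch (note that the `s/or` in the original snippet would also have needed tagged branches, which is a separate issue):

```clojure
(require '[clojure.spec.alpha :as s])

(s/def ::a int?)
(s/def ::b int?)

;; the nand lives at the map level: both keys optional, but not both at once
(s/def ::nand
  (s/and (s/keys :opt-un [::a ::b])
         #(not (and (:a %) (:b %)))))

(s/valid? ::nand {})           ;; => true
(s/valid? ::nand {:a 1})       ;; => true
(s/valid? ::nand {:b 2})       ;; => true
(s/valid? ::nand {:a 1 :b 2})  ;; => false
```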
I'm trying to debug an issue in our leiningen-based app where macro-expansion errors print the :clojure.main/triage
object instead of the clojure.spec pretty-printed error. has anyone run into this before?
if I write (let [x])
, I expect to see:
Syntax error macroexpanding clojure.core/let at (.../core.clj:315:1).
[x] - failed: even-number-of-forms? at: [:bindings] spec: :clojure.core.specs.alpha/bindings
but instead I see:
{:clojure.main/message
"Syntax error macroexpanding clojure.core/let at (.../core.clj:315:1).\n[x] - failed: even-number-of-forms? at: [:bindings] spec: :clojure.core.specs.alpha/bindings\n",
:clojure.main/triage
{:clojure.error/cause
"Call to clojure.core/let did not conform to spec.",
:clojure.error/phase :macro-syntax-check,
:clojure.error/symbol clojure.core/let,
:clojure.error/column 1,
:clojure.error/line 315,
:clojure.error/class clojure.lang.ExceptionInfo,
:clojure.error/source "core.clj",
:clojure.error/spec
{:clojure.spec.alpha/problems
[{:path [:bindings],
:pred clojure.core.specs.alpha/even-number-of-forms?,
:val [x],
:via
[:clojure.core.specs.alpha/bindings
:clojure.core.specs.alpha/bindings],
:in [0]}],
:clojure.spec.alpha/spec
#object[clojure.spec.alpha$regex_spec_impl$reify__2503 0x67fa82a8 "clojure.spec.alpha$regex_spec_impl$reify__2503@67fa82a8"],
:clojure.spec.alpha/value ([x]),
:clojure.spec.alpha/args ([x])},
:clojure.error/path "crossbeam/test/framework/core.clj"},
:clojure.main/trace
{:via
i'm not sure what's going on that would cause this
turns out it was the jvm-opts "-Dclojure.main.report=stderr"
in project.clj. very annoying
Going a little crazy here: How do you pass :args
to server/start-server
? I'm on clojure 1.11.1.
(def server (clojure.core.server/start-server
{:name "repl1"
:accept 'clojure.main/repl
:port 5555
:args {:print clojure.pprint/pprint}}))
(Both the server and pprint namespace are required)
But when I connect, on the server I get:
java.lang.IllegalArgumentException: No value supplied for key: [:print #object[clojure.pprint$pprint 0x8ebc676 "clojure.pprint$pprint@8ebc676"]]
I also tried wrapping args in a vector to no avail:
args Vector of args to pass to accept function
what does clojure.main/repl
take?
I think this pertains to https://clojure.org/news/2021/03/18/apis-serving-people-and-programs but that feature seems not to work for me despite being on 1.11
*clojure-version*
{:major 1, :minor 11, :incremental 1, :qualifier nil}
i'll answer that for you: Options are sequential keyword-value pairs.
clojure.main/repl
doesn't use that
it takes & options
and then it calls (apply hash-map options)
sorry thanks, yeah I even looked at the source but my brain read it as {:as options}
it would be nice if it used that new functionality, but i suspect they don't want to change it for fear of breaking something
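Putting those two answers together, a sketch of what presumably works: since clojure.main/repl does (apply hash-map options), the :args vector wants flat keyword-value pairs rather than a map.

```clojure
(require 'clojure.core.server 'clojure.pprint)

;; :args is a vector of args applied to the accept fn, and
;; clojure.main/repl takes & options as keyword-value pairs
(def server
  (clojure.core.server/start-server
    {:name   "repl1"
     :accept 'clojure.main/repl
     :port   5555
     :args   [:print clojure.pprint/pprint]}))
```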
are you aliasing clojure.string as string
or str
? React with for string and
for str (emojis chosen arbitrarily, just 2 legends)



str
is the default alias for babashka fwiw :^)
I used to use string/
but changed it to str/
after reading it more in other people's code.
Does that mean introduced the str/ idiom, and that we can find
code with string/ if we go looking?
I'm team string
so I don't clash with str
, but I'll follow the project/team convention.
oh maybe this settles it? The clojure repo is a mix of string, str and s
believe it or not.
how much authority is given to this is up to the individual, but <https://guide.clojure.style/#use-idiomatic-namespace-aliases> recommends str
Obviously we need tooling to resolve this. The import is some kind of gensym, and the IDE resolves it to s/str/string according to personal preferences.
@U06B8J0AJ Not entirely sure what you mean, but at least in Cursive if you type str/includes?
where str
hasn't been defined yet but you use it as an alias for clojure.string
somewhere else (or maybe if some library uses it, I think), then you'll be prompted to press Alt+Enter to add that :require
vector for you automatically.
It was more of a joke than a serious suggestion @U2FRKM4TW. The basis being, wouldn’t it be nice if everyone just saw what they preferred? Which when extrapolated leads to filter bubbles and dystopia.
Yeeeeeaaaaaah… You can’t go too far the other way either, then there’s a different kind of dystopia, where everything is enforced and there’s no room for personal preferences.
string
is used on my projects as it's far less likely to clash when using text search through code than str
I never understood why names in application code have to be so short and cryptic; it's not as if names are ever fully typed after the first time. So if it's not a typing-speed issue, is there some other reason why str or s would be preferable?
I would argue that clojure.core isn't a comprehensive guide to writing clean code for applications / services.
There are also many discussions around what should be in the style guide
The most important thing is that the naming is consistent.... then I can use Emacs to change everything in one go with helm edit
(I am of course kidding and will use what someone pays me to use)
That’s the one, issue resolved. Which reminds me, Clojure needs a https://www.emojicode.org/ dialect.
> I never understood why names in application code has to be so short
> is there some other reason why str or s would be preferable?
@U05254DQM Cognitive load and speed of reading.
I genuinely feel dumber and slower when reading code with longFunctionNamesThatSpecifyExactlyWhatTheFunctionIsDoing. And not for the lack of trying. It's much easier for me to track short names across their scopes (and truly short names must either be consistent and global or very local) than to jumble through a stockade of long names.
Regarding searching - I rarely do full-text search as my IDE lets me search for the exact thing that I need and not some string. But even with strings, you can always search for str/
and str[^/]
.
I agree that supercalifragilisticexpialidocious-style naming is unnecessarily verbose and hard to parse and retain, although str (and especially s) feels to me a similar drain on cognitive load at the other extreme. There are so many things that str and s could represent other than the Clojure standard string library when writing business solutions.
If narrowly focused libraries are being created then I assume it's less noticeable, but within a business context I'd like to make it as clear as possible what something represents (as there are many other things to think about and retain)
If you follow the https://stuartsierra.com/2015/05/10/clojure-namespace-aliases convention, using str
will feel out of place
(although I'm sure some team does because https://xkcd.com/1095/)
I doubt that the community as a whole will actually agree on a standard, and in many ways I don't think there should be one way. It's one of many design choices we make with the team we work with in respect of the work that we are creating, and it should be made with a clear understanding of the value and constraints of that decision.
I am using a java library, which has a block of code like so:
private Foo() {
try {
...
}
catch (SpecificError e) {
throw new FooException("...");
}
...
The author wanted to give me a helpful hint with their FooException
. But alas it is not enough. I would love to see what the message in SpecificError
was. Is there some way I can "capture" SpecificError
and inspect it?
If the author used a chained exception then yes
ie if it’s throw new FooException("some message", e)
the specific error will be the ex-cause
of the FooException
core=> (Exception. "stuff" (ex-info "inner cause" {:other :stuff})) ;; an exception chained with another
#object[java.lang.Exception "0x4776d1b9" "java.lang.Exception: stuff"]
core=> (ex-cause *1) ;; ask for the cause of the exception
#object[clojure.lang.ExceptionInfo
"0x51af6ebc"
"clojure.lang.ExceptionInfo: inner cause {:other :stuff}"]
core=> (ex-data *1) ;; get data from that cause
{:other :stuff}
Given "hello there·" (a string with a trailing null character), java.lang.String.trim() removes the null terminator while clojure.string/trim doesn't.
it can be very surprising how long the java doc for String.trim ends up, and sort of head slapping that it uses a very different definition of whitespace from Character/isWhitespace which is what clojure.string/trim uses
it might be nice if clojure.string/trim's docstring defined its notion of whitespace
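The difference is easy to show at the REPL: String.trim strips any character with code point ≤ U+0020, while clojure.string/trim asks Character/isWhitespace, which returns false for NUL.

```clojure
(require '[clojure.string :as str])

(.trim "hello\u0000")       ;; => "hello"        (NUL is <= U+0020, so .trim strips it)
(str/trim "hello\u0000")    ;; => "hello\u0000"  (kept: isWhitespace says NUL isn't whitespace)

(Character/isWhitespace \u0000)  ;; => false
```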
I'm surprised that (assoc {})
is illegal but (dissoc {})
is legal.
I was trying to (apply assoc {} pairs)
where pairs
can be empty.
Sure but that wasn't really the point.
There's always alternatives.
I'm just wondering why assoc
requires vals and dissoc
does not.
but also @U06PNK4HG (apply hash-map keyvals)
was helpful for me so thanks.
Because vararg arity is more expensive than the one where just the required arguments are passed and nothing else; and assoc
is a very commonly used function for this to matter.
assoc is defined
([map key val] ... )
([map key val & kvs] ...)
Are you saying that adding
([map] ...)
would affect performance enough to matter? (just trying to be certain I understand)
Is your pairs
a flat vector/sequence like [:a 1 :b 2 :c 3]
?
(kind of a confusing name -- I would expect pairs
to be a sequence of pairs: [[:a 1] [:b 2] [:c 3]]
)
It's the value of a mapcat
ok I should have said kvs
That doesn't answer my question -- mapcat
can produce all sorts of things 🙂
So, are you saying the answer to my Q above is "yes"?
it's a clojure.lang.PersistentVector in this case
sorry had to check. I don't know these things off the top of my head
Not the type, the value?
it would be something like [:a true :z true]
If it was a sequence of pairs, (into {} pairs)
would be the idiomatic approach. If it is a sequence of keys and values, then (apply hash-map kvs)
would be idiomatic.
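A quick illustration of those two idioms, including the empty input that motivated the question:

```clojure
;; a sequence of pairs -> into
(into {} [[:a 1] [:b 2]])     ;; => {:a 1, :b 2}

;; flat keys-and-values -> apply hash-map
(apply hash-map [:a 1 :b 2])  ;; => {:a 1, :b 2}

;; both tolerate empty input, unlike (apply assoc {} [])
(into {} [])                  ;; => {}
(apply hash-map [])           ;; => {}
```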
If you used map
instead of mapcat
, would you get a sequence of pairs? (and thus be able to use into {}
)
I'm happy to learn new idioms, but again this was not my question
I'm trying to understand the use case...
and I'm trying to understand why
> (assoc {})
is illegal but (dissoc {})
is legal.
I mean, you could def. post on http://ask.clojure.org about it, requesting a 1-arity assoc
be added, but you'll get the same line of questioning there as I'm pursuing here because the core team will want to understand why you need this rather than using something else (that is more idiomatic).
(for dissoc
there really isn't an alternative)
I'm not stuck. I'm not looking to change the core. I just wondered the reasoning for this (if it was done on purpose). @U06PNK4HG gave an answer but I'm not sure if it was an educated guess or canon. So I'd like a bit more clarity.
I don't see why (dissoc a-map)
has to work.
When dissoc
was originally added, it didn't support (dissoc {})
. That arity was added later (in early 2008). (apply dissoc sequence-of-keys)
with an empty sequence is why.
But folks don't use assoc
that way because there are "better" alternatives.
apply hash-map
or into
depending on the structure of the key/value data are used instead of apply assoc
I can certainly see an argument in favor of adding that arity -- consistency, the ability to assoc
an open-ended set of key/value data into an existing hash map, etc.
right. so is it a performance thing or a tuits thing?
I don't know whether adding a 1-arity variant will impact performance in general (when you're in apply
territory, you're usually away from the most performant approach already).
nod. well thanks for the insights and idioms. appreciated.
There are definitely a lot of "corner cases" in Clojure where folks might scratch their head and go "Hmm, why is it like X instead of Y?" and sometimes there are good reasons and sometimes there aren't really... There are quite a few cases where (apply f [])
will fail so assoc
isn't unique.
Sometimes, the f
in question does get updated to have the extra arity needed for the apply .. []
case to work.
(`into` didn't have 0 or 1-arities until 2016)
Now that I have a good enough answer here,
I'll share my use case:
> to turn a string of certain keywords :a
, :b, :c and positive integers (separated by ws) into a mapping with those values as keys, and anything else ignored.
":a :c :m 14 -3"
->
{:a true, :c true, 14 true}
You've made me curious enough to dig into the Clojure commit log! 😆 comp
got its 0-arity in 2010 apparently...
OK, so you "parse" the string to a sequence of tokens, then filter it to contain just "interesting" data (certain keywords and positive integers), and at that point you have a sequence of keys...
...so (zipmap the-keys (repeat true))
would be an option here?
looks tasty.
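A sketch of the full use case under those constraints (parse-token and flags->map are made-up names, and parse-long assumes Clojure 1.11+):

```clojure
(require '[clojure.string :as str])

(defn parse-token
  "Keep the :a/:b/:c keywords and positive integers; nil for anything else."
  [t]
  (cond
    (#{":a" ":b" ":c"} t)         (keyword (subs t 1))
    (re-matches #"[1-9][0-9]*" t) (parse-long t)
    :else                         nil))

(defn flags->map [s]
  ;; keep drops the nils, zipmap pairs every key with true
  (zipmap (keep parse-token (str/split s #"\s+")) (repeat true)))

(flags->map ":a :c :m 14 -3")  ;; => {:a true, :c true, 14 true}
```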
but it's already vararg
just no arity of [map]
FYI a related CLJ ticket for the transient versions of some of these functions. I suspect that in most cases where one of these functions accepts "no additional arguments" and in that case returns the input collection, it was simply that it was implemented that way originally in Clojure, and others simply weren't: https://clojure.atlassian.net/browse/CLJ-1103
Requests to make them more consistent with each other are typically not high priority, unless the Clojure core team has a member that wants it to happen.
It mostly comes up occasionally because someone did what you did, i.e. (apply some-fn coll1 coll2) with empty coll2.
I do not recall whether there is a similar issue open for assoc
Oh, never mind, I'm pretty sure that CLJ-1103 includes assoc
as one of the things proposed to be changed.
(defn required-zero [& args])
(defn required-two [a b & args])
(defn just-two [a b])
(crit/quick-bench (just-two 1 2)) ;; 1.15 ns, 0 bytes allocated
(crit/quick-bench (required-two 1 2)) ;; 1.13 ns, 0 bytes allocated
(crit/quick-bench (required-two 1 2 3 4)) ;; 11.79 ns, 56 bytes allocated
(crit/quick-bench (required-zero 1 2 3 4)) ;; 14.25 ns, 64 bytes allocated
You can vote on the corresponding http://ask.clojure.org issue if you hope for more attention to be drawn to it sooner rather than later: https://ask.clojure.org/index.php/2370/make-conj-assoc-dissoc-transient-versions-handle-similarly
@U0CMVHBL2 I really don't care one way or the other if this happens in clojure.core.
That is a view that can help keep one's serenity long term 🙂
@U06PNK4HG but that's not the same thing here.
=> (source assoc)
(def
assoc
(fn ^:static assoc
([map key val] ...
([map key val & kvs] ...
It's already multi-arity and already var-args.
It's a bit hard to believe that adding an arity of [map] here would have a meaningful impact on performance.
When the function has varargs but also has a number of required arguments, and it is invoked with the number of arguments that is exactly the number of required arguments, it doesn't go through the varargs codepath but rather a more effective codepath.
I can direct you to a large nightmarish autogenerated Java file that contains that logic so that you can stare into that abyss, or you can simply believe me 😄.
It's not that I don't "believe" you. I just don't yet see the basic logic of what you are positing.
(apply assoc a-map kvs)
I think you are saying that this invokes the varargs fn. Is that right?
Oh man, don't say I didn't warn you:) See this https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/RestFn.java#L132 And this https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/AFn.java#L147
Like I've said, there is an optimization that tries to avoid stuffing the arguments in a list for vararg consumption
C'mon @U06PNK4HG That's not even fair
Specifically in layman's terms, what happens if we add the [map]
arity to assoc
?
So after that when (assoc m :a 1)
is invoked, the compiler would have to construct a list '(:a 1)
and pass it to the function.
Whereas now it would pass them as normal method arguments (through stack) and for varargs it will give the function an empty list which is free
By the way, all of this is possible because apply itself has a few constant arities, check its source
This is a stupid thought, disregard
i don't see how adding a 1 arity to an already-varargs function would make it slower
i did and i don't see where you've answered that
Is the performance penalty limited to the compiler here?
I don't quite get it, but thanks for the explanations.
@U05H8N9V0HZ Sorry, my bad, my bad, I didn't link one more important place https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/RestFn.java#L427
I'll take a look later. Thanks.
maybe we're speaking past each other? are you referring purely to when calling (apply assoc ...)
and performance loss there because you have to build the 3 item seq first?
because (as i understand it) the base call (assoc m)
would go directly to the 1 arity, and (assoc m k v)
would go directly to the 3 arity
which is the performance I was talking to, which explains my confusion lol
But if you try to add [map & args]
arity, Clojure would no longer allow you to have [map k v & args]
arity
(defn foo
([& args])
([a b & args]))
Syntax error compiling fn* at (REPL:1:1).
Can't have more than 1 variadic overload
that's separate from adding a 1 arity to assoc
@U06PNK4HG I think you mean the 3-arity of assoc
would be affected via https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/RestFn.java#L435 ?
@UEENNMX0T my specific case is (apply assoc m (returns-empty-vec))
right, i understand the situation, i'm trying to understand the performance loss argument
I get what you're saying -- and I'm a bit surprised but, yeah, I think you're right. If you add a lower arity to a variadic fn, more calls go through the arrayseq construction and invocation.
And yes, @U06PNK4HG I was not asking about [map & args]
but about adding [map]
to the existing arities
Hmm, just a plain [map]
arity is another story, indeed. I'm sorry for jumping to assumptions
So adding [map]
to assoc
would cause (assoc m k v)
to go through the variadic path because getRequiredArity()
would return 1 instead of 3.
I doubt that arity would change performance. But why argue, bring forth Criterium and try it.
(what a rabbit hole this has become! 😆 )
@U06B8J0AJ Scroll up. But I measured something different from what @U05H8N9V0HZ asked, my bad
Thanks, I thought we were on different pages 🙂
Just because the transducer arity of into
is one of my favorite things. Though I’ve never actually used this particular one.
OK, Ingy and Noah are correct,
(defn foo
([x])
([x y z & args]))
is no different from
(defn foo [x y z & args])
when called like (foo 1 2 3). I have been answering a different question all this time.
That's not the right test tho' because you don't have a specific 3-arity. Here's what I see
user=> (defn test-fn ([a b c] 3) ([a b c & more] 4))
#'user/test-fn
user=> (bench (test-fn 1 2 3))
Evaluation count : 10027900260 in 60 samples of 167131671 calls.
Execution time mean : 1.333287 ns
Execution time std-deviation : 1.517360 ns
Execution time lower quantile : 0.050961 ns ( 2.5%)
Execution time upper quantile : 4.908632 ns (97.5%)
Overhead used : 5.958002 ns
Found 6 outliers in 60 samples (10.0000 %)
low-severe 5 (8.3333 %)
low-mild 1 (1.6667 %)
Variance from outliers : 98.3139 % Variance is severely inflated by outliers
nil
user=> (defn test-fn ([a] 0) ([a b c] 3) ([a b c & more] 4))
#'user/test-fn
user=> (bench (test-fn 1 2 3))
Evaluation count : 9025901580 in 60 samples of 150431693 calls.
Execution time mean : 2.101737 ns
Execution time std-deviation : 0.556681 ns
Execution time lower quantile : 1.593812 ns ( 2.5%)
Execution time upper quantile : 3.648752 ns (97.5%)
Overhead used : 5.958002 ns
Found 8 outliers in 60 samples (13.3333 %)
low-severe 2 (3.3333 %)
low-mild 6 (10.0000 %)
Variance from outliers : 94.6650 % Variance is severely inflated by outliers
nil
user=>
Which is like assoc
and then adding the 1-arity:
([map key val] (clojure.lang.RT/assoc map key val))
([map key val & kvs]
Huh, just re-ran it and got different execution times so 🤷:skin-tone-2:
@U06PNK4HG Thanks for sticking with it.
Looks like assoc
is the way it is simply because that was the first draft and there was never any big need to add that arity to it. Thanks for the historical spelunking there @U04V70XH6.
I definitely picked up a lot of #C053AK3F9 insights here so thanks to everyone for those.
I'm not getting statistically reliable timings on repeated runs so I can't say either way whether it would affect performance at this point 😐
Well if @U06PNK4HG says it doesn't affect perf, I (now) "believe" him. :face_with_cowboy_hat:
Yeah, @U04V70XH6, it's just variance. The difference would be much more apparent if one of the arity calls started allocating, which is the main performance penalty from using the vararg arities of functions.
I have a protocol defined with multiple implementations. I want to “hook” into the protocol function without changing the implementation - for example add a log statement before/after the protocol function is called on an implementation. Is there a way to do this without changing every implementation of this protocol? Update: The thread has a bit more context of my ask and interesting suggestions to approach this ^
this is one of the motivators for the pattern to never call a protocol directly, but have a regular function be the entrypoint
(defprotocol A (-foo […]))
and then everyone calls a foo
wrapper which is the single entrypoint where you can log, spec checks, etc
This is kinda the opposite of that. More like wanting to have every single protocol implementation delegate out to a var so that you can do a rebind of just that one implementation.
To add a little more context, I have multiple implementations that exist at the same time and I want to do something with side effects like updating a counter in an atom everytime a protocol fn is called.
(defprotocol A
(foo [..] ..)
(bar [..] ..))
(defn a-impl-1 [counter-atom-1]
  (reify A
    (foo [..]
      (swap! counter-atom-1 update :foo inc)
      ...)
    (bar [..]
      (swap! counter-atom-1 update :bar inc)
      ...)))
(defn a-impl-2 [counter-atom-2]
  (reify A
    (foo [..]
      (swap! counter-atom-2 update :foo inc)
      ...)
    (bar [..]
      (swap! counter-atom-2 update :bar inc)
      ...)))
Is there a good way to take out the counter from individual implementations?
from this snippet you want to track every time a particular implementation is called
Yes, I considered that but the macro would be still called inside the individual implementation. Which would work for me but I was wondering if there is a better way to do this to not “pollute” the individual implementations.
You could rename foo
to foo*
and make a new var foo
that calls pre-foo-hook
which is a multimethod with a dispatch value on the protocol's type that adds the counters only to the dispatch values you want to.
Or ditch the multimethod and just use a vector of functions as the pre-foo-hook
where each function can do predicate matching before doing stuff.
notably you can also do all this at runtime with e.g. a macro because the protocol already generates a var. You could move the value from that var to a new temporary var while injecting your hooked-var into the name.
Notably this wouldn't be robust to redefinition, but protocols are already a little fragile around redefinition.
Interesting ideas here. Thank you @U11BV7MTK, @U5NCUG8NR. Macro emitting the impl itself and the hook method suggested above could both work for me. Let me try and see what “feels” better. Thank you!
by say, wrapping a function around every function
oh that sounds cool. I hadn't looked into the implementation details too much yet, that sounds like you could do what I wanted to in a much better way by messing with internals.
i always forget they are maps to a point. i like reading http://clojure.java.io to remind myself sometimes
they are maps to the extent that they can be just hung on IMeta
-capable objects as meta data and then u can call protocol methods on that object, eg. a map:
https://clojure.org/reference/protocols#_extend_via_metadata
just define your protocols with :extend-via-metadata true to enable its method dispatch mechanism to look for method implementations in meta data too.
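A minimal sketch of metadata-based extension (Greet and hello are made-up names): the metadata map is keyed by the fully-qualified method symbol.

```clojure
(defprotocol Greet
  :extend-via-metadata true
  (hello [this]))

;; hang the implementation on a plain map's metadata, keyed by the
;; fully-qualified protocol method symbol (syntax-quote resolves it)
(def m (with-meta {:name "world"}
         {`hello (fn [this] (str "hello, " (:name this)))}))

(hello m)  ;; => "hello, world"
```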
btw, there is an Elisp-like advice library called Richelieu https://github.com/thunknyc/richelieu which allows u to wrap existing functions with extra behaviour.
we use it to transparently enhance the datomic client api, so it understands and returns java.time.Instants instead of java.util.Dates.
it works, but im not sure if i would recommend using this approach.
also, we just used clojure.walk/postwalk
to wrap d/transact
and d/pull
, which is not exactly performant on bigger responses, which is why we haven't wrapped d/q
or d/qseq
...
also, it's a bit confusing to grapple with many similar words, like advice
, advise
, advised