2021-10-28
Channels
- # aleph (4)
- # announcements (5)
- # babashka (28)
- # babashka-sci-dev (13)
- # beginners (63)
- # calva (76)
- # cider (113)
- # clara (7)
- # clj-kondo (42)
- # cljdoc (1)
- # clojure (170)
- # clojure-europe (20)
- # clojure-nl (17)
- # clojure-norway (3)
- # clojure-spec (12)
- # clojure-sweden (1)
- # clojure-uk (6)
- # clojurescript (55)
- # clojureverse-ops (1)
- # consulting (1)
- # core-async (9)
- # cursive (16)
- # data-science (1)
- # datascript (8)
- # datomic (27)
- # emacs (14)
- # events (1)
- # fulcro (10)
- # graphql (9)
- # gratitude (1)
- # jobs (6)
- # jobs-discuss (5)
- # leiningen (10)
- # lsp (35)
- # missionary (4)
- # nextjournal (9)
- # off-topic (46)
- # pathom (15)
- # pedestal (5)
- # polylith (37)
- # portal (15)
- # re-frame (22)
- # reagent (4)
- # reitit (5)
- # reveal (18)
- # shadow-cljs (20)
- # tools-deps (7)
- # xtdb (10)
since i'm doing all of this zipper stuff internally in a single-threaded context, seems fine to use mutation actually
You might be interested in lenses, they are similar to zippers, the naive version is very simple, so you don't need a library, and you can in theory get better performance because at each step you are applying a type and field specific view or update function
Can you recommend a good clojure lenses library? The two that come up in my google search are pretty old
https://github.com/redplanetlabs/specter/ is the Clojure version of lenses
(you get zipper like navigation by keeping a stack of setters to apply as you walk down applying viewers)
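A minimal sketch of the naive lens idea described above (not from the thread; the names and shape are illustrative): a lens is just a view function paired with an update function, and lenses compose by plain function composition.
(defn lens [view update*]
  {:view view :update update*})

(defn key-lens [k]
  (lens (fn [m] (get m k))
        (fn [m f] (update m k f))))

(defn compose-lens [outer inner]
  (lens (fn [x] ((:view inner) ((:view outer) x)))
        (fn [x f] ((:update outer) x (fn [y] ((:update inner) y f))))))

(def a-b (compose-lens (key-lens :a) (key-lens :b)))
((:view a-b) {:a {:b 1}})        ;; => 1
((:update a-b) {:a {:b 1}} inc)  ;; => {:a {:b 2}}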
the data i’m writing against is (unfortunately) a generic tree of Clojure data. I suppose I could write a zipper that doesn’t parameterize the branch? et al. fns and might eke out some perf, is that kind of what you mean?
i’m trying to take an arbitrary tree and normalize it, so there’s only so much specificity
So I just learned that a reducible does not create intermediate collections. I’m curious how it does so. Does a reducible process each item through the whole function composition at once?
there are a few concepts that are close to what you might be talking about here but none actually called reducible. there are transducers, which can prevent the creation of intermediate collections. And then there's clojure.lang.IReduceInit, which allows you to determine how to consume something with reduce
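For reference, a hedged sketch of what implementing clojure.lang.IReduceInit looks like (illustrative, not from the thread): the object itself decides how reduce walks it, so no intermediate collection is ever built.
;; a "reducible" that yields 0..n-1 without materializing a collection
(defn counting-reducible [n]
  (reify clojure.lang.IReduceInit
    (reduce [_ f init]
      (loop [i 0, acc init]
        (if (or (== i n) (reduced? acc))
          (unreduced acc)
          (recur (inc i) (f acc i)))))))

(reduce + 0 (counting-reducible 5))  ;; => 10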
please correct me if I’m wrong. From my understanding, reducers are what prevent the creation of intermediate collections (https://clojure.org/reference/reducers). Quote:
> A reducer is the combination of a reducible collection (a collection that knows how to reduce itself) with a reducing function (the “recipe” for what needs to be done during the reduction). The standard sequence operations are replaced with new versions that do not perform the operation but merely transform the reducing function. Execution of the operations is deferred until the final reduction is performed. This removes the intermediate results and lazy evaluation seen with sequences.
I think of transducers as a generic way to define a process that can work without the context of what the input is
oh i see. honestly, i've never used the reducers. And they predate transducers so i'm not sure there's a real need for them any longer
oh I think I referred to the wrong concept here. A reducible is like a collection, but it knows how to reduce itself, hence we often refer to it as a reducible collection. Please correct me if I’m wrong.
So my question should be: how do reducers prevent the creation of intermediate collections?
read the guide for transducers. takes a while to get to grips with them. https://clojure.org/reference/transducers . The thing that made it click for me was the simple definition:
> A transducer (sometimes referred to as xform or xf) is a transformation from one reducing function to another
@qn.khuat You can check out the reducer source if you’re curious: https://github.com/clojure/clojure/blob/master/src/clj/clojure/core/reducers.clj#L128-L136
I’m also not super informed about them, but they look like a sort of precursor to transducers.
It sounds like you're talking about reducers. The idea there is to have a collection that embeds a reducing function. You can apply another reducer function to the reducible collection and it will return a new reducible collection (same data, transformed reducing function). When you finally apply the reducing function with reduce, you do so once, on the original data
The downside is, you have to write that (pretty complex) reducing function transformation for every transformation
Transducers are a further evolution that separates the transformation from the collection
Those reducing function transformations are easier to write, and can be applied more generally not just to collections but to other things too
Colls, seqs, channels, etc
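A small illustration of that separation (an editorial sketch, not from the thread): the same xform recipe can be reused against collections, lazy sequences, a reduction, or a core.async channel.
(def xf (comp (filter even?) (map inc)))

(into [] xf (range 10))        ;; eager, into a vector => [1 3 5 7 9]
(sequence xf (range 10))       ;; lazy seq => (1 3 5 7 9)
(transduce xf + 0 (range 10))  ;; straight to a reduction => 25
;; core.async channels accept the same xform: (clojure.core.async/chan 1 xf)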
https://clojure.org/news/2012/05/08/reducers https://clojure.org/news/2012/05/15/anatomy-of-reducer https://clojure.org/news/2014/08/06/transducers-are-coming
Reading those in order tells this story pretty well
Or https://youtu.be/6mTbuzafcII if you like the video approach
> create a recipe for a transformation, which can be subsequently sequenced, iterated or reduced
I suppose that is eduction?
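For reference, a small eduction example (an editorial sketch): it bundles one or more xforms with a source and defers all work until the result is sequenced, iterated, or reduced.
(def ed (eduction (filter even?) (map inc) (range 10)))
;; nothing is computed yet; ed is just the recipe plus its source
(reduce + 0 ed)  ;; => 25, done in a single pass when reduced
(into [] ed)     ;; => [1 3 5 7 9], recomputed on each use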
After making all of the ops for seqs, then remaking them for reducers, then remaking them for core.async, Rich said…. maybe we can make an abstraction that lets us write all these once
So after watching the reducers and transducers talks by Rich and reading the resources Alex mentioned, I think I got the basic ideas.
So my question was: how do reducers prevent the creation of intermediate collections?
The short answer is that instead of processing the whole collection at each step and then moving to the next step like seqs do, reducers take each element and process it all the way through.
This comes from the idea of reducers as a recipe for processing a collection, the collection only being processed when we apply it with the reduce function.
That’s why this piece of code
(require '[clojure.core.reducers :as r])
(->> (range 100)
     (r/filter even?)
     (r/map inc))
returns an object (the recipe) instead of a list.
As Alex pointed out, this will only be computed when we apply a reducing function to it with reduce:
(->> (range 100)
     (r/filter even?)
     (r/map inc)
     (reduce conj))
then we get a concrete collection back.
Just for comparison, this is the seq version:
(->> (range 100)
     (filter even?)
     (map inc))
returns a (lazy) sequence, with intermediate sequences created at each step.
P/s: instead of using (reduce conj) to reduce the reducers, it’s encouraged to use (r/fold conj). By using r/fold we will get the benefit of parallelism, because under the hood r/fold will partition the input into groups of 512 elements and apply reduce to each of them.
You may get parallelism if the coll is foldable (persistent vectors or maps)
So in this example, range does not return a foldable (it's actually a highly optimized self reducible object, but not foldable)
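A hedged sketch of that distinction (assumes clojure.core.reducers is required; the numbers are just illustrative):
(require '[clojure.core.reducers :as r])

;; (range 1000000) reduces efficiently but is not foldable, so this runs serially:
(r/fold + (r/map inc (range 1000000)))        ;; => 500000500000

;; a persistent vector is foldable: r/fold splits it into ~512-element chunks,
;; reduces each chunk (potentially in parallel) and combines the results with +
(r/fold + (r/map inc (vec (range 1000000))))  ;; => 500000500000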
Dealing with a tough library API and being forced to use with-local-vars. Came across this,
(let [x 9]
(with-local-vars [x x]
(var-get x)))
=> #<Var: --unnamed-->
(let [x 9]
(with-local-vars [x' x]
(var-get x')))
=> 9
It was a little unexpected. Is this the expected behaviour?
arguably a bug. the implementation of with-local-vars creates the vars first, and then assigns the initial values to them
https://github.com/clojure/clojure/blob/clojure-1.10.1/src/clj/clojure/core.clj#L4352 source
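A simplified sketch of that expansion for the first example above (not the verbatim source): the vars are created by the let first, so the init expression x already names the fresh var when the thread bindings are pushed.
;; (let [x 9] (with-local-vars [x x] (var-get x))) expands roughly to:
(let [x 9]
  (let [x (.setDynamic (clojure.lang.Var/create))]        ;; inner x now names the fresh var
    (clojure.lang.Var/pushThreadBindings (hash-map x x))  ;; init expr evaluated here, sees the fresh var
    (try
      (var-get x)  ;; => the var itself, hence #<Var: --unnamed-->
      (finally
        (clojure.lang.Var/popThreadBindings)))))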
I'm trying to use java.util.Base64.Decoder.
(import '[java.util Base64$Decoder Base64$Encoder])
Using reflect I can see a decode function:
...
{:name decode,
:return-type byte<>,
:declaring-class java.util.Base64$Decoder,
:parameter-types [java.lang.String],
:exception-types [],
:flags #{:public}}
...
however when trying to decode:
(Base64$Decoder/decode "hello")
I'm receiving the following error:
1. Caused by java.lang.IllegalArgumentException
No matching method decode found taking 1 args for class
java.util.Base64$Decoder
Why 😄?
Hmm. But the Base64$Decoder is a static class. 😮
Wait so how can I use it?
(:import
[java.nio.charset StandardCharsets]
[java.util Base64]))
(defn ^:private base64-decode
[verification-code]
(-> (.decode (Base64/getUrlDecoder) verification-code)
(String. StandardCharsets/UTF_8)))
Yeah. But why can't I call the public method of the static class?
Because relative to that class, that method is not static. And you're calling it as if it is.
Indeed, have a look at the source code for the class, both encode and decode are non-static methods
(.decode (Base64/getDecoder) "aGVsbG8=")
#object["[B" 0x59c13f3e "[B@59c13f3e"]
oh, that’s what i expected. the javadoc says it returns a byte array:
> byte[] decode(String src)
> Decodes a Base64 encoded String into a newly-allocated byte array using the Base64 encoding scheme.
Ok thank you so much. After reading the source code I have understood my mistake. Thanks!! @U11EL3P9U @U06TTFDB8 @U2FRKM4TW
just for practice’s sake, i tried using ..:
(.. (Base64/getDecoder) (decode "aGVsbG8="))
Are there comparisons of calva vs cursive vs cider (okay if it is 3 different articles) that people in the calva, cursive, and cider camps would all consider fair?
I'm not sure there is such a thing (all of them are constantly growing and influencing each other and they share common tooling in some cases) so any such article would be out of date quickly
From a very high level, speaking in generalizations, I think Cursive is a clear win if you are doing a mixture of Clojure and Java - the tools for both are great and cooperative. emacs gives you tremendous breadth and a hackable environment with decades of lisp heritage. Calva is taking advantage of a big, active ecosystem of dev tooling and has spent a lot of intentional time trying to create an env for new users.
But all of these provide the table stakes for Clojure - syntax support, structural editing, integrated repl, etc.
People can and do use all of them for professional work and it's largely a matter of personal history and preference which one people prefer
IntelliJ's 'jump me to the code of this member function of a java class' is second to none, but I never quite got that feature working in cider (last I tried). What does Calva bring to the table?
Millions of dollars of Microsoft investment
For instance, letting you almost seamlessly work from a cloud instance
So hypothetically, not that this is a good idea, I can drop a vscode-server on a live-server, run a repl remotely, and then connect to it locally ?
You don't need a vscode server to do that now - that works with all these tools using a remote repl
> There’s even an example of remote desktop UI development
Are you referring to this one, @U2FRKM4TW? https://github.com/PEZ/pirate-lang It will open up a machine with Java, clojure, VNC, and VS Code with Calva, in your browser. A quick jack-in and evaluation later and you are developing a Java UI app, all still in your browser.
Thanks for reminding me. I now thought it might be good to surface it on Calva’s Youtube channel. I added some music that a friend of mine has composed. https://clojurians.slack.com/archives/C8NUSGWG6/p1635494324107000
> it's largely a matter of personal history and preference which one people prefer
+1 to this ... I used to prefer emacs as a java dev ... I never got on with eclipse/intellij/netbeans/etc
CIDER will add first-class support for a variety of Java related features in a matter of weeks. It's finally coming along after some work that started last Dec :)

If you want to try it now (and are a Lein user) simply use our plugin https://github.com/clojure-emacs/enrich-classpath
I have considered something around this in clj-kondo, but I'm still not decided on bytecode parsing or source parsing
> What kind of Java features are you most excited about?
Simply having the basics working well for every dep (whether it's from JDK core or from arbitrary deps):
jump to source, show docstring of class/method/..., show arglist completions with full names (vs. arg0)
Can always grow from there
are you using a Java source parser for this or simply the doclet-based stuff from before?
The reason I'm asking is that I'm considering something like this for clj-kondo / lsp but I still haven't decided on what's the right approach
https://github.com/clojure-emacs/orchard/blob/master/src-newer-jdks/orchard/java/parser.clj is the main parser
> still not decided on bytecode parsing or source parsing
I'd imagine that not only types, but also docstrings and arglist names are erased from bytecode? So it seemed a no-brainer to me. Still, bytecode could be some sort of fallback...
jep ok, that's the doclet stuff. I've been using that in clj-kondo too but only for built-in stuff
I feel Java source will almost never be available in transitive Java deps. But both are probably ideal, no?
Also, Java doesn't have true docstrings, it's just a convention. Though you can include the Javadoc in the jar I believe, I don't know if all jars will. I wonder if IntelliJ and eclipse try to just show the comment above the method in the source or...
> I feel Java source will almost never be available in transitive Java deps.
certainly, those must be intentionally/separately fetched (which is what the plugin does)
> Also, Java doesn't have true docstrings, it's just a convention.
I'd characterize it as something stronger than a convention, as javadoc is part of the JDK and of the "culture" in the Java ecosystem. Much less ad hoc than, say, docstrings in Ruby (an example that comes to mind of having no docstring OR javadoc-like mechanism). Same for JS I believe?
I might take my happy emoji back if this only works in the purest form of open source development. If it can't work within an enterprise context with in-house tooling, repositories, and all that, it's not as exciting to me personally.
> for clj-kondo / lsp one would have to also have these sources available.
yes, you'd have to use enrich-classpath or what have you. could be async if you don't want to affect startup time.
But async doesn't play well with classpath
Nobody forces you to use the classpath though, you can consider files to be "just files"
enterprise doesn't write docstrings? or do you mean, libs of other enterprises for which you have no source?
This works mostly at Maven level
99% of one's deps (direct or transitive) are public
Private ones are also on Maven, so there's no difference
The latter, but also, even if I do have the source, I'm not sure if they can be pulled down, might be non-standard, sometimes some teams' source is permission-restricted, etc.
They're just a Maven artifact like any other .jar, I doubt there's such a sophisticated firewall that will block source jars
In fairness, I'm still excited. Just explaining my preference for bytecode, because bytecode just works in all cases. Source might always be a bit more finicky. Both would be ideal of course
> bytecode just works in all cases
Only for a subset of functionality, that might as well be fetched via reflection
I tried a little bit with ASM but there are the following issues:
• No reliable/precise line locations for methods, etc.
• How to show the source when you navigate to it? Disassemble? Hmm
In all cases:
• How to deal with overloaded methods and hierarchies? It can get quite complex to point to the right thing
Well, you both I think know more than I do at this point haha. And as you're doing the work I trust your judgements. It might be source will work better than I think.
I'll report back how it works for me once I use it. Probably a better way to deal with real limitations is working backwards from them instead of me making hypotheticals
> Hum, are we sure about that? I think that's an optional thing no? From a quick googling https://central.sonatype.org/publish/requirements/#supply-javadoc-and-sources
I heard that it's mandatory from Alex Miller; from my side I have not much experience pushing to Maven Central, which is the one with this requirement
Personally I am not missing the Java features from a real IDE in emacs much. When I need to deal with Java I look up the docs online. And this isn't something I do every day.
From my side I've been happy throughout 2021 with various bespoke Java integration for my Emacs. It's kind of a feedback loop, once you see its value first-hand one gets to use it more
When I think about both Eclipse and IntelliJ, I feel like when it's for code that is not my own Java source, it disassembles it, but it gives you the option to "attach source" for it.
Which makes me think that they probably just support both, favouring source if they can find it
Anyway, just wanted to say thank you to both of you for even looking into this problem and investing time and effort to make that aspect better. Those things have major impact I think on the ecosystem for Clojure.
I have the impression that people use (set! *warn-on-reflection* v) as if it was a per-ns option, however it's not?
~ $ clj
Clojure 1.10.3
user=> (ns foo)
nil
foo=> *warn-on-reflection*
false
foo=> (ns bar)
nil
bar=> (set! *warn-on-reflection* true)
true
bar=> (in-ns 'foo)
#object[clojure.lang.Namespace 0x6955cb39 "foo"]
foo=> *warn-on-reflection*
true
i.e. if your lib, module, whatever has such a set!, it can plausibly be a global side-effect, as far as the current thread is concerned? I'd fail to see how that is a clean thing to do, correct me if I'm wrong
Correct. If you do clojure -M -e '(set! *warn-on-reflection* true)' -e '(prn *warn-on-reflection*)' ... it will be true for whatever comes after that.
When you load a ns via require, it is effectively wrapped in a binding for some vars, including *warn-on-reflection*, which is popped after loading
which is why people set it after the ns form but before the code that follows it, which in effect makes it ns-specific
that binding mechanism is the same one that allows code to be required without affecting the current value of *ns*: if you eval a (ns ...) form it has the effect of mutating *ns*, but when you load the code via require (I think it is actually one of the primitives require is built on that does this, load or load-file) it is effectively
(binding [*ns* *ns*] (eval '(ns ...)))
so the value of *ns* after the require is the same as before
Though I'd started doing
(ns my-ns)
(def initial-x *x*)
(set! *x* ...)
;; My code
;; At the end
(set! *x* initial-x)
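For reference, the per-namespace idiom described above (illustrative file and names): the set! is in effect for the rest of the file, and because require loads the file inside a binding of *warn-on-reflection*, the previous value is restored once loading finishes.
;; src/my/lib.clj (hypothetical)
(ns my.lib)

(set! *warn-on-reflection* true)  ;; applies to the rest of this file while it is loaded

(defn upper [^String s]
  (.toUpperCase s))  ;; hinted, so no reflection warning here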
Could someone help my understanding of the all-ns function? The documentation says "Returns a sequence of all namespaces.", however I might be misunderstanding what "all" means in this context.
~ clj
Clojure 1.10.3
user=> (def x (->> (all-ns) (map ns-name) set))
#'user/x
user=> x
#{clojure.core.specs.alpha clojure.spec.alpha clojure.java.shell user clojure.java.javadoc clojure.core.server clojure.core.protocols clojure.java.browse clojure.core clojure.main clojure.pprint clojure.uuid clojure.edn clojure.spec.gen.alpha clojure.instant clojure.repl clojure.string clojure.walk}
user=> (require '[clojure.set :as set])
nil
user=> (def y (->> (all-ns) (map ns-name) set))
#'user/y
user=> (set/difference y x)
#{clojure.set}
I would have expected clojure.set to be present in the result of the first call to all-ns. I have also noticed that some of the clojure namespaces, e.g. clojure.data, are not present without explicitly requiring them... How would one programmatically find all namespaces?
if the code that defines a namespace hasn't been loaded, the namespace doesn't exist
Does that mean that one must know about the namespaces a priori in order to require/load them?
there are libraries that can do things like scan the filesystem for clojure files and then you can load them, but that is fairly brittle outside of dev
Won't scanning the classpath be fairly close? (although I still agree with @U0NCTKEV8 about it being brittle outside of dev)
I am wondering about when connecting via a prepl socket... could a user discover all possible namespaces?
clj-kondo can also give you a list of namespaces in a directory using its static analysis (this will also miss dynamically created ones)
and tools.namespace can scan the filesystem for clojure source files and parse ns forms
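A hedged sketch of that approach, assuming org.clojure/tools.namespace is on the classpath (using its clojure.tools.namespace.find namespace; the output shown is illustrative):
(require '[clojure.tools.namespace.find :as find]
         '[clojure.java.io :as io])

;; parse the ns forms under src/ without loading anything
(find/find-namespaces-in-dir (io/file "src"))
;; => (my.app.core my.app.util ...)  ; symbols you can then require/load as needed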
no - I was perhaps a bit fast and loose with my comment... of course dynamically created namespaces are not fully possible
the classpath is used to populate one classloader in the jvm, but clojure can load code from any classloader, and the classloader interface is basically a get-value-for-key one; there is no listing of keys to scan a classloader for the resources it provides
For Repl use, you can kind of do it by scanning some of the class loaders and classpath, but it isn't exhaustive of all setups
Hello everyone! Is it possible to catch a warning in clojure? (I saw that it is possible to define a warning handler in CLJS but my question goes toward Clojure)
"warning" is not a thing in Clojure - can you give an example?
Sure, we use Hikari to manage connections pool. This library has a connection leak detection mechanism that triggers a warning when a leak is detected. We want to handle such scenario. https://github.com/brettwooldridge/HikariCP/blob/dev/src/main/java/com/zaxxer/hikari/pool/ProxyLeakTask.java#L84
You can probably do something to intercept that specific logger, but is there an exception or some other condition triggered when a leak is detected?
none, that's the problem 😅
in general logging is not something to be caught or handled, it is something to go in a log
depending on the logging infrastructure being used you can do things like direct the logs to a file or email or whatever
ok thx
what about this? https://github.com/brettwooldridge/HikariCP/blob/ed2da5f1f4ef19f871fac12effc0b199706905dc/src/main/java/com/zaxxer/hikari/pool/HikariPool.java#L175
MiscTest.testLeakDetection: this one of their tests detects the actual log line by saying "when ProxyLeakTask logs something, log it to this stream", then reading the results later
TestElf.setSlf4jTargetStream: for the above, this one puts in a StringAppender - maybe a different "appender" could pick up the event as it happens?
HikariPool.leakTaskFactory: maybe this field can be swapped out, to use one with a better/subclassed "ProxyLeakTask"?
These ideas would all be very brittle from using reflection, though
https://github.com/brettwooldridge/HikariCP/blob/789ce3a76ad14521179f159ffa0bd2e904c17ff3/src/test/java/com/zaxxer/hikari/pool/MiscTest.java#L99
https://github.com/brettwooldridge/HikariCP/blob/f9b2c3d372b3f28911a872191ad62953c84bafa4/src/test/java/com/zaxxer/hikari/pool/TestElf.java#L120
https://github.com/brettwooldridge/HikariCP/blob/ed2da5f1f4ef19f871fac12effc0b199706905dc/src/main/java/com/zaxxer/hikari/pool/HikariPool.java#L80
Part of why this is outside the scope of the clojure language is most java logging is done via some abstract facade, which can be directed in some way or another to use some concrete logging library, which in turn is configured to log in some way either programmatically or with a config file
So it is possible to capture logging, but it is highly project-dependent, and often kind of brittle
So like, even if the hikari tests show it being done with slf4j, depending on your project, you might be bridging slf4j to log4j2, so what the test does won't work for you
If you've never sat down and explicitly figured out logging for your project, you may have multiple logging libraries, all doing their thing, all doing it slightly differently with whatever their defaults are
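If the project does route its logging through Logback, a hedged sketch of the appender idea mentioned earlier might look like the following. It assumes Logback is the concrete backend behind slf4j, that the warning goes through the com.zaxxer.hikari.pool.ProxyLeakTask logger, and handle-leak! is a made-up placeholder; as discussed above, this is brittle and setup-dependent.
(import '[org.slf4j LoggerFactory]
        '[ch.qos.logback.classic Logger]
        '[ch.qos.logback.core AppenderBase])

(defn handle-leak!
  "Placeholder for whatever the app should do when a leak is reported."
  [msg]
  (println "Connection leak detected:" msg))

;; attach a custom appender to the logger ProxyLeakTask uses and react to its events
(let [appender (proxy [AppenderBase] []
                 (append [event]
                   ;; event is a ch.qos.logback.classic.spi.ILoggingEvent
                   (handle-leak! (.getFormattedMessage event))))
      ^Logger logger (LoggerFactory/getLogger "com.zaxxer.hikari.pool.ProxyLeakTask")]
  (.setContext appender (LoggerFactory/getILoggerFactory))
  (.start appender)
  (.addAppender logger appender))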
You can increase the leak detection threshold in HikariCP, making it more unlikely to encounter this warning: https://github.com/brettwooldridge/HikariCP/blob/dev/src/main/java/com/zaxxer/hikari/HikariConfig.java#L221
TY guys. We have the threshold already set to a value > 2s. Intercepting logs is not an option, so I guess we'll have to look for a different approach to our problem.