
since i'm doing all of this zipper stuff internally in a single-threaded context, seems fine to use mutation actually


wtb a mutable zipper


You might be interested in lenses. They are similar to zippers, the naive version is very simple (so you don't need a library), and you can in theory get better performance because at each step you are applying a type- and field-specific view or update function


Can you recommend a good clojure lenses library? The two that come up in my google search are pretty old


(you get zipper like navigation by keeping a stack of setters to apply as you walk down applying viewers)
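A minimal sketch of the idea (all the helper names here, `lens`, `view`, `over`, `compose`, are made up for illustration, not any library's API):

```clojure
(require '[clojure.string :as str])

;; A lens pairs a viewer with an updater for one field.
;; Made-up names for illustration, not a library API.
(defn lens [getter setter] {:get getter :set setter})

(defn view [l m] ((:get l) m))

(defn over [l m f] ((:set l) m (f ((:get l) m))))

;; Composing lenses gives zipper-like descent: the composed setter is
;; effectively the "stack of setters" applied on the way back up.
(defn compose [outer inner]
  (lens (comp (:get inner) (:get outer))
        (fn [m v] ((:set outer) m ((:set inner) ((:get outer) m) v)))))

(def address (lens :address #(assoc %1 :address %2)))
(def city    (lens :city    #(assoc %1 :city %2)))

(view (compose address city) {:address {:city "Oslo"}})
;; => "Oslo"
(over (compose address city) {:address {:city "Oslo"}} str/upper-case)
;; => {:address {:city "OSLO"}}
```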


the data i’m writing against is (unfortunately) a generic tree of Clojure data. I suppose I could write a zipper that doesn’t parameterize the branch? et al. fns and might eke out some perf, is that kind of what you mean?


i’m trying to take an arbitrary tree and normalize it, so there’s only so much specificity

Ngoc Khuat02:10:59

So I just learned that a reducible does not create intermediate collections. I’m curious how it does that. Is it that a reducible will process each item through the whole function composition at once?


there's a few concepts that are close to what you might be talking about here but none actually called reducible. there are transducers which can prevent the creation of intermediate collections. And then there's clojure.lang.IReduceInit which allows you to determine how to consume something with reduce

Ngoc Khuat03:10:15

please correct me if I’m wrong. From my understanding a reducer is what prevents the creation of intermediate collections. Quote:
> A reducer is the combination of a reducible collection (a collection that knows how to reduce itself) with a reducing function (the “recipe” for what needs to be done during the reduction). The standard sequence operations are replaced with new versions that do not perform the operation but merely transform the reducing function. Execution of the operations is deferred until the final reduction is performed. This removes the intermediate results and lazy evaluation seen with sequences.
I think of transducers as a generic way to define a process that can work without the context of what the input is


oh i see. honestly, i've never used the reducers. And they predate transducers so i'm not sure there's a real need for them any longer

Ngoc Khuat03:10:39

oh I think I referred to the wrong concept here. A reducible is like a collection but it knows how to reduce itself, hence we often refer to it as a reducible collection. Please correct me if I’m wrong. So my question should be: how do reducers prevent the creation of intermediate collections?


read the guide for transducers. takes a while to get to grips with them. The thing that made it click for me was the simple definition:
> A transducer (sometimes referred to as xform or xf) is a transformation from one reducing function to another
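That definition can be made concrete in a few lines. Here is a hand-rolled version of `(map f)` as a transducer, a simplified sketch (the real `clojure.core/map` transducer does more bookkeeping):

```clojure
;; A transducer: a function that takes a reducing function rf and
;; returns a new reducing function. Hand-rolled (map f) for illustration.
(defn mapping [f]
  (fn [rf]
    (fn
      ([] (rf))                    ;; init arity
      ([acc] (rf acc))             ;; completion arity
      ([acc x] (rf acc (f x)))))) ;; step: transform x, then pass it on

(transduce (mapping inc) + 0 (range 5))
;; => 15, computed with no intermediate collection
```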


love the monoid in there 🙂


I’m also not super informed about them, but they look like a sort of precursor to transducers.

Alex Miller (Clojure team)03:10:56

It sounds like you're talking about reducers. The idea there is to have a collection that embeds a reducing function. You can apply another reducing function to the reducible collection and it will return a new reducible collection (same data, transformed reducing function). When you finally apply the reducing function with reduce, you do so once, on the original data

👍 1
Alex Miller (Clojure team)03:10:48

The downside is, you have to write that (pretty complex) reducing function transformation for every transformation

Alex Miller (Clojure team)03:10:14

Transducers are a further evolution that separates the transformation from the collection

Alex Miller (Clojure team)03:10:12

Those reducing function transformations are easier to write, and can be applied more generally not just to collections but to other things too

Alex Miller (Clojure team)03:10:49

Colls, seqs, channels, etc

Alex Miller (Clojure team)03:10:52

Reading those in order tells this story pretty well


interesting: (iteration xform data) from the announcement of transducers


> create a recipe for a transformation, which can be subsequently sequenced, iterated or reduced
I suppose that is eduction?
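Roughly, yes; the same xform recipe can be run in several contexts (a sketch using only core functions):

```clojure
(def xf (comp (filter even?) (map inc)))

(into [] xf (range 10))       ;; eagerly reduced into a vector: [1 3 5 7 9]
(sequence xf (range 10))      ;; lazily sequenced
(transduce xf + 0 (range 10)) ;; reduced directly: 25
(eduction xf (range 10))      ;; a recipe: re-applies xf each time it is
                              ;; reduced or iterated
```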

Alex Miller (Clojure team)03:10:14

After making all of the ops for seqs, then remaking them for reducers, then remaking them for core.async, Rich said…. maybe we can make an abstraction that lets us write all these once

Ngoc Khuat07:10:44

So after watching the reducers and transducers talks by Rich and reading the resources Alex mentioned, I think I got the basic ideas. So my question was: how do reducers prevent the creation of intermediate collections? The short answer is that instead of processing a collection all at once and then moving to the next step like seqs do, reducers take each element and process it all the way through. This comes with the idea of a reducer as a recipe to process a collection; the collection is only processed when we apply it with the reduce function.

👏 1
Ngoc Khuat07:10:25

That’s why this piece of code

(->> (range 100)
    (r/filter even?)
    (r/map inc))
returns an object (the recipe) instead of a realized collection. As Alex pointed out, this will only be computed when we apply the reducing function by doing
(->> (range 100)
    (r/filter even?)
    (r/map inc)
    (reduce conj []))
then we get a collection back. Just for comparison, this is the seq version
(->> (range 100)
    (filter even?)
    (map inc))
which returns a lazy seq. P.S.: instead of using (reduce conj []) to reduce the reducers, it’s encouraged to use r/fold. By using r/fold we get the benefit of parallelism, because under the hood r/fold will partition the input into groups of 512 elements and reduce each of them in parallel.

Alex Miller (Clojure team)12:10:07

You may get parallelism if the coll is foldable (persistent vectors or maps)

👍 1
Alex Miller (Clojure team)12:10:47

So in this example, range does not return a foldable (it's actually a highly optimized self reducible object, but not foldable)
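So to actually see fold's parallelism in that example, the input would need to be poured into a vector first (a sketch):

```clojure
(require '[clojure.core.reducers :as r])

;; range is reducible but not foldable; a vector is foldable, so r/fold
;; can split it into ~512-element chunks and reduce them in parallel.
;; + serves as both the reducing and the combining function here.
(r/fold + (r/map inc (r/filter even? (vec (range 100)))))
;; => 2500
```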


Dealing with a tough library API and being forced to use with-local-vars . Came across this,

(let [x 9]
  (with-local-vars [x x]
    (var-get x)))
=> #<Var: --unnamed-->

(let [x 9]
  (with-local-vars [x' x]
    (var-get x')))
=> 9
It was a little unexpected. Is this the expected behaviour?


arguably a bug. the implementation of with-local-vars creates the vars first, and then assigns the initial values to them

👍 1

so the second x in (with-local-vars [x x] will refer to the just-created var

Karol Wójcik12:10:27

I'm trying to use java.util.Base64.Decoder.

(import '[java.util Base64$Decoder Base64$Encoder])
Using reflect I can see a decode function:
{:name decode,
    :return-type byte<>,
    :declaring-class java.util.Base64$Decoder,
    :parameter-types [java.lang.String],
    :exception-types [],
    :flags #{:public}}
however when trying to decode:
(Base64$Decoder/decode "hello")
I'm receiving the following error:
1. Caused by java.lang.IllegalArgumentException
   No matching method decode found taking 1 args for class
Why 😄?


decode is not a static method.


Indeed, this is how I use it...

Karol Wójcik12:10:27

Hmm. But the Base64$Decoder is a static class. 😮

Karol Wójcik12:10:46

Wait so how can I use it?


(import '[java.nio.charset StandardCharsets]
        '[java.util Base64])

(defn ^:private base64-decode [verification-code]
  (-> (.decode (Base64/getUrlDecoder) verification-code)
      (String. StandardCharsets/UTF_8)))


something like that


I normally declare the getUrlDecoder as a def, like this:


(def ^:private url-safe-decoder (Base64/getUrlDecoder))

Karol Wójcik12:10:52

Yeah. But why can't I call the public method of the static class?


Because relative to that class, that method is not static. And you're calling it as if it is.


Indeed, have a look at the source code for the class, both encode and decode are non-static methods


they need an instance to work with


Here, for example, is where an instance is created


Which is then used here


Hope that helps! 🙂



(.decode (Base64/getDecoder) "aGVsbG8=")
#object["[B" 0x59c13f3e "[B@59c13f3e"]


It returns a byte array, you have to pass it to the String constructor to get a string back


oh, that’s what i expected. the javadoc says it returns a byte array: > byte[] decode(String src) > Decodes a Base64 encoded String into a newly-allocated byte array using the Base64 encoding scheme.

Karol Wójcik12:10:03

Ok thank you so much. After reading the source code I have understood my mistake. Thanks!! @U11EL3P9U @U06TTFDB8 @U2FRKM4TW


just for practice’s sake, i tried using .. :

(.. (Base64/getDecoder) (decode "aGVsbG8="))
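For completeness, a small round trip showing that encode/decode are instance methods reached through the static factory methods:

```clojure
(import '[java.util Base64])

;; getEncoder/getDecoder are static factories; encodeToString/decode are
;; instance methods on the objects they return.
(let [encoded (.encodeToString (Base64/getEncoder) (.getBytes "hello" "UTF-8"))
      decoded (String. (.decode (Base64/getDecoder) encoded) "UTF-8")]
  [encoded decoded])
;; => ["aGVsbG8=" "hello"]
```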


Are there comparisons of calva vs cursive vs cider (okay if it is 3 different articles) where the people in the calva, cursive, and cider camps all consider fair ?

Alex Miller (Clojure team)12:10:17

I'm not sure there is such a thing (all of them are constantly growing and influencing each other and they share common tooling in some cases) so any such article would be out of date quickly

Alex Miller (Clojure team)12:10:34

From a very high level, speaking in generalizations, I think Cursive is a clear win if you are doing a mixture of Clojure and Java - the tools for both are great and cooperative. emacs gives you tremendous breadth and a hackable environment with decades of lisp heritage. Calva is taking advantage of a big, active ecosystem of dev tooling and has spent a lot of intentional time trying to create an env for new users.

Alex Miller (Clojure team)12:10:36

But all of these provide the table stakes for Clojure - syntax support, structural editing, integrated repl, etc.

Alex Miller (Clojure team)12:10:23

People can and do use all of them for professional work and it's largely a matter of personal history and preference which one people prefer


IntelliJ's 'jump me to the code of this member function of a java class' is second to none, but I never quite got that feature working in cider (last I tried). What does Calva bring to the table?

Alex Miller (Clojure team)12:10:59

Millions of dollars of Microsoft investment

Alex Miller (Clojure team)12:10:23

For instance, letting you almost seamlessly work from a cloud instance


So hypothetically, not that this is a good idea, I can drop a vscode-server on a live-server, run a repl remotely, and then connect to it locally ?

Alex Miller (Clojure team)12:10:05

You don't need a vscode server to do that now - that works with all these tools using a remote repl


There's even an example of remote desktop UI development.


Oh right, even now, we can just ssh-tunnel a nrepl.


It's like Emacs Tramp if you ever used it, just polished and better tested.


I don't personally use it though, just what I was told


> There’s even an example of remote desktop UI development
Are you referring to this one, @U2FRKM4TW? It will open up a machine with Java, clojure, VNC, and VS Code with Calva, in your browser. A quick jack-in and evaluation later and you are developing a Java UI app, all still in your browser.


Here’s a demo.


Yes, that one. :)


Thanks for reminding me. I now thought it might be good to surface it on Calva’s Youtube channel. I added some music that a friend of mine has composed.

👍 1

> it's largely a matter of personal history and preference which one people prefer
+1 to this ... I used to prefer emacs as a java dev ... I never got on with eclipse/IntelliJ/netbeans/etc


CIDER will add first-class support for a variety of Java related features in a matter of weeks. It's finally coming along after some work that started last Dec :)

👍 1
partywombat 2

If you want to try it now (and are a Lein user) simply use our plugin


What kind of Java features are you most excited about?


I have considered something around this in clj-kondo, but I'm still not decided on bytecode parsing or source parsing


> What kind of Java features are you most excited about?
Simply having the basics working well for every dep (whether it's from JDK core or from arbitrary deps): jump to source, show docstring of class/method/..., show arglist completions with full names (vs. arg0). Can always grow from there

👀 3

are you using a Java source parser for this or simply the doclet-based stuff from before?


The reason I'm asking is that I'm considering something like this for clj-kondo / lsp but I still haven't decided on what's the right approach


> still not decided on bytecode parsing or source parsing
I'd imagine that not only types, but also docstrings and arglist names are erased from bytecode? So it seemed a no-brainer to me. Still, bytecode could be some sort of fallback...


jep ok, that's the doclet stuff. I've been using that in clj-kondo too but only for built-in stuff


I feel Java source will almost never be available in transitive Java deps. But both is probably ideal no?


Also, Java doesn't have true docstrings, it's just a convention. One can include the Javadoc in the jar I believe, but I don't know if all jars will. I wonder if IntelliJ and Eclipse try to just show the comment above the method in the source or...


> I feel Java source will almost never be available in transitive Java deps.
certainly, those must be intentionally/separately fetched (which is what the plugin does)


for clj-kondo / lsp one would have to also have these sources available.


How? I guess this only works for open source?


> Also, Java doesn't have true docstrings, it's just a convention.
I'd characterize it as something stronger than a convention, as javadoc is part of the JDK and part of the "culture" in the Java ecosystem. Much less ad hoc than, say, docstrings in Ruby (as an example that comes to mind of having no docstring OR javadoc-like mechanism). Same for JS I believe?


I guess one could use your plugin in the classpath command of lsp as well


I might take my happy emoji back if this only works in the purest form of open source development. If it can't work within an enterprise context with in house tooling, repositories, and all that it's not as exciting to me personally.


> for clj-kondo / lsp one would have to also have these sources available.
yes, you'd have to use enrich-classpath or what have you. Could be async if you don't want to affect startup time, but async doesn't play well with the classpath. Nobody forces you to use the classpath though, you can consider files to be "just files"


enterprise doesn't write docstrings? or do you mean, libs of other enterprises for which you have no source?


This works mostly at the Maven level. 99% of one's deps (direct or transitive) are public. Private ones are also on Maven, so there's no difference


I think 99% or more of the Java libs I've ever used were open source


The latter, but also, even if I do have the source, I'm not sure if they can be pulled down, might be non standard, sometimes some teams source is permission restricted, etc.


They're just a Maven artifact like any other .jar, I doubt there's such a sophisticated firewall that will block source jars


Maven requires artifacts to have source/javadoc counterparts


In fairness, I'm still excited. Just explaining my preference for bytecode, because bytecode just works in all cases. Source might always be a bit more finicky. Both would be ideal of course


Maven is not for source


It is, all .jars have counterpart with a "sources" classifier


> bytecode just works in all cases
Only for a subset of functionality, which might as well be fetched via reflection


Hum, are we sure about that? I think that's an optional thing no?


I tried a little bit with ASM but there are the following issues:
• No reliable/precise line locations for methods, etc.
• How to show the source when you navigate to it? Disassemble? Hmm
In all cases:
• How to deal with overloaded methods and hierarchies? It can get quite complex to point to the right thing


Well, you both I think know more than I do at this point haha. And as you're doing the work I trust your judgements. It might be source will work better than I think.


I'm not sure about either, it's far less trivial than clojure ;)


I'll report back how it works for me once I use it. Probably a better way to deal with real limitations is working backwards from them instead of me making hypotheticals

🙂 1

> Hum, are we sure about that? I think that's an optional thing no?
From a quick googling: I heard from Alex Miller that it's mandatory; from my side I don't have much experience pushing to Maven Central, which is the one with this requirement


Personally I am not missing the Java features from a real IDE in emacs much. When I need to deal with Java I look up the docs online. And this isn't something I do every day.


Similar for JS


From my side I've been happy throughout 2021 with various bespoke Java integrations for my Emacs. It's kind of a feedback loop: once you see its value first-hand, you get to use it more


yes, I can imagine.


When I think about both Eclipse and IntelliJ, I feel like when it's code that is not your own Java source, it disassembles it, but gives you the option to "attach source" for it.


Which makes me think that they probably just support both, favouring source if they can find it

👍 1

Anyway, just wanted to say thank you to both of you for even looking into this problem and investing time and effort to make that aspect better. Those things have major impact I think on the ecosystem for Clojure.

❤️ 1

I have the impression that people use (set! *warn-on-reflection* v) as if it were a per-ns option, however it's not:

~ $ clj
Clojure 1.10.3
user=> (ns foo)
nil
foo=> *warn-on-reflection*
false
foo=> (ns bar)
nil
bar=> (set! *warn-on-reflection* true)
true
bar=> (in-ns 'foo)
#object[clojure.lang.Namespace 0x6955cb39 "foo"]
foo=> *warn-on-reflection*
true
i.e. if your lib, module, whatever has such a set!, it can plausibly be a global side-effect, as far as the current thread is concerned? I fail to see how that is a clean thing to do, correct me if I'm wrong


Same for unchecked-math.


Correct. If you do clojure -M -e '(set! *warn-on-reflection* true)' -e '(prn *warn-on-reflection*)' ... it will be true for whatever comes after that.

👀 1

when you load a ns via require it is effectively wrapped in a binding for some vars, including *warn-on-reflection*, which is popped after loading


which is why people set it after the ns form but before the code that follows it, which in effect makes it ns specific
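So a typical source file looks something like this (`my.lib` is a made-up namespace for illustration):

```clojure
(ns my.lib) ;; hypothetical namespace

;; This set! happens inside the binding that require/load pushes,
;; so it is popped again when the file finishes loading: the flag
;; is effectively on only while this file compiles.
(set! *warn-on-reflection* true)

(defn len [^String s] ;; type-hinted, so no reflection warning is emitted
  (.length s))
```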


that binding mechanism is the same one that allows code to be required without affecting the current value of *ns*. If you eval a (ns ...) form it has the effect of mutating *ns*, but when you load the code via require (I think it is actually one of the primitives require is built on that does this, load or load-file) it is effectively

(binding [*ns* *ns*] (eval '(ns ...)))
so the value of *ns* after the require is the same as before

👀 1

verified thanks! I guess that the example I gave is fringe enough to not matter a lot


Interesting, makes sense


Though I'd started doing

(ns my-ns)
(def initial-x *x*)
(set! *x* ...)
;; My code
;; At the end
(set! *x* initial-x)


Could someone help my understanding of the all-ns function. The documentation says "Returns a sequence of all namespaces." , however I might be misunderstanding what "all" means in this context.

~ $ clj
Clojure 1.10.3
user=> (def x (->> (all-ns) (map ns-name) set))
#'user/x
user=> x
#{clojure.core.specs.alpha clojure.spec.alpha user clojure.core.server clojure.core.protocols clojure.core clojure.main clojure.pprint clojure.uuid clojure.edn clojure.spec.gen.alpha clojure.instant clojure.repl clojure.string clojure.walk}
user=> (require '[clojure.set :as set])
nil
user=> (def y (->> (all-ns) (map ns-name) set))
#'user/y
user=> (set/difference y x)
#{clojure.set}
I would have expected clojure.set to be present in the result of the first call to all-ns. I have also noticed that not all of the clojure namespaces (clojure.set, for example) are present without explicitly requiring them... How would one programmatically find all namespaces?


all loaded namespaces


if the code that defines a namespace hasn't been loaded, the namespace doesn't exist


Does that mean that one must know about the namespaces a priori in order to require/load them?


basically, yes


which is where a tool like tools.namespace comes in


there are libraries that can do things like scan the filesystem for clojure files and then you can load them, but that is fairly brittle outside of dev
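With tools.namespace on the classpath, that scan looks something like this (a sketch; the returned symbols depend entirely on what's on disk, and `"src"` is an assumed source directory):

```clojure
;; Requires the org.clojure/tools.namespace dependency.
(require '[clojure.tools.namespace.find :as find]
         '[clojure.java.io :as io])

;; Parses the ns forms of source files under src; nothing is loaded.
(find/find-namespaces-in-dir (io/file "src"))
;; e.g. => (my.app.core my.app.util ...) depending on the files present
```

Note this only finds namespaces declared in files, never dynamically created ones.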


Won't scanning the classpath be fairly close? (although I still agree with @U0NCTKEV8 about it being brittle outside of dev)


I am wondering about when connecting via a prepl socket... could a user discover all possible namespaces?


I mean namespaces can be dynamically created


apart from that 🙂


clj-kondo can also give you a list of namespaces in a directory using its static analysis (this will also miss dynamically created ones)


and tools namespace can scan the filesystem for clojure source files and parse ns forms


but that is not even close to "all possible namespaces"


no - I was perhaps a bit fast and loose with my comment... of course dynamically created namespaces are not fully possible


even the classpath is a simplifying assumption


the classpath is used to populate one classloader in the jvm, but clojure can load code from any classloader, and the classloader interface is a basic get-value-for-key one; there is no listing of keys to scan a classloader for the resources it provides

👍 1

For REPL use, you can kind of do it by scanning some of the class loaders and the classpath, but it isn't exhaustive of all setups

Clément Ronzon18:10:41

Hello everyone! Is it possible to catch a warning in clojure? (I saw that it is possible to define a warning handler in CLJS but my question goes toward Clojure)

Alex Miller (Clojure team)18:10:59

"warning" is not a thing in Clojure - can you give an example?

Clément Ronzon18:10:17

Sure, we use Hikari to manage a connection pool. This library has a connection leak detection mechanism that triggers a warning when a leak is detected. We want to handle such a scenario.

Ben Sless18:10:52

You can probably do something to intercept that specific logger, but is there an exception or some other condition triggered when a leak is detected?

Clément Ronzon18:10:46

none, that's the problem 😅


in general logging is not something to be caught or handled, it is something to go in a log


depending on the logging infrastructure being used you can do things like direct the logs to a file or email or whatever


but java logging is well outside the scope of clojure

Space Guy19:10:04

• MiscTest.testLeakDetection: one of their tests detects the actual log line by saying "when ProxyLeakTask logs something, log it to this stream", then reading the results later
• TestElf.setSlf4jTargetStream: for the above, this one puts in a StringAppender; maybe a different "appender" could pick up the event as it happens?
• HikariPool.leakTaskFactory: maybe this field can be swapped out, to use one with a better/subclassed ProxyLeakTask?
These ideas would all be very brittle from using reflection, though


Part of why this is outside the scope of the clojure language is most java logging is done via some abstract facade, which can be directed in some way or another to use some concrete logging library, which in turn is configured to log in some way either programmatically or with a config file


So it is possible to capture logging, but it is highly project dependent, and often kind of brittle


So like, even if the hikari tests show it being done with slf4j , depending on your project, you might be bridging slf4j to log4j2, so what the test does won't work for you


If you've never sat down and explicitly figured out logging for your project, you may have multiple logging libraries, all doing their thing, all doing it slightly differently with whatever their defaults are


You can increase the leak detection threshold in HikariCP, making it less likely to encounter this warning:


Or set it to 0 to disable it altogether.
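In Clojure that setting looks something like this (a sketch; requires the com.zaxxer/HikariCP dependency, and the JDBC URL is a placeholder):

```clojure
(import '[com.zaxxer.hikari HikariConfig])

;; leakDetectionThreshold is in milliseconds; 0 disables leak detection.
(doto (HikariConfig.)
  (.setJdbcUrl "jdbc:postgresql://localhost/app") ;; placeholder URL
  (.setLeakDetectionThreshold 0))
```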

Clément Ronzon14:10:26

TY guys. We have the threshold already set to a value > 2s. Intercepting logs is not an option, so I guess we'll have to look for a different approach to our problem.