
Clojure primitive vectors created using, e.g. (vector-of :long ...) contain small Java arrays of primitive values at the "leaves", each of which individually should be well positioned for cache locality. Different leaves might not be in adjacent cache lines, but except for cache line prefetch optimizations in processors that shouldn't make much difference.


I would consider using an array with the typical power of two increase in size and copy whenever it overflows


although my personal belief is that if you're ever at that level of tweaky performance you're really close to just making random noise anyway 🙂


And the compacting GC will also optimize cache hits, because it compacts in allocation order


since it sounds like the only reason you are avoiding primitive arrays is because you don't know what your final size will be


The switch to ArrayList over vector was pretty good. Now I'm only 3 times slower than the C++ impl


arraylist is also going to box any primitives


Ya, that's mostly what I think is the remaining reason for C++ being faster


arraylist if I recall internally does what I just suggested


it keeps an array, and doubles its size and copies the contents over if it needs more capacity


so if you do that bookkeeping yourself, you can do it with a primitive array instead of an Object array
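A minimal sketch of that bookkeeping over a primitive long-array; `ensure-capacity` is a hypothetical helper name, not a library function:

```clojure
;; sketch of ArrayList-style growth, but over an unboxed long-array;
;; ensure-capacity is a hypothetical helper, not part of any library
(defn ensure-capacity ^longs [^longs arr ^long n]
  (if (< n (alength arr))
    arr
    ;; double and copy on overflow, like ArrayList does internally
    (java.util.Arrays/copyOf arr (int (* 2 (alength arr))))))

;; you track the logical element count yourself alongside the array
(alength (ensure-capacity (long-array 2) 2))
;; => 4
```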


It's difficult to measure the full impact of boxing (or anything that allocates) because much of the overhead is deferred until GC time. It all depends on whether GC is a performance factor for your workload. But you can be sure you will reduce overhead to some degree by not boxing. I have done what @hiredman recommends with success in many situations.


Meh, I just went with an array of a size that I know is big enough, even though it's possibly too big


Learned a lot


java.util.Arrays.copyOf methods make copying easy, if you need it.
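For example, a quick sketch from Clojure interop:

```clojure
;; copyOf returns a new array of the requested length,
;; zero-padding (or truncating) as needed
(vec (java.util.Arrays/copyOf (long-array [1 2 3]) 5))
;; => [1 2 3 0 0]
```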


Thx, that's good to keep in mind


Oh, I see, ya so in Java it would be boxed, but of type Long.


The slowdown in Clojure is the call to uncheckedLongCast. I can't totally say the difference, but ArrayList<Object> get still is probably slower than ArrayList<Long>


With an int-array, that cast is gone, and it's much faster


an int-array removes both the cast and the boxing


I don't think a cast from Object to Long should have any cost at runtime?


it does, it will result in a checkcast jvm instruction, but the jvm is typically very good at optimizing that instruction

Alex Miller (Clojure team)00:01:40

There are security concerns with stuff like that


Well, the RT.uncheckedLongCast does have a cost in Clojure, about 47% of my profiled samples

Alex Miller (Clojure team)00:01:29

That sounds ... unlikely to be right


I mean, I am at under 200ms and need to sample with 1000 nanosecond to even see what's taking most time


So "having a cost" here I mean relatively

Alex Miller (Clojure team)02:01:01

Having spent many hours of my life looking at profilers, I find it’s good to have a healthy distrust of them :)


I have experimental "builder" support in org.clojure/ now on git and it would be great if some folks could test it before I cut an actual release with it in... add this to your deps.edn:

{:git/url ""
 :sha "20d8562bf451f41bf58ac0b38401648ba21232a3"}
and then
(require '[ :as builder])
(builder/to-java some.buildable.Thing {:foo "bar"})
By default it looks for some.buildable.Thing$Builder, constructs it, calls the various property methods on it (`(.foo builder "bar")` in this case) and then builds it (by default calling (.build builder) with the updated builder). It assumes each builder property method returns the updated builder instance (as opposed to regular which assumes setters return void and mutate the original instance).



(builder/to-java java.util.Locale {:language "en"})


(it supports both setLanguage and language style property methods, as long as they return the builder instance)

Joe Lane02:01:06

I've also seen some builders that use withLanguage. Not sure if you are interested in supporting that scenario.


@U0CJ19XAM interesting. Yes probably worth supporting somehow


Added support for that. We'll see if it causes any weirdness in the wild.


async-profiler upon which clj-async-profiler is based is nice in that it has fairly extensive documentation describing "safe point bias", which is at least one way that some other JVM profilers are misleading that async-profiler is not. But its documentation is very engineering-oriented towards caution, if not distrust.


What's an easy way to create a non-caching seq? Or something similar? Basically, a generator? But it just needs to remember the previous state, not all of it. I normally use iterate, but that remembers the whole sequence, no?


Many Java Iterator implementations are mutable objects, I thought, and thus are very limited in the amount of memory they allocate.


e.g. a single object with a few dozen bytes


Hum, eduction might work


Or maybe I'm just thinking of something where you call .next and get the next thing, forgetting what came before, but still continuing from where it left off...


so something built via iterate or reductions? d'oh, n/m those cache


Most Clojure seq implementations are immutable objects, so every call to next/rest returns a freshly allocated object. I am not sure exactly what you are looking for, but if you are looking to avoid allocating one object per sequence element, Clojure seq implementations are not what you want. They are good in that they are immutable, but not for minimizing memory allocation.


I'm not too sure what I want. It might not make sense. But I'm thinking I have a loop, and on every iteration, I want a number that increments by 10, let's say (this is an example). So sure, I can (loop [i 0] (let [num (* 10 i)] ... (recur (inc i))))


But, I'm thinking, well if I had an iterator that every time I looped I could call getNext on or something


You can also (recur (+ i 10)) in that example, without num


(iterate #(+ 10 %) 1) kind of does it, but it's not quite it, since it will cache all prior results


iterate doesn't cache, but anything with a handle to the head will


if you don't bind it, it will gc-churn but won't blow up space usage


that is, if you don't bind it in a way that escapes scope - the compiler is smart about letting lazy nodes be gc'd if it can prove they can't escape


When you say you are trying to avoid caching prior results, what exactly is it you are trying to avoid? The allocation of all of those objects, or something else? Because if allocating many objects that are collectable garbage before you get to the end of the sequence is good enough, then just don't "hold on to the head", and every element becomes collectable garbage as soon as you move past it.


Like: (loop [ten (iterate #(+ 10 %) 1)] (let [num (first ten)] ... (recur (next ten))))


Hum, yes, I think I was overthinking and assuming that this would not get garbage collected


I am nearly certain that every element of the return value of iterate is collectable garbage as soon as you go to the next loop iteration (in that code snippet as shown at least -- it is easy to hold on to the head of the sequence if you try to, and occasionally by surprise if you are not aware of the issue)


Right so I guess a seq is what I want 😛


What happens here: (first (next (eduction (map #(* 10 %)) (range))))


Are the parens matched the way you want there?

Alex Miller (Clojure team)03:01:26

Yes, that’s a transducer


Right, but I'm kind of confused. The call to next does what?


I haven't grokked eduction enough to answer, sorry.


And clearly this isn't eager?


next returns a clojure.core.Eduction

Alex Miller (Clojure team)03:01:35

Eductions are usually reduced


As an aside that may be irrelevant for why you are asking, but I am thinking about since you were worried about the performance differences between doing boxing/unboxing operations on primitive values earlier -- if you are that worried about performance, then creating Clojure sequences is probably something you also want to avoid, because of all of the object allocation, and later GC, they require.

Alex Miller (Clojure team)03:01:10

as the doc string says, it returns something that is "reducible/iterable" (notably, not seqable)

Alex Miller (Clojure team)03:01:11

actually, I guess it is seqable, I forgot about some of the details here


How would I iterate on it ?


haven't used korma in a while, what's the more "proper" way to do (korma/where "create_time < now()")


Can I call getNext or something?

Alex Miller (Clojure team)03:01:01

what are you actually trying to do?

Alex Miller (Clojure team)03:01:11

we're just taking a random walk afaict


I guess I'm just trying to loop over a collection applying a transducer but using loop/recur


sequence take a transducer

Alex Miller (Clojure team)04:01:01

why are you constraining to loop/recur?

Alex Miller (Clojure team)04:01:12

what are you actually trying to do?


(let [iter (.iterator (eduction (map #(* 10 %)) (range 10)))]
       (loop [element (when (.hasNext iter) (.next iter))]
         (println element)
         (when (.hasNext iter)
           (recur (.next iter)))))


iterators are from a different unpleasant galaxy and you should seek to avoid them when possible


Except perhaps when you are trying to eke out the last bit of performance of some inner loop 🙂


(run! println (eduction (map #(* 10 %)) (range 10)))


Well, I need to loop over something, and I need 3 different values at each iteration. And one of those values I want to use a chain of transformation to compute the next one, but avoid creating intermediate results like with seq.


which is partly what reduce is intended to help with, while hiding the mutable parts from your code.


So sometimes I do that with reduce where I wrap things in a vector and destructure, but the reduce that takes xf applies the transducer to the whole vector

Alex Miller (Clojure team)04:01:42

you can exit reduce early with reduced


I'm probably overengineering trying to not just do it all in the loop, which is what I have now. But I was wondering if I could extract some of that logic out


This is a contrived example, so mostly right now I'm just experimenting and exploring Clojure, not doing anything serious, but:

(loop [i 0 letters (map clojure.string/upper-case (cycle ["a" "b" "c" "d"]))]
        (when (< i 10)
          (let [next-letter (first letters)]
            (println i ":" next-letter)
            (recur (inc i) (next letters)))))


Now seeing if I can do this with a transducer instead of a seq


And it seems eduction is the trick:

(loop [i 0 letters (eduction (comp (filter #(or (= "b" %) (= "d" %)))
                                         (map clojure.string/upper-case))
                                   (cycle ["a" "b" "c" "d"]))]
        (when (< i 10)
          (let [next-letter (first letters)]
            (println i ":" next-letter)
            (recur (inc i) (next letters)))))


But I'm not sure what it does under the hood. Does it somehow just iterate statefully?

Alex Miller (Clojure team)04:01:06

but you don't even need that, you can just use transduce for this


So it be the same as this:

(let [iter (.iterator (eduction (comp (filter #(or (= "b" %) (= "d" %)))
                                         (map clojure.string/upper-case))
                                   (cycle ["a" "b" "c" "d"])))]
       (loop [i 0]
        (when (< i 10)
          (let [next-letter (when (.hasNext iter) (.next iter))]
            (println i ":" next-letter)
            (recur (inc i))))))


The (when (< i 10) ...) is basically a take. And map-indexed gives you access to the index. So, you can push almost all of the logic into transducers and use run! (which uses reduce) over an eduction, like this:

(let [fmt-line #(str % " : " (clojure.string/upper-case %2))
      xform (comp (filter #(or (= "b" %) (= "d" %)))
                  (take 10)
                  (map-indexed fmt-line))]
  (->> (cycle ["a" "b" "c" "d"])
       (eduction xform)
       (run! println)))
Won't help with boxed nums though.


This is fully eager, produces no intermediate collections. (Except for the one that cycle produces, of course)


Thanks. Ya my actual algorithm was more complex and the i wasn't simply an incremental count. Also I was doing math with it inside the loop and wanted it to be primitive


The best solution I found was to wrap the xf inside sequence


Using the iterator was still a bit faster, but negligible compared to its extra ugliness


And either way, having an xf proved too slow for me. I had to instead compute the transformation myself as part of my loop in a more optimal way


Though I wonder if a custom transducer could be written for it. Didn't try that

Alex Miller (Clojure team)04:01:36

god, this is awful, make it stop :)

Alex Miller (Clojure team)04:01:13

you don't need eduction here


lol sorry, getting my hands dirty over here

Alex Miller (Clojure team)04:01:18

transduce takes an iterable source (like cycle), looks at each element in turn, applies a transducer chain, and passes the results to a final function f that can do whatever you want


But I have i here as well

Alex Miller (Clojure team)04:01:42

you can combine transducers for map, filter, take, and map-indexed to do everything here
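A sketch of that combination on the toy letters example, returning a vector rather than printing, and taking 4 to keep it small:

```clojure
(require '[clojure.string :as string])

(transduce
  (comp (filter #{"b" "d"})
        (map string/upper-case)
        (take 4)                 ; reduced halts the infinite cycle
        (map-indexed (fn [i letter] (str i " : " letter))))
  conj
  []
  (cycle ["a" "b" "c" "d"]))
;; => ["0 : B" "1 : D" "2 : B" "3 : D"]
```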


If this was some more complicated i, where, say, it checks i against the outer loop's y?

Alex Miller (Clojure team)04:01:24

then it's a constant, so who cares


Well, so part of what I was optimizing, math is applied to i as well, and I needed that to be primitive

Alex Miller (Clojure team)04:01:12

none of that is going to work

Alex Miller (Clojure team)04:01:22

transducers apply functions and anything will be boxed

Alex Miller (Clojure team)04:01:37

if you want primitive loops, you have to loop/recur


Right, I'm talking inside my loop/recur, I have other bindings which I want to have as primitive


But one of the binding is basically iterating over a transducer along the way

Alex Miller (Clojure team)04:01:28

well, then probably I'd use your eduction with first/next above (not the java interop to iterator blech) or just use seqs - you're not holding onto the head, so who cares

Alex Miller (Clojure team)04:01:00

seqs are designed to gc behind the iteration if you're not holding the head

Alex Miller (Clojure team)04:01:41

or sequence if you want to use transducers


Ya, I'll probably use seq. I was just trying to see if transducers can be used as well. So if not for this, what would be a good use case for eduction ?

Alex Miller (Clojure team)04:01:36

it's mostly used to return a delayed reduction over an external resource (file, result set, etc)

Alex Miller (Clojure team)04:01:19

so you can return it from a function without it having done any work yet, then reduce it elsewhere, and know that when the reduce is complete, the resource is no longer needed and can be closed
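A sketch of that pattern: wrap the resource in a reducible so nothing is opened until the eduction is actually reduced, and the reader is closed when the reduce completes. `lines-reducible` and `long-lines` are hypothetical names, and the line-length threshold is made up:

```clojure
(require '[clojure.java.io :as io])

(defn lines-reducible [path]
  (reify clojure.lang.IReduceInit
    (reduce [_ f init]
      (with-open [rdr (io/reader path)]      ; opened only when reduced...
        (reduce f init (line-seq rdr))))))   ; ...closed when reduce finishes

(defn long-lines [path]
  ;; no work happens here; the caller reduces this eduction later
  (eduction (filter #(> (count %) 10)) (lines-reducible path)))
```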


anyone ever played with javazoom.jl.Player? Can't seem to get any audio output from playing an mp3. Ideally looking for some java that can play and edit mp3s if anyone knows of anything offhand

Alex Miller (Clojure team)04:01:20

when used with first and next, the eduction is still getting put into a sequence, so I'm just not sure it's buying you much


Okay I see. Ya that makes more sense


FYI, eduction with first/next seems about 60% faster than using plain sequences in my case


From 10s to 4s


And ugly .next iterator mutation is 3s


Using sequence with xf instead of eduction yields similar performance, about 4s


So I'm guessing there's performance benefit to using sequence with transducer over just using sequences directly


My best guess is that it reduces the number of intermediate seqs, since the transducer steps don't need one, even though there is one between calls to next


Cool, good to know.


(this has been a fascinating discussion this evening so thank you @didibus and everyone else!)


Hello guys, I try to use cljs-ajax for file uploading. Everything seems to work, but on the serverside i get {... :form-params {}, :websocket? false, :session/key nil, :query-params {}, :content-type "multipart/form-data; boundary=----WebKitFormBoundaryoWBHFfHqkdAWW8Tt", :character-encoding "utf8", :uri "/send-files", :server-name "localhost", :query-string nil, :path-params {}, :body #object[org.httpkit.BytesInputStream 0x22b4bc94 "BytesInputStream[len=100003]"], :multipart-params {}, :scheme :http, :request-method :post, :session {}...} instead of

  {"file" {:filename     "words.txt"
           :content-type "text/plain"
           :tempfile     #object[.File ...]
           :size         51}}
I use multipart-params, what am I doing wrong? Or how can I convert BytesInputStream into something usable? I want to save multiple files. Any help is appreciated


@paul931224 It's been a long time since I uploaded a file, but IIRC tempfile is just a File object where the actual content is stored in a temp folder. You can just use that as it is. For instance you could create an input-stream from a file object as shown here:


I don’t understand it fully; is object[org.httpkit.BytesInputStream 0x22b4bc94 "BytesInputStream[len=100003]"] a file object? Where is it stored? I did it a while ago as well, but I remember getting a map with the name, size and temporary file; now I only get a BytesInputStream


Ah sorry, I thought you got the request map from your second example. So you have a org.httpkit.BytesInputStream instead. According to the code: this extends .InputStream so you can do everything with it you can do with an inputstream. For instance use the example I pasted before:


The test server makes use of middlewares which includes wrap-multipart-params So I guess that you are missing the multipart params parser in your middleware chain.


Well, meanwhile I am trying to save my files from the input-stream, still, getting the tempfile location would be easier, so here is my code. I didn’t miss the multipart-params wrapper, but it still comes back only with the body, the params, and multipart params keys are empty maps


I mean, wrap-defaults contains the multipart-params wrapper, but I tried to add it explicitly, didn’t work either

Ramon Rios10:01:10

Hello guys. Is it possible to loop into a map of maps in clojure? I have a map of addresses and i would like to, for each element on this map, add a new key/value to each address on addresses.

Ramon Rios10:01:17

foreach (address : addresses) {
    address.put(newKey, "NewValue");
}
Kind of it. Sorry for OO thinking.


Assuming the addresses map contains some arbitrary keys with actual addresses being the values, one way to do it would be:

(reduce (fn [acc k]
          (assoc-in acc [k new-key] "NewValue"))
        addresses (keys addresses))

Ramon Rios10:01:49

{:facturation {:street "name"}
 :main {:street "name"}}

Ramon Rios10:01:59

The map is kinda like this


Yep, should work.


I would use reduce-kv over a map


If you do some operations within deep nested structures often, you may want to use


With it, the code, I think, would be:

(s/transform s/MAP-VALS #(assoc % new-key "NewValue") addresses)


@UCPS050BV What's the difference? You would have two assoc instead of one assoc-in.


hmm, that's right


(reduce-kv
  (fn [acc k v]
    (assoc acc k (assoc v :new-key "NewValue")))
  {} addresses)


Although in the general case, you can't replace the first usage of addresses with just {}. But we've seen the example data, so that should be OK.


BTW another pretty common way is via into:

(into (empty addresses)
      (map (fn [[k a]]
             [k (assoc a new-key "NewValue")]))


Mapping over vals is a pretty common thing to do, lots of util libraries have a "map-vals" function


Medley is one of the more popular ones I think:

(medley/map-vals #(assoc % :new-key "new-value") addresses)


What you want is to get a map where a particular key-value pair is associated into every value of the input map. Let's say, we have a function f that associates said key-value pair into a map (`#(assoc % newKey "NewValue")` does that), then it's just a matter of applying this function to every value in the input map. Maps in Clojure can be treated as sequences of key value pairs. So you can use map to do that, like this: (map (fn [[k v]] [k (f v)]) input-map). Note we're using destructuring to get to the key and value in the kv-pair. What this returns is a sequence of kv-pairs with the transformation done. Now all we need to do is to pour these kv-pairs into a map, which we can do with into, making the whole thing this: (into {} (map (fn [[k v]] [k (assoc v newKey "NewValue")]) input-map)). Now this works, but in case your input map is a sorted map, this code will return a hashmap. In order to preserve the exact type. What we can do, is to pour into a empty instance of the same type as your input map. Now the final version is this:

(->> input-map
     (map (fn [[k v]]
            [k (assoc v newKey "NewValue")]))
     (into (empty input-map)))
If you're doing this kind of thing a lot, then you can extract map-vals as a separate function of course.


@U883WCP5Z A small note - in the code block using ->>, it's better to replace it with a transducer.
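That would look something like this, a sketch of the same update using into with a transducer (so no intermediate lazy sequence is realized), extracted as the map-vals helper mentioned earlier:

```clojure
;; map-vals applies f to every value of a map, preserving the map type
(defn map-vals [f m]
  (into (empty m) (map (fn [[k v]] [k (f v)])) m))

(map-vals #(assoc % :new-key "NewValue")
          {:facturation {:street "name"}
           :main        {:street "name"}})
;; => {:facturation {:street "name", :new-key "NewValue"},
;;     :main {:street "name", :new-key "NewValue"}}
```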


I thought transducers are a bit too advanced, and unless somebody is worried about performance, or re-usability, not tend to suggest them.


I’m educating myself with some 4clojure problems. Things were smooth until I hit something like a wall. Not going to spoil which problem it is, but I find myself wanting to un-partition a sequence that I have first partitioned with sliding windows of two (so (partition 2 1 coll)), and then filtered. Throw me a hint, anyone, please. ❤️


(reduce concat coll) ? just a tip, I may be wrong :D


I’ve tried that. 😃 It leaves me with the duplicates. ((x y) (y z)) => ( x y y z). I want it to become (x y z). Let’s say I started with (x y z), then partitioned it so that I have ((x y) (y z)). I want to get back to (x y z).


well it is one step from duplicates (set (reduce concat coll))


at first I didn’t understand what “sliding windows of two” means, but it is clear now 😄


Cool. I didn’t know what to call it. 😃


oh wait, set won’t be good, I guess you may want to keep duplicates, just not the ones which are one after another?


Yeah. I need to get rid of only the duplicates that my partitioning has created.


Maybe this does it: (fn [c] [(first c) (map second (rest c))]), not sure if it is general enough…


You almost got it:

(concat (first coll)
        (map second (rest coll)))


(conj (map first coll) (last (last coll)))


oh yes, almost the same solutions 😄


Yours changes the order though.


conj on seq adds at the front.


why rest though? dont you need second element from the first in the coll?


I get that second element in (first coll).


An alternative solution that returns a vector:

(into (vec (first coll))
      (map second)
      (rest coll))


oh, my bad, I imagined (first (first coll)) , also you are totally right with the order


well I guess it won’t get nicer, but I think it is general enough


Ah, thanks. Let’s see if this unlocks the problem for me. I have a feeling I am making it more complicated than it should be….


It definitely sounds like it, given that you first partition then, then de-partition. :) Feels like maybe transducers would be able to help you.


Also check out dedupe and distinct.
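For example:

```clojure
(dedupe   [1 1 2 2 3 1]) ;; => (1 2 3 1), drops only consecutive duplicates
(distinct [1 1 2 2 3 1]) ;; => (1 2 3),   drops all duplicates, keeping order
```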


I also filter between those. 😃



(fn [input]
  (let [length-groups
        (->> input
             (partition 2 1)
             (partition-by (partial apply <))
             (filter #(apply < (first %)))
             (group-by count))]
    (if-not (empty? length-groups)
      (->> (get length-groups (apply max (keys length-groups)))
           (#(concat (first %) (map second (rest %))))))))


I’ll leave it up to the reader to figure out what problem it solves, lol.


Looking at the other solutions, mine doesn’t seem very complicated. Probably just a quite tricky problem.


Don't use ->> with (#(...)). Just use -> and as->.


Also, (not (empty? ...)) is an anti-pattern. Replace it with just (seq ...). Or maybe use not-empty when you create the collection itself. Something like (if-some [coll (not-empty ...)] ...).


Thanks. Will try some of the suggestions for not-empty. Why is it an anti-pattern the way I did it? Also, what’s the problem with ->> together with (#(…))?


Just look at the implementation of empty?. :) Also, try using any CLJ linter on (not (empty? ...)). By using (#(...)), you create a whole new function for the sole purpose of immediately applying it. There's no point in doing that when you can just run its body without creating the function.


clj-kondo does not complain about my current construct, but will try with the spelled out one now…


Yeah, it suggested (seq ..) , but gave no reason why.


Like so now:

(fn [input]
  (let [length-groups (not-empty (->> input
                                      (partition 2 1)
                                      (partition-by (partial apply <))
                                      (filter #(apply < (first %)))
                                      (group-by count)))]
    (if length-groups
      (as-> (get length-groups (apply max (keys length-groups))) v
        (first v)
        (concat (first v) (map second (rest v)))))))
Feels like I could get rid of the let now, somehow, but maybe that’s just a mirage.


Oh, it's even in the docstring of empty?: "Please use the idiom (seq x) rather than (not (empty? x))". Yes, you can absolutely get rid of let by replacing if with if-some and moving the bindings block inside it.
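A sketch of that rearrangement; `longest-run` is a hypothetical name, and returning [] for the no-increasing-pairs case is an assumption about the desired result:

```clojure
;; the let folds into if-some: bind the not-empty result, or fall through
(defn longest-run [input]
  (if-some [length-groups (->> input
                               (partition 2 1)
                               (partition-by (partial apply <))
                               (filter #(apply < (first %)))
                               (group-by count)
                               not-empty)]
    (let [runs (get length-groups (apply max (keys length-groups)))
          run  (first runs)]
      ;; de-partition the winning run of pairs back into a flat seq
      (concat (first run) (map second (rest run))))
    []))

(longest-run [1 0 1 2 3 0 4 5])
;; => (0 1 2 3)
```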


Also, I meant using a combination of -> and as->:

(-> length-groups
    (get (reduce max (keys length-groups)))
    (as-> $ (concat (first $) (map second (rest $)))))


> Feels like maybe transducers would be able to help you. I saw in some other thread that the presence of ->> would indicate that transducer could be used. But I don’t know anything about transducers. Will have to investigate first.


Although, it doesn't really simplify it. Maybe it's worth extracting (concat (first $) (map second (rest $))) into its own function, like de-partition or something.


$ is new to me as well. Interesting!


Yeah, it's just a symbol that I see as used with as-> the most.


Ah, yeah, I thought it was special syntax there a while…. Thanks for clarifying.


java.lang.RuntimeException: Unable to resolve symbol: as-> in this context. Too old 4Clojure…


Yep, that's exactly why I dropped it.

dazld13:01:44

this just caught me out a little - was writing try/catch code, and not having a portable catch-everything meant digging around a bit. :default or similar would be a helpful idiom.
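On the JVM the catch-everything idiom is catching Throwable (ClojureScript uses :default); a small sketch:

```clojure
;; Throwable is the root of everything throwable on the JVM,
;; so this catches both Exceptions and Errors
(try
  (throw (ex-info "boom" {:cause :demo}))
  (catch Throwable t
    (ex-message t)))
;; => "boom"
```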


Hi! How can I convert a Java Future<T> to a clojure future/promise?


you can @ / deref Java Futures @pablore


oh that easy??


that easy


gotta love clojure
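A sketch: submit a Callable to an executor and deref the returned java.util.concurrent.Future directly.

```clojure
(import '(java.util.concurrent Executors Callable))

(let [pool (Executors/newSingleThreadExecutor)
      ;; Clojure fns implement Callable; the hint picks the right overload
      fut  (.submit pool ^Callable (fn [] (* 6 7)))]
  (try
    @fut          ; deref blocks until the Future completes
    (finally (.shutdown pool))))
;; => 42
```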


I feel like I'm missing something obvious. I'm trying to update *data-readers* similar to how it's done in clojure.core, but alter-var-root, while apparently returning the correct value, doesn't change the map used to resolve the reader tag. Yet that value is getting stored somewhere, since a second call shows the change I made is getting passed to alter-var-root. set! seems to work, but I'm assuming there's some reason clojure.core uses alter-var-root instead.


you can think of bindings as a stack - the root binding, followed by all the thread-local bindings above


alter-var-root swaps out the root binding


but *data-readers* is a var that is bound in your REPL by clojure.main


user=> (thread-bound? #'*data-readers*)


I see, thanks. I'm working on a macro that defines tagged literals. Nice-to-have would be updating *data-readers* in the REPL, rather than having to change the data_readers.clj file and restarting. Is there any way to achieve this? Seems like clojure.main would have to run again no matter what to establish the thread-local bindings.

Alex Miller (Clojure team)15:01:11

(set! *data-readers* (merge *data-readers* { ... }))

Alex Miller (Clojure team)15:01:38

this only works because *data-readers* has been bound in clojure.main

Alex Miller (Clojure team)15:01:00

that may not be true of every repl environment, although I think it is in most


It only seems to work if the reader tag is accessed in the same context (thread?) as set! was called. I can have a namespace that sets *data-readers* and uses the tag, and calling require will run that code fine. But *data-readers* is unchanged in user (or whatever ns is current), and attempts to use the tag fail.


am I the only one terrified by the idea of dynamically adjusting data readers in a macro? 😛

Alex Miller (Clojure team)15:01:59

generally, macros that alter the runtime are a bad idea as compile time may be totally removed from execution time if AOT'ed


I know. that is why I'm terrified 😛


Hmmm, good point. I assume that's the rationale for data_readers.clj?


it's fine if the macro emits code that alters the runtime


if the macro itself is side-effecting that is not fine

Alex Miller (Clojure team)15:01:31

yes, it will only work in the main repl thread

Alex Miller (Clojure team)15:01:48

you're overriding the local thread binding, not the global root

Alex Miller (Clojure team)15:01:38

you started with alter-var-root, and you could do both - alter the data-readers root, and set! the current thread
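A sketch of doing both; the my/point tag and its reader fn are hypothetical, for illustration only:

```clojure
;; hypothetical tag and reader fn
(defn read-point [v] {:point v})

;; alter the root binding (what other threads and later code see)
(alter-var-root #'*data-readers* assoc 'my/point read-point)

;; and, where clojure.main has thread-bound *data-readers* (the REPL),
;; also set! the thread binding so the current REPL thread sees it
(when (thread-bound? #'*data-readers*)
  (set! *data-readers* (assoc *data-readers* 'my/point read-point)))

(read-string "#my/point [1 2]")
;; => {:point [1 2]}
```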

Alex Miller (Clojure team)15:01:17

it depends what your actual goal is here

Alex Miller (Clojure team)15:01:44

you might also want to look instead at *default-data-reader-fn* (a fallback function used if a tag isn't found)


*default-data-reader-fn* is also bound by clojure.main, so I assume it would have the same issue.


Thanks. Just seeing if there was way to define the reader tag dynamically, without modifying data_readers.clj , and that would work everywhere. May not be worth the effort. For developing custom tags, set! is probably sufficient, since you can iterate and test the definition in a namespace until happy, then update data_readers.clj.


Is there any language that offers a clojure like REPL and also types? With Clojure, I find it difficult to refactor code at > 5k LOC; but in compiled languages, waiting 5-10 seconds to recompile after each edit loses the 'immediate feedback'


I only know of other Lisps that do this. And both of them are optionally typed, which doesn't necessarily solve your issue.


The challenge is that for any mandatory full program type checking the full program has to always be known. But the REPL doesn't know the full program. It knows only form by form. So the best you could do is a REPL that is project aware somehow. But then, the second issue is that type annotations make it really annoying to use the REPL, and type inference on partial code is hard as well

Alex Miller (Clojure team)15:01:54

why do you find it difficult to refactor code?


@alexmiller: In OCaml / Haskell / Rust / C++, if I add or remove a field from a struct/class, the compiler will complain at me and force me to fix the corresponding lines that (1) construct the struct without the field, and (2) use the field of the struct. In Clojure, the code 'compiles' fine, but we get silent nils that blow up at runtime -- often not where the nil should have been initialized / read, but later on, after it's been passed around in various functions.

Alex Miller (Clojure team)15:01:01

By silent nils it sounds like you have existing code that is extracting fields that no longer exist. Tools exist in Cursive, CIDER, etc to find usages of a field - have you tried using those to find uses before removal?


There's also a bit of the fact that you'd now be writing OOP-ish code. Or more specifically, ADT-ish code. Since you now need custom data types to model your data instead of data-structures


If you're willing to go there (and I don't think it's a good idea), I wonder... I think you could create a macro that generates a struct like that with knowledge of its keys, and corresponding get-entity-key and set-entity-key functions for it. Now if you remove a key, the corresponding get-entity-key and set-entity-key fns would no longer exist, and that would cause a compile error

Alex Miller (Clojure team)18:01:03

Imo, you’re then giving up a lot of the Clojure value prop


Are there any recommendations to do http calls over unix sockets like we do in the following curl command from clojure?

$ curl --unix-socket /var/run/docker.sock 
    "Containers": -1,
    "Created": 1577395270,
    "Id": "sha256:ce6c1e7ac56533e2742030f033cf0d8cf0adc996c7bb87453eb5adc266b2ef2e",
    "Labels": null,
    "ParentId": "",
    "RepoDigests": [
    "RepoTags": [
    "SharedSize": -1,
    "Size": 1461385,
    "VirtualSize": 1461385
clj-http as far as i understand doesn't support unix sockets?


@dpsutton: Is there any particular youtube video / talk that shows off the ELM repl? I'm having trouble finding anything that matches the experience of figwheel / devcards / ...


@alexmiller: adding to the point about +/- fields on structs: similarly, if I reorder args in a function, or add/remove args, the compiler once again tells me at compile time and forces me to fix it. In Clojure, these would be runtime errors, often in the wrong place, if the args are maps, which can sort of be interchanged, with many reads becoming nils.


Add/remove args is a compile error in Clojure as well.

Alex Miller (Clojure team)17:01:15

you may not see compilation until runtime though


Ya, I have an AOT compile step used only as a linter, specifically for this. Also clj-kondo should catch wrong arity inside the editor as well

Alex Miller (Clojure team)15:01:31

specs can help with some of that


Spec is great in that it can test conditions that a standard type system can't. However, when I tried using spec, I found myself often writing specs of the form: this function takes arg1 of shape A, arg2 of shape B, arg3 of shape C, and produces a result of shape D -- in which case, if I just used a typed language and defined types/unions A, B, C, and D, the compiler could handle it automatically for me
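For concreteness, the pattern being described usually comes out as an `s/fdef` like the sketch below (all the specs and key names here are invented for illustration):

```clojure
;; One spec per argument shape plus a :ret spec -- the "arg1 of shape A,
;; arg2 of shape B, ... result of shape D" pattern, spelled out by hand.
(require '[clojure.spec.alpha :as s])

(s/def ::id int?)
(s/def ::a (s/keys :req-un [::id]))   ; shape A: a map with an :id
(s/def ::b string?)                   ; shape B: a string label
(s/def ::c (s/coll-of int?))          ; shape C: a collection of ints

(s/fdef process
  :args (s/cat :a ::a :b ::b :c ::c)
  :ret map?)                          ; shape D

(defn process [a b c]
  (assoc a :label b :total (reduce + c)))
```

In dev you would enable checking with `(clojure.spec.test.alpha/instrument \`process)`; unlike a static type checker, none of this is verified until the function is actually called.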

Luke Schubert16:01:00

defn-spec/orchestra is nice for what you are describing


Heavy speccing (and a slightly unconventional kind: "closed keys" checks, etc.) can solve 80% of the problem. The remaining 20% can be killed by the realisation that this popular notion of "refactoring" in reality tends to mean "mutate this big complex ball into another complex ball". If your program is made of simple atomic things, refactoring simply means a different composition of those atoms. related:


As much as I agree in principle this is not how it happens in practice and even with well-written code you will sometimes miss a thing or two. Tests help in that case. That said, this is a dynamically typed programming language. It's good in the small, and I'm sure there are plenty of places that use it in the large (i.e., big single codebases) but like all other dynamically typed languages it will hit a ceiling in size wrt "moldability" beyond which making larger changes to the codebase will be more difficult than to a statically typed one.


interestingly, IMO hitting that ceiling is a smell per se, i.e. if a codebase becomes too large, it failed at modularity. IOW, I'd say that there's no such thing as a big atom. One is either programming with atoms or not


I agree. I guess I am simply trying to point out that there are plenty of such failures out in the wild, even in Clojure land, and that dealing with big failures is easier when you have a static type system to hand.


Respected! Personally I try to avoid complexities upfront at all costs, but using a static lang is also an option.


I disagree


I deal with big Scala, Java and Kotlin failures a lot, and every time I deal with them similarly to how I deal with Clojure failures (though I've never had one of those yet). Which is: you need to create an Anti-Corruption Layer and slowly replace the component with a new one


Never have I fixed a Scala or Java failure by refactoring it


The idea that all you need to fix is just removing one field from A and adding to B is false. The failure is often deep rooted, the data is modeled wrong, the whole class hierarchy is broken, the database is corrupted, the tests assert wrong behavior or don't exist, and when you look into it, you realize that all logic was moved to some configuration anyways which isn't type checked


There's magic strings everywhere for trying to rebuild dynamic behavior that the type checked langs don't naturally offer, etc.


When I think of adding/removing fields as refactoring, I don't think of it as fixing a bug but rather a change due to a new feature or requirement. And that happens a lot for me. To me there is no question that static types help with this and therefore there is a drawback to using dynamic types. But dynamic types have many strengths as well. It's just a trade-off. The bottom line for me is that Clojure was designed quite well with dynamic types in mind, and I get a lot of benefit from that design. And for me those benefits are worth the price you pay with dynamic typing.


I guess I honestly don't see how static types help with that, and I've mostly only programmed with static typed language prior to Clojure.


In Java, IntelliJ refactorings are often unreliable. So I always do a find-and-replace pass afterwards to be sure it didn't miss anything. It often struggles with cross-project changes, it ignores comments, you never know if reflection access wasn't used somewhere, etc. And even if the IntelliJ project was configured properly it could have ignored files; sometimes it chokes on Lombok, or where code generation is being used, etc.


Do you have a more concrete example, maybe give a scenario? Where you relied on static types to help you add a new feature? And how?


In Java I try to use final fields in classes, and when I add a new field I get compiler errors telling me where I have neglected to initialize the field. And obviously when you remove a field you'll get compiler errors for all references to it. Refactoring doesn't necessarily mean automatic refactoring in the IDE, although I think I've had better luck with IntelliJ than you have.


For me, when using a dynamically typed language I have to be more careful coding and do more frequent testing as I code, however the code is simpler and easier to read. With static types, the compiler catches more problems for me, but I also spend more time fighting with it to force my solution into something that fits the type system.


It is a mistake to think that with static types the compiler catches all the errors, and I doubt that many people really believe that. It just catches more errors than with dynamic types.


I don't want to sound like I'm bashing static types. Being able to have static analysis of code and strong guarantees of certain properties has value, and the trade-offs come from the constraints it imposes and the additional annotations needed for it.


It's interesting that you bring up final on the variable, since I wouldn't really consider that related to types. I think that's an interesting thing that this points out. There is more than just types. For example, it is a compile error in Clojure to call a function with the wrong number of arguments. Similarly, it is a compile error in Clojure to declare a let binding without a value. And I got curious if this was valid: (def a). Surprisingly, this is not a compile error. Actually, it is pretty interesting, it creates an Unbound Var, whose value is the Unbound object. So I would have thought maybe it would just default to null, but it doesn't.
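The `(def a)` behavior described above is easy to see for yourself; this small sketch just demonstrates the unbound-Var sentinel:

```clojure
;; (def a) with no init value creates an *unbound* Var: the Var exists
;; and can be referenced, but it is bound to a sentinel Unbound object
;; rather than defaulting to nil.
(def a)

(bound? #'a)  ;; => false -- the Var has no root binding
(nil? a)      ;; => false -- a evaluates to the Unbound sentinel, not nil
;; At the REPL, a prints something like:
;;   #object[clojure.lang.Var$Unbound 0x... "Unbound: #'user/a"]
```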


I'll say that I definitely value that sort of static compiler validation.


P.S.: I don't think you need to declare the variable final in Java. If you try to use an uninitialized variable it'll be a compile error. The difference is final will complain if it is never initialized, or if it isn't initialized immediately after it is declared.


I meant final class fields, not variables.


Oh I see. For constructors.


Right. It's just one example of course.

👍 4

@lkschubert8 From what I've read, defn-spec/orchestra is automatic instrumentation, and instrumentation sounds like "run time type checking" -- so what happens if we pass it a vec with 10M elements -- will it, right before the function is called, run through all the elements and check that they all have the same shape?

Luke Schubert16:01:09

I would recommend instrumenting only in dev/test


@sgerguri Right, just to be clear, I'm not bashing Clojure. I love Clojure for "exploratory programming", and am not happy that right now, every tiny edit takes 5-10 seconds to recompile. This is why I really want a setup where I can use Clojure as the "glue/scripting" language, with a typed language building up the "primitives."


Yeah, that's the trade-off that you get with static types. Pick your poison. I like Clojure precisely because it allows me to have a conversation with the runtime in realtime, and I feel like I have enough experience to not shoot myself in the foot in the process too much.


I think better error messages/stack traces would certainly help, though with some exposure (and a little bit of subsequent experimenting in REPL) you will usually be able to pinpoint the issue fairly quickly.

Alex Miller (Clojure team)16:01:12

we've spent a ton of time in 1.10.0 and 1.10.1 improving the error messages and narrowing the provided info to provide less noise and more signal. I'd be interested if you have feedback on that.


When I do get exceptions I tend to get the similar low-level stack traces as before, though in our case it tended to be a function of interfacing into the Java wrappers that we were using, or things not being initialised properly (we use mount). We've had some type-level mismatches post- code rejigging (I hesitate to call it refactoring after the above conversation! :)) that would simply tell us that type X is being used but type Y is expected, though it wouldn't say exactly where it happens - so one still relies on fishing out the correct line from the stack trace, and then the rest of the difficulty tends to be associated with stacked transformations happening in a single place of code. I know error messages is something people have been complaining about for a long time so I don't want to sound ungrateful for the effort that the core team has put into improving it.

Alex Miller (Clojure team)16:01:26

I'm curious where you're seeing stack traces when an exception happens? Generally, you shouldn't as of Clojure 1.10.1 unless a tool you're using is doing that.

Alex Miller (Clojure team)16:01:25

that is, are you seeing this in your editor (and if so, which one), or with lein (and if so, are you using Clojure 1.10.1), or somewhere else (where?)

Alex Miller (Clojure team)16:01:27

I'm trying to assess whether you're seeing the stuff we've worked on or a different experience that is shaped by a tool


I think there were three cases when it happened: • REPL startup • State initialization through mount • Tests - as mentioned all of our tests are property-based, so some of the generated data may trigger an exception somewhere in the codepath it is exercising


We use Cursive, leiningen 2.8.1, Clojure 1.10.1.


Also eftest and lein-eftest for test runner.


I may also be mixing the concept of low-level or indirectly-indicative errors together with the full stack trace - I remember we would sometimes get an error, though if I recall correctly that would simply be the error message from the stack trace that was being hidden away. In such cases I would almost always pull out the full stack trace anyway (through clojure.repl/pst *e) as I would need to see the callpath to identify where the actual problem was. Other times, I would actually get the entire stack trace - typically during test runs or when initialising a REPL through Cursive. For test runs, this would happen both when running tests directly from Cursive as well as through leiningen. Interestingly, however, Cursive itself would sometimes freeze in the REPL without showing the stacktrace/exception, something which leiningen wouldn't do.


Ya, you're probably not talking about stack trace which is the full list of a called b called c etc. But more that the exception cause and message are Java specific and not always precise to the real issue in your Clojure code


One of my most common ones are can't cast a X to a ISeq or can't cast a Y to an IFn


I know how to interpret those, and still sometimes it takes me way too long to realize that I was, say, doing (i < 10) by mistake


Yes, I know what stack traces are, thanks. 😉 I think the point is that even though the full stack trace may be hidden the error message is still not custom to the context and the raw stack trace error message is being shown.

👍 4

Also, ditto for the two most common errors like you say.


Well, I meant that I understood you didn't mean stacktrace per se, since those are hidden now, but exception cause


I have the same feeling, while the noise is removed a bit from the exceptions, the exceptions themselves haven't improved much


Except for spec errors when macro-expanding, those help a lot


But I think it's a pretty hard problem. Because to add better error messages you'd need to add more checks and those would slow the code down.


Luckily enough our input, representation and output data is fully specced-out so we never actually hit this in production, always during testing/changing code, and then one can interactively zoom in on the issue with a helpful piece of random, generated data. I appreciate not every codebase is going to be hot on this particular approach, however.


Also, some of that is down to our preference for using higher-level properties in our tests - laws that should be upheld in protocol implementations, or properties for bigger/more important functions. We don't directly test everything so when the code changes and a high-level property like this blows up it can take some experimentation in the repl to find out exactly what went wrong.


Hi all, I could use your help designing a macro I’m building for use of CLJS in AWS Lambda. Right now you can set up a lambda like you would a function:

(deflambda fn1 [event context]
  {:foo "bar"})
But it would be nice to allow the addition of middleware. With the current macro, it would have to be this:
(deflambda fn2 [event context]
  ((-> fn2-handler
       (m/wrap-content-type "application/transit+json"))
   event context))
But that explicit function call seems off design-wise. How do you think it would be best to handle the middleware case?


My first thought was for the deflambda body to accept a value or a function. But is that idiomatic?


The Lambda would never return a function, so I think making it so if it does you treat it as a middleware is probably fine


But to be honest I'm confused about middleware here. You mean to say you could setup the deflambda to automatically pass the return to a middleware chain before it returns it


If so, and given it's a macro, you can do something more syntax based like: `(deflambda fn1 [event context] :middleware [middleware1 middleware2] body)`


Or even if you want a input middleware chain and an output one you could have :in-middleware and :out-middleware. Both optional.
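A rough sketch of what that `:middleware` option could expand to (deflambda and wrap-log here are placeholders, not the actual macro under discussion):

```clojure
;; Hypothetical deflambda that accepts an optional :middleware vector
;; and threads the handler fn through the wrappers at definition time.
(defmacro deflambda [name argv & body]
  (let [[wrappers body] (if (= :middleware (first body))
                          [(second body) (drop 2 body)]
                          [nil body])]
    `(def ~name
       (-> (fn ~argv ~@body)
           ;; reverse so the first-listed middleware ends up outermost
           ~@(reverse wrappers)))))

;; Example middleware: wraps a handler, logs, then delegates.
(defn wrap-log [handler]
  (fn [event context]
    (println "invoking lambda")
    (handler event context)))

(deflambda fn1 [event context]
  :middleware [wrap-log]
  {:foo "bar"})
```

Calling `(fn1 {} {})` then goes through `wrap-log` before the body runs; with no `:middleware` key the macro degrades to a plain `def` of the handler fn.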


This came to mind as well…


You’re right, it would never actually return a function.


The macro would add a bit to check if the eval’ed body is a fn or not


But I like this syntactic approach.


I was initially thinking of keeping it ring-style for familiarity


$ clojure -e "*command-line-args*" 1 2 3
("2" "3")


I wondered how clojure determined when parsing of its own command line args is done and when the user arguments start


ah, it probably interprets the 1 as a main opt?

Alex Miller (Clojure team)19:01:54

Yep - it’s really expected to be used with -m


Where is the right place to log bugs/make feature requests for


JIRA if you have an account there, otherwise


There are tags for all the contrib libs on


Neat! Thanks. 🙇


@hhausman I'm curious as to what bug fix and/or feature request you have in mind for c.t.l. since we're a heavy user at work?


tools.logging is really a facade on top of a number of java logging libraries. for things not covered by the facade (which is basically everything except actually producing log messages) you'll need to interact with whatever concrete java logging library is being used


In that case, I guess my feature request is to augment the facade to include functionality for temporarily disabling certain log levels (similar to with-level in timbre). 🙂


Maybe it's possible, but I don't think so. Because timbre is both a logger and a log interface, while tools.logging delegates to multiple log frameworks, and I believe they don't all support that feature.


But I might be wrong


request away then

Braden Shepherdson20:01:27

I've got a deftype and I want to define how it gets compared by = to another object. I can't figure out the right protocol and methods to support for that. it's clojure.lang.Associative and clojure.lang.IPersistentCollection but not updatable.

hiredman20:01:05

is the method that will eventually get called when comparing against one of clojure's built in maps


it being an IPersistentCollection kinda defines what = means


it would be equals and hashCode on Object, but deftype might interfere with defining those


it does not


user=> (deftype Easy [] Object (equals [_ _] true) (hashCode [_] 0))
user.Easy
user=> (= (->Easy) 0)
true
user=> (= (->Easy) 1)
true


doesn't IPersistentCollection include equiv which = will get down to?


it clearly isn't happening above - I see, you mean if you also implemented those interfaces, which deftype doesn't do out of the box...


and those aren't protocols, they are interfaces


and custom = behavior is usually a bad idea


and it brings with it all of java's equals and hashcode stuff


if you redefine = without redefining hashCode as well, be prepared for bad behavior in associative collections


it is usually better to write a custom comparison function or a custom comparator


(see for example existing weird behavior of Double/NaN when used as a key or set member)
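The hashCode warning above is easy to demonstrate; this illustrative deftype (not from the discussion) overrides equals but leaves hashCode alone:

```clojure
;; Overriding equals without hashCode breaks hashed collections: two
;; "equal" values get different identity hashes, so set/map lookups miss.
(deftype Pt [x y]
  Object
  ;; hashCode deliberately NOT overridden -- identity hashing is used
  (equals [_ other]
    (and (instance? Pt other)
         (= x (.-x ^Pt other))
         (= y (.-y ^Pt other)))))

(= (Pt. 1 2) (Pt. 1 2))            ;; => true, via .equals
(contains? #{(Pt. 1 2)} (Pt. 1 2)) ;; => false -- hashes don't line up
```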


the equiv methods of clojure's built in maps test for implementing java.util.Map


The core.rrb-vector library is one example of implementing a custom vector-like class in Clojure. It has deftype Vec in the source file I will link in this message, and implements equals and hashCode methods for Java compatibility, and hasheq (for clojure.core/hash) and equiv (for clojure.core/=):

💯 4

If your collection is more like a map or set than a vector, then obviously don't copy and paste those implementations -- you would want to check for other interfaces/Java-collection-classes than that code does for vectors and vector-like things.


is there a friendly way in clojure to do same thing as tree-seq but with pre-order style ( like prewalk ) ??


clojure.walk/prewalk sorry, misread


isn't tree-seq already preorder, you see a parent first, then its children, recursively


problem with tree-seq is it is post order ( depth first ) and it does not go into nodes in a vector .. i need to hit all the nodes with prewalk style


maybe i should just look at code for tree-seq and substitute postwalk for prewalk


it definitely goes into vector nodes (as long as your predicate accepts them), and depth vs. breadth is an entirely separate issue


ok .. let me check on that


it doesn't use any clojure.walk function


it's just a recursive lazy function


you could switch the nesting to do breadth before depth - it's a small function, easy to modify

user=> (source tree-seq)
(defn tree-seq
  "Returns a lazy sequence of the nodes in a tree, via a depth-first walk.
   branch? must be a fn of one arg that returns true if passed a node
   that can have children (but may not).  children must be a fn of one
   arg that returns a sequence of the children. Will only be called on
   nodes for which branch? returns true. Root is the root node of the
  tree."
  {:added "1.0"
   :static true}
   [branch? children root]
   (let [walk (fn walk [node]
                (lazy-seq
                 (cons node
                  (when (branch? node)
                    (mapcat walk (children node))))))]
     (walk root)))


got it .. working on that now


this appears to work

(defn breadth-tree-seq
  [branch? children root]
  (let [walk (fn walk [node]
               (let [branches (when (branch? node)
                                (children node))]
                 (concat branches
                         (mapcat walk branches))))]
    (cons root (walk root))))


does that drop the leaves?


no, it returns all the leaves just like tree-seq does; (concat branches ...) ensures that


either branches aren't leaves and they are recursed on, or they are leaves and thus in the output


it doesn't include "leaves" where the parent returns false for branch?, just like tree-seq


i'm not convinced. the cons node will get the leaf from tree-seq but there's no equivalent that i can see in the breadth version


it always concats branches into the output


that's the equivalent


(and it starts with (cons root ...) at the top level)


anyway, running it proves it works too

user=> (pprint (breadth-tree-seq coll? seq [1 #{2 {:a 3 :b 4}} [5 6]]))
([1 #{{:a 3, :b 4} 2} [5 6]]
 #{{:a 3, :b 4} 2}
 [5 6]
 {:a 3, :b 4}
 [:a 3]
 [:b 4]
 ...)
(edited to make order more obvious)

👍 4

am i just losing marbles ? i am trying

(tree-seq #(or (map? %) (vector? %)) vals tree)
and tree-seq not loving that


(tree-seq map? vals tree)  works fine 


vals surely doesn't work on vectors


oh .. ok .. hold on


yeah .. that did it .. thanks @noisesmith
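For anyone hitting the same thing: since `vals` only works on maps, one way to walk a mixed map/vector tree with tree-seq is to dispatch in the children fn (this is a sketch following noisesmith's point, not code from the thread):

```clojure
;; Branch on coll? and pick children per node type: vals for maps
;; (skipping keys, as in the attempt above), seq for everything else.
(defn all-nodes [tree]
  (tree-seq coll?
            #(if (map? %) (vals %) (seq %))
            tree))

(all-nodes {:a [1 2 {:b 3}]})
;; => ({:a [1 2 {:b 3}]} [1 2 {:b 3}] 1 2 {:b 3} 3)
```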


does anyone know how to deal with this situation: I am adding a test dependency to a lib that is also needed for tests of the calling project, but not in prod. if i only add this dependency to the :dev profile, then it is not available when the calling project's tests run. the best i found so far is but it seems like a hack  Thanks!


@antonmos is using lein with-profile +dev test an option?


a project is responsible for its own tests


what does + in +dev?


that means "use the normal profiles, plus dev, for this task"


afaik, lein test already includes :dev profiles deps


then I don't understand where the problem is here


project A has a dependency on X for tests


project B depends on A


and through depending on A somehow needs X for its tests as well


oh, yeah, then I agree with @hiredman here that you shouldn't use test deps of your deps - that smells like a design problem


this lib is an infrastructure lib


it provides support for talking to a db as well as support writing integration tests that need that db


at my company we solve this by the depended project defining a lib of test helpers


i could certainly split it into 2 libs - but it seems overkill


yeah, I mean, my two reactions are project B should depend on X for its tests then, and also something is broken in the design that makes X required for both


from one repo, you can get two artifacts: the lib itself, and the test helper lib


via separate jar / deploy profiles


if B is doing that stuff, it should depend on the library that gives it that stuff (X)


yea, i was trying to avoid that but maybe that’s the best way


@noisesmith @hiredman do you by any chance have a link to an example project that publishes multiple jars from one repo?


I thought I had one, but the reusable test stuff was folded in with the rest of the project


thanks for checking!


@antonmos We do it at work, but unfortunately that's closed source. We have a monorepo, with about 30 subprojects, managed with CLI/`deps.edn`, and we create JARs from 12 of those subprojects. I don't know how you'd do it with lein but it's fairly straightforward with CLI/`deps.edn` (and we used to do it with Boot pretty easily too before we switched to the CLI stuff).


proof it works


all I had to add was the :test-jar profile, otherwise a default project


(then invoke the profile in the jar task, of course)


(and there's how you do it with Leiningen 🙂 -- thanks @noisesmith )


this is likely the best way to do a clojure / lein monorepo - have a separate subdirectory for each app etc.


and then use source-paths to define the set of in-repo libs that go in each app's jar, and optionally per-profile deps for each if they want different deps


(the other approach I've seen is a modules plugin that does some magic to make version numbers match up, which is kind of messy)


looks like the core.clj is included in both jars, i think i can add :jar-exclusions within the :default profile


or the ^:replace metadata on :source-paths


I think the replace option is more parsimonious (matches semantically what you want)


yeah, :source-paths ^:replace ["test"] fixes it
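Putting the pieces together, a minimal project.cl sketch of the two-jar setup might look like this (project name and profile key are hypothetical):

```clojure
;; Hypothetical project.clj: a :test-jar profile whose :source-paths
;; fully *replaces* the default (^:replace prevents lein's usual merge),
;; so only the test-helper code ends up in the second artifact.
(defproject my-lib "0.1.0"
  :source-paths ["src"]
  :profiles
  {:test-jar {:source-paths ^:replace ["test"]}})

;; Build the helper jar with:
;;   lein with-profile +test-jar jar
```

Without `^:replace`, profile merging would concatenate the paths and core.clj would land in both jars, which is exactly the problem noted above.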


yup, was just looking at the docs for that!


as an aside I think leiningen suffers from far too many "good enough" but inelegant configs, and then other configs being copy pasted - and it's powerful enough to keep iterating and getting uglier and uglier setups that way... perhaps this could be called the javascript problem haha

👍 4

or the makefile problem


the other thing you want is to override the project name inside the profile (I think that's possible...)