
I have all three editions of Programming Clojure 🙂 And Clojure Applied and Getting Clojure, both editions of Clojure In Action and both of The Joy Of Clojure, also Clojure Cookbook and Clojure Programming. All in PDF, sync'd to every device I own via Dropbox. And in OneDrive I have three Clojure books from Packt (not a very good publisher, IMO) and Zach Tellman's Elements Of Clojure. And I'm still missing some!


Carin Meier's Living Clojure is noticeably absent. What else?

Eric Ervin 21:04:59

Sotnikov's Web Development with Clojure. I think it is a great 1.5th or 2nd Clojure book to get people to "hello http"

Eric Ervin 21:04:50

Quick Clojure is a good one for when I've spent some time away from the language and I need a reminder.


Professional Clojure is absent, although I don't know about "notably"


Ah, who publishes that?


Ah... not a publisher I look at very often... isn't it part of O'Reilly these days?


(mind you, Packt is also part of O'Reilly now I think?)


I have no idea


Oh. You're one of the authors! 'grats! Writing a book is a major achievement! (I've started three and never got past the outline)


Thanks! I would have never finished, except it went down like, "hey want to write about Datomic in our Clojure book?" "Great, I'm in!" "Ok you have a month"


Hahaha... yeah, writing schedules are why I've never gotten further than the outline and those early discussions with publishers...


It was fun to write. Although everything else was misery

Ivan Koz 09:04:31

@seancorfield how was the Elements of Clojure for you?


I really like it -- because it tackles topics that a lot of books don't cover. Some of it was "old news" but it was deep and thought-provoking, for the most part.


I'm using

(defn foo
  [y z]
  (let [x (some-long-operation y)] (map #(+ x %) z)))
as a way to force evaluation of the some-long-operation function. Are there any other ways of forcing the evaluation of the form inside the #() reader macro?


In order to keep the structure something like this

(defn foo
  [y z]
  (map #(+ x (some-long-operation y)) z))


(defn foo
  [y z]
  (map (partial + (some-long-operation y)) z))


(doall (some-long-operation y)) will force evaluation of the lazy-seq, your original version doesn't actually force anything


=> (let [foo (map println (range))] nil)


got it, thanks for the suggestions!


and by force I guess I meant executes only once 🙂


@U2JACTBMX That was my understanding. The partial should do that for you. But your first piece of code works as well.


oh, in clojure I've only heard "force" mean making sure a lazy seq is realized at a specific code boundary, I've never seen it used as a synonym for cache or reuse
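To make the "executes only once" distinction concrete, here's a small sketch (slow-inc and the call counter are made up for illustration): hoisting the expensive call into a let (or partial) runs it once per call to foo, while leaving it inside #() runs it once per element.

```clojure
(def calls (atom 0))

(defn slow-inc [y]
  (swap! calls inc)                     ; count invocations
  (inc y))

;; recomputed for every element of z:
(defn foo-naive [y z]
  (map #(+ (slow-inc y) %) z))

;; computed once, closed over:
(defn foo-once [y z]
  (let [x (slow-inc y)]
    (map #(+ x %) z)))

(reset! calls 0)
(doall (foo-naive 1 [1 2 3]))
@calls
;; => 3

(reset! calls 0)
(doall (foo-once 1 [1 2 3]))
@calls
;; => 1
```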




xpost from #beginners for a broader audience: I tried a simple ETL pipeline in Clojure and Python to get a sense of pros/cons of implementing in both languages. I was surprised that Clojure wasn't much faster and only marginally shorter. Any thoughts why?


I'm not sure, but one suspicion I have here is the usage of spec. My vague understanding is that it's not optimized for performance, and the intended usage is to use it during development / testing and turn it off for production code.


I think you could write a much more efficient data validator by hand (or use a less featureful and more performance oriented validation library)


Also, core.async is great for coordinating tasks, but I don't think it's the most performant option here either. If the goal is parallelization rather than coordination, you'll get much better throughput with ExecutorService or a wrapper like claypoole.
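For the ExecutorService route, here's a minimal sketch using plain java.util.concurrent (pmap-pool is a hypothetical helper for illustration, not claypoole's API):

```clojure
(import '(java.util.concurrent Executors))

(defn pmap-pool
  "Map f over coll on a fixed pool of n threads, preserving order.
  Hypothetical helper -- not claypoole's API."
  [n f coll]
  (let [pool  (Executors/newFixedThreadPool n)
        ;; wrap each element in a thunk; Clojure fns implement Callable
        tasks (mapv (fn [x] (fn [] (f x))) coll)
        futs  (.invokeAll pool tasks)]   ; blocks until every task finishes
    (.shutdown pool)
    (mapv #(.get %) futs)))

(pmap-pool 4 inc (range 10))
;; => [1 2 3 4 5 6 7 8 9 10]
```

Unlike core.async, this gives you direct control over pool size, and `invokeAll` returns futures in input order, so results line up with the input collection.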


Thanks for the feedback


I'll see if swapping out claypoole for parallelization helps


I have used claypoole successfully in a handful of projs. Here's a self-contained example of how I tend to use it (eagerly partition the work to match cores + distribute the workload evenly)


also, if throughput is bottlenecked by IO rather than CPU (which is very possible here), limiting by cores is probably a mistake

👍 4

Yeah I guess if IO is the bottleneck it will slow down both implementations


you might try making a /dev/null variant for both - that might give a better idea of the language perf difference if IO dominates the task


or perhaps the message here is "if IO dominates the task time, just pick any language you like"


good point


The I/O possibility occurred to me too, but I notice that in the original Grammarly post they show all 4 cores maxed out, so I figured it probably wasn't that.


I hadn't noticed the use of spec (I only looked at the Grammarly version) but that seems like a real possibility too. I'd definitely be curious to see whether swapping the spec/valid calls for a simple hand-written validator would make a big difference (and that could be one place where type hints would make a difference).


Or just comment out the filter line and see how much difference that makes.


order of magnitude difference for a simple case

(cmd)user=> (s/def ::foo int)
(ins)user=> (time (dotimes [_ 1000000] (int? 1)))
"Elapsed time: 11.172124 msecs"
(ins)user=> (time (dotimes [_ 1000000] (s/valid? ::foo 1)))
"Elapsed time: 123.692794 msecs"

👍 4

to be fair, that int? check might be friendlier to inlining by hotspot than the actual code checking your data format


the s/def uses int instead of int?


it's actually not idiomatic -- that's my mistake


(ins)user=> (s/def ::foo int?)
(cmd)user=> (time (dotimes [_ 1000000] (s/valid? ::foo 1)))
"Elapsed time: 88.051523 msecs"


The interesting thing to me is that I use a pretty heavy-handed approach to schema validation in the Python implementation too. I'm surprised spec adds this much overhead, but I must have a misunderstanding of its intended use. I was assuming this was the exact use case for spec


just because spec is 8 times slower doesn't mean it's a bottleneck in this case - it's a good idea to profile for stuff like this


and I could be wrong about its intended use case, I'm not a spec expert, I've heard the truism "don't use it in hot loops, only use it at boundaries" but really this code is both a hot loop and a boundary


Tested without validation and it doesn't have a material impact, so not the bottleneck. Time for me to learn how to profile a Clojure program.

👍 8

> doesn't have a material impact


maybe that's not fair. it has an impact but doesn't seem to be a primary bottleneck. Really just seems like it will need some profiling to really understand


you can use a standard java profiler, visualvm (sometimes known as jvisualvm) is free, yourkit gives full licenses for use on open source projects


there's an art to translating from the vm level stuff (designed to map more or less directly to java classes) when profiling clojure (classes made via weird generated bytecode from a handwritten compiler)


FWIW I recently did a bunch of profiling and discovered some unexpected changes in the free Java tools:
- jvisualvm was changed to jmc (Java Mission Control) as of Java 8
- jmc is also available in Java 9
- in Java 10 to 12 there is no jmc in the JDK; jmc was spun out by Oracle as an open source project as of Java 10 but has not been released yet

They're trying to get jmc 7 done but it's not there yet. You can build it from source, and I did that and it seems to work. But the simpler thing is to use Java 8/9 for profiling until jmc 7 is released. It doesn't work to use the Java 8/9 jmc on JFR files generated by later releases of Java; the format is incompatible.


I'm looking for a proxy [] that I can bind to *out*, such that concurrent threads printlning to it won't cause jumbled output ...might be super easy to implement, but who knows, maybe an existing solution covers some edge cases, has unit tests etc ^^


or just use a real logging library


I cannot mutate arbitrary libraries to use my logging library. They use println


life is too short for bad dependencies


I am not sure that is solvable by binding out


a binding of out can only see when .write is called on itself, which doesn't tell it when a logical unit of output is complete from one thread


I'll give it a think. Given this stub:

(proxy [java.io.Writer] []
    (append [& _])
    (close [])
    (flush [])
    (write [& _]))
...`close` cleanly delimits the end of a message. And (Thread/currentThread) delimits the concurrent parts


scratch that


yeah, no one ever calls .close on *out*


Maybe flush then. And I'd manually bind *flush-on-newline* to true to ensure flush is invoked at the end of a message
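A sketch of that flush-based idea (names made up; edge cases like partial flushes and .close semantics are not handled): each thread buffers its output in a ThreadLocal StringBuilder, and .flush emits the whole buffer to the underlying writer under a lock, so messages come out un-interleaved.

```clojure
(import '(java.io Writer StringWriter))

(defn serialized-writer
  "Wrap `out` so each thread buffers its writes and emits them as one
  atomic chunk on .flush. Relies on *flush-on-newline* being true so
  println flushes at the end of every message. A sketch only."
  [^Writer out]
  (let [buffers (proxy [ThreadLocal] []
                  (initialValue [] (StringBuilder.)))
        lock    (Object.)]
    (proxy [Writer] []
      (write
        ([x]
         (let [^StringBuilder sb (.get buffers)]
           (cond
             (string? x)  (.append sb ^String x)
             (integer? x) (.append sb (char x))    ; write(int c)
             :else        (.append sb ^chars x)))) ; write(char[])
        ([x off len]
         (let [^StringBuilder sb (.get buffers)]
           (if (string? x)
             (.append sb (subs x off (+ off len)))
             (.append sb ^chars x (int off) (int len))))))
      (flush []
        (let [^StringBuilder sb (.get buffers)]
          (locking lock
            (.write out (.toString sb))
            (.flush out))
          (.setLength sb 0)))
      (close []
        (.flush ^Writer this)))))

;; usage: bind it over any Writer, e.g. a StringWriter for testing
(let [sw (StringWriter.)]
  (binding [*out* (serialized-writer sw)]
    (println "hello"))
  (str sw))
;; => "hello\n"
```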


I want to do aot but there's one function call that I'd like to not run during the aot compilation. Without skipping aot on the whole class, can I use some variable to know if clojure is compiling?


user=> (doc *compile-files*)
-------------------------
clojure.core/*compile-files*
  Set to true when compiling files, false otherwise.


it's good practice to put nothing with side effects on the top level of your code


(delay can be helpful for this - you can create a delay globally but only evaluate it once it's forced)
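A minimal sketch of that pattern (names here are made up): the delay's body doesn't run when the namespace is loaded or AOT-compiled, only on the first deref, and then exactly once.

```clojure
;; count initializations to show the body runs once
(def init-count (atom 0))

(def native-handle
  (delay
    (swap! init-count inc)   ; stands in for runtime-only setup,
    {:ptr 42}))              ; e.g. creating native pointers

;; at runtime, wherever the handle is needed:
(defn use-handle []
  (:ptr @native-handle))

@init-count
;; => 0  (nothing has run yet)
(use-handle)
;; => 42
(use-handle)
;; => 42
@init-count
;; => 1  (the body ran only once)
```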


ah good point, yes, it's a graal app that I'm working on, and it needs to start making native calls, but the pointers must be created at runtime


works! thanks @noisesmith 🙂


you can always not aot compile too


oh, graal, meh


pmap doesn't do any parallelization below 512 elements, is that right?


starting to doubt that but remember something cares about 512


thanks so much! that was bothering me 🙂

Alex Miller (Clojure team) 21:04:55

pmap is parallel over 2+# processors
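That window can be seen with a sleepy function (a sketch; timings are approximate and machine-dependent):

```clojure
(defn slow-double [x] (Thread/sleep 50) (* 2 x))

;; pmap keeps about (+ 2 (.availableProcessors (Runtime/getRuntime)))
;; computations in flight at a time, submitted via future
(time (doall (map  slow-double (range 8))))  ; ~8 x 50 ms, sequential
(time (doall (pmap slow-double (range 8))))  ; much less: runs in parallel
```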

parens 4