
is there something like lein clean for cli tools? clj -A:clean or something?


That will recompute the classpath from scratch and overwrite the local cache file with the recomputed information.


You have to use it with the same aliases as you want to run anyway, not just on its own.


clojure -Sforce -A:usual:stuff:here


The cached information is per-unique-combination-of-aliases.


oh, i just used lein clean to make the output of tree shorter


Oh, well, the CLI tools don't produce a target folder full of stuff.


I assume that's the tree you are talking about?


maybe it's figwheel that is creating target


Probably. Compiling .cljs files to .js ?


i assume. there's a target/public/cljs-out. seems like a safe bet.


oh hey, i tried creating src/foo.cljs and src/foo.cljc and fired up a repl and was not able to see the things in foo.cljc


I don't do any ClojureScript so I can't help with that.


What's the idiomatic way to perform a bunch of steps on a map, recording the result of each step in the map so that it's available to the next step? I imagine something like this:

(assoc-> {} $
  :foo 1
  :bar (inc (:foo $))
  :baz (inc (inc (:bar $))))
It's inspired by as->, and can be implemented in terms thereof:
(as-> {} $
  (assoc $ :foo 1)
  (assoc $ :bar (inc (:foo $)))
  (assoc $ :baz (inc (inc (:bar $)))))
Or with reduce:
(reduce (fn [m [k f]] (assoc m k (f m)))
        {}
        [[:foo (constantly 1)]
         [:bar #(inc (:foo %))]
         [:baz #(inc (inc (:bar %)))]])
Either solution can be used to implement an assoc-> macro. But then, looking at the as-> solution, it made me wonder if it's worth it: is the cost of introducing a new macro (`assoc->`) into the vocabulary worth the small saving in cruft over the as-> solution? My gut says "no", and yet this seems like the kind of common use case for which there should be something more idiomatic.


I'd probably prefer something with reduce if I wanted the previous map to be available for the next step. And I might be tempted to use point-free style if I thought it helped readability, and maybe partition so I could avoid the nested pairs...


(reduce (fn [m [k f]] (assoc m k (f m)))
        {}
        (partition 2 [:foo (constantly 1) :bar (comp inc :foo) :baz (comp inc inc :bar)]))


Not sure I'd bother with a macro for that @clojurians-slack100 since a function would do just fine:

(defn assoc-f [base & args]
  (assert (even? (count args)))
  (reduce (fn [m [k f]] (assoc m k (f m)))
          base
          (partition 2 args)))

(assoc-f {} :foo (constantly 1) :bar (comp inc :foo) :baz (comp inc inc :bar))
Something like that?

😍 3

user=> (assoc-f {} :foo (constantly 1) :bar (comp inc :foo) :baz (comp inc inc :bar))
{:foo 1, :bar 2, :baz 4}


Wow! Very cool! 👏


Function or macro, though, the question remains: does that complicate reading of the code more than using a slightly messier as-> would?


I think the as-> example you gave is a lot less readable than the one line (assoc-f ...) call above 🙂


(in general, I only use as-> inside a -> pipeline for small expressions that don't follow the first-arg threading)

👍 3

(-> (something)
    (f arg2)
    (as-> m (assoc m :foo (inc (:bar m))))
    (g arg2 arg3))


(and if I'm using as-> at all, I prefer a less cryptic name than $ or % or whatever)


As for macros, consider them a "last resort": always try to do things with a function first. Add a macro later just for syntactic sugar.

👍 3

Unless you have a situation that absolutely requires that arguments not be evaluated -- then you have to write a macro. But even then, try to implement it under the hood as a function that is perhaps passed a "thunk" (a 0-arity function).
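That pattern in miniature (the names `unless*`/`unless` are hypothetical, not from the thread):

```clojure
(defn unless*
  "Does the real work; `thunk` is a 0-arity fn so the wrapped
  body is only evaluated on demand."
  [test thunk]
  (when-not test (thunk)))

(defmacro unless
  "Thin syntactic sugar: wraps `body` in a thunk and delegates
  to the function above."
  [test & body]
  `(unless* ~test (fn [] ~@body)))

(unless (= 1 2) :ran) ;=> :ran
```

Because all the logic lives in `unless*`, it stays testable and composable as a plain function; the macro is only surface syntax.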


Great answer(s), thanks @seancorfield


how do i speed up "lein ring server"? api calls are about 10-20x slower


and restarts take like 3 super long minutes


10-20x slower than... what @kbosompem?


building an uberjar and running it


Do you have development middleware doing reloading etc?


Hard to give you much guidance without more details about exactly what you have in play and what you're comparing it to... 🙂


Normally, running code from source will be slower to start up (than an uberjar) because it all has to be compiled as it is loaded (whereas the uberjar has everything pre-compiled) but then it should be the exact same speed in both cases once it is running.


Even with that compilation overhead at startup, I would be surprised by a three minute startup time unless you had a pretty big, complex app...


its a simple ring app


lein ring server lets me make changes without reloading


but every so often when i make changes in hugsql i am forced to restart it


Sounds like it has middleware that automatically watches for code changes and reloads it for you?


I never used the ring plugin with lein. I just have code to start the Jetty server directly in -main, and I can also easily start it from the REPL. So I've no idea what the plugin might be doing that's taking so much time.

👍 3
Alex Miller (Clojure team)04:07:02

getting all of your deps aot compiled and on the classpath would probably make a huge difference in startup time


ooo looks interesting. I've never seen that guide will go read it asap


I pretty much never need to reload code. I evaluate code directly into the running app so I can avoid that. And although I don't use HugSQL, I would have expected not to need a reload just because I changed the SQL files -- you should be able to ask HugSQL to redefine the functions and have those new definitions in play immediately in the running REPL...


I generally advise beginners to avoid plugins and any special middleware that tries to auto-reload code: there's a lot of "magic" going on in those sorts of tools that can mask (or even cause) all sorts of problems and make things hard to debug.

Alex Miller (Clojure team)04:07:50

you might also check your Java version - it's also possible you're hitting the Java regression we saw around Java 1.8u202 / 11.0.1 which primarily manifests as very slow startup when using user.clj. Newer versions of those did have some improvements, but Clojure 1.10.1 has changes to sidestep the regression entirely.


Ah, I didn't even think about user.clj -- something else I avoid 🙂

Alex Miller (Clojure team)04:07:45

often in play with this kind of ring setup


repl starts in 30 secs. going to try that

Alex Miller (Clojure team)04:07:23

I mean, the user.clj is probably the thing loading all the code in this case


I'm curious as to how much the Cognitect folks lean on user.clj when they're developing? I think I've only ever used it once, in just one project, and that was to compile a specific ns when a REPL started up so that a (:gen-class) type would be available to the rest of my code.

Alex Miller (Clojure team)04:07:14

probably varies, but Rich, Stu, and I never use it


That's what I figured. We need y'all to do more talks about your dev workflow and your use of the REPL. It seems to me that a lot of beginners go down the rabbit hole of all sorts of weird plugins and add-ons and complex, error-prone "reloaded" / auto-refresh tools 😞

✔️ 9
Alex Miller (Clojure team)04:07:52

well Stu did a pretty good one :)

Alex Miller (Clojure team)04:07:04

there's not much to talk about if you keep it simple :)

Alex Miller (Clojure team)04:07:33

"here's me not using lein or any plugins and just starting a repl"

Alex Miller (Clojure team)04:07:00

but also, our use cases are different than most Clojure devs doing web stuff.


I dunno. I do web stuff (backend) all day, every day and my workflow is like y'all's 🙂


Stu's talks are what I refer people to, and the podcast he was on, talking about his workflow.


And Eric's "RDD" course is awesome for that too.


But, yeah, no plugins, no refresh, no watchers, no auto-reloading, no user.clj.


And I gather not much use of CIDER/nREPL at Cognitect either, I'll warrant?


i have a user.clj but i fear you may be referring to something else.


does it just look for any "user.clj"


Does it have a (ns ...) form in it, or is it just a top-level script with some development-related code in it?


its in src > clj > vg > api and has the (ns ...)


OK, then not the user.clj file that Alex was referring to.


wow thank you thank you @seancorfield and @alexmiller did the "compile" and run jetty from the repl and it was instantaneous


Same call went from 6s on average in dev mode to 1.2s which is close enough to the 800ms with a jar


My preference is to keep my dev setup as close to my production setup as possible: no plugins, no dev-only middleware, no watchers, no auto-reloading -- just a tight REPL workflow.

👍 3

(although I do build an uberjar with AOT compiled code for production deployment -- but I still run the same Socket REPL in both production and local dev)


while i have both of you here, one more question. I use timbre for logging and i like it but it adds at least half a second to each function call, so i wrapped the call in a future to "speed" things up. unfortunately now the log isn't sequential. what would you recommend?

Alex Miller (Clojure team)04:07:52

a half second sounds unusable to me

Alex Miller (Clojure team)04:07:31

some Clojure loggers use an agent - should be basically sequential (per concurrency constraints) due to the agent queue. not sure what timbre has


No idea why Timbre would take so long. What do you mean by "half a second to each function call"? You mean just adding logging calls?


i assumed the slowness was due to the database appender i enabled


not part of the core library


Oh... well, possibly. We used Timbre at work extensively but never used its database appenders. Perhaps it is setting up a new connection on every call, instead of using a connection pool?


If you can use a connection pool with that database appender, I'd expect it to be much faster.


What's the DB appender library you're using?




(defn mysql-appender [db-config]
  {:enabled?   true
   :async?     false
   :min-level  :info
   :rate-limit nil
   :output-fn  :inherit
   :fn (fn [data] (log-message db-config data))})


(defn log-message
  "Log application messages in application database."
  [config data]
  (let [entry
        {:instant   (java.sql.Timestamp. (.getTime ^java.util.Date (:instant data)))
         :level     (str        (:level     data))
         :line      (str        (:?line     data) " | " (:tid data))
         :vargs     (str        (:vargs     data))
         :namespace (str        (:?ns-str   data))
         :content   (str (force (:msg_      data)))
         :error     (str        (:?err      data))
         :hostname  (str (force (:hostname_ data)))}]
    (jdbc/insert! config :logs entry)))


Triple backtick at the start and end of code makes it a lot more readable.




(def log-config
  {:level :info
   :appenders {:mysql-appender (mysql-appender db)
               :rolling-appender (rolling-appender {:path log-file-name :pattern :daily})}})


So, yeah, your mysql-appender should make a connection-pooled datasource from db-config and use {:datasource cpds} instead of db-config in the log-message call.


(for what it's worth, next.jdbc is the replacement for clojure.java.jdbc and has built-in support for connection-pooled datasources with both c3p0 and HikariCP)


And HugSQL supports next.jdbc -- there's a getting started with HugSQL guide in the next.jdbc docs.


Overall, next.jdbc is faster than clojure.java.jdbc in general usage...


will do! this has been super helpful.


If you have any questions about next.jdbc (or about clojure.java.jdbc -- I maintain both), drop into the #sql channel and ask (I'm more likely to see questions there).

👍 6

will definitely do!

Jim Newton10:07:23

What is the best way to invert a filter? Such as the following:

clojure-rte.core> (filter :accepting [{:accepting true
                                       :a 100}
                                      {:accepting false
                                       :a 200}
                                      {:accepting true
                                       :a 300}])
({:accepting true, :a 100} {:accepting true, :a 300})

Jim Newton10:07:19

what I really want to do is partition the sequence by a predicate, binding the true responses to one variable and the false responses to the other.

(let [[trues falses] (partition-by-predicate some-sequence some-predicate)]
I'm tempted to call filter twice, once with the predicate and second with the complement of the predicate


the complement of filter is remove, but in this case you probably want group-by :

(map (group-by :accepting coll) [true false])

Jim Newton10:07:38

(map (group-by ...) [true false]) that's clever
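(An aside, not from the thread: if you do want the two-filter split described above, `juxt` writes it in one expression.)

```clojure
;; (juxt filter remove) returns a fn producing
;; [(filter pred coll) (remove pred coll)] -- the trues/falses pair.
(let [[trues falses] ((juxt filter remove) even? [1 2 3 4 5])]
  {:trues trues :falses falses})
;; => {:trues (2 4), :falses (1 3 5)}
```

Unlike the group-by trick, this walks the collection twice, but it never yields nil for a side that matched nothing.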

Jim Newton10:07:56

another question. I'm implementing a function which mathematically returns a set of sets. So I don't care which order the elements are generated. i.e., [[1 2 3] [4 5]] is semantically equivalent (for my program) to [[5 4][1 3 2]]. I've implemented this forcing the results to be sets, which makes the test cases easy to write. However, when I map and filter using sets, I don't get back sets. If I want to write my functions using just default clojure sequences, what's the correct way to write my tests?

Jim Newton10:07:26

(defn %partition-by
  "Given a set of objects, return a set of subsets thereof which is a partition of
  the given set.   Every element in any one subset has the same value under f, and
  that value under f differs between distinct subsets.  f is not called
  if the size of the given set is 1."
  [objects f]
  (if (= 1 (count objects))
    #{objects}
    (set (map set (vals (group-by f objects))))))

(deftest t-%partition-by
  (testing "%partition-by"
    (is (= (%partition-by #{1 2 3 4 5 6 7} even?)
           #{#{1 3 5 7}
             #{2 4 6}}))
    (is (= (%partition-by #{1 2 3 4 5 6 7 8 9} #(mod % 3))
           #{#{2 5 8}
             #{1 4 7}
             #{3 6 9}}))))

Jim Newton10:07:52

BTW, I also discovered that (filter (complement :accepting) ...) also works


it works, but (remove :accepting ...) does the same thing, and that's why remove exists

Jim Newton11:07:23

I'd like to iterate over a sequence (for example, using (map f sequence)) and have the iteration function f return a key/value pair, to generate a map rather than a sequence.


You can just return a 2-element vector, (into {} [[:a 1] [:b 2] [:c 3]]) -> {:a 1, :b 2, :c 3}


Or to better fit your example:

(into {}
      (map (fn [x] 
             [x (inc x)]) 
           [1 2 3]))
=> {1 2, 2 3, 3 4}

👍 3
Jim Newton13:07:55

do we have an array tabulate function? I.e., I want to create an array of size N where, rather than giving explicitly N values for the array, I give a function which maps an integer to the intended array value? Of course I can write such a function, but want to re-use what's there. I guess it would be something like this?

(defn tabulate
  [n f]
  (into [] (map f (range n))))

Alex Miller (Clojure team)13:07:14

map-indexed might be helpful?

Jim Newton15:07:07

@alexmiller I didn't know about map-indexed -- that's sort of a cousin of zip-with-index I guess. BTW, my guess is that (into [] (map f (range n))) is a pretty good solution, right?


i often do (map vector (range) coll) to end up with tuples of index and items from the collection


because i forget the order of args for f in map-indexed. so i just make my own tuples
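(A side note, not from the thread: `mapv` collapses the `into []`/`map` pair in the tabulate sketch above into one call.)

```clojure
(defn tabulate
  "Vector of n elements where element i is (f i) -- equivalent to
  (into [] (map f (range n))), just more compact."
  [n f]
  (mapv f (range n)))

(tabulate 5 #(* % %)) ;=> [0 1 4 9 16]
```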

Jim Newton15:07:35

I thought an expression like (some #{target} sequence) works as a Boolean to determine whether target is an element of the sequence. It doesn't always work. what should I be using instead?

Jim Newton15:07:56

if target is nil then (some #{nil} ['a 'b nil 'c]) returns a falsey value (nil)

Alex Miller (Clojure team)15:07:28

generally, I would either use a set or a map instead and avoid the linear search

Jim Newton15:07:12

@alexmiller, thats'a response to which comment?


"Returns the first logical true value of (pred x) for any x in coll." #{nil} as a predicate can never return a logical true value.

Jim Newton15:07:18

Also (some #{false} ['a 'b false 'c]) returns nil, which surprises me.

Alex Miller (Clojure team)15:07:46

just as a general thing, don't use a set as a predicate if you are trying to match logically false values (`nil` or false)

Alex Miller (Clojure team)15:07:02

this is just one example of that more general statement
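The pitfall and the workarounds side by side (a summary sketch of the points above):

```clojure
;; A set returns the matched element itself, so for nil/false the
;; "hit" is logically false and `some` keeps looking:
(some #{nil} [:a nil :b])    ;=> nil  (found nil, but nil is falsey)
(some #{false} [:a false])   ;=> nil  (same problem with false)

;; Use a real predicate when falsey values are possible:
(some nil? [:a nil :b])                          ;=> true
(some #(contains? #{nil false} %) [:a false :b]) ;=> true
```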

Jim Newton15:07:06

@dpsutton yes, I see. Good explanation, thanks. So what is the correct way to search for an item in a sequence?

Alex Miller (Clojure team)15:07:43

don't search a sequence for an item - make a keyed data structure

Alex Miller (Clojure team)15:07:04

or use some (but beware of the caveats above)

Alex Miller (Clojure team)15:07:26

or combo of first / filter is done sometimes

Jim Newton15:07:41

I'm using an array, because I need O(1) access.


you access n items in O(1) time then. so it's still linear


(for the linear contains search)

Jim Newton15:07:44

it is indeed linear when I'm scanning it during construction. However, after I've constructed, it is O(1)


but (some #(contains? #{nil false} %) [1 2 nil 3]) can do the search for logical false values

Jim Newton15:07:14

my current challenge is the construction phase. I don't mind that it is O(n)

Jim Newton15:07:46

So given a variable x which has been passed to me from the user of my library, How can I know if that value is in my array?

Alex Miller (Clojure team)15:07:11

I think we're backing into the real problem "what data structure should I use given access pattern X"

Jim Newton15:07:33

I can of course iterate over the array and test every value == x

Alex Miller (Clojure team)15:07:49

given only that description I would say, use a map and look it up

Jim Newton15:07:53

I'm a bit shocked that the sequence api does not have membership as an operation.

Alex Miller (Clojure team)15:07:25

because you should use a data structure that supports that question better

Jim Newton15:07:31

My philosophy is that you should choose the data structure based on what makes the most sense for the entire program. not just to make it easy to construct.

Jim Newton15:07:13

But anyway, I understand the limitation. I don't have to like it, I just have to understand it.

Alex Miller (Clojure team)15:07:28

that seems right, but seems like you're choosing a data structure that is slow to answer the operations you are doing on it (not construction)?


for the entire program would include O(1) membership checking, and you're missing that feature in your data structure

Jim Newton15:07:14

I don't need membership checking after the object has been constructed.

Jim Newton15:07:44

after it has been constructed all operations are done by array index.

Jim Newton15:07:31

The construction of my object is already exponential worst-case. So I don't mind an additional linear search.


Hi! Is there a way to add a custom task into the lein clean process?


as far as I know, all you can define is extra :clean-targets to clean up, for arbitrary tasks I'd define a new alias that calls clean along with your other tasks


:aliases {"my_clean" ["do" "clean" "foo" ["bar" "bar-arg"]]}


Ok 😞 Thanks!


(do (for [i [1 2 3]] (println i)))
@p.kushwaha97 ^ same effect here, no macro


for is a list comprehension, generating a lazy sequence


the sequence is only evaluated if something demands it


I see. Can I force demand it (for debugging)? Usually to get rid of compiler optimizations you'd do something with side effects (like printing it out) and that'd be sufficient.


it's not an optimization - for isn't a loop, it's a lazy generator


conceptually, with the block above, you get a lazy seq that nothing has realized yet


you can demand things by consuming the seq -- first/rest, or pour it into a vector:


(vec <LAZY-THING>)


I suppose then even (println <LAZY-THING>) would work?


vectors are always realized


there's also dorun for forcing side effects while discarding the value (but at that point why were you using a lazy thing in the first place?)


everything in Clojure is eagerly evaluated, but you have facilities to make lazy/infinite lists
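A minimal sketch of the difference between building and forcing a lazy seq (`dorun` discards, `doall` retains):

```clojure
;; `for` only builds a recipe; defining xs prints nothing.
(def xs (for [i [1 2 3]] (do (println "realizing" i) i)))

(doall xs) ; realizes the seq (printing as it goes) and returns (1 2 3)
(dorun xs) ; walks the seq for side effects only and returns nil
```

Note that a lazy seq caches once realized, so forcing `xs` a second time won't re-run the printlns.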


I think this solves my current problem. I'll read up more on how clojure handles lazy evaluation and dorun. Thanks everyone!


(doseq [i [1 2 3]] (println i)) ; just for side-effects
(run! println [1 2 3]) ; map-like function for running side-effects


Actually, doseq fits better than for. I now understand what @noisesmith was saying about why I'm using lazy things in the first place. I'm coming from common lisp / racket so I assumed that "for" was similar to the loop macros there. Should've read docs more carefully.


Hey again. Looking into routing in a new ring app... I looked at reitit and compojure, but in the end, I kinda prefer just plain old core.match. Passing in the request method and uri (split on "/"). Is there any good reason to NOT do that? Are there security concerns that the routing libraries handle?


routing libs can give conveniences like route parameters and reverse routing (eg. "construct a route to handler x" as a markup helper), but I don't know of any big security gotchas with home grown routing


btw compojure is far from the best routing lib out there IMHO - people like it for the syntactic sugar, but there are quite a few other options


Ah maybe I just had a bad first experience. What ones do you like?


bidi and reitit are good


bidi's readme has a feature comparison matrix


I would not expect core.match to perform particularly well compared to dedicated routing libs -- and I'd also note that it hasn't had a functional update in about three years (and is considered "stable", which means there's no active development being done). Not that "stable" is bad in any way.


We mostly use Compojure at work, with one app using Bidi. If we were starting over, we'd probably use Bidi or Reitit for all of our apps I think.


Reading Bidi readme now. I didn't know about that one


I think their feature matrix gives a fair overview for comparison too - data oriented and reverse routing and clj/cljs compat are three huge ones IMHO


Reitit would probably be my first choice at this point. The Metosin folks are obsessed with performance and it works well with clojure.spec which we use very heavily.


I'm struggling to understand the use case of reverse routing in a practical sense


(I say that from zero experience with it -- but I'd evaluate it as a first choice)


@coding.aaronp if you want to link from one page to another, reverse routing means you can link based on feature, and not a specific site map


which means that if marketing decides the routes need to be prettier or whatever, the link generation handles the change


Yeah, it's very useful if you are refactoring your routing during development.


if I was going to roll routing from scratch I would think about starting from a parser


Oh, so you might actually use it outside of the router itself? Like (static-path "foo.js") to create a route to wherever


right - that's part of why the router working from both clj and cljs is good


Well that's very interesting. Thanks!


but that also applies with back-end pre-rendering with links
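Reverse routing in miniature (a hypothetical sketch, not any particular library's API):

```clojure
;; Routes as data: a name -> path-template map.
(def routes {:user-profile "/users/%s"
             :post         "/posts/%s"})

(defn path-for
  "Hypothetical helper: look up a route by name and fill in its
  parameters, so links are built from features, not literal paths."
  [route-name & params]
  (apply format (get routes route-name) params))

(path-for :user-profile 42) ;=> "/users/42"
```

If marketing later wants `/people/42` instead, only the `routes` map changes; every `path-for` call site stays the same.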


@coding.aaronp To add another data point - I am using bidi currently at work, but if I were to start a new project today, I'd definitely go for reitit


Both are data driven, which is great. When we started the project we were using Compojure, but did not like using macros. reitit takes inspiration from bidi in being data driven, and I think improves on it in terms of provided functionality and developer ergonomics. metosin do a lot of work in this area and have several excellent tools in their open source ecosystem, and reitit provides modules for integrating with many of them.

Eric Ihli21:07:45

What's the best way to quickly iterate on Clojure macros that are used in ClojureScript? I'm having to restart my shadow-cljs process every time I make a change for it to take effect. Is there a better way? I have this in a clojure file.

(defmacro parse-file [file]
  (toml/parse-string (slurp (io/resource file))))
And then I'm doing a (:require-macros) in my ClojureScript file and then (def data (parse-file "example.toml")) . Everything is working fine except that when I edit the .toml file the changes don't appear until I restart my shadow-cljs watcher. (I know this could be fixed by making an xhr request rather than inlining the data with a macro.)


@ericihli Maybe a question for #shadow-cljs or #clojurescript ?

🤙 3

hrm, just spent about half an hour banging my head against the wall -- apparently protocol methods can't accept varargs. I think it should have been mentioned somewhere here


so, if I want to implement my own reftype (via deftype ), and I want it to work with swap! -- my only option is to implement clojure.lang.IAtom , including each and every arity of IAtom 's swap ?


they allow varargs, but your implementation has to cover each arity from the original, or fail at runtime


in practice that just means one vararg impl, and a bunch of other impls that call it, most of the time


also IAtom is an interface, not a protocol, and it only has 4 arities


this isn't about protocols, and for the most part it's not even defined by clojure, the VM dispatches methods by signature, and each arity that is defined in an interface is a separate signature for dispatch


@noisesmith what do you mean "they allow varargs"? (defprotocol RefVal (swap! [this f & args])) doesn't seem to do anything useful


AFAIU it now, it says "has method "swap!" that takes "this" and 3 args ("f", "&" and "args"), so it's not varargs


OK - I misremembered that


but that's not even relevant to IAtom which is an Interface, not a protocol


Ah, ok. So yeah, I can have many arities, but not varargs


My idea was to bypass IAtom, and have my swap! via my own protocol


clojure.core/swap! turns arities into a call with a seq


clojure.core/swap! is varaargs


yeah, I know


then do what clojure.core did, and make a function calling your protocol


but why do you even need a protocol if you aren't using IAtom?


but I can't change existing swap! ?


Ok, let's rewind. I wanted to have my own ref type that looks like an atom (but is not really an atom)


I think having to make your own fake swap! via your own protocol is a much hackier alternative compared to just supporting a few arities of IAtom / swap


well, if not for varargs, the protocol approach seems very straightforward to me


Anyway, so I take it as a "yes" ("I have to re-implement all arities of IAtom's swap "), right?


the point of protocols / interfaces is to be able to trade out implementations, if what you are doing is a parallel incompatible implementation, what is the benefit of a protocol?


you need to support all the arities that could meaningfully be called, none will be auto-generated


but if they never get called, then clojure and the vm dont' care that they aren't implemented


the benefit of protocol, in this particular case, would be that I could have the same user-visible API without being tied to IAtom (which my new ref type isn't)


you don't need a protocol for that


don't I?...


function plus deftype suffices


But I can't have a function, because swap! is already there in core?


but I still think the easier thing is to just implement the 4 or 5 arities


swap! won't work anyway, if you don't use the proper interface, please don't go and monkey patch core


I mean, I can't stop you, but it's a bad idea


swap! via a protocol does work, just not with varargs


you can make a fake swap function as easily (easier) than you can make a fake atom protocol


I can make a fake swap! , but then I'll lose the real swap! in that namespace, I don't like that


user=> (swap! (reify clojure.lang.IAtom (swap [this f] "OK")) inc)


yeah, all variants involving IAtom are pretty obvious (but generally involve the "all arities" thing)


anyway, I guess my options are clear now, thanks!


the more arg in the last arity fakes varargs

user=> (swap! (reify clojure.lang.IAtom (swap [this f a b more] more)) :a :b :c :d)
I guess clojure has to hack around how weird varargs are on methods in interop, and that's why you can't use & in protocol signatures


You wish! The ugly thing is that you can use &, and it treats it like a parameter name ūüôā
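For reference, the "implement all the arities" approach from above, spelled out with deftype (an illustrative sketch only: a volatile-mutable field stands in for real atom semantics, so it is not thread-safe like a true atom):

```clojure
(deftype MyRef [^:volatile-mutable state]
  clojure.lang.IAtom
  ;; each fixed arity the interface declares...
  (swap [_ f]     (set! state (f state))     state)
  (swap [_ f a]   (set! state (f state a))   state)
  (swap [_ f a b] (set! state (f state a b)) state)
  ;; ...and the 4-arg one, whose last parameter is an ISeq of the rest
  (swap [_ f a b more] (set! state (apply f state a b more)) state)
  (reset [_ v] (set! state v) v)
  (compareAndSet [_ oldv newv]
    (if (= oldv state)
      (do (set! state newv) true)
      false)))

(swap! (->MyRef 0) + 1 2 3) ;=> 6
```

With those six methods in place, core's `swap!`, `reset!`, and `compare-and-set!` all work unchanged, since they dispatch through the interface.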


@seancorfield: Re this function you wrote yesterday:

(defn assoc-f [base & args]
  (assert (even? (count args)))
  (reduce (fn [m [k f]] (assoc m k (f m)))
          base
          (partition 2 args)))
Why use an assert instead of a precondition, like this?
(defn assoc-f [base & args]
  {:pre [(even? (count args))]}
  (reduce (fn [m [k f]] (assoc m k (f m)))
          base
          (partition 2 args)))


To be honest, mostly personal preference -- I don't like the :pre/:post syntax much @clojurians-slack100


💯 thanks. Was just curious 🙂


I do use :pre and :post occasionally. In our 100k line code base at work, there are 11 :pre conditions and just one :post condition. There are a lot more assert calls.


(333 assert calls in 68 files)


Related curiosity: is there a way in Clojure to turn asserts into nops, like Python's -O3 cmd-line arg?


Yes, if you compile your code with *assert* set to falsey, then assert becomes a no-op.

🎉 3

But I think that's a bad idea.


If the assert would have triggered in production but you turned it off, instead of getting a hard exception (well, an Error, not an Exception), that bad data continues to flow through your program potentially corrupting your data and/or producing unintended behavior.

👍 3

Leiningen provides a way to do it (`:global-vars` in project.clj). You can turn it off for a session via the Clojure CLI:

(! 1043)-> clj
Clojure 1.10.1
user=> *assert*
true
user=> ^D
(! 1044)-> clj -e '(set! *assert* false)' -r
user=> *assert*
false


If you're using depstar, you can use that -e trick to disable assertions when compiling code for an uberjar (but I don't recommend it 🙂)

👍 3

(I actually had to test that to be certain it was possible):

(! 557)-> clojure -m hf.depstar.uberjar test.jar -C -m hf.explore
Compiling hf.explore ...
Building uber jar: test.jar
Processing pom.xml for {seancorfield/depstar {:mvn/version "1.0.96"}}
(! 558)-> java -jar test.jar 
Exception in thread "main" java.lang.AssertionError: Assert failed: (pos? x)
	at hf.explore$foo.invoke(explore.clj:5)
	at hf.explore$_main.doInvoke(explore.clj:10)
	at clojure.lang.RestFn.invoke(
	at clojure.lang.AFn.applyToHelper(
	at clojure.lang.RestFn.applyTo(
	at hf.explore.main(Unknown Source)
(! 559)-> clojure -e '(set! *assert* false)' -m hf.depstar.uberjar test.jar -C -m hf.explore
Compiling hf.explore ...
Building uber jar: test.jar
Processing pom.xml for {seancorfield/depstar {:mvn/version "1.0.96"}}
(! 560)-> java -jar test.jar 
# runs with no assertion error
(! 561)->

🎉 3

I was just looking at depstar last weekend as a substitute for lein uberjar, in a small side project I'm trying to convert to tools.cli 😬


Feel free to DM me with questions about either depstar or tools.cli since I maintain both. For Clojure CLI / deps.edn questions in general, the #tools-deps channel is a great place to ask.


Will do, thanks. I hope to get to that conversion again this weekend 🙂


I'll be online a lot of the weekend (I've been off yesterday and today as well -- nice four day weekend!). We're not early risers but I'm often a night owl -- Pacific time.


(and I always have my laptop open while watching TV -- we're binge-watching Star Trek: Deep Space 9 and we're into the final season at this point)


I'm on season 6 of TNG 🙂🚀 🌌


We already did all of The Original Series, then TNG, then into DS9 🙂 All since quarantine started here in mid-March!


(and we did all seven seasons of Star Wars: The Clone Wars animated series before we switched to Star Trek)

stormtrooper 3

That's why I think it should be documented -- it's quite non-obvious


cells=> (defprotocol R (swap! [this f & args]))
cells=> (deftype R' [] R (swap! [this f & args] (println &)))
cells=> (swap! (->R') + 1 2)
^ this is like seriously non-obvious, I think 🙂


A nice trick to remember if you're ever in an "underhanded clojure" contest