
Would it be poor design to define my jdbc + honeysql wrapper functions to automatically apply the conn?

;; from
(defn execute! [conn sql-map]
  (jdbc/execute! conn (sql/format sql-map :quoting :ansi)))

;; to
(defn execute! [sql-map]
  (jdbc/execute! db-conn-pool (sql/format sql-map :quoting :ansi)))


The db-conn-pool is wrapped in mount/defstate


Personally I'd avoid this and keep passing the db pool as an arg, and maybe make a dev namespace for repl usage that partials them for convenience
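A sketch of that dev-namespace idea (the `myapp.db` namespace, `make-pool`, and the config map are all hypothetical names):

```clojure
;; Hypothetical dev-only namespace: application code keeps taking the
;; pool as an explicit argument, but the REPL gets pre-wired shorthands.
(ns dev
  (:require [myapp.db :as db]))  ;; hypothetical app db namespace

(def dev-pool (db/make-pool {:dbtype "postgresql" :dbname "myapp_dev"}))

;; partial in the pool once; only this namespace knows about the global
(def execute! (partial db/execute! dev-pool))
(def query!   (partial db/query! dev-pool))
```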


We did this when we got started with Clojure a decade ago. It turned out to be a terrible idea. Don't do this!


I've been refactoring code for weeks trying to get rid of the last vestiges of this practice 😞


Yeah, I think Mount is bad architecture but that's by-the-by. Having your DB layer depending on a single global for the datasource is an awful idea.

👍 3

Although your choice of state lifecycle lib shouldn't necessarily impact the argument structure you're asking about


As your system grows you might need multiple datasources with different characteristics (we have at least three that access the same database but with different permissions, load balancing, etc).


You might also want to test DB functions against a test datasource (perhaps an in-memory DB, perhaps a local substitute).


This is a project I own completely for now, so the likelihood of needing separate dbs simultaneously is low. It’s possible I may have a test data source, but if I’m using mount couldn’t I just use its API to update the DB connection state to point to the new one? Plus in that case I would probably have a separate test config anyway.


Isn’t that also how django and rails work? You have a db section in your config and likely a stateful singleton to make direct queries with?


In my mind, I could have my execute! function apply the db (and maybe accept a db keyword arg to override), then build up my API functions around those. If another db is needed to run things with I have two entry points to change, as well as a more granular level on a per-api-basis if needed.
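A minimal sketch of that shape (assuming a mount defstate named db-conn-pool and next.jdbc underneath; not runnable as-is):

```clojure
;; Sketch: default to the global pool, allow a :db override per call.
(defn execute!
  [sql-map & {:keys [db]}]
  (jdbc/execute! (or db db-conn-pool)
                 (sql/format sql-map :quoting :ansi)))

;; API functions build on it; the override flows through if needed:
;; (execute! {:select [:*] :from [:account]} :db test-pool)
```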


Appealing to Django and Rails as models of "good practice" is... perhaps, not very convincing @U8WFYMFRU? 🙂


Relying on globals is easy and lazy, but it is not good or simple. Having gone down that path a decade ago, I feel very strongly about this 🙂


The company I had been working with for the past 8 years is built with Django and not once had we run into an issue with the db connection.


If I'm making the current db connection the default and allowing it to be overwritten as an optional param, wouldn't that be the best of both worlds? In terms of my API if I have defn account/create is there really a good reason for it to be connecting to more than one db at run time? I can see having a separate test config but that problem seems covered already with my dev.edn test.edn and prod.edn files.


I'm in full agreement that (def db-conn ...) is a bad practice. But I'm curious if it's better to have one-two places that directly refer to the db connection than say every request handler destructuring it from the env state and passing it to each api function to pass it to each db function call?


After a decade of production Clojure, we've found it is advantageous to pass the system component in from the top, and pass on the relevant parts all down the call chain. It seems like a lot of work, but it is so much more flexible, and makes it much easier to reuse code, to test code, and to refactor things as requirements change.
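That top-down threading can be shown with a pure-Clojure stand-in (the atom plays the role of the datasource, so the sketch runs without a real database; all names here are hypothetical):

```clojure
;; "db" here is just an atom, standing in for a connection pool.
(defn insert! [db table row]
  (swap! db update table (fnil conj []) row))

;; db layer and api layer both take the db as an explicit argument
(defn create-account [db account]
  (insert! db :accounts account))

;; the handler receives the whole system and passes down only what it needs
(defn register-handler [{:keys [db]} request]
  (create-account db (:params request)))

(def system {:db (atom {})})
(register-handler system {:params {:name "ada"}})
;; @(:db system) => {:accounts [{:name "ada"}]}
```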


That's why we've spent so much time refactoring away from exactly what you're doing.


The way you are recommending is how the template I'm using (fulcro v3) is already set up. Was interested if it was just striving for functional purity or a more practical problem. It sounds like the practical problem, using my above example, is that if I want to test account/create it would be tied to the db state as opposed to being able to take a param to specify a resource. If I go the optional db argument then I would probably have to make an optional db argument for each api function as well and any business logic function that could use the db api functions. In that case it would likely be better to just make the db argument a required first arg for those functions anyway, as you'll never have to guess or question what they are running against 🙂


An opaque initial "system" argument passed through the call chain is a solid approach.


Only certain parts of the call chain need to know about the DB portion of the "system". We've gone back and forth on that. I'm more on the side of passing "system" and letting code pull out the pieces they need; my teammate is more on the side of explicitly only passing the pieces down the chain that the sub-calls need. There are pros and cons to both.


I can see where its clearer and more flexible in the long run with minimal effort, I'll keep it parameterized then.


One of the things that I've found, when testing code from the REPL, is that if functions take a "db-spec", then I can use next.jdbc to create a datasource (from any hash map) and pass {:datasource ds}, and that code can call either next.jdbc or clojure.java.jdbc, so it's very flexible.
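For reference, the next.jdbc side of that looks like the following (assuming an in-memory H2 driver is on the classpath; not runnable without those deps):

```clojure
(require '[next.jdbc :as jdbc])

;; a datasource built from a plain hash map
(def ds (jdbc/get-datasource {:dbtype "h2:mem" :dbname "scratch"}))

;; functions written against a "db-spec" can be handed ds directly,
;; {:datasource ds}, or a java.sql.Connection; the caller decides
(jdbc/execute! ds ["create table t (n int)"])
(jdbc/execute! ds ["select n from t"])
```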


And you can easily build whatever system map you need for tests, mocking out any subsystem on the fly.


It's why I find Component so easy to work with and why I haven't adopted Integrant (and my feelings on Mount are well known 🙂 )

jaide06:09:05

is the current side-project I've started. Can't imagine it will get that big or technically complex. I'd consider it successful if I can just replace my paper-based system by the end of the year.


Haven't really worked with Component or Integrant yet. The fulcro v3 template setup Mount so figured may as well give it a shot, get it working, then replace it with component or Integrant over time.


Oh and thanks for the discussion!


> And you can easily build whatever system map you need for tests, mocking out any subsystem on the fly.
> It’s why I find Component so easy to work with and why I haven’t adopted Integrant (and my feelings on Mount are well known 🙂)
@seancorfield Why is that easier in component than integrant? Integrant makes this very easy too in my experience.


@U8WFYMFRU I think a deeper problem with your proposed approach is that it’s tempting to call your execute! function from just anywhere. One thing that makes Clojure programs simple is the idea of referential transparency, that you can look at a function and understand that it can only access the arguments you give it and can only return data. This means you can treat most functions as black boxes. If I know that your code base can suddenly interact with the DB from anywhere, suddenly I can’t treat any functions as black boxes.


IMHO this is the primary reason that Java apps built with Spring are so difficult to maintain and test. You can’t take anything for granted. You have to follow every call chain to fully understand what the thing is doing.


So you're saying if my codebase contains even a single side effect you would have to treat every function like a side effect?


Well, a codebase without side-effects isn’t very useful 🙂


I’ve just found that by passing the database connection where it’s needed it helps guide the internal architecture of the app such that all the side-effectful things are together.


I think this results in apps that are much simpler and easier to reason about.


Sounds like it comes back down to the easy vs. simple argument.


Django, Rails, Phoenix, etc… couple a model or schema to a db connection. It’s easy to get started as you don’t have to think about problems like this, but it’s not simple. You’ve got mutable db connection state somewhere and it’s unclear how and when say a Django model even accesses it.


Yep, it essentially boils down to simple vs easy


It’s definitely much better passing in a connection as an argument (or key in a map) than pulling it out of an implicit var. My big frustration on the one mount project I inherited was that it bakes in the assumption that there’s essentially only ever one connection. Yet I like to run both my dev and test environments simultaneously in a repl, which mount makes impossible (or at least very awkward).


A huge advantage of integrant or component is that it’s trivial to have multiple systems in the same repl.


> @seancorfield Why is that easier in component than integrant? Integrant makes this very easy too in my experience.
Good to know. I haven't had a chance to dig into Integrant in any depth (but when I've looked at apps that use Integrant, I've found it much harder to figure out where all the lifecycle bits are, because start/stop are in separate multi-methods and there doesn't seem to be a consistent pattern -- but maybe that's the fault of the authors of that code?). I do plan to look at Integrant more deeply at some point.

Chris O’Donnell21:09:56

To me, a convenient testing workflow is a compelling reason to pass an explicit database connection in as a function parameter. I am often working on a project with a dev system running (including a connection to a dev database). As I'm working, I like to run tests against a separate test database. This is both easy and simple if you pass in the connection; your tests just pass in a connection to your test database. It's much more difficult if you have a global connection var: you need to find a way to rebind that var for the test run without messing up the running dev system. I wrestled with this exact problem while writing up a blog series on fulcro and pathom. If it's helpful, I convert from mount to integrant in this commit.


Thanks for the link.

👍 3

> are in separate multi-methods and there doesn’t seem to be a consistent pattern
@seancorfield they’re normally defined in the same namespace as ig/init-key or ig/halt-key. There is a consistent pattern, which is that every component always calls both, but ig/halt-key has a default implementation of doing nothing, which makes sense because in integrant it’s common for some components to be just configuration. Also it’s possible for the keys to be derived via hierarchies, which is again a useful feature. Personally I think integrant takes all the good things of component and turns them into pure edn data, with a few extra useful pieces thrown in, e.g. spec integration etc. From my perspective there’s nothing really to dislike about it.


It's on my list of things to investigate. We're pretty heavily invested in Component at this point so I doubt we would ever change to Integrant, but I feel like it something I should be more familiar with.


Kinda thinking it may be worth switching to Integrant. At the very least it will force me to learn the components setup in the template. Plus it’ll be worth learning the best practices since I’m aiming to find a goto stack for web apps.


@seancorfield: Yeah there’s definitely not enough reasons to migrate a large existing system from component to integrant; However for greenfield stuff I’d see no reason to pick component over integrant, integrant’s just a lot more flexible, and bakes data/configuration in (and all the benefits that brings) as a default. In terms of high level architecture though, they’ll lead to an almost identical decomposition of a system; it’s just expressed with different primitives, protocols vs multimethods; closures vs data, object references vs namespaced keywords/composite-keys etc.


@U8WFYMFRU: If you’re looking to do a webstack in integrant, you definitely want to look at duct. We use it for most greenfield web apps. It’s pretty good. You get a pretty standard but configurable ring stack out of the box. I’d definitely recommend it, but it’s also definitely not perfect — but it could be. Duct is also a pretty good starting point that lets you grow beyond duct’s limitations. For example our main app is pretty far from a standard duct app; but it was very easy to tweak ducts primitives for our needs. For example duct assumes an app is for a single customer; yet we were able to easily tweak the project layout etc to support multi-tenant (i.e. each customer of ours has their own build/configuration as a profile layered over a base profile of the app managed in the same source tree). We also use tools.deps not lein, and leverage most of duct’s core ring middleware stack, we do isomorphic SSR’d clj(s) with reagent, shadow-cljs and have full stack integration testing with etaoin, and recently added devcards as a harness and test bed for stand alone front end components.


Anyway #duct is quite a good channel for support around duct etc… there are quite a few folk there with large duct apps.


So, just got back a Windows desktop, and I'm wondering if I should go WSL 1 or WSL 2?


Are you okay with your working files being in the Linux filesystem instead of the Windows filesystem? If so I can’t think of any reason not to use 2 instead.


Ya, I think I'd prefer that even


I had heard WSL 2 was slower than 1? Is that no longer the case?


It’s only slower at accessing windows files


@didibus My new Microsoft Surface Laptop 3 arrives on Thursday and I'll be all-in on Clojure development on Windows / WSL2 at that point 🙂


Ok, I'll give WSL 2 a shot


Seems you can get Emacs to work in it, if I can, I'll be happy


I've used Emacs on Windows, and Emacs in WSL1. But that's all in the past for me now 🙂


@didibus We should probably continue in #clj-on-windows BTW


hum, running into a Flyway rabbit hole, trying to reify an interface that extends a package-local interface, which leads to:

class compile__stub.xxxx$eval121065$reify__121066 cannot access its superinterface org.flywaydb.core.api.resolver.ChecksumMatcher


wondering if there's a way out of this rabbit hole


Thinking of doing what I'm after in a bit different way, just that it was interesting to run into a situation like this 🙂


my Java is getting rusty, was wondering how the above would "normally" work


I would like to turn inc into a function that accepts two arguments, but still uses the first argument for the increment. Does anyone happen to know a builtin function to do that? Something that extends the argument list of a function, but uses a given number of arguments for the call? Use case: I need to apply a set of transformations to a list, but only need to know the size of the result. I am experimenting with transduce. One solution would be to apply an extra map to turn each element into 1, and use + as the reducing function. As an alternative, I could use inc instead of +, and then I would not need the extra transducer. The problem is inc would be called with two arguments, but I only need the first. I can do something like (fn [x & y] (inc x)) but there may be a builtin for that.


Hmm, as I tried to assemble an example for you, I saw that I will have serious trouble with the initial value, so using inc may not be such a good idea after all. Still, I would be interested to know if there is such functionality in Clojure.

(transduce (map identity) (fn [x & rest] (inc x)) -1 [1 2 3 4])


Can you give a more detailed code example of what you want? Because right now I don't see how count wouldn't work.


Ah, I think I understand now. Maybe you can add (map (fn [_] 1)) to the transducers and reduce with just +.
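That suggestion, in runnable form:

```clojure
;; Every element that survives the filter becomes 1, and + sums them,
;; so the count falls out of the reduction with no intermediate sequence.
(transduce (comp (filter odd?) (map (constantly 1))) + 0 [1 2 3 4 5])
;; => 3
```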


I am working on a coding exercise where I need to fill a minesweeper board with the numbers that count the mines. Certainly, count works like a charm; I can write something with the threading macro:

(defn count-mines [board x y]
  (->> (neighbor-coordinates x y)
       (map (partial get-cell board))
       (filter (partial = \*))
       count))
This is what I want to turn into transduce:
(defn count-mines [board x y]
  (transduce (comp (map (partial get-cell board))
                   (filter (partial = \*))
                   (map (constantly 1)))
             + (neighbor-coordinates x y)))
which works again, but I am interested if I can drop the last map and use inc instead of +, like in
(defn count-mines [board x y]
  (transduce (comp (map (partial get-cell board))
                   (filter (partial = \*)))
             (fn [x & y] (inc x))
             -1 (neighbor-coordinates x y)))
But I was hoping this last one still has potential for improvement, if I can wrap inc somehow to behave as (fn [x & y] (inc x)) with a builtin.
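One way around the -1 initializer is a reducing function with an explicit completion arity (count-rf is a hypothetical name, not a builtin):

```clojure
;; A counting reducing function: ignore the input, increment the accumulator.
;; The 1-arity is the completion step that transduce calls at the end.
(defn count-rf
  ([acc] acc)
  ([acc _] (inc acc)))

(transduce (filter odd?) count-rf 0 [1 2 3 4 5])
;; => 3
```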


> Use case: I need to apply a set of transformations on a list, but only need to know the size of the result.
@UR7TLKXKQ cgrand/xforms has an x/count transducer implemented. I would highly recommend checking it out if you're working with transducers.


Thank you for the idea! I quickly tried the library: (transduce count rf/last [1 2 3 2]) returns 4 as expected. I do not entirely understand how this all works, or if this is the right way to use count. I will explore it further.


Your original code:

(->> (neighbor-coordinates x y)
     (map (partial get-cell board))
     (filter (partial = \*)))
Could be rewritten as:
(let [xf (comp
          (map (partial get-cell board))
          (filter (partial = \*)))]
  (sequence xf (neighbor-coordinates x y)))
Or perhaps:
(let [xf (comp
          (map (partial get-cell board))
          (filter (partial = \*)))]
  (into [] xf (neighbor-coordinates x y)))
So, using a transducer to generate a filtered list and then calling count on it.


It may make more sense (just guessing here), that you would like to consider the count as part of the operation:

(let [xf (comp
          (map (partial get-cell board))
          (filter (partial = \*))
          (x/count))]  ;; here using xforms
  (do-something-with xf ...))
But that means that the xf will need to be applied to a collection; so probably the code would change to something like:
(defn xf-count-mines [board]
  (comp
   (mapcat neighbor-coordinates)
   (map (partial get-cell board))
   (filter (partial = \*))))
And you would then apply that xf to a collection of coordinates (if you then want to do further transformation on it; eg. group-by number of mines, sort-by number of mines, etc.)


But maybe there's no reason for all this complexity; and you can just call (count (sequence xf (neighbor-coordinates x y))) or even the original threading version 🙂


I was considering count+sequence too. I wanted to avoid building up the transformed list and then calling count to create the final value, if everything can be expressed as a reduction. That's why I am happy to see a function such as count can be expressed as a transducer. I like your version where x/count is part of the xform. I just need to figure out what the f parameter of transduce should be in that case. Or rather, why rf/last from the library works. The source code seems relatively simple, so I should be able to figure it out. That version definitely looks better than the version I intended to use originally, with inc and init set to -1. That -1 value looks really weird.

Adrian Smith10:09:37

is there a system/library where you can give it :input [1] :output [2] and it'll give you (map inc x) ?

Jakub Holý (HolyJak)10:09:46

yes, somebody has made such a tool, but I cannot remember its name. Ask in #find-my-lib perhaps.


It might be, although it seems it can’t handle that particular example (or I’m using it wrong).


so either ask the question: 1) give me a function that turns 1 into 2, or 2) give me a function that, given inc and [1], turns it into [2]

Jakub Holý (HolyJak)10:09:13

Hi! Is there a better way to do this: (let [s (-> query keys ffirst)] (when (symbol? s) s))? Essentially I want to run query through a number of transformations and only keep the result if it satisfies a predicate BUT I do not want the result of the predicate (which is a boolean). Thank you!


(some-> {} keys first)

Jakub Holý (HolyJak)10:09:31

this does not do what I need, i.e. give me the result of (-> query keys ffirst) if it is a symbol and nil if it is not.


@U0522TWDA (-> query keys ffirst (#(when (symbol? %) %))) is technically shorter, but I'm not sure I like it better
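If the pattern comes up a lot, a tiny helper reads better than the inline anonymous function (take-when is a hypothetical name, not a clojure.core builtin):

```clojure
;; Return x when (pred x) is truthy, otherwise nil.
(defn take-when [pred x]
  (when (pred x) x))

(take-when symbol? 'foo)  ;; => foo
(take-when symbol? :foo)  ;; => nil
```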

Jakub Holý (HolyJak)10:09:11

Thanks, yes, I see why.


I have this toy code:

(let [f (fn [^long x]
          (println x))
      xs [1]
      x (long (peek xs))]
  (f x))
When I compile and decompile it to Java, I see that x is declared as long, a class that represents f has invokePrim(final long x), but (f x) still turns into ((IFn)f).invoke(Numbers.num(x)). Why? Is there a way to make it call invokePrim directly?


no invokePrim only works on vars


Can you elaborate a bit? I tried using (def ^long x 1), but it works similarly - the invoke is called.


on functions stored in vars


it's about f not x


to rephrase: only calls to functions stored in vars will be eligible for invokePrim instead of invoke


I see, thanks! I'll try that. But do you know why?


because that's how it's implemented in the compiler


the var holds metadata about the typehints


and that's what the priminvoke optimisation looks for


it doesn't try to do local prim inference


it could, but it doesn't


simple as that


My question was more about justification and not the current state of affairs. :) So it seems that it's possible but hasn't been implemented simply because nobody wanted to.


there is no reason it couldn't be implemented if that's what you're asking


Would there be a known reason for the core team to reject such a patch? E.g. it's well known that last will never check whether its argument is a vector.


no but I can't imagine it being considered a priority. and the impl may be more complex than you may imagine


Absolutely! Thanks again.


I think there already is an issue in jira logged about this


but I can't use the new jira to look for it :)


There are a few similar ones, but nothing about local functions in particular.


Yet another type hints question. How come this

(defn to-long-array ^longs [v ^long _unused]
  (long-array v))

(alength (to-long-array [1] -1))
results in
Reflection warning - call to static method alength on clojure.lang.RT can't be resolved (argument types: java.lang.Object).
And the warning disappears if I remove the ^long tag from the _unused argument.


It’s an interesting one. First, support for longs as a return hint seems to not be there (but works on args and locals):

(defn ^longs to-long-array [v _unused]
  (long-array v))
doesn’t compile. Second, hinting the vector is only useful for invokePrim and longs is not a supported type. Thus the “proper” way is to write:
(defn ^"[J" to-long-array [v ^long _unused]
  (long-array v))


What do you mean by "doesn't compile"? It works just fine on my end.


It compiles into

public static Object invokeStatic(final Object v, final Object _unused) {
    return Numbers.long_array(v);
}

public Object invoke(final Object v, final Object unused) {
    return invokeStatic(v, unused);
}


Oh wow. It works by itself. But paired with the call to alength it results in

Unable to resolve classname: clojure.core$longs@6beec176


You are right, I was evaluating both forms together


I see. Thanks!


Type hints are a finicky business.


I haven't tried it, but does ^{:tag 'longs} perhaps work as a hint before the Var? metadata on vars is eval'd by the compiler, IIRC, and that kind of type hint often survives eval better than others.


It works! It never crossed my mind to try this


Huh, it does seem to work. Although, now clj-java-decompiler.core/decompile cannot work with that: Metadata must be Symbol,Keyword,String or Map. Not sure what kind of escaping is needed.


I don't know off hand why that would be. That metadata is pretty clearly a map


as all Clojure metadata is turned into by the reader. All of the other forms like ^long are just shorthand for {:tag 'long} and similar things.


@U3E46Q1DG you're type hinting with the clojure.core/longs function not the 'longs symbol, that's why it explodes


@U060FKQPN what’s the rationale for evaluating type hints (or even metadata in general)?


:inline-fn wouldn't be possible for example


but I don't think that's why, metadata on the var has been evaluated since day 1 AFAIK


right, thanks


it's just the semantics that were decided at the time


As a last resort when you can’t extract to a var (eg because you have a closure) you can use interop

(let [^clojure.lang.IFn$LO f (fn [^long x]
                               (println x))
      xs [1]
      x (long (peek xs))]
  (.invokePrim f x))

🤯 6

I keep finding stuff that feels like I'm not supposed to find it.

(defn ^double get-a [^Indexed a ^long i ^long j]
  (if (< i j)
    (recur a j i)
    (aget ^doubles (.nth a i) j)))

(get-a [(double-array [1.0])] 0 0)
Execution error (AbstractMethodError) at [...]
Receiver class [...]$get_a does not define or inherit an implementation of the resolved method 'abstract java.lang.Object invokePrim(java.lang.Object, long, long)' of interface clojure.lang.IFn$OLLO.
At the same time get-a compiles into
public final double invokePrim(final Object a, final long i, final long n) {
    return invokeStatic(a, i, n);
}
So the return type is from my annotation. But why then the compiler decides that it has to inherit OLLO where invokePrim returns Object and not double? Or is the culprit somewhere else?


Type tags on vars are eval’d. Type tags on arg vectors and args are not


To be honest, I have no idea what that means. And the documentation page for type hinting doesn't mention anything of sorts, unless I'm going blind. What should I read to get a better understanding of how all this works?


I don't know any official docs that explain all about type tags.


This bit tells about how things like ^long are expanded into ^{:tag 'long}:


Yeah, that part I knew about. And it makes it even less clear how ^longs and ^{:tag 'longs} can be different.


A sentence in this part of the docs mentions that metadata on Vars is eval'd:


^long and ^{:tag 'long} can be different because long eval's to the function in clojure.core named long. Same goes for longs


This documentation for the Eastwood linter has some details about type hints that are wrong, i.e. ignored by the Clojure compiler, but the Clojure compiler gives no warnings or errors about them:


Right. So, it's eval'ed to the function. And the emitted bytecode now contains incompatible types (as demonstrated in the example with doubles and OLLO). So I guess type hinting vars that point to functions, with symbols that evaluate to functions, should be considered undefined behavior?


The Eastwood documentation in that section mentions a couple of CLJ-<nnnn> tickets, one or more of which have been fixed since that documentation was written, so examples related to those bugs may no longer apply to the latest version of Clojure


type hinting vars with tags that end up being eval'd to functions gives you useless type hints that the Clojure compiler silently treats as if they weren't there at all


I believe the Clojure compiler sees the tag, sees that its value is not something it recognizes as a type hint, and just ignores it


> the Clojure compiler treats as if they weren't there at all
That's not the case - again, judging by the doubles + OLLO types incompatibility example.


I could very well be wrong in my guess that the compiler ignores it in all cases. I think that at least in some cases it ignores it. Again, there is no definitive documentation I am aware of that covers all cases of type hints.


other than the compiler source code


Yeah, this makes this thing far from being trivial, especially given that anything can potentially change in this area, as indicated by the outdated Eastwood documentation.


When you really want careful control over every arg type and return type, I believe it is reasonably common for people to drop down into Java and write an interface and/or method definition there, then use that from Clojure.


It has changed across Clojure versions, but on the order of years rather than months. Consider that the Eastwood documentation was probably written 5 years ago, and is still mostly correct 🙂


Yeah, that's what I may end up doing. For now, I want to have at least some understanding so that I can make such a choice easier in the future.


I believe the only thing it is incorrect about is that the things it describes as bugs, some are no longer bugs.


And the links to CLJ-<nnnn> tickets makes it easier to look at the tickets to see which have been merged into Clojure, and which version, if you want to track that down

Alex Miller (Clojure team)15:09:50

specifically here - this should be (defn get-a ^double [...]


If you want return type double, either put ^double on vector or {:tag 'double} on the var. For primitive return type, I am not certain that the tag on var works. Not at repl to try at the moment


Given all the above, it seems like if a function returns double, then I should tag its argument vector. And if it returns longs, I should tag the var itself, either with ^"[J" or with ^{:tag 'longs}.
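Spelling out the two placements as a sketch (^"[J" is the JVM class descriptor for long[]; function names are made up):

```clojure
;; primitive return type: the hint goes on the argument vector
(defn square ^double [^double x] (* x x))

;; primitive-array return type: the hint goes on the var name,
;; using the array class descriptor (arrays are not primitives)
(defn ^"[J" zeros [^long n] (long-array n))

(square 3.0)        ;; => 9.0
(alength (zeros 4)) ;; => 4, without a reflection warning
```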


Arrays of primitive are not themselves primitive types, could be a difference that matters here


If you want to volunteer to write the definitive docs for how to type hint Clojure code, you are on your way :)


Oh god no. I am already seriously contemplating just writing Java code and using virgil to load it in runtime.


There's also this whole thing where "only longs and doubles are supported" on top of "every index-based function accepts only integers".


And (loop [i (int 0)]) emits i as a long. And probably a 100 other things that I just glanced over.


In fact, it emits it as final long i = RT.intCast(0L).


there's a ticket + patch to fix this


To circumvent that issue with longs and indices, I decided to use macros. Needless to say, now my left foot is gone:

(defmacro m [v]
  `(aget ^doubles (nth ~v 0) 0))
=> #'[...]/m

(macroexpand '(m [(double-array 1 0)]))
=> (clojure.core/aget (clojure.core/nth [(double-array 1 0)] 0) 0)

(m [(double-array 1 0)])
Syntax error (IllegalArgumentException) compiling . at (/tmp/form-init9254306255426990812.clj:1:1).
Unable to resolve classname: clojure.core/doubles



^{:tag ~'doubles}


Thanks! I think I understand what's going on. Does it mean that I should always use the verbose metadata form with quoted symbols in macros?


@U2FRKM4TW Sorry, I understand your pain here, but on first reading your statement "To circumvent the issue with long and indices, I decided to use macros", I had to chuckle. I can't think of a comparable statement from literature, but "out of the frying pan, into the fryer" sounds close 🙂


Many things you want to do are, I am sure, doable, but are on the hairy edge of places where dropping down into Java for writing interfaces and/or wrapper methods might be less trouble.


But if you have bronsa's and cgrand's attention, they can tell you if there is any possible way in Clojure, if anyone can.


:D Yeah, I realize that. But it's an educational process, albeit a bit painful one.


Does anyone have any Clojure backend application specifications/standards they could share? For instance: • App should support nREPL via either cider-jack-in or via command line • App should automatically reload initialized runtime systems on file-save -- via Component, Mount, or some other mechanism. (*Edit: this should be disableable) • App should support taking all configuration from parent $env • App should have runtime prod REPL accessible for emergent response


At work we do not use nREPL and we avoid any sort of refresh/reload stuff -- so bullets 1 & 2 for us would be


A list like this would be handy for my team; we've got some older services which leave much to be desired as far as the development experience. We can agree on a lot of points, but we don't really have a "gold standard" for a Clojure app. A few years ago I really liked , but it's a little out of date now (is there a deps.edn version?)


For configuration, we generally prefer external EDN files so they can be managed separately from the code itself but still have a tangible, editable, and versionable format.


And, yes, we have the option to start a Socket REPL for any process -- our "service" shell that wraps our JAR apps, looks for a <name>.jvm_opts file at start up and uses those JVM options, so we have several processes in production that always run with Socket REPLs on known ports (accessible only over the DMZ via SSH tunneling).

👍 6

As a side note, I never really understood the prod REPL runtime accessible thing. How are you applying a patch to all replicas in your cluster?


@seancorfield I would prefer to not support autorefresh/reload -- but it's used by some of the less experienced Clojure devs on my team and don't think they'd welcome its removal. Do you deploy to Kubernetes? Curious to see if people's choice of runtime orchestrator (or lack thereof) affects this


... or is it purely for inspecting the state of a particular running jvm?


@U083D6HK9 It's more of a "situation FUBAR, break glass" thing. You can debug a troublesome production state in the middle of an emergency... not pleasant, but better than being in the dark. If you edit the Kubernetes resource definition you can pull a pod out of a pool so it doesn't receive incoming requests -- then just delete it afterwards.


@U083D6HK9 We generally use direct linking for production artifacts so applying live patches is limited (we have one process where we avoid direct linking, specifically so we can patch it live without a restart, but it's a singleton, and internal-facing). Since we can just "press a button" to initiate a rolling deployment of fixes, we don't live patch on the customer-facing cluster.
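For anyone unfamiliar with the direct-linking trade-off mentioned above, here is a minimal sketch of how it is typically enabled at AOT-compile time (the namespace `my.app.core` is hypothetical):

```clojure
;; Direct linking replaces var-indirection calls with static calls at
;; compile time, so compiled code can no longer be live-patched by
;; redefining vars -- which is exactly the trade-off discussed above.
;; It can be set via the JVM property
;;   -Dclojure.compiler.direct-linking=true
;; or per-compilation through *compiler-options*:
(binding [*compiler-options* {:direct-linking true}]
  (compile 'my.app.core))  ; hypothetical app namespace
```

This is why a process compiled without direct linking remains patchable from a REPL, while directly-linked production artifacts generally are not.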


We do use a Socket REPL in production extensively for debugging or running ad hoc analysis on processes and/or data.

💯 6

Oh. We do something similar @seancorfield. I thought folks were literally eval'ing new code in a REPL to apply a prod patch.


@U22M06EKZ I think I would be more inclined to limit auto-refresh/reload to more senior devs and instead mentor the junior devs on better REPL practices so those tools are not needed 🙂


^ I mean if you can hot reload your dev systems based on config changes... and you want near instant deploy times... and you're utterly insane, then sure, who needs to restart JVMs to deploy?


We deploy JARs to bare metal servers in a data center. We're moving to virtualization but that will still be treated as just "bare servers". We have a few pieces of infrastructure in the cloud but we do not plan to go down the Docker/K8s path. We use Docker for development, for the non-code processes (Percona/MySQL DB, Elastic Search, Redis).


@seancorfield Hmm, I see where you're coming from. I'd need to adopt more of an advocate role on my team for that to work... though I admittedly already fill this role most of the time. I think as long as it's toggleable I'd prefer having the feature over not having it, but I can see how it'd get Clojure greenhorns into trouble.


We deploy dockerized uberjars to k8s. I'd avoid k8s if you can 🙂

😆 3

@U083D6HK9 Well, we live patch some processes but it's not practical across a cluster. And our auto-deploy process is smooth enough and fast enough that it's better for us to actually fix an issue in dev, get it tested in staging, and then just "press a button" to get it on our prod servers.


@U083D6HK9 The warning is much much too late my friend

😢 3

That seems frightening to me @seancorfield haha. How do you deal with some servers behaving differently than others? Curious what sort of situation warrants that.


Sean -- I didn't quite read you correctly, sounds like you're not on Kubernetes but have some other internal platform? (S'okay if you're not allowed to share, I'm just curious)


We can do something similar with a rolling deploy but it is very obvious which servers are a part of which group.


@U083D6HK9 if I was actually going to automate this insane REPL patch deploy process, I'd have it patch the server's log version too


Our application will launch within a minute so applying a patch via a REPL in an emergency doesn't make sense for us. I could see it being more important if you have a long application boot.


so your logs would suddenly get a different field to search by after the code is applied. there's technically an intermediate state as you apply the code... so I guess you'd need to stop incoming requests for a moment to safely do it


Ah, yeah - that'd do the trick @U22M06EKZ.


I mean it sounds bonkers, but if you're willing to embrace it then maybe you could do some nifty stuff.


It's a hard nope from me 🙂 A quick app boot and gradual roll out seems to solve that use case entirely. Inspecting the runtime state seems like a good use case though.

👍 3

(And good luck writing up / explaining for your Root Cause Analysis after an incident)


Well some good thoughts here but we're pretty far from a gold standard. If anyone else has input or even a repo they feel has consistent "best practices", please chime in!


> App should automatically reload initialized runtime systems on file-save We also don't do this and I agree with Sean - improve your REPL usage 🙂


Oh yeah -- re: @U083D6HK9's comment earlier. The REPL patch rollout makes less sense when you've got fast "normal deployment" with infrastructure you trust. That said, I'm in a large enterprise. While we try to avoid them, there are still some hoops to jump through for deployments. There is the possibility of infrastructural outages that impede our ability to deploy at all, which could render the REPL the only available fix option for a time

🤪 3

^ Not trying to say this is a normal situation, but unfortunately I don't have guarantees as strong as being purely AWS based


On the topic of large enterprise -- I think that plays a little into our autorefresh/reload views. Some of my team were enterprise Java / Python devs walking into Clojure for the first time, so the tooling / REPL expertise varies wildly between teammates. However, you've both given me some food for thought. I do agree that we (namely myself) should do a better job sharing our Clojure expertise. Our previous "best practices & language" advocate was more philosophically a Java dev but moved into Clojure around 1.6 (which is why we have some oddities in our legacy services). He's since moved on and nobody has really taken up the mantle, though a boss implied that I should at one point. (Forgive the slight off-topic, but being around someone like Sean definitely makes you think you should do a better job of knowledge sharing.)


@U22M06EKZ Maybe get the company to sign them all up with and get them all to do the REPL-Driven Development course?


(serious suggestion -- it's a really good course!)


I really appreciate Eric's work in . It helped me a few years ago and I maintained a membership for a bit just to support it. I never thought of getting my business to buy in to it, though it seems obvious in retrospect. I'm chatting with a coworker and we'll come up with a plan to improve the expertise on our team


Work is paying for a monthly subscription for me, right now. I've been working my way through the three property-based testing courses -- which are also great overall.

👍 6

Just an FYI on what happened: I took your advice and asked internally. We do have several interested in Eric’s REPL course. I’m not sure whether we can get funding for subscriptions now, but I’m going to keep pushing on management until we do. Half a year max IMO. Good thinking @seancorfield, though now I owe you too many thanks to count. Do you have a monthly github funds/patreon style thing setup for donations?


Thank you, but I do not. Somehow it always makes me feel a bit odd when folks try to pay me for just being a decent netizen 🙂


Maybe one day we'll meet IRL at a conference and you can buy me a beer? 🍺


A beer is a fine alternative — I’ve been at “the Conj”s in the past and I should get the opportunity to buy you a pint eventually. With the donation offer — not to put anyone in an odd spot, I just look at it this way: knowledgeable people willing to field (many times dumb) questions serve to improve the community, which improves the language & libs, which benefits my job stability, salary, and ability to keep coding in a language I enjoy. I am privileged enough to be able to easily afford some donations, and if anything there are selfish incentives in supporting the communities, libraries, and tools I rely on. Personally I feel like too much of a cowboy to want to provide the support it takes for them to thrive, but I recognize my vested interest & feel that a financial contribution is sometimes appropriate for their time and effort.


Now that said — I can understand not feeling the need to take donations either! But I’m sincere in my offer to “put my money where my mouth is”, so perhaps that’s worth something by itself 🙂


@U22M06EKZ I figure that being a decent netizen floats all boats so my "reward" is a nicer community and better open source library choices 🙂

👍 3

One last update: I’ve secured funding from my internal management to cover 10-15 subscriptions to , enough for all our initial interested devs. We even had an interested manager joining the group. Excellent suggestion & I view this as a complete success!


Excellent! Your colleagues will gain a lot from that training!


I took this suggestion all the way. My org started a REPL Driven Development weekly meeting last October & just finished today. It's been great material for getting everyone on the same page & starting conversations on developer flow. My org is now continuing on to another course & we'll realize more long-term benefits. A+ recommendation, thanks again @seancorfield. I'll also second his recommendation for Eric's aforementioned course.



Sam Ritchie18:09:38

hey all, had a performance Q I wanted to poll the group about (haven't benchmarked this yet). I've implemented a generic arithmetic library that dispatches via multimethod, so users can extend many functions like add, div etc to complex numbers, vectors, etc.


hmm, somehow I thought numeric tower was that (multimethod-based math functions), but it is not

Sam Ritchie15:11:52

Weird, this just pinged me as a new message! Maybe you are thinking of clojure.algo.generic, which is totally that idea

Sam Ritchie18:09:27

The question is, for a tight inner loop where I know that I'm always going to dispatch to the same implementation, has anyone experimented with doing the multimethod lookup a single time, caching the returned function and using that?
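For illustration, the idea being asked about can be sketched like this (the `add` multimethod here is hypothetical, not from the actual library):

```clojure
;; A hypothetical generic multimethod dispatching on argument types.
(defmulti add (fn [x y] [(type x) (type y)]))
(defmethod add [Long Long] [x y] (+ x y))

;; Normal usage: dispatch (with the multimethod's internal cache)
;; runs on every call.
(reduce add 0 (range 1000))

;; The idea in question: hoist the lookup out of the loop with
;; get-method, then invoke the resolved implementation directly.
(let [add-longs (get-method add [Long Long])]
  (reduce add-longs 0 (range 1000)))
```

Note that the hoisted version bypasses dispatch entirely, so it would silently use the wrong implementation if the argument types ever changed inside the loop.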

Sam Ritchie18:09:24

👍 super helpful for benchmarking

Alex Miller (Clojure team)18:09:59

That’s how protocols work basically :)

Sam Ritchie18:09:50

yeah, I need multimethods here because I need to define how operations work between different types

Sam Ritchie18:09:19

at a first pass, it looks like it's faster to NOT do this, and rely on the multimethod's cache

Sam Ritchie18:09:51

imagine a generic GCD algorithm that works between two numbers of the same type: Int, Long, BigInt, BigInteger. Internally I need to run calls to remainder, abs, and zero?, which all have different implementations in cljs
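As a sketch of what such a generic GCD might look like (the `g-rem` / `g-abs` / `g-zero?` names are made up for illustration; the real library's names will differ):

```clojure
;; Hypothetical generic operations, each dispatching on argument type.
(defmulti g-rem   (fn [a b] (type a)))
(defmulti g-abs   type)
(defmulti g-zero? type)

(defmethod g-rem   Long [a b] (rem a b))
(defmethod g-abs   Long [a]   (Math/abs ^long a))
(defmethod g-zero? Long [a]   (zero? a))

;; Euclid's algorithm written purely against the generic ops, so it
;; works for any type that implements all three -- every loop
;; iteration pays for two multimethod dispatches.
(defn generic-gcd [a b]
  (loop [a (g-abs a) b (g-abs b)]
    (if (g-zero? b)
      a
      (recur b (g-rem a b)))))

(generic-gcd 48 18) ;; => 6
```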

Sam Ritchie18:09:26

I can run a benchmark, just wanted to see if this was an idea folks had tried before, or if the JVM figures this out and makes everything fast


Seems like clj-async-profiler is not very reliable. I have implemented a number-churning function in Clojure while groveling through all the type hinting, fed it to the profiler, and noticed that most of the time is spent in RT.intCast. "A great time to try a Java implementation", I thought to myself. Well, now that Java implementation shows almost the same time while the profiler pretends that the majority of time is now spent in a completely different call. Replaced lambdas with static functions in my Java implementation - yet another result with almost the same time that's allegedly taken by yet another call.

Alex Miller (Clojure team)21:09:02

I actually find the most useful thing to do to understand low-level primitive number stuff is to look at the bytecode being produced. Poor performance is caused most obviously by either reflection or boxed math, both of which are easy to see in the bytecode (with a little practice). The boxed number stuff can also be aided by setting (set! *unchecked-math* :warn-on-boxed) (which is some stuff I wrote for Clojure the last time I dealt with this in anger)
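A minimal REPL sketch of the boxed-math warning being described (the exact warning text varies by Clojure version):

```clojure
;; Turn on compile-time reporting of boxed math call sites.
(set! *unchecked-math* :warn-on-boxed)

;; Without a hint, `n` is boxed, so the comparison goes through
;; clojure.lang.Numbers and the compiler emits a boxed-math warning.
(defn sum-below [n]
  (loop [i 0 acc 0]
    (if (< i n) (recur (inc i) (+ acc i)) acc)))

;; With a primitive hint the warning goes away and the whole loop
;; runs on primitive longs.
(defn sum-below' [^long n]
  (loop [i 0 acc 0]
    (if (< i n) (recur (inc i) (+ acc i)) acc)))
```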

Lennart Buit21:09:18

There's also, not sure if it helps here, but supposedly it gives you a translation from Clojure to how it would look in Java, if reading bytecode is not your cup of tea

Alex Miller (Clojure team)21:09:37

that's probably not going to help you much

Alex Miller (Clojure team)21:09:00

some of the differences I'm talking about are not representable in Java, you really have to look at the bytecode

Lennart Buit21:09:04

Ah; then don’t mind me ^^

Alex Miller (Clojure team)21:09:23

but comparing the bytecode emitted from Clojure vs Java for similar code is often very instructive

Alex Miller (Clojure team)21:09:00

is more detail on how to use the warn-on-boxed thing

🆒 3
Alex Miller (Clojure team)21:09:02

this warning does not catch everything (in particular boxing happening on returns is kind of tricky and often won't be reported)

Alex Miller (Clojure team)21:09:11

but it does catch a lot of stuff


Thanks! I'll definitely take a look.

Alex Miller (Clojure team)21:09:25

and if you do the legwork to produce the bytecode, I'm happy to look at it and point out things (prob best to put it on though for something like that)

👍 3

:warn-on-boxed was complaining about one >=, so I replaced a couple of ^int tags with calls to int. And the execution time has skyrocketed from 4 seconds to 14. :) OK, time to sleep for now, but I'll definitely keep poking around.

Alex Miller (Clojure team)21:09:14

ha :) generally, Clojure is so focused on long and double as the main primitive types that focusing on longs is usually much better outcomes than trying to force ints


Indeed. But all my operations in this particular case are either on doubles or on indices of arrays or vectors, which all must be integers.

Alex Miller (Clojure team)23:09:32

Even so, you’ll find using longs may be better for the indices


Ran everything with criterium.core/bench:
• int: 3.255549 sec
• long: 3.686330 sec
• java w/ lambdas: 3.106545 sec
• java w/ static fns: 3.230163 sec
Not a huge difference, but ints win, and are almost as good as the Java versions. But I think that in this case sticking with Java might be better, unless I write some performance tests to guard against degradation from some tiny change. It seems pretty easy to get one with Clojure, even with all warnings turned on.


Oh wow. I managed to shave off another 0.6s from Java code executions by replacing Object[] with double[][] after I learned that using 2D arrays doesn't mean that all sub-arrays must be of the same length. No idea how to use something like double[][] in Clojure.

Alex Miller (Clojure team)12:09:56

You can use to-array-2d to construct them and aget takes multiple indices
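For reference, a quick sketch of the `to-array-2d` + multi-index `aget` approach being suggested:

```clojure
;; to-array-2d builds a 2-D array of Objects from nested collections;
;; the rows may differ in length.
(def grid (to-array-2d [[1.0 2.0]
                        [3.0 4.0 5.0]]))

;; aget accepts multiple indices to reach into nested arrays.
(aget grid 1 2) ;; => 5.0
```

As noted in the reply below this message, the elements here are boxed Doubles, which matters for a hot loop.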


> 2-dimensional array of Objects
It will contain boxed values, which will surely affect performance.

Alex Miller (Clojure team)13:09:03

Oh right, you can just create an object array of doubles arrays through - I think that’s how Java actually implements it

Alex Miller (Clojure team)13:09:01

There is no such thing as double[][] in the jvm


But there's the thing. Here's what I used before:

final Object[] a = new Object[n];
for (int i = 0; i < n; ++i) {
    a[i] = new double[i + 1];
}
// Just as an example of how I would write values.
((double[]) a[k])[m] = v;
Pretty standard stuff, I think. Then I wrote it as:
final double[][] a = new double[n][];
for (int i = 0; i < n; ++i) {
    a[i] = new double[i + 1];
}
// Just as an example of how I would write values.
a[k][m] = v;
And it became significantly faster.

Alex Miller (Clojure team)13:09:12

Well, I could be wrong :) haven’t looked at nd arrays in a long while


Huh. Oh maybe that's just JVM shenanigans, despite the best efforts of criterium. I can only reproduce it if the Java code also uses Clojure classes, even though the way I use them doesn't change at all between the Object and the double[] variants of the code. And the difference is more pronounced if the Object version is executed after the double[] version (although the Object version is still slower if it goes first). I also checked the generated bytecode, and the differences are negligible. I think that at this point I should just bury the optimization hatchet and settle on the Java double[] version.
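For what it's worth, a ragged array of primitive double arrays (class `[[D`, mirroring the Java `double[][]` version above) can be built from Clojure too; a sketch:

```clojure
;; into-array over double-arrays yields a true double[][] ([[D);
;; rows may have different lengths, just like in the Java code.
(let [n 4
      ^"[[D" a (into-array (map #(double-array (inc %)) (range n)))]
  ;; aget on the outer array returns a double[], which we hint
  ;; so that aset/aget on the row avoid reflection.
  (aset ^doubles (aget a 2) 1 42.0)
  (aget ^doubles (aget a 2) 1))
;; => 42.0
```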


Hi folks. Does anyone know of a Pedestal example that illustrates post routes with body-params? I can't figure out how to get at the posted data from within my route handler...


@michael401 I don't know how many people here use Pedestal, so maybe ask in the #pedestal channel?


ahh sorry 🙂. and thanks.

dpsutton21:09:55

has several post examples for /todo, /todo/:list-id


Thanks. It looked so different from what I was expecting a post handler to look like that I skipped over it. I suppose the magic is in (let [nm (get-in context [:request :query-params :name] "Unnamed List"), and it's not necessary to use body-params. I must have looked at some old example from elsewhere earlier.


I’m using reify and seeing some odd behaviour - it seems that method implementations are called at the reify stage, not when the method is actually invoked...? What am I missing? Does reify invoke all method bodies at construction...?

(defn foo []
  (reify clojure.lang.IDeref
    (deref [_] (prn ::huh?))))

  ;; invoke me, and observe that :foo/huh? is printed, even without dereferencing...?

=> #object[baz$foo$reify__34130 0x127ea46c {:status :ready, :val nil}]
=> nil


the repl prints objects after you define them, and IDerefs that are not IPending are automatically dereffed when printing (see {:status :ready, :val nil}; the nil there is the result of the deref)

😎 3

if you add clojure.lang.IPending to your reify you won't see the first prn


thanks for the hint - so I should implement realized? too somehow?


depends on what the behaviour of your reify should be, I was just showing why the prn is happening


but yes if you want to actually implement IPending you need to implement isRealized


thank you so much!


(defn foo []
  (reify
    clojure.lang.IPending
    (isRealized [_] false)
    clojure.lang.IDeref
    (deref [_] (prn ::huh?))))


no more printing!



