Busy escaping the comfort zone02:04:53

Hi Clojurians, I'm having issues with and JDK versions > 8. I'm using the Amazon Corretto JDK and was able to confirm that it doesn't work in any Corretto version > 8: • Amazon Corretto 8 -> working • Amazon Corretto 11 -> not working • Ubuntu OpenJDK 11 -> working • Amazon Corretto 16 -> not working. I've also found which seems to have been resolved a while ago. Any idea what the issue could be? Thanks

Alex Miller (Clojure team)02:04:30

“Doesn’t work” is not too helpful

Alex Miller (Clojure team)02:04:59

“I do x and expect y but see z” would be more useful


Does the code load without reloading on those different jdks?

Busy escaping the comfort zone02:04:29

Hi @alexmiller, happy to add more details 🙂 I've used this project and ran the following:

$ git clone 
$ cd re-share
$ lein repl 

; Cases where it worked with the expected result (Amazon Corretto 8 and OpenJDK 11)
re-share.config.secret=> (
:reloading (re-share.oshi re-share.wait re-share.core re-share.config.core re-share.schedule re-share.encryption re-share.config.secret re-share.spec re-share.log user

; Cases where it didn't work (Amazon Corretto JDK 11/16)
re-share.config.secret=> (
:reloading ()
I've also made sure that I'm using the latest namespace library:
$ lein deps :tree | grep namespace

 [org.clojure/tools.namespace "1.1.0" :scope "test"]
$ lein deps :tree | grep classpath 

   [org.clojure/java.classpath "1.0.0" :scope "test"]

Busy escaping the comfort zone02:04:06

The above did reproduce for me in one other project


I don't use reload a ton, but I have used it on a number of different Corretto installs just fine, and it isn't a particularly complex thing.


So my guess would be the behavior is the result of something else

Busy escaping the comfort zone02:04:17

Hi @hiredman did it work on versions 11/16?

Busy escaping the comfort zone02:04:37

I don't mind trying a clean project from scratch; it could be that I'm missing something


have you tried it having performed set-refresh-dirs beforehand? that way it won't try traversing the classpath


(set-refresh-dirs is very recommendable for other reasons anyway)
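The suggestion above can be sketched like this (a minimal sketch: it assumes tools.namespace is already on the classpath and that "src", "test", and "dev" are the project's actual source directories):

```clojure
(require '[clojure.tools.namespace.repl :as tns.repl])

;; Restrict refresh to known source directories so it never has to
;; traverse the whole classpath:
(tns.repl/set-refresh-dirs "src" "test" "dev")

;; Subsequent refreshes only scan those directories:
(tns.repl/refresh)
```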


My immediate guess is it has something to do with the user.clj you have in dev/ without listing dev/ in the project.clj anywhere


But it is tricky, with lots of lein plugins, and potentially more in your profiles.clj, to untangle this kind of thing


So the first thing is to disable all the plugins, both in the project and any user level stuff. Then get rid of the user.clj file, then see what is going on

Busy escaping the comfort zone02:04:03

OK, I've confirmed this on a clean project ( using two different JDK versions:

$ lein repl 
nREPL server started on port 33397 on host - 
REPL-y 0.4.4, nREPL 0.8.3
Clojure 1.10.1
OpenJDK 64-Bit Server VM 11.0.10+9-Ubuntu-0ubuntu1.20.04
    Docs: (doc function-name-here)
          (find-doc "part-of-name-here")
  Source: (source function-name-here)
 Javadoc: (javadoc java-object-or-class-here)
    Exit: Control+D or (exit) or (quit)
 Results: Stored in vars *1, *2, *3, an exception in *e

test.core=> (require '

test.core=> (require '

test.core=> (
:reloading (test.core test.core-test)

test.core=> Bye for now!
$ lein repl 
nREPL server started on port 36563 on host - 
REPL-y 0.4.4, nREPL 0.8.3
Clojure 1.10.1
OpenJDK 64-Bit Server VM 11.0.10+9-LTS
    Docs: (doc function-name-here)
          (find-doc "part-of-name-here")
  Source: (source function-name-here)
 Javadoc: (javadoc java-object-or-class-here)
    Exit: Control+D or (exit) or (quit)
 Results: Stored in vars *1, *2, *3, an exception in *e

test.core=> (
Execution error (ClassNotFoundException) at (

test.core=> (require '

test.core=> (
:reloading ()

Busy escaping the comfort zone02:04:40

I will remove my profiles.clj next


> So the first thing is to disable all the plugins, both in the project and any user level stuff. Then get rid of the user.clj file, then see what is going on
fwiw, it looks fairly safe to me; those plugins aren't exactly wild. "dev" seems to be missing from the :source-paths, and set-refresh-dirs is missing

Busy escaping the comfort zone02:04:14

The above project doesn't use user.clj; it's just an empty project calling the function directly


The confirmation on a clean project is not correct


You didn't do the same thing in both repls


Your second attempt is missing the require to load tools.namespace

vemv02:04:21

uses exactly my advice and comes from the t.n author. I'd recommend you save time and not go overboard with scientific debugging

Busy escaping the comfort zone02:04:49

OK, I think I got a lead: removing my profiles.clj did work:

    :user {
    :plugins [
       [mvxcvi/whidbey "2.2.0"]
       [io.aviso/pretty "0.1.37"]
       [cider/cider-nrepl "0.22.4"]]
      :middleware [
      :source-paths  ["dev" "test"]
    :dependencies [[io.aviso/pretty "0.1.37"]]
$ lein repl 
nREPL server started on port 45839 on host - 
REPL-y 0.4.4, nREPL 0.8.3
Clojure 1.10.1
OpenJDK 64-Bit Server VM 11.0.10+9-LTS
    Docs: (doc function-name-here)
          (find-doc "part-of-name-here")
  Source: (source function-name-here)
 Javadoc: (javadoc java-object-or-class-here)
    Exit: Control+D or (exit) or (quit)
 Results: Stored in vars *1, *2, *3, an exception in *e

test.core=> (require '
test.core=> (
:reloading (test.core test.core-test)
My only guess is that one of the above deps is triggering this; I'll continue to debug, but this doesn't seem to be a core issue 🙂 Thank you @alexmiller @hiredman @vemv

Busy escaping the comfort zone02:04:04

Thanks! The fact that it worked on some JDKs but not on others threw me off course


🙌 again, reading the t.n source I think that performing set-refresh-dirs would remove this point of friction altogether

Busy escaping the comfort zone02:04:41

Cool, I'll take a look into that, thanks again

🙂 3

I'm using a library that gives some of its vars names that include Unicode symbols. It works fine when I run it on my own machine, but elsewhere it throws a ClassNotFoundException when it tries to invoke one of these vars. Is there an environment variable I should be setting to ensure this works everywhere?

Noah Bogart14:04:19

what do you mean by "elsewhere"?


e.g. the official jdk base image

Noah Bogart15:04:03

does the error show the unicode in the symbol/variable name?


Yeah. e.g.

1. Unhandled java.lang.NoClassDefFoundError

Noah Bogart15:04:10

according to this (, you should be able to use them, which makes me think something else is happening


I think it's a LOCALE issue. Just trying to verify

Endre Bakken Stovner11:04:52

I want selmer.parser/render ( to act like the following:

(render "{{input}}" {:input ["a.txt" "b.txt"]}) => "a.txt b.txt" ;; does not work
(render "{{input.1}}" {:input ["a.txt" "b.txt"]}) => "a.txt" ;; works
Currently the first line would print:
(render "{{input}}" {:input ["a.txt" "b.txt"]}) => ["a.txt" "b.txt"]
since selmer just calls toString on each object in the map. I think I could achieve what I want by creating a custom vec-like object with a toString method like this: (toString [self] (str/join " " self)). But then the question is: how do I create a vec-like object?
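For what it's worth, such a vec-like wrapper can be sketched with deftype. This is a hedged sketch: JoinableVec is a made-up name, and it only implements the interfaces Selmer is assumed to touch (toString for rendering, Indexed/Counted for index lookups):

```clojure
(require '[clojure.string :as str])

;; Delegates indexed access to an underlying vector, but renders as
;; a space-joined string.
(deftype JoinableVec [v]
  Object
  (toString [_] (str/join " " v))
  clojure.lang.Counted
  (count [_] (count v))
  clojure.lang.Indexed
  (nth [_ i] (nth v i))
  (nth [_ i not-found] (nth v i not-found)))

(str (JoinableVec. ["a.txt" "b.txt"]))   ;; => "a.txt b.txt"
(nth (JoinableVec. ["a.txt" "b.txt"]) 1) ;; => "b.txt"
```

Whether Selmer's index accessor actually goes through Indexed is an assumption worth verifying before relying on this.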


Do you plan to use that custom vector right when passing the arguments to render, or somewhere up the callstack?

Endre Bakken Stovner12:04:11

Only in render 🙂


Then it will be both easier and simpler to just pre-process :input the way you need. E.g.:

(let [input (str/join " " ["a.txt" "b.txt"])]
  (render ...))

Endre Bakken Stovner12:04:44

That would break case 2 above:

would now become a not a.txt.


Ah, you want both to work with the very same :input value?

Endre Bakken Stovner12:04:58

I cross-posted this (see since no one in #beginners knew how to do it. I guess it is not a newb question.


It is not, you're right. :) And I have a strong suspicion that it might be an instance of


If you have two calls to render, just as in your example, I would simply pre-process :input for the one that doesn't use indexing. If you have both {{input}} and {{input.1}} in the same template, then I'd just add an extra parameter - either something like input-joined or input-1, and pass the new value along with the original input.


The only scenario where you might need to go with a custom vector is when you have no control over the template, and the template itself for some reason uses both {{input}} and {{input.1}} (which doesn't make much sense).


Yeah, I just got down to that message. :) I would use arg-name for a vector and arg-name-str for a string. Anything else would create implicit non-intuitive behavior that doesn't have the amount of value-add that justifies it, IMO.

Endre Bakken Stovner12:04:19

But if you were to do it?

Endre Bakken Stovner12:04:27

How would you do it?


> users would have to write it all the time
It's not a bad thing. I would just figure out what the least frequent scenario is, and change it (so either -vec or -str, but not both).


If I had no control over the template and it used {{input}} for implicitly joined strings and {{input.1}} for nth, I would write to that template library's maintainer and ask for the reasoning behind it. I'd try to discuss with them whether there's a better alternative that's backwards-compatible and doesn't have an implicit behavior. If that proved not to be fruitful, I would search for a different library or write one myself. I might be a bit biased, but I have had to deal with enough implicit magic behaviors to justify my aversion towards them.

Endre Bakken Stovner12:04:40

Hmmm, I guess vec would be the least common case. And avoiding magic is one of the reasons I like Clojure to begin with. I was just trying to copy behavior from snakemake that I had gotten used to.


So many people have gotten used to sh and bash. And they are just so horrible. :D No offense to anyone is meant, of course.

🐟 3
🐚 3

Perhaps fish is better - I've never tried it. But it can't solve the root problem of all the tooling that has been created around the same time, in the same ecosystem. E.g. any whitespace or quotes in file names can cause problems. -0 is a band-aid that can help you, but it's not great.


why would you not just use the join filter for the first case?


I didn't know about that filter (not that familiar with Selmer at all), but FWIW I think it's the best solution.
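Assuming Selmer's join filter accepts a separator argument the way Django's does (the exact filter-argument syntax here is an assumption, so check the Selmer docs), the first case could look like:

```clojure
(require '[selmer.parser :refer [render]])

;; Join for display; {{input.1}} elsewhere still indexes the
;; original vector, so no custom type is needed:
(render "{{input|join: }}" {:input ["a.txt" "b.txt"]})
```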

Ben Sless17:04:25

Is there a way to know during macro expansion if a symbol is expected to be primitive by the compiler even though it's not type-hinted?


the compiler doesn't look at things till after macroexpansion


which is to say, no, because there is no expectation at the point where macro expansion runs

Ben Sless17:04:43

When expanding a macro inside the scope of a function with a type-hinted primitive, I can see in &env that it's mapped to a Compiler$LocalBinding, which has a jc field (Java class) containing long

Ben Sless17:04:00

how unwise is it to take advantage of that?

Ben Sless17:04:25

Besides the pending execution for black magic?
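The &env trick being described could be sketched like this. It is a hedged sketch that relies on compiler internals (Compiler$LocalBinding's public hasJavaClass/getJavaClass methods), and primitive-local? is a made-up name:

```clojure
;; Inside a macro, &env maps local symbols to Compiler$LocalBinding
;; objects, which expose what the compiler knows about a local's type.
(defmacro primitive-local? [sym]
  (let [^clojure.lang.Compiler$LocalBinding lb (get &env sym)]
    (boolean (and lb
                  (.hasJavaClass lb)
                  (some-> (.getJavaClass lb) .isPrimitive)))))

(defn f [^long n] (primitive-local? n)) ;; body should compile to true
(defn g [n] (primitive-local? n))       ;; body should compile to false
```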


it depends what you mean by "take advantage", but it will likely break any other macro expanders that are not the compiler


(so if your editor provides some kind of macro expander functionality, or core.async's go macro, etc)

Ben Sless17:04:25

I'm experimenting with opportunistic inlining of equality checks

Ben Sless17:04:42

For example, (= "foo" bar) can be inlined as (.equals "foo" bar). Same for keywords. Numbers are trickier

Ben Sless17:04:54

With numbers, the only thing I can be sure of is: if the argument to the function is a type-hinted primitive, it would have thrown a runtime exception before reaching any equality check in the scope of the function if it wasn't a number anyway
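A minimal sketch of that inlining idea for the literal case (literal= is a hypothetical name; it only claims to match = semantics when one side is a string or keyword literal):

```clojure
;; Replace = with a direct .equals call when the left side is a
;; string or keyword literal, skipping the clojure.lang.Util/equiv
;; dispatch. The literal's type is known at compile time, so the
;; .equals call is non-reflective.
(defmacro literal= [lit x]
  (assert (or (string? lit) (keyword? lit))
          "lit must be a string or keyword literal")
  `(.equals ~lit ~x))

(let [bar "foo"]
  (literal= "foo" bar)) ;; => true
(literal= :foo :bar)    ;; => false
```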


the compiler already does a fair bit of that kind of thing if it can determine the types


and if it throws or not actually depends on how things are type hinted


in the general case type hinted code doesn't throw an exception if the type doesn't match


user=> ((fn [^String a] a) 1)

Ben Sless17:04:48

Generally, no, but primitives do, because invoke calls invokePrim

Ben Sless17:04:55

(only for double and long)


correct, and that is the only case


and there the compiler knows the type, so it is already doing a bunch of "inlining"


I think ClojureScript does more of that than Clojure

Ben Sless17:04:18

Huh, here's something weird


the call to clojure.core/= is replaced by a call to the Util/equiv static method


which the compiler may then replace with some bytecode

Ben Sless17:04:45


(defn foo [^long n] (== n 1))
    public static Object invokeStatic(final long n) {
        return Numbers.equiv(n, 1L) ? Boolean.TRUE : Boolean.FALSE;
    }

Ben Sless17:04:14

On the other hand:

(defn foo [^long n] (if (== n 1) true false))
    public static Object invokeStatic(final long n) {
        return (n == 1L) ? Boolean.TRUE : Boolean.FALSE;
    }

Ben Sless17:04:23

Why does the second get inlined and the first doesn't?


it is because jvm bytecode is not expression based


the decompiled result looks the same, but one ternary operator is the result of boxing, and one is the if

Ben Sless17:04:33

Yeah, but the first case has an extra method call

Ben Sless17:04:04

I can share the bytecode, too, if you'd like


the "extra" method call is what I am referring to

Ben Sless17:04:43

ok, sorry, missed your intention there


there are two different sets of byte code intrinsics the compiler has


one is purely replace this static method call with this sequence of bytecodes


so the bytecodes need to produce the exact same result as calling the static method would


the other is when compiling an if: if the test is a matching static method, replace it with this series of bytecodes


Pretty much every bytecode VM (and CPU) has that thing where comparisons are branches instead of producing a value


so when inlining if tests, you don't actually have to match what the static method would do, you just need to match the branching behavior


which is why those two sets of intrinsics are different


But e.g. the Lua compiler will emit the same bytecode for (== n 1) as for (if (== n 1) true false) (too lazy to write the Lua equivalents)


I am not saying it couldn't do the swap


I am saying there is a reason there are two different sets of intrinsics


the if intrinsics have a match for Numbers.equiv, the non-if intrinsics don't

Ben Sless17:04:34

Because the value produced by Numbers.equiv would be different?


It would not be as straightforward to use the branch intrinsics for returning the boolean instead of just when the source has a branch but it would be correct and feasible


the non-branching intrinsics just need a different sequence of bytecode to match the behavior of Numbers.equiv outside of an if


but, the question as always is "why?"


I haven't looked at the inlining specifics in a while, it may also be the case that the if intrinsics and the non-if intrinsics can conflict


if you inline the non-if version, then the if inlining can't be applied


The intrinsics are more a case of "it would be silly not to use these specialized bytecodes if the types are known" than "let's add a bunch of optimizations, it will be awesome"

Ben Sless18:04:21

(defn foo [^long n] (== n 1))
Doesn't match this case?

Ben Sless18:04:06

Because we need to return Booleans?


The compiler is pretty stupid, so it only uses IFNE et al. if it is compiling an if (and maybe case, but let's not go there)


As I said, it would be correct and feasible to use IFNE to implement your example as well, but no one has bothered


When the compiler gets the correct primitive overload of the static Util.equiv perf should be fine because all dispatch is gone

Ben Sless18:04:56

That's getting into JVM call conventions which I'm far from an expert on


Don't freak out


Just read the Java in clojure.lang.Util.equiv(Object, Object) vs. e.g. clojure.lang.Util.equiv(long, long)

Ben Sless18:04:37

I have, but there's distance from that to understanding what bytecode will run


That distance is pretty small, javac is quite stupid as well

Ben Sless18:04:24

Mainly, the difference between calling n1 == n2 between two longs and Util.equiv(long, long) between two longs


after the jit?

Ben Sless18:04:31

I guess the JIT will eliminate the static method call?


Before jitting perf is terrible anyway and the JIT will inline that call (always, I think, because the method body is so small and even static)

Ben Sless18:04:08

So you're saying not to sweat the little things


they are different small things


Using static types to get rid of branches and virtual calls matters, but inlining by itself is not so useful because the JIT does it so much better anyway

Ben Sless18:04:07

I recall there's a limit to the method size which can be inlined by the JIT. Will this be affected by inlining directly as == vs. equiv?


equiv(long, long) should be below the size that is always inlined by the JIT. I don't recall a max inline size; if the method is bigger, like equiv(Object, Object), inlining depends on profiling data ("hotness"). But if there is a big method, it contains a lot of code to optimize by itself, and the call overhead is not as significant (e.g. it even makes sense to call the big long FORTRAN matrix procs through FFI).


It is complicated, but IFNE vs. a static call to equiv(long, long) should not matter after jitting


In Lua they have a bytecode that implements polymorphic equality directly instead of the equality operator being a function, so they must do the transformation



(defprotocol Eligible
  :extend-via-metadata true
  (make [this] :bar))

(defn eligible? [x] (satisfies? Eligible x))

(def x (vary-meta () assoc `make (fn [_] :made)))
I was a little surprised by this:
(eligible? x) => false
Am I missing something?
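To make the surprise concrete, this sketch reproduces the behavior (the defprotocol is cleaned up slightly from the snippet above):

```clojure
(defprotocol Eligible
  :extend-via-metadata true
  (make [this]))

;; Extend via metadata, using the fully qualified method symbol:
(def x (vary-meta () assoc `make (fn [_] :made)))

;; The protocol method itself dispatches through the metadata:
(make x) ;; => :made

;; ...but satisfies? only checks classes and extenders, not metadata,
;; hence the surprising false:
(satisfies? Eligible x) ;; => false
```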


Thank you very much @U5K8NTHEZ. I’ll follow that ticket.


There are several discussions about how to use fully qualified keywords, and even in the Clojure spec Rationale[1] there is a quote: > These are tragically underutilized and convey important benefits because they can always co-reside in dictionaries/dbs/maps/sets without conflict It makes me feel somewhat guilty of not understanding something very important, a thing that I must understand. I see a lot of experimentation on how to use them, with a lot of learning, but it's hard to interpret what the core Clojure maintainers expected when they created fully qualified keywords. It's much easier to get a glimpse of how something helps solve real issues through examples, so my question is: where can we find more substantial resources with examples that cover how and when to use fully qualified keywords? What kind of practical problems do fully qualified keywords solve, beyond the abstract notion? 1:


I think, basically, it is rdf


which I guess is clear as mud


to some degree, my impression, is that rhickey's preferred data model is something like rdf (or a triple store or whatever, I mean look at datomic)


so you can imagine a given map is a set of rdf triples, with an implicit subject


a big difference between rdf and something like sql is column names in sql are implicitly namespaced by the table the column appears in

Alex Miller (Clojure team)20:04:08

What is there to say about it? Sufficiently qualified attributes allow you to assign global meaning to attributes


rdf doesn't have that, everything is just a soup of attributes, so you have to do things like use urls as attribute names to avoid two "name" attributes of the same thing conflicting


so similarly with maps, if you want to merge two random maps together, the odds of that working with namespace qualified keys is much higher than without
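A concrete illustration of the merge point:

```clojure
;; Unqualified keys collide on merge: one value silently wins.
(merge {:id 1 :name "Alex Miller"}
       {:id 23 :name "Clojure Applied"})
;; => {:id 23, :name "Clojure Applied"}

;; Qualified keys co-reside without conflict:
(merge #:author{:id 1 :name "Alex Miller"}
       #:book{:id 23 :name "Clojure Applied"})
;; => {:author/id 1, :author/name "Alex Miller",
;;     :book/id 23, :book/name "Clojure Applied"}
```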


It'd be nice if there were some books or blog posts with real examples @alexmiller; I ask for the whole community. I made an exhaustive search and just found toy examples


or experiments, but it'd be nice to see if someone has produced something along the lines of a real use case. Videos are welcome too.

Alex Miller (Clojure team)20:04:11

I’m not sure what needs to be said that’s interesting


it is the same reason spec is the way it is, you globally say "such and such a key is such and such a thing", you don't say "in some context such and such a key is such and such a thing"


Yes, I think that you already said all about it, I'm looking for good references of real use cases in application code.

Alex Miller (Clojure team)20:04:41

most applications (uses) are not open source so this is not generally something easy to come by

👍 3

I know that @wilkerlucio uses it a lot in pathom, but he maintains library code


being global, context free, requires disambiguation


I guess you mean require no disambiguation?


depends how you think about it


if you imagine names are ambiguous by default, then they must be disambiguated before they can be used unambiguously


so maybe "being global, context free, requires unambiguous names"


makes sense


In application code you can totally control the data, so it is not as useful.


I don't know that I agree


large applications deal with a lot of data


not only from multiple sources, but from different eons of development of the same application


being able to mix and match all that is very useful


yeah, also, it's hard to live in complete isolation. Namespaced attributes (when big enough) remove the question of "what is this data about?". IME small programs hardly ever even think about this concern, but as a system grows and you have many services with many teams integrating stuff, having those unambiguous, tractable attributes can help tremendously


I think it's sad our industry got so used to local (short) names everywhere; it makes it a pain to compose systems at large


I need to meditate more on it, it's like a koan that I'll suddenly solve and eventually get the fqn nirvana 😄 :person_in_lotus_position:


@marciol I think it’s one of those things that you don’t really see the value of until you’ve been bitten by the problem it solves. At work, we use qualified keywords a lot.


Yes @seancorfield, it seems that we need to get the feeling after using it in anger to see the benefits and where it fits best


If everything is just a bag of properties, what next, a bag of bytes? "Use text, that is a universal interface". Just wire up some AWK and Perl scripts. troll

👀 3

not really, the idea (as I see) is about Entities, Attributes and Values.


with unqualified names, the context is given by something up (like a record, or a type, or something)


the idea with qualified names is that you can get rid of that "context" thing on the top, and consider that all attributes live in a flat world, like datomic and spec does


Perhaps this example will be motivating for you? next.jdbc qualifies column names by the table they come from when returning query results. If you join across two tables, both with an ID column and a name column, you will get two different qualified keywords and no conflict: {:author/id 1 :book/id 23 :author/name "Alex Miller" :book/name "Clojure Applied" :book/author_id 1}


Without the qualification, the author ID and book ID would collide and so would the book name and author name.


One issue with that approach: what if there is more than one book or author in one record? Just having a "type" namespace wouldn't scale. (e.g., a book with a person that wrote it, and a person that edited it)


It's a nice application @seancorfield


@isak Then there would be multiple column names — a database cannot have more than one name column in a table.


That next.jdbc feature is the most relevant application thing I have seen, but it kind of just exports the namespacing you get inside SQL


@nilern By default, yes. But you can provide your own (result set) builders that do something more sophisticated if you wish.


@seancorfield At my work we just do JSON SQL queries instead, so what comes back from the database would be {:author {:id 1, :name "Alex"} :editor {:id 2, :name "Rich"}}


Usually you don't SELECT * on a join anyway, because that data is probably going to a JSON response, so you want to control what gets sent out; and namespaced keys break down with JSON and random client code


We do not send qualified keys back to client code. But we do a LOT of queries internally that stay entirely inside our apps and the qualified names help avoid conflicts.


To be honest, I was quite happy to use next.jdbc in a project... but the experience was not that good. Most of the time I spent converting qualified keywords to JSON and back, and in the end I tried to make a builder-fn that would return the query exactly how @isak showed. How did you do it? It would help me a lot with my code here 😄


@U3Y18N0UC Here is an example for SQL Server

declare @People as table (
    id int primary key,
    [name] nvarchar(255),
    dad_id int
);

insert into @People([id], [name], dad_id)
values (1, 'Bob', null),
       (2, 'John', 1),
       (3, 'Jack', 2),
       (4, 'Jill', 2);

select a.*,
    JSON_QUERY((select * from @People b where a.dad_id = b.id for json path, without_array_wrapper)) as dad,
    (select * from @People b where a.id = b.dad_id for json path) as children
from @People a
where id = 2 /* John */
for json path
=> [{:id 2, :name "John", :dad_id 1, 
     :dad {:id 1, :name "Bob"}, 
     :children [{:id 3, :name "Jack", :dad_id 2} {:id 4, :name "Jill", :dad_id 2}]}]


Ah, right, I see. Yeah, that I can't use because I'm stuck with MySQL 😞


Ah bummer. I know it also works in Postgres, but I don't think it works in MySQL yet, unfortunately


Yes, I thought you did it by using some :builder-fn on next.jdbc


I have a hard time following code where the set of keys keeps changing although it is kind of idiomatic to be flexible with that


afaik JSON has no problem with "foo/bar" keys


@borkdude except that nobody uses it 😅


A lot of Clojure programmers allocate way too much mental overhead by using different names for the same thing. This is one of the appealing and powerful attributes of spec. Global semantics + global names. I have seen a lot of CLJ + CLJS applications hurt themselves by using JSON on the wire.


I think to interpret your data correctly, you almost always need to know 1) which view you are looking at, and 2) which path into that view you are looking at. And if you know both of those things, do you still need namespaced keywords?

☝️ 3

nah, that is the whole point to using fully qualified keywords


the view and the path are both context


used to disambiguate ambiguous names


Right, but it doesn't scale when you have more than one X per Y, you need the path in such cases anyway


when you have self joins


and that depends too, but yeah, definitely not as automatic


Yea, or even just when you just have a book that was edited by one Person, and written by another Person


that is a self join


you are joining a person to a person


No, book to person twice


that is the same thing as a self join


this is kind of the thing that Pathom tries to help with: once you have qualified (unambiguous) names, Pathom allows you to define how they relate to each other, and then you can ask for data by using pure inputs and outputs (I have this data, I want this data shape, go!)

👀 3

you join book and person once to get a record that is an amalgam of book and person


and then you join that to person again


it is a self join


the fact that the names conflict tells you it is


self-join means the same table twice, no? Otherwise we would just say join


if you don't like self join, you can say "merging a person record into a record that another person record has already been merged into"


Example query:

select a.id as book_id, b.name as author_name, c.name as editor_name
from book a
left join person b on a.author_id = b.id
left join person c on a.editor_id = c.id


person c is joined to the join of person b and book a


so there already is a person in the join, so joining a person again is self join


Ok, but it could have been written as subqueries, and since it they are left joins it would always give the same answer


when I say "join", while it does map cleanly to the database example with literal joins, I mean it abstractly as some operation that combines (joins) disparate records into new records


ah, fair enough :thumbsup::skin-tone-2:


so it doesn't matter to me if join appears in your query or not (heck, maybe the query planner will completely rewrite your query anyway)


we use it at work in our (third party facing) APIs, we never had a complaint about it


Overall the namespaced keywords hype smells of RDF and the Semantic Web, not a good smell. They solve some naming clashes etc. but not everybody is doing Data Lakes or whatever


I have experience using namespaced keywords to handle large requests for a complex front-end. In this case it was a backoffice app that had to load dozens of widgets about a customer (personal info, account info, transactions, etc...), and the namespaced keywords allowed for a system where many teams add things on a daily basis, integrating multiple services. It worked very well (even though most people who end up working on this part of the system are from distributed teams, with no previous experience of doing things this way)


what mattered in the end is that when devs are writing a widget, the system allows them to just describe the data they need (using EQL), and not care about how it's fetched. To add new names to the system, people add resolvers to a single service, where all definitions are based on establishing relationships between the names


to me this is the experience that convinced me that this approach scales big time


lots of weak arguments in here


the Semantic Web, while a failed effort, got it very right to use names that indicate shared meaning


I think my issue with namespaced keywords is that nobody (except Datomic and Datahike, maybe) uses them. Once I tried to force myself to use them, and in the end I felt like typed OO again: I had JSON or XML, had to convert it to a map, then re-convert to qualified keywords, then convert back to non-qualified to persist in SQL... after some time I would have to query that DB, get non-qualified keys, re-qualify, work with them, de-qualify to send to an API...


did you feel the same when using Pathom?


Well, Pathom is an exception to the rule 😄


It's also open-source, and it's quite easy to just use it "inside Pathom" and then have a resolver for a :json/payload that contains unqualified keywords in the format that you expect some API to use, so it's not really a big deal


I think that it's because Pathom gives us real examples of how to leverage FQNs in its documentation


When I forced myself to use qualified keys all the way, I found that a simple change (like, for example, adding a field to a payload) would propagate into multiple files, payloads, formats, and converters, and after we had to make the 5th "+40 lines" change to add a single attribute, we ditched it for a simpler approach (that is: coerce to a schema and work with that format all the way to the end)


BTW, I think Pathom didn't exist at the time 🙂

Alex Miller (Clojure team)21:04:54

s/JSON/Transit s/DB/Datomic qualified names through the stack


having explicit translation layers between input data, storage, and app logic is good actually


(forcing the db to hold OO logic etc. is not)


If my clients don't use Transit and my company doesn't use Datomic, then what? It's not that easy, especially Datomic being closed-source

Alex Miller (Clojure team)21:04:56

my point is that there is a unified way of thinking here to create a stack that respects qualified names

Alex Miller (Clojure team)21:04:21

we're trying to move the state of the art forward, not start from whatever broken substrate is the status quo


I sympathize with those ideas, and I remember Joe also wanted to push the industry toward a better way

💯 3

Yes. But should an open-source language as pragmatic as Clojure be opinionated toward using a thing that virtually doesn't exist outside that language? Especially when one of the pieces is closed-source?

Alex Miller (Clojure team)21:04:08

have you not noticed that Clojure's author is opinionated? :)


To be fair, it supports the other way very well also


trying to make the edges of your program look like its implementation is a pathology, that's how we got ORM, CORBA, etc - the data interchange format / storage do not need to look like the format your application uses internally

Alex Miller (Clojure team)21:04:20

the goal is not similarity, it's solving the problem of disambiguating and describing data


But is building the whole stack around qualified names worth it just to avoid mixing up :book/name and :author/name? I would rather just use static typing; it catches so many more errors with less effort...
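
A minimal sketch of the mix-up being discussed, using nothing beyond core Clojure (the book/author keys are just the example from the question):

```clojure
;; Unqualified keys collide as soon as two domains meet in one map:
(merge {:name "SICP"} {:name "Abelson"})
;; => {:name "Abelson"}   ; one value silently wins

;; Qualified keys from different domains coexist:
(merge {:book/name "SICP"} {:author/name "Abelson"})
;; => {:book/name "SICP", :author/name "Abelson"}
```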

Alex Miller (Clojure team)21:04:25

"is building the whole stack around qualified names worth it" - yes, that seems to be the foundation of good naming in every system I'm aware of
"so many more errors" - does it? (I seem to recall a lot of github issues on projects with statically typed languages)
"with less effort" - is it? (that does not match my experience)


Idiomatic Haskell and ML is nothing like the C++ and Java typing travesties


"If it compiles it just works" is a real thing (exaggerated of course). "Our map keys are context free" is pretty lame in comparison.

Alex Miller (Clojure team)21:04:16

it seems like you are comparing apples and staplers, I'm not sure what these things have to do with each other really

Alex Miller (Clojure team)21:04:30

statically typed languages have just as much need for disambiguation of names


namespaced kw's can be unambiguous across networks. if you're doing the same with ML you're basically using EDN


likewise, if you're trying to check your usage of namespaced keys at compile time you're basically inventing a type system.


thus having a system for disambiguating names isn't trying to be equivalent to a type system, and a type system isn't equivalent to having a notation for disambiguating names


(trying to reframe alex's point)


I was just comparing the cost/benefit ratio, not claiming that they are interchangeable


IME (I work mostly in distributed systems) fully qualified names have more benefit for me than static types. YMMV depending on domain


the problems that unambiguous names solve are more important to my domain than the problems that static typing solve


another strawman argument


static typing doesn't extend to the db or data interchange either


No, but getting statically typed access to those requires work


(disambiguation is not the only benefit)


Saying “qualified keywords aren’t worth the effort because they don’t solve 100% of the problem” is missing the point.


They add value wherever you need names to have a larger context than “just inside this one piece of data” — which means they add value in a pretty large space.


You can choose not to use them and justify it however you want, but you don’t have to work with me so we don’t need to argue about it 🙂

🙂 3

It seems like people are interested in using qualified keywords, but finding good guides, resources, and examples is still a challenge. Hopefully, that will improve over time.

👍 3

Going in and out of the DB using next.jdbc means I can use qualified keys in both directions (`next.jdbc` ignores the qualification going into the DB, for convenience, and qualifies with the table name coming out — again for convenience), and you can explicitly map those names to whatever you need. clojure.set/rename-keys and/or select-keys means this is a “one-liner” at the boundary between naming — which I would expect at a boundary returning JSON since you’re unlikely to be exposing your internal domain data 1:1 anyway.
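
A sketch of that boundary one-liner; the row shape follows next.jdbc's table-name qualification convention, and the target key names are made up for illustration:

```clojure
(require '[clojure.set :as set])

;; A row as next.jdbc would qualify it on the way out (table name as namespace):
(def row {:users/id 42, :users/email "jane@example.com"})

;; Explicit renaming at the JSON boundary, so internal names never leak 1:1:
(set/rename-keys row {:users/id :id, :users/email :email})
;; => {:id 42, :email "jane@example.com"}
```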


I wish we’d used qualified keywords from day one, at work, to be honest.


right - I think this points to the right focus - having good translation layers between input, logic, and storage, and namespacing of keys helps


If we rewound time and made it so that every json library for clojure didn't have an option to keywordize keys I think more codebases would be cognizant of it


I suspect if Clojure books and tutorials had used qualified keywords from day one, we wouldn’t be having this conversation 😉


@emccue Do you mean that JSON libs kept data structures with string keys?


yeah - unconditionally interning external input strings so that you can write (:key map) instead of (map "key") or (get map "key") blurs the line between "parsed data format" and "internal domain representation"
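
For illustration, the two shapes being contrasted, using core clojure.walk/keywordize-keys to stand in for a JSON lib's keywordize option:

```clojure
(require '[clojure.walk :as walk])

;; What a JSON lib hands back by default: string keys, clearly "external" data:
(def parsed {"key" "value", "nested" {"inner" 1}})
(get parsed "key")                     ;; => "value"

;; What keywordizing produces: it now looks like native domain data:
(def keywordized (walk/keywordize-keys parsed))
(:key keywordized)                     ;; => "value"
(get-in keywordized [:nested :inner])  ;; => 1
```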


I may be reading it wrong, but I took it to mean that the "keywordize" option in the json lib leads to codebases without an explicit data transformation layer between input and app logic - the keywordizing is like a classic 80/20 of that


I can see the argument for drawing a starker line between JSON format and Clojure hash map format.


^not that it's definitely a bad thing - but in the absence of educational materials that aren't either a rant on the clojure website or a rant published as a book, how libraries/existing code work is probably the strongest driver of how people use a language


and namespaced keys are where that fallback really becomes a pain point, since json doesn't have namespacing


(i guess and assert without evidence)


like, the better clojure books are "for the brave and true", which does a good job but it is just mechanics - how not what and why


There was a time when "Cheshire and MongoDB, no more translation layers" was the attitude


right, I've worked on apps that tried to do that and it works great until it doesn't work at all


and the clojure for web development book - which isn't prescriptive about much beyond how to glue together the libraries in a way that works


not sure where that was the attitude?


I worked with the maintainer of cheshire at a clojure shop for a bit, and the general attitude there (not sure what his attitude was) was virulently anti-MongoDB


That's because MongoDB is less a DB and more a long-running bit that's going to end with someone taking a bow and shouting "The Aristocrats"


Great obscure reference (at least in a Clojure channel); this gets my early vote for "Comment of the Year" 😂


so there are lots of attitudes


I get that "things that will exist in a global context should be qualified as such (ie full domain)" and "it can also be useful to qualify things categorically/by domain in our app" such as :user/email but I'm kinda lost at the idea of "json in clojure land should have used string keys by default". Does this imply that we should add qualifiers to everything at the boundaries of a program? Or is the point to keep keys as strings until making a decision about what qualifier a particular keyword should have?


I read it as an unfortunate side effect of a convenience - since keywordizing makes the incoming data look "native", people start structuring applications such that they lack the translation layer from the interchange format to the application


and that works until eg. you want to use namespaced keys inside the application


and you might get confused and think the solution is that the interchange format needs namespacing


there definitely is a tendency to look at json as a data structure and not a serialization format


really, the same with edn


Lately I've been experimenting with, when getting a stream of a lot of different k/v pairs from a source, just namespacing everything :source/x as a reminder, but I'm not sure how good an idea this is, because a :source/type won't always have the same meaning
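
A sketch of that tagging step (update-keys needs Clojure 1.11+; the "source" qualifier is the hypothetical feed name). It keeps the doubt above intact: the qualifier records provenance, not meaning, so :source/type can still denote different things.

```clojure
;; Blanket-qualify every key from an external feed with its source:
(defn tag-source [m]
  (update-keys m #(keyword "source" (name %))))

(tag-source {:type "car", :id 7})
;; => {:source/type "car", :source/id 7}
```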


An interesting approach, as it denotes the origin of the data, and that seems to be exactly the main objective of qualified keywords: to tag data origin.


But on the other hand it doesn't really adhere to the spec philosophy, because a particular keyword will not correspond to a single semantic meaning


unless you carry this keyword across the system


but they might have :source/type mean "event topic" in one spot and :source/type mean "car or truck" in another context


because to them "type" is just a local name and the context is required


provenance is one use of being unambiguous but IMO not the way it always should be used


I typically use qualified keywords to denote the "domain" of the attribute. the source might be relevant or not


e.g. I might load information about a user's account from three different sources; they are all about the same domain (a user) so they should all share a namespace


this is very powerful because it means that my code is generalized across sources; it doesn't matter if I load it from cache, from disk or across a network, if I have an :myapp.account/id I treat it all the same
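
A sketch of that generalization; the loader names and :myapp.account keys are illustrative, not a real API:

```clojure
;; Hypothetical sources, all speaking the same domain vocabulary:
(defn from-cache [] {:myapp.account/id 1, :myapp.account/email "a@example.com"})
(defn from-disk  [] {:myapp.account/id 1, :myapp.account/plan :pro})

;; Downstream code keys on the attribute, not the source:
(defn account-id [account] (:myapp.account/id account))

(account-id (from-cache)) ;; => 1
(account-id (from-disk))  ;; => 1
```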


Nice @U4YGF4NGM, I'm starting to understand what you mean. So if you have a namespace with all the functions that handle operations on users, you can denote user information with keywords from this namespace


So that you can know about the namespace from which a specific keyword originated


Interesting point!


that is one thing you can do yes


this is very similar to what I was accustomed to in Elixir, but in Clojure terms it is much more powerful, and somewhat strange 😅


Right to the point @emccue, it'd be nice to have more resources about the Clojure way to solve real problems, at least as the maintainers imagine it should be, because otherwise we see this fragmentation. Maybe it's a tragic fate of Lisps.

Alex Miller (Clojure team)22:04:32

I think there are a wide range of resources about this already


Yes, it'd be nice to get an index. I made an index of articles, some from 2016, etc., but it seems that they are all superficial


can you list a few?


yes @wilkerlucio, most talk about spec, which is a related subject


top two links are recent discussions, last one was an article that led to a few discussions (on reddit/clojureverse)


I mean, I understand we have these discussions on the topic, but I was trying to think of resources more like books, articles, and things that explain via example the idea of using qualified names from the beginning


because I guess for somebody looking to learn, just reading peoples experiences (in the middle of discussions) is not enough

☝️ 6

Seems that I got at least half of the important content @alexmiller 😅


But it's not easy to extract useful content from real work. Here at the company where I work, we are doing very interesting things with @wilkerlucio Pathom, but we can't find the time to write about how Pathom is helping us solve complicated integration problems.


nice collection here :)

Alex Miller (Clojure team)23:04:54

I always forget that one

Alex Miller (Clojure team)23:04:18

if someone wanted to bundle these links together nicely and add a PR for the end of the guide, that would be cool


Thank you @alexmiller This was my original question after all


I can do it


Here we go @alexmiller I included all the content you suggested and excluded all the blog posts I found except this: I included the post above because it seems to be a good source of information, and excluded the rest because I think it's better to curate them first and verify that they are good sources of information.

Alex Miller (Clojure team)01:04:12

Thanks, were you going to sign the Contrib Agreement so I can merge?


Yes, I'll sign

Alex Miller (Clojure team)01:04:15

I will probably move it somewhere else but would like to merge your contribution first


My first contribution, yay!

clj 6
🎉 3
Alex Miller (Clojure team)02:04:13

I moved the resources over to and linked from the rationale and guide, I think that's better


Thank you!

Ben Sless07:04:15

One thing I find I'm missing here is the theoretical underpinnings, i.e. stuff from RDF land. I think it might help me get into the proper mindset. Are there any resources you can recommend on the subject?

Alex Miller (Clojure team)14:04:56

my own experience has been that the RDFS stuff is pretty useful, but OWL (which lets you do more reasoning) is challenging to use other than in a very constrained domain

🎯 3
Ben Sless14:04:26

Thank you 🙂

Alex Miller (Clojure team)14:04:55

everything has to be set up "just right" and one set of contradictory statements can totally break your reasoner. the effort involved to set it up requires you to understand your domain so thoroughly that you could probably have written it in some other way that's not so brittle

Alex Miller (Clojure team)14:04:21

there are really a handful of interesting ideas in RDF+RDFS - global identifiers (good idea, url impl super cumbersome), facts as EAV, the importance of A over E

Alex Miller (Clojure team)14:04:22

we built some really great RDF libs at Revelytix in Clojure, afaik that IP never escaped

Alex Miller (Clojure team)14:04:17

we had some cool things to make clojure-y views over rdf that made it actually tolerable to accommodate all the namespacing stuff

Ben Sless14:04:52

unfortunately I don't think I'm familiar enough with RDF yet to appreciate this

Alex Miller (Clojure team)14:04:15

also a federated SPARQL engine written in Clojure


it's an interesting domain, and I'm still thinking about how to apply it conscientiously when dealing with the external world, which is generally messy


my gut feeling says it's a great tool to use in these scenarios if you know how to leverage it


but I am still connecting the points


I'm thinking of a keynote or interview where Rich said that RDF can be used to join data across several databases, something that usually happens during acquisitions


This is the sort of messy real-world problem that RDF seems to offer a good set of tools to deal with


But more interesting is the fact that, these days, maybe this kind of problem isn't so prevalent anymore? It's a random guess, given that my background is working at startups and small companies.


But @wilkerlucio Pathom is a really interesting way to apply RDF ideas in a set of problems that I'm facing right now: how to compose and merge information from several legacy systems in a healthy way.

Alex Miller (Clojure team)22:04:42

As much as it’s possible to do, given that “real” systems are their own universes that live in the context of companies and groups of developers with histories, and thousands of micro decisions


Yes, so we can only hope for some future keynotes to share knowledge of dos and don'ts

Alex Miller (Clojure team)23:04:24

well, that is in the past keynotes already


I see that you shared some. Thank you!


I don't think resources are the problem


There are tons of resources for learning and putting clojure into practice


and a bunch of conference talks about what the intent is behind stuff


But there is no effective aggregation of that information


we don't have a "The Rust Programming language"


if that makes sense

Alex Miller (Clojure team)23:04:10

"Programming Clojure" is as much that as anything


I don't mean in terms of a language tutorial


wow @alexmiller, I have the 2nd Edition here

Alex Miller (Clojure team)23:04:29

the 3rd covers spec and Clojure CLI


In the rust community the language is a complicated beast


so that's basically all it covers


but I mean in terms of a collaboratively built central resource


programming clojure is the only book about clojure on rich's clojure bookshelf (this is, of course, not fair because I think it was the only book about clojure when the bookshelf was created)

Alex Miller (Clojure team)23:04:28

imo, there is no one right answer to this. everyone comes to Clojure with a different background, wants a different thing, is building different things. there are resources to answer almost any question you might have. it's impossible to organize them all in some way that makes sense to every person at every point in their path of knowledge. I've never seen that in any language community I've intersected with (other than communities so small that a single person can write a single definitive resource)

👍 6
Alex Miller (Clojure team)23:04:34

in the last decade, there have been at least a dozen, probably many more, attempts to create such a thing. they are all helpful. they are all missing important topics. they are all useful to some extent.

Alex Miller (Clojure team)23:04:14

the Clojure web site is open for issues and PRs. I have helped people edit a variety of things into a published state there over the years and would be happy to help do more of that (as time allows). most of the guides were contributed

Alex Miller (Clojure team)23:04:01

I'd be happy to entertain organizational and expanding ideas in the community area

Alex Miller (Clojure team)23:04:46

I actually did a big overhaul of the getting started area a couple months ago that is still pending a few things so has not yet been merged, but that's coming

❤️ 6
👍 3

The last Clojurists Together survey indicated they might want to explore more creative things to fund... maybe "The Encyclopedia of Qualified Keywords: from Theory to Practice" could be funded into existence

💯 6

Thanks @marciol (and everyone who subsequently weighed in) for posting the question that led to such a prolific and informative thread, ending with It reinforces how important qualified keywords are. The term "tragically underutilized" jumped out at me the very first time I read the rationale.

🦾 3

You are welcome @U01040R5CJY I was talking about what @alexmiller said: about how people from different backgrounds learn a subject in different ways, and how important it is to take that into account. I'm understanding, step by step, the importance of these concepts, and I think we need to make an effort to understand them and to communicate this understanding to others.