This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-04-05
Channels
- # announcements (12)
- # babashka (29)
- # bangalore-clj (3)
- # beginners (153)
- # calva (2)
- # chlorine-clover (46)
- # cider (11)
- # clj-kondo (21)
- # cljfx (20)
- # cljs-dev (3)
- # clojure (393)
- # clojure-australia (2)
- # clojure-europe (15)
- # clojure-spec (40)
- # clojure-uk (1)
- # clojurescript (3)
- # community-development (1)
- # conjure (2)
- # cursive (1)
- # data-oriented-programming (1)
- # datomic (7)
- # defnpodcast (2)
- # docs (2)
- # figwheel-main (5)
- # fulcro (52)
- # graalvm (2)
- # inf-clojure (21)
- # malli (10)
- # meander (6)
- # mid-cities-meetup (13)
- # nrepl (1)
- # off-topic (24)
- # other-languages (1)
- # pathom (3)
- # polylith (18)
- # re-frame (6)
- # reitit (11)
- # ring-swagger (2)
- # shadow-cljs (56)
- # specter (1)
- # xtdb (7)
Hi Clojurians, I'm having issues with clojure.tools.namespace.repl/refresh and JDK versions > 8. I'm using the Amazon Corretto JDK and was able to confirm that clojure.tools.namespace.repl/refresh doesn't work in any Corretto version > 8: • Amazon Corretto 8 -> working • Amazon Corretto 11 -> not working • Ubuntu OpenJDK 11 -> working • Amazon Corretto 16 -> not working I've also found https://clojure.atlassian.net/browse/TNS-54, which seems to have been resolved a while ago; any idea what the issue could be? Thanks
“Doesn’t work” is not too helpful
“I do x and expect y but see z” would be more useful
Hi @alexmiller happy to add more details 🙂 I've used this project https://github.com/re-ops/re-share and ran the following:
$ git clone
$ cd re-share
$ lein repl
; Cases where it did work as expected (Amazon Corretto 8 and OpenJDK 11)
re-share.config.secret=> (clojure.tools.namespace.repl/refresh)
:reloading (re-share.oshi re-share.wait re-share.core re-share.zero.keys re-share.config.core re-share.schedule re-share.es.common re-share.encryption re-share.config.secret re-share.spec re-share.zero.common re-share.log user re-share.zero.events)
:ok
; Cases where it didn't work (Amazon Corretto JDK 11/16)
re-share.config.secret=> (clojure.tools.namespace.repl/refresh)
:reloading ()
:ok
I've also made sure that I'm using the latest namespace library:
$ lein deps :tree | grep namespace
....
[org.clojure/tools.namespace "1.1.0" :scope "test"]
$ lein deps :tree | grep classpath
...
[org.clojure/java.classpath "1.0.0" :scope "test"]
The above also reproduced for me in one other project
I don't use reload a ton, but I have used it on a number of different Corretto installs just fine, and it isn't a particularly complex thing.
Hi @hiredman did it work on versions 11/16?
I don't mind trying a clean project from scratch; it could be that I'm missing something
have you tried it having performed set-refresh-dirs beforehand? that way it won't try traversing the classpath
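That suggestion could look something like the following sketch. The directory names here are assumptions; use your project's actual source/test dirs (requires org.clojure/tools.namespace on the classpath):

```clojure
;; Limit refresh scanning to known source dirs instead of the whole
;; classpath, then refresh as usual.
(require '[clojure.tools.namespace.repl :as tns])

(tns/set-refresh-dirs "src" "dev" "test")
(tns/refresh)
```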
My immediate guess is it has something to do with the user.clj you have in dev/ without listing dev/ in the project.clj anywhere
But it is tricky, with lots of lein plugins, and potentially more in your profiles.clj, to untangle this kind of thing
So the first thing is to disable all the plugins, both in the project and any user level stuff. Then get rid of the user.clj file, then see what is going on
OK, I've confirmed this on a clean project (https://github.com/narkisr/test) using two different JDK versions:
$ lein repl
nREPL server started on port 33397 on host 127.0.0.1 -
REPL-y 0.4.4, nREPL 0.8.3
Clojure 1.10.1
OpenJDK 64-Bit Server VM 11.0.10+9-Ubuntu-0ubuntu1.20.04
Docs: (doc function-name-here)
(find-doc "part-of-name-here")
Source: (source function-name-here)
Javadoc: (javadoc java-object-or-class-here)
Exit: Control+D or (exit) or (quit)
Results: Stored in vars *1, *2, *3, an exception in *e
test.core=> (require 'clojure.tools.namespace.repl)
nil
test.core=> (require 'clojure.tools.namespace.repl)
nil
test.core=> (clojure.tools.namespace.repl/refresh)
:reloading (test.core test.core-test)
:ok
test.core=> Bye for now!
$ lein repl
nREPL server started on port 36563 on host 127.0.0.1 -
REPL-y 0.4.4, nREPL 0.8.3
Clojure 1.10.1
OpenJDK 64-Bit Server VM 11.0.10+9-LTS
Docs: (doc function-name-here)
(find-doc "part-of-name-here")
Source: (source function-name-here)
Javadoc: (javadoc java-object-or-class-here)
Exit: Control+D or (exit) or (quit)
Results: Stored in vars *1, *2, *3, an exception in *e
test.core=> (clojure.tools.namespace.repl/refresh)
Execution error (ClassNotFoundException) at java.net.URLClassLoader/findClass (URLClassLoader.java:471).
clojure.tools.namespace.repl
test.core=> (require 'clojure.tools.namespace.repl)
nil
test.core=> (clojure.tools.namespace.repl/refresh)
:reloading ()
:ok
I will remove my profiles.clj next
> So the first thing is to disable all the plugins, both in the project and any user level stuff. Then get rid of the user.clj file, then see what is going on
fwiw, it looks fairly safe to me; those plugins aren't exactly wild
"dev" seems missing from the :source-paths, and set-refresh-dirs is missing too
The above project doesn't use user.clj; it's just an empty project calling the function directly
https://github.com/stuartsierra/reloaded uses exactly my advice and comes from the t.n author. I recommend you save time and not go overboard with scientific debugging
Ok, I think I got a lead: removing my profiles.clj did work:
{
:user {
:plugins [
[mvxcvi/whidbey "2.2.0"]
[io.aviso/pretty "0.1.37"]
[cider/cider-nrepl "0.22.4"]]
:middleware [
cider-nrepl.plugin/middleware
io.aviso.lein-pretty/inject
whidbey.plugin/repl-pprint]
:source-paths ["dev" "test"]
:dependencies [[io.aviso/pretty "0.1.37"]]
...
}
$ lein repl
nREPL server started on port 45839 on host 127.0.0.1 -
REPL-y 0.4.4, nREPL 0.8.3
Clojure 1.10.1
OpenJDK 64-Bit Server VM 11.0.10+9-LTS
Docs: (doc function-name-here)
(find-doc "part-of-name-here")
Source: (source function-name-here)
Javadoc: (javadoc java-object-or-class-here)
Exit: Control+D or (exit) or (quit)
Results: Stored in vars *1, *2, *3, an exception in *e
test.core=> (require 'clojure.tools.namespace.repl)
nil
test.core=> (clojure.tools.namespace.repl/refresh)
:reloading (test.core test.core-test)
:ok
test.core=>
My only guess is that one of the above deps is triggering this; I'll continue to debug, but this doesn't seem to be a core issue 🙂
Thank you @alexmiller @hiredman @vemv
My bet is cider-nrepl :) https://github.com/clojure-emacs/cider-nrepl/pull/668
Thanks! The fact that it did work on some JDKs but not on others threw me off course
🙌 again, reading the t.n source I think that performing set-refresh-dirs would remove this point of friction altogether
I'm using a library that gives names that include unicode symbols to some of its vars. It works fine when I run it on my own machine, but elsewhere it throws a ClassNotFoundException when it tries to invoke one of these vars. Is there an environment variable I should be setting to ensure this works everywhere?
what do you mean by "elsewhere"?
e.g. the official jdk base image https://hub.docker.com/_/openjdk
does the error show the unicode in the symbol/variable name?
Yeah. e.g.
1. Unhandled java.lang.NoClassDefFoundError
geocoordinates/core$initial_latitude$calculate_φ__475
according to this (https://stackoverflow.com/a/65490/3023252), you should be able to use them, which makes me think something else is happening
I want selmer.parser/render (https://github.com/yogthos/Selmer) to act like the following:
(render "{{input}}" {:input ["a.txt" "b.txt"]}) => "a.txt b.txt" ;; does not work
(render "{{input.1}}" {:input ["a.txt" "b.txt"]}) => "a.txt" ;; works
Currently the first line would print:
(render "{{input}}" {:input ["a.txt" "b.txt"]}) => ["a.txt" "b.txt"]
since selmer just calls toString on each object in the map. I think I could achieve what I want by creating a custom veclike object with a toString method like this: (toString [self] (str/join " " self)).
But then the question is: How do I create a veclike object?
Do you plan to use that custom vector right when passing the arguments to render, or somewhere up the callstack?
Only in render 🙂
Then it will be both easier and simpler to just pre-process :input the way you need.
E.g.:
(let [input (str/join " " ["a.txt" "b.txt"])]
(render ...))
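A runnable version of that pre-processing idea, using only clojure.string (the render call itself is left out since it needs Selmer on the classpath):

```clojure
(require '[clojure.string :as str])

;; Join the vector before it ever reaches the template engine.
(defn join-input [params]
  (update params :input #(str/join " " %)))

(join-input {:input ["a.txt" "b.txt"]})
;; => {:input "a.txt b.txt"}
```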
That would break case 2 above: {{input.1}} would now become a, not a.txt.
I cross posted this (see https://clojurians.slack.com/archives/C053AK3F9/p1617618214113600) since no-one in beginners knew how to do it. I guess it is not a newb q.
It is not, you're right. :) And I have a strong suspicion that it might be an instance of https://en.wikipedia.org/wiki/XY_problem
If you have two calls to render, just as in your example, I would simply pre-process :input for the one that doesn't use indexing.
If you have both {{input}} and {{input.1}} in the same template, then I'd just add an extra parameter, either something like input-joined or input-1, and pass the new value along with the original input.
The only scenario where you might need to go with a custom vector is when you have no control over the template, and the template itself for some reason uses both {{input}} and {{input.1}} (which doesn't make much sense).
That will not work in my case: https://clojurians.slack.com/archives/C053AK3F9/p1617624403122600?thread_ts=1617618214.113600&cid=C053AK3F9
Yeah, I just got down to that message. :)
I would use arg-name for a vector and arg-name-str for a string.
Anything else would create implicit non-intuitive behavior that doesn't have the amount of value-add that justifies it, IMO.
But if you were to do it?
How would you do it?
> users would have to write it all the time
It's not a bad thing. I would just figure out what the least frequent scenario is, and change it (so either -vec or -str, but not both).
If I had no control over the template and it used {{input}} for implicitly joined strings and {{input.1}} for nth, I would write to that template library's maintainer and ask for the reasoning behind it. I'd try to discuss with them whether there's maybe a better alternative that's backwards-compatible and doesn't have implicit behavior.
If that proved not to be fruitful, I would search for a different library or write one myself.
I might be biased a bit, but I have had to deal with enough implicit magic behaviors to justify my aversion towards them.
Hmmm, I guess vec would be the least common case. And avoiding magic is one of the reasons I like Clojure to begin with. I was just trying to copy behavior from snakemake that I had gotten used to.
So many people have gotten used to sh and bash. And they are just so horrible. :D No offense meant to anyone, of course.
Perhaps fish is better; I've never tried it.
But it can't solve the root problem of all the tooling that has been created around the same time, in the same ecosystem. E.g. any whitespace or quotes in file names can cause problems. -0 is a band-aid that can help you, but it's not great.
why would you not just use the join filter for the first case? https://github.com/yogthos/Selmer#join
I didn't know about that filter (not that familiar with Selmer at all), but FWIW I think it's the best solution.
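For completeness, here is one way the "custom veclike object" idea from earlier in the thread could look. Which interfaces Selmer actually consults for {{input.1}} is an assumption here (Indexed/Seqable are a guess), so treat this as a sketch rather than a verified Selmer integration:

```clojure
(require '[clojure.string :as str])

;; Wraps a vector, delegating indexed access to the inner vector but
;; overriding toString to join the elements with spaces.
(deftype JoiningVec [v]
  clojure.lang.Indexed
  (nth [_ i] (nth v i))
  (nth [_ i not-found] (nth v i not-found))
  clojure.lang.Seqable
  (seq [_] (seq v))
  Object
  (toString [_] (str/join " " v)))

(str (JoiningVec. ["a.txt" "b.txt"]))   ;; => "a.txt b.txt"
(nth (JoiningVec. ["a.txt" "b.txt"]) 1) ;; => "b.txt"
```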
Is there a way to know during macro expansion if a symbol is expected to be primitive by the compiler even though it's not type-hinted?
which is to say, no, because there is no expectation at the point where macro expansion runs
When expanding a macro inside the scope of a function with a type-hinted primitive, I can see in the &env that it's mapped to a Compiler$LocalBinding which has a jc field (Java Class) which contains long
it depends what you mean by "take advantage", but it will likely break any other macro expanders that are not the compiler
(so if your editor provides some kind of macro expander functionality, or core.async's go macro, etc)
For example, (= "foo" bar) can be inlined as (.equals "foo" bar). Same for keywords. Numbers are trickier
With numbers, the only thing I can be sure of is if the argument to the function is a type hinted primitive, it would throw a runtime exception before reaching any equality check in the scope of the function if it wasn't a number anyway
the compiler already does a fair bit of that kind of thing if it can determine the types
in the general case type hinted code doesn't throw an exception if the type doesn't match
Decompiling:
(defn foo [^long n] (== n 1))
;;
public static Object invokeStatic(final long n) {
return Numbers.equiv(n, 1L) ? Boolean.TRUE : Boolean.FALSE;
}
On the other hand:
(defn foo [^long n] (if (== n 1) true false))
;;;
public static Object invokeStatic(final long n) {
return (n == 1L) ? Boolean.TRUE : Boolean.FALSE;
}
the decompiled result looks the same, but one ternary operator is the result of boxing, and one is the if
so the bytecodes need to produce the exact same result as calling the static method would
the other is when compiling an if: if the test is a matching static method, replace it with this series of byte codes
Pretty much every bytecode VM (and CPU) has that thing where comparisons are branches instead of producing a value
so when inlining if tests, you don't actually have to match what the static method would do, you just need to match the branching behavior
But e.g. the Lua compiler will emit the same bytecode for (== n 1) as for (if (== n 1) true false) (too lazy to write the Lua equivalents)
It would not be as straightforward to use the branch intrinsics for returning the boolean instead of just when the source has a branch, but it would be correct and feasible
the non-branching intrinsics just need a different sequence of bytecode to match the behavior of Numbers.equiv outside of an if
I haven't looked at the inlining specifics in a while, it may also be the case that the if intrinsics and the non-if intrinsics can conflict
The intrinsics are more a case of "it would be silly not to use these specialized bytecodes if the types are known" than "let's add a bunch of optimizations, it will be awesome"
The compiler is pretty stupid, so it only uses IFNE et al. if it is compiling an if (and maybe case, but let's not go there)
As I said, it would be correct and feasible to use IFNE to implement your example as well, but no one has bothered
When the compiler gets the correct primitive overload of the static Util.equiv, perf should be fine because all dispatch is gone
Just read the Java in clojure.lang.Util.equiv(Object, Object) vs. e.g. clojure.lang.Util.equiv(long, long)
Mainly, the difference between calling n1 == n2 on two longs and calling Util.equiv(long, long) on two longs
Before jitting, perf is terrible anyway, and the JIT will inline that call (always, I think, because the method body is so small and even static)
Using static types to get rid of branches and virtual calls matters, but inlining by itself is not so useful because the JIT does it so much better anyway
I recall there's a limit to the method size which can be inlined by the JIT. Will this be affected by inlining directly as == vs. equiv?
equiv(long, long) should be below the size that is always inlined by the JIT
I don't recall the max inline size; if the method is bigger, like equiv(Object, Object), inlining depends on profiling data ("hotness")
But if there is a big method, it contains a lot of code to optimize by itself, and the call overhead is not as significant (e.g. it even makes sense to call the big long FORTRAN matrix procs through FFI)
It is complicated, but IFNE vs. a static call to equiv(long, long) should not matter after jitting
In Lua they have a bytecode that implements polymorphic equality directly instead of the equality operator being a function, so they must do the transformation
Given:
(defprotocol Eligible
:extend-via-metadata true
(make [this] :bar))
(defn eligible? [x] (satisfies? Eligible x))
(def x (vary-meta () assoc `make (fn [_] :made)))
I was a little surprised by this:
(eligible? x) => false
Am I missing something?
Thank you very much @U5K8NTHEZ. I'll follow that ticket.
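For readers hitting the same surprise, a minimal reproduction: dispatch via :extend-via-metadata works when calling the protocol fn, but satisfies? does not consult metadata (behavior as of Clojure 1.10):

```clojure
;; Protocol with metadata extension enabled; no default body needed.
(defprotocol Eligible
  :extend-via-metadata true
  (make [this]))

;; Implement `make` purely via metadata on an empty list.
(def x (vary-meta () assoc `make (fn [_] :made)))

(make x)                 ;; => :made  (metadata dispatch works)
(satisfies? Eligible x)  ;; => false (as of Clojure 1.10, satisfies?
                         ;;    only checks interfaces and extenders)
```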
There are several discussions about how to use the full qualified keywords, and even in Clojure Spec Rationale[1] there is a quote: > These are tragically underutilized and convey important benefits because they can always co-reside in dictionaries/dbs/maps/sets without conflict It makes me feel somewhat guilty of not understanding something very important, a thing that I must understand. I see a lot of experimentation on how to use it, with a lot of learning, but it's hard to interpret what core Clojure maintainers expect when they created the full qualified keywords. It's much easier to get a glimpse of how it helps to solve real issues with examples, so my question is: Where can we find more substantial resources with examples that cover how and when to use full qualified keywords. What kind of practical problems full qualified keywords solve, beyond the abstract notion? 1: https://clojure.org/about/spec#_global_namespaced_names_are_more_important
to some degree, my impression is that rhickey's preferred data model is something like RDF (or a triple store or whatever; I mean, look at Datomic)
so you can imagine a given map is a set of RDF triples, with an implicit subject
a big difference between RDF and something like SQL is that column names in SQL are implicitly namespaced by the table the column appears in
What is there to say about it? Sufficiently qualified attributes allow you to assign global meaning to attributes
rdf doesn't have that, everything is just a soup of attributes, so you have to do things like use URLs as attribute names to avoid two "name" attributes of the same thing conflicting
so similarly with maps, if you want to merge two random maps together, the odds of that working with namespace qualified keys is much higher than without
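That merge behavior is easy to see at the REPL (the author/book names here are just illustrative):

```clojure
;; Unqualified keys collide; the second map silently wins:
(merge {:id 1 :name "Alex"} {:id 23 :name "Clojure Applied"})
;; => {:id 23, :name "Clojure Applied"}

;; Qualified keys from different sources coexist in one map:
(merge #:author{:id 1 :name "Alex"}
       #:book{:id 23 :name "Clojure Applied"})
;; => {:author/id 1, :author/name "Alex",
;;     :book/id 23, :book/name "Clojure Applied"}
```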
It'd be nice if there were some books or blog posts with real examples @alexmiller; I ask on behalf of the whole community. I made an exhaustive search and just found toy examples or experiments, but it'd be nice to see if someone has produced something along the lines of a real use case. Videos are welcome too.
I’m not sure what needs to be said that’s interesting
it is the same reason spec is the way it is, you globally say "such and such a key is such and such a thing", you don't say "in some context such and such a key is such and such a thing"
Yes, I think that you already said all about it, I'm looking for good references of real use cases in application code.
most applications (uses) are not open source so this is not generally something easy to come by
I know that @wilkerlucio uses it a lot in pathom, but he maintains library code
I guess you mean require no disambiguation?
if you imagine names are ambiguous by default, then they must be disambiguated before they can be used unambiguously
not only from multiple sources, but from different eons of development of the same application
yeah, also, it's hard to live in complete isolation. Namespaced attributes (when big enough) remove the question of "what is this data about?". IME small programs hardly ever even think about this concern, but as a system grows, and then you have many services with many teams integrating stuff, having those unambiguous tractable attributes can help tremendously
I think it's sad our industry got so used to local (short) names everywhere; it makes it a pain to compose systems at large
I need to meditate more about it; it's like a koan that I'll suddenly solve and eventually reach fqn nirvana 😄 :person_in_lotus_position:
@marciol I think it’s one of those things that you don’t really see the value of until you’ve been bitten by the problem it solves. At work, we use qualified keywords a lot.
Yes @seancorfield, it seems that we need to use it in anger to see the benefits and where it fits best
If everything is just a bag of properties, what next, a bag of bytes? "Use text, that is a universal interface". Just wire up some AWK and Perl scripts.
not really, the idea (as I see it) is about Entities, Attributes and Values.
with unqualified names, the context is given by something up (like a record, or a type, or something)
the idea with qualified names is that you can get rid of that "context" thing on top, and consider that all attributes live in a flat world, like Datomic and spec do
Perhaps this example will be motivating for you? next.jdbc qualifies column names by the table they come from when returning query results. If you join across two tables, both with an ID column and a name column, you will get two different qualified keywords and no conflict: {:author/id 1 :book/id 23 :author/name "Alex Miller" :book/name "Clojure Applied" :book/author_id 1}
Without the qualification, the author ID and book ID would collide, and so would the book name and author name.
One issue with that approach is: what if there is more than one book or author in one record? Just having a "type" namespace wouldn't scale (e.g., a book with a person that wrote it, and a person that edited it).
It's a nice application @seancorfield
@isak Then there would be multiple column names; a database cannot have more than one name column in a table.
That next.jdbc feature is the most relevant application I have seen, but it kind of just exports the namespacing you get inside SQL
@nilern By default, yes. But you can provide your own (result set) builders that do something more sophisticated if you wish.
@seancorfield At my work we just do JSON SQL queries instead, so what comes back from the database would be {:author {:id 1, :name "Alex"} :editor {:id 2, :name "Rich"}}
Usually you don't SELECT * on a join anyway, because probably that data is going to a JSON response, so you want to control what gets sent out, and namespaced keys break down with JSON and random client code
We do not send qualified keys back to client code. But we do a LOT of queries internally that stay entirely inside our apps and the qualified names help avoid conflicts.
To be honest, I was quite happy to use next.jdbc in a project... and the experience was not that good. Most of the time I spent converting qualified keywords to JSON and back, and in the end I tried to make a builder-fn that would return the query exactly how @isak sent me. How did you do it? It would help me a lot with my code here 😄
@U3Y18N0UC Here is an example for SQL Server
declare @People as table (
id int primary key,
[name] nvarchar(255),
dad_id int
)
insert into @People([id], [name], dad_id)
values (1, 'Bob', null),
(2, 'John', 1),
(3, 'Jack', 2),
(4, 'Jill', 2);
select a.*,
JSON_QUERY((select * from @People b where a.dad_id = b.id for json path, without_array_wrapper)) as dad,
(select * from @People b where a.id = b.dad_id for json path) as children
from @People a
where id = 2 /* John */
for json path
=> [{:id 2, :name "John", :dad_id 1,
:dad {:id 1, :name "Bob"},
:children [{:id 3, :name "Jack", :dad_id 2} {:id 4, :name "Jill", :dad_id 2}]}]
Ah, right, I see. Yeah, that I can't use because I'm stuck with MySQL 😞
Ah bummer. I know it also works in Postgres, but I don't think it works in MySQL yet, unfortunately
I have a hard time following code where the set of keys keeps changing, although it is kind of idiomatic to be flexible with that
@borkdude except that nobody uses it 😅
A lot of Clojure programmers allocate way too much mental overhead by using different names for the same thing. This is one of the appealing and powerful attributes of spec. Global semantics + global names. I have seen a lot of CLJ + CLJS applications hurt themselves by using JSON on the wire.
I think to interpret your data correctly, you almost always need to know 1) which view you are looking at, and 2) which path into that view you are looking at. And if you know both of those things, do you still need namespaced keywords?
Right, but it doesn't scale when you have more than one X per Y; you need the path in such cases anyway
Yeah, or even just when you have a book that was edited by one Person, and written by another Person
this is kind of the thing that Pathom tries to help with: once you have qualified (unambiguous) names, Pathom allows you to define how they relate to each other, and then you can ask for data using pure inputs and outputs (I have this data, I want this data shape, go!)
if you don't like self join, you can say "merging a person record into a record that another person record has already been merged into"
Example query:
select a.id as book_id, b.name as author_name, c.name as editor_name
from book a
left join person b on a.author_id = b.id
left join person c on a.editor_id = c.id
Ok, but it could have been written as subqueries, and since they are left joins it would always give the same answer
when I say "join", while it does map cleanly to the database example with literal joins, I mean it abstractly as some operation that combines (joins) disparate records into new records
so it doesn't matter to me if join appears in your query or not (heck, maybe the query planner will completely rewrite your query anyway)
we use it at work in our (third party facing) APIs, we never had a complaint about it
Overall the namespaced keywords hype smells of RDF and the Semantic Web, and that is not a good smell. They solve some naming clashes etc., but not everybody is doing Data Lakes or whatever
I have experience using namespaced keywords to handle large requests for a complex front-end. In this case it was a backoffice app that has to load dozens of widgets about a customer (personal info, account info, transactions, etc...), and the namespaced keywords allow for a system where many teams add things on a daily basis, integrating multiple services. It worked very well (even though most people who end up working on this part of the system are from distributed teams, with no previous experience of doing things this way)
what mattered in the end is that when devs are writing a widget, the system allows them to just describe the data they need (using EQL), and not care about how it's fetched. To add new names to the system, people add resolvers to a single service, where all definitions are based on establishing relationships between the names
to me this is the experience that convinced me that this approach scales big time
the Semantic Web, while a failed effort, got it very right to use names that indicate shared meaning
I think my issue with namespaced keywords is that nobody (except Datomic and Datahike, maybe) uses them. Once I tried to force myself to use them, and in the end I felt like Typed-OO again: I had JSON or XML, had to convert it to a map, then re-convert to qualified keywords, then convert back to non-qualified to persist in SQL... after some time I would have to query that DB, get non-qualified keys, re-qualify, work with them, de-qualify to send to an API...
did you feel the same when using Pathom?
Well, Pathom is an exception to the rule 😄
It's also open-source, and it's quite easy to just use it "inside Pathom" and then have a resolver for a :json/payload that contains unqualified keywords in the format that you expect some API to use, so it's not really a big deal
I think that's because Pathom gives us real examples of how to leverage fqns in its documentation
When I forced myself to use qualified keys all the way, I found out that a simple change (like adding a field to a payload) would propagate into multiple files, payloads, formats, and converters, and after we had to make the 5th "+40 lines" change to add a single attribute, we ditched it for a simpler approach (that is, coerce to a schema and work with that format all the way to the end)
BTW, I think Pathom didn't exist at the time 🙂
s/JSON/Transit s/DB/Datomic qualified names through the stack
having explicit translation layers between input data, storage, and app logic is good actually
(forcing the db to hold OO logic etc. is not)
If my client doesn't use Transit and my company doesn't use Datomic, then what? It's not that easy, especially with Datomic being closed-source
my point is that there is a unified way of thinking here to create a stack that respects qualified names
we're trying to move the state of the art forward, not start from whatever broken substrate is the status quo
I sympathize with those ideas, and I remember Joe also wanted to push the industry toward a better way
Yes. But should an open-source language as pragmatic as Clojure be opinionated towards a thing that virtually doesn't exist outside that language? Especially when one of the pieces is closed-source?
have you not noticed that Clojure's author is opinionated? :)
trying to make the edges of your program look like its implementation is a pathology; that's how we got ORM, CORBA, etc. The data interchange format / storage does not need to look like the format your application uses internally
the goal is not similarity, it's solving the problem of disambiguating and describing data
But is building the whole stack around qualified names worth it just to avoid mixing up :book/name and :author/name? I would rather just use static typing, it catches so many more errors with less effort...
"is building the whole stack around qualified names worth it" - yes, that seems to be the foundation of good naming in every system I'm aware of "so many more errors" - does it? (I seem to recall a lot of github issues on projects with statically typed languages) "with less effort" - is it? (that does not match my experience)
"If it compiles it just works" is a real thing (exaggerated of course). "Our map keys are context free" is pretty lame in comparison.
it seems like you are comparing apples and staplers, I'm not sure what these things have to do with each other really
statically typed languages have just as much need for disambiguation of names
namespaced kw's can be unambiguous across networks. if you're doing the same with ML you're basically using EDN
likewise, if you're trying to check your usage of namespaced keys at compile time you're basically inventing a type system.
thus having a system for disambiguating names isn't trying to be equivalent to a type system, and a type system isn't equivalent to having a notation for disambiguating names
I was just comparing the cost/benefit ratio, not claiming that they are interchangeable
IME (I work mostly in distributed systems) fully qualified names have more benefit for me than static types. YMMV depending on domain
the problems that unambiguous names solve are more important in my domain than the problems that static typing solves
static typing doesn't extend to the db or data interchange either
Saying “qualified keywords aren’t worth the effort because they don’t solve 100% of the problem” is missing the point.
They add value wherever you need names to have a larger context than “just inside this one piece of data” — which means they add value in a pretty large space.
You can choose not to use them and justify it however you want, but you don’t have to work with me so we don’t need to argue about it 🙂
It seems like people are interested in using qualified keywords, but finding good guides, resources, and examples is still a challenge. Hopefully, that will improve over time.
Going in and out of the DB using next.jdbc means I can use qualified keys in both directions (`next.jdbc` ignores the qualification going into the DB, for convenience, and qualifies with the table name coming out, again for convenience), and you can explicitly map those names to whatever you need. clojure.set/rename-keys and/or select-keys means this is a “one-liner” at the naming boundary, which I would expect at a boundary returning JSON since you’re unlikely to be exposing your internal domain data 1:1 anyway.
I wish we’d used qualified keywords from day one, at work, to be honest.
right - I think this points to the right focus - having good translation layers between input, logic, and storage, and namespacing of keys helps
If we rewound time and made it so that every JSON library for Clojure didn't have an option to keywordize keys, I think more codebases would be cognizant of it
I suspect if Clojure books and tutorials had used qualified keywords from day one, we wouldn’t be having this conversation 😉
exactly @seancorfield
@emccue Do you mean that JSON libs kept data structures with string keys?
yeah - unconditionally interning external input strings so that you can write (:key map) instead of (map "key") or (get map "key") blurs the line between "parsed data format" and "internal domain representation"
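To make that contrast concrete, here's a small sketch using `clojure.walk/keywordize-keys` (a real clojure.walk function) on a hypothetical parsed-JSON payload; the data is made up for illustration:

```clojure
(require '[clojure.walk :as walk])

;; A parsed JSON payload with string keys, i.e. what a JSON layer
;; that does NOT keywordize would hand you:
(def payload {"user" {"name" "Rich" "id" 1}})

;; String keys force explicit lookups and keep external data visibly external:
(get-in payload ["user" "name"])
;; => "Rich"

;; Keywordizing makes it look native, which is convenient...
(def native (walk/keywordize-keys payload))
(get-in native [:user :name])
;; => "Rich"

;; ...but `native` is now indistinguishable from an internal domain map,
;; which is exactly the blurring being discussed.
```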
I may be reading it wrong, but I took it to mean that the "keywordize" option in the JSON lib leads to codebases without an explicit data-transformation layer between input and app logic; the keywordizing is like a classic 80/20 of that
I can see the argument for drawing a starker line between JSON format and Clojure hash map format.
^not that it's definitely a bad thing, but in the absence of educational materials that aren't either a rant on the Clojure website or a rant published as a book, how libraries/existing code work is probably the strongest driver of how people use a language
and namespaced keys are where that fallback really becomes a pain point, since JSON doesn't have namespacing
like, the better Clojure books are "for the brave and true", which does a good job but is just mechanics: how, not what and why
There was a time when "Cheshire and MongoDB, no more translation layers" was the attitude
right, I've worked on apps that tried to do that and it works great until it doesn't work at all
and the clojure for web development book - which isn't prescriptive about much beyond how to glue together the libraries in a way that works
I worked with the maintainer of cheshire at a Clojure shop for a bit, and the general attitude there (not sure what his attitude was) was virulently anti-MongoDB
That's because MongoDB is less a DB and more a long-running bit that's going to end with someone taking a bow and shouting "The Aristocrats"
Great obscure reference (at least in a Clojure channel); this gets my early vote for "Comment of the Year" 😂
I get that "things that will exist in a global context should be qualified as such (ie full domain)" and "it can also be useful to qualify things categorically/by domain in our app" such as :user/email
but I'm kinda lost at the idea of "json in clojure land should have used string keys by default". Does this imply that we should add qualifiers to everything at the boundaries of a program? Or is the point to keep keys as strings until making a decision about what qualifier a particular keyword should have?
I read it as an unfortunate side effect of a convenience - since keywordizing makes the incoming data look "native", people start structuring applications such that they lack the translation layer from the interchange format to the application
and that works until, e.g., you want to use namespaced keys inside the application
and you might get confused and think the solution is that the interchange format needs namespacing
there definitely is a tendency to look at json as data structure and not a serialization format
Lately I've been experimenting with, when getting a stream of a lot of different k/v pairs from a source, just namespacing everything :source/x as a reminder. But I'm not sure how good an idea this is, because a :source/type won't always have the same meaning
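A hedged sketch of that "qualify everything from the source" idea; the `qualify-keys` helper name and the `:sensor` data are made up for illustration:

```clojure
;; Prefix every key in a map with a namespace naming its source.
(defn qualify-keys [ns-str m]
  (into {}
        (map (fn [[k v]] [(keyword ns-str (name k)) v]))
        m))

(qualify-keys "sensor" {:type "temperature" :value 21.5})
;; => {:sensor/type "temperature", :sensor/value 21.5}
```

This records provenance cheaply, but (as noted above) the namespace then names where the data came from, not what it means, so the same qualified key can still carry different semantics.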
An interesting approach, as it denotes the origin of the data, which seems to be exactly the main objective of qualified keywords: to tag data origin.
But on the other hand it doesn't really adhere to the spec philosophy, because a particular keyword will not correspond to a single semantic meaning
but they might have :source/type mean "event topic" in one spot and :source/type mean "car or truck" in another context
provenance is one use of unambiguous names, but IMO not the way they should always be used
I typically use qualified keywords to denote the "domain" of the attribute. the source might be relevant or not
e.g. I might load information about a user's account from three different sources; they are all about the same domain (a user) so they should all share a namespace
this is very powerful because it means that my code is generalized across sources; it doesn't matter if I load it from cache, from disk, or across a network: if I have a :myapp.account/id I treat it all the same
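A minimal sketch of that domain-namespace approach. The `from-cache` / `from-disk` sources and the `:myapp.account` data are hypothetical, invented for illustration:

```clojure
;; Two hypothetical sources, both producing maps in the same
;; :myapp.account namespace, so downstream code doesn't care which ran.
(defn from-cache []
  {:myapp.account/id 7 :myapp.account/email "c@example.com"})

(defn from-disk []
  {:myapp.account/id 7 :myapp.account/name "Casey"})

;; Merging is reasonable here because the shared namespace signals
;; shared semantics: both maps describe the same domain entity.
(def account (merge (from-cache) (from-disk)))

(:myapp.account/id account)
;; => 7
```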
Nice @U4YGF4NGM, I'm starting to understand a little bit of what you mean. So if you have a namespace with all the functions that handle operations on users, you can denote user information with keywords from this namespace
this is very similar to what I was accustomed to in Elixir, but in Clojure terms, which is much more powerful and somewhat strange 😅
Right to the point @emccue, it'd be nice to have more resources about the Clojure way to solve real problems, at least as the maintainers imagine it should be; without that, we see this fragmentation. Maybe it's the tragic fate of Lisps.
I think there are a wide range of resources about this already
Yes, it'd be nice to have an index. I made an index of articles, some from 2016, etc., but it seems they are all superficial
can you list a few?
- https://ask.clojure.org/index.php/10380/when-to-use-simple-qualified-keywords - https://clojureverse.org/t/dont-quite-understand-rules-for-namespacing-keywords/7434/3 - https://vvvvalvalval.github.io/posts/clojure-key-namespacing-convention-considered-harmful.html
yes @wilkerlucio, most talk about spec, which is a related subject
top two links are recent discussions, last one was an article that led to a few discussions (on reddit/clojureverse)
I mean, I understand we have these discussions on the topic, but I was trying to think of resources more like books, articles, and things that explain via example the idea of using qualified names from the beginning
because I guess for somebody looking to learn, just reading peoples experiences (in the middle of discussions) is not enough
Some interesting articles @wilkerlucio 1. https://blog.taylorwood.io/2018/10/15/clojure-spec-faq.html 2. https://nextjournal.com/andon/applying-clojure-spec 3. https://clojure.wladyka.eu/posts/form-validation/ 4. https://medium.com/@kirill.ishanov/building-forms-with-re-frame-and-clojure-spec-6cf1df8a114d 5. https://quanttype.net/posts/2021-03-06-clojure-spec-and-untrusted-input.html
https://www.youtube.com/watch?v=nqY4nUMfus8&list=PLZdCLR02grLrju9ntDh3RGPpWSWBvjwXg <-- playlist of screencasts we did about spec
plenty of others from other people at https://www.youtube.com/user/ClojureTV/search?query=spec too
It seems I got at least half of the important content @alexmiller 😅
But it's not easy to extract useful content from real work. Here, at the company I work for, we are doing very interesting things with @wilkerlucio's Pathom, but we don't find time to write about how Pathom is helping us solve complicated integration problems.
nice collection here :)
I always forget that one
if someone wanted to bundle these links together nicely and add a PR for the end of the guide, that would be cool
Thank you @alexmiller This was my original question after all
And about this @alexmiller? https://www.cognitect.com/cognicast/103
:thumbsup:
Here we go @alexmiller https://github.com/clojure/clojure-site/pull/518 I included all the content you suggested and excluded all the blog posts I found except this one: https://www.pixelated-noise.com/blog/2020/09/10/what-spec-is/ I included that post because it seems to be a good source of information, and excluded the rest because I think it's better to curate them first and verify that they are good sources of information.
Thanks, were you going to sign the Contrib Agreement so I can merge?
I will probably move it somewhere else but would like to merge your contribution first
I moved the resources over to https://clojure.org/community/resources#spec and linked from the rationale and guide, I think that's better
One thing I find I'm missing here is the theoretical underpinnings, i.e. stuff from RDF land. I think it might help me get into the proper mindset. Are there any resources you can recommend on the subject?
https://www.amazon.com/dp/0123859654/ref=cm_sw_r_cp_awdb_imm_CS2VY2C913F4CYRQP4Z6 is a common entry point
my own experience has been that the RDFS stuff is pretty useful, but OWL (which lets you do more reasoning) is challenging to use other than in a very constrained domain
everything has to be set up "just right" and one set of contradictory statements can totally break your reasoner. the effort involved to set it up requires you to understand your domain so thoroughly that you could probably have written it in some other way that's not so brittle
there are really a handful of interesting ideas in RDF+RDFS - global identifiers (good idea, url impl super cumbersome), facts as EAV, the importance of A over E
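To illustrate facts-as-EAV and "the importance of A over E" in Clojure terms; the data and the `values-of` helper are made up for illustration:

```clojure
;; Facts as entity-attribute-value triples, with globally
;; qualified attribute names:
(def facts
  [[1 :user/name  "Ada"]
   [1 :user/email "ada@example.com"]
   [2 :user/name  "Alan"]])

;; "A over E": the attribute carries the meaning, so we can query
;; by attribute across all entities without caring which entity.
(defn values-of [attr facts]
  (for [[_e a v] facts :when (= a attr)] v))

(values-of :user/name facts)
;; => ("Ada" "Alan")
```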
we built some really great RDF libs at Revelytix in Clojure, afaik that IP never escaped
we had some cool things to make clojure-y views over rdf that made it actually tolerable to accommodate all the namespacing stuff
also a federated SPARQL engine written in Clojure
it's an interesting domain, I'm still thinking about how to apply it conscientiously when dealing with the external world, which is generally messy
my gut feeling says it's a great tool to use in these scenarios if you know how to leverage it
I'm thinking of a keynote or interview where Rich said that RDF can be used to join data across several databases, something that usually happens during acquisitions
This is a sort of messy real-world problem, and RDF seems to offer a good set of tools to deal with it
But more interesting is the possibility that, these days, this kind of problem isn't so prevalent anymore? That's a random guess, given that my background is working at startups and small companies.
But @wilkerlucio's Pathom is a really interesting way to apply RDF ideas to a set of problems I'm facing right now: how to compose and merge information from several legacy systems in a healthy way.
As much as it’s possible to do, given that “real” systems are their own universes that live in the context of companies and groups of developers with histories, and thousands of micro decisions
Yes, so we can only hope for some future keynotes to share knowledge of dos and donts
well, that is in past keynotes already
"Programming Clojure" is as much that as anything
wow @alexmiller, I have the 2nd Edition here
the 3rd covers spec and Clojure CLI
Programming Clojure is the only book about Clojure on Rich's Clojure bookshelf (this is, of course, not fair, because I think it was the only book about Clojure when the bookshelf was created) https://www.amazon.com/ideas/amzn1.account.AFAABBRGIVOWVKTHP5NOJU5LMROQ/3BSKWCYM12RBZ
imo, there is no single answer to this. everyone comes to Clojure with a different background, wants a different thing, is building different things. there are resources to answer almost any question you might have. it's impossible to organize them in some way that makes sense to every person at every point in their path of knowledge. I've never seen that in any language community I've intersected with (other than communities so small that a single person can write a single definitive resource)
in the last decade, there have been at least a dozen, probably many more, attempts to create such a thing. they are all helpful. they are all missing important topics. they are all useful to some extent.
the Clojure web site is open for issues and PRs. I have helped people edit a variety of things into a published state there over the years and would be happy to help do more of that (as time allows). most of the guides https://clojure.org/guides/getting_started were contributed
I'd be happy to entertain organizational and expanding ideas in the community area
I actually did a big overhaul of the getting started area a couple months ago that is still pending a few things so has not yet been merged, but that's coming
The last Clojurists Together survey indicated they might want to explore more creative things to fund... maybe "The Encyclopedia of Qualified Keywords: from Theory to Practice" could be funded into existence
Thanks @marciol (and everyone who subsequently weighed in) for posting https://clojurians.slack.com/archives/C03S1KBA2/p1617654390371500 that led to such a prolific and informative thread, ending with https://clojurians.slack.com/archives/C03S1KBA2/p1617663102017400. It reinforces how important qualified keywords are. The term "tragically underutilized" jumped out at me the very first time I read the rationale.
You are welcome @U01040R5CJY I was talking about what @alexmiller said. About how people from different backgrounds can learn about some subject in different ways, and how it's important to take it into account. I'm already understanding, step by step, the importance of these concepts and I think that we need to make an effort to understand and communicate this understanding to others.