This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-12-01
Channels
- # adventofcode (93)
- # announcements (44)
- # asami (23)
- # aws (1)
- # babashka (48)
- # beginners (112)
- # calva (26)
- # cider (57)
- # clj-kondo (17)
- # cljfx (5)
- # cljs-dev (21)
- # clojure (124)
- # clojure-europe (19)
- # clojure-hungary (40)
- # clojure-nl (3)
- # clojure-spec (7)
- # clojure-uk (3)
- # clojurescript (3)
- # cursive (81)
- # datalog (11)
- # events (21)
- # exercism (1)
- # fulcro (37)
- # graalvm (1)
- # introduce-yourself (8)
- # jobs (1)
- # lsp (1)
- # malli (5)
- # membrane-term (17)
- # minecraft (3)
- # nextjournal (5)
- # off-topic (14)
- # other-lisps (14)
- # polylith (58)
- # reagent (16)
- # reclojure (3)
- # reitit (6)
- # remote-jobs (1)
- # shadow-cljs (55)
- # spacemacs (15)
- # testing (2)
- # tools-build (7)
- # tools-deps (191)
hi! I’m planning to use this year’s AoC to learn some clojure… is there a channel in which people share/discuss solutions? AoC tells me if my solution is correct but not how efficient/idiomatic/etc it is…
There is no consensus as far as I know.
What matters is your API design. If someone else is using your tool, what interface should they be presented with? Think "developer experience", and communicating with other people through code.
You could:
• decide that everything is public and rely on documentation to communicate about the API.
• make it implicit with public / private definitions.
• have two namespaces, like:
  • mylib.core: public interface, default ns
  • mylib.impl: technical stuff
• etc…
Note: private vars are a Clojure thing. In ClojureScript everything is public.
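To make the two-namespace option above concrete, here is a minimal sketch (mylib.core and mylib.impl are the hypothetical names from the list; normalize and render are made-up functions, and each ns would live in its own file):
;; src/mylib/impl.clj - technical stuff
(ns mylib.impl
  (:require [clojure.string :as str]))

(defn- normalize [s]   ;; defn- makes the var private to this namespace (on the JVM)
  (str/trim (str/lower-case s)))

(defn render [s]
  (str "<" (normalize s) ">"))

;; src/mylib/core.clj - public interface, default ns
(ns mylib.core
  (:require [mylib.impl :as impl]))

(defn render
  "Public entry point; delegates to the implementation namespace."
  [s]
  (impl/render s))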
#adventofcode starts becoming active in late November, and there are so many different and fun solutions; I recommend it.
Hey all, I’m trying to include an external jar file in my deps.edn. The jar is a Spark JDBC driver downloaded from https://databricks.com/spark/jdbc-drivers-download. My deps look like this:
{:paths ["src" "resources"]
 :deps  {Spark/SparkJDBC42 {:local/root "resources/SparkJDBC42.jar"}}}
When I run clojure -X:deps prep (or run the code that depends on the deps above), I get the following error:
Execution error (ModelBuildingException) at org.apache.maven.model.building.DefaultModelProblemCollector/newModelBuildingException (DefaultModelProblemCollector.java:197).
6 problems were encountered while building the effective model for Spark:SparkJDBC${env.JDBC_V}:${env.MAJOR_V}.${env.MINOR_V}.${env.REVISION_V}.${env.BUILD_V}
[WARNING] 'artifactId' contains an expression but should be a constant. @
[WARNING] 'version' contains an expression but should be a constant. @
[ERROR] 'artifactId' with value 'SparkJDBC${env.JDBC_V}' does not match a valid id pattern. @
[ERROR] 'dependencies.dependency.artifactId' for sb_SparkJDBC${env.JDBC_V}:sb_SparkJDBC${env.JDBC_V}:jar with value 'sb_SparkJDBC${env.JDBC_V}' does not match a valid id pattern. @
[ERROR] 'dependencies.dependency.groupId' for sb_SparkJDBC${env.JDBC_V}:sb_SparkJDBC${env.JDBC_V}:jar with value 'sb_SparkJDBC${env.JDBC_V}' does not match a valid id pattern. @
[ERROR] 'dependencies.dependency.version' for sb_SparkJDBC${env.JDBC_V}:sb_SparkJDBC${env.JDBC_V}:jar must be a valid version but is '${env.MAJOR_V}.${env.MINOR_V}.${env.REVISION_V}.${env.BUILD_V}'. @
Full report at:
/tmp/clojure-6209738836820923427.edn
I’ve exploded the jar, and this appears to be defined in META-INF/maven/Spark/SparkJDBC42/pom.xml. I substituted in the version env variables from the pom.properties file manually. It started to pull dependencies from Maven Central, but then fell over when it tried to pull a library not in Central.
Is there a way that I can use an external jar in deps.edn without pulling any additional deps from (probably) private mvn repos?
Databricks provide info on connecting DataGrip to their platform using the same jar. You have to specify the driver class com.simba.spark.jdbc.Driver, if that’s helpful.
Apologies for the long post folks - trying to update the Metabase Databricks driver, and it’s proving tricky!
(A better place for questions like this in the future is #tools-deps)
The pom reading for local jar deps is not something you can turn off right now, but if you're not finding or including its deps, it seems unlikely to succeed.
Another option would be to install the jar in your local maven repo and just use it as a Maven dep
clj -X:deps mvn-install :jar '"/path/to.jar"'
It will pull the group/artifact/version from the pom inside the jar, and that's what you would use as coords
Great, thanks very much - I’d considered installing in local maven repo, but wasn’t sure exactly how. Will give it a go. Thanks again 🙂
Unfortunately, when I try to run the mvn-install bit above, it encounters the same issue with the (undoctored) jar - the version env variables are not being sub’ed in 😞
You can also supply the :lib and :version to use instead
As options on the command line
I saw that mentioned in the deps docs, but couldn’t find any more info on it. I’ve tried:
clojure -X:deps mvn-install :jar '"resources/SparkJDBC42.jar"' :lib com.simba.spark.jdbc.Driver :version '"2.6.19.1030"'
But that gives me an NPE:
Installing resources/SparkJDBC42.jar
Execution error (NullPointerException) at java.util.regex.Matcher/getTextLength (Matcher.java:1770).
null
I'm pretty sure the :lib needs to be the classpath, as it wanted a class rather than a string; it's picking up the JAR, so am I providing the :version incorrectly?
:lib should be a qualified symbol in form group/artifact
so :lib spark/jdbc
or something like that (not sure what it should be)
I mean it doesn't really matter but you'll need to use that as your lib name when you refer to it later in your :deps
whatever they have in the pom inside the jar for groupId and artifactId would be one place to look
yep, that’s what I just did…
clojure -X:deps mvn-install :jar '"resources/SparkJDBC42.jar"' :lib Spark/SparkJDBC42 :version '"2.6.19.1030"'
Installing resources/SparkJDBC42.jar
Installed to /home/vscode/.m2/repository/Spark/SparkJDBC42/2.6.19.1030
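For reference, a sketch of how the installed jar might then be referenced in deps.edn, assuming the coordinates used in the install command above:
;; deps.edn - use the same group/artifact/version that was passed to mvn-install
{:paths ["src" "resources"]
 :deps  {Spark/SparkJDBC42 {:mvn/version "2.6.19.1030"}}}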
Thanks so much for your assistance - one step closer!
Idiom question: I have a function make-message that takes a user and a string and returns a map:
(defn make-message [user text]
  {:user user
   :text (str/trim text)
   :timestamp (t/local-date)})
(using clojure.java-time). This means the function has a side effect, so I have changed the parameters to take the timestamp and then have to call (make-message user text (t/local-date)) everywhere. This works fine, but feels cumbersome. Making a helper function for such a small function feels odd (defn make-message! [user text] (make-message user text (t/local-date))), but does give me clarity. Is that better than changing make-message to take 2 or 3 parameters, letting me pass in the timestamp if I wish (for testing, etc.)?
(defn make-message [{:keys [user text timestamp]
                     :or {timestamp (t/local-date)}}]
  {:user user
   :text (str/trim text)
   :timestamp timestamp})
I thought of that, but given how small make-message is, writing (make-message {:user user :text text}) feels like I'm repeating the contents of the function unnecessarily.
Yep. There are macros out there that will let you do (make-message (m #{user text})), but yep, that's the tradeoff usually.
ha yeah, i’m feeling it now, which is why i’m asking
cool, thanks for the input
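A minimal sketch of the multi-arity option discussed above (assuming the same str and t aliases as the original snippet): the 2-arity fills in the timestamp itself, while the 3-arity stays pure for tests.
(defn make-message
  ;; 2-arity: convenient call site, defaults the timestamp
  ([user text]
   (make-message user text (t/local-date)))
  ;; 3-arity: pure, lets tests pass a fixed timestamp
  ([user text timestamp]
   {:user user
    :text (str/trim text)
    :timestamp timestamp}))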
I've got two collections of maps, where the maps may have a shared id key. I want to merge both collections such that their member maps are also merged if they share an id. Something like:
(let [x [{:id 1 :foo 11}
         {:id 2 :foo 22}]
      y [{:id 1 :bar 33}
         {:id 3 :bar 44}]]
  (magic x y))
; => [{:id 1 :foo 11 :bar 33}
;     {:id 2 :foo 22}
;     {:id 3 :bar 44}]
Anything outside of the map/conj/assoc/merge primitives that I can use in place of magic to make my life easier?
I think you're looking for https://clojuredocs.org/clojure.set/join
Hmm, not quite able to get the result I want out of this. It omits ids 2 and 3 since they don't exist in both sets.
clojure.set/join will give you the common items, the way you want them
then you can concat with the items that are unique in each collection
You can use group-by and merge/merge-with
(let [x [{:id 1 :foo 11}
         {:id 2 :foo 22}]
      y [{:id 1 :bar 33}
         {:id 3 :bar 44}]]
  (->> (merge-with concat (group-by :id x) (group-by :id y))
       (mapv #(apply merge (val %)))))
with join:
(let [x [{:id 1 :foo 11}
         {:id 2 :foo 22}]
      y [{:id 1 :bar 33}
         {:id 3 :bar 44}]]
  (let [common (set/join x y {:id :id})
        common-ids (set (map :id common))]
    (concat common
            (remove #(common-ids (:id %)) (concat x y)))))
without join:
(let [x [{:id 1 :foo 11}
         {:id 2 :foo 22}]
      y [{:id 1 :bar 33}
         {:id 3 :bar 44}]]
  (->> (merge-with concat
                   (group-by :id x)
                   (group-by :id y))
       vals
       (map #(apply merge %))))
@U024X3V2YN4 @UEQPKG7HQ Thank you both. These are fun 🙂 I like the solution from @U024X3V2YN4 for its brevity. Came up with this one too but it seems a touch slower:
(let [x #{{:id 1 :foo 11}
          {:id 2 :foo 22}}
      y #{{:id 1 :bar 33}
          {:id 3 :bar 44}}
      joined (clojure.set/join x y)
      union (clojure.set/union x y joined)
      indexed (vals (clojure.set/index union [:id]))]
  (map #(apply max-key count %) indexed))
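If this comes up more than once, the group-by/merge-with version above wraps up neatly into a small helper (merge-by is a hypothetical name, not a core function):
(defn merge-by
  "Merge maps across colls that share the same value of k."
  [k & colls]
  (->> colls
       (map #(group-by k %))
       (apply merge-with into)
       vals
       (map #(apply merge %))))

;; (merge-by :id x y)
;; => ({:id 1 :foo 11 :bar 33} {:id 2 :foo 22} {:id 3 :bar 44})  ; order not guaranteed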
I'm attempting to generate hiccup. Does the seq in here need to be like, spliced into the markup? Or will it just be handled correctly:
I think it will try and use that vector in the front of the seq as a function
(require '[hiccup.core :refer [html]])
(html [:ul.tags ([:li "build"] [:li "application"] [:li "admin"])])
;; Wrong number of arguments passed to PersistentVector
that’s an error in ([:li "build"] ...) because it is invoking the vector as a function
I thought this was like examples I'd seen but evidently not:
[:ul.tags (map #([:li %]) (:tegere.parser/tags feature))]
the proof of concept that this fundamentally works is (h/html [:ul.tags (list [:li "build"] [:li "application"] [:li "admin"])])
or you can make it work with (h/html [:ul.tags (map (fn [x] [:li x]) ["build" "application"])])
syntax quote can be helpful to see what the reader syntax expands to

user=> `#([:li %])
(fn* [p1__3__4__auto__] ([:li p1__3__4__auto__]))
i'm mentally replacing the function with its return value and they still look equivalent
the @noisesmith trick is to eval '#([:li %]) in your repl, and '#(inc %)
Thought I got it, I don't got it 🙂 I would think #([]) is a function that takes no arguments and returns an empty vector. My repl is telling me I'm either wrong or I don't understand apply or both, because (apply #([]) '()) is an argument error.
a function isn't replaced by what comes after the arguments, it's replaced by evaluating what comes after the arguments
#(+ 1 2) -> (fn [] (+ 1 2)). If you want a mechanical transform, #<stuff> -> (fn [] <stuff>). So replicate this when <stuff> = ([])
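Applying that mechanical transform to the hiccup case above shows where the arity error comes from (expansion written with a readable parameter name):
#([:li %])            ;; reads as (fn [x] ([:li x]))
                      ;; the body then calls the vector [:li x] with zero arguments, hence the arity error

;; working alternatives:
(fn [x] [:li x])      ;; explicit fn, the body is just the vector
#(vector :li %)       ;; vector is a function, so the call form is fine
#(do [:li %])         ;; the do workaround discussed just below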
i think i follow; and if i don't, i'm at least warned that #() syntax is trickier than it looks; and if i can't remember that, at least my hiccup is right now
I think I was trying to say that -- at some step i've been mentally inserting, or failing to insert, the parens that make that list of +, 1, and 2 a function call
Read that Stuart Sierra article on the shorthand fn macro. Best explanation I've seen of this. It tripped me up at first too.

but why not do (map #(do [%]) values) ??
I've liked this form since I started clojure but I've given up writing it that way in shared codebases because most clojurians don't like it for reasons I don't understand. I mean, it could easily be an idiom and become familiar if people used it
To me it's a hack to work around how the function reader macro works, and as such I don't like it. And for me, do means side effects.
it's a convenience, and a very readable one, for a common need, imo. I guess you could call it a hack but in plenty of cases idioms start out that way
it really speaks to the need for what is essentially a template, as in "stick the value in this template"
it's definitely not
not a big deal at all
Actually can't think of a good use case for (map #(do [%]) values). If I'm pulling individual values out of a [], I'm usually wanting to mangle the values somehow ...
I just really like how that looks like a template without much ceremony cluttering up the meaning, or at least that's what it feels like to me. it's all feeling, I guess
(#(conj [:li] %) "test") also works; the first part of a () always has to be a function unless preceded by a '
but I'm pragmatic and most people who care about it care about it not being that and so it's a smoother ride on shared code bases if I don't use it
99.9% of the times I have needed a do in a function I've also needed a let. So it becomes let instead.
That works but feels pretty gross to me. There are lots of ways to solve it, (partial vector :li), etc. My preference is just (fn [x] [:li x])
what's the preferred way to map over a collection, but eagerly, and for side effects? (run! (map f coll))?
doseq?
thanks!
I was originally thinking I needed a side-effecting map, and I see now that run! is that function
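A quick sketch of both spellings; note that run! takes the function and the collection directly, as (run! f coll), not a lazy (map f coll):
;; run!: eager, for side effects, returns nil
(run! println [1 2 3])

;; doseq: same effect, with binding and filtering forms available
(doseq [x [1 2 3]]
  (println x))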
I have this code:
(map #(take (new-len demo-input)
            (drop % demo-input))
     (range 3))
And it would return something like ((5 2 4) (2 3 6) (8 3 9)). How do I map + over this to get (15 8 19)?
(apply map + input)
(where input is your seq of seqs).
apply says put map followed by each seq in your seq of seqs, and the extra arg + is inserted after map, so you get (map + '(5 2 4) '(2 3 6) '(8 3 9))
user=> (apply map + '((5 2 4) (2 3 6) (8 3 9)))
(15 8 19)
user=> (map + '(5 2 4) '(2 3 6) '(8 3 9))
(15 8 19)
Calling apply in that case ends up being similar to directly invoking
(map + '(5 2 4) '(2 3 6) '(8 3 9))
As the docstring for apply says, it...
> Applies fn f to the argument list formed by prepending intervening arguments to args.
In the case of the above input collection, it has the effect of "exploding" the collection to its constituent elements.
(apply + [5 2 4 6]) is equivalent to (+ 5 2 4 6). It 'spreads' a passed collection into multiple arguments to the function.
(map + [5 2 4] [2 3 6]) is equivalent to [(+ 5 2) (+ 2 3) (+ 4 6)]. When you pass multiple collections to map, it applies the function to the first elements, then the second elements, then the third elements, and so on.
(reduce + [5 2 4 6]) is like (+ (+ (+ 5 2) 4) 6). It reduces a collection down by applying the function successively to the result so far and the next item.
I'm trying to remember a function name that would, ehhh, re-arrange two+ collections like this (f [1 2] [3 4]) -> [[1 3] [2 4]]. Could you help me?
(map vector [1 2] [3 4]) @true.neutral
Right, I forgot that you can map over more than one collection! Thank you @sundarj
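For the record, a short REPL sketch of the answer above; combined with the apply trick from earlier, it also transposes a whole collection of collections:
user=> (map vector [1 2] [3 4])
([1 3] [2 4])
user=> (apply map vector [[1 2] [3 4] [5 6]])
([1 3 5] [2 4 6])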