This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # aleph (1)
- # announcements (39)
- # aws (11)
- # beginners (58)
- # calva (10)
- # cider (7)
- # clj-kondo (65)
- # cljs-dev (5)
- # clojure (90)
- # clojure-dev (48)
- # clojure-europe (23)
- # clojure-madison (1)
- # clojure-norway (1)
- # clojure-uk (40)
- # clojured (11)
- # clojurescript (20)
- # conjure (12)
- # core-async (4)
- # core-logic (4)
- # cursive (3)
- # datalevin (1)
- # emacs (7)
- # events (2)
- # fulcro (48)
- # introduce-yourself (2)
- # lsp (36)
- # malli (11)
- # missionary (1)
- # off-topic (1)
- # other-languages (72)
- # pathom (4)
- # polylith (13)
- # portal (94)
- # re-frame (14)
- # react (5)
- # releases (1)
- # sci (12)
- # shadow-cljs (29)
- # spacemacs (3)
- # vim (4)
- # xtdb (12)
Besides leiningen, babashka and deps-new (and of course clojure itself), what other Clojure CLI tools do you guys use with/for your Clojure projects? I’m trying to make a small list.
Do Clojure aliases count? If so: antq, cider/nrepl, figwheel, shadow-cljs, rebel, reveal, portal, rebl, clj-new, tools.build, carve, deps-deploy, cljfmt, zprint, puget, cloverage, test-runner, kaocha
Woah, okay, that’s a lot. I’ll have to look into those. I was asking because I’m messing around with building a tool like Portacle for Common Lisp (https://portacle.github.io/) but for Clojure, and wanted to know what the most important things to install were. I’m going for VS Code + Calva since I figure it’s mainly beginners who’d use a tool like this. But I wanted to know what else might be good to include besides Clojure itself, VS Code as the editor, Calva + clj-kondo as extensions, and deps-new for easily creating new projects.
I wasn't planning on making it something portable like Portacle is, just a convenient way to set up a full Clojure environment quickly
well, most of the ones I listed are just aliases or tools for tools.deps (basically the Clojure CLI), so they don't really need to be bundled up, unless you plan to develop without internet.
The ones that wouldn't be part of tools.deps: yeah, mostly the editor, clj-kondo, the Clojure CLI, babashka, and maybe clojure-lsp. And if you want people to be able to do ClojureScript, you might want Node.js + npm and shadow-cljs
But deps-new is similar: you don't really need to install it, just add the alias for it and you can use it.
See https://github.com/seancorfield/dot-clojure/blob/develop/deps.edn for ones I use fairly commonly, but all my projects use tools.build now and that has taken over a lot of what I used to do with aliases and tooling. See also https://github.com/seancorfield/dot-clojure/tree/develop/tools for CLI programs that I have installed as "tools" (invokable via the Clojure CLI's -T option)
There's also a dev.clj in there for starting a REPL with various tooling combined in play, and if you're using VS Code + Calva + Clover + Portal (which I highly recommend as a combination), there's my custom setup for that: https://github.com/seancorfield/vscode-clover-setup/
(I have it all under version control so I can easily sync my setup across machines and clone it onto any new machine I get -- I have a macOS setup and a Windows 11 / WSL2 / Ubuntu setup)
I’d like to confirm: does the compiler use macroexpand-all to expand forms?
Technically neither; it's a different implementation in the compiler, but to the intent of your question: it will keep expanding a form until it's not a macro
Not in the flavor of macroexpand, which keeps expanding toplevel forms, but in the flavor of macroexpand-all, which also expands nested subforms, right?
Unlike macroexpand-all, it's integrated into the compilation process, so it knows about the lexical environment
macroexpand-all is a helper function that calls macroexpand-1 at some point; it’s not what the compiler itself uses but rather a thing you use directly from Clojure, for example to understand a macro: https://github.com/clojure/clojure/blob/b1b88dd25373a86e41310a525a21b497799dbbf2/src/clj/clojure/walk.clj#L126 The compiler calls macroexpand1 recursively (literally) until it stops expanding a form (i.e. yields no different result): https://github.com/clojure/clojure/blob/b1b88dd25373a86e41310a525a21b497799dbbf2/src/jvm/clojure/lang/Compiler.java#L7078
the java implementation you linked is identical to macroexpand, but the full story is not just that. Given (f (g x)), the compiler will essentially perform (macroexpand (f (g x))), then it will enter the args of f and perform (macroexpand (g x)), and so on recursively, which is kinda what macroexpand-all does
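For anyone following along, the difference is easy to see at the REPL (a quick sketch; `when` and `cond` are just stand-in macros here):

```clojure
(require '[clojure.walk :as walk])

;; macroexpand-1: a single expansion step, top level only.
(macroexpand-1 '(when a (cond b c)))
;; => (if a (do (cond b c)))

;; macroexpand: keeps expanding the top-level form until it is no
;; longer a macro call, but leaves nested forms alone.
(macroexpand '(when a (cond b c)))
;; => (if a (do (cond b c)))

;; clojure.walk/macroexpand-all: also expands nested subforms.
(walk/macroexpand-all '(when a (cond b c)))
;; => (if a (do (if b c nil)))
```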
Is there a way to capture the local context (let bindings, dynamic vars, etc.) of a form that throws an exception? I'd like to do it in the context of tests, so instrumenting a lot of forms might be feasible.
In a macro you can use &env to get local bindings. I don’t think that gives you dynamic bindings though. Maybe if they are bound locally? Not sure.
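A minimal sketch of that idea (`capture-locals` is a hypothetical name; the keys of `&env` are the symbols of the locals in scope at the call site):

```clojure
;; A macro whose expansion builds a map from each local's (quoted)
;; symbol to its runtime value, using the keys of &env.
(defmacro capture-locals []
  (into {} (map (fn [sym] [(list 'quote sym) sym])) (keys &env)))

(let [x 1 y 2]
  (capture-locals))
;; => {x 1, y 2}
```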
i don’t think i understand them yet, but this seems like a possible use case for taps. This is a general explanation: https://quanttype.net/posts/2018-10-18-how-i-use-tap.html and this person may be looking at the same issue as you: https://clojureverse.org/t/lets-tap-with-let-a-k-a-my-first-macro-taplet/7361 Or i might be totally off
scope-capture is definitely along the right lines. The main difficulty would be instrumenting the right forms so the data is available when an exception is thrown.
Taps are an interesting idea. I think at the very least, the thread has some interesting ideas for doing this sort of thing.
scope-capture is great, btw. It totally transformed my development experience. I couldn’t live without it.
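For reference, the basic scope-capture workflow looks roughly like this (a sketch from memory; check the scope-capture README for the exact API):

```clojure
(require '[sc.api])

;; Wrap a suspect expression in sc.api/spy; when it runs, the local
;; bindings in scope (here x and y) are captured and an Execution
;; Point id is logged.
(defn process [x]
  (let [y (* 2 x)]
    (sc.api/spy (+ y 1))))

(process 20)
;; spy logs an EP id, e.g. [1 -1]; then at the REPL you can recreate
;; the captured scope:
;; (sc.api/letsc [1 -1] y)  ;; => 40
```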
The approaches I'm considering now:
• if a test fails, re-run it with instrumentation that preserves as much of the context as possible. It's not ideal for integration tests, but this shouldn't add much of a delay to most unit tests, as the JVM is already warm and both Clojure and the app are loaded.
• try to hook into clojure.test somehow to capture the context around assertions.
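For the second idea, one possible entry point (a sketch, not tested against every clojure.test version) is the `clojure.test/report` multimethod, which dispatches on the `:type` of each assertion result:

```clojure
(require '[clojure.test :as t])

(defonce captured-failures (atom []))

;; Keep the original :fail handler so normal reporting still happens.
(def original-fail-report (get-method t/report :fail))

(defmethod t/report :fail [m]
  ;; m contains :expected, :actual, :message, etc.; stash it (plus
  ;; whatever extra context you can grab here) for later inspection.
  (swap! captured-failures conj m)
  (original-fail-report m))
```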
given the code (first (filter pred coll)), and looking at how filter and first are implemented, how exactly does a chunked-seq "know" to pass an object back to first before processing the next chunk?
Well, basically it's this code here: https://github.com/clojure/clojure/blob/35bd89f05f8dc4aec47001ca10fe9163abc02ea6/src/jvm/clojure/lang/ChunkedCons.java#L40-L52
I admit, looking at the impl, I'm a bit confused. But conceptually it makes sense to me: you'd call first, and the chunked-seq would be like, ok, the next element is not part of my current chunk, so I will now grab another chunk and return the first element of that.
more precisely, the rest of a ChunkedCons is the rest of the sequence; it will happen to also be an IChunkedSeq (actually another ChunkedCons)
(currently in this code myself hacking on some iterator stuff actually!)
okay, let's see if i can step through this so i understand what the hell is happening lol.
1. lazy-seq wraps the body in an fn, returning a LazySeq with fn := the (when-let [s (seq coll)] ...) body of filter
2. first calls into RT.first
3. RT.first calls into (ISeq) LazySeq.first
4. LazySeq.first calls LazySeq.seq
5. LazySeq.seq calls LazySeq.sval
6. sval invokes the fn, which executes the stored body above and stores it in sv:
a. the fn executes (seq coll), which calls RT.seq
b. RT.seq calls (ISeq) PersistentVector.seq
c. PersistentVector.seq calls PersistentVector.chunkedSeq
d. PersistentVector.chunkedSeq returns a new ChunkedSeq
7. sval calls RT.seq on sv
8. RT.seq calls ChunkedSeq.seq which is ASeq.seq which returns itself, and sets LazySeq.s to that same ChunkedSeq, setting sv to null
9. chunked-seq? is true, so we take the chunk branch of filter
but at this point i gotta get back to work cuz i've spent too long on this, lol. i'm glad to know how this all blends together, never stepped through it on this level, but i'm still confused I guess as to how it all resolves on the completion of the dotimes in the chunk branch...
oh wait a minute, the answer is simple now.
chunk-cons checks the length of the given chunk, and if it's empty (which it will be if the pred never matched anything in the chunk), it returns the filter call on the rest of the coll, which is a lazy-seq with a stored fn of (filter pred (chunk-rest s)). That will follow the same flow above, on and on, until either the original coll is exhausted or the chunk-buffer has at least 1 element, at which point the original RT.first call actually gets to return ChunkedCons.first
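You can see the chunk-at-a-time behaviour directly by counting how many times the predicate runs to produce a single element (a small sketch):

```clojure
(def calls (atom 0))

(def xs (filter (fn [x] (swap! calls inc) (odd? x))
                (vec (range 100))))

(first xs)
;; => 1
@calls
;; => 32 ; the whole first chunk was run through pred to yield one element

(chunked-seq? (seq (vec (range 100))))
;; => true
```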
follow-up learning questions :) 1. why is this thread-safe? 2. how does it compare to the transducer version?
oh goodness. i still barely understand how transducers work in general. I expect that it's thread-safe because every part involved is a persistent collection? that's generally the answer for all things "thread-safe" in clojure lol
it is necessary, but not sufficient, and this responsibility is not in one place but actually in several
LazySeq is a good place to start though, looking carefully at the synchronization there: two different threads might both have a reference to the same LazySeq object and both call next at the "same time"
importantly, changes to the local state of a LazySeq only happen in seq() and sval(), which are synchronized, and all other methods route to those; these serve as the critical sections for seq realization
ah, seq is synchronized, I missed that (not used to reading java). that's very clever
Is there currently a way to produce a war archive using clojure cli tools? I'm deploying some web app to elastic beanstalk but I have one last issue related to this.
feel free to vote for it at https://ask.clojure.org/index.php/11341/possible-build-uberwar-using-deps-using-tools-build-other-tool
I’m using Elastic Beanstalk, but deploying an uberjar to the Java SE platform (not the Tomcat platform). Maybe that’s an option for you?
Let me know if you need help setting it up - I’ve been running production workloads on it for five years now.
thank you, I just finished plugging tools.build, Integrant, the Java platform and Jetty together. I'd be glad to talk about the way you run things in production (and actually the whole pipeline)
Well, as I said, we’re using the Java SE platform running Corretto 11 (https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.javase). This is a web app, and the web servers are running on this platform. A Postgres RDS database is provisioned separately. There is a load balancer (classic) in front, managed by Elastic Beanstalk. I’d provision that separately too now, but that option was not available when I set up the environment. Will fix later. Then there’s a CodeBuild pipeline that runs the tests, builds the uberjar, uploads web assets (js, css, images) to S3, and then deploys to production after a manual approval.
That’s all very hand-wavy - there are lots of details to go into, especially regarding EB config. I can help with that, too, but maybe let’s talk in the #aws channel then.
learned something new from #clj-kondo: clojure.test/deftest- creates a private test var
maybe useful for tests in the main source file rather than optionally on the classpath? i can’t think of other reasons. Or perhaps test runners that look at public vars and metadata for tests?
I'm trying to grab every form of the type (my.namespaced/func ,,,) or (func ,,,) (when referred) from a source tree as lists. (I'm working on a system for extracting UI strings needing translation.) Is rewrite-clj what I'm after, or is there something simpler?
I've found it pleasant to manipulate s-expressions using Meander: https://github.com/noprompt/meander You could write a bottom-up search and replace something like:
(def rewrite (s/bottom-up (s/rewrite (my.ns/function "hello") (my.ns/function "bonjour"))))
OTOH if you want to identify actual function invocations you'd need to build an AST (https://github.com/clojure/tools.reader) and walk that, which I've also found Meander to be excellent for. Bear in mind that dealing with an AST is really quite a big undertaking. rewrite-clj has the advantage of preserving whitespace and comments.
oh neat ... the reader might work. It's less that I want to rewrite in place (at least, not now), and more that I want to:
1. Tag all user-visible text with (i18n/tr "some user-facing string")
2. define tr as (def tr identity) for the moment
3. Use <insert-name-of-tool> to treat my code as eg. a series of nested list zippers
4. walk every node and filter to get every list node that has tr as its head
5. spit those lists into a file, one per line.
6. use that as the basis for working with a translator
(I seem to frequently run into the issue that i18n comes 1-2 years into projects after everyone decided it would never be in scope, so this is my minimal way of future-proofing without going ahead with actually translating it)
cc @U0A997PS6 who is the Clojure internationalization expert and might have some suggestions..
some initial thoughts: if i/tr were a macro, you could do all the work "at compile time" without doing any walking etc. I.e., the macro could do things like writing to files and/or looking up other-language translations for the string, keyed by the string.
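A minimal sketch of that idea (the names `known-strings` and `tr` are made up here): the macro body runs during compilation, so it can record every string it sees as a side effect.

```clojure
(defonce known-strings (atom #{}))

(defmacro tr [s]
  ;; This swap! runs at macroexpansion (compile) time...
  (swap! known-strings conj s)
  ;; ...and for now the macro just expands to the string itself.
  s)

(defn greeting [] (tr "Hello, world"))

@known-strings
;; => #{"Hello, world"}
```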
+1 to everything Tim has said. Using Meander for declarative rules for transforming data (given how data-oriented Clojure is) is pretty cool, and the way you can flip it to go bottom-up is also cool. It has definitely proven invaluable for something as gnarly as transforming an AST, and does so in concise and predictable ways
FYI on localization basics: XLIFF is the industry standard format for "localization interchange" (sending stuff to translate plus a target locale, and getting it back with the translations for that locale filled in). XLIFF is XML. You probably don't need to know about Okapi, but it's the industry-standard open-source framework (written in Java) on top of which people create software tools for localization. I'm mentioning Okapi so that you know what it is and don't reinvent it (unless you really want to...). Localization is abbreviated l10n, and most of the time it can be synonymous with translation b/c translation is the bulk of the work in many use cases, but some non-translation l10n work exists, like designing UIs, icons, etc. for RTL layouts
Okapi is a framework but thankfully is designed around an abstract, generic concept of a document. It makes no assumptions about the file format. Instead, it has an interface called "filters" (others might call it converters or de-/serialization or SerDe or whatever). Each file format implements the interface, which requires it to convert itself into a list of Text Units (translatable units). That's called "extraction": extracting text units from a source document. Then you can easily build flexible l10n software tools around text units. You can also write tools that perform the "merge": take translated text units, extract the source document a 2nd time, swap out each spot in the source document with the translated text, and close the source document back up while preserving its original format
thx ... I must not be living my life right. Up here, in this particular industry (Canada, building construction), I've found it hard to find translators who work with anything but a big CSV file. I once (helpfully?) produced a .po and got nothing but blank stares.
Since it sounds like you're dealing with source code as your "source document", you might find yourself traversing s-expressions easily via Meander, and it may be easy enough to merge the results too. The only thing you want to figure out is how to ensure that the sequence of extracted and translated text units can be matched back up, in order, to where they were extracted from, so they can replace the source
yeah, XLIFF is the standard format for representing that translatable content. there are a few major big translation companies, and they're all guaranteed to accept a common core of the XLIFF spec. they may want to introduce custom extensions for lock-in reasons, but the basics are fine. there are other enterprising smaller online translation companies. they should all be XLIFF-friendly at the least. .po is definitely old school
cool, ok. Yeah a stretch goal could be to go backwards to the source strings and rewrite them into translation keys.
yep, using translation keys and creating per-locale resource bundles to put the translations into is the way to go. You might want to look into ICU MessageFormat, especially the moment you find yourself writing (if (= 1 (count x)) "file" "files"). Let me know if you ever get interested in MessageFormat, I've been sucked into the WG for designing v2 of MF 🙂
The other reason you want to use MessageFormat is because you can create message patterns where you can properly format (interpolate + localize) things like numbers, dates, etc. MF is the "entry point" for a lot of i18n in that sense; it invokes DateTimeFormat and NumberFormatter when instructed to. Since you're in Canada, you might know that French numbers are written like 1 234 567 (but with a non-breaking space in between) where in English we would write 1,234,567. Not sure if they do this in fr-CA, but certainly in Europe (ex: de) you'll see 1.234,56 where in en-US / en-CA you'll see 1,234.56. Lesson: never .toString() / concatenate / iterate over chars in an i18n context
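As a taste of what MessageFormat-style patterns buy you, here's a sketch using the JDK's built-in java.text.MessageFormat with a choice format (ICU4J's MessageFormat is richer, but this needs no extra dependency):

```clojure
(import '(java.text MessageFormat)
        '(java.util Locale))

;; One pattern handles pluralization and locale-aware number grouping.
(def mf (MessageFormat.
          "{0,choice,0#no files|1#one file|1<{0,number,integer} files}"
          Locale/CANADA_FRENCH))

(.format mf (to-array [0]))  ;; => "no files"
(.format mf (to-array [1]))  ;; => "one file"
(.format mf (to-array [1234567]))
;; grouping separator follows fr-CA conventions (spaces, not commas)
```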
Yeah I've been using satakieli for the interpolations on some projects. Very useful even with only one lang eg. to handle “item(s)”