This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-11-22
Channels
- # aleph (5)
- # announcements (9)
- # babashka (9)
- # beginners (127)
- # cherry (1)
- # cider (48)
- # clj-kondo (5)
- # cljdoc (1)
- # clojure (70)
- # clojure-berlin (1)
- # clojure-europe (57)
- # clojure-france (2)
- # clojure-germany (1)
- # clojure-nl (2)
- # clojure-norway (4)
- # clojure-uk (1)
- # clojurescript (2)
- # css (1)
- # cursive (6)
- # emacs (6)
- # gratitude (1)
- # honeysql (5)
- # introduce-yourself (5)
- # jobs-discuss (7)
- # joyride (1)
- # kaocha (3)
- # lsp (1)
- # malli (9)
- # nbb (2)
- # off-topic (91)
- # pathom (7)
- # pedestal (14)
- # re-frame (4)
- # reitit (67)
- # shadow-cljs (46)
- # spacemacs (3)
- # squint (3)
- # tools-build (14)
- # tools-deps (1)
- # vim (3)
morning!
Do any of you use something akin to hot reload for Clojure? I often re-evaluate namespaces, but that gets pretty tedious.
@grav There is something called the ns reloaded workflow, but I don't have good experiences with that. For me it's just manual evaluation from the REPL, either a complete file or form by form
I have taken to using the Cider undef function when changing things like protocols, which seems to work reasonably well. Not that I have to do so very often. 😅
I ripped an inlined potemkin defprotocol+
out of rewrite-clj a long while back. https://github.com/clj-commons/potemkin/pull/69/files. Maybe it would help?
Yeah there are probably several gotchas with a fully-fledged solution. I'm using Conjure with Neovim, and the author actually had an interesting approach that only took one re-evaluation of a form:
(do
  (defn foo [x]
    (+ 2 x))
  (foo 42))
I think what kept me away from it was the looks of it 😄 but it's pretty simple. Maybe I should give it a go. Looking forward to hearing more about your experiences with Neovim 🙂 I use it daily for ad hoc text editing, but never for Clojure editing.
I'm still a noob, and would probably be more productive with Cursive ... but just the thought of being able to spin it up on a remote machine makes me happy 😄 Not sure if I'd ever need it ...
And I enjoy the familiarity of vim on whatever machine my fingers are connected to 🙂
I have a quite enjoyable setup with neovim and Clojure. You can take a look at my config here if you like: https://github.com/mdiin/dot-files/tree/main/nvim I’m using Doom Emacs for everything these days, but keep my vim config around for when I inevitably make an editor jump again. 😆
That's unusual. I'm guessing your use case for ClojureScript is a bit different from most? i.e. not making React-based websites.
That's what I do mostly. And react native mobile apps. I like to be in control of what is re-evaluated. So my files remain unsaved for quite long periods of time, and then at some criteria (unknown to me which) I decide it is fine and I go and save the files and let hotreload do its thing.
Morning!
I never think about re-evaluating namespaces, the key combo is pure muscle memory just like Cmd+S (even though I don't actually need to save files manually in IntelliJ).
we've been using a reloaded workflow with clojure pretty successfully - we have our own component manager (not integrant, but not completely dissimilar), which exposes a fn which does a stop -> c.t.n.r/refresh -> start
sequence whenever you occasionally need a complete reload... but most of the time a REPL-based compile+load of an individual namespace or function is all that's required
as @U04V15CAJ points out, some things can cause difficulties when reloaded (if you reload a protocol definition then any extensions of that protocol which don't also get reloaded will no longer satisfy the protocol, unless you do metadata-based protocol extension), but as long as you follow the convention of putting your protocol defs in their own namespace then you only encounter this issue when you change the protocol, and as long as you have an approach which (tears down where necessary and) recreates the objects extending protocols after the reload then you are good
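The stop -> c.t.n.r/refresh -> start sequence described above can be sketched with clojure.tools.namespace.repl; the `start!`/`stop!` fns here are hypothetical stand-ins for your own component manager:

```clojure
;; Minimal sketch of a reloaded-workflow reset, assuming hypothetical
;; start!/stop! fns standing in for your own component manager.
(ns user
  (:require [clojure.tools.namespace.repl :as c.t.n.r]))

(defonce system (atom nil))

(defn start! []
  ;; build and store your app context (server, db pool, ...)
  (reset! system {:running? true}))

(defn stop! []
  ;; tear down anything stateful before reloading code
  (reset! system nil))

(defn reset []
  (stop!)
  ;; refresh reloads changed namespaces, then calls start! in the new code
  (c.t.n.r/refresh :after 'user/start!))
```

`refresh` tracks namespace dependencies and reloads only what changed, which is what keeps the "few seconds" case cheap relative to a cold start.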
how many kloc of clj source are you reloading this way? the initial reload starts with the full classpath, right?
about 250kloc for the full reload
to me personally it's just not worth the complexity budget, I've given it a fair try
how long does the full reload take?
-curious-
hold on @U0509NKGK, i'll see
we're at 150kloc right now and full reload feels punitive 😅
we used to do the reload thing for a long time, but fell out of it. pondering giving it a fresh try
it would definitely be too slow if a full reload was needed very often ...
@U0509NKGK
from a cold REPL start, reload to an active service: 41s
given an already active service, reload without any code changes (i.e. destroy objects -> reload nothing -> recreate objects): 1.2s
regular development flow reloads are usually a few seconds
ok that's not completely terrible 😄
the certainty that code got cleanly reloaded, and that all you have to do to get it is press a button, is worth some waiting
i generally only pay the full 41s tax once a day or so, the "few seconds" tax more often - maybe a couple of times an hour - and mostly just the almost instantaneous compile+reload ns tax
in order to make it work well i think you need to have a fairly principled no-global-state approach and to use something like integrant which can reliably manage setup and teardown of your app context
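A minimal integrant sketch of that no-global-state setup/teardown idea; the `::server` key and its config are made up for illustration:

```clojure
(require '[integrant.core :as ig])

;; hypothetical one-component system
(def config {::server {:port 8080}})

(defmethod ig/init-key ::server [_ {:keys [port]}]
  ;; normally you'd start a real server here; a map stands in
  {:port port :running? true})

(defmethod ig/halt-key! ::server [_ server]
  ;; stop the component so a reload can recreate it cleanly
  nil)

(def system (ig/init config))   ;; start everything
(ig/halt! system)               ;; tear everything down
```

Because every component declares how to start and stop itself, the reload step can reliably destroy and recreate anything that extends a reloaded protocol.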
welp we can't use reload cos we have clerk notebooks on the classpath 😅
we can configure c.t.n.r
to not reload those namespaces
TIL about clerk 😁
> we can configure c.t.n.r
to not reload those namespaces
yes please!
@U0524B4UW #C035GRLJEP8 😄
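Configuring c.t.n.r to skip namespaces can look like the following; the dir names and the notebooks.clerk namespace are assumptions:

```clojure
(require '[clojure.tools.namespace.repl :as c.t.n.r])

;; Option 1: only scan these source dirs, so notebook dirs are never tracked
(c.t.n.r/set-refresh-dirs "src" "test")

;; Option 2: opt a namespace out of reloading, from within that namespace
;; (placed in the hypothetical notebooks/clerk.clj)
;; (ns notebooks.clerk)
;; (clojure.tools.namespace.repl/disable-reload!)
```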
anyone here use the stream<->table
duality stuff in kafka-streams ? i.e. ksqlDB
or kafka streams with KTables
?
does it live up to the documentation hype @U050CJW4Q? i.e. reliable, straightforwardly elastic, data-intensive app backends, stream re-use &c ... any common pitfalls or poor-fit use-cases ?
I would say in general yes, if you think of it as a fancy interface on top of RocksDB that is Kafka-aware. And if the provided abstractions don't fit exactly, you can use custom processors and use RocksDB directly from there. Common pitfalls would be related to partitions and keys: certain operations (e.g. joins) require the partition keys to be the same, so you need to be careful about repartitioning if it's not handled automatically (like when using a custom processor)
i think i see - so, if e.g. i wanted to add an email to a user record, but with a constraint: only if there is no other user in the same tenant with that email, i would need something like:
a KStream
of commands: {:key [<tenant-id>,<email>] :value [:users/add-email-cmd [<tenant-id> <user-id>]]}
joined to a KTable
of: {:key [<tenant-id>,<email>] :value {:id <user-id> ...}}
and a ValueJoiner
emitting values something like UserChange|Error
so data constraints have an effect on stream topologies ?
The keys have to match for the joins, as a stream-to-KTable join essentially looks up the stream message key in the KTable's RocksDB kv store each time. Of course you can add your own constraints once in the ValueJoiner fn. If you have more complex indexing requirements you can create additional state stores and look them up yourself
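The command-stream-joined-to-a-KTable shape sketched above could look like this via Java interop; the topic names and joined value shapes are hypothetical. Note a left join is needed here, since an inner join would drop commands whose key has no matching user:

```clojure
(import '[org.apache.kafka.streams StreamsBuilder]
        '[org.apache.kafka.streams.kstream ValueJoiner])

(let [builder  (StreamsBuilder.)
      ;; commands keyed by [tenant-id email]
      commands (.stream builder "user-email-commands")
      ;; existing users, keyed the same way (co-partitioning requirement)
      users    (.table builder "users-by-tenant-email")
      ;; leftJoin so commands with no matching user still come through (user = nil)
      joined   (.leftJoin commands users
                          (reify ValueJoiner
                            (apply [_ cmd existing-user]
                              (if existing-user
                                {:type :error :reason :email-taken :cmd cmd}
                                {:type :user-change :cmd cmd}))))]
  ;; emit the UserChange|Error values downstream
  (.to joined "user-email-results"))
```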
thanks @U050CJW4Q!