#clojure-europe
2022-11-22
mdiin07:11:24

Good morning. 👋

grav07:11:50

Morning!

grav07:11:00

Do any of you use something akin to hot reload for Clojure? I often re-evaluate namespaces, but that gets pretty tedious.

borkdude07:11:53

@grav There is something called the ns reloaded workflow, but I don't have good experiences with that. For me it's just manual evaluation from the REPL, either a complete file or form by form

borkdude07:11:17

Some things in Clojure give problems with reloading, e.g. protocol definitions

💡 1
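A small sketch of the kind of reload problem meant here; the protocol and record names are made up:

(defprotocol Greet
  (greet [this]))

(defrecord Person [name]
  Greet
  (greet [_] (str "hi " name)))

(def p (->Person "Rich"))
(satisfies? Greet p) ;; => true

;; Re-evaluating the defprotocol (as a naive namespace reload does) creates a
;; new protocol; the existing Person class still extends the old one:
(defprotocol Greet
  (greet [this]))

(satisfies? Greet p) ;; => false until Person is re-evaluated too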
mdiin12:11:43

I have taken to using the Cider undef function when changing things like protocols, which seems to work reasonably well. Not that I have to do so very often. 😅

lread13:11:24

I ripped an inlined potemkin defprotocol+ out of rewrite-clj a long while back. https://github.com/clj-commons/potemkin/pull/69/files. Maybe it would help?

grav07:11:03

Yeah there are probably several gotchas with a fully-fledged solution. I'm using Conjure with Neovim, and the author actually had an interesting approach that only took one re-evaluation of a form:

(do
  (defn foo [x]
    (+ 2 x))

  (foo 42))
I think what kept me away from it was the looks of it 😄 but it's pretty simple. Maybe I should give it a go.

reefersleep08:11:52

Looking forward to hearing more about your experiences with Neovim 🙂 I use it daily for ad hoc text editing, but never for Clojure editing.

grav08:11:09

I'm still a noob, and would probably be more productive with Cursive ... but just the thought of being able to spin it up on a remote machine makes me happy 😄 Not sure if I'd ever need it ...

reefersleep08:11:07

Well, vim, in itself, is pretty awesome imo. Looking forward to trading tips!

1
reefersleep08:11:03

And I enjoy the familiarity of vim on whatever machine my fingers are connected to 🙂

vim 3
mdiin12:11:39

I have a quite enjoyable setup with neovim and Clojure. You can take a look at my config here if you like: https://github.com/mdiin/dot-files/tree/main/nvim I’m using Doom Emacs for everything these days, but keep my vim config around for when I inevitably make an editor jump again. 😆

🍻 1
reefersleep12:11:48

Cheers 🙂

🍻 1
pez08:11:48

I mostly don't use hot reload in ClojureScript code.

simongray08:11:41

That's unusual. I'm guessing your use case for ClojureScript is a bit different from most? i.e. not making React-based websites.

pez08:11:46

That's what I do mostly. And React Native mobile apps. I like to be in control of what is re-evaluated. So my files remain unsaved for quite long periods of time, and then at some point (by criteria unknown even to me) I decide it is fine, and I go and save the files and let hot reload do its thing.

simongray08:11:47

good morning

simongray08:11:50

I never think about re-evaluating namespaces, the key combo is pure muscle memory just like Cmd+S (even though I don't actually need to save files manually in IntelliJ).

otfrom08:11:05

surprisingly on topic this morning

😅 4
mccraigmccraig12:11:23

we've been using a reloaded workflow with clojure pretty successfully - we have our own component manager (not integrant, but not completely dissimilar), which exposes a fn which does a stop -> c.t.n.r/refresh -> start sequence whenever you occasionally need a complete reload... but most of the time a REPL-based compile+load of an individual namespace or function is all that's required

metal 2
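A minimal sketch of what such a stop -> refresh -> start fn can look like, assuming clojure.tools.namespace.repl and a hypothetical my-app.system namespace standing in for the component manager:

(ns dev
  (:require [clojure.tools.namespace.repl :as repl]
            [my-app.system :as system]))   ;; hypothetical component manager

(defn go []
  (system/start!))        ;; recreate the app context

(defn reset []
  (system/stop!)          ;; tear down running components first
  ;; reload every changed namespace, then call dev/go to start again
  (repl/refresh :after 'dev/go))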
mccraigmccraig12:11:57

as @U04V15CAJ points out, some things can cause difficulties when reloaded (if you reload a protocol definition then any extensions of that protocol which don't also get reloaded will no longer satisfy the protocol, unless you do metadata-based protocol extension), but as long as you follow the convention of putting your protocol defs in their own namespace then you only encounter this issue when you change the protocol, and as long as you have an approach which (tears down where necessary and) recreates the objects extending protocols after the reload then you are good
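For reference, metadata-based protocol extension (Clojure 1.10+) looks roughly like this; the Component protocol and the map being extended are invented for illustration:

(defprotocol Component
  :extend-via-metadata true
  (start [this])
  (stop [this]))

;; a plain value extended via metadata; the keys are the fully-qualified
;; names of the protocol fns, which is why the extension survives a reload
(def db
  (with-meta {:conn nil}
    {`start (fn [this] (assoc this :conn :connected))
     `stop  (fn [this] (assoc this :conn nil))}))

(start db) ;; => {:conn :connected}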

robert-stuttaford12:11:39

how many kloc of clj source are you reloading this way? the initial reload starts with the full classpath, right?

mccraigmccraig12:11:05

about 250kloc for the full reload

borkdude12:11:39

to me personally it's just not worth the complexity budget, I've given it a fair try

robert-stuttaford12:11:10

how long does the full reload take?

robert-stuttaford12:11:31

we're at 150kloc right now and full reload feels punitive 😅

robert-stuttaford12:11:54

we used to do the reload thing for a long time, but fell out of it. pondering giving it a fresh try

mccraigmccraig12:11:23

it would definitely be too slow if a full reload was needed very often ...

mccraigmccraig12:11:32

@U0509NKGK
from a cold REPL start, reload to an active service: 41s
given an already active service, reload without any code changes (i.e. destroy objects -> reload nothing -> recreate objects): 1.2s
regular development flow reloads are usually a few seconds

robert-stuttaford12:11:38

ok that's not completely terrible 😄

robert-stuttaford12:11:40

the certainty that the code got cleanly reloaded, when all you had to do to get that was press a button, is worth some waiting

mccraigmccraig12:11:01

i generally only pay the full 41s tax once a day or so, the "few seconds" tax more often - maybe a couple of times an hour - and mostly just the almost instantaneous compile+reload ns tax

mccraigmccraig12:11:50

in order to make it work well i think you need to have a fairly principled no-global-state approach and to use something like integrant which can reliably manage setup and teardown of your app context
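A rough sketch of the integrant side of that, with invented component keys; ig/init builds the system in dependency order and ig/halt! tears it down again:

(ns dev
  (:require [integrant.core :as ig]))

(def config
  {::handler {}
   ::server  {:port 8080 :handler (ig/ref ::handler)}})

(defmethod ig/init-key ::handler [_ _]
  (fn [_req] {:status 200 :body "ok"}))

(defmethod ig/init-key ::server [_ {:keys [port handler]}]
  ;; start the real server here; return whatever halt-key! needs to stop it
  {:port port :handler handler})

(defmethod ig/halt-key! ::server [_ _server]
  ;; stop the server here
  nil)

(def system (ig/init config))  ;; setup
(ig/halt! system)              ;; teardown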

robert-stuttaford06:11:06

welp we can't use reload cos we have clerk notebooks on the classpath 😅

chrisetheridge10:11:24

we can configure c.t.n.r to not reload those namespaces
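Two ways to do that with clojure.tools.namespace.repl; the directory and namespace choices below are only examples:

(require '[clojure.tools.namespace.repl :as repl])

;; only scan these directories when looking for changed namespaces,
;; leaving the notebook sources out entirely
(repl/set-refresh-dirs "src" "dev")

;; or, from inside a namespace that should never be unloaded/reloaded:
(repl/disable-reload! *ns*)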

mccraigmccraig10:11:25

TIL about clerk 😁

robert-stuttaford14:11:35

> we can configure c.t.n.r to not reload those namespaces
yes please!

lread13:11:37

morning, good

mccraigmccraig14:11:11

anyone here use the stream<->table duality stuff in kafka-streams ? i.e. ksqlDB or kafka streams with KTables ?

yes 1
mccraigmccraig15:11:00

does it live up to the documentation hype @U050CJW4Q? i.e. reliable, straightforwardly elastic, data-intensive app backends, stream re-use &c ... any common pitfalls or poor-fit use-cases ?

minimal16:11:04

I would say in general yes, if you think of it as a fancy interface on top of rocksdb that is kafka aware. And if the provided abstractions don’t fit exactly you can use custom processors and use rocksdb directly from there. Common pitfalls would be related to partitions and keys. Certain operations require the partition keys to be the same (e.g. joins), so you need to be careful about repartitioning if it’s not handled automatically (like when using a custom processor)

mccraigmccraig13:11:33

i think i see - so, if e.g. i wanted to add an email to a user record, but with a constraint: only if there is no other user in the same tenant with that email, i would need something like:
a KStream of commands: {:key [<tenant-id>,<email>] :value [:users/add-email-cmd [<tenant-id> <user-id>]]}
joined to a KTable of: {:key [<tenant-id>,<email>] :value {:id <user-id> ...}}
and a ValueJoiner emitting values something like UserChange|Error
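An interop sketch of that topology; the topic names and event shapes are invented, and a left join is used so the "no existing owner" case is visible inside the joiner:

(import '(org.apache.kafka.streams StreamsBuilder)
        '(org.apache.kafka.streams.kstream ValueJoiner))

(let [builder  (StreamsBuilder.)
      commands (.stream builder "user-email-commands")   ;; keyed by [tenant-id email]
      owners   (.table builder "user-emails-by-tenant")  ;; same key -> current owner
      joiner   (reify ValueJoiner
                 (apply [_ cmd existing-owner]
                   (if (nil? existing-owner)
                     {:event :users/email-added    :cmd cmd}
                     {:event :users/email-conflict :cmd cmd})))]
  (-> (.leftJoin commands owners joiner)
      (.to "user-email-events"))
  (.build builder))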

mccraigmccraig13:11:55

so data constraints have an effect on stream topologies ?

minimal15:11:11

The keys have to match for the joins, as a stream to ktable join essentially looks up the stream message key in the ktable’s rocksdb kv store each time. Of course you can add your own constraints once in the ValueJoiner fn. If you have more complex indexing requirements you can create additional state stores and look them up yourself