#clojure
2022-09-23
pesterhazy09:09:55

What's a good, simple unit-testing setup for Clojure? I'm using deps.edn. I'd like a way to:
• make a change
• rerun all the tests (ideally automatically)
• get feedback as fast as possible (fast microtests completing in <5s)
This works:

watchexec -w src -w test clojure -Mtest
but it's a little slow, because all state needs to be recreated on every change and startup cost is relatively high. How can I speed things up? (Asking an open question here to make sure I'm not overlooking a good solution)
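For reference, a hypothetical deps.edn :test alias that an invocation like clojure -M:test assumes; the runner and its coordinates here are illustrative (cognitect-labs/test-runner), substitute whatever runner the project actually uses:

;; deps.edn (sketch; runner and coordinates are illustrative)
{:aliases
 {:test {:extra-paths ["test"]
         :extra-deps  {io.github.cognitect-labs/test-runner
                       {:git/tag "v0.5.1" :git/sha "dfb30dd"}}
         :main-opts   ["-m" "cognitect.test-runner"]}}}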

pesterhazy09:09:10

I've used kaocha in the past, and I liked it. Does it address the slow feedback problem of my watchexec?

magnars10:09:44

Yes, it will keep a process running so you don't get the startup costs.

pesterhazy10:09:21

Will give it a try

borkdude10:09:04

> Yes, it will keep a process running so you don't get the startup costs. But does it also deal with the "dirty REPL" problem? How exactly does it work?

borkdude10:09:50

btw, the official syntax is clojure -M:test : omitting the colon only works by accident, and not on every platform

💡 1
pesterhazy10:09:34

Can you describe the dirty repl problem?

pesterhazy10:09:09

Just tried kaocha, and it seems to work pretty well out of the box, with <100ms feedback
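For context, a minimal kaocha setup sketch along these lines (the version shown is illustrative; check the kaocha README for current coordinates):

;; deps.edn
{:aliases
 {:test {:extra-paths ["test"]
         :extra-deps  {lambdaisland/kaocha {:mvn/version "1.69.1069"}}
         :main-opts   ["-m" "kaocha.runner"]}}}

;; tests.edn
#kaocha/v1 {}

;; run continuously, re-running tests on file changes:
;; clojure -M:test --watch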

borkdude10:09:17

well, the dirty REPL problem is for example when you remove a function from your code base but the tests still refer to that function, which should result in failing tests, but doesn't

borkdude10:09:12

I'm not sure how kaocha handles reloading Clojure code, which is why I asked

borkdude10:09:39

There are also edge cases like protocol re-definition

pesterhazy10:09:49

Seems to be using (a fork of) tools.namespace.reload
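For context, the underlying tools.namespace idea looks roughly like this at the REPL (a sketch of the general mechanism, not of kaocha's internals):

(require '[clojure.tools.namespace.repl :as tns])

;; unloads changed namespaces plus everything that depends on them,
;; then reloads them from source in dependency order
(tns/refresh)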

pesterhazy10:09:51

I don't know whether tools.namespace empties a namespace or just redefines the vars

borkdude10:09:07

yes, tools namespace reload isn't without problems either

borkdude10:09:22

my take is that I either just use the REPL to run tests or start with a clean slate on the command line

pesterhazy10:09:51

Basically what you're saying is that there's a risk of correctness problems if you're relying on namespace reloading: a test might pass even though a function it references no longer exists. Correct?

borkdude10:09:31

with tools namespace reload that problem is taken care of, but there are other problems that I'm not able to come up with right now, but they exist ;)

pesterhazy10:09:10

I used to be a stickler for correctness, but these days I'm willing to accept 1% of compromise in correctness, if it gives me a significant boost in feedback speed in return

pesterhazy10:09:36

Of course there's always a safety net in CI, which always takes the slow, safe path

borkdude10:09:09

sure. it also depends what you're developing. when I'm developing a small library, the few seconds of startup time don't really bug me that much for running a whole test suite, but if I'm developing a big app that takes 2 minutes to start, it's a different story. There I just rely on the REPL

pesterhazy10:09:44

Right. I'm doing TDD in a larger app so I want to minimize feedback cycles as much as humanly possible

borkdude10:09:06

I just don't have fond memories about tools reload, I want to avoid it

pesterhazy10:09:41

That's a good point, I also remember getting confused about the state of the repl at times, and proactively restarting repls whenever something was off

borkdude10:09:17

perhaps kaocha has taken care of the tools reload problems like protocol stuff, but I doubt it has solved all problems that exist in that area

borkdude10:09:02

@U04V70XH6 also has some opinions on the "reloaded" workflow, perhaps he has more vivid memories of what the edge cases are, but as long as it works for your purposes, go for it :)

pesterhazy10:09:03

Hehe, yeah I guess so long as you know it's not bullet-proof, you can restart every once in a while

pesterhazy10:09:47

Most of those problems also affect evaluating a buffer manually in Emacs, right?

borkdude10:09:11

yeah, I think so, but at least you can avoid loading everything, e.g. protocols can stay stable

👍 1
borkdude10:09:34

this is also true for the reloaded workflow if you're careful not to touch those namespaces and keep them separated out

borkdude10:09:02

not sure if kaocha does it fine-grained or reloads just everything

jakemcc14:09:11

I'm the author of https://github.com/jakemcc/test-refresh, one of the earliest "monitor files, reload using tools.namespace, and run tests" tools. Not really recommending it over kaocha as I don't have experience with both. I'm super happy kaocha exists, especially since I dragged my feet on deps.edn based projects.

In general, I've found these types of tools to make a huge impact on reducing feedback cycles, and to be generally less error prone than managing a repl's state (tools.namespace is great). A couple things that test-refresh supports that can help with faster feedback cycles include running previously failed tests first and (optionally) only running tests that could be affected by the previous code reload.

It is worth it with any of these tools that use tools.namespace to read through the https://github.com/clojure/tools.namespace#reloading-code-preparing-your-application part of the tools.namespace readme. One tip for defprotocol is to isolate them in their own namespace and give that namespace as few-as-possible reasons to need reloading. This is actually generally a worthwhile strategy if you have some area of code that you'd prefer not to reload often.

💯 1
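A sketch of the defprotocol isolation tip above (namespace names are hypothetical): keep the protocol in a namespace that rarely needs reloading and put implementations elsewhere, so reloading the implementations doesn't redefine the protocol and orphan existing extenders:

;; myapp/store/protocol.clj -- rarely reloaded
(ns myapp.store.protocol)

(defprotocol Store
  (fetch [this k])
  (store! [this k v]))

;; myapp/store/memory.clj -- safe to reload freely
(ns myapp.store.memory
  (:require [myapp.store.protocol :as p]))

(defrecord MemoryStore [state]
  p/Store
  (fetch [_ k] (get @state k))
  (store! [_ k v] (swap! state assoc k v) nil))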
borkdude14:09:16

@U06SWJ2RH Thanks for chiming in!

👍 1
seancorfield15:09:13

RCF is problematic in several ways tho' because it auto-generates test names and that means if you modify files in a running process, you get multiple modified copies of tests. I opened a ticket on the RCF repo ages ago about it and they're thinking about it. It's why Classic Expectations can also be problematic. You don't get repeatability when test names change under you: you get a mix of old, often broken tests and new tests being run -- and then you start resorting to stuff like t.n.r and then you're dealing with its fragility as well 😞

seancorfield15:09:32

At work we use Polylith (which supports Kaocha as an optional test runner) and it provides incremental testing -- only running tests that depend on code you've changed -- and uses classloader isolation for running tests to avoid the dirty REPL problem. However, there are still problematic edge cases with that (such as reified classes that end up in the global fork/join pool's classloader and then conflict with classes loaded into different classloaders) but those are much more "edge-case-y" and you're less likely to encounter them...

bbss16:09:00

Thanks for that, I'll certainly have a look at the Kaocha + Polylith pattern. I like how low-effort rcf-style test writing is to me since it's close to my regular workflow in both clj and cljs. It's also nice how it can double as a sort of inline documentation.
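For context, inline RCF-style tests look roughly like this (a sketch; slugify is a made-up example function). The (tests ...) block sits next to the code it documents and, as quoted below, macroexpands to nothing unless RCF is enabled:

(require '[clojure.string :as str]
         '[hyperfiddle.rcf :refer [tests]])

(defn slugify [s]
  (str/replace (str/lower-case s) #"\s+" "-"))

(tests
  (slugify "Hello World") := "hello-world")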

borkdude16:09:10

As for inline documentation: my concern would be dragging in extra dependencies in production libraries or bigger JS bundles

seancorfield16:09:17

@U04V15CAJ It looks like they've worked on the dependency issue and now RCF only depends on Clojure? https://github.com/hyperfiddle/rcf/blob/master/deps.edn

borkdude16:09:23

@U04V70XH6 That's good, but using rcf in a library namespace will still pull in namespaces like cljs.test, which will negatively affect build size

borkdude16:09:39

So for inline tests, I wouldn't use it. I would consider using it (or #clerk or ...) for external test namespaces

seancorfield17:09:41

Yeah, I like the idea of it but I have very mixed feelings about it in practice.

bbss03:09:19

> (tests) blocks erase by default (macroexpanding to nothing), which avoids a startup time performance penalty as well as keeps tests out of prod. So that might not be an issue?

seancorfield03:09:08

It still requires cljs.test into your production code even if the macro collapses to nothing. I think that's what Michiel was referring to.

bbss03:09:04

Ah yeah, that would be bad.

kraf08:09:50

@U04V15CAJ If you use none of those tools, how do you run tests? And you don't use the "reloaded" workflow? Do you still use something like integrant or similar? Sorry for bombarding you with questions 🙈 I'm very interested in alternative approaches

borkdude09:09:21

@U01DV4FGYJ0 It depends. When I develop a bigger application with db connections or http-servers, I tend to use component or integrant. I just use the REPL and evaluate forms or buffers and then reload the system. I can run tests from the REPL

🙏 1
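A sketch of that REPL-driven flow: re-evaluate the changed buffer or forms in the editor, then run the relevant tests directly (the test namespace name here is hypothetical):

(require '[clojure.test :as t])

;; run one test namespace
(t/run-tests 'myapp.handler-test)

;; or everything matching a pattern
(t/run-all-tests #"myapp\..*-test")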
borkdude09:09:43

Nowadays most of my time is spent with libraries which are usually simpler to develop

Ertugrul Cetin10:09:24

Can I introduce my own custom var that can be used with set!, like e.g. *warn-on-reflection*?

(set! *warn-on-reflection* true)
My custom one:
(set! *my-custom-var* true)

magnars11:09:54

Yes.

(def ^:dynamic *my-custom-var* 1)

Ben Sless11:09:54

@U07FCNURX warn on reflection exists in every namespace. Is there a way to create a dynamic var that can be set for each namespace, like how warn-on-reflection is interned?

magnars11:09:25

No. *warn-on-reflection* is available everywhere because it's part of clojure.core.

magnars11:09:27

I guess you could do (in-ns 'clojure.core) and then (def ^:dynamic *my-custom-var* 1) but it would certainly be a very odd thing to do.

Ertugrul Cetin12:09:14

(set! *my-custom-var* 2) does not work; Can't change/establish root binding of: *my-custom-var* with set @U07FCNURX

magnars12:09:43

No, I guess ^:dynamic is meant for binding, not set!. Then I can't help you, sorry.

Ed17:09:34

set! will work exactly the same for your custom var as it does for *warn-on-reflection*. set! can only be called when the var is bound to thread-local state, as in

(defonce ^:dynamic *my-thing* 1)

(comment

  (binding [*my-thing* 2]
    (prn '> (set! *my-thing* 3)))

  )
The compiler basically runs inside the equivalent of binding for things like *warn-on-reflection* so if you just write
(set! *warn-on-reflection* true)
at the top level in your file, it's interacting with the state created by the compiler.

❤️ 1
Ertugrul Cetin17:09:28

Thank you for the info!

Adam Helins13:09:11

What is the exact behavior of *warn-on-reflection* across namespaces? Little experiment: say namespace A requires B, and B sets it to true. Its value seems to remain false in other namespaces. When everything is loaded and nREPL kicks in, its value is false even in B.

Adam Helins14:09:26

Not quite because it doesn't go into the kind of behavior I'm exposing (but thanks nonetheless!)

kwladyka14:09:52

> :reload-all is the key because it forces to reload all dependency. Warnings appear when functions are defined, not when they are used. I thought it can be what you need

kwladyka14:09:08

The question is not clear for me

Adam Helins14:09:05

Well the gist is that *warn-on-reflection* is set to true before requiring other namespaces. But when those namespaces are required, printing its value, we can see it is set to false.

Adam Helins14:09:22

The question being: why? 😄

kwladyka15:09:08

I don’t know. I have never tried to read the value of *warn-on-reflection*, only set it.

kwladyka15:09:14

Why do you need it?

kwladyka15:09:45

> is set to true before requiring other namespaces
Try to set it manually in the REPL and require the ns with :reload to be sure.

kwladyka15:09:15

Perhaps ns are loaded before you set it to true

Adam Helins15:09:20

Setting it in the REPL works for some reason, yes. I'm really talking about the main invocation. The main namespace requires a whole bunch of namespaces, but the first one sets *warn-on-reflection* to true. That namespace can print it, it is set to true, no doubt. Yet it is somehow reverted to false everywhere after loading it.

kwladyka15:09:21

> But the first one sets *warn-on-reflection* to true
It first loads the ns, then executes the code. So you set *warn-on-reflection* after requiring the other namespaces

Adam Helins15:09:33

No 😅 The first namespace to be required in the main :require sets it.
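One explanation consistent with this behavior (not stated in the thread, so treat it as an assumption): loading a file pushes a fresh thread-local binding of *warn-on-reflection*, so a set! inside a required namespace only lasts until that file finishes loading and is then popped. The usual idiom is to set it at the top of every namespace that needs it:

(ns myapp.whatever)   ; hypothetical namespace

;; affects compilation of the rest of *this* file only;
;; the binding is popped when the file finishes loading
(set! *warn-on-reflection* true)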

kwladyka15:09:05

All in all I think you need tool like https://github.com/athos/clj-check

kwladyka15:09:54

:check-syntax-and-reflections {:extra-deps {athos/clj-check {:git/url "https://github.com/athos/clj-check.git"
                                                             :sha "da6363a38b06d9b84976ed330a9544b69d3c4dee"}}
                               :main-opts ["-m" "clj-check.check"]}

kwladyka15:09:59

^ part of deps.edn

kwladyka15:09:19

Maybe there is newer tool. I don’t know.

winsome13:09:51

I spent a while last night reading about transducers. The RH talks were especially helpful in understanding them. I think I've got a good handle on the basics now, and they remind me quite a bit of interceptors, a la pedestal or sieppari. In my mind the killer feature of interceptors is dynamism - you can inspect the interceptor queue or stack during execution and update it however you want. Is there a way to inspect the xform in a similar way in a transduce call, to modify the transducer at runtime?

p-himik13:09:35

Inspection should be possible with a reasonably advanced debugger and some skill, because combining transducers is done via comp. But it's definitely not trivial. Runtime modification is limited to comping extra transducers.

winsome14:09:48

I suppose between reduced and dynamically comping extra reducers you can get basically the same flexibility.
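A minimal sketch of that idea (not from the thread): choose the xform at runtime by comping transducers picked from data, and rely on reduced (here via take) to stop early:

(defn build-xform [{:keys [pred limit]}]
  (cond-> (map inc)
    pred  (comp (filter pred))
    limit (comp (take limit))))   ; take calls `reduced` internally to stop early

(transduce (build-xform {:pred odd? :limit 3}) conj [] (range 100))
;; => [1 3 5]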

Joshua Suskalo18:09:12

So the ^:once meta put on fn* calls is used when a function is going to only be called once, so that any locals it holds references to can be cleared while the function is running. Is there a way to specify the same using higher-level function macros (like bound-fn), or am I best served by managing binding conveyance myself after using the ^:once meta on fn*, the way that core.async does?

Joshua Suskalo18:09:49

What I'm seeing as I look over core is that the fn macro itself does not provide a way to set the once meta on the fn* symbol that gets emitted

Joshua Suskalo18:09:58

So I guess I just have to do what core.async does
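A sketch of that pattern (the macro name is hypothetical; the shape mirrors how core.async's thread macro attaches the meta): put :once directly on the fn* symbol so the compiler can clear closed-over locals once the body has run:

(require '[clojure.core.async :as async])

;; hypothetical `my-thread`, mirroring core.async's own `thread` macro
(defmacro my-thread [& body]
  `(async/thread-call (^:once fn* [] ~@body)))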

hiredman18:09:14

:once is a pretty low level thing, core.async only sort of uses it for the go macro (the go macro relies on the fn keeping closed over values, so it cannot actually use :once, but I have a patch that changes this)

Joshua Suskalo18:09:45

Yeah in this case I'm referring to how core.async uses it for thread

Joshua Suskalo18:09:14

I'm currently writing a tiny library to monkeypatch core.async to use JDK 19 virtual threads for the go macro, and so with this in mind the implementation of go is practically the same as for thread + thread-call.

hiredman18:09:58

with virtual threads you don't need the go macro at all

hiredman18:09:25

ah, so you are replacing it with thread

Joshua Suskalo18:09:50

The idea is to monkeypatch core.async so that all libraries that use it and are distributed as source can be ported seamlessly to virtual threads by just requiring the monkeypatch library before the other dependencies.

hiredman18:09:46

the big issue there is that channels, when they execute a callback, unconditionally do it by putting the result on the go-block threadpool

hiredman18:09:36

which means things will still work with virtual threads, just ping-ponging between more threads than needed

Joshua Suskalo18:09:59

You mean the put! callbacks? Yeah, I'm not too worried about that since using the blocking versions of put and take means that there won't be callbacks happening except in libraries that are explicitly doing that as a low-level way of interacting with core.async.

hiredman18:09:15

the blocking versions use callbacks

hiredman18:09:26

they just make the callbacks blocking using promises

Joshua Suskalo18:09:36

Yeah, I'm not too worried about that.

Joshua Suskalo18:09:54

Yes it'll add some extra threads, but the point here is ease of porting, not being optimal.

Joshua Suskalo18:09:34

The ideal case in the end would be a fork of core.async that only uses virtual threads and has nothing to do with the pool, but I'm just trying to put something easy to use together rn.

Joshua Suskalo19:09:26

although I think I could replace the core.async thread pools with some virtual thread factories

seancorfield20:09:16

@U5NCUG8NR Did you see my tiny example about a go macro based on virtual threads a while back?

Joshua Suskalo20:09:34

no, I hadn't, thanks for pointing it out!

Joshua Suskalo20:09:48

That's incredibly similar to the code I wrote

🙂 1
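For context, a rough sketch of such a virtual-thread-backed go (assumptions: JDK 19+ with preview features enabled for Thread/startVirtualThread; vgo is a made-up name, not core.async API, and this is not the library or example discussed above):

(require '[clojure.core.async :as async])

(defmacro vgo
  "Run body on a virtual thread; return a channel that receives the result,
   similar to what core.async's `thread` does."
  [& body]
  `(let [c# (async/chan 1)]
     (Thread/startVirtualThread
       (fn []
         (try
           (let [v# (do ~@body)]
             (when (some? v#) (async/>!! c# v#)))
           (finally (async/close! c#)))))
     c#))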
Takis_22:09:42

hello i am getting this

Could not transfer artifact graphframes:graphframes:jar:0.8.1-spark3.0-s_2.12 from/to graphframes (): Checksum validation failed, no checksums available

seancorfield22:09:26

Sounds like that repo does not have checksums so you need to tell Leiningen not to check artifacts from that repo: https://codeberg.org/leiningen/leiningen/src/branch/stable/sample.project.clj#L112-L114

Takis_22:09:21

i want this [graphframes/graphframes "0.8.1-spark3.0-s_2.12"] and i added

:repositories [["" ""]]

R.A. Porter22:09:07

I don't know why it is, but there don't seem to be checksums in their repo (I added it to a clj-based project and it pulled the pom and jar, but not any meta files). But, with leiningen, you can add this to your repo def: :checksum :ignore like...

:repositories [["" 
   {:url ""
    :checksum :ignore}]]

seancorfield22:09:35

Jinx. I replied the same thing in a thread on the first post 🙂

😄 1
R.A. Porter23:09:03

Eh. You probably didn't have the misplaced brackets. 😄

seancorfield23:09:07

@U68Q5G1BJ If you always use threads to add extra information to your question, that won't happen (multiple people answering you in different places).

seancorfield23:09:26

Heh, I just pointed them at the sample project example showing how to do it 🙂

Takis_23:09:58

i found it just now, and i came to delete it, thank you 🙂

Takis_23:09:01

next time i will add more info on thread, thanks again 🙂