2023-06-01
Hi folks, does anyone have any experience in tracking test assertions that run in a spawned thread from the test they are executed in? I'm currently using reactor.test.StepVerifier, which allows me to do some custom assertions (via Java Consumer functional interfaces). I can run standard `is` assertions; however, the results of these assertions do not get tracked by the underlying test. Am I just asking for too much, or is there potentially another approach? Many thanks in advance!
You might be asking for too much, but basically: test assertions are tracked via the clojure.test/report multimethod. If you can call this report method, and if you can wait for each spawned thread to finish before reporting everything, then you might be able to do it.
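A sketch of one way this can work, not taken from the thread itself: clojure.test tracks results through dynamic vars (`*report-counters*`, `*testing-contexts*`) that do not automatically follow a function onto another thread, so `bound-fn*` can be used to carry them along. `tracking-consumer` is a hypothetical helper; whether StepVerifier actually invokes the Consumer on another thread depends on the publisher under test.

```clojure
(ns example.step-verifier-test
  (:require [clojure.test :refer [deftest is]])
  (:import (java.util.function Consumer)
           (reactor.core.publisher Flux)
           (reactor.test StepVerifier)))

;; Hypothetical helper: wrap an assertion fn in bound-fn* so the
;; calling thread's dynamic bindings (including clojure.test's
;; *report-counters*) travel with the Consumer to whatever thread
;; runs it, letting `is` report against the enclosing test.
(defn tracking-consumer ^Consumer [f]
  (let [bf (bound-fn* f)]
    (reify Consumer
      (accept [_ v] (bf v)))))

(deftest step-verifier-assertions
  ;; .verifyComplete blocks until the sequence terminates, so every
  ;; assertion has reported before the test body returns.
  (-> (StepVerifier/create (Flux/fromIterable [1 2]))
      (.assertNext (tracking-consumer #(is (= 1 %))))
      (.assertNext (tracking-consumer #(is (= 2 %))))
      (.verifyComplete)))
```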
@U3Y18N0UC many thanks for your reply 🙏:skin-tone-2: Sweet, I'll keep hammering at this. I think it might actually be working, and there's simply a test-runner display issue. I forgot to mention I'm using Kaocha, so I'll take another look at their docs to see what else I might do to improve the test statistics. Thanks again for your suggestions!
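For reference, reporter display in Kaocha is configured in tests.edn; a minimal sketch, assuming the default suite layout, using Kaocha's built-in documentation reporter, which prints each test and testing context as it runs:

```clojure
;; tests.edn
#kaocha/v1
{:reporter [kaocha.report/documentation]}
```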
How would you design a system so that you minimized testing to only those things which changed? I know this is a Hard Problem™; what practical steps do you take to try to get 80% of the way there?
@U0DJ4T5U1 honestly I just try to get it to work first, then try to make the tests vaguely sane, which depends greatly on the libs/toolkits you are using. I'm using Lacinia/Pedestal for the main stack, with websockets for GraphQL subscriptions. In terms of design you can be limited by the stack you use, the choices made, and the tooling available. I'm not sure minimising is even worth it: how do you know that a change in domain entities won't affect messages sent out of a websocket to the client? I run my tests all the time to make sure I assume as little as possible: ~200 tests, runs within 5 mins, faster if done in parallel. HTH
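One common way to get part of the way toward the 80%, offered as a sketch rather than what anyone in the thread does: tag tests with the subsystem they exercise, then have the runner select on that metadata. The namespace and test names here are hypothetical placeholders.

```clojure
(ns example.selection-test
  (:require [clojure.test :refer [deftest is]]))

;; Tag each test with the layer it exercises, so a change confined
;; to one layer can run just that layer's tests.
(deftest ^:domain entity-roundtrip
  (is (= {:id 1} {:id 1})))

(deftest ^:websocket subscription-message
  (is (string? "hello")))
```

With Kaocha, that selection looks like `bin/kaocha --focus-meta :websocket` (or `--skip-meta :slow` to exclude a category).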
Thanks rowland, errrr, I didn't realize my question got asked inside a thread. I didn't mean to do that.