#kaocha
2020-12-09
andrea.crotti12:12:01

is there a good example of a project configured with kaocha-cljs and/or kaocha-cljs2? I'm trying on a few projects but I never manage to get it to work, always fails to connect back to the browser

andrea.crotti12:12:20

probably just something dumb, if I see a working example I can probably figure it out

plexus12:12:03

lambdaisland/uri uses kaocha-cljs, literally just 100% defaults

plexus12:12:22

#kaocha/v1
{:tests [{:id :clj}
         {:id :cljs
          :type :kaocha.type/cljs}]}

plexus12:12:09

generally with kaocha-cljs it works better the less hard you try 🙂 if you start giving it compiler options or such you'll likely break the repl setup.

plexus12:12:23

https://github.com/lambdaisland/kaocha-cljs2-demo is meant as a demo setup for kaocha-cljs2, but not sure what state it is in...

andrea.crotti13:12:36

ok thanks I'll check

andrea.crotti13:12:07

btw @plexus I think I noticed a pretty serious issue with my retry plugin: tests are now always passing no matter what, and I found out why

andrea.crotti13:12:26

if the test function itself is called in

(defn- with-capture-report [t]
  (with-redefs [te/report (fn [& args] (reset! to-report args))]
     (t)))
it always returns true

andrea.crotti13:12:41

but if I run (t) directly it returns false when the test fails

andrea.crotti13:12:50

if I check what's passed as args it also contains something like

:type :pass, :expected (= 1 1), :actual (#function[clojure.core/=] 1 1), :message nil}
which is weird since it failed

andrea.crotti13:12:56

so somehow the return value of (t) itself depends on the reporter, but also what's passed to the reporter doesn't seem right, am I missing something silly?

plexus14:12:11

@andrea.crotti it seems like you're assuming you'll only get one event per test. That's not correct.

plexus14:12:24

you may get multiple pass/fail/error events for a single test

plexus14:12:36

as well as other events, custom assertions, etc
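
A sketch of the capture helper adjusted for that: assuming te aliases clojure.test (whose report receives a single event map), accumulate every event instead of keeping only the last one.

(ns retry.capture
  (:require [clojure.test :as te]))

;; run the test thunk and return every report event it fired,
;; instead of keeping only the last one
(defn- with-capture-reports [t]
  (let [events (atom [])]
    (with-redefs [te/report (fn [event] (swap! events conj event))]
      (t))
    @events))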

andrea.crotti14:12:51

ah right of course, there are multiple assertions

andrea.crotti14:12:15

so what would (t) return then in general?

plexus14:12:38

I don't think (t) really returns anything, or at least nothing meaningful

plexus14:12:07

I would have to check but I think you'll just get back whatever the last form in the deftest returned, which doesn't tell you anything

plexus14:12:22

all fail/pass information is communicated via the events

andrea.crotti14:12:24

ah ok I see, yeah I was fooled by the fact that I saw it returning true/false

andrea.crotti14:12:35

so I assumed it was returning whether the test failed or not

andrea.crotti14:12:42

cool ok makes sense

plexus14:12:02

yeah I think (is ...) will return true/false, which makes it convenient from a repl, but that's mostly ignored in actual tests

andrea.crotti14:12:21

yeah I think I got it now. It makes things more complicated though, since I have to capture all the reports and remove the duplicate failures when I report something that failed more than the max number of times

andrea.crotti14:12:57

but well should be easy to find the duplicates

plexus14:12:41

not sure how you're approaching it, but how I imagined it would work:
- run the test, capture all events
- did it fail? (there's a fail event) -> re-run the test, always keeping the recording of events from the last run
- did it pass, or too many retries? -> break the loop, you're done
- now replay all events from the last time you ran the test
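
A minimal sketch of that loop, with the moving parts passed in as plain functions; run-test!, failed?, and replay! are hypothetical names, not kaocha API:

;; run-test! - runs the test once and returns the seq of report events it fired
;; failed?   - predicate on a single event (e.g. kaocha.hierarchy/fail?)
;; replay!   - re-emits one event to the real reporter
(defn retry-until-pass [run-test! failed? replay! max-retries]
  (loop [attempt 1
         events  (run-test!)]
    (if (and (some failed? events)
             (< attempt max-retries))
      ;; failed with retries left: run again, keeping only the newest events
      (recur (inc attempt) (run-test!))
      ;; passed, or out of retries: replay the events from the last run
      (run! replay! events))))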

andrea.crotti15:12:51

yes that's what I was trying to do, but since that report has to contain multiple things I can end up with multiple reports for the same failed assertion

andrea.crotti15:12:18

even if I'm not sure that's the best solution yet

plexus15:12:35

you're capturing all reports, do the reset inside the retry loop

plexus15:12:48

throw away all reports, except for the reports from the last retry

andrea.crotti15:12:04

ah right makes sense

plexus15:12:42

#(= :fail (:type %)) this is not correct, there are multiple assertion types that can mean failure, use the helpers that exist for this, see kaocha.hierarchy
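
For reference, kaocha.hierarchy exposes predicates that work on whole event maps; a quick sketch (the exact set of failure-ish types is defined by kaocha's hierarchy):

(require '[kaocha.hierarchy :as hierarchy])

;; fail? checks the event's :type against kaocha's failure hierarchy,
;; so it covers more than the literal :fail keyword
(hierarchy/fail? {:type :fail}) ;; => true
(hierarchy/fail? {:type :pass}) ;; => false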

plexus15:12:30

also those global atoms 🙈

plexus15:12:42

but good job! glad to see this come together

andrea.crotti15:12:43

hehe well I'll be happy to remove the atoms if I find out how

andrea.crotti15:12:59

and yeah cool I'll fix the fail check

andrea.crotti15:12:10

I have to add a few tests for the plugin as well

plexus15:12:51

you have access to the test-plan/config in every hook, and if not you can get it through a dynamic variable, so you can use any hook that runs relatively early to assoc them on there

plexus15:12:57

but really they're only used in a fairly limited scope, you could also add a new atom onto each leaf testable in pre-test, together with setting the testable/wrap
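
A rough sketch of that idea, assuming the kaocha.plugin/defplugin pre-test hook and treating :kaocha.testable/wrap as a collection of fns that wrap the test thunk; ::reports is a made-up key for this plugin:

(ns my.retry-plugin
  (:require [kaocha.plugin :refer [defplugin]]))

(defplugin my.retry-plugin/retry
  (pre-test [testable test-plan]
    (let [reports (atom [])          ; one atom per leaf testable, not global
          wrap-fn (fn [t]
                    ;; wraps the actual test invocation; capture/retry goes here
                    (fn [] (t)))]
      (-> testable
          (assoc ::reports reports)  ; ::reports is a made-up key
          (update :kaocha.testable/wrap conj wrap-fn)))))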

andrea.crotti15:12:39

ah right cool I can try

andrea.crotti15:12:08

thanks for the help, not sure I would have got to something almost working otherwise 😄

plexus15:12:04

of course! always happy to help, especially people making kaocha plugins 🙂

plexus15:12:54

I still think the design of kaocha is quite elegant, it's definitely powerful, but it's not trivial. Maybe some day I'll get to make some explainer videos 🙂

andrea.crotti15:12:05

yeah it's very nicely done for sure

andrea.crotti15:12:22

the problem is maybe that it's just quite hard to deal with these massive nested maps

andrea.crotti15:12:40

even on a tiny project (which is what I use to test my plugin) it can be tricky sometimes

plexus15:12:37

yeah that's true, there's so much in there that it's hard to just inspect them. I have a good "mental map" of what they look like, that's perhaps the hardest thing to communicate. Even with the specs and everything. Maybe something like Portal can help here
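
In case it's useful, a minimal Portal setup for poking at these maps, assuming the portal.api namespace from djblue/portal; any config/test-plan available in a hook can then be sent over with tap>:

(require '[portal.api :as portal])

(portal/open)              ; opens the Portal UI
(add-tap #'portal/submit)  ; send tap>'d values to it

;; then from inside a kaocha hook:
;; (tap> test-plan)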

andrea.crotti15:12:12

ah nice didn't know about portal

andrea.crotti15:12:20

I used cider-inspect sometimes

andrea.crotti15:12:29

and it helps for sure

andrea.crotti15:12:57

also I think that sometimes in the docs or in the plugins the arguments to the various methods are called differently

andrea.crotti15:12:59

even if they are the same thing

andrea.crotti15:12:19

which can be a bit confusing when trying to understand what something takes as argument

plexus15:12:17

yeah some of this is a bit confusing because, for instance, the test-plan and the config are really kind of the same map: at the top level you find the same stuff, the test-plan is really the config + information about the tests

plexus15:12:24

or test vs testable

plexus15:12:47

do make issues if you find certain docs confusing

andrea.crotti16:12:33

yeah config/test-plan is a good one, I would not have expected them to be the same thing pretty much

andrea.crotti16:12:18

I wonder if it makes sense to make it smarter instead of just blowing up like that. I guess if only one test can possibly match what you want to run it could just run that

andrea.crotti16:12:13

I don't think you would really have tests that can run in different scenarios anyway? Well maybe cljc tests can run in both cljs and clj, but I can't think of much else

andrea.crotti16:12:14

and actually for the example kaocha-cljs project, uri uses the node repl which is not something I can do anyway

andrea.crotti16:12:38

ah right no I didn't actually

andrea.crotti16:12:33

uhm that's fine, and well my PR maybe wasn't exactly what I was suggesting in the issue

andrea.crotti16:12:04

I mean it's fine not to specify the test suite, but running the same test multiple times when you pass --focus is maybe also confusing

andrea.crotti16:12:25

but well I guess it's not really a big issue, since in most cases it won't happen anyway

andrea.crotti16:12:38

if test suites don't have overlapping paths/regexes

plexus16:12:12

you should not have duplicate test ids in a test-plan, so you should not have multiple test suites with the same type and overlapping paths. If you do you are going to get confusing results. I'd rather see us turn that into a warning or error.

plexus16:12:53

Note that this is not the case for cljc. cljs tests have a prefix (in kaocha-cljs it's just cljs:, in kaocha-cljs2 it's the name of the connection, which is random but human readable), so even though the same test file appears twice in the test plan (clj and cljs), their test ids are different. We do give those tests aliases so you can still --focus them without the prefix, so yes, if you have cljc tests and do --focus my.ns/my-test-var it will run two tests. If you do --focus cljs:my.ns/my-test-var it will run one

andrea.crotti16:12:12

yeah ok then thanks

andrea.crotti16:12:17

I'll close the issue/PR