#joyride
2023-01-06
pez11:01:52

Apropos "Add e2e testing" there ^: it is a bit mind-boggling that we are using Joyride to test Joyride. What we do is run VS Code and then execute this:

vscode.commands.executeCommand("joyride.runCode", "(require '[integration-test.runner :as runner]) (runner/run-all-tests)");
It has the added benefit that a lot of Joyride needs to work in order for the tests to run at all, so there's an extra layer of smoke testing there. It would have caught an incident we had a while ago, where we released a non-functioning Joyride version.
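
Roughly, the runner entry point could look something like this (the namespaces, the db/!state layout, and the promesa wiring here are just a sketch, not the actual implementation):

(ns integration-test.runner
  (:require [cljs.test]
            [integration-test.db :as db]   ;; assumed: an atom tracking :running, :pass, :fail, :error
            [promesa.core :as p]))

(defn run-all-tests []
  ;; Store a deferred promise that an :end-run-tests report method later
  ;; resolves or rejects, so the outer harness can await the whole run.
  (let [running (p/deferred)]
    (swap! db/!state assoc :running running)
    (cljs.test/run-tests 'integration-test.activate-test)  ;; hypothetical test namespace
    running))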

borkdude12:01:30

I think it's a nice approach. One potential issue: how do we know for sure that the SCI cljs.test version works and doesn't accidentally mark everything as successful, or forget to run certain tests? :) Maybe there should at least be some checks on the outside as well after the tests are run? In babashka I do this as follows: I test the bb clojure.test version using the host's clojure.test, and test evaluation of individual things the same way. Then I run several library tests in babashka, like we do above with Joyride. That way I've at least got a layered approach that ensures every layer works.
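
As a rough sketch, the host-side layer could be plain JVM clojure.test shelling out to the bb binary (assumes bb is on the PATH; this is not babashka's actual test suite):

(ns bb-smoke-test
  (:require [clojure.java.shell :refer [sh]]
            [clojure.string :as str]
            [clojure.test :refer [deftest is]]))

(deftest bb-evaluates-forms
  ;; Evaluate a form in babashka itself and verify the printed result from the host side.
  (let [{:keys [exit out]} (sh "bb" "-e" "(+ 1 2)")]
    (is (zero? exit))
    (is (= "3" (str/trim out)))))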

pez12:01:01

What would such tests look like? We could maybe wrap some known mix of succeeding/failing tests in a way that fails the run if their result tally is wrong...

borkdude12:01:29

Just added something to my comment above about how I do this in bb.

🙏 2
borkdude12:01:54

we could maybe also just check the expected stdout, or write to a log file in the tests

borkdude12:01:07

and then verify the output of the log file
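
Something like this could do the outside verification (the log path, output format, and threshold are all made up for the sketch):

;; Hypothetical outside check, e.g. run with babashka after the VS Code test run:
(let [log (slurp "integration-test.log")
      [match pass fail error] (re-find #"results: \{:pass (\d+), :fail (\d+), :error (\d+)\}" log)]
  (assert match "No test result line found in the log")
  (assert (and (zero? (parse-long fail))
               (zero? (parse-long error))
               (>= (parse-long pass) 20))
          (str "Unexpected test tally: " match))
  (println "Log check OK:" pass "passing assertions"))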

pez12:01:09

To use the host's cljs.test (shadow) we would need to refactor the joyride.sci code so that it does not rely on vscode, I think.

borkdude12:01:31

we don't need to use cljs.test per se, just some assertions will do

borkdude12:01:47

using electron/node

pez12:01:25

> we could maybe also just check the expected stdout, or write to a log file in the tests
This is where I was going with my first reply above.

borkdude12:01:28

I think it would be good to check:
• The number of tests that have been run
Maybe other stuff?

pez12:01:40

But write some special tests for this, maybe as simple as one succeeding and one failing, just to see that we don't ”accidentally mark everything as successful”.
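
A sketch of such sentinel tests (names invented; the runner would run this namespace first and check that the tally is exactly one pass and one fail before the real suite starts):

(ns integration-test.sentinel-test
  (:require [cljs.test :refer [deftest is]]))

(deftest deliberately-passing
  (is (= 1 1)))

(deftest deliberately-failing
  ;; This failure is expected; the runner verifies that it was counted.
  (is (= 1 2)))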

borkdude12:01:09

or maybe do this in some tests: add something to some global state, and then at the end of the test suite verify in the integration test runner that the global state is as expected

borkdude12:01:22

so we know at least the coverage has been as expected
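
A sketch of that idea (the shared atom and the test names are assumptions):

(ns integration-test.coverage-test
  (:require [cljs.test :refer [deftest is]]))

(def !tests-seen
  "Each test adds a marker here so the runner can verify it actually ran."
  (atom #{}))

(deftest activation-test
  (swap! !tests-seen conj ::activation-test)
  (is true)) ; real assertions would go here

(defn check-coverage!
  "Called by the integration test runner after the suite finishes."
  []
  (assert (= #{::activation-test} @!tests-seen)
          "Some expected tests never ran"))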

borkdude12:01:47

I think maybe this would work in the cljs.test report multimethods

borkdude12:01:00

so we can confirm the number of assertions made, I think that's sufficient

borkdude12:01:36

I trust that assertions work, so if we can just assert that "at least x assertions" have been made, that's good enough

pez12:01:42

We already keep a counter of successes and failures in an atom; a slight change to where we report back on the test run's success or failure should do it. Like so:

(defmethod cljs.test/report [:cljs.test/default :end-run-tests] [m]
  (old-end-run-tests m)
  (let [{:keys [running pass fail error]} @db/!state
        passed-minimum-threshold 20]
    (println "Runner: tests run, results:" (select-keys @db/!state [:pass :fail :error]))
    ;; Reject the test-run promise if anything failed or errored, or if
    ;; suspiciously few assertions passed (guards against silently skipped tests).
    (if (or (pos? (+ fail error))
            (< pass passed-minimum-threshold))
      (p/reject! running true)
      (p/resolve! running true))))