#kaocha
2019-04-11
rickmoynihan 08:04:03

I’d suggest putting (println (s/instrument)) in that fixture to check it gets called… IIRC instrument will return all the vars it finds for instrumentation… so check it has loaded the fdefs as you think
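For reference, a minimal sketch of that kind of fixture, assuming the `s/` alias above points at clojure.spec.test.alpha (aliased as `st` here); the namespace and fixture names are made up:

(ns my.project-test
  (:require [clojure.spec.test.alpha :as st]
            [clojure.test :refer [use-fixtures]]))

;; Instrument every fdef'd var once, print the symbols that actually got
;; instrumented (st/instrument returns them), and unstrument afterwards.
(defn instrument-fixture [f]
  (println (st/instrument))
  (f)
  (st/unstrument))

(use-fixtures :once instrument-fixture)

An empty collection printed here would mean the fdefs weren't loaded by the time the fixture ran.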

akiel 12:04:22

Thanks. I did this already. Do you successfully use instrumentation with kaocha?

rickmoynihan 13:04:58

Not tried yet… kaocha is on a different project right now. Are you sure you can use instrument in that manner — with s/fspec? Not got the docs in front of me, but whenever I’ve done it I’ve just used an s/fdef at the top level, and then just called instrument.

akiel 14:04:29

My fdef for compile is:

(s/fdef compile
  :args (s/cat :context ::compile-context :expression :elm/expression)
  :ret :life/expression)
I use that for development, so I can be sure that the :context is always a production-like ::compile-context. In unit tests, however, I don’t use a real production context. That’s why I override the compile spec with one that allows any? for the context.
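A hedged sketch of that override, assuming `compile` lives in a namespace aliased here as `c` and that the :elm/expression and :life/expression specs are already registered; the test namespace name is made up:

(ns my.project.compile-test
  (:require [clojure.spec.alpha :as s]
            [clojure.spec.test.alpha :as st]
            [clojure.test :refer [use-fixtures]]
            [my.project.compile :as c]))

;; Looser fdef for unit tests: any? for the context instead of the
;; production ::compile-context. Registering it here replaces the
;; production fdef in spec's global registry for the same symbol.
(s/fdef c/compile
  :args (s/cat :context any? :expression :elm/expression)
  :ret :life/expression)

(use-fixtures :once
  (fn [f]
    (println (st/instrument `c/compile))
    (f)
    (st/unstrument `c/compile)))

Because the override and the fixture sit in the same namespace, the looser spec is registered before instrument is called.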

claynon 13:04:22

I'm starting to use kaocha and I'm experiencing some problems with watch mode. When I change a namespace it correctly detects all the namespaces that use the changed one, but it reruns all the tests again. My kaocha setup is pretty simple and I tried to simplify the workflow as much as possible to find the problem, but I didn't find the culprit. Does anybody have an idea of what I should check?
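For reference, the most minimal kaocha setup looks roughly like this; these lines are just the stock starting point from the kaocha docs, not claynon's actual config, and the exact invocation depends on how the runner is wired up:

;; tests.edn
#kaocha/v1 {}

;; then run the watcher, e.g.
;; bin/kaocha --watch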

timgilbert 18:04:12

I've seen behavior like this too: the tests are re-run, but new code from my namespaces doesn't actually get reloaded in the VM. I've been assuming it's a problem with some of my fixtures hanging on to old closures somewhere.

plexus 08:04:40

> When I change a namespace it detects correctly all the namespaces that use the changed one, but it reruns all tests again
Can you elaborate on what you're expecting and what's happening instead? Kaocha will re-run your full test suite when it detects a change. It sounds like you expect it to only run the affected tests, but that's not what it (currently) does. What it does do is first run the failed tests, so if some tests failed during the previous run, it will keep re-trying just those until they pass again. Only then will it re-run the full suite.

claynon 08:04:36

> it sounds like you expect it to only run the affected tests, but that's not what it (currently) does
I was assuming it would only run the tests that were affected. Thanks for clarifying.
> first run the failed tests
Very helpful to know that, thanks 🙂