This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-11-16
Channels
- # announcements (62)
- # babashka (12)
- # babashka-sci-dev (73)
- # beginners (16)
- # biff (10)
- # calva (65)
- # cider (13)
- # clerk (8)
- # clojure (31)
- # clojure-europe (16)
- # clojure-nl (1)
- # clojure-norway (19)
- # clojure-spec (24)
- # clojure-uk (5)
- # clojuredesign-podcast (18)
- # clojurescript (18)
- # dev-tooling (2)
- # emacs (30)
- # etaoin (4)
- # gratitude (3)
- # hyperfiddle (20)
- # integrant (2)
- # jobs (4)
- # kaocha (7)
- # malli (1)
- # observability (8)
- # off-topic (11)
- # pathom (12)
- # podcasts-discuss (7)
- # portal (12)
- # quil (3)
- # re-frame (6)
- # releases (1)
- # sql (22)
- # squint (5)
- # testing (79)
- # tools-deps (3)
- # xtdb (20)
Xiana framework released a new version with swagger-ui available. https://github.com/Flexiana/framework
Hi everybody! The flow-storm team (https://github.com/flow-storm) is happy to announce the first release of Clofidence, a test coverage tool for Clojure! https://github.com/flow-storm/clofidence The readme contains setup instructions and more. As an example, you can find in the thread the test coverage report for the ClojureScript compiler tests for version 1.11.60, if you want to get a sense of what Clofidence is capable of. As usual, feedback is welcome.
What are the chances to emit reports in cobertura's format? https://cobertura.github.io/cobertura/
I never used that tool, is there a place to see what that format is about?
For some arcane reason gitlab CI can process coverage reports in that format, and only that format
https://docs.gitlab.com/ee/ci/yaml/artifacts_reports.html#artifactsreportscoverage_report
do you have a link to an example of a Cobertura XML file? couldn't find one
Interesting! Two questions: 1. How does this compare to cloverage? 2. Is there a way to use this with the kaocha test runner?
I'm looking for examples, I remember there was a spec somewhere but obviously I can't find it now
@U08BJGV6E For 1. tbh I can't compare it with Cloverage because when I tried it on my projects it crashed with methods being too large to instrument, and I couldn't get any reports. So I'm not sure what the Cloverage reports look like. If you have some at hand to share, we can maybe compare. For 2. If you can run the tests by calling a function it should work. I guess the kaocha runner is just a function call with maybe some parameters; I haven't tried it, but there shouldn't be any problems.
Thank you, that's interesting - one problem we're having with Cloverage is exactly the method too large one
:exec-args {:report-name "my-app"
:test-fn kaocha.runner/exec-fn
:test-fn-args [{}]}
for anyone wondering
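To put that snippet in context, here is a sketch of what a complete deps.edn alias around it might look like. Only the :exec-args map comes from the message above; the alias name, Maven coordinates, and :exec-fn target are placeholders, so check the Clofidence README for the real values:

```clojure
;; Hypothetical deps.edn sketch; only :exec-args is from the thread above.
;; The coordinates and the :exec-fn symbol are placeholders, not verified.
{:aliases
 {:clofidence
  {:extra-deps {com.github.flow-storm/clofidence {:mvn/version "RELEASE"}}
   :exec-fn    clofidence.main/run ; placeholder entry point
   :exec-args  {:report-name  "my-app"
                :test-fn      kaocha.runner/exec-fn
                :test-fn-args [{}]}}}}
```

With something like this in place, the report would be generated via `clj -X:clofidence`.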
Edit: doesn't work as exec-fn does a System/exit at the end
Clofidence can also hit the same issue, like any other instrumentation system, since there is a limit on how big a JVM method can be. But since it is instrumenting at the compiler level, I think the extra bytecode is as tight as it can be. Also, if it hits a method-too-large exception it will notify on the console and continue with the function uninstrumented
I just managed to hit that yes. With cloverage I have around 16 such methods, with clofidence I got notified about 2
makes sense
@U08BJGV6E is there a way to make Cloverage work in those situations? last time I tried it was just crashing
For us it logs non-fatal errors to the console, somewhat less gracefully than Clofidence, but it goes on and produces a report. It doesn't, however, state (unlike Clofidence) how it proceeds with the un-instrumentable form, so that's a bit of an unknown to us at the moment. But test results don't change between no instrumentation, instrumentation with said errors, and instrumentation with exclusions set up for the difficult forms.
Right now we maintain a list of exclusions just so we are in better control of what happens
Yeah, so I was able to run Cloverage on one of my projects, and since you asked for differences, one I see is that Cloverage seems to be line based while Clofidence is expression based; maybe that is why. Also I haven't done any perf passes yet
I'm not sure how Cloverage reports it when you have a bunch of expressions on the same line but only some of them were executed
But I don't know any of its internals so wouldn't be able to comment on the tech differences
but I mean like this
here you can see that the normalize-gensyms branch wasn't taken
oh, I see that it has split it into multiple lines, not sure if it always does that
Okay, looks like my first kaocha approach didn't work as exec-fn does a System/exit at the very end, so I end up with no coverage report 😛
same happened to me when I ran the ClojureScript tests, there was a System/exit as the last step, had to run all over again
probably need a kaocha plugin to make that work as kaocha likes to be on the outside https://github.com/lambdaisland/kaocha-cloverage/blob/main/src/kaocha/plugin/cloverage.clj
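One possible workaround for the System/exit problem, sketched under assumptions: kaocha's docs describe a REPL-oriented entry point, kaocha.repl/run-all, which returns a result map instead of exiting the JVM. The wrapper namespace below is made up for illustration, and this has not been verified against Clofidence:

```clojure
;; Sketch: wrap kaocha's REPL runner, which does not call System/exit.
;; my-app.coverage-runner is a made-up namespace for illustration.
(ns my-app.coverage-runner
  (:require [kaocha.repl]))

(defn run-tests
  "Run the whole kaocha suite in-process and return the result map,
  leaving the JVM alive so the coverage report can be written."
  [_opts]
  (kaocha.repl/run-all))
```

You would then point the config at it with something like :test-fn my-app.coverage-runner/run-tests and :test-fn-args [{}], instead of kaocha.runner/exec-fn.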
yeah, there are lots of things like that to work on, and there is also room for many features. Another nice thing is the entire project is like 200 LOC, plus another 200 I copied from FlowStorm to pprint the forms, which I'll be moving to a lib soon
I noticed that Clofidence uses a custom Clojure compiler underneath. How closely does that follow the official one? How clofident can I be that if I run tests through that, my system will work the same way when it is executed in production using the official compiler?
yeah, so ClojureStorm is tracking the official Clojure compiler, just adding some extra bytecode for tracing expressions, fn calls, etc. So it is the same compiler. Of course there could be bugs, but it has been in use for some time with FlowStorm
Also I'm not using it for running tests, I think the tests should be run with the official compiler, this is low risk only, just for dev stuff like debugging, coverage, etc
Hmm so in CI you recommend executing tests twice, once with coverage off, using the official compiler and once with coverage on?
I mean, that's your choice, but the same thing happens with Cloverage: it is replacing your code with an instrumented version, so how confident can you be that it doesn't change the code's behavior?
This bit is interesting:
Which forms are included in the report?
By default, only forms whose first symbol name is one of: defn, defn-, defmethod, extend-type, extend-protocol, deftype and defrecord.
If you have other types of forms like the ones defined by some macros, you can include them by using :extra-forms in the configuration parameters. It takes a set of symbols like :extra-forms #{defroutes my-macro}.
So the methods that were found too large by clofidence in my code are defined by a custom macro, which is somewhat surprising as I haven't specified :extra-forms
for the run. The custom macro expands to a couple of defns however, could this have something to do with it?
so, it is instrumenting all of them, just not including them in the report
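To make the situation being discussed concrete, here is a toy sketch of a custom macro that expands to a couple of defns; the macro name my-defn-pair is invented for illustration:

```clojure
;; Toy example: a custom macro whose expansion is a couple of defn forms.
;; my-defn-pair is a made-up name, not from the thread.
(defmacro my-defn-pair
  "Defines both `name` and `name*`, the latter delegating to the former."
  [name args & body]
  (let [starred (symbol (str name "*"))]
    `(do
       (defn ~name ~args ~@body)
       (defn ~starred ~args (~name ~@args)))))

;; This top-level form starts with my-defn-pair, not defn, so by default
;; it would not appear in the report unless added via
;; :extra-forms #{my-defn-pair} — even though the generated defns do get
;; instrumented.
(my-defn-pair add-one [x] (inc x))
```

This matches the explanation above: instrumentation happens on everything under the configured prefixes, while the defn/defmethod/etc. allowlist only controls which top-level forms show up in the report.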
Oh. There must be a reason for that. Also, is there a way to include everything instrumented in the report and perhaps have a blocklist instead of an allowlist?
currently there isn't but sure, it can be added pretty easily
this is so the report doesn't include things like deftest forms if you weren't able to skip them with the ns prefix instrumentation
maybe it is also slower because it is probably instrumenting the test forms too, unless you have all your tests under my-app.tests. so you can skip them
I normally have a namespace suffix for tests, I think that's more or less standard in Clojure
yeah, I should improve ClojureStorm ns instrumentation to filter by regex instead of by prefix as it is now, which would be much more powerful
if you have any ideas also, feel free to open issues or show up in #C03KZ3XT0CF to discuss them
I can open one about this filtering thing for sure just for the sake of documentation
thanks!!
@UK0810AQ2 sorry, what is add-test and how do you think it could be affected?
you mentioned:
this is so the report doesn't include things like deftest forms if you weren't able to skip them with the ns prefix instrumentation
So I was wondering about code in forms like add-test
or with-test
that add the test metadata to an existing var, and how they'd interact with the system
so, this will instrument all the code that matches the instrumentOnlyPrefixes and then run whatever your test-fn runs, and collect the coordinates that were hit per form. The thing you mention there is about what forms should be displayed in the final report. That is so you don't display forms like (deftest ...) which you are probably not interested in. If a form starts with (add-test ...) for example, it won't be displayed in the report, but you can add it using the :extra-forms key