
OT: I see Fulcro 3 is using ghostwheel's >defn et al. Any chance of a short field report on how useful it has been in practice? Is it worth introducing into new / existing projects? Has it been a mental overhead vs using plain defn (and maybe fdef / test.check explicitly)? I see about 20% of functions are >defn and the rest still defn; is that mainly due to writing only new code with >defn, or is there a heuristic for what gets specced?


Tony and I are working on a SaaS product using fulcro and ghostwheel. We added it after building a substantial portion of the project. I would say it’s worth it.


fdef does not check specs on return values, right? We feel that is useful, though I’m sure Cognitect has their reasons.
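A minimal sketch of the default clojure.spec behavior being discussed: `instrument` checks the `:args` spec on every call, but silently ignores the `:ret` spec (that one is only exercised by generative checking via `st/check`). The `add` function here is a hypothetical example, deliberately buggy so the unchecked return shows up.

```clojure
(ns example.fdef-demo
  (:require [clojure.spec.alpha :as s]
            [clojure.spec.test.alpha :as st]))

;; Deliberately wrong: returns a string instead of an int.
(defn add [a b] (str a b))

(s/fdef add
  :args (s/cat :a int? :b int?)
  :ret  int?)

(st/instrument `add)

(add 1 2)     ;; => "12" -- the :ret spec violation goes unnoticed
;; (add 1 "x") throws, because the :args spec IS checked
```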


IMO to get your money’s worth you need to have a thorough and concise behavior driven test suite. Then when you change code you can see very obviously where your assumptions are inadvertently being broken. It starts to behave like a “type system” then.

👍 4

Because of this I write most of my business logic (and tests) in CLJC, even if it’s primarily used in CLJS, just because Clojure has better IDE integrations.

👍 4

Hope that helps


Thanks @U09FEH8GN - that helps. True, fdef by default only instruments arguments, not returns, but that can also be achieved via orchestra/defn-spec, provisdom/defn-spec and others. The other benefits are (1) automated test.check tests and (2) tracing. I was curious if you were also using ghostwheel for (1) and (2) (and how that compares to just writing defspecs yourself and maybe using tools like debux)
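For context, a sketch of the orchestra approach mentioned above: `defn-spec` co-locates the return spec (after the name) and per-argument specs, and orchestra's `instrument` checks both inputs and outputs, unlike stock clojure.spec. The function here is a made-up example.

```clojure
(ns example.orchestra-demo
  (:require [orchestra.core :refer [defn-spec]]
            [orchestra.spec.test :as st]))

(defn-spec add int?      ; int? is the return spec
  [a int?, b int?]       ; arg specs sit next to each argument
  (+ a b))

(st/instrument)          ; instruments :args, :ret, and :fn specs
```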


Oh also you’ll want to use something like expound to format the error messages nicely, the default error reporting is really hard to parse
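The usual way to wire expound in, per its README, is to swap it in as spec's error printer; a sketch (CLJ and CLJS variants differ only in how the dynamic var is rebound):

```clojure
(require '[clojure.spec.alpha :as s]
         '[expound.alpha :as expound])

;; CLJ: rebind the printer globally
(alter-var-root #'s/*explain-out* (constantly expound/printer))

;; CLJS (or per-thread in CLJ): set!/binding work too
;; (set! s/*explain-out* expound/printer)
```

After this, instrumentation failures print expound's human-readable report instead of the raw `explain` output.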

👍 4

I was actually thinking of the reverse case that you described: is it worth writing business logic in CLJC just so we could use ghostwheel tracing in CLJS, even if we primarily intend to run it on the JVM (of course then we're limited to things that have no JVM interop)


Tony and I were talking about (1) yesterday, so far we haven’t really found a use case for it in our SaaS product. To us it seems like automated testing is better suited for certain algorithmic tasks especially when you have a known, but slow, reference implementation to use. And encoding/decoding functions.


I've primarily been using expound with orchestra/defn-spec and more recently provisdom/defn-spec for instrumenting inputs and outputs. It's been useful, but sometimes I struggle with the amount of "additional syntax" that appears around functions.


getting automated tests for a typical SaaS application might be possible, but getting the generated data right seems like such a big investment, since most of our functions aren’t meant to operate on generic data; instead they operate on our model data


It's definitely useful on primary API functions, but I can't find myself wanting to spec every function I write


yeah i like how concise ghostwheel is


makes it easy to spec most functions, because why not?
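To illustrate the conciseness point: ghostwheel's `>defn` puts the whole spec in a single "gspec" vector right inside the function, with `=>` separating argument specs from the return spec. A sketch with a made-up function:

```clojure
(ns example.gw-demo
  (:require [ghostwheel.core :as g :refer [>defn =>]]))

;; [int? int? => int?] co-locates arg specs and the return spec.
(>defn ranged-rand
  "Returns a random int in [start, end)."
  [start end]
  [int? int? => int?]
  (+ start (long (rand (- end start)))))
```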


also by tracing you mean measuring performance, right?


no, by tracing I mean debugging forms


oh like figuring out where a function spec was violated?


oh i didn’t know of that, that’s really cool


thanks for sharing


in GW that only works in CLJS - I was hoping you two could tell me more of how useful it is in practice ^_^


@U09FEH8GN RE: generating domain-specific data we've been using to reduce the boilerplate and still allow us to write property tests


But again, even in this case... you need some amount of test setup / configuration / generator customization - so it sounds like something you would not use directly from ghostwheel fn specs


@U05476190 I have been using ghostwheel in a recent project and I’m loving it. You can set it up so it instruments both inputs and outputs. It’s very nice that it integrates humanized errors, and it works well in Clojure too. I’m using it in both cljs and clj; great experience IMO

👍 16

@U05476190 My initial use of gw was to get the co-located specs and more concise syntax, but especially the ability to do instrument/outstrument with a global config instead of using the instrument function manually. The addition of >def also makes it possible to more consistently and easily elide specs from a production cljs build (where they hurt the size of the artifact produced). You can use something like (when goog.DEBUG ...), but that doesn’t turn on/off with the ghostwheel settings, so it is useless for CLJ, and then you end up with more complex expressions in the when, etc. I’ve not used the tracing because, well, I just haven’t felt the need. I’m sure I’d appreciate it if I used it, and will try it out soon, but the other benefits are plenty enough for me to justify it in my libraries.
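A sketch of that >def point, with a hypothetical spec: `>def` registers a spec just like `s/def` during development, but ghostwheel's compiler configuration can elide it wholesale from a production build, replacing the hand-rolled CLJS-only guard.

```clojure
(ns example.elide-demo
  (:require [clojure.spec.alpha :as s]
            [ghostwheel.core :refer [>def]]))

;; Elidable via ghostwheel config, in both CLJ and CLJS builds:
(>def ::user-id pos-int?)

;; Roughly the manual, CLJS-only approach it replaces:
;; (when goog.DEBUG
;;   (s/def ::user-id pos-int?))
```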


The benefits of having instrumented functions during development with good specs have been dramatically improving my dev experience, though I admit I’m still coming to grips with best practices.


And as @U09FEH8GN said: Make sure you get the logging set up right for clj and cljs so the spec failures are good. See the new logging helpers in fulcro 3 for cljs…that really made a world of difference for me.


Eliding the specs from the CLJS production build is something I had not considered as a benefit of the >def approach.


Thanks @U09FEH8GN @U066U8JQJ and @U0CKQ19AQ for the field reports!