This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-12-28
Channels
- # announcements (1)
- # babashka (28)
- # beginners (228)
- # cider (9)
- # clara (6)
- # clojure (66)
- # clojure-uk (13)
- # clojuredesign-podcast (5)
- # clojurescript (9)
- # core-typed (1)
- # cursive (1)
- # duct (2)
- # emacs (8)
- # fulcro (18)
- # graalvm (11)
- # hyperfiddle (1)
- # malli (2)
- # off-topic (33)
- # re-frame (9)
- # reagent (3)
- # reitit (15)
- # shadow-cljs (6)
- # tools-deps (1)
Is there ever a need to mock anything that is not stateful when it comes to testing?
@srijayanth That's a good question... If a function being called is pure -- with no side-effects at all -- then calling it can never cause anything observable so there's no reason to mock it that I can think of...
I can’t find an example, but this implies that mocking really is not necessary unless a function deals with state
"state" includes any input, any output, any randomness...
Right. State/Effect
If the side-effect-free function takes a ridiculously long time to execute on every call, then mocking it with something that returns a dataset from e.g. a plain file is useful.
@suomi.esko - great point
Of course it's a trade-off, I can't really think of anything specific which would fall into that category 😛
The talk is good, but I think I’d like a more succinct way of putting this across to people
the TDD crowd doesn’t like hearing that one doesn’t quite need mocks or that mocks/stubs are a cottage industry based on stateful programming
frameworks around it rather
In my opinion, anything that can be stood up with just a git clone doesn't need mocking
@srijayanth There are two schools of TDD: one is heavy on mocks, the other one is not.
Yeah. I mean, I’ve done that before in my life. I recently saw a mocked example in JS of an async call with the callbacks having assertions in them. I think that’s really taking it too far
Confuses the hell out of me. I’m deficient that way
It wasn’t wrong and the assertion was the right thing to do in the callback, but still, just looking at it sent my head spinning
What are the gotchas around with-redefs and with-redefs-fn? I am guessing running tests in parallel might cause issues….?
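(For reference, a minimal sketch of the usual gotcha — with-redefs changes a var's *root* binding for the dynamic extent of the body, so the change is visible to all threads, and parallel tests can indeed step on each other. The fetch-rate/price-with-tax fns here are made up for illustration.)

```clojure
(require '[clojure.test :refer [deftest is]])

;; hypothetical side-effecting input we want to control in a test
(defn fetch-rate []
  (throw (ex-info "imagine an HTTP call here" {})))

(defn price-with-tax [price]
  (* price (+ 1 (fetch-rate))))

(deftest price-test
  ;; with-redefs swaps the var's root binding for the extent of the
  ;; body -- it is process-global, not thread-local, so tests running
  ;; in parallel will observe each other's redefinitions
  (with-redefs [fetch-rate (constantly 0.2)]
    (is (= 1.2 (price-with-tax 1.0)))))
```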
Generally, inputs will be mocked in order to control them, so if you have side-effecting input, you'll need to mock those. And outputs are mocked in order to assert them. But if ins and outs are the arguments and return value, you don't need a mock. Just call the fn with the test inputs and assert its returned value.
@didibus - I think that’s perfectly fine to mock and stub when that is the use case
In my opinion, there isn't really any other use case. When code is very imperative, the outputs are all over the place. So you need to mock a lot of things. And one test can affect the state of another. It gets quite hairy
Now, the scope of what to test can be debated. Some people test only public things, and impl fns would be tested indirectly through its use from the public fns.
Agreed.
Others test every one of them in isolation; that requires mocking the return of depended-on fns as well, so more mocking is needed for that style of testing
I'll tell a little story about the "dangers" of mocking...
Sure.
Back when I worked at Vodafone in the UK -- a cell phone company for those who don't know -- I worked on the team that built the world's first pay-as-you-go system. We built it in two halves: the billing system and the actual cell connectivity part, and we built mocks for each as we went so we could develop and test each half completely independently. All good so far.
Then we shipped the system into the QA team and soon they came back with the "all green" results. We were a bit surprised they found no bugs at all in testing.
Until we realized that we had accidentally shipped the system with the mocks enabled...
So they tested the mocks and of course the mocks passed all the tests -- by design.
Wow. The mocks were baked into source? Some sort of config toggle?
It was a complex embedded system. We just packaged it incorrectly when handing it off to QA.
Ah ok.
I’d guess the challenge was more to do with the packaging than the mocking itself
But I see the point
The lesson is: if you use mocks, be careful you don't end up just testing the mocks rather than your real code.
Yeah, absolutely
Ya, I feel every mock is just one more thing you don't test. So I'd rather minimize what I mock
Just found out about this: https://github.com/clojure-expectations/clojure-test — it makes me much more likely to try the added conveniences of Expectations, which I never bothered with before, because of how well integrated clojure.test is with everything.
> The lesson is: if you use mocks, be careful you don't end up just testing the mocks rather than your real code.
This. A similar principle I advocate is "don't test the compiler". Sometimes a given defn just contains a single if, so testing the defn would really test clojure.core/if. For those cases, instrumenting the defn with spec can give a greater ROI
are mocks typically used with generative property testing as well? e.g. in a code base that predominantly relies on gen. prop. rather than unit tests
Spec’s instrument supports automatic mocks based on specs
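A sketch of what that looks like, using the :stub option of clojure.spec.test.alpha/instrument (lookup-user and its specs are made up for illustration; generating stub return values requires test.check on the classpath):

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.test.alpha :as stest])

(s/def ::name string?)

(defn lookup-user [id]
  ;; imagine a real database call here
  (throw (ex-info "db unavailable" {:id id})))

(s/fdef lookup-user
  :args (s/cat :id pos-int?)
  :ret  (s/keys :req-un [::name]))

;; :stub replaces the implementation entirely with one that checks
;; the :args spec and returns a value generated from the :ret spec
(stest/instrument `lookup-user {:stub #{`lookup-user}})

;; returns a map shaped like {:name "..."}, generated from the spec
(lookup-user 42)
```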
There is some regret over even adding defn-, but we don't take things away... At one point all of the current metadata niceties didn't exist (used to be #^{:private true} some may recall) and defn- seemed worth doing I presume (pre-dates my involvement in core). But then that was all simplified down to just ^:private and it's preferred to compose the pieces rather than copy N things. There used to be a slew of these in the old clojure-contrib (https://github.com/clojure/clojure-contrib/blob/master/modules/def/src/main/clojure/clojure/contrib/def.clj - but no def- !).
If you look at the frequency of need, private on def is far less common than private on defn
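For illustration, the shorthand next to the composed metadata form it was simplified down to (names are made up):

```clojure
;; defn- is just shorthand for a private defn:
(defn- helper [x] (inc x))

;; the ^:private metadata composes with any def-like form,
;; which is why no def- was ever added:
(def ^:private secret 42)
(defn ^:private helper2 [x] (dec x))
```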
I'd use it in production maybe as an MVP. As soon as you have a serious production app, it's more worthwhile to do custom instrumentation, e.g. logging instead of failing, maybe asynchronously to minimize latency impact… Or double down on unit testing / QA so you don't have to rely on production tests
and how is it compared with the :pre and :post hook of a function? Both do sanity checks.
https://clojure.org/guides/spec#_instrumentation > It is not recommended to use instrumentation in production due to the overhead involved with checking args specs.
Good question!
* You cannot add :pre/:post to 3rd-party defns. In practice I doubt people instrument such code, but eventually it can be very convenient.
* Instrumentation is more flexible. With :pre/:post you can only choose whether the AssertionError is thrown. With instrumentation it seems more trivial to build custom tooling (like logging instead of throwing, or only instrumenting a certain set of functions).
* Instrumentation has the burden of having to effectively activate it in all code that was intended to be spec'ed. This is not a trivial problem, which is why currently it's only solved in e.g. Orchestra.
I do :pre/:post, but instrumentation seems a good choice provided you take the time to understand it and set it up correctly. There will be quirks.
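For comparison, a sketch of the two styles side by side (safe-div is a hypothetical example; note that plain clojure.spec.test.alpha/instrument checks only the :args spec — checking :ret as well is what Orchestra adds):

```clojure
(require '[clojure.spec.alpha :as s])

;; :pre/:post live inside the fn itself and run on every call
;; (while *assert* is true), throwing AssertionError on failure:
(defn safe-div [a b]
  {:pre  [(number? a) (not (zero? b))]
   :post [(number? %)]}
  (/ a b))

;; the spec equivalent is declared separately from the fn and is
;; only enforced once (clojure.spec.test.alpha/instrument `safe-div)
;; has been called:
(s/fdef safe-div
  :args (s/cat :a number? :b (s/and number? (complement zero?)))
  :ret  number?)
```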
Also there are things you cannot fully spec, like anonymous fns (since those fns don't relate to a var that one can instrument)
Everything else, it is assumed that at test time you have turned instrumentation on and performed generative testing, and thus you know that it all works
A pattern I've seen a lot is
(when-let [ed (s/explain-data ...)]
  (throw (ex-info (s/explain-str ...) ed)))
i can say that we are definitely using instrument (with orchestra + expound) in production
in practice i would say that ~ 50% of our functions are running with instrumentation in production
Make the ring handlers depend on an env var, or on a Java property
Java properties can be nice because Lein can manage them: :profiles {:production {:jvm-opts ["-Dmyapp.ssl=true"]}}
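Reading the toggle back at runtime might look like this (the myapp.ssl property name is just the illustrative one from the snippet above, and MYAPP_SSL is equally made up):

```clojure
(defn ssl-enabled? []
  ;; System/getProperty takes a default value for the unset case
  (Boolean/parseBoolean (System/getProperty "myapp.ssl" "false")))

;; or, reading an environment variable instead:
(defn ssl-enabled-env? []
  (= "true" (System/getenv "MYAPP_SSL")))
```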
A config lib is likely cleaner. I like/use https://github.com/juxt/aero