Jakub Holý (HolyJak) 19:04:49

@tony.kay any tips for using fulcro-spec effectively in IntelliJ or on the CLI? My main issue is that (assertions ... (set ..) => #{...}) just prints both sets instead of telling me what the actual difference is. I believe I saw somewhere that the humane output library could improve that, but I do not see it mentioned anywhere. (Perhaps it was in another library.) So what do you do? (I use fulcro-spec.reporters.terminal/fulcro-report)


This is the Cursive REPL command I use (it requires setting the -Dtest JVM option on the CLI, which is a protection to keep me from running tests in the wrong REPL).

🙏 1

Then I add a keyboard shortcut for that (and for focused)


See main thread for other comments


Full disclosure: I wrote this for fulcro-spec as an experiment, and it is not fully baked for serious usage, but it might help or (at least) inspire you. It is also simple and small enough that you can rewrite any checker to better suit your needs or preferences. Still, I caution any users to understand what they are getting into; this might not be the best solution if you aren't willing to use it judiciously and thoughtfully. Specifically for your case:


Some of the problem also comes specifically from testing sets: what you are trying to test may be different from plain equality, which is why that function is called subset. But you could easily implement any assertion and reporting you want.
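For illustration, here is a minimal plain clojure.test sketch of asserting subset membership rather than full set equality. It uses only clojure.set and clojure.test, not the experimental checkers mentioned above, and the data is made up:

```clojure
(ns example.subset-test
  (:require [clojure.set :as set]
            [clojure.test :refer [deftest is]]))

(deftest expected-elements-are-present
  (let [actual   #{:a :b :c :extra}
        expected #{:a :b}]
    ;; passes as long as every expected element is present,
    ;; ignoring any extras in actual
    (is (set/subset? expected actual))
    ;; on failure, the difference names exactly the missing elements
    (is (empty? (set/difference expected actual)))))
```

The second assertion is redundant with the first when both pass, but it produces a much more useful failure: the report shows only the missing elements rather than both whole sets.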


Again, I caution any potential user that this code is experimental and not likely to be seriously maintained.


Write better assertions 😄


We’ve tried various things through the life of that lib to get good diffs. The thing is that that macro just outputs clojure.test is statements.


so, the reporter is really the answer


but my take is that if you’re interested in things being in a set, and the set is big, you may be doing something that is a bit overboard in your test


for example, not narrowing the focus of the setup to elide crap you don’t care about


but you could also write your assertions like (set/difference a b) => #{} and then assert that the opposite, (set/difference b a), is also empty
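A sketch of that two-way difference pattern using fulcro-spec's specification/assertions macros (the sets here are made-up example data):

```clojure
(ns example.diff-test
  (:require [clojure.set :as set]
            [fulcro-spec.core :refer [specification assertions]]))

(specification "set equality, reported as differences"
  (let [a #{1 2 3}
        b #{1 2 4}]
    (assertions
      "a contains nothing that b lacks"
      (set/difference a b) => #{}
      "b contains nothing that a lacks"
      (set/difference b a) => #{})))
```

When these assertions fail, the report shows just the offending elements (here #{3} and #{4}) instead of dumping both full sets, which is exactly the "actual difference" output the original question was after.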


So, let me summarize “what I do”:
• Make sure the test setup is as specific as possible to eliminate noise.
• Split data structures into the specific things I care about. For example, do I care that a particular element is in the set? If so, I use contains. Minimal output is ideal, with a behavioral description. This leads to more assertions, but also GREATLY improves readability on failure and comprehension of what is wrong. Diff, to me, turns out to be mostly an antipattern in good tests. It is a crutch that leads you to write lazy tests that are not that good. Sometimes the data IS big…but chances are you care about each detail…there are cases where diff is good/nice, so this is not a hard rule.
• Make the assertion naturally lead to small output. It usually isn’t that hard to do. During test development I might assert a large thing is empty, see that it is right, and then make specific assertions about the things I actually care about in the test in question. If you’re writing such a test, it is likely integration-level, and you might consider testing the separate bits in smaller units. A failure that says “REST API request is right” is terribly unhelpful when regressions happen. You see the smoke, but have no idea where the fire is.
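A sketch of the “split into specific assertions” advice above, again using fulcro-spec's specification/assertions macros; the order map and its keys are hypothetical, standing in for whatever the code under test returns:

```clojure
(ns example.focused-test
  (:require [fulcro-spec.core :refer [specification assertions]]))

(specification "order processing"
  ;; hypothetical result; in a real test this comes from the code under test
  (let [order {:id 42 :line-items 3 :total 99.95 :tags #{:rush :gift}}]
    (assertions
      "charges the expected total"
      (:total order) => 99.95
      "marks the order as a rush order"
      (contains? (:tags order) :rush) => true
      "counts every line item"
      (:line-items order) => 3)))
```

Each assertion has a behavioral description and fails with one small, named value, rather than a diff of the whole order map, so a failure tells you immediately which behavior regressed.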

👀 1