2020-10-26
Channels
- # announcements (7)
- # aws (1)
- # babashka (15)
- # beginners (144)
- # calva (8)
- # chlorine-clover (15)
- # clara (4)
- # clojure (65)
- # clojure-europe (131)
- # clojure-france (1)
- # clojure-nl (6)
- # clojure-nlp (3)
- # clojure-spec (10)
- # clojure-uk (52)
- # clojuredesign-podcast (2)
- # clojurescript (28)
- # cryogen (1)
- # datomic (17)
- # events (2)
- # figwheel-main (2)
- # fulcro (8)
- # hugsql (2)
- # jackdaw (4)
- # jobs (1)
- # leiningen (8)
- # lumo (1)
- # malli (4)
- # off-topic (23)
- # parinfer (3)
- # pathom (3)
- # pedestal (5)
- # re-frame (9)
- # reagent (26)
- # reitit (13)
- # reveal (25)
- # shadow-cljs (45)
- # spacemacs (7)
- # sql (7)
- # tools-deps (40)
- # vrac (2)
- # xtdb (22)
Ruby/Rake/Jekyll has finally caused me too much pain and I'm switching to Cryogen -- any hints/tips from others who've made the switch?
For whatever reason `bundle exec rake ...` just stopped working and I can't seem to get a working Ruby install on macOS 10.12 -- and I hadn't touched it: it just broke. I've had so many problems trying to keep Ruby versions running over the years... I don't know how anyone works with that tech! Bah!
I have been a happy user of Cryogen. The only place you can trip yourself up is in choosing between several options for how/where to host. GitHub Pages has a confusing model where a couple of similar but different options end up with the same results. Not a Cryogen issue per se, but as far as the full effect of having a blog is concerned, it was my major pain point.
@U0DJ4T5U1 Can you elaborate? I've been hosting http://corfield.org on GitHub pages for ages (the http://seancorfield.github.io repo).
@U11EL3P9U `jbake` is what http://clojure.org uses.
@U04V70XH6 If you have something working then there isn't anything to do. I ran into issues which are nicely documented at http://cryogenweb.org/docs/deploying-to-github-pages.html. > GitHub provides two basic types of hosting: user/organization pages and project pages. I started with one and tried to finish with the steps for the other, because when I set things up originally the other option didn't exist.
I'm interested in some opinions from people who take testing seriously, as I've been faced with a thought-problem in that domain recently. I'm writing a test-runner and became interested in the idea of implementing dependency tracking. The gist of the idea goes like this:
> if test A depends upon, but is not itself testing, the functionality tested in test B, the test-runner shouldn't waste its time running A if B fails, because there's no point drawing conclusions from A while one of its dependencies is already broken.
I thought this was pretty clever, but I was discouraged by a coworker, who thought it was a waste of time, and I realized that its utility was predicated on a philosophy of unit-testing which was perhaps more specific to me than I realized. It seems to me "clean" practice to write unit-tests which isolate functionality as strictly as possible, i.e. the success of a unit-test should depend as little as possible on factors outside the implementation of the feature being tested. I can understand the contrary position:
> Who cares how specific your unit tests are, as long as you test something?
This viewpoint seems practical in light of the age-old fallacy of the unit-tester: unit-testing should never be assumed to guarantee the correctness of the code, whose surface is generally too large for engineers to maintain tests against. The work of writing exceptionally isolated tests naturally goes against this grain, since the smaller the scope of your unit-tests, the more work it takes to test comprehensively. I think it checks out that a useful notion of dependency is predicated on that specificity. From the contrary position, then, saving time and computing power on tests downstream from failing tests is itself a waste of effort, because "downstream" is a moot concept. I'm desperately interested in the thoughts and opinions of the programming public, and the experience and practicality of you clojurians in particular. The easiest question I'd ask, then, is: do you see any value or pitfall in the proposed functionality?
In the service I'm writing, that's manually specified.
And possibly not even in 1:1 correspondence to code per se. If a test-set tests some feature, and another test depends upon the proper functionality of that feature, then that flies, too.
I should probably emphasize that the principal environment I intended this for was test-driven development: making changes to code and re-running relevant tests on the fly.
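For concreteness, here is a minimal sketch of what that could look like on top of clojure.test, assuming dependencies are declared as metadata on the test vars -- the `::depends-on` key and the `run-ordered` helper are illustrative assumptions, not part of clojure.test or any existing runner:

```clojure
(ns example.dep-runner
  (:require [clojure.test :as t]))

;; Illustrative only: ::depends-on is a made-up metadata key naming the
;; tests a given test relies on; clojure.test knows nothing about it.
(t/deftest parse-config-test
  (t/is (= {:port 8080} {:port 8080})))

(t/deftest ^{::depends-on #{'parse-config-test}} start-server-test
  (t/is true))

(defn- run-single
  "Runs one test var in isolation and returns its pass/fail/error counters."
  [v]
  (binding [t/*report-counters* (ref t/*initial-report-counters*)]
    (t/test-vars [v])
    @t/*report-counters*))

(defn run-ordered
  "Runs test vars in the given (dependency) order, skipping any var whose
   declared dependencies have already failed."
  [test-vars]
  (reduce
   (fn [failed v]
     (let [{test-name :name deps ::depends-on} (meta v)]
       (if (some failed deps)
         (do (println "Skipping" test-name "because an upstream test failed")
             failed)
         (let [{:keys [fail error]} (run-single v)]
           (if (pos? (+ fail error))
             (conj failed test-name)
             failed)))))
   #{}
   test-vars))

(comment
  ;; if parse-config-test fails, start-server-test is skipped
  (run-ordered [#'parse-config-test #'start-server-test]))
```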
I don't have enough experience to give an informed opinion on the subject, but this talk reoriented how I think about tests; maybe you'll find it useful, too: https://youtu.be/tWn8RA_DEic
"I thought this was pretty clever,Ā but I was discouraged by a coworker, who thought it was a waste of time, and I realized that the utility of it was predicated on a philosophy of unit-testing which was perhaps more specific to me than I realized." As someone sharing your budget I'd be worried that the whole endeavour of writing a test-runner might be a waste of time.
Assuming it's for good reason, I'd argue that just terminating the test run on any error would give you a lot of the benefits.
@UK0810AQ2 This was a good talk to get a dialogue going at work.
@U016JEMAL4R I'm glad to hear, that was my hope. My 2c on the subject: In physics and engineering a good litmus test of a theory is testing it at its edges (usually 0 and infinity). Let's try to do the same with tests: ā¢ zero unit tests, all end-to-end tests. You test only business features and flows. You know exactly which feature failed. You have no idea why or were, though. ā¢ only unit tests, zero e2e tests: You know exactly which test failed and why, but you have no idea of its impact on any feature. It's a sort of uncertainty principle applied to testing š Ideally, you'd want both: If you could label tests with all the features that are relevant to them, you could test by feature. Combine that with the other dimension of dependency detection, and you'll start from the small units, allowing you to say: feature X fails at point A.
Depending on what I'm working on, there are times when I don't want to be overwhelmed by 50 failing tests, in which case I'd rather address failing test cases one at a time. At other times (especially if I'm not testing in "watch" mode), I do want to know the full scope of my failing tests. Depending on what your actual users need, there's benefit to providing an option to short-circuit on any failure.
Those were my thoughts, too.