#off-topic
2020-10-26
seancorfield02:10:57

Ruby/Rake/Jekyll has finally caused me too much pain and I'm switching to Cryogen -- any hints/tips from others who've made the switch?

seancorfield02:10:28

For whatever reason bundle exec rake ... just stopped working and I can't seem to get a working Ruby install on macOS 10.12 -- and I hadn't touched it: it just broke. I've had so many problems trying to keep Ruby versions running over the years... I don't know how anyone works with that tech! Bah!

dharrigan06:10:48

I've started to play with jbake, another static website/blog generator

dharrigan06:10:49

it's not too bad

Drew Verlee15:10:27

I have been a happy user of Cryogen. The only thing that can trip you up is choosing between several options for how/where to host. GitHub Pages has a confusing model where a couple of similar but different options end up with the same results. Not a Cryogen issue per se, but as far as the full effect of having a blog is concerned, it was my major pain point.

seancorfield15:10:29

@U0DJ4T5U1 Can you elaborate? I've been hosting http://corfield.org on GitHub pages for ages (the http://seancorfield.github.io repo).

dharrigan16:10:20

ah, so it does 🙂

Drew Verlee16:10:20

@U04V70XH6 If you have something working then there isn't anything to do. I ran into issues which are nicely documented at http://cryogenweb.org/docs/deploying-to-github-pages.html. > GitHub provides two basic types of hosting: user/organization pages and project pages. I started with one and tried to finish with the steps for the other, because when I set things up originally the other option didn't exist.

Gargarismo08:10:14

I'm interested in some opinions from people who take testing seriously, as I've been faced with a thought-problem in that domain recently. I'm writing a test-runner and became interested in the idea of implementing dependency tracking. The gist of the idea goes like this: > if test A depends upon, but is not testing, the functionality tested in test B, the test-runner shouldn't waste its time running A if B fails, because any conclusion drawn from A would be meaningless.

I thought this was pretty clever, but I was discouraged by a coworker, who thought it was a waste of time, and I realized that the utility of it was predicated on a philosophy of unit-testing which was perhaps more specific to me than I realized. It seems to me "clean" practice to write unit-tests which isolate functionality as strictly as possible, i.e. the success of a unit-test should depend as little as possible on factors outside the implementation of the feature being tested. I can understand the contrary position: > Who cares how specific your unit tests are, as long as you test something?

That viewpoint seems practical in the sense of the age-old fallacy of the unit-tester: unit-testing should never be assumed to guarantee the correctness of the code, whose surface is too large for engineers to maintain tests against in general. The work of writing exceptionally isolated tests naturally goes against this grain, since the smaller the scope of your unit-tests, the more work it takes to cover things comprehensively. I think it checks out that a useful notion of dependency is predicated on specificity. From that contrary position, then, saving time and computing power on tests downstream from failing tests is pointless, because "downstream" is a moot concept.

I'm desperately interested in the thoughts and opinions of the programming public, and the experience and practicality of you Clojurians in particular. The easiest question I'd ask, then, is: do you see any value or pitfall in the proposed functionality?
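
As a concrete illustration of the idea, here is a minimal Clojure sketch of a dependency-skipping runner. All names are made up for illustration; it assumes each test is a plain zero-argument function returning true/false and that dependencies are declared by hand, as described above.

```clojure
;; Tests are run in the given order (assumed to be dependency order).
;; :deps lists the ids of tests whose functionality this test relies on
;; but does not itself test.
(def tests
  [{:id :parse   :run (fn [] true)  :deps []}
   {:id :persist :run (fn [] false) :deps [:parse]}
   {:id :report  :run (fn [] true)  :deps [:persist]}])

(defn run-tests
  "Runs each test, skipping any whose declared dependencies did not all pass."
  [tests]
  (reduce (fn [results {:keys [id run deps]}]
            (assoc results id
                   (cond
                     (not-every? #(= :pass (results %)) deps) :skipped
                     (run) :pass
                     :else :fail)))
          {}
          tests))

(run-tests tests)
;; => {:parse :pass, :persist :fail, :report :skipped}
```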

chrisblom08:10:56

How will you detect dependencies between functionalities?

Gargarismo08:10:15

In the service I'm writing, that's manually specified.

Gargarismo08:10:48

And possibly not even in 1:1 correspondence to code per se. If a test-set tests some feature, and another test depends upon the proper functionality of that feature, then that flies, too.
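
One hypothetical way to express that kind of manually specified, feature-level dependency (the :tests/:requires keys and the names below are invented, not from any real runner): each test declares the feature it exercises and the features it merely relies on, so the runner can skip by feature rather than by individual test.

```clojure
;; Hypothetical feature-level declarations: a test exercises one feature
;; (:tests) and may merely rely on others (:requires).
(def test-specs
  [{:name 'login-succeeds        :tests :auth     :requires []}
   {:name 'cart-totals-correct   :tests :cart     :requires [:auth]}
   {:name 'checkout-charges-card :tests :checkout :requires [:auth :cart]}])

(defn tests-to-skip
  "Given the set of features with at least one failing test, returns the
  names of tests whose required features are broken and can be skipped."
  [failing-features specs]
  (for [{:keys [name requires]} specs
        :when (some failing-features requires)]
    name))

(tests-to-skip #{:auth} test-specs)
;; => (cart-totals-correct checkout-charges-card)
```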

Gargarismo08:10:30

I should probably emphasize that the principal environment I intended this for was test-driven development-- making changes to code and re-running relevant tests on the fly.

Ben Sless09:10:10

I don't have enough experience to give an informed opinion on the subject but this talk reoriented how I think about tests, maybe you'll find it useful, too https://youtu.be/tWn8RA_DEic

gibb11:10:01

How come you're writing a test runner?

gibb11:10:06

"I thought this was pretty clever,Ā but I was discouraged by a coworker, who thought it was a waste of time, and I realized that the utility of it was predicated on a philosophy of unit-testing which was perhaps more specific to me than I realized." As someone sharing your budget I'd be worried that the whole endeavour of writing a test-runner might be a waste of time.

gibb11:10:15

Assuming it's for good reason, I'd argue that just terminating the test run on any error would give you a lot of the benefits.
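
For comparison, the fail-fast approach suggested here is much simpler to implement; a sketch, reusing the test shape from the earlier example:

```clojure
;; Stop the whole run at the first failure instead of tracking dependencies.
;; `reduced` short-circuits the reduce.
(defn run-until-failure
  [tests]
  (reduce (fn [results {:keys [id run]}]
            (let [outcome  (if (run) :pass :fail)
                  results' (assoc results id outcome)]
              (if (= :fail outcome)
                (reduced results')
                results')))
          {}
          tests))
```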

Gargarismo16:10:36

@UK0810AQ2 This was a good talk to get a dialogue going at work.

Ben Sless17:10:21

@U016JEMAL4R I'm glad to hear it; that was my hope. My 2c on the subject: in physics and engineering, a good litmus test of a theory is checking it at its edges (usually 0 and infinity). Let's try to do the same with tests:
• Zero unit tests, all end-to-end tests: you test only business features and flows. You know exactly which feature failed, but you have no idea why or where.
• Only unit tests, zero e2e tests: you know exactly which test failed and why, but you have no idea of its impact on any feature.
It's a sort of uncertainty principle applied to testing 🙃 Ideally, you'd want both: if you could label tests with all the features that are relevant to them, you could test by feature. Combine that with the other dimension of dependency detection, and you'll start from the small units, allowing you to say: feature X fails at point A.
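
The "label tests with features" idea maps fairly directly onto var metadata in clojure.test; here is a sketch of how a runner might select tests by such a tag (the :features key and the tests-for-feature helper are made up for illustration, not part of clojure.test):

```clojure
(ns example.feature-test
  (:require [clojure.test :refer [deftest is]]))

;; Tag each deftest var with the features it is relevant to.
(deftest ^{:features #{:auth}} login-succeeds
  (is true))

(deftest ^{:features #{:auth :cart}} cart-requires-login
  (is true))

(defn tests-for-feature
  "Returns the test vars in ns-sym whose metadata mentions feature."
  [feature ns-sym]
  (filter #(contains? (:features (meta %) #{}) feature)
          (vals (ns-interns ns-sym))))

;; (tests-for-feature :cart 'example.feature-test)
;; => (#'example.feature-test/cart-requires-login)
```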

mjw17:10:51

Depending on what I'm working on, there are times when I don't want to be overwhelmed by 50 failing tests, in which case I'd rather address failing test cases one at a time. At other times (especially if I'm not testing in "watch" mode), I do want to know the full scope of my failing tests. Depending on what your actual users need, there's benefit to providing an option to short-circuit on any failure.

👌 3
Gargarismo17:10:25

Those were my thoughts, too.

fadrian19:10:51

In addition, some test suites are absolutely massive. Omitting tests that will fail can be a great win here. However, whether or not this test runner is worth the time it will take to code it is another issue.