This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # announcements (2)
- # aws (1)
- # beginners (175)
- # boot (3)
- # calva (19)
- # cider (18)
- # clj-kondo (5)
- # cljsrn (18)
- # clojure (47)
- # clojure-europe (9)
- # clojure-finland (7)
- # clojure-italy (3)
- # clojure-nl (15)
- # clojure-spec (20)
- # clojure-sweden (2)
- # clojure-uk (72)
- # clojurescript (45)
- # cursive (42)
- # datomic (6)
- # duct (4)
- # emacs (4)
- # expound (48)
- # figwheel-main (7)
- # fulcro (64)
- # graphql (8)
- # hoplon (9)
- # hyperfiddle (1)
- # jackdaw (8)
- # jobs (4)
- # jobs-discuss (61)
- # klipse (5)
- # leiningen (6)
- # off-topic (72)
- # pathom (2)
- # planck (11)
- # re-frame (1)
- # reagent (3)
- # reitit (16)
- # remote-jobs (17)
- # ring-swagger (3)
- # shadow-cljs (37)
- # spacemacs (12)
- # sql (3)
- # tools-deps (124)
- # vim (64)
- # xtdb (4)
Does anyone know of any studies that measure the correlation between generative testing and software quality?
The case for general code coverage as a metric seems to be very weak. But I wonder if it would be a different story for generative tests.
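(For readers unfamiliar with the term: generative testing, aka property-based testing, e.g. Clojure's test.check, asserts a property over many randomly generated inputs instead of a few hand-picked examples. A minimal stdlib-only Python sketch of the idea; the round-trip property and the encode/decode functions are illustrative, not from the thread:)

```python
import random

def run_length_encode(xs):
    """Collapse runs: [1, 1, 2] -> [(1, 2), (2, 1)]."""
    out = []
    for x in xs:
        if out and out[-1][0] == x:
            out[-1] = (x, out[-1][1] + 1)
        else:
            out.append((x, 1))
    return out

def run_length_decode(pairs):
    """Expand pairs back into the original sequence."""
    return [x for x, n in pairs for _ in range(n)]

def check_roundtrip(trials=1000, seed=42):
    """Property: decode(encode(xs)) == xs for many random inputs."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(0, 3) for _ in range(rng.randint(0, 20))]
        assert run_length_decode(run_length_encode(xs)) == xs
    return trials

print(check_roundtrip())  # prints 1000 if every trial passed
```

A real generative-testing library adds input shrinking and generator combinators on top of this basic loop.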
IME the problem with code coverage is that people mean different things when talking about it. Some people and tools think that “if I call each function in my tests at least once I have 100% test coverage”, which is quite different from “calling each function with all the possible parameter variants”.
> IME problem with code coverage is that people mean different things when talking about it.
Related: coverage should be split depending on the test suite type
`curl localhost` can get you 65% coverage, which demonstrates the limited accuracy of aggregating test types under a single metric.
Assuming one categorizes tests according to type (functional, etc.), then each category should generate its own coverage
I don't recall a tool built with that in mind, but haven't used a lot of them tbh
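(The per-category idea above can be sketched in a few lines; all the names and numbers here are hypothetical, and a real tool would record executed lines per suite rather than take them as input:)

```python
def per_suite_coverage(total_lines, hits_by_suite):
    """Report coverage % per test category, plus the aggregate.

    hits_by_suite maps a suite name to the set of line numbers it executed.
    """
    report = {suite: 100.0 * len(hits) / total_lines
              for suite, hits in hits_by_suite.items()}
    covered = set().union(*hits_by_suite.values())
    report["aggregate"] = 100.0 * len(covered) / total_lines
    return report

# Hypothetical data: a smoke test (the `curl localhost` case) touches many
# lines, while the unit suite covers a different, smaller slice.
hits = {
    "smoke": set(range(1, 66)),   # 65 of 100 lines
    "unit": set(range(50, 80)),   # 30 of 100 lines
}
print(per_suite_coverage(100, hits))
```

The aggregate number (79% here) hides that the smoke suite alone supplies most of it, which is exactly the objection raised above.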
Is there any data I can present to my higher-ups besides my own opinion to show the value of coverage (even different kinds of coverage)?
It’s hard to find empirical evidence that any testing is beneficial, though anecdotally I would swear by it.
Testing is not always beneficial. Good and sensible testing usually is. I’m afraid it may be hard to find studies supporting that because what is ‘good’ and ‘sensible’ depends on the context.
And don’t get me wrong, I am a huge fan of testing. 🙂 It’s just.. complicated. I’ve written and I’ve seen other people write very bad tests that just add waste to the project. OTOH I’ve worked in a project where we developed and maintained a huge web-store with zero tests. It was one of the worst experiences I’ve had.
I think you probably should try to take a step back and ‘sell’ quality as a value to your higher-ups. Although that’s really hard if it’s not baked into the corporation’s core values somehow… And that’s a smell. 🙂
I’m not convinced measurability is important, but then again I’m not a business person.
you can have a system that works 100% of the time but can never be changed. I would consider that a bad-quality system
well, in my experience, the only thing I can count on is any system I build will need to change at some point
whether it needs to be moved from AWS to GCP, or needs to support a new feature, or bugs need to be fixed
I’m reading “Software Design X-Rays” which tries to analyze code history and show which parts of the system are the ones that have the technical debt, which is mostly the ones that change often by multiple people. It’s a very interesting read so far.
ah looks like the same author as https://pragprog.com/book/atcrime/your-code-as-a-crime-scene
OTOH, in my previous job I was doing software interactives for museums — write once, never change. That was an interesting valley — if it works and looks good (try to test that, on a tight budget), you could move on 🙂
I remember seeing a tool that claimed to do something similar with git. Analyzing history and seeing which files were always edited together and possibly tangled.
`git log --format=format: --name-only | egrep -v '^$' | sort | uniq -c | sort -r | head -15`
Does the book have a spell for analyzing which files are always edited in same commit?
that’s straight from the book — gives you a sorted list of the most frequently changing files
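(The “edited in the same commit” question above is a small extension of that one-liner: group `git log --name-only` output by commit and count file pairs. A sketch of the counting step, with the git-log parsing elided and a made-up toy history standing in for real output:)

```python
from collections import Counter
from itertools import combinations

def co_change_counts(commits):
    """Count how often each pair of files appears in the same commit.

    commits: iterable of iterables of file paths, one entry per commit
    (e.g. parsed from `git log --name-only` output).
    """
    pairs = Counter()
    for files in commits:
        # sorted(set(...)) deduplicates within a commit and makes
        # (a, b) ordering canonical so pairs aren't double-counted
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical history: handler.clj and routes.clj tend to move together.
history = [
    ["src/handler.clj", "src/routes.clj"],
    ["src/handler.clj", "src/routes.clj", "README.md"],
    ["src/db.clj"],
]
print(co_change_counts(history).most_common(1))
# -> [(('src/handler.clj', 'src/routes.clj'), 2)]
```

Pairs with a high count relative to each file's individual change count are the “possibly tangled” candidates the book is after.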
Any data points on how long a new hire takes to get on-boarded fully in a new environment that might have new languages, frameworks, etc?
I’ve seen some comments that new hires can learn enough Clojure to get productive in 2 weeks, but that seems an extremely short period of time.
That presumes you have an existing codebase and relatively stable dev practices (some way to get the codebase up and running with a handful of commands)
java + python/ruby experience tends to be the shortest path to productivity (outside of actual clojure or lisp experience)
I've only ever compared one experienced CL user, but that person had dabbled in clojure in the past
I kinda suspect that dynamic language workflow experience is slightly more crucial than java experience, but that also is probably project-dependent.
probably a few weeks to get some commits in, but then multiply some amount of familiarisation time for each drastically different part of the system that is encountered
I added Zulip Mirror Bot so the discussions here get archived to Zulip and become searchable. I was looking for that link to the long post about why companies can't always hire remote workers outside their country (or even outside their state). Does someone still have that link?
that HN post chimes with our early experiences with attempts to employ remotely in other EU (we're in the UK) countries - we found we couldn't understand how much it was going to cost us to employ someone in another EU country, and that it was memorably rather more than we expected. as a small company, with limited resource to investigate, it is much lower risk to stay within familiar territories