#clojure-dev
2019-10-29
devth03:10:44

Low volume with occasional bursts of API calls (estimate 10-100 calls an hour when people play with it). It can list recent issues, search via JQL, list components, versions, and users, comment on issues, log work on issues, update issues, and create issues. I expect we’d primarily use read-only features.

Alex Miller (Clojure team)03:10:44

why would the bot comment or update on issues?

Alex Miller (Clojure team)03:10:28

who would be "playing" with it?

Alex Miller (Clojure team)03:10:00

I'm pretty wary of having a bot make edits that obscure the actual user

Alex Miller (Clojure team)03:10:31

why do you need to use this against the Clojure jira? can't you test against a different project?

devth03:10:27

I didn’t mean that we would use it that way, just that it has the capability.

devth03:10:46

Does jira provide access controls that would limit it to read only?

Alex Miller (Clojure team)03:10:14

I don't remember the details off the top of my head

devth03:10:41

Why this project: Because if/when I get it on Clojurians it would be fun and useful to query Clojure’s own issues in Slack

devth03:10:13

But yes I can test against a different project

Alex Miller (Clojure team)03:10:55

yeah, I'm not going to do all that

seancorfield03:10:00

@devth I can head that off straight away as one of the Admins here: we're on a free plan and we have a general rule of not allowing any token requests or any new apps.

andy.fingerhut03:10:09

(Sean is speaking from the perspective of Clojurians Slack admin here, I'm pretty sure, not Clojure JIRA)

seancorfield03:10:38

So, if you're building this with the hope of somehow integrating Clojure's Atlassian/JIRA account into this Slack, it's not going to happen. There's already a rock solid Jira Cloud integration for Slack and we've already had to reject several requests to add that to this Slack.

seancorfield03:10:20

Yes, as Andy says, I'm speaking as part of the Admin team for Clojurians Slack (not Cognitect-associated and not speaking on behalf of the admins of the Clojure Atlassian setup).

slipset13:10:54

So, Stuart Halloway jumps in to a silly discussion @marc-omorain and I were having on twitter with this remark:

slipset13:10:04

>@atmarc @slipset I would love to see test moved out to a separate lib.

slipset13:10:37

where “test” refers to clojure.test.

slipset13:10:21

Would a ticket for such an endeavour be appreciated?

Alex Miller (Clojure team)13:10:11

it's already on our list, don't need a ticket

Alex Miller (Clojure team)13:10:49

in particular, Sean Corfield and I have been talking about this for a year+ and he's got a list of items

slipset14:10:30

FWIW one of the things I remember really appreciating when picking up Clojure was that it came with stuff like clojure.test and clojure.data.* included.

cfleming23:10:27

I agree. I’d hate to see clojure.test broken out.

seancorfield15:10:28

@slipset Link for that Twitter thread? Curious to read it.

seancorfield15:10:48

Re: clojure.test -- I already volunteered to maintain it if it moves out of "core" for 1.11 🙂

Alex Miller (Clojure team)15:10:25

note that it would still be included as a dependency, like spec is included, but via a separate lib that could be revved at a faster rate

Alex Miller (Clojure team)15:10:10

so it's really about separating the release cycle more than the inclusion part

ghadi16:10:39

"I got pinpoint accurate data" - glass half full

Alex Miller (Clojure team)16:10:56

I'd be curious how that test is being run (under lein, what clojure version, etc). Not sure that's what you'd see from just clojure.test itself.

seancorfield16:10:01

And that stack trace is coming from Orchestra which is doing generative testing so there's not even much clojure.test could do to help.

seancorfield16:10:23

[circleci.vm_service.vms_test$fn__39445$fn__39446 invoke form-init2759853974423263704.clj 1428]
  [com.gfredericks.test.chuck.clojure_test$_testing invokeStatic clojure_test.cljc 102]
  [com.gfredericks.test.chuck.clojure_test$_testing invoke clojure_test.cljc 100]
  [circleci.vm_service.vms_test$fn__39445 invokeStatic form-init2759853974423263704.clj 1428]
  [circleci.vm_service.vms_test$fn__39445 invoke form-init2759853974423263704.clj 1427]
  [clojure.test$test_var$fn__9737 invoke test.clj 717]

That's 80 lines into the stack trace and the first mention of clojure.test.

[clj_honeycomb.core$with_event_fn invoke core.clj 465]
  [clojure.lang.AFn applyToHelper AFn.java 160]
  [clojure.lang.AFn applyTo AFn.java 144]
  [orchestra.spec.test$spec_checking_fn$fn__29295 doInvoke test.cljc 127]

That's 25 lines into the stack trace.

seancorfield16:10:35

(and the error is happening inside a fixture making the problem even harder to clean up)

Alex Miller (Clojure team)16:10:54

I would recommend not even doing generative testing under clojure.test at all tbh

seancorfield16:10:41

Around line 300, clojure.test did report what actually failed:

expected: (not-exception? (:result result))
  actual: (not (not-exception? #error {
 :cause "Validation failed: writeKey must not be null or empty"

so I'm not sure the first 300 lines are even coming from clojure.test TBH.

andy.fingerhut16:10:38

what disadvantages do you see to doing generative testing under clojure.test? And if not initiating generative tests that way, then you'd recommend simply invoking generative testing functions from the command line?

andy.fingerhut16:10:21

I've been doing a few generative tests in the last couple months initiated from within clojure.test deftest forms (mostly passing), and haven't noticed any problems, but perhaps because of the 'mostly passing' part?

hiredman16:10:35

I think it is just more stuff on the stack (the generation machinery can be complicated) so you get longer stacktraces

Alex Miller (Clojure team)16:10:55

generative tests have different characteristics than typical unit tests. wanting to force them both to always run at the same frequency and importance does not imo make sense (and this reason is why there is no clojure.test integration in spec)
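
For reference, running spec's generative checks directly, without any clojure.test wrapper, looks roughly like this (a sketch; the function name is hypothetical):

(require '[clojure.spec.test.alpha :as stest])

;; run spec-driven generative checks on their own schedule, outside clojure.test
;; (`myapp.core/parse-id` is a made-up, spec'ed function)
(stest/check `myapp.core/parse-id
             {:clojure.spec.test.check/opts {:num-tests 500}})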

seancorfield16:10:57

In this case, it's not that clojure.test printed a stacktrace, it printed an exception which happens to include a huge call chain.

Alex Miller (Clojure team)16:10:52

and I assume not-exception? is a custom test-is predicate

andy.fingerhut16:10:53

But anyone who knows that can create different clojure.test namespaces and/or deftests to control relative frequency of running example-based vs. generative tests.
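
A rough sketch of that approach, assuming Leiningen (the :generative tag and the names are illustrative):

;; project.clj (excerpt)
:test-selectors {:default (complement :generative)
                 :generative :generative}

;; test namespace
(ns myapp.props-test
  (:require [clojure.test :refer [deftest is]]))

(deftest ^:generative slow-property-test
  (is true)) ;; placeholder body; the real generative assertions would go here

;; lein test              => runs only the untagged, fast tests
;; lein test :generative  => runs only the tagged generative tests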

Alex Miller (Clojure team)16:10:12

the knobs are different too

Alex Miller (Clojure team)16:10:45

spec generative tests are not unit tests. why pretend they are?

andy.fingerhut16:10:31

sure. I just don't see why clojure.test is restricted to unit tests, then.

andy.fingerhut16:10:54

Not that it necessarily gives any advantages for generative testing, either.

Alex Miller (Clojure team)16:10:22

the main advantage it gives you is to make all your tests look the same

Alex Miller (Clojure team)16:10:34

my point is that they are not all the same, so that's a false advantage

andy.fingerhut16:10:56

I guess the best I can say is that it isn't hurting me to run them that way 🙂

didibus17:10:07

How is a generative test not a unit test? In almost all cases they are testing a single unit of code, one function.

Alex Miller (Clojure team)17:10:21

you are wasting huge amounts of time retesting things that aren't changing

andy.fingerhut17:10:54

but I don't run the generative tests when I rerun the unit tests. I initiate them differently from the command line.

Alex Miller (Clojure team)17:10:11

well, then you're already split, which I'd say is good

Alex Miller (Clojure team)17:10:13

we have a vision for an entirely different kind of test runner that remembers what you've tested, and just retests what's affected by your change (in a smart way). I built a prototype of this two years ago but never really pulled it over the hump. hoping to get back to it after spec 2 is out.

andy.fingerhut17:10:41

I'm not here on a soapbox suggesting people use clojure.test to run generative tests -- just prompted by what you said to see if there was a harm in doing so. Wasting time is bad, agreed, but there have been ways for years now to use clojure.test to run different subsets of your tests via namespaces, deftest names, Leiningen doohickeys (whose names I forget now), etc.

Alex Miller (Clojure team)17:10:01

some of that is shared need, some of it is not

Alex Miller (Clojure team)17:10:08

in clojure.test, there is organizational and runner infrastructure (some of which could be generic and really has little to do with testing per se), and there are tools for making assertions

didibus17:10:11

Maybe I'm misunderstanding something, but every time you re-run a generative test, it could find a new bug that the last run didn't catch, due to the randomness of the test.

Alex Miller (Clojure team)17:10:59

generally, I would say that you should run it once for a sufficiently long time that you are satisfied it is correct. and then not test it again if it doesn't change.
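
For concreteness, a minimal test.check sketch of "run it once, for a long time" (assumes test.check 0.10+ for gen/small-integer; the property is a toy example):

(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

;; toy property: sorting is idempotent
(def sort-idempotent
  (prop/for-all [v (gen/vector gen/small-integer)]
    (= (sort v) (sort (sort v)))))

;; one long, manually initiated run rather than a rerun on every test pass
(tc/quick-check 1000000 sort-idempotent)
;; => a result map with :pass?/:result, :num-tests, and the :seed used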

didibus17:10:05

So in my mind, I'd want my CD pipeline to run them on every deployment

andy.fingerhut17:10:50

But the likelihood of finding a new bug after running particular generative test for hours goes way down.

Alex Miller (Clojure team)17:10:01

as does running it repeatedly

Alex Miller (Clojure team)17:10:17

generative tests are a statistical argument for correctness

didibus17:10:46

In practice, one very long run is more cumbersome than multiple small runs over time

andy.fingerhut17:10:22

If there are bugs to find, then after running one generative test for 'sufficiently long' (not always obvious, but practical things like "lots of time I was willing to wait"), the most likely way to find new bugs is to tweak how the generative tests generate random data, to make them more likely to find new scenarios the old generative test never did.
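
One concrete (and purely illustrative) way to do that kind of tweaking with test.check generators is to rebalance the distribution or push the size parameter up:

(require '[clojure.test.check.generators :as gen])

;; bias generation toward edge cases the earlier runs rarely produced
(def id-gen
  (gen/frequency [[8 gen/small-integer]
                  [1 (gen/return 0)]
                  [1 (gen/return Long/MAX_VALUE)]]))

;; or grow the size parameter beyond what a default run would reach
(def big-vector-gen
  (gen/scale #(* 10 %) (gen/vector gen/small-integer)))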

Alex Miller (Clojure team)17:10:57

this is a subtle and interesting question about the writing of randomized tests that could probably be the source of a thesis or two :)

andy.fingerhut17:10:33

I am sure. The best hardware ASIC test writer I ever met played ping pong the way he wrote tests: "never the same serve twice" 🙂

Alex Miller (Clojure team)17:10:07

I'm a big believer in doing a little bit of many test approaches

Alex Miller (Clojure team)17:10:54

what if I persisted a "database" (I'll use that very abstractly) that remembered what I had tested

didibus17:10:02

Ya, I think what you have cooking would be better. Especially if it could do something like split samples over runs. So the next run starts from the last sample, since I believe the samples grow as they run (maybe I'm wrong though).

Alex Miller (Clojure team)17:10:26

but that growth is rarely useful in finding new issues beyond some bound

didibus17:10:18

But since that awesome generative test runner isn't out yet 😋, we have to assume there's a ton of clojure.test tests out there doing generative testing. So for that I think it makes sense for clojure.test to accept that use case as well and improve on it.

didibus17:10:46

Or there has to be a clear alternative. Like what should I do in the meantime to run generative tests? On a team of 10, with multiple packages worth of code?

Alex Miller (Clojure team)17:10:12

a test suite is just a program

Alex Miller (Clojure team)17:10:17

write a different program

Alex Miller (Clojure team)17:10:21

provide a way to run it
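
A minimal sketch of that "different program" idea with test.check (every name here is hypothetical; assumes test.check 0.10+ for the :pass? key):

(ns myapp.gen-runner
  (:require [clojure.test.check :as tc]
            [clojure.test.check.generators :as gen]
            [clojure.test.check.properties :as prop]))

(def properties
  {:sort-idempotent (prop/for-all [v (gen/vector gen/small-integer)]
                      (= (sort v) (sort (sort v))))})

(defn -main
  "Run every property for n trials (default 10000); exit non-zero on any failure."
  [& [n]]
  (let [n (if n (Long/parseLong n) 10000)
        results (into {} (map (fn [[k p]] [k (tc/quick-check n p)]) properties))]
    (doseq [[k r] results]
      (println k (select-keys r [:pass? :num-tests :seed])))
    (System/exit (if (every? :pass? (vals results)) 0 1))))

;; invoked however your build runs a -main, e.g. clojure -M -m myapp.gen-runner 100000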

didibus17:10:55

What about my custom program would be better than using clojure.test though?

didibus17:10:23

It would still need to run as part of build, and in CD, and it would therefore rerun just as often.

didibus17:10:55

Unless I go all the way and build this awesome smart diff-detecting generative test runner, yes

Alex Miller (Clojure team)17:10:56

why not run it less often and for longer runs?

didibus17:10:02

If a defect was introduced by a code change, it needs to run prior to being released or risk breaking my service.

andy.fingerhut17:10:15

There are many commercial software dev teams that do nightly / weekly / only-before-release longer tests that are not run pre-commit.

andy.fingerhut17:10:21

The downside, of course, is that it isn't always obvious what changes caused the test breakage, but the cost of running those tests on every commit is simply too high to do them more often.

didibus17:10:26

And since we currently can't detect whether the code the generative tests exercise has changed, any commit could have changed it. So for now, I'd need them to run.

didibus17:10:07

I'm not sure I'm following. I would never trade faster deployment for more brittle code

didibus17:10:26

And we're not talking days of slowdown

didibus17:10:51

You merge your code, and go to sleep, the next morning your generative tests are done running

andy.fingerhut17:10:33

I am reacting to your "run as part of build". Is "build" a thing you think of doing on every commit, or less often?

andy.fingerhut17:10:06

If less often, then perhaps we are in violent agreement.

didibus17:10:30

It happens on every merge to master. But we do continuous deployment, so every merge to master goes to production.

didibus17:10:18

It won't run locally on your machine

didibus17:10:32

Basically we have them scheduled same as our integration tests

andy.fingerhut17:10:14

Do you have a shorter set of automated tests you recommend team members run on every commit, even if that commit isn't merged into master? If so, likely those need to be pretty quick, to avoid slowing people down.

didibus17:10:03

Ya, locally we only run our unit tests and some generative tests as well, but the sample size is restricted so they run faster

seancorfield17:10:50

I run tests inside my editor, for code I'm actively developing. Then we have test suites for each subproject and a CI process that runs all tests for all subprojects any given build artifact depends on.

seancorfield17:10:10

So we essentially have three "levels" of tests -- and we do have a few clojure.test "unit" tests that run small generative tests but we also have RCFs that contain longer generative tests that we only want to run from time to time.

👍 4
seancorfield17:10:01

Cognitect's current test runner lets you include/exclude tests based on metadata but we don't use that right now (we probably should... I should create a JIRA ticket at work for that 🙂 )
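
For reference, a hedged sketch of that metadata-based include/exclude with the Cognitect test-runner (coordinates abbreviated -- pin a real :sha; the :generative tag is illustrative):

;; deps.edn (excerpt)
{:aliases
 {:test {:extra-paths ["test"]
         :extra-deps {com.cognitect/test-runner
                      {:git/url "https://github.com/cognitect-labs/test-runner.git"
                       :sha "..."}}
         :main-opts ["-m" "cognitect.test-runner"]}}}

;; clojure -A:test -e :generative   => everything except tests tagged ^:generative
;; clojure -A:test -i :generative   => only the tagged generative tests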

didibus17:10:27

I kinda just agree with both of you at the same time. A test runner smart enough to only rerun a test when the code under test has changed would be awesome. Even for unit tests, by the way. If the detection is accurate, like based on form instrumentation or something, then even fast tests don't need to be run again. So if the core team is working on that, that's really awesome. But in the meantime, I don't see why writing my own test runner would be any better than leveraging the clojure.test machinery. It's just more work and can be set up to run just as often or as little as you want.

didibus17:10:13

Like on that note, I use clojure.test for integration tests, some of them take hours to run. Are integ tests not supposed to run on clojure.test as well?

Alex Miller (Clojure team)17:10:11

depends on if that's a good match for them

Alex Miller (Clojure team)17:10:32

I have certainly written integration test suites that did not use clojure.test

Alex Miller (Clojure team)17:10:21

I'm mostly interested in breaking people out of "there is only one way to run tests and it is clojure.test"

didibus17:10:53

Do you think clojure.test is causing more harm than good?

Alex Miller (Clojure team)17:10:07

we should aspirationally want more than just that though

Alex Miller (Clojure team)17:10:29

it is an old saw that Clojure people don't write tests, which is patently false

Alex Miller (Clojure team)17:10:16

what I would want for Clojure as a community is that we are always interested in using an array of testing strategies that is maximally effective

Alex Miller (Clojure team)17:10:29

and not doing any one thing dogmatically

Alex Miller (Clojure team)17:10:42

it's like investment planning - do a mix of things in your investment mix

seancorfield17:10:08

test.check has a defspec macro that pretty much leads people to write generative tests that run with clojure.test runners 🙂
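
For anyone who hasn't used it, defspec looks like this (a toy property; assumes test.check 0.10+ for gen/small-integer):

(require '[clojure.test.check.clojure-test :refer [defspec]]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

;; defspec defines an ordinary clojure.test var, so the usual runners pick it up
(defspec reverse-twice-is-identity 100
  (prop/for-all [v (gen/vector gen/small-integer)]
    (= v (reverse (reverse v)))))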

Alex Miller (Clojure team)17:10:24

well, I don't run that project :)

Alex Miller (Clojure team)17:10:55

actually, no one runs it right now, so it's a great time to start if anyone wants to :)

😢 8
andy.fingerhut17:10:18

It certainly makes it easy, but the library docs also show you how to run all the generative tests interactively.

seancorfield17:10:07

Admittedly, it adds metadata so you could "easily" include/exclude such tests...

didibus17:10:31

I'm all supportive of that, but maybe I fail to see where clojure.test plays into it. I agree we should all embrace multiple strategies for software correctness: have CRs, unit tests, integ tests, generative tests, run QA where it makes sense, etc. Do you feel having clojure.test hurts that message?

Alex Miller (Clojure team)17:10:44

clojure itself has both unit and generative tests (actually written with test.generative) that can be run independently. test.generative actually makes it easy to run generative tests with a time budget (rather than a count budget), which is useful.

andy.fingerhut17:10:53

Hopefully Gary Fredericks is still willing to give advice/tips on it?

andy.fingerhut17:10:51

Cool. I was looking into it recently, and will ask him if there are any other sources to learn how the shrinking stuff works. The code is there, I know, but English descriptions are also nice.

Alex Miller (Clojure team)17:10:15

I don't think clojure.test hurts anything by itself, this is more of a social question

Alex Miller (Clojure team)17:10:09

the main sources for shrinking are the prior art in Erlang, Haskell, etc

Alex Miller (Clojure team)17:10:29

surely there is something by John Hughes or someone that explains the idea

andy.fingerhut17:10:10

Yep, will ask Gary what he knows of, in case I've missed it.

didibus17:10:07

Funnily enough, I was just reading this today: https://hypothesis.works/articles/integrated-shrinking/

andy.fingerhut17:10:33

Howdy. Since you've been summoned, if you happen to know any English descriptions of how test.check and/or QuickCheck implement shrinking, happy to know of those. If they are already in the test.check repo and I've missed it, apologies for the bother.

gfredericks18:10:38

There might not be

gfredericks18:10:00

But I love explaining things and nobody ever asks me about that, so you should just ask me

❤️ 4
andy.fingerhut18:10:45

And probably a much easier question -- it seems that test.check's fmap generator must take a pure function for transforming the generator's output. At least, it shouldn't do its own pseudo-random generation, because that would circumvent the ability of test.check to reproduce test cases from the seed?

gfredericks18:10:28

Yes, that's the expectation

gfredericks18:10:51

It'll call the function on shrinking inputs to produce shrinking outputs
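
Concretely, the contract being described (a sketch; the generators are illustrative):

(require '[clojure.string :as str]
         '[clojure.test.check.generators :as gen])

;; good: the transformation is pure, so shrunk inputs map to shrunk outputs
;; and every run is reproducible from the seed
(def keyword-gen
  (gen/fmap (comp keyword str/lower-case)
            (gen/not-empty gen/string-alphanumeric)))

;; anti-pattern: rand-int inside fmap sidesteps the seed, breaking
;; reproducibility and shrinking
(def flaky-gen
  (gen/fmap (fn [x] (+ x (rand-int 10))) gen/small-integer))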

gfredericks18:10:15

My "Building t.c Generators" talk emphasizes this point

andy.fingerhut18:10:34

I'll take a look at that talk, then.

andy.fingerhut18:10:50

I'll let you know if I find any other resources about how the shrinking is implemented. I'm not actively digging into that particular part of things at the moment, so sorry to ask about it and then put you off, but I will definitely ask again if I start digging into that topic in depth.

gfredericks18:10:03

reproducibility, growth, and shrinking are the main features you get by buying into the expected way of constructing generators
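
A small illustration of the reproducibility piece (the seed value is arbitrary; sort-idempotent is the toy property sketched earlier):

(require '[clojure.test.check :as tc])

;; a failing quick-check result includes the :seed it used; re-running with
;; that seed reproduces the exact sequence of generated values
(tc/quick-check 100 sort-idempotent :seed 1572300000000)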

andy.fingerhut18:10:43

reproducibility was something I was familiar with from hardware design days long back, but automated shrinking seemed like magic the first time I heard of it.

gfredericks18:10:57

It's not any more complicated than the combinators themselves