
That said, it currently takes that approach (specs in separate namespaces) so it can run on Clojure 1.4 through 1.8, and it optionally uses spec for the tests when they are run on 1.9.


Another option would be to look at the backport of spec to 1.8 that someone in the community maintains (future spec, I think?).


That's a cool idea, thanks!


I have a philosophical (?) question about how far spec should be pushed in terms of semantic types, type checking and testing. For example, is it a reasonable use of spec to specify that the list argument to a binary search is sorted? Can you do it?


it's certainly possible
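For instance, a minimal sketch of the binary-search idea (spec name `::sorted-ints` is made up for illustration): `s/and` lets you layer a sortedness predicate on top of the collection spec.

```clojure
(require '[clojure.spec.alpha :as s])

;; Hypothetical spec for binary search's input: a vector of ints
;; that must already be sorted. The second predicate in s/and runs
;; on the whole collection. (apply <= []) would throw, so guard
;; the empty case explicitly.
(s/def ::sorted-ints
  (s/and (s/coll-of int? :kind vector?)
         #(or (empty? %) (apply <= %))))

(s/valid? ::sorted-ints [1 2 2 9])  ; true
(s/valid? ::sorted-ints [3 1 2])    ; false
```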


that said I'm also interested in exactly where it should be applied, and if "overuse" of it is possible, and what that would look like


well, it’s funny you say that because that was my leading question...


one of the things that has been mentioned is the separation of specifying the keys in a map and the values those keys may take.


but it is very often the case in complex data structures that those values are interdependent. For example, is it too far to require a ::street to actually be in the ::city in the ::state with the right ::zip-code? Or, perhaps midway between the two, if I’m modeling a “blocks world” using blocks with the keys ::id ::pile ::above ::below, it’s certainly true that a block cannot be above itself, below itself, or both above and below the same block. And so I find myself writing block? instead of (s/def ::block (s/keys …)).


It is very true that certain “schema” is expressed via the relationship of one value to another in the entity map. Spec still allows us to express this relationship via a custom predicate, but the nice part is that it does not force us to define this structure. Instead, its intention is for us to specify reusable pieces where possible. How far we take it is largely up to us. In the example of city/zip, personally this feels as if it’s overstepping the intended bounds of spec, but I cannot quantify that yet with anything sensible, so it’s not worth much.
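A sketch of that middle ground for the blocks-world example (all spec names and the `consistent-block?` helper are hypothetical): `s/keys` covers the shape, and the cross-key invariant goes in an `s/and`, so you keep the reusable key specs instead of collapsing everything into an opaque `block?`.

```clojure
(require '[clojure.spec.alpha :as s])

;; Reusable per-key specs.
(s/def ::id keyword?)
(s/def ::pile keyword?)
(s/def ::above (s/nilable keyword?))
(s/def ::below (s/nilable keyword?))

;; Cross-key invariant: a block is never above/below itself,
;; nor above and below the same block.
(defn consistent-block? [{id ::id above ::above below ::below}]
  (and (not= id above)
       (not= id below)
       (or (nil? above) (not= above below))))

(s/def ::block
  (s/and (s/keys :req [::id ::pile ::above ::below])
         consistent-block?))

(s/valid? ::block {::id :a ::pile :p ::above nil ::below :b})  ; true
(s/valid? ::block {::id :a ::pile :p ::above :a ::below nil})  ; false
```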


yeah, I’m just trying to think out loud a bit. One of the points stressed in spec is the generation of examples…but automatically generated samples of addresses that are just random strings seems…sad to me.


because if I want to use this as documentation and to specify what would be acceptable input, a ::street of “V+q.29b7gqao9$” seems strange.


that’s what custom generators are for...


there are a lot of street names out there 😛
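As a sketch of what a custom generator for ::street might look like (the name list is made up for illustration), `s/with-gen` keeps the spec as plain `string?` but makes the samples look like street names:

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; Attach a custom generator so generated ::street values look like
;; street names instead of random strings. Validation is unchanged:
;; any string still conforms.
(s/def ::street
  (s/with-gen string?
    #(gen/fmap (fn [[n kind]] (str n " " kind))
               (gen/tuple (gen/elements ["Main" "Oak" "Elm" "Maple"])
                          (gen/elements ["St" "Ave" "Blvd"])))))

(gen/sample (s/gen ::street) 3)
;; e.g. ("Main St" "Elm Blvd" "Oak Ave")
```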


yeah, I watched that…I would like to see more on that topic…it seems like a buried but important part of the overall usefulness of spec.


after watching that, I’m still not entirely clear what a model is.


I don’t see what is wrong with random data unless you depend on a certain structure for the string in your code


I think it’s only buried because it’s still in alpha and most people don’t “get spec” yet even for basic cases, and need to build a level of understanding that can support that level of detail


and that’s fine @schmee. I do.


ahh, then custom generators are the way to go 🙂


I think that video above and the one before it illustrate the point to some degree. I was watching the testing video, and kept going, “but you aren’t actually testing the function…the odds of generating a random string that is a substring of another random string are very low”.


but even at the end, if my-index-of was an actual function, there’s absolutely nothing in there that tests if the index returned is the correct one.


gotta grab lunch but would love to discuss after I eat 😉 you have a great topic and it’s at the heart of spec and how it can be useful to everyone


I think if I replaced the use of string/index-of with 1, all the tests would pass.


I don’t see how you could test that in a generative fashion without re-implementing the function itself


I suppose this is where example-based testing as a complement comes in


or generative testing as a complement to example-based testing, if you prefer


@schmee I think I agree. My concern would be that in the video (the 2nd one of 3 in the series on Testing), Stu specifically contrasts the generative approach with the assert approach where you have to do everything yourself.


Now, one interesting thing is that the custom generator contains the information (implicitly) that you need. The generator takes a prefix, match and suffix to generate items that will definitely have a match. If you could save the length of the prefix, you could verify that my-index-of was working correctly.


you could return a tuple of [(count prefix), rest of the stuff] and use that in your tests
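A sketch of that tuple idea (the `my-index-of` stand-in and `case-gen` are hypothetical names): generate `[prefix match suffix]`, carry `(count prefix)` along as a known-good answer, and assert the function finds a real match no later than it.

```clojure
(require '[clojure.spec.gen.alpha :as gen]
         '[clojure.string :as str])

;; Stand-in for the function under test.
(defn my-index-of [source target] (str/index-of source target))

;; Each generated case is [prefix-length source target], so the
;; target is known to occur at prefix-length (or earlier).
(def case-gen
  (gen/fmap (fn [[pre match suf]]
              [(count pre) (str pre match suf) match])
            (gen/tuple (gen/string-alphanumeric)
                       (gen/not-empty (gen/string-alphanumeric))
                       (gen/string-alphanumeric))))

;; The property: the reported index is an actual occurrence of the
;; target, and no later than the occurrence we planted.
(doseq [[pre-len source target] (gen/sample case-gen 100)]
  (let [i (my-index-of source target)]
    (assert (<= i pre-len))
    (assert (= target (subs source i (+ i (count target)))))))
```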


Hmm, doesn’t what is generated have to be arguments to the function under test?


I’m not sure, but either way you can still write tests that use generators without spec and do whatever you want 🙂


I think a subtle related problem is that the :ret is specified as &lt;= the length of the string, but it should be &lt; the length of the string because of zero indexing.


well, you can always do what you want 😛


haha, yeah


but I mean, it’s cool that spec can auto-generate tests and all, but if you have a very specific (generative) test you want to make it might be better to just write that the old-fashioned way and not try too hard to cram it into a spec-based approach


@actsasgeek which video are you talking about, with my-index-of?


you posted a video on custom generators. It was 3rd in a series of 3.


I have not watched it, I’m looking at it now


you can safely watch it at 1.25× speed 🙂


sorry, I have to pair for a bit so I’ll be away from this conversation but I am interested in continuing it.


So, the docs for spec.test/instrument seem kind of circular: &gt; “Instruments the vars named by sym-or-syms, a symbol or collection of symbols, or all instrumentable vars if sym-or-syms is not specified.” Is there a decent instrument lesson of some sort out there?
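The short version, as a sketch (this is essentially the ranged-rand example from the official spec Guide): spec a function with `s/fdef`, call `stest/instrument` on its fully qualified symbol, and from then on any call whose arguments don’t conform throws immediately.

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.test.alpha :as stest])

(defn ranged-rand
  "Returns a random int in [start, end)."
  [start end]
  (+ start (long (rand (- end start)))))

(s/fdef ranged-rand
  :args (s/and (s/cat :start int? :end int?)
               #(< (:start %) (:end %))))

;; Wrap the var so every call checks its :args spec.
(stest/instrument `ranged-rand)

(ranged-rand 0 10)    ; fine
;; (ranged-rand 8 5)  ; now throws: args did not conform to spec
```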


read the whole thing like 10 times if you haven’t already 😄


jackpot, don't know how I missed that!


the series of videos mentioned above are great introductions as well

joshjones 20:01:44

@actsasgeek Regarding this: the index-of “” within “” is 0, so in this one case the length of the return is equal to the length of the source. It’s an edge case, though, and you could account for it in the :fn portion of the spec.

So, to continue the earlier discussion: in order to test something, we have to have something known, correct, and reliable to test against. In the index-of videos, that was the “eye test”: make some sample data, manually count the characters, and see if the function spits out what we determine to be the “correct” value. The :fn spec can be made more robust (I’ll post an example later), but imagine we’re just spec’ing substring, determining whether a string is actually a substring of another. Well, this is not so easily spec’d, because that’s essentially what the function itself does! If you had another “source of truth” that could definitively tell you that, well, you’d just use that function instead.

Even when you write your tests manually, you’re still placing your trust in your ability to type the strings accurately. There is always room for error in the testing phase, and you can’t get around it. Perhaps the answer is to hand-write your own tests as usual, and it may make sense to do that. But one takeaway is that going through this process is likely to find edge cases that your own manual testing would NOT have found, even if it does not have the ability to determine with 100% certainty that your function is correct. (In the case of the video, though, I think it can be made more rigorous by altering the :fn.)
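One way to sketch that more rigorous :fn (names hypothetical, `my-index-of` standing in for the function under test): instead of only bounding the return value, check that the target actually occurs at the returned index, and that a nil return really means no occurrence.

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.string :as str])

(defn my-index-of [source target] (str/index-of source target))

;; :fn sees {:args conformed-args :ret return-value}. The property:
;; a nil return means the target is absent; a numeric return must
;; point at an actual occurrence of the target in the source.
(s/fdef my-index-of
  :args (s/cat :source string? :target string?)
  :ret (s/nilable nat-int?)
  :fn (fn [{:keys [args ret]}]
        (let [{:keys [source target]} args]
          (if (nil? ret)
            (not (str/includes? source target))
            (= target (subs source ret (+ ret (count target))))))))
```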


To the specific point, I was a little surprised that (index-of “abcd” “”) even works. It feels like an overly mathematical definition of “index of” because (index-of “abcd” “a”) returns the same result…but that’s neither here nor there.


To the general point, I think I agree. My point was that in the 2nd video, it seemed to me at any rate that generative testing was presented as an alternative to manual testing, just from the way Stu describes how “difficult” coming up with the manual test was. I completely agree that generative testing is much, much better at finding the edge cases. In a way, I’ve always thought of manual testing as a way of preserving REPL development of the shape of the function, and as documentation.


But documentation, examples, etc., are being touted as leverage points of spec, and it seems to me that these are going to require more custom generators than the relative scarcity of documentation for making custom generators would suggest. I could be wrong, tho.


but then I think that the opposite of conform should be deform and not unform. 😉