
plain ol' waste of time, in my case, as I'm in large part writing tests to catch things i would have wanted the compiler to catch, or to document how the system works for the next poor sod.


the way i was taught to do tests is sort of a cross between 'ensure the system behaves as it should' and 'document how this works for the next person'

bvulpes: another way to say this is "max dynamicism or max static"


i only ever wrote a few very small systems in CL, but the SBCL compiler was just so on top of things if i bothered to specify argument and return types.


the CLOS dispatch system in combination with the compile-time typechecking gave me an absurd amount more confidence in the code that I was writing, at least compared to clojure.


And then there’s the whole question of whether you’re writing tests first as a way to guide the design of your code (TDD/BDD) or writing code first and then writing tests to verify (a subset of) its behavior.


@seancorfield: how would you characterize schema's behavior in those two scenarios?


That "question" wasn't in the context of Schema, to be honest, and I'm not really sure what you're asking...?


What do you mean by "Schema's behavior" in that context? (of TDD vs after-the-fact unit testing)


ah, i misread then.


@bvulpes: @seancorfield for me the main reason to use Schema is that I can see what my function returns and the parameters it takes. Especially regarding the structure of the maps and vecs.


yes, it's useful documentation


which is actually checked


Also, I recently read somewhere that TDD brings a lot of value regarding bugs and pays off in the long term, even though development initially takes somewhat more time than writing tests after the code.


Maybe it would be an idea to have a tool which stores sample arguments (during tests/dev) to some database that you can later use to infer some types/shapes in the documentation


Two words: good naming.


Seriously. Use good names so the code is self-explanatory. The less "code" you have to read to understand the actual code, the better.


Schema isn't bad in that regard. It's fairly minimal annotation, but it's still a shift from standard Clojure to Schema-flavored Clojure.


@seancorfield: I was thinking of a function I had in a CRUD app that expects a nested hash-map and then passes it on; no amount of good naming documents the expected shape


(defn store-person [person] ...)


map destructuring of course helps
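For illustration, a sketch of the two styles, assuming a made-up Person shape (Schema here means Plumatic Schema; the field names are invented for this example):

```clojure
(ns example.person
  (:require [schema.core :as s]))

;; Plain Clojure: destructuring hints at the top-level keys, but says
;; nothing about nesting or value types.
(defn store-person-plain [{:keys [name address] :as person}]
  person)

;; With Schema, the expected shape sits right next to the function.
(s/defschema Person
  {:name    s/Str
   :address {:street s/Str
             :city   s/Str}})

(s/defn store-person :- Person
  [person :- Person]
  person)

;; Validation only kicks in when enabled, e.g.:
(s/with-fn-validation
  (store-person {:name "Ada" :address {:street "x" :city "y"}}))
```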


And I think if you're doing TDD/BDD, where you write the tests first, you have the opportunity to work on language, as in "the words used to describe the domain", and that -- executable "specifications" -- provides rock solid "documentation" about the intent of the code.


@borkdude: Why doesn't person signal that? Is it the word used in your business domain? Is it the word that everyone in your company understands?


@seancorfield: One example: this lib is a wrapper around liquibase and will do awesome stuff for you. But its column shape looks like this: [:id [:varchar 30] :null false :pk true :autoinc true]. I never figured out a better name than column-description or something like that, but I still always had to look up how it looks exactly, because in your function you only see code like (first (second ...))


But the problem there @sveri is that the library isn't providing a domain-specific way to build those values.
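For instance (a sketch, not the actual library API), a small constructor with named options could give those positional vectors a self-documenting call site:

```clojure
;; Hypothetical helper: named keyword options instead of having to
;; remember the bare positional vector layout.
(defn column
  "Build a column description vector like
  [:id [:varchar 30] :null false :pk true :autoinc true]."
  [col-name col-type & {:keys [null pk autoinc]
                        :or   {null true pk false autoinc false}}]
  [col-name col-type :null null :pk pk :autoinc autoinc])

(column :id [:varchar 30] :null false :pk true :autoinc true)
;; => [:id [:varchar 30] :null false :pk true :autoinc true]
```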


I suspect that API was not Test-Driven ... if you have to "guess" what the shape of the data is, you haven't written tests around passing that data in ... or at least you haven't thought about that use case. No disrespect to Shantanu -- he produces great software.


@seancorfield: I had tests, but then I had to look up those tests. I think it's convenient to have the shape near the function arguments and that is what Schema gives me


I'm lazy and don't want to search


Different folks like different approaches. Schema doesn't work for me but it's not horrible, and I can see why other folks seem to love it.


I think my ideal type system would be one that allowed completely opaque types to be defined, that you could refine later on.


No language exists like that yet as far as I know.


(this goes back to @ericnormand's comment about types being "the future", not the present 😆 )


naming is an incredibly useful tool. having function signatures (with args as named in their source definitions) displayed unobtrusively in my IDE alleviates a megatonne of jumping around through code.


granted that does nothing for the "and precisely what shape did a person have again?" problem


@bvulpes: but neither Schema nor core.typed help you with that while you're actually writing the code in your IDE, right?


(and, yes, I know it helps with Java / Scala in their IDEs, when you get popup help after foo. when the IDE can deduce what foo might be ... but (:something data) doesn't really lend itself to that sort of popup help 😄 )


you mean those weird function calls where the first argument is to the left of that odd period character?


As a side note, whole program analysis with a unification engine running in your IDE could help with Clojure but, boy oh boy, that would be a lot of work!


myeah and i'm going to guess nigh-impossible without a type system.


anyways, i make do with the function arg names in the minibuffer. not great, but gets the job done.


@bvulpes: well, if you see (:something data) you can infer that data is a map with an (optional) key :something (depending on the surrounding context) so with an entire program to look at, you can infer a lot of things ... potentially ...
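A toy sketch of that kind of inference, purely illustrative: walk quoted code and record which symbols have keyword lookups applied to them, i.e. "this symbol is probably a map with these keys":

```clojure
;; Toy sketch: find (:kw target) forms in quoted code and collect
;; [target keyword] pairs as evidence about map shapes.
(defn keyword-lookups [form]
  (->> (tree-seq coll? seq form)          ; walk every subform
       (filter #(and (seq? %)
                     (= 2 (count %))
                     (keyword? (first %)))) ; keep (:kw target) calls
       (map (fn [[k target]] [target k]))))

(keyword-lookups '(defn full-name [data]
                    (str (:first data) " " (:last data))))
;; => ([data :first] [data :last])
```

A real tool would also have to track bindings and data flow, which is where the "whole program analysis with a unification engine" part comes in.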


sure, but i tend to write the keywords first, and what precisely is the IDE going to suggest in that case?


every map ever used in my proggy that had that key?


When I worked on that in my research (early 80's) it was certainly an interesting problem to work on.


Well, the IDE could look at (:foo bar) and say "Hey, bar probably doesn't have a :foo!"


@seancorfield: i would not even argue that the state of the art in the eighties trounces any technology i've ever deployed to any production environment ever.


When you have the whole program, you can do a lot of stuff.


i can see it.


As another example (and this was a really painful project), I was involved in a three-pass C language analyzer that converted programs to Malpas, for formal static analysis.


hahaha go on


Part of that was to eliminate all global variables by making them arguments that were passed recursively down through the entire code chain. DYAC!


Pointers had to be turned into enums (over the set of things they could refer to) as well.


We have a long way to go.


"Different folks like different approaches. " -> that's exactly why some people complain about why Scala doesn't scale in some companies 😉

bvulpes: a bit of heterodoxy, and not a best practice in any sense, but on my last few projects i've regarded (my) tests as constraints on the possible spaces through which the program can transition. i generally write them in parallel with the code that i'm writing (which, granted, people with the budgets and timelines to write as many tests of however much verbosity they want tend to regard as sloppy or complected tests), and use them to pin down the behavior that i demand of the system under test.


to put it another way, i hold a lot of the program in my head at once, and use tests to pin down details and compound behavior.


This seems to be the thought of Uncle Bob's latest blog, which I didn't find very convincing on its own


TDD > Static Typing


But maybe I haven't done enough TDD then


you wouldn't cut a chair leg on a mill, would you?


@borkdude: "giving up on TDD"?


@bvulpes: who are you quoting here?


@bvulpes: I don't know this saying, what does it mean?


by "Uncle Bob's latest blog" do you mean the post titled "Giving Up on TDD"?


or "Type Wars"?


"Type Wars", or isn't that his latest?

borkdude: "but the major source of pain for the projects I work on has been people not knowing how to structure their code"


@borkdude: like I said, there was a paper that found significant gains in code quality when doing strict TDD. If I remember, I will look for it at work on Monday. Whereas I have not found a paper yet that shows gains in the typed vs. dynamic debate, no matter the side.


@sveri: cool, I'll be happy to read it. I think the mistake that Bob Martin makes is that TDD somehow replaces static typing. It depends on which goals you have in mind with static typing. The set of goals of each are overlapping, but not identical.


There are companies where it’s possible to compare tools. For example, Rewe Digital has like 15 dev teams across the JVM, including a Clojure (hopefully 2 soon) and Kotlin team. (And they just hire senior devs, which maybe normalizes things a bit.)


However, incompatible language cultures usually hinder communication...


(Fortunately, the Clojure team is considered very good, which I verified with managers.)


Clojure devs are usually very good. A lot of beginners give up because of the syntax (superficial), mysterious stack traces (takes time to get used to if you don't know Java), not being able to set up your dev environment, not getting the idea of working with a REPL, etc. When you get past all that, you're pretty good already 😉


I remember fighting with getting the classpath straight in Emacs + slime/swank in 2009 and not giving up until it worked. I'm not sure if I would do that again if I was trying a new language.


Of course Clojure devs are usually good because of other reasons too, being able to appreciate the design of Clojure for example and the ideas behind FP in general


One of the smartest students I trained really got Clojure. Now he is a Haskell fan and knows an awful lot about FP, much more than me.


@tjg is it really the incompatible language culture that hinders communication? Or more the objections developers have against other languages that leave them closed to different ideas / approaches to problem solving? From my point of view, differences in culture usually enrich communication instead of hindering it, as long as people are genuinely interested.


This was my summary from back then:


1. Code coverage is no indication of the quality of code. Regarding code coverage, it is beneficial to achieve higher coverage of complex code.
2. TDD: produces 60 to 90% better code in terms of defect density than non-TDD teams, although TDD teams take about 15 to 35% longer to complete projects.
3. Use of assertions: no numbers, but the paper finds that higher assertion density leads to lower fault density, stating at the same time that the statistical data is not significant enough.
4. Organizational structure: "Organizational metrics, which are not related to the code, can predict software failure-proneness with a precision and recall of 85%". This is significantly higher than other metrics such as churn, complexity or coverage.
5. Geographical distance doesn't matter. The data found had no statistical significance.


The most interesting parts for me were points 2 and 4, whereas the other points did not surprise me at all.


Also, it's nice for a developer to have some numbers like these. You can approach your manager now and tell him that he can have fewer features in better quality or more features with more bugs, and in the end it's up to him to decide that.


@sveri: Yeah, probably closed to different ideas/approaches.


@sveri: thanks for linking to that. I read it before but it bears rereading and some things run counter to intuition and some things confirm my intuition.


TDD is a hard topic to discuss because there are a lot of people who have tried it and failed to find value in it, or at least failed to find enough value.


Uncle Bob has talked about some of the reasons why, as have Beck and many of the others in the Agile / XP / Software Craftsmanship camps. And that's why you hear a lot of "If you think TDD doesn't work then you're just not doing it right". A sentiment which, while most likely accurate, really helps no one (since, on the face of it, it's the No True Scotsman argument, which is a known fallacy).


Many years ago, I worked at a company that built static source code analyzers. As QA tools. They could enforce coding standards but they could also find bugs -- and via statistical analysis of source code metrics they could predict maintenance hotspots and sometimes highlight some very bizarre code bugs.


The analysis wasn't concerned with types but with idioms and measurable aspects of the code.


The tools were easy to sell in Europe and Japan where ISO 9000 held sway but were very hard to sell in the USA due to the mindset of programmers here. This was in the early 90's.


Tying this back to TDD, and also to other languages, and to that MS research: code quality needs to be baked into your system from the start and that needs to be an organizational thing and a process thing. The programming language doesn't matter as much as the approach. Type systems catch a certain class of programmer error but they don't catch others. TDD, done properly, can help you design "correct" software in any language ("correct" in quotes because we're just not good enough to hit 100%). But, just like any other tool, TDD alone is not enough for a successful project (MS findings about project failure predicted by organizational issues).


@sveri: thanks for the information


@seancorfield: I totally agree that these findings are language agnostic. At work we do not do TDD, but we write a lot of unit tests as part of the feature work, also integration tests for these, and we have two guys working in QA who do a lot of UI test automation. So our test suite is pretty extensive and we have a relatively low bug count (keeping it mostly under 10 known bugs over the last 5 years with some spikes, for a code base of several hundred thousand lines of Java code). Still, I am very certain that if we did TDD our code would look better from a design and architectural perspective. We are slowly reaching a point now where we fight our code base and "design decisions". For the last 5 months we tried to put a REST interface onto our web application and finished like 20%, which is much less than everybody expected. Understandably, management is not that amused about that.


I also sent around the findings of that paper, also to our management, but got no feedback about it, so, well, what should I say, there is only so much you can do


I have the feeling that with Clojure you can at least move forward. Recently I saw a colleague struggling with some type decisions another colleague had made and it took him two or three hours to work around this


@borkdude: So much this. I cannot count the hours I spent fighting inheritance and strange patterns we implemented ourselves. This also includes my code of course. Decisions that seemed to be good turn out to be a disaster a few months later, and then you have so much code using all these patterns already...


In Clojure I would just add another key to my map, ding dong, ready


@sveri: At my work we have microservices, but all common types are in one library called 'types'


@sveri: This is probably a design mistake


@borkdude: I am sure you will find out sooner or later


@sveri: I'm not sure. First I thought: this is much better than what we have in Clojure, type safety, etc. But then I saw my colleague struggle with this problem I just mentioned.


@borkdude: You will see him do that again and again. Code and requirements do change, so types have to do too. While it may be easy to change types, it is hard to change all the eco system that you built around these types and that make certain assumptions. Like inheritance and design patterns. The only thing where I find it superior is when I do refactoring, with java and eclipse I can even do it across projects and it works for standard cases.