This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-08-06
Channels
- # announcements (4)
- # beginners (132)
- # calva (37)
- # chlorine-clover (60)
- # cider (1)
- # clara (12)
- # clj-kondo (40)
- # cljs-dev (109)
- # clojure (76)
- # clojure-dev (19)
- # clojure-europe (8)
- # clojure-france (17)
- # clojure-nl (4)
- # clojure-sg (1)
- # clojure-spec (14)
- # clojure-uk (7)
- # clojurescript (98)
- # conjure (96)
- # cursive (15)
- # data-science (2)
- # datalog (11)
- # datomic (24)
- # emacs (17)
- # figwheel-main (3)
- # fulcro (45)
- # jobs-discuss (1)
- # kaocha (3)
- # malli (2)
- # nrepl (1)
- # off-topic (135)
- # portal (2)
- # re-frame (17)
- # reagent (11)
- # reitit (4)
- # sci (60)
- # shadow-cljs (75)
- # spacemacs (3)
- # sql (32)
- # tools-deps (79)
- # vim (88)
- # xtdb (4)
I, personally, wouldn’t try to get around it; I'd just learn how they do it at the company, learn the style (even if only as a curiosity), and experience the tradeoffs yourself so you know why FP does FP things. Fewer conflicts, more perspective.
how does one even do FP in golang
https://twitter.com/vincentdnl/status/1291041278264713220 https://pbs.twimg.com/media/EelADOtWkAEVZ6o?format=png&name=4096x4096
Or when the 'Senior Business Analyst' who is also a "Software Developer" messes up the R functions on the backend before the data gets loaded into the database, the job becomes "please invert the axis on the frontend so that the tests pass".
no joke, yesterday I got tagged for a "display issue" in the frontend of an internal tool, after reaching out to multiple other teams it turned out that the root cause was a nightly script on a microservice had silently decided not to run the night before
So there’s this quote in the “History of Clojure” paper: > Type errors, pattern matching errors and refactoring tools are venerated for facilitating change instead of being recognized as underscoring (and perhaps fostering) the coupling and brittleness in a system. I’d be interested in hearing more on this topic. I’ve seen the “Maybe Not” talk and I appreciate how nullability being tied to the type of a thing rather than the context where it’s used can be problematic, but I don’t quite see how type errors or refactoring tools, for example, would be conducive to fragility, as such. An example would be particularly enlightening. Any thoughts? (I’m not interested in pitting static and dynamic typing against one another. I’m just interested in the reasoning behind the quote.)
my take on it is that people falsely venerate the idea of changing something and having the compiler tell you the 57 places that need to be fixed
I think I'm gonna steal "Tool-assisted coupling"
I think that this sort of argument is in my experience met with a level of suspicion that I find surprising... A lot of which in my mind boils down to, "so how do you handle it then?" There'd be a lot of benefit in my mind to clearly spelling out a better approach...
That's a lot more eloquent than the long reply I was still halfway through writing but... yeah, what he said!
I view it as a bit similar to the way folks seem happy to accept vast amounts of boilerplate in Java because "my IDE writes that code for me".
The people from "strongly typed" (whatever this may mean) languages will tell you that their language is a whole different experience from Java, but you still have the same coupling problems I think.
This is all contributing to why I like really simple tooling in my editor and around my language: I don't want to end up depending on "magic" happening and lose my connection with what's really happening under the hood.
(`lein ring ...` is a good example of that, in my opinion: beginners who start out that way don't know how to start the HTTP server themselves and so they don't adopt a "healthy" REPL workflow around working with server processes)
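For what it's worth, the "healthy" workflow being described is just a few lines at the REPL. This is a minimal sketch (the `app` handler and port are placeholders; it assumes `ring/ring-jetty-adapter` is already a dependency):

```clojure
(require '[ring.adapter.jetty :as jetty])

(defn app [request]
  {:status  200
   :headers {"Content-Type" "text/plain"}
   :body    "hello"})

;; hold on to the server so it can be stopped/restarted from the REPL
(defonce server (atom nil))

(defn start! []
  ;; :join? false returns immediately so the REPL stays interactive
  (reset! server (jetty/run-jetty #'app {:port 3000 :join? false})))

(defn stop! []
  (when-let [s @server]
    (.stop s)
    (reset! server nil)))
```

Passing `#'app` (the var, not the function value) means redefining `app` takes effect on the running server without a restart.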
Type systems aren't so much magic though. Garbage collection, JIT, etc, is more magic than types I would say.
Code generation of getters/setters, code modification through refactoring tools...
there are a lot of things that can shore up bad designs, static type checks can do that, editors with "jump to source" shortcuts facilitate bad code organization / factoring
Is this similar to open `source` tools in the repl? Or are you talking about something entirely different?
I'm talking about the ability to instantly jump to definitions and usages for editing - it can encourage disorganized code bases with coupling between unrelated components
@noisesmith I could be missing something, but I am unsure how rummaging through text files to find a symbol encourages good design. Sure, you could use regex / grep / equivalent, but my understanding is that the option was introduced in IDEs like Eclipse to help with finding methods, which are the primary means of dispatch (indirection?) in a language like Java. The verbosity of the language can lead to a proliferation of methods, and jumping out of lexical scope to find one can take a long time. I think CIDER supports similar functionality with `M-.`, but I do not see how it encourages bad structuring of code. Any tool can be abused, I guess, is what I am saying
having to jump to usages and definitions should have friction to it. my experience is that codebases made by people who use their jump-to-source editor feature frequently are more likely to be poorly factored, and to leak implementation details. if I have to edit 12 files to make one feature, and 40 files if I change an implementation detail, those are signals that my design is broken, and I want to see those signs and fix them early rather than later. I want to directly experience that friction so I can fix it early
that said, I don't mean "never use jump to source" I mean: if your project is impossible for me to collaborate with you on, because I choose not to leverage jump to source in my workflow, that is a flaw in the project
I am curious - Do you mean there is no project that can be sufficiently complex to warrant the existence of such a feature? And would you eschew such a project still? 🙂
it's the clojure version of those java spring framework projects that are impossible to understand or work on if you don't use a powerful IDE - it's using an external tool to patch over flaws in your core design
I can navigate clojure.core just fine, it's probably bigger than your project, it's bigger than the apps I am describing
I agree with you on the last point. I think java frameworks are designed around the tooling
it's not the project size that causes the problem, it's bad design
Minus the tooling, you are lost, which is a shame
right - but the jvm and its built in offerings? I can get by with the `javadoc` function in the repl - it's a tooling and community failure, not a problem with the language
and I want to help our community avoid similar mistakes
@noisesmith You are right, but none of the issues you are raising are because of the jump to - that is a feature which is abused. Bad design can leak into any project since it arises from a human flaw (lack of understanding or architecture and design), and not necessarily tooling. Yes, tooling can encourage it, but I doubt it is the source.
I respect your nobility in trying to help the community look in the right direction, that's admirable mate
> there are a lot of things that can shore up bad designs, static type checks can do that, editors with "jump to source" shortcuts facilitate bad code organization / factoring I was at least attempting to be a bit more nuanced - I'm also not saying "never statically check types"
but I want to encourage being careful about ways tools can facilitate bad designs
I saw that, but I couldn't wrap my mind around the jump to source feature. I certainly agree with your points, but want us to be clear that certain feature x does not necessarily lead to code smell or bad design y. That is a function of dev knowledge + experience -
Someone can quip that macros are bad (I have heard such language out there). Some languages have even gone further and introduced hygienic macros - whatever that means. But you agree that it would be wrong to imply that having the `macroexpand` feature in clojure can be a source of bad design
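For reference, the feature in question is a one-liner at the REPL; it shows a single step of expansion for a quoted form:

```clojure
;; macroexpand-1 expands the outermost macro call exactly once
(macroexpand-1 '(when true (println "hi")))
;; => (if true (do (println "hi")))
```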
right, but if I can't understand any of your code as I read it, without a pop up tooltip that automatically macroexpands the source, that would be the same flavor of problem I am talking about
the problem isn't reading the source, or expanding macros, or having tooltips - it's lazy development enabled by tooling which turns design flaws into negotiable tech debt
Gotcha yer, thanks for painstakingly explaining your thinking through to me. I appreciate it -
The same could be said of garbage collectors and functional programming: they facilitate writing slower code :)
there are multiple metrics of good vs. bad code, legibility is the only one that is fungible, it can literally be traded in to gain other improvements
if gc and fp help me improve legibility, they can free my time to improve performance or resource usage where it matters
what I don't like are language features / tooling features that help people skate further on reduced legibility
Some people would argue that types help them read code better, since you don't have to guess what `f` and `xs` are.
but they use type inference so they aren't even able to read the type in their code
it's something that informs the tool, they then leverage that tool
I'm not saying "no types", I'm saying they are over emphasized
we advocates of dynamic typing generally assume that without types, people will structure their code in a way that is less coupled and better organized
advocates of static typing generally assume that people will be able to encode useful properties of their system into their types, and that doing this is helping solve some business problem
the only times that typing problems have been a serious issue in my clojure projects have been when I'm collaborating with people who are trying to code the way they would in ml, without any of the tooling that makes this productive
@noisesmith can you give an example of that?
@borkdude refactoring the unabstracted implementation of a concrete data type touched by multiple microservices and marshalled via a shared io library
in a language with strong types, this is trivial, though tedious
in clojure, it's a huge waste of time
root problem: you didn't abstract the data properly in the first place
which you can get away with in ml where the compiler takes care of it for you
weeks of time wasted on adding a few bytes to a field
@noisesmith I wonder what they did to marshall that thing to e.g. JSON? In Clojure we do `(cheshire/generate-string ...)`, in typed languages you often describe a per-field mapping to JSON, which is tedious (but can also yield much more performant code if you don't want to serialize everything). How did they do that in Clojure?
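To make the contrast concrete, here is a minimal sketch of the two styles side by side (field names are hypothetical; assumes cheshire is on the classpath):

```clojure
(require '[cheshire.core :as cheshire])

;; Clojure style: generic whole-value serialization,
;; no per-field mapping to maintain
(cheshire/generate-string {:name "drew" :roles ["admin"]})

;; typed-language style, emulated: an explicit per-field mapping,
;; tedious but it only serializes exactly what you list
(defn user->json [user]
  (cheshire/generate-string
    {"name"  (:name user)
     "roles" (:roles user)}))
```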
this was an overly coupled combination of the apache avro lib (used with kafka) tied to clojure.spec and some overly clever glue code
diagnostically, it's the failure to create an explicit boundary between "data from the outside" and "data inside my application"
this was also enabled / festered by using a monorepo, so each service could exploit internals of others
I'm not sure if monorepo has anything to do with it. I once worked in a Scala team where they had many repos and one of the repos was called types :P
anyway - a half-assed imitation of strong typing ended up with the worst of clojure and ml in one sweeping, endless-layers-of-slowly-exposed-bugs refactor
@noisesmith Do you think just throwing EDN on Kafka and using spec on that EDN would have been better?
sure - monorepo didn't cause the problem, attempted typing didn't cause the problem, clojure's laxity with data didn't cause the problem, but they all contributed to the storm that occurred
@borkdude I think it would have been more tractable, but this is more about the way that contracts between services are managed, and how they are designed in order to evolve together IMHO
Once again proving that the really difficult bits of software development have nothing to do with things like typing. 😛
also there was an attempt to make spec generate avro consumers/producers, and this was exciting but in the end a source of a lot of problems. it would have been less work and less brittleness to just have the two definitions, or a DSL that was smart enough to create a spec and an avro schema that agreed with one another
I heard Mia (https://twitter.com/buttpraxis) talk on defnpodcast about her work with Kafka and spec. She said that they would almost never write specs directly but write code generating the specs for them. Do you perhaps work at the same company? ;)
we were on the same team
I think my memory / long term experience with the avro generator is less happy than hers
then again, thinking out loud at this point, if we were using a strongly typed language, we probably wouldn't have tried to pretend that our message exchange format was the same abstraction as our data types / language level type verification
I mean, that would be absurd
but if engineers on our team didn't think that data types were the solution to contracts between services, that also would have prevented the problem
uncanny valley, worst of both worlds IMHO
but I wouldn't characterize the company, our team, or those collaborators by this situation, it's just the worst thing technically that came up, IMHO
oh no this was years ago, and many people left shortly after the service went live
I guess this is one of those things you could write tons of "Patterns of Enterprise Blah Architecture" books about
Zach Tellman (who consulted with us about architecture), was writing Elements of Clojure (a very good book about how to do architecture correctly) at the same time he consulted with our team
in fact if people had followed his advice more carefully we wouldn't have had this issue
One counterargument I believe I’ve heard is that in some projects, the realities of software development can drive teams into situations where you just need to make a change that requires a fix in 57 places, and if you have static typing, you at least have some help in making that happen. Yes, the real problem lies elsewhere, but whatcha gonna do? Something along those lines.
right - it's a strategic enhancing of brittleness to make things fail fast so you know they are fixed
but it mixes very poorly with a strategic flexibility that lets you not care about the implementation details of other parts of the code
unless the relationship between the two is very well defined
it's as if a tent maker and a skyscraper maker were trying to make modular parts of one structure
sure, that's a very well defined relationship
on a human level, it turns so easily into a type-advocate bitter about the sloppiness of the language making these problems hard, and a dynamism advocate bitter that coupling is making the problems hard, and hey they are both right - the mismatch and ill defined boundaries are the problem, not the things on each side
which is what I like about what @lilactown said above https://clojurians.slack.com/archives/C03RZGPG3/p1596739137163400
so no matter what language you throw at it, the system architecture is the more important issue. which is also what Rich Hickey has emphasized in his talks I think
that's my take, but hey I'm a clojurist :D
imagine how terrible ring would be, if the core data abstraction inside your middleware and handler code was the TCP packet
we recognize how wrong that is, but do the equivalent too often in microservices
whatever data abstraction describes your IPC - so the avro message, or json, or transit, or edn - I'm advocating for having an explicit boundary between app data and data sent to other services
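A minimal sketch of that boundary (all field and key names here are hypothetical): translate wire data into the app's own representation at the edge, and back again on the way out, so internal code never depends on the wire format.

```clojure
;; Wire format uses string keys (as decoded JSON/avro payloads typically do);
;; the internal format uses namespaced keywords owned by the app.
(defn wire->order
  "Translate an incoming message into the app's internal shape."
  [{:strs [orderId customerName totalCents]}]
  {:order/id       orderId
   :order/customer customerName
   :order/total    (/ totalCents 100M)})   ; cents on the wire, decimal inside

(defn order->wire
  "Translate the internal shape back into the wire format."
  [{:order/keys [id customer total]}]
  {"orderId"      id
   "customerName" customer
   "totalCents"   (long (* total 100))})
```

If the wire format later changes (a renamed field, extra bytes in a value), only these two functions change; the rest of the app keeps trafficking in `:order/*` data.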
I think the concept nowadays is called Hexagonal Architecture...I have been reading a bit about it mainly triggered by curiosity. The Nubank microservice example contains a reference to it so I discovered it from there the first time. https://github.com/nubank/basic-microservice-example
Is the issue about IPC, or is it that different readers (clients, services, etc.) can have different interpretations of the same data? For example, you might think of a name as meaning the full name, where another reader might expect just the first name. Internally, we always have to store the data as granularly as possible to serve all callers. So what we need a translator for is from the reader's request to an internal representation. I have felt it would be interesting to capture that translation model in the database itself using a rule engine. E.g. if a name gets added we create a version for the internal system and one for each representation: Internal.Fname: drew, Internal.Lname: verlee, A.fullname: drew, B.fullname: drew verlee. The alternative is to have code that does the translation every time someone asks. The downside is that the logic itself isn't queryable.
It's that the "interpretation" is implicit in sloppy clojure code, and doesn't have a separate translation layer to relevant values
"it's just data" but misapplied across the wire
putting the translation model in the db itself is the worst possible choice
it doesn't fix the problem here, in fact it amplifies it
I'm not sure I understand the problem then.
I was interpreting it as: different callers mean different things with the same request. So who is asking matters. That has to be handled somewhere.
I wasn't saying you build an API which depends on the caller. I was thinking that ingested data has to be categorized. In that light, yes it doesn't make sense to store multiple representations
unless you control all those services, it's the same problem with internal APIs and public or library boundary APIs?
people learned the lesson earlier with XML / CORBA but less generally than they should have I think
e.g. don't return field :foobar/id
unless it's important. you can never remove it, since that would be breaking
@borkdude yeah actually, that's a fair point as well, though within some inner domains it makes sense to keep one data format for sanity's sake instead of having onion layers of transforms
the process itself is a nice place to have the explicit boundary
This is why I advocate for three separate representations, if only in naming: API (external) data, Domain (internal) data, and Persistence (external) data -- what you expose to clients, what you traffic in within your business logic, and what you write out/read in from constrained external sources.
That said, in many cases, you can control enough of the Persistence format to let it and your Domain format overlap substantially.
another classic example of doing this very wrong: ORM
same basic problem at root - trying to impose coupling across a boundary that's better left standing
ORMs are "wrong" on so many different levels...
(one of the inherent problems is conflating identity and state there, between objects and relations)
I find it interesting that I've felt smug about ORMs for a while, being a functional programmer, but fell into the same mistakes on microservice design
the problems can persist even with immutable data as values, if you don't have the right overall structure
We're lazy programmers. We look at the two domains and we squint a bit and we convince ourselves they're "close enough" and we don't need to bother designing and building an actual "mapping" to separate them.