Alexandre EL-KHOURY 09:02:35

Hi guys! I'm using metosin/reitit-swagger {:mvn/version "0.5.18"} and I'm having some difficulty defining an optional body param. Here's what my code looks like:

{:post {:roles #{"employee" "admin"}
        :spec ::upload-request
        :handler post-asset
        :summary "Post an asset"
        :parameters {:path {:company-eid string?}
                     :body {:asset-category string?
                            :model-subcategory string?
                            :thumbnail string?
                            :name string?
                            :model string?
                            :gender string? ;; optional
                            :accessory-subsubcategory string?}} ;; optional
        :swagger {:security [{:Bearer []}]}}}
Thanks 🙏


Very terse suggestion (not at computer): compose the individual spec definitions in a spec/keys expression, defining optional and required spec names. Then use the composed spec to validate the request


is that spec or malli(-lite)?

Alexandre EL-KHOURY 09:02:36

Thanks for your answer, I already use s/def, s/keys..

(s/def ::upload-request
  (s/keys :req-un [::company-eid ::name ::model ::thumbnail]
          :opt-un [::validator/gender ::accessory-subsubcategory]))
I was also wondering how to use them as parameters?


AFAICT, reitit supports this with the data-spec syntax, so you can try, as suggested there, using ds/opt.
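For reference, a minimal sketch of what that might look like with reitit's data-spec coercion (assuming spec-tools is on the classpath; keys wrapped in ds/opt become optional):

```clojure
;; sketch only: optional body keys via spec-tools.data-spec/opt
(require '[spec-tools.data-spec :as ds])

{:post {:parameters {:path {:company-eid string?}
                     :body {:name string?
                            :model string?
                            :thumbnail string?
                            (ds/opt :gender) string?
                            (ds/opt :accessory-subsubcategory) string?}}}}
```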

Alexandre EL-KHOURY 10:02:05

Thanks a lot guys!


Once again, I'm thinking about the best way to do automated testing (low-level tests, microtests, unit tests, whatever you want to call them). I've been inspired by James Shore's excellent writing; his approach to testing has a few novel ideas, in particular Infrastructure Wrappers, Nullables, and Embedded Stubs.
Going back to my starting point, I'm looking for the best way to have fast tests for code whose dependencies interact with the external world. I want tests that:
• have a good effect on architecture, TDD style
• allow me to iterate with fast feedback without constantly running the code manually
• let me refactor with confidence, without fear of regressions
I'm wondering about people's experience with these kinds of tests in Clojure (and Babashka).


Here's a more concrete question. I work with code that runs subprocesses. In Java, we use a class, ProcessBuilder, to call subprocesses. James's approach relies on Infrastructure Wrappers, which wrap third-party infrastructure code like ProcessBuilder. For example, he'd create a ProcessWrapper class of his own and only use that in his code. That ProcessWrapper can then be swapped out for alternative implementations in tests. The benefit is that a ProcessWrapper object is something that can be passed around and into other classes as an explicit dependency. This makes dependencies visible and explicit.
In Clojure, on the other hand, we avoid classes most of the time. Instead, we simply use a namespace, like clojure.java.shell or babashka.process, which provides the needed functionality. I'm wondering how to swap in test doubles for these in practice. Of course with-redefs exists and is convenient; for instance, you can swap out sh for your own null implementation, or perhaps replace it with a mock. The problem is:
• It's very coarse-grained: you need to replace the function completely or not at all
• It's implicit: the dependencies stay invisible
How do people feel about this, especially as a codebase gets bigger?
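To make the with-redefs option concrete, here's a minimal sketch (the function names are illustrative, not from any particular codebase):

```clojure
;; code under test shells out via clojure.java.shell/sh
(require '[clojure.java.shell :as shell])

(defn disk-usage []
  (:out (shell/sh "df" "-h")))

;; in a test, swap sh wholesale for a stub; note it's all-or-nothing
(with-redefs [shell/sh (fn [& _args] {:exit 0 :out "stubbed" :err ""})]
  (disk-usage))
;; => "stubbed"
```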


(If anyone is familiar with James's blog, I'm also curious how the concept of Embedded Stubs can apply to namespaces like clojure.java.shell or babashka.process.)

respatialized 15:02:23

You may have seen this already, but there is an example lib that adapts that exact article to Clojure!

👍 2

> In Clojure, on the other hand, we avoid classes most of the time. Instead, we simply use a namespace, like clojure.java.shell or babashka.process, which provides the needed functionality.
You mean, calling the functionality directly from the code-to-be-tested? Maybe the solution for this special case could be to provide this functionality via DI / a component framework?


(and it looks like you have, but I’ll leave it here for the benefit of others interested in this approach)


> calling the functionality directly from the code-to-be-tested? Yeah exactly. You're right, I'm dancing around the word but Dependency Injection is the topic I'm talking about


Coincidentally James talked about this 15 years ago > “Dependency Injection” is a 25-dollar term for a 5-cent concept. That’s not to say that it’s a bad term... and it’s a good tool.


babashka.process has tests ;)


I have some projects that need environment variables which you can't alter in a JVM. For that I sometimes use dynamic vars or functions which you can swap out for tests. I avoid it when I can.
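A tiny sketch of that dynamic-var trick (the names are made up for illustration):

```clojure
;; default implementation reads the real environment
(def ^:dynamic *getenv* #(System/getenv %))

(defn db-url []
  (*getenv* "DATABASE_URL"))

;; in a test, rebind it; a map works as a function of its keys
(binding [*getenv* {"DATABASE_URL" "jdbc:h2:mem:test"}]
  (db-url))
;; => "jdbc:h2:mem:test"
```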


Basically, we as clojurians are fond of saying "let's just call the function directly" as a pragmatic attitude, but I'm wondering about the trade-offs of this choice (compared to, say, Java's)


> babashka.process has tests
Didn't say that it doesn't. I'm wondering more about code that *uses* babashka.process. For the babashka.process library itself, you need some kind of integration test that actually creates subprocesses etc. (and that's …


Maybe I am overusing it a little because I come from the Java world, but we tend to build one mount component for every interaction which has to be swapped out for testing. The replacements we then use are still specific, though; if you’d like to have more generic replacements, you would need to add another abstraction.


When the idea of "dependency injection" was created, it was viewed as a magical thing because it was magical in the languages it was needed for. An opaque convoluted system would jump through all sorts of hoops to swap things in place for you in the runtime. In a dynamic language like Clojure, there is no magic needed to get the same behavior. All you do is pass your dependency as an argument to a function. Then you can test that your function correctly interacts with the dependency by passing in some sort of spy or mock instead of the real dep. And that's it.


> All you do is pass your dependency as an argument to a function.
I think that's true in principle (and it's true for Java as well). I guess I'm wondering how to do DI (without a framework if possible) right: how best to pass in functionality like sh as a dependency, as a simple function argument.


I've been happily using the following recipe in many codebases:
• whenever possible, src functions have decoupled pure and impure parts
• pure functions are very easy to test and never need mocks, DI, w/e
• impure sections are tested thanks to simple DI achieved by the Component pattern
  ◦ e.g. you have a protocol for the DB, and another for sh, and so on
  ◦ so that you can have custom protocol implementations, e.g. a mock impl which is simply an atom that registers that it was called
This is parallelizable (unlike with-redefs). Because you use protocols, you can swap production implementations in the future, and cross-platform (clj/s) compat stays easier. With protocol extension via metadata, programming with protocols doesn't have a Java flavor anymore. It becomes all vanilla defns :)
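A small sketch of that recipe (the protocol and names are invented for illustration; :extend-via-metadata enables the metadata style mentioned):

```clojure
(require '[clojure.java.shell :as shell])

(defprotocol Shell
  :extend-via-metadata true
  (sh! [this args]))

;; production implementation delegates to clojure.java.shell
(def real-shell
  (reify Shell
    (sh! [_ args] (apply shell/sh args))))

;; test double: an atom that records calls, as described above
(defn spy-shell [calls]
  (reify Shell
    (sh! [_ args]
      (swap! calls conj args)
      {:exit 0 :out "" :err ""})))

;; metadata-based impl: no Java flavor, just a value with fns attached
(def null-shell
  (with-meta {}
    {`sh! (fn [_ args] {:exit 0 :out "" :err ""})}))
```

In a test you'd pass (spy-shell calls) wherever production code takes the Shell dependency, then assert on @calls.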


Interesting discussion. I'm hearing a few different approaches:
• Passing in dependencies as individual functions, like sh (via function args)
• Passing in dependencies as classes, like a ProcessWrapper (via function args)
• Swapping out dependencies at a component level (using Component or mount)
• Use of protocols
• with-redefs
• Using a more elaborate DI framework


> I think that's true in principle (and it's true for Java as well).
I don't write Java, so I don't know how much has been added to the language recently that would help in this regard. But even if it's true, it's certainly at least less true. At the least you need to ensure alignment of correct types: you need a mock that can properly present itself as a type it really isn't. And if your coding style is not very "functional", I don't really see how you could easily benefit from the simplicity of "just pass it in as an argument". With Clojure, all you need to do is make sure the spy matches the interface (lower-case "i", not Java Interface) or function signature of what you need to test. No need to mirror the whole gorilla just to test the banana.
> I guess I'm wondering how to do DI (without a framework if possible) right - how best to pass in functionality like sh as a dependency as a simple function argument
What are you expecting the framework to do for you here? In Clojure, if you simply choose not to use a function or data structure from outside the local scope, you have replicated the effect of dependency injection. Just take the dep as a parameter to a function and work with the arg instead of the global.

(require '[clojure.java.shell :refer [sh]]
         '[clojure.test :refer [deftest is run-tests]])

(defn ls [sh] (sh "ls"))

; Pass the real dep in to use it.
(ls sh)
; => {:exit 0, :out "...a-bunch-of-files...", :err ""}

; Pass in a mock, spy, or stand-in to test.
; We are not testing the behavior of `sh`.
; We only care that `ls` calls sh with the correct args.
(deftest calls-sh-with-ls
  (is (= "ls" (ls identity))))

(run-tests)
; =>
; Ran 1 tests containing 1 assertions.
; 0 failures, 0 errors.
; {:test 1, :pass 1, :fail 0, :error 0, :type :summary}

🎯 1

If you have multiple deps, you will likely want to "inject" them all together in a map (or even just pass in all your arguments in the same map).

(defn foo [deps x]
  (and ((:bar deps) x) ((:baz deps) x)))


I'd point out that injected functions are harder to 'spec' and also harder for clojure-lsp/cider to work with... not that I'm against the technique, but new adopters are better off discovering this nuance sooner rather than later :)


Yeah, that's the type of discussion I'm looking for @U90R0EPHA. (By the way, are there any code examples of this style being used successfully in the wild?)


I'm sure there are, but I haven't had need to search for any examples. "Dependency injection" would look pretty much the same in JavaScript. So if you know JS, Google will probably return quite a lot of tutorials when searching for "javascript dependency injection". (Of course that won't cover any Clojure-specific nuances).


It's true that DI in JavaScript is similar to Clojure, but there are some differences. For example, it's common in JS to mock out an entire module via jest.mock("./myModule"). I'm not sure you can replace an entire namespace in Clojure.


Maybe one thing to add: the hard part about “manual DI” by passing the dependencies as a parameter is the management of the dependencies. It can be as straightforward as a map if the injected functions are not dependent on state and don’t depend on each other, but in all other cases you will end up with some initialization logic which has to be maintained and which I have seen (in Java projects) grow rather complex.
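A sketch of what that initialization logic looks like once deps depend on each other (all names here are hypothetical):

```clojure
;; construction order matters: the mailer needs the db, the app needs both
(defn make-system [config]
  (let [db     {:url (:db-url config)}   ; stand-in for a real connection
        mailer {:db db}
        app    {:db db :mailer mailer}]
    app))

;; tests build the same shape, swapping stand-ins in at the leaves
(make-system {:db-url "jdbc:h2:mem:test"})
```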

👍 2

The other "hard" part is that to really do it right, you should be passing things all the way down the graph. If you have a function that takes a function that takes a function that takes a function that uses sh, then every one of those should be able to accept and pass along that dep to its children. That's part of where using a map is helpful, though: you can take a map, use what (if anything) you need from it, ignore the rest, and pass it untouched (or updated with anything this scope knows how to update) to another function that also might need something from it.
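A sketch of threading the deps map down the graph (names hypothetical):

```clojure
;; the leaf actually uses :sh; the middle layer just threads deps through
(defn copy-file! [{:keys [sh]} src dst]
  (sh "cp" src dst))

(defn backup! [deps file]
  ;; doesn't touch :sh itself; passes the map along untouched
  (copy-file! deps file (str file ".bak")))

;; in a test, :sh can be a spy that records its arguments
(def calls (atom []))
(backup! {:sh (fn [& args] (swap! calls conj args))} "notes.txt")
@calls
;; => [("cp" "notes.txt" "notes.txt.bak")]
```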


The way I think about it is that parameter passing is a form of push-based dependency injection (vs. pull-based, where dependencies get looked up in a var or some other global way). The advantage of the push-based model is that it is straightforward and explicit; the disadvantage is that it can get verbose in practice (polluting call sites) and can unnecessarily couple middle layers to implementation details (the same problem as prop-drilling in React). The pull-based model (system map, redefing or re-binding) is convenient, but the disadvantage is the global state. James uses the push-based model and leverages OOP to avoid the call-site pollution problem by injecting the dependencies at object construction time, then keeps the dependencies in local state. I fear that to replicate the pattern and keep the convenience in a functional way, we might not have the luxury of avoiding a DI framework. An approach that comes to mind could be DI based on partial application. There is one example library that does this, but unfortunately its implementation does not support cljs, so the search continues.


> James uses the push-based model and leverages OOP to avoid the call site pollution problem by injecting the dependencies at object construction time, then keeps the dependencies in local state. Great observation, thanks for sharing @U70QD18NP! When I was translating Testing Without Mocks to Clojure, what struck me was that applying the OOP pattern just to get dependency injection felt awkward in Clojure (whereas it feels natural in Java's everything-is-a-class world)

🎯 1

It meant that in every namespace I had to add a constructor

(ns my.gadget)

(defn create-gadget [deps] {:db (:db deps)})
etc., and pass the object in to every function as a first arg. While it works, it feels a little awkward in Clojure.


> I fear that to replicate the pattern and keep the convenience in a functional way, we might not have the luxury of avoiding a DI framework. An approach that comes to mind could be DI based on partial application.
IMO partial application is severely under-utilized in Clojure in general. If you get your design right, partial application can go a long way in reducing call-site pollution without necessarily introducing a framework to help.


That can be as easy as

(defn foo-impl [dep]
  (fn [x] ... ))

(def foo (foo-impl the-real-dep))


And that also bypasses a lot of the need to pass things through the graph, because the production-time consumer can just use the one already preloaded with the real dep.


Following a discussion the other day, I created a new, more functional example to demonstrate testing using infrastructure wrappers instead of mocks. Dependencies are injected manually, without a DI framework, using partial application. I took extra care to include comments describing the usage of various techniques and patterns.

👏 1

I think it turned out pretty well if I can say so myself 😄


Summary of the two main concerns we discussed:
• Call-site pollution with manual DI: not the case; the API for consumers in application code is the same as without infrastructure wrappers.
• "Prop-drilling" (I learned it is also referred to as the "pass-through variable" or "tramp data" code smell) when instantiating the whole Nullable dependency tree in tests: it turned out not that bad. I think it is tolerable when applying the Parameterless Instantiation and Signature Shielding patterns.
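For anyone curious what those two patterns might look like in Clojure terms, a hypothetical sketch (create-null-db and all shapes here are invented, not taken from the linked repo):

```clojure
;; Parameterless Instantiation: a zero-arg arity wires up Nulled deps,
;; so tests can say (create-gadget) without building the whole tree
(defn create-null-db []
  {:query (fn [_sql] [])})   ; a do-nothing stand-in

(defn create-gadget
  ([] (create-gadget {:db (create-null-db)}))
  ([deps] {:db (:db deps)}))

;; Signature Shielding: test helpers keep signatures stable even if
;; the gadget's dependency list grows later
(defn gadget-rows [gadget sql]
  ((get-in gadget [:db :query]) sql))
```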


This approach (linked example repo) reminds me a bit of an effect system. Is there a relationship here? Or have I misunderstood what it does?
---
In general, I don't quite understand the problem statement here and wish to learn more. I started reading the article and looked for the rationale. Sorry if I missed something:
> It [hexagonal/functional core] fixes the problem... for logic. But infrastructure is often left untested, and it requires architectural changes that are out of reach for people with existing code.
If so, then I have a different viewpoint, probably because I'm lacking some context or deeper understanding. I've found that automated tests are great for internal consistency. What you call infrastructure testing, I just call integration tests or end-to-end tests. When the side-effectful, stateful code is stripped down to a minimum, I'm happy with a few integration tests that have setup/teardown requirements, plus a healthy chunk of defensive programming (including validation, matching, etc.). And sometimes I see a way to describe interfacing with some side-effectful thing with data, so I can handle more cases in my automated tests. That's my background, but I wonder where these concerns come from, or if someone can elaborate more on the problem statement itself. Maybe a relatable example?


Here's James summarizing the reason why he wrote the blog post. I think reading this condensed version (relatively - it's still 78 tweets 🙂) is pretty useful for understanding the big picture:

🙏 1

Thank you @U06F82LES, I'll make an effort to read the rationale tweets and the blog post. I may have more interesting things to say and ask after!


Quick attempt at answering your question @U01EFUL1A8M in my own words. When you're testing real-life code, that's almost always code that has side effects in the world, and often needs to interact with a database. It's what Rich Hickey calls Situated Programs, and these programs are what's paying our bills. How do you test Situated Programs? Well, as any Clojurist will tell you, it's best to test pure functions. And they're right, but what they overlook is that testing the pure logic at the core of the program is actually the easy part. The hard part is the remaining 90% of your code.
If you use the "Functional Core, Imperative Shell" approach, you can isolate the pure domain logic, which doesn't depend on infra code, from the rest of the program. What James is saying here - and I agree - is that (1) while the Logic Sandwich is a good idea, it's not always easy or even possible to apply this architecture (especially to existing code); and (2) even if you do, you still have a lot of "app code": code that glues together your db calls and the rest of it, and usually contains a fair amount of conditional logic (which definitely requires testing).
So that's the question at the heart of Testing Without Mocks: how do we test app code, i.e. code that has various dependencies on infra (db, file system, http clients, etc.)? This kind of code tends to be awkward to test, and in real codebases most people give up on trying, leading to the Big Ball of Mud architecture. The million-dollar question is: how can we bring app code, which for better or worse depends on infra, to a place where we can test it with fast-running tests that give you confidence when refactoring and keep you from manually running your app to see if it still works?


Reminds me of this article: they struggled with the problem you describe as well. Personally, I don't. We're a very small team working on and maintaining several dozen projects, so it's never an issue of "how can I test the last corners of this codebase?" but rather "how can I refactor/structure this so more stuff is pure/data-driven" and "how much time can I allocate to writing tests where they are really worth it". So it's really somewhat of a reverse problem!


I didn’t read all of the above, but from my experience:
• You only know if code works on production or preprod. Why? Because third-party APIs don’t work as described in their documentation. I would say always.
• Considering the above, you literally can’t verify whether your app really works without making API requests to the third-party APIs.
• Considering the above, the final test before deploy is preprod, and this is a very reliable way of testing. A really trusty one.
• Preprod (using production data) is better than staging (test data and auth), because third-party APIs designed for tests return different data than the production ones.
• Before testing on preprod you can test on staging.
• Before testing on preprod/staging:
  a) test requests to your server with random data (for example via clojure.spec)
  b) test with static requests
  c) in general, a full request -> response test is much better than testing functions
  d) test functions when you feel it adds value, for example as documentation of how to use them
  e) test functions which are hard to understand and maintain
  f) finally, test them if you really feel you need to test such small parts for some reason
  g) just don’t write tests for each function
  h) use mocks to simulate third-party responses
I'll actually stop here… I could write about this for far too long. Right now we can draw an interesting conclusion and maybe ask the right question: during the flow of development, testing, and deployment, we use different methods of testing at different stages. Mocks are useful while developing code (not for a very trustworthy answer about whether something works 100%; they give an answer like: it works for the common use case without corner cases, should be good, and we'll add corner cases later when we identify them), but later on you want to test your app with other methods, for example preprod.


*Of course, good testing is a cost, so depending on how critical the project is, the flow of testing gets shorter or longer 🙂


So the right question, in my opinion, is not “whether to test with mocks” but “when to test with mocks and when not”. The answer is that testing has a flow, like: fn tests during development -> request & response during development -> request & response in a testing environment (CI/CD) -> preprod (the flow can be shorter or longer; this is just an example). When you see it from that perspective, it should be clear there are stages in the flow where you have to use mocks, and also stages where you can’t use mocks.


heh, I hope I am at least on topic for this thread 🙂

borkdude 22:02:33

Lanterna is a pure-Java library which allows you to write console TUIs, and it also supports reading one character from stdin at a time. I'm trying to figure out how that library accomplishes this, but so far I haven't managed to replicate it in pure Clojure...

👀 1

It’s super hidden away in nested code, but lanterna is just reading off the terminal's input stream. Note that clojure-lanterna is pretty far behind lanterna, like a whole major version; it needs a bunch of refactoring to get to lanterna 3.


ttys are line buffered by default


You need some native code to change that




I am assuming that is what the question is about


Yes, so I wonder how lanterna pulls it off since it claims to be pure Java. @U9VHXCS7L I've been updating clojure-lanterna here: You can use it from babashka via a pod. Here is a tetris game:

bb -Sdeps '{:deps {io.github.borkdude/console-tetris {:git/sha "2d3bee34ea93c84608c7cc5994ae70480b2df54c"}}}' -m tetris.core

🔥 3

But implementing it in pure clojure would likely be faster than the pod stuff, which is why I want to figure that out

hiredman 22:02:05

It does look like the stty stuff has a method for turning the icanon option on and off.


A pure clojure version sounds interesting. It also might help with some licensing issues (depending on your goals and desired license).


(I just know ttys are line buffered, never actually turned that off)


lanterna does have:

runSTTYCommand(enabled ? "icanon" : "-icanon");


It now dawns on me that you might be able to hack around the char-by-char input by using ProcessBuilder, inheriting stdin, and then reading the bytes of the output stream (which is then the user's input).


any process you spawn will be running in the same tty


anyway it's getting way too late here, hope I nerd-sniped someone so I can read the answer tomorrow when I wake up ;)

💯 1
😆 3

I suspect this is going to involve calling libc from Java to set the terminal input mode flag, which appears to be mask/flag bit twiddling a pointer (to where?) directly?


I think that's what the stty stuff does...

👍 2

For the record, I'm not interested in other JNA/JNI/Panama/whatever solutions. I'd like to see a reconstruction in Clojure of what the lanterna (pure Java!) library does for reading a single character from a tty.


there are some native deps, but there's a comment that says they're optional.


Also, I've previously compiled lanterna apps with graalvm's native image, which almost always breaks with native code unless you jump through hoops.


I have compiled pod-babashka-lanterna with graalvm native-image, can't remember any hoops, it's been a while, but it works :)

bb -Sdeps '{:deps {io.github.borkdude/console-tetris {:git/sha "2d3bee34ea93c84608c7cc5994ae70480b2df54c"}}}' -m tetris.core


That was my point. I don't think it has native deps, in part, because I was able to compile it with native-image without any extra steps.

👍 1

Ah got it now @U7RJTCH6J: yes, the seamless compilation with GraalVM seems to indicate that there are no native deps required.

👍 1

yep, I worded that a bit confusingly 😳


> I've previously compiled lanterna apps with graalvm's native image, which almost always breaks I read this as "lanterna apps almost always break"


That catch block just leads back to UnixLikeTTYTerminal.canonicalMode(), which shells out to runSTTYCommand. The stty command is going to call the C terminal-attributes functions, which ultimately call the ioctl syscall on the terminal file descriptor. So I think ultimately we’re still talking about either getting the JVM to shell out, or getting the JVM to call tcsetattr -> ioctl.

☝️ 1

Shelling out doesn't require any native deps and requires fewer dependencies. It also works with graalvm's native image more easily.

👍 1

So you should be able to reproduce this by (babashka.process/process "stty" ...) and then (.read System/in) and then pressing a key, but how exactly?


The linked SO question talks about raw mode, but from what I've seen lanterna uses canonical mode?


This is probably out of design bounds, but in this specific context this thing might also be usable. Will test pure bb and shelling out when back at a keyboard.


@U9VHXCS7L Native Image has something for talking to C libs; you don't need the Truffle stuff for that. But indeed, I'd like to not get into the native stuff if I don't have to.


Need to inherit STDIN in process/process


(babashka.process/process "stty -icanon -echo" {:in :inherit})
(println "Echo is off; press `q` to quit.")
(loop []
  (let [k (.read System/in)]
    (println k)
    (when (not= k 113)
      (recur))))
(babashka.process/process "stty icanon echo" {:in :inherit})


Perhaps it's


@U9VHXCS7L This is epic!

(require '[babashka.process])

(babashka.process/shell "stty -icanon -echo")
(println "Echo is off; press `q` to quit.")
(loop []
  (let [k (.read System/in)]
    (println (char k))
    (when (not= k 113)
      (recur))))
(babashka.process/shell "stty icanon echo")

🚀 3
🎉 3
👍 2