Erick Isos 06:01:52

Hey guys! Just asking, do you have any recommendation/alternative for a Clojure project? I mean, to keep the deps updated? (Idk if it makes sense if the project only uses a deps.edn file)


IMO the best alternative is to run something like antq and do it manually while checking all the release notes and maybe even diffs when it seems important. Just checking for a new version can be automated so it would run periodically in the background and notify you when there's something new - I would probably create a simple cron job for that locally.
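As a sketch of that setup, antq can be run via a deps.edn alias (coordinates as documented in antq's README; in practice you'd pin a concrete version rather than RELEASE):

```clojure
;; deps.edn - hypothetical :outdated alias for checking dependencies with antq
{:aliases
 {:outdated {:deps {com.github.liquidz/antq {:mvn/version "RELEASE"}}
             :main-opts ["-m" "antq.core"]}}}
```

Then `clojure -M:outdated` prints a table of dependencies with newer versions available, which is easy to wrap in a cron job.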

💜 1

Indeed, it even has an example github action as a starter:

💜 1

At work we use renovatebot (I never touched it); it should have support for deps.edn:

💜 1

There's also which only focuses on pointing out dependencies with known security vulnerabilities


What can be done to make a clojure function pure?


After watching ‘Simple Made Easy’, I’ve been thinking of ways to disentangle layers of code and render them pure, and then I came across this article. To give a recap, the article makes two points: 1. Avoid context drilling by eliminating dependencies among layers. 2. Pass side-effect-causing functions as parameters instead of invoking them directly inside. A simple representation of the second point in actual code:

(ns ns1
  ;; some.lib is a stand-in for wherever add actually lives
  (:require [some.lib :refer [add]]))

;; f1 calls add directly - the dependency is hard-wired
(defn f1 []
  (add 1 2))

(ns ns2)

;; f2 receives the function as a parameter - the dependency is injected
(defn f2 [f]
  (f 1 2))
One of the obvious benefits of this approach is testing; there is no need to prepare an actual database. A second benefit comes from segregating domain logic from the rest of the code - each layer has a single responsibility, which enhances reusability. However, there are some points to consider: 1. What if the function requires tens of other functions invoked inside? Pass ten functions as parameters? 2. What other actual benefits are there besides testing? I’d like to hear people’s opinions on this approach. What are your techniques/ideas for making pure functions in Clojure?

👀 1

1. That would be a huge code smell. I haven't seen a situation where such a function couldn't be refactored to become better in every aspect 2. Predictability, thread-safety, reduction of cognitive load


@U2FRKM4TW thanks for the comments. Your replies are basically in favor of the method suggested in the article, yes? Is your code actually mostly written in this function-passing style?


In general - in favor, yes. Can't say, haven't measured. But I rarely go out of my way to make something pure.


+ how does this method reduce the cognitive load? You have functions tossed in as parameters, and you are not told how these functions should be invoked. Doesn’t this require people to re-read the function definitions?


Ah, my second point was about pure functions in general. With this particular approach, there's a trade-off of cognitive load - hard to say whether it changes at all, but with the right impure API it shouldn't increase. And no, ideally you wouldn't have to re-read definitions, just docstrings. Same deal as with clojure.core/map - you don't read its implementation to understand that its first argument is a function X -> Y, you read its concise docstring for that.


I see. hmm.. but there still remains the intrinsic impurity from the tossed functions; and I don’t see how things have gotten any better except for the ease in testing. Although, if this can be thought of as a kind of dependency injection, the flexibility is increased in original function because parameter functions can be timely replaced at runtime.


@U01TFN2113P Your intuition is correct here. That article advocates a massive addition to cognitive load for basically zero benefit.

👀 1

Throughout my career, I’ve always needed to see implementation details. It’s impossible for me to imagine the inverse exists in a professional setting.


In addition, you can get all of the benefits discussed in the article with alter-var-root


I think the technique in the article is worth using, without going overboard. alter-var-root is not a great solution. It makes testing those functions possible, not simple.


Related, one thing I do use quite often to make functions pure is splitting them into 2: one pure that returns data, another that does something impure with that data, such as:

;; Creates the SQL and executes it - impure, hard to test
(defn update-films! [id data]
  (jdbc/execute! conn
                 (sql/format {:update :films
                              :set {:kind "dramatic", :watched [:+ :watched 1]}
                              :where [:= :kind "drama"]})))

;; Only returns the SQL map - pure, trivial to test
(defn update-films [id data]
  {:update :films
   :set {:kind "dramatic", :watched [:+ :watched 1]}
   :where [:= :kind "drama"]})

;; All the "execute SQL" logic lives here
(defn execute! [sql-map]
  (jdbc/execute! conn (sql/format sql-map)))


update-films! is hard to test; you would need to mock jdbc/execute! using something like alter-var-root or with-redefs. Unit testing update-films is trivial - it’s just data in, data out - but there are also other benefits. Now that you have an intermediate representation of the operation you’re going to perform, you can choose to do something else with it.
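For illustration, a unit test for the pure version might look like this (assuming update-films returns the HoneySQL-style map; like the snippet above, it ignores its arguments in this sketch):

```clojure
(ns app.films-test
  (:require [clojure.test :refer [deftest is]]))

;; Pure function under test, as sketched above (args unused in the sketch)
(defn update-films [id data]
  {:update :films
   :set {:kind "dramatic", :watched [:+ :watched 1]}
   :where [:= :kind "drama"]})

;; No database, no mocking - plain data in, plain data out
(deftest update-films-returns-expected-sql-map
  (is (= {:update :films
          :set {:kind "dramatic", :watched [:+ :watched 1]}
          :where [:= :kind "drama"]}
         (update-films 1 {}))))
```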


You could, for example, decide to create a ledger of all the events, for easy audit, rollback, etc:

(defn save-event! [payload]
  (jdbc/execute! conn
                 (sql/format {:insert-into [:events]
                              :columns [:payload]
                              :values [[payload]]})))

;; Run both side effects on the same pure data
((juxt save-event! execute!) (update-films id data))


Ah, and one important point. Now all your “execute SQL” logic is in execute!. That doesn’t mean only update-films is simpler to test; it means all the [create/update/delete]-[films/user/ratings] you’re going to need are simpler


This does relate back to the technique in the article. It feels like a lot of overhead. And it is, if your program is small. But if it grows to a certain size, and you have dozens of functions that need input, the overhead gets considerably lower. You’ll get used to it, and when you see a function that retrieves something from a database you automatically know its definition is in db-funcs.clj.


@U7S5E44DB The query example seems to deal with a trivial case. I think the main point here is whether things have actually gotten simpler by accepting functions as parameters, not invoking them within.


> You’ll get used to it, and when you see a function that retrieves something from a database you automatically know its definition is in file `db-funcs.clj` and I think this can be achieved by collecting db query functions in one namespace, and has little to do with techniques in the article. Correct me if I’m mistaken.


On that point yes. I guess if you’re only asking whether making tests easier with the article’s technique is worth it, I’d say yes, unless your app is too small to have maintainability issues to begin with


But as you mentioned above, there is little harm in using with-redefs if ‘making tests easy’ is the only goal. I’m curious to know whether it is worthwhile in other aspects.
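For context, the with-redefs approach being compared here looks roughly like this (namespace and function names are hypothetical):

```clojure
(ns app.handlers-test
  (:require [clojure.test :refer [deftest is]]))

;; Hypothetical impure function we want to avoid calling in tests
(defn create-user! [id pw]
  (throw (ex-info "would hit the database" {:id id})))

;; Calls create-user! directly - no function-passing involved
(defn handle-register [id pw]
  (create-user! id pw)
  :registered)

;; with-redefs temporarily rebinds the var for the test's duration
(deftest handle-register-test
  (with-redefs [create-user! (fn [_ _] :stubbed)]
    (is (= :registered (handle-register "alice" "secret")))))
```

The trade-off discussed in the thread: this keeps call sites simple but hides the dependency, whereas passing create-user! as a parameter makes it explicit.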


As I mentioned - whether it becomes simpler depends, IMO, mostly on the mutating API. If it's set-user-first-name and set-user-last-name and increase-order-total and many more - it gets much more complex but very well defined, with concrete border. If it's set-user-attr and so on - gets less complex, less well defined. Then it can become set-entity-attr. Then just execute-query!. Which level to choose depends on what you're doing and what you want to achieve.


That’s… a big selling point of clojure, or any functional language. If you don’t think using with-redefs is an issue, then sure


For me it’s much easier to reason with functions that I know don’t have IO inside, without looking at the source code


Still not clear on how things become easier to reason with.

(defn handle-register [create-user send-user-info id pw]
   (create-user id pw)
   (send-user-info id pw))

(defn handle-register [db id pw]
  (db/create-user db id pw)
  (user/send-user-info id pw))
On external level,
(handle-register create-user send-user-info id pw)
(handle-register db id pw)
there seems to be little difference, with the first one obviously flexible when doing some generative testing.


This is an oversimplified example that doesn't show anything. Imagine you need 50 places that need to create a user. Or, more realistically, to query something specific.


With 50 places that require user creation, I’d need to pass the create-user function 50 times as a parameter; and similarly, with the second approach I’d also need to call db/create-user 50 times in whatever functions need it. Can you give me one example that’s not overly simplified?


If you meant the reusability of the function, I agree, because the function became independent from datasource drilled from above (handler or sth).


FYI I’m not seeking an ultimate answer, just curious how others deal with the purity issue in general.


> I’d also need to call db/create-user 50 times in whatever functions I need to use And pass around db without knowing from the outside why a particular function needs that db. When passing create-user , you know for sure that there's nothing going on in there but user creation. When passing db, you have to go check the code. I'm not sure how else to describe it because it feels like I'm just rehashing that article.

👍 1

I see the point. So it’s a matter of how specific I want to define the border as you mentioned in > it gets much more complex but very well defined, with concrete border. so there is not one standard saying which one is better.


What is the possible benefit of weaving 1000 closures throughout your code when the alternative is with-redefs?


The only argument I’ve seen boils down to, “I like it better to have ‘pure’ functions.” NOTE: This^ statement isn’t even true. All of those closures are side-effecty. All you’ve done is create an ad hoc module/OOP system instead of using the built-ins that clojure provides.


Everything that uses those closures is not side-effecting, it's pure - that's the whole point. It's all in that article.


There is some utility to marking functions that aren’t pure. But it’s enough to pass an object that’s responsible for side effects (a la Component).


ay yay yay — what’s the difference? It’s going to invoke the closure. It’s going to do some side effect.


That whole proposal is insanely complicated. It adds unreal amounts of complexity onto a codebase. For literally zero comparative benefit.


If you want to work only with pure functions — if you really want a guarantee that functions are pure — then use monads and a type system to enforce it.

👍 1

There’s a reason Clojure didn’t go that route.


Hm, fair, the side effects are still there - my mind is still stuck on purity above all, my bad. What they do is IoC - it has its benefits, namely it creating boundaries in your system and making it explicitly configurable. And if someone wanted purity in addition to that, then Haskell would be created within Clojure, with its IO monad. :) > There’s a reason Clojure didn’t go that route. In which way could it possibly go that route?


I don’t understand the question. “In which way could it possibly go that route?”


They could have made a type system. Or they could have put monadic operators in core (there have been several library attempts at it.)


I also had doubts as to how this dependency injection is actually profitable in a Clojure system. To benefit from it, I should be able to replace create-user with other possible functions. But to do so, they should have the same signature as the create-user function to achieve the original goal... and things get very OOP-like here (of course, there is nothing wrong with taking an OOP perspective).


An approach that's beneficial for a particular kind of applications might not be reasonable for a language to implement at a particular point in time. There's no "the Clojure" - the language is still being updated, new features are still being added, with according priorities. And some features belong to libraries and not in the core.


> things get very OOP like here How does OOP follow from having functions with the same signatures?


We do have Component, mount, Integrant, Clip, some others - and people do use them. It's the same principle.


Not sure who you’re responding to w/, “there is no the clojure,” but if it’s me, I have no idea what you’re getting after. My point is there is a reason they’ve never pursued anything like monads to date.


@U01TFN2113P I do find significant benefit from OOP-like patterns in Clojure. Specifically the Component pattern. Mostly because managing dependencies and things like connection pools in Vars can be hard to get right and doesn’t provide any mechanisms for shutdown (which you usually want to do cleanly w/ a db connection pool.)


@U07S8JGF7 Right - because monads might not belong to the core (and there is org.clojure/algo.monads) or because there are other priorities. IIRC in one of the talks Rich mentions that he didn't find monads that useful to include them in the core. But Rich has worked on specific kinds of software where that might indeed be the case. Or monads are just not something he prefers to see in his code. Or some other reason. "There’s a reason Clojure didn’t go that route" is not a good argument because the reasons might easily not coincide with one's approach to developing a particular kind of software.


If you know the reasons, you know they coincide for the software that clojure is targeting — i.e. the vast majority of software.


@U01TFN2113P I think it’s fair to be dubious that passing in objects has a ton of value. My experience has been that it’s 1. Very helpful to cache some things at application startup (you can do this with Vars, but you have to manage shutdown yourself), 2. Very useful to know at a glance what’s even allowed to do I/O, 3. Somewhat useful to be able to mock certain I/O interactions. #1 and #2 are most useful in larger codebases.


@U2FRKM4TW I suppose you want me to list the reasons here. Clojure is extremely opinionated about how you do in-memory data manipulations. However, Rich has acknowledged multiple times that managing resources and I/O calls are completely unsolved problems. The preference in Clojure is to push developers to use immutable values in memory, and to be responsible when allocating resource usage and doing I/O — because he cannot fix that for you. Side-effects are freely allowed as a design principle — not as an oversight.


> the software that clojure is targeting — i.e. the vast majority of software Not sure where that claim comes from but the language is being designed by very particular people with very particular experience. Their experience is not representative of the whole software industry, otherwise everybody would already be using Clojure. What works for them might not work for others, for all sorts of reasons.


You can use that exact line of reasoning to support making literally unbounded messes. These are people that have thought a lot about modern software. It’s worth listening to their reasoning. You might not always agree (I surely don’t), but I would suggest having a dang good reason before discarding their thinking.


Absolutely - my whole point is that your statement equally applies not only to Clojure's code but also to that article linked above, and to people that created Haskell with those pesky monads.


How in the world is that supposed to help @U01TFN2113P make a decision?


“That might not apply here,” is a universal statement.


Helping them be discerning by being honest about ramifications for decisions, on the other hand, is at least trying to help.


Indeed. To each their own. I'm not in tlonist's head, I don't have access to their backlog and code - I can't make a decision for them in this context. All I can say is that there's merit to the things discussed above - one can't blindly throw them away because "I don't write code like that" or "Clojure people don't like that".

☝️ 1
Noah Bogart 17:01:53

Rich discusses why he doesn't like monads in the talk Maybe Not:

👍 1
Noah Bogart 17:01:51

I see a use for the IoC/passing-functions-as-arguments approach when dealing with side effects, but I prefer the component-system approach of a “side effect object” that’s passed in a context object to the relevant functions, and then using the side-effecting stuff in specially marked functions (put a bang at the end!) where used
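A tiny sketch of that style, with entirely hypothetical names: side-effecting dependencies travel in a context map, pure functions decide what to do, and only bang-suffixed functions touch the deps:

```clojure
;; Pure: decides what should happen, returns data
(defn register-plan [id pw]
  {:user {:id id :pw pw}})

;; Impure (note the bang): receives the context map holding its deps.
;; :create-user! and :send-info! are hypothetical component functions.
(defn register! [{:keys [db emailer]} id pw]
  (let [{:keys [user]} (register-plan id pw)]
    ((:create-user! db) user)
    ((:send-info! emailer) user)
    :registered))
```

In a test, the context map can carry stub functions instead of real db/email components, which gives the same testability as passing each function individually, with one parameter instead of ten.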


Thanks for thoughtful comments. I’ll rethink them and probably come back later with improved solution.


Hi. What’s the recommendation for maintaining a thread context? The use case is to store some values in the current thread throughout the lifecycle of an HTTP request. The functionality is similar to org.apache.logging.log4j.ThreadContext


I would just make a Clojure map and pass it to every function that needs it. Haven't seen a place where that would not be applicable.

R.A. Porter 15:01:58

Especially if you're using a Ring implementation of some sort, as it's trivial to add items to the request map with middleware handlers.


It’s commonly used in some aspect-oriented (AOP) scenarios such as logging. Passing a map is really intrusive, as it is NOT related to the business flow.


Then binding is an easy solution. A more pure but less easy solution would be to pass logging function around - then you can bind whatever context you need in a closure.


Yup. Sounds like bindings is a way to go.


binding and dynamic


or make a method you can call that refers to a ^:dynamic var and set the binding earlier / upstream
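A minimal sketch of the dynamic-var approach, with hypothetical names (`binding` gives the var a thread-local value, which is exactly the thread-context behavior being asked about):

```clojure
;; Dynamic var holding per-request context; thread-local under binding
(def ^:dynamic *request-context* {})

;; Any function downstream can read the context without it being a parameter
(defn log-with-context [msg]
  (println (:request-id *request-context*) msg))

;; Hypothetical Ring-style middleware binding the context per request
(defn wrap-request-context [handler]
  (fn [request]
    (binding [*request-context* {:request-id (str (random-uuid))}]
      (handler request))))
```

As the last message notes, this breaks down once request handling hops threads (e.g. core.async), since the binding is tied to the current thread unless explicitly conveyed.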


Seconding the advice to regard thread-locals as "global variables" and avoid. Thread-local storage intended to have http-request scope is a bit risky, as a design, because it will fade out (or go altogether haywire) as soon as you need to incorporate async processing.