
Could someone point me in the direction of how I would [with clojure] read in a .yaml modify it programmatically, then write out the modified version back to a .yaml?


I have not used either of the two libraries mentioned under the "YAML" category on the Clojure Toolbox page, but I would start by checking out the readme docs for those two libs:


Thank you!


Note that the maintained fork of clj-yaml is the clj-commons one. I have an open pull request on Clojure Toolbox to update similar URLs.
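
For the YAML question above, here is a minimal round-trip sketch with clj-commons/clj-yaml (the file name and keys are made up for illustration; assumes the lib is on the classpath):

```clojure
(require '[clj-yaml.core :as yaml])

;; read: YAML text -> Clojure data (keyword keys by default)
(def config (yaml/parse-string (slurp "config.yaml")))

;; modify programmatically, like any other Clojure data structure
(def updated (assoc-in config [:server :port] 8080))

;; write: Clojure data -> YAML text
(spit "config.yaml" (yaml/generate-string updated))
```

`parse-string` and `generate-string` are the two entry points you'll see in the readme; everything in between is ordinary Clojure data manipulation.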


I just put a question on stack overflow (the code snippet seemed too long for slack). It's about transducer performance with and without using an async/chan.


it doesn't exactly answer your question, but maybe helps shed some light. chan-xf and chan-then-nested aren't necessarily analogous since chan-xf is running the transducer before putting data on the channel while chan-then-nested is running the transducer after taking data off the channel. I think the following is a better comparison and chan-xf is about twice as fast on my computer:

(defn chan-then-nested []
  (let [c (async/chan n)]
    (async/onto-chan c (tx data))
    (while (async/<!! c))))

(defn chan-xf []
  (let [c (async/chan n xf)]
    (async/onto-chan c data)
    (while (async/<!! c))))


but it still doesn't explain your other example's result


if I was feeling a little more industrious, I might try to run your examples with


interesting. there seems to be a big difference between running the transformation before and running it after:

(defn chan-then-nested-before []
  (let [c (async/chan n)]
    (async/onto-chan c (tx data))
    (->> c
         (async/into [])
         (async/<!!))))

(defn chan-then-nested-after []
  (let [c (async/chan n)]
    (async/onto-chan c data)
    (->> c
         (async/into [])
         (async/<!!)
         (tx))))


chan-then-nested-before is about 1.5-2x slower than chan-then-nested-after


Could it be that the transducer on the chan is just slow enough that it makes coordination between the two go loops (onto-chan and async/into) more difficult and adds more overhead? So in the tx version most items are already buffered by the time the takes are attempted, but with the xf version the async/into go block spends more time waiting for values to be ready?


Like I could imagine that interleaving the producer and consumer go blocks would add more overhead than just filling a buffer and then draining it


yea, something weird with the interleaving:

(defn chan-then-nested-before []
  (let [c (async/chan n)]
    (async/onto-chan c (tx data))
    (->> c
         (async/into [])
         (async/<!!))))

(defn chan-then-nested-before-doall []
  (let [c (async/chan n)]
    (async/onto-chan c (doall (tx data)))
    (->> c
         (async/into [])
         (async/<!!))))

chan-then-nested-before-doall is significantly faster than chan-then-nested-before


Thank you for these interesting investigations!


Has anyone used Clojure (JVM, not Clojurescript) on serverless Google Cloud functions?


I’m getting an error when doing a post request that I’m not getting when doing a get request


here are my reitit routes


(defn home-routes []
  ["" {:middleware [middleware/wrap-csrf]}
   ["/" home-page]
   ["/sign-in" sign-in]
   ["/upload-video" upload-video]])


here’s my get and post request functions:

(defn http-get [uri params on-success on-failure]
  {:method :get
   :uri (str "" uri)
   :params params
   :on-success on-success
   :on-failure on-failure
   :response-format (edn/edn-response-format)
   :format (edn/edn-request-format)})

(defn http-post [uri params on-success on-failure]
  {:method :post
   :uri (str "" uri)
   :params params
   :on-success on-success
   :on-failure on-failure
   :response-format (edn/edn-response-format)
   :format (edn/edn-request-format)})


and here’s how the functions are being called:


 (fn [coeffects _]
   {:http-xhrio (http-post "/api/upload-video" {} ...)})


I’m getting the following response:


{:response <!DOCTYPE, :last-method "POST", :last-error "undefined [403]", :failure :error, :status-text nil, :status 403, :uri "", :debug-message "Http response at 400 or 500 level", :last-error-code 6}


How can I fix this error?


There's not nearly enough code here to be certain, but I'm going to guess it has something to do with you using middleware/wrap-csrf and not sending the CSRF token.
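
There's not enough context to be sure it's CSRF, but if it is, one common (Luminus-style) fix is to embed the server's anti-forgery token in the page and echo it back on writes. The js/csrfToken var and the "x-csrf-token" header name below are assumptions from that template, not universal; check how your wrap-csrf reads the token:

```clojure
;; Sketch: attach the anti-forgery token to the POST request map.
;; Assumes the server rendered the token into the page as js/csrfToken
;; and that wrap-csrf is configured to read an "x-csrf-token" header.
(defn http-post [uri params on-success on-failure]
  {:method          :post
   :uri             (str "" uri)
   :headers         {"x-csrf-token" js/csrfToken}
   :params          params
   :on-success      on-success
   :on-failure      on-failure
   :response-format (edn/edn-response-format)
   :format          (edn/edn-request-format)})
```

If the token is missing or wrong, ring-anti-forgery responds with 403, which matches the error above.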


commenting out wrap-csrf gives a response of 500


I expect a response of 2xx


thanks. The 500 was due to an error in the handler


Hello, is it possible to do something like the below? myns contains many clojure.core names that it overrides:

(:use myns :exclude clojure.core)


I tried 2 things that worked, but maybe there are better ways:

(def excluded-core-names '[get let ..])

(use `[myns :exclude ~excluded-core-names])


(:refer-clojure :only [])
(:use myns)


(:refer-clojure :only []) works. Seems pretty clean. TIL! (I deleted something I wrote earlier)
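
Spelled out as a full ns form (my.app and myns are placeholder names), the clean version looks like:

```clojure
(ns my.app
  ;; refer nothing from clojure.core, so myns can claim names like get and let
  (:refer-clojure :only [])
  (:use myns))
```

The narrower alternative is (:refer-clojure :exclude [get let ...]) if you only shadow a known set of names and still want the rest of core referred.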


yes, it's fine, I just left it like that 🙂 thanks


Is it weird that (conj) returns a vector [], but (conj nil 1) returns a list (1) , or am I overthinking it?


In Clojure nil is not a collection of any type, so it is a design choice which collection type to return in both of those cases.


You can always control the return type of the collection by providing an actual collection to conj onto, and avoid both of those kinds of calls.
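
For example, at the REPL, supplying a concrete collection pins down the result type; only the degenerate calls leave the choice to Clojure:

```clojure
(conj '() 1)  ;=> (1)    ; list in, list out
(conj [] 1)   ;=> [1]    ; vector in, vector out
(conj #{} 1)  ;=> #{1}   ; set in, set out

;; the two cases from the question, where Clojure picks for you:
(conj nil 1)  ;=> (1)
(conj)        ;=> []
```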


> so it is a design choice which collection type to return in both of those cases. But in these two cases the choice was different for each. Just wondering why that was


Understood. They both seem like easily avoidable cases by the way you call conj, if it bugs you, so should be irrelevant in typical Clojure programs.


I don't have any knowledge of whether there is some rationale given for that design choice, sorry.


Yeah, it's definitely not a big thing in practical terms. Thanks!


Both of those design choices seem to have been consistent since Clojure was originally published, circa 2006


The very same question has been asked just 9 days ago - Slack hasn't removed it yet:

👍 6

There's a thread with good explanations under that message.


Maybe I should make a question/answer out of it


> The very same question has been asked just 9 days ago so it was, thanks! Even exactly the same thought process: the update use case was what got me thinking about the (conj nil 1) thing. Though I'd say even after reading it I'm at the same place as didibus - unclear as to whether the difference is due to an intentional design decision or not.


conj by default uses [] in its 0-arity. Treating nil as a sequence yields a list. No matter whether you're using conj or into or rest or something else. These two things are completely orthogonal.

👍 3
💡 3

I think it is due to an intentional decision, but made at different times for different reasons.


Initially conj only had a 2-arity: (conj nil 1) or (conj [1 2] 3) or (conj '() 1). This was in the time of sequences, and conj was exclusively a sequence function. From that perspective, the question was, what should (conj nil 1) return? And the logical answer was, we should probably treat nil as the empty sequence, so nil should default to an empty seq and thus (conj nil 1) ;=> (1) was made to return a seq.


IMO, as I said, lists are not about conj at all:

user=> (type (conj nil 1))
user=> (type (into nil [1]))
user=> (type (rest nil))
user=> (type (reverse nil))
user=> (type (dedupe nil))


So the initial question (if there even was one) was probably "what kind of collection should nil beget" and not "what should (conj nil whatever) return".


Later, conj evolved to also work with transducers, which meant that a 0-arity and a 1-arity were added to it. Transducers are conceptually collection functions, not sequence functions. The 1-arity is the completion function: it says what to do at the end of applying a transducer, and in the case of conj we don't need to do anything except return the result, so the 1-arity of conj just returns its argument as-is, i.e. it's an identity function.

The 0-arity is the init function: it is called when there is no starting collection, in order to decide what the default is. So the question was again: what should the default be when conj is used in a transducer with no starting collection specified? In this case vector was chosen, and I think that's because transducers are conceptually collection functions, so defaulting to a sequence wouldn't make sense. You could ask why it didn't default to list, and I think that's because vectors are a nicer default here, since they don't reverse the order of the elements as they're processed.

Now you could say: hey, (conj nil 1) was made a list, and that reverses the order, so what gives? Well, the rationale was different. There it was about treating nil and the empty seq as equivalent, and since an empty seq is a list, it returns a list. Whereas for transducers it was about what makes a good init collection when none is provided, and there vector was judged the best pick.
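
The three arities can be seen directly at the REPL; a transduce call with no init collection is what exercises the 0-arity:

```clojure
(conj)          ;=> []       ; 0-arity: the transducer init, a vector
(conj [1 2])    ;=> [1 2]    ; 1-arity: completion, just identity
(conj [1 2] 3)  ;=> [1 2 3]  ; 2-arity: the classic case

;; transduce with no init collection calls (conj) to produce one:
(transduce (map inc) conj [1 2 3]) ;=> [2 3 4]
```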


> So the initial question (if there even was one) was probably "what kind of collection should nil beget" and not "what should (conj nil whatever) return". That seems the same question to me?


Hum.. actually conj was never a sequence function hum... Might need to revise my answer a bit. Though I think it was the same idea, nil is often treated same as empty list, so I think that was the idea for conj as well


conj has always worked to add a new element to any kind of collection, not only lists or sequences. It worked on sets, maps, and vectors from the beginning.


nil being the empty list is a Common Lisp-ism, one that Rich explicitly rejected as a design decision for Clojure in most places -- mentioned in his talk on Clojure for Lisp programmers. In Clojure () is the empty list, not nil. nil is not a collection.


"But nil means nothing in Clojure. It means you have nothing. It does not mean some magic name that also means the empty list. Because there is empty vectors, and there is empty all kinds of things. "nil" means you do not have something. So either you have it, a sequence, or you have nothing, nil. If you have a sequence, it is going to support these two functions." part of this talk:


Also if you search for "eos" in that transcript, there is a slide where he talks about nil vs. empty lists and a few other things, comparing what Common Lisp, Scheme, and Clojure do there.

Max Deineko21:01:40

> conj by default uses `[]` in its 0-arity. > Treating `nil` as a sequence yields a list¹. Thank you @U2FRKM4TW, I finally see how this could be considered consistent! 🙂 ¹ emph. added

👍 3

Interesting stuff, thanks for discussion and links!


@U0CMVHBL2 What does "eos" stand for? Or what does it mean?


I haven’t read to confirm in detail, but IIRC in this context it means “end of sequence”

Max Deineko22:01:45

I think I'll put down conj as "practical, and probably consistent with implementation/usage/history while not necessarily formally self-consistent as in having the simplest possible behaviour" until some future moment of enlightenment 🙂


Just don't call conj with 0 args, or a first arg of nil, and these corner cases are unimportant to you.

✔️ 3

I've been pretty deep in this rabbit hole now, and looked at the code as well. And I think @U0CMVHBL2 is correct actually. Nil is not the empty list in Clojure; that's a departure explained here: but it used to be. And the code that defaults conj on nil to an empty list is 14 years old. I think it predates the switch to lazy-seq even, and it was written when conj actually took the coll last, same as cons: (conj 1 []). At this point I think it's just what it is: 14+ years ago Rich chose to default (conj nil 1) to a PersistentList, and that's that. And much more recently, for transducers, he chose to default the init for conj to vector.


My guess is that closer to Clojure's inception, Common Lisp idioms were often adopted: a big emphasis on lists, and thus prepending elements when building them up, as well as pervasive nil-punning, treating the empty list as nil and nil as the empty list. The original implementation of Clojure seems to behave consistently with that. Later, Rich started to realize that those CL idioms might not be ideal, and slowly started to break away from some of them. The first big change was making nil and the empty list no longer the same thing. This is why sequence functions now don't return nil when the result is empty; they return an empty construct of some sort instead, either an empty list, an empty sequence, or something of that sort. A seq is still either at least one element or nil, but a sequence can be empty, and a list can be empty. Even later, I think Rich started to realize that lists maybe just aren't a very useful collection for user programs (even if useful for modeling code). So when implementing collections, vectors were favored over lists for defaults.


Only Rich could corroborate this, but that's my best guess right now.


In practice that does seem to mean that collection functions when passed nil as the coll seem to default to treating it like a list, while sequence functions when passed nil seem to default to sequences, and transducers when not passed an init coll seem to default to vector.


In that sense, things are consistent, kind of like @U2FRKM4TW was saying:

(conj nil 1)   ;=> (1)
(into nil [1]) ;=> (1)
(conj)         ;=> []
(into)         ;=> []

But honestly, there are inconsistencies as well:

(pop '())   ;=> java.lang.IllegalStateException: Can't pop empty list
(pop nil)   ;=> nil
(empty '()) ;=> ()
(empty nil) ;=> nil

So I'd go and say it's "as consistent" as people managed to make it, but clearly over time, as things were added, it didn't always follow the most consistent logic.

👍 3

The default for no-arg conj to return [] is as old as (conj nil x) returning (), I believe. What evidence do you have that it dates to when transducers were added @U0K064KQV ?


If you haven't watched this talk, or read the transcript, I would recommend it, if you are interested in how early some of these decisions were made (the talk doesn't explain all decisions, just some). It might help eliminate some of the guessing.


Hum, the git blame attributes it to the "transducers wip" commit. Maybe it was there before somehow as well; I didn't go further back in the history.


Yup, if I go further back in history, conj does not have a 0-arity, only the 2-arity and the variadic one.


This is the source of conj before the transducers commit:

(fn ^:static conj 
        ([coll x] (. clojure.lang.RT (conj coll x)))
        ([coll x & xs]
         (if xs
           (recur (conj coll x) (first xs) (next xs))
           (conj coll x)))))


I see it now. Yes, 2014, "transducers wip" commit.


And the RT conj impl is the one that has the nil handling code:

static public IPersistentCollection conj(IPersistentCollection coll, Object x){
	if(coll == null)
		return new PersistentList(x);
	return coll.cons(x);
}


That has been there since 1.0 and probably since beginning.


Yup, since September 2007 it seems, before that it used to be:

static public IPersistentCollection conj(Object x, IPersistentCollection y) {
	if(y == null)
		return new PersistentList(x);
	return y.cons(x);
}


Which is interesting, the coll came as the second arg


Oh interesting, I think Rich copy/pasted the code for cons, and renamed it conj. And then, in the next commit he changed the impl, that's why at first it has the arg order of cons.


Actually, I wonder if that's why conj defaults to an empty list: cons did, which made sense since cons constructs a list, and the code was copy/pasted from it and already had that behavior. So I think that's what happened 😛

👍 3

If I want to cache a single value forever, is it OK to use something like:

(def cached-value (atom nil))

(defn compute [args]
  (or @cached-value (reset! cached-value (real-compute args))))
(assuming the result is never false so the use of or is safe)


Forgot to specify - real-compute is pure. I don't care how many times it ends up being called in the end.

Robert Mitchell21:01:23

Why not just have compute wrap real-compute with memoize?
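
For illustration, a sketch of the memoize suggestion, with a stand-in real-compute (the actual function isn't shown in the thread):

```clojure
;; real-compute here is a hypothetical stand-in for the expensive pure fn.
(defn real-compute [args]
  (reduce + args))

;; memoize caches results keyed by the full argument list
(def compute (memoize real-compute))

(compute [1 2 3]) ;=> 6, computed once; equal args hit the cache afterwards
```

The catch is that memoize keys its cache on the arguments, so every call pays the cost of hashing and comparing them, which is exactly the objection raised below.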


I can't think of any reason that would be unsafe. You can also use delay instead of an atom.
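
A sketch of the delay variant, assuming (hypothetically) the inputs are known at definition time; the body runs at most once, on first deref, and delay handles the thread coordination for you:

```clojure
;; Hypothetical stand-ins: the real args and computation aren't in the thread.
(def fixed-args [1 2 3])
(defn real-compute [args] (reduce + args))

(def cached-value (delay (real-compute fixed-args)))

(defn compute []
  @cached-value)  ; first deref computes; subsequent derefs reuse the value
```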


In your compute function, if you ever call compute with different args that cause real-compute to return different results, then you have a race over which of those results ends up being stored in the atom.


(you have a race if you call real-compute from multiple threads, at least)


Thanks, right! Apparently it was delay that induced that itch that made me ask the question in the first place. @U01BQ9141UN Because args are costly to compare and I don't care if they're different. I only care about the very first time it's called.

👍 3

Ah, bummer - delay needs to create a closure and it needs to be stored somewhere. atom it is then.

Robert Mitchell21:01:22

In that case, you could also use a promise.

👍 6
Robert Mitchell21:01:18

Something like:

(def first-value (promise))
(defn compute [args]
  (if (realized? first-value)
    @first-value
    (deliver first-value (real-compute args))))


A small correction to your code - seems like deliver returns the promise itself, so it also needs @.

Robert Mitchell21:01:57

Ah, right. Good catch!


Ah, and it can return nil. So it's not really thread-safe, I think.


And promise is just a wrapper on top of atom. :D OK, I'll definitely stick with atom.


I think real-compute might get called twice if your app is multithreaded. Maybe something like this would be better:

(defn compute [args]
  (swap! cached-value (fn [v] (or v (real-compute args)))))


When you say "it can return nil, so it's not really thread-safe, I think", are you referring to promises?


@U0DTSCAUU As I mentioned - I don't care if it's called multiple times. But your code snippet makes sense, thanks! @U0CMVHBL2 Yes, promises. deliver can return nil.


Promise is definitely thread safe


But both of the implementations I'm seeing here don't seem to be thread safe


But it's not clear what you mean by thread safe exactly. Like your code can have two callers race for who gets to set the value first


And that's true with deliver on a promise as well


Like I think you're trying to only run real-compute once?