
@ackerleytng the same way you’d use spec to validate regular data; you’d read the edn config and then use s/valid? or other means to check it against a spec


Eg you could s/def a s/keys spec that validates the structure of the config map
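A minimal sketch of that idea, using hypothetical config keys (::host and ::port are illustrative, not from the conversation):

```clojure
(require '[clojure.spec.alpha :as s])

;; specs for individual config values
(s/def ::host string?)
(s/def ::port pos-int?)

;; :req-un requires the *unqualified* keys :host and :port in the map
(s/def ::config (s/keys :req-un [::host ::port]))

(s/valid? ::config {:host "localhost" :port 8080}) ;; => true
(s/valid? ::config {:host "localhost"})            ;; => false (missing :port)
```

So after reading the edn file, you would call `(s/valid? ::config parsed-config)`, or `(s/explain-data ::config parsed-config)` to get a description of what failed.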


@dadair thanks! I found :req-un really useful!


hey guys, I read two books (Clojure for the Brave and True, and Clojure High Performance Programming). I’m intrigued because both books talk about lazy-seq: Brave and True says the code below is super stupid because we are going to create intermediate collections, and Clojure High Performance Programming says exactly the opposite (a lazy seq does not produce intermediate collections and we are running the code in O(n))

(->> (list "Shake" "Bake")
     (map #(str % " it off"))        ; 1
     (map clojure.string/lower-case) ; 2
     (into []))
my question is: will the following code produce intermediate collections or not? my understanding is that it will not create any intermediate collections; both lazy seqs (1 and 2) will memoize, which will increase the memory footprint. is that correct? which book is correct?

Alex Miller (Clojure team)10:04:04

1 and 2 will both create intermediate sequences


can’t see that 😞 I thought the lazy seq would be realized only on the (into [])


@alexmiller 1 will return a lazy-seq to 2, right ? ((map clojure.string/lower-case) lazy-seq-1)


which will pass another lazy-seq to into


(take 5 ((map clojure.string/lower-case) lazy-seq-1))


where the new collection is being created ?

Karol Wójcik11:04:42

every map returns a lazy-seq; in order to run the code in O(n) you have to use transducers

👍 4
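For reference, the transducer version of the snippet above composes both map steps into a single pass, with no intermediate lazy seqs:

```clojure
;; comp fuses the two mapping steps into one reducing function;
;; each element flows through both transforms straight into the vector
(into []
      (comp (map #(str % " it off"))
            (map clojure.string/lower-case))
      (list "Shake" "Bake"))
;; => ["shake it off" "bake it off"]
```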

that is because lazy-seq stores the previously realized values?

Alex Miller (Clojure team)11:04:42

Yes, lazy seq memoizes computed values
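The memoization is easy to observe with a side effect in the mapping function (a small demo sketch, not from the conversation):

```clojure
;; the println fires once per element on first realization...
(def xs (map #(do (println "computing" %) (inc %)) '(1 2 3)))

(doall xs) ;; prints "computing 1", "computing 2", "computing 3"
(doall xs) ;; prints nothing: the realized values were memoized
```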


yeah I see, so both 1 and 2 are memoizing the values, which is what counts as intermediate collections


is that correct ?

Alex Miller (Clojure team)11:04:39

However I would still say the seq version is O(n)

Alex Miller (Clojure team)11:04:12

Because this realization is pipelined


now makes sense


yeah, I still think the code is O(n) in performance

Alex Miller (Clojure team)11:04:08

into is implemented with reduce, which walks the lazy seq asking for the next value, which asks the top lazy seq, which asks the next lazy seq, which asks the list

❤️ 8
👏 8
👍 4
😀 4
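That reduce-based pipelining can be sketched like this (an illustration of the equivalence, not the literal implementation — into actually uses transients internally for speed):

```clojure
;; into over a seq is essentially reduce + conj:
;; each conj pulls one value through the whole lazy pipeline
(reduce conj [] (map inc '(1 2 3))) ;; => [2 3 4]
(into   []     (map inc '(1 2 3))) ;; => [2 3 4]
```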
Karol Wójcik11:04:39

aaaaa now I understand

Alex Miller (Clojure team)11:04:56

In this case, map is also chunking into 32 element batches to amortize the cost
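The chunking is also observable with a side effect (a small demo; ranges and vectors are chunked sources):

```clojure
;; asking for just the first element realizes the whole first 32-element chunk
(def v (map #(do (print ".") %) (range 100)))

(first v) ;; prints 32 dots, not 1, then returns 0
```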

Karol Wójcik11:04:52

@alexmiller thank you for the explanation, you are really a great teacher

👍 4
Alex Miller (Clojure team)11:04:30

The transducer version would create one composite reduce function, walk the list, and drop the results into the output vector

Alex Miller (Clojure team)11:04:04

Still O(n), but no intermediate caching seqs

Alex Miller (Clojure team)11:04:36

So less object creation and gc


that is awesome !


Are there any guides on setting up an in-memory database for testing? Creating the database in a fixture, adding some records for one group of tests, and so on?


Michael Nygard had a good video on YouTube using Datomic, but I have to use a relational database, so I will need to use H2 or SQLite :mem: instead.


You could use Docker to keep a database container in a known state that you can re-use, and run specific tests against that container. With H2 you could also save to disk, so you could keep the needed resources and configuration under your test setup.
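A fixture-based sketch of the H2 in-memory approach, assuming next.jdbc and the H2 driver are on the classpath (the table and data here are hypothetical):

```clojure
(require '[clojure.test :refer [deftest is use-fixtures]]
         '[next.jdbc :as jdbc])

;; "h2:mem" gives an in-memory H2 database; depending on your setup you may
;; need DB_CLOSE_DELAY=-1 in the URL so data survives between connections
(def db {:dbtype "h2:mem" :dbname "testdb"})

(defn with-schema
  "Fixture: create and seed the schema before the tests, drop it after."
  [f]
  (let [ds (jdbc/get-datasource db)]
    (jdbc/execute! ds ["create table users (id int, name varchar(32))"])
    (jdbc/execute! ds ["insert into users values (1, 'alice')"])
    (f)
    (jdbc/execute! ds ["drop table users"])))

(use-fixtures :once with-schema)

(deftest finds-seeded-user
  (let [ds (jdbc/get-datasource db)]
    (is (= 1 (count (jdbc/execute! ds ["select * from users"]))))))
```

For per-test-group data you can use `use-fixtures :each` with a fixture that inserts and deletes just that group's rows.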