Nom Nom Mousse10:01:07

Why doesn't flatten flatten sets?

(flatten #{1}) ;; ()
(flatten (into [] #{1})) ;; (1)

Ben Sless10:01:04

sets are not sequential things:

Takes any nested combination of sequential things (lists, vectors,
  etc.) and returns their contents as a single, flat lazy sequence.
  (flatten nil) returns an empty sequence.

👎 1
Ben Sless10:01:22

But flatten is suspect, why do you want to use it?

Nom Nom Mousse10:01:59

I think it does have its uses though 🙂 I have a sequence of keyword string pairs like: '([:a "a"] [:c "c"]) and I want to get all the keywords. So (filter keyword? (flatten myseq)).

Ben Sless11:01:54

Flatten is wrong here, too; at the least you should (apply concat ,,,). But why not just (filter keyword? (map first xs))?
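A quick sketch of the difference, using a hypothetical `pairs` binding for the data from the question above:

```clojure
;; `pairs` is a hypothetical name for the sample data above.
(def pairs '([:a "a"] [:c "c"]))

;; Works, but says nothing about the shape of the data:
(filter keyword? (flatten pairs))    ;; => (:a :c)

;; States explicitly that the keyword is the first element of each pair:
(filter keyword? (map first pairs))  ;; => (:a :c)
```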

Ben Sless11:01:18

You know the shape of your data, flatten is ambiguous, first isn't

Ben Sless11:01:37

But if these came out of a map, why not just use (keys m)?

Al Z. Heymer14:01:01

indeed. A hash-map with keys would be the way to go. For example you could use

(keys (into {} coll))
If you `into {}` a seq of 2-element vectors, it will give you a hash-map.

Nom Nom Mousse14:01:07

I appreciate the tips 😄


flatten works with sequential things, and a set is not sequential: (sequential? #{}) ;; => false


user=> (into [] (comp cat (filter keyword?)) '([:a "a"] [:c "c"]))
[:a :c]


user=> (filter keyword? (tree-seq coll? seq '([:a "a"] [:c "c"])))
(:a :c)

Ben Sless19:01:48

If this is only for the first element of the pair, any form of flattening is actually a bug; consider ([:a :b])


Usually when you think flatten you want mapcat identity
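For instance, a minimal sketch of the difference: `mapcat identity` removes exactly one level of nesting, so the tuples survive, while `flatten` destroys them.

```clojure
;; One level of flattening: tuples are preserved.
(mapcat identity [[[:a 1] [:b 2]] [[:c 3]]])
;; => ([:a 1] [:b 2] [:c 3])

;; `flatten` recurses all the way down and destroys the pairs.
(flatten [[[:a 1] [:b 2]] [[:c 3]]])
;; => (:a 1 :b 2 :c 3)
```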

Nom Nom Mousse08:01:32

Is there a blog post or something about why flatten is so bad? It seems innocuous XD

Ben Sless08:01:29

There are two problems. First, it eats up everything, which is a polite way of saying "I don't know what my data looks like"; that holds up right until your data contains something you did not expect and you get a bug. Second, terrible performance. There's always another solution which is more specific to what you actually want to accomplish

🙏 1
Nom Nom Mousse09:01:12

The former is a very good point! When using flatten the code says little about what your data looks like.

Ben Sless12:01:57

I'll take it farther. Code is a medium of communication, what are you communicating to yourself when you use flatten? "I'm not sure what's in this, fingers crossed". Does that make you uncomfortable? It should 🙃

Nom Nom Mousse10:01:00

----------- Does anyone want to speculate why stuartsierra/dependency does not allow one to add unconnected nodes? I've found a workaround: add a dependency between the node you want to add and a dummy and then remove the dummy afterward. Feels clunky though.

(def g (dep/depend (dep/graph) "a" :dummy))
;; => #'user/g
user> g
;; => #com.stuartsierra.dependency.MapDependencyGraph{:dependencies {"a" #{:dummy}},
;;                                                    :dependents {:dummy #{"a"}}}
user> (dep/remove-all g :dummy)
;; => #com.stuartsierra.dependency.MapDependencyGraph{:dependencies {"a" #{}},
;;                                                    :dependents {}}

Nom Nom Mousse10:01:02

It does keep the library minimal though. Perhaps I like this design choice.


Perhaps because in dependency resolution there can be no unconnected nodes. A library is either a dependency somewhere or it's not used at all.

😄 1
Nom Nom Mousse13:01:53

Yes, the origin is dependency resolution I see from the


hi all, question about regex: why does (str/split "'s-Hertogenboschplein Almere" #"\b") work as expected (e.g. splits at word boundaries), but (re-matches #"\b" "'s-Hertogenboschplein Almere") returns nothing? thanks


and a more important question is how I can split at word boundaries except when it is 's-, thus 's-Hertogenboschplein should not be split


re-matches checks for a full match. Try (re-matches #"a" "ab") - it will return nil. And #".*\b.*" will simply match any string that has at least one word character - there is a more obvious way to achieve that, via \w.


what does "a full match" mean?


It means that the whole string matches the regex and not just any substring. Same as if your regex had ^ and $ around it.


then another question: how can I find word boundary indices?


Pretty sure you'd have to use interop with Java's Matcher for that. Construct a matcher using re-matcher, then call .find on it, check if it returns true and if so, call .start and save its return value. Repeat till .find returns false.
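A sketch of that interop approach; `boundary-indices` is a hypothetical helper name, and the loop relies on Matcher.find advancing past zero-width matches:

```clojure
;; Collect the start index of every #"\b" (word boundary) match
;; by driving java.util.regex.Matcher directly.
(defn boundary-indices [s]
  (let [m (re-matcher #"\b" s)]
    (loop [idxs []]
      (if (.find m)
        (recur (conj idxs (.start m)))
        idxs))))

(boundary-indices "'s-Hertogenboschplein Almere")
;; => [1 2 3 21 22 28]
```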



👍 1

there is \W which matches a non-word char. I haven't given it a lot of thought but you might be able to use that too to find the index of the first non-word char


thanks, but I’ve picked a totally different approach

Joshua Suskalo14:01:59

there's re-find and re-seq in clojure too that don't match the whole string

Joshua Suskalo14:01:59

doing re-seq with \w would give you a sequence of all the words
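For example, with \w+ to grab maximal runs of word characters:

```clojure
;; Every non-overlapping match of one-or-more word characters.
(re-seq #"\w+" "'s-Hertogenboschplein Almere")
;; => ("s" "Hertogenboschplein" "Almere")
```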

Jim Newton15:01:30

As I understand it, some people have done some work on a monadic library for clojure. I’ve discovered a very interesting feature in Scala which I’ve also needed in clojure. Does the clojure monadic library have a foldM (monadic fold) function? It sounds exotic, but it really serves a useful purpose which does not depend on category theory to understand.

Alex Miller (Clojure team)15:01:15

I believe the fluokitten lib has fold

Jim Newton15:01:56

taking a look at fold in fluokitten, and it does not seem to do what I hoped it did. but I admit I don’t understand the code.


do you really need an abstract monad interface, or are you implementing this for a specific monad? i.e. a list or option


oh I see below you reference reduced

Jim Newton15:01:49

The issue I’m referring to is that the clojure reduce function has a sister function called reduced which sometimes allows you to escape (return early) from the fold operation, when you’ve found the value you’re looking for, or when your iteration has converged. The problem is that it does not work on reentrant code: if the function passed to reduce calls another function which itself calls reduce, then reduced only returns from the inner-most reduce, not the reduce it is lexically within.
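A sketched illustration of the pitfall, with a hypothetical `inner-sum` helper: `reduced` inside the inner reduce only short-circuits that inner reduce, never the outer one.

```clojure
;; Sums numbers, but stops at the first negative number.
(defn inner-sum [xs]
  (reduce (fn [acc x]
            (if (neg? x) (reduced acc) (+ acc x)))
          0 xs))

;; The outer reduce keeps going: the inner reduce already unwrapped
;; the `reduced` value, so nothing escapes past it.
(reduce + 0 (map inner-sum [[1 2] [3 -1 100] [4]]))
;; => 10
```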

🧠 1
Alex Miller (Clojure team)15:01:21

you can of course detect this case and there are some places we do this inside transducers

Alex Miller (Clojure team)15:01:57

with things like reduced?; other related ones are ensure-reduced and unreduced
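A quick tour of those helpers at the REPL:

```clojure
(reduced? (reduced 42))        ;; => true, `reduced` is just a wrapper
(unreduced (reduced 42))       ;; => 42, unwraps a reduced value
(unreduced 42)                 ;; => 42, passes plain values through
(reduced? (ensure-reduced 42)) ;; => true, wraps only if not already wrapped
```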

Jim Newton15:01:46

interesting. is the case always detectable or sometimes detectable?

Alex Miller (Clojure team)15:01:02

reduced is just a wrapper, so you can always detect that a value is a reduced value with reduced?

Alex Miller (Clojure team)15:01:53

if the outer reduce checks for a reduced value from the inner reduce, it could return it directly (or use ensure-reduced) or whatever

Jim Newton15:01:24

what if the outer function does not know a black-box function it is calling is implemented in terms of reduce/reduced?

Jim Newton15:01:57

for example, if an old implementation was using recursion and it gets refactored to use reduce, it might start failing if the function that is calling it used `reduced`.

Jim Newton15:01:55

or perhaps a better example: can I use reduced when reducing a lazy list?

Ben Sless15:01:14

I'll need to check to make sure but I think the cat transducer satisfies this property, might be worth checking how it's implemented

Alex Miller (Clojure team)16:01:00

this is a little too abstract for me, do you have an example?

Noah Bogart16:01:53

Is it possible to make a record destructure like a vector (`(let [[k v] (->record 1 2)] [k v])` produces [1 2])?

Nom Nom Mousse16:01:58

(defrecord hi [a b]) 
(def r (->hi 1 2))
(let [{:keys [a b]} r] [a b]) ;; [1 2]


why prefer positional names to reified names?


you can do something like call seq on the record, but that won’t guarantee an order


Or does @U0232JK38BZ answer your actual question? (I read, “I really wanna do it positionally. How?” when maybe you meant, “How do I destructure records in general?“)

Noah Bogart16:01:13

I have a bunch of functions that expect a sequence of 1 or 2 element tuples (vectors), and I want to be able to write nested sequences of those tuples at the point of creation and then call flatten without flattening the tuples themselves, so that the munging function converts from arbitrarily nested tuples to sequence of tuples

Noah Bogart16:01:43

the easiest way to do this is with a map, but that requires changing all of the other functions too, which is annoying and needless because the concept of a “tuple” is all I need.

Noah Bogart16:01:51

so i was thinking of implementing a protocol that would allow this destructuring on a record to make creation of the tuple not affect any other part of the app


Yeah I’m not following at all 😄


(Not a judgment, just a fact.)


If you know your map will only have 2 elements (i.e. it will always be an array-map), you can just call vals to convert to a tuple, yeah?


I dunno, just throwing out random factoids at this point, because I don’t understand. 🙃

Noah Bogart16:01:58

i’m sorry, i’ve not done a great job of describing this lol

Joshua Suskalo16:01:18

All you need to do for this type of destructuring is to implement ISeq or ISeqable and make it so that you return a sequence of items in the order you want.
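A hypothetical sketch of that idea. Since defrecord already implements the map interfaces (including seq) and won't let you override them, this uses deftype; the clojure.lang.Sequential marker is added so nth-based sequential destructuring can walk the seq.

```clojure
;; `Pair` is a hypothetical two-element type that destructures
;; positionally, like a vector.
(deftype Pair [k v]
  clojure.lang.Sequential
  clojure.lang.Seqable
  (seq [_] (list k v)))

(let [[k v] (->Pair :a 1)]
  [k v])
;; => [:a 1]
```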

Noah Bogart16:01:43

a valid input looks like [[:a 1] [:b [:example 2]] [:c]]. when including conditions, I want to be able to write [[:a 1] (when condition [[:b [:example 2]] [:c]])] instead of [[:a 1] (when condition [:b [:example 2]]) (when condition [:c])]. there’s a single function get-fx that consumes the input and then passes it to many other functions, all of which expect it to be shaped as a sequence of pairs. I can’t just call flatten because the pairs sometimes only have 1 element and sometimes the second element in the pair itself is a pair or a vector

Noah Bogart16:01:53

ope, interesting, okay


Don't use flatten at all. Instead, I would wrap a conditional group of tuples in something and check for that something in a custom flattening function. Like [[:a 1] (when x? {:items [[:b 2] [:c 3]]}) [:d 4]].
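A sketch of that wrapper idea, with a hypothetical `normalize` function: conditional groups are marked with a map carrying :items, and a single splicing pass restores the flat sequence of tuples.

```clojure
;; Splice wrapped groups, drop nils from false `when` branches,
;; and leave plain tuples untouched.
(defn normalize [steps]
  (mapcat (fn [step]
            (cond
              (nil? step) []            ;; a (when false ...) branch
              (map? step) (:items step) ;; a wrapped conditional group
              :else       [step]))      ;; an ordinary tuple
          steps))

(normalize [[:a 1] (when true {:items [[:b 2] [:c 3]]}) [:d 4]])
;; => ([:a 1] [:b 2] [:c 3] [:d 4])

(normalize [[:a 1] (when false {:items [[:b 2] [:c 3]]}) [:d 4]])
;; => ([:a 1] [:d 4])
```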

☝️ 1

If something is a tuple - let it be a tuple. Don't create artificial wrappers only so that you can use them with one very specific function.

Noah Bogart17:01:17

interesting thought, thanks

Joshua Suskalo17:01:48

Couldn't you do

(cond-> [[:a 1]]
  condition (conj [:b [:example 2]] [:c]))

Joshua Suskalo17:01:09

conj takes arbitrarily many arguments

Noah Bogart17:01:31

oh hm, i forgot about cond-> . yeah, that would probably work here

Joshua Suskalo17:01:31

also my favorite -> helper functions

(defn when-pred [value pred then]
  (if (pred value) (then value) value))
(defn if-pred [value pred then else]
  (if (pred value) (then value) (else value)))
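Given when-pred above, a small usage sketch in a thread where the test depends on the value flowing through:

```clojure
(-> 5
    (when-pred odd? inc) ;; 5 is odd, so inc fires, giving 6
    (* 10))
;; => 60
```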

donotwant 1
Joshua Suskalo17:01:47

I have found neither of these functions in this useful form in any of the utility libraries

Joshua Suskalo17:01:35

this helps when you want to do cond-> but have each condition depend on the current value

Ben Sless17:01:03

Seconding "it's never flatten"

Joshua Suskalo18:01:31

I like that, although I think it's important to still have the functions for it to avoid the need for as-> in both the pred and function separately

Joshua Suskalo18:01:49

which is definitely sometimes needed


Argh. We use timbre for logging and it's been fine so far, we have com.fzakaria/slf4j-timbre to make sure any logs from libs using slf4j end up going via Timbre. But I've just recently tried to add/work with 2 different libs that use ch.qos.logback/logback-classic. Apparently both that and slf4j-timbre provide a StaticLoggerBinder implementation, so I get errors about multiple slf4j bindings and it picking one (the Timbre one). For the first lib I fixed that by excluding logback-classic, and haven't seen any issues thus far, but for the second lib it fails to load properly if I do that, and if I have both slf4j libs installed it bombs because it can't cast the Timbre adaptor to the logback classic Logger. How can I resolve this? Can I make Timbre use logback-classic somehow, instead of the fzakaria module? Can I define a custom logger implementation to both, that is used instead of StaticLoggerBinder, so that the compatibility isn't an issue?


Logback is just an SLF4J implementation. I’m going to guess that — even though you’ve removed logback-classic — some other logback library is still on the classpath (e.g. logback-core)


logback-core was there as well, yes. I had excluded it too, but that also resulted in the lib bombing on load.