
How would I go about splitting

[{:id 123 :group "dragons"}
 {:id 124 :group "dragons"}
 {:id 125 :group "orcs}]
[[{:id 123 :group "dragons"}
  {:id 124 :group "dragons"}]
 [{:id 125 :group "orcs}]]
? I’m feeling it’s with split-with but I can’t figure out the predicate :thinking_face: (I don’t know the values of :group beforehand)


you could use `(group-by :group your-map)`


group-by should work

user=> (def d [{:id 123 :group "dragons"}
 {:id 124 :group "dragons"}
 {:id 125 :group "orcs"}])
user=> (group-by :group d)
{"dragons" [{:id 123, :group "dragons"} {:id 124, :group "dragons"}], "orcs" [{:id 125, :group "orcs"}]}
user=> (->> d (group-by :group) vals vec)
[[{:id 123, :group "dragons"} {:id 124, :group "dragons"}] [{:id 125, :group "orcs"}]]


brilliant I always forget about group by thanks


you can get the literal result you specified (except lazy-seq instead of vector) with partition-by

org.noisesmith.expecting=> (partition-by :group input)
(({:id 123, :group "dragons"} {:id 124, :group "dragons"}) ({:id 125, :group "orcs"}))
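One caveat worth noting (an aside, not from the thread): partition-by only splits on *consecutive* runs, so it matches group-by only when the groups are already adjacent in the input. A quick sketch:

```clojure
;; partition-by only splits on consecutive runs, so if the groups
;; aren't adjacent in the input you get more partitions than groups:
(def unsorted [{:id 1 :group "dragons"}
               {:id 2 :group "orcs"}
               {:id 3 :group "dragons"}])

(partition-by :group unsorted)
;; => three partitions, "dragons" appears twice

;; sorting by :group first makes it behave like group-by:
(partition-by :group (sort-by :group unsorted))
;; => two partitions
```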


also your sample input is missing a " twice


How do people get into this "repl driven development" for diagnosing more complicated issues? I see how it is useful for evaluating plain old def's but what about something less straight forward? For instance if I have a reframe app with some complicated state subscription I might have something like

 :<- [:form/errors]
 (fn [errors [_ id]]
   (get errors id)))
Here I can evaluate the value of the function
(fn [errors [_ id]]
   (get errors id))

=> #object[re_frame$subs$subs_handler_fn]
But how do you evaluate that function with some args passed into it? Like maybe I want to see what would be
( (fn [errors [_ id]]
   (get errors id))  [] [nil :error])

Adrian Smith08:07:57

There are a few ways to do this. I think one of the common ones, at least for ClojureScript, would be to add defs for the locals — (def errors errors) and (def id id) — inside the function; then you'll be able to run the expression in your REPL with your variables intact. In Clojure I'd combine the Cursive debugger with the REPL by placing a breakpoint, use the REPL to get to the breakpoint, then use the expression window to re-run the expression in context. If I was doing lots of ClojureScript I'd be looking at leveraging Chrome's inbuilt debugger whilst still viewing the debug code as ClojureScript, to do a similar trick to the above. There are definitely more approaches; if you haven't already, maybe ask around in #re-frame
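As a sketch of that first trick (the function and data names here are invented for illustration, not from the original app):

```clojure
;; Temporary debugging defs inside a function capture its locals
;; into namespace-level vars the REPL can inspect. Remove them after!
(defn error-for-id [errors [_ id]]
  (def captured-errors errors)
  (def captured-id id)
  (get errors id))

;; run it once with real(ish) data:
(error-for-id {:name "required"} [:form/error :name])
;; => "required"

;; now the captured locals are available in the REPL:
(get captured-errors captured-id)
;; => "required"
```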


I tend to write rich comment blocks next to where the code I'm working on is and evaluate code directly from there. So something like:

 :<- [:form/errors]
 (fn [errors [_ id]]
   (get errors id)))

  (let [f (fn [errors [_ id]]
            (get errors id))]
    (f [] [nil :error]))
and I hit Ctrl-c twice in CIDER to evaluate the let block (but I'm pretty sure VSCode and so on have a similar shortcut). When I've fixed the bug or written the feature, I copy the code into the correct place and usually delete the rich comment block. Sometimes I leave the RCB as a bit of documentation or development helpers at the bottom of the file.

What I like about this is that it makes the dependencies of the code I'm writing very explicit and obvious. If I start introducing something like a side effect that hits an external service into code that could be more functional, it becomes obvious and painful really quickly, so it helps me keep that to a minimum. One thing that helps with this workflow is knowing how to select complete s-expressions in my editor, so they can be copy-pasted around with ease. I hit Ctrl-Alt-Space to select the next form, up to the matching bracket, to move code around. I'm pretty sure that VSCode etc. has similar functionality available as an extension (if it's not built in to the Clojure mode you've got installed).
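For reference, the rich-comment-block version of the example above might look like this (the `(comment ...)` wrapper is what makes it "rich"):

```clojure
;; Forms inside (comment ...) never run at load time, but can be
;; evaluated one at a time from the editor.
(comment
  (let [f (fn [errors [_ id]]
            (get errors id))]
    (f [] [nil :error]))
  ;; => nil - no errors recorded

  (let [f (fn [errors [_ id]]
            (get errors id))]
    (f {:error "boom"} [nil :error]))
  ;; => "boom"
  )
```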

Tomas Brejla08:07:40

As Adrian noted, for debugging in the REPL it's often important to "catch some real realtime values". Lately I've found myself simply copy-pasting the defn's name and argument vector, changing it to a def and adding * to the symbol name. Then I place this def form as the first thing executed in my function. For example, if I have a function like this:

(defn somefn [a b c]
  (+ a b c))
Then I change it to:
(defn somefn [a b c]
  (def somefn* [a b c])
  (+ a b c))
Then I make sure the function gets called somehow in realtime, so that the inputs get captured into somefn* symbol. So typically I click some button in my SPA app, it performs call to my backend and that effectively somehow calls that function I'm trying to debug. Once that happens, I make sure in the REPL, that somefn* really holds those captured parameter values. Once that's done, I can easily call the function from REPL using apply:
(apply somefn somefn*)
In fact I often temporarily wrap the (apply) around the original defn:
(-> (defn somefn [a b c]
      (def somefn* [a b c])
      (+ a b c))
    (apply somefn*))
This way I can easily modify the original body of somefn and immediately evaluate the current top-level form (`alt+enter` in calva) without having to move my cursor. Once I'm happy with the modified behavior, I simply remove that def form, the -> threading macro, and the apply function call. Thanks to structural editing features, that's a matter of a few seconds. All this works nicely even for my full-stack app (clj on BE, cljs on FE) and even for "non-trivial" captured parameters (such as a parameter holding a reference to a crux db database) which I wouldn't be able to easily prepare in a rich comment.

Tomas Brejla08:07:52

Btw I believe one could create a macro for what I described, but I'm fine with doing what I described. I don't need to debug in this way too often and it doesn't hurt that much to add/remove those few lines.
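Such a macro could be as small as this (a sketch, not a published library — capture! is an invented name):

```clojure
;; Hypothetical helper: (capture! somefn a b c) expands to
;; (def somefn* [a b c]), mirroring the manual trick above.
(defmacro capture! [fname & args]
  `(def ~(symbol (str fname "*")) [~@args]))

(defn somefn [a b c]
  (capture! somefn a b c)
  (+ a b c))

(somefn 1 2 3)
;; => 6, and somefn* now holds [1 2 3]

(apply somefn somefn*)
;; => 6 (replays the captured call)
```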


for capturing values from a running program, I tend to use tap> with reveal, but there's also portal, which is a bit more cljs friendly. But to be honest, before switching to reveal, I tended to use prn to print the values I wanted to capture and copy-and-paste them from the repl buffer.

Tomas Brejla09:07:59

Unfortunately sometimes the captured parameter can't be serialized into edn (and thus printed & copy-pasted to the repl buffer). An example of such a parameter would be some sort of DB connection (jdbc connection, crux db reference, ...). Sure, you may somehow build such a parameter value in your repl, or grab a reference to it from some central place (global ref, component/integrant/... system etc.). But it's additional effort, and it's possible that you actually end up with slightly different behavior (and therefore perhaps the bug you're trying to find doesn't even appear this way).


> Unfortunately sometimes the captured parameter can't be serialized into edn

yeah ... that's what I meant by "becomes obvious and painful really quickly" ... Clojure has a bunch of different solutions to this, with the simplest being a var, and more complex solutions like component, integrant, etc. being built on top. By making these things as few as possible, more of the code becomes more functional, with explicitly managed dependencies.

truestory 2

I liked this talk by Stu Halloway, which describes this sort of interaction. He talks about "aim small, miss small", which I think is a helpful tactic for turning complicated issues into isolated issues, so you can fix them 😉


Wow thanks everyone for such great suggestions, this really helps a lot, more than enough to get me started!

Tomas Brejla15:07:16

Thanks for the video. Btw the mentioned part about "aim small, miss small" starts at 25:40, and a good real-world example at 29:50 👏 So back to the original case of (get errors id) nested in (fn ..., it could help to temporarily add (def errors errors) and (def id id) to that fn, have it executed once, and then you can use your editor to evaluate sub-forms in that function. I just tried that inside a REPL and it works nicely:

;; (1)
(defn foo []
   (fn [errors [_ id]]
     (def errors errors)
     (def id id)
     (get errors id)))

;; (2)
((foo) [:first-error :second-error] [:ignored-param 0])
;; => :first-error
Once I evaluated (2), I was able to evaluate just the (get errors id) part of (1) in calva using ctrl+enter with the cursor just before the opening bracket or just after the closing bracket.

Tomas Brejla15:07:41

BTW I'd say that when using these approaches, one needs to be really careful not to leave those defs there for too long. I can imagine that you can easily leave something like the mentioned (def id id) in multiple functions in the same namespace and get really confused really soon as the same var might get reassigned from multiple places. (some of those fns can even get called "outside of your repl" - for example via a http request coming periodically from your browser etc.)


If I have a long list of and conditions, as follows: if cond1 and cond2 and cond3 and cond4..., is there a nice Clojure way of writing this? cond and condp don't fit the bill... neither does when... nor case


(if (and cond1 cond2 cond3 cond4 ...) ...)

👍 6
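If the conditions happen to be predicates applied to the same value, `every?` can also read nicely (a hypothetical example, names invented for illustration):

```clojure
;; (and c1 c2 ...) works for plain values; every? reads well when
;; the conditions are predicates over one input:
(defn valid-order? [order]
  (every? #(% order)
          [:paid? :shipped? (comp pos? :quantity)]))

(valid-order? {:paid? true :shipped? true :quantity 3})
;; => true
(valid-order? {:paid? true :shipped? false :quantity 3})
;; => false
```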

Is there a clojure function which unkeywords keys? e.g. if a key is :keyword then is there something which'll (unkeyword :keyword) => "keyword"?


there are two functions: • namespace • name


“unkeyword” is not a thing in clojure, but those two functions return the underlying keyword’s building blocks. The first returns the namespace of a qualified keyword (and nil for simple keywords). The second returns its name.


Yeah, name is great. (keyword (name :hax)) returns :hax; (name (keyword "hax")) returns "hax".


this keyword is a very clojure specific concept and takes a second to wrap your head around... Muchos Gracias!!


in python you don't have the :


That is just the syntax; there are some concepts available in Python and not present in Clojure, like generator expressions


late to the party, but the idiom I've come to is this:

(defn kw->str [kw]
  (subs (str kw) 1))

#_(kw->str :foo/bar) ; "foo/bar"
#_(kw->str :long-keyword) ; "long-keyword"
reliably converts keywords to strings


Is there any such library that would offer support for python/javascript-style generator/iterator kind of coding? Basically some kind of continuation-style way of defining collections sort of like:

(defn evens []
  (loop [x 2]
    (yield x)          ; pseudocode - clojure has no yield
    (recur (+ x 2))))


there's also iterate which will produce a lazy-seq (take 5 (iterate #(+ 2 %) 0))


this is useful since each result is a function of the previous yielded value

Rupert (All Street)17:07:37

Core Async go-loops can be used to achieve a style like this, but might be overkill. Often you can use combinations of seq processing functions (map, reduce, filter etc) to cleanly process the data.


you can also use a lazy-seq


which can be generated by demand but also treated like a list



(defn evens
  ([] (evens 2))
  ([x]
   (lazy-seq (cons x
                   (evens (+ 2 x))))))
org.noisesmith.expecting=> (take 10 (evens))
(2 4 6 8 10 12 14 16 18 20)
(fixed to be nicer and closer to the original)


(sequence (filter even?) (range)) :)


fair - I was trying to divine the intent, where I don't think the use case was just even numbers, and for a "generator" type lazy seq you'd want to use the lazy-seq macro


that is, the question wasn't about "how to generate these specific values", but "how do we do generators"

❤️ 2

where lazy-seq is kind of wrong actually because generators don't have many of the gotchas that lazy-seqs do


Yes, you are right. But also generators could be seen as the form of description for some sequence of elements. So I just tried to bring an alternative way of thinking about it in general.


Hi all, is there a way to remove all nil values from a vector and return a vector? I know we can use (remove nil? vector) but this returns a lazy seq.


@usman.jamil probably the nicest way is (into [] (remove nil?) vector)

👍 2
Russell Mull17:07:59

would (filterv (complement nil?) vector) offer any more perf? (Because transients, I assume)


transducing with into uses transients
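Both spellings return a vector, and both build it with a transient under the hood:

```clojure
(def v [1 nil 2 nil 3])

(into [] (remove nil?) v)     ;; transducer arity of remove
;; => [1 2 3]

(filterv (complement nil?) v)
;; => [1 2 3]

(filterv some? v)             ;; some? is "not nil?" in core
;; => [1 2 3]
```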


thanks for that @U051SS2EU - will give it a go 🙂


note the paren placement, that's intentional - using remove's transducer arity


i assumed that there would be an analogous removev similar to how there's a filterv but there doesn't seem to be one


also, filterv predates transducing with into, and now that the latter exists I'd just use that


yeah. it's no longer the avenue for non-lazy computation for sure


i'm not sure i've actually ever used it


I found this coroutine package which seems to do the job; no idea how solid it is though, with only < 800 downloads. I appreciate the lazy-seq suggestions, though for a lot of iterating algorithms I find it's truly an order of magnitude simpler to express in coroutine form. Also it's much more flexible (you only need your regular control structures to express any complex iteration, instead of needing to pick just-the-right function like filter/map/iterate/etc.)


you could also use core.async where you consume from a channel to resume computation (but then you need to be careful about which work happens in the go block vs. needing to be in a non-go-owned thread)


(and also need strict backpressure to control the computation... but it's easier to control than laziness)
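A sketch of that channel-as-generator idea (assuming org.clojure/core.async is on the classpath; evens-chan is an invented name):

```clojure
(require '[clojure.core.async :as a])

;; An unbuffered channel gives strict backpressure: the go-loop
;; parks at >! until a consumer takes a value - much like yield.
(defn evens-chan []
  (let [ch (a/chan)]
    (a/go-loop [x 2]
      (a/>! ch x)
      (recur (+ x 2)))
    ch))

(let [ch (evens-chan)]
  (vec (repeatedly 5 #(a/<!! ch))))
;; => [2 4 6 8 10]
```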


coroutines have costs too


Would you mind elaborating on that?


implementing coroutines requires vm or compiler assistance, suspending + resuming has costs in saving and restoring locals


> coroutines requires vm or compiler assistance Ah I thought it could be done via macros like core.async is done


it could be, those aren't cost-free either


that falls under compiler assistance


drop a concrete use case for an algorithm into #beginners, and someone will help express it. "help, I have trouble modeling XYZ, but it seems straightforward with coroutines"


Thanks, will do

João Galrito17:07:24

hello... supposing I have multiple threads that want to asynchronously make changes to various objects each represented by an id, what would be the best way to represent this in clojure data? an atom containing a map from id to agents?


putting an agent inside an atom is probably silly


a hash-map inside an atom should suffice (depending on how computationally expensive your transformation functions on the values are, and how much contention there is on the data), I'd definitely use atom over hash map as a first go


if your changes to the map values are stateful / io-dependent you might consider putting the hash-map in an agent instead of an atom

João Galrito17:07:42

perhaps my idea has the same problem, but wouldn't this force all changes to happen sequentially, even for different keys?

João Galrito17:07:07

I wanted to be able to make changes to separate ids in parallel


right, that's what atoms are for


atoms do computations in their own thread, and retry if there was a conflicting change


atoms don't do computations; swap!, the function most commonly used to mutate an atom, does a computation on the calling thread

João Galrito17:07:16

but if I have atom {:foo 1 :bar 2} and have 2 threads, one trying to change :foo and one trying to change :bar at the same time, the second one would have to retry even though it didn't (conceptually) need to

Joshua Suskalo17:07:19

A key thing here is that "a conflicting change" will conflict on every change to any key, even if the keys being modified are different. If you often make changes to individual keys but do not often insert them, you may consider having an atom with a map from ids to atoms with the state. If you need to coordinate the changes between multiple keys at the same time, use refs. That said, I'd recommend just using an atom to start with, abstracting out how you would make changes to functions, measure your performance and contention, and then decide.
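That layout, sketched (the names registry, ensure-entry!, and update-entry! are illustrative):

```clojure
;; The outer atom only changes when ids are added or removed; the
;; per-id atoms absorb frequent value updates, so updates to
;; different ids don't contend with each other.
(def registry (atom {}))

(defn ensure-entry! [id init]
  (-> (swap! registry update id #(or % (atom init)))
      (get id)))

(defn update-entry! [id f & args]
  (apply swap! (get @registry id) f args))

(ensure-entry! 123 {:hp 10})
(update-entry! 123 update :hp dec)
@(get @registry 123)
;; => {:hp 9}
```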


but if the operation is just (swap! a assoc id some-data) the retry is pretty cheap

Joshua Suskalo17:07:38

Another thing to consider here is whether or not this state needs the full weight of clojure's concurrency. It's possible that e.g. java.util.concurrent.ConcurrentHashMap might actually fit your usecase better, depending on what you're doing.
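For reference, the interop version of that idea (a sketch; update-id! is an invented helper):

```clojure
(import '(java.util.concurrent ConcurrentHashMap)
        '(java.util.function BiFunction))

;; .compute updates a single key atomically, and different keys can
;; be updated in parallel without a global lock.
(def chm (ConcurrentHashMap.))

(defn update-id! [^ConcurrentHashMap m id f]
  (.compute m id
            (reify BiFunction
              (apply [_ _k v] (f v)))))

(update-id! chm 123 (fn [v] (inc (or v 0))))
(update-id! chm 123 (fn [v] (inc (or v 0))))
(.get chm 123)
;; => 2
```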


@suskeyhose I'd be highly suspicious of an atom with another atom inside it


right, as hiredman suggests, you can do your calculation first, and just use swap! to set the value

Joshua Suskalo17:07:42

I wouldn't. The reason to do it is to reduce contention. Contention is genuinely something to worry about if it's your main hangup, or if you have larger computations which rarely contend on keys but do contend on the parent structure.


it also introduces race conditions


it removes the safety atoms supposedly offer

Joshua Suskalo17:07:24

That said, I strongly recommend measuring contention before you decide to take action to mitigate it.


if you need to lower contention, you can use refs instead of a hash map in an atom

Joshua Suskalo17:07:02

Concurrency is a hard problem, and when you need to reach past the limitations of the tools you've been provided with, you sometimes need to lose some of the safety offered by your tools.


that's what they are for

Joshua Suskalo17:07:35

Well that's the same issue with race conditions as atoms in atoms has. I only suggested atoms rather than refs if you don't need coordination because they have a lower overhead.


if you don't want the safety atoms offer, you can use what java provides, and reduce your complexity and increase performance

Joshua Suskalo17:07:05

Ah, I see, you were saying the race conditions comes from coordinating across keys. I did mention if you need that coordination to use refs in my original blob.


what? the point of refs is synchronized update

João Galrito17:07:51

there's no need to coordinate across keys, they're independent


then an atom with a hash-map inside, and using swap! to set values for the keys, is the straightforward way to do this


just be careful to calculate the value before calling swap! so you don't need to recalculate on conflict

Joshua Suskalo17:07:00

Only if the new value doesn't depend on the old one. Otherwise we're right back to a race condition

João Galrito17:07:22

yea, since the only thing I'm retrying is the assoc itself it should be cheap

João Galrito17:07:22

I will have around 15000 keys, and will be processing a stream of changes to individual keys, prob a couple thousand/second


atom won't work then


OK - atoms are not going to be good for that kind of change rate

João Galrito17:07:53

too much contention

Joshua Suskalo17:07:13

Atoms will be fine with that change rate if you aren't inserting keys at thousands per second and you aren't updating the same key at thousands per second.

Joshua Suskalo17:07:28

But this is why I mentioned the java.util.concurrent.ConcurrentHashMap, it may fit your needs better.

João Galrito17:07:30

insertion/deletion will not be very often


not only that, even without conflicts, the overhead on atoms is going to become a bottleneck with that many changes a second

João Galrito17:07:36

mostly at startup

Joshua Suskalo17:07:05

Ideally if all the inserts and deletes are going to be at startup, you may want to produce your original map with a reducer instead of using an atom etc.

João Galrito17:07:34

it's a rolling window on live data where each key will disappear after some time

João Galrito17:07:40

and new keys keep being added

João Galrito17:07:51

usually there's around 15k keys at any given moment


OK this sounds like a cache


in this case we have actual cache libs that solve the problems we haven't even discussed yet here


(the kind of subtle problems that get solved over the course of years if you roll your own cache in my experience...)
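For example, with clojure.core.cache (assuming org.clojure/core.cache is on the classpath), a TTL cache gives you the rolling-window eviction out of the box:

```clojure
(require '[clojure.core.cache.wrapped :as cw])

;; entries silently expire :ttl milliseconds after being added
(def live-data (cw/ttl-cache-factory {} :ttl 60000))

;; compute-and-cache on miss, plain lookup on hit:
(cw/lookup-or-miss live-data 123 (constantly {:group "dragons"}))
;; => {:group "dragons"}

(cw/lookup live-data 123)
;; => {:group "dragons"}
```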

Joshua Suskalo17:07:16

Sure. For something like that having (atom {:some-key (atom :some-data)}) will be a good starting point.


tonsky's b-tree implementation might work well here?

Joshua Suskalo17:07:26

That's a good idea

João Galrito17:07:42

thanks, I'll give that a try


assuming I have a vector of maps, e.g. changemap => [{:1 1} {:2 2}], and I need to update these values in an existing hash-map tstmap => {:idtarget {:1 0 :2 0}}, how can I dynamically execute (assoc tstmap update-in [:idtarget] :1 1 :2 2) for an arbitrarily changing changemap?


by the way, :1 is accepted by the reader but technically isn't a valid keyword, you are allowed to use numbers as keys


(this becomes an issue if you namespace - :a/1 will be rejected, :1 only works for lack of enforcement and legacy reasons afaik)


yes... thanks for that. This was just an example. The real map doesn't look like this...


I think you mean (update tstmap :idtarget assoc ...) but I think you can get the effect you want from (update tstmap :idtarget merge (into {} [{:1 1} {:2 2}]))


into {} turns your vector of small maps into one big map of changes, then you can just merge it over the value of :idtarget with update @abhishek.mazy

👍 2
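Putting that together with the sample data:

```clojure
(def tstmap {:idtarget {:1 0 :2 0}})
(def changemap [{:1 1} {:2 2}])

;; conj-ing maps into {} merges them into one map of changes...
(into {} changemap)
;; => {:1 1, :2 2}

;; ...which can then be merged over the nested value:
(update tstmap :idtarget merge (into {} changemap))
;; => {:idtarget {:1 1, :2 2}}
```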

I'm trying to get some fancy sounds on a build fail or succeed. I found the leiningen plugin called lein-play. The instructions to make it work say to: 1) add [lein-play "2.0.0"] to ~/.lein/profiles.clj 2) add `` to the :hooks entry. Here is my profiles.clj:

{:user {:plugins [[lein-pprint "1.3.2"]
                  [lein-ancient "0.7.0"]
                  [lein-play "2.0.0"]]
        :dependencies [[slamhound "1.5.5"]]
        :hooks []}}
{:repl {:plugins [[cider/cider-nrepl "0.10.0-SNAPSHOT"]
                  [refactor-nrepl "2.0.0-SNAPSHOT"]]
        :dependencies [[alembic "0.3.2"]
                       [org.clojure/tools.nrepl "0.2.12"]]}}

Can anyone tell if the above is wrong? I don't get any sounds trying to run a leiningen project with this added.


your nrepl, cider-nrepl, and i'm guessing refactor-nrepl are very outdated. remove all of those. i'm surprised it even starts up to be honest


Updated profiles.clj:

{:user {:plugins [[lein-cprint "1.3.3"]
                  [lein-ancient "1.0.0-RC3"]
                  [lein-play "2.0.0"]]
        :dependencies [[slamhound "1.5.5"]]
        :hooks []}}
{:repl {:plugins [[cider/cider-nrepl "0.83"]
                  [refactor-nrepl "2.5.1"]]
        :dependencies [[alembic "0.3.2"]
                       [org.clojure/tools.nrepl "0.2.12"]]}}


Still no sound? Any ideas?


what command are you running?


lein run and lein repl


the library says "Play sounds when your tests pass or fail."


shouldn't they force a compile and then play an error or pass sound?


have you run your tests?


I thought it would work on any compile?


it's a hook around leiningen's test runner it seems


any compile i think would include all repl forms. would probably get old pretty quick


lein test worked i just ran that 🙂...


the source is here. You could create your own clojure repl with an eval function that plays a sound after evaluating, but it would be really annoying ha


nice to know, thx — so it works, only I didn't get it. thx for the help, nice to know at least what I put in profiles.clj was right


yeah i thought it would beep like a mad dog 🙂


but it works on tests great 🙂


awesome. glad you are happy with it


he did some neat things


i've often wanted to play sounds when i git push. kenny loggins' danger zone on each push would be fun to me

😂 2

thx a lot dpsutton. thx for the quick help, and as a bonus with the newer cider stuff — I missed that part totally, but now I learned how to lein ancient check-profiles


to catch those outdated things as well, great! thx.


cider these days will happily add everything it needs when starting up. best bet is to leave all of that stuff out


oki i will throw them out. thx for helping me again...


I only bring it up because cider has improved, and a very common source of seemingly inexplicable bugs for beginners is old things in the profiles.clj file that they forgot about


super nice to know thx...


I want to multiply a scalar (15000) by every value in a map… map will give me a result that is just the values but I want to preserve key-pairyness … {:a 12 :n 24} (magic-code to multiply all vals by 2) {:a 24 :n 48} I’m thinking (for [[k v] m] … ) might be the way to go


(into {} (map (juxt key f) some-map)) will apply f over just the values in the map and return a map. Medley has a map-vals function to help with this. You can also do as you wrote and destructure the map entry as [k v], returning an updated [k v] pair and then turning that sequence back into a map.