
Given two vectors of maps, e.g. [{:id 1 ….} …], what's a good way to join the maps on :id?


Currently I'm reducing both into a new map, using :id as the key.


user=> (require '[clojure.set :as set])
user=> (set/join #{{:id 1 :name "vlaaad"}} #{{:id 1 :age 29}} {:id :id})
#{{:id 1, :name "vlaaad", :age 29}}
@peder.refsnes ^


great! thanks.. in hindsight I probably should have searched the docs for join 🙂


Hi, let's say I have a map {:foo :bar :baz {:buz :biz}} and want to destructure it. I know that this works: {{:keys [buz]} :baz}. But what if I want to destructure different levels at the same time, so I need the :foo from the first level and the :buz from the second? Is that possible?


Ah, I think I got it: {:keys [foo] {:keys [buz]} :baz} should work.


Or {foo :foo, {buz :buz} :baz}.
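Both spellings above can be checked side by side in a let (a sketch using the example map from the question):

```clojure
;; Destructuring :foo at the top level and :buz one level down,
;; first with the :keys shorthand, then with explicit bindings.
(let [m {:foo :bar :baz {:buz :biz}}
      {:keys [foo] {:keys [buz]} :baz} m   ; shorthand form
      {foo' :foo {buz' :buz} :baz} m]      ; explicit form
  [foo buz foo' buz'])
;; => [:bar :biz :bar :biz]
```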

👍 3

Also, you can use let to do separate destructurings, which is usually clearer and more readable, and doesn't have an extra performance cost.
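The let-based alternative might look like this (same example map as above):

```clojure
;; Two flat destructurings instead of one nested one:
;; first pull :foo and :baz out of m, then :buz out of baz.
(let [m {:foo :bar :baz {:buz :biz}}
      {:keys [foo baz]} m
      {:keys [buz]} baz]
  [foo buz])
;; => [:bar :biz]
```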


Yea, I usually do that, but I wanted to experiment a bit as I never used nested destructuring before.


Does anyone know of a library that allows you to express regexes that operate over Clojure data structures? I'm basically looking for spec's regex ops, but in a format that allows for lightweight anonymous expressions, e.g. ephemeral matchers that aren't placed in a registry. My use case is expressing something like the following:

(t/is (dr/match (dr/data-regex-of-some-kind :a dr/* :b) [:a :a :a :b]))
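For what it's worth, spec's regex ops themselves can be composed and used anonymously, without registering anything, which is close to the hypothetical dr/ API above (the :as and :b tags in s/cat are required by spec, not meaningful here):

```clojure
(require '[clojure.spec.alpha :as s])

;; An inline, unregistered regex spec: zero or more :a, then one :b.
(def aaab (s/cat :as (s/* #{:a}) :b #{:b}))

(s/valid? aaab [:a :a :a :b]) ;; => true
(s/valid? aaab [:a :b :a])    ;; => false
```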


oh, interesting - I hadn’t considered meander for this case. I’ve been looking for an excuse to play around with it for a little while now, too.


thanks for the tip!

👍 3

Hi all, what ways do Clojure devs deal with async streaming? I'm coming from Kotlin mostly, where coroutines/flows are used, but I've also played around with Scala, where Monix and fs2 compete for this use case. I watched a conference talk where the speaker seemed to imply it's common to provide a callback-style API to the user and then use core.async's channels behind the scenes. Is that a good route to take?


If you are familiar with Kotlin's coroutines then core.async should fit right in.

👍 3

a huge gotcha is that you should not do IO or CPU intensive work inside core.async go blocks


core.async is a coordinator; go blocks run on a small fixed-size thread pool and are not designed for doing the actual work


Okay thanks, how do you manage that when using core.async? I'm used to libraries where you specify which thread pool a specific task runs on (IO-bound, CPU-bound, etc.).


There is a thread macro


Wrap all of your I/O calls like so:

(<! (thread (network-call args)))


in some cases you just want to hold onto the channel that thread returns and use it elsewhere of course - but that's the 95% idiom for sure
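Putting that idiom in context, a sketch (network-call here is a stand-in for real blocking I/O, not a real function):

```clojure
(require '[clojure.core.async :as a :refer [go thread <!]])

;; Hypothetical blocking call, standing in for real I/O.
(defn network-call [arg]
  (Thread/sleep 100)
  {:result arg})

;; The go block parks on <! while the blocking work runs on
;; thread's own pool, keeping the small go-block pool free.
(defn fetch-async [arg]
  (go
    (<! (thread (network-call arg)))))

;; fetch-async returns a channel; take from it to get the value:
;; (a/<!! (fetch-async 42)) ;; => {:result 42}
```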


Okay that makes sense, thanks.


You may want to specify a dedicated thread pool for these blocking ops to avoid spawning thousands of threads


that's something core.async go blocks are actually good at: you can use them to implement backpressure, which regulates the number of threads in flight

☝️ 3

Thread-limiting at the i/o layer is usually about in-memory bulkheads. (You can’t reliably abandon a call on a thread you don’t own.) That’s good, but you usually want to consolidate that in a single object. It’s generally better to backpressure processes at higher layers if feasible.


Not sure what you meant by "in-memory" bulkheads, but thread pools are bulkheads. And when you interrupt a thread that is blocked on IO, it will throw an InterruptedException (which a misbehaving thread may ignore, but otherwise it should be a good mechanism). I agree that backpressure at higher levels is a good thing, but I also think a separate thread pool is preferable to the potentially unbounded thread creation via a/thread.


what I'm advocating for is using core.async structure to provide backpressure, rather than doing it via the size of a thread pool. It's the kind of thing core.async is actually good at (eg. you can start N go-loops, reading from one channel, in order to limit parallelism of the work derived from that channel to N)
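That N-go-loops pattern can be sketched like this (bounded-workers and process are illustrative names, not a real API):

```clojure
(require '[clojure.core.async :as a :refer [go-loop thread <! >!]])

;; Start n go-loops reading from `in`, so at most n jobs are
;; processed concurrently; `process` stands in for blocking work.
;; Backpressure: each loop parks on >! when `out` is full, and
;; stops when `in` is closed and drained (<! returns nil).
(defn bounded-workers [n in process out]
  (dotimes [_ n]
    (go-loop []
      (when-let [job (<! in)]
        (>! out (<! (thread (process job))))
        (recur)))))
```

A taker on out then pulls results as they arrive, and closing in shuts the workers down cleanly.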

👍 6

And I’m reiterating exactly what @U051SS2EU is saying 😉