This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-02-10
Channels
- # announcements (6)
- # architecture (2)
- # babashka (30)
- # beginners (90)
- # calva (21)
- # cider (22)
- # clj-kondo (27)
- # cljs-dev (7)
- # clojure (132)
- # clojure-europe (51)
- # clojure-nl (12)
- # clojure-norway (3)
- # clojure-spec (3)
- # clojure-uk (5)
- # clojurescript (69)
- # cloverage (9)
- # conjure (5)
- # core-async (54)
- # cursive (14)
- # datomic (34)
- # emacs (7)
- # fulcro (10)
- # graalvm (40)
- # graalvm-mobile (2)
- # gratitude (2)
- # improve-getting-started (1)
- # introduce-yourself (1)
- # jobs-discuss (61)
- # leiningen (5)
- # malli (6)
- # off-topic (59)
- # pathom (11)
- # polylith (38)
- # reagent (3)
- # reitit (3)
- # rewrite-clj (3)
- # shadow-cljs (53)
- # tools-build (35)
- # transit (8)
- # vim (62)
- # web-security (26)
- # xtdb (4)
I love pmap - it's smarter than it might seem: it caps parallelism relative to the CPU count, and its threads are reused via an internal pool. I use a dedicated thread pool when I want a pool that isn't global and is exclusive to some sort of 'module' (e.g. a stuartsierra Component). If you have n Components, each with its own pool, stopping those components will also stop their pools in dependency order, so the tasks will conclude in an order that makes sense. This is a lot simpler than hand-crafting some sort of shutdown priority order
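The trade-off described above can be sketched roughly like this (a minimal sketch; the pool size and helper names are illustrative, not from any of the libraries mentioned):

```clojure
(import '(java.util.concurrent Executors ExecutorService TimeUnit))

;; pmap: quick parallelism, bounded relative to the CPU count,
;; with threads reused from Clojure's internal pool.
(defn square-all [xs]
  (pmap #(* % %) xs))

;; A dedicated pool owned by one 'module' (e.g. a Component),
;; so stopping the module can also stop its pool.
(defn make-pool [n]
  (Executors/newFixedThreadPool n))

(defn run-all
  "Submit (f x) for each x on the pool and wait for results in order."
  [^ExecutorService pool f xs]
  (let [futs (mapv (fn [x] (.submit pool ^Callable (fn [] (f x)))) xs)]
    (mapv #(.get %) futs)))

(defn stop-pool
  "What a Component's stop would do: stop accepting work and drain."
  [^ExecutorService pool]
  (.shutdown pool)
  (.awaitTermination pool 5 TimeUnit/SECONDS))
```

With one such pool per Component, shutting components down in dependency order shuts the pools down in the same order, which is the point being made above.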
my use case is processing multiple tech.ml.datasets in a batch, so I'm less worried about component-y things
Clojure Applied by @U064X3EF3 has a great chapter about parallelism
Morning!
Shall I start taking pictures of my puppy too? He's only 3 days into his new family...
I was told about a “startup-idea” thingy by a colleague yesterday. The deal is for Europeans to keep an eye on what's happening startup/scaleup-wise in the US, and launch companies in Europe doing the exact same thing. When the US startup/scaleup decides it wants to move into Europe, the bet is that they'll just buy your copy. Easy peasy
The “Samwer Brothers” in Germany did that with some startups and made a ton of money
AFAIK they tried that with Zalando too, but the startup they copied, Zappos, refused to buy, so they had to become successful on their own
Early Just-Eat employees did this: they set up competitors around the world (some while still working at JE) and then made £££ when JE acquired them all.
Morning!
I realise now why I keep coming back to core.async. Being able to go from into or transduce with an xf chain to pipeline with the same chain and get massive parallelism is just so powerful
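That move can be sketched like this (a minimal sketch assuming core.async is on the classpath; `async/to-chan!` needs a recent core.async, and `parallel-into` is an illustrative name):

```clojure
(require '[clojure.core.async :as async])

;; One transducer chain...
(def xf (comp (map inc)
              (filter even?)
              (map #(* % %))))

;; ...run sequentially with into:
;; (into [] xf (range 10))

;; ...or run in parallel over n threads with pipeline,
;; which preserves input order in the output:
(defn parallel-into [n xf coll]
  (let [in  (async/to-chan! coll)
        out (async/chan)]
    (async/pipeline n out xf in)
    (async/<!! (async/into [] out))))
```

Same `xf`, same results, but the pipeline version fans the per-element work out across `n` threads.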
you don't HAVE to use core.async to get parallelism on transducers though
core.async is brilliant for halt-the-line style backpressure
but that's about it (imho obvs)
core.async always used to be a nightmare to troubleshoot: is that still the case?
I suppose it depends. I don't think I've written a go block myself, so when I get something that blows up I'm just able to move it into something like into/transduce/sequence and play with it there.
I'm clearly missing something if transducers give me parallelism though. I'd love something as easy to use as pipeline
actually I think there's a bug in that code around the end-case for the lagging transducer, fixed in https://github.com/hammonba/clojure-attic/blob/master/src/clj/attic/parallelising_map.clj
so that feels more like the kind of thing that claypoole/map does (but fitting in with transducers better), rather than being able to pass in a (comp (map ,,,) (map ,,,) (mapcat ,,,) (filter ,,,))
which async/pipeline allows me to do, which is useful, but different
swings and roundabouts really
I remember the painful experiences of core.async
more than the others
I think I found the subset that worked for me. It just feels weird setting up the machinery, then pushing the data through, then getting the result (from a channel you set up ages before), rather than the usual add-a-bit, add-a-bit, add-a-bit style that ->> and (comp ...) allow
wormholes in space
we do similar concurrency stuff with manifold @U0525KG62 - it's not hard to debug, but only because we added a layer over raw streams which always propagates errors when a reduce finally happens. Prior to that, debugging was painful
@U0524B4UW looks like I'd need to use the manifold.stream versions of map/filter/mapcat et al?
yeah, exactly - but, like core.async iirc, raw manifold streams don't propagate errors in any sane way, so we use this layer on top, which catches any errors in map/filter etc and propagates them back to you when you reduce or take! - https://gist.github.com/mccraigmccraig/e5b9a85441db782f6debd1e8dc2697f9
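A heavily hedged sketch of that kind of layer (assumes the manifold dependency; `safe-map`/`safe-reduce` are illustrative names, not the gist's actual API): errors are caught in the mapping step, carried through the stream as values, and rethrown when the reduce happens.

```clojure
(require '[manifold.stream :as s])

(defn safe-map
  "Like s/map, but captures thrown errors as values instead of losing them."
  [f source]
  (s/map (fn [v]
           (try {:ok (f v)}
                (catch Throwable t {:error t})))
         source))

(defn safe-reduce
  "Like s/reduce, but rethrows any error captured by safe-map."
  [rf init source]
  (s/reduce (fn [acc {:keys [ok error]}]
              (if error (throw error) (rf acc ok)))
            init
            source))
```

So an error thrown anywhere in the mapping chain surfaces at the consuming end, e.g. when you deref `(safe-reduce + 0 (safe-map inc (s/->source [1 2 3])))`, instead of vanishing mid-stream.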
Making sure that errors in core.async don't get lost is quite feasible with a little discipline: https://blog.jakubholy.net/2019/core-async-error-handling/ What worries me more is making sure that all channels that need to be closed get closed...
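The discipline from that post can be sketched as follows (assuming core.async; `<?` and `<??` are conventional community names for this pattern, not part of core.async itself): exceptions travel through channels as ordinary values and are rethrown on take.

```clojure
(require '[clojure.core.async :as async])

(defn throw-err
  "Rethrow a Throwable that travelled through a channel as a value."
  [v]
  (if (instance? Throwable v) (throw v) v))

(defmacro <?
  "Like <!, but rethrows Throwables taken from the channel.
  For use inside go blocks."
  [ch]
  `(throw-err (async/<! ~ch)))

(defn <??
  "Blocking variant of <?, for use outside go blocks."
  [ch]
  (throw-err (async/<!! ch)))
```

Producers then `(try ... (catch Throwable t t))` and put the caught error on the channel, so nothing disappears silently inside a go block.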