This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-02-20
Channels
- # architecture (25)
- # beginners (68)
- # cider (10)
- # clara (3)
- # cljs-dev (90)
- # cljsrn (16)
- # clojure (132)
- # clojure-austin (7)
- # clojure-berlin (3)
- # clojure-czech (1)
- # clojure-dusseldorf (1)
- # clojure-greece (5)
- # clojure-italy (39)
- # clojure-spec (5)
- # clojure-uk (78)
- # clojured (2)
- # clojurescript (92)
- # community-development (6)
- # cursive (7)
- # data-science (1)
- # datascript (14)
- # datomic (32)
- # duct (8)
- # emacs (5)
- # figwheel (3)
- # fulcro (47)
- # hoplon (12)
- # jobs (10)
- # luminus (16)
- # lumo (5)
- # off-topic (1)
- # onyx (2)
- # parinfer (47)
- # pedestal (6)
- # re-frame (10)
- # reagent (2)
- # reitit (61)
- # ring (8)
- # ring-swagger (16)
- # shadow-cljs (116)
- # sql (17)
- # utah-clojurians (2)
- # vim (1)
morning
how’s core.async going @otfrom?
thinking about writing something like core.async/reduce but that returns an atom so that you can reference it in ongoing streaming systems. Any thoughts?
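(A minimal sketch of what that could look like — `reduce->atom` is a hypothetical helper name, not an existing core.async function; assumes core.async on the classpath:)

```clojure
(require '[clojure.core.async :as a])

;; Hypothetical helper: like core.async/reduce, but returns an atom
;; that is updated as each value arrives, so other parts of a running
;; system can deref the latest accumulated value at any time.
(defn reduce->atom
  [f init ch]
  (let [acc (atom init)]
    (a/go-loop []
      (when-some [v (a/<! ch)]
        (swap! acc f v)
        (recur)))
    acc))

;; Usage: keep a running total of values put on a channel.
(def ch (a/chan))
(def total (reduce->atom + 0 ch))
(a/put! ch 1)
(a/put! ch 2)
;; once the puts are consumed, @total holds 3
```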
@peterwestmacott seem to have gotten my head around enough of it to be dangerous. I like how it is affecting my design. mult makes a lot of things simpler. 🙂
just a personal experience and probably not at all reflective of general core.async - we found it made certain areas of our application much harder to reason about, particularly when it came to stack traces and debugging error cases
although it probably wasn't a good fit for those areas - they didn't really need to be async in the first place
yeah, I've had to flip my head around a bit. It was a bit like wrapping my head around why a seq from inside a with-open didn't hang around. I'm gonna work hard to keep some of the things inside pretty simple. It is a bit tricky to dig into why things aren't working sometimes. And my code isn't really async as such. It is more that I want to be able to do a lot of things in parallel off the same lazy stream of records (so suggestions to alternatives are welcome).
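(For the "many consumers off one lazy stream" shape described here, mult and tap look roughly like this — a sketch, not anyone's production code:)

```clojure
(require '[clojure.core.async :as a])

;; One source channel, fanned out to several independent consumers.
(def source (a/chan))
(def m (a/mult source))

;; Each tap gets every value put on source.
(def counts (a/tap m (a/chan 10)))
(def sums   (a/tap m (a/chan 10)))

(a/go-loop [n 0]
  (if-some [_ (a/<! counts)]
    (recur (inc n))
    (println "saw" n "records")))

(a/go-loop [s 0]
  (if-some [v (a/<! sums)]
    (recur (+ s v))
    (println "total" s)))

;; onto-chan closes source when the collection is exhausted,
;; which in turn closes the taps.
(a/onto-chan source [1 2 3 4])
```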
manifold streams are also nice for parallelizing processing of streams of things
(I ask because I’ve recently been of the opinion that manifold
might be a better library for that sort of thing (having written quite a bit of core.async
(but admittedly not used mult
or tap
extensively)))
part of what I like about my design atm is that I'm using the same "business logic" functions for straight transducer and file io stuff that I am for core.async
which is some work, but is the incidental complexity rather than the inherent complexity of the domain
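(That separation can be seen with a single transducer shared between an eager pipeline and a channel — a sketch with made-up "business logic":)

```clojure
(require '[clojure.core.async :as a])

;; "Business logic" as a plain transducer, independent of transport.
(def xform
  (comp (filter even?)
        (map #(* % %))))

;; Eager, in-memory:
(transduce xform + 0 (range 10))   ;; => 120

;; The same xform attached to a core.async channel:
(def ch (a/chan 10 xform))
(a/onto-chan ch (range 10))
(a/<!! (a/reduce + 0 ch))          ;; => 120
```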
(one thing I like about manifold
is that by default you get deferreds
so it’s much harder to block your REPL when experimenting)
@otfrom Deferred<Stream<X>> reducing to Deferred<X> is the holy signature
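(In manifold terms that signature falls out of `manifold.stream/reduce`, which returns a deferred — a sketch assuming manifold is on the classpath:)

```clojure
(require '[manifold.stream :as s]
         '[manifold.deferred :as d])

;; Reducing a stream yields a deferred of the result, so the REPL
;; is not blocked while values are still arriving.
(def result
  (s/reduce + 0 (s/->source [1 2 3 4])))

;; Attach a callback instead of blocking:
(d/chain result #(println "sum:" %))
;; @result => 10 once the stream is drained
```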
I got sucked into a core.async/transducer rabbit hole once… and found a bug.
I don’t believe that manifold is abandoned, just that the rate of change is low.
bugs still get fixed in manifold @peterwestmacott @otfrom - but i don’t think there are many anymore
I don't think automat is abandoned. Zach mentioned wanting to figure out why my example was slow.
another alternative is to use the cats promise monad on top of manifold @otfrom ... which can then easily be swapped out for a core.async promise-chan monad... altho that doesn’t help with streaming ops - perhaps a comonad thing would, but i haven’t grokked comonads properly yet
railway oriented programming ?
@mccraigmccraig: a way of thinking about monadic programming using railway points
was a really good blog on it a while back, as a way of explaining what monads are useful for
would that be this one - https://fsharpforfunandprofit.com/rop/
I know manifold isn't abandoned now. I think the bus number on it is low. Tho there has been a good long while where core.async didn't get fixed either.
I think basically I need to have functions that play nicely as the f passed into map/filter/etc or reduce and then I can be a bit agnostic about which system I go with
any time I move away from that I find that I'm outside of where clojure goes long term
Say you're emitting some riemann metric every 60s, to what would you typically set the ttl? A few seconds more? Or is it fine to set it to 60?
I call them reducing and mapping functions, but i am not sure that I am "correct", @otfrom
the function passed to reduce is often referred to as a reducing function, largely because that context is reused elsewhere.
Morning 👋
@dominicm I call them functions too when I pass them in, but I'd like a way of referring to them when they aren't attached to a reduce yet, but have the correct signature and function
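(The shape being described — an (accumulator, input) function that is useful before it is attached to any particular reduce — in a hedged sketch; `sum-step` is an illustrative name:)

```clojure
;; A "reducing function" in the transducer sense: arity-2 for the
;; step, and often arity-0 (init) and arity-1 (completion) as well.
(defn sum-step
  ([] 0)                 ;; init
  ([acc] acc)            ;; completion
  ([acc x] (+ acc x)))   ;; step

;; Usable with a plain reduce or with transduce:
(reduce sum-step (sum-step) [1 2 3])    ;; => 6
(transduce (map inc) sum-step [1 2 3])  ;; => 9
```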
he just wanted extra style points for referring to old wiki software written in python
Oh my, I must be groggy headed today
Been busy 😉
I'm trying to find some new topics to blog on. There's a blog post talking about Unrepl & Unravel, and how it saved the day when doing some one-off datomic queries.
@maleghast Indeed. It's somewhat of a field guide to the inner-workings of the tooling we use every day.
I've worked on both cider-nrepl and on 2 nREPL clients, so this is a topic I feel like I know something about.