To some extent, async for SQL is a tad pointless, since the remote end isn't highly concurrent. Thread pooling will probably work just as well.
@tbaldridge my motivation here is that a bottleneck on my app is the number of running threads (and the resources each thread uses), so if I can use proper async for IO bound tasks I can make my response times more reliable and avoid load induced outages. Given this I would like to do as much as I can using natively async io, and one of my biggest io usages is db communication.
perhaps I'm not thinking this through properly though (and yeah, I'll still have a thread pool for things that are e.g. CPU bound, but the more that can do its own thing and not get claimed by db usage the better)
Right, but by going async you don't really gain anything since the SQL server is blocking and locking tables
So if only one thread can touch a table at a time, might as well just have a SQL thread pool on the clients that's limited to x threads
Then have your go blocks funnel their requests through that pool.
ahh, so a fixed dedicated pool for the db, that makes sense, thanks
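The "fixed dedicated pool, fed by go blocks" idea can be sketched with core.async. This is a minimal sketch, not a production pattern: `run-query` is a hypothetical stand-in for whatever blocking JDBC call you'd actually make, and the pool size and channel shapes are made up for illustration.

```clojure
(require '[clojure.core.async :as async
           :refer [go chan <! >! <!! >!! thread close!]])

(defn db-pool
  "Starts n dedicated worker threads for blocking DB calls.
  Workers read [query result-ch] pairs from the returned channel,
  run the (blocking) query, put the result on result-ch, and close it.
  `run-query` is a hypothetical blocking call, not a real API."
  [n run-query]
  (let [requests (chan)]
    (dotimes [_ n]
      (thread
        (loop []
          ;; <!! returns nil once `requests` is closed, ending the worker.
          (when-let [[query result-ch] (<!! requests)]
            (>!! result-ch (run-query query))
            (close! result-ch)
            (recur)))))
    requests))

;; A go block funnels its request through the pool and parks
;; (instead of tying up a thread) while waiting for the answer:
(comment
  (let [pool (db-pool 4 (fn [q] (str "result of " q)))]
    (go
      (let [result-ch (chan 1)]
        (>! pool ["SELECT 1" result-ch])
        (println (<! result-ch))))))
```

The point of the fixed `n` is exactly the one made above: the database gets at most `n` concurrent callers, while any number of go blocks can queue up cheaply on `requests`.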
Perhaps neither here nor there, but PostgreSQL now uses SSI (serializable snapshot isolation) for serializable transactions
Hi all! I have a question about transducers in core async channels. is it idiomatic to define a transducer which depends (without side-effects) on a globally defined atom? I posted a longer version of this question with an example here: https://groups.google.com/forum/#!topic/clojure/JCQmEXkIYms
Nothing really wrong with that approach, and it really shows off the power of Clojure's concurrency models.
Although be careful how much work you do inside those transducers. The transducers on a channel run inside a lock on the channel itself. So while the transducer is executing, no one may read or write from the channel
Not bad if you're doing a hash-map lookup, but get into a O(n*n) situation and "you're gonna have a bad time"
@tbaldridge I see your point. Maybe for a tougher task (not just a lookup as in the example) it would be better to change the approach
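The idiom under discussion can be sketched like this (the atom contents and message shapes are invented for illustration): a filter transducer that derefs a globally defined atom on each element, without side effects. Per the caution above, the deref-plus-set-lookup is cheap, which is what makes it acceptable inside the channel's lock.

```clojure
(require '[clojure.core.async :as async :refer [chan >!! <!! close!]])

;; Globally defined atom the transducer depends on (read-only):
(def blocked-ids (atom #{2 3}))

;; The transducer derefs the atom per element. Keep this work cheap:
;; it runs while the channel's internal lock is held.
(def ch
  (chan 10 (filter (fn [msg] (not (contains? @blocked-ids (:id msg)))))))

(>!! ch {:id 1 :text "hello"})
(>!! ch {:id 2 :text "spam"})   ;; filtered out by the transducer
(close! ch)
(<!! ch)  ;; => {:id 1 :text "hello"}
```

Note that each element sees whatever value the atom holds at the moment it passes through the transducer, so a `swap!` elsewhere takes effect for subsequent puts.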
Is there any technical reason why `PromiseBuffer` uses a custom `NO-VAL` sentinel rather than just `nil`?
`nil` usually indicates closed (not sure if that works in that context though…)
Well yeah but you can't put a nil into a channel, so that wouldn't be ambiguous
yeah - that is just my best guess, someone else might have a better idea - core.async generally tends to be fastidious about overloading nil though, so it could just be part of that tendency
@dergutemoritz look at the methods - it sets it to nil if closed
which is what I said, right?
so you can differentiate closed vs. not yet having a value
There is no closedness check that depends on it; otherwise it would be ambiguous (or you'd need a separate state field)
My best guess is that it was used to make the code clearer maybe
If what you said was the case, `NO-VAL` could escape from the buffer just as well, which is certainly not intended 🙂
I verified it just in case: works fine with nil instead, too!
but if I read the code and know how nil is used in channels, I would find that code harder to understand if it used nil
for example, I could end up asking “does line 90 imply that I can deliver a new value to my promise after closing it?”
oh wait - that’s probably a bug that using nil for no-val would introduce 😄
dergutemoritz check it out - can you deliver again to a closed promise chan when you do that?
Yeah that's what I was referring to by making the code clearer / look less ambiguous... I think closedness is maintained in the chan so it shouldn't be possible but I'll try!
where would it be maintained?
(other than in exactly the code we’ve been looking at, that is)
In the chan
but that code is the chan
No it's the buffer
oh, now I get what you are saying
Works just fine, too! Can't put anything after closing with nil as sentinel
From the description of http://dev.clojure.org/jira/browse/ASYNC-103 it looks like the author assumed that the buffer is responsible for producing the nil value when the chan is closed
I still like that it has a special sentinel other than nil - that’s how I would write it
(though I might use a namespaced keyword rather than Object)
Hm weird, close-buf! was even introduced with that patch just for this purpose!
But ... close-buf! only sets val to nil if it's still undelivered o_O
I bet this represents a coordination pattern
where you want to be clear whether it was delivered first or closed first for later steps
(even if closing and delivering are always both attempted)
Wait a second ...
That's the effect - it still allows taking from the chan even when closed
take the value if one was delivered, that is
ahh - right
That behavior is not apparent from the docstring
Or am I not reading it right?
so that’s what I had in mind for the “coordination pattern” - you launch two async logics, one closes it, the other delivers, but you can easily see which one got to it first even though both run
there’s useful things you can do with that behavior
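The promise-chan semantics being worked out above can be demonstrated directly (a small sketch against current core.async behavior; the keywords are arbitrary):

```clojure
(require '[clojure.core.async :as async :refer [promise-chan >!! <!! close!]])

;; Deliver first, then close: the delivered value wins, and every
;; subsequent take still sees it, even though the chan is closed.
(let [p (promise-chan)]
  (>!! p :delivered)
  (close! p)
  [(<!! p) (<!! p)])   ;; => [:delivered :delivered]

;; Close first, then try to deliver: the put is ignored (returns false)
;; and takes get nil, as on any closed empty channel.
(let [p (promise-chan)]
  (close! p)
  (>!! p :too-late)
  (<!! p))             ;; => nil
```

This is what makes the "both sides run, but you can see which got there first" coordination pattern work: the observable result differs depending on whether the deliver or the close happened first.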
If only the docstring would indicate this 😄
OK the comments section in the JIRA issue above discusses this. The behavior was indeed intentionally changed to ignore closes on delivered promise-chans
So I guess the docstring was just forgotten about at that point, seems like a pretty last minute change
Hmm one could argue that this behavior is to be expected because for other buffers it's also possible to take all remaining values after closing. A delivered promise-chan just happens to contain an infinite number of remaining values 😉
Yep, the behavior is definitely desirable
And in line with the semantics of closed channels anyhow
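The analogy to ordinary buffered channels can be seen in two lines (values are arbitrary):

```clojure
(require '[clojure.core.async :as async :refer [chan >!! <!! close!]])

;; A closed buffered channel still drains its remaining values,
;; then yields nil; a delivered promise-chan never runs out.
(let [c (chan 2)]
  (>!! c 1)
  (>!! c 2)
  (close! c)
  [(<!! c) (<!! c) (<!! c)])  ;; => [1 2 nil]
```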
I just didn't connect all the dots 🙂
Thanks for thinking along @noisesmith and @mpenet