
yeah making the result always a channel and changing behavior based on a call to poll! seems a bit cleaner than checking the return type
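As a rough illustration of that idea, a helper along these lines could deliver an already-available value on a channel without scheduling a go block (this is only a guess at what a `val-chan` helper might look like; the names and implementation are hypothetical, not from any library):

```clojure
(require '[clojure.core.async :as async])

;; Hypothetical sketch: a promise-chan that already holds the value,
;; so a take completes immediately with no go-block scheduling or
;; thread hop.
(defn val-chan [v]
  (doto (async/promise-chan)
    (async/put! v)))

;; poll! takes a value only if one is immediately available
;; (returning nil otherwise), so callers can branch on it and fall
;; back to a blocking/parking take.
(async/poll! (val-chan 1))  ;=> 1
(async/<!! (val-chan 1))    ;=> 1
```

Since the result is always a channel, callers that don't care about the fast path can just take from it as usual.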


@tbaldridge for an idea of the comparison:

(crit/quick-bench (async/<!! (async-utils/val-chan 1)))
WARNING: Final GC required 4.3414515843340595 % of runtime
WARNING: Final GC required 44.00718455080148 % of runtime
Evaluation count : 3877950 in 6 samples of 646325 calls.
             Execution time mean : 154.573532 ns
    Execution time std-deviation : 3.798315 ns
   Execution time lower quantile : 152.133622 ns ( 2.5%)
   Execution time upper quantile : 161.129376 ns (97.5%)
                   Overhead used : 1.797788 ns

Found 1 outliers in 6 samples (16.6667 %)
	low-severe	 1 (16.6667 %)
 Variance from outliers : 13.8889 % Variance is moderately inflated by outliers
(crit/quick-bench (async/<!! (async/go 1)))
WARNING: Final GC required 35.90382520884284 % of runtime
Evaluation count : 119034 in 6 samples of 19839 calls.
             Execution time mean : 5.077414 µs
    Execution time std-deviation : 49.789453 ns
   Execution time lower quantile : 5.024920 µs ( 2.5%)
   Execution time upper quantile : 5.135894 µs (97.5%)
                   Overhead used : 1.797788 ns


yeah, it's still not really slow, but it gets even worse in a system with a lot of parallelism going on, since there's no guarantee of when the system will context switch back to your code after the go block has it switch threads. That's why we switched to this approach: it made a noticeable difference in our API's performance for certain calls (on the order of 5-15 ms for some of the calls under heavy load)


granted, while I just ran the criterium numbers above, the 5-15 ms differences I referenced came from changes I made maybe 2 years ago, so my memory could be fuzzy on those?


Honestly, if you're creating and destroying channels this fast, you kind of need to re-think the design


What's the old saying from Rich's talk? Promises are the one-night-stand of async primitives?


I'm really intrigued by this quote. Could you point me to the talk for context?


The overall design of channels is that they are created and stick around for a while; in some systems I've worked on, they last the life of the application.


Hmm... not entirely sure how that would work when building APIs? I'm not sure how you'd have channels shared between API requests without the different requests getting the wrong data on a take?


async isn't for stateless request response


it isn't so much about the lifetime of the channels as it is about using the channels for communication between stateful processes


so you don't make a "request" by invoking a function that returns a channel that will have a value put on it, ala promises, futures etc


In our design, we end up creating channels as part of any API request that has parallel parts to it (both actually-async things, like HTTP calls to other APIs, and sync things, like jdbc calls using async/thread); any one API request might have anywhere from 0-25 of these components created throughout the request, depending on how complex that API call is, and we might have hundreds of requests happening simultaneously depending on the particular server's configuration


so why are you using core.async over future, or Executors directly?


core.async is great, and I am not suggesting you should not use it


but if you compare and contrast it with other options, the big difference is it is built around conveying multiple values over channels instead of channels being one shot things


I've definitely debated not using it on occasion, since for the synchronous requests (like jdbc calls) it causes more headaches than it solves; but at this point 1.) there's still a lot of benefit from using it for async calls, with the park/await feature that's not easily replicated with futures, and 2.) there are way too many lines of code for a refactor away from core.async to be a trivial project


I could be wrong on the first point, honestly. We migrated to core.async after having issues with the 3.x version of aleph / lamina a couple of years ago (getting things to actually be asynchronous rather than blocking was really difficult with that design, and that caused some severe performance problems)


again, I am not saying core.async is a poor fit for that, or a bad choice, or that you should change to something else or whatever. What I am saying is that core.async is very flexible and you can do whatever with it, but you can compare its stance, how it addresses the problem space, to that of other tools and libraries


for APIs, it of course depends on what you are doing, but something that looks a lot like a socket server works: you have a "server" waiting on "requests" on a channel, where the requests take the form of a pair of channels, so the requesting process gets 2-way communication with a "subprocess" the server forks
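That shape could be sketched roughly like this (all names here are illustrative, not from any library): one long-lived request channel, a server loop that forks a subprocess per request, and clients that hand over their own channels instead of expecting a fresh result channel back.

```clojure
(require '[clojure.core.async :as async
           :refer [chan go go-loop <! >! >!! <!!]])

;; One long-lived channel the "server" listens on, potentially for
;; the life of the application.
(def requests (chan))

;; Server loop: each request is just a pair of channels, giving the
;; requesting process 2-way communication with the "subprocess" the
;; server forks.
(go-loop []
  (when-let [{:keys [in out]} (<! requests)]
    (go
      (when-let [msg (<! in)]
        (>! out {:reply msg})))
    (recur)))

;; A requesting process sends its channels over the shared request
;; channel rather than being handed a one-shot result channel.
(let [in (chan) out (chan)]
  (>!! requests {:in in :out out})
  (>!! in :ping)
  (<!! out))
;; => {:reply :ping}
```

The request and response both travel over channels the requester owns, so nothing is shared between concurrent requests.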


😄 don't worry, I'm not feeling attacked on my architecture choices, @hiredman. It's definitely not without its flaws, but it was nice to find that we could improve performance by lowering the number of context switches the servers were doing, in a rather easy fashion, for very specific use cases


@hiredman if I'm understanding the approach you're suggesting, where you put a "request" onto a channel containing the data to process and a channel to put the result onto, wouldn't you still probably be creating a channel per API request for each of these "socket servers" to get the results back?


in the most general model no, the request is just the channels


but it depends on the processes you are creating and how they communicate


Any resources I could look at to understand that model more? Blog posts, conj recordings or the like?


I dunno, it is like a really simple socket server with channels instead of in and out streams


Lol, fair enough


Right, it's a bit of a refactor into a pipeline vs a request/response setup


And if one API call is resulting in 25 channels being created, you might just be better served to use a single thread for the entire thing.


In the end, using go is more about conserving memory than anything else. At some point it's just better to use thread pools and remove a ton of complexity from the application


@tanzoniteblack i'd be interested to know what your blocking problems with aleph were, which were solved by moving to core.async


@U0524B4UW Aleph worked correctly, but only when used correctly, which we found difficult. The main lamina feature we were making use of was run-pipeline to chain async events. It turns out that, at least in the version we were using, run-pipeline only jumps off the calling thread when a function used in its pipeline jumps threads. This means that by looking at the code, without looking at the exact implementation of the functions being used, there was absolutely no way to know whether a pipeline was going to block the calling thread or not. Since we were using it not only for async handling but also as a means of parallelization (the theory being that sticking to one mechanism was easier than mixing futures and async), core.async's making it incredibly obvious at which points your calling code will block or park, and at which points it's guaranteed not to, made it much easier for us to parallelize code


ah, so that was perhaps in pre-manifold times; we've been happily mixing manifold deferreds and streams with aleph, but maybe those abstractions didn't use to exist


it was definitely pre-manifold


Can someone answer a quick question regarding mults? Do you have to have two taps in order for the consuming to work? It seems to be the case for me.


Yes, you need one tap per consumer
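For what it's worth, a single tap per consumer does work; a minimal sketch (note that taps need to be attached before values are put on the source, since a mult drops values when it has no taps, and an unconsumed tap will stall delivery to all the others):

```clojure
(require '[clojure.core.async :as async
           :refer [chan mult tap >!! <!!]])

(let [src (chan)
      m   (mult src)   ; taps attached from here on receive copies
      out (chan 10)]   ; buffered so the puts below don't park
  (tap m out)
  (dotimes [i 10]
    (>!! src i))
  (dotimes [_ 10]
    (println "TEST" (<!! out))))
;; prints TEST 0 through TEST 9
```

If the output stops after the first value, a common cause is a second tap that nothing is reading from, which blocks the mult from distributing further items.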


Yes, I understand, but what if you only have one consumer, like in the above snippet?


It doesn't work for me, it only prints TEST 0, while I expected all 10