#clojure
2021-11-04
didibus03:11:23

What are some use cases people have for async code? Is it mostly to do a few IO calls concurrently? Do people have other use cases for it?

Alex Miller (Clojure team)04:11:54

Loose coupling between components

didibus04:11:16

Hum... can you expand on this?

Alex Miller (Clojure team)04:11:55

see you have these components ... but you want to connect them loosely

Alex Miller (Clojure team)04:11:23

sorry, not sure what part is not clear :)

Alex Miller (Clojure team)04:11:42

instead of A having a reference to B, A and B communicate via a channel
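
A minimal sketch of that idea, assuming clojure.core.async is available (the events channel and the producer/consumer roles are illustrative, not from the actual discussion):

(require '[clojure.core.async :as a])

;; Instead of A holding a reference to B, both talk to a shared channel.
(def events (a/chan 10))

;; "A": produces events without knowing who consumes them.
(a/go
  (loop [n 0]
    (when (< n 5)
      (a/>! events {:type :order-placed :id n})
      (recur (inc n))))
  (a/close! events))

;; "B": consumes events without a reference to the producer.
(a/go-loop []
  (when-some [event (a/<! events)]
    (println "handling" event)
    (recur)))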

didibus04:11:04

Ya ok, so each is coupled to the channel, but not to each other.

Alex Miller (Clojure team)04:11:14

well channels are abstractions

Alex Miller (Clojure team)04:11:41

maybe they're connected directly, maybe there's a tee or a mult or who knows
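
For example, a mult lets one producer feed any number of consumers without either side knowing about the other. A minimal sketch, with hypothetical channel names:

(require '[clojure.core.async :as a])

(def source (a/chan))
(def m (a/mult source))        ; fan-out point

(def audit-ch (a/chan 10))
(def index-ch (a/chan 10))
(a/tap m audit-ch)             ; every tapped channel receives every value
(a/tap m index-ch)

(a/go (a/>! source {:event :signup :user 42}))
(a/go (println :audit (a/<! audit-ch)))
(a/go (println :index (a/<! index-ch)))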

didibus04:11:48

You'd still be coupled to the message structure

Alex Miller (Clojure team)04:11:20

sure => see maps as loose bags of attributes with strong semantic meaning

didibus04:11:09

But I see what you mean, something like: here's data, I don't know who cares, anyone is free to grab it from the channel and do stuff. So the producer doesn't need to know who the consumers are

Alex Miller (Clojure team)04:11:51

this story is told in Rich's Language of the System talk, in Transit, etc

Alex Miller (Clojure team)04:11:00

core.async is more useful in this context than network I/O b/c you can make use of backpressure (which is way harder over the network)

didibus04:11:08

I guess I've never really thought of it within an application. Between services it makes sense to have pub/sub and all that for loose coupling, but within an app I need to think about where that'd be useful

Alex Miller (Clojure team)04:11:09

Clojure Applied has a couple chapters on this, although a lot of that is just core.async basics

Eddie04:11:25

There are a lot of good arguments for this kind of thinking in the Kafka community. Kafka is more “in the large” than core.async, but they have put out a ton of great case studies and discussion about the benefits of decoupling producers and consumers via a logical queue.

didibus04:11:00

Like would you go as far as to say model your data layer so that, instead of exposing functions that map to queries, you'd have a channel where you put query requests modeled as data?

seancorfield04:11:31

I think core.async is really good for decoupling logical processes in your code, so you can think about consumers, producers, and transformers independently.

Alex Miller (Clojure team)04:11:37

no, that doesn't seem like it'd be useful

Eddie04:11:15

There is even an illustrated children’s book that is technically about Kafka but I think much of the motivation could be applied to core.async as well 🙂 https://www.gentlydownthe.stream/

👍 1
seancorfield04:11:37

Our sitemap generator, at work, runs on core.async: producers generate data about locations and profiles and put it on channels, transformers read and augment that data and put it on other channels, and consumers read that data and generate XML sitemaps. Each process can be read separately and can have a certain amount of concurrency, without having to explicitly wire everything together and try to deal with sequences of data.
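
A rough sketch of that producer/transformer/consumer shape (not the actual work code; the data and channel names are made up), assuming clojure.core.async:

(require '[clojure.core.async :as a])

(def raw-ch (a/chan 100))      ; producer -> transformer
(def enriched-ch (a/chan 100)) ; transformer -> consumer

;; Producer: emits raw records (faked here).
(a/go
  (loop [id 0]
    (when (< id 1000)
      (a/>! raw-ch {:location-id id})
      (recur (inc id))))
  (a/close! raw-ch))

;; Transformer: augments each record, up to 4 at a time.
(a/pipeline 4 enriched-ch
            (map #(assoc % :url (str "/locations/" (:location-id %))))
            raw-ch)

;; Consumer: drains the enriched records and writes them out.
(a/go-loop [entries []]
  (if-some [rec (a/<! enriched-ch)]
    (recur (conj entries rec))
    (spit "sitemap.xml" (pr-str entries))))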

seancorfield04:11:11

@U0NCTKEV8 can probably talk in more depth about how we use core.async for other stuff (in our billing system and our messaging system for example).

Alex Miller (Clojure team)04:11:19

it's the same ideas as microservices to some degree, but in process

👍 2
didibus04:11:09

Ya, it's really not something I've thought of before. I'm trying to think how I'd benefit from that, because coupling in-process is not as bad, and other means to loosely couple exist, like protocols, multi-methods, etc.

didibus04:11:16

Apart from the concurrency aspect, do you find that better than just a transducer that generates, enhances, and generates the output?

Alex Miller (Clojure team)04:11:04

you still have transducers - you can put those on channels

Alex Miller (Clojure team)04:11:28

channels have backpressure and can tell you to slow down
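
Both points in one small, hypothetical example: a channel carrying a transducer, whose fixed-size buffer pushes back on a faster producer:

(require '[clojure.core.async :as a])

;; A transducer on the channel, plus a 2-slot buffer for backpressure.
(def c (a/chan 2 (comp (filter even?) (map #(* % %)))))

;; Once the buffer is full, further puts block until something is taken:
;; that blocking is the "slow down" signal propagating to the producer.
(future
  (doseq [n (range 10)]
    (a/>!! c n)
    (println "put" n)))

(Thread/sleep 100)
(println "took" (a/<!! c)) ; frees a slot, letting the producer continue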

didibus04:11:31

I think the only thing I can think of right now is that if, say, you wanted to generate something else from it as well, beyond a sitemap, you could introduce another reader to do so. But in-process, it's so easy to also just add another call from the producer to trigger the other generator...

didibus04:11:31

Ya, okay, so when you add concurrency, I can see channels being nice to manage backpressure. But my question maybe is more: where do people care about concurrency enough that they'd care about async code? I couldn't come up with anything beyond implementing a server that can concurrently handle requests

didibus04:11:20

Then I thought, okay, maybe you want to make like 3 requests to other APIs concurrently? That's another example I thought of, and say you want to do so with non-blocking IO, you'd wrap your callbacks into async code, which is nicer to work with

didibus04:11:51

I got stuck there, haha, but loose coupling between components is a really interesting thought... And I need to sit on it

seancorfield05:11:14

The backpressure is an important aspect of it, when you have concurrency involved.

Ben Sless05:11:25

@U0K064KQV if you've ever played Factorio, it's analogous to the difference between placing components adjacently and passing items between them using inserters, versus passing items between them using a transport belt

Ben Sless05:11:51

Can't model mults and pub/sub with it, but it's a good jumping-off point

didibus05:11:46

I understand the workings, but I'm failing to see the applicability of it. I thought I knew this, but when trying to come up with examples of: here's some stuff you can do with async code (and would also want to do), I couldn't think of many. And then, I also couldn't think of why I'd need to leverage the stackless coroutines over threads. That's another aspect: the only use case I can see is handling non-blocking IO callbacks, otherwise you'd shove compute onto a CPU-bound thread pool, and blocking ops onto an unbounded cached IO pool

Ben Sless05:11:19

The use case is usually when you have to process a sequence of values, concurrently, and usually don't have to send replies, only pass them on

Ben Sless05:11:47

Then you can build a processing pipeline like a factory instead of a Rube Goldberg machine

Ben Sless05:11:15

With small, simple machines which do one thing and do it well, then pass it on to the next machine

didibus05:11:36

Even then though, that seems to be like 5% of core.async 😛 So I can use a pipeline to parallelize some computation, though, by the way, I'm bothered by the fact that they each get their own threads and I can't have like 5 pipelines share the same CPU-bound pool.

Ben Sless05:11:38

Notice that in the model as I presented it, I did not talk about parallelism or efficiency as the value proposition

didibus05:11:02

What's the value proposition?

didibus05:11:25

I guess my question is: Why do you have to process a series of values concurrently?

phronmophobic05:11:14

The moments core.async has really shined for me are when I've had multiple timelines that produce inputs to my program, usually a combination of network and user input. If you have multiple outputs, that's usually not very hard. However, core.async really helps when dealing with multiple input sources that are asynchronous, especially if you need timeouts.
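
That pattern is roughly alts! over several input channels plus a timeout. A hypothetical sketch:

(require '[clojure.core.async :as a])

(def network-ch (a/chan))   ; e.g. responses arriving over the network
(def ui-ch      (a/chan))   ; e.g. user input events

(a/go-loop []
  (let [[val port] (a/alts! [network-ch ui-ch (a/timeout 5000)])]
    (cond
      (= port network-ch) (do (println "network:" val) (recur))
      (= port ui-ch)      (do (println "user:" val) (recur))
      :else               (println "timed out waiting for input"))))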

phronmophobic05:11:31

The next project that I'll probably use core.async for is making a reusable video player. My plan is to have separate threads for UI, video decoding, and audio decoding. There can be multiple videos, and users can play, pause, rewind, seek, etc. at their leisure.

hiredman06:11:15

You can have pipelines share an executor, you just have to do it using pipeline-async
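
One way to read that (a sketch, with a hypothetical shared pool): pipeline-async takes an async function, so the actual work can be handed to an executor shared across pipelines while the async function itself returns immediately.

(require '[clojure.core.async :as a])
(import '(java.util.concurrent Executors ExecutorService))

;; One executor shared by however many pipelines need CPU-bound work.
(def ^ExecutorService shared-pool (Executors/newFixedThreadPool 4))

(defn expensive [x] (reduce + (range (* x 1000))))

(defn cpu-pipeline [from]
  (let [to (a/chan 100)]
    (a/pipeline-async
      4 to
      (fn [x result-ch]
        ;; run the work on the shared pool; put the result and close when done
        (.execute shared-pool
                  (fn []
                    (a/put! result-ch (expensive x))
                    (a/close! result-ch))))
      from)
    to))

;; Several pipelines built this way all draw on the same 4 threads.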

hiredman06:11:45

I think of core.async as being two things: a coordination model (channels) and a DSL for callbacks (the go macro)

hiredman06:11:08

I most often reach for it because of the coordination model, and end up using the go macro because I might as well instead of taking up a thread

Ben Sless06:11:20

A slightly higher-level abstraction which is incomplete in core.async is a stream-processing one, with pipelines

hiredman06:11:50

Outside of one particular service at work (our chat/messaging system) most stuff is very straight-line synchronous code, but I still reach for core.async here and there to coordinate things, like having background threads (that mostly end up being go blocks) to make sure the auth tokens for the different APIs we use are refreshed as needed

hiredman06:11:11

And they have to coordinate waiting to refresh, refreshing, retrying if it fails, and handing the token out as needed to other code
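
A hypothetical sketch of that shape: one go-loop owns the token, refreshes it on a timer (retrying on failure), and hands it out over a channel. fetch-token! stands in for the real HTTP call and runs on a/thread so it doesn't block the go pool:

(require '[clojure.core.async :as a])

(defn fetch-token! []
  ;; stand-in for the real call; returns the token and its lifetime
  {:token (str (java.util.UUID/randomUUID)) :expires-ms 60000})

(defn start-token-manager! []
  (let [requests (a/chan)]
    (a/go-loop [tok nil, refresh (a/timeout 0)]
      (a/alt!
        refresh
        ([_]
         (let [t (a/<! (a/thread (try (fetch-token!) (catch Exception _ nil))))]
           (if t
             (recur (:token t) (a/timeout (:expires-ms t)))
             (recur tok (a/timeout 5000)))))   ; failed: retry soon, keep old token
        requests
        ([resp-ch]
         (when tok (a/>! resp-ch tok))         ; closing without a value means "not ready"
         (a/close! resp-ch)
         (recur tok refresh))))
    requests))

;; Callers put a fresh response channel on `requests` and take the token from it.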

hiredman06:11:18

The messaging/chat system is the "really" async place because the communication model we have is a stateful connection between the browser and the server, and we don't want a lot of long-lived dedicated threads for each connection

hiredman06:11:36

Our redis pubsub component actually satisfies the WritePort protocol so you can put to it like a core.async channel, and it satisfies most of the core.async pubsub protocol, which is maybe terrible because it doesn't properly communicate backpressure (I've experimented with backpressure there and found it tricky over a broadcast medium), but it has mostly worked great, and it makes it easy to mock in tests

hiredman06:11:20

Our in-memory user presence system is basically a go-loop that, by reading and writing messages to core.async channels, keeps the different servers' views of who is currently online in sync (it reads and writes to channels; we connect those channels to the network)

didibus05:11:38

For context, I wrote a lib that adds async/await, error handling, cancellation, and common JS promise-like API functions over core.async, mostly for fun, but also because so many people seem to want that. And I was thinking, cool, I'll write some examples or try it out to see the usability of it, and I could not think of any examples to test it on or demo it 😞 It turns out my uses of async are always either my http server is async or my http client is async, but never have I really needed anything to be async within those bounds. So I'm looking for ideas for that.

olaf05:11:15

In clojure.spec how can I do an or logic? I want to receive an email field (required) that could be empty or a string with an email inside.

(s/def :user/email (s/and string? #(re-matches re-email %)))
(s/def :login/user
  (s/keys
   :req-un [:user/password
            :user/email]))
s/or takes tagged key/pred pairs, and plain or is not usable inside a spec. What is the best way to validate the email, like (or empty? (and string? #(re-matches re-email %)))?

dpsutton05:11:34

(s/valid? (s/or :empty str/blank? :bob #(re-matches #"bob" %)) "sue")

🙌 1
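
Applied to the email spec above (assuming re-email is the regex from the original question):

(require '[clojure.spec.alpha :as s]
         '[clojure.string :as str])

(s/def :user/email
  (s/or :empty str/blank?
        :email (s/and string? #(re-matches re-email %))))

;; s/valid? accepts "" and matching addresses; s/conform tags the branch,
;; e.g. [:empty ""] or [:email "someone@example.com"].
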
gomosdg07:11:56

Howdy! I have not used rules-based systems much, but I am keen to explore Clara Rules. Anyone with some examples or use cases to share for where Clara Rules would be really useful? Also please share some of your experiences and best practices in debugging and testing. Thanks!

Jakub Holý (HolyJak)09:11:37

Hi! Clojure does not support Java's readable number syntax, 10_000, right? Will it?

delaguardo10:11:21

No, it doesn't. And if I remember a recent discussion correctly, it won't support it in the future.

😞 1
Ivar Refsdal11:11:23

Not sure if you know, but it's possible to use e.g. (int 1e4) if you find that more readable.

Ivar Refsdal11:11:15

I suppose you would then write (int 10e3)

Alex Miller (Clojure team)13:11:28

I don't remember saying we won't support it in the future, I'd say that's tbd

👍 1
jjttjj13:11:41

Here's the ask.clojure.org question for it: https://ask.clojure.org/index.php/8511/add-digit-separators-support-to-number-literals For some reason I also remember someone saying it wouldn't be supported, but I could be imagining that

🙏 3
Stuart14:11:56

C# supports this as well now

emccue13:11:49

i mean, if you want you can always do this:

emccue13:11:49

;; assumes clojure.string is aliased as `string`, e.g. (require '[clojure.string :as string])
(defmacro numbah [symbol]
  (Integer/parseInt (string/replace (name symbol) #"_" "")))

emccue13:11:55

(numbah 10_000)

emccue13:11:56

or something similar if that doesn’t work

emccue13:11:19

(defmacro numbah [value]
  (Integer/parseInt (string/replace value #"_" "")))
=> #'user/numbah
(numbah "10_000")
=> 10000

emccue13:11:22

there, that's better

Alex Miller (Clojure team)13:11:53

how often do you supply large literal constants where you would even use this?

emccue13:11:16

(defmacro numbah [value] 
  (read-string (string/replace value #"_" "")))
=> #'user/numbah
(numbah "0x10_000")
=> 65536

jjttjj14:11:01

In the cryptocurrency domain, large literal constants like this are pretty prevalent. Of course we have util functions to turn "100mm" into a number, but I do frequently find myself temporarily putting _ in numbers just to compare them visually. It would be a welcome addition for me to have this be valid code. It's not something I feel super strongly about, but since we're on the topic

Jakub Holý (HolyJak)16:11:31

Not that often, mostly only during development when playing with benchmarking and test data generation.

👍 1
Akiz17:11:36

I like this feature - it makes reading much easier. But i am not sure if it would be good idea as (read-string “10_000”) should return string to be backward compatible.