#clojurescript
2023-03-21
M J08:03:26

#{{:11024 {:group_instance_id nil, :input_type text, :value nil}
{:11025 {:group_instance_id nil, :input_type text, :value "hi"}
{:11026 {:group_instance_id nil, :input_type text, :value nil}}
I have input-data, which is a persistent hash map of maps, where each id is the key followed by another persistent hash map containing a key :value. I only want the ids whose :value is nil.

p-himik08:03:56

This really belongs in #C053AK3F9 but since you've already asked it here:
• In that block, you don't have just a map - you have a map wrapped in a set
• Keywords can't start with digits. Perhaps you're getting the data from some JSON with string keys and then keywordizing those keys - you shouldn't do that in this case
• The code that does what you want, assuming that data is a map and not a set with a map:

(map key (filter #(nil? (:value (val %))) data))
As an alternative that can potentially be faster (but you should measure your specific scenarios):
(into []
      (keep (fn [e]
              (when (nil? (:value (val e)))
                (key e))))
      data)

thheller08:03:07

this also isn't valid EDN? looks like you added a few extra { manually?

p-himik09:03:32

I have an audio player component that's a mix of CLJS and JS. It has to rely heavily on promises because of how the Web Audio API works. That reliance results in timing issues because there's no guarantee that while f().then(g).then(h) is in flight, some other-f won't run in between g and h due to e.g. a user interaction with very bad timing. Another issue is that some operations are compositions of other async operations. If e.g. h has a "tail" of a, I need that a to be executed right after h - again, without anything being done in between h and a. So in other words, I need to run compositions of async operations as if they were sync. Initially I thought about using core.async, given that its channels are serial. But it seems like that last bit kinda spoils it. On the other hand, I don't really know core.async all that well, so maybe I'm wrong. Or maybe there's a proper solution that doesn't even need core.async?
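
A minimal sketch of the interleaving issue described above, with stubbed hypothetical async steps:

(defn f [] (js/Promise.resolve :source-loaded))
(defn g [x] (js/Promise.resolve [x :decoded]))
(defn h [x] (js/Promise.resolve [x :playing]))

;; nothing stops a user-triggered chain from running between g and h;
;; the thread is single, but the microtask queue interleaves the chains
(-> (f) (.then g) (.then h))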

thheller09:03:35

I use core.async for coordinating stuff a lot. Totally depends on how you design your overall code though whether that's useful or not. I have never used web audio, so not sure what exactly is needed.

thheller09:03:13

Other than that, you can just control when you .then, and maybe use Promise.race or so.

p-himik09:03:44

Oh, race is the exact opposite of what I need. :D Yeah, can't really say it better than "run async stuff as if it's sync". Don't think it's possible in general, but here I control all promises and all call sites, so it should be. But bloody hard, every tiny change has a chance to blow things up.

p-himik09:03:35

So, every execution begets a series of other executions, which can potentially be of length 0. The executor queues up executions, runs them one by one, and each execution is replaced with its result. Something like that.
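
A rough sketch of that executor idea, assuming each execution is a function returning a promise of its follow-up executions (possibly empty); purely illustrative:

(defn run-all!
  "Runs `execution`, then its follow-ups (in place of the execution itself),
   one at a time."
  [execution queued]
  (.then (execution)
         (fn [follow-ups]
           (let [queue (concat follow-ups queued)]
             (when (seq queue)
               (run-all! (first queue) (rest queue)))))))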

dgb2309:03:52

I'm not familiar with core.async yet, but I've played a lot with the equivalent in Go. This sounds like a perfect example for channels to me. How I'd approach it: draw a state machine on paper first, or with Excalidraw, and then map it conceptually to channel mechanisms.

thheller09:03:52

From Erlang I inherited thinking in messages.

thheller09:03:32

So you basically have an atom holding your state. All .then callbacks just trigger messages - sort of like an agent in CLJ, or workers in general.

thheller09:03:51

so at any point you can look at your entire state and decide what to do next

☝️ 2
thheller09:03:04

A little verbose at times, but great for control.

thheller09:03:15

basically one alt!! that selects over which channels you want to react to

thheller09:03:30

and the loop keeps the state
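
A rough sketch of that loop (in CLJS it's alt! inside a go-loop rather than alt!!, and referring the macros directly from cljs.core.async assumes a recent core.async); the channel names and state keys are hypothetical:

(ns example.player
  (:require [cljs.core.async :refer [chan go-loop alt!]]))

(defonce play-ch (chan))
(defonce stop-ch (chan))

(defn control-loop! []
  (go-loop [state {:status :idle}]
    (recur
      (alt!
        play-ch ([track] (assoc state :status :playing :track track))
        stop-ch ([_]     (assoc state :status :stopped))))))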

thheller09:03:21

I'd avoid doing this for CLJS because of the go macro generating too much code

thheller09:03:39

but you can achieve the same entirely without core.async and just an atom

thheller09:03:11

also basically how select works in go

p-himik09:03:05

How would that enforce total ordering though? I'm very hesitant to model it as an explicit state machine because there are a lot of states, most of them being intermediate ones ("the nodes are stopped but we haven't received the stop event yet", "the playback is initiated but the worklet is not yet loaded", etc.). Not sure one piece of paper would be enough, heh. I think it'll be solved if instead of "work queue" I use "work deque". I just have to make sure I never use .then except for some special cases.

dgb2309:03:11

If I understand the problem correctly, you'll need an internal queue, right? Every message (say, user events) that your state machine can't process right now gets put on the queue, while you await the messages from your process.

thheller09:03:33

You control the ordering. You just check your current state; if a message arrives out of order, you buffer it somewhere.

p-himik09:03:38

But that would require me to model an explicit state machine, no? Otherwise, how else can I figure out whether I can process a message right now or whether it has arrived out of order?

dgb2309:03:59

Yes I would do it this way for sure.

dgb2309:03:10

Maybe draw up a very minimal one and use setTimeout to kind of explore it.

dgb2309:03:54

To get a better feel for how it works. At least I always need to do stuff like this.

p-himik09:03:55

Oh, that last part will absolutely not work for me. :D Whenever I get a "good feel" that it works properly, the actual users, who use that functionality extensively and intensively, manage to find ways to break it - ways that I can't even imagine by looking at the code. :) I still have no idea how one particular error happened, even taking all the arbitrary ordering into account.

thheller09:03:20

Depending on how complicated your "loop" is, a simple :state :foo and then (case state :foo ... :bar ...) might be enough.

thheller09:03:59

I just tend to feed all events into a generic (swap! state-ref process-msg {:op :foo ...}) type of thing
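
A minimal sketch of that pattern; the ops and state keys here are made up:

(defonce state-ref (atom {:state :idle}))

(defmulti process-msg (fn [_state msg] (:op msg)))

(defmethod process-msg :play [state _msg]
  (assoc state :state :loading-worklet))

(defmethod process-msg :worklet-loaded [state msg]
  (if (= (:state state) :loading-worklet)
    (assoc state :state :playing)
    ;; arrived out of order - buffer it (or ignore it) instead of transitioning
    (update state :buffered (fnil conj []) msg)))

;; every .then callback just feeds a message into the same place
(defn dispatch! [msg]
  (swap! state-ref process-msg msg))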

thheller09:03:28

You can abstract this into many different layers; it all depends on how complicated you want or need to make it.

thheller09:03:07

in JS it is pretty straightforward since there is only one thread

thheller09:03:35

so you never have the problem of two threads trying to swap! the state-ref and maybe causing CAS to retry

thheller09:03:54

(so triggering side-effects inside the swap no longer is a problem basically)

dgb2309:03:25

That's also the reason why it maps perfectly to a state machine.

thheller09:03:56

Yeah, it's all state machines basically.

thheller09:03:19

with varying degrees of actually defined "machines" 😛

dgb2309:03:27

my state machine models are typically just maps that describe the transitions. Then I dispatch to the right function that takes [from to]
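
A minimal sketch of that transition-map idea (the states and handlers below are hypothetical):

(defn on-start [from to] (js/console.log "transition" (str from) "->" (str to)))
(defn on-stop  [from to] (js/console.log "transition" (str from) "->" (str to)))

(def transitions
  {[:idle    :playing] on-start
   [:playing :idle]    on-stop})

(defn transition [state to]
  (let [from (:state state)]
    (if-let [handler (get transitions [from to])]
      (do (handler from to)
          (assoc state :state to))
      ;; no matching [from to] entry - keep the current state
      state)))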

dgb2309:03:49

maybe that works for you too

sirwobin09:03:26

Have you taken a look at https://github.com/funcool/promesa? Its do and let macros can force order on Node.js. Check https://funcool.github.io/promesa/latest/promises.html.
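
For illustration, a small sketch using promesa's let macro; the steps here are hypothetical and each returns a promise:

(ns example.play
  (:require [promesa.core :as p]))

(defn load-worklet! [] (p/resolved :worklet))
(defn start-playback! [worklet] (p/resolved {:worklet worklet :playing? true}))

;; p/let awaits each binding in order, so the composition reads top to bottom
;; even though every step is asynchronous
(defn play! []
  (p/let [worklet (load-worklet!)
          state   (start-playback! worklet)]
    state))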

p-himik09:03:09

> in JS it is pretty straightforward since there is only one thread
Would you say it's still straightforward given that most computation trees look like this

    a
   / \
  b   c
 / \   \
d   e   f
   / \
  g   h
where every / and \ is a call to .then, and during each and every one of those "intermissions" another such tree can be "scheduled"? The thread is one, but the computation is still concurrent, which is a huge PITA in this particular case. @U1KGF7AG3 That's local order, not global. Or at least, not "global for a piece of functionality".

p-himik10:03:02

But maybe I'm thinking about it wrong and some things that I consider to be separate states should actually be lumped together or not be considered states at all.

thheller10:03:12

don't model it as a tree 😛

p-himik10:03:39

I didn't mean that that's what I'm modeling it as - it's just how the calls happen.

thheller10:03:44

there is absolutely no logic in any .then call. they all just feed into the same place

p-himik10:03:53

Yeah, but they don't ensure order.

thheller10:03:08

your code ensures order.

thheller10:03:21

this is really tough to talk about abstractly

dgb2312:03:24

Last time I did something similar-ish in JS, I just triggered enriched events in the callbacks with dispatchEvent. The events are custom events and the target is custom too (not an actual DOM node). (As you might know, you can put stuff into a custom event via the detail property.) This way you have one handler that then further dispatches the individual event messages: there is your state machine and your queue for messages that the state machine doesn't handle at that moment.

I personally never needed a queue like the one you probably need, but I use this type of pattern in UI programming quite often (while discarding messages that don't get accepted). Here is how I would try to do it: the queue gets emptied while your state machine is in the appropriate state, say "Accepting Messages", so any message gets processed once the queue is empty. The transition goes back to the same state "Accepting Messages" after each message it takes. (At least logically - while you empty the queue you might just do that without going through the state machine.)

When a message comes in that requires specific further processing while ignoring others, you transition into "Step One", which only accepts messages that put it into "Step Two", and so on. While your state machine is in one of those steps, your handler will just put any other message into the queue, until the "Final Step" is reached and you get back into "Accepting Messages".

I mention this because this kind of pattern has helped me a lot in coordinating asynchronous UI stuff. But it's also a matter of whether it really fits your problem and whether you vibe with that mental model.
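
A rough sketch of that custom-event setup (the event name and payload shape are made up):

(defonce bus (js/EventTarget.))

(defn emit! [detail]
  (.dispatchEvent bus (js/CustomEvent. "player-msg" #js {:detail detail})))

;; the single handler that feeds every message into the state machine / queue
(.addEventListener bus "player-msg"
  (fn [^js e]
    (js/console.log "message" (.-detail e))))

;; usage from inside a .then callback:
;; (emit! {:op :worklet-loaded})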

p-himik14:03:11

After writing a lot, thinking a lot, and cursing a lot, I realized that my main culprit is the fact that the composable API is both internal and external. Ended up extracting the external API into its own functions and making it serial via a common constantly reassignable promise. So far, seems to work alright.
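
A minimal sketch of that "common constantly reassignable promise" approach (the names are made up):

(defonce serial-p (atom (js/Promise.resolve nil)))

(defn run-serially!
  "Runs async `f` (a fn returning a promise) only after everything
   previously enqueued has settled."
  [f]
  (let [next (.then @serial-p (fn [_] (f)))]
    ;; swallow rejections so one failure doesn't wedge the chain
    (reset! serial-p (.catch next (fn [_] nil)))
    next))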

👍 2
skylize05:03:54

If it helps, you can reduce over a seq of transforms, calling .then for each. Something like this (untested):

(defn serial-then
  ([fs] (serial-then fs (js/Promise.resolve nil)))
  ([fs p] (reduce (fn [p f] (.then p f)) p fs)))

skylize05:03:29

Hmm, might need a do-seq or something, too. Haven't yet tried this technique with laziness.

thheller05:03:09

@U90R0EPHA this doesn't work. .then runs in the next microtask and itself just returns a new promise

Chris McCormick06:03:50

👍 for promesa flattening out complex async promise code.
> So in other words, I need to run compositions of async operations as if they were sync.
This sounds a lot like something promesa will help with.

skylize06:03:33

I don't follow what your concern is @U05224H0W. This builds up a chain of .then calls. When the first promise resolves, its result is passed into f of the next .then, which itself must resolve before passing its result to the 3rd, etc.

thheller06:03:46

> When the first promise resolves

thheller06:03:52

So, yes, the .then calls will resolve in order, but that is not what this discussion was about. The chained .then calls in the example also do that: f().then(g).then(h)

thheller06:03:58

You just made a function to write (serial-then [g h] (f))

thheller06:03:59

.then always runs in a microtask; it never returns a sync result, even if the promise is already resolved.
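
A quick demonstration of that:

;; even an already-resolved promise defers its .then callback to a microtask
(.then (js/Promise.resolve 1)
       (fn [v] (js/console.log "then ran with" v)))
(js/console.log "this logs first")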

skylize06:03:11

I guess I didn't really catch what the complaint was. I just saw that he wanted serial processing, and was throwing that out as an easy way to get that. Not clear what "as if it's sync" should even mean, so I don't know what about the issue makes running in a microtask problematic. But I see now the complaint about duplicate f.then... calls conflicting, which is certainly not going to benefit from my suggestion.

hifumi12320:03:07

Is it safe to set! dotted symbols like js/document.title or should we prefer writing (.-title js/document) in this case? Both cases work fine in development builds but I'm curious if there are any gotchas under advanced optimizations

p-himik21:03:37

Shouldn't matter for js/* stuff. I usually do (set! js/document -title "something").
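
The variants mentioned in this thread, side by side (the last one is the form p-himik mentions):

;; dotted symbol
(set! js/document.title "something")
;; property-access form
(set! (.-title js/document) "something")
;; three-argument set!
(set! js/document -title "something")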

hifumi12321:03:38

I guess there is a detail to note:
> Dots inside symbols ... are not detected by :infer-externs
So we're safe in the case of js/document but not necessarily other namespaces.

thheller05:03:40

in shadow-cljs it should work for all js/* things