
I have enjoyed learning about WSL2 and clojurescript integration from the on-topic posts in Clojurians, but I think my query is probably off-topic. Which came first, vcXsrv or WSL? If vcXsrv, how was it used? What was the motivation for developing it? If created after WSL, was WSL the motivation (communicate between different processes on the same host)? If before WSL, was it perhaps for a windows machine to communicate with a physically remote and separate Linux machine?


@brendnz We can also discuss this in #clj-on-windows if you want


How I see it: vcXsrv and X410 (another one) are X servers for Windows which allow you to render linux applications in Windows. You could use this with WSL. But in Windows 11 there is now wslg which replaces the need for a custom X server.


Thank you @borkdude. I am in the process of installing vcXsrv (not to mention learning about bashrc and awk; just enough to know what they are doing). I have got it installed and I am following instructions on safety and port forwarding etc. I am happy with my progress. I am doing this because when I run the getting started with ClojureScript introductory REPL in Calva I get a "cannot read property 'eval' of null" when loading the namespace. I assume this is because Calva needs the JVM in WSL2 to open a GUI web browser.


Oops, pressed enter for a new paragraph ...


Not sure why this is. Maybe @U0ETXRFEW knows. But when you have Windows 11 and upgrade wsl then you should not need the X server anymore I think.


But you should also be able to do the tutorial from Windows VSCode not from within WSL2 I think


... I at first looked forward to wslg, but my computer lacks the specs to upgrade to Windows 11. But I am now pleased to go the vcXsrv route in the meantime, for what it teaches me about how things work.


Yes, I am using the tutorial in VS Code (the Windows program), but it runs programs from WSL2.


Ah, but my query relates to the history of vcXsrv: the motivation for why it was created.


I’ll have a look and see if I get that boring error message. But later. Will be mostly afk for a while


ctrl-alt-c enter produces the error message.


@U04V15CAJ, thank you for the offer of continuing the discussion on #clj-on-windows Sorry for not wanting to type everything out again, but I have learnt about the channel's existence. It was not on my list of channels on the left-hand side of my screen, but it shows up when I click your link.


@brendnz Are you sure about WSLg being Win 11 only? I am pretty sure I used it with Win 10.


Having used both vcXsrv and WSLg, the latter is leaps and bounds more performant.


Hi hindol, I'm not really capable enough to get into the early track of pre-release Windows, but thanks anyway.

✔️ 1

I'm pretty sure I've had the getting started cljs repl working on Windows w/o any WSL stuff.


Ah, but I want the WSL stuff.


If Datomic on-prem is running on WSL, I cannot connect to it from vscode without vscode running through WSL.


Regarding your original question, vcXsrv definitely came first. WSLg is pretty recent.


Then I misunderstood the issue. And am also not well suited for troubleshooting it. I'm super bad with Windows and also installed WSL for the first time yesterday and found it super confusing.


@U0ETXRFEW, I wish you the same luck I have had, which overall has been quite good.


I'll avoid it with the same passion as I avoid Windows in general. 😎


I was going to say if your luck is any good, you will enjoy it, but if you don't need Windows maybe you don't need WSL. If you do, do everything the Linux way within Windows Terminal, which is really a Linux terminal (see the picture). That is, work with ~/home/pez/... files, not /mnt/c/... files. Um, I'm not qualified to comment in some respects.


Just for the record, I have not connected to Datomic from an editor yet; still learning preliminaries.


I only have a Windows machine to try help Calva users on Windows. But it drives me crazy... From where is that screenshot taken?


I have posted four screenshots, but no doubt you mean the most recent. Here are some more. The first (though they may not show in order) comes from the Windows start screen. The icon on the right comes from when I set up WSL2, and the icon on the left comes from when I set up Windows Terminal.

You can set up Windows Terminal to open into a Ubuntu shell, a PowerShell, a command prompt, or apparently an Azure Cloud Shell. Once you open the terminal, you can open more than one shell, each in its own tab. So I often have several Linux shells open, and sometimes a PowerShell open at the same time. The screenshot you refer to is what shows when you click the plus sign to open a new tab. When you click +, you get to choose what sort of (additional) shell you want to open. Oh, by the way, I have never clicked the icon on the right since I installed Windows Terminal.

In the second screenshot below you will see I have three bash shells open (I believe that is the term). I list a Linux directory from the Linux file system, and I list a Windows directory from the Windows file system, albeit accessible from the Linux file system. Just because you can, I wouldn't save a file of one file system from a shell of the other. Incidentally, that BashCommands file is my cheatsheet, and that deps.clj file is in this location only temporarily.


@brendnz I added you to #clj-on-windows and posted a suggestion there about installing a browser on WSL.


Was exploring Recoil, the new React state management library by Facebook and stumbled upon this. (David McCabe is the author of Recoil)


Heh, that explains all my "but we already have those!" reactions while watching a presentation on Recoil.


The uniqueness requirement on the “key” when instantiating atoms and selectors made me sigh that they really need namespaced keywords :sweat_smile:


I find it interesting how a lot of things from Clojure(Script) seem to get picked up by more mainstream langs. HMR wasn't really a thing before figwheel. Devcards predates Storybook by quite some time. The list goes on.


A little core.async (for the async bits of recoil) + reagent atoms and we basically have Recoil.


@U04V5VAUN also nice to see that [email protected] is a lot closer to reagent than [email protected] And reagent didn't change in all these years

Dustin Getz18:12:40

IIUC, Recoil's implementation is significantly better / less glitchy than Reagent due to planning out the computation in advance as a DAG. So the various ways that Reagent's deref tracking violates algebraic reasoning/composability (i.e. don't deref a ratom inside an if) - Recoil is not vulnerable to this class of error


> don't deref a ratom inside an `if`
Why is that?


And what are other various ways in which the algebraic reasoning/composability is violated?


I guess because derefing an atom is impure? (and therefore i am not sure you can “express” derefing an atom using algebra?) I’d love to read Dustin’s response tho


If Recoil’s implementation is superior and more performant it would be interesting to see if it can be used under the hood for r/atoms while maintaining the same interface.


Arthur, deref'ing an atom is the same as using useRecoilValue. Not sure if the usual rules of hooks apply to the recoil hooks, but if they do, useRecoilValue cannot be invoked conditionally either.

👍 1

Reagent + Hiccup + Ratoms is the cleanest interface to doing react IMO and one can get pretty far with it. (Reframe is great, but there is a lot of ceremony involved).

Dustin Getz23:12:37

the deref syntax can indeed be analyzed by a macro at compile time however it is a full rewrite of the implementation, note that Recoil does not use React. we are working on something similar to this at Hyperfiddle, see my pinned tweet (sorry typing on mobile)


Just to not leave potential readers confused - it is perfectly fine to @ a ratom in one of the branches of if. It's not an error.

Dustin Getz13:12:55

it's not fine, any derefs in the branch not taken will not be detected/tracked and future updates will result in glitches

Dustin Getz13:12:32

reagent only sees deref side effects that actually occur


If some ratom x is not used, why would or should its update affect some component? What kind of glitches are you talking about, exactly? Can you provide an example? In this code:

(def a (r/atom false))
(def x (r/atom 1))
(def y (r/atom 2))

(defn my-view []
  [:span (if @a @x @y)])
all rendering will be done correctly, all values will be displayed according to the value of a. If you change a, a different number ratom will become tracked.


On the off-chance that you mean that the old value of x will be used if you change x right after doing (reset! a true) - IIRC that was an issue prior to Reagent 0.3.0. With that version, async rendering was introduced and now multiple updates to multiple atoms will all be rendered once, without a chance for a race condition.

Dustin Getz14:12:07

It only happens in more complex compositions, to trigger it you may need to combine it with further hiccup calls and/or more complex reactions derived from ratom. We left reagent two years ago over stuff like this, i have forgotten all the internals knowledge


Alright. But what do you mean by "combine it with further hiccup calls"? The only way I see anything having a chance of creating any issues here is if you have side-effects in your view functions. Like reset!-ing some ratom right when a view function is called, or calling run! on each render. Or maybe using setTimeout. Just regular conditions without other really suspicious and often inappropriate things should not cause any troubles.


And if that's the case, I would blame all such suspicious things rather than conditions, because those things can definitely cause trouble on their own.

Dustin Getz14:12:15

you're assuming reagent's reactive evaluation model is sound; it very much is not (in fact almost all reactive/FRP libraries are broken, this is not unique to reagent; it seems it is only in the last 3-4 years that glitch-free FRP has been figured out)

Dustin Getz14:12:44

reagent's model is based on tracking deref side effects after they occur, which is the opposite of what you want - you want to analyze the ast into a DAG and plan out an optimal rendering order


But how exactly is the model broken? Surely if almost all similar libraries suffer from similar things, there should be plenty of literature you could recommend.

Dustin Getz14:12:36

tons of literature - google "glitch free propagation" and FRP


I don't know what I don't know, so that's why I'm asking specifically for recommendations. :)

Dustin Getz14:12:03

typing on mobile but jane street has good videos, the docs in jane street incremental lib are great

Dustin Getz15:12:02

clojure's has a goal of being glitch free


The link to Airstream provides the shortest explanation of the problem, so I'll focus on that. It states two things:
• A glitch in FRP is a situation where inconsistent state is allowed to exist and exposed to either an observable or an observer
• Most streaming libraries [...] are implemented with unconditional depth-first propagation

By default Reagent is not a streaming library, it does not propagate changes till they reach a view. Instead, it marks everything that needs to be recomputed as dirty, and then, only when a reaction is deref'ed again or the rendering is being done, the actual computation will take place. And even so, if reaction a depends on b then there will be no old values of b - the new value will always be computed if b is marked as dirty.

You can opt in and make it work in a streaming-like fashion where reactions are recomputed right when their dependencies change and not when you deref the reactions - by using :auto-run true explicitly or via run! or track!. But then again - if you deref multiple things in the reaction function with :auto-run true, they will be recomputed if they are marked as dirty. So it's less like streaming changes and more like triggering an SQL function that depends on a non-materialized view.

Unless my understanding is grossly incorrect, the only way for you to shoot yourself in the foot, apart from using async JS stuff or fiddling with ratom watchers manually, is if you mix deref'ing reactions with changing the ratoms they depend on. But that's quite bizarre, on par with calling reset! within a swap! in Clojure. It doesn't mean that the model is incorrect, it just means that the usage is wrong.

Going back to the topic of conditional derefs. Even if what you say is applicable to Reagent and my understanding of how Reagent works is utterly wrong, your initial statement still doesn't seem to be correct. What you describe as an issue of FRP has nothing to do with conditional deref'ing by itself.
Given that you have already had experience with it, I would really appreciate if you could prove me wrong and come up with an example that uses Reagent (without "bad" stuff - async JS, fiddling with ratom watchers, or mixing and matching derefs with resets) and manages to return an inconsistent state to an observer. But so far, especially given that I myself have been using Reagent with no similar issues for years on rather complex projects that do use run!, my working hypothesis is that when you encountered the issue it was one of those "bad" things as its root cause.
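To make the "mark dirty, recompute lazily on deref" model described above concrete, here is a hypothetical Python toy (not Reagent's actual implementation, and all names are made up): changing an input only flags dependents as dirty, and values are recomputed only when dereferenced, so a reader never observes a stale intermediate value.

```python
class Cell:
    """Toy model of a ratom/reaction: dirty-flag invalidation, lazy recompute."""

    def __init__(self, value=None, compute=None, deps=()):
        self.compute = compute            # None for a plain input atom
        self.value = value
        self.dirty = compute is not None  # reactions start uncomputed
        self.dependents = []
        for d in deps:
            d.dependents.append(self)

    def reset(self, value):               # like reset! on a ratom
        self.value = value
        self._mark_dirty()

    def _mark_dirty(self):                # propagate only the dirty flag
        for d in self.dependents:
            if not d.dirty:
                d.dirty = True
                d._mark_dirty()

    def deref(self):                      # like @reaction: compute on demand
        if self.dirty:
            self.value = self.compute()
            self.dirty = False
        return self.value

a = Cell(value=1)
b = Cell(compute=lambda: a.deref() + 1, deps=[a])
c = Cell(compute=lambda: b.deref() * 2, deps=[b])

assert c.deref() == 4
a.reset(10)              # nothing is recomputed here, only marked dirty
assert c.deref() == 22   # recomputed now, and b is refreshed first
```

Because `c.deref()` pulls a fresh `b` before computing, there is no window in which `c` is built from a stale `b`, which matches the argument above about why lazy invalidation avoids the depth-first streaming glitch.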

Dustin Getz16:12:42

may not have time

Dustin Getz16:12:47

you're still assuming that the reagent dirty checking deref side effect tracking model is sound. it is not sound, there is no literature on that model, it spews undefined behavior in moderately complex expressions


@U061V0GG2 maybe you have an opinion on where my understanding is wrong and how exactly it might be possible to attain undefined behavior?

Dustin Getz17:12:58

I was unable to produce a glitching if, the case i am recalling may have been higher order (if i even recall correctly, it's been years)

Dustin Getz17:12:13

There are other more clear glitch cases though. One is that reagent's alg makes no guarantee as to the order in which incremental renders will occur, because once the deref side effect has happened it's actually too late to do any "query planning" (as an analogy). You have to analyse the ast to plan out the optimal computation order. A consequence of this is that reagent may run your render functions multiple times, sometimes in a loop.

IIRC due to the deref tracking model, each time something changes, we need to re-discover which reactions are visible, and even this discovery process can cause render functions to be called more than once or even in the wrong order, causing glitched intermediate states that violate assumptions that the userland code makes. For example I recall we often saw Reagent passing nil to render functions which absolutely was not a possible/consistent state, there would be an NPE, and then reagent would discard the exception and keep rendering over and over, getting a little further each time "through the maze" until it discovered all the reactions and finally reached a consistent state.

This was all years ago, maybe I am mis-stating internals; we just tired of imperative mush and wrote a correct renderer using missionary in about two man-months

Dustin Getz17:12:58

At some point programmers inevitably need to create a lambda abstraction or closures and Reagent is an abstraction ceiling in this regard


> A consequence of this is that reagent may run your render functions multiple times, sometimes in a loop
That would be true only if your render functions themselves alter the state.

> each time something changes, we need to re-discover which reactions are visible
There's no discovery, there's tracking. I don't know what kind of discovery you're talking about - I only see explicit queues and watchers in the source code.

If any of those points were true, any component that depends on multiple reactions/ratoms would be updated multiple times if two of those reactions/ratoms change. But that's not the case - there's only a single update. Perhaps you've experienced some bug in old Reagent that has since been fixed and that I've managed to never stumble upon.

Regarding your link - this is true but completely orthogonal to the whole discussion of reactivity.

Dustin Getz19:12:34

> That would be true only if your render functions themselves alter the state.
This is unequivocally false based on our experience; we have debugger screenshots of the behavior


Can you share them? Is there any chance the old source code with the glitchy behavior is still somewhere in your VCS? If not to share, maybe it could be used to come up with a minimal reproducible example.

Dustin Getz19:12:25

It's not worth it. The level of reactivity we needed is well beyond the typical reagent app; the codebase was abandoned. It's full of code like this (fmap is r/track; cursor is r/cursor; partial is r/partial)

(defn unsequence
  "Expand a reactive reference of a list into a list of reactive references while maintaining order.
  If `key-fn` is provided, the children cursors will be pathed by the provided key-fn, NOT index.
  This is useful when the child cursors' references must be consistent across reorderings (which index does not provide).
  Like `f` in track and fmap, `key-fn` MUST be stable across invocations to provide stable child references."
  ([rv]
   {:pre [(reactive? rv)]}
   (assert @(fmap #(or (vector? %) (nil? %)) rv) "unsequencing by index requires vector input, maybe try using a key like :db/id?")
   (->> (range @(fmap count rv))
        ; cursor indexing by index silently fails if @rv is a list here
        (map (fn [ix] [ix (cursor rv [ix])]))))
  ([key-fn rv]                                              ; kill this arity
   {:pre [(reactive? rv)]}
   (let [lookup (fmap (partial util/group-by-unique key-fn) rv)] ; because results are vectors(sets) and we need to traverse by id
     (->> @(fmap (partial map key-fn) rv)
          (map (fn [k] [k (cursor lookup [k])]))))))

Dustin Getz19:12:28

the application needed to be reactive on a massive scale


Doesn't look too bad TBH, although I'm not a particular fan of using lazy map with reactive stuff. Such an approach is almost certainly susceptible to issues, albeit of a different kind to the one discussed.

Dustin Getz19:12:32

yes the "issue" being reagent is not algebraic and therefore cannot be abstracted over

Dustin Getz19:12:46

otherwise it's just ordinary functional programming


IME reagent's reactive machinery tends towards doing more work than necessary, including firing more side effects


the prototypical test for a reactive library is what I call the "dirty diamond" test, which I got from watching Jane Street's talk "7 implementations of Incremental"


imagine you have a DAG like this:

   a
 /   \
b     c
|     |
|     d
 \   /
   e


depending on how you calculate the graph, you can end up re-calculating e multiple times. reagent does

👍 1

if you have a side effect that fires each time e changes you can end up seeing "incorrect" outputs because there's an intermediate calculation of e which doesn't match the state of the entire world. in more advanced discussions of React.js you will hear talk of "tearing," it's fundamentally the same problem. your outputs are inconsistent with the state of the graph at certain points in time
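The "dirty diamond" glitch just described can be sketched in a few lines of Python (a hypothetical toy model, not any real library; node names follow the diagram above): with naive depth-first push propagation from `a`, a side effect watching `e` first observes a value computed from the new `b` but the stale `d`.

```python
# Diamond: a -> b -> e, and a -> c -> d -> e
children = {"a": ["b", "c"], "b": ["e"], "c": ["d"], "d": ["e"], "e": []}
values = {"a": 1, "b": 2, "c": 2, "d": 3, "e": 5}  # consistent starting state
seen = []  # side effect: every value of e that an observer would see

def compute(node):
    if node == "b": values["b"] = values["a"] + 1
    if node == "c": values["c"] = values["a"] * 2
    if node == "d": values["d"] = values["c"] + 1
    if node == "e":
        values["e"] = values["b"] + values["d"]
        seen.append(values["e"])

def propagate(node):
    """Naive unconditional depth-first push propagation."""
    for child in children[node]:
        compute(child)
        propagate(child)

values["a"] = 10
propagate("a")

# e was computed twice: first as new-b (11) + stale-d (3) = 14 -- a glitch,
# a state that never corresponds to any single consistent value of a --
# and only then as 11 + 21 = 32.
assert seen == [14, 32]
```

The `14` is exactly the kind of "tearing" mentioned above: an output inconsistent with the state of the graph at any point in time.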


> You have to analyse the ast to plan out the optimal computation order.
This isn't necessary. Again referring to Jane Street's incremental, the key to creating a "glitchless" reactive graph is to do a topological sort on the nodes and calculate them in order. This can be accomplished by doing deref tracking; see my experiments.

In fact, I am surprised that this can be done at compile time in all cases, since even in the algebraic case a bind node can change its ordering depending on what sub-graph it constructs. Or you limit yourself to very static graphs, which IME aren't as useful
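The glitch-free alternative being discussed can also be sketched as a toy (again hypothetical Python, not Jane Street's actual implementation): give each node a height (its longest distance from the inputs) and drain dirty nodes from a min-heap ordered by height, so every node recomputes exactly once per update, and only after all of its inputs have settled.

```python
import heapq

# Same diamond as before: a -> b -> e, and a -> c -> d -> e
children = {"a": ["b", "c"], "b": ["e"], "c": ["d"], "d": ["e"], "e": []}
height = {"a": 0, "b": 1, "c": 1, "d": 2, "e": 3}  # longest path from inputs
values = {"a": 1}
seen = []  # side effect observing e

def compute(node):
    if node == "b": values["b"] = values["a"] + 1
    if node == "c": values["c"] = values["a"] * 2
    if node == "d": values["d"] = values["c"] + 1
    if node == "e":
        values["e"] = values["b"] + values["d"]
        seen.append(values["e"])

def stabilize(changed):
    """Process dirty nodes in topological (height) order via a min-heap."""
    heap, queued = [], set()
    for child in children[changed]:
        heapq.heappush(heap, (height[child], child)); queued.add(child)
    while heap:
        _, node = heapq.heappop(heap)
        compute(node)  # all inputs of `node` have already settled
        for child in children[node]:
            if child not in queued:
                heapq.heappush(heap, (height[child], child)); queued.add(child)

stabilize("a")           # initial pass: b, c, d, then e
values["a"] = 10
stabilize("a")

# e fires exactly once per update, always from consistent inputs: no 14.
assert seen == [5, 32]
```

Because `e` has the greatest height, the heap cannot deliver it before both `b` and `d` have been recomputed, which is what eliminates the glitched intermediate value.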

Dustin Getz23:12:33

the bind case is handled in both jane street and missionary


Jane Street AFAIK uses a run time heap to maintain the topological ordering of nodes


it doesn't rely on compile-time analysis of the graph

Dustin Getz23:12:32

the bind nodes are visible in the ast

Dustin Getz23:12:29

there is runtime state of course but the spanning set of possible dynamism is constrained by the bind nodes visible in the ast, iirc. i am not spun up on this, it's been a year


If Jane Street does that to optimize the performance of its engine, that's cool, but that doesn't make it necessary for planning out optimal computation order.

Dustin Getz23:12:23

like when a bind node changes the DAG then yes we resort but it's in a separate phase than the event propagation


I can't tell if you're agreeing or not 😅


I was saying I was surprised, but it sounds like you've found some tricks to optimize the calculation of the ordering using the AST. Like I said above, you can also do the topological ordering required to do glitchless computations using deref tracking.

Dustin Getz23:12:55

that i would like to know more about, are you following a paper? and you have bind?


and other tests

Dustin Getz23:12:35

i see diamond but i don't see bind


signal-fn is very close to bind


the standard API isn't algebraic, so there's not an explicit bind operator, it's meant more for the same feel as reagent, but more correct and some other features. signal can dynamically connect and disconnect other computations to the graph based on what is dereferenced, so it shares similarities to bind in that way. signal-fn is for creating functions that dynamically create signals


here's an example of dynamically connecting and disconnecting two nodes from the graph based on a third node:

Dustin Getz23:12:33

thanks yes that was what i was looking for

Dustin Getz00:12:46

good stuff, you've done a lot


it's slow AF. mostly in the heap and calculating each node allocates a lot


hence not really ready for production use

Dustin Getz01:12:20

ok i read the implementation, still working out the consequences of the approach, it's very dynamic (since you're tracking derefs literally anything could happen) so that means e.g. history sensitivity is out right?

Dustin Getz01:12:22

(just trying to get oriented, i don't think history sensitivity is very important in the types of ui problems you and i care about)


@U4YGF4NGM How can I change the code below so that "e" is printed more than once on a button press?

(def a (r/atom 1))
(def b (ratom/reaction (js/console.log "b" @a) (inc @a)))
(def c (ratom/reaction (js/console.log "c" @a) (* @a 2)))
(def d (ratom/reaction (js/console.log "d" @c) (inc @c)))
(def e (ratom/reaction (js/console.log "e" @b @d) (+ @b @d)))

(defn app []
  [:div
   [:button {:on-click #(swap! a inc)}
    "inc a"]
   [:span "e: " @e]])

Geoffrey Gaillard10:12:32

If I'm not mistaken, Reagent propagates reactions using a requestAnimationFrame-like loop, and your example runs in a single RAF frame. Introducing some latency in step d will make it produce a new value on the next frame, triggering e a second time. At the end of the first frame, e will be in an inconsistent state, and will recompute and reconcile on the next frame. Instead, d should be scheduled for the next frame, and therefore e should be too, according to toposort. By latency I mean something synchronous, like blocking the UI thread.


If it's synchronous, e cannot possibly be recomputed before the delay in d is completed. And by that time, both of the values will be available. In the code above, when the view is re-rendered, e will deref d - it will trigger the delay, and the rendering process will simply wait till the delay is done. The values are not propagated through the reaction graph - only the dirty flag is. The values are then requested and are recomputed. To be absolutely sure, I replaced d with this:

(def d (ratom/reaction (js/console.log "start d" @c)
                       (let [start (js/]
                         (loop [now start]
                           (when (< (- now start) 1000)
                             (recur (js/
                       (js/console.log "end d" @c)
                       (inc @c)))
And it still works in the same exact way, only with a delay of a second, where the whole UI becomes blocked.

Geoffrey Gaillard10:12:01

More precisely, reagent does not have a notion of readiness. So it can't know if a reaction is ready to be recomputed. It just recomputes ASAP, and therefore produces inconsistent states and recomputes until all reactions settle.

Geoffrey Gaillard11:12:03

My example was incorrect, thank you for pointing it out. I'll come back with a running example, probably tomorrow.

Geoffrey Gaillard11:12:23

Ah! In your example, reactions are queued in the same order as they are declared, because the example is minimal. I need to craft an example where they are not queued in the same order as they appear in the AST.


> It just recomputes ASAP
It does that only with reactions created with :auto-run true. Which by default is not the case.


I'd really appreciate such an example. So far, three people have said that Reagent can fail and yet nobody has given any MRE for that.

Geoffrey Gaillard12:12:55

I'll produce one and back my claim up ;)

👍 1

@U2FRKM4TW that example has a glitch after the first change AFAICT


nvm can't read

😄 1

so I do remember reproducing some glitches in reagent but I think it was very edge case. for the most part, reagent is pretty good at avoiding glitches


reagent uses a breadth-first algorithm for recalculating a graph, but since it computes and caches reactions on-demand as well, you end up with a consistent state


(like p-himik said above)

Geoffrey Gaillard17:12:42

Here is something to talk about:

(def a (r/atom 1))
(def b (r/reaction (prn "b" @a) @a))
(def c (r/reaction (prn "c" @a) @a))
(def d (r/reaction (prn "d" @b @c) (vector @b @c)))

(prn "result:" @d)
Will print:
"b" 1
"c" 1
"d" 1 1
"b" 1
"c" 1
"result:" [1 1]
Now, if you deref d in a component like so:
[:div (pr-str @d)]
b and c will only compute once.


It is all working as intended, as far as I can tell. Reactions that are used outside of reactive contexts don't cache their values. You can reproduce it with something as simple as:

(def x (r/reaction (js/console.log "x")))

Geoffrey Gaillard01:12:11

Reagent is not pretending to be FRP, which is still an open area of research, so yes, it is working as originally intended, and it's not a sound reactive model. It is useful, it is handy, easy to learn. I enjoy using it and I make money doing so, but it has an abstraction ceiling. It is not algebraic, no equational reasoning. Caching is a hack. Using = to dedupe reactions is also a hack.

The DAG is clearly visible in the code example, yet how I ask for the value changes how it is computed. Does asking for the value of a trigger side effects? It depends. If a is a continuous signal, then sampling a twice should never run it twice. Not because it is cached or = to its previous value, but because it is just defined at all points in time. Solving this is crazy hard and out of scope for Reagent, and it's OK. Reagent and its design are not to be blamed at all.

What I think is problematic is that Clojure developers consider Reagent to be doing the right thing, whatever that means, without questioning the model. It turns out it's a rabbit hole. I did that. Picking this model for a huge SPA was a terrible decision, and I'm not the only one who made that mistake. We should talk about it more as a community, otherwise it gives the impression that no one actually cares or, even worse, that using Clojure is mainly for toy projects and real-world usage is exceptional.

Geoffrey Gaillard01:12:25

I failed to produce an MRE for the diamond glitch in the time I was willing to allocate to it. So I failed to back my claim up. I therefore have to resort to the ad-populum argument (3 people here) and the famous "dude, trust me™" :harold:


Alas, I've seen enough clever people making trivial mistakes to doubt unbacked claims. Notice how I myself try to provide code when possible - have been on the other side of similar arguments before. :)

Geoffrey Gaillard08:12:25

The way you argue is greatly appreciated. I failed to produce a code example for the glitch and I'm OK with it. The above code example is enough to show it is not sound.


Depends on the definition of "sound", but I see what you mean, I think.


I bet my previous experiments were derefing at least one reaction outside of a reactive context, which was causing a glitch