2024-02-26
Channels
- # announcements (19)
- # babashka (27)
- # beginners (24)
- # calva (14)
- # clerk (5)
- # clj-commons (21)
- # clojure (51)
- # clojure-europe (14)
- # clojure-madison (1)
- # clojure-nl (1)
- # clojure-norway (9)
- # clojure-uk (4)
- # clojuredesign-podcast (32)
- # core-async (14)
- # datomic (7)
- # events (1)
- # honeysql (3)
- # hyperfiddle (14)
- # introduce-yourself (2)
- # kaocha (7)
- # malli (21)
- # off-topic (50)
- # portal (2)
- # reagent (41)
- # reitit (41)
- # releases (1)
- # scittle (6)
- # shadow-cljs (90)
- # tools-deps (10)
- # xtdb (1)
- # yamlscript (1)
is there a reason for reagent to enforce component state to be maps?
I have a CodeMirror instance I want to set as the state, without the indirection of wrapping it into a map, but this throws an assert error in the rc/set-state and rc/state calls
because reagent itself hooks into the react state, so can't use that one directly
working around this by using a separate JS property, i.e. local
but wondering if there's a better way
the whole atom+map indirection seems overkill when neither are truly needed
Heh, I've never even used that functionality, completely forgot about it.
> reagent itself hooks into the react state, so can't use that one directly
Not sure what you mean. Why not just use a plain atom, if you need it to not be part of the React lifecycle? Or a ratom, if it must be part of that.
> the whole atom+map indirection seems overkill when neither are truly needed
That's just the convention Reagent chose, and it's the only reasonable one that works when you need set-state to actually merge the state and not replace it altogether.
I wouldn't call it overkill - how much harm can one map do?
And an atom is needed in order to make the component react to state changes, since it's actually a ratom.
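For example (a minimal sketch with a hypothetical ticker component), set-state merges its argument into the existing state map, which presumes the state is a map:
(ns example.ticker
  (:require [reagent.core :as r]))

(def ticker
  (r/create-class
   {:get-initial-state
    (fn [_] {:ticks 0 :label "idle"})
    :component-did-mount
    (fn [this]
      ;; Merges into the existing map: :ticks survives untouched.
      (r/set-state this {:label "running"}))
    :render
    (fn [this]
      (let [{:keys [ticks label]} (r/state this)]
        [:div label " / " ticks]))}))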
ah in this case it's not used for reactions, but used with a create-class component with lifecycle methods; had performance issues at scale last time I naively put atoms everywhere
You can put a regular atom in a let right outside the call to create-class. Or react/createRef, if it's a proper ref. Or a JS object, whatever you prefer.
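Roughly like this (a sketch assuming CodeMirror 6 loaded from npm; the require shape depends on your build tooling):
(ns example.editor
  (:require ["@codemirror/view" :refer [EditorView]]
            [reagent.core :as r]))

(def code-editor
  (let [node (atom nil)   ;; DOM node, captured via the :ref callback
        cm   (atom nil)]  ;; plain atom: writes don't trigger re-renders
    (r/create-class
     {:component-did-mount
      (fn [_]
        (reset! cm (EditorView. #js {:parent @node})))
      :component-will-unmount
      (fn [_]
        (some-> ^js @cm .destroy))
      :render
      (fn [_]
        [:div.editor {:ref #(reset! node %)}])})))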
ah didn't consider wrapping the whole create-class in a let, thanks!
thought that was form-2 only
That's not tied to any specific kind of component, it just makes the functions close over some values that you chose. It can even be around a form-1 component, although those values will remain static after being initialized at ns load time.
yeah also trying to avoid long-lived lambdas, they tend to be a nuisance when live editing code 🙂
or if I do, I made a wrapper dev macro that wraps the call in another lambda, so the original function can still be redefined from CIDER
Sounds complicated and I have no clue what it all means. :D On the frontend I use the automatic hot reloading by shadow-cljs, so I don't have to think about anything long-term, apart from the state of course.
ah, well imagine you have references to functions inside that state, i.e. a map from keybindings to command functions; just reloading the code won't cause the state to automatically point to the new definitions
or registered event handlers, which is a more common case, e.g. mediaDevices.ondevicechange
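For instance (a minimal sketch; save-buffer! is a hypothetical command function):
(defn save-buffer! [] (println "v1"))

;; Captures the current function value; re-evaluating save-buffer!
;; won't change what the map points at.
(defonce keybindings (atom {"C-s" save-buffer!}))

;; Storing the var instead keeps the lookup late-bound - the var is
;; invokable and always calls the newest definition.
(defonce keybindings-late (atom {"C-s" #'save-buffer!}))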
I see. I very, very rarely encounter those scenarios myself. Possible factors for why:
- Such a function is going to be re-set on a hot code reload (because loading a particular ns sets the value)
- Not the function itself is referenced but some indirection (like e.g. event IDs in re-frame)
- React does it for me
> registered event handlers
An event listener can be not only a function but also an object that wraps a function. E.g. I do this to listen to keyboard events globally:
(defonce -listener
  ;; The listener is a stable object; only its handleEvent gets swapped out.
  (let [listener #js {:handleEvent (fn [_])}
        opts     #js {:capture true}]
    (js/addEventListener "keydown" listener opts)
    (js/addEventListener "keyup" listener opts)
    listener))

(defn stop-recording! []
  (set! -listener -handleEvent (fn [_])))

;; For hot reload.
(stop-recording!)
And later in the code I can (set! -listener -handleEvent (fn [evt] ...)). Of course, it's an ad-hoc solution. But the problem is also kinda ad-hoc. Definitely not worth it to switch from the automatic hot reloading to any manual approach, at least for me (although on the backend I use the "reloaded" workflow with a manual trigger).
ah yeah, trying to avoid having to add manual steps to iterations. I just do (js/addEventListener "keydown" (cb some-function)), then cider-eval-last-sexp over (defn some-function [e] ...) makes it live instantly; saving the file and waiting for the whole reload flow takes ~2-5 seconds on this project
where cb is that macro I made, which is identity in optimized builds and wraps the call in development to enable this flow
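A rough sketch of what such a macro could look like (an assumption, not the actual cb implementation; it checks the ClojureScript compiler's :optimizations option at macro-expansion time):
(ns dev.cb
  (:require [cljs.env :as env]))

(defmacro cb
  "Dev-only indirection: in optimized builds expands to f itself; in
  development wraps the call in a lambda so a redefined f is picked
  up on the next invocation."
  [f]
  (if (= :advanced (get-in @env/*compiler* [:options :optimizations]))
    f
    `(fn [& args#] (apply ~f args#))))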
yeah it's over 1 minute before the JVM optimizations kick in haha
ah I mean the cljs compiler running on the JVM; on a fresh start the initial compile is slow
the second one drops to 10 seconds, then it's 2-5 secs, HotSpot seems to be working great here
figwheel-main
It might not be HotSpot but rather some caching. Shadow-cljs has great approaches; timings are pretty much never that high.
yeah it's probably the extensive macro work I'm doing in most cljs namespaces, and reloads only doing partial compiles of the affected dependencies
The initial start is around 15 seconds, all subsequent hot reloads (including the time to make the whole web page re-rendered) are less than a second.
wouldn't that depend on project size?
say 50+ cljs files, sometimes going over 100kb, and extensive macro transformations (i.e. clj -> WGSL+JS with full data structures)
persistent maps are great, but also orders of magnitude slower than mutations when chaining transforms
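For example, a chain of assocs can be batched through a transient to get mutation speed while keeping a persistent result (a generic sketch, not the project's code):
(defn merge-kvs
  "Batch many assocs through one transient instead of allocating
  an intermediate persistent map per step."
  [m kvs]
  (persistent!
   (reduce (fn [acc [k v]] (assoc! acc k v))
           (transient m)
           kvs)))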
Only the very first compilation would depend on the project size a lot, when there's no cache at all. Any subsequent compiler server launch is much faster because most of the stuff is cached. Any subsequent compilation and reloading of a changed file is even faster because the server is already running and even more things are cached. The only thing that can make it slow is if you change some ns that's a direct or transitive dependency of most other ns'es - then they all have to be reloaded.
> say 50+ cljs files, sometimes going over 100kb
The timings I provided above are for a project around 10 times larger. Much more than that if I include all of the dependencies that shadow-cljs also has to compile.
> and extensive macro transformations (i.e. clj -> WGSL+JS with full data structures)
You should be able to cache those.
> persistent maps are great, but also orders of magnitude slower than mutations when chaining transforms
I don't know what it has to do with anything. Maybe you meant it as an explanation of why you're using WGSL+JS, but I just have zero knowledge in the area.
ah I mean most of the cljs code is driving a transpiler to generate WGSL shaders and the equivalent WebGPU bindings; probably could do some caching, right now it's re-evaluating the entire thing when an ns is recompiled
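A simple way to start caching that (a sketch; compile-shader* stands in for the hypothetical expensive clj -> WGSL transform):
;; defonce makes the cache survive ns reloads, so only forms that
;; actually changed get re-transpiled.
(defonce shader-cache (atom {}))

(defn compile-shader [form]
  (or (get @shader-cache form)
      (let [out (compile-shader* form)]
        (swap! shader-cache assoc form out)
        out)))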