This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-05-20
Channels
- # announcements (16)
- # babashka (104)
- # beginners (77)
- # bristol-clojurians (1)
- # calva (3)
- # chlorine-clover (50)
- # cider (19)
- # clojure (73)
- # clojure-australia (1)
- # clojure-europe (37)
- # clojure-france (3)
- # clojure-nl (3)
- # clojure-norway (13)
- # clojure-spec (21)
- # clojure-uk (79)
- # clojurescript (225)
- # conjure (102)
- # cursive (11)
- # datascript (1)
- # datomic (1)
- # defnpodcast (1)
- # events (3)
- # figwheel-main (2)
- # fulcro (49)
- # ghostwheel (10)
- # helix (1)
- # kaocha (17)
- # leiningen (10)
- # meander (1)
- # off-topic (26)
- # other-lisps (3)
- # pathom (5)
- # re-frame (40)
- # reagent (6)
- # reitit (33)
- # shadow-cljs (107)
- # testing (3)
- # tools-deps (68)
- # xtdb (16)
- # yada (3)
In the differences between CLJ and CLJS, it says "Vars are not reified at runtime." Can someone explain what that means?
in clj, you can use the syntax #'my-namespace/var-name
to get a var, or browse the vars available in a namespace using the namespace functions (https://clojure.org/reference/namespaces). with that var you can do a couple of things:
• examine its metadata (which has info like doc strings and its source location)
• return its symbol and namespace
• alter its value
• more?
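a small sketch of what "vars reified at runtime" buys you, in JVM Clojure (regular compiled ClojureScript can't do this at runtime, which is the point of the quoted sentence):

```clojure
(defn greet
  "Says hello."
  [name]
  (str "Hello, " name))

;; #'greet gives you the var object itself, not its value
(def the-var #'greet)

;; examine its metadata (doc string, name, source location, ...)
(:doc (meta the-var))   ;; => "Says hello."
(:name (meta the-var))  ;; => greet

;; alter its root value, e.g. to wrap the original fn
(alter-var-root #'greet
                (fn [f] (fn [name] (str (f name) "!"))))

(greet "world") ;; => "Hello, world!"
```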
is re-natal still the go-to for react native/cljs apps?
With or without http://Expo.io? For both you can also try shadow-cljs: • Expo e.g. repo: https://github.com/PEZ/rn-rf-shadow • Bare RN: https://github.com/thheller/reagent-react-native
seems as though it's not being actively maintained
@mitch249 https://github.com/vouch-opensource/krell was recently released, I would have a look at that if I were doing react native. No recommendations, just info 🙂
much appreciated! will take a look!
I'm trying to figure out what kind of speed you get from hash-maps vs js-objs vs defrecord. one thing I noticed is that defrecords seem to have same read speed as js-objects, but only if you use the dot-form. is there a way to get the cljs compiler to realize that it might as well use the dot-form when it gets a record? 🙂 I've tried this with shadow-cljs, both in watch mode and release with advanced optimizations
you can see the kind of code that is generated by (defrecord Foo [bar baz]):
cljs.user.Foo.prototype.cljs$core$ILookup$_lookup$arity$3 = (function (this__18488__auto__,k68,else__18489__auto__){
var self__ = this;
var this__18488__auto____$1 = this;
var G__72 = k68;
var G__72__$1 = (((G__72 instanceof cljs.core.Keyword))?G__72.fqn:null);
switch (G__72__$1) {
case "bar":
return self__.bar;
break;
case "baz":
return self__.baz;
break;
default:
return cljs.core.get.call(null,self__.__extmap,k68,else__18489__auto__);
}
});
aha, I see. good to know 🙂 I try to look at the generated code, but sometimes it's a bit hard to sort out the optimized code
yeah, reading it optimized is too hard. I used http://app.klipse.tech/ which doesn’t apply optimizations
right now I'm trying to figure out what kind of data would make most sense for my game project. I've come to feel that "standard" hashmaps are a bit too slow, so I'm considering defrecords or js objects
that code will get optimized a bit in terms of code size, but will probably always be a linear lookup of the keyword in the table of fields
I thought keyword lookup was constant time?
JS switch/case is linear https://stackoverflow.com/a/41109455/4379329
> Engines are free to optimise the evaluation if all of the cases are constant strings or numbers (and it’s fairly simple), so you can expect constant time complexity.
it’s definitely been the case where I just “assumed” the right thing was happening, so it would be nice to know more definitively
personally I would use mutable objects, for benefits both in terms of code size and deterministic runtime performance
the problem then, is that I've gotten some good use out of the immutable data when writing cpu behaviour
so then I'm wondering what would be most efficient, copying the mutable objects, or converting it to immutable, when using the data in the cpu
to be more concrete, the cpu calculates some turns ahead, which would be impractical if it used the mutable objects :)
there’s also https://github.com/mfikes/cljs-bean which you might see if it helps as a drop-in replacement for some of your maps
a bean is a map/vector-alike object that keeps all its data in a mutable JS type and just does copy-on-write
If you use a defrecord, with some type hints you can get the compiler to emit direct field access
@mfikes I tried adding ^Thing, but it doesn't seem to help... do you know what I'm missing?
Example:
cljs.user=> (set! *print-fn-bodies* true)
true
cljs.user=> (defrecord Foo [a b])
cljs.user/Foo
cljs.user=> (defn bar [^cljs.user/Foo f] (:a f))
#'cljs.user/bar
cljs.user=> bar
#object[cljs$user$bar "function cljs$user$bar(f){
return f.a;
}"]
I'm guessing writes are always slower by necessity, since they need to create new objects?
wrt: copying js objects vs creating a new record from a js object -- it seems that they take ~the same time. so maybe using mutable objects for "frame to frame" business, then converting to an immutable object for the cpu, works.
Yeah, you can definitely use mutable JavaScript at the bottom to make for some fast frame rates. See this https://blog.fikesfarm.com/posts/2014-07-14-nolens-notchs-minecraft-without-locals.html (especially in Safari if you happen to be on a Mac)
in case you're curious about what I do, here you can see it in action: http://simple-animalsv2.surge.sh/
I use very little mutation (mainly for direct user interaction, such as dragging a card). but I've had some issues with frame drops. so considering moving towards a more mutable solution
how do I set an exit code with cljs targeting node ? I tried
(.exit cljs.nodejs/process 1)
but that’s not working and I have googled/tried different things with no luck
@thomas.ormezzano Perhaps this Lumo code is portable: https://github.com/anmonteiro/lumo/blob/2b4bc09f768fc57164cebc99a90b95bdb1a26762/src/cljs/snapshot/lumo/core.cljs#L13
e.g.
(update-all obj :a inc, :b inc)
;; instead of
(-> obj (update :a inc) (update :b inc))
in order to get fewer allocations
maps and vectors already support that pattern, but not records - and unlikely to unless Clojure does
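a minimal sketch of what such an `update-all` could look like (hypothetical helper, built on plain `reduce`; note it still allocates the partitioned pairs, so as written it improves ergonomics more than allocations):

```clojure
(defn update-all
  "Applies each fn to the value at its key:
  (update-all m :a inc :b inc)."
  [m & key-fn-pairs]
  (reduce (fn [acc [k f]] (update acc k f))
          m
          (partition 2 key-fn-pairs)))

(update-all {:a 1 :b 2} :a inc :b inc)
;; => {:a 2, :b 3}
```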
better to think about how you can just write mutable stuff in a functional friendly way
in the Minecraft example one thing that was clear was that reallocating the buffer at each frame isn't that bad
React is another good example, and the implementations of the Clojure(Script) data structures are a good example
well, it's a bit hard to answer that. the slowest thing in my game atm is the cpu, and I'm not sure if copying the whole game state multiple times is fast enough
in my case, the game works a bit differently than most. essentially most of the game is "performing queries". e.g. "find all cards that would trigger on this event". those kinds of things are what I've found to be the slowest atm
and when the cpu runs, the gc often triggers, so I figured I'd look into: 1. improving read operation speed 2. allocating less often
also profiling will probably give you a hint what's triggering GC, the problem might not be what you think
I've used the profiling tool a lot to measure cpu time. but I haven't really understood how to measure what triggers gc's
it's a common problem that people just starting to work w/ Clojure aren't aware of the behavior of laziness (lazy sequences) and that leads to GC activity
unless your profiles specifically point out that keyword lookups are killing you - this sounds like a waste of time
but if the queries are seq'ing maps by filtering, mapping or whatever that might be your problem
right, with transducers the transformation happens inside, so you're not going to have the pointless allocation problem
but that also got me thinking about performance of js vectors vs cljs vectors, if there are performance problems when looping over them
anyways it sounds to me like you might have a considerable number of simple things to try first before heading into the weeds
one way to determine that is to talk about the size N of your app state and the time T to run your operations
only reason I'm thinking that is you're talking about a card game and this doesn't immediately sound like a game style that should have performance issues
people have tried component entity game engine style things in ClojureScript and been moderately successful
well, I agree with what you're saying, but I've had a hard time figuring out what the problem could be... I agree about your point about the app state, I don't think it's that big.
and to be clear, it's mostly when I want the cpu to go through multiple paths (as many as possible) that I notice that things take a long time. it's not too bad otherwise
I am not an RN internals expert, but have used it for many years now; something about how it works: on a production build it's way faster
thanks for the input. I have noticed the difference between dev and prod builds 🙂
when looking at the profiling data I feel that I've hammered away the worst "obvious" issues, and that it often comes down to read operations or similar
I don't think so - your multiple paths comment definitely makes it seems like there's probably just an algorithm problem here - that's where I would spend my time
might be I'm looking at it the wrong way, but I've spent some time trying to fix any "basic" problems : P
well, here's an example then:
(defn remove-all-temp
[db]
(update db :cards #(into {} (map (fn [[k v]] [k (dissoc v :temp)]) %))))
it's not something that is too crazy, but from what I can tell most of the stuff taking time now are a bunch of functions similar to this one
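for what it's worth, a sketch of the same transformation using `reduce-kv` into a transient map, which skips both the intermediate lazy seq and the per-entry `[k v]` vectors (the starred name is just to distinguish it from the original):

```clojure
(defn remove-all-temp*
  [db]
  (update db :cards
          (fn [cards]
            (persistent!
             ;; reduce-kv passes k and v separately, so no
             ;; pair vectors are allocated along the way
             (reduce-kv (fn [acc k v]
                          (assoc! acc k (dissoc v :temp)))
                        (transient {})
                        cards)))))

(remove-all-temp* {:cards {:c1 {:name "x" :temp 1}
                           :c2 {:name "y"}}})
;; => {:cards {:c1 {:name "x"}, :c2 {:name "y"}}}
```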
https://reactnative.dev/blog/2017/02/14/using-native-driver-for-animated …. this is a relatively old article but should get you started… otherwise if you’re doing anything remotely “intense” by forcing React to re-render on every frame, the results won’t be good
It can easily clobber your CPU if there’s any animation happening at the “same time” as you removing data, etc
I guess the most noteworthy thing is that I haven't been able to run :optimizations :advanced
-- but I thought that mostly impacted code size
@saikyun you can try :static-fns true
that should help a little, :simple
might help a lot
oh, I guess that's true, in that case I was using no optimizations in order to preserve function names
(Immediately Invoked Function Expressions) we generate these to preserve "everything is an expression"
hm, yeah, sorry about the 0.63ms. the same function seems to take 0.13ms when I do a release build (in which I run :simple)
finally a minor thing: for your update call above, you're allocating a vector for each step (which just gets tossed)
yeah, I haven't been able to figure out a better way if one is to write "normal" clojure
btw when looking into some of the profiling again, I'm finding many small, stupid choices I've made, so I think I can find some more places to improve the performance before going into the weeds as you said
e.g. I use this:
(defn elem?
  [elem coll]
  (boolean (some #(= elem %) coll)))
to look for things in vectors, but in many cases those vectors could just be sets instead
on that note, is it better to use #{:a :b} or #{"a" "b"}? from what I can tell :a allocates a new object, even when :a has been used before
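on the vectors-vs-sets point, a small comparison (names and sample data are made up): `some` with an equality predicate scans linearly, while `contains?` on a set is an effectively constant-time hash lookup:

```clojure
;; linear scan: walks the vector until a match is found
(defn elem-linear? [elem coll]
  (boolean (some #(= elem %) coll)))

(def suits-vec [:hearts :spades :clubs :diamonds])
(def suits-set #{:hearts :spades :clubs :diamonds})

(elem-linear? :clubs suits-vec) ;; => true (scans)
(contains? suits-set :clubs)    ;; => true (hash lookup)

;; a set is also callable, returning the element or nil
(suits-set :clubs)  ;; => :clubs
(suits-set :jokers) ;; => nil
```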
cool, now I know some places to fix first then. I'll come back with the transient -> map -> transient stuff later then :^)
good to know. I wasn't able to get three.js to work with :advanced. maybe I should try to fix that some day
externs inference should work for you - but I would probably hold off on that for now, one problem at a time
transduce would let you get the transient map and just assoc! into it, no intermediate stuff
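concretely, that could look like this (a sketch reusing the made-up card-map shape from earlier; note that `into` with a transducer already does the same transient/`assoc!` dance internally):

```clojure
(def cards {:c1 {:name "x" :temp 1}
            :c2 {:name "y" :temp 2}})

;; transduce straight into a transient map, persistent! at the end
(transduce
 (map (fn [[k v]] [k (dissoc v :temp)]))
 (completing
  (fn [acc [k v]] (assoc! acc k v))
  persistent!)
 (transient {})
 cards)
;; => {:c1 {:name "x"}, :c2 {:name "y"}}

;; equivalent, and shorter: `into` with a transducer
(into {} (map (fn [[k v]] [k (dissoc v :temp)])) cards)
```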
sorry, not sure I follow. do you have a link that might help? ^^; I probably need a break
btw, one last question. is there an alternative to update-in that doesn't need to allocate a vector? it feels so unnecessary. for get-in I often do (-> a :x :y :z)
there isn't - it's a long-outstanding enhancement to include smaller data literals in the constant optimizations
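one workaround, analogous to the `(-> a :x :y :z)` trick for get-in: thread nested `update` calls, which touches the same keys as `update-in` without allocating the path vector (sample data is made up):

```clojure
(def db {:player {:stats {:hp 10}}})

;; allocates the path vector [:player :stats :hp]
(update-in db [:player :stats :hp] inc)

;; same result, no path vector: `update` applies its fn with the
;; extra args, so each `update` recurses one level deeper
(update db :player update :stats update :hp inc)
;; both => {:player {:stats {:hp 11}}}
```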
Question: I'm a bit confused about :npm-deps, like is it possible for it to fail to infer externs and foreign-libs for some given npm dependency?
Or is it only that it can't make non-Closure-compliant libs compliant, thus all :npm-deps will not be optimized by it?
@didibus :npm-deps is primarily about declaring Node dependencies, other flags control whether those dependencies pass through Closure
I thought :npm-deps would pull down the deps from npm, and then declare them all as :foreign-libs with the correct externs file generated for them. Which would mean they'd never pass through Closure (I thought). But I thought the difference from me manually doing npm install with :foreign-libs was that :npm-deps would configure the foreign lib automatically and generate the externs file.
Say I don't care if my npm deps don't get optimized, I just want them bundled as is with my optimized build. But I just don't want to have to configure the foreign-libs or generate the externs file myself for them?
I see. So which option is used to perform externs inference? Only target bundle does it?
:bundle doesn't really involve anything special, just composition of existing features
the passing through Closure part is the problematic bit (almost impossible to know if it will work)
lein with-profile -dev run -m cljs.main -co script/min.edn -c
I have a clojurescript build that runs that ^ where min.edn contains:
on this note alone, I assume you are already caching your m2, and on top of that you can use the LEIN_FAST_TRAMPOLINE env var plus the trampoline task to avoid recalculating the classpath when your deps don't change
it does if you persist your m2 and the dotfile that lein uses to cache cp
in fact, thinking out loud, once you've cached classpath, it seems like cljs could get run without lein for the task you have there
:thumbsup:
I explicitly calc'ed the classpath then ran java -cp $CP clojure.main -m cljs.main ....
so clearly lein isn't a bottleneck here, I hope others here have suggestions for getting better perf from cljs