#clojurescript
2020-05-20
Val Baca04:05:05

In the differences between CLJ and CLJS, it says "Vars are not reified at runtime." Can someone explain what that means?

lilactown05:05:11

In clojure, vars are values that you can pass around

phronmophobic05:05:40

in clj, you can use the syntax #'my-namespace/var-name to get a var, or browse the vars available in a namespace using the namespace functions (https://clojure.org/reference/namespaces). with that var you can do a couple of things:
• examine its metadata (which has info like doc strings and its source location)
• return its symbol and namespace
• alter its value
• more?
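e.g. a small Clojure sketch (placeholder names, not from the thread):

(defn greet [] "hi")                      ; def/defn creates the var #'user/greet
(meta #'greet)                            ; metadata: :name, :ns, :file, :line, :arglists, ...
(:doc (meta #'clojure.core/map))          ; doc strings live in the var's metadata
(alter-var-root #'greet (fn [_] (fn [] "hello")))  ; alter the var's value
(greet)                                   ; => "hello"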

lilactown05:05:49

They are automatically created when you def something

lilactown05:05:24

In CLJS, there’s no value that represents a var by default

lilactown05:05:43

You can create them but they’re expensive and mostly for interop purposes

Mitchell Carroll06:05:34

is re-natal still the go-to for react native/cljs apps?

Michaël Salihi08:05:31

With or without http://Expo.io ? For both you can also try Shadow-CLJS: • Expo repo eg. repo: https://github.com/PEZ/rn-rf-shadow • Bare RN: https://github.com/thheller/reagent-react-native

orestis10:05:18

There’s also Krell which is quite recent, created by David Nolen

Mitchell Carroll06:05:48

seems as though it's not being actively maintained

ingesol06:05:28

@mitch249 https://github.com/vouch-opensource/krell was recently released, I would have a look at that if I were doing react native. No recommendations, just info 🙂

Mitchell Carroll07:05:39

much appreciated! will take a look!

Saikyun16:05:16

I'm trying to figure out what kind of speed you get from hash-maps vs js-objs vs defrecord. one thing I noticed is that defrecords seem to have the same read speed as js-objects, but only if you use the dot-form. is there a way to get the cljs compiler to realize that it might as well use the dot-form when it gets a record? 🙂 I've tried this with shadow-cljs, both in watch mode and release with advanced optimizations
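(one way to measure this is cljs.core's simple-benchmark; a rough sketch at the REPL, names made up:)

(defrecord Thing [bar])
(def rec (->Thing 1))
(def m {:bar 1})
(def obj #js {:bar 1})
(simple-benchmark [] (:bar rec) 1000000)   ; keyword lookup on a record
(simple-benchmark [] (.-bar rec) 1000000)  ; direct field access (dot-form)
(simple-benchmark [] (:bar m) 1000000)     ; persistent map lookup
(simple-benchmark [] (.-bar obj) 1000000)  ; plain JS object property access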

lilactown16:05:35

that would be a good optimization

lilactown16:05:14

the keyword would need to be known via code analysis

lilactown16:05:52

you can see the kind of code that is generated by (defrecord Foo [bar baz]):

cljs.user.Foo.prototype.cljs$core$ILookup$_lookup$arity$3 = (function (this__18488__auto__,k68,else__18489__auto__){
  var self__ = this;
  var this__18488__auto____$1 = this;
  var G__72 = k68;
  var G__72__$1 = (((G__72 instanceof cljs.core.Keyword))?G__72.fqn:null);
  switch (G__72__$1) {
    case "bar":
      return self__.bar;

      break;
    case "baz":
      return self__.baz;

      break;
    default:
      return cljs.core.get.call(null,self__.__extmap,k68,else__18489__auto__);

                   }
});

Saikyun17:05:35

aha, I see. good to know 🙂 I try to look at the generated code, but sometimes it's a bit hard to sort out the optimized code

lilactown17:05:04

yeah, reading it optimized is too hard. I used http://app.klipse.tech/ which doesn’t apply optimizations

Saikyun17:05:29

right now I'm trying to figure out what kind of data would make most sense for my game project. I've come to feel that "standard" hashmaps are a bit too slow, so I'm considering defrecords or js objects

Saikyun17:05:45

thanks for the link, have been looking for something like it

lilactown17:05:50

that code will get optimized a bit in terms of code size, but will probably always be a linear lookup of the keyword in the table of fields

phronmophobic17:05:06

I thought keyword lookup was constant time?

lilactown17:05:03

I was just going based off of the code generated by the defrecord

phronmophobic17:05:26

> Engines are free to optimise the evaluation if all of the cases are constant strings or numbers (and it’s fairly simple), so you can expect constant time complexity.

lilactown17:05:12

yep maybe that’s the case in v8/spidermonkey, and I misspoke

phronmophobic17:05:34

it’s definitely been the case where I just “assumed” the right thing was happening, so it would be nice to know more definitively

lilactown17:05:09

personally I would use mutable objects, for benefits both in terms of code size and deterministic runtime performance

Saikyun17:05:21

yeah, I'm leaning towards that

Saikyun17:05:45

the problem then is that the immutable data has been really useful when writing the cpu behaviour

Saikyun17:05:26

so then I'm wondering what would be most efficient when using the data in the cpu: copying the mutable objects, or converting them to immutable data

Saikyun17:05:50

to be more concrete, the cpu calculates some turns ahead, which would be impractical if it used the mutable objects :)

lilactown17:05:58

yeah, that makes sense

lilactown17:05:12

that’s tricky

lilactown17:05:37

you can consider optimizing certain access paths in your application

lilactown17:05:29

there’s also https://github.com/mfikes/cljs-bean, which you could try as a drop-in replacement for some of your maps to see if it helps

lilactown17:05:33

a bean is a map/vector-alike object that keeps all its data in a mutable JS type and just does copy-on-write
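e.g. a minimal sketch (assuming cljs-bean.core is on the classpath):

(require '[cljs-bean.core :refer [bean ->js]])

(def b (bean #js {:x 1 :y 2}))  ; wrap a JS object in a map-like bean
(:x b)                          ; => 1, reads go straight to the underlying JS object
(assoc b :x 10)                 ; copy-on-write: returns a new bean, original untouched
(->js b)                        ; get the underlying JS object back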

mfikes17:05:59

If you use a defrecord, with some type hints you can get the compiler to emit direct field access

Saikyun17:05:53

@mfikes I tried adding ^Thing, but it doesn't seem to help... do you know what I'm missing?

mfikes17:05:05

You need to qualify it

mfikes17:05:22

(full name like ^my.ns.Thing)

🧠 16
mfikes17:05:10

Oops. @saikyun that should be ^my.ns/Thing

mfikes17:05:47

Example:

cljs.user=> (set! *print-fn-bodies* true)
true
cljs.user=> (defrecord Foo [a b])
cljs.user/Foo
cljs.user=> (defn bar [^cljs.user/Foo f] (:a f))
#'cljs.user/bar
cljs.user=> bar
#object[cljs$user$bar "function cljs$user$bar(f){
return f.a;
}"]

Saikyun17:05:25

aha, you're right. that did it 🙂 was a bit perplexed at first when I tried my.ns.Thing

Saikyun17:05:47

huh, print-fn-bodies was new to me. thanks for that

💯 4
Saikyun17:05:15

I'm guessing writes are always slower by necessity, since they need to create new objects?

Saikyun17:05:24

wrt: copying js objects vs creating a new record from a js object -- it seems that they take ~the same time. so maybe using mutable objects for "frame to frame" business, then converting to an immutable object for the cpu, would work.

mfikes17:05:40

Yeah, you can definitely use mutable JavaScript at the bottom to make for some fast frame rates. See this https://blog.fikesfarm.com/posts/2014-07-14-nolens-notchs-minecraft-without-locals.html (especially in Safari if you happen to be on a Mac)

mfikes17:05:25

^ This was an experiment to eliminate some of the rampant mutation

Saikyun17:05:58

how come it runs faster on safari? 🙂

Saikyun17:05:42

in case you're curious about what I do, here you can see it in action: http://simple-animalsv2.surge.sh/

Saikyun17:05:20

I use very little mutation (mainly for direct user interaction, such as dragging a card). but I've had some issues with frame drops. so considering moving towards a more mutable solution

mfikes17:05:34

I don't know why Safari is faster. Just an observation. /shrug

Saikyun17:05:25

interesting solution with the recur "setup" 🙂

tzzh17:05:34

how do I set an exit code with cljs targeting node? I tried

(.exit cljs.nodejs/process 1)
but that’s not working and I have googled/tried different things with no luck
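(For reference, the usual way to do this when targeting Node is to go through js/process directly:)

(.exit js/process 1)               ; exit immediately with code 1
;; or, to let pending I/O flush before exiting:
(set! (.-exitCode js/process) 1)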

tzzh17:05:12

ah yes thank you very much that works :thumbsup:

mfikes17:05:28

Looks very similar on the surface. Dunno why the difference. Hrm.

Saikyun17:05:23

huh, it seems you can use (set! (.-a record-obj) 10). that's... interesting

Saikyun17:05:36

shouldn't that mean that one can create a record object directly from a js object?

Saikyun17:05:55

i.e. creating a record by "consuming" a js object

dnolen17:05:08

there's no way to prevent mutation, though the compiler could warn about that

dnolen17:05:22

if you set! a record you should be ok with bad things happening

dnolen17:05:59

there's no support for getting records from a JS object

Saikyun17:05:53

I see, I see

Saikyun17:05:13

I was thinking about whether one could batch changes to immutable objects somehow

Saikyun17:05:44

e.g.

(update-all obj :a inc, :b inc)
;; instead of
(-> obj (update :a inc) (update :b inc))
in order to get fewer allocations

dnolen17:05:30

maps and vectors already support that pattern, but not records - and unlikely to unless Clojure does

Saikyun17:05:53

how do you mean that maps support it?

dnolen17:05:16

transients

Saikyun17:05:36

can you go from a hm to a transient? I thought it was a one-way operation

dnolen17:05:54

no both directions
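e.g. (sketch of the round trip):

(-> {:a 1 :b 1}
    transient        ; persistent map -> transient
    (assoc! :a 2)    ; batch the changes without intermediate maps
    (assoc! :b 2)
    persistent!)     ; transient -> persistent map again
;; => {:a 2, :b 2}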

Saikyun17:05:02

oh really? didn't know that

Saikyun17:05:35

is map -> transient slow?

Saikyun17:05:16

huh, thanks for the info

Saikyun17:05:19

that's very interesting

dnolen17:05:19

but I really wouldn't spend anymore cycles here if throughput is your primary concern

dnolen17:05:44

better to think about how you can just write mutable stuff in a functional friendly way

dnolen17:05:25

in the Minecraft example one thing that was clear was that reallocating the buffer at each frame isn't that bad

dnolen17:05:59

React is another good example, and the implementations of the Clojure(Script) data structures are a good example

Saikyun17:05:05

well, it's a bit hard to answer that. the slowest thing in my game atm is the cpu, and I'm not sure if copying the whole game state multiple times is fast enough

dnolen17:05:38

components are generally not going to be the bottleneck

dnolen17:05:49

unless you have millions of game objects

dnolen17:05:52

in the state

Saikyun17:05:56

in my case, the game works a bit differently than most. essentially most of the game is "performing queries". e.g. "find all cards that would trigger on this event". those kinds of things are what I've found to be the slowest atm

dnolen17:05:38

sounds like you just need an index?

Saikyun17:05:38

and when the cpu runs, the gc often triggers, so I figured I'd look into: 1. improving read operation speed 2. allocating less often

Saikyun17:05:10

I've started to index some things, and you're right, those are often helpful 🙂

dnolen17:05:02

also profiling will probably give you a hint about what's triggering GC, the problem might not be what you think

Saikyun17:05:44

I've used the profiling tool a lot to measure cpu time. but I haven't really understood how to measure what triggers gc's

dnolen17:05:57

it's a common problem that people just starting to work w/ Clojure aren't aware of the behavior of laziness (lazy sequences) and that leads to GC activity

dnolen17:05:13

1. doesn't seem related to the GC problem, I would not look here at all

dnolen17:05:32

unless your profiles specifically point out that keyword lookups are killing you - this sounds like a waste of time

Saikyun17:05:41

yeah, sorry for being unclear. 1. was meant to improve the "queries"

dnolen18:05:25

but if the queries are seq'ing maps by filtering, mapping or whatever that might be your problem

Saikyun18:05:03

yeah, I've tried using transducers some, and they seem to help.

Saikyun18:05:16

rather than "standard" filter / map

dnolen18:05:34

right, transducers happen inside, so you're not going to have the pointless allocation problem
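e.g. the difference is just where the map runs (sketch, with a hypothetical cards map):

;; lazy-seq version: (map f cards) allocates an intermediate sequence first
(into {} (map (fn [[k v]] [k (dissoc v :temp)]) cards))
;; transducer version: the transform runs inside into's reduce, no intermediate seq
(into {} (map (fn [[k v]] [k (dissoc v :temp)])) cards)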

Saikyun18:05:52

but that also got me thinking about the performance of js vectors vs cljs vectors, and whether there are performance problems when looping over them

dnolen18:05:58

anyways it sounds to me like you might have a considerable number of simple things to try first before heading into the weeds

dnolen18:05:25

one way to determine that is to talk about the size N of your app state and the time T to run your operations

dnolen18:05:34

if N is small and T is big something very basic is wrong

dnolen18:05:39

and you don't need to worry about other stuff

dnolen18:05:38

only reason I'm thinking that is you're talking about a card game and this doesn't immediately sound like a game style that should have performance issues

dnolen18:05:10

people have tried component entity game engine style things in ClojureScript and been moderately successful

Saikyun18:05:28

well, I agree with what you're saying, but I've had a hard time figuring out what the problem could be... I agree about your point about the app state, I don't think it's that big.

raspasov18:05:05

@saikyun is your game browser, browser mobile, or ReactNative mobile?

Saikyun18:05:18

browser + reactnative

raspasov18:05:36

Are your perf problems on browser or RN or both?

Saikyun18:05:48

and to be clear, it's mostly when I want the cpu to go through multiple paths (as many as possible) that I notice that things take a long time. it's not too bad otherwise

raspasov18:05:16

For RN, it’s very important to do a production build, perf goes way up

raspasov18:05:55

I am not an RN internals expert, but have used it for many years now; something about how it works makes a production build way faster

Saikyun18:05:02

thanks for the input. I have noticed the difference between dev and prod builds 🙂

✌️ 4
Saikyun18:05:58

when looking at the profiling data I feel that I've hammered away the worst "obvious" issues, and that it often comes down to read operations or similar

dnolen18:05:11

I don't think so - your multiple paths comment definitely makes it seems like there's probably just an algorithm problem here - that's where I would spend my time

dnolen18:05:17

I wouldn't look anywhere else first

Saikyun18:05:19

might be I'm looking at it the wrong way, but I've spent some time trying to fix any "basic" problems : P

dnolen18:05:51

a good exercise would be to just isolate this query code and share it

dnolen18:05:03

there are a lot of people here and in #clojure who could point out problems

Saikyun18:05:08

well, here's an example then:

(defn remove-all-temp
  [db]
  (update db :cards #(into {} (map (fn [[k v]] [k (dissoc v :temp)]) %))))

Saikyun18:05:59

it's not something that is too crazy, but from what I can tell most of the stuff taking time now are a bunch of functions similar to this one

dnolen18:05:20

how many cards?

dnolen18:05:41

yeah no way

dnolen18:05:53

that's not going to take much time

Saikyun18:05:56

this takes 0.63ms

Saikyun18:05:30

so I have ~25 of these functions per frame in order to keep 60fps :'D

raspasov18:05:54

are you trying to animate this way?

Saikyun18:05:14

nah, animations are done using three.js / mutable objects

Saikyun18:05:25

so this is purely for data manipulation

raspasov18:05:53

OK, for RN animations are you using nativeDriver?

Saikyun18:05:20

I don't know what nativeDriver is... ^^; so no

dnolen18:05:25

0.63 ms for 60-100 items is still very unlikely IMO

Saikyun18:05:59

I don't know what to tell you 😞

raspasov18:05:12

https://reactnative.dev/blog/2017/02/14/using-native-driver-for-animated …. this is a relatively old article but should get you started… otherwise if you’re doing anything remotely “intense” by forcing React to re-render on every frame, the results won’t be good

raspasov18:05:36

(that is for React Native specifically)

raspasov18:05:21

It can easily clobber your CPU if there’s any animation happening at the “same time” as you removing data, etc

Saikyun18:05:25

I don't think I trigger many react re-renders at all, since I'm using a gl window 🙂

dnolen18:05:31

@saikyun this is via the Chrome debugger thing in a simulator?

raspasov18:05:37

Ah… ok… not sure then

Saikyun18:05:46

either way, thanks for the link @raspasov, will be useful when making apps :))

👍 4
Saikyun18:05:01

@dnolen chrome debugger, but no simulator

Saikyun18:05:36

I guess the most noteworthy thing is that I haven't been able to run :optimizations :advanced -- but I thought that mostly impacted code size

dnolen18:05:33

@saikyun you can try :static-fns true that should help a little, :simple might help a lot

Saikyun18:05:46

oh, I guess that's true, in that case I was using no optimizations in order to preserve function names

dnolen18:05:58

it removes IIFEs which also create GC pressure

dnolen18:05:03

(Immediately Invoked Function Expressions) we generate these to preserve "everything is an expression"

Saikyun18:05:08

hm, yeah, sorry about the 0.63ms. the same function seems to take 0.13ms when I do release (in which I run :simple)

Saikyun18:05:12

can try static-fns

Saikyun18:05:02

not sure I understand the IIFE-part

dnolen18:05:47

:simple does what I was talking about

dnolen18:05:01

@saikyun :optimize-constants true you'll want as well for that :simple build
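roughly, as a plain compiler-options map (a sketch; adapt to your shadow-cljs/cljs.main config):

{:optimizations      :simple
 :static-fns         true     ; direct calls to statically known fns
 :optimize-constants true}    ; lookup table for keyword/symbol literals (on by default under :advanced)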

dnolen18:05:38

finally, a minor thing for your update call above: you're allocating a vector for each step (which just gets tossed)

Saikyun18:05:40

okay, thanks. I'll try that too

Saikyun18:05:19

yeah, I haven't been able to figure out a better way if one is to write "normal" clojure

Saikyun18:05:37

it bothers me every time I want to do something to all vals in a map 😄

Saikyun18:05:24

btw when looking into some of the profiling again, I'm finding many small, stupid choices I've made, so I think I can find some more places to improve the performance before going into the weeds as you said

Saikyun18:05:55

e.g. I use this:

(defn elem?
  [elem coll]
  (boolean (some #(= elem %) coll)))
to look for things in vectors, but in many cases those vectors could just be sets instead
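e.g. with a set the membership test is a direct lookup (sketch):

(def tags #{:a :b :c})
(contains? tags :b)  ; => true, hash lookup instead of a linear scan
(tags :b)            ; sets are also functions of their elements => :b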

dnolen18:05:33

things like some seq their args, so that's just a whole bunch of allocations

Saikyun18:05:17

on that note, is it better to use #{:a :b} or #{"a" "b"}? from what I can tell :a allocates a new object, even when :a has been used before

Saikyun18:05:32

huh, didn't know some did that

dnolen18:05:35

:optimize-constants true

dnolen18:05:44

it creates a lookup table for literal keywords

Saikyun18:05:49

aha, awesome, thanks

Saikyun18:05:18

cool, now I know some places to fix first then. I'll come back with the transient -> map -> transient stuff later then :^)

dnolen18:05:24

this stuff is defaulted for :advanced, but :simple you need to do this yourself

Saikyun18:05:07

good to know. I wasn't able to get three.js to work with :advanced. maybe I should try to fix that some day

dnolen18:05:43

externs inference should work for you - but I would probably hold off on that for now, one problem at a time

Saikyun18:05:07

oh, but btw, wrt the update allocating vectors, how would you solve that?

dnolen18:05:17

use transduce directly

Saikyun18:05:37

how does transduce help in that situation?

Saikyun18:05:43

when reducing over kvs

dnolen18:05:45

into uses transduce

dnolen18:05:00

but less flexible because you can't get to the reducing value

dnolen18:05:23

transduce would let you get the transient map and just assoc! into, no intermediate stuff
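e.g. a sketch of that idea applied to remove-all-temp above (here using reduce-kv with a transient map rather than transduce, but the same no-intermediate-vectors point):

(defn remove-all-temp
  [db]
  (update db :cards
          (fn [cards]
            (persistent!
             (reduce-kv (fn [m k v] (assoc! m k (dissoc v :temp)))
                        (transient {})
                        cards)))))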

Saikyun18:05:08

sorry, not sure I follow. do you have a link that might help? ^^; I probably need a break

dnolen18:05:35

honestly I would just read the into source, but take break first 😉

Saikyun18:05:37

I've only used transducers with r/map / r/filter etc

Saikyun18:05:45

okay, I'll take a look then 🙂

dnolen18:05:45

if you have questions about it after looking just ask

Saikyun18:05:04

thanks for all the other help as well 🙂

Saikyun18:05:32

btw, one last question. is there an alternative to update-in that doesn't need to allocate a vector? it feels so unnecessary. for get-in I often do (-> a :x :y :z)

dnolen18:05:41

there isn't - it's a long-outstanding enhancement to include smaller data literals in the constant optimizations

👀 4
dnolen18:05:53

Clojure does this

didibus19:05:30

Question : I'm a bit confused about :npm-deps, like is it possible for it to fail to infer externs and foreign-libs for some given npm dependency?

didibus19:05:34

Or is it only that it can't make non-Closure-compliant libs compliant, and thus those :npm-deps just won't be optimized by it?

dnolen20:05:50

@didibus :npm-deps is primarily about declaring Node dependencies, other flags control whether those dependencies pass through Closure

didibus23:05:27

I thought :npm-deps would pull down the deps from npm, and then declare them all as :foreign-libs with the correct externs file generated for them. Which would mean they'd never pass through Closure (I thought). And that the difference from me manually doing npm install with :foreign-libs was that :npm-deps was going to configure the foreign lib automatically and generate the externs file.

didibus23:05:34

Is that a wrong impression?

didibus23:05:11

And like, if it is, are there a combination of flags that can get me that behavior?

didibus23:05:22

Say I don't care if my npm deps don't get optimized, I just want them bundled as is with my optimized build. But I just don't want to have to configure the foreign-libs or generate the externs file myself for them?

dnolen14:05:30

yeah that's not how it works

dnolen14:05:40

:npm-deps just declares the deps

didibus17:05:43

I see. So which option is used to perform externs inference? Only target bundle does it?

didibus17:05:30

Ok, nevermind, I can probably Google that

didibus17:05:34

:infer-externs

dnolen17:05:09

:bundle doesn't really involve anything special, just composition of existing features

dnolen20:05:01

the passing through Closure part is the problematic bit (almost impossible to know if it will work)

dnolen20:05:16

you could use :npm-deps + :bundle target and have no issue

ghadi20:05:52

lein with-profile -dev run -m cljs.main -co script/min.edn -c
I have a clojurescript build that runs that ^ where min.edn contains:

ghadi20:05:02

the build takes about 65 seconds.

ghadi20:05:05

cljs 1.10.439

ghadi20:05:20

any obvious ways to speed this up? It's in a CI context

ghadi20:05:33

besides lein being dog slow

noisesmith20:05:41

on this note alone, I assume you are already caching your m2, and on top of that you can use the LEIN_FAST_TRAMPOLINE env var plus the trampoline task to avoid recalculating classpath when your deps don't change

ghadi20:05:43

it's the first time it gets invoked in a CI build

ghadi20:05:51

does that still help?

noisesmith20:05:23

it does if you persist your m2 and the dotfile that lein uses to cache cp

noisesmith20:05:09

in fact, thinking out loud, once you've cached classpath, it seems like cljs could get run without lein for the task you have there

ghadi20:05:37

it doesn't seem to make a huge difference, time dominated by cljs

ghadi20:05:07

I explicitly calc'ed the classpath then ran java -cp $CP clojure.main -m cljs.main ....

noisesmith20:05:06

so clearly lein isn't a bottleneck here, I hope others here have suggestions for getting better perf from cljs

ghadi20:05:41

if I add :verbose true to the flags, I see that most of the time spent is: Applying optimizations :simple to 563 sources

dnolen22:05:38

@ghadi you might need to tweak memory setting, Closure Compiler is pretty memory hungry
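e.g. with Leiningen, raising the heap for that build JVM in project.clj (hypothetical size, tune per project):

:jvm-opts ["-Xmx4g"]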