#cljs-dev
2017-10-24
rauh05:10:08

@mfikes I'm curious: Do you have some kind of benchmarking setup to test all the engines together? Do you have that in a repository?

slipset06:10:47

script/benchmark runs them, and there’s a corresponding cljs file containing the actual benchmarks

mfikes11:10:39

Right @rauh, and I simply have the engines set up as per https://clojurescript.org/community/running-tests

mfikes11:10:27

Tip: I usually temporarily delete all benchmarks other than the ones I'm interested in, to cut down on extra cruft in the output

rauh11:10:33

@mfikes I see, I haven't run it in a while. I didn't realize it'd run all engines and format the output.

rauh12:10:01

Btw, I agree with your select-keys benchmarks. I tried a few other things, but nothing stood out

mfikes12:10:12

Yep, it's about the only way I get a sense of trust that perf gains actually work. (As we all know, you can try so many approaches that actually don't really pan out.)

mfikes12:10:51

In https://dev.clojure.org/jira/browse/CLJS-2383, I really expected that the stuff from CLJ-1789 might be better, but the perf tests don't lie, and evidently a simple change to use keyword-identical? is sufficient to get the perf gain.
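For context, a hypothetical sketch of the idea behind that change (not the actual core implementation): comparing keywords by identity skips the general equality dispatch that `=` goes through. `keyword-identical?` is ClojureScript-specific; this portable sketch substitutes `identical?`, which behaves the same for interned keywords, and uses a made-up helper name `kw-member?`.

```clojure
;; Stand-in for cljs.core/keyword-identical? so the sketch also runs
;; on the JVM, where keywords are interned and identical? suffices.
(def kw-identical? identical?)

;; Check whether keyword k appears in the seq ks using only
;; identity comparisons, avoiding the full equality path.
(defn kw-member? [ks k]
  (boolean (some #(kw-identical? % k) ks)))

(kw-member? [:a :b :c] :b) ;; => true
(kw-member? [:a :b :c] :d) ;; => false
```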

rauh12:10:00

Yeah, I expected lookup-sentinel to also be faster. I got better numbers at first, but once the JIT of the JS engines kicks in it's just a toss-up. Also, I sometimes got better performance with reduce, but then slower perf in other browsers... Tough call.
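For readers unfamiliar with the idiom: a lookup sentinel is a unique object passed as the not-found value to `get`, so that "key absent" can be distinguished from "key present with a nil value" by an identity check. A hedged sketch of how it could be combined with the reduce approach mentioned above (the name `select-keys'` is hypothetical, not the core implementation):

```clojure
;; A unique object no map value can be identical to.
(def ^:private sentinel (Object.))

;; Sentinel-based select-keys sketch: keys that are present map to
;; their values (including nil); absent keys are skipped.
(defn select-keys' [m ks]
  (reduce (fn [acc k]
            (let [v (get m k sentinel)]
              (if (identical? v sentinel)
                acc
                (assoc acc k v))))
          {}
          ks))

(select-keys' {:a 1 :b nil} [:a :b :c]) ;; => {:a 1, :b nil}
```

Note that plain `(get m k)` followed by a nil check would wrongly drop `:b` here; the sentinel is what preserves present-but-nil entries.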

shaunlebron18:10:39

can :global-exports be called as functions?

(require '[cljsjs.codemirror :as codemirror])
(codemirror ...) ;; <-- use of undeclared var codemirror

shaunlebron19:10:23

it was a build tool problem, works as expected now, thanks