
@mfikes I'm curious: Do you have some kind of benchmarking setup to test all the engines together? Do you have that in a repository?


script/benchmark runs them, and there’s a corresponding cljs file containing the actual benchmarks


Right @rauh, and I simply have the engines set up as per


Tip: I usually temporarily delete all of the benchmarks except the ones I'm interested in, to cut down on the extra cruft to look at


@mfikes I see, I haven't run it in a while. I didn't realize it'd run all engines and format the output.


Btw, I agree with your select-keys benchmarks. I tried a few other things, but nothing stood out


Yep, it's about the only way I get a sense of trust that perf gains actually work. (As we all know, you can try so many approaches that actually don't really pan out.)


Initially, I really expected that the stuff from CLJ-1789 might be better, but the perf tests don't lie, and evidently a simple change to use keyword-identical? is sufficient to get the perf gain.
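To make the idea concrete, here's a minimal sketch (not the actual core patch) of the kind of change being discussed: in a select-keys-style loop, comparing against a keyword sentinel with keyword-identical? (a cheap identity/fqn check) instead of the general-purpose =:

```clojure
;; Hypothetical sketch only; ::not-found is a local sentinel keyword,
;; and the function name is an assumption, not cljs.core/select-keys.
(defn select-keys-sketch [m ks]
  (reduce (fn [acc k]
            (let [v (get m k ::not-found)]
              ;; keyword-identical? avoids the full = dispatch when
              ;; checking whether the lookup hit the sentinel.
              (if (keyword-identical? v ::not-found)
                acc
                (assoc acc k v))))
          {}
          ks))

(select-keys-sketch {:a 1 :b 2 :c 3} [:a :c :x])
;; => {:a 1, :c 3}
```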


Yeah, I expected lookup-sentinel to also be faster. I got better numbers at first, but once the JIT of the JS engines kicks in it's just a toss-up. Also, I sometimes got better performance with reduce, but then slower perf on other browsers... Tough call.
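For context, the lookup-sentinel pattern mentioned here looks roughly like this: a unique JS object serves as the not-found value, so one get plus an identical? check replaces a contains?-then-get double lookup. This is a sketch under assumed names, not code from core:

```clojure
;; Hypothetical sketch; the function name and sentinel var are assumptions.
(def ^:private sentinel (js-obj))  ; unique object, only equal to itself

(defn copy-present-keys [m ks]
  (loop [ks  (seq ks)
         acc (transient {})]
    (if ks
      (let [k (first ks)
            v (get m k sentinel)]     ; single lookup with sentinel default
        (recur (next ks)
               (if (identical? v sentinel)  ; key absent: skip it
                 acc
                 (assoc! acc k v))))
      (persistent! acc))))

(copy-present-keys {:a 1 :b nil} [:a :b :c])
;; => {:a 1, :b nil}   (nil values survive; :c is absent)
```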


can a namespace provided via :global-exports be called as a function?

(require '[cljsjs.codemirror :as codemirror])
(codemirror ...) ;; <-- use of undeclared var codemirror
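For anyone hitting the same thing: the require above relies on a :global-exports entry in the library's deps.cljs mapping the provided namespace to the library's global JS name. A rough sketch of the shape (file path and names here are assumptions, not the actual cljsjs artifact contents):

```clojure
;; Hypothetical deps.cljs fragment; config only, not executable.
{:foreign-libs
 [{:file           "cljsjs/codemirror/development/codemirror.inc.js"
   :provides       ["cljsjs.codemirror"]
   :global-exports {cljsjs.codemirror CodeMirror}}]}
```

With that mapping in place, the alias from the require refers to the global CodeMirror object, so calling it as a function works when the build tool processes the entry correctly.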


it was a build tool problem, works as expected now, thanks