This page is not created by, affiliated with, or supported by Slack Technologies, Inc.


Compile time reduced from 48s to 28s for me with optimizations set to :none. That's big! (:advanced is not as dramatic: from 80s to 70s.)


@olivergeorge yeah advanced is Google Closure time, not ClojureScript time


Mentally averaging all of the reports I've seen, it feels like the compiler perf improvement for real world projects is in the ballpark of 30%, not the 50% experienced by Planck. Still great, but not likely 2× in general.


@mfikes that just means we have a little more work to do by the next release :wink:


I'm still a little surprised that the defrecord experiment for compiler state didn't speed things up. (To me, typical profiles appear to be dominated by persistent map operations.)


@mfikes not sure what experiment you're talking about, but assuming you mean something like replacing maps with records as AST nodes -- I briefly experimented with that for tools.analyzer a few years ago and didn't find any significant performance wins either


Yeah (IIRC, it was done by Sebastian Bensusan), but it was essentially that exact kind of approach.


Perhaps we also needed to hint things to obtain the speedup?

(defrecord Foo [x])
(time (let [^Foo foo (->Foo 3)] (dotimes [_ 10000000] (.-x foo))))


Yeah, perhaps Bensusan's code was using keywords to access things, whereas a hinted .- access appears to be much faster.


Something else must be dominating the runtime other than AST manipulation mechanics. (Using hinted .- isn't that much faster than keywords on records, especially compared to what you get when switching from keywords on maps.)
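To make that comparison concrete, here's a rough microbenchmark sketch of the three access styles being discussed: keyword lookup on a plain map, keyword lookup on a record, and hinted field access on a record. The `Foo` record and the loop counts are illustrative, and absolute timings are machine- and JIT-dependent, so treat this as a sketch rather than a rigorous benchmark:

```clojure
(defrecord Foo [x])

(defn bench-ms
  "Run thunk f ten million times and return elapsed milliseconds.
  Crude: no JIT warmup beyond the run itself."
  [f]
  (let [t0 (System/nanoTime)]
    (dotimes [_ 10000000] (f))
    (/ (- (System/nanoTime) t0) 1e6)))

(let [m   {:x 3}
      foo (->Foo 3)]
  (println "keyword on map:   " (bench-ms #(:x m)) "ms")
  (println "keyword on record:" (bench-ms #(:x foo)) "ms")
  (println "hinted .- field:  " (bench-ms #(.-x ^Foo foo)) "ms")))
```

The typical pattern on the JVM is that the map-to-record switch is the larger jump, with the hinted field access shaving off a smaller additional amount, which matches the point above.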


@mfikes defrecords in clj do not cache their hash values. In cljs they don't use the Murmur3 (m3) hash (so more collisions) and have an inefficient -equiv


So defrecords are often not the faster drop-in replacement they should be


@mfikes don't know if this is the case for cljs, but in t.a, because functions were highly polymorphic (often destructuring different AST nodes), the inline caches used for defrecord keyword lookups actually caused things to slow down, as they're optimized for monomorphic call sites


There is a perf optimization related to repeated io/resource calls, which doesn't seem to be clean enough to warrant writing a JIRA ticket for. But, I thought I'd leave it here for thought and feedback. Here is a gist that explains it and shows a little revised code:
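(The gist itself isn't reproduced here, but a minimal sketch of that kind of optimization, assuming the hot path is repeated io/resource lookups for the same names, is to memoize the lookup. The name `cached-resource` and the atom-based cache are illustrative, not the actual code from the gist:)

```clojure
(require '[clojure.java.io :as io])

;; Illustrative cache from resource name -> URL (or nil if absent).
(def ^:private resource-cache (atom {}))

(defn cached-resource
  "Like io/resource, but memoizes results by name. Trade-offs: a nil
  (missing) result is also cached, and resources added to the
  classpath at runtime will not be picked up."
  [name]
  (if-let [e (find @resource-cache name)]
    (val e)
    (let [r (io/resource name)]
      (swap! resource-cache assoc name r)
      r)))
```

Whether this is safe depends on the compiler's assumptions about a stable classpath during a build, which is presumably part of why it feels too messy for a JIRA ticket.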