2017-02-08
Channels
- # aleph (11)
- # arachne (7)
- # aws (1)
- # bangalore-clj (4)
- # beginners (24)
- # boot (128)
- # bristol-clojurians (23)
- # cider (1)
- # cljs-dev (43)
- # cljsrn (6)
- # clojure (178)
- # clojure-austin (3)
- # clojure-chicago (1)
- # clojure-dusseldorf (14)
- # clojure-finland (15)
- # clojure-france (6)
- # clojure-italy (18)
- # clojure-portugal (2)
- # clojure-russia (67)
- # clojure-spec (148)
- # clojure-uk (55)
- # clojurescript (199)
- # core-async (4)
- # cursive (18)
- # datascript (5)
- # datomic (120)
- # devcards (3)
- # dirac (53)
- # emacs (11)
- # events (3)
- # gsoc (7)
- # jobs (1)
- # lein-figwheel (25)
- # leiningen (5)
- # lumo (12)
- # off-topic (29)
- # om (174)
- # om-next (2)
- # onyx (7)
- # perun (10)
- # protorepl (6)
- # re-frame (12)
- # remote-jobs (1)
- # ring (19)
- # ring-swagger (25)
- # rum (6)
- # spacemacs (13)
- # sql (3)
- # untangled (88)
- # yada (7)
There’s some evidence that self-hosted compiler perf has eroded significantly over time. One measure (perhaps a little inaccurate owing to a few new test namespaces having been added over time) is how long it takes to run script/test-self-parity. Here are some timings over time (date, commit or release, wall time):
2/6 HEAD 2m47.490s
1/27 1.9.456 2m06.745s
12/16 CLJS-1873 2m13.174s
10/19 1.9.293 1m53.280s
10/8 CLJS-1815 1m56.592s
9/5 CLJS-1773 1m41.529s
8/19 CLJS-1760 1m36.202s
4/24 1.8.51 1m16.113s
There is perhaps a decent amount of stuff outside of direct self-hosted compilation going on in the above. Another measure is how long it takes Planck to load the cljs.core.async namespace (which I suspect is largely down to ClojureScript; Planck itself isn't doing much once the require has been issued), and this takes 3x longer when comparing Planck 1.17 (ClojureScript 1.9.229) and Planck 2.0.0 (ClojureScript 1.9.468).
I think I’ll try to produce a more fine-grained timeline for script/test-self-host in order to hopefully find the handful of commits that hurt perf the most.
Some of those test timings are also related to us including more test namespaces in self-parity
we weren't testing all of them until the end of summer
but there might also be something going on with .cljc files
^ this is a thing, btw /cc @mfikes
definitely. I'm not disregarding Mike's observations
just introducing some more variables for context 🙂
@anmonteiro Here is another bit of corroborating evidence, comparing the perf of Lumo 1.0.0 vs. 1.1.0:
time lumo -c andare-0.4.0.jar -e'(ns foo.core (:require cljs.core.async))'
With 1.0.0: 0m40.883s
With 1.1.0: 1m45.191s
definitely slower 🙂
I wonder what we messed up
did you bisect?
there could be one
There might be a single commit recently that does account for most of it; I agree. But we probably also have a bit of lingchi (death by a thousand cuts) going on.
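If one commit does dominate, a timing-based git bisect is one way to find it. A minimal sketch, assuming the r1.8.51 tag as the known-good ref and an arbitrary 100-second cutoff (both are assumptions, not from the log):
#!/usr/bin/env bash
# bisect-perf.sh: mark a commit "bad" for git bisect if the test run is slow.
start=$(date +%s)
script/test-self-parity > /dev/null 2>&1 || exit 125  # 125 tells bisect to skip commits that fail outright
elapsed=$(( $(date +%s) - start ))
test "$elapsed" -le 100  # exit 0 (good) if under the 100s cutoff, 1 (bad) otherwise
Then:
git bisect start HEAD r1.8.51
git bisect run ./bisect-perf.sh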
one thing that comes to mind is that externs inference always runs now, although that is by no means much work
@mfikes something came to mind
if you have time, can you verify if this one would impact perf a lot? https://github.com/clojure/clojurescript/commit/94b4e9cdc845c1345d28f8e1a339189bd3de6971
@anmonteiro unlikely, given that there are very few ns forms
right. I thought about that right after seeing the changeset 🙂
nothing else comes to mind though
there should be at least one commit that causes 80-90% of the perf degradation
^ that's my hunch at least
I’m wondering how hard it might be to write a script that marches through the change sets, running script/test-self-parity and recording the time taken (if the script runs for a given commit), and then making a chart to look for jumps. I bet that could be done with a few hours’ work, and if successful, it would be a very useful tool. (The thing that gets executed could be anything for each commit, even the regular tests.)
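A minimal sketch of what such a commit-marching script might look like (this is not the gist linked at the end of the log; the r1.8.51 tag and the CSV filename are assumptions):
#!/usr/bin/env bash
# Walk every commit since r1.8.51 in order, timing script/test-self-parity on each.
for sha in $(git rev-list --reverse r1.8.51..HEAD); do
  git checkout -q "$sha" || continue
  start=$(date +%s)
  if script/test-self-parity > /dev/null 2>&1; then
    echo "$sha,$(( $(date +%s) - start ))" >> timings.csv  # record commit,seconds
  fi  # commits where the script fails to run are simply skipped
done
Plotting timings.csv in commit order would then make any large jumps stand out.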
Some jumps would be easily explained by adding new test namespaces to script/test-self-parity, but others might visually jump out and not be explainable other than as a perf regression for a commit.
aware of https://jafingerhut.github.io/clojure-benchmarks-results/Clojure-benchmarks.html ?
example for expression benchmarks https://jafingerhut.github.io/clojure-benchmarks-results/Clojure-expression-benchmark-graphs.html
I’m running a simple script across all the commits since 1.8.51. It should take less than 24 hours. 🙂 https://gist.github.com/mfikes/2db7e33c6494e6b0950b02e7dc5b0df6