2016-02-04
Channels
- # aatree (5)
- # admin-announcements (37)
- # alda (1)
- # announcements (4)
- # architecture (1)
- # aws (3)
- # beginners (82)
- # boot (230)
- # braid-chat (14)
- # cider (48)
- # cljs-dev (8)
- # cljsrn (31)
- # clojars (47)
- # clojure (72)
- # clojure-austin (2)
- # clojure-russia (396)
- # clojurescript (72)
- # community-development (3)
- # component (6)
- # core-async (6)
- # cursive (26)
- # datomic (42)
- # emacs (6)
- # events (35)
- # hoplon (57)
- # immutant (3)
- # jobs (2)
- # jobs-discuss (10)
- # ldnclj (16)
- # luminus (2)
- # off-topic (50)
- # om (181)
- # parinfer (285)
- # proton (68)
- # re-frame (19)
- # reagent (2)
- # ring-swagger (23)
- # yada (36)
~/d/p/parinfer-test (master)> lein with-profile bench run
Evaluation count : 4920 in 60 samples of 82 calls.
Execution time mean : 12.537784 ms
Execution time std-deviation : 436.185958 µs
Execution time lower quantile : 11.935113 ms ( 2.5%)
Execution time upper quantile : 13.447069 ms (97.5%)
Overhead used : 1.707955 ns
Evaluation count : 5280 in 60 samples of 88 calls.
Execution time mean : 11.775496 ms
Execution time std-deviation : 408.468619 µs
Execution time lower quantile : 11.253086 ms ( 2.5%)
Execution time upper quantile : 12.753575 ms (97.5%)
Overhead used : 1.707955 ns
Found 5 outliers in 60 samples (8.3333 %)
low-severe 5 (8.3333 %)
Variance from outliers : 20.6507 % Variance is moderately inflated by outliers
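For reference, output like the above comes from Criterium's `bench`. A minimal sketch of a bench entry point under a `bench` profile (the namespace, resource path, and `run-parinfer` stand-in are assumptions, not the actual parinfer-test code):

```clojure
(ns bench.core
  (:require [criterium.core :as criterium]))

;; hypothetical stand-ins for the real benchmark input and function under test
(def really-long-file (slurp "resources/really_long_file.clj"))

(defn run-parinfer [text]
  ;; placeholder for the parinfer call actually being measured
  (count text))

(defn -main [& _]
  ;; criterium picks the evaluation count itself, then reports mean,
  ;; std-deviation, quantiles, overhead and outliers as in the output above
  (criterium/bench (run-parinfer really-long-file)))
```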
I replaced a few of your functions with stdlib equivalents that are more efficient, especially around repeated String concatenation.
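The repeated-String-concatenation point, sketched in Clojure (this shows the general JVM technique, not the actual commit): every `str` on a growing accumulator copies the whole prefix, while a `StringBuilder` appends in place.

```clojure
;; repeated concatenation: every step re-copies the accumulated string
(defn build-line-slow [tokens]
  (reduce (fn [acc t] (str acc t " ")) "" tokens))

;; single mutable buffer: appends without re-copying what's already there
(defn build-line-fast [tokens]
  (let [sb (StringBuilder.)]
    (doseq [t tokens]
      (.append sb t)
      (.append sb " "))
    (.toString sb)))
```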
although I spoke with him last night; he is pretty excited about parinfer-jvm as well as the JS speed-up
I can probably do a naive impl relatively easily, but there are a few bits around the edges to make it user friendly.
yeah - are you familiar with the "parent expression" hack that atom-parinfer uses?
Shaun and I were talking about this last night; with the performance we're at now we might be able to just take it out
Yeah, I am. I’m actually planning to benchmark just parsing the top level forms from the top of the file until, say, the last one in really_large_file, and then parinfer’ing that.
there might be a flurry of small commits in order to test that actually - nothing major
I guess the question is: if this already existed and you were just adding it to Cursive, what would you want the API to look like?
I think we should probably create the object explicitly so we can control the name, rather than have it be autogenerated.
Yeah. I’ll have to check how fast it is in Cursive because it’s not just manipulating strings, it’s fiddling with documents that have locks and so forth.
In Atom, are files ever changed in the background? There’s nothing like refactorings or anything like that, right?
So these changes are only ever applied to a document while a user is interactively editing it?
Actually, what about if a file is updated from VCS, which is effectively in the background as far as Atom is concerned?
@snoe: I like that idea you had yesterday! tracking here: https://github.com/shaunlebron/parinfer/issues/89
@thomas: by “block comment” do you mean “comment the selected lines” or “comment the lines of the cursor’s sexp”?
@cfleming: parinfer looks at all parens outside of comments, strings, and character literals. so it defaults to treating #{, #[, and #( correctly.
it doesn’t actually know anything about the preceding #, so it’s treated as if it wasn’t there
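A rough illustration of that rule (my own sketch, not parinfer's reader): parens only count while the scanner is outside comments, strings, and backslash character literals, and a preceding # is skipped over like any other plain character.

```clojure
;; hypothetical single-line scanner: collect [index paren] pairs, ignoring
;; anything inside a comment, a string, or right after a backslash
(defn paren-positions [line]
  (loop [i 0, state :code, acc []]
    (if (>= i (count line))
      acc
      (let [ch (nth line i)]
        (case state
          :code   (cond
                    (= ch \;) acc                                    ; comment: rest of line ignored
                    (= ch \") (recur (inc i) :string acc)
                    (= ch \\) (recur (+ i 2) :code acc)              ; character literal: skip next char
                    (#{\( \) \[ \] \{ \}} ch) (recur (inc i) :code (conj acc [i ch]))
                    :else (recur (inc i) :code acc))
          :string (cond
                    (= ch \\) (recur (+ i 2) :string acc)            ; escape inside a string
                    (= ch \") (recur (inc i) :code acc)
                    :else (recur (inc i) :string acc)))))))

(paren-positions "#(inc %) ; #[")   ;=> [[1 \(] [7 \)]]
```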
Colin is kicking total JVM ass; the benchmark on parinfer-jvm is cut in half from my original implementation
little worried that the "clean implementation" is going to have more JS-specific stuff
and for editors without their own native support for locating it, I can put the fast reader back in the API
parinfer without all the transformation business can run through really_long_file in about 7ms
idk why i didn’t think of this earlier. chatting with the emacs guy today helped I think
https://clojurians.slack.com/archives/parinfer/p1454558963000385 atom-parinfer only ever looks at the current buffer
@cfleming: in atom, if a file is changed in the background, atom-parinfer won't run until the user does something to trigger it, like press any key on the keyboard
from your example @snoe, you want an indent-next-sexp function, not an indent-next-line function, since it indents the child expressions with it
I think the parinfer API can provide some primitives that allow parinfer plugins to handle this behavior themselves
I agree, maybe parinfer could return something like indentMode(..., {...}) => {:text "..." :tab-stops [1 4 8 11]}?
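Roughly what consuming that proposed return shape could look like on the plugin side (the :tab-stops key is the proposal above, not an existing API, and snap-to-tab-stop is a hypothetical helper):

```clojure
;; hypothetical result in the shape proposed above
(def result
  {:text      "(defn foo\n  [x]\n  (bar x))"
   :tab-stops [1 4 8 11]})

;; snap a requested indentation column to the nearest tab stop at or before it
(defn snap-to-tab-stop [col tab-stops]
  (or (last (take-while #(<= % col) tab-stops))
      (first tab-stops)))

(snap-to-tab-stop 6 (:tab-stops result))  ;=> 4
```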
The make edit, goto next line, select form, indent/dedent selection dance is still something I find myself doing all the time. I wonder if others do the same thing?
you’ll need more info for ( to be two-space, and for (expr arg to be n-space indent
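Concretely, that's the difference between these two common layouts (plain example code illustrating my reading of the point above, not parinfer output):

```clojure
;; a bare opener: children get a fixed two-space body indent
(when true
  (println "two-space body indent")
  (println "for body forms"))

;; (expr arg broken onto more lines: later args align under the first arg
(str "args align "
     "under the "
     "first argument")
```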
yeah, the important part is making the structural change. formatting can/should be handled outside.
I see what you’re saying, so do a separate auto-indent operation after parinfer moves it to one-space indentation
@snoe, seems like we need an indent-selected-lines option instead of a single line
Maybe: 1) go to the next line -> 2) find its first char -> 3) keep going down until you find a char in a column before the one from step 2
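A sketch of that three-step walk (my reading of it; child-block and indent-col are hypothetical names): note the column of the first character on the next line, then keep taking lines until one starts at an earlier column.

```clojure
;; column of the first non-space character on a line
(defn indent-col [line]
  (count (take-while #{\space} line)))

;; steps 1-3 above: the block of child lines following the edited line
(defn child-block [lines edited-idx]
  (let [base (indent-col (nth lines (inc edited-idx)))]   ; steps 1 and 2
    (take-while #(>= (indent-col %) base)                 ; step 3: stop at a shallower first char
                (drop (inc edited-idx) lines))))
```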
parinfer.py and then parinfer-jvm showed that the algorithm should have similar run times for both functions
I'm not sure we're going to get faster than that; I was playing with replacing the if/else with dispatch and such
I considered changing the perf.js to run multiple iterations and then show the average, etc
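The multiple-iterations-then-average idea is easy to approximate by hand; a hedged sketch (not the actual perf.js code):

```clojure
;; hypothetical: run a thunk n times and return the mean wall time in ms
(defn mean-time-ms [n f]
  (let [samples (repeatedly n
                            #(let [start (System/nanoTime)]
                               (f)
                               (/ (- (System/nanoTime) start) 1e6)))]
    (double (/ (reduce + samples) n))))

;; e.g. (mean-time-ms 20 #(run-parinfer really-long-file))
```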
to be honest, I "discovered" that earlier today on a Windows machine running an older version of node
in both of these last cases of "discovering" big speed improvements, it's clear they are v8 optimizations
I did see some perf improvement in parinfer.py by changing that if/else chain to a Dict lookup
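The same dispatch-table idea expressed in Clojure terms (a sketch of the technique; in parinfer.py it's a Python dict keyed by the current character): replace a linear chain of character tests with one lookup into a map of handler functions.

```clojure
;; hypothetical per-character handlers standing in for parinfer's processors
(defn on-open-paren  [state] (update state :depth inc))
(defn on-close-paren [state] (update state :depth dec))
(defn on-semicolon   [state] (assoc state :in-comment? true))
(defn on-default     [state] state)

;; chain of tests, evaluated in order for every character
(defn process-char-cond [state ch]
  (cond
    (= ch \() (on-open-paren state)
    (= ch \)) (on-close-paren state)
    (= ch \;) (on-semicolon state)
    :else     (on-default state)))

;; single map lookup instead of the linear scan
(def char-handlers
  {\( on-open-paren, \) on-close-paren, \; on-semicolon})

(defn process-char-lookup [state ch]
  ((get char-handlers ch on-default) state))
```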
@chrisoakman: you were asking about the next language to tackle; maybe if you do C/C++ you could then use emscripten for JS (or a C node module)
I don't know what editor would benefit from having an F# implementation; I don't think there are too many people using MS Visual Studio to edit Lisp code
another benefit of writing Parinfer in emacs lisp is that I get to use parinfer to write it!
but cursive, emacs, and parinfer stripped down to its reader form only, all of these can locate a parent expression very quickly and reliably
so it’s like using atom-parinfer’s parent expression hack, except without the corner cases
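For context on the hack being discussed: atom-parinfer limits the text it feeds to parinfer by looking for lines that open a top-level form around the cursor. A rough sketch of that idea (the column-0 open-paren convention and these function names are my assumptions):

```clojure
;; a line is treated as starting a top-level form if its first char is an opener
(defn top-level-start? [line]
  (contains? #{\( \[ \{} (first line)))

;; nearest top-level start at or above the cursor, and the start of the next one below
(defn parent-expression-range [lines cursor-idx]
  (let [start (or (last  (filter #(top-level-start? (nth lines %))
                                 (range 0 (inc cursor-idx))))
                  0)
        end   (or (first (filter #(top-level-start? (nth lines %))
                                 (range (inc cursor-idx) (count lines))))
                  (count lines))]
    [start end]))
```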
I am seriously considering dropping the parent expression hack for files under a certain length N
yeah for what it's worth I haven't bothered with expressions in vim because the whole file is pretty much fast enough
I am willing to wager that 99% of lisp files that people regularly work with are under 1000 lines of code
vim's select top form: https://github.com/guns/vim-sexp/blob/b4398689f7483b01684044ab6b55bf369744c9b3/autoload/sexp.vim#L1054-L1211 hehe
which is probably a decent decision for most languages, but probably the wrong one for lisp-based languages
thanks @snoe! this looks like how most editors would locate the parent expression: https://github.com/guns/vim-sexp/blob/b4398689f7483b01684044ab6b55bf369744c9b3/autoload/sexp.vim#L1054-L1080
Last I saw, the strategy was to rewrite parinfer in emacs lisp, instead of using the lib