This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-07-05
Channels
- # aws (12)
- # babashka (30)
- # beginners (294)
- # calva (98)
- # clj-on-windows (3)
- # clojure (48)
- # clojure-europe (31)
- # clojure-italy (8)
- # clojure-nl (2)
- # clojure-uk (11)
- # clojurescript (58)
- # conjure (1)
- # events (1)
- # fulcro (35)
- # graalvm-mobile (2)
- # jobs (8)
- # lsp (11)
- # malli (25)
- # off-topic (33)
- # pathom (24)
- # pedestal (1)
- # polylith (15)
- # re-frame (7)
- # reitit (10)
- # releases (8)
- # remote-jobs (2)
- # sci (3)
- # shadow-cljs (79)
- # spacemacs (10)
- # sql (17)
- # tools-deps (17)
- # vim (4)
- # xtdb (11)
Calva 2.0.203 is released:
• Fix: https://github.com/BetterThanTomorrow/calva/issues/1203
• Improvement: https://github.com/BetterThanTomorrow/calva/issues/942
• Fix: https://github.com/BetterThanTomorrow/calva/issues/1222
• Bump clojure-lsp: https://github.com/clojure-lsp/clojure-lsp/releases/tag/2021.07.01-13.46.18
Thanks, @U01LFP3LA6P, for the output window improvement!
Glad to have helped. Unfortunately the output still gets quite sluggish after 10k lines, so I still often find myself jumping to that window and manually deleting all the content. But I guess the improvement helps quite a lot: a printout of 500 lines took almost 25 seconds for me without batching (with an empty repl window); now it's under a second.
Yeah, I'm not sure how much more performant we can really make printing to an editor. Adding an option to print to an output channel would be helpful, though.
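The batching improvement mentioned in this thread can be sketched roughly like this (a hypothetical shape for illustration, not Calva's actual implementation): buffer incoming output lines and flush them as one edit per tick instead of one edit per line. `OutputBatcher` and `applyAppend` are made-up names; `applyAppend` stands in for the real, expensive editor edit operation.

```typescript
// Hypothetical batching buffer: collect appended lines and flush them in one
// call instead of one call per line. `applyAppend` stands in for the real
// (expensive) editor edit operation.
class OutputBatcher {
    private pending: string[] = [];
    private scheduled = false;

    constructor(private applyAppend: (chunk: string) => void) {}

    append(line: string): void {
        this.pending.push(line);
        if (!this.scheduled) {
            this.scheduled = true;
            // Flush on the next tick so rapid appends coalesce into one edit.
            setTimeout(() => this.flush(), 0);
        }
    }

    flush(): void {
        this.scheduled = false;
        if (this.pending.length > 0) {
            this.applyAppend(this.pending.join("\n") + "\n");
            this.pending = [];
        }
    }
}
```

Many `append` calls in the same tick result in a single `applyAppend` call, which matches the kind of speedup described above when the edit operation itself is the bottleneck.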
@U9A1RLFNV
What is actually quite interesting is the fact that...
1. when I keep appending lines via evaluation of some expression containing println, it gets quite slow after the line count in output.calva-repl reaches approx 5000-10000 lines.
2. but when I enter the output.calva-repl window and perform Ctrl+a, Ctrl+c, Ctrl+v (copy all + paste) several times, the editor actually responds really quickly, including syntax highlighting etc.
3. (when I combine steps 1 & 2, I often end up in a situation where my output.calva-repl and evaluation somehow completely freeze 😞)
Any idea why copy-pasting acts so quickly compared to printing from an expression?
Yeah, this is the plan ^. @U01LFP3LA6P I'm not sure. It could be that pasting operates differently behind the scenes than when we use the API to apply an edit.
But it's weird, isn't it? Is there perhaps an API which would mimic the paste operation, so that one could try to perform some benchmarks and comparisons?
When I think about it now, I wouldn't be surprised if the editing API actually performed (perhaps even multiple?) saving operations of the content, while copy-pasting doesn't do that. Also, I remember that in the past I was trying to tail-pipe the output.calva-repl file to stdout, and it seemed that the content doesn't get appended to that file; the file is always written "from scratch". Not sure if that's really true or not, it just behaved like that.
Btw, having an option to forward stdout somewhere else would be super nice, but it is also good to have an integrated window with your repl output, directly in your vs code window.
Is writing the same amount of output in one edit operation slower than pasting the same chunk? Or is it writing as the output arrives that is slower?
My observation is that vscode.workspace.applyEdit(edit) itself starts to get sluggish very quickly. Even when the edit effectively appends a single line, the operation takes...
• around 10ms when appending to an empty document
• around 90ms when appending to a doc with 1000 lines
• around 200ms when appending to a doc with 2000 lines
• around 900ms when appending to a doc with 4000 lines
• around 1000ms-3000ms when appending to a doc with 10000 lines
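Measurements like the ones above can be collected with a small ad-hoc timing wrapper along these lines (a hypothetical helper, not part of Calva; in the real case the wrapped operation would be vscode.workspace.applyEdit, which returns a Thenable you would await):

```typescript
// Hypothetical helper mirroring the ad-hoc Date-based timing used in these
// measurements: run an operation, log how long it took, pass its result through.
function timed<T>(label: string, op: () => T): T {
    const before = Date.now();
    const result = op();
    console.log(`${label} took ${Date.now() - before}ms`);
    return result;
}
```

For an async operation like applyEdit you would time around the awaited promise instead of a plain synchronous call.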
The sweet spot for testing the delays seems to be around 8k lines for me. What is interesting is that
• when you manually keep appending yet another 8k lines again and again and again... (via Ctrl+v) it's instantaneous. Like super-quick.
• but when you manually select the last line (or just a single character, really) and try to delete it, there's a 1-2 second delay.
• when I open foo.txt, put in the same 8k characters, and try the same operations... it works just fine; appending is quick, deletions are quick, everything just flies.
So perhaps the syntax highlighting somehow triggers on the full document instead of just the appended part?
> but when you manually select the last line (or just a single character, really) and try to delete it, there's a 1-2 second delay.
This could be because what’s really happening is Paredit delete backwards. Try with alt+backspace.
> when I open foo.txt, put in the same 8k characters, and try the same operations... it works just fine; appending is quick, deletions are quick, everything just flies.
Does that include vscode.workspace.applyEdit(edit)?
Can't say, really; I'm just comparing user editing directly in the editor window, i.e. in both cases not using the API.
can the paredit support be disabled or short-circuited somehow quickly? just for testing..
When I manually add 8k lines into output.calva-repl and try to manually delete the last line, it gets stuck. I can add new letters, but any attempt to delete (either via Backspace, Del, or alt+backspace, alt+del) doesn't work; it seems to be blocked. I've seen this behavior before in the editor and we were already trying to pinpoint the root cause of this.
funny thing is that one can delete the line super-quickly by selecting the row and pressing space 🙂
I’ve seen that behaviour with deletes not working too. Never figured out what gets stuck.
now it seems to me that it can quite easily be reproduced
Paredit delete is structural. So for a large file, lots of things are going on. Pressing space is not structural. But alt+backspace shouldn’t be structural either, so that should be as quick as pressing space.
I deleted the content of the output.calva-repl window by using the "select-all + space" hack. But I still can't perform any real deletion (delete, backspace, alt+delete, alt+backspace).
Is there any obvious place in Calva's code which could prevent such a delete event from being handled?
The mirror document is probably out of whack. Closing and re-opening the output window should make it start working again.
(and ctrl+s also does nothing)
yes, closing + reopening fixed the situation
Or even better, we should try to find out the root cause of all this 😄, but I get your point.
Situation 1 "all works":
• start new Calva in debug mode by making some change in Calva code
• open the output.calva-repl file from the previous session, delete the content
• add 8k lines of plain text: this is a weird bug
• try to delete the last few lines by selecting them and pressing Del... works fine
• try to append 123 to the last line... works fine
• try to delete that line... works fine
Now a different scenario:
• start new Calva in debug mode by making some change in Calva code
• open the output.calva-repl file from the previous session, delete the content
• add 8k lines of plain text: this is a weird bug
• add 123 at the end of the last line (after the text)
• sometimes deletion stopped working for me, but if nothing else, the Enter key stops working
  ◦ even if I close and reopen the Calva repl window, the same problem remains
Ha #2, I take that "enter stops working" back. It's just terribly slow. Enter performs evaluation, and that evaluation is extremely slow even after 2000 lines. After a lot of appending, that evaluation takes more than 30 seconds, so the editor appears stuck.
if it takes ~30 seconds for 2000 lines, I understand that it takes forever for 8k
I’m a bit slow here as well, because tired. Can you tell me the difference between scenario 1 and 2?
yeah, sorry, me too 🙂. I'll probably look more into this tomorrow afternoon/evening and possibly record a video and describe the reproduction in a better way.
Several times it seemed to me that syntax highlighting might be the culprit, as when the lines contained just text which didn't get highlighted, I never got the "hangup" behavior. But now it seems that the REPL needs to be connected as well to get into this "cannot delete/enter" state.
If the contents of the editor is mostly a lot of plain text, then there will be very long sequences of tokens on the same “level”, the top level probably, in your test there. That will be slow for the token cursor to navigate. So this is a reason why we might want to offer the option to send side effect output to some other place. b/c that kind of data is often unstructured.
ok, but still... 30s delay for 2000 lines?
I mean for 2000 very short lines of just a few words?
I don’t know what’s going on really. But I often have 10K+ lines in the output window without it getting that slow.
The bracket coloring tries to work on what’s visible in the view. But it needs the start and end of forms to do this. If it is all unstructured, I think maybe it works on the whole file.
I think it's a good time to leave it for today. I'm just running around the issue, trying to guess the correct steps to reproduce it in a way that would pinpoint the cause, but no luck there. If nothing else, it's really easy to make the repl window behave super-slow with just ~2-3k lines, and definitely with 8k lines. When the REPL is NOT connected, any manual appending or deleting lines works instantly.. including syntax highlighting. I'll try to add some elapsed-time logging tomorrow to the code that performs the "enter key" evaluations when the REPL is connected. Since that also starts to behave slowly with an increasing number of lines in the repl window, maybe it will lead me to some slow place deeper in the code, who knows.
> When the REPL is NOT connected, any manual appending or deleting lines works instantly.. including syntax highlighting.
This is quite a lot of progress on this issue!
state.extensionContext.subscriptions.push(vscode.window.onDidChangeTextEditorSelection(event => {
    let submitOnEnter = false;
    const before = new Date().getTime();
    if (event.textEditor) {
        const document = event.textEditor.document;
        if (isResultsDoc(document)) {
            const idx = document.offsetAt(event.selections[0].active);
            const mirrorDoc = docMirror.getDocument(document);
            const selectionCursor = mirrorDoc.getTokenCursor(idx);
            selectionCursor.forwardWhitespace();
            if (selectionCursor.atEnd()) {
                const tlCursor = mirrorDoc.getTokenCursor(0);
                const topLevelFormRange = tlCursor.rangeForDefun(idx);
                submitOnEnter = topLevelFormRange &&
                    topLevelFormRange[0] !== topLevelFormRange[1] &&
                    idx >= topLevelFormRange[1];
            }
        }
        console.log("onDidChangeTextEditorSelection subscription took " + (new Date().getTime() - before));
    }
}));
Maybe I found something. When appending a single line to a window with ~8000 lines, the code above ^^^ takes 1149ms out of the full 1186ms that elapses within applyEdit. Will investigate further tomorrow. It's 3AM 🙂
const topLevelFormRange = tlCursor.rangeForDefun(idx);
this seems to be the slow one
yup, when I short-circuit it (put return [0,0] in that method's body), applyEdit takes 50ms instead of 1186ms.
💤 🛏️ 😴
That’s great. I can reproduce it with Select Current Form. I need some 5K lines, though, but anyway. The way rangeForDefun works is that it starts at the token-cursor position (0, in this case and most often) and moves forward sexp by sexp until it has passed the editor cursor position. In my test I have 60693 words in the window, which translates to as many tokens. token-cursor.forwardSexp() has to do its work 61K times, so it will take some time. Hmmm…
I wonder why selectionCursor.atEnd() is not enough… Can you recall anything around that, @U9A1RLFNV?
In this particular case we can find the range by starting at the end and going backward sexp instead. We could consider generalizing that and always starting from the closest end, or changing the way we find top-level forms entirely. Like:
• If upList():
  ◦ Continue upList() until we can’t anymore
  ◦ note the current position as the end of the current top-level form
  ◦ go backwardSexp()
  ◦ note the current position as the start of the current top-level form
• Else:
  ◦ The current form is the current top-level form
That should be quick in any document. Maybe I am missing something obvious though, because why didn’t I do it this way to begin with? 😃
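The strategy in those bullets can be modeled on plain parens-only text. This is a toy sketch: upList and backwardSexp are re-implemented here from scratch and only mimic the names of Calva's token-cursor API; strings, Clojure comments, and (comment ...) top-level handling are all ignored.

```typescript
// Move to just past the closing paren of the list enclosing `pos`,
// or return null when `pos` is not inside any list.
function upList(text: string, pos: number): number | null {
    let depth = 0;
    for (let j = pos; j < text.length; j++) {
        if (text[j] === "(") depth++;
        else if (text[j] === ")") {
            if (depth === 0) return j + 1; // just past the enclosing close paren
            depth--;
        }
    }
    return null;
}

// Move backward over one sexp ending at `pos`; return its start offset.
function backwardSexp(text: string, pos: number): number {
    let i = pos - 1;
    while (i >= 0 && text[i] === " ") i--;
    if (text[i] === ")") {
        let depth = 0;
        for (; i >= 0; i--) {
            if (text[i] === ")") depth++;
            else if (text[i] === "(" && --depth === 0) return i;
        }
        return 0; // unbalanced input; bail out
    }
    while (i >= 0 && !" ()".includes(text[i])) i--; // scan back over an atom
    return i + 1;
}

// Range [start, end) of the top-level form containing `idx`, found by going
// up from the cursor rather than scanning forward from offset 0.
function topLevelFormRange(text: string, idx: number): [number, number] | null {
    let pos = upList(text, idx);
    if (pos === null) {
        // Not inside a list: the atom at idx (if any) is the top-level form.
        if (idx >= text.length || text[idx] === " ") return null;
        let s = idx, e = idx;
        while (s > 0 && !" ()".includes(text[s - 1])) s--;
        while (e < text.length && !" ()".includes(text[e])) e++;
        return [s, e];
    }
    // Continue upList() until we can't anymore: that is the form's end.
    for (let up = upList(text, pos); up !== null; up = upList(text, pos)) pos = up;
    return [backwardSexp(text, pos), pos];
}
```

The interesting property for the repl-output case is locality: with the cursor near the end of the document, the work is bounded by the text after the cursor plus the size of the enclosing top-level form, instead of by the distance from offset 0 as in the forwardSexp-from-zero approach.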
awesome! 🤞 There might be other smaller bottlenecks, but this one seems most pronounced with an increasing number of lines in the repl window. After I disabled that slow function, I was able to append relatively quickly even to a 200k-line repl window. It choked for a while (10-20s) after I pasted all that content (the pasting itself was quick), but after those 10-20 seconds, subsequent append operations took around 200ms each, which is not too bad considering the number of lines.
Up already? 😃 Those 10-20 seconds are probably spent tokenizing the document. Something we potentially can speed up if we go for a deterministic regex implementation in the scanner. (Which probably is a pretty huge undertaking.)
Yeah, lately I find that about 6 hours of sleep is often enough for me. I mean, I don't mind sleeping for a longer period of time, really... but even after less than 6, I no longer feel like a zombie :male_zombie: 😄
@U0ETXRFEW / @U9A1RLFNV so, you'll eventually try to improve the performance of that rangeForDefun as described above, right?
It's very far from my level of knowledge (sexps, their processing, in-memory representations in calva etc), so it would probably take me ages to start understanding even some small bits and pieces around all that.
I was hoping you would do it. 😃 But that particular change is small enough for me to assign myself. It’s actually one of the most fun parts of Calva to work with. Do we have an issue where this fits, or should an issue be created?
I’m still a bit confused on why we need those extra checks to enable the submit-on-enter key bindings…
hmm, if you think it's easy, then perhaps I can later try to re-read those bullets you posted above and figure out what exactly needs to be changed and how. Regarding the issue: AFAIK there's this generic one that's related: https://github.com/BetterThanTomorrow/calva/issues/942
I’d be happy to pair program a bit with you @U01LFP3LA6P and guide you a bit if you like to give this a try. I might be oversimplifying it, but at least I can guarantee it is a fun task. 😃
At least while I haven’t thought it through fully, it looks to me like the change could be isolated to that one function. It is implemented at a pretty high abstraction level, so knowledge about the turtles below is probably not needed.
Haha, turtles below 🙂 🐢🐢. Ok. Not sure whether I'll have some time today or tomorrow, but if so, we can try that. In the meantime, can you please update that issue and write a few sentences there about what change you think is needed in that function?
Something you can test manually in Calva, to get a feel for the idea to change I had tonight, is to have the cursor in some form. Then do Paredit Forward Up Sexp (`ctrl+alt+down`) until the command doesn’t move the cursor. That’s the end of the current top-level form. Then do Paredit Backward Sexp. That’s the start. Two things complicate this a tad more:
1. The token-cursor primitive for upList() doesn’t move up unless the cursor is at the very end of the list (iirc), so the first upList() needs to be preceded by a forwardList(). This might make determining whether the current form is actually the current top-level form a bit tricky. (But maybe it doesn’t, we’ll find out.)
2. comment forms create a new top level. (But hopefully this will already be handled by the function and will just work.)
Hmmm, ctrl+alt+down doesn't get handled by vscode/calva in my setup. Probably my desktop environment stealing that shortcut somehow. It works via ctrl+alt+p though. Update: fixed.
> I wonder why selectionCursor.atEnd() is not enough… Can you recall anything around that, @U9A1RLFNV?
I do not recall anything around that, fyi.
> Those 10-20 seconds is probably spent in tokenizing the document. Something we potentially can speed up if we go for a deterministic regex implementation in the scanner. (Which probably is a pretty huge undertaking.)
Interesting..
Is there not prior work in this area that can be utilized/borrowed? (re: regex implementation in the scanner)
There is probably prior work. I think all XML parsers are using deterministic regex engines. But still, we have quite complex regexes as it is. Converting them to a DFA… I think it will be tricky! 😃
Also, 200k lines is quite a lot. We might rather throw in the towel there and tell the user that the document is not getting scanned. Like we do with long lines. (Like VS Code does with long lines too.)
When I have been working with the scanner I have sometimes wished it were a multiline scanner. Right now I can’t recall why I wished that, but anyway. I think that would require a DFA approach since the matched text quickly gets quite huge.
No, it is about:
> Convert simple regular expressions to deterministic finite automaton
But anyway, nice tool.
@U0ETXRFEW I tried to hack together the solution you suggested, and it seems to generally work. When I originally intentionally crippled the rangeForDefun function, code evaluation using alt+enter stopped working. Which is expected, as it's used to look up the range to be executed, right? After implementing the strategy you suggested, I'm again able to evaluate using alt+enter, both from my core.clj file and from output.calva-repl. And it's FAST! So is appending to output.calva-repl. Even after 20k lines in the repl editor, appending is still perfectly responsive (and thanks to the previous batching effort, it's fast no matter if you're appending a single line or 1000 of them).
It will definitely need some more work; for example, comment must be handled somehow, as right now alt+enter actually tries to execute the range including the (comment) top-level form. And of course additional code cleanup and checking for regressions will be needed. But in general, it's starting to work really nicely.
There'll definitely be some more bugs in the code I have right now. For example, when I place the cursor anywhere in mapv, the threading macro gets evaluated, but I think the (do ...) should be returned as the top-level form and evaluated:
(do (->> (range 1000)
         (mapv #(println "printing line " %)))
    nil)
But these small things should hopefully be easily fixable.
Awesome news! I don’t know how good the test coverage is for this, but generally it is important that the depth argument keeps working. 😃
yeah, I'll need to understand its place first 🙂 Will look into this later after dinner
Hi, is there a way/example of putting a 'start calva' command (with project type) in a project's tasks.json file?
Simplify work :)
Then you might get away with using Custom Jack-in/Connect Sequences: https://calva.io/connect-sequences/
Hi! Sorry that I'm asking something that has probably been asked in the past (couldn't find it though):
If I understand correctly, VIM keybindings in vscode do not respect Paredit. For example, if I erase a paren with x in normal mode, then the corresponding other paren stays and I get unbalanced parens. Did I get it right?
🙏
I haven't seen that particular question before. 😎 You'll need to bind x to the paredit delete command. There's a VSpaceCode config as well.
Thank you so much!
Hi 👋 I think that my vscode config (shared mutable state, amirite?) is messing with the jack-in command.
Symptoms:
1. When I execute the jack-in command from a deps.edn file, I get the prompt to choose my repl type; when I choose Clojure CLI, nothing happens and this message appears: Running the contributed command: 'calva.jackIn' failed.
2. When I execute the jack-in command from a clojure file, nothing happens at all.
I found this https://github.com/BetterThanTomorrow/calva/issues/1182 that looks similar, so I tried to debug it like bpringe suggested.
I ran calva in debug mode and in the console I see for 1:
An error occurred while initializing project directory. TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received type function ([Function (anonymous)])
when I executed the jack-in command. I do get to the prompt to choose the project type, but then nothing happens.
(Something to note: I have an alias in the deps.edn file, and when I choose to use it, it works!) And for 2 I see the same message:
An error occurred while initializing project directory. TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received type function ([Function (anonymous)])
and the Running the contributed command: 'calva.jackIn' failed. prompt.
And nothing else happens.
This sounds like this issue: https://github.com/BetterThanTomorrow/calva/issues/1139 Do you, by chance, have the python extension installed? See here: https://github.com/BetterThanTomorrow/calva/issues/1139#issuecomment-832655477
If you do, and disabling it makes jack-in work again, then maybe we should look into what's going on there. If you don't have it installed, then if you can give us a reproducible project + vscode config, that would be helpful.