
Calva v2.0.183 is out with these changes: • Fix: The output window fix is needed because printing to a regular editor is quite slow, so we have to manage the printing queue so that things are printed in order. There can be thousands of prints in the queue by the time you realize you don't have the time to wait for it all to print. Now the queue is truncated when the evaluation is interrupted. Bliss. Thanks @brandon.ringe for fixing it!

Tomas Brejla07:03:09

> The output window fix is because that printing to a regular editor is quite slow

I wanted to ask one thing about that. Occasionally, this window is really too slow for me, and in such cases I wanted to just close the window and print out the content of output.calva-repl using just tail (with follow mode) in a console. But I was surprised that this didn't work correctly, as the new lines don't seem to be simply appended to that file. Instead, the file seemed to be re-created over and over again. Does such behavior make sense? Does it need to be like that?

Tomas Brejla07:03:10

It's been a while since then, so it's possible that this might have changed.

Tomas Brejla07:03:10

basically my plan was to do something like

tail -f .calva/output-window/output.calva-repl 2>/dev/null | zprint '{:color? true}'
But it seems to be non-trivial to do.


Iirc we use the same mechanism for writing to the file whether it is open or not. We probably (not sure) could switch to just appending to the file when it is not showing.


We have also been thinking about creating an option for letting the output go somewhere else than to the output window.

Tomas Brejla08:03:40

> letting the output go somewhere else than to the output window

That might be nice as well. But I'm still wondering... Is it not possible to simply append lines to output.calva-repl even if it's being shown in the "repl editor"?


I don’t think so. It was a while since I implemented it, but I seem to recall testing that and not getting the results that I wanted. VS Code does not have an API for letting me know when the editor is updated if it is updated from underneath (again, as I recall it).

Tomas Brejla09:03:33

so in case there's 10MB of text in that file and you need to add a single line, you actually have to read the 10MB content to memory, append one line in memory and write 10MB+1line to that file again?


Hmmm, I don’t know how VS Code does it. ¯\_(ツ)_/¯

Tomas Brejla15:03:25

The "Stop printing to the output window when all evaluations are interrupted" functionality is nice :thumbsup:. It saves one from having to juggle the output.calva-repl file. I used to close that window for a few seconds, as that significantly improves logging speed 🙂. Then I reopened it and hoped that it had already finished its logging 😄

Btw, I believe the append function in results-doc.ts could be altered so that it doesn't always pass a single line from the queue to be rendered. When the queue is large (let's say 1000 rows), the current behavior results in every single line being inserted into the editor via its own 1-line `append` call. That call then performs insertion, scrolling, and, probably most expensive of all, syntax-highlighting the whole file again (at least I think highlighting gets executed here). All that, possibly just because of 1 inserted line.

By "batching" those lines you lose the sense of a "quick refresh loop": the scrolling doesn't appear as visually fluid as when batching by 1. But in the end it's way, way faster. By default it's terribly slow to even print 500 rows to the log. It took 25 seconds (!?) to render them with the current implementation. That's just 500 rows, appended to an empty editor. 😱 When I batched by 20 items (or fewer, if there are fewer than 20 in the queue), the time dropped to an acceptable ~2-3s. It's of course possible to use some dynamic batch size, for example derived from the current number of rows and the number of items in the queue, with some division, sane capping, etc.

Here's a video with a comparison. Disclaimer: it's really ugly code, I just quickly hacked it together to test the hypothesis 😉

Of course, once the editor holds too many lines (let's say 10k+), it becomes sluggish no matter what. Ideally one would like only the appended lines to be syntax-highlighted, but it's questionable whether this can be achieved easily. What do you think? Would a similar form of batching make sense?
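The batching idea described above could be sketched roughly like this (a hypothetical sketch with illustrative names, not Calva's actual results-doc.ts API): drain up to a fixed number of items from the queue and hand the editor one joined insertion instead of one edit per line.

```typescript
// Remove up to batchSize items from the front of the queue, preserving order.
function drainBatch(queue: string[], batchSize: number): string[] {
  return queue.splice(0, Math.min(batchSize, queue.length));
}

// Produce the text for one editor edit covering the whole batch,
// or undefined when there is nothing to print.
function nextInsertion(queue: string[], batchSize: number): string | undefined {
  const batch = drainBatch(queue, batchSize);
  return batch.length > 0 ? batch.join("\n") + "\n" : undefined;
}
```

With 1000 queued rows and a batch size of 20, this would mean 50 insert-scroll-highlight cycles instead of 1000, which matches the speedup reported above.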

Tomas Brejla16:03:09

btw, good docs on Hacking Calva! 👏 👏 👏 I had no issues bringing up the "dev session", with zero previous knowledge of the Calva codebase and VS Code extension development.


We've discussed batching before and I don't remember the issues around it, but it may be something we could try. Here's a discussion around the slowness issue:


So, there must be some max delay to wait for a batch size, right?


If one message comes into the queue, then no more for several seconds, we'd want to only wait x milliseconds before printing what's in the queue if the batch size isn't met, right?


What if we always send out everything in the queue?


I think we'd get back to the same problem, but I'm not sure. Without waiting some period of time (though it could be pretty small, less than 200 ms or something), we might end up printing such small batches that we're back to the same issue, but... maybe not. I think this should be tried.
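The flush policy being discussed, a batch-size threshold plus a maximum wait so a lone message still prints promptly, could be sketched as a pure decision function (all names here are illustrative assumptions, not Calva code):

```typescript
// Decide whether the print queue should be flushed now.
function shouldFlush(
  queueLength: number,
  oldestEnqueuedAt: number, // timestamp (ms) of the oldest queued item
  now: number,
  batchSize: number,
  maxDelayMs: number
): boolean {
  if (queueLength === 0) return false; // nothing to print
  if (queueLength >= batchSize) return true; // batch full: flush now
  // Partial batch: flush anyway once the oldest item has waited long enough,
  // so a single message is not held hostage waiting for a full batch.
  return now - oldestEnqueuedAt >= maxDelayMs;
}
```

The caller would check this on each enqueue and from a short timer (e.g. the "less than 200 ms" mentioned above), flushing whatever is queued whenever it returns true.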


@brdloush Thanks for experimenting with this. If you'd like to try the above, I think a PR would be great - maybe a rough draft first so we can try it out / test it and discuss.


Since printing is so slow, it seems like Clojure would fill up the queue quite nicely. But it was a while since I visited that code, and even when I did I couldn’t figure it out so you had to fix it, @brandon.ringe. 😃


> Since printing is so slow, it seems like Clojure would fill up the queue quite nicely. You may be right, here


And yeah, async operations made "synchronous" via queuing and recursion. Maybe this code could be refactored, I haven't looked in a while. 😄 The queue, at least, seems necessary to keep, because of the ordering issue we had in the past and/or some things not being printed (or being replaced, which looked like they weren't printed).


Yeah, the queue must stay, I am pretty sure. @brdloush the smooth feeling when printing one item at a time is not by design. It is mainly an effect of printing being slow. Good to know if you give this a shot. 😃


@slawek098: I’ll look at your report a bit closer later today and come back to you. Does not look good from my superficial reading. I wonder what is going on!


@pez Hey, thanks! I think I see what’s going on. Second screenshot, showing broken defprotocol was caused by VSCode “format on save” which I had enabled.


(but I read that it should be using Calva’s formatting)


And the first screenshot, when typing implementations into defrecord, is simply caused by: > Calva’s code formatter sets the default keybinding of its Format Current Form command to `tab`. Meaning that most often when things look a bit untidy, you can press `tab` to make things look pretty. Good to know, right? For performance reasons it only formats the current enclosing form, so sometimes you want to move the cursor up/out a form (`ctrl+up`) first. See the docs for more on moving the cursor structurally through your code.


It’s correctly indenting current form but in order to correctly indent defrecord method implementation, you need to indent it in defrecord context.


It should actually do that. The formatter considers the current enclosing form….


If there’s anything I can do - please let me know!


And thanks for all the great work you’re putting into Calva. Some day I might be able to dump emacs 🙂


I now see what you mean with defrecord needing to be considered as a whole. Can you file an issue about this and paste some actual text there in addition to screen shots? It makes it easier to reproduce. If the format on save seems to be a separate thing, file it as a separate issue. (I don’t think there is much we can do about it, but good to have it on the radar for now.)


Also, a bit of confusion here if it is defrecord or defprotocol we are talking about. 😃


Using Calva for babashka scripting just got much, much better! Thanks @borkdude and @brdloush!
