#vscode
2020-02-06
sogaiu 02:02:22

@pez is there something i can replicate locally to experience the "pain" of 10K chars with calva? is it just a matter of getting calva running and getting back a large result? how about just viewing larger clojure files? i typically test with clojure.core as that is > 250K (when i feel like real pain i go for clojurescript's core 🙂 )

pez 06:02:21

You can take something like clj or cljs.core and replace all newlines with spaces. But that will get REALLY painful, so taking something smaller is recommended.
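A quick way to cook up such a file, sketched in Python (the `src` snippet here is just a stand-in; read a local copy of core.clj into `src` for the full effect):

```python
# Flatten Clojure source into a single long line to stress-test the editor.
# The small snippet below stands in for a real core.clj -- read a local
# copy of the file into `src` instead for the full effect.
src = "(ns demo.core)\n\n(defn add\n  [a b]\n  (+ a b))\n"

# Replace newlines with spaces, dropping blank lines along the way.
one_line = " ".join(line for line in src.split("\n") if line.strip())

with open("one-line.clj", "w") as f:
    f.write(one_line)
```

Opening the resulting one-line.clj in the editor then reproduces the long-line scenario.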

pez 06:02:13

But just spitting out a big result w/o pretty print works as well.

sogaiu 09:02:13

@pez i did the following:
* removed comments from clj's core.clj
* replaced newlines with spaces
* configured settings to include: `"editor.maxTokenizationLineLength": 1000000`
* closed the file
* restarted
* opened the file

it took maybe around 2 secs, but i got a result that looked fine. does the duration sound close to what you experience? (fwiw, the max col was a bit over 260,000.) my impression is that there aren't really that many clojure source files that are that big (though ofc that's a guess). i pulled down a relatively recent copy of the latest releases of things on clojars, so i suppose i could go over all of the contained source files to find out more specifics regarding typical clojure source file sizes...
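A rough sketch of that file-size survey (the directory layout and file extensions are assumptions; `root` would point at the unpacked clojars sources):

```python
# Rough sketch: walk a directory of Clojure sources and record, per file,
# the total character count and the longest line -- to see how common
# very long lines actually are.
import os

def survey(root):
    stats = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith((".clj", ".cljs", ".cljc")):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="replace") as f:
                    lines = f.read().split("\n")
                total = sum(len(line) for line in lines)
                longest = max((len(line) for line in lines), default=0)
                stats.append((path, total, longest))
    # longest-line-first, so pathological files surface at the top
    return sorted(stats, key=lambda t: -t[2])
```

Sorting by the longest-line field surfaces the files most likely to trip a line-based tokenizer.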

pez 09:02:55

The tokenization for the syntax highlight grammar is not the problem. It is Calva's tokenization for structural editing that can't cope with those long lines. So if you open a regular clojure file first, then open that 260K-long-line file you cooked, you should note that you can't delete or do any other structural editing in the regular clojure file for a VERY long time. Minutes, I think.

sogaiu 10:02:53

ah, i see. gave that a try and it did take some minutes. ouch.

pez 10:02:57

Ouch! But the rainbow colors are faster, using yet another tokenizer. I think it could be some regexp that is running amok.
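For illustration, a toy regexp (not Calva's actual grammar) shows the "running amok" failure mode: nested quantifiers make a backtracking engine's work grow exponentially on input that almost matches.

```python
import re
import time

# Nested quantifiers are a classic backtracking trap: on a string of a's
# with no trailing "b", the engine tries every way of splitting the a's
# into groups before giving up -- exponential in the input length.
pattern = re.compile(r"(a+)+b")

for n in (14, 18, 22):
    start = time.perf_counter()
    assert pattern.match("a" * n) is None  # fails, but only after heavy backtracking
    print(n, "a's:", round(time.perf_counter() - start, 4), "s")
```

Each step of 4 in `n` multiplies the running time by roughly 16, which is how a single long line can stall an editor for minutes.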