
One of those last few might have been me :sheep: Anyway, I've been trying to hack on #lsp, but I'm an absolute noob in the JVM world. I'm following its dev instructions in Calva, but I get this:

Exception in thread "main" java.lang.UnsupportedClassVersionError: clojure_lsp/feature/test_tree/TestTreeParams has been compiled by a more recent version of the Java Runtime (class file version 62.0), this version of the Java Runtime only recognizes class file versions up to 52.0
What do I do? I don't know where in vsc the jdk I want to use is configured 😞


nvm, had to mess around with JAVA_HOME


Yep, that message means the class was compiled with a newer JDK than the one you're running it with
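For what it's worth, the numbers in that error can be decoded by hand: for Java 1.2 and later, a class-file major version N corresponds to Java SE release N − 44. A tiny sketch (the class and method names here are just for illustration):

```java
// Class-file major version N corresponds to Java SE release (N - 44)
// for Java 1.2 and later; e.g. the two versions from the error above.
public class ClassFileVersion {
    static int toJavaRelease(int classFileMajor) {
        return classFileMajor - 44;
    }

    public static void main(String[] args) {
        System.out.println("62.0 -> Java " + toJavaRelease(62)); // 62.0 -> Java 18
        System.out.println("52.0 -> Java " + toJavaRelease(52)); // 52.0 -> Java 8
    }
}
```

So the class was built on Java 18 but run on a Java 8 runtime, which matches the JAVA_HOME fix above.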


External files checksum development 🧵


@U07M2C8TT @U0BUV7XSA you may be interested in this: I added a first iteration of checksumming external filenames, with more details about the chosen implementation. I made some tests and it worked very well, so we may want to use that even for project sources if we don't find anything that would make it unreliable. We'd need to persist the project analysis in the db to make that possible though, so we need to be a little more careful that it wouldn't cause any undesired side effects


Cool @UKFSJSM38! Sounds promising. Regarding project source… what would your plan be? At shutdown save analysis into the db, along with file modification time? That’s a little problematic because the analysis could be for an unsaved buffer.


Yep, there are tricky cases like this one, so for now we're good doing this only for external files. But if we manage to make it work reliably for project sources, it would be a huge startup improvement for huge projects like metabase and others we know of,


where project source analysis takes more than a minute on every restart


Yeah, I imagine that’d help big projects a lot. For that use case, an actual (non-cryptographic) hash of the text that was analyzed might work better than the file modification time. It’d be slower to check on startup, but more likely to be correct. It’d be a little strange to have two algorithms for checksumming.
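To make the idea concrete, here is a minimal sketch of checksumming the analyzed text with a fast non-cryptographic algorithm (CRC32 from `java.util.zip`, chosen purely as an illustration; clojure-lsp may well use a different algorithm, and the class/method names are hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Sketch: checksum the text that was analyzed, instead of trusting the
// file's modification time. Same text -> same checksum, even if mtime changed.
public class TextChecksum {
    static long checksum(String text) {
        CRC32 crc = new CRC32();
        crc.update(text.getBytes(StandardCharsets.UTF_8));
        return crc.getValue();
    }

    public static void main(String[] args) {
        // Identical content always produces the same checksum, so a cached
        // analysis can be reused even if the file was merely re-saved.
        System.out.println(checksum("(ns demo.core)") == checksum("(ns demo.core)")); // true
        // Edited content produces a different checksum in practice,
        // triggering re-analysis of just that file.
        System.out.println(checksum("(ns demo.core)"));
        System.out.println(checksum("(ns demo.core) ; edited"));
    }
}
```

The trade-off is as described above: computing this requires reading every file's bytes on startup, so it's slower than an mtime comparison, but it can't be fooled by touched-but-unchanged files.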


Anyway, those are all considerations for the future. Nice work on getting this far!


Yeah, agreed. If that works relatively fast we may want to make both use that algorithm. IMO it should be a little slower than the current way but way faster than re-analyzing each file, so it may be worth testing, but it would probably be ok


I suppose you could change the algorithm for q/filter-external to use your new cache of external filenames. Dunno… the other way seems to be working well.


Indeed, good point