This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # announcements (1)
- # asami (2)
- # babashka (9)
- # babashka-sci-dev (33)
- # beginners (6)
- # calva (5)
- # cider (1)
- # clj-kondo (2)
- # clojure (79)
- # clojure-dev (8)
- # clojure-europe (1)
- # clojurescript (56)
- # core-logic (1)
- # datalevin (1)
- # emacs (20)
- # funcool (3)
- # holy-lambda (3)
- # honeysql (28)
- # improve-getting-started (11)
- # introduce-yourself (4)
- # lsp (21)
- # off-topic (9)
- # other-languages (5)
- # polylith (3)
- # quil (3)
- # releases (1)
- # rewrite-clj (9)
- # sql (5)
- # tools-deps (29)
- # xtdb (9)
Seeing hashing times in the hundreds of milliseconds for some typical pod binaries. Which is a big chunk of the caching speed up. Wondering if it defeats the purpose…
Was looking into perf optimizations too
Perhaps checksumming might be appropriate here? I would want to keep the :cache param if so, because checksums can (very rarely) have collisions.
Tried sha-1 too and it was about the same
CRC-32, or another one I found called Adler-32
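Both checksums mentioned here ship with the JDK in `java.util.zip`, behind the shared `Checksum` interface. A minimal sketch (class and helper names are invented for illustration, not from babashka's code):

```java
import java.util.zip.Adler32;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

public class ChecksumDemo {
    // CRC32 and Adler32 both implement java.util.zip.Checksum,
    // so one helper covers either algorithm.
    static long checksum(Checksum c, byte[] data) {
        c.update(data, 0, data.length);
        return c.getValue();
    }

    public static void main(String[] args) {
        byte[] data = "hello pod".getBytes();
        System.out.println("CRC-32:   " + checksum(new CRC32(), data));
        System.out.println("Adler-32: " + checksum(new Adler32(), data));
    }
}
```

Adler-32 trades some error-detection strength for speed, which is why it comes up here as a cheaper alternative to cryptographic hashes.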
we could do md5 + add the filename to the cache file: /Users/borkdude/pod-foobar => dcccadfc-pod-foobar.cache
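The naming scheme above (digest prefix plus the pod's file name) could look roughly like this; the class and method names are hypothetical, and the chat example shows a shortened digest prefix while this sketch uses the full MD5 hex string:

```java
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class CacheName {
    // Hex-encode the MD5 of the pod bytes and prefix it to the pod's
    // file name, producing e.g. "<md5-hex>-pod-foobar.cache".
    static String cacheFileName(Path podPath, byte[] podBytes) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5").digest(podBytes);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b)); // unsigned two-digit hex
        return hex + "-" + podPath.getFileName() + ".cache";
    }

    public static void main(String[] args) throws Exception {
        // Stand-in bytes; a real call would read the pod binary itself.
        System.out.println(cacheFileName(Path.of("/Users/borkdude/pod-foobar"), "fake pod bytes".getBytes()));
    }
}
```

Including the file name alongside the digest keeps cache entries human-readable and makes accidental collisions between different pods even less likely.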
MD5 didn't seem any faster either
I tried using streams instead of reading the whole file into a byte array but that was slower
So might still be doing something wrong
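The two approaches being compared here can be sketched as follows (class names invented; buffer size is an assumption, and an undersized buffer is one plausible reason the streaming path measured slower):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.DigestInputStream;
import java.security.MessageDigest;

public class HashFile {
    // Whole-file approach: one read, one digest call.
    static byte[] hashAllAtOnce(Path p) throws Exception {
        return MessageDigest.getInstance("SHA-1").digest(Files.readAllBytes(p));
    }

    // Streaming approach: the digest is updated as the stream is read.
    static byte[] hashStreaming(Path p) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        try (InputStream in = new DigestInputStream(Files.newInputStream(p), md)) {
            byte[] buf = new byte[1 << 16]; // 64 KiB buffer
            while (in.read(buf) != -1) { /* digest updated as a side effect */ }
        }
        return md.digest();
    }
}
```

Both produce the same digest; the difference is purely in memory use and per-read overhead.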
Anyway, going to keep trying for a bit to see if I can get it sped up and if not might just go back to
Or we could make the cache relative to the bb.edn file instead of the global one, and keep the :cache param as well. Let's just do that
Not sure I follow
Since local pods aren't registry pods, maybe it doesn't make sense to store the cache globally, but rather relative to the project.
Ah I see. So the goal of storing the cache in project is to make it more obvious to the user that it exists?
Or… we could tell people using local pods in prod that if they want caching they need to store the sha-512 hash of their pod binary in name-of-pod.sha512 in the same dir. Then we read that and use it for caching. They can just build the hashing into their build pipeline, since it really only needs to be calculated once when the pod is compiled. Or we could compute the hash ourselves if that file isn't found and then just check whether the pod's timestamp is newer than the cache? Maybe getting too complicated… What do you think?
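The sidecar-file idea above could look roughly like this (a hypothetical sketch, not babashka's actual implementation; `HexFormat` requires Java 17+):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

public class PodCacheKey {
    // If "<pod>.sha512" exists next to the pod binary, trust its contents
    // as the cache key (the pod builder is responsible for keeping it
    // current); otherwise fall back to hashing the binary ourselves.
    static String cacheKey(Path pod) throws Exception {
        Path sidecar = pod.resolveSibling(pod.getFileName() + ".sha512");
        if (Files.exists(sidecar)) {
            return Files.readString(sidecar).trim();
        }
        byte[] digest = MessageDigest.getInstance("SHA-512").digest(Files.readAllBytes(pod));
        return HexFormat.of().formatHex(digest);
    }
}
```

The trade-off raised in the next messages applies: the fallback path still pays the full hashing cost, and a stale sidecar silently serves a stale cache.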
you still need to calculate the sha512 of the pod to compare it with the stored sha512
and this will still be slower than just starting the pod and reading the uncached describe message probably
Well, the idea would be that you don't do that and it's up to the pod builder to bust the cache by updating that themselves
But yeah, I'm not sure it's a great approach. Was just trying to think of a way to only compute the hash when it changes.
Note that we only do this for local pods, not for registry pods, which are always cached.