This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-09-24
Channels
- # 100-days-of-code (7)
- # announcements (1)
- # bangalore-clj (1)
- # beginners (87)
- # boot (6)
- # cljdoc (16)
- # cljsrn (13)
- # clojure (32)
- # clojure-dev (30)
- # clojure-italy (18)
- # clojure-nl (4)
- # clojure-serbia (1)
- # clojure-uk (48)
- # clojurescript (18)
- # cursive (18)
- # datascript (1)
- # datomic (7)
- # events (9)
- # figwheel-main (28)
- # fulcro (2)
- # hyperfiddle (2)
- # immutant (8)
- # jobs (16)
- # liberator (4)
- # nyc (2)
- # pedestal (15)
- # re-frame (8)
- # reagent (12)
- # reitit (8)
- # remote-jobs (1)
- # ring-swagger (2)
- # robots (1)
- # rum (1)
- # schema (1)
- # shadow-cljs (45)
- # spacemacs (49)
- # sql (13)
- # tools-deps (59)
- # uncomplicate (1)
- # vim (10)
@mping my guess is that: 1) It's well supported. 2) It gives good compression while still having good performance and relatively low memory usage. 3) Storage is cheap, so the differences between compression formats might not be worth the time spent finding the absolute best one.
I was wondering because, if you want to do a lookup by id, I believe Datomic will fetch and decompress the whole segment; maybe this isn’t a problem anyway
I suspect it's nothing more than GZIPInputStream and GZIPOutputStream being in the JDK already (no dependencies) and better than Deflate (the only other JDK option)
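For reference, the dependency-free round trip being described can be sketched with just those two JDK classes (the `GzipRoundTrip` class name and the sample data are illustrative, not anything from Datomic):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {
    // Compress bytes using only JDK classes -- no external dependencies.
    static byte[] gzip(byte[] plain) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(plain);
        }
        return bos.toByteArray();
    }

    // Decompress back to the original bytes.
    static byte[] gunzip(byte[] compressed) throws Exception {
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return gz.readAllBytes(); // InputStream.readAllBytes: JDK 9+
        }
    }

    public static void main(String[] args) throws Exception {
        // Repetitive data compresses well, like a typical storage segment might.
        String original = "a segment of highly repetitive data ".repeat(100);
        byte[] packed = gzip(original.getBytes(StandardCharsets.UTF_8));
        String restored = new String(gunzip(packed), StandardCharsets.UTF_8);
        System.out.println(restored.equals(original));         // round-trips
        System.out.println(packed.length < original.length()); // and shrinks
    }
}
```

Note that decompression here always starts from the beginning of the stream, which is the random-access limitation discussed below.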
yeah, in the Hadoop stack people sometimes use LZO/Snappy/bzip2 because those formats can be splittable and decompressed without reading the complete file; but again, maybe that isn’t a problem here