
@mping my guess is that: 1) it's well supported; 2) it gives good compression while still having good performance and relatively low memory usage; 3) storage is cheap, so the differences between compression formats might not be worth the time spent finding the absolutely optimal one.


I was wondering because, if you want to do a lookup by id, I believe Datomic will fetch and decompress the whole segment; maybe this isn't a problem anyway


I suspect it's nothing more than GZIPInputStream and GZIPOutputStream already being in the JDK (no dependencies) and better than raw Deflate (the only other JDK option)
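To illustrate the point about zero dependencies: the two JDK classes mentioned above are all you need for a full compress/decompress round trip. This is just a minimal sketch of the standard-library API, not anything Datomic-specific; the class and method names here are my own.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Round-trip a byte array through the JDK's built-in GZIP streams.
// No external dependencies -- everything is in java.util.zip.
public class GzipRoundTrip {
    static byte[] compress(byte[] data) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        } // closing the stream flushes the GZIP trailer
        return bos.toByteArray();
    }

    static byte[] decompress(byte[] gzipped) throws Exception {
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(gzipped))) {
            return gz.readAllBytes(); // Java 9+
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "some segment data".getBytes(StandardCharsets.UTF_8);
        byte[] restored = decompress(compress(original));
        System.out.println(new String(restored, StandardCharsets.UTF_8));
        // prints "some segment data"
    }
}
```

Note that a whole-stream API like this matches the earlier observation: you decompress the entire payload, you can't seek into the middle of it.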


yeah, in the Hadoop stack people sometimes use LZO/Snappy/bzip2 because those formats can be splittable and decompressed without reading the complete file; but again, maybe that isn't a problem here


Can a first deployment of Datomic Cloud use the individual compute and storage templates, or do I need to go through the Marketplace template?


Is KeyName a required parameter for the Datomic compute stack?


It appears KeyName is required. Why is this so?