


Why would they make that part of the EULA


That makes no sense to me. Can someone explain the reasoning behind this restriction?


performance benchmarks are rarely done correctly


@meow: @jan.zy I thought everyone would know by now, this has been brought up quite a few times 🙂 apparently this is standard for proprietary databases


and rarely can be objectively compared outside of the original cases


o rly, I wonder what’s the reason behind that


(and I think that this is a good moment to start anonymous benchmarking blog 😉 )


but they have a disproportionate effect on the public image of the product which is very hard to change later


I also think it would be detrimental to Datomic's adoption: not because Datomic has objective performance issues, but because non-experts would misinterpret such benchmarks, still relying on assumptions that don't apply to Datomic


having said that, I would welcome a summary of various companies using Datomic along with their performance requirements. I don't think that would count as benchmarking, the only information disclosed being that it's fast enough for them.


I would want to know that we are making the best use of Datomic


I don't care about public image or non experts or whether Datomic is fast enough for someone else's use case.


Just being honest.


I don't like the restriction at all.


@meow: I'm not saying I approve of this restriction, just trying to explain it realistically.


@val_waeselynck: I appreciate that. I am anything but realistic. I'm not too fond of reality. That's why I intend to create alternate realities. Thanks.


@meow: I think the best thing to do is to voice your concerns to the Cognitect team, you probably can find a thread in the mailing list on this topic


I don't participate in mailing lists. I think I've voiced my concerns here. Thank you again.


I personally was not happy about it either, but now that I know it works for my use case I feel less of a need to complain 🙂


I have several concerns about committing to Datomic in the long run. This issue is one.


@val_waeselynck: I truly appreciate your responses, suggestions, and support.


hey folks. Getting a critical error via H2 when starting up my free transactor. Anyone know how to recover from this?


for some reason this process yields a database with a much smaller size than my original, and it only seems to contain really old versions of the database
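For context, the recovery "process" referred to here is presumably H2's own Recover tool, which scans a damaged `.h2.db` file and emits a SQL script that can be replayed into a fresh database. A minimal sketch of that route follows; every path, jar name, database name, and credential below is an assumption, not something stated in this thread:

```shell
# Recover scans the H2 file under data/ and writes data/datomic.h2.sql
# (assumes the free transactor's H2 files live in data/ and the db is
# named "datomic" -- adjust to your transactor's actual layout):
java -cp h2.jar org.h2.tools.Recover -dir data -db datomic

# RunScript replays that script into a fresh H2 database
# (user/password here are the typical Datomic free-storage defaults,
# but check your transactor properties):
java -cp h2.jar org.h2.tools.RunScript \
  -url jdbc:h2:./data-recovered/datomic \
  -user datomic -password datomic \
  -script data/datomic.h2.sql
```

As noted below in the thread, this can only salvage whatever was consistently on disk, which may be much older than the last acknowledged writes.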


@augustl: have you taken any Datomic level backups?


it's a staging environment (with some semi-important data) so we don't have a backup of it unfortunately


free and dev storages do pose durability risks when used without Datomic-level backups. I'll add that the ops cost of taking backups is not particularly high: it's a matter of a cron job or even a tight loop in a shell running bin/datomic backup-db to e.g. a file target. I would recommend it for dev or staging environments if you want failure tolerance.
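The "tight loop in a shell" could look like the sketch below. The database URI, backup path, and interval are assumptions for illustration; `bin/datomic backup-db` and `bin/datomic restore-db` are the actual Datomic CLI commands, and backups to the same target are incremental, so re-running is cheap:

```shell
# Periodic Datomic-level backup loop -- URIs and paths are assumptions.
DB_URI="datomic:free://localhost:4334/my-db"
BACKUP_URI="file:/var/backups/datomic/my-db"

while true; do
  bin/datomic backup-db "$DB_URI" "$BACKUP_URI"
  sleep 300   # back up every 5 minutes; successive runs are incremental
done
```

Restoring later is the mirror image: `bin/datomic restore-db "$BACKUP_URI" "$DB_URI"`.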


you’re correct in looking to storage-level recovery tools barring the presence of a Datomic backup. It may be that you can’t get very recent data from it, as you observe. I’m not sure re: H2 write consistency and expectations across failures, but in general, storage-level writes are acked by most storages before they’re persisted to disk, simply as a performance reality. The reason I’m not sure on H2’s guarantees off the top of my head is that it’s not typical for Datomic users to rely on it for much other than sandbox scenarios, so guarantees around durability if it e.g. crashes mid-write usually don’t come into play.


but e.g. if you lost an entire Cassandra cluster mid-write (or sufficient nodes to account for your write consistency level spread across those nodes), you’d also corrupt the database. Or if you had replication factor of 1. It’s the storage/disk level reality of what can be guaranteed, and Datomic can’t tolerate missing data or inconsistent writes given its expectations for its data in storage.