#datomic
2016-02-16
jan.zy08:02:38

[grep for ‘benchmarks’]

meow08:02:28

Why would they make that part of the EULA

meow08:02:08

That makes no sense to me. Can someone explain the reasoning behind this restriction?

dm308:02:12

performance benchmarks are rarely done correctly

val_waeselynck08:02:27

@meow: @jan.zy I thought everyone would know by now; this has been brought up quite a few times 🙂 apparently this is standard for proprietary databases

dm308:02:38

and rarely can be objectively compared outside of original cases

jan.zy08:02:53

o rly, I wonder what’s the reason behind that

jan.zy08:02:25

(and I think that this is a good moment to start anonymous benchmarking blog 😉 )

dm308:02:45

but they have a disproportionate effect on the public image of the product which is very hard to change later

val_waeselynck08:02:47

I also think it would be detrimental to Datomic's adoption, not because Datomic has objective performance issues, but because non-experts would misinterpret such benchmarks, since they would still rely on assumptions that don't apply to Datomic

val_waeselynck08:02:19

having said that, I would welcome a summary of various companies using Datomic along with their performance requirements. I don't think that would count as benchmarking, since the only information disclosed would be that it's fast enough for them.

meow08:02:00

I would want to know that we are making the best use of Datomic

meow08:02:07

I don't care about public image or non experts or whether Datomic is fast enough for someone else's use case.

meow08:02:26

Just being honest.

meow08:02:46

I don't like the restriction at all.

val_waeselynck10:02:44

@meow: I'm not saying I approve of this restriction, just trying to explain it realistically.

meow10:02:10

@val_waeselynck: I appreciate that. I am anything but realistic. I'm not too fond of reality. That's why I intend to create alternate realities. Thanks.

val_waeselynck10:02:34

@meow: I think the best thing to do is to voice your concerns to the Cognitect team, you probably can find a thread in the mailing list on this topic

meow10:02:38

I don't participate in mailing lists. I think I've voiced my concerns here. Thank you again.

val_waeselynck10:02:42

I personally was not happy about it either, but now that I know it works for my use case I feel less of a need to complain 🙂

meow10:02:21

I have several concerns about committing to Datomic in the long run. This issue is one.

meow10:02:20

@val_waeselynck: I truly appreciate your responses, suggestions, and support.

augustl22:02:24

hey folks. Getting a critical error via h2 when starting up my free transactor. Anyone know how to recover from this? https://gist.github.com/augustl/a34a31f9df7c37fa26fd

augustl23:02:48

for some reason this process yields a database with a much smaller size than my original, and it only seems to contain really old versions of the database
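[Editor's note: the "process" referred to here is storage-level recovery of the corrupt H2 file. A minimal sketch of that path, assuming the standard H2 `Recover` tool that ships with H2; the jar version, file locations, and database name are assumptions, not taken from the gist:]

```shell
# Hypothetical sketch: salvage what H2 can read from a corrupt datomic.h2.db file.
# org.h2.tools.Recover scans the database file and writes a SQL dump
# (datomic.h2.sql) containing whatever data it could recover.
cd /path/to/datomic/data                      # directory holding datomic.h2.db (assumed)
java -cp h2-1.3.171.jar org.h2.tools.Recover -db datomic

# Replay the dump into a fresh database file. Data lost mid-write is gone,
# which is why the recovered database can come out smaller / older than
# the original.
java -cp h2-1.3.171.jar org.h2.tools.RunScript \
     -url jdbc:h2:./datomic-recovered -script datomic.h2.sql
```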

bkamphaus23:02:03

@augustl: have you taken any Datomic level backups?

augustl23:02:34

it's a staging environment (with some semi-important data) so we don't have a backup of it unfortunately

bkamphaus23:02:31

free and dev storages do pose durability risks when used without Datomic-level backups. I'll add that the ops cost of taking backups is not particularly high: it's a matter of a cron job, or even a tight loop in a shell, running bin/datomic backup-db to e.g. a file target. http://docs.datomic.com/backup.html#backing-up - I would recommend it for dev or staging environments if you want failure tolerance.
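[Editor's note: a minimal sketch of the backup setup described above, per docs.datomic.com/backup.html; the database URI, install path, and backup directory are assumptions:]

```shell
# One-off Datomic-level backup of a Datomic Free database to a file target.
# Backups to the same target URI are incremental.
bin/datomic backup-db \
    datomic:free://localhost:4334/my-db \
    file:/var/backups/my-db

# Or as a crontab entry, e.g. at the top of every hour:
# 0 * * * * /opt/datomic/bin/datomic backup-db datomic:free://localhost:4334/my-db file:/var/backups/my-db
```

Restoring goes the other way with `bin/datomic restore-db`, from the backup URI to a (new or existing) database URI.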

bkamphaus23:02:13

you’re correct in looking to storage-level recovery tools barring the presence of a Datomic backup. It may be that you can’t get very recent data from it, as you observe. I’m not sure re: H2 write consistency and expectations across failures, but in general storages ack Datomic’s writes before they’re persisted to disk, as a performance reality. The reason I’m not sure about H2’s guarantees off the top of my head is that it’s not typical for Datomic users to rely on it for much other than sandbox scenarios, so durability guarantees when it e.g. crashes mid-write usually don’t come into play.

bkamphaus23:02:22

but e.g. if you lost an entire Cassandra cluster mid-write (or enough nodes that your write consistency level could no longer be satisfied), you’d also corrupt the database. Or if you had a replication factor of 1. It’s the storage/disk-level reality of what can be guaranteed, and Datomic can’t tolerate missing data or inconsistent writes given its expectations for its data in storage.