This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-02-16
Channels
- # admin-announcements (14)
- # announcements (1)
- # aws (1)
- # beginners (105)
- # boot (609)
- # braid-chat (4)
- # braveandtrue (3)
- # cider (24)
- # cljs-dev (13)
- # cljsrn (2)
- # clojure (142)
- # clojure-berlin (7)
- # clojure-ireland (7)
- # clojure-japan (10)
- # clojure-nl (4)
- # clojure-poland (76)
- # clojure-russia (198)
- # clojure-sg (4)
- # clojure-taiwan (1)
- # clojurebridge (1)
- # clojured (4)
- # clojurescript (73)
- # conf-proposals (11)
- # cursive (10)
- # datomic (32)
- # devcards (1)
- # dirac (22)
- # editors (5)
- # emacs (3)
- # events (4)
- # funcool (19)
- # hoplon (18)
- # job (1)
- # jobs (3)
- # jobs-rus (16)
- # keechma (25)
- # ldnclj (33)
- # lein-figwheel (10)
- # leiningen (4)
- # luminus (1)
- # off-topic (19)
- # om (255)
- # onyx (51)
- # overtone (1)
- # parinfer (206)
- # perun (5)
- # proton (2)
- # re-frame (3)
- # reagent (2)
- # remote-jobs (13)
- # ring-swagger (7)
- # slack-help (4)
- # yada (7)
@meow such is the EULA
oh dear that’s true.. http://www.datomic.com/datomic-pro-edition-eula.html
@meow: @jan.zy I thought everyone would know by now; this has been brought up quite a few times. Apparently this is standard for proprietary databases.
but they have a disproportionate effect on the public image of the product which is very hard to change later
I also think it would be detrimental to Datomic's adoption, not because Datomic has objective performance issues, but because non-experts would misinterpret such benchmarks, since they would still rely on assumptions that don't apply to Datomic
having said that, I would welcome a summary of various companies using Datomic along with their performance requirements. I don't think that would count as benchmarking, the only information disclosed being that it's fast enough for them.
I don't care about public image or non experts or whether Datomic is fast enough for someone else's use case.
@meow: I'm not saying I approve of this restriction, just trying to explain it realistically.
@val_waeselynck: I appreciate that. I am anything but realistic. I'm not too fond of reality. That's why I intend to create alternate realities. Thanks.
@meow: I think the best thing to do is to voice your concerns to the Cognitect team, you probably can find a thread in the mailing list on this topic
I don't participate in mailing lists. I think I've voiced my concerns here. Thank you again.
I personally was not happy about it either, but now that I know it works for my use case I feel less of a need to complain
@val_waeselynck: I truly appreciate your responses, suggestions, and support.
hey folks. Getting a critical error via h2 when starting up my free transactor. Anyone know how to recover from this? https://gist.github.com/augustl/a34a31f9df7c37fa26fd
I've been trying the procedure outlined here - using h2's own recovery tools http://infocenter.pentaho.com/help/index.jsp?topic=%2Ftroubleshooting_guide%2Ftask_h2_db_recovery_tool.html
for some reason this process yields a database with a much smaller size than my original, and it only seems to contain really old versions of the database
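For reference, the H2-level recovery procedure linked above roughly looks like the following. This is only a sketch: the jar version, paths, and database name are assumptions for illustration, not taken from the gist above.

```shell
# Sketch of recovering an H2 store with H2's own tools, assuming an H2 jar
# is available and the transactor's data dir contains datomic.h2.db.
# org.h2.tools.Recover dumps the database file to a .sql script ...
java -cp h2-1.3.171.jar org.h2.tools.Recover -dir /path/to/data -db datomic
# ... which org.h2.tools.RunScript replays into a fresh database file.
java -cp h2-1.3.171.jar org.h2.tools.RunScript \
  -url jdbc:h2:/path/to/recovered/datomic -script datomic.h2.sql
```

As noted in the thread, this only recovers what H2 actually persisted to disk, which may be older than the last acknowledged writes.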
@augustl: have you taken any Datomic level backups?
it's a staging environment (with some semi-important data) so we don't have a backup of it unfortunately
free and dev storages do pose durability risks when used without Datomic-level backups. I’ll add that the ops cost of taking backups is not particularly high - it’s a matter of a cron job, or even a tight loop in a shell, running bin/datomic backup-db to e.g. a file target. http://docs.datomic.com/backup.html#backing-up - I would recommend it for dev or staging environments if you want failure tolerance.
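As a sketch of the cron approach (the install path, database URI, and backup target here are assumptions for illustration):

```shell
# Hypothetical crontab entry: back up a staging database hourly to a file
# target, using bin/datomic backup-db from the Datomic distribution.
0 * * * * /opt/datomic/bin/datomic backup-db \
  datomic:free://localhost:4334/staging-db file:/var/backups/datomic/staging-db
```

Backups are incremental per the docs, so repeated runs against the same target only copy new segments.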
you’re correct in looking to storage-level recovery tools barring the presence of a Datomic backup. It may be that you can’t get very recent data from it, as you observe. I’m not sure re: H2 write consistency and expectations across failures, but in general Datomic storage-level writes are acked by all storages before they’re persisted to disk, simply as a performance reality. The reason I’m not sure about H2’s guarantees off the top of my head is that it’s not typical for Datomic users to rely on it for much other than sandbox scenarios, so guarantees around durability if it e.g. crashes mid-write don’t usually come into play.
but e.g. if you lost an entire Cassandra cluster mid-write (or enough nodes to account for your write consistency level spread across those nodes), you’d also corrupt the database. The same goes for a replication factor of 1. It’s the storage/disk-level reality of what can be guaranteed, and Datomic can’t tolerate missing data or inconsistent writes, given its expectations for its data in storage.