I am looking to generate some queries based on user input. I think this is doable, but given that Datomic doesn’t have a query optimiser, how do I make sure that I order the clauses in my :where the right way (or at least a reasonably right way)?
Is it possible for a transaction to have only partially completed due to java.lang.OutOfMemoryError?
@gardnervickers: Since 0.9.5130 Datomic backups have been incremental if they’re issued against the same storage location: http://docs.datomic.com/backup.html#differential-backup
@sdegutis: Transactions are atomic, so they either complete successfully or fail, there is no way to have a ‘partial transaction’; did you see an OOM error on the transactor?
@casperc: Are you generating the entire query or just altering parameters based on the user input?
Phew. @marshall I just verified that it did not partially go through. Thank goodness for ACID compliance I suppose.
@marshall: I think so... I tried to d/transact-async a hundred thousand entities into existence, and got the OutOfMemoryError.
@sdegutis: create 100,000 entities within a single transaction? That is a bit on the large side for the number of datoms in a single txn - do you need to be able to create those together atomically?
@marshall: Probably not, I'm devising a way of splitting this data migration into multiple transactions. Think I found a way.
I’d definitely recommend splitting something that size up. I don’t think it should be particularly hard to get it through in 20 minutes, of course it will depend on the specifics of your system and schema, etc.
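(A minimal sketch of that kind of batching, assuming `conn` is the Datomic connection and `entity-maps` is the full seq of entity maps the migration needs to assert; both names are placeholders:)

```clojure
(require '[datomic.api :as d])

(defn migrate-in-batches
  "Split a large migration into transactions of `batch-size` entities each,
  waiting for each transaction to complete before sending the next."
  [conn entity-maps batch-size]
  (doseq [batch (partition-all batch-size entity-maps)]
    @(d/transact conn batch)))

;; e.g. (migrate-in-batches conn entity-maps 1000)
```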
@marshall: Someone yesterday mentioned that Datomic recommends a maximum of 10 billion datoms in a database. After this migration we'll have gone from 5 million to 7 million, which eases my mind considering it's not even a thousandth of the max.
Incidentally, the “Understanding and Using Reified Transactions” talk here: http://www.datomic.com/videos.html discusses a few approaches to large operations that span transaction boundaries
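(One technique that talk covers is annotating each transaction so the pieces of a multi-transaction operation can be found again later. A hedged sketch, assuming a `:migration/batch` attribute is already installed in the schema; the attribute name is made up for illustration:)

```clojure
;; `batch` is one batch of entity maps from the migration; the extra map
;; asserts :migration/batch on the transaction entity itself, so every
;; transaction belonging to this migration can be queried for afterwards.
@(d/transact conn
   (conj (vec batch)
         {:db/id (d/tempid :db.part/tx)
          :migration/batch 42}))
```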
@marshall: I am generating the entire query. Our data model forms a DAG and I am generating a :where clause joining from (if that is the right way to put it) one of the leaf nodes to the root.
It might just be that it is not a problem though if I put the clauses with input parameters first and then just join up towards the root.
@casperc: If your user-parameterized clauses narrow the dataset fairly substantially, that sounds like a reasonable place to start. I’d recommend against premature optimization and tend to worry about making it faster only if you see significant perf issues
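(As an illustration of putting the user-bound clause first, a sketch with made-up attribute names — the selective clause narrows the result set before the remaining clauses join upward toward the root:)

```clojure
;; The user-supplied ?leaf-name binds the most selective clause, so it
;; comes first; the remaining clauses walk up toward the root.
(d/q '[:find ?root
       :in $ ?leaf-name
       :where
       [?leaf :node/name ?leaf-name]
       [?mid  :node/children ?leaf]
       [?root :node/children ?mid]]
     db
     "value typed by the user")
```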
@casperc: might be helpful to look at the code in the mbrainz sample database for generating rules: https://github.com/Datomic/mbrainz-sample/blob/master/src/clj/datomic/samples/mbrainz/rules.clj and the resulting rules: https://github.com/Datomic/mbrainz-sample/blob/master/resources/rules.edn — they do graph traversal across collaborating artists.
@marshall: Sound advice, I’ll see how it performs. 🙂 I guess I was looking for some reference material of some sort for generating the query
Right, the other thing I was going to say was that it sounded like a recursive rule might fit the problem, depending on your schema.
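(A hedged sketch of what such a recursive rule could look like, again with hypothetical attribute names, where `:node/children` links a parent to its children:)

```clojure
;; ?ancestor reaches ?descendant through any number of :node/children hops.
(def rules
  '[[(descendant ?ancestor ?descendant)
     [?ancestor :node/children ?descendant]]
    [(descendant ?ancestor ?descendant)
     [?ancestor :node/children ?child]
     (descendant ?child ?descendant)]])

;; Find the roots that reach a leaf with the user-supplied name.
(d/q '[:find ?root
       :in $ % ?leaf-name
       :where
       [?leaf :node/name ?leaf-name]
       (descendant ?root ?leaf)]
     db rules "value typed by the user")
```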
@sdegutis: if you haven’t yet, might want to check out the transaction pipeline example here: http://docs.datomic.com/best-practices.html#pipeline-transactions — though that’s for the step after you break up the transaction. (you put transactions on a channel that the tx-pipeline function would take from).
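(A very stripped-down sketch of that shape — the docs’ pipeline example adds parallelism and error handling; here `conn` is assumed to be an existing Datomic connection, and batches of tx-data are simply taken off a channel and transacted one at a time:)

```clojure
(require '[clojure.core.async :as a]
         '[datomic.api :as d])

;; Channel the migration code puts batches of tx-data onto.
(def tx-chan (a/chan 10))

;; Consumer: take each batch off the channel and transact it.
;; Runs on a real thread because derefing the transact future blocks.
(a/thread
  (loop []
    (when-let [tx-data (a/<!! tx-chan)]
      @(d/transact-async conn tx-data)
      (recur))))

;; Producer side: (a/>!! tx-chan batch) for each batch, then (a/close! tx-chan).
```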
is there any way that calling (d/tempid :db.part/user) inside a lazy seq could result in producing the same db/id?
@bvulpes: not generally, but there are two possible issues: 1) messing up the code so you generate the tempid once and reuse the generated value, and 2) transaction functions that generate tempids can unintentionally conflict with tempids generated on the peer
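(To make the first pitfall concrete, a small sketch with a hypothetical `names` seq and `:node/name` attribute:)

```clojure
;; Wrong: the tempid is generated once and shared by every map,
;; so all of the maps end up describing the same entity.
(let [id (d/tempid :db.part/user)]
  (map (fn [n] {:db/id id :node/name n}) names))

;; Right: generate a fresh tempid inside the fn, one per entity.
(map (fn [n] {:db/id (d/tempid :db.part/user) :node/name n}) names)
```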