
is there some function in datomic to convert TX data maps to a list of datoms?


in other words, when we define datoms in map form, to convert it to list of lists form?


Nothing provided; the logic is basically a `[:db/add entid mapkey mapval]` for every mapkey/mapval in the map form (except the ID, of course)
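A minimal sketch of that expansion (not an official Datomic API; the fn name and the assumption that the map carries its id under `:db/id` are mine):

```clojure
;; Sketch only: expand one tx-data map into :db/add list form.
;; Assumes the map carries its entity id under :db/id.
(defn map->list-form [m]
  (let [e (:db/id m)]
    (for [[a v] (dissoc m :db/id)]
      [:db/add e a v])))

(map->list-form {:db/id 42 :person/name "Ana" :person/age 30})
;; => ([:db/add 42 :person/name "Ana"]
;;     [:db/add 42 :person/age 30])
```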


I believe someone in here or on group has pointed to a gist or list post with an example before.


@bkamphaus: one more question...


When using datomic, in some top level function which should execute atomically, meaning its execution constitutes a unit of work,


and this top level fn calls a few low-level fns which do something with the db, but these low-level fns are actually just preparing tx-data that the top level fn collects, waiting until the very end to transact the collected tx data (postponing the side effect until the very end)...


Do you code your functions this way when using datomic in your apps? Or do you deal with domain objects represented as maps, transform them in these functions, and then at the very end just transact the whole new state of the domain entity using some entity->datomic-tx-data function? That means one doesn't know which fields of these maps were updated, e.g., but always transacts the whole entity to datomic (all key/value pairs as datoms)?


to recap, the 2 approaches are:


1. collect TX data from each db-"mutating" low-level function, and at the end of the top level fn just transact the whole collection of tx data


2. load entities from the db as clojure maps, transform them during fn execution, and at the end just transact the whole new state of these entities back to datomic, with all keys


in the second approach we don't care about a fine-grained approach where you take care to only transact the changed entities' keys; we just transact the whole entity state at the end of the top level fn (unit of work)
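A hedged sketch of approach 1 above, under my own assumptions (the fn names `rename-user`, `touch-user`, `update-user!` and the attribute names are made up): the low-level fns stay pure, taking a db value and returning tx-data, and only the top-level fn performs the single transact.

```clojure
(require '[datomic.api :as d])

;; Pure: take a db value, return tx-data, no side effects.
(defn rename-user [db user-id new-name]
  [[:db/add user-id :user/name new-name]])

(defn touch-user [db user-id now]
  [[:db/add user-id :user/last-seen now]])

;; Top-level unit of work: collect tx-data, transact once at the end.
(defn update-user! [conn user-id new-name]
  (let [db      (d/db conn)
        tx-data (concat (rename-user db user-id new-name)
                        (touch-user db user-id (java.util.Date.)))]
    @(d/transact conn tx-data)))
```

Because the low-level fns are pure, they compose and unit-test without a connection; only `update-user!` touches the transactor.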


I'm a total datomic newb, just started evaluating it in one of my apps, thus the question - maybe the second approach is not even feasible, dunno, just speaking what's in my head right now


of course, in the second approach, I could at the end of the top level fn compare the final entity state with the initial one, and by diffing these states resolve which keys were changed, construct datomic tx datoms from that, and transact those


@vmarcinko I don't know if this is best practice, but in one similar case, I do #2 with DIFF. When I query the db to get the data, I also get the basis-t and store it with the data. When I get the modified data structure back, I get the basis-t from it, obtain the db at that point and do a nested diff (original vs new). In addition, since I have the basis-t (original value), when I go to update I can use db.fn/cas to assert the prior value (this way I can avoid conflicts if the data was changed due to some other process).
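A hedged sketch of that diff-to-tx step for flat entity maps (the helper name is hypothetical; `clojure.data/diff` returns `[only-in-old only-in-new in-both]`, and `:db.fn/cas` guards each changed value against concurrent writes, as described above):

```clojure
(require '[clojure.data :as data])

;; Sketch: turn an old/new entity-map pair into tx-data, using
;; :db.fn/cas so the transaction fails if another process changed
;; the value in the meantime. Flat (non-nested) maps only.
(defn entity-diff->tx [eid old new]
  (let [[removed added _] (data/diff (dissoc old :db/id)
                                     (dissoc new :db/id))]
    (concat
     (for [[k new-v] added]
       [:db.fn/cas eid k (get removed k) new-v])
     (for [[k old-v] removed
           :when (not (contains? added k))]   ; key retracted entirely
       [:db/retract eid k old-v]))))

(entity-diff->tx 42
                 {:user/name "Ana" :user/age 30}
                 {:user/name "Ana" :user/age 31})
;; => ([:db.fn/cas 42 :user/age 30 31])
```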


@mlimotte: Thanx, though I think we're not speaking about the same case. When I say diff, I mean the process of diffing at the end of the top level function, just prior to doing a single transact to datomic, and the diff is necessary because there could be a few functions that this top level function called which modified some initially loaded entity


So you see, in the example above, this top level function calls some-fn1 and some-fn2, which "change" the initially loaded entity state, and at the end of this top level fn I make the diff and transact the final state of the entity


that's option 2 in my initial question


and this diff is needed to transact only those attributes that changed


but diffing is a burden, so I could just convert the whole final state of the entity to datomic datoms and transact all its attributes, regardless of whether they changed during this function


that would definitely simplify code


I've hit a bump in the road using clojure.spec to specify Pull queries


Specifically map specifications


Looks like there isn't a way in clojure.spec to say 'here is a spec for keys and a spec for values, give me a spec for a map'.
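For what it's worth, current clojure.spec (spec.alpha) does ship `s/map-of`, which takes exactly a key spec and a value spec and yields a spec for the whole map; whether it covers the Pull-query case here is a separate question. A sketch (the spec name is made up):

```clojure
(require '[clojure.spec.alpha :as s])

;; s/map-of: key spec + value spec => spec for the map as a whole.
(s/def ::attr-rename (s/map-of keyword? string?))

(s/valid? ::attr-rename {:person/name "name"})  ;; => true
(s/valid? ::attr-rename {:person/name 42})      ;; => false
```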


@vmarcinko: I believe we are saying the same thing. Get some entity (in my case a complex nested entity using a pull pattern). It gets edited. Then do a diff on the original vs. the edited structure. This diff is then converted into datomic txns and passed to d/transact. In my case, I save and then use the basis-t to get the original data... in your case, looks like it all happens in the same thread, so you still have the original data and don't need to use basis-t, so a little simpler than my case. I can tell you it's feasible... up to you if the diff is worth the effort.


Hi guys, is there a way for me to bulk import large transactions without tying up my transactor? I just tried importing 2M records using transact-async and my transactor was unresponsive for up to 5 minutes.


@jdkealy: You can set up a pipeline import and tune the number of concurrent transactions to find a balance between import speed and availability of the transactor:
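A rough sketch of that pipelining pattern (after the shape of the pipelined-import example in Datomic's docs, but simplified; `conn`, `tx-batches`, and the fn name are assumptions): cap the number of in-flight transactions instead of firing them all at once.

```clojure
(require '[clojure.core.async :as a]
         '[datomic.api :as d])

;; Sketch: submit batches with a bounded number of concurrent
;; in-flight transactions. Deref'ing each transact-async future
;; inside the blocking pipeline provides backpressure.
(defn pipeline-import [conn tx-batches concurrency]
  (let [in  (a/to-chan tx-batches)
        out (a/chan concurrency)]
    (a/pipeline-blocking
     concurrency
     out
     (map (fn [batch] @(d/transact-async conn batch)))
     in)
    ;; drain results; returns the number of completed transactions
    (a/<!! (a/reduce (fn [n _] (inc n)) 0 out))))
```

Tuning `concurrency` (often single digits) trades import speed against keeping the transactor responsive to other writers.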


thanks @marshall... out of curiosity, can you pay to scale this kind of thing? Since I'm doing this on my localhost, would I get 3x the availability + speed if I paid for 3 processes in production?


sorry.. i mean 5 processes


@jdkealy: Datomic serializes all transactions to maintain ACID semantics. A given Datomic database is only written to from a single transactor process. The additional processes available in larger licenses would be peers, not transactors, and allow horizontal scaling of read/query, not of writes.


If I've found an issue in the documentation (I think I've found an inaccuracy in the Query grammar), where would I report it?