Chris Bidler 00:06:23

I guess the question in my mind is “what schema would you want to transact programmatically instead of by direct human direction?”

Chris Bidler 00:06:56

but I also think that if you do have a use case for such a thing, there’s no reason the fn doing the schema updates couldn’t also read in your schema.edn, add to it, and spit it back out to S3

Oliver George 01:06:41

Not sure if this is a sensible question, but I'm curious if there are recommended ways to maintain additional indexes alongside Datomic - in particular for spatial queries.

Oliver George 01:06:58

I guess the datomic fulltext search feature is an example of that


@olivergeorge if you're OK with eventual consistency (as is fulltext), it's relatively easy in Datomic to sync data to other stores such as ElasticSearch

Oliver George 11:06:13

Thanks. That makes sense.


thanks to change detection being so easy with the Log API
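A minimal sketch of that sync pattern, assuming on-prem `datomic.api`; `index-datom!` is a hypothetical function that pushes to the external store (e.g. ElasticSearch), and `last-t` is whatever high-water mark you persist between runs:

(require '[datomic.api :as d])

;; Poll the log from the last t we indexed, push each datom to the
;; external store, and return the new high-water mark.
(defn sync-from!
  [conn last-t index-datom!]
  (let [txes (d/tx-range (d/log conn) last-t nil)]
    (reduce (fn [_ {:keys [t data]}]
              (doseq [[e a v _ added?] data]
                (index-datom! e a v added?))
              t)
            last-t
            txes)))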


Possible error in the on-prem docs: noticed that cloudwatch:PutMetricDataBatch is not a valid AWS action when reviewing our setup.


can anyone confirm that the 504 error Timeout connecting to cluster node originates from datomic/ion glue code somewhere? i assume it’s not some kind of generic AWS error. i’ve ruled out cloudfront and lambda timeouts, but i’m an AWS noob and there’s probably a dozen other timeouts i don’t know about 😛


Is anyone aware of a library that can convert a sequence of transactions on a mocked connection (i.e. datomock) into a single final transact? I know it isn’t that hard to code, but no need if someone’s already got it working and tested.


This may be hard to code, once you take transaction functions, conflicting values, reified transactions and upserts into account


My best shot at it would be a transaction fn which applies the supplied tx requests via db.with(), looks at the diffs and merges them into :db/add :db/retract
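A rough sketch of that idea, assuming on-prem `datomic.api` (tempid remapping and ref-valued attributes are ignored here, which is part of why it's far from trivial):

(require '[datomic.api :as d])

;; Speculatively apply each tx request in order with d/with, flattening
;; the resulting datoms into plain :db/add / :db/retract forms.
;; Assumes the first datom in each tx-data is the :db/txInstant datom,
;; whose :e is the reified transaction entity.
(defn merge-tx-requests
  [db tx-requests]
  (loop [db db, acc [], reqs (seq tx-requests)]
    (if-let [[req & more] reqs]
      (let [{:keys [db-after tx-data]} (d/with db req)
            tx-eid (:tx (first tx-data))]
        (recur db-after
               (into acc
                     (for [[e a v _ added?] tx-data
                           :when (not= e tx-eid)]  ;; strip tx-entity datoms
                       [(if added? :db/add :db/retract) e a v]))
               more))
      acc)))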


Again, far from trivial IMO


Datomock will give me the ability to snapshot the starting point, run anything through a mocked connection to an ending point. At that point I should just have a sequence of datoms in “history” to apply, right? And I can detect which are “new” by checking for their existence from the starting point, and remap them back to tempids


and run that whole thing through a single transact


not atomic by any means


so writes after reads are a problem if I had concurrent access to stuff during the “block”


Am I missing something else? I mean: transaction functions just result in datoms…granted, they are “atomic”, so I lose that atomicity. Upserts should “just work”. So, other than giving up some of the ACID bits that I would have had during that block, I’m not sure I see a(nother) difficulty…


@U0CKQ19AQ you don't need Datomock at all for doing this - db.with will give you the same thing (in a more functional style). Which is good news, because it means it's not difficult to embed in a transaction function, thus keeping atomicity
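For example (a sketch; `my.ns/merge-tx-requests` is a hypothetical function that does the `d/with`-based flattening described above):

;; Installing the merge as a database function keeps the merged result
;; atomic: it re-runs on the transactor against the current db value.
(d/transact conn
  [{:db/ident :my/merge-txes
    :db/fn (d/function
             '{:lang     :clojure
               :requires [[my.ns]]
               :params   [db tx-requests]
               :code     (my.ns/merge-tx-requests db tx-requests)})}])

;; usage: @(d/transact conn [[:my/merge-txes tx-requests]])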


One issue you could have is transaction entities; e.g the :db/txInstant datoms


for my use-case I actually need Datomock…I need to create a block context where the (black-box) code in the block uses the connection as-if


Ah, good point


I fail to see how Datomock is mandatory to your use case; for the purpose of merging several transaction requests into one, I would not use it


The use-case is a rules engine that is using Datomic in a very granular way…transacting single datoms and using the results as it goes to figure out what was added/retracted…using the real db is very heavy, but we’re not using the atomicity or functions at all…it’s just a bunch of little changes.


and it has to be cumulative for each “run” of the rules…so cumulative


working in “with” would be a bit difficult


I’ve tried stripping Datomic out of the middle for 3-4 days, and this is what we settled on as a compromise to move forward…it’s really just an optimization


I see, interesting !


so, I’ll just be stripping the txInstant


and running the delta as a new tx…detecting which IDs should be converted back to tempids based on their existence in the real db
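A sketch of that remapping step, assuming the delta is a seq of datoms (values of `:db.type/ref` attributes would need the same treatment, elided here):

(require '[datomic.api :as d])

;; Entity ids with no datoms in the starting db are "new": replace them
;; with string tempids so the replayed tx mints fresh ids.
(defn datoms->tx-data
  [starting-db datoms]
  (let [new? (fn [e] (empty? (seq (d/datoms starting-db :eavt e))))]
    (for [[e a v _ added?] datoms]
      ;; NOTE: values of ref-typed attributes need the same remapping.
      [(if added? :db/add :db/retract)
       (if (new? e) (str "tempid-" e) e)
       a v])))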


be careful with entity relationships


This is why I asked if anyone had already coded it 🙂


I’m simultaneously pleased this is possible (and also relatively straightforward), but concerned about the concurrency issues it raises. I’m hoping I’ve analyzed the safety of this well enough for this given use-case and runtime context 😕


because the algorithm is parallel?


no, because the real database is being used by many users…so updates to the real database could cause this “diff” to be incorrect in subtle ways


say someone does something that retracts an entity while this is running…I detect it as a “missing” thing at the end of the batch, give it a tempid, and recreate it. I guess I can do detection of that as well…but there are possible issues


I can’t currently think of real cases for this particular area of the app where that will happen…but the things you “don’t see” are also often called “bugs”


if you can restrict the scope of the proposed change to a small set of entities, you could do some optimistic locking where you check in a txfn whether any of these entities has been affected by a new transaction, should be cheap enough
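A sketch of that optimistic check, written as a helper you could call from inside a transaction function (`basis-t` being the t the speculative run started from):

(require '[datomic.api :as d])

;; Abort the transaction if any of the given entities has new datoms
;; since basis-t; otherwise contribute no datoms and let the tx proceed.
;; Caveat: a since view only shows additions, so a pure retraction since
;; basis-t would not be caught; combine with d/history for full coverage.
(defn assert-unchanged!
  [db basis-t eids]
  (let [since-db (d/since db basis-t)]
    (doseq [e eids]
      (when (seq (d/datoms since-db :eavt e))
        (throw (ex-info "conflict: entity changed since basis-t"
                        {:e e :basis-t basis-t}))))
    []))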


that’s true….does the log API work with datomock? I’m not seeing a delta when I use it against the mocked connection


It should work yes, but do tell me if you see any bug


looking at the source it seems there’s some implementation for it


ok, I’ll try against a real connection to see if my code behaves differently


hm. yeah. Against a real connection I get a diff…with datomock I get nothing


(require '[datomic.api :as d])

(defn diff
  "Returns the set of entity ids touched on the database from start-t to end-t"
  [connection start-t end-t]
  (d/q '[:find ?e
         :in ?log ?t1 ?t2
         :where [(tx-ids ?log ?t1 ?t2) [?tx ...]]
                [(tx-data ?log ?tx) [[?e]]]]
    (d/log connection) start-t end-t))

;; then:
(let [start (d/next-t (d/db c))
      _     @(d/transact c [{:db/id     "name"
                             :owsy/name "Tony"}])
      end   (d/next-t (d/db c))
      delta (dbatch/diff c start end)]
  delta)


when c is “real”, delta is non-empty. When it’s a mocked connection, empty


I have a function that “undoes” a transaction. Sure, it's not the same problem, but it may help. Later I will try to write the "tx-diff" function


I’ve got the diff, and already have the filtering…am working on the tempid generation now (almost done)


@U06GS6P1N Do you want me to open an issue on log?


Hi Tony, I needed something similar once. I had something similar to datomock that collected all the inputs to d/transact so that they could all be combined in the end, but there were various edge cases that were problematic


I think the problem is here: The normal log API accepts various forms of “t”


conflicting ids, tx functions no longer being atomic, etc. In the end I also settled on a diffing approach, which works really well


well, perhaps that isn’t true…this is internal…might have already been transformed by Datomic


@U0P1MGUSX Yeah, it is going fine so far. The main thing is I was hoping to use datomock to track the progress, and the log API isn’t working 😕


all my tests pass with a real db, so I’m making progress, but I’ll be blocked on the full solution. I guess I might be contributing a patch today 🙂


@U0CKQ19AQ are there many differences between using the log API and the history API?


history lets you make time-based queries, whereas the log is just the log of stuff that happened between two points in time
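Roughly, assuming a connection `conn` and placeholder values for `some-eid`, `start-t`, and `end-t`:

(require '[datomic.api :as d])

;; history: a queryable view that includes assertions AND retractions,
;; so you can ask time-aware questions with ordinary datalog.
(d/q '[:find ?v ?tx ?added
       :in $ ?e
       :where [?e :owsy/name ?v ?tx ?added]]
     (d/history (d/db conn)) some-eid)

;; log: just the raw transactions between two points in time.
(for [{:keys [t data]} (d/tx-range (d/log conn) start-t end-t)]
  [t (count data)])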


I think log is easier to use for my use-case


@U06GS6P1N so, the bug is due to how Datomic is executing that query. It calls the log once with times, but then again with what I think are tx db/ids


the latter one does not succeed, so the query returns nothing


@U06GS6P1N so I have a fix, but it is sort of half-baked unless you know something I don’t


@U0CKQ19AQ thanks, will have a look


If it contains no attribute installs, you can just concat them, yes?


@U0ECYL0ET no, because there could be conflicts


(I might not understand the problem)


@sekao yes the 504 is from Datomic -- have not yet repro-ed what you are seeing but it is on the list


@stuarthalloway awesome thanks. BTW i can get it to happen with a simple web ion containing (Thread/sleep 15000). time chosen is arbitrary, but still below the lambda / API gateway timeouts AFAIK. also i’m hitting the route with an ajax POST, if that matters.