#datomic
2018-06-28
chris_johnson00:06:23

I guess the question in my mind is “what schema would you want to transact programmatically instead of by direct human direction?”

chris_johnson00:06:56

but I also think that if you do have a use case for such a thing, there’s no reason the fn doing the schema updates couldn’t also read in your schema.edn, add to it, and spit it back out to S3

Oliver George01:06:41

Not sure if this is a sensible question, but I'm curious if there are recommended ways to maintain additional indexes alongside Datomic - in particular for spatial queries.

Oliver George01:06:58

I guess the datomic fulltext search feature is an example of that

val_waeselynck10:06:22

@olivergeorge if you're OK with eventual consistency (as is fulltext), it's relatively easy in Datomic to sync data to other stores such as ElasticSearch

Oliver George11:06:13

Thanks. That makes sense.

val_waeselynck10:06:44

thanks to change detection being so easy with the Log API
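For illustration, a minimal sketch of what such a sync loop could look like, assuming `[datomic.api :as d]` is required; `push-to-index!` is a hypothetical function that ships datoms to the external store (e.g. ElasticSearch):

```clojure
;; Sketch only: Log-API change detection for syncing to an external index.
;; `push-to-index!` is a hypothetical side-effecting fn, not part of Datomic.
(defn sync-changes-since!
  "Forwards every transaction committed since last-t to the external index.
   Returns the t to resume from on the next poll."
  [conn push-to-index! last-t]
  (let [end-t (d/basis-t (d/db conn))]
    ;; d/tx-range is end-exclusive, so (inc end-t) includes the latest tx
    (doseq [{:keys [data]} (d/tx-range (d/log conn) last-t (inc end-t))]
      (push-to-index! data))
    end-t))
```

Persisting the returned `t` between polls is what makes the sync eventually consistent rather than lossy.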

mgrbyte10:06:10

Possible error in the on-prem docs: Noticed that cloudwatch:PutMetricDataBatch is not a valid AWS action when reviewing our setup (https://docs.datomic.com/on-prem/storage.html#cloudwatch-metrics)

sekao11:06:04

can anyone confirm that the 504 error Timeout connecting to cluster node originates from datomic/ion glue code somewhere? i assume it’s not some kind of generic AWS error. i’ve ruled out cloudfront and lambda timeouts, but i’m an AWS noob and there’s probably a dozen other timeouts i don’t know about 😛

tony.kay15:06:48

Is anyone aware of a library that can convert a sequence of transactions on a mocked connection (i.e. datomock) into a single final transact? I know it isn’t that hard to code, but no need if someone’s already got it working and tested.

val_waeselynck15:06:06

This may be hard to code, once you take transaction functions, conflicting values, reified transactions and upserts into account

val_waeselynck15:06:41

My best shot at it would be a transaction fn which applies the supplied tx requests via db.with(), looks at the diffs and merges them into :db/add :db/retract
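A rough, untested sketch of that idea, assuming `[datomic.api :as d]` is required; reified-transaction datoms and tempid resolution are deliberately ignored here:

```clojure
;; Sketch: speculatively apply each tx request with d/with, and accumulate
;; the net effect as plain :db/add / :db/retract statements.
(defn merge-tx-requests
  [db tx-requests]
  (:tx-data
    (reduce
      (fn [{:keys [db-after tx-data]} tx-req]
        (let [{db' :db-after, datoms :tx-data} (d/with db-after tx-req)]
          {:db-after db'
           :tx-data  (into tx-data
                           (for [d datoms
                                 ;; drop each speculative tx's own :db/txInstant datom
                                 :when (not= :db/txInstant (d/ident db' (:a d)))]
                             [(if (:added d) :db/add :db/retract)
                              (:e d) (:a d) (:v d)]))}))
      {:db-after db :tx-data []}
      tx-requests)))
```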

val_waeselynck15:06:49

Again, far from trivial IMO

tony.kay15:06:57

Datomock will give me the ability to snapshot the starting point, run anything through a mocked connection to an ending point. At that point I should just have a sequence of datoms in “history” to apply, right? And I can detect which are “new” by checking for their existence from the starting point, and remap them back to tempids

tony.kay15:06:19

and run that whole thing through a single transact

tony.kay15:06:26

not atomic by any means

tony.kay15:06:55

so writes after reads are a problem if I had concurrent access to stuff during the “block”

tony.kay15:06:11

Am I missing something else? I mean: transaction functions just result in datoms…admittedly they are “atomic”, so I lose that atomicity. Upserts should “just work”. So, other than giving up some of the ACID bits that I would have had during that block, I’m not sure I see a(nother) difficulty…

val_waeselynck16:06:34

@U0CKQ19AQ you don't need Datomock at all for doing this - db.with will give you the same thing (in a more functional style). Which is good news, because it means it's not difficult to embed in a transaction function, thus keeping atomicity

val_waeselynck16:06:59

One issue you could have is transaction entities; e.g the :db/txInstant datoms

tony.kay16:06:05

for my use-case I actually need Datomock…I need to create a block context where the (black-box) code in the block uses the connection as if it were the real one

tony.kay16:06:17

Ah, good point

val_waeselynck16:06:41

I fail to see how Datomock is mandatory for your use case; for the purpose of merging several transaction requests into one, I would not use it

tony.kay16:06:52

The use-case is a rules engine that is using Datomic in a very granular way…transacting single datoms and using the results as it goes to figure out what was added/retracted…using the real db is very heavy, but we’re not using the atomicity or functions at all…it’s just a bunch of little changes.

tony.kay16:06:08

and it has to be cumulative for each “run” of the rules

tony.kay16:06:22

working in “with” would be a bit difficult

tony.kay16:06:22

I’ve tried stripping Datomic out of the middle for 3-4 days, and this is what we settled on as a compromise to move forward…it’s really just an optimization

val_waeselynck16:06:08

I see, interesting!

tony.kay16:06:22

so, I’ll just be stripping the txInstant

tony.kay16:06:50

and running the delta as a new tx…detecting which IDs should be converted back to tempids based on their existence in the real db
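A hedged sketch of that remapping step (hypothetical helper; assumes `[datomic.api :as d]` is required and that the delta is a seq of `[op e a v]` statements collected from the forked connection):

```clojure
;; Sketch: entity ids with no datoms in the real db are assumed to have been
;; created on the fork, and get replaced by stable tempid strings.
(defn remap-new-eids
  [real-db delta]
  (let [exists? (fn [eid] (boolean (seq (d/datoms real-db :eavt eid))))
        ->id    (fn [eid] (if (exists? eid) eid (str "tempid-" eid)))]
    (for [[op e a v] delta]
      ;; NOTE: ref-typed values (v) need the same remapping - this is the
      ;; "be careful with entity relationships" caveat.
      [op (->id e) a v])))
```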

val_waeselynck16:06:24

be careful with entity relationships

tony.kay16:06:00

This is why I asked if anyone had already coded it 🙂

tony.kay16:06:52

I’m simultaneously pleased this is possible (and also relatively straightforward), but concerned about the concurrency issues it raises. I’m hoping I’ve analyzed the safety of this well enough for this given use-case and runtime context 😕

val_waeselynck16:06:44

because the algorithm is parallel?

tony.kay16:06:16

no, because the real database is being used by many users…so updates to the real database could cause this “diff” to be incorrect in subtle ways

tony.kay16:06:04

say someone does something that retracts an entity while this is running…I detect it as a “missing” thing at the end of the batch, give it a tempid, and recreate it. I guess I can do detection of that as well…but there are possible issues

tony.kay16:06:22

I can’t currently think of real cases for this particular area of the app where that will happen…but the things you “don’t see” are also often called “bugs”

val_waeselynck16:06:23

if you can restrict the scope of the proposed change to a small set of entities, you could do some optimistic locking where you check in a txfn whether any of these entities has been affected by a new transaction, should be cheap enough
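Very much a sketch of that optimistic-locking idea (untested, all names hypothetical): a database function that rejects the merged tx-data if any in-scope entity was touched after the basis-t the diff was computed against.

```clojure
;; Sketch: install a guard txfn; invoking it and the writes it returns
;; happen in the same (atomic) transaction.
[{:db/ident :my/guarded-transact
  :db/fn
  (d/function
    '{:lang     :clojure
      :requires [[datomic.api :as d]]
      :params   [db expected-basis-t scope-eids tx-data]
      :code     (let [conflicts
                      (d/q '[:find [?e ...]
                             :in $hist [?e ...] ?t
                             :where [$hist ?e _ _ ?tx]
                                    [(datomic.api/tx->t ?tx) ?tx-t]
                                    [(> ?tx-t ?t)]]
                           (d/history db) scope-eids expected-basis-t)]
                  (if (seq conflicts)
                    (throw (ex-info "Optimistic lock failed: entities changed"
                                    {:conflicting-eids conflicts}))
                    tx-data))})}]
```

It would then be called as `@(d/transact conn [[:my/guarded-transact basis-t eids merged-tx-data]])`.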

tony.kay16:06:50

that’s true….does the log API work with datomock? I’m not seeing a delta when I use it against the mocked connection

val_waeselynck16:06:48

It should work yes, but do tell me if you see any bug

tony.kay16:06:49

looking at the source it seems there’s some implementation for it

tony.kay16:06:07

ok, I’ll try against a real connection to see if my code behaves differently

tony.kay16:06:42

hm. yeah. Against a real connection I get a diff…with datomock I get nothing

tony.kay16:06:15

;; assumes (:require [datomic.api :as d]); `diff` lives in the ns aliased
;; below as `dbatch`
(defn diff
  "Returns the entity ids changed on the database from start-t to end-t,
   via the Log API."
  [connection start-t end-t]
  (d/q '[:find ?e
         :in ?log ?t1 ?t2
         :where [(tx-ids ?log ?t1 ?t2) [?tx ...]]
         [(tx-data ?log ?tx) [[?e]]]]
    (d/log connection) start-t end-t))

;; then:
(let [start (d/next-t (d/db c))
      _     @(d/transact c [{:db/id     "name"
                             :owsy/name "Tony"}])
      end   (d/next-t (d/db c))
      delta (dbatch/diff c start end)]
  ...)

tony.kay16:06:27

when c is “real”, delta is non-empty. When it’s a mocked connection, empty

souenzzo17:06:28

I have a function that "undoes" a transaction. Sure, it's not the same problem, but it may help. Later I will try to make the "tx-diff" function https://gist.github.com/souenzzo/d8e6afe21e990530f58fab5c8c3abc8c

tony.kay17:06:02

I’ve got the diff, and already have the filtering…am working on the tempid generation now (almost done)

tony.kay17:06:51

@U06GS6P1N Do you want me to open an issue on log?

chrisblom17:06:52

Hi Tony, I needed something similar once. I had something similar to datomock that collected all the inputs to d/transact so that they could all be combined at the end, but there were various edge cases that were problematic

tony.kay17:06:08

I think the problem is here: https://github.com/vvvvalvalval/datomock/blob/master/src/datomock/impl.clj#L22 The normal log API accepts various forms of “t”

chrisblom18:06:02

conflicting ids, tx functions no longer being atomic, etc. In the end I also settled on a diffing approach, which works really well

tony.kay18:06:15

well, perhaps that isn’t true…this is internal…might have already been transformed by Datomic

tony.kay18:06:00

@U0P1MGUSX Yeah, it is going fine so far. The main thing is I was hoping to use datomock to track the progress, and the log API isn’t working 😕

tony.kay18:06:49

all my tests pass with a real db, so I’m making progress, but I’ll be blocked on the full solution. I guess I might be contributing a patch today 🙂

souenzzo18:06:24

@U0CKQ19AQ are there many differences between using the log API and the history API?

tony.kay18:06:07

history lets you make time-based queries, whereas the log is just the log of stuff that happened between two points in time
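To illustrate the difference (assuming `[datomic.api :as d]`; `conn`, `some-eid`, `t1`, and `t2` are placeholders; `:owsy/name` is the attribute from the snippet above):

```clojure
;; History: a queryable db covering every assertion and retraction ever made.
;; Here: all values :owsy/name has ever had for an entity, with the added? flag.
(d/q '[:find ?v ?added
       :in $ ?e
       :where [?e :owsy/name ?v _ ?added]]
     (d/history (d/db conn)) some-eid)

;; Log: the ordered record of what happened between two points in time.
;; Returns maps of {:t ..., :data [datoms ...]}, one per transaction.
(d/tx-range (d/log conn) t1 t2)
```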

tony.kay18:06:38

I think log is easier to use for my use-case

tony.kay18:06:27

@U06GS6P1N so, the bug is due to how Datomic is executing that query. It calls the log once with times, but then again with what I think are tx db/ids

tony.kay18:06:41

the latter one does not succeed, so the query returns nothing

tony.kay19:06:03

@U06GS6P1N so I have a fix, but it is sort of half-baked unless you know something I don’t

val_waeselynck05:06:05

@U0CKQ19AQ thanks, will have a look

eraserhd15:06:00

If it contains no attribute installs, you can just concat them, yes?

val_waeselynck15:06:15

@U0ECYL0ET no, because there could be conflicts

eraserhd15:06:05

(I might not understand the problem)

stuarthalloway19:06:27

@sekao yes the 504 is from Datomic -- have not yet repro-ed what you are seeing but it is on the list

sekao20:06:02

@stuarthalloway awesome thanks. BTW i can get it to happen with a simple web ion containing (Thread/sleep 15000). time chosen is arbitrary, but still below the lambda / API gateway timeouts AFAIK. also i’m hitting the route with an ajax POST, if that matters.