This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-06-28
Channels
- # aleph (2)
- # beginners (25)
- # boot (12)
- # cider (73)
- # cljs-dev (3)
- # clojure (37)
- # clojure-dev (93)
- # clojure-germany (1)
- # clojure-italy (24)
- # clojure-nl (21)
- # clojure-russia (26)
- # clojure-spec (37)
- # clojure-uk (80)
- # clojure-za (1)
- # clojurescript (47)
- # cursive (4)
- # data-science (17)
- # datomic (69)
- # emacs (19)
- # events (7)
- # fulcro (41)
- # hoplon (14)
- # leiningen (2)
- # nrepl (4)
- # off-topic (253)
- # om (11)
- # portkey (2)
- # re-frame (11)
- # reagent (24)
- # ring-swagger (1)
- # rum (5)
- # schema (1)
- # shadow-cljs (106)
- # specter (2)
- # tools-deps (91)
I guess the question in my mind is “what schema would you want to transact programmatically instead of by direct human direction?”
but I also think that if you do have a use case for such a thing, there’s no reason the fn doing the schema updates couldn’t also read in your schema.edn, add to it, and spit it back out to S3
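That read-modify-write step is small enough to sketch. Below, the S3 get/put are stubbed out as plain strings (in practice an S3 client would do the I/O), and the schema attribute maps are just illustrative:

```clojure
(require '[clojure.edn :as edn])

;; Sketch: read a schema.edn string, append new attribute maps, and
;; serialize it back. In a real setup the string would be fetched from
;; and written back to S3 by the fn doing the schema updates.
(defn add-schema
  [schema-edn-str new-attrs]
  (pr-str (into (edn/read-string schema-edn-str) new-attrs)))
```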
Not sure if this is a sensible question, but I'm curious if there are recommended ways to maintain additional indexes alongside Datomic, in particular for spatial queries.
I found this related thread from a while back: https://groups.google.com/forum/m/#!topic/datomic/dFTigkOIDB8
I guess the datomic fulltext search feature is an example of that
@olivergeorge if you're OK with eventual consistency (as is fulltext), it's relatively easy in Datomic to sync data to other stores such as ElasticSearch
Thanks. That makes sense.
thanks to change detection being so easy with the Log API
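The shape of such a sync is easy to sketch once the I/O is stripped out. Here datoms are plain maps (in a real consumer they would come from the Log API's tx-data or from d/tx-report-queue), and the :user/email attribute plus the op/doc shapes are made-up placeholders, not a real Elasticsearch client API:

```clojure
;; Sketch: turn a batch of datoms into ElasticSearch-style sync ops.
;; Datoms are plain maps {:e .. :a .. :v .. :added ..}; the attribute
;; being indexed and the op shapes are hypothetical.
(defn datoms->es-ops
  [datoms]
  (for [{:keys [e a v added]} datoms
        :when (= a :user/email)]          ; only sync the indexed attr
    (if added
      {:op :index, :id e, :doc {:email v}}
      {:op :delete-field, :id e, :field :email})))
```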
Possible error in the on-prem docs: noticed that cloudwatch:PutMetricDataBatch is not a valid AWS action when reviewing our setup (https://docs.datomic.com/on-prem/storage.html#cloudwatch-metrics)
can anyone confirm that the 504 error Timeout connecting to cluster node originates from Datomic/Ion glue code somewhere? I assume it’s not some kind of generic AWS error. I’ve ruled out CloudFront and Lambda timeouts, but I’m an AWS noob and there are probably a dozen other timeouts I don’t know about 😛
Is anyone aware of a library that can convert a sequence of transactions on a mocked connection (i.e. datomock) into a single final transact? I know it isn’t that hard to code, but no need if someone’s already got it working and tested.
This may be hard to code, once you take transaction functions, conflicting values, reified transactions and upserts into account
My best shot at it would be a transaction fn which applies the supplied tx requests via db.with(), looks at the diffs and merges them into :db/add :db/retract
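A pure sketch of that merging step, over datoms represented as plain maps rather than real Datom objects. It drops transaction-entity datoms (which can't be replayed as-is) and nets out a value that was added and then retracted within the window, one of the "conflicting values" cases:

```clojure
(require '[clojure.set :as set])

;; Sketch: collapse diffed datoms (plain maps {:e :a :v :added}) into a
;; single flat transaction. tx-eids is the set of transaction-entity
;; ids whose datoms (:db/txInstant etc.) are dropped. A value that was
;; added and later retracted inside the window cancels out entirely.
(defn merge-datoms
  [datoms tx-eids]
  (let [live      (remove #(contains? tx-eids (:e %)) datoms)
        by-flag   (group-by :added live)
        key-fn    (juxt :e :a :v)
        cancelled (set/intersection (set (map key-fn (by-flag true)))
                                    (set (map key-fn (by-flag false))))]
    (vec (for [{:keys [e a v added]} live
               :when (not (contains? cancelled [e a v]))]
           [(if added :db/add :db/retract) e a v]))))
```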
Again, far from trivial IMO
Datomock will give me the ability to snapshot the starting point, run anything through a mocked connection to an ending point. At that point I should just have a sequence of datoms in “history” to apply, right? And I can detect which are “new” by checking for their existence from the starting point, and remap them back to tempids
so writes after reads are a problem if I had concurrent access to stuff during the “block”
Am I missing something else? I mean: transaction functions just result in datoms…granted, they are “atomic”, so I lose that atomicity. Upserts should “just work”. So, other than giving up some of the ACID bits that I would have had during that block, I’m not sure I see a(nother) difficulty…
@U0CKQ19AQ you don't need Datomock at all for doing this - db.with will give you the same thing (in a more functional style). Which is good news, because it means it's not difficult to embed in a transaction function, thus keeping atomicity
One issue you could have is transaction entities, e.g. the :db/txInstant datoms
for my use-case I actually need Datomock…I need to create a block context where the (black-box) code in the block uses the connection as-if
I fail to see how Datomock is mandatory to your use case; for the purpose of merging several transaction requests into one, I would not use it
The use-case is a rules engine that is using Datomic in a very granular way…transacting single datoms and using the results as it goes to figure out what was added/retracted…using the real db is very heavy, but we’re not using the atomicity or functions at all…it’s just a bunch of little changes.
I’ve tried stripping Datomic out of the middle for 3-4 days, and this is what we settled on as a compromise to move forward…it’s really just an optimization
I see, interesting!
and running the delta as a new tx…detecting which IDs should be converted back to tempids based on their existence in the real db
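That remapping can be sketched as a pure function. Here existing? stands in for a lookup against the real db (e.g. a set of ids found by querying it), and the string-tempid convention is just illustrative:

```clojure
;; Sketch: remap entity ids that don't exist in the real db back to
;; tempid strings, so refs between new entities stay linked.
;; existing? is a predicate, e.g. a set built by querying the real db.
(defn remap-eid
  [existing? eid]
  (if (existing? eid)
    eid
    (str "tempid-" eid)))

(defn remap-tx
  [existing? tx]
  (mapv (fn [[op e a v]]
          ;; NOTE: a ref-valued v needs the same treatment; only the
          ;; entity position is remapped in this sketch.
          [op (remap-eid existing? e) a v])
        tx))
```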
be careful with entity relationships
I’m simultaneously pleased this is possible (and also relatively straightforward), but concerned about the concurrency issues it raises. I’m hoping I’ve analyzed the safety of this well enough for this given use-case and runtime context 😕
because the algorithm is parallel?
no, because the real database is being used by many users…so updates to the real database could cause this “diff” to be incorrect in subtle ways
say someone does something that retracts an entity while this is running…I detect it as a “missing” thing at the end of the batch, give it a tempid, and recreate it. I guess I can do detection of that as well…but there are possible issues
ah, I see
I can’t currently think of real cases for this particular area of the app where that will happen…but the things you “don’t see” are also often called “bugs”
if you can restrict the scope of the proposed change to a small set of entities, you could do some optimistic locking where you check in a txfn whether any of these entities has been affected by a new transaction, should be cheap enough
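A pure sketch of that check, with log entries represented as plain maps shaped like tx-range results; in a real txfn you would pull these from (d/log conn) between the snapshot's t and the current basis-t:

```clojure
;; Sketch: optimistic-lock check. log-entries are maps like those
;; returned by tx-range, with :data a seq of datom maps {:e ..}.
;; watched is the set of entity ids the proposed change touches.
(defn conflict?
  [watched log-entries]
  (boolean (some (fn [{:keys [data]}]
                   (some #(contains? watched (:e %)) data))
                 log-entries)))
```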
that’s true…does the log API work with Datomock? I’m not seeing a delta when I use it against the mocked connection
It should work yes, but do tell me if you see any bug
(defn diff
  "Returns the diff on the database from start-t to end-t"
  [connection start-t end-t]
  (d/q '[:find ?e
         :in ?log ?t1 ?t2
         :where [(tx-ids ?log ?t1 ?t2) [?tx ...]]
                [(tx-data ?log ?tx) [[?e]]]]
       (d/log connection) start-t end-t))

;; then:
(let [start (d/next-t (d/db c))
      _     @(d/transact c [{:db/id "name"
                             :owsy/name "Tony"}])
      end   (d/next-t (d/db c))
      delta (dbatch/diff c start end)]
  ...)
I have a function that "undoes" a transaction. Sure, it's not the same problem, but it may help. Later I will try to make the "tx-diff" function: https://gist.github.com/souenzzo/d8e6afe21e990530f58fab5c8c3abc8c
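The core of such an undo is small when sketched over plain datom maps (the linked gist works against the real log; here tx-eid is the transaction entity whose :db/txInstant etc. datoms must be skipped):

```clojure
;; Sketch: invert a transaction's datoms to undo it. Datoms are plain
;; maps {:e .. :a .. :v .. :added ..}; tx-eid is the transaction
;; entity id, whose datoms are not replayed.
(defn undo-tx-data
  [tx-eid datoms]
  (vec (for [{:keys [e a v added]} datoms
             :when (not= e tx-eid)]
         [(if added :db/retract :db/add) e a v])))
```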
I’ve got the diff, and already have the filtering…am working on the tempid generation now (almost done)
@U06GS6P1N Do you want me to open an issue on log?
Hi Tony, I needed something similar once; I had something similar to Datomock that collected all the inputs to d/transact
so that they could all be combined in the end, but there are various edge cases that were problematic
I think the problem is here: https://github.com/vvvvalvalval/datomock/blob/master/src/datomock/impl.clj#L22 The normal log API accepts various forms of “t”
conflicting ids, tx functions no longer being atomic, etc. In the end I also settled on a diffing approach, which works really well
well, perhaps that isn’t true…this is internal…might have already been transformed by Datomic
@U0P1MGUSX Yeah, it is going fine so far. The main thing is I was hoping to use datomock to track the progress, and the log API isn’t working 😕
all my tests pass with a real db, so I’m making progress, but I’ll be blocked on the full solution. I guess I might be contributing a patch today 🙂
@U0CKQ19AQ are there many differences between using the Log API and the History API?
history lets you make time-based queries, whereas the log is just the log of stuff that happened between two points in time
@U06GS6P1N so, the bug is due to how Datomic is executing that query. It calls the log once with times, but then again with what I think are tx db/ids
@U06GS6P1N so I have a fix, but it is sort of half-baked unless you know something I don’t
@U0CKQ19AQ thanks, will have a look
@U0ECYL0ET no, because there could be conflicts
@sekao yes the 504 is from Datomic -- have not yet repro-ed what you are seeing but it is on the list
@stuarthalloway awesome, thanks. BTW I can get it to happen with a simple web ion containing (Thread/sleep 15000). The time chosen is arbitrary, but still below the Lambda / API Gateway timeouts AFAIK. Also I’m hitting the route with an ajax POST, if that matters.