
@shofetim: Thanks. That's what got me started. I improved on the Dockerfile a bit, next week I will share it all in a blog.


what’s the most performant way to find the most recently transacted datom for a given attr?


@bkamphaus / @marshall : could I ask a huge favour: can you get any sort of clarity on this issue for us, please? !topic/datomic/1WBgM84nKmc

Ben Kamphaus13:10:54

@robert-stuttaford: is there a specific aspect you want clarified? Re: changes to the behavior, we don’t have anything to report. The present recommended strategy for accommodating this is still the same, i.e. generate a unique db name with gensym or appending a uuid to the name.
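The workaround Ben describes can be sketched like this (a minimal sketch, assuming `datomic.api` is on the classpath; `unique-db-uri` and the `"test-db"` base name are names invented here for illustration):

```clojure
(require '[datomic.api :as d])

(defn unique-db-uri
  "Append a random UUID so each test run gets a fresh database name."
  [base]
  (str "datomic:mem://" base "-" (java.util.UUID/randomUUID)))

;; Each call yields a never-before-used name, so create-database
;; never collides with a previously deleted db of the same name.
(def uri (unique-db-uri "test-db"))
(d/create-database uri)
(def conn (d/connect uri))
```

For durable storage the same suffixing applies to the storage URI, which is exactly the bloat concern raised below.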


Stu’s last message on that thread was ‘investigating a fix’ - presumably, that means, it’s not the intended behaviour. Do you plan to fix it?


appending a unique suffix fixes it in the short term, but that’ll bloat durable storage very, very quickly


gives us one more thing to manage in dev and staging environments


suffixes are totally fine for in-memory dbs, but then, this isn’t actually an issue for in-memory dbs


does that make sense?

Ben Kamphaus14:10:33

@robert-stuttaford: I do understand the points you outline here. Just nothing additional to report at this time. The exact previous behavior is unlikely to be restored, as there was at least one bugfix related to insufficient coordination around deletion.


ok - all we’re really hoping to be able to do is delete and re-create durable databases with the same name without restarting the peer


even if we have to wait for a future to deliver or something

Ben Kamphaus14:10:29

The investigation comment Stu makes is exactly that - looking into tradeoffs. We are reluctant to have people rely on create/delete in a tight cycle as a promised fast behavior. I suspect, due to how it often comes up in testing, a slow but synchronous solution won’t match what a lot of people expect. Anyways, I have brought it up with the dev team again, but can’t promise any specific outcome.


ok, great. thank you, Ben. much appreciated


@robert-stuttaford: You don’t necessarily need to restart the peer process. You can ‘reuse’ DB names after a certain timeout. I just confirmed that I can successfully create a db, delete it, wait 60 seconds, then create again with the same uri and reconnect.
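That reuse-after-timeout sequence looks roughly like this (a sketch of what was just described; the 60-second wait comes from the message above and is an observed behaviour, not a documented guarantee, and the dev URI is hypothetical):

```clojure
(require '[datomic.api :as d])

(def uri "datomic:dev://localhost:4334/my-db") ; hypothetical durable uri

(d/create-database uri)
(def conn (d/connect uri))
;; ... use the db ...
(d/release conn)
(d/delete-database uri)

;; Wait out the deletion-coordination window before reusing the name.
(Thread/sleep 60000)

(d/create-database uri)  ; same name, same peer process
(def conn2 (d/connect uri))
```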

Ben Kamphaus15:10:35

@robert-stuttaford: re: the earlier questions for fast query to get last datom transacted, something like (working example on mbrainz):

(let [hdb (d/history (d/db conn))]
  (d/q '[:find ?a ?attr ?aname ?atx ?added
         :in $ ?attr
         :where
         [(datomic.api/q '[:find (max ?tx)
                           :in $ ?attr
                           :where
                           [_ :artist/name _ ?tx]]
                         $ ?attr) [[?atx]]]
         [?a ?attr ?aname ?atx ?added]]
       hdb :artist/name))

Ben Kamphaus15:10:24

bah, hard coded artist/name in subquery is a refactoring artifact


I must say, I never considered using q inside a query like that before. Nice.

Ben Kamphaus15:10:25

you can also just compose the queries, but for the rest api etc., where you want everything in one query, a subquery is a good way to get an answer based on an aggregate, or to handle an aggregate-of-aggregates problem
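The composed version Ben mentions would look something like this: two separate `d/q` calls against the same history db, feeding the aggregate result of the first into the second (a sketch only; it assumes an existing `conn` and uses `:artist/name` from the mbrainz example above, and unlike the subquery form it costs two round trips from a REST client):

```clojure
(require '[datomic.api :as d])

(let [hdb (d/history (d/db conn))
      ;; First query: the max transaction entity id for the attribute.
      tx  (ffirst (d/q '[:find (max ?tx)
                         :in $ ?attr
                         :where [_ ?attr _ ?tx]]
                       hdb :artist/name))]
  ;; Second query: all datoms of that attribute in that transaction.
  (d/q '[:find ?e ?attr ?v ?tx ?added
         :in $ ?attr ?tx
         :where [?e ?attr ?v ?tx ?added]]
       hdb :artist/name tx))
```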


yes, this gets around one of my biggest bugaboos with using the rest api. I'm always trying to minimize round tripping back and forth to a peer, so this is very nice.


also, let this be a record of my surprise that my computer recognizes "bugaboos" as a word.


my biggest bugaboo is the error handling


@ljosa: with the rest api? if so, I hear you there.


must admit i’ve never seen d/q used inside a datalog clause before! in hindsight, it’s obvious that it’s possible :simple_smile:


It would be nice if datomic.api could be used in database functions, the way datomic.Peer and datomic.Util methods can. Any chance of this in the future?