2015-10-08
@gerstree: https://pointslope.com/blog/datomic-pro-starter-edition-in-15-minutes-with-docker/ is also good reading
@shofetim: Thanks. That's what got me started. I improved on the Dockerfile a bit; I'll share it all in a blog post next week.
what’s the most performant way to find the most recently transacted datom for a given attr?
@bkamphaus / @marshall : could I ask a huge favour and ask if you can get any sort of clarity on this issue for us, please? https://groups.google.com/forum/#!topic/datomic/1WBgM84nKmc
@robert-stuttaford: is there a specific aspect you want clarified? Re: changes to the behavior, we don’t have anything to report. The present recommended strategy for accommodating this is still the same, i.e. generate a unique db name with gensym or append a uuid to the name.
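For illustration, a minimal sketch of that strategy using the datomic.api peer library; the dev-storage URI and the helper name are assumptions, not anything from the thread:

(require '[datomic.api :as d])

;; Hypothetical helper: give every test run a fresh, unique db name.
(defn fresh-db-uri []
  ;; gensym works too: (str "datomic:dev://localhost:4334/" (gensym "test"))
  (str "datomic:dev://localhost:4334/test-" (java.util.UUID/randomUUID)))

(let [uri (fresh-db-uri)]
  (d/create-database uri)
  (d/connect uri))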
Stu’s last message on that thread was ‘investigating a fix’ - presumably that means it’s not the intended behaviour. Do you plan to fix it?
appending a unique suffix fixes it in the short term, but that’ll bloat durable storage very, very quickly
gives us one more thing to manage in dev and staging environments
suffixes are totally fine for in-memory dbs, but then, this isn’t actually an issue for in-memory dbs
does that make sense?
@robert-stuttaford: I do understand the points you outline here. Just nothing additional to report at this time. The exact previous behavior is unlikely to be restored, as there was at least one bugfix related to insufficient coordination around deletion.
ok - all we’re really hoping to be able to do is delete and re-create durable databases with the same name without restarting the peer
even if we have to wait for a future to deliver or something
The investigation comment Stu makes is exactly that - looking into tradeoffs. We are reluctant to have people rely on create/delete in a tight cycle as a promised fast behavior. I suspect, given how often this comes up in testing, a slow but synchronous solution won’t match what a lot of people expect. Anyway, I have brought it up with the dev team again, but can’t promise any specific outcome.
ok, great. thank you, Ben. much appreciated
@robert-stuttaford: You don’t necessarily need to restart the peer process. You can ‘reuse’ DB names after a certain timeout. I just confirmed that I can successfully create a db, delete it, wait 60 seconds, then create again with the same uri and reconnect.
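A sketch of that sequence, assuming the datomic.api peer library; the URI here is made up for illustration, and the 60-second wait mirrors the timeout confirmed above rather than any documented constant:

(require '[datomic.api :as d])

(def uri "datomic:dev://localhost:4334/scratch") ; hypothetical db name

(d/create-database uri)
(d/delete-database uri)
(Thread/sleep 60000)    ; wait out the deletion-coordination window
(d/create-database uri) ; the same uri now succeeds again
(d/connect uri)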
@robert-stuttaford: re: the earlier question about a fast query to get the last datom transacted for a given attr, something like (working example on mbrainz):
(let [hdb (d/history (d/db conn))]
  (d/q '[:find ?a ?attr ?aname ?atx ?added
         :in $ ?attr
         :where
         ;; subquery: find the most recent tx that touched ?attr
         [(datomic.api/q '[:find (max ?tx)
                           :in $ ?attr
                           :where
                           [_ ?attr _ ?tx]]
                         $ ?attr) [[?atx]]]
         ;; then match the full datom(s) from that tx
         [?a ?attr ?aname ?atx ?added]]
       hdb :artist/name))
bah, I’d left a hard-coded artist/name in the subquery - a refactoring artifact; it should bind ?attr, as corrected above
you can also just compose the queries (sketch below), but for the REST API etc., where you want everything in one query, a subquery is a good way to get an answer based on an aggregate, or to handle an aggregate-of-aggregates problem
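A sketch of that composed, two-step alternative on a peer, assuming the same mbrainz connection (conn) as the example above:

(let [hdb (d/history (d/db conn))
      ;; step 1: the most recent tx that touched the attribute
      max-tx (ffirst (d/q '[:find (max ?tx)
                            :in $ ?attr
                            :where [_ ?attr _ ?tx]]
                          hdb :artist/name))]
  ;; step 2: the datom(s) asserted or retracted in that tx
  (d/q '[:find ?a ?attr ?aname ?atx ?added
         :in $ ?attr ?atx
         :where [?a ?attr ?aname ?atx ?added]]
       hdb :artist/name max-tx))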
yes, this gets around one of my biggest bugaboos with using the rest api. I'm always trying to minimize round trips to the peer, so this is very nice.
also, let this be a record of my surprise that my computer recognizes "bugaboos" as a word.
thanks, @bkamphaus !
must admit i’ve never seen d/q used inside a datalog clause before! in hindsight, it’s obvious that it’s possible