This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
@currentoor: we use tx annos in a couple ways. 'who': every web-generated tx is tagged with the signed-in user who created it = easy audit trail. and our back-end event-stream processor tags its own txes as 'processed-by' so that it can keep track of what work it's done and has to do. i've also seen examples mentioned like marking a past tx as 'error', or marking a new tx as a 'correction'
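For concreteness, this kind of annotation works by including an extra map in the transaction whose :db/id is the tempid of the transaction entity itself. A sketch, assuming a live connection `conn` and hypothetical audit attributes (:audit/user, :audit/source) already installed in the schema:

```clojure
(require '[datomic.api :as d])

;; (d/tempid :db.part/tx) resolves to the transaction entity being
;; created, so these datoms annotate the tx itself
@(d/transact conn
   [{:user/name "alice"}
    {:db/id        (d/tempid :db.part/tx)
     :audit/user   [:user/email "admin@example.com"]
     :audit/source :web}])
```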
@currentoor: on pagination, it's actually an interesting problem to solve. the problem is fundamentally this: Datomic doesn't do arbitrary sorting for you like SQL or Mongo do, beyond the sort order present in the 4 indexes (eavt aevt avet vaet). if you needed to e.g. sort a 3 'column' dataset by any of its columns ascending or descending, you're on your own. it's easy to implement, but not performant in the large. i went down the road of caching large data-sets in redis to make paginating and re-sorting the set faster, as all the work required to get the dataset to the point where it's sortable and ready for render is slow when you get to 10,000s and 100,000s
using core.memoize, the cache key is all the fn's args, one of which is the datomic db
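A sketch of that caching setup with core.memoize, where `expensive-report` is a hypothetical function that takes the db value as one of its args:

```clojure
(require '[clojure.core.memoize :as memo])

;; the memoization key is the full argument vector, including the
;; immutable db value, so cached results are automatically scoped
;; to a particular database snapshot
(def cached-report
  (memo/lru expensive-report :lru/threshold 32))
```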
so, we're now looking into ways to reduce the total dataset size before you start sorting, by warning the user of the dataset size up-front and prompting them to apply filters to reduce it
because the likelihood that you're going to page through 1000s of records is ultra low
actual pagination code is very easy:
```clojure
(->> (d/datoms ...) seq (drop (* page-index page-size)) (take page-size))
```
you could have a datalog query or any other collection producing code at the beginning, of course, and you'd also sort before you drop+take
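Putting the sort in front of the drop/take, a minimal sketch over any realized collection-producing step (`sort-key`, `page-index`, and `page-size` are placeholders):

```clojure
(defn page
  "Sort a realized result set, then return one page of it."
  [coll sort-key page-index page-size]
  (->> coll
       (sort-by sort-key)
       (drop (* page-index page-size))
       (take page-size)))
```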
@robert-stuttaford: would recommend also tagging transactions with: a) git sha of the process that produced it b) basic info about the http request (I just do method and path)
Just as a side note, any generated or domain-supplied unique identifier on a transaction is great for dealing with retry logic, since otherwise you have to sync/coordinate after unavailability to see what made it in.
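As a sketch of that retry check, assuming a hypothetical :tx/request-id annotation attribute on each transaction: after a timeout, query for the id to see whether the write actually landed before retrying.

```clojure
(defn tx-applied?
  "True if a transaction tagged with this request id made it into the db.
  Assumes a hypothetical :tx/request-id annotation attribute."
  [db request-id]
  (boolean
    (seq (d/q '[:find ?tx
                :in $ ?id
                :where [?tx :tx/request-id ?id]]
              db request-id))))
```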
Tim Ewald also covered some other use cases of Reified Transactions at the Datomic Conf portion of the Conj this year: http://www.datomic.com/videos.html
how do you guys think about creating partitions for your data? Should I be creating a different partition for every type of entity? So, if we had a notion of users, teams, games, stadiums we would do a separate partition for each?
@davebryand: as I understand it, partitions (primarily) drive index locality, so you want to keep entities you work with together a lot under the same partition. It really depends on how you use teams/games/stadiums/etc.
(I can imagine use cases for those entities where each strategy could be more appropriate.)
gotcha—so depending on the app logic, it might make sense to have a partition per team or something, if that’s a common query pattern?
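For reference, a sketch of installing a custom partition and creating an entity in it with the peer API (`conn` is assumed, and :part/games / :game/name are made-up idents):

```clojure
;; install the partition
@(d/transact conn
   [{:db/id (d/tempid :db.part/db)
     :db/ident :part/games
     :db.install/_partition :db.part/db}])

;; create an entity in that partition, so its datoms sort near
;; other :part/games entities in the indexes
@(d/transact conn
   [{:db/id (d/tempid :part/games)
     :game/name "opening-day"}])
```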
anyone know if there is a way to expand a transaction map form into a list form for debugging?
I’m seeing behavior where it seems to sum up across all of the counts instead of giving me individual counts
@kschrader: can you share an example of what you want the output to look like, and a version of the query, obfuscated from your domain if need be?
@bkamphaus: but using both of them seems to multiply the values together and return that value for both statements
@kschrader: let me think through setting up an analogous query with mbrainz to test, and see expected behavior. What happens if you put ?org in a :with clause? ( http://docs.datomic.com/query.html#with )
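For context: Datomic query results are sets, so identical rows collapse before an aggregate is applied, and :with keeps the contributing entities distinct. A sketch in the style of the docs' example (`db` and :monster/heads are assumptions):

```clojure
;; without the :with clause, two monsters with the same head count
;; collapse into a single row and the sum comes out too low
(d/q '[:find (sum ?heads)
       :with ?monster
       :where [?monster :monster/heads ?heads]]
     db)
```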