Christian Pekeler08:10:29

If I need to know creation-time and last-update-time for most of my documents all the time, is there a better way than explicitly getting the history for each entity to get the time from the first and last history entry? (sorry if this is a faq)


idk the official answer, but I've been recording those as document attributes so they can be queried


and you can backdate them... a "creation time" from a user perspective may be different than the technical tx time the system recorded
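A minimal sketch of the attribute approach described above, assuming the XTDB 1.x `xtdb.api` namespace and hypothetical `:created-at` / `:updated-at` attribute names:

```clojure
(require '[xtdb.api :as xt])

;; Stamp :created-at on first write and :updated-at on every write,
;; so both can be queried (and range-scanned) like any other attribute.
(defn put-with-timestamps [node doc]
  (let [existing (xt/entity (xt/db node) (:xt/id doc))
        now      (java.util.Date.)]
    (xt/submit-tx node
      [[::xt/put (assoc doc
                        :created-at (or (:created-at existing) now)
                        :updated-at now)]])))
```

Note the read-then-write here isn't transactional; with concurrent writers a transaction function would be the safer variant.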

Christian Pekeler11:10:29

I was hoping to avoid this since the DB is already tracking all the times. Good point on the backdating, though.


Hi, there isn't a built-in way to access the valid-time / transaction-time / tx-id data inside a query, although you can certainly do it by calling out to custom functions. Depending on how you are hoping to use valid-time in your data model & queries, it may still be best to also track timestamps inside your documents, since that means they are indexed in a way that can participate in the join order/execution (and benefit from the built-in range lookups & operators, e.g. > and <=)

👍 2

I started out using the history API to attach created-at and updated-at (most recent) at read time for all my documents and found it to be too slow. Then I just started attaching created-at and updated-at to all my documents on write. While specifically about Datomic, this blog post has insights which apply here as well.

👍 1

> found it to be too slow.
Do you recall if that was roughly using the same approach I linked, i.e. lookups inside the query? Or did you try within an open-db? If it wasn't either of those, then I can certainly imagine it would be a fair bit slower.


Ah okay, yes that would certainly add overhead, since each fresh API call against a db "value" (when not explicitly opened as a resource using open-db) has a non-zero initialisation cost against the underlying KV storage
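A sketch of the read-time variant against a single opened snapshot, assuming XTDB 1.x's `open-db` and `entity-history` (so the KV initialisation cost mentioned above is paid once, not once per entity):

```clojure
(require '[xtdb.api :as xt])

;; Derive created-at / updated-at for many entity ids from one snapshot.
(defn history-timestamps [node eids]
  (with-open [db (xt/open-db node)]   ; one KV snapshot for all the lookups
    (into {}
          (for [eid eids
                :let [hist (xt/entity-history db eid :asc)]]
            [eid {:created-at (::xt/valid-time (first hist))
                  :updated-at (::xt/valid-time (last hist))}]))))
```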


aha, good to know - I may play around with doing this on-read again with the code you posted then

Christian Pekeler21:10:13

Interesting read, but I disagree with this article. Of course you don't want to rely on the immutability aspect of your DBMS if your requirement is to keep mutable past revisions of your data. Expecting past revisions (of blog posts) to have tags which were added later seems philosophically incompatible with immutability. The offline-mode argument seems moot when you explicitly set your valid-times (besides, should you trust the client's clock?). And the DBMS merging argument is just ridiculous, because you could use the backwards-compatibility argument against any fundamental technological improvement. And I dislike the recommendation to limit my use of the temporal infos to just technical housekeeping.

🙂 1

> I dislike the recommendation to limit my use of the temporal infos to just technical housekeeping
This is our biggest motivation behind the ongoing R&D work on XT (which is happening entirely in private still, in case anyone wondered). Have you looked at how the SQL:2011 bitemporal spec works before? It relies on the timestamp information within the relation, which is pretty different to our model where we treat time fully orthogonally. The advantage of how SQL:2011 arranges things is that you can still access the data very liberally within a standard query; the disadvantage is that it's much messier to manage (let alone implement) and is tightly coupled to the schema.

Piotr Roterski12:10:02

If I'm getting this right, the problem is specific to Datomic, and XTDB doesn't share it thanks to its core bitemporality feature. When you submit a transaction you can set a valid-time for the data that's different from the transaction-time, which solves the exact problem described in the article: you can query the past world state at a given valid-time even if the data itself was added later (at transaction-time).

💯 1
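For illustration, a sketch of backdating with the XTDB 1.x API (the node setup and document contents are made up):

```clojure
(require '[xtdb.api :as xt])

(def node (xt/start-node {}))  ; in-memory node for the example

;; Record *today* that the post carried these tags back in January:
;; the element after the document in ::xt/put is the (backdated) valid-time.
(xt/submit-tx node
  [[::xt/put {:xt/id :post-1 :tags #{"bitemporality"}}
    #inst "2021-01-15"]])
(xt/sync node)

;; Query the world state as of a past valid-time, even though the
;; data was only transacted now.
(xt/q (xt/db node #inst "2021-02-01")
      '{:find  [?tags]
        :where [[:post-1 :tags ?tags]]})
```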

My takeaway is that if you're building history features into your application's domain, you should model those explicitly using attributes of your data model and not rely on the time-tracking abilities of your data store. One simple example is a use case where you want to express constraints in one datalog query such as: use the current version of a given user (most recent valid time) and some past version of a blog post they wrote, then get the comments for that post. Essentially, use cases where you want to ask questions about documents at various points in their respective histories, not in the history of the entire database.
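Under the current engine, that kind of mixed-point-in-time question has to be stitched together from separate snapshots in application code; a sketch (entity ids and shapes hypothetical):

```clojure
(require '[xtdb.api :as xt])

;; One datalog query runs against one db value, so to combine the
;; current user with a past revision of their post we take two
;; snapshots and join in application code.
(defn user-with-past-post [node user-id post-id as-of]
  (let [db-now  (xt/db node)          ; most recent valid-time
        db-past (xt/db node as-of)    ; past valid-time
        user    (xt/entity db-now user-id)
        post    (xt/entity db-past post-id)]
    {:user user :post post}))
```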


As far as I understand things with xtdb this will still require you to add attributes to track time or versions to your domain


> As far as I understand things with xtdb this will still require you to add attributes to track time or versions to your domain
that's true in the current query engine and with the current index design, yep, but that level of usage is what we hope to unlock in the next major version

❤️ 2

very cool! 🙂 looking forward to it

Marc Nickert12:10:51

Hey, I was searching for the current state of GraphQL support for XTDB. I'm using Dgraph at the moment and was looking for an alternative where I could add spec validations to the GraphQL schema and intercept the auto-generated queries and mutations, and I think this is something we could build around XTDB?


This is very topical 😄 some of the wider JUXT team have been working on adding GraphQL to the site project over the past few weeks. However, site is much more than just a GraphQL layer on top of XT, as it also tries to tackle authz, authn & ops (and HTTP / OpenAPI)


You may be interested to borrow some of the patterns there and re-use the underlying GraphQL lib


Grab is designed to work with XT and Site; in fact, if you looked into Site's ns you'd see how simple it is.

Marc Nickert14:10:14

Thanks a lot 😁