
as far as I understand how Datomic works, it is accepted to have wrong information in past versions of the database (imagine you made a mistake two months ago and only corrected it now). However, since many people might be consuming information at different points in time, they might use wrong information to make decisions, right?

Alex Miller (Clojure team)13:09:37

this is true of all databases. Datomic lets you know that you did it.


sorry but I don't understand how this would help people querying past versions of the data, especially non-technical people. I understand the concept of issuing a new transaction as pointed out by @U050ECB92, but the wrong data is there forever and people might use it.


I was quite confused about the "time travel" functionality in Datomic. The distinction between "event time" and "recording time" was very helpful in making it clearer to me


but I still don't know some of the implications. For example, imagine I want to run a financial report from three months ago. The data is there, cool. But we issued a transaction to correct some balances, so now the team running the report should not use asOf or txInstant, or they will produce a wrong report. Is that right?

Alex Miller (Clojure team)13:09:53

when you assert new facts, you retract the old ones. It is possible to run queries as of a point in time in the past, before the old facts were retracted, but that's not what you would normally do in this case.

Alex Miller (Clojure team)13:09:10

I think there are two notions of time here - one is the transaction times which track when you know things

Alex Miller (Clojure team)13:09:35

and the other is the attributes that are put into the database, and that's what you'd probably report on

Alex Miller (Clojure team)13:09:10

so you might record a transaction at time A that says sales were $100 on June 1 (via a schema attribute). And then you'd have a transaction at time B that says oops actually they were $120 on June 1. If you then run a quarterly report, you'd do it over the schema attributes, so you'd see the "updated" data

Alex Miller (Clojure team)13:09:36

but also you could run the same query asOf A and then compare to see the report before and after corrections
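The scenario above can be sketched in Clojure against the Datomic peer API. The attribute names (`:sales/amount`, `:sales/date`) and the `conn`/`time-a` bindings are hypothetical, for illustration only:

```clojure
;; Transaction at time A: sales were $100 on June 1
@(d/transact conn [{:sales/date   #inst "2019-06-01"
                    :sales/amount 100M}])

;; Transaction at time B: oops, they were actually $120.
;; (Assumes :sales/date is declared :db.unique/identity so the
;; lookup ref resolves to the same entity.)
@(d/transact conn [{:db/id        [:sales/date #inst "2019-06-01"]
                    :sales/amount 120M}])

;; A quarterly report over the current db sees the corrected value:
(d/q '[:find ?date ?amount
       :where [?e :sales/date ?date]
              [?e :sales/amount ?amount]]
     (d/db conn))

;; The same query against an as-of view from before the correction
;; shows what you believed at time A:
(d/q '[:find ?date ?amount
       :where [?e :sales/date ?date]
              [?e :sales/amount ?amount]]
     (d/as-of (d/db conn) time-a)) ; time-a: a t or instant before B
```

The point being: the report query itself doesn't change; only the database value you pass to it does.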

Alex Miller (Clojure team)13:09:01

if you use a SQL database, that's impossible because the data is updated in place


yes! that was very clear. So, at the modeling phase, I should pay attention to which entities might need a "time" attribute other than the transaction time

Alex Miller (Clojure team)13:09:52

anything that you'd need a date-time attribute for in another database model, you probably still need one

Alex Miller (Clojure team)13:09:11

what you typically don't need are things like "created" and "lastUpdated"

🧙 4
👍 4
❤️ 4
Alex Miller (Clojure team)13:09:38

where a relational database needs extra attributes to record transaction metadata - those you get "for free" in Datomic
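One way to see the "for free" part: every datom carries the entity id of the transaction that asserted it, and every transaction entity has a `:db/txInstant`. A hedged sketch (the `:sales/amount` attribute and `db` value are hypothetical) of asking "when was this last updated?" without any lastUpdated column:

```clojure
;; The fourth position in the :where pattern binds the transaction
;; entity that asserted the datom; :db/txInstant is its wall-clock time.
(d/q '[:find ?amount ?when
       :where [?e :sales/amount ?amount ?tx]
              [?tx :db/txInstant ?when]]
     db)
```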


@UBSREKQ5Q If you want something that does this in a more systemic manner, check out the Crux database by Juxt. It has bitemporality built right into it. Not sure how mature it is though...


thank you @U064X3EF3, you made it crystal clear to me now.


@U883WCP5Z I'll take a look at this database as well, however I think this could be overkill for me right now haha, thanks o/

👍 4

would it be better to have a mutable database for business people to interact with, and to raise the developer team's awareness of such facts?


what does the daily operation of a company look like when using Datomic as the main database?


"accountants don't use erasers" -- accountants don't go to previous entries and modify them to fix them, they create reconciliation entries later


the same thing applies with Datomic: if some value is incorrect, you can transact new facts to correct it. It still doesn't change the fact that it was wrong in the past
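The "no erasers" point shows up directly in Datomic's history view: after a correcting transaction, the old value is retracted from the current database value but remains visible in history. A sketch, assuming a hypothetical `:sales/amount` attribute and a live `conn`:

```clojure
;; Against a history db, patterns can bind the added? flag:
;; true for assertions, false for retractions. After a correction,
;; both the retraction of the old amount and the assertion of the
;; new one are present - nothing was erased.
(d/q '[:find ?amount ?added
       :where [?e :sales/amount ?amount _ ?added]]
     (d/history (d/db conn)))
```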


Hi all, we are trying to nail down some behavior of Datomic on-prem where we see a burst of writes to DynamoDB during what we believe to be a read-only operation. It occurred to me that the read-only search query contains a very infrequently-invoked fulltext function call, and while we're not familiar with how this is implemented internally it seems possible that, given that the fulltext index is described as eventually consistent, a Lucene indexing job might be run on demand, causing the burst of writes. Can we rule that out as a possibility, or should we investigate further?


@gws Datomic writes to storage don't necessarily correspond directly to transactions. Indexing jobs occur when the memory index reaches a threshold, and they will include significant write bursts


OK - looking at the memory index, it is otherwise holding relatively steady around the threshold; we suspected that as well but don't see the correlation there


We may have been misreading that CloudWatch chart, thanks for the insight, I think you're right 👍