Dustin Getz13:12:28

History modeling question. Say you have :post/url-slug, a :keyword with :identity. And when you change the slug, you want to prevent breakage by remembering the old slugs. Should slug be :cardinality :many, or should we use history for this?


I'll keep shouting it: don't use history to implement your application logic! 😛


Maybe have 2 attributes, one which is the current slug, another which keeps track of past slugs?


Agree with @U06GS6P1N. I would store multiple slugs in this case, especially if you don't care which one is the latest and only use them to resolve URLs. But if I could take a step back, I would try to put an immutable identifier in the URL for resolution.
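A sketch of what that two-attribute schema could look like (attribute names here are illustrative, not from the thread):

```clojure
;; Illustrative Datomic schema sketch: one attribute for the current slug,
;; plus a cardinality-many attribute that accumulates every slug the post
;; has ever had. Marking the many-valued attribute :db.unique/identity
;; lets any past or present slug resolve to the post via a lookup ref.
[{:db/ident       :post/current-slug
  :db/valueType   :db.type/keyword
  :db/cardinality :db.cardinality/one}
 {:db/ident       :post/slugs
  :db/valueType   :db.type/keyword
  :db/cardinality :db.cardinality/many
  :db/unique      :db.unique/identity}]
```

With that in place, resolving an incoming URL is a plain lookup-ref read against the current db value, e.g. `(d/pull db [:post/current-slug] [:post/slugs incoming-slug])`, with no history API involved.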


thinking about your post there @U06GS6P1N… if you’re concerned about the earth-time that something actually happened (and you probably should be), would it not be better for pretty much every entity to have a created-at attribute in addition to the automatic tx instant?


that would enable clients or really any other process (for example, offline-mode clients) to hold onto the when and not conflate the when-it-actually-happened with when-i-learned-about-it


though i suppose it depends on the needs of your application. how important the “event time” is compared to the “recording time” also seems like a domain concern
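One way to record event time explicitly, as suggested above (the attribute name is hypothetical):

```clojure
;; Domain ("event") time is stored as a plain attribute on the entity;
;; the transaction's :db/txInstant still records when the database
;; learned about it. The two can legitimately differ, e.g. for an
;; offline-mode client that syncs later.
[{:db/ident       :entity/created-at
  :db/valueType   :db.type/instant
  :db/cardinality :db.cardinality/one}]

;; Transacting a fact whose event time predates the transaction:
;; [{:post/current-slug :hello-world
;;   :entity/created-at #inst "2017-11-01T09:00:00.000-00:00"}]
```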

Dustin Getz21:12:51

It is not clear to me that preventing breakage of public identities over time should be considered application logic


I think val's "don't use history to implement your application logic" is a shortcut. Sometimes it makes sense to use history for application logic when real-world time = database time. In your case, I'm guessing that this slug started to exist when it was created in the database, so the two times coincide, and it would be correct to use history for that.

I think it all depends on whether you want the :url/slug attribute to mean "the last slug for this resource and the one to use when publishing this URL" or "all the slugs that redirect to this URL". Another thing to consider is that if you use those slugs to identify resources, you might want to ensure uniqueness. You might not want to scan the whole history for collisions every time you create a new slug. A cardinality-many attribute with an "identity" flag seems easier.
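The collision-check tradeoff mentioned here can be sketched as follows (assuming a cardinality-many `:post/slugs` attribute marked `:db.unique/identity`; names are illustrative):

```clojure
(require '[datomic.api :as d])

;; With a unique attribute, detecting a slug collision is a single index
;; lookup against the current db value; no history walk needed.
(defn slug-taken? [db slug]
  (some? (d/entity db [:post/slugs slug])))

;; The history-based alternative would have to scan every assertion ever
;; made against the attribute, e.g.:
;; (d/q '[:find ?e :in $ ?slug
;;        :where [?e :post/slugs ?slug]]
;;      (d/history db) slug)
```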


“when real-world time = database time” seems to me to break down in two scenarios: any sort of ‘offline mode’ feature, and any time you have a queueing system to absorb heavy load, where ‘real-world event time’ != ‘database time’ (by a significant enough margin to matter)


@U0W0JDY4C Not sure I understand what you mean. The difference between domain time and database time is more general, I think. If you record the fact that person P joined company C on a certain date, then that is "domain time" and Datomic history will not help you with it. But val's article showed that even when you model entities that could coincide with database time (what happens to a blog post is what gets recorded to the database), it is still not a good idea to rely on the history functions to implement features.


yes, sorry for the confusion. we are on the same page: Datomic doesn’t magically handle time related specifically to domains, and if you need domain time it’s important to model that explicitly.
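The "person P joined company C" example above, modeled explicitly as domain data (all idents are illustrative):

```clojure
;; The join date is a domain fact on the relationship itself, independent
;; of whenever the transaction recording it happened to run.
[{:db/ident       :employment/person
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :employment/company
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :employment/start-date
  :db/valueType   :db.type/instant
  :db/cardinality :db.cardinality/one}]
```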


ANN: If you are running the Datomic Cloud Production topology and are using a VPC Endpoint (as detailed here), we are considering improvements that impact this service and would like to hear from you. Please email us ([email protected]) or respond on the forums with a short description of your use case for the VPC Endpoint.


If I delete a large Datomic DB (10-50M datoms), Datomic's DynamoDB provisioned read capacity spikes to 25 and actual reads to 250 for a while. Is there a reason for this?


Further, shouldn't Datomic have auto-scaled the DDB reads up to at least 100?


Read actual stayed up at 250 for about 15 mins.