#datomic
2018-12-06
Dustin Getz 13:12:28

History modeling question. Say you have :post/url-slug :keyword :identity. When you change the slug, you want to prevent breakage by remembering the old slugs. Should the slug be :cardinality :many, or should we use history for this?
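
For concreteness, a sketch of that attribute as Datomic schema; the question only gives the shorthand :keyword :identity, so the exact declaration here is assumed:

```clojure
;; A sketch of the attribute as described: one unique slug per post.
;; When the slug changes, the old value survives only in the history index.
[{:db/ident       :post/url-slug
  :db/valueType   :db.type/keyword
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]
```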

val_waeselynck 17:12:16

I'll keep shouting it: don't use history to implement your application logic! https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html 😛

val_waeselynck 17:12:50

Maybe have 2 attributes, one which is the current slug, another which keeps track of past slugs?
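
A sketch of val's two-attribute idea; the :post/past-url-slugs name is illustrative, not from the thread:

```clojure
;; The current slug stays unique; past slugs accumulate explicitly,
;; so no history query is needed to resolve an old URL.
[{:db/ident       :post/url-slug
  :db/valueType   :db.type/keyword
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}
 {:db/ident       :post/past-url-slugs
  :db/valueType   :db.type/keyword
  :db/cardinality :db.cardinality/many}]

;; Renaming then transacts both facts at once (post-eid is hypothetical):
;; [[:db/add post-eid :post/past-url-slugs old-slug]
;;  [:db/add post-eid :post/url-slug new-slug]]
```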

benoit 17:12:17

Agree with @U06GS6P1N. I would store multiple slugs in this case, especially if you don't care which one is the latest and only use them to resolve URLs. But if I could take a step back, I would try to put an immutable identifier in the URL for resolution.

lwhorton 17:12:32

thinking about your post there @U06GS6P1N… if you’re concerned about the earth-time that something actually happened (and you probably should be), would it not be better for pretty much every entity to have a created-at in addition to the automatic tx instant?

lwhorton 17:12:41

that would enable clients or really any other process (for example, offline-mode clients) to hold onto the when and not conflate the when-it-actually-happened with when-i-learned-about-it

lwhorton 17:12:29

though i suppose it depends on the needs of your application. how important the “event time” is compared to the “recording time” also seems like a domain concern
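
A sketch of what lwhorton describes, assuming the on-prem peer API and a hypothetical :post/created-at attribute: event time is asserted as ordinary data, while :db/txInstant keeps meaning "when the database learned about it":

```clojure
(require '[datomic.api :as d])

;; Hypothetical attribute for when-it-actually-happened:
[{:db/ident       :post/created-at
  :db/valueType   :db.type/instant
  :db/cardinality :db.cardinality/one}]

;; Both times are then queryable side by side (db is assumed bound):
;; ?created  = when it actually happened, asserted by the client
;; ?recorded = when the database learned about it (:db/txInstant)
(d/q '[:find ?created ?recorded
       :where
       [?post :post/created-at ?created ?tx]
       [?tx :db/txInstant ?recorded]]
     db)
```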

Dustin Getz 21:12:51

It is not clear to me that preventing breakage of public identities over time should be considered application logic

benoit 21:12:21

I think val's "don't use history to implement your application logic" is a shortcut. Sometimes it makes sense to use history for application logic, namely when real-world time = database time. In your case, I'm guessing the slug started to exist when it was created in the database, so the two times coincide, and it would be correct to use history for that. It all depends whether you want the :post/url-slug attribute to mean "the latest slug for this resource, the one to use when publishing its URL" or "all the slugs that redirect to this resource". Another thing to consider: if you use those slugs to identify resources, you probably want to ensure uniqueness, and you don't want to scan the whole history for collisions every time you create a new slug. A cardinality-many attribute with an :identity flag seems easier.
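
One caveat on that last point: Datomic requires :db/unique attributes to be :db.cardinality/one, so the literal cardinality-many + :identity combination won't transact. A sketch that keeps the same property (every slug, old or new, resolves the post, and collisions fail up front) models each slug as its own entity; the names here are illustrative:

```clojure
;; Each slug is an entity pointing at its post. :db.unique/value makes
;; a colliding slug fail at transaction time (:db.unique/identity would
;; silently upsert instead), and lookup refs work with either.
[{:db/ident       :slug/value
  :db/valueType   :db.type/keyword
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/value}
 {:db/ident       :slug/post
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}]

;; Registering a slug (post-eid is hypothetical):
;; @(d/transact conn [{:slug/value :my-post, :slug/post post-eid}])

;; Resolving any slug, past or present, without touching history:
;; (d/pull db '[{:slug/post [*]}] [:slug/value :my-post])
```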

lwhorton 21:12:04

“real-world time = database time” seems to me to break down in two scenarios: any sort of ‘offline mode’ feature, and any time you have a queueing system to process heavy load, where ‘real-world event time’ != ‘database time’ (by a significant enough margin to matter)

benoit 21:12:21

@U0W0JDY4C Not sure I understand what you mean. The difference between domain time and database time is more general, I think. If you record the fact that person P joined company C on a certain date, then that date is "domain time" and Datomic history will not help you with it. But val's article showed that even when the entities you model could coincide with database time (what happens to a blog post is what gets recorded to the database), it is still not a good idea to rely on the history functions to implement features.
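
benoit's person/company example as schema, for concreteness (attribute names are illustrative): the join date is a plain domain attribute, carried in the data itself rather than inferred from transaction time:

```clojure
;; Domain time lives in the data; :db/txInstant on the transaction
;; only records when the database heard about the fact.
[{:db/ident       :employment/person
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :employment/company
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :employment/start-date
  :db/valueType   :db.type/instant
  :db/cardinality :db.cardinality/one}]
```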

lwhorton 21:12:55

yes, sorry for the confusion. we are on the same page: datomic doesn’t magically handle domain time, and if you need domain time it’s important to model that explicitly.

marshall 16:12:51

ANN: If you are running the Datomic Cloud Production topology and are using a VPC Endpoint (as detailed here: https://docs.datomic.com/cloud/operation/client-applications.html#create-endpoint), we are considering improvements that impact this service and would like to hear from you. Please email us ([email protected]) or respond on the forums with a short description of your use case for the VPC Endpoint. https://forum.datomic.com/t/requesting-feedback-on-vpc-endpoint-use/721

kenny 21:12:08

If I delete a large Datomic DB (10-50M datoms), Datomic's DDB provisioned reads spike to 25 and actual reads to 250 for a while. Is there a reason for this?

kenny 21:12:26

Further, shouldn't Datomic have auto-scaled the DDB reads up to at least 100?

kenny 21:12:05

Actual reads stayed at 250 for about 15 mins.