
"Datomic does not provide a mechanism to declare composite uniqueness constraints; however, you can implement them (or any arbitrary functional constraint) via transaction functions." Reading the transaction function documentation, it looks like it applies to a single transaction entry but not to the transaction as a whole. Is this true? If so, doesn't this mean that, while you can guarantee that no single entry violates a composite uniqueness constraint in the current db, you cannot guarantee that the whole transaction maintains this constraint? In other words, multiple entries in the same transaction could have duplicate composite keys but there is no way to detect it?


So from looking over the docs, it looks like if I need to change a :db/valueType or :db/fulltext I'm pretty much on my own? I'm guessing that the "right way to do it" would be to create another attribute with the altered type or index setting, copy all data over from the existing attribute, retract the old one, and then rename the new one to the old name?


@shofetim: that’s essentially correct. To change valueType or fulltext you need a new attribute, which you can then migrate data to. You can alter :db/ident in sequence so that the new attribute ends up under the old name — rename the previous attribute to something like :person/id-old, make a new :person/id attribute with the new settings, then migrate data from :person/id-old to :person/id.
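
A rough sketch of that sequence, using the hypothetical :person/id attribute from above and assuming the Datomic peer API (datomic.api, a live conn, and a string as the illustrative new type) — back up first, as noted below:

```clojure
(require '[datomic.api :as d])

;; 1. Rename the existing attribute out of the way.
@(d/transact conn [{:db/id :person/id :db/ident :person/id-old}])

;; 2. Install a new attribute under the original name, with the new type.
@(d/transact conn [{:db/ident       :person/id
                    :db/valueType   :db.type/string   ; the new type
                    :db/cardinality :db.cardinality/one}])

;; 3. Copy data across, converting values as needed (str is just a placeholder
;;    for whatever conversion your old->new type actually requires).
(let [db (d/db conn)]
  @(d/transact conn
     (for [[e v] (d/q '[:find ?e ?v :where [?e :person/id-old ?v]] db)]
       [:db/add e :person/id (str v)])))
```

For large databases you'd want to batch step 3 rather than issue one giant transaction.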


It may also be appropriate to introduce, e.g., rules that will look for values in the appropriate place, and so on. There are definitely users who prefer to migrate all data (e.g. by replaying the log for the entire db), preserving the original tx-instants but remapping values where appropriate to match the new type, rather than introduce that level of complexity into the schema for their db of record.


As always, backup, backup, backup before you try any of this. :simple_smile:


I'm pretty curious if there's a way around @domkm's issue


@bhagany: you could wrap the entirety of every transaction in a transaction function, which takes all the other data per transaction as its argument. Probably overkill, though.
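
For what that might look like: a sketch (untested against a live transactor) of a transaction function that receives the whole batch and enforces a hypothetical composite key on :order/customer + :order/sku, checking the incoming maps both against each other and against the current db:

```clojure
;; Install once, via d/transact:
{:db/ident :demo/tx-unique-order      ; hypothetical name
 :db/fn
 (datomic.api/function
  '{:lang   :clojure
    :params [db tx-data]
    :code
    (let [key-fn (juxt :order/customer :order/sku)
          ks     (map key-fn tx-data)]
      ;; duplicates within this transaction?
      (when (not= (count ks) (count (distinct ks)))
        (throw (ex-info "Duplicate composite key within transaction" {})))
      ;; collisions with data already in the db?
      (doseq [[cust sku] ks]
        (when (seq (datomic.api/q '[:find ?e
                                    :in $ ?c ?s
                                    :where [?e :order/customer ?c]
                                           [?e :order/sku ?s]]
                                  db cust sku))
          (throw (ex-info "Composite key already exists"
                          {:customer cust :sku sku}))))
      ;; all clear: pass the data through unchanged
      tx-data)})}
```

You'd then transact `[[:demo/tx-unique-order [{...} {...}]]]` instead of the raw maps, so every write goes through the check atomically on the transactor.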


heh, I had wondered about something like this


I'm guessing most people enforce uniqueness-per-transaction in their application code


seems better to me, anyway


Sure. You can't do that and have it actually work in the face of concurrency though :)


Transaction Functions are application code that happen to run on the Transactor.


To answer @domkm's question, you define a Transaction Function in terms of one unit of logical change for your data.


If you have an operation which makes multiple changes, all related to the same constraints, then that operation should probably be a single transaction function.
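
Concretely, instead of transacting the individual assertions from the peer, the call site would invoke one (hypothetical) transaction function for the whole operation, so the related changes and their constraint checks happen atomically:

```clojure
;; :demo/transfer-funds is a made-up transaction function standing in for
;; "one unit of logical change"; it would assert both balance updates itself.
@(d/transact conn [[:demo/transfer-funds account-a account-b 100M]])
```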


It's still your code responsible for enforcing the constraint, so it's a very different style of interaction than, say, "table constraints" in SQL databases.


I think what I had in mind would work with concurrency, even outside of a transaction function - I'm imagining a transaction that adds two entities. Your non-transaction-function application code would ensure that your uniqueness constraint holds between those two entities. Then the transaction function does the same check for each entity against db-before. I'm also assuming that if the uniqueness check munges values to make them unique, the munging is guaranteed not to produce a collision between the two added entities.


@stuartsierra: is there a less unwieldy word or phrase for "code that's not a transaction function"?


transaction functions are code that happen to run on the Transactor


I'm looking for a phrase that wouldn't run against your correction of my use of "application code"


oh, "Peer code" maybe


okay, gotcha. much obliged :simple_smile:


you're welcome


I admit these things get a bit fuzzy sometimes…