
We have released a new version of Client-pro, Datomic On-prem, and Datomic Cloud. See announcements below for more info:


@nolan if we bump codec, any reason not to bump to 1.12 (i.e. do you need 1.11 specifically?)


@stuarthalloway no specific need for 1.11. 1.12 would probably make the most sense if you guys were to bump it


Will test ASAP, this is an easy change if they didn't break anything.


Right on, thanks for looking into this Stu! Would be huge for us, but I also recognize there’s a lot beyond my line of sight here. Will be following closely!


wahey! \o/ tuples! specs!


Am I reading it right that tuples can be used as composite keys? If so, that's fantastic! Are there any non-obvious surprises that I would run into if I treat them like I would a SQL composite key?


yep! no idea on the SQL thing. i'm also curious about how the performance changes


Question: for composite attrs, the list of allowed types does not mention refs, but the example in the docs uses a ref


can you put a ref in a composite attr? is the system aware of its ref-ness? this seems like an easy way to accidentally introduce an internal weak eid reference


Hm, I feel tricked, this seems like more than one question 🙂


Yes, yes, and probably yes.
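For anyone following along, a composite tuple schema along these lines might look like the following sketch. The :reg/* attribute names are invented for illustration; note the ref component is stored in the tuple as a plain entity id, which is the "weak eid" concern above:

```clojure
;; Illustrative schema only — the :reg/* names are made up.
[{:db/ident       :reg/course          ; a ref component
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :reg/semester
  :db/valueType   :db.type/keyword
  :db/cardinality :db.cardinality/one}
 {:db/ident       :reg/course+semester ; auto-populated from the attrs above
  :db/valueType   :db.type/tuple
  :db/tupleAttrs  [:reg/course :reg/semester]
  :db/cardinality :db.cardinality/one
  ;; uniqueness is what makes it behave like a SQL composite key
  :db/unique      :db.unique/identity}]
```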


Reading through all the changes right now - that's some amazing timing! I asked @stuarthalloway for this 3 years ago at my first Datomic training, and I just finished writing the last required feature in my first real Datomic application for work yesterday. It's the same domain that I wanted composite keys for, so I should be able to add them in before we move it to full production mode. Days like this make me feel like the universe (or the Datomic team) really has my back. 🙂


"The day after you asked for it" would probably have been better timing... 🙂


Ok, next time I want something big and new I'll expect it right away then! 🙂


hey guys, small question: if I have 2 databases on a transactor, are transactions serializable in the scope of each db, or in the scope of the transactor?


In scope of each DB. There is no coordination between DBs.


so if I have a multi-tenant app, then I can scale with a database per tenant :thinking_face:


As a datomic-cloud user, please bump datomic-free. There is some tooling that uses it.


I can only second this. I’m a paying on-prem user and would like to see all flavors of Datomic at the same version. I’ll also release the free Docker image ASAP.


Soooooo theoretically, couldn't you assert an Entity Spec that ensures you can't ever modify the entity, including retracting the Entity Spec itself? Looks like a wonderful prank. 😈


No. "Specs must be requested explicitly per transaction+entity"


@stuarthalloway thanks, here's the part of the docs that confused me: > Entity specs can be asserted or retracted at any time, and will be enforced starting on the transaction after they are asserted


Might want to change to "and will be enforceable starting on the transaction after they are asserted"


My takeaway was that :db/ensure is a magic attribute; it doesn't actually add any datoms


it adds a postcondition to the current tx, not the entity


so, does that mean it won't show up in tx logs? "decanting" will lose the ensure assertions?


@U09R86PA4 nails it. ":db/ensure is a virtual attribute. It is not added in the database; instead it triggers checks based on the named entity."
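Putting the thread together, here's a minimal sketch. The :score/* names follow the example in the Datomic docs; my.app/in-range? is a made-up predicate that would have to be on the classpath where transactions execute:

```clojure
;; 1. Install the entity spec — these are ordinary datoms.
[{:db/ident        :score/guard
  :db.entity/attrs [:score/low :score/high]   ; required attributes
  :db.entity/preds 'my.app/in-range?}]        ; fn of db-after and eid

;; 2. Later, opt in per transaction+entity. :db/ensure is virtual:
;;    it triggers the checks but adds no datom to the database.
[{:score/low  1
  :score/high 10
  :db/ensure  :score/guard}]
```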


In Datomic Cloud, attribute predicates and entity predicates just need to be on the classpath of your application, not installed as Ions, correct?


Database predicates must be on the classpath where transactions execute. In Cloud this means ions. In On-Prem, it is up to you to start a transactor with the appropriate code on the classpath.
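As a concrete sketch, an attribute predicate is just a fully qualified symbol asserted on the attribute; my.app/valid-email? here is invented and would have to resolve wherever the transaction runs:

```clojure
;; Schema sketch — the predicate symbol must resolve on the
;; transacting classpath (ions in Cloud, the transactor in On-Prem).
[{:db/ident       :user/email
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :db.attr/preds  'my.app/valid-email?}]

;; my.app/valid-email? would be an ordinary fn of the candidate value,
;; returning truthy for valid input, e.g.:
;; (defn valid-email? [s] (boolean (re-matches #".+@.+" s)))
```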


can I use many :db/ensure? like [[:db/add :souenzzo :name "Enzzo"] [:db/add :souenzzo :db/ensure :op1] [:db/add :souenzzo :db/ensure :op2]]?


How do you guys handle schemas in production? Do you let your application codebase run through your schema updates (functions) on startup and send them to Datomic, or do you have an external application that takes care of it?


right now I just transact my schemas as a part of my application init.
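A sketch of that pattern with the peer API (the my.app.init namespace and :user/email attribute are made up; transacting an identical schema again is a no-op, which is what makes running it at every boot safe):

```clojure
(ns my.app.init
  (:require [datomic.api :as d]))

(def schema
  [{:db/ident       :user/email
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])

;; Transacting the same schema twice is idempotent, so this can run
;; unconditionally as part of application init.
(defn init! [uri]
  (let [conn (d/connect uri)]
    @(d/transact conn schema)
    conn))
```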


I do the same, transacting my schema on every startup.


we wrote a tool that we run to update the schema. We have lots of microservices using datomic, so transacting on startup doesn’t make sense for us


We transact the schema on startup. When we move to microservices, we won't be doing that anymore. @UFQT3VCF8 -- how did you architect the use of datomic with microservices?


there’s probably a lot I could talk about, but overall it’s simple — services just connect to DynamoDB and the transactor. We tried using the peer-server and client API, but the peer API was much more efficient under load


Same here. I used to use conformity, as it felt weird not to have some kind of "schema migration" coming from the relational world, but I've since gone to just transacting stuff in.