
Is it possible to save rules to a Datomic database? I've noticed that datalog rules seem to be used (in the examples in the docs) only when scoped to a single query request

; use recursive rules to implement a graph traversal
; (copied from learndatalogtoday)
(d/q {:query '{:find [?sequel]
               :in [$ % ?title]
               :where [[?m :movie/title ?title]
                       (sequels ?m ?s)
                       [?s :movie/title ?sequel]]}
      :args [@loaded-db
             '[[(sequels ?m1 ?m2) [?m1 :movie/sequel ?m2]]
               [(sequels ?m1 ?m2) [?m :movie/sequel ?m2] (sequels ?m1 ?m)]]
             "Mad Max"]})
Is it possible to save a rule to a database so that requests do not need to specify all of their rules like that? I'm looking at modelling programming languages in datalog and so there will be a lot of foundational rules that need to be added and then higher-level ones that build on top of those.


@UGHND87PG you may want to read about the perils of stored procedures 🙂 But AFAICT, for your use case, you don't really need durable storage or rules, you merely need calling convenience. I suggest you either put all your rules in a Clojure Var, or use a library like (shameless plug).
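For what it's worth, a minimal sketch of the Var approach, reusing the query from the example above (this assumes the Datomic client API aliased as `d`, and the `loaded-db` atom from before):

```clojure
;; Keep the shared rule set in one Var...
(def movie-rules
  '[[(sequels ?m1 ?m2) [?m1 :movie/sequel ?m2]]
    [(sequels ?m1 ?m2) [?m :movie/sequel ?m2] (sequels ?m1 ?m)]])

;; ...and pass it as the % input to every query that needs it.
(defn sequels-of [db title]
  (d/q {:query '{:find [?sequel]
                 :in [$ % ?title]
                 :where [[?m :movie/title ?title]
                         (sequels ?m ?s)
                         [?s :movie/title ?sequel]]}
        :args [db movie-rules title]}))

(sequels-of @loaded-db "Mad Max")
```

Callers never see the rule plumbing; they just call the function.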


All that being said, datalog rules are just EDN data; nothing keeps you from storing them, e.g., in :db.type/string attributes.
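A hedged sketch of that idea — the `:rules/*` attribute names here are invented for illustration, not anything Datomic provides, and `conn`, `db`, and `sequel-rules` are assumed to exist:

```clojure
;; Hypothetical schema for storing named rule sets as EDN strings.
(def rules-schema
  [{:db/ident       :rules/name
    :db/valueType   :db.type/keyword
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}
   {:db/ident       :rules/edn
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}])

;; Store a rule set as a string...
(d/transact conn {:tx-data [{:rules/name :movie/sequels
                             :rules/edn  (pr-str sequel-rules)}]})

;; ...and read it back at query time.
(def loaded-rules
  (-> (d/pull db [:rules/edn] [:rules/name :movie/sequels])
      :rules/edn
      clojure.edn/read-string))
```

The database just holds the strings; interpreting them as rules stays in the application.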


gotcha so it's idiomatic to just collect rules that define various bits of business logic on the application side as a large vec or something and then ship that per request?


also -- i would love to read anything you recommend about the perils of stored procedures! I've gone back and forth quite a bit during my career about relying on a database to process your data, but since i now sit firmly on the side of "process your data with a database", i don't feel like discounting them wholesale. in any case, since datalog rules are more closely related to views than stored procs, i kinda want them to be stored in the database the way that table views can be defined in a database. but i'd love to read anything you have about how that feature might be bad and whether it's better to force clients to supply their table views.


philosophically datomic is very much on the side of databases being “dumb” and loosely constrained and having smarts in an application layer. The stored-procedure-like features that exist are there mostly to manage concurrent updates safely, not to enforce business logic. (attribute predicates being a possible, late, narrow exception)


(at least IMHO, I don’t speak for cognitect)


yeah i'm finding a lot of clever things about its ideas of the data layer -- like, most large-scale data systems do well when they enshrine immutability. the fact that datomic does that probably resolves a lot of issues around concurrency/transaction management: when you allow only append-only accretion of data, applications can know at what point in time a fact was true


it'd be nice to know whether my guess about the reason for not storing datalog rules in the database is accurate, but maybe keeping rules -- and the complicated business logic they could implement -- out of the database avoids problems where a change to a rule would break an old client versus a newer client that expects the change. tracing data provenance when the definition of a view is allowed to change makes it difficult to reason about where a number is coming from. forcing the responsibility of interpretation onto the client lets clients manage the complicated parts and keeps the extremely boring fact-persistence/data-observation in one place


I have added a composite tuple to my schema in Datomic and marked it as unique to provide a composite unique constraint on the data. The :db/cardinality is set to :db.cardinality/one and the :db/unique is set to :db.unique/identity. When a unique constraint is set to :db.unique/identity on a single attribute, upsert is enabled if a transaction is executed against an existing entity, as described. I would have expected the behavior to be the same for a composite unique constraint, provided the :db/unique was set to :db.unique/identity. However, that does not appear to be the case: when I try to commit a transaction against an entity that already exists with the specified composite unique constraint, a unique conflict exception is thrown. AFAIK, this is what would happen in the single-attribute example if the :db/unique was set to :db.unique/value. Am I missing something or misunderstanding how things are working? I’m new to Datomic and I’m assuming this is just a misunderstanding on my part.
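(For concreteness, a sketch of the kind of schema being described — the `:order/*` attribute names are made up for illustration:)

```clojure
;; Two component attributes plus a composite tuple over them,
;; marked :db.unique/identity in the hope of getting upsert behavior.
[{:db/ident       :order/customer
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :order/number
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one}
 {:db/ident       :order/customer+number
  :db/valueType   :db.type/tuple
  :db/tupleAttrs  [:order/customer :order/number]
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]
```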


Resolving tempids to entity ids occurs before adjusting composite indexes, so by the time the composite tuple datom is added to the datom set the transaction processor has already decided on the entity id for that datom


To get the behavior you want, you would need to reassert the composite value and its components explicitly every time you updated them


The reason it’s like this is that there’s a circular dependency: to know what the composite tuple should be updated to, the transactor needs to know the entity in order to get its component values and compute the tuple; but to know there’s a conflict, it needs to know the tuple value
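A sketch of the workaround described above, using made-up `:order/*` attributes (a composite tuple `:order/customer+number` over `:order/customer` and `:order/number`, plus a hypothetical `:order/status`): resolve the entity id yourself by querying on the composite value, then transact against it, reasserting the components explicitly.

```clojure
(defn upsert-order! [conn db customer-id n status]
  ;; Look up the entity by its composite tuple value ourselves...
  (let [eid (ffirst (d/q '[:find ?e
                           :in $ ?tuple
                           :where [?e :order/customer+number ?tuple]]
                         db [customer-id n]))]
    ;; ...then transact against the resolved id (or a tempid if new),
    ;; reasserting the component attributes every time we touch them.
    (d/transact conn
      {:tx-data [{:db/id          (or eid "new-order")
                  :order/customer customer-id
                  :order/number   n
                  :order/status   status}]})))
```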


ah, that makes sense. It is relatively trivial to handle the exception and, in the application I’m working on, it is perfectly acceptable to just return an error indicating that the entity already exists. Any individual attributes on the entity that need to be updated can be done as separate operations.


Thanks for the explanation.


If that’s the case, consider using only :db.unique/value instead of identity to avoid possibly surprising upserting in the future.


Just so I’m clear, that is under the assumption that the behavior we discussed above changes such that upserting works with composite unique constraints?


That makes sense to me, just want to make sure I’m understanding correctly.


I guess that’s possible, but I just mean :db.unique/identity is IMHO a footgun in general


if you don’t need upserting, don’t turn it on


gotcha. thanks.


Hi there. I was looking for a more straightforward doc on how to scale up my primary group nodes for my datomic cloud production topology, any of you guys could help me on that?


@schultzkaue do you mean make your instance(s) larger or add more of them?


I wanted more nodes

^ this is how you choose a larger instance size: change the instance type parameter. For increasing the # of nodes: edit the AutoScaling Group for your primary compute group and set it larger


same approach as is used here: but you set it higher instead of setting it down to 0


neat! Thank you