#datomic
2020-07-30
cpdean13:07:09

Is it possible to save rules to a datomic database? I've noticed that datalog rules seem to only be used (in the examples in the docs) when scoped to a single query request

; use recursive rules to implement a graph traversal
; (copied from learndatalogtoday)
(d/q {:query '{:find [?sequel]
               :in [$ % ?title]
               :where [[?m :movie/title ?title]
                       (sequels ?m ?s)
                       [?s :movie/title ?sequel]]}
      :args [@loaded-db
             '[[(sequels ?m1 ?m2) [?m1 :movie/sequel ?m2]]
               [(sequels ?m1 ?m2) [?m :movie/sequel ?m2] (sequels ?m1 ?m)]]
             "Mad Max"]})
Is it possible to save a rule to a database so that requests do not need to specify all of their rules like that? I'm looking at modelling programming languages in datalog and so there will be a lot of foundational rules that need to be added and then higher-level ones that build on top of those.

val_waeselynck14:07:57

@UGHND87PG you may want to read about the perils of stored procedures 🙂 But AFAICT, for your use case, you don't really need durable storage or rules, you merely need calling convenience. I suggest you either put all your rules in a Clojure Var, or use a library like https://github.com/vvvvalvalval/datalog-rules (shameless plug).
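A minimal sketch of the Var approach, reusing the sequels example from above (movie-rules is a made-up name; d, loaded-db and the movie attributes are assumed from the earlier snippet):

;; keep the rule set in one place on the application side...
(def movie-rules
  '[[(sequels ?m1 ?m2) [?m1 :movie/sequel ?m2]]
    [(sequels ?m1 ?m2) [?m :movie/sequel ?m2] (sequels ?m1 ?m)]])

;; ...and pass it to every query that needs it
(d/q {:query '{:find [?sequel]
               :in [$ % ?title]
               :where [[?m :movie/title ?title]
                       (sequels ?m ?s)
                       [?s :movie/title ?sequel]]}
      :args [@loaded-db movie-rules "Mad Max"]})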

val_waeselynck14:07:38

All that being said, datalog rules are just EDN data; nothing keeps you from storing them e.g. in :db.type/string attributes.
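For the string-attribute route, a hedged sketch: the :app/rules attribute name and conn are assumptions, not an established convention, and it reuses movie-rules from the sketch above (client-API style throughout).

(require '[clojure.edn :as edn])

;; hypothetical attribute for holding a rule set as an EDN string
(d/transact conn {:tx-data [{:db/ident :app/rules
                             :db/valueType :db.type/string
                             :db/cardinality :db.cardinality/one}]})

;; store the rules as a string...
(d/transact conn {:tx-data [{:app/rules (pr-str movie-rules)}]})

;; ...and read them back before querying
(def stored-rules
  (edn/read-string
    (d/q '[:find ?r . :where [_ :app/rules ?r]] @loaded-db)))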

cpdean16:07:20

gotcha so it's idiomatic to just collect rules that define various bits of business logic on the application side as a large vec or something and then ship that per request?

cpdean16:07:22

also -- i would love to read anything you recommend about the perils of stored procedures! I've gone back and forth quite a bit during my career about relying on a database to process your data, but since i now sit firmly on the side of "process your data with a database", i don't feel like discounting them wholesale. but in any case, since datalog rules are more closely related to views than stored procs, i kinda want them to be stored in the database the way that table views can be defined in a database. but, i'd love to read anything you have about how that feature might be bad and if it's better to force clients to supply their table views.

favila17:07:09

philosophically datomic is very much on the side of databases being “dumb” and loosely constrained and having smarts in an application layer. The stored-procedure-like features that exist are there mostly to manage concurrent updates safely, not to enforce business logic. (attribute predicates being a possible, late, narrow exception)
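For reference, a minimal sketch of that exception, attribute predicates (:db.attr/preds); the attribute and predicate names here are made up:

;; the predicate must be a pure function on the transactor's classpath
(defn valid-sku? [s]
  (boolean (re-matches #"SKU-\d+" s)))

;; a failing predicate aborts the transaction
(def sku-schema
  [{:db/ident       :inv/sku
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db.attr/preds  'myapp.predicates/valid-sku?}])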

favila17:07:37

(at least IMHO, I don’t speak for cognitect)

cpdean18:07:08

yeah i'm finding a lot of clever things about its ideas of the data layer -- like, most large scale data systems do well when they enshrine immutability. the fact that datomic does that probably resolves a lot of issues around concurrency/transaction management when you allow append-only accretion of data and have applications know at what point in time a fact was true

cpdean19:07:01

it'd be nice to see if my guess about the reason for not storing datalog rules in the database is accurate, but maybe keeping rules (and the complicated business logic they could implement) out of the database means you avoid problems where a change to a rule breaks an old client while a newer client expects the change. tracing data provenance when the definition of a view is allowed to change makes it difficult to reason about where a number is coming from. by forcing the responsibility of interpretation onto the client, the clients manage the complicated parts and the extremely boring fact-persistence/data-observations stay in one place

mafcocinco16:07:49

I have added a composite tuple to my schema in Datomic and marked it as unique to provide a composite unique constraint on the data. The :db/cardinality is set to :db.cardinality/one and :db/unique is set to :db.unique/identity. When a unique constraint is set to :db.unique/identity on a single attribute and a transaction is executed against an existing entity, upsert is enabled as described at https://docs.datomic.com/cloud/schema/schema-reference.html#db-unique-identity. I would have expected the behavior to be the same for a composite unique constraint, provided :db/unique was set to :db.unique/identity. However, that does not appear to be the case: when I try to commit a transaction against an entity that already exists with the specified composite unique constraint, a unique conflict exception is thrown. AFAIK this is what would happen in the single-attribute case if :db/unique were set to :db.unique/value. Am I missing something or misunderstanding how things work? I’m new to Datomic and I’m assuming this is just a misunderstanding on my part.
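For concreteness, a sketch of the kind of schema being described (attribute names made up, modeled on the registration example in the Datomic docs):

(def schema
  [{:db/ident       :reg/course
    :db/valueType   :db.type/ref
    :db/cardinality :db.cardinality/one}
   {:db/ident       :reg/student
    :db/valueType   :db.type/ref
    :db/cardinality :db.cardinality/one}
   ;; composite tuple over the two components, marked unique
   {:db/ident       :reg/course+student
    :db/valueType   :db.type/tuple
    :db/tupleAttrs  [:reg/course :reg/student]
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])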

favila17:07:56

Resolving tempids to entity ids occurs before adjusting composite indexes, so by the time the composite tuple datom is added to the datom set, the transaction processor has already decided on the entity id for that datom.

favila17:07:29

To get the behavior you want, you would need to reassert the composite value and its components explicitly every time you updated them
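Continuing the made-up registration schema above, "reassert explicitly" might look roughly like this, assuming a Datomic version that accepts asserting a composite tuple value that matches its components (course-id and student-id are hypothetical resolved entity ids):

;; asserting the tuple value alongside its components lets the
;; unique-identity tuple resolve the existing entity (upsert)
;; instead of minting a new one and then conflicting
(d/transact conn
  {:tx-data [{:reg/course         course-id
              :reg/student        student-id
              :reg/course+student [course-id student-id]}]})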

favila17:07:56

The reason it’s like this is that there’s a circular dependency: to know what the updated composite tuple should be, the transactor needs to know the entity (to get its component values and compute the tuple), but to detect the conflict it needs to know the tuple value first.

mafcocinco17:07:00

ah, that makes sense. It is relatively trivial to handle the exception and, in the application I’m working on, it is perfectly acceptable to just return an error indicating that the entity already exists. Any individual attributes on the entity that need to be updated can be done as separate operations.

mafcocinco17:07:07

Thanks for the explanation.

favila17:07:07

If that’s the case, consider using only :db.unique/value instead of identity to avoid possibly surprising upserting in the future.

mafcocinco17:07:14

Just so I’m clear, that is under the assumption that the behavior we discussed above changes such that upserting works with composite unique constraints?

mafcocinco17:07:23

That makes sense to me, just want to make sure I’m understanding correctly.

favila17:07:26

I guess that’s possible, but I just mean :db.unique/identity is IMHO a footgun in general

favila17:07:37

if you don’t need upserting, don’t turn it on

mafcocinco17:07:45

gotcha. thanks.

kschltz17:07:50

Hi there. I was looking for a more straightforward doc on how to scale up the primary group nodes for my Datomic Cloud production topology. Could any of you help me with that?

marshall18:07:14

@schultzkaue do you mean make your instance(s) larger or add more of them?

kschltz18:07:35

I wanted more nodes

marshall18:07:37

https://docs.datomic.com/cloud/operation/howto.html#update-parameter ^ this is how you choose a larger instance size (change the instance type parameter). For increasing the # of nodes, see https://docs.datomic.com/cloud/operation/scaling.html#database-scaling: edit the Auto Scaling Group for your primary compute group and set it larger.

marshall18:07:23

same approach as is used here: https://docs.datomic.com/cloud/tech-notes/turn-off.html#org7fdb7ff but you set it higher instead of setting it down to 0
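If you'd rather script it than click through the console, a hedged sketch using Cognitect's aws-api (the group name is a placeholder; the real name comes from your CloudFormation stack):

(require '[cognitect.aws.client.api :as aws])

;; needs com.cognitect.aws/api, endpoints, and autoscaling as deps
(def autoscaling (aws/client {:api :autoscaling}))

;; raise the desired capacity of the primary compute group's ASG
(aws/invoke autoscaling
            {:op      :SetDesiredCapacity
             :request {:AutoScalingGroupName "<your-primary-compute-asg>"
                       :DesiredCapacity      2}})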

kschltz18:07:43

neat! Thank you