This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-07-27
Datomic’s default behaviour in a transaction is to “upsert” (e.g. if an ID specified as :db/id does not represent an existing entity then it’s considered a temporary ID). This seems prone to errors; is there a way to throw an error on transact if :db/id is not referring to an existing entity? And vice-versa?
Ah, never mind, I am wrong on this. It’s not that :db/id gets considered as a temporary ID if it does not represent an existing entity ID. It’s that it uses the value of :db/id as the new entity’s ID instead of auto-generating it.
Basically, I want to make sure that when I am updating an entity I am actually updating an existing entity, and not creating a new one
the closest you can get is to say there are no datoms which contain that number as a reference
(precisely: no datoms in :eavt index with that number in :e, and no datoms in :vaet index with that number in :v)
you can arbitrarily assert datoms against any entity id you want whose t value is <= the internal t counter in the db (and whose partition id is valid? not sure if this is checked)
What I suggest you do is define some notion of entity "existence" for your application (e.g., has a certain attribute asserted) and use a transaction function to assert that the attribute either exists or has a certain value.
{:db/id [:unique-attr "unique-value"] :more "assertions"}
will fail if the lookup ref cannot resolve
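Along the lines of the suggestion above, the existence check can be packaged as a transaction function so it runs atomically on the transactor. A rough sketch, assuming the Datomic peer library; the :my/ensure-exists ident and the attribute names are invented for illustration:

```clojure
;; Install a transaction function whose notion of "existence" is
;; "has at least one datom for the given attribute".
{:db/ident :my/ensure-exists
 :db/fn #db/fn {:lang "clojure"
                :params [db eid attr]
                :code (if (seq (datomic.api/datoms db :eavt eid attr))
                        [] ; entity exists: contribute no extra datoms
                        (throw (ex-info "entity does not exist"
                                        {:eid eid :attr attr})))}}

;; After transacting the function entity above, call it from tx-data
;; so an "update" aborts if the entity is not already there:
;; [[:my/ensure-exists user-eid :user/email]
;;  {:db/id user-eid :user/name "Updated Name"}]
```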
@favila thanks! That’s helpful. I was wondering the same thing about :db.fn/cas working with same old and new value; I have to try it
would db.fn/cas work with old value nil? To check whether an attribute has NOT been set?
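For reference, a nil old value in the built-in compare-and-swap is documented to succeed only when the attribute currently has no value, which matches the “has NOT been set” check being asked about. A tx-data fragment, with a hypothetical entity id and attribute:

```clojure
;; :db.fn/cas is Datomic's built-in compare-and-swap function.
;; With nil as the old value, the assertion succeeds only if
;; :user/email currently has no value on user-eid; otherwise the
;; transaction throws. user-eid and :user/email are hypothetical.
[[:db.fn/cas user-eid :user/email nil "first@example.com"]]
```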
@favila are you doing “existence” checks like this internally? It seems like something that would be quite common, e.g. some users might have the right to update an entity but not create one, etc
ok. And slightly unrelated question: do you use :db/id in your product directly? or do you tag every entity with a uuid?
on the order of minutes. db id is not persisted anywhere, it's just in a client's memory
can a map-form transaction contain a reverse lookup that associates multiple other entities with "this" entity? can't find any docs on this.
it appears i can associate a single entity with a reverse ref:
{:db/id "new-entity"
:book/_subject 12312312312
:person/name "foo"}
but not multiple:
{:db/id "new-entity"
:book/_subject #{12312312312 456456456456}
:person/name "bar"}
I don't know if this is documented. I had to reverse engineer some map format edge cases
forward refs do sometimes accept many, but I don't remember how it decided between one lookup ref vs many items
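If the map-form edge case feels too fragile, the same multi-entity association can be written unambiguously with list-form assertions in the forward direction. A sketch reusing the ids and attribute names from the example above:

```clojure
;; Equivalent to the intended map-form transaction, but with the
;; one-to-many association spelled out as forward :db/add assertions.
;; "new-entity" is a string tempid, resolved within this transaction.
[{:db/id "new-entity"
  :person/name "bar"}
 [:db/add 12312312312 :book/subject "new-entity"]
 [:db/add 456456456456 :book/subject "new-entity"]]
```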
Is there a better / preferable way to get a list of datomic entity-API items out of a query besides this?
(map #(d/entity db %) (d/q '[:find [?eid ...]] db))
Is there a way to prevent peers from executing d/delete-database? (unless they are “privileged” or something)
stop using the peer lib, and use the client lib instead
@robert-stuttaford client lib is in alpha and lacks support for some features though, no?
i guess. i don’t use the client lib 🙂 but that’s basically what it boils down to. the peer is considered to be inside the database
probably not. now that there’s no peer limit, i prefer its programming model
we’re all in with Datomic; it’s our only database, and we’re full-stack Clojure. so the peer is everywhere
the idea of being able to delete your production database with a single line is scary
how do you go about doing continuous backups by the way @robert-stuttaford ?
we have a t2.small that has one job; this script
I thought you were doing something clever observing the transaction log with Onyx and piping it to a backup location
gosh no 🙂
we are looking at DDB streams for cross-region replication so that in Disaster Recovery we can be back up quicker
Are there some guarantees that Datomic’s backup system won’t fuck up the incremental backup?
because when we dry run it right now, the longest part of downtime is copying the backup to another region
we have regularly scheduled backups to non-AWS yes
@robert-stuttaford out of curiosity, do you test that your backups are actually working? this is a non-datomic question, but I was planning to do this on a project
@marshall Hi! Maybe you could answer this question: what exactly happens when you call d/destroy-database? Does it send a message to the transactor? Does it destroy the storage directly? Also, would it be possible to disallow d/destroy-database calls on, say, a production database? (by configuring the transactor or the storage in a certain way)
@hmaurer delete-database tells the transactor to remove the database from the catalog. it doesn’t destroy any of the storage directly (that happens when you later call gc-deleted-databases)
There is currently no way to disable it or launch a peer that can’t call it
at which point will it delete storage if you don’t call gc-deleted-databases? and if it doesn’t delete storage, is there a way to “restore” a deleted database?
not really; there is probably some way to recover it manually, but it wouldn’t be pretty
basically, you probably shouldn’t have any code paths in your system that include a call to delete-database
Think of it a bit like having a DROP TABLE somewhere in your code
Yep, of course. But on PG/MySQL I could configure the production db user to not be able to drop tables at all
Which is reassuring, even if code reviews/linting tools can ensure that your production code does not call those methods
true enough. I would suggest that is a reasonable request to put into our Suggest Features portal 🙂
@marshall another small thing I discussed earlier in a thread: is the “transact” function in the Datomic peer clojure library part of a protocol that I could implement?
Not sure i understand the question. If you need to enforce constraints on the transacted data you can either use a transaction function, or build up the transaction data structure in your peer and do a compare-and-swap against some known value
My bad, my question wasn’t very clear. I would like to add some attributes on every transaction in my application for auditing purposes (e.g. the ID of the current user). To this end, I could wrap d/transact with my own function, so as to add the necessary tx-data to every transaction. I was wondering if the d/transact function in the Clojure API is part of a protocol that I could reify to add my own behaviour. I am quite new to clojure so this might not make sense at all
gotcha - I don’t believe it is extensible in that manner. I’d probably suggest you write a wrapper fn that adds the user info you’re wanting to include and use it exclusively throughout your application
^ that's the approach we've taken at my shop, it works great
@U08QZ7Y5S do you pass the “current user” context manually all the way down to this function? Or implicitly through something like a dynamic var?
Yep. We keep a kind of metadata/session object around which gets initialized from a JWT token at the top of the stack and contains the user-id and roles and some related stuff, then we pass it all the way down to the datomic layer, and then our wrapper function adds a {:db/id (d/tempid :db.part/tx) :user/id blah :meta/data foo} to the transaction data that the calling code passed to it.
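The wrapper described above could be sketched roughly like this, assuming the Datomic peer library; the session map shape and the :audit/user-id attribute are invented for illustration:

```clojure
(require '[datomic.api :as d])

(defn transact-with-audit
  "Like d/transact, but reifies the transaction entity with audit
  attributes taken from the caller's session map."
  [conn session tx-data]
  (let [audit {:db/id (d/tempid :db.part/tx)
               :audit/user-id (:user-id session)}]
    ;; Append the audit map so its datoms land on the tx entity itself.
    (d/transact conn (conj (vec tx-data) audit))))

;; Usage:
;; @(transact-with-audit conn {:user-id 42}
;;    [{:db/id user-eid :user/name "New Name"}])
```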
We did experiment with some ways to avoid needing to explicitly pass the metadata around, but nothing seemed to be significantly better and most of our experiments introduced subtle context semantics that we didn't want
@U08QZ7Y5S thanks for the explanation 🙂 out of curiosity did you wrap other datomic functions (e.g. d/entity) to enforce some security rules based on roles too?
Nope, and having three separate datomic APIs (pull / query / entity) has been a source of some architectural friction for us that we haven't quite solved. What we tend to do these days is return entities from our data layer and then do filtering at the higher levels, but that spreads the filtering logic around our codebase a bit more than we like
It's nothing unsolvable, but returning entities rather than data from the data-access layer has some implications for the complexity of the rest of the code
On the other hand, it keeps the data layer simple and is very flexible in terms of what we can do at the controller level
@hmaurer we restore production to our staging environment daily
part of our business has a content creation component to it, so we’re constantly testing new content with new code
i'm trying to generate a datalog clause that looks like this [(.before ^Date ?event-start ^Date ?start-before)]
(with type hints)
when I don't quote the ^Date they get removed (which seems logical to me): [(list '.before ^Date (calculate-symbol param) ^Date (calculate-symbol other-param))]
==> [(.before ?event-start ?start-before)]
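One way to keep the hints when building the clause at runtime (a sketch in plain Clojure): the ^Date reader syntax attaches :tag metadata at read time, so it is lost on forms constructed with list; attaching the metadata explicitly with with-meta produces the same tagged symbols the reader would:

```clojure
(defn hinted
  "Attach a java.util.Date type hint to a generated symbol,
  equivalent to writing ^Date sym in source."
  [sym]
  (with-meta sym {:tag 'java.util.Date}))

;; Building the clause from generated symbols:
(def clause
  [(list '.before (hinted '?event-start) (hinted '?start-before))])

;; clause prints as [(.before ?event-start ?start-before)], but the
;; hint survives: (meta (second (first clause))) is {:tag java.util.Date}
```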