This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-05-04
I'm trying to upsert an entity and use its temp id with :db.fn/cas
, like so
[{:db/id "my-foo-id"
  :foo/id "my-foo-id"
  ... other foo attributes ...}
 [:db.fn/cas "my-foo-id" :foo/is-processed? nil true]]
but I get an exception
:db.error/not-a-keyword Cannot interpret as a keyword: my-foo-id, no leading :
is there a way to do this?
@davidw I doubt it, because transaction functions happen prior to tempid resolution (for good reasons) - for such cases, you may want to write your own transaction function that is aware of identity attributes
the intention is to use this with datomic cloud so a custom transaction function isn't an option
do you have a reference for the transaction function, tempid resolution order? I'm interested in the good reasons.
I don't have any doc reference about that, but we can reason about it
a transaction function is local - it cannot see the whole transaction it participates in. But tempid resolution has to consider the entire transaction
What's more, if tempid resolution happened before transaction functions, what would happen with tempids emitted by transaction functions?
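The kind of transaction function suggested above, one that resolves the entity through a unique identity attribute before issuing the cas, might be sketched like this on the peer API. The function name and error handling are assumptions for illustration, not something from the thread:

```clojure
(require '[datomic.api :as d])

;; Hypothetical transaction function: a cas keyed on a unique identity
;; attribute + value instead of an entity id or tempid.
(defn cas-by-identity
  [db ident-attr ident-val attr old new]
  ;; d/entid resolves a lookup ref to an entity id, or nil if absent
  (if-let [eid (d/entid db [ident-attr ident-val])]
    [[:db.fn/cas eid attr old new]]
    (throw (ex-info "No entity found for identity attribute"
                    {:attr ident-attr :value ident-val}))))
```

Installed as a database function, it could then be invoked inside a transaction as something like `[:myapp/cas-by-identity :foo/id "my-foo-id" :foo/is-processed? nil true]`.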
And yeah, about Cloud, that's exactly the sort of limitation causing me to not be too enthusiastic about it yet, so I don't know what to tell you really
that makes sense, meaning I can see why it works that way. it's disappointing because it seems like a valid thing to want to do.
do you know off the top of your head if you can use a look up ref with cas?
[:db.fn/cas [:foo/id "my-foo-id"] :foo/is-processed? nil true]
I don't, sorry. I don't use cas much
the only other way I can think to achieve what I want is to insert the empty entity in one transaction and then use the lookup ref with cas in another. It's not as nice as I was hoping for but I think it should work. I'll check it out. Thanks for your help.
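The two-transaction workaround described above might look roughly like this; the attribute names come from the thread, while the connection setup is assumed:

```clojure
;; 1) upsert the (empty) entity in its own transaction
@(d/transact conn [{:foo/id "my-foo-id"}])

;; 2) then cas against it via a lookup ref in a second transaction
@(d/transact conn [[:db.fn/cas [:foo/id "my-foo-id"]
                    :foo/is-processed? nil true]])
```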
Datomic Cloud is definitely lacking in transactional expressive power as of today IMHO
That will change in time
So, how do you approach user-defined fields in Datomic? In say Postgresql I would roll user-defined fields into a map and store that as json. Not perfect, but it works well enough. Storing maps doesn't seem practical with Datomic and I'm not sure I want to let users arbitrarily modify the schema. Anyone have a good solution?
Storing JSON or EDN in strings works decently. I have arbitrary, user-supplied queries and pull expressions in string fields.
However, I'm also curious what kind of application is it in which users want to modify the schema.
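Storing serialized EDN in a string attribute, as described above, might be sketched like this (the `:doc/*` attribute names are illustrative, not from the thread):

```clojure
(require '[clojure.edn :as edn])

;; write: serialize the user-supplied map into a string-typed attribute
@(d/transact conn [{:doc/id        "doc-1"
                    :doc/user-data (pr-str {:color "red" :priority 2})}])

;; read: pull the string back out and parse it into a map
(-> (d/pull (d/db conn) [:doc/user-data] [:doc/id "doc-1"])
    :doc/user-data
    edn/read-string)
```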
In Postgresql you can index by json fields, which is nice. Storing serialized objects in Datomic wouldn't offer the same advantage. User-defined data fields are a pretty common requirement for enterprise software. In Datomic it could be as easy as just modifying the schema whenever a user wants to add a field, but that doesn't seem like a good practice.
Curious if anyone knows: when you set "no history" on a Datomic attribute, does this allow Datomic to do update-in-place at the storage level? Wondering how much of a performance boost that is likely to give if you don't need history (and update-in-place would imply "a lot").
also once a key+value is written it is never mutated (there are only a few exceptions: well-known key names whose values hold the root pointers)
Yeah, that makes sense. I guess I'll just have to micro-benchmark and get a sense of how much it can help.
also you have no guarantees there will never be any history at all--before the index is written, you will see "old" values in the log
https://docs.datomic.com/on-prem/best-practices.html#nohistory-for-high-churn "cost of storing history is frequently not worth the impact on database size or indexing performance." Thinking about it now, I'm sure the "indexing performance" comment is just due to there being less data.
The docs say that it reduces indexing overhead, which implies update-in-place in my mind
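For reference, marking an attribute noHistory is just a schema assertion; a minimal sketch, with an assumed attribute name:

```clojure
;; high-churn attribute whose past values we don't want to keep
@(d/transact conn
  [{:db/ident       :sensor/last-reading
    :db/valueType   :db.type/double
    :db/cardinality :db.cardinality/one
    :db/noHistory   true}])
```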
@lopalghost "seem like a good practice" feels like something, from my perspective, that is coming from a certain SQL security mindset. I have that leaning as well, but as I've thought about it, schema in Datomic: 1. Gives a clear name for something, with a namespace. Allowing schema to "flex" to include user-namespaced things seems natural to me. 2. Gives a clear type to it, that because of (1) can also be given a data specification (e.g. clojure.spec). Personally, I think opening Datomic schema up to extension through a user UI is pretty powerful
of course, new challenges as well... rules like "only grow schema" need to be followed
@tony.kay I'm starting to come around on that line of thinking. I'm definitely coming from a mindset of sql security that might not be relevant to Datomic. Has anyone else tried opening the schema to modification by users?
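Opening the schema to user-driven extension, as discussed above, amounts to transacting a new attribute at runtime; a sketch with an assumed user-namespaced ident:

```clojure
;; grow-only: a new user-namespaced attribute added on demand
@(d/transact conn
  [{:db/ident       :user.field/favorite-color
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/doc         "User-defined field added through the UI"}])
```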
you maybe want [(missing? $ ?e :alarm/cleared_at)]
, but that will match every entity in the entire system that lacks a cleared_at
and you have to set or retract both attributes together all the time in your application code
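In context, the suggested `missing?` clause would usually be combined with an anchoring clause so it doesn't scan every entity; a sketch in which every attribute other than `:alarm/cleared_at` is assumed:

```clojure
(d/q '[:find ?e
       :where
       [?e :alarm/id _]                      ; anchor to alarm entities only
       [(missing? $ ?e :alarm/cleared_at)]]  ; keep the ones not yet cleared
     db)
```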
Seems very unlikely by itself; although obviously cycles devoted to reading it are consumed that wouldn't be otherwise
I think the peers get all TXs all the time anyway; tx-report-queue is just adding them to a user-readable queue
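Consuming the tx-report-queue on a peer looks roughly like this; the handler body is purely illustrative:

```clojure
(def tx-queue (d/tx-report-queue conn))

(future
  (loop []
    ;; .take blocks until the next transaction report arrives;
    ;; each report is a map with :tx-data, :db-before, :db-after
    (let [{:keys [tx-data]} (.take ^java.util.concurrent.BlockingQueue tx-queue)]
      (println "datoms in tx:" (count tx-data)))
    (recur)))
```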
2. Does Datomic cache query results, such that the same query on the same db doesn't hit indexes and such?
No, but a repeat run will typically be very hot: query plan is cached, indexes were loaded, etc