
LOL! Sounds like me... I've been meaning to learn Datomic for "ages" but now that dev-local is available, I think I actually might.


I'm not yet at the stage of working with anything that complex, but so far I'm very happy, probably just because it all makes so much sense ...

John Leidegren05:09:15

How can I move the identity :test/id from the entity 17592186045418 to a new entity (referenced by :test/ref)? Do I have to do this in two separate transactions? All I want to do is move the identity to a new entity in a single transaction. I understand why temp ID resolution is taking place and resolving the temp ID to a conflict, but how can I avoid it? How can I force a new entity here?


Interesting problem! I'm interested to see the solution for this.. I expect you'd have to split into two transactions since you're working with db.unique/identity though

John Leidegren06:09:37

Yeah. That's what I did. I don't like it because now there's a point in time where the database sort of has an inconsistent state. It's not the end of the world, but I really want it to commit as a single transaction. For this to actually go through, the transactor would have to somehow react to the fact that the identity is being retracted during the transaction and, because of that, not allow it to partake in temp ID resolution. (either that, or you tag the temp ID as unresolvable to force a new entity...)
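For reference, the two-step workaround might look like this sketch (the :test/id value "abc-123" and the conn var are placeholders; the entity id is the one from the question above):

```clojure
;; hypothetical sketch of the two-transaction workaround,
;; using the peer API ("abc-123" stands in for the real :test/id value)

;; tx 1: retract the unique identity from the old entity
@(d/transact conn [[:db/retract 17592186045418 :test/id "abc-123"]])

;; tx 2: assert the same identity on a brand-new entity;
;; with the old datom gone, the tempid resolves to a fresh entity id
@(d/transact conn [{:db/id "new-entity" :test/id "abc-123"}])
```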


it seems like a modelling problem if you need to get to a state where an entity has an 'identity', then loses it and gives it to another entity - so I could see why this could be unsupported behaviour

John Leidegren06:09:57

I'm fixing a data problem, or rather, I'm doing this because I'm revising the data model. I ran into this as part of an upgrade script I wrote.

John Leidegren06:09:45

I know Marshall has commented in the past on some of these transactional isolation behaviours and why they might need to work this way, but I'm curious what the reasoning for it is. I can see a way to program around it, but I can also understand that you might not want to just do that.


I suspect that it's a bit of a trade-off - if you have this behaviour, it's simpler to reason about transactions, since it's likely implemented in a ref-resolution phase, then an actual write phase


but if you have clever transactions where you effectively mutate the state for each fact, then things get trickier to accurately reason about


I had this kind of issue for new schema, and schema that used the new schema:

[{:db/ident :ent/displayname}
 {:db/ident :ent/something
  :ent/displayname "Hello"}]
would complain that :ent/displayname is not part of the schema yet


so I had to write a function that checks existence of the properties and then split the schema assertion into multiple phases
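A phased version of that schema assertion might look like the sketch below (the value type and cardinality are assumptions, since the snippet above omitted them):

```clojure
;; phase 1: install the attribute definition
@(d/transact conn [{:db/ident       :ent/displayname
                    :db/valueType   :db.type/string      ;; assumed
                    :db/cardinality :db.cardinality/one}]) ;; assumed

;; phase 2: only now can other entities assert the attribute
@(d/transact conn [{:db/ident        :ent/something
                    :ent/displayname "Hello"}])
```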

John Leidegren06:09:54

Yeah, so this rule applies to attributes, which I sort of understand. You cannot refer to schema before it exists. But for data, though - are the same constraints equally valid?


I imagine the same ref-resolution phase code applies.. I don't know the exact implementation details of course, but that's the picture I have in my head xD It would basically handle every datom against the immutable value prior to the transaction


interestingly, this implementation also implies that you can't provide two values of a :cardinality/one field in the same transaction


(d/transact (:conn datomic)
  [[:db/add "temp" :ent/displayname "hello"]
   [:db/add "temp" :ent/displayname "bye"]])


:db.error/datoms-conflict Two datoms in the same transaction conflict
                             {:d1 [17592186045457 :ent/displayname \"hello\" 13194139534352 true],
                              :d2 [17592186045457 :ent/displayname \"bye\" 13194139534352 true]}


since it can't imply the db/retract for the "hello" value


/s/field/attribute, of course

John Leidegren07:09:54

Yeah, the application of transactions is unordered, so if you say add twice for the same attribute of cardinality one it cannot know which one you meant so it rejects the transaction.


ah, I see - so by that constraint, the same applies to retracting an identity and re-using it on a new entity


what version of datomic are you using?


and cloud or on-prem?

John Leidegren12:09:40

@U05120CBV It's actually datomic-free-0.9.5703.21 so maybe this isn't a problem elsewhere


but it's possible this is unrelated

John Leidegren13:09:52

hehe, that description seems to fit my problem very well. oh well. Thanks for letting me know.


@UNV3H01PS do you have a starter license? can you try it in starter and/or with Cloud?


i can also look at trying to reproduce

John Leidegren14:09:21

Thanks but I'm just fooling around. As long as I know this isn't the intended behavior that's fine. I know what to do now.


:thumbsup: we’ll look into it anyway

❤️ 3

Hey @UNV3H01PS, Marshall tasked me with looking into this and I wanted to clarify that this is indeed intended behavior and not related to the fix Marshall described. You already have the rough reason here:

> Yeah, the application of transactions is unordered, so if you say add twice for the same attribute of cardinality one it cannot know which one you meant so it rejects the transaction.

You cannot transact on the same datom twice and have it mean separate things in the same transaction. You have to split the transactions up: retract the entity's identity first, then assert the new identity. Ultimately what you're doing here is cleaning up a modeling decision; in addition to separating your retraction and add transactions, you could alternatively model a new identity and use that identity going forward, preserving the initial decision.


I know you were already past this problem, but I hope that clears things up.

John Leidegren10:09:38

@U1QJACBUM Oh, thanks for getting back to me. I really appreciate it.


Hello, I'm new to Clojure and Datomic. I'm using the min aggregate to find the lowest-priced product, but can't seem to figure out how to get the entity ID of the product along with it -

;; schema
(def product-offer-schema
  [{:db/ident :product-offer/product
    :db/valueType :db.type/ref
    :db/cardinality :db.cardinality/one}
   {:db/ident :product-offer/vendor
    :db/valueType :db.type/ref
    :db/cardinality :db.cardinality/one}
   {:db/ident :product-offer/price
    :db/valueType :db.type/long
    :db/cardinality :db.cardinality/one}
   {:db/ident :product-offer/stock-quantity
    :db/valueType :db.type/long
    :db/cardinality :db.cardinality/one}])
(d/transact conn product-offer-schema)

;; add data
(d/transact conn
  [{:db/ident :vendor/Alice}
   {:db/ident :vendor/Bob}
   {:db/ident :product/BunnyBoots}
   {:db/ident :product/Gum}])
(d/transact conn
  [{:product-offer/vendor  :vendor/Alice
    :product-offer/product :product/BunnyBoots
    :product-offer/price   9981 ;; $99.81
    :product-offer/stock-quantity 78}
   {:product-offer/vendor  :vendor/Alice
    :product-offer/product :product/Gum
    :product-offer/price   200 ;; $2.00
    :product-offer/stock-quantity 500}
   {:product-offer/vendor  :vendor/Bob
    :product-offer/product :product/BunnyBoots
    :product-offer/price   9000 ;; $90.00
    :product-offer/stock-quantity 15}])

;; This returns the lowest price for bunny boots as expected, $90:
(def cheapest-boots-q '[:find (min ?p) .
                        :where
                        [?e :product-offer/product :product/BunnyBoots]
                        [?e :product-offer/price ?p]])
(d/q cheapest-boots-q db)
;; => 9000

;; However I also need the entity ID for the lowest-priced offer, and
;; when I try adding it, I get the $99.81 boots:
(def cheapest-boots-q '[:find [?e (min ?p)]
                        :where
                        [?e :product-offer/product :product/BunnyBoots]
                        [?e :product-offer/price ?p]])
(d/q cheapest-boots-q db)
;; => [17592186045423 9981]
I think I might see what's going on - it's grouping on entity ID, and returning a (min ?p) aggregate for each one (so basically useless). But I'm not sure how else to get the entity ID in the result tuple... should I not be using an aggregate at all for this?


datalog doesn’t support this kind of aggregation (neither does sql!)


you can do this with a subquery that finds the max, then find the e with a matching max in the outer query; or, do it in clojure

👍 3

:find ?e ?p then (apply max-key peek results) (for example)
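Adapted to the cheapest-offer question (min-key rather than max-key), that suggestion might look like this sketch:

```clojure
;; return all [?e ?p] tuples, then pick the cheapest on the Clojure side
(def boots-offers-q '[:find ?e ?p
                      :where
                      [?e :product-offer/product :product/BunnyBoots]
                      [?e :product-offer/price ?p]])

;; peek returns the last element of each tuple (the price);
;; min-key keeps the tuple whose price is smallest
(apply min-key peek (d/q boots-offers-q db))
;; => the [entity-id 9000] tuple for Bob's $90.00 offer
```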


the reason datalog and sql don’t do this is because the aggregation is uncorrelated: suppose multiple ?e values have the same max value: which ?e is selected? the aggregation demands only one row for the grouping


(you still have that problem BTW--you may need to add some other selection criteria)


Ah I see, thank you!

Vishal Gautam20:09:38

Hello I am playing with dev-local datomic. When I try to create a database I get error.

java.nio.file.NoSuchFileException: "/resources/dev/quizzer/db.log"
Here is the full code
(ns quizzer.core
  (:require [datomic.client.api :as d]))

(def client (d/client {:server-type :dev-local
                       :storage-dir "/resources"
                       :system "dev"}))

;; Creating a database
(defn make-conn [db-name]
  (d/create-database client {:db-name db-name})
  (d/connect client {:db-name db-name}))

(d/create-database client {:db-name "quizzer"})
Any ideas? 🙂

Alex Miller (Clojure team)20:09:05

does /resources/dev/quizzer exist?

Alex Miller (Clojure team)20:09:35

or more simply, does /resources exist?

Vishal Gautam20:09:16

I placed it in the root directory. Here is project structure

├── deps.edn
├── resources
│   └── dev
│       └── quizzer
│           └── db.log
├── src
│   └── quizzer
│       └── core.clj
└── test
    └── quizzer
        └── core_test.clj

7 directories, 5 files

Vishal Gautam20:09:51

And the deps.edn structure

{:paths ["src" "resources" "test"]
 :deps {org.clojure/clojure              {:mvn/version "1.10.1"}
        com.datomic/dev-local            {:mvn/version "0.9.195"}}
 :aliases {:server {:main-opts ["-m" "quizzer.core"]}
           :test {:extra-paths ["test/quizzer"]
                  :extra-deps  {lambdaisland/kaocha {:mvn/version "0.0-529"}
                                lambdaisland/kaocha-cloverage {:mvn/version "1.0.63"}}
                  :main-opts   ["-m" "kaocha.runner"]}}}

Alex Miller (Clojure team)20:09:05

"/resources" is an absolute path

Alex Miller (Clojure team)20:09:41

I assume that's in your ~/.datomic/dev-local.edn

😮 3
Vishal Gautam20:09:28

Absolute path was the problem, thank you @alexmiller
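For anyone else hitting this: :storage-dir has to point at the directory's real absolute location on disk, not a path like "/resources" that only exists relative to the project root. A sketch (the path below is an assumed example):

```clojure
(def client (d/client {:server-type :dev-local
                       ;; assumed example path; substitute the real
                       ;; absolute location of your project's resources dir
                       :storage-dir "/Users/vishal/quizzer/resources"
                       :system "dev"}))
```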

Jake Shelby21:09:05

I have a datomic cloud production topology, which shows the correct number of datoms in the corresponding CloudWatch dashboard panel..... however, the datoms panel for my other solo topology never shows any datoms, no matter how many I transact into the system


I know solo reports a subset of the metrics, but according to the docs, solo should report the datoms metric:

> Note: In order to reduce cost, the Solo Topology reports only a small subset of the metrics listed above: Alerts, Datoms, HttpEndpointOpsPending, JvmFreeMb, and HttpEndpointThrottled.

Not sure what's going on. I'm seeing the same on our solo stacks though @U018P5YRB8U.

Jake Shelby21:09:23

thanks for checking your system @U083D6HK9, what version is yours? (I just launched mine last week, so it's the latest version)

▶ datomic cloud list-systems                                                                                           
[{"name":"core-dev", "storage-cft-version":"704", "topology":"solo"},


Same version

Vishal Gautam23:09:11

Does anyone have an example of how tx-report-queue is used?
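A minimal sketch, assuming the on-prem peer API (datomic.api) - note that tx-report-queue is not part of the client API used with dev-local - with a placeholder database URI:

```clojure
(require '[datomic.api :as d])

;; placeholder URI; point at your own transactor/database
(def conn (d/connect "datomic:dev://localhost:4334/example"))

;; the queue yields one report map per committed transaction,
;; with keys such as :tx-data and :db-after
(def queue (d/tx-report-queue conn))

(future
  (loop []
    ;; .take blocks until the next transaction report arrives
    (let [report (.take ^java.util.concurrent.BlockingQueue queue)]
      (println "datoms in tx:" (count (:tx-data report))))
    (recur)))

;; stop receiving reports when done:
;; (d/remove-tx-report-queue conn)
```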