2019-06-13
is it possible to create a lookup ref that relies on more than one attribute? The answer seems to be no, but it wouldn't be hard for me to just write the query for the entity id and use that, I suppose.
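For reference, a minimal sketch of that workaround; the attributes :user/first-name and :user/last-name are hypothetical stand-ins for whatever pair the lookup would combine:

(require '[datomic.api :as d])

;; Hypothetical helper: resolve an entity id by matching two attributes,
;; then use the resulting eid wherever a lookup ref would have gone.
(defn eid-by-two-attrs
  "Returns the entity id matching both attribute values, or nil."
  [db first-name last-name]
  (d/q '[:find ?e .
         :in $ ?fn ?ln
         :where
         [?e :user/first-name ?fn]
         [?e :user/last-name ?ln]]
       db first-name last-name))

;; Usage sketch: transact against the resolved eid.
;; @(d/transact conn [[:db/add (eid-by-two-attrs (d/db conn) "Ada" "Lovelace")
;;                     :user/email "ada@example.com"]])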
so, if I'm running (datomic.api/connect db-uri)
... am I supposed to have AMQ running in the background?
Because I'm getting this
AMQ119007: Cannot connect to server(s). Tried with all available servers.
for reference, this was attempted with MySQL and the transactor running in Docker containers, attempting to connect to the transactor from the host machine
(attempting to troubleshoot why my ring server is having trouble connecting to datomic)
@goomba no. see https://docs.datomic.com/on-prem/deployment.html#peer-fails-to-connect most likely your host and alt-host values in your transactor properties file are the issue
haha, it's even highlighted on the page 🙂 thanks @marshall, I'll take a look
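A hedged sketch of the relevant transactor.properties fragment for a dockerized transactor; the address values here are assumptions to adapt to your network:

# assumed: address the transactor binds to inside the container
host=0.0.0.0
# assumed: address peers outside the container use to reach it (e.g. the docker host)
alt-host=192.168.1.10
port=4334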
alright this time I think it actually is the peer
ring_1 | ERROR: AMQ214016: Failed to create netty connection
ring_1 | javax.net.ssl.SSLException: handshake timed out
ring_1 | at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
ring_1 |
ring_1 | Jun 13, 2019 1:36:04 PM org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector createConnection
ring_1 | ERROR: AMQ214016: Failed to create netty connection
ring_1 | javax.net.ssl.SSLException: handshake timed out
ring_1 | at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
ring_1 |
ring_1 | Exception in thread "main" Syntax error compiling at (db.clj:42:11).
ring_1 | Error communicating with HOST 172.20.0.4 on PORT 4334
I assume this is because I'm not exposing a port in docker properly
will look into it
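A hypothetical docker-compose fragment publishing the transactor's peer port (4334, matching the error above), in case that is the missing piece:

services:
  transactor:
    image: my-datomic-transactor   # assumed image name
    ports:
      - "4334:4334"                # port peers connect to via alt-host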
I've got a weird corner case where datomic and datascript agree: tempids can unify together, but not a tempid and an eid:
[[:db/add "foo" :db/ident :foo] [:db/add "bar" :db/ident :foo]] ; works fine and resolve to a single eid
[[:db/add <existing-eid-or-lookup-ref> :db/ident :foo] [:db/add "bar" :db/ident :foo]] ; doesn't work
(-> "datomic:" d/connect
(d/transact [[:db/add (d/tempid :db.part/user) :db/ident :foo] [:db/add "bar" :db/ident :foo]]) deref)
=>
{:db-before datomic.db.Db@5e4de82f,
 :db-after datomic.db.Db@b6765e2a,
 :tx-data [#datom[13194139534312 50 #inst "2019-06-13T14:19:50.056-00:00" 13194139534312 true]
           #datom[17592186045417 10 :foo 13194139534312 true]],
 :tempids {-9223350046623220289 17592186045417, "bar" 17592186045417}}
(-> "datomic:" d/connect
(d/transact [[:db/add (d/tempid :db.part/user) :db/ident :foo] [:db/add "bar" :db/ident :foo]]) deref)
=>
{:db-before datomic.db.Db@b6765e2a,
 :db-after datomic.db.Db@276d2771,
 :tx-data [#datom[13194139534314 50 #inst "2019-06-13T14:20:00.447-00:00" 13194139534314 true]],
 :tempids {-9223350046623220291 17592186045417, "bar" 17592186045417}}
seems to work for initial write also, but honestly the partition difference seems like it should be an error. I don't know how it would choose a partition
(-> "datomic:" d/connect
(d/transact [[:db/add (d/tempid :db.part/db) :db/ident :bar] [:db/add "bar" :db/ident :bar]]) deref)
=>
{:db-before datomic.db.Db@28ac123,
 :db-after datomic.db.Db@a8ad04f9,
 :tx-data [#datom[13194139534319 50 #inst "2019-06-13T14:22:26.281-00:00" 13194139534319 true]
           #datom[63 10 :bar 13194139534319 true]],
 :tempids {-9223367638809264717 63, "bar" 63}}
"doesn't work" means? error?
@favila @alexmiller fuller repro: 1/ transacting with two tempids and one ident works fine
user=> (d/transact conn [[:db/add "foo" :db/ident :foo] [:db/add "bar" :db/ident :foo]])
#object[datomic.promise$settable_future$reify__4751 0x3386c206 {:status :ready, :val {:db-before datomic.db.Db@a02b4ea8, :db-after datomic.db.Db@e2dbad4b, :tx-data [#datom[13194139534321 50 #inst "2019-06-13T14:39:47.738-00:00" 13194139534321 true] #datom[17592186045426 10 :foo 13194139534321 true]], :tempids {"foo" 17592186045426, "bar" 17592186045426}}}]
2/ transacting the same ident on an existing eid AND a tempid fails:
user=> (d/transact conn [[:db/add :foo :db/ident :bar] [:db/add "bar" :db/ident :bar]])
#object[datomic.promise$settable_future$reify__4751 0x2321e482 {:status :failed, :val #error {
:cause ":db.error/datoms-conflict Two datoms in the same transaction conflict\n{:d1 [:foo :db/ident :bar 13194139534323 true],\n :d2 [17592186045428 :db/ident :bar 13194139534323 true]}\n"
:data {:d1 [:foo :db/ident :bar 13194139534323 true], :d2 [17592186045428 :db/ident :bar 13194139534323 true], :db/error :db.error/datoms-conflict}
:via
[{:type java.util.concurrent.ExecutionException
:message "java.lang.IllegalArgumentException: :db.error/datoms-conflict Two datoms in the same transaction conflict\n{:d1 [:foo :db/ident :bar 13194139534323 true],\n :d2 [17592186045428 :db/ident :bar 13194139534323 true]}\n"
:at [datomic.promise$throw_executionexception_if_throwable invokeStatic "promise.clj" 10]}
{:type datomic.impl.Exceptions$IllegalArgumentExceptionInfo
:message ":db.error/datoms-conflict Two datoms in the same transaction conflict\n{:d1 [:foo :db/ident :bar 13194139534323 true],\n :d2 [17592186045428 :db/ident :bar 13194139534323 true]}\n"
:data {:d1 [:foo :db/ident :bar 13194139534323 true], :d2 [17592186045428 :db/ident :bar 13194139534323 true], :db/error :db.error/datoms-conflict}
:at [datomic.error$argd invokeStatic "error.clj" 77]}]
...
where I would have expected the tempid to be assigned the existing eid (upsert semantics)
user=> (d/transact conn [[:db/add 17592186045426 :db/ident :bar] [:db/add "tmp" :db/ident :bar]])
#object[datomic.promise$settable_future$reify__4751 0x2964511 {:status :failed, :val #error {
:cause ":db.error/datoms-conflict Two datoms in the same transaction conflict\n{:d1 [:foo :db/ident :bar 13194139534323 true],\n :d2 [17592186045428 :db/ident :bar 13194139534323 true]}\n"
resolution of a tempid to a possible real id is done using the db-before value; for this to work as expected, it would have to be done with a db-after value
i.e., [:db/add "tmp" :db/ident :bar] would have to rewrite "tmp" to 17592186045426 before it knew that 17592186045426 had the ident :bar
What about this one (less heavy on db/idents)?
user=> (d/transact conn [{:db/ident :u/k :db/unique :db.unique/identity :db/valueType :db.type/keyword :db/cardinality :db.cardinality/one}])
user=> (d/transact conn [[:db/add :u/k :u/k :foo] [:db/add "tmp" :u/k :foo]])
(exception)
then how come
(d/transact conn [[:db/add "tmp1" :u/k :bar] [:db/add "tmp2" :u/k :bar]])
works?
btw I don't need any db to figure this out: it's a purely local unification; I could postprocess the fully expanded tx-data to perform the unification...
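A rough sketch of that local-unification idea; unify-tempids and the unique-attr? predicate are hypothetical names, not Datomic API:

;; Purely local pass over expanded tx-data: group [:db/add e a v] ops by
;; unique attribute+value, and map every string tempid in a group to one
;; representative id (a real eid if present, else the first tempid).
(defn unify-tempids
  [unique-attr? tx-data]
  (let [groups (group-by (fn [[_op e a v]] (when (unique-attr? a) [a v]))
                         tx-data)]
    (into {}
          (for [[k ops] groups
                :when k
                :let [es  (map second ops)
                      rep (or (first (remove string? es)) (first es))]
                e es
                :when (string? e)]
            [e rep]))))

;; (unify-tempids #{:u/k} [[:db/add "tmp1" :u/k :bar] [:db/add "tmp2" :u/k :bar]])
;; => {"tmp1" "tmp1", "tmp2" "tmp1"}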
I'm still suspicious of trying to unify against something that hasn't been written yet
I'm pretty sure I exploit lack of unification in cases like this to detect real conflicts
I can sort of see why [[:db/add "tmp1" :u/k :bar] [:db/add "tmp2" :u/k :bar]] might be allowed to unify because :u/k :bar doesn't exist yet
how do you know it doesn't exist yet? In fact it works even if it preexists:
user=> (d/transact conn [{:u/k :preexisting}])
#object[datomic.promise$settable_future$reify__4751 0x62cd562d {:status :ready, :val {:db-before datomic.db.Db@1d500de1, :db-after datomic.db.Db@c2499ebc, :tx-data [#datom[13194139534328 50 #inst "2019-06-13T15:10:42.368-00:00" 13194139534328 true] #datom[17592186045433 64 :preexisting 13194139534328 true]], :tempids {-9223301668109598113 17592186045433}}}]
user=> (d/transact conn [[:db/add "tmp1" :u/k :preexisting] [:db/add "tmp2" :u/k :preexisting]])
#object[datomic.promise$settable_future$reify__4751 0x14b5752f {:status :ready, :val {:db-before datomic.db.Db@c2499ebc, :db-after datomic.db.Db@98f067bc, :tx-data [#datom[13194139534330 50 #inst "2019-06-13T15:11:01.592-00:00" 13194139534330 true]], :tempids {"tmp1" 17592186045433, "tmp2" 17592186045433}}}]
similarly, if [:u/k :bar] didn't exist and was asserted, kinda makes sense to say that every other tempid trying to assert that would unify to the same newly-minted eid
but when :u/k :bar could unify to a real eid, to ask a different tempid asserting a new [:u/k :foo] to unify to the what-is-now :bar but what-will-be :foo seems like too much mind-reading
e.g. {:db/id 12345 :refa :refa1 :refb :refb1} {:db/id 67890 :refa :refa2}
tx [[:db/add "t1" :refa :refa2] [:db/add "t2" :refb :refb1]]
@(d/transact conn [{:db/id "t1" :refa :a1 :refb :b1} {:db/id "t2" :refa :a2 :refb :b2}])
@(d/transact conn [[:db/add "t3" :refa :a1] [:db/add "t3" :refb :b3]])
=>
{:db-before datomic.db.Db@20fead49,
 :db-after datomic.db.Db@c13cf2d2,
 :tx-data [#datom[13194139534316 50 #inst "2019-06-13T15:43:29.444-00:00" 13194139534316 true]
           #datom[17592186045418 64 :b3 13194139534316 true]
           #datom[17592186045418 64 :b1 13194139534316 false]],
 :tempids {"t3" 17592186045418}}
this btw is also the tx ops from expansion of the tx map `{:db/id "t3" :refa :a1 :refb :b3}`
If I were going back in time, I think I would make upserting attributes work like lookup refs that may not resolve against the db-before
e.g. {:db/id [:refb :b3] :refa :a1} would (if :b3 didn't exist) reliably make a new eid, assert :refb :b3 on it, and assert :refa :a1 on it
actually you may be able to replicate some of that behavior by consistently hashing the same ref lookup to the same string-for-tempid
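A sketch of that hashing idea; ref->tempid is a hypothetical name:

;; Derive a stable string tempid from a unique-attribute/value pair so that
;; every op about the same logical entity uses the same tempid and therefore
;; unifies within the tx.
(defn ref->tempid [attr value]
  (str attr "|" (pr-str value)))

(let [t (ref->tempid :refb :b3)]
  [[:db/add t :refb :b3]
   [:db/add t :refa :a1]])
;; => [[:db/add ":refb|:b3" :refb :b3] [:db/add ":refb|:b3" :refa :a1]]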
or even worse [[:db/add "t1" :refa :refa2] [:db/add "t1" :refb :refb1]]: was I making a new :refa2 or changing :refa1 to :refa2?
you have to decide whether forms like [:db/add "t1" :refa :refa2] are primarily a lazy way of resolving tempids or a way to assert a new ident
in general when I want my tx to be an update rather than an upsert, I will use the ident or lookup ref as the eid
> you have to decide whether forms like [:db/add "t1" :refa :refa2] are primarily a lazy way of resolving tempids or a way to assert a new ident
Neither (or both): they are the upsert semantics, and static analysis of the tx-data is enough.
[:db/add eid :ref :A] [:db/add "tmp" :ref :A]
• :A doesn't exist yet in the db -> tmp resolves to eid
• :A does exist but on another eid -> it's a uniqueness conflict
• :A does already exist on this eid -> tmp resolves to eid
In all non-conflicting cases we get the same output.
Hello all, I have a question due to lack of conceptualization: why is :db/cas
necessary if the following is true for a Datomic system:
>The transactor queues transactions and processes them serially.
Serial processing seems to imply no need for check-and-set, but I'm surely missing something.
If the value you're about to set is dependent upon its previous value (like a bank account) then you want cas.
That makes sense. So then I think I've been assuming that the transaction functions themselves are run serially, but in reality, they might be run (expanded) in parallel, and their resulting datoms are what actually get sent to the transactor for serial processing. Is that correct?
the problem is by the time that tx data gets to the transactor, your assumptions may be wrong
:db/cas is a way to assert that something still has the value you read at the moment the write occurs
the transactions themselves are applied serially, but the transaction data was not prepared serially (i.e. it was prepared by uncoordinated peers reading whatever they read)
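A minimal sketch of that pattern; the :account/id and :account/balance attributes are assumptions:

;; Read on the peer, then let :db/cas fail the transaction if another peer
;; changed the balance between our read and the write.
(let [db      (d/db conn)
      eid     [:account/id "acct-1"]           ; hypothetical lookup ref
      balance (:account/balance (d/entity db eid))]
  @(d/transact conn
     [[:db/cas eid :account/balance balance (+ balance 100)]]))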
I see, thank you @favila. So then, in a Cloud system, the "compute group" actually can prepare datoms (i.e. run tx fns) in an uncoordinated fashion, but the actual mutation of storage is always serial?
This actually would explain why :db/cas is a built-in, because there must be some storage-level magic happening to ensure that CAS's promise is kept.
the same description for the use/need/purpose of CAS for on-prem that Francis provided above is true for Cloud
Okay so to check my own understanding, if I prepare the tx outside of a tx function, then :db/cas might be required. However, if I query for the dependent data inside a tx function, then I shouldn't need CAS, correct?
Here's actually an interesting example from the docs: https://docs.datomic.com/cloud/transactions/transaction-functions.html#creating
Notice that inc-attr does not use :db/cas even tho it depends on the previous value…
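A hedged sketch of what such an increment function can look like (a Cloud-style tx fn is just a function of a db value plus args, returning tx-data; names here are assumptions, not the docs' exact code). It reads the current value from the db the transactor passes in, so the read and write happen in the same serial step and no cas is needed:

(defn inc-attr-sketch
  [db eid attr amount]
  ;; Pull the current value from the in-transaction db value; default to 0.
  (let [cur (or (get (d/pull db [attr] eid) attr) 0)]
    [[:db/add eid attr (+ cur amount)]]))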
:db/cas is a transaction function
you can use it within a custom transaction function you write, but you don't have to
you can also reimplement it (or something like it) as a transaction function yourself
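A sketch of such a reimplementation as an on-prem transaction function; :my/cas and the error message are hypothetical:

(def my-cas
  (d/function
    '{:lang :clojure
      :params [db e a old new]
      :code (let [cur (get (datomic.api/entity db e) a)]
              (if (= cur old)
                [[:db/add e a new]]
                (throw (ex-info "cas failed"
                                {:e e :a a :expected old :actual cur}))))}))

;; Install it, then invoke it in a tx like the built-in:
;; @(d/transact conn [{:db/ident :my/cas :db/fn my-cas}])
;; @(d/transact conn [[:my/cas eid :account/balance 100 200]])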
However, the general use of CAS is more frequently for optimistic concurrency applications - i.e. https://docs.datomic.com/cloud/best.html#optimistic-concurrency
This is because transactions run serially. The transaction-data expansion and application to make the new db value begins with the previous db value. All tx functions receive that previous db value. No other transactions are expanded/applied during this process (in essence, the transaction has a lock on the entire database). The datoms that result from expansion are applied to the previous db value to make the next db value. Then the next tx is processed.
the tradeoff is that whatever work you're doing in your transaction function is happening in the single-writer-thread of Datomic
so if you try to do something expensive (like call a remote service - eek!) from within the transaction function, all writes are going to wait on that work
if you instead do that work locally in your client (or peer), you can avoid that cost on the transaction stream but you need to ensure that no one has changed your relevant data out from under you in the meantime, so you can often use CAS for that
cas (and its general technique of "assert what I read hasn't changed") allows the opposite tradeoff: possible parallel tx preparation, but a stale read is expensive to recover from. (You need to catch the tx error, detect it was a CAS error, re-prepare your tx using a newer db, and reissue, hoping you don't race with some other write.)
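A sketch of that recovery loop; the function names are hypothetical, and it assumes the :db.error/cas-failed keyword is carried in ex-data somewhere on the cause chain:

(defn cas-failure? [^Throwable t]
  ;; Walk the cause chain looking for Datomic's cas-failure error keyword.
  (boolean (some #(= :db.error/cas-failed (:db/error (ex-data %)))
                 (take-while some? (iterate #(.getCause ^Throwable %) t)))))

(defn transact-with-retry
  "prepare-tx is a fn of a db value returning tx-data."
  [conn prepare-tx max-retries]
  (loop [attempt 0]
    (let [res (try @(d/transact conn (prepare-tx (d/db conn)))
                   (catch Exception e
                     (if (and (< attempt max-retries) (cas-failure? e))
                       ::retry
                       (throw e))))]
      (if (= ::retry res)
        (recur (inc attempt))
        res))))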
Ahhh okay, that link coupled with these explanations has totally cleared up my confusion. I was totally missing the fact that concurrency was in the hands of the developer. So putting simple query logic into a tx function is totally valid, but when that logic becomes expensive (e.g. a remote call, as @marshall said), it might make more sense to perform that work outside the tx fn to keep the tx stream clear, and rely on CAS to uphold consistency.
If you are familiar with clojure atoms: this is roughly the difference between (swap! db apply-ops (inc-something db)) and (swap! db (comp apply-ops inc-something))
Makes sense. Former performs the inc transformation outside the swap, and the latter performs the inc/apply inside the swap (iiuc).
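To make the analogy concrete with a plain atom (a hypothetical example, not from the thread):

(def counter (atom 0))

;; Read outside, write inside: the snapshot can go stale before the swap!.
(let [snapshot @counter]
  (swap! counter (fn [_] (inc snapshot))))

;; Read and write inside the swap!: inc always sees the current value.
(swap! counter inc)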
are there best-practices or guides for query optimisation? we're discovering queries that run orders of magnitude faster after reordering just one of their constraints.
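For what it's worth, the usual Datomic guideline is to put the most selective :where clauses first, since clauses are evaluated in order. A hypothetical illustration with assumed attributes:

;; Slower: binds ?e to every active entity, then filters by email.
(d/q '[:find ?e
       :in $ ?email
       :where
       [?e :user/active true]
       [?e :user/email ?email]]
     db email)

;; Faster: the selective (unique) email clause binds ?e first, so the broad
;; :user/active clause only checks that one entity.
(d/q '[:find ?e
       :in $ ?email
       :where
       [?e :user/email ?email]
       [?e :user/active true]]
     db email)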