This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-11-03
Channels
- # announcements (2)
- # asami (1)
- # babashka (32)
- # beginners (125)
- # calva (4)
- # cider (1)
- # clj-kondo (16)
- # clj-together (1)
- # cljs-dev (15)
- # clojure (30)
- # clojure-australia (3)
- # clojure-europe (41)
- # clojure-italy (1)
- # clojure-losangeles (1)
- # clojure-nl (4)
- # clojure-spec (68)
- # clojure-uk (28)
- # clojurescript (36)
- # conjure (2)
- # cryogen (1)
- # cursive (2)
- # data-science (2)
- # datascript (2)
- # datomic (70)
- # events (2)
- # fulcro (11)
- # graalvm (1)
- # jobs (4)
- # kaocha (4)
- # leiningen (4)
- # malli (52)
- # meander (21)
- # off-topic (11)
- # pathom (7)
- # pedestal (17)
- # reagent (23)
- # reitit (5)
- # remote-jobs (5)
- # reveal (7)
- # shadow-cljs (24)
- # spacemacs (36)
- # sql (21)
- # vim (18)
- # xtdb (7)
should i be able to upsert an entity via an attribute which is a reference and also unique-by-identity?
for example
```clojure
(d/transact conn
  {:tx-data
   [;; upsert a school entity where :school/president is a reference and unique-by-identity
    {:school/president {:president/id 12345}
     :school/name      "Bowling Academy of the Sciences"}]})
```
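For reference, an upsert like this presumes a schema roughly along these lines. This is a sketch, not the poster's actual schema — in particular the value type of `:president/id` is an assumption, since the thread uses both a number and a string for it:

```clojure
;; Hypothetical schema sketch for the example above.
;; Both :president/id and :school/president are unique-by-identity,
;; which is what makes the nested-map upsert meaningful.
(d/transact conn
  {:tx-data
   [{:db/ident       :president/id
     :db/valueType   :db.type/long
     :db/cardinality :db.cardinality/one
     :db/unique      :db.unique/identity}
    {:db/ident       :school/president
     :db/valueType   :db.type/ref
     :db/cardinality :db.cardinality/one
     :db/unique      :db.unique/identity}
    {:db/ident       :school/name
     :db/valueType   :db.type/string
     :db/cardinality :db.cardinality/one}]})
```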
yes, you are correct and that is indeed the problem. it seems that you cannot upsert two entities that reference each other within the same transaction. for example, running this transaction twice causes a datom conflict
```clojure
(d/transact conn
  {:tx-data
   [;; a president
    {:president/id "The Dude" :db/id "temp-president"}
    ;; a school with a unique-by-identity
    ;; :school/president reference to the president
    {:school/president "temp-president"
     :school/name      "Bowling Academy of Sciences"}]})
```
whereas both of these transactions upsert as expected
```clojure
(d/transact conn
  {:tx-data
   [;; a president
    {:president/id "The Dude" :db/id "temp-president"}]})

(d/transact conn
  {:tx-data
   [;; a school with a unique-by-identity
    ;; :school/president reference to the president
    {:school/president 101155069755476 ; <- known dbid
     :school/name      "Bowling Academy of Sciences"}]})
```
Is there any specific reason why some kind of selection can only be done using the Peer Server?
This one can
```clojure
[:find ?name ?surname
 :in $
 :where
 [?e :p/name ?name]
 [?e :p/surname ?surname]]
```
Yes, these ones. It seems like the Peer Library can only execute the "Collection of List" one
and it’s the opposite: only the peer API supports these; the client API (the peer server provides an endpoint for the client api) does not
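For the record, these are the four find specifications being discussed. The relation form works in both APIs; the collection, single-tuple, and scalar forms are peer-only. Query shapes only, assuming hypothetical `:p/name` / `:p/surname` attributes:

```clojure
;; relation (supported by both peer and client APIs): a set of tuples
'[:find ?name ?surname
  :where [?e :p/name ?name] [?e :p/surname ?surname]]

;; collection (peer only): a collection of single values
'[:find [?name ...]
  :where [_ :p/name ?name]]

;; single tuple (peer only): one tuple
'[:find [?name ?surname]
  :where [?e :p/name ?name] [?e :p/surname ?surname]]

;; scalar (peer only): one value
'[:find ?name .
  :where [_ :p/name ?name]]
```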
This is weird, I'm using Datomic-dev (which I guess is using the peer library?!) and I can't execute such queries
Maybe historical background would help: in the beginning was datomic on-prem and the peer (`datomic.api` ), then came cloud and the client-api, and the peer-server as a bridge from clients to on-prem peers.
Oh ok, so it's a simulation of a cloud environment. I guess I was confused by the fact that it's all in the same process
the client-api is designed to be networked or in-process; in dev-local or inside an ion, it’s actually in-process
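A minimal sketch of what dev-local looks like in code, assuming the dev-local library is on the classpath and the database already exists — the client API here runs entirely inside the current JVM process, with no peer server or network hop ("dev" and "movies" are placeholder names):

```clojure
(require '[datomic.client.api :as d])

;; :server-type :dev-local keeps everything in-process.
(def client (d/client {:server-type :dev-local :system "dev"}))
(def conn   (d/connect client {:db-name "movies"}))
(def db     (d/db conn))
```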
Got it. So to keep it short, I should either move to Datomic Free on-prem or work around the limitation in the code
as to why they dropped the find specifications, I don’t know. My guess would be that people incorrectly thought they actually changed the query performance characteristics, but really they’re just a convenience for `first`, `map first`, etc.
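The workaround in client-API code is a one-liner per find spec — a sketch, assuming a `db` value and a hypothetical `:p/name` attribute:

```clojure
;; collection find spec [?name ...] emulated with map first:
(map first (d/q '[:find ?name :where [_ :p/name ?name]] db))

;; scalar find spec ?name . emulated with ffirst:
(ffirst (d/q '[:find ?name :where [_ :p/name ?name]] db))
```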
I could see these conveniences being useful though. Having to do that manually every time is annoying.
Hi there. We've been facing an awkward situation with our Cloud system. From what I've seen of the Datomic Cloud architecture, it seemed like I can have several databases in the same system, as long as there are transactor machines available in my transactor group. With that in mind, we scaled our compute group to 20 machines, to serve our 19 dbs. All went well for a few months, until 3-4 days ago, when we started facing issues transacting data, with "Busy Indexing" errors. If I'm not wrong, this is because our transactors are unable to ingest data at the same pace we are transacting it, or is there something else I'm missing here? Thanks :D
Another odd thing is that my Dynamo Write Actual is really low, despite my IndexMemDb metric being really high
are you running your application on the compute group? Or are you carefully directing clients to query groups that service a narrow number of dbs? If you hit the compute group randomly for app stuff, then you’re going to really stress the object cache on those nodes.
yeah, I don’t work for cognitect, but my understanding of how it works leads me to the very strong belief that doing what you’re doing will not scale. Remember that each db needs its own RAM cache space for queries. The compute group has no db affinity, so with 20 dbs you’re ending up causing every compute node to need to cache stuff for all 20 dbs.
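One way to get db affinity is to stand up dedicated query groups for subsets of the dbs and point the clients for those dbs at each group's endpoint. A sketch only — the system, region, and endpoint values here are made up, and the exact client map depends on your Datomic Cloud version:

```clojure
;; Hypothetical: clients for "analytics" dbs talk only to the analytics
;; query group, so that group's nodes cache only those dbs.
(def analytics-client
  (d/client {:server-type :ion
             :region      "us-east-1"
             :system      "my-system"
             :endpoint    "http://entry.analytics-qg.us-east-1.datomic.net:8182/"}))
```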
@U0CKQ19AQ would you say it would be best if I transacted to a query group fed by a specific set of databases?
writes always go to a primary compute node for the db in question. no way around that
the problem is probably that you’re also causing high memory and CPU pressure on those nodes for queries
you could also just be ingesting things faster than datomic can handle…that is also possible
but 20dbs on compute sounds like a recipe for trouble if you’re using that for general application traffic
I tried shutting my services down and giving datomic time to ingest, but to no avail. IndexMemDB is just a flat line
there’s also the possibility that the txes themselves need to read enough of the 20 diff dbs to be causing mem problems. I’d contact support with a high prio ticket and see what they say.
The way things are built, there is a client connection for each one of the databases; depending on the body of a tx, it is transacted to a specific db
If each node will be indexing/caching all 19 DBs, what's the point of increasing the node count to 20?