2022-09-27
Is xtdb somehow really sensitive to how far the SQL server is from the node? We are now setting up the cloud environments, and an ingest of data that takes 30 secs against a local postgres took 20 minutes from the laptop to a data center some 100 km away.
well I’d expect it to do lots of roundtrips and therefore be much slower than running the indexing in the same datacenter as the SQL server
not related to xtdb, but I’ve had a similar difference in SQL queries locally (in same datacenter) vs remotely via bastion… 90 seconds vs 11+ minutes
What is the typical number of docs per tx? The pipelining changes in 1.22 should improve the batching across small transactions quite dramatically
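A minimal sketch of what batching more docs per transaction might look like, assuming the XTDB 1.x Clojure API (`xtdb.api`); `node`, `docs`, `ingest-in-batches`, and the batch size are illustrative, not from the thread:

```clojure
(require '[xtdb.api :as xt])

;; Submit `docs` as ::xt/put operations, `batch-size` docs per transaction,
;; then wait for the last transaction to be indexed. Fewer, larger
;; transactions mean fewer round trips to the remote SQL tx-log/doc-store.
(defn ingest-in-batches [node docs batch-size]
  (let [txes (doall
              (for [batch (partition-all batch-size docs)]
                (xt/submit-tx node (vec (for [doc batch]
                                          [::xt/put doc])))))]
    (xt/await-tx node (last txes))))

;; e.g. (ingest-in-batches node docs 1000)
```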
Uf, just hit https://github.com/xtdb/xtdb/issues/1509 too. Partitioning indeed might make sense, since evict is scaling worse than linearly, so perhaps other parts are too
(I tested evicting all those 28k documents and re-ingesting them, as opposed to just nuking the db, but I don't know if I have the stomach to wait for that eviction even on local rocksdb)
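A minimal sketch of partitioned eviction under the same assumptions (XTDB 1.x API; `node`, `ids`, `evict-in-partitions`, and the partition size are illustrative):

```clojure
(require '[xtdb.api :as xt])

;; Evict `ids` a partition at a time instead of all 28k in one transaction,
;; waiting for each partition to be indexed before submitting the next,
;; since evict cost appears to grow worse than linearly with tx size.
(defn evict-in-partitions [node ids partition-size]
  (doseq [batch (partition-all partition-size ids)]
    (xt/await-tx node
                 (xt/submit-tx node (vec (for [id batch]
                                           [::xt/evict id]))))))

;; e.g. (evict-in-partitions node (map :xt/id docs) 500)
```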
Hm, interesting. If I run several ingestions and evictions on the same node, the runtimes grow. First ingest was 11 secs, partitioned evict 17 secs, second ingest (same data) 155 secs, second evict 22 secs; then after closing and reopening the node, ingest 20 secs, eviction 19 secs
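A rough sketch of how those repeated ingest/evict timings could be reproduced, reusing the hypothetical helpers from the sketches above (`node`, `docs`, `ingest-in-batches`, and `evict-in-partitions` are all illustrative):

```clojure
;; Time each phase with a wall-clock measurement and print the result.
(defn timed [label f]
  (let [start  (System/nanoTime)
        result (f)
        secs   (/ (- (System/nanoTime) start) 1e9)]
    (println label "took" (format "%.1f" secs) "secs")
    result))

(doseq [run [1 2]]
  (timed (str "ingest #" run) #(ingest-in-batches node docs 1000))
  (timed (str "evict #" run)  #(evict-in-partitions node (map :xt/id docs) 500)))
```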