This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-09-18
Channels
- # announcements (1)
- # asami (2)
- # babashka (21)
- # beginners (23)
- # cider (5)
- # clj-kondo (10)
- # clojure (31)
- # clojure-europe (3)
- # clojure-nl (1)
- # clojurescript (47)
- # deps-new (1)
- # figwheel-main (7)
- # fulcro (7)
- # gratitude (1)
- # jobs-discuss (2)
- # lein-figwheel (1)
- # lsp (5)
- # off-topic (11)
- # pathom (5)
- # re-frame (1)
- # react (5)
- # reagent (4)
- # releases (1)
- # shadow-cljs (63)
- # tools-deps (16)
- # xtdb (26)
Hi. What’s the meaning of this line: https://github.com/xtdb/xtdb/blob/master/build/docker/Dockerfile#L11 `RUN clojure -Sforce -Spath >/dev/null`
downloads deps and caches the classpath as a side effect
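To illustrate why a throwaway `clojure -Spath` is useful in a Dockerfile: if it runs in its own layer, after copying only `deps.edn`, the downloaded dependencies are cached by Docker and later source-only changes don't re-download them. A hedged sketch of that pattern (illustrative base image, paths, and main namespace — not xtdb's actual Dockerfile):

```dockerfile
# Illustrative layer-caching pattern, not xtdb's actual Dockerfile
FROM clojure:temurin-17-tools-deps
WORKDIR /app

# Copy only the deps file first so this layer is invalidated
# only when dependencies change
COPY deps.edn .

# Downloads deps and caches the computed classpath as a side effect;
# the printed classpath itself is discarded
RUN clojure -Sforce -Spath > /dev/null

# Source changes from here on reuse the cached dependency layer
COPY src ./src
CMD ["clojure", "-M", "-m", "myapp.core"]
```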
Is there any overhead in `(xt/db node)` if you don't actually use the result at all? Biff's middleware adds a db val to each incoming request, wrapped in a `delay` (so `db` won't get called on e.g. requests for static files). I mainly did that because I started out using `open-db`, but I've since switched to plain `db`. So I'm wondering if there's any point in still having the `delay` there, especially since the crux->xt name change gives me a nice opportunity to smuggle in some breaking changes.
I'm like 95% sure that the `db` call is just tracking the two timestamps, so ~no overhead 🙂 https://github.com/xtdb/xtdb/blob/e2f51ed99fc2716faa8ad254c0b18166c937b134/core/src/xtdb/node.clj#L100-L106
Awesome, thanks
have you benchmarked it? I did, and went to some trouble to make a fancy cache (with an API that spares me from calling `db` at all), because it takes about 20µs, which IIRC almost doubled the amount of time `entity` takes
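For measuring calls in the microsecond range as discussed here, a crude timing loop can give a first approximation (a dedicated benchmarking library such as criterium gives far more reliable numbers). A sketch, with `f` standing in for whatever operation is under test:

```clojure
;; Crude micro-benchmark: average nanoseconds per call of f over n runs.
;; No warm-up or statistical treatment — for rough comparisons only.
(defn bench-ns [f n]
  (let [start (System/nanoTime)]
    (dotimes [_ n] (f))
    (/ (- (System/nanoTime) start) (double n))))

;; e.g. timing a cheap operation for comparison against the ~20µs
;; quoted above for `db`:
(bench-ns #(merge {:foo 1} {:bar 2}) 1000000)
```

JIT warm-up and GC pauses can easily dominate a loop like this, which is why criterium-style benchmarking matters before drawing conclusions.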
> have you benchmarked it?
I've not, no, but that is definitely the right way to approach the question, in retrospect 🙂
well, it's a VPS, so the benchmark is probably making far less use of the cache than my PC would
it's a pretty cheap one. `(merge {:foo 1} {:bar 2})` takes about 0.2ms, so I don't think `db` is adding too much overhead
with those numbers, the only conclusion you can draw is that your test environment is very noisy. It doesn't pass the sniff test that `merge` runs at 40% of the cost of opening a RocksDB snapshot, which is ultimately what happens in `db`
https://github.com/xtdb/xtdb/blob/e2f51ed99fc2716faa8ad254c0b18166c937b134/core/src/xtdb/query.clj#L1935
Who's been trying out the XT `1.19.0-beta1` pre-release? Has it been working okay? Were the migration instructions easy enough to follow?
Worked well for me. Other than changing names, it was pretty straightforward. OTOH, I am not making heavy use of XTDB just yet.
Thanks @U0AT6MBUL that's useful to hear 👌
Took a look at it from a migrating `crux-geo` perspective. One thing that concerns me slightly is the legacy maintenance of the old crux keywords for indexing. The `(map c/xt->crux docs)` stuff is a bit unfortunate. Part of me wonders whether it would have been better to write a script that migrates a transaction log and then require rebuilding the nodes, so that legacy code like the above wouldn't need to exist…
We certainly considered forcing users to migrate the transaction log instead, but decided it was way too big a change to justify for ~zero user benefit (+ the loss of the ability to do a rolling upgrade!)
@U899JBRPF perhaps it would be better if crux's internal functions handled mapping xtdb to crux, rather than having indices do it. I'm not sure of the cost of calling `c/xt->crux`, but if it's non-zero, and every (secondary) index is going to need to do it anyway, perhaps it should be handled somewhere lower in the stack
Again, just thinking in terms of ergonomics and maintenance. Obviously writing an index is a more "advanced" API provided by xtdb, but the migration means forever supporting the old paradigm, or explicitly choosing not to (i.e. what if a third-party index decides not to support the old crux keyspace?)
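For illustration, the kind of per-document keyword translation being discussed might look like the following — a hypothetical sketch with a one-entry mapping table; the real `c/xt->crux` and the full set of renamed keywords are xtdb internals not shown here:

```clojure
;; Hypothetical xt->crux keyword translation for documents.
;; The key pair shown (:xt/id -> :crux.db/id) is illustrative;
;; xtdb's actual mapping covers more keywords.
(def xt->crux-key
  {:xt/id :crux.db/id})

(defn xt->crux [doc]
  ;; Rename known keys, leave everything else untouched
  (into {} (map (fn [[k v]] [(get xt->crux-key k k) v]) doc)))

;; Mirrors the (map c/xt->crux docs) call site discussed above:
(map xt->crux [{:xt/id :foo, :name "bar"}])
```

Doing this once, lower in the stack, would mean each secondary index sees a single keyspace instead of every index paying for (and depending on) its own translation — which is the ergonomics point being made.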
Ah okay, I haven't dug into the trade-offs at that level of detail, but maybe @jarohen can add some context when he next pops his head up - or feel free to open an issue for it
Just closing loops, here. Did this become a GitHub issue? Or was it resolved elsewhere?