
Hi. What’s the meaning of this line: RUN clojure -Sforce -Spath >/dev/null


Compute the classpath and output to /dev/null, for what?

Alex Miller (Clojure team)15:09:49

downloads deps and caches the classpath as a side effect

Alex Miller (Clojure team)15:09:23

clojure -P is a more intentional way to do that now
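For context, the line being asked about typically lives in a Dockerfile, where the goal is to bake the dependency download and classpath cache into an image layer. A sketch showing both idioms side by side (assumes a standard Clojure base image and a `deps.edn` in the build context):

```dockerfile
COPY deps.edn .

# older idiom: force-compute the classpath and throw the output away;
# downloading deps and caching the classpath happen as side effects
RUN clojure -Sforce -Spath > /dev/null

# current idiom: -P ("prepare") downloads deps and caches the classpath,
# then exits without running anything
RUN clojure -P
```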

🙂 4

Yes. Thanks

Jacob O'Bryant21:09:25

Is there any overhead in (xt/db node) if you don't actually use the result at all? Biff's middleware adds a db val to each incoming request, wrapped in a delay (so db won't get called on e.g. requests for static files). I mainly did that because I started out using open-db, but I've since switched to plain db. So I'm wondering if there's any point in still having the delay there, especially since the crux->xt name change gives me a nice opportunity to smuggle in some breaking changes.
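A self-contained sketch of the delay behaviour described above (`fake-db` and the `:biff/db` key here are stand-ins for illustration, not Biff's actual internals): the wrapped value is only computed on first deref, so a handler that never touches the db pays nothing.

```clojure
(def calls (atom 0))

(defn fake-db []            ; stand-in for (xt/db node)
  (swap! calls inc)
  :db-snapshot)

;; middleware would assoc this onto each incoming request
(def request {:uri     "/css/main.css"
              :biff/db (delay (fake-db))})

;; a static-file handler never derefs :biff/db, so fake-db never runs
@calls                ;; => 0

;; the first deref forces the computation; later derefs reuse the cached value
@(:biff/db request)   ;; => :db-snapshot
@(:biff/db request)   ;; => :db-snapshot (cached, fake-db not called again)
@calls                ;; => 1
```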


I'm like 95% sure that the db call is just tracking the two timestamps, so ~no overhead 🙂

Jacob O'Bryant21:09:22

Awesome, thanks


have you benchmarked it? I did and went to some trouble to make a fancy cache (with an api that spares me from calling db at all), because it takes about 20us, which iirc almost doubled the amount of time entity takes


> have you benchmarked it?
I've not, no, but that is definitely the right way to approach the question, in retrospect 🙂

Jacob O'Bryant21:09:59

looks like db takes about 0.5ms on average on my vps


oof. that sounds like a bad vps tbh


well, it's a vps so the benchmark is probably making way less use of cache than my pc is

💯 2
Jacob O'Bryant21:09:46

it's a pretty cheap one. (merge {:foo 1} {:bar 2}) takes about 0.2ms, so I don't think db is adding too much overhead

🙂 2

ah, that puts it in perspective nicely (much relief)

🎅 2

with those numbers, the only conclusion you can draw is that your test environment is very noisy. it doesn't pass the sniff test that merge is 40% of the perf of opening a rocksdb snapshot, which is ultimately what happens in db
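To sanity-check a benchmark environment, one trick is to time a known-cheap operation: merging two tiny maps should take tens of nanoseconds on any modern JVM, so a 0.2ms reading points at measurement noise (JIT warm-up, timer overhead, a busy VPS). A crude sketch of averaging over many iterations to amortize that overhead (criterium is the more rigorous tool, but this avoids the dependency):

```clojure
;; crude micro-benchmark: run f n times and return the average nanoseconds
;; per call, amortizing the cost of reading the timer
(defn avg-nanos [f ^long n]
  (let [start (System/nanoTime)]
    (dotimes [_ n] (f))
    (/ (double (- (System/nanoTime) start)) n)))

;; e.g. (avg-nanos #(merge {:foo 1} {:bar 2}) 1000000)
;; typically reports tens of nanoseconds, not hundreds of microseconds
```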

👍 2

Who's been trying out the XT 1.19.0-beta1 pre-release? Has it been working okay? Were the migration instructions easy enough to follow?


Worked well for me. Other than changing names, it was pretty straightforward. OTOH, I am not making heavy use of XTDB just yet.

👍 2

Thanks @U0AT6MBUL that's useful to hear 👌


migration was easy!

🙌 2

Took a look at it from a migrating crux-geo perspective. One thing that concerns me slightly is the legacy maintenance of the old crux keywords for indexing. The (map c/xt->crux docs) stuff is a bit unfortunate. Part of me wonders if it would have been better to write a script that migrates a transaction log and then require rebuilding the nodes, so that legacy code like the above wouldn't need to exist…

👍 2

We certainly considered forcing users to migrate the transaction log instead, but decided it was way too big a change to justify for ~zero user benefit (+ the loss of ability to do a rolling upgrade!)


@U899JBRPF perhaps it would be better if crux's internal functions handled mapping xtdb to crux, rather than having indices do it. I'm not sure of the cost of calling c/xt->crux, but if it's non-zero, and every (secondary) index is going to need to do it anyway, perhaps it should be handled somewhere lower in the stack


Again, just thinking in terms of ergonomics and maintenance. Obviously writing an index is a more “advanced” API provided by xtdb, but the migration means forever supporting the old paradigm, or explicitly choosing not to (i.e. what if a 3rd-party index decides not to support the old crux keyspace)


Ah okay, I haven't dug into the trade-offs at that level of detail, but maybe @jarohen can add some context when he next pops his head up - or feel free to open an issue for it

Steven Deobald16:09:49

Just closing loops, here. Did this become a GitHub issue? Or was it resolved elsewhere?