
just noticed v1.20 was released yesterday?


docs still say 1.19.0


and github releases page


1.20.0 is indeed up on Maven, release notes to come 🙂

excited 4

Hypothetical. Asking for a friend. How would you implement a two-tier doc-store and tx-log. The first tier has lots of data that doesn't change very much, and the second tier has a little bit of data that is changing more frequently. The doc store first checks t2 and if it doesn't find the doc, checks t1 (and makes use of document-cache too). All writes for the doc-store go to t2 by default (t1 is managed elsewhere). what rules should I follow for the tx-log for it to remain valid? could all odd tx-ids be a reference to the tier1 tx-log and all even tx-ids be a reference to the tier2 tx-log?


why would you? what problem would this type of tier system solve?


tier1 has lots of reference data, and each tenant needs a copy of that reference data, but rather than have lots of copies of reference data (one for each tenant) - have one copy (tier1) and store tenant specific data in tier 2


ok, so you are trading storage space cost vs solution complexity


idk how you would do this in a way that actually works so that queries can use both the t1 and t2 data


Hypothetically speaking (😉) I can't think of a way to guarantee deterministic ingestion ordering across two continuously updating tx-logs unless they are both guaranteed to run on the same broker/JDBC backend and the tx-times lined up exactly, regardless of write ordering between threads (in which case you could then choose to always order A txes before B txes in the rare cases when the tx-times are identical). I'm not sure such a setup is actually possible using Kafka at all :thinking_face:

However, if you are willing to tolerate duplicating the tx-log writes to each tenant (which shouldn't be quite so bad, since tx-log contents are typically small relative to the docs), then it makes things simpler, as I can imagine having a two-tier doc-store is quite plausible by proxying and partitioning across two physical backend doc-stores, based on inspecting the prefix of the doc ID.

You would also need to handle puts of eviction tombstones somehow, since the ID is wiped - I guess you could naively write tombstones to both partitions.

👍 1
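The proxying idea above can be sketched in plain Clojure. This is a hypothetical illustration, not XTDB's actual DocumentStore interface - the tier stores here are just atoms holding id->doc maps standing in for two real backends:

```clojure
;; Hypothetical sketch of a two-tier document store: reads check tier 2
;; first, then fall back to tier 1; writes always go to tier 2.
;; `tier1` and `tier2` stand in for real backend stores - here they are
;; just atoms holding id->doc maps.

(defn make-two-tier-store []
  {:tier1 (atom {}) :tier2 (atom {})})

(defn fetch-doc [{:keys [tier1 tier2]} id]
  ;; tier 2 (tenant-specific, frequently changing) shadows tier 1
  (or (get @tier2 id)
      (get @tier1 id)))

(defn put-doc [{:keys [tier2]} id doc]
  ;; all writes land in tier 2; tier 1 is managed elsewhere
  (swap! tier2 assoc id doc))

(defn evict-doc [{:keys [tier1 tier2]} id]
  ;; naive tombstone handling: evict from both tiers, as suggested above
  (swap! tier1 dissoc id)
  (swap! tier2 dissoc id))
```

A document-cache could sit in front of `fetch-doc` unchanged, since the tier lookup is hidden behind a single read path.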

thanks. the backend store is GCP Datastore, so timestamps will be consistent across tiers or partitions (i.e. the same datastore, different namespaces). not working on this at the moment, but it was more a thought exercise about future scaling options! thanks @tatut and @U899JBRPF

👌 1
🙂 1
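Under that "consistent clock" assumption, the deterministic ordering described above can be sketched as a merge of the two logs by tx-time, breaking ties by always ordering tier-1 txes before tier-2 txes. A minimal sketch (hypothetical names; tx-times shown as plain numbers for simplicity):

```clojure
;; Hypothetical sketch: deterministically merging two tx-logs into one
;; ingestion order. Assumes both logs share a consistent clock (as with
;; two namespaces of the same GCP Datastore), and breaks tx-time ties by
;; always ordering tier-1 txes before tier-2 txes.

(defn merge-tx-logs [t1-txes t2-txes]
  (->> (concat (map #(assoc % ::tier 1) t1-txes)
               (map #(assoc % ::tier 2) t2-txes))
       (sort-by (juxt :tx-time ::tier))))
```

Because every node applies the same tie-break rule, replaying the merged log always yields the same ingestion order, which is the property a valid tx-log needs.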

I'm seeing a strange behavior in the HTTP interface where the same query completes in ~12s but times out (even with :timeout at 3 minutes) when all the logic variables are not prefixed with a ?. can someone sanity check me here? query inside


with ?:

{:find [?t],
   :where [[?s :sentence/tokens ?t]
           [?s :sentence/tokens ?t2]

           [?t :token/form ?f]
           [?f :form/value #{"eat" "Eat"}]

           [?t2 :token/form ?f2]
           [?f2 :form/value #{"up" "Up"}]
           [?t2 :token/deprel ?dr2]
           [?dr2 :deprel/value "compound:prt"]]}



without ?:

{:find [t],
   :where [[s :sentence/tokens t]
           [s :sentence/tokens t2]

           [t :token/form f]
           [f :form/value #{"eat" "Eat"}]

           [t2 :token/form f2]
           [f2 :form/value #{"up" "Up"}]
           [t2 :token/deprel dr2]
           [dr2 :deprel/value "compound:prt"]]}


you don't need to quote the query when you're using the http interface, right?


> you don't need to quote the query when you're using the http interface, right?
shouldn't do, nope

👍 1

just in case there's some obscure symbol conflicts with those specific examples, could you try adding a ~random prefix

{:find [foo-t],
   :where [[foo-s :sentence/tokens foo-t]
           [foo-s :sentence/tokens foo-t2]

           [foo-t :token/form foo-f]
           [foo-f :form/value #{"eat" "Eat"}]

           [foo-t2 :token/form foo-f2]
           [foo-f2 :form/value #{"up" "Up"}]
           [foo-t2 :token/deprel foo-dr2]
           [foo-dr2 :deprel/value "compound:prt"]]}


not sure if this makes things more or less weird, but that also times out

😄 1

to be clear, this isn't a critical issue for me at all (can just do ? prefixes) but i thought i'd mention it

👌 1

might try to repro on a very small db later


cool, good to rule out, anyway. next up: see if the vars-in-join-order match when calling xtdb.query/query-plan-for with both (or turn on debug logs for xtdb.query)
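A rough sketch of that comparison - hypothetical: `node` is assumed to be an already-started XTDB node, and the exact signature of `query-plan-for` may differ between versions, so check it against your installed release:

```clojure
;; Hypothetical sketch: compare the planned join order for the two
;; variants of the query, with and without the `?` prefix on logic vars.
;; Assumes an open node bound to `node`; verify query-plan-for's
;; signature in your XTDB version before running.

(require '[xtdb.api :as xt]
         '[xtdb.query :as q])

(defn join-order [query]
  (:vars-in-join-order (q/query-plan-for (xt/db node) query)))

(join-order '{:find [?t]
              :where [[?s :sentence/tokens ?t]
                      [?t :token/form ?f]]})

(join-order '{:find [t]
              :where [[s :sentence/tokens t]
                      [t :token/form f]]})
```

If the two join orders differ, that would explain the timeout: the engine is picking a much worse plan for one spelling of the variables.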


Hey folks 🙂 just announcing that we released XTDB 1.20.0 earlier today - but since the release notes are brief I'll save you a click...
> 1.20.0 is a bugfix release with one minor breaking bugfix to our pull behaviour:
>
> - (breaking): pull now returns nil instead of {} where joined documents do not exist.
> - Lucene handles 'match an absent document' ops
> - Able to restore Lucene from a checkpoint (thx @tatut!)

We were particularly keen to clear the decks and get this released ahead of the upcoming Re:Clojure conference, for which I will be running a 2-hour pre-conf workshop this Thursday. And also, since it's probably of interest to quite a few of you, @j.antonelli712 is presenting on JUXT's project, which is a rather novel XT-powered GraphQL and OpenAPI "Resource Server". Look out for "Schema driven development with GraphQL" on Friday @ 12:30 UTC

xt 11
❤️ 2