2024-02-19
(First off, thanks so much to Jacob for this project, it's amazing!) Has anyone implemented lazy-loading / pagination in the Biff tutorial app's channel view, or know how to approach that problem? Reading through some of the documentation, it seems like xtdb isn't well-suited for pagination since it's a graph database (https://github.com/orgs/xtdb/discussions/1514, https://v1-docs.xtdb.com/language-reference/1.24.3/datalog-queries/#ordering-and-pagination). Per the documentation, it seems like the right-ish approach is to use a more traditional SQL setup for querying the messages, though I'm not sure how to set that up alongside xtdb, or if there's a nicer way to achieve this directly in xtdb. I figured I'd ask here because of the concrete example that eelchat's messages give me, and maybe others have thought about the same thing.
hey! Glad you're enjoying Biff.
As a first step, of course, I'd start out with the "dumb" approach of just using :limit + :order-by + :offset or similar; if you're not querying over too many documents then that could be plenty fast.
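Something like this, as an untested sketch (the :msg/channel and :msg/created-at attribute names are made up, use whatever your schema actually has):
```clojure
(require '[xtdb.api :as xt])

;; "Dumb" pagination sketch: build the query map so :limit and :offset
;; can vary per page. Attribute names here are placeholders.
(defn page-of-messages [db channel-id page page-size]
  (xt/q db
        {:find     '[?msg ?created-at]
         :in       '[?channel]
         :where    '[[?msg :msg/channel ?channel]
                     [?msg :msg/created-at ?created-at]]
         :order-by '[[?created-at :desc]]
         :limit    page-size
         :offset   (* page page-size)}
        channel-id))
```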
Assuming you're mainly interested in what to do when that gets too slow: an approach I read about when I was adding pagination to one of my apps a while ago was to cache the initial result set e.g. in redis and then on subsequent pagination calls, iterate through the cache. I haven't implemented that myself, but it seems like it'd be fairly easy to bolt on when the dumb approach gets too slow.
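If it helps to picture it, the caching idea might look roughly like this (untested; uses carmine, and again the key/attribute names are made up):
```clojure
(require '[taoensso.carmine :as car]
         '[xtdb.api :as xt])

;; Rough sketch: run the full ordered query once, cache the ordered
;; message IDs in redis, then serve later pages from the cached list
;; and only pull that page's docs from XT.
(def redis-conn {:pool {} :spec {:uri "redis://localhost:6379"}})

(defn cache-message-ids! [db channel-id]
  (let [ids (map first
                 (xt/q db
                       '{:find [?msg ?created-at]
                         :in [?channel]
                         :where [[?msg :msg/channel ?channel]
                                 [?msg :msg/created-at ?created-at]]
                         :order-by [[?created-at :desc]]}
                       channel-id))]
    (car/wcar redis-conn
      (apply car/rpush (str "msg-ids:" channel-id) ids))))

(defn cached-page [db channel-id page page-size]
  (let [start (* page page-size)
        ids   (car/wcar redis-conn
                (car/lrange (str "msg-ids:" channel-id)
                            start (+ start page-size -1)))]
    (xt/pull-many db '[*] ids)))
```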
If you also want the initial query to be fast, I probably don't have much to add beyond what was discussed in those links. But there is at least one possibility I would look into: XT has a custom index API that's used for the Lucene index, and @U899JBRPF has mentioned that it's fine to use that API to define your own indexes. My understanding is that it's more-or-less like using open-tx-log + listen to pipe the transaction log into some other system/index, but it takes care of some plumbing for you / makes it easy to query a consistent snapshot of XT + the other system. You could store the document IDs + any attributes needed for sorting and pagination in some database that implements better indexes for pagination (sqlite?), and after you get the IDs from there, use pull-many to grab the full document contents from XT.
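I haven't written that, but the plain listen-based version of the idea might look vaguely like this (untested; uses next.jdbc + sqlite, the attribute names are made up, and it skips whatever consistency plumbing the custom index API would give you):
```clojure
(require '[clojure.edn :as edn]
         '[next.jdbc :as jdbc]
         '[next.jdbc.result-set :as rs]
         '[xtdb.api :as xt])

;; Untested sketch: mirror message IDs + sort keys into sqlite as
;; transactions are indexed, page over sqlite, then pull full docs from XT.
(def sqlite (jdbc/get-datasource {:dbtype "sqlite" :dbname "msg-index.db"}))

(jdbc/execute! sqlite
  ["create table if not exists msg_index
      (id text primary key, channel text, created_at integer)"])

(defn start-indexer! [node]
  (xt/listen node
             {::xt/event-type ::xt/indexed-tx :with-tx-ops? true}
             (fn [event]
               (doseq [[op doc] (::xt/tx-ops event)
                       :when (and (= op ::xt/put) (:msg/channel doc))]
                 (jdbc/execute! sqlite
                   ["insert or replace into msg_index values (?, ?, ?)"
                    (pr-str (:xt/id doc))
                    (pr-str (:msg/channel doc))
                    (inst-ms (:msg/created-at doc))])))))

(defn indexed-page-of-messages [db channel-id page page-size]
  (let [rows (jdbc/execute! sqlite
               ["select id from msg_index where channel = ?
                 order by created_at desc limit ? offset ?"
                (pr-str channel-id) page-size (* page page-size)]
               {:builder-fn rs/as-unqualified-maps})]
    (xt/pull-many db '[*] (map (comp edn/read-string :id) rows))))
```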
I haven't looked into that API myself, but I'm hoping to within the next couple months--I'd really like to add some sort of materialized view setup to Biff. Not sure if the API is documented anywhere, but I was planning to start out by looking at the Lucene index source code.
(I'm also wondering if the new index structure in XT v2 makes pagination better?)
This is amazing, thank you for the pointers. I have been toiling over docs for days and realized that you're right -- I should do the obvious thing first and only worry about it if it actually gets slow. I think my lack of familiarity with web apps has made me more anxious than usual about building it out the "wrong way" 🙂 I hadn't even thought about the redis cache solution, though that's another very interesting option. Feels like that would just work™ in many scenarios.
I was also wondering about xtdb2, especially since, going over the GitHub issues, they've clearly thought a lot about these use cases
Thanks again for taking the time to respond. This is my first real-ish clojure app (been in frontend mobile stuff for years) and it's been freeing to work in something this fun
Oh also now looking up what Lucene is hah
ha ha, no worries! As a tip for the dumb approach: when you're querying for subsequent pages, it also wouldn't hurt to pass in a timestamp/transaction ID to use for the db call (https://v1-docs.xtdb.com/clients/clojure/#_db). Then you'll get consistent results even if the data you're querying for was changed after the first page was fetched. This is the Lucene index I was referring to: https://v1-docs.xtdb.com/extensions/1.24.3/full-text-search/. Looks like the source code is over here: https://github.com/xtdb/xtdb/tree/master/modules/lucene
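e.g. building on the earlier sketch (untested):
```clojure
;; Capture a timestamp when the first page is fetched...
(def as-of (java.util.Date.))

;; ...and pin later pages to that same snapshot, so :offset-based pages
;; stay consistent even if new messages arrive in the meantime.
(page-of-messages (xt/db node as-of) channel-id 1 20)
```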
You're way ahead of me, I was just starting to wonder how to do things like :offset if the database had changed. I clearly have many more docs to read. This has been so helpful
glad to hear it!