Is there a recommended approach for using a DB as a persistent cache store? Some of my attributes require a heavy DB lookup, and I'd like to save the final result to a DB key to optimize.
basically you mean denormalization, right? I've had to do some, afaik there aren't any xtdb specific facilities for it, though the transaction listener API might be useful depending on your situation
Yes, basically saving the denormalized final result to some place in the DB so it can be retrieved without traversing the graph.
At the moment I'm achieving it by creating two resolvers, x and cache->x, where the latter has a higher priority and just checks the DB for a valid cache-key record with the resolver attribute x (or whatever the name is), returning nil if not found. I could accomplish the same with a single resolver and a conditional in it, but I'm trying it this way :man-shrugging: I just wonder if there's an overall cleaner/more common approach to doing it.
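For anyone finding this later, a rough sketch of that two-resolver setup. This is an assumption-laden illustration, not code from the thread: the resolver names, the `db` key in the env, and the `cache-lookup`/`expensive-x` helpers are all hypothetical stand-ins for real database calls.

```clojure
(ns example.two-resolvers
  (:require [com.wsscode.pathom3.connect.operation :as pco]))

;; Hypothetical DB helpers; swap in your real persistence calls.
(defn cache-lookup [db cache-key] (get @db cache-key))
(defn expensive-x [db id] (str "computed-" id))

(pco/defresolver cache->x
  "Cache-checking resolver; higher priority, so the planner tries it first.
   Returning nil on a miss lets Pathom fall through to the `x` resolver."
  [{:keys [db]} {:keys [id]}]
  {::pco/output   [:x]
   ::pco/priority 1}
  (when-let [cached (cache-lookup db [:x id])]
    {:x cached}))

(pco/defresolver x
  "The expensive resolver; only reached when the cache check misses."
  [{:keys [db]} {:keys [id]}]
  {::pco/output [:x]}
  {:x (expensive-x db id)})
```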
doh I'm sorry I thought this was in #xtdb :man-facepalming: your two ideas seem sensible to me
my initial reaction is that the single resolver approach makes more sense, and here's why: it's fine for pathom resolvers to be side effecting wrt database state, and you can think of cache state as just another kind of db state
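That single-resolver version could be sketched like this, treating the cache write as just another bit of db state. Again the `db` env key and the `cache-lookup`/`cache-store!`/`expensive-x` helpers are hypothetical stand-ins:

```clojure
(ns example.single-resolver
  (:require [com.wsscode.pathom3.connect.operation :as pco]))

;; Hypothetical DB helpers; here an atom stands in for the real database.
(defn cache-lookup [db k] (get @db k))
(defn cache-store! [db k v] (swap! db assoc k v))
(defn expensive-x [db id] (str "computed-" id))

(pco/defresolver x
  "Returns the cached value when present; otherwise computes it and,
   as a deliberate side effect, persists it so the next query is cheap."
  [{:keys [db]} {:keys [id]}]
  {::pco/output [:x]}
  {:x (or (cache-lookup db [:x id])
          (let [result (expensive-x db id)]
            (cache-store! db [:x id] result)
            result))})
```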
> it's fine for pathom resolvers to be side effecting wrt database state
Really? I thought I read the opposite somewhere recently - that they should be pure.
Another idea I had for implementing this was to create a plugin that uses https://pathom3.wsscode.com/docs/plugins/#pcrwrap-resolve to perform the cache-key lookup and skip the resolver call if something is found. Tried it briefly, but went with the previously mentioned idea because the mechanics there are more familiar to me right now.
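A sketch of what that plugin route might look like, going by the linked `::pcr/wrap-resolve` docs. The wrapper's `[env input]` argument shape follows those docs; the `lookup!`/`store!` functions and the use of the raw input as a cache key are assumptions for illustration, not Pathom API:

```clojure
(ns example.cache-plugin
  (:require [com.wsscode.pathom3.plugin :as p.plugin]
            [com.wsscode.pathom3.connect.runner :as pcr]))

(defn db-cache-plugin
  "Wraps every resolver call: on a cache hit, skip the resolver entirely;
   on a miss, run it and persist the result. `lookup!` and `store!` are
   hypothetical DB functions taking a cache key (here, the resolver input)."
  [lookup! store!]
  {::p.plugin/id `db-cache
   ::pcr/wrap-resolve
   (fn [resolve]
     (fn [env input]
       (or (lookup! input)
           (let [result (resolve env input)]
             (store! input result)
             result))))})

;; Registered with p.plugin/register on the env, e.g.:
;;   (p.plugin/register env (db-cache-plugin my-lookup! my-store!))
```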
@UPWHQK562 there are ways to setup custom cache stores, and set that to be used by specific resolvers: https://pathom3.wsscode.com/docs/cache#custom-cache-store-per-resolver
> Really? I thought I read the opposite somewhere recently - that they should be pure.
I'm not a Pathom expert, and I agree it's a good general rule to keep resolvers pure, but the question is, imo, just whether it'd be bad for a side effect in a resolver to occur many times and unpredictably. For a properly implemented caching side effect, maybe it's OK; for a very expensive side effect or one with destructive consequences, it's probably not ok
I think it's ok to side effect, as long as you understand and are ok with the implications; after all, caching is a side effect that happens for resolvers all the time
@U066U8JQJ the way described in that link requires the cache to be implemented with core.cache, right? Or just using…
If it's the latter, it doesn't have to be implemented using Atom or Volatile?
no core.cache required, that's just an example of implementing a custom CacheStore
Pathom extends Atoms and Volatiles to implement CacheStore, so you can use them directly as cache stores
> This protocol is implemented by Pathom to Atom and Volatile, so you can use any of those as a cache-store.
I think this part is what I'm confused by. I don't want to use either of those for my cache store because those are in-memory, and I want to use my database as the cache store. Am I misunderstanding something here?
Atom and Volatile are two of the readily available options, but the protocol can be extended to use anything else as well. Is that the correct interpretation?
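If that reading is right, a DB-backed store might be sketched by extending the `CacheStore` protocol from the linked cache docs. The protocol and method names here follow those docs; the `db-get`/`db-put!` helpers and the `:example/db-cache` env key are hypothetical:

```clojure
(ns example.db-cache-store
  (:require [com.wsscode.pathom3.cache :as p.cache]))

;; Hypothetical persistence helpers; an atom stands in for the real DB.
(defn db-get [db k] (get @db k))
(defn db-put! [db k v] (swap! db assoc k v))

(defrecord DbCacheStore [db]
  p.cache/CacheStore
  (-cache-lookup-or-miss [_ cache-key f]
    ;; Return the persisted value on a hit; otherwise run the computation
    ;; `f`, persist its result, and return it.
    (or (db-get db cache-key)
        (let [result (f)]
          (db-put! db cache-key result)
          result))))

;; Hooked up per the per-resolver cache-store docs:
;;   env      includes {:example/db-cache (->DbCacheStore db)}
;;   resolver config   {::pco/cache-store :example/db-cache}
```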
heh I think I made that more difficult than it needed to be 😛 Thanks very much, and you as well, @U49U72C4V. I'll try that shortly.