#pathom
2022-06-05
sheluchin 14:06:18

Is there a recommended approach for using a DB as a persistent cache store? Some of my attributes require a heavy DB lookup, and I'd like to save the final result to a DB key to optimize.

lgessler 03:06:31

basically you mean denormalization, right? I've had to do some, afaik there aren't any xtdb specific facilities for it, though the transaction listener API might be useful depending on your situation

sheluchin 18:06:43

Yes, basically saving the denormalized final result somewhere in the DB so it can be retrieved without traversing the graph. At the moment I'm achieving it by creating two resolvers, x and cache->x, where the latter has a higher priority and just checks the DB for a valid cache-key record for the resolver attribute x (or whatever the name is), returning nil if not found. I could accomplish the same with a single resolver and a conditional inside it, but I'm trying it this way :man-shrugging: I just wonder if there's an overall cleaner/more common approach to doing it.
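The two-resolver pattern described above might look roughly like this in Pathom 3. This is a sketch, not a definitive implementation: `fetch-cached-x`, `expensive-x-lookup`, and `save-x!` are hypothetical stand-ins for your own database access functions, and `:entity-id`/`:x` are placeholder attribute names.

```clojure
(ns example.cached-resolvers
  (:require [com.wsscode.pathom3.connect.operation :as pco]))

;; Tried first because of the higher ::pco/priority. Returning nil
;; signals that this resolver can't provide :x, so Pathom falls back
;; to the expensive `x` resolver below.
(pco/defresolver cache->x [env {:keys [entity-id]}]
  {::pco/output   [:x]
   ::pco/priority 1}
  (when-let [cached (fetch-cached-x env entity-id)] ; hypothetical DB lookup
    {:x cached}))

;; The expensive path; denormalizes its result back into the DB so the
;; next query can hit cache->x instead.
(pco/defresolver x [env {:keys [entity-id]}]
  {::pco/output [:x]}
  (let [result (expensive-x-lookup env entity-id)]  ; hypothetical heavy query
    (save-x! env entity-id result)                  ; hypothetical cache write
    {:x result}))
```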

lgessler 15:06:24

doh I'm sorry I thought this was in #xtdb :man-facepalming: your two ideas seem sensible to me

lgessler 15:06:28

my initial reaction's that the single resolver approach makes more sense, and here's why: it's fine for pathom resolvers to be side effecting wrt database state, and you can think of cache state as just another kind of db state

sheluchin 15:06:42

> it's fine for pathom resolvers to be side effecting wrt database state

Really? I thought I read the opposite somewhere recently - that they should be pure. Another idea I had for implementing this was to create a plugin that uses https://pathom3.wsscode.com/docs/plugins/#pcrwrap-resolve to perform the cache-key lookup and skip the resolver call if something is found. Tried it briefly, but went with the previously mentioned idea because the mechanics there are more familiar to me right now.

wilkerlucio 16:06:35

@UPWHQK562 there are ways to setup custom cache stores, and set that to be used by specific resolvers: https://pathom3.wsscode.com/docs/cache#custom-cache-store-per-resolver
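Following that link, wiring a custom store to a specific resolver might look something like the sketch below. `db-cache-store` is assumed to be some value implementing the `com.wsscode.pathom3.cache/CacheStore` protocol, and `expensive-x-lookup` is a hypothetical heavy query:

```clojure
(ns example.per-resolver-cache
  (:require [com.wsscode.pathom3.connect.indexes   :as pci]
            [com.wsscode.pathom3.connect.operation :as pco]))

;; ::pco/cache-store names a key that Pathom looks up in the env to
;; find the cache store for this resolver.
(pco/defresolver x [env {:keys [entity-id]}]
  {::pco/output      [:x]
   ::pco/cache-store ::db-cache}
  {:x (expensive-x-lookup env entity-id)})  ; hypothetical heavy query

;; Put the store in the env under that key:
(def env
  (-> (pci/register [x])
      (assoc ::db-cache db-cache-store)))   ; assumed CacheStore value
```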

lgessler 17:06:49

> Really? I thought I read the opposite somewhere recently - that they should be pure.

I'm not a Pathom expert and I agree it's a good general rule to keep resolvers pure, but the question is, imo, just whether it'd be bad for a side effect in a resolver to occur many times and unpredictably. For a properly implemented caching side effect, maybe it's OK; for a very expensive side effect or one with destructive consequences, it's probably not.

wilkerlucio 17:06:12

I think it's ok to side effect, as long as you understand and are ok with the implications; after all, caching is a side effect that happens for resolvers all the time

wilkerlucio 17:06:25

another common side effect is to log resolver duration for observability

sheluchin 17:06:49

@U066U8JQJ the way described in that link requires the cache to be implemented with core.cache, right? Or just using com.wsscode.pathom3.cache/CacheStore? If it's the latter, it doesn't have to be implemented using Atom or Volatile?

wilkerlucio 17:06:09

no core.cache required, that's just an example of implementing a custom CacheStore

wilkerlucio 17:06:18

you just need to make your custom CacheStore

wilkerlucio 17:06:47

Pathom extends Atoms and Volatiles to implement CacheStore, so you can use them directly as cache stores

sheluchin 17:06:48

> This protocol is implemented by Pathom to Atom and Volatile, so you can use any of those as a cache-store.

I think this part is what I'm confused by. I don't want to use either of those for my cache store because those are in-memory, and I want to use my database as the cache store. Am I misunderstanding something here?

sheluchin 17:06:40

Atom and Volatile are two of the readily available options, but the protocol can be extended to use anything else as well. That's the correct interpretation?

wilkerlucio 17:06:08

yes, that's correct

wilkerlucio 17:06:31

a cache store is anything that implements CacheStore protocol
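Putting the thread together, a DB-backed store could implement the protocol roughly like this. This is a sketch under assumptions: the protocol method name is taken from my reading of the pathom3 source, and `db-get`/`db-put!` are hypothetical wrappers over whatever database you use:

```clojure
(ns example.db-cache-store
  (:require [com.wsscode.pathom3.cache :as p.cache]))

;; A cache store backed by a database instead of an Atom/Volatile.
;; `db-get` and `db-put!` are hypothetical DB access functions.
(defrecord DBCacheStore [db]
  p.cache/CacheStore
  (-cache-lookup-or-miss [_ cache-key f]
    (if-some [hit (db-get db cache-key)]
      hit                              ; cache hit: return the stored value
      (let [value (f)]                 ; miss: compute the value...
        (db-put! db cache-key value)   ; ...persist it for next time...
        value))))                      ; ...and return it
```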

sheluchin 17:06:25

heh I think I made that more difficult than it needed to be 😛 Thanks very much, and you as well, @U49U72C4V. I'll try that shortly.

🙌 1
sheluchin 17:06:11

Makes sense about cautious side-effects like logging and caching.