#graphql
2021-12-08
hden04:12:38

Regarding superlifter: when used with Lacinia, it is useful to allow a field resolver to modify the application context, with the change exposed to the fields nested below it, to any depth, via (resolve/with-context result new-context). In our specific use case, we’d like to update a specific db parameter for just the fields below. However, superlifter requires the parameter to be placed in its context. We are able to specify the initial context via the :urania-opts key, but what’s the recommended way to update the urania env for just the fields nested below, to any depth? Also posted in https://github.com/oliyh/superlifter/issues/26

hden04:12:08

Current idea: Use superlifter/add-bucket! to create a new bucket every time we need to context-switch, but there will be lots of buckets to sync.

Lennart Buit09:12:53

We've just made our (datomic) db value a part of the identity of our fetch records

hden10:12:38

Yes, that’s our use case as well. Datomic’s transact function gives us a db-after value after the transaction, so we need to use that as the basis in all of the fields nested below.

thumbnail12:12:19

We have a db value in the lacinia context, which we update at the end of a mutation by passing a new context to lacinia using resolve/with-context.

hden12:12:49

Yup, but assuming there are multiple mutation transactions running concurrently, thus returning multiple db values, each with a different T: how should the data-fetching function (running on a thread pool) know which db (= T) to use?

thumbnail12:12:16

I think it's not possible to run multiple mutations in a single graphql document if you're using lacinia

hden12:12:57

Well…… I’m not so sure about that……

hden12:12:55

Let’s go back to the original question. How do you update superlifter’s context? Presumably you could transact, get a new db, somehow update superlifter’s context, and finally use resolve/with-context to change lacinia’s context.

thumbnail12:12:04

> Well…… I’m not so sure about that……

I checked; my original statement is not true (any more?) according to [the docs](https://lacinia.readthedocs.io/en/latest/mutations.html). Either way, they’re executed serially.

> How do you update superlifter’s context

I just checked our usage; we use :urania-opts to initially set a context using an interceptor. In our resolvers we call (with-superlifter context ...) before resolving anything, where context is the first argument of our resolver. I think this is enough to get it to work. Most of our resolvers return stubs and require the superlifter, so our approach relies pretty heavily on this.
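A minimal sketch of the pattern described above, based on superlifter’s documented resolver style; the field name and the FetchPet record are hypothetical stand-ins for your own fetchers:

```clojure
(require '[superlifter.api :as s]
         '[superlifter.lacinia :refer [with-superlifter]])

;; Hypothetical resolver: with-superlifter picks the superlifter
;; instance out of the lacinia context, so fetches enqueued in the
;; body can be batched with fetches from sibling resolvers.
(defn resolve-pet [context args parent]
  (with-superlifter context
    (s/enqueue! (->FetchPet (:id args)))))
```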

hden13:12:45

How do you pass a db to your data fetcher (i.e. superfetcher)?
1. via superlifter’s :urania-opts
2. via the superfetcher’s identity (s/def-superfetcher FetchFoo [id db] ...)
3. via a shared atom

And how do you change to a new db after a transaction?

1️⃣ 1
2️⃣ 1
3️⃣ 1
Lennart Buit13:12:51

(same company as thumbnail) We don’t use the macro that superlifter offers; we just have simple defrecords implementing the DataSource protocol:

(defrecord MyFetcher [db id]
  u/DataSource
  (-identity [_] [(db->cache-key db) id])
  (-fetch [_ env] ...))

(defn my-fetcher [db id] 
  (->MyFetcher db id))

Lennart Buit13:12:03

So; kinda like your option 2, but without the macro

Lennart Buit13:12:42

IOW, to go back to your original question, we don’t update the superlifter context; database values are not in the superlifter context for us

Lennart Buit13:12:04

We do update the lacinia context with a new db value, and we use that to create new fetchers (as shown above)

👀 1
Lennart Buit13:12:27

Meaning the selection below a mutation uses the db-after value of the datomic transaction
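Putting the two pieces together, a mutation resolver following this approach might look roughly like the sketch below. This assumes Datomic and Lacinia; the mutation name, the transaction data, and the :conn/:db context keys are hypothetical:

```clojure
(require '[datomic.api :as d]
         '[com.walmartlabs.lacinia.resolve :as resolve])

;; Hypothetical mutation resolver: transact, then swap the db-after
;; value into the lacinia context for every field nested below.
(defn resolve-create-pet [context args parent]
  (let [{:keys [db-after]} @(d/transact (:conn context)
                                        [{:pet/name (:name args)}])]
    ;; Nested resolvers read (:db context) when constructing fetchers,
    ;; so selections under this mutation see the post-transaction db.
    (resolve/with-context
      {:name (:name args)}
      {:db db-after})))
```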

Lennart Buit13:12:08

Does that help?

hden14:12:10

Got it. It makes so much sense now. We started with a similar pattern (using the macro), but somehow the queries won’t get batched. Maybe there is something wrong with our implementation, but it works if you implement a BatchedSource, right?

thumbnail15:12:17

😅 I forgot we ditched the superlifter macros, thanks @UDF11HLKC. Debugging the batching can be tricky. We implement both DataSource and BatchedSource indeed. Maybe you can share some code?

👍 1
Lennart Buit15:12:37

Yeah my example only caches, no batching. You need to implement u/BatchedSource to do batching
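Extending the earlier record to implement both urania protocols might look like the sketch below. It reuses db->cache-key from the earlier example; fetch-pets-by-ids (assumed to return a map of id to result) and the promesa-based async wrapper are hypothetical, and the exact shape urania expects from -fetch-multi (a promise of a map keyed by each muse’s -identity) should be checked against the urania docs:

```clojure
(require '[urania.core :as u]
         '[promesa.core :as prom])

(defrecord PetFetcher [db id]
  u/DataSource
  ;; Include the db in the identity so results from different
  ;; db values (different T) are never conflated in the cache.
  (-identity [_] [(db->cache-key db) id])
  (-fetch [_ env]
    (prom/future (get (fetch-pets-by-ids db [id]) id)))

  u/BatchedSource
  (-fetch-multi [muse muses env]
    ;; Every muse in the batch carries the same db value, so a
    ;; single query can serve the whole batch.
    (let [ids (cons (:id muse) (map :id muses))]
      (prom/future
        ;; Result map is keyed by each muse's -identity.
        (into {}
              (map (fn [[id pet]] [[(db->cache-key db) id] pet]))
              (fetch-pets-by-ids db ids))))))
```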

👍 1
hden16:12:24

Got it. Thanks.

(s/def-superfetcher FetchSession [id context]
  (fn [coll _]
    (map clj->gql
         (identity/find-session-by-ids context (map core/to-eids coll)))))

(def fetch-session-by-id ->FetchSession)

context is a map that contains db. It should work though…. Maybe I should try implementing u/BatchedSource without the macro.

Lennart Buit23:12:01

So what’s important to know tho: the superlifter macro will use (:id this) as the identity for your datasource. If you truly care about using the right database value for the right subquery, you need to incorporate the database value in your cache key.

Lennart Buit23:12:31

I strongly recommend trying the record / protocol, and work your way back

hden00:12:59

Got it, thanks!