
(let [register [(pc/mutation `inc {}
                             (fn anc [{::keys [state]} _]
                               ;; how to update the `st` value?
                               (swap! state inc)))
                (pc/resolver `v {::pc/output [:v]}
                             (fn [{::keys [st]} _] {:v st}))]
      ctx {::p/reader  [p/map-reader]
           ::p/plugins [(pc/connect-plugin {::pc/register register})]
           ::p/mutate  pc/mutate-async}
      p (p/parallel-parser ctx)
      state (atom 0)]
  (async/<!! (p (assoc ctx ::state state
                           ::st @state)
                `[{(inc) [:v]}])))
;; wanted:
;; {:v               0
;;  clojure.core/inc {:v 1}}
;; got:
;; {:v               0
;;  clojure.core/inc {:v 0}}
Is it possible to update a value in context after a mutation? (just for its children)


Can you elaborate on the use case? There are some options to pass data down; the simplest is to just put the data in the mutation response, then it will be part of the input.


I'm planning to use the "db value" from #datomic. ATM I'm still doing (-> ctx :conn d/db), so every resolver uses a "new" db.


Hmm, there is a feature to change the env during processing: returning ::p/env. But I just tried it with mutations and something is broken there.


it would be something like this:


(quick-parser {::pc/register [(pc/mutation 'mut {}
                                (fn [env _]
                                  {::p/env (assoc env :env-data 42)}))

                              (pc/resolver 'from-env
                                {::pc/output [:env-data]}
                                (fn [{:keys [env-data]} _]
                                  {:env-data env-data}))]})


I'll try to get that to work, not sure what's breaking about it.


@souenzzo I just faced this recently. Ideally we want to a) use the same version of the db with each resolver and b) update that db after a transaction, so mutation joins are pulling against the new db. What I came up with is to have a global resolver that inserts the db into the response. Query resolvers have the db as a required input when needed, and it gets inserted by the first resolver that needs it (by the global resolver) and then subsequent resolvers reuse it, so a) is satisfied. For mutations joins, they simply return the updated db along with whatever you are returning to establish context for the query, and then this db is used for subsequent resolvers to resolve the query, satisfying b). I hope that explanation is clear. It’s actually very little code and easy to understand once you see it. If it’s not clear, I can post a gist in a bit as an example.
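A minimal sketch of the pattern described above, for concreteness. This is not the author's actual code (no gist was posted); the `::db` attribute, the `:person/*` attributes, and the `conn`/`d` (Datomic peer) names are all illustrative assumptions.

```clojure
;; Hedged sketch: db-as-a-value via a global resolver.
;; Assumes com.wsscode.pathom.connect as pc and datomic.api as d.

(pc/defresolver db-resolver [{:keys [conn]} _]
  {::pc/output [::db]}
  ;; Global resolver: the first resolver that needs ::db triggers this,
  ;; and the same db value is reused for the rest of the query (a).
  {::db (d/db conn)})

(pc/defresolver person-name [_ {::keys [db] :keys [person/id]}]
  {::pc/input  #{::db :person/id}
   ::pc/output [:person/name]}
  ;; Query resolvers take the db as a required input
  ;; instead of reading it from the env.
  {:person/name (:person/name (d/entity db [:person/id id]))})

(pc/defmutation rename-person [{:keys [conn]} {:keys [person/id person/name]}]
  {::pc/output [::db :person/id]}
  ;; The mutation returns the post-transaction db along with the ident,
  ;; so its join is resolved against the new db value (b).
  (let [{:keys [db-after]} @(d/transact conn [[:db/add [:person/id id]
                                               :person/name name]])]
    {::db db-after :person/id id}))
```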


seems like a simpler solution

(let [register [(pc/mutation `inc {::pc/output [::st]}
                             (fn anc [{::keys [state]} _]
                               {::st (swap! state inc)}))
                (pc/resolver `st {::pc/output [::st]}
                             (fn [{::keys [state]} _] {::st @state}))
                (pc/resolver `v {::pc/input  #{::st}
                                 ::pc/output [:v]}
                             (fn [_ {::keys [st]}]
                               {:v (str "v: " st)}))]
      state (atom 0)
      ctx {::p/reader  [p/map-reader]
           ::state     state
           ::p/plugins [(pc/connect-plugin {::pc/register register})]
           ::p/mutate  pc/mutate-async}
      p (p/parallel-parser ctx)]
  (async/<!! (p (assoc-in ctx [::p/entity ::st] @state)
                `[{(inc) [:v]}])))


@mdhaney that’s a really cool solution. Db-as-a-value continues to have so many interesting consequences. In this case, the database is no longer part of the environment but part of the data itself, which is not just “cool” but totally natural. Thanks for sharing.


@mdhaney that's a good approach I think, you can already leverage caching with it, and can override if you want to, well done 👍


@wilkerlucio thanks. 🙂. I would love to get your feedback on another approach I’m using with Datomic (Peer, not Cloud, which probably makes a difference). Originally, to pull say all the local attributes (non-refs) on an entity, I was just doing it in a single resolver with one big pull expression to Datomic. Easy enough, but seemed somewhat inefficient to me (what if you only need 1 attribute but you’re always pulling 20-30?). What I’ve started doing instead is using the Datomic entity api, so in my resolvers that resolve idents, I grab the entity and shove it in the output. Then I have a separate resolver for each attribute that pulls the value off the stored entity (which sounds tedious, but is easily wrapped in a macro to make it usually a 1 liner, unless you need to do more tweaking of the value). I haven’t benchmarked it yet to compare, but this seems more efficient to me. Not only are you only pulling just the attributes you need (because the Datomic entity api lazily resolves the attribute values) but also the separate resolvers for each attribute are run in parallel, as opposed to waiting for the entire pull query to finish. Just curious if you had experimented with this approach or not.
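A sketch of what the entity-api approach above might look like. The original code and macro were not posted, so the `::entity` key, the `defattr` macro name, and the `:person/*` attributes are illustrative assumptions.

```clojure
;; Hedged sketch: resolve the ident once, store the lazy Datomic entity,
;; then one tiny resolver per attribute. Assumes pc = pathom.connect,
;; d = datomic.api, and db available on the env.

(pc/defresolver person-entity [{:keys [db]} {:keys [person/id]}]
  {::pc/input  #{:person/id}
   ::pc/output [::entity]}
  ;; Shove the entity in the output; the entity api is lazy, so no
  ;; attribute values are pulled yet.
  {::entity (d/entity db [:person/id id])})

(defmacro defattr
  "One-liner resolvers that read a single attribute off the stored
  entity; only the attributes actually queried get realized, and the
  per-attribute resolvers can run in parallel."
  [attr]
  `(pc/defresolver ~(symbol (name attr)) [_# {entity# ::entity}]
     {::pc/input  #{::entity}
      ::pc/output [~attr]}
     {~attr (get entity# ~attr)}))

(defattr :person/name)
(defattr :person/email)
```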


@mdhaney I would love to see some benchmarks. I would assume for peer it doesn't matter as much, since the DB is mostly in memory, so reading a single property vs many should not make a difference as far as I understand (but benchmarks can prove me wrong :)). On the using-entities idea, I think it's valid as well: you could write a new reader similar to the map-reader and use a different key on the env to read from it, so you can cascade down to map-reader when you don't have a Datomic entity to pull from, makes sense?
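For reference, a reader along the lines Wilker describes might look like this. This is a hedged sketch, not code from the thread; the `::entity` env key is an assumption.

```clojure
;; Hedged sketch: a reader that tries a Datomic entity stored on the
;; env first and cascades to the next reader (e.g. p/map-reader) when
;; the key isn't there. Assumes p = com.wsscode.pathom.core.

(defn entity-reader [{::keys [entity] :keys [ast]}]
  (if entity
    ;; ::p/continue tells the parser to try the next reader in the chain
    (get entity (:key ast) ::p/continue)
    ::p/continue))

;; placed before map-reader in the reader vector:
;; {::p/reader [entity-reader p/map-reader pc/reader2 ...]}
```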


Interesting, I’ll have to look into the custom reader.