
When a single remote mutation is used in multiple places in the client, each with a different continuation, is it good practice to delegate all the actions to the transact! call, like so:

(defmutation my-mutation [args]
  (action [env] ((get-in env [::tx/options :action]) env))
  (ok-action [env] ((get-in env [::tx/options :ok-action]) env))
  (error-action [env] ((get-in env [::tx/options :error-action]) env))
  (remote [_] true))
so that I could just do
(comp/transact! this [(my-mutation {...})] {:action (fn [] ...) :ok-action (fn [] ...) :error-action (fn [] ...)})

Thomas Moerman 09:02:35

What I would suggest is to take a look at UISM state machines. If I understand correctly, your mutation is used in multiple contexts with different interaction scenarios. UISM state machines are, in my experience, a suitable construct for modeling these interactions. Upon receiving an event, the UISM can call the appropriate optimistic updates and remote mutation (e.g. the one you describe), and connect that mutation with the correct follow-up (ok or error) events specific to that interaction. This way the interaction logic is better co-located with the other events in the interaction and less coupled to how the remote mutation is called. Hope that helps. Curious to hear other opinions as well. Cheers.
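For what it's worth, a rough sketch of what such a machine can look like. All actor, state, and event names here (:actor/form, :event/save, etc.) are made up for illustration, and `my-mutation` stands for the shared remote mutation:

```clojure
(ns example.save-machine
  (:require [com.fulcrologic.fulcro.ui-state-machines :as uism]))

(uism/defstatemachine save-machine
  {::uism/actor-names #{:actor/form}
   ::uism/states
   {:initial
    {::uism/events
     {:event/save
      {::uism/handler
       (fn [env]
         (-> env
             ;; fire the shared remote mutation, routing its result
             ;; back into this machine as ok/error events
             (uism/trigger-remote-mutation :actor/form `my-mutation
               {::uism/ok-event    :event/save-ok
                ::uism/error-event :event/save-failed})
             (uism/activate :state/saving)))}}}
    :state/saving
    {::uism/events
     {:event/save-ok     {::uism/handler #(uism/activate % :initial)}
      :event/save-failed {::uism/handler #(uism/activate % :initial)}}}}})
```

Each interaction context can then install its own handlers for :event/save-ok and :event/save-failed, so the continuation lives in the machine rather than at the transact! call site.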

❤️ 2

Thanks, @U052A8RUT. It's been a while, but the last time I dived into UISMs I concluded that they require a mature understanding of the business logic, which I currently do not have. I do think you are right that UISMs could be one solution to this kind of shared logic.


I'm currently trying out another simpler approach for my particular problem: define multiple client-side mutations that map to the same remote mutation:

(defmutation foo [_]
  (remote [_] true))

(defmutation foo-variant-1 [args]
  (action [env] ...)
  (ok-action [env] ...)
  (error-action [env] ...)
  (remote [_] (eql/query->ast1 [(foo {...})])))

(defmutation foo-variant-2 [args]
  (action [env] ...)
  (ok-action [env] ...)
  (error-action [env] ...)
  (remote [_] (eql/query->ast1 [(foo {...})])))


To expand on why I think over-using ::tx/options is bad practice: it defeats the purpose of separating mutations from the UI. In my opinion, the point of separating mutations from the UI (and datafying the former) is that we can reason about exactly what is going to happen to the app state given only the datafied mutation. Callbacks attached at the comp/transact! call destroy our ability to do that.

Björn Ebbinghaus 18:02:20

Transactions are just data, so you could add the continuation as a parameter.

Björn Ebbinghaus 18:02:51

(defmutation your-mutation [{:keys [continuation]}]
  (action [{:keys [app]}]
    (comp/transact! app [continuation])))

(comp/transact! app [(your-mutation {:continuation (your-other-mutation {:foo "bar"})})])


@U4VT24ZM3 In my application, I need the continuation to be a general function rather than a pre-defined mutation. That being said, if my continuation were already a mutation, I could simply do (comp/transact! this [(your-mutation) (your-other-mutation)]), isn't that right?


@U6SEJ4ZUH something I do which is related to the problem you're trying to solve here (I think) is to separate server mutations from client mutations when multiple client mutations want to use the same server mutation. E.g., if you have a server mutation for writing changes to a customer object, and you have one client location where you only need to change :customer/name and another client location where you change more than that.


This requires you to rewrite the remote mutation's symbol in the remote clause of m/defmutation:

(remote [{:keys [ast]}]
  (assoc ast :key `my.backend.ns/server-mutation))
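Put together, the full pattern looks roughly like this. The mutation name, parameters, and client-db path here are hypothetical; `my.backend.ns/server-mutation` is the shared server mutation:

```clojure
(defmutation save-customer-name [{:customer/keys [id name]}]
  (action [{:keys [state]}]
    ;; optimistic update of the normalized client db
    (swap! state assoc-in [:customer/id id :customer/name] name))
  (remote [{:keys [ast]}]
    ;; send the shared server mutation instead of this mutation's own symbol
    (assoc ast :key `my.backend.ns/server-mutation)))
```

The parameters still travel with the AST, so the server mutation sees the same args the client mutation was transacted with.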


Oops, I just saw you said you do that in the replies 😁. I'd be curious to hear your thoughts on this pattern as you go along. So far I've been loving it personally, as I find that imposing the assumption that server and client mutations are one-to-one is usually not very ergonomic, at least in my problem domains. And yeah, I find UISMs to be an investment only worth making in some situations.


Is this an anti-pattern? If so, what is the correct way?


Could someone explain the relationship between RAD's Pathom 3 parser (a mount state holding the result of rad.pathom3/new-processor) and a Pathom 3 environment?

Jakub Holý (HolyJak) 10:02:49

I believe the parser is a function. The env is a map passed to all resolvers.


Is there an important difference between a "processor" and a "parser"?

If I want to use Pathom throughout my application without using UI component queries, is there an established pattern for doing that? Do I just require the parser everywhere and hand-write EQL queries to pass as the tx argument? If so, does it make sense to write wrapper functions for such parser calls? Pathom is supposed to abstract function calls, so wrapping it in function calls seems in conflict with its nature/purpose (fns -> fn abstraction -> fns again). I'm not sure how to reason about it.

Should the processor be the one and only? Is it an anti-pattern to have multiple Pathom processors (or parsers? I'm confused here)? For example, my Fulcro app obviously has its set of resolvers. Okay, that's one. If I also want to wrap a GraphQL API with Pathom, is it better to include it in the same parser/processor, or to separate them?

Once I have the parser created, is there some way to get the env from it? For example, if I want to make calls to (or any other Pathom fn taking an env), how do I get access to an env to pass as an argument? Is this something that will require changing the RAD source to separate environment creation from processor creation?

Sorry, a lot of questions here 😅 I understand Pathom abstractly and make use of it all the time, but the above is unclear to me. I don't really get how to use Pathom within the Fulcro context and simultaneously on its own.

Jakub Holý (HolyJak) 12:02:32

If using Pathom outside of Fulcro, it's best to ask in #pathom. The env is just a map. If it should contain e.g. a DB connection, you must put it in. So you can construct the env the same way for your manual calls. See 1.b under
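To make that concrete, here is a minimal sketch of building a Pathom 3 env by hand and querying it outside Fulcro. The resolver, the :db key, and the sample data are all made up; the point is that the env is an ordinary map you assemble yourself:

```clojure
(ns example.env
  (:require [com.wsscode.pathom3.connect.operation :as pco]
            [com.wsscode.pathom3.connect.indexes :as pci]
            [com.wsscode.pathom3.interface.eql :as p.eql]))

(pco/defresolver customer-name [{:keys [db]} {:customer/keys [id]}]
  {:customer/name (get-in db [id :name])})

(def env
  (-> {:db {1 {:name "Ada"}}}          ; put your own deps (DB conn, etc.) here
      (pci/register [customer-name]))) ; resolver indexes merge into the same map

(p.eql/process env {:customer/id 1} [:customer/name])
```

p.eql/process takes the env, an initial entity, and an EQL query, and returns a map of the requested attributes; p.eql/process-one is similar but returns a single attribute's value directly.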


(parser
  {} ; env plugins such as RAD's pathom-plugin
     ; will add necessary stuff here
  [:your/query :is/here ...])
I think it's that part about the pathom-plugin adding extras that I need to copy from the lib into my project code and use to create the env when making manual calls to process-one et al. Does that sound right?

Jakub Holý (HolyJak) 18:02:08

It depends on whether your resolvers care about that stuff. I typically make my own env map with what the resolver needs.


@U0522TWDA after re-reading your blog and looking over the linked example, it started to click for me. It kinda makes sense now... I just include in the env whatever the resolver in question will need. That's a little trickier than it sounds, with middleware included in the mix to make RAD attributes resolvable, but I think I have what I need for now 🙂 For posterity:

((-> (attr/wrap-env all-attributes)
     (xtdb/wrap-env (fn [env] {:production (:main xtdb-nodes)})))
 (-> {}
     ;; `convert-resolvers` call is necessary for now,
     ;; even if you don't use P2 resolvers/mutations at all
     (pci/register (pathom3/convert-resolvers ...))))
;; ... the resulting env is then used for manual calls with inputs like
;; {::foo/id #uuid "ffffffff-ffff-ffff-ffff-000000000001"}
Many thanks! 🙏

❤️ 1