#fulcro
2019-07-20
souenzzo00:07:49

How does fulcro-inspect know that there is an app connected? Does everything go through window events? Or are things like __fulcro-inspect-remote-installed__ also relevant?

wilkerlucio12:07:01

@souenzzo __fulcro-inspect-remote-installed__ is set by the client library (from your code), and the extension uses it to detect that the client is installed. After that detection they keep communicating via messaging: it uses window.postMessage, which is captured by a content script installed by the extension; that forwards the messages to the background script page, which then communicates with the devtool panel. That's the big picture of the messaging. The client library is the part that hooks into the apps; it's the one that tells the extension about everything, including new app installations
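A rough ClojureScript sketch of that handshake (the namespace, function names, and message shape here are illustrative, not fulcro-inspect's actual internals):

(ns example.inspect-client
  (:require [goog.object :as gobj]))

(defn install! []
  ;; the flag the extension's content script looks for to detect a client on the page
  (gobj/set js/window "__fulcro-inspect-remote-installed__" true))

(defn notify-app-started! [app-uuid]
  ;; after detection everything travels over window.postMessage; the
  ;; content script relays it to the background page, which forwards
  ;; it to the devtool panel
  (.postMessage js/window
                #js {"fulcro-inspect-remote-message"
                     (pr-str {:type :app-started :app-uuid app-uuid})}
                "*"))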

wilkerlucio14:07:04

it doesn't run inside the extension, it runs in the client library. The same codebase has different output targets: the code that adds the flag is in the client library (it runs with your user code), and the code that checks for it runs in the extension

wilkerlucio14:07:03

but in Fulcro 3 this client code now lives in Fulcro itself, so clients will always have it; no need to pull in the fulcro-inspect dependency

souenzzo14:07:30

I know. I'm playing with Fulcro 3. Trying to pipe window.postMessage calls into a websocket <-server-> dummy client that pipes to the inspector

wilkerlucio12:07:00

@uwo I think currently there is no direct approach, but you can use the :update-query param on the load to modify the query. With a few transformation helpers you can add params to the ident part of the query and put :pathom/context there, makes sense?
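A hedged sketch of what that could look like with Fulcro 2-style helpers (the namespaces, add-pathom-context, and the Customer component are placeholders, not something from the thread):

(ns example.load-with-context
  (:require [fulcro.client.data-fetch :as df]
            [fulcro.client.primitives :as prim]))

(defn add-pathom-context
  "Add a :pathom/context param to the ident join at the root of the load query."
  [context query]
  (-> (prim/query->ast query)
      (update-in [:children 0 :params] merge {:pathom/context context})
      prim/ast->query))

(defn load-customer! [this id]
  ;; Customer is a placeholder component class
  (df/load this [:customer/id id] Customer
           {:update-query (partial add-pathom-context {:tenant "acme"})}))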

uwo18:07:46

That does make sense, thanks Wilker!

uwo22:07:44

Is there an existing way to get sliding-queue type behavior on a collection (normalized in app state)? Say I have lots of customers stored in a table keyed by :customer/id, and I want the least recently fetched customers to fall off the table after 100, for instance. I know there are prepend, append, and replace mutation helpers, so it made me wonder if there's already something that would make this easy.

Chris O’Donnell23:07:19

Maybe you could use append/prepend along with a post mutation that handles dropping elements past 100?

wilkerlucio23:07:45

just the map would not be enough since you can't know the order in which keys got added there. I think core.cache has some helpers, but you also need to be careful with references: if you remove some entity, maybe someone else was pointing to it, so you have to think about how to recover from that. It's surely an interesting problem
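For reference, clojure.core.cache's LRU cache is one such helper; a minimal, Fulcro-agnostic example:

(require '[clojure.core.cache :as cache])

;; LRU cache capped at 100 entries; the least recently used key is
;; evicted once a new entry pushes it past the threshold
(def customers (atom (cache/lru-cache-factory {} :threshold 100)))

(swap! customers cache/miss [:customer/id 1] {:customer/name "Ada"})
(cache/lookup @customers [:customer/id 1]) ;; => {:customer/name "Ada"}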

Chris O’Donnell23:07:26

I was assuming they had an ordered collection of customer idents somewhere to manage the recency. They would need that to use fulcro's prepend or append helpers IIRC.

Chris O’Donnell23:07:57

And yeah, removing the customer data from state completely definitely introduces some tricky problems.
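A rough sketch of the post-mutation idea above, Fulcro 2-style (the :recent-customers key, the mutation name, and the limit are hypothetical):

(ns example.sliding-customers
  (:require [fulcro.client.mutations :refer [defmutation]]))

(defmutation trim-customers
  "Post mutation: keep only the `limit` most recently fetched customers.
   Assumes an ordered vector of idents at :recent-customers, most recent last."
  [{:keys [limit] :or {limit 100}}]
  (action [{:keys [state]}]
    (swap! state
      (fn [s]
        (let [idents  (get s :recent-customers [])
              kept    (vec (take-last limit idents))
              dropped (map second (drop-last limit idents))]
          (-> s
              (assoc :recent-customers kept)
              ;; careful: other parts of state may still hold idents that
              ;; point at the removed entities
              (update :customer/id #(apply dissoc % dropped))))))))

You would run it as the :post-mutation of the load that appends the freshly fetched ident.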

Chris O’Donnell23:07:11

Is there a way to pass multiple mutations as part of the remote implementation of a fulcro mutation? For context, I'm using fulcro + pathom connect graphql, and I'd like to trigger a request with multiple graphql mutations like:

mutation {
  remove_a(params) {
    stuff
  }
  remove_b(params) {
    stuff
  }
}
Also, if there's another way I should be accomplishing that, I'm all ears.

wilkerlucio23:07:50

no, you should prefer composing in the mutation itself: instead of running mutation A + B, write a mutation C that does A + B

Chris O’Donnell23:07:35

Thanks for the suggestion. Unfortunately, I don't think I can do that as I don't have control over the schema of this GraphQL API. The reason it's important to run these in the same GraphQL request is the API I'm hitting has the semantic that multiple mutations in the same request are run in a transaction (if one fails, they all roll back).

wilkerlucio01:07:56

hmm, gotcha. I had another idea: what if in those cases you use a mutation like this: (handle-multiple-mutations {:mutations [(mutation-a {...}) (mutation-b {...})]})

wilkerlucio01:07:31

then you could intercept the requests at the network level, and if you see the handle-multiple-mutations mutation, you can modify the request to expand it, makes sense?

Chris O’Donnell01:07:50

Nice idea! Pretty sure I can make that work. Thanks!

wilkerlucio01:07:28

cool, I guess depending on how you need to handle the response, you will also need to "unwrap" that in the response, since fulcro will be expecting the original mutation back
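Purely as an illustration of that expand/unwrap idea (where exactly these hook into your remote's request and response handling depends on your setup; the names are made up):

(ns example.multi-mutation-middleware)

(defn expand-multi-mutations
  "Outgoing tx: splice the wrapped mutations out of
   (handle-multiple-mutations {:mutations [...]}) so they all travel
   in a single request."
  [tx]
  (into []
        (mapcat (fn [call]
                  (if (and (seq? call) (= 'handle-multiple-mutations (first call)))
                    (:mutations (second call))
                    [call])))
        tx))

(defn unwrap-multi-mutations
  "Incoming response: re-key the individual mutation results under the
   wrapper symbol, since Fulcro expects the mutation it originally sent."
  [response]
  (let [ks (filter symbol? (keys response))]
    (if (seq ks)
      (-> (apply dissoc response ks)
          (assoc 'handle-multiple-mutations (select-keys response ks)))
      response)))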

Chris O’Donnell02:07:49

Pretty sure all I need is to keep an eye out for errors. I'll have to play with that a bit to make sure everything is working properly. Cheers!

Chris O’Donnell20:07:37

@wilkerlucio I ended up writing a custom mutation send-multiple-mutations that reimplements pieces of the pathom connect graphql resolver:

;; assumed requires: [com.wsscode.pathom.core :as p]
;;                   [com.wsscode.pathom.connect :as pc]
;;                   [com.wsscode.pathom.connect.graphql2 :as pcg]
;;                   and let-chan from com.wsscode.common.async-clj (or async-cljs)
(defn send-multiple-mutation [{::pcg/keys [demung] :as config}]
  (pc/mutation
   `send-multiple-mutations
   {::pc/params [:mutations]}
   (fn [env {:keys [mutations]}]
     (let [env' (merge env config)
           parser-item' (::pcg/parser-item config pcg/parser-item)
           gq    (pcg/query->graphql mutations config)]
       (let-chan [{:keys [data errors]} (pcg/request env' gq)]
         (let [parser-response
               (parser-item' {::p/entity      data
                              ::p/errors*     (::p/errors* env)
                              ::base-path     (vec (butlast (::p/path env)))
                              ::demung        (or demung identity)
                              ::graphql-query gq
                              ::errors        (pcg/index-graphql-errors errors)}
                             mutations)]
           parser-response))))))
I tried using a sentinel send-multiple-mutations mutation, then rewriting the ast at the network level before passing the query to the parser to include its mutation params as top level mutations. Unfortunately (in my case), the parser then sends off the mutations in parallel requests, which is not what I wanted. I wasn't sure how else to work around this, so I ended up with the above.
