#pathom
2019-02-18
currentoor01:02:03

Also @souenzzo this approach doesn’t seem to allow code reloading in dev, is that something that was working for you?

currentoor01:02:44

@wilkerlucio perhaps if ::pc/register could be a function, then code reloading in dev would work?

currentoor01:02:05

or an alternative ::pc/register-fn?

souenzzo14:02:56

@currentoor I never used #pathom on the client. 😞 But hot-reload on Pathom is a problem for me too. I have a "complex" case where I generate the resolvers..

wilkerlucio19:02:32

@currentoor @souenzzo hello guys, talking about refresh, it’s all about keeping the index updated. When we do the regular setup, all resolvers are sent and then added to the index. By default the index is created by the connect plugin under the hood, but you can send your own index atom, like this:

; create an index outside so you can maintain it
(defonce indexes (atom {}))

(def parser
  (p/parallel-parser
    {::p/env     {::p/reader               [p/map-reader
                                            pc/parallel-reader
                                            pc/open-ident-reader
                                            p/env-placeholder-reader]
                  ::p/placeholder-prefixes #{">"}}
     ::p/mutate  pc/mutate-async
     ::p/plugins [(pc/connect-plugin {::pc/register my-resolvers
                                      ; send your index here
                                      ::pc/indexes indexes})
                  p/error-handler-plugin
                  p/trace-plugin]}))
Then if you call (swap! indexes pc/register some-new-resolver) it will be added to the index and take effect immediately. Pathom doesn't have an opinion around this setup because there are too many ways to do it (and it is often different for clj and cljs), but having your own index and writing some helpers around the core functions should give you a way to handle it. Makes sense?
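A minimal dev-time reload sketch building on the `indexes` atom above (the `all-resolvers` var is a hypothetical stand-in for wherever you keep your resolver list, not part of Pathom's API):

```clojure
;; Dev helper sketch: rebuild the index from scratch so that
;; redefined resolvers take effect after a code reload.
;; `all-resolvers` is an assumed var holding your resolver vector.
(defn reload-resolvers! []
  (reset! indexes {})
  (swap! indexes pc/register all-resolvers))
```

Calling `(reload-resolvers!)` from the REPL (or a reload hook) re-registers everything against the same atom the parser already reads from, so no parser restart is needed.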

👍 5
currentoor19:02:24

yeah that totally makes sense, thank you

currentoor19:02:55

also, is there any difference between using the parallel-parser vs the async-parser in the browser or node?

wilkerlucio19:02:24

the parallel parser can run resolvers in parallel; the async parser supports async ops, but still runs serially

wilkerlucio19:02:51

under the hood they are very different 🙂

currentoor19:02:10

i see, so the async parser is kind of like the request queue in fulcro?

wilkerlucio19:02:37

it came at a time when the regular parser was the only option, and JS needs async to do anything (like HTTP requests)

wilkerlucio19:02:52

so the async parser is just like the regular one, but it supports core.async channels for async ops

wilkerlucio19:02:04

but it processes in a serial way, like the regular parser

currentoor19:02:13

ah i see, @tony.kay and I were wondering what the difference was

currentoor19:02:51

i assumed parallel only made sense in CLJ and async did requests in parallel

wilkerlucio19:02:09

nope, I suggest using the parallel parser everywhere

currentoor19:02:16

thanks for explaining

wilkerlucio19:02:09

no worries. On the topic, that's also why there are pc/async-reader2 and pc/parallel-reader

wilkerlucio19:02:23

they work in very different ways, the parallel-reader needs to do way more work to orchestrate the parallel requests
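For contrast with the parallel-parser setup shown earlier, a serial async setup swaps the parser and reader; a sketch (assuming the same `my-resolvers` registry):

```clojure
;; Serial async setup: async-parser + async-reader2 instead of
;; parallel-parser + parallel-reader. Resolvers may return
;; core.async channels, but queries are processed one at a time.
(def async-parser
  (p/async-parser
    {::p/env     {::p/reader               [p/map-reader
                                            pc/async-reader2
                                            pc/open-ident-reader
                                            p/env-placeholder-reader]
                  ::p/placeholder-prefixes #{">"}}
     ::p/mutate  pc/mutate-async
     ::p/plugins [(pc/connect-plugin {::pc/register my-resolvers})
                  p/error-handler-plugin
                  p/trace-plugin]}))
```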