#pathom
2020-10-22
tvaughan18:10:12

I have a parallel parser that is called in some http-kit middleware (for server-side rendering). This has been working for quite some time without issue, but it just broke when I added additional data to a test fixture. I think what's happening is that the database query now takes just long enough (despite still being extremely quick) for http-kit to trigger some sort of timeout. This is pure speculation, but reducing the size of the test fixture and switching to a normal parser both solve the problem. Does this ring a bell for anyone? What am I doing wrong? Below is what I had; drawings-endpoint is the function called in the http-kit middleware.

;; assumed aliases, not shown in the original paste:
;;   [com.wsscode.pathom.core :as pathom]
;;   [com.wsscode.pathom.connect :as pathom-connect]
;;   [clojure.core.async :refer [<!!]]
;; ex-handler and drawings-handlers are defined elsewhere
(defonce ^:private drawings-parser
  (pathom/parallel-parser
    {::pathom/env {::pathom/reader [pathom/map-reader
                                    pathom-connect/parallel-reader
                                    pathom-connect/open-ident-reader
                                    pathom-connect/index-reader]
                   ::pathom/process-error ex-handler
                   ::pathom-connect/mutation-join-globals [:tempids]}
     ::pathom/mutate pathom-connect/mutate-async
     ::pathom/plugins [(pathom-connect/connect-plugin {::pathom-connect/register drawings-handlers})
                       pathom/error-handler-plugin
                       pathom/trace-plugin]}))

(defn drawings-endpoint
  ([query]
   (drawings-endpoint query {}))
  ([query opts]
   (when query
     (<!! (drawings-parser opts query)))))
And this is my "solution":
(defonce ^:private drawings-parser
-  (pathom/parallel-parser
+  (pathom/parser
     {::pathom/env {::pathom/reader [pathom/map-reader
-                                    pathom-connect/parallel-reader
+                                    pathom-connect/reader2
                                     pathom-connect/open-ident-reader
                                     pathom-connect/index-reader]
                    ::pathom/process-error ex-handler
                    ::pathom-connect/mutation-join-globals [:tempids]}
-     ::pathom/mutate pathom-connect/mutate-async
+     ::pathom/mutate pathom-connect/mutate
      ::pathom/plugins [(pathom-connect/connect-plugin {::pathom-connect/register drawings-handlers})
                        pathom/error-handler-plugin
                        pathom/trace-plugin]}))
@@ -32,6 +32,7 @@
   ([query opts]
    (when query
-     (<!! (drawings-parser opts query)))))
+     (drawings-parser opts query))))
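For reference, applying the diff gives roughly the following parser and endpoint (reconstructed from the hunks above, nothing new added):

(defonce ^:private drawings-parser
  (pathom/parser
    {::pathom/env {::pathom/reader [pathom/map-reader
                                    pathom-connect/reader2
                                    pathom-connect/open-ident-reader
                                    pathom-connect/index-reader]
                   ::pathom/process-error ex-handler
                   ::pathom-connect/mutation-join-globals [:tempids]}
     ::pathom/mutate pathom-connect/mutate
     ::pathom/plugins [(pathom-connect/connect-plugin {::pathom-connect/register drawings-handlers})
                       pathom/error-handler-plugin
                       pathom/trace-plugin]}))

(defn drawings-endpoint
  ([query]
   (drawings-endpoint query {}))
  ([query opts]
   (when query
     ;; the serial parser returns a value directly, so no <!! is needed
     (drawings-parser opts query))))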
 

souenzzo19:10:18

@tvaughan it looks like you are doing blocking IO inside the parallel parser's resolvers. You can solve that by using a thread pool, or by using a non-blocking db API that probably has an internal thread pool of its own. In reality, though, it looks like a simple "serial" parser will be a better fit for your use case.
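A minimal sketch of that suggestion (not from the thread), assuming the Pathom 2 parallel parser and the on-prem datomic.api; the :drawing/* attributes are hypothetical, and conn is assumed to be a Datomic connection placed in the parser env:

(require '[clojure.core.async :as async]
         '[com.wsscode.pathom.connect :as pc]
         '[datomic.api :as d])

(pc/defresolver drawing-title [{:keys [conn]} {:drawing/keys [id]}]
  {::pc/input  #{:drawing/id}
   ::pc/output [:drawing/title]}
  ;; async/thread runs the blocking pull on its own thread pool and returns a
  ;; channel, so the parallel parser's go threads are never tied up by IO
  (async/thread
    (d/pull (d/db conn) [:drawing/title] [:drawing/id id])))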

tvaughan19:10:04

The db query is a simple datomic pull-syntax read operation

souenzzo19:10:57

If you are using #datomic Cloud I highly recommend using p/parser. I started with a parallel-parser, and after A LOT of debugging, thread pools, and other patterns/tweaks to make datomic.client.(async).api work with parallel-parser, I found that p/parser is more performant 😞

tvaughan19:10:01

I switched to the parallel parser because it seemed (somewhat unconfirmed) that using the serial parser would block all other pending requests

tvaughan19:10:16

> I see that `p/parser` is more performant
Interesting. I came to the opposite conclusion

souenzzo19:10:24

I started with that conclusion too, for a small/dev-local dataset. But for larger data, running inside an Ion, with a "hot" cache, it looks like Datomic already does the threading internally for you; many threads accessing the Datomic API just add "thread overhead". PS: my conclusion is about Datomic Ions. If you are using Datomic Cloud without Ions, which relies on HTTP calls, you may end up with different results.

tvaughan19:10:12

Cool. Thanks for this

tvaughan19:10:31

Thanks @souenzzo I'll stick with the serial parser

tvaughan19:10:33

Currently on-prem, but cloud eventually

wilkerlucio19:10:29

@tvaughan the serial parser is better for most users; the parallel parser is way too complex and adds a ton of overhead. To cross the "overhead gap" you must have quite large queries (and I mean 300+ attributes in a single query) and resolvers whose IO is easy to distribute (for example, hitting many different services)

wilkerlucio19:10:53

the serial parser is not going to prevent many requests at once; serial means that for one EQL query, that query is processed sequentially (one attribute at a time)

wilkerlucio19:10:09

while the parallel parser can parallelize different attributes on the same query
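To illustrate the distinction in the two messages above, here is a small sketch (not from the thread; resolver and attribute names are made up). Two independent root attributes each take ~100ms; under the parallel parser they can be resolved concurrently, whereas under the serial parser equivalent blocking resolvers would run one after the other:

(require '[clojure.core.async :as async :refer [<!!]]
         '[com.wsscode.pathom.core :as p]
         '[com.wsscode.pathom.connect :as pc])

(pc/defresolver slow-a [_ _]
  {::pc/output [:slow/a]}
  (async/go (async/<! (async/timeout 100)) {:slow/a 1}))

(pc/defresolver slow-b [_ _]
  {::pc/output [:slow/b]}
  (async/go (async/<! (async/timeout 100)) {:slow/b 2}))

(def parallel
  (p/parallel-parser
    {::p/env     {::p/reader [p/map-reader pc/parallel-reader]}
     ::p/plugins [(pc/connect-plugin {::pc/register [slow-a slow-b]})
                  p/error-handler-plugin
                  p/trace-plugin]}))

;; both attributes are in flight at the same time, so this takes roughly
;; 100ms rather than 200ms
(time (<!! (parallel {} [:slow/a :slow/b])))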

wilkerlucio19:10:41

but I can understand the confusion; for a time I used to recommend the parallel parser as the starter, but time goes on and we learn better 🙂

tvaughan19:10:37

To be clear, the parser in question is used both for server-side rendering and as an API endpoint. The performance characteristics I mention are related to its use as an API endpoint. The problem I mention above shows up when it's used in http-kit middleware. I was surprised that parallel requests to http-kit would block. However, I'm not so surprised that using a parallel reader in http-kit middleware is a problem

tvaughan19:10:44

Cool. Thanks for the clarification @wilkerlucio. That does clarify its purpose for me

tvaughan19:10:31

> for a time I used to recommend the parallel as the starter
First I went back to the documentation. FYI, https://blog.wsscode.com/pathom/v2/pathom/2.2.0/core/async.html says:
> Nowadays the parallel parser is the recommended one to use ...

wilkerlucio19:10:07

thanks, bad old docs, going to update this now 🙂

👍 3
tvaughan20:10:45

That's super helpful. Thanks!

nivekuil19:10:56

since Pathom 3 will only have reader3, how will async resolvers work? You can't return core.async channels from resolvers/mutations anymore, right?

dehli19:10:55

I’m using reader3 with core.async channels and it’s working for me

wilkerlucio19:10:27

rest assured that Pathom 3 will support async 🙂 In Pathom 2, reader3 supports both sync and async. When I say Pathom 3 will only have reader3, it's more a way of describing the type of processing it will do, because Pathom 3 will not have readers at all (nor parsers).
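A minimal sketch of the async case with reader3 in Pathom 2 (resolver and attribute names are made up, and this assumes reader3 simply drops in where the connect reader would normally go):

(require '[clojure.core.async :as async]
         '[com.wsscode.pathom.core :as p]
         '[com.wsscode.pathom.connect :as pc])

(pc/defresolver answer [_ _]
  {::pc/output [:app/answer]}
  ;; returning a channel is fine; the async parser awaits it
  (async/go {:app/answer 42}))

(def parser
  (p/async-parser
    {::p/env     {::p/reader [p/map-reader pc/reader3 pc/open-ident-reader]}
     ::p/mutate  pc/mutate-async
     ::p/plugins [(pc/connect-plugin {::pc/register [answer]})
                  p/error-handler-plugin]}))

;; (async/<!! (parser {} [:app/answer])) ;=> {:app/answer 42}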

nivekuil20:10:10

> I’m using reader3 with core.async channels and it’s working for me
Ah, it does -- I thought you had to use the parallel reader alongside the parallel parser/mutate. I guess that confusion can't happen in Pathom 3 if it's not even a thing anymore :)

nivekuil20:10:58

so just changing from parallel-reader to reader3, still using parallel-parser, seems to cause a few random attributes (like 1% of the stuff returned from a big query) that my fulcro app previously loaded fine to be nil. Is this expected?

wilkerlucio20:10:38

reader3 is experimental, so yes, errors are expected

souenzzo00:10:35

not sure what to expect from mixing parallel-parser with reader3. About reader3, I know of some issues if you use placeholders and (:ast env)