2021-11-08
I have some query plans that take upwards of 6 seconds to compute. I'm sure that I could simplify my resolvers to improve this, but I don't have time to dig into it right now. In the short term, I'd like to use Redis for the plan cache. Of course, I don't want to use old plans from the cache, so I'm thinking that I should incorporate one of the Pathom indices into the cache key. I think the :com.wsscode.pathom3.connect.indexes/index-oir is a good index to use.
Thoughts?
index-oir is good, and to make it simpler to cache you can take (hash index-oir), so you have a shorter value to use as the cache key
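A minimal sketch of what that could look like, assuming carmine as the Redis client; the connection options, the plan-cache-key and lookup-or-compute-plan helpers, and the compute-plan-fn placeholder are all illustrative names, and the wiring into Pathom's own plan-cache hook is not shown in the thread:

```clojure
(ns myapp.plan-cache
  "Sketch: derive a Redis cache key for Pathom plans from index-oir."
  (:require [taoensso.carmine :as car :refer [wcar]]))

;; Hypothetical carmine connection options.
(def redis-conn {:pool {} :spec {:uri "redis://localhost:6379"}})

(defn plan-cache-key
  "Combine a hash of index-oir with a hash of the query, so a stale plan
   is never reused after the resolver graph changes."
  [env query]
  (str "pathom-plan:"
       (hash (:com.wsscode.pathom3.connect.indexes/index-oir env))
       ":"
       (hash query)))

(defn lookup-or-compute-plan
  "Return the cached plan for `query` if present; otherwise compute it with
   `compute-plan-fn` (a placeholder for whatever produces the plan) and store it.
   Carmine serializes Clojure data transparently, so the plan round-trips as-is."
  [env query compute-plan-fn]
  (let [k (plan-cache-key env query)]
    (or (wcar redis-conn (car/get k))
        (let [plan (compute-plan-fn env query)]
          (wcar redis-conn (car/set k plan))
          plan))))
```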
Out of curiosity: Is six seconds to calculate a plan concerning? I plan to investigate this more deeply but it's going to take a while before I get around to it
really depends on how large your query is and how complex the attribute depth is
I think 6 seconds is really a lot, but if there is a lot of spreading, and a lot of OR nodes, then things can get nasty
if after inspection you see some way in which Pathom could do it better, we can work it out
in my experiments I get planning to finish mostly around 3 ~ 10 ms
My plan contains a lot of OR nodes, especially early in the plan, so that might compound the problem. It will be a few weeks before I can get into this in depth
OR paths are the trickiest thing to handle. I hope we find ways to improve it, but if you can reduce them in your modeling, it surely helps
fyi - I found some time to test the hypothesis that OR nodes cause the planning to take a long time. In short, yes. With just a little hacking, I can get the planning down from 6000 ms to about 20 ms
My problem is that my app has about 25 resolvers that provide a public API that the client uses. These resolvers duplicate the output of internal resolvers, so the plan that Pathom produces is very large.
Because I know these resolvers are only used at the input edge of the plan, I can break query processing into two phases: the first phase is a Pathom query that converts the public API to the internal API, and the second phase is a Pathom query consisting of only the internal resolvers.
Much, much faster this way
Thanks for pointing me in the right direction!
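A rough sketch of that two-phase idea, using Pathom 3's standard register/process API; the resolvers and attribute names (:public/user-id, :internal/user-id, :user/name) are made up for illustration, and the real app would have many more resolvers per phase:

```clojure
(ns myapp.two-phase
  (:require [com.wsscode.pathom3.connect.operation :as pco]
            [com.wsscode.pathom3.connect.indexes :as pci]
            [com.wsscode.pathom3.interface.eql :as p.eql]))

;; Phase 1: a public-edge resolver that only translates the client-facing
;; attribute into the internal one (hypothetical attributes).
(pco/defresolver public->internal [{:public/keys [user-id]}]
  {:internal/user-id user-id})

;; Phase 2: an internal resolver doing the real work (hypothetical).
(pco/defresolver user-name [{:internal/keys [user-id]}]
  {:user/name (str "user-" user-id)})

;; One small env per phase keeps each plan small.
(def public-env   (pci/register [public->internal]))
(def internal-env (pci/register [user-name]))

(defn process-two-phase
  "Run a small plan over the public env to get the internal attributes,
   then run the main query over the internal-only env."
  [public-entity internal-query]
  (let [internal-entity (p.eql/process public-env public-entity [:internal/user-id])]
    (p.eql/process internal-env internal-entity internal-query)))

(comment
  (process-two-phase {:public/user-id 42} [:user/name])
  ;; => {:user/name "user-42"}
  )
```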
@wilkerlucio I think I found a small bug in the interaction of the mutation resolver plugin and the parallel parser
good catch, thanks!