
I have some query plans that take upwards of 6 seconds to compute. I'm sure that I could simplify my resolvers to improve this, but I don't have time to dig into it right now. In the short term, I'd like to use Redis for the plan cache. Of course, I don't want to use stale plans from the cache, so I'm thinking that I should incorporate one of the Pathom indices into the cache key. I think :com.wsscode.pathom3.connect.indexes/index-oir is a good index to use. Thoughts?


index-oir is a good choice, and to make caching simpler you can use (hash index-oir), so you have a shorter value to use as the cache key
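
A minimal sketch of that idea, assuming the Carmine Redis client; the key format, `redis-conn`, and the helper name are hypothetical, not Pathom API:

```clojure
(ns plan-cache.example
  (:require [taoensso.carmine :as car]
            [com.wsscode.pathom3.connect.indexes :as pci]))

;; Sketch only: combines the hash of index-oir with the hash of the query,
;; so cached plans are invalidated whenever the resolver graph changes.
(defn plan-cache-key
  [env query]
  (str "pathom:plan:" (hash (::pci/index-oir env)) ":" (hash query)))

(comment
  ;; Usage with Carmine (`redis-conn` is your connection spec):
  (car/wcar redis-conn
    (car/get (plan-cache-key env query))))
```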


Out of curiosity: is six seconds to calculate a plan concerning? I plan to investigate this more deeply, but it's going to take a while before I get around to it.


it really depends on how large your query is and how complex the attribute depth is


I think 6 seconds is really a lot, but if there is a lot of spreading and a lot of OR nodes, then things can get nasty


if after inspection you see some way in which Pathom could do it better, we can work it out


in my experiments, planning mostly finishes in around 3 to 10 ms


My plan contains a lot of OR nodes, especially early in the plan, so that might compound the problem. It will be a few weeks before I can get into this in depth.


OR paths are the trickiest thing to handle. I hope we find ways to improve them, but if you can reduce them in your modeling, that surely helps.


fyi - I found some time to test the hypothesis that OR nodes cause planning to take a long time. In short, yes. With just a little hacking, I can get planning down from 6000 ms to about 20 ms.


My problem is that my app has about 25 resolvers that provide a public API that the client uses. These resolvers duplicate the output of internal resolvers, so the plan that Pathom produces is very large.


Because I know these resolvers are only used at the input edge of the plan, I can break query processing into two phases: the first phase is a Pathom query that converts the public API to the internal API, and the second phase is a Pathom query consisting of only the internal resolvers.
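
Roughly like this; a sketch only, where the resolver collections and the two-env split are hypothetical names for what's described above:

```clojure
(ns two-phase.example
  (:require [com.wsscode.pathom3.connect.indexes :as pci]
            [com.wsscode.pathom3.interface.eql :as p.eql]))

;; Hypothetical resolver collections: the ~25 public-API resolvers vs.
;; the internal resolvers they delegate to.
(def public-env   (pci/register public-api-resolvers))
(def internal-env (pci/register internal-resolvers))

(defn process-query
  "Phase 1: resolve the public attributes into their internal equivalents.
   Phase 2: run the real query against the internal resolvers only, seeded
   with the phase-1 result. Each env is small, so each plan stays small."
  [public-query internal-query]
  (let [internal-entity (p.eql/process public-env public-query)]
    (p.eql/process internal-env internal-entity internal-query)))
```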


Much, much faster this way


Thanks for pointing me in the right direction!


@wilkerlucio I think I found a small bug in the interaction of the mutation resolver plugin and the parallel parser


good catch, thanks!