
is there a way to limit concurrency for the parallel parser? a large query I have immediately calls a single resolver 900 times. processing in the resolver is limited to a number of threads, so most of the calls end up timing out.


you should probably write a batch resolver then
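For context, a Pathom 2 Connect batch resolver is marked with `::pc/batch? true` and then receives all pending inputs as a vector in one call instead of being invoked once per entity. A minimal sketch (the `:item/*` attributes and `fetch-items!` are hypothetical placeholders):

```clojure
(require '[com.wsscode.pathom.connect :as pc])

;; hypothetical batch fetch; takes a seq of ids, returns maps in order
(declare fetch-items!)

(pc/defresolver items-resolver [env inputs]
  {::pc/input  #{:item/id}
   ::pc/output [:item/name]
   ::pc/batch? true}
  ;; with ::pc/batch? true, `inputs` is a vector of input maps,
  ;; e.g. [{:item/id 1} {:item/id 2} ...]; return results in the same order
  (fetch-items! (mapv :item/id inputs)))
```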


it depends on what query I run though, so I'm not sure I want to make resolver adjustments for individual queries


also it doesn't feel right to control the concurrency in individual resolvers
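One workaround if you do end up capping concurrency at the resolver level is a plain JVM semaphore around the expensive work. This is a generic sketch, not a Pathom feature; `inner-resolve` and the limit of 8 are assumptions:

```clojure
(import 'java.util.concurrent.Semaphore)

(declare inner-resolve) ; stands in for the real resolver body

(def ^Semaphore gate (Semaphore. 8)) ; at most 8 concurrent calls

(defn limited-resolve [env input]
  (.acquire gate)
  (try
    (inner-resolve env input)
    (finally
      (.release gate))))
```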


another issue is that my resolver is already a batch resolver; it just gets called with 900 separate batches


I've tried increasing ::pp/key-process-timeout, but individual resolver calls seem to time out regardless
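For reference, `::pp/key-process-timeout` is passed in the env when calling a Pathom 2 parallel parser; a sketch, assuming `parser` is an already-built parallel parser:

```clojure
(require '[com.wsscode.pathom.parser :as pp])

;; extend the per-key processing timeout to 60s for this query
(parser {::pp/key-process-timeout 60000}
        [:some/query])
```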


@ak407 is this related to the changes you did? can you send some smaller example that demonstrates the problem you are facing?


@wilkerlucio I'm fairly sure it doesn't relate to my PR since it's also happening with non-batch resolvers. I'll try to create a small example, but I'll probably only get to it on the weekend.


I think I have lots of batches because of my joins: a future parser could probably optimize this too and prepare a single batch across multiple levels of joins (this probably isn't always the right thing to do for performance though)


maybe the resolver could even hint at some optimal batch size
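Absent parser support for such a hint, a batch resolver can impose its own chunk size internally. A hypothetical sketch; the chunk size of 100 and `fetch-chunk!` are assumptions:

```clojure
(declare fetch-chunk!) ; hypothetical: fetches one chunk of inputs

(defn resolve-in-chunks [inputs]
  (->> inputs
       (partition-all 100)   ; resolver-chosen "optimal" batch size
       (mapcat fetch-chunk!)
       vec))                 ; results in the same order as inputs
```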


Does Pathom support returning an InputStream from a resolver?


the values of the attributes are up to you


@kenny you would have a problem if you try to cross it over a boundary, but if you want to get it and use it in the same process, it can work


but that would just be a value that happens to be an input stream, Pathom will not do anything special with it
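Concretely, a resolver can hand back a `java.io.InputStream` like any other value; it works in-process but will not survive serialization (e.g. transit over the wire). The attribute names here are hypothetical:

```clojure
(require '[com.wsscode.pathom.connect :as pc]
         '[clojure.java.io :as io])

(pc/defresolver report-stream [_ {:report/keys [path]}]
  {::pc/input  #{:report/path}
   ::pc/output [:report/stream]}
  ;; the stream is just an opaque value to Pathom; the caller in the
  ;; same process is responsible for consuming and closing it
  {:report/stream (io/input-stream path)})
```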