
Hi, I have a working resolver and I'm trying to transform it to work with batches. It all works as expected, but sometimes my resolver gets called with nil as the input. This can lead to :com.wsscode.pathom.core/not-found appearing in the results (for entities where the required identifier for my resolver is already present). It seems to happen randomly and sometimes I get the expected full result, which makes this pretty hard to debug :) Is there maybe something I have to return from the resolver to signify that the input is inadequate? The resolver is executing multiple batches in a single query if that helps. Also I'm using the parallel-parser, so this might be some kind of race condition. I wanted to ask here before I try to work on reproducing with a minimal example. Any ideas?
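For context, a minimal sketch of the kind of batch resolver being described, assuming Pathom 2 connect; the names `:thing/id`, `:thing/name`, and `fetch-things-by-ids!` are hypothetical, and the nil-guard shows one way to tolerate nil entries in the batch input:

```clojure
;; Minimal sketch of a Pathom 2 batch resolver, guarding against nil inputs.
;; :thing/id, :thing/name and fetch-things-by-ids! are hypothetical names.
(require '[com.wsscode.pathom.connect :as pc])

(defn fetch-things-by-ids!
  "Hypothetical API call; returns a map of id -> {:thing/name ...}."
  [ids]
  (into {} (map (fn [id] [id {:thing/name (str "thing-" id)}])) ids))

(pc/defresolver thing-resolver [env inputs]
  {::pc/input  #{:thing/id}
   ::pc/output [:thing/name]
   ::pc/batch? true}
  (if (map? inputs)
    ;; non-batched call: a single input map
    (get (fetch-things-by-ids! [(:thing/id inputs)]) (:thing/id inputs))
    ;; batched call: drop nil entries before hitting the API, then
    ;; align the results back to the original input order
    (let [ids   (into [] (comp (remove nil?) (map :thing/id)) inputs)
          by-id (fetch-things-by-ids! ids)]
      (mapv #(get by-id (:thing/id %)) inputs))))
```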


Creating a reproducible example first is always helpful, even for your own understanding 🙂


I'll try, though I may be missing something obvious. I just hope it's not a race condition, because it might be tricky to reproduce with a small example.


@ak407 you are using :>/things ?


@souenzzo: I'm not sure what you mean, so I'm probably not 🙂


(require '[com.wsscode.pathom.parser :as pp]) — try adding ::pp/max-key-iterations to your env. The default value is 5; you can try maybe 20. I'm not sure if it goes into the (p/parser {...}) arg or in the (parser {...} [...]) context, so you can put it in both 🙂
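A sketch of the "put it in both" suggestion, assuming a Pathom 2 parallel-parser setup; the reader chain and registry here are placeholders:

```clojure
;; Sketch: supplying ::pp/max-key-iterations both when building the parser
;; and per request, since it's unclear which place Pathom reads it from.
(require '[com.wsscode.pathom.core :as p]
         '[com.wsscode.pathom.connect :as pc]
         '[com.wsscode.pathom.parser :as pp])

(def parser
  (p/parallel-parser
    {::p/env     {::p/reader              [p/map-reader
                                           pc/parallel-reader
                                           pc/open-ident-reader]
                  ::pp/max-key-iterations 20}   ; raised from the default 5
     ::p/plugins [(pc/connect-plugin {::pc/register []})]}))

;; ...and again in the per-request env, to cover both places:
(parser {::pp/max-key-iterations 20} [:some/query])
```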


thanks, I'll try it and get back to you in a minute


it doesn't seem to have an effect, the resolver is still called a varying number of times with nil as the input, leading to a number of "not-found" entries in the result


I've also tried reproducing with a minimal example, but it seems to work fine


I'll try introducing random delays into the resolvers to simulate an actual api call


no luck so far 😕


I think it has something to do with the cache in the parallel-batch function (connect.cljc)


it seems to work fine when I don't get cache hits


I've tried disabling caching on the resolver, but that kills the batch functionality too


fun thing is: there shouldn't be any cache hits for this query as all batched items are unique


false alarm, it just randomly worked for several queries. not sure if it has anything to do with the cache or not


I think I'm getting closer


so it looks like parallel-batch is sometimes called twice for the same batch. (remove #(p/cache-contains? env [resolver-sym (second %) params])) seems to remove all entries from one of the batches (which I assume to be the second one), hence the resolver is called with nil. if I remove that line from parallel-batch the resolver is called twice with the same arguments, but at least I consistently get results back.
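A generic sketch of the distinction being made here (not Pathom's actual internals): excluding cached inputs shrinks the batch and can leave entries unanswered, whereas reading the cached results and only fetching the misses keeps every input accounted for:

```clojure
;; Generic cache-aware batching sketch; not Pathom's actual code.
;; cache is an atom of {input result}; fetch! takes a coll of inputs
;; and returns results in the same order.
(defn batch-with-cache [cache fetch! inputs]
  (let [misses  (remove #(contains? @cache %) inputs)
        fetched (zipmap misses (when (seq misses) (fetch! misses)))]
    (swap! cache merge fetched)
    ;; every input gets an answer: cached hits are read, not discarded
    (mapv #(get @cache %) inputs)))
```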


@ak407 good catch, I guess you are right: it shouldn't exclude the cached ones, just read from the cache. I can see that being a miss. Do you think you can make a minimal example and open an issue about it?


@wilkerlucio: I'm having some trouble reproducing it with a minimal example, but will try to come up with something tomorrow. thanks everyone for helping out

λustin f(n) 20:01:11

In the pathom viz tools when you have multiple inputs in a resolver, the 'input' key is considered the set of the multiple inputs. Makes sense, but when you then try to inspect the full graph the resolvers with multiple inputs are not linked in any way to their inputs. Are there any plans to add something to the graph visualization to show that multiple-input resolvers are related to their inputs somehow?