Björn Ebbinghaus 16:05:41

@wilkerlucio Finally looking into migrating my app to Pathom 3. Playing around with Pathom Viz, I noticed something interesting. I have the following query with a placeholder:

[{[:decide.models.process/slug "wie-digitalisieren-wir-die-hhu"]
I expected the query with the placeholder to just be "merged" into the parent query and then spread out again later, without any real overhead… But this doesn't seem to be the case? Resolvers are still called, but cached? Is this the correct behaviour? Is there a way to optimize this?
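For context, a placeholder groups sibling joins under `:>/`-prefixed keys that all read from the same parent entity. A minimal hypothetical example (the proposal attributes are made up for illustration; the original query in the log is truncated):

```clojure
[{[:decide.models.process/slug "wie-digitalisieren-wir-die-hhu"]
  [{:>/title-only [::proposal/title]}
   {:>/full      [::proposal/title ::proposal/body]}]}]
```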

Björn Ebbinghaus 16:05:16

From this query:

[{[:decide.models.process/slug "wie-digitalisieren-wir-die-hhu"]
With a resolver like:
(defresolver resolve-proposal ...
  {::pc/output [::proposal/id ::proposal/body ::proposal/title]}
  {::proposal/id 1 
   ::proposal/body "Body" 
   ::proposal/title "title"})
I expected the corresponding resolver to be called only once and the result to be shaped accordingly later.

Björn Ebbinghaus 16:05:27

But the trace for this is really long. Sure, the follow-up calls to the resolver are fast, because the resolver before already provided the values, but still. I can't stop thinking that this is something a planner could optimize.

Björn Ebbinghaus 16:05:06

Hm… Looking at the snapshot view, the planner already optimizes this by merging sibling resolver calls. Then what am I seeing here?

Björn Ebbinghaus 16:05:30

Every attribute is resolved by the same resolver:

[{[:decide.models.process/slug "wie-digitalisieren-wir-die-hhu"]
Takes ~220ms
[{[:decide.models.process/slug "wie-digitalisieren-wir-die-hhu"]
Takes ~100ms


hello Björn, that seems like a lot of extra overhead, do you have Guardrails on?


try turning it off, it makes a ton of difference for these small processes 🙂

Björn Ebbinghaus 16:05:09

That was a thing I also wanted to look into... Is there a way to disable Guardrails for certain namespaces?


but talking more about how placeholders work: it's not just a merge, because the "merged" part is only the first depth layer. Placeholders might need further processing to run each item, and that's why they still need to be processed (and not just merged)


about the localization, I remember talking to @U0CKQ19AQ about it in the past, but I don't remember if it was implemented

Björn Ebbinghaus 16:05:32

But isn't discerning how much processing a node needs the job of the planner? I don't quite understand what you mean by "further processing"? Do you mean things like running a resolver plugin or something?


I mean nested requirements, for example:

(p.eql/process env
  [{:>/a [:user/a]}
   {:>/b [:user/b]}
   {:>/c [{:user/c [{:things [:here]}]}]}
   {:>/d [:user/d]}
   {:>/e [:user/e]}])


I can take a deeper look; it's been a while since I touched this part of the code, so that's what I'm remembering about it now. Without Guardrails, the overhead is much less problematic

Björn Ebbinghaus 16:05:16

But this is something the planner can know beforehand, can't it? What would be the difference to:

     {:things [:here]}


it does, but the planner only looks in depths up to a point. It does check the nesting to make sure it will be possible to run (stopping early if it sees it's impossible according to the indexes), but each depth level gets its own planning, because the plan depends on the output of the nested item (which may or may not fulfill the complete list of what it exposes in the output, and in case it doesn't, the plan may turn out different)


so any time you have to process an entity down, Pathom will calculate the plan again, which in most cases shouldn't be a problem, especially if you use a persistent cache for the planner (which is always recommended in prod environments)
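A persistent planner cache is just an atom placed in the env under the planner's `plan-cache*` key (the same key that shows up in the examples later in this thread). A sketch, assuming the usual Pathom 3 aliases and a hypothetical `answer` resolver just so there is something to register:

```clojure
(require '[com.wsscode.pathom3.connect.indexes :as pci]
         '[com.wsscode.pathom3.connect.operation :as pco])

(pco/defresolver answer []
  {:answer 42})

;; defonce so the cache survives REPL reloads
(defonce plan-cache* (atom {}))

(def env
  (-> (pci/register [answer])
      ;; plans computed once are reused across subsequent calls
      (assoc :com.wsscode.pathom3.connect.planner/plan-cache* plan-cache*)))
```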

Björn Ebbinghaus 16:05:20

Without Guardrails, the overhead is indeed smaller, but still +10-20%

Björn Ebbinghaus 17:05:01

Even if the attribute is the same:

[{[:decide.models.process/slug "wie-digitalisieren-wir-die-hhu"]
It is the same overhead.


yeah, that's correct. I would be happy to make it better, but at this moment I don't see any easy ways to improve it; this 1ms extra per entity gets hard to remove without risking the correctness of the result

Björn Ebbinghaus 17:05:15

What I don't understand is what the snapshot log of the planner is telling me then… It optimizes by "merging the sibling nodes together"… What does that mean if the resolver is still called x times..?


this is the plan for a single entity. It's not that it has 5 calls in the same plan; the plan optimizes and has 1, but you have 5 different plans (one for each placeholder, plus the parent one)


when looking at the trace, each plan must have a single node in this case


the snapshots are telling you the process of building the plan; in this case it does merge all the placeholders into a single graph to try to optimize the deps between them


although, thinking here, in this case maybe the plan for the children should be empty? I gotta double-check on that

Björn Ebbinghaus 17:05:38

Another thought that I had in the past: it would be great if placeholders could be a concern of the client, wouldn't it? It would reduce redundancy in the query and the result. But I suppose Transit would deduplicate that anyway?


ah, reading my code here: the reason it doesn't move the data down is that there may be different parameters at different placeholder levels, and in that case we can't re-use the data from the parent

Björn Ebbinghaus 17:05:25

But the planner knows that there aren't any parameters, does it not?


there is an optimization opportunity here: it needs to ensure that the placeholder doesn't have any parameters in the query that differ from the parent, and in that case it can forward the data down


I just did a test here, and making this optimization doesn't seem to improve the perf a lot


it may in some more complicated cases, but it also requires adding the cost of the param check

Björn Ebbinghaus 17:05:03

You may be right. I was shocked at first that the placeholder added +50-100% overhead. But with Guardrails disabled it is much more reasonable, like a +7% overhead.


with Guardrails it has to keep checking the env at a lot of points, hence the insane overhead 😛


I'm interested to hear if you find this being a real issue. So far it seems to be fine, but I'm open to learning about any case where the overhead is prohibitive


because it really depends on what else is going on. In general there is a lot of IO that eclipses these smaller processing overheads, but if every resolver is super fast, then it starts to show up more

Björn Ebbinghaus 18:05:53

I will definitely come back to you if I notice something interesting. Btw, is there a way to get the current node's "expected output" as EQL? For a resolver that is practically just a d/pull?


look for (-> env ::pcp/node ::pcp/expects)


it uses the shape descriptor format, there are helpers to transform from that to EQL in the ns com.wsscode.pathom3.format.shape-descriptor, the fn shape-descriptor->query
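Putting the two tips together, a resolver backed by a Datomic-style pull could ask the plan for just the attributes the current node expects. A sketch only: the `db` value, the `::proposal` namespace, and the attribute names are hypothetical, and `d/pull` stands in for whatever pull API is in use:

```clojure
(require '[com.wsscode.pathom3.connect.operation :as pco]
         '[com.wsscode.pathom3.format.shape-descriptor :as pfsd])

(pco/defresolver proposal-pull [env {::proposal/keys [id]}]
  {::pco/output [::proposal/title ::proposal/body]}
  (let [eql (-> env
                :com.wsscode.pathom3.connect.planner/node
                :com.wsscode.pathom3.connect.planner/expects
                ;; shape descriptor -> EQL, as described above
                pfsd/shape-descriptor->query)]
    ;; pull only what this node is expected to provide
    (d/pull db eql [::proposal/id id])))
```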

Björn Ebbinghaus 18:05:40

Ah. Yes, thank you.


@U4VT24ZM3 have you tried using Guardrails async? Maybe that will help with the overhead

Björn Ebbinghaus 19:05:16

I already use Guardrails async:

{:defn-macro nil
 :expound {:show-valid-values? true
           :print-specs? true
           :theme :figwheel-theme}
 :async? true
 :throw? false
 :emit-spec? true}


what does :emit-spec? do?

Björn Ebbinghaus 19:05:48

I think it is not used anymore. In the past, :emit-spec? false would cause >defn to not emit an fdef spec.

Björn Ebbinghaus 20:05:18

@wilkerlucio In the following example, you could optimize the second AND away, couldn't you?

Björn Ebbinghaus 20:05:02

In which case, you could optimize the second resolve-proposal away as well.


it could, it's just not something that's been looked at at this point. The current state is more about doing the optimizations that are safe; from here it's a matter of finding those patterns and adding more optimizations


the hardest ones are around OR nodes, those get very few optimizations at this point


but making those generic is not a trivial job, so they have to be made with a lot of care to avoid messing up the execution


so currently it just lets those happen, also because their cost isn't high, given that it will skip the repeated nodes at execution time


(but it's a problem if you are making them dynamic, like using the expects thing)


(this approach is due to my experience with the first version, where I was trying to get optimizations to the max, only to then realize it did a lot of plannings that were just wrong; this made me more cautious about them)


although, to fix the problem with dynamic calls, there is something you can do on your end: you can look at the graph, find all the instances of the same node type, merge their expects, and use that to make the call

Björn Ebbinghaus 20:05:27

But shouldn't merging ANDs always be possible?


not always, because they may have follow-up calls with different sets of dependencies. In the case you sent it's clear, but sometimes the AND node itself has some nodes after it, or the instances may also have follow-up calls, and merging those is trickier


but this could be an interesting one: in a case where none of the AND nodes have a follow-up node (black arrows), they are mergeable


these kinds of changes can also affect parallelism when doing parallel processing, and merging them might result in slower processes (because in this case the merge will make the AND node require more things before calling the follow-up node, delaying that call)

Björn Ebbinghaus 20:05:33

Can you provide an example where this doesn't work? In my understanding, an AND node would never have a follow-up node, would it?


it does, when you have a node that depends on multiple attributes


(pco/defresolver multi-dep [{:keys [a b c]}]
  {:d "val"})

(defonce plan-cache (atom {}))

(def env
  (-> {:com.wsscode.pathom3.connect.planner/plan-cache* plan-cache}
      (pci/register
        [(pbir/constantly-resolver :a 1)
         (pbir/constantly-resolver :b 2)
         (pbir/constantly-resolver :c 3)
         multi-dep])))

(p.eql/process env [:d])

one a bit more complex


(pco/defresolver multi-dep [{:keys [a b c]}]
  {:d "val"})

(pco/defresolver multi-dep2 [{:keys [d e]}]
  {:f "val"})

(defonce plan-cache (atom {}))

(def env
  (-> {:com.wsscode.pathom3.connect.planner/plan-cache* plan-cache}
      (pci/register
        [(pbir/constantly-resolver :a 1)
         (pbir/constantly-resolver :b 2)
         (pbir/constantly-resolver :c 3)
         (pbir/constantly-resolver :e 5)
         multi-dep
         multi-dep2])))

(p.eql/process env [:f])


those 2 AND nodes can't be merged (in the last example)


but as I said, I think if the inner AND doesn't have a next node, then it should be fine to merge


can you send me the example that made your graph? I can take a look at optimizing that scenario (just not the OR part)

Björn Ebbinghaus 20:05:26

This feels weird to me... I would have expected something like:


there is a kind of inversion of the order with the run-next; in the case of AND it's like: get all the deps, THEN run this


orange arrows go first (branch nodes) and black ones go after them (run-next)


> Is there a way to disable Guardrails for certain namespaces?
There was this related issue, but I don't think it was implemented

Björn Ebbinghaus 20:05:35

Hm… I think I am beginning to understand … I thought of it as a dependency graph, but it is more like a "topological layer graph"?


I see it as an execution graph, but yeah, the point is to optimize for the runner; since the plans can be cached, the performance of running depends on how easy it is to traverse


so what I had in mind is to try to make it as simple as possible. In this case, the runner starts at the root, and there is code for each type of node (resolver, AND, or OR nodes)

Björn Ebbinghaus 20:05:47

Yes, execution graph is the right term


one nice property of this, for the parallel runner, is that it can always parallelize the branches of every AND node

Björn Ebbinghaus 20:05:24

In your example it could parallelize a, b, c, and e together.


also, it only waits for a, b, and c to start realizing d; maybe it realizes d even before e is done

Björn Ebbinghaus 20:05:46

Ah.. Sure.. Orange arrows can run in parallel...


in case you are going to try the parallel runner, remember it needs async things to be able to parallelize, otherwise it still goes serial. This is actually good, because it can retain some of the speed of serial processing for things that are fast


(in other words, to give Pathom a chance to run things in parallel, a resolver must return a promise, otherwise Pathom will still wait for that resolver to be done before moving forward)
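For example, with promesa (the promise library Pathom 3 uses for its async support), a resolver that returns a promise of its output map gives the parallel runner something it can interleave. A sketch; `fetch-user-name` is a hypothetical blocking call:

```clojure
(require '[promesa.core :as p]
         '[com.wsscode.pathom3.connect.operation :as pco])

(pco/defresolver user-name [{:user/keys [id]}]
  {::pco/output [:user/name]}
  ;; returning a promise (instead of a plain map) lets the parallel
  ;; runner keep working on other nodes while this one is in flight
  (p/future {:user/name (fetch-user-name id)}))
```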

Björn Ebbinghaus 20:05:10

But wouldn't that result in "too much processing" in parallel? I mean, how do you parallelize an OR node?

Björn Ebbinghaus 20:05:37

Or, in this case, resolve-proposal:


it doesn't at this point; it tries each one serially. But it could be possible to have a different strategy for OR: it could run all in parallel and pick the first that comes back


what strategy works better will depend on the usage case

Björn Ebbinghaus 21:05:41

(def graph
  {:a #{}
   :b #{}
   :c #{}
   :d #{:a :b :c}
   :e #{}
   :f #{:e :d}})

;; repeatedly peel off the nodes with no remaining dependencies
;; (update-vals requires Clojure 1.11+)
(defn topological-layering [graph]
  (loop [graph  graph
         layers []]
    (if (empty? graph)
      layers
      (let [no-deps-nodes         (map key (filter (comp empty? val) graph))
            no-deps-removed-graph (apply dissoc graph no-deps-nodes)
            new-graph             (update-vals no-deps-removed-graph #(apply disj % no-deps-nodes))]
        (recur new-graph (conj layers no-deps-nodes))))))

(topological-layering graph)

; => [(:a :b :c :e) (:d) (:f)]

Björn Ebbinghaus 21:05:29

Hm… Maybe this is too naive…


if you'd like to learn it more in depth, I invite you to read through the planner namespace; everything related to planning lives there


the current approach is to first "expand the graph", doing it simply, and optimize afterwards


there you will see things like considering optionals, nested input requirements, dynamic resolvers... lots of fun XD

Björn Ebbinghaus 21:05:29

Have you considered sending that problem to advent of code or some competitive programming event? 😅


I find it quite hard to explain the problem in detail, haha

Björn Ebbinghaus 21:05:31

Well, the overall problem is nothing new and not "easy" (still NP-hard, I think), but you have to fight a lot of nuances as well…

Björn Ebbinghaus 21:05:26

Maybe I can convince my colleagues from the chair for operating systems to give that problem to some students looking for a master's thesis topic…


haha, awesome, I would be down for a meeting to give more details on the context and requirements of it

Björn Ebbinghaus 23:05:33

This is a brain melter… I should really go to sleep…


forgot to mention, the planner docs page can also help with understanding:

Björn Ebbinghaus 21:05:08

Thanks, I read it Friday night, but I had to get away from it before it consumed my whole weekend. 😄


that doc is still a bit superficial imo, but it's good for the basics


feedback is very welcome to improve it

Björn Ebbinghaus 22:05:37

Not consumed by reading the docs, but by trying to „solve the problem“. I am bad at pulling myself away from an interesting „puzzle“.


hehe, so fun, right?! I think the Pathom planner is the most complex thing I have done in my life as a programmer. Even though I have spent dozens of hours on it, when I need to come back to debug it I still have to spend a good amount of time remembering how each part works


hi, I find myself wanting to write a thin HTTP wrapper around a Pathom parser so that consumers can have a familiar HTTP API instead of having to learn Pathom. Is anyone aware of any reference impls of something like this?

Björn Ebbinghaus 08:05:38

Do you mean REST API?


well, I suppose it could be RESTful, but I said HTTP since I (still) don't understand what REST compliance is 🥲


I just mean an HTTP API that could have, say, one endpoint per mutation, and one endpoint for every distinct pathom query you want to expose


I think it's very obvious what to do here at a basic level, just wanted to check if there are any existing patterns for it to observe


Considering that the graph queries could have infinite variation (if there is any recursion), nothing automatic would work. So it would pretty much be writing every endpoint by hand, but that shouldn't take lots of code.


But yeah, I suppose some kind of automation could work with constraints: something like no nesting, every resolver gets an endpoint, and the generator makes a way to choose which parts of the output are included, plus any possible parameters


It just isn't quite as powerful as making custom decisions about what level of nesting is useful and reasonable


one way I think it's interesting to approach this is to think of the definition of each HTTP endpoint as being just about capturing inputs (from query params, path params, etc...) and building a query. This should be easy enough to abstract away; then your endpoint definitions would look something like this:

(defn user-by-id-handler [{:keys [pathom-boundary-interface] :as request}]
  (pathom-boundary-interface
    {:pathom/entity {:user/id (-> request :path-params :user-id)}
     :pathom/eql    [:user/name]}))


thanks all for the ideas -- yeah, I think it's sufficient in my case at least to just assume the query will be static for a given endpoint, which avoids some tricky problems


even though consumers of this HTTP API won't have access to Pathom's full expressivity, it's good to know that maintenance of the API should be very straightforward (just a matter of editing the EQL query and param-harvesting code)
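For this wrapper pattern, the `pathom-boundary-interface` function can come from Pathom 3's `p.eql/boundary-interface`, which accepts exactly that `:pathom/entity` + `:pathom/eql` request shape. A sketch, assuming `env` is an already-registered Pathom environment:

```clojure
(require '[com.wsscode.pathom3.interface.eql :as p.eql])

;; build once, share across all HTTP handlers
(def pathom (p.eql/boundary-interface env))

;; each handler then only assembles a request map
(pathom {:pathom/entity {:user/id 1}
         :pathom/eql    [:user/name]})
```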


has anyone thought about supervision with Pathom's runners? For example, the parallel runner doesn't seem to cancel all running promises after one throws


not yet, but I'm interested in the topic, I think there is still a lot to improve in the parallel runner


and you're right, no cancelling happens at this point, but I think it wouldn't be too hard to make that work: we could use some atom in the env with a boolean that starts off, and if we want to stop the whole thing, set it to true, and then the node runner code could halt when it sees it's true


the cancellation of things already running might be a bit trickier; we would need generic support at the promises level to allow us to request a cancel of the currently running operations


for now, if you'd like to experiment, I think you can write a plugin on wrap-resolve that does what I just described
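A sketch of that halt-flag idea as a `wrap-resolve` plugin. The `:halt?*` key is made up for this example; the caller would put a boolean atom under it in the env and flip it to stop further resolver calls (already-running promises would still complete, as discussed above):

```clojure
(require '[com.wsscode.pathom3.plugin :as p.plugin])

(def halt-on-flag
  {:com.wsscode.pathom3.plugin/id `halt-on-flag
   :com.wsscode.pathom3.connect.runner/wrap-resolve
   (fn [resolve]
     (fn [env input]
       ;; bail out before invoking the resolver if the flag was set
       (if (some-> (:halt?* env) deref)
         (throw (ex-info "processing halted" {:halted? true}))
         (resolve env input))))})

;; registered like any other plugin:
;; (p.plugin/register env halt-on-flag)
```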


I've been using missionary and pathom together for this in one project
