
Question… is it possible to dynamically generate input and output properties / resolvers with pathom?


Essentially I need to discover the attributes for my properties at run time. I think I would need to query my system to discover them dynamically before I resolve the queries.


Is there a recommended way to do such a thing with pathom? From what I’ve seen it looks like properties must be coded up front, rather than discovered from your data


ok just seen there is a function resolver

👍 3
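For context, both Pathom 2 and Pathom 3 expose a plain function API for building resolvers, so they can be constructed at runtime rather than with `defresolver`. A minimal sketch with Pathom 3's `pco/resolver`, where `discover-attribute-value` is a hypothetical function standing in for whatever meta-level query your system uses:

```clojure
(require '[com.wsscode.pathom3.connect.operation :as pco])

;; Hypothetical: build one resolver per dynamically discovered attribute.
;; `attr` and `discover-attribute-value` are assumptions, not Pathom API.
(defn make-attr-resolver [attr discover-attribute-value]
  (pco/resolver
    (symbol (str "resolve-" (name attr)))   ; a generated resolver name
    {::pco/output [attr]}                   ; output is computed at build time
    (fn [_env _input]
      {attr (discover-attribute-value attr)})))
```

Resolvers built this way can then be passed to `pco/register` like any hand-written one.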

I already generated resolvers from a weird/custom REST API specification. Worked really well


yeah I think this would be sufficient… I haven’t played with pathom in any depth yet; just trying to assess at a high level whether I could make it work. My underlying data model is already open-world/RDF so in many ways it’s a natural fit; though I’d need to discover applicable attributes through meta-level queries first.


computing the graph indexes is rather expensive, so I guess it'll help if they can be generated in meta-level phase to minimize overhead in subsequent requests


I don't think they are expensive to generate, but surely you don't wanna do it once before each query; generating once at app start should be fine

👍 3

for an RDF kind of problem I guess the open world is just too big (infinite?) to be indexed at start time; instead it must be "lazily" expanded as the client asks for more. I can imagine keeping track of several (dynamically generated) child resolvers for each session (or even "garbage collecting" them somehow?)... Of course I don't know that much about rick's use case 🙂


also, maybe it'll be an interesting case for pathom's query UI where the whole graph is not known beforehand?


you can always cache the index if it gets too heavy


and distributed environments (local or remote) could get something like an expanding graph


but figuring out the index at the planning stage is not something I see as viable


for RDF, same as GraphQL and Datomic, there will be better support via dynamic resolvers; this is where you can improve things, but you still have to know all the attributes ahead of time, otherwise pathom can't tell whether to start processing or not


hmm, I guess "have to know all the attributes ahead of time" can be a frown for rick, but let's wait for his confirmation. IIRC, usually you connect to one RDF endpoint and may get references to other RDF endpoints, and the number of endpoints is kind of infinite, which is why you can't know beforehand. Anyway, this also needs confirmation


> hmm, I guess "have to know all the attributes ahead of time" can be a frown for rick

Yeah, it looks like this might be a problem for us 😞

> but generating once at app start should be fine

This won't work for us, as they'd have to be rebuilt on every data update, and that really isn't practical for a number of reasons. It would be far easier to discover them at query time.

Essentially our problem is that our platform hosts data that can be anything, with arbitrary predicates that our users can coin and supply themselves. Obviously a generic platform can't do anything bespoke without knowledge of the data, so we target the meta-level (vocabularies), which provides schemas around the shape of the data. However, the vocabularies are a level removed from the actual properties themselves.

For example, one of the main vocabularies we use is the W3C RDF Data Cube, which essentially models multidimensional statistical data into dimensions, attributes, measures and observations. An observation is essentially just a cell in an N-dimensional spreadsheet, triangulated by its dimensions. They usually include area and time, but also arbitrary other dimensions specific to the cube; for example, homelessness data might include dimensions on gender/age, but if someone loaded a trade dataset it would have imports, exports and dimension properties like "chained volume measure" etc.

We can't feasibly know all of these when we write the software, but we can rely on them being formally described in that vocabulary; so we can discover what they are for any given cube at query time, either through the dataset's Data Structure Definition (DSD) or via the fact that all dimension predicates are of type DimensionProperty. The DSD part of the cube vocabulary is essentially a meta-schema for describing cube schemas, but the descriptions live in amongst the triples/data itself.

So in order to use pathom I was hoping to be able to dynamically generate a subset of the resolvers at query time.


So essentially I’d need to provide functions for things like ::pco/input and ::pco/output instead of hard coded vectors


(pco/defresolver fetch-all-observations [{:keys [cube/id]}]
  {::pco/input  [:cube/id]
   ::pco/output #(lookup-cube-dimensions id)}
  (fetch-observations id))
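Since `::pco/input`/`::pco/output` are static data, a workaround in the same spirit is to run the meta-level lookup first and then build the resolver with the function API, so the discovered dimensions become a concrete output vector. A sketch, where `lookup-cube-dimensions` and `fetch-observations` are the hypothetical functions from the example above:

```clojure
(require '[com.wsscode.pathom3.connect.operation :as pco])

;; Sketch: one resolver per cube, with dimension attributes discovered
;; up front via the hypothetical lookup-cube-dimensions, e.g. returning
;; [:dim/area :dim/time :dim/gender].
(defn cube-observations-resolver [cube-id]
  (let [dims (lookup-cube-dimensions cube-id)]
    (pco/resolver
      (symbol (str "observations-" cube-id))
      {::pco/input  [:cube/id]
       ::pco/output dims}
      (fn [_env {:cube/keys [id]}]
        (fetch-observations id)))))
```

This moves the discovery from "once at app start" to "once per cube", which may be a workable middle ground if resolvers can be generated (and cached) lazily per request or session.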

Björn Ebbinghaus 21:11:21

Am I right that the viz full graph doesn’t show edges on a to-many relationship?


not the nested parts, it connects with the attribute that has the list, but not the items (the indirect things)


Maybe this is crazy/impossible, but can dynamic dependencies work? use case could be something like total-book-cost

pallet-cost, pallet-count, book-cost, book-count

  if present (pallet-cost, pallet-count) => (* pallet-cost pallet-count)
  if present (book-cost, book-count)     => (* book-cost book-count)
  else err

(total-book-cost {:pallet-cost 1 :pallet-count 2}) => 2

(total-book-cost {:book-cost 4 :book-count 4}) => 16


It would (maybe?) be slow (having to trace if book-cost can be calculated from other params), but it could be useful if possible.
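Setting Pathom aside for a moment, the conditional logic being described is just this plain function (a sketch of the behaviour above, not anyone's actual implementation):

```clojure
;; Plain-function sketch of the total-book-cost logic described above.
(defn total-book-cost
  [{:keys [pallet-cost pallet-count book-cost book-count]}]
  (cond
    ;; prefer the pallet pair when both are present
    (and pallet-cost pallet-count) (* pallet-cost pallet-count)
    ;; otherwise fall back to the book pair
    (and book-cost book-count)     (* book-cost book-count)
    :else (throw (ex-info "no cost/count pair present" {}))))

(total-book-cost {:pallet-cost 1 :pallet-count 2}) ;; => 2
(total-book-cost {:book-cost 4 :book-count 4})     ;; => 16
```

The Pathom question is then how to express "give me whichever of these four attributes you can reach" as resolver inputs.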


@jatkin It's possible, and AFAIK the performance is pretty reasonable


Oh, duh. I'm an idiot 🙂. I thought there was much more ceremony here.


Thanks very much for taking the time to write this out!


+pathom3 example on the same gist. Pathom 3 is way simpler 🙂


just a warning with this approach: in case the entity has access to all 4 attributes, the result becomes nondeterministic


one interesting thing that this case brings to mind is that you can also make a function to return resolvers, so another way to implement this:

parrot 6

(defn attr-total-cost-resolver [attr]
  (let [cost-kw       (keyword (str attr "-cost"))
        count-kw      (keyword (str attr "-count"))
        total-cost-kw (keyword (str "total-" attr "-cost"))
        sym           (symbol (str "total-book-cost-from-" attr))]
    [(pc/resolver sym
       {::pc/input  #{cost-kw count-kw}
        ::pc/output [total-cost-kw]}
       (fn [_ input]
         {total-cost-kw (* (cost-kw input) (count-kw input))}))
     (pc/alias-resolver total-cost-kw :total-book-cost)]))

(let [;; Relevant part: the resolvers
      registers [(attr-total-cost-resolver "book")
                 (attr-total-cost-resolver "pallet")]
      ;; pathom2 parser. Pathom3 is simpler
      parser    (p/parser {::p/plugins [(pc/connect-plugin)]})
      env       {::p/reader               [p/map-reader
                                           env-placeholder-reader-v2] ;; I backported pathom3 placeholders to pathom2
                 ::pc/indexes             (pc/register {} registers)
                 ::p/placeholder-prefixes #{">"}}]
  (parser env `[{(:>/pallet {:pallet-cost 1 :pallet-count 2})
                 [:total-book-cost]}]))


but the same warning I mentioned before still applies


I think a proper solution to it requires optional inputs; that's something planned for pathom 3, so you can ask for all the attributes (as optionals) and then make your logic inside the resolver, so you have more control
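Pathom 3 did end up shipping optional inputs via the `pco/?` marker. A sketch of what that looks like for this case (assuming the released Pathom 3 API; the decision logic itself is the same plain `cond` as before):

```clojure
(require '[com.wsscode.pathom3.connect.operation :as pco])

;; Pathom 3 sketch: mark all four attributes as optional inputs with pco/?,
;; so the resolver runs with whichever subset is reachable and decides itself.
(pco/defresolver total-book-cost
  [{:keys [pallet-cost pallet-count book-cost book-count]}]
  {::pco/input  [(pco/? :pallet-cost) (pco/? :pallet-count)
                 (pco/? :book-cost)   (pco/? :book-count)]
   ::pco/output [:total-book-cost]}
  {:total-book-cost
   (cond
     (and pallet-cost pallet-count) (* pallet-cost pallet-count)
     (and book-cost book-count)     (* book-cost book-count))})
```

Because the branching happens inside one resolver, the "all 4 attributes present" case is no longer nondeterministic: the resolver's own `cond` ordering decides.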


one way to achieve this on pathom 2 is having a resolver that doesn't require any input, then from inside of it you call the parser again, ask for the attributes (all of them), and work with that result, like:

(pc/defresolver total-book-cost [{:keys [parser] :as env} _]
  {::pc/output [:total-book-cost]}
  (let [result (parser env [:book-cost