
I have a list of things with references to different entities, but they are keyed with different keys. Is there a way in pathom3/eql to match any key and do a join when it's possible?

;;; id->thing index

{1 {:name "A" :fk1 {:id 2}}
 2 {:name "A" :fk2 {:id 1}}
 3 {:name "A" :fk3 {:id 4}}
 4 {:name "b" :fk4 {:id 3}}}

;;; data i'm querying
{:things [{:name "A" :fk1 {:id 2}}
          {:name "A" :fk2 {:id 1}}
          {:name "A" :fk3 {:id 4}}
          {:name "b" :fk4 {:id 3}}]}

;;; I want a query something like this, look up the name for each entity pointed to by
;;; the fk
'{:things [{'* [:id :name]}]}


not in an automatic way like that, because Pathom is designed for laziness, so a generic join is not a thing. But you can make it explicit if you are able to know all the possible foreign keys ahead of time. In that case you can make a common name that all the foreign options converge to, something like:

;; requires: [com.wsscode.pathom3.connect.operation :as pco]
;;           [com.wsscode.pathom3.connect.built-in.resolvers :as pbir]

(defn alias-join
  "Builds a resolver that exposes source-name's value under target-name,
  declaring the sub-query so Pathom can plan nested lookups."
  [source-name target-name]
  (pco/resolver (pbir/attr-alias-resolver-name source-name target-name "joined")
    {::pco/input  [source-name]
     ::pco/output [{target-name [:id :name]}]}
    (fn [_ input]
      (let [data (get input source-name)]
        {target-name data}))))

;; register one alias resolver per possible foreign key
[(alias-join :fk1 :generic-fk)
 (alias-join :fk2 :generic-fk)
 (alias-join :fk3 :generic-fk)
 (alias-join :fk4 :generic-fk)]

[{:things [{:generic-fk [:id :name]}]}]
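Putting the pieces together, here's a sketch of a full setup (the requires, the static table resolver, and the sample call are my additions for illustration, assuming the alias-join helper above compiles as written):

(require '[com.wsscode.pathom3.connect.built-in.resolvers :as pbir]
         '[com.wsscode.pathom3.connect.indexes :as pci]
         '[com.wsscode.pathom3.interface.eql :as p.eql])

;; expose the id->thing index so :name can be resolved from an :id
(def things-by-id
  (pbir/static-table-resolver :id
    {1 {:name "A"} 2 {:name "A"} 3 {:name "A"} 4 {:name "b"}}))

(def env
  (pci/register
    [things-by-id
     (alias-join :fk1 :generic-fk)
     (alias-join :fk2 :generic-fk)
     (alias-join :f3 :generic-fk)
     (alias-join :fk4 :generic-fk)]))

;; every thing's fk, whatever its key, now converges on :generic-fk
(p.eql/process env
  {:things [{:name "A" :fk1 {:id 2}}
            {:name "b" :fk4 {:id 3}}]}
  [{:things [:name {:generic-fk [:id :name]}]}])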


note that pathom needs to know the sub-query to plan properly for nested queries (in case you have those)
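For example (my illustration of the point above): with an explicit sub-query Pathom can plan the nested :name lookup, while a bare join key only returns whatever the resolver handed back:

[{:things [{:generic-fk [:id :name]}]}]  ; nested attributes planned and resolved
[{:things [:generic-fk]}]                ; no sub-query, so nothing nested is resolved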


Awesome, thanks for the help @U066U8JQJ / @U2J4FRT2T


@jjttjj one way to do that is to create one alias, pc/alias :fk1 :fk-any, for each fk


I've been getting deep into pathom for a few days now and I'm still getting a sense for where to draw the line between pathom and a (datomic-esque) database, particularly when it comes to exploring data. For data you already have, pathom isn't quite ideal (or intended) for arbitrary exploration in the way that a full query language is. But if what you have is a REST api, or especially multiple related REST apis, pathom is pretty great for exploration compared to just using REST. Has anyone used anything like a plugin that saves all the data you fetch with pathom to a db, which could then be queried more flexibly? Or even just a high-level workflow with functions (fetch [eql]) + (q [datalog]), alternating between the two in a repl session? I'm just trying to get a better sense in general for how pathom and a db relate. My quick google searches seem to show that libraries putting pathom on top of a db to enable eql queries are more common than libraries using pathom as a way to feed data to a db. But maybe that's just because the latter doesn't really need a library?


something i've done is fetch a tree of data via pathom and then put it in a datascript db as a client side cache


this worked ok, but datascript is actually quite slow for a client side cache, which led me to build pyramid


pyramid doesn't really have good support (it's very alpha) for datalog style queries, but you can for example fetch some data from a pathom endpoint, put the result in a pyramid db, and then run the same EQL query on the pyramid db to avoid another round trip


you can also treat it like a normal map and just (get-in db ,,,) whatever you need
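To sketch that workflow (these pyramid.core calls are from my reading of the library's README and may not match the current API exactly — treat them as an assumption and check the docs):

(require '[pyramid.core :as pyr])

;; normalize a tree fetched from a pathom endpoint into a pyramid db
(def db
  (pyr/add (pyr/db)
           {:things [{:id 2 :name "A"}
                     {:id 1 :name "A"}]}))

;; re-run the same EQL query locally to avoid another round trip
(pyr/pull db [{:things [:id :name]}])

;; or just treat the db as a plain map of normalized entities
(get-in db [:id 2 :name])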


if you like datalog tho, i would suggest datascript or perhaps asami


great ask @jjttjj, I personally don't think that line is very clear, and I love to hear people finding interesting ways to use Pathom (like @U3Y18N0UC using it for REPL connections in Clover). I personally see pathom as this "big controller" that does coordination for you. In this view it makes sense to wrap things around pathom instead of the other way around, because that gives you the flexibility to change the implementation of your request while keeping a consistent and evolvable interface (via the attribute names and their evolution). In this sense the "need" (which is the shape) is a completely abstract definition, which you could load from a DB, from some mock data, or even from generators. (I have actually done that: a pathom that just generates random data based on the specs of the attributes, to use while rendering Fulcro components in test mode, so we can try many different data variations without writing any specific one.)


sometimes you need more performance on queries (like in the UI), and in that case you may wanna dump the data somewhere else, like @U4YGF4NGM pointed out with Pyramid. I also see Fulcro as a variation of the same idea, having a local db separate from the "external sources"; I think Fulcro has a great full-stack story there.


Sorry for the late reply, but thanks for the responses! I've been using datascript as a default db, and have dabbled with all the other datalog things to varying degrees. The "big controller" thing is something I'm slowly grasping. On the surface it seems like pathom is something like a graphql replacement for specifying exactly what is needed for a page over a wire. It's really cool that it serves that role AND works as a general computation engine. I've considered doing things like:

(process {::start time1
          ::end   time2}
         {::posts [::title ::slug]})

to grab any entities between two times. Still exploring and figuring things out, but it's a fun process, and the way resolvers are built up feels useful even if ultimately the same functionality is moved elsewhere. As you hint at, something about this style seems particularly nice when it comes to code "evolving"
