
Sorry if this has been answered before, or if I missed this somewhere in the documentation. I'm working with Fulcro and Datomic. I worked with Om.Next way back when and implemented my own server-side API. This API allowed queries like [:foobar/by-ident any-ident-with-a-unique-constraint], such as [:foobar/by-ident ""] or [:foobar/by-ident 123456789], where the quoted string could be the email address of a person entity with a unique constraint, 123456789 could be the tax id number of a company entity with a unique constraint, and :foobar is just an arbitrary namespace name. I think I see how I could implement something similar with Fulcro and Pathom; however, I'm confused by the use of output on defresolver. I understand the desire to whitelist return values, but let's assume I have another way to limit what can be returned. Can I simply omit output if I want to allow the resolver to return anything? Like this?

(defn q-by-ident
  [db ident & [query]]
  (datomic/pull db (or query '[*]) ident))

(pc/defresolver by-ident-resolver
  [env {:foobar/keys [by-ident]}]
  {::connect/input #{:foobar/by-ident}}
  (q-by-ident (:db env) by-ident))


@tvaughan No, Connect always needs an output. It will only call a resolver when the query requires a key that the resolver can output.
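To make that concrete, here is a minimal sketch of the resolver from the question with ::pc/output added so Connect knows when to call it. The output keys and the (:db env) lookup are assumptions for illustration:

```clojure
;; Sketch: a resolver declares both its input and its output.
;; Connect only invokes it when the query asks for one of the
;; declared output keys. Key names here are illustrative.
(pc/defresolver by-ident-resolver
  [env {:foobar/keys [by-ident]}]
  {::pc/input  #{:foobar/by-ident}
   ::pc/output [:person/name :person/email :company/tax-id]}
  (q-by-ident (:db env) by-ident))
```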


I see. Thank you for the quick response @souenzzo


@tvaughan Your resolver doesn't need to always output all the keys from ::pc/output. The meaning of ::pc/output is "this resolver may return these keys". If the resolver doesn't return a key, Pathom will try "the next" resolver that provides that key.
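A small sketch of that behavior. lookup-person and the attribute names are hypothetical; the point is that returning only part of ::pc/output is fine:

```clojure
;; Sketch: ::pc/output lists keys this resolver *may* return.
;; Here only :person/name is actually returned, so Pathom will
;; look for another resolver to fill :person/age if the query
;; asks for it. lookup-person is a hypothetical helper.
(pc/defresolver person-by-email
  [env {:person/keys [email]}]
  {::pc/input  #{:person/email}
   ::pc/output [:person/name :person/age]}
  (when-let [person (lookup-person (:db env) email)]
    (select-keys person [:person/name])))
```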


Understood. Thanks for the clarification @souenzzo


@tvaughan Another thing to notice is that Pathom changed the standard by-* pattern; instead, I recommend using the property directly (instead of :foobar/by-id, use :foobar/id). This is part of a greater vision Pathom has around property re-use: this way the id naturally gets to be a hub, and it reduces unnecessary naming conversions between by-id and id across namespaces.
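A sketch of that recommended style, assuming a Datomic unique attribute :person/id (the attribute names are illustrative):

```clojure
;; Sketch: the property itself (:person/id) is the input, so the id
;; acts as a hub other resolvers can hang attributes from -- no
;; separate :person/by-id naming convention needed.
(pc/defresolver person-resolver
  [env {:person/keys [id]}]
  {::pc/input  #{:person/id}
   ::pc/output [:person/name :person/email]}
  (datomic/pull (:db env)
                [:person/name :person/email]
                [:person/id id]))  ; Datomic lookup ref on the unique attr
```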


by-ident is purely an invention of mine that only exists between the client and server. by-ident doesn't exist anywhere in a datomic schema. I also have a by-attrs that can be used to return a list of entities according to some attribute value that doesn't have a unique constraint, like a last name. I haven't come across anything yet in the Pathom documentation about id and property re-use. I'll take another look. Thanks @wilkerlucio


I understand, just pointing it out because by-id was a standard convention at one time, but not anymore 🙂

👍 8

How do I use request-caching with connect? Or maybe a better question is, how do I access the inputs being passed to a connect resolver given just the ‘env’, since it looks like the cached functions are expecting a single ‘env’ argument?


Some context for what I’m trying to do: I’m using Datomic Cloud and my app is set up to run as an Ion, or locally for development (in which case all the queries are remote calls). On startup, my app issues a very large query that pulls in most of the graph for the logged-in user. This works fine in production, but consistently hangs during parsing when I run locally. The problem seems to be the huge number of queries being issued to Datomic, which are all now remote calls and apparently are overwhelming the access gateway. I’ve temporarily corrected this by adding a special resolver for this one query, which pulls as much data as it can in a single call to Datomic. I can pull in maybe 70% of the graph this way, but additional calls are still needed, and in fact this solution didn’t even work until I highly optimized a couple of the additional queries. So I’m worried it will break again at some point when new queries are added, and of course when the local development version is locking up it seriously impacts productivity. Plus, having to tailor to specific queries like this loses a lot of the flexibility Pathom gives me. Since there is a lot of redundant information in the graph, I think request caching will help here, so that I can keep my more granular resolvers but not have the database queried multiple times for the same data.


Not sure if that answers your question, but you can enrich the env by simply returning an additional ::p/env key from your resolver, which would be (assoc env :your-stuff its-value)
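A sketch of what that looks like in a resolver (the :app/settings output and :your-stuff key are placeholders, not Pathom API):

```clojure
;; Sketch: a resolver can enrich the env for downstream resolvers
;; by returning ::p/env alongside its data. The data keys here are
;; illustrative; ::p/env is the Pathom core namespaced key.
(pc/defresolver with-enriched-env
  [env _]
  {::pc/output [:app/settings]}
  {:app/settings {:theme "dark"}
   ::p/env       (assoc env :your-stuff :its-value)})
```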


"it looks like the cached functions are expecting a single 'env' argument?" Are you sure about that? In the example there is just one env argument, but that doesn't mean you can't make your own. As explained in the comment, you can have additional arguments, like an id for instance, and make them part of the key so that the cache can work with multiple queries.
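If I understand the Pathom 2 request cache correctly, that would look roughly like this sketch, where the id is folded into the cache key (::person is an arbitrary key of my own choosing):

```clojure
;; Sketch: p/cached takes the env plus a cache key and a body.
;; Making the id part of the key gives one cache entry per entity,
;; so the per-request cache works across multiple lookups.
(defn person-cached [env id]
  (p/cached env [::person id]          ; composite key: keyword + id
    (datomic/pull (:db env) '[*] [:person/id id])))
```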


@U0JPBB10W thanks for the detailed explanation, here is an example of how you can create a persistent caching mechanism:


What is really missing there is how to expire it; you have to take care of that yourself, maybe by time, maybe by something manually cleaning the cache, whatever works best on your system.
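One possible time-based expiry strategy, sketched as a plain atom-backed cache (this is ordinary Clojure, not Pathom API, and all names are my own):

```clojure
;; Sketch: a TTL cache. Entries older than ttl-ms are recomputed.
;; cache* maps key -> {:value v :at epoch-ms}.
(defonce cache* (atom {}))

(defn cached-with-ttl
  "Return the cached value for k if it is younger than ttl-ms,
   otherwise call f, cache its result, and return it."
  [k ttl-ms f]
  (let [now   (System/currentTimeMillis)
        entry (get @cache* k)]
    (if (and entry (< (- now (:at entry)) ttl-ms))
      (:value entry)
      (let [v (f)]
        (swap! cache* assoc k {:value v :at now})
        v))))
```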


Thanks, I’ll take a look at that. I don’t really need a persistent cache - just caching while processing a single query should be enough. But it’s nice to have another example, to make sure I’m understanding the request-cache correctly.


Thanks @wilkerlucio , that transform code is exactly what I needed! Now I’m able to use more granular resolvers, and it’s only making 2/3 as many DB queries as the version with the specialized resolver. Now I’m making modifications to support batching and still using the caching effectively. Once that’s in place, it should cut the number of queries by half at least!

🎉 8

Sorry to hijack the thread, but I have a cache-related question as well: would a per-request cache be useful in a parallel reader if the queries are at the same "level"? i.e. if I query a list of messages and the user details are filled by another resolver, wouldn't it run the same query for identical users when the parallel reader goes through them?


@U0DB715GU It can be. If you are at the same level, you would probably hit the entity cache first, since the data is already there. But depending on the processing, a resolver may get scheduled to run for multiple reasons (different attributes, with different goals, that share things in the middle); in this case the request cache kicks in to avoid duplicated calls to the resolver. Another way it can be used is if you are dealing with farther-apart parts of the same request that end up needing similar things.