
@pri I’m no expert with lacinia, but I’ve used it a little. I wouldn’t think there is a recommended approach for caching data. It would be up to the resolvers and your data sources to deal with caching. For example, maybe if you are on AWS you cache with Redis and your resolvers just deal with that. Or if you’re using something else, you implement some kind of memoization that your resolvers interact with, and your data source would know nothing of it.
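That memoization idea can be as simple as wrapping the fetch function a resolver calls. A minimal sketch, assuming the usual three-argument lacinia resolver shape; `fetch-user` and `user-resolver` are hypothetical names, not from any library:

```clojure
;; Hypothetical data-source fn -- imagine this hits a database or upstream API.
(defn fetch-user [id]
  {:id id :name (str "user-" id)})

;; memoize caches results keyed by arguments; the data source knows nothing of it.
;; (In production you'd likely want a bounded/TTL cache, e.g. core.cache, instead.)
(def fetch-user-cached (memoize fetch-user))

;; A lacinia-style resolver that only ever talks to the cached fn.
(defn user-resolver [context args value]
  (fetch-user-cached (:id args)))
```

Swapping `memoize` for a Redis-backed lookup keeps the same shape: the resolver calls one function and stays ignorant of where the answer came from.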


Also depends on what you want to cache. It’s worth listening to talks by Facebook devs about increasing GraphQL efficiency.


(For example, it’s possible to cache parsed query ASTs, return a unique ID for each such query, and consequently have the frontend only send the ID. This saves time on generating the query in the client, transferring the not-insignificant amount of plaintext query, and parsing it on the server. Or you can even precompile the frontend with query IDs, as I think Facebook does/did.)
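The persisted-query idea above can be sketched in a few lines. This is a toy illustration, not lacinia's API: `parse-query` is a stand-in for a real parser, and the atom stands in for whatever server-side store you'd use.

```clojure
;; Server-side cache of parsed query ASTs, keyed by an ID derived from the text.
(def parsed-cache (atom {}))

(defn parse-query [q]
  ;; Stand-in for real GraphQL parsing.
  {:ast [:parsed q]})

;; Register a query once (e.g. at frontend build time) and hand back its ID.
(defn register-query! [q]
  (let [id (str (hash q))]
    (swap! parsed-cache assoc id (parse-query q))
    id))

;; At request time the client sends only the ID; no parsing needed.
(defn lookup-query [id]
  (get @parsed-cache id))
```

The client then ships the short ID over the wire instead of the full query text.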


I would like to see a dataloader-esque library for Clojure. Elixir has one, JS has one. It solves the simple case pretty well and allows you to extend it to match the behavior that your application needs (e.g. reaching out to Redis).
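For anyone unfamiliar with the pattern: a dataloader collects the keys individual resolvers ask for and satisfies them with one batch call instead of N. A minimal sketch of the batching core, with a hypothetical `batch-fetch-users` standing in for your data source:

```clojure
;; Hypothetical batch fn -- one round trip for many ids,
;; e.g. SELECT ... WHERE id IN (...) or a multi-get against Redis.
(defn batch-fetch-users [ids]
  (into {} (map (fn [id] [id {:id id}])) ids))

;; Deduplicate the requested keys, fetch once, then answer each
;; original request from the result map (a map is a fn of its keys).
(defn load-many [ids]
  (let [results (batch-fetch-users (distinct ids))]
    (mapv results ids)))
```

A real library would also defer the batch until the end of an execution tick and cache per-request, which is exactly the part worth extending per application.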

👍 4

haven't gotten around to building it myself, but I think there is a need. I received a bit of pushback in Elixir/Absinthe when I asked about it, but they eventually implemented it. I see the same kind of arguments / pushback in Clojure as well, but I think there is value in it


I've basically ended up building my own bespoke version of that on every GraphQL project I've ever been on 🙃


Don't subscriptions give you almost the same thing more easily? Or am I not getting it?


subscriptions are completely orthogonal. subscriptions will live-update clients based on events; dataloader is about loading data on the GraphQL server from a data source (e.g. a database, or an upstream REST API)


Ok, didn't realize it was a server-side thing. Only made a GraphQL endpoint based on some Kafka topics so far.