
Q: I’ve started using Lacinia to return results for a vega-lite chart. This means 1000s of objects. For big data-sets, it’s creating quite a lot of GC pressure. While this can be caused by internal allocations (not using transducers fully yet), I’m wondering how much of the allocation/GC comes from the resolvers. Has anyone else got an insight into this?


I will dig in and measure the internal transformations, but I'm just curious about best practice for this at the resolver level. That may include not using GraphQL for these results and just returning EDN.


Well, essentially, Lacinia has to build the data structure up from the leaves, and only THEN can it start converting all that EDN to JSON (or to a stream of characters); there isn't a good way to make that happen lazily.


We haven't done this, but for large data sets, you could consider implementing it as a subscription, as that can deal with a streaming model to get all the data to the client over time, rather than in one giant response.
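A minimal sketch of what that streaming subscription might look like. This assumes Lacinia's streamer convention (a function of context, args, and a source-stream callback that returns a cleanup function); `fetch-chunks` is a hypothetical stand-in for the real data source, stubbed here so the sketch is self-contained:

```clojure
(defn fetch-chunks
  "Placeholder for the real data source: returns the rows in small batches.
  Here it just partitions a range so the sketch runs on its own."
  [args]
  (partition-all 2 (range (:n args 6))))

(defn chart-data-streamer
  "Lacinia-style subscription streamer: pushes the data set to the client
  in batches via source-stream, instead of one giant query response.
  Returns a cleanup function, called when the client unsubscribes."
  [context args source-stream]
  (let [running? (atom true)
        worker   (future
                   (doseq [chunk (fetch-chunks args)
                           :while @running?]
                     ;; each batch becomes its own subscription event
                     (source-stream {:rows chunk}))
                   ;; nil conventionally signals the end of the stream
                   (source-stream nil))]
    (fn [] (reset! running? false))))
```

Each batch is short-lived, so the server never has to hold (or serialize) the whole result at once, which is the point for GC pressure.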


Depending on the object graph, you may want to implement pagination.
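For illustration, a minimal sketch of cursor-style pagination at the resolver level, so each response stays small. The `:edges`/`:pageInfo` shape follows the common Relay connection convention; the function and argument names are assumptions, not from the original discussion:

```clojure
(defn paginate
  "Returns one page of `rows` of size `first`, starting after the
  (0-based) cursor `after`. A resolver would call this with the
  field arguments and return the resulting connection map."
  [rows {:keys [first after] :or {first 100}}]
  (let [start (if after (inc after) 0)
        page  (->> rows (drop start) (take first) vec)]
    {:edges    (map-indexed (fn [i row]
                              {:cursor (+ start i) :node row})
                            page)
     :pageInfo {:endCursor   (when (seq page)
                               (+ start (dec (count page))))
                :hasNextPage (> (count rows)
                                (+ start (count page)))}}))
```

Integer cursors keep the sketch simple; real implementations usually use opaque (e.g. base64-encoded) cursors so clients can't fabricate offsets.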

hiredman 22:08:58: … has a discussion of some possible implementations


Thanks, both. I am using idiomatic GQL pagination already and it's working well. The subscription idea is a good one, as I plan to use websockets for another requirement as well. I'll take a look at that.


I don't think laziness will help with GC, as it will still need to allocate the same number of short-lived objects. I could be wrong in my thinking on this; happy to be corrected.


Subscriptions could be challenging, as my stack is API Gateway -> Datomic Ions. I am considering adding EC2 to the stack (Lacinia would run there), but I'm not sure, for either design, how well subscriptions would run through AWS API Gateway.


But that's on me to test out. I'll report back if I find anything interesting.