Matt Butler 12:04:42

Is there a way to get a better idea of what's happening during execute-query, other than the :timings key? Its returned value appears to be a lot shorter than the overall execution time (based on the gap between logs from the pedestal/query-executor interceptor and check-terminators) for large data sizes.


You're not seeing the time to convert EDN data to a JSON byte stream, for example.

Matt Butler 15:04:28

Does that not happen inside the json-response interceptor?


When query-executor is done, you have a large data structure in memory; I don't believe you'll have any unresolved lazy seqs left at that point, because Lacinia is walking lists to perform validation. In any case, that's the time that Lacinia takes to execute, but another chunk of processing, which won't show up in timings, is how long it takes to convert that chunk of data into a bytestream and pump it down to the client.
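To put a number on that last chunk of processing, one can time the JSON encoding step in isolation. This is a hedged sketch, not from the thread: it serializes a large already-computed result map with Cheshire (the JSON library lacinia-pedestal builds on); the `result` shape here is an invented placeholder standing in for whatever execute-query returns.

```clojure
(require '[cheshire.core :as cheshire])

;; Placeholder result: 100k small maps, roughly matching the data
;; sizes discussed above. Swap in a real execute-query result to
;; measure your actual payload.
(let [result {:data {:items (vec (repeat 100000 {:id 1 :name "x"}))}}
      start  (System/nanoTime)
      json   (cheshire/generate-string result)
      elapsed-ms (/ (- (System/nanoTime) start) 1e6)]
  (println "JSON encoding took" elapsed-ms "ms for" (count json) "chars"))
```

Comparing this figure against the gap between the :timings total and the end-to-end time shows how much of the difference is serialization rather than Lacinia itself.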

Matt Butler 16:04:37

Is there a way to measure just the query executor time, then?

Matt Butler 16:04:49

My suspicion is that because I've got 100k objects, each with multiple fields, I'm losing time on, as you say, field validation etc. But it would be nice to prove it, rather than just assume that's the difference between the :timings key value and the time between Pedestal's query-executor interceptor log and the following interceptors' logs.

Matt Butler 16:04:29

Unless I'm misunderstanding and Lacinia itself is doing some json conversion (rather than a lacinia.pedestal interceptor).


lacinia does not do any IO or encoding, beyond what's necessary to convert scalar values back and forth.


In our stack, we have a logging interceptor around the Lacinia pipeline (which also includes a few other things, like some auth), and the numbers seem to line up pretty nicely. The thing with timings is that not everything is captured (by design, since that would be a lot of noise), and if things are happening in parallel, the way they should, then the total elapsed time in the timings will not add up to the total time from entry to exit.

Matt Butler 19:04:53

So in my case there is only one custom resolver, which returns a long seq of maps (100k entries). Therefore the only resolver I see in the timings is my custom one, which says it takes around 500ms, whereas the total time the query appears to be executing for is ~3s (measured from the lacinia-pedestal interceptors). Is it reasonable to assume this is Lacinia doing type checking on the return value and the associated overhead, or are there other overheads one should consider when dealing with large data structures in Lacinia?

Matt Butler 19:04:00

Thanks for the help btw 👍 🙂

Matt Butler 19:04:06

Much appreciated


You should probably put your own interceptor in, to take the measurements you need.
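A minimal sketch of such an interceptor, using the standard Pedestal interceptor API (the interceptor name and log message are invented for illustration): it records a start time on :enter and reports the elapsed time on :leave, so placing it immediately before the query-executor interceptor in the chain isolates the time spent in everything downstream of it.

```clojure
(require '[io.pedestal.interceptor :as interceptor])

(def timing-interceptor
  (interceptor/interceptor
   {:name ::timing
    ;; Stash the start time on the context on the way in.
    :enter (fn [context]
             (assoc context ::start-nanos (System/nanoTime)))
    ;; Report elapsed wall-clock time on the way back out.
    :leave (fn [context]
             (let [elapsed-ms (/ (- (System/nanoTime)
                                    (::start-nanos context))
                                 1e6)]
               (println "downstream interceptors took" elapsed-ms "ms")
               context))}))
```

Dropping a second copy of this (under a different :name) around the JSON-response interceptor would then separate query execution time from serialization time.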


Like I said, once Lacinia is done with its work, there's a bit more work to turn it into an HTTP response. Depending on where you do your measuring, you can figure out which part is which.


Generally speaking, once Lacinia is done resolving and selecting, there's nothing left to be lazy (all lazy seqs will have been fully walked for various reasons).