#graphql
2019-10-15
orestis 05:10:16

We use it just to look at timings by hand; it's not out in production just yet.

hlship 16:10:19

I hope the new version, in 0.36.0-alpha-1, will be even easier to use. If I get a chance, I want to try and create a visualizer script for the data.

orestis 16:10:25

Would there be any way to log the various timings in a structured way, so I could use, for example, CloudWatch or some other log parser to see meaningful data? Would that be something I could/should add to a Pedestal interceptor?
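
A minimal sketch of what such a Pedestal interceptor could look like, assuming lacinia-pedestal leaves the GraphQL result map on the response body and that the timing data appears under :extensions (the exact key, :timing vs :tracing, depends on the Lacinia version); cheshire is used here for JSON encoding, and the key paths are assumptions rather than documented API. Placement in the interceptor chain also matters, since the body may already be serialized to a string by the time this :leave fn runs.

```clojure
;; Hypothetical sketch: a Pedestal :leave interceptor that emits per-query
;; timing data as one JSON line, so CloudWatch (or any log parser) can pick
;; it up. The key paths are assumptions; adjust to what your Lacinia version
;; actually puts in the result map.
(require '[io.pedestal.interceptor :as interceptor]
         '[cheshire.core :as json])

(def log-graphql-timings
  (interceptor/interceptor
    {:name ::log-graphql-timings
     :leave (fn [context]
              ;; Timing/tracing data (when enabled) is assumed to sit under
              ;; :extensions of the GraphQL result in the response body.
              (when-let [timings (get-in context [:response :body :extensions :timing])]
                (println (json/generate-string
                           {:event   :graphql-timing
                            :timings timings})))
              context)}))
```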

hiredman 16:10:20

I am using the timing. We track the total time for GraphQL operations as sort of the key metric for this app, and when that spikes my first question is: are all GraphQL operations spiking, or just some big outliers? So far it mostly hasn't been big outliers, and the big spikes have been the result of other issues, but I am tracking the timings.
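
As an illustration of that kind of top-level metric (not code from the thread): wrap the execute call with a wall-clock measurement and hand the duration to whatever metrics sink you use. The record-metric! function below is a made-up placeholder, not part of Lacinia.

```clojure
;; Illustrative only: time each GraphQL operation end to end and report the
;; duration. `record-metric!` is a hypothetical metrics hook.
(require '[com.walmartlabs.lacinia :as lacinia])

(defn execute-timed
  [schema query variables context record-metric!]
  (let [start      (System/nanoTime)
        result     (lacinia/execute schema query variables context)
        elapsed-ms (/ (- (System/nanoTime) start) 1e6)]
    (record-metric! :graphql/operation-ms elapsed-ms)
    result))
```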

hlship 17:10:04

We're mostly interested in seeing if resolution is async the way it should be.

hiredman 17:10:53

I haven't noticed any blocking, but I have the callback executor bound to a 16-thread executor, so I am not sure if I would.
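
For reference, a sketch of what binding the callback executor might look like, assuming the *callback-executor* dynamic var in com.walmartlabs.lacinia.resolve; check the docs for your Lacinia version before relying on this.

```clojure
;; Sketch (verify against your Lacinia version): run resolver callbacks on a
;; fixed 16-thread pool by binding Lacinia's *callback-executor* dynamic var
;; around execution.
(require '[com.walmartlabs.lacinia :as lacinia]
         '[com.walmartlabs.lacinia.resolve :as resolve])
(import '(java.util.concurrent Executors))

(def callback-pool (Executors/newFixedThreadPool 16))

(defn execute-with-callback-pool
  [schema query variables context]
  (binding [resolve/*callback-executor* callback-pool]
    (lacinia/execute schema query variables context)))
```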

hlship 20:10:26

I've been chasing a race condition all day that appears to actually have been a bug in our tests where a callback was executed on a terminated thread pool executor.

hlship 22:10:12

After a series of yak shaves, I've ended up with an interesting change to Lacinia that supports setting a timeout on the execution. I'm interested in what people think of this PR: https://github.com/walmartlabs/lacinia/pull/301
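
This is not the implementation in the PR, just a sketch of the general idea of putting a bound on how long a caller waits for an async execution, assuming execute-parsed-query-async returns a ResolverResult that on-deliver! accepts.

```clojure
;; Conceptual sketch only; see the PR for the real change. Bridge the async
;; result to a promise and give the caller a bounded wait.
(require '[com.walmartlabs.lacinia :as lacinia]
         '[com.walmartlabs.lacinia.resolve :as resolve])

(defn execute-with-timeout
  [parsed-query variables context timeout-ms]
  (let [p (promise)]
    (resolve/on-deliver! (lacinia/execute-parsed-query-async parsed-query variables context)
                         (fn [result] (deliver p result)))
    ;; ::timed-out is a sentinel the caller can check for.
    (deref p timeout-ms ::timed-out)))
```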

hiredman 23:10:46

For the execution changes: other than the timeout, I more or less run Lacinia like this already, because I am calling execute-parsed-query-async directly and not immediately derefing the returned value.

hiredman 23:10:04

but that also means the timeout feature is added at a layer above where I am interacting with the library, so really no change for me, which is great 🙂
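
A rough sketch of that usage pattern: calling execute-parsed-query-async directly and consuming the result via a callback instead of blocking on a deref. The core.async plumbing below is just one way to stay async end to end, not necessarily how hiredman wires it up.

```clojure
;; Sketch: stay async end to end. Instead of blocking, hand the eventual
;; result to a core.async promise channel (or any other callback consumer).
(require '[clojure.core.async :as async]
         '[com.walmartlabs.lacinia :as lacinia]
         '[com.walmartlabs.lacinia.resolve :as resolve])

(defn execute-async->chan
  [parsed-query variables context]
  (let [ch (async/promise-chan)]
    (resolve/on-deliver! (lacinia/execute-parsed-query-async parsed-query variables context)
                         (fn [result]
                           (async/put! ch result)))
    ch))
```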