#graphql
2023-02-08
tlonist 01:02:41

Is there any reference on how Walmart is using Lacinia, in terms of architecture, tech stack, etc.? Or references from any company with decent traffic that uses Lacinia that you know of?

hlship 19:02:36

So, from memory, as I left Walmart last June. First off, Lacinia is only used in a couple of endpoints, those few written in Clojure. The majority of GraphQL at Walmart is going to be implemented in Node and maybe some in Java.

Our team is customer purchase history, and the GraphQL was used by desktop and mobile apps for everything you do after purchase (in store or online); this was mostly just purchase history, but also includes warranty upsells, scheduled delivery, and the like, and a lot of support for in-store picking for delivery. We didn't do any app development (mobile or desktop), but worked with the various app teams; all apps used our APIs to get consistent results.

We have three major endpoints: the original one used by older mobile clients, the similar API used by the SAMS store apps, and the modern API that fits into a larger federated API. Much of our application's work was to integrate data from multiple backend systems (order services, product details, warranty details, etc.) and provide a convenient and consistent view of that data. For example, in the app if you look at an order, the items are grouped ("delivered Tuesday", "on the way", "awaiting return"), and that grouping was something provided only in our layer, something synthesized up from a rat's nest of JSON data.

We monitored performance very closely, and some changes to Lacinia last year were driven by us identifying request processing time that wasn't just waiting for other services to respond. The older APIs made heavier use of async ResolverResults and resolver functions at many levels of the schema; the modern API instead uses the preview API to do all the reading and organizing primarily at the operation level, inside a large go block.

Deployment is via Walmart's custom layers on top of Kubernetes. I don't remember how scaled out we were, maybe 20 servers x 4 zones?
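The two resolver styles described above (field-level async ResolverResults scattered across the schema, versus doing the reading and organizing up front at the operation level inside one go block) can be sketched roughly as follows. This is a hedged illustration, not Walmart's actual code: the backend calls (`fetch-items!`, `fetch-orders!`, `fetch-products!`) and the join helper are invented names, and the operation-level variant is shown with ordinary resolvers rather than the preview API mentioned in the message.

```clojure
(ns example.resolvers
  "Hypothetical sketch of two Lacinia resolver styles (not Walmart's code)."
  (:require [clojure.core.async :refer [go <!]]
            [com.walmartlabs.lacinia.resolve :as resolve]))

;; Style 1: field-level async resolution. Each resolver returns a
;; ResolverResult promise and delivers it when its own backend call
;; completes, so async work is spread across many levels of the schema.
(defn order-items-resolver
  [context args order]
  (let [result (resolve/resolve-promise)]
    (go
      ;; fetch-items! is a hypothetical channel-returning backend call
      (resolve/deliver! result (<! (fetch-items! (:id order)))))
    result))

;; Style 2: operation-level aggregation. The top-level resolver reads and
;; joins everything from the backends inside one large go block, then hands
;; the fully assembled tree to Lacinia; nested fields become plain map
;; lookups with no further async work.
(defn purchase-history-resolver
  [context args _]
  (let [result (resolve/resolve-promise)]
    (go
      (let [orders   (<! (fetch-orders! (:customer-id args)))
            products (<! (fetch-products! (mapcat :item-ids orders)))]
        (resolve/deliver! result
                          (join-orders-and-products orders products))))
    result))
```

The trade-off implied in the message: style 1 lets each field overlap its own I/O, while style 2 concentrates all waiting in one place, which makes it easier to see where request processing time is going.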

👏 10
👍 2
tlonist 04:02:37

Thanks for the kind elaboration. Do you remember how much traffic the server was handling per hour? And also, were you using superlifter for the dataloader implementation? What I want to do is a rough comparison of TPS between my service and yours, provided both use the same tech stack.

hlship 04:02:01

For the old model, I created my own take on dataloader, but it never proved itself the way I wanted, so we never open-sourced it. Not using superlifter. The relevant service is the federated one, so we don't have anything like total control over client behavior; we just try to do our part quickly.
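The dataloader idea mentioned here — collecting individual key lookups and issuing them as one batched backend call to avoid the N+1 problem — can be sketched in plain Clojure. This is a minimal illustration of the general pattern, not the unreleased Walmart implementation; `batch-fetch!` and the enqueue/dispatch split are assumptions for the sketch.

```clojure
(ns example.dataloader
  "Minimal dataloader-style batcher: dedupe keys, make one backend call,
   then answer each pending lookup from the shared result map.")

(defn make-loader
  "Returns a loader. batch-fetch! takes a collection of distinct keys and
   returns a map of key -> value in a single backend round trip."
  [batch-fetch!]
  (let [pending (atom #{})
        cache   (atom {})]
    {:enqueue!  (fn [k] (swap! pending conj k))
     :dispatch! (fn []
                  (when-let [ks (seq @pending)]
                    (swap! cache merge (batch-fetch! ks))
                    (reset! pending #{})))
     :lookup    (fn [k] (get @cache k))}))

;; Usage sketch: resolvers enqueue keys during a collect phase; one
;; dispatch replaces N individual backend calls with a single batched one.
(comment
  (let [{:keys [enqueue! dispatch! lookup]}
        (make-loader (fn [ks] (zipmap ks (map fetch-one ks))))] ; fetch-one is hypothetical
    (doseq [id [1 2 2 3]] (enqueue! id))
    (dispatch!)   ;; one backend call covering #{1 2 3}
    (lookup 2)))
```

Real dataloader implementations (including superlifter) add per-request caching and time/size-based dispatch triggers on top of this basic collect-then-batch shape.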

tlonist 06:02:00

So to recap:
• Clojure (Lacinia) was not the server taking the main traffic.
• Did not use superlifter for the N+1 problem.
• Lacinia performance was monitored and led to updates.