
Q: I have a stock prod topology and I have started load testing read-only loads. When I max it out, the CPU tops out at approx 50%. Am I right in understanding this is because only one of the two server nodes is handling queries?


@U05120CBV could you comment on this?


How are you "maxing it out"?


Is the read load all coming from a single source?


I have a cljs lambda which calls two Ion fns. Both are read only. I use Gatling to create increasing levels of load. It reaches an asymptote pretty quickly, and the CPU chart in the dashboard never goes above 50% when it hits the asymptote.


So I guess it is a single source, but I'm not sure how lambdas scale out. Are you suggesting that, if > 1 lambda is making the call, it will distribute across both nodes?


I suspect it has to do with traffic routing on a per-database basis for the primary compute group, but I don't know that for a fact. I would suggest trying it against a query group.


But also, the thing that matters is your read throughput, not the distribution of CPU usage.
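To make "read throughput" concrete: it's requests completed per unit time as load ramps, rather than how evenly CPU is spread across nodes. A minimal sketch of bucketing request completion timestamps into per-second counts (the timestamps here are made up, and this is not Gatling's actual report format):

```python
from collections import Counter

def throughput_per_second(completion_times):
    """Given request completion timestamps (seconds, as floats),
    return a dict mapping each whole second to the number of
    requests that completed within it."""
    return dict(Counter(int(t) for t in completion_times))

# Hypothetical completion timestamps from one load-test run
times = [0.1, 0.4, 0.9, 1.2, 1.3, 1.8, 2.5]
print(throughput_per_second(times))  # {0: 3, 1: 3, 2: 1}
```

If this per-second count plateaus while latency climbs, you've hit the asymptote regardless of what the CPU chart shows.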


Digging in: my cljs lambda invokes the Ion lambda via the npm AWS client. Not sure if that is relevant.


Also, my Ions work with N Datomic databases, not just a single one. I've segregated databases based on usage requirements.


OK. I'll try to grok "read throughput" a bit more on the next load test.


Given my startup status (i.e. low budget) I'm trying to minimise costs. Adding a query group implies extra cost, right?


I don't know off the top of my head how the dashboard reports CPU usage. You can always look at individual EC2 instance CPU metrics if you want to examine things further.
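Once you've pulled per-instance `CPUUtilization` datapoints from CloudWatch (via the console, the AWS CLI, or boto3), a quick way to spot a lopsided pair of nodes is to average them per instance. A small sketch assuming CloudWatch-style datapoint dicts; the instance IDs and numbers are invented for illustration:

```python
def mean_cpu_by_instance(datapoints_by_instance):
    """Average the 'Average' field of CloudWatch-style datapoints
    for each instance, to see whether one node sits idle while
    the other is saturated."""
    return {
        instance_id: sum(d["Average"] for d in points) / len(points)
        for instance_id, points in datapoints_by_instance.items()
    }

# Hypothetical datapoints for the two primary-group nodes
sample = {
    "i-0aaa": [{"Average": 95.0}, {"Average": 92.0}],  # doing all the work
    "i-0bbb": [{"Average": 4.0}, {"Average": 5.0}],    # mostly idle
}
print(mean_cpu_by_instance(sample))  # {'i-0aaa': 93.5, 'i-0bbb': 4.5}
```

A split like this would explain the dashboard's aggregate topping out near 50% while one node is actually maxed.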


Yeah, I thought about watching EC2 node CPU as well. I'll do that too.


thanks for the ideas. I’ll refine more on the next test


Here's a thought: would the routing be different if I started using http-direct for this Ion invocation? It's something I've been meaning to do anyway.


Shouldn't change. HTTP direct goes to the LB the same as a lambda does.


@U05120CBV ok. I have since verified that under load, only one of the two nodes is being used for reads. With the OOTB topology, should the load be spread across both? If so, what's the next step in diagnosing this? Is it possible that my cljs/AWS API Ion invocation is somehow affecting the load balancer?


(Interestingly, the architecture diagrams don't show an NLB behind the lambdas or direct. I trust your statement despite that 🙂)


Happy to log a ticket to pursue this further if it makes sense.

David Pham 09:03:56

Can we equate the number of Datomic systems with the number of transactors?