This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-08-04
Channels
- # announcements (7)
- # babashka (32)
- # beginners (106)
- # bristol-clojurians (10)
- # cider (6)
- # clj-kondo (5)
- # cljdoc (10)
- # clojure (110)
- # clojure-australia (10)
- # clojure-dev (6)
- # clojure-europe (12)
- # clojure-nl (2)
- # clojure-norway (16)
- # clojure-spec (9)
- # clojure-uk (59)
- # clojurescript (105)
- # community-development (2)
- # conjure (46)
- # cursive (12)
- # data-science (1)
- # datalog (26)
- # datomic (37)
- # docker (4)
- # emacs (10)
- # events (1)
- # fulcro (8)
- # graalvm (2)
- # jobs (1)
- # jobs-discuss (1)
- # malli (24)
- # meander (13)
- # off-topic (52)
- # pathom (4)
- # polylith (17)
- # proletarian (4)
- # react (1)
- # rewrite-clj (4)
- # shadow-cljs (56)
- # sql (21)
- # xtdb (14)
Thanks, @dominicm, that’s useful information. It would be nice to have a benchmark in the repo to establish a baseline for performance. Is this something you could share (in a gist or something)?
@msolli It was pretty basic; I was technically testing a bunch of other stuff too (e.g. our auto-scaling setup, our HTTP layer, my local network). So it's not a great test at all.
For my test, I used wrk with a Lua script to set the HTTP body.
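(For reference, a minimal sketch of the kind of wrk script described above — the endpoint, payload, and thread/connection counts here are hypothetical, not the ones used in the test:)

```lua
-- post.lua: tell wrk to send POST requests with a JSON body
wrk.method = "POST"
wrk.body   = '{"id": 1, "name": "test"}'
wrk.headers["Content-Type"] = "application/json"
```

Invoked along the lines of: `wrk -t4 -c64 -d30s -s post.lua http://localhost:8080/some-endpoint`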
I suppose the processing is more interesting, but for that I was measuring with our Datadog setup. So, again, not great for a gist.
Maybe something could be put together with https://github.com/aphyr/interval-metrics#measuring-your-codes-performance. You'll still want to externalize the PG instance somehow, so the JVM doesn't impact it and you get an accurate measure of the latency you'll see in production.
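(For reference, the general idea behind interval-metrics — collect latency samples into a reservoir, then take a snapshot that reports percentiles for the window and resets it — can be sketched in plain Python. This is an illustrative sketch of the technique, not the library's actual API:)

```python
import random
import threading

class IntervalReservoir:
    """Collects latency samples over an interval; snapshot() reports
    percentiles for the window and resets it for the next interval."""

    def __init__(self, size=1024):
        self.size = size        # max samples retained per interval
        self.samples = []
        self.count = 0          # total updates seen this interval
        self.lock = threading.Lock()

    def update(self, latency):
        with self.lock:
            self.count += 1
            if len(self.samples) < self.size:
                self.samples.append(latency)
            else:
                # reservoir sampling keeps a uniform sample of the interval
                i = random.randrange(self.count)
                if i < self.size:
                    self.samples[i] = latency

    def snapshot(self):
        with self.lock:
            samples, self.samples, self.count = self.samples, [], 0
        if not samples:
            return {}
        samples.sort()
        def pct(p):
            return samples[min(len(samples) - 1, int(p * len(samples)))]
        return {"p50": pct(0.5), "p99": pct(0.99), "max": samples[-1]}
```

The point of the interval (snapshot-and-reset) design is that each report reflects only the latest window, so a latency spike shows up immediately instead of being averaged away by hours of earlier samples.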