Fork me on GitHub

clojure -Ttools install com.github.seancorfield/clj-new '{:git/tag "v1.2.359"}' :as clj-new
• Switch the app, lib, polylith, and template templates to generate projects (instead of depstar, which has been archived).
• Fix #80 by improving the failure reporting when a template cannot be located.
• Fix #79 by warning about options we don't recognize.
• Switch the project itself to use instead of depstar.
• Update .gitignore template files (includes a change of LSP database location).
Follow-up in #deps-new (which is for both clj-new and deps-new).

🎉 24
Ben Sless09:09:08

I would like to share a project I have been working on for the past few months, aimed at setting up reproducible stress-testing and profiling environments for Clojure servers. It's an ongoing project with the goal of finding the best configurations and settings, understanding the effects of JVM versions and GC algorithms, identifying performance bottlenecks, and, in general, learning! Currently I have results for Java 8, 11, and 15, different GC algorithms, blocking and non-blocking handlers, different operating rates, and httpkit, aleph, jetty, undertow, and pohjavirta as servers. I hope you can find interesting and useful bits in it. It should help inform developers when choosing libraries and setting up servers, and library developers on the effects of their design choices. Contributions and feedback are very welcome. We can push the ecosystem forward. • project: • inaugural blog post:

🚀 49

well done, a really comprehensive setup! the Aleph InputStream bug is reported here:

👀 2
Ben Sless09:09:43

aha! Well, after I read muuntaja's docs and returned a byte array, I worked around it by accident. I didn't really dig into Aleph looking for the answer, though


really cool work, thanks! it would be nice if you included links to the PRs so we can follow the progress :thumbsup:

Jakub Holý17:09:17

Awesome project, thanks! I can't even imagine how much effort went into this...

Ben Sless17:09:15

@U0522TWDA More than the effort, you can't imagine how much time it took: there are hundreds of possible combinations across all the scenarios, and each one runs for 10 minutes. That means profiling one server takes an entire night. Now imagine waking up in the morning and finding out you made a mistake 😄

😱 2
Jakub Holý17:09:56

I guess you are either a saint or crazy 🙂 Thank you in either case! 🙂

Ben Sless17:09:39

Probably the latter, but I haven't been professionally tested 🙃 Working in VLSI has made me more tolerant of 24h-long simulations

Ben Sless09:09:10

Launched a run with Java 17. Results in a day or so 🙂

⏲️ 4

is there a summary of the results somewhere?


How feasible would it be to parallelize the tests (e.g. on aws) to speed things up?

Ben Sless17:09:59

@U055NJ5CC there's a big CSV with everything as part of the repo. I'm actually hoping some of the kind folks from the data-science / visualization groups will take a crack at it. Interim conclusions are that http-kit can withstand almost anything, and that later JVMs are better.
@U05100J3V sadly no, I want to make sure I'm comparing apples to apples as much as possible, unless you're willing to pay for on-demand instances. Are you the metasoarous behind oz? I've been fighting with Vega for the past few hours trying to make a nice interactive plot with it, which would make drawing conclusions easier


That would be me 🙂 How can I help?


Regarding apples to apples &amp; on-demand instances, I was assuming as much; I could still imagine it being useful for situations where you need a tighter turnaround. But it's also probably a lot of work, so don't do it on my account.

Ben Sless17:09:02

Is there an appropriate channel to discuss it? It would get very off topic

Ben Sless18:09:23

@U055NJ5CC what I usually do to get a feel for the data is cd to the png directory, pick out one file name (it doesn't really matter where you start), then try replacing each of the dimensions with * and view the matching files in feh or another program that lets me page between them. That way I can see the effect of modifying one parameter with all others being equal
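That glob-paging workflow might look like this; the file-naming scheme below is a guess (the repo's actual dimensions and ordering may differ), and the dummy files just stand in for real result plots:

```shell
# Hypothetical naming scheme: png/<server>-<jvm>-<gc>-<rate>.png
# (create a few dummy files so the glob has something to match)
mkdir -p png
touch png/jetty-jdk11-g1-10k.png png/jetty-jdk11-zgc-10k.png \
      png/aleph-jdk11-g1-10k.png
# Fix every dimension except the GC algorithm; the * selects all
# GC variants of the same server/JVM/rate combination:
ls png/jetty-jdk11-*-10k.png
# feh png/jetty-jdk11-*-10k.png  # then page between them with arrow keys
```

Moving the `*` to a different position in the name compares a different dimension while holding the rest equal.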


nbb, a scripting environment like babashka but for Node.js, now has (as of v0.0.75) built-in integration with applied-science/js-interop! This means you can run this example with stock nbb:

(ns example
  (:require [applied-science.js-interop :as j]))

(def o (j/lit {:a 1 :b 2 :c {:d 1}}))

(prn (j/select-keys o [:a :b])) ;; #js {:a 1, :b 2}
(prn (j/get-in o [:c :d])) ;; 1
See and #nbb for more. Thanks to Matt Huebert for collaborating on this.

🖤 14
bubblebobble 4
🎉 11

doxa is a simple in-memory database that tries to copy the best solutions from `datascript`, `crux`, `fulcro`, `autonormal`, `kechema`, and `shadow-grove`. I finally finished something between a poor man's differential dataflow and query caching: incrementally maintained views. In the meantime I also completely rewrote the datalog query parsing and many other things, all using meander of course, just because I can 🙃

👏 24
👀 2
Jakub Holý21:09:29

Interesting. What is shadow-grove?


Is this actually differential dataflow, or just incrementally maintained views which fall back to re-evaluation from scratch when there's a recursive query?


too difficult question for me @U05100J3V


I'm not sure I know the definition


No worries 🙂


In general, the scheme looks like this


transaction -> diff -> match diffs -> re-run queries


after each transaction the diff is calculated using editscript
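A minimal sketch of that scheme, assuming hypothetical helper names (doxa's real internals differ); editscript's `diff`/`get-edits` produce edits of the form `[path op value]`, and a cached query only re-runs when some edit path starts with a path it watches:

```clojure
(ns sketch
  (:require [editscript.core :as e]))

;; after each transaction, diff the old db value against the new one
(defn diff-db [old-db new-db]
  (e/get-edits (e/diff old-db new-db)))

;; did any edit land under a path this query depends on?
;; (simplified: only checks watched-path-is-a-prefix-of-edit-path)
(defn affected? [watched-paths edits]
  (boolean
   (some (fn [[path _op & _]]
           (some #(= % (take (count %) path)) watched-paths))
         edits)))

;; match diffs -> re-run only the affected queries
(defn maybe-rerun [query watched-paths old-db new-db]
  (let [edits (diff-db old-db new-db)]
    (if (affected? watched-paths edits)
      (query new-db)      ;; re-run and refresh the cached view
      ::cache-hit)))      ;; no relevant diff: keep the cached result
```

A transaction that changes nothing produces an empty edit list, so every cached query is a cache hit; that diff check is what distinguishes this from re-running on every transaction.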


That's not differential dataflow. That's effectively what posh is doing.


(I know posh has some edge cases, but still; That's the basic methodology)


Which is fine, but I'd suggest not trying to market it using that terminology, because it might catch folks off guard expecting it to be something it's not.


posh from what I remember matches not diffs but transactions


but I guess it doesn't matter


No, it doesn't match transactions, but rather the datoms (diffs, effectively) produced by the transactions


that's what I meant


but again, as far as I remember, there is no difference check and a transaction that changes nothing also causes recalculation of queries


thanks @U05100J3V, I corrected the description


You're welcome; Thanks for clarifying.