Richard Bowen00:01:44

What's a great modern introduction to API development with Clojure.

Simion Iulian Belea10:01:12

I found @U8A5NMMGD’s Learn Reitit course great for that, even though I have about 3 years using Clojure professionally.

Richard Bowen14:01:46

Cool, I'm familiar with the ClojureScript course he has.


If, hypothetically, hot-code-reloading is the ultimate objective: does any language beat Clojure, and if so, how?


the erlang ecosystem has tools, documentation, frameworks, libraries, processes, and their vm built with support for updating code in production while it's running. that's a lot of support!


As another example, I recently came across this series that showcases how smalltalk takes hot-code reloading very seriously,


I think both examples show very different aspects to hot-code-reloading as a feature.


Smalltalk does not comprehend hot-code-reloading. It is all just code, running all the time. Making changes to a method and saving those changes makes it the new version that immediately starts executing. Calling a method that does not exist, pops up the debugger, where you can implement that method. Then complete that call by continuing the stack. I'm a fan. Spent a number of years doing this.


> Smalltalk does not comprehend hot-code-reloading. It is all just code, running all the time.
I'm not sure I understand the distinction.
> Making changes to a method and saving those changes makes it the new version that immediately starts executing.
😅 I thought that was hot code reloading


😂 It is a fine distinction, and one I do like to make: We submit code to the REPL. From over-here, we send to over-there. Same with your Erlang example (did that for a bit too) - you install new code over-there. We start something, then we load something into it. With Smalltalk everything is just-here. There is no other place, since the IDE/runtime/VM/running-processes are truly integrated. You don’t start a Smalltalk VM, you resume it. The running processes just continue, and you just continue changing a running system. It is alive in a way that no other environment has ever been since.


aren't lisp machines basically image environments like that as well?


Heh, yes. Hence my clarification of since. As Alan Kay says, nothing new happened since 1980 😉

❤️ 1

How are method definitions organized in smalltalk?


Here is toooo much information. In short: you edit the method, you save it (“accept” it). That immediately compiles it into a CompiledMethod object (everything is an object in Smalltalk, of course) and installs it into the MethodDictionary of the class it is part of. From then on, if you “send that message to an instance of that class”, it will run.


Some old screenshots of the Browser, that is how you do this, here:


And of course, go play with it yourself:

Noah Bogart11:01:06

Common Lisp beats Clojure in multiple ways: the condition system means that if there's a bug (missing function, adding a number to a symbol), it'll drop you into a debugger, allow you to edit the code, and then resume from the invocation that broke.

Noah Bogart11:01:10

Changing a class defined with CLOS will update all instances of that class, no matter where they are


Thanks, @U9E8C7QRJ! It was the ”saving changes make it the new version” that got me to wonder. I imagined saving a file with a bunch of stuff you might not want installed.


I see the fact that things do not auto-install on save, and that I don't need to save to install, as features on the plus side for Clojure.


"no matter where they are" - in that image?

👍 1

smalltalk doesn't really have the idea of "files". It has an image that you modify and then ship

👍 1

I tried using VisualAge for Java in its first version, which was basically like a smalltalk image for java. That was weird and I missed all my helpful unix text tools


I think bash and grep and sed are what really kept me from doing everything in an image


Yeah, that was a gigantic buggerup. VA and Java did not want to play together.

Noah Bogart12:01:44

Defined functions are bound to symbols which are memory locations, so changing a function will update it everywhere (instead of having to use vars in Clojure)
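A minimal REPL sketch of that difference, using a hypothetical `greet` function: a plain function value captured at one point in time goes stale after redefinition, while going through the var picks up the new definition on every call.

```clojure
(defn greet [] "hello")

;; Capture the function value directly, and via its var
(def direct greet)     ; holds the current function object
(def via-var #'greet)  ; holds the var; deref happens on each invocation

;; Redefine greet, as you would from the REPL
(defn greet [] "hi")

(direct)   ; => "hello"  (the old function object)
(via-var)  ; => "hi"     (the var dereferences to the new definition)
```

This is why during REPL-driven development it is common to store `#'handler` rather than `handler` in long-lived maps and servers.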

Joshua Suskalo16:01:52

common lisp is also both image based and generally considered pretty good for code reloading. It goes far enough to allow you to redefine all of the existing instances of classes when you redefine them to add or change slots (features which I'm sure smalltalk also has)

☝️ 1
Joshua Suskalo16:01:44

And the debugger is like the one I implemented in #farolero: it does something similar to Smalltalk. When an error occurs you get put into the debugger, it has a REPL, and you have the option to continue execution in predefined ways that the code you're calling specifies, or that the implementation gives you.

🎉 1
Noah Bogart16:01:16

Yeah, the redefine-existing-instances-of-classes behavior in Common Lisp is one of my big hangups with records in Clojure. Very annoying to create a piece of state for reuse in REPL-driven development, and then have to recreate that state every time I change the record definition (implement new protocols, etc.) because the state contains the record instance in some nested location.

Joshua Suskalo17:01:59

probably why it's recommended to make it mostly be maps and only use records as performance optimizations if possible
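A sketch of the staleness being described, with a hypothetical `Point` record: re-evaluating a `defrecord` generates a fresh class, so instances created before the redefinition no longer satisfy `instance?` checks (unlike CLOS, which updates existing instances in place).

```clojure
(defrecord Point [x y])

;; A record instance buried somewhere in accumulated REPL state
(def state {:location (->Point 1 2)})

;; Re-evaluate the defrecord (e.g. to implement a new protocol):
;; this compiles a brand-new class with the same name
(defrecord Point [x y])

;; The old instance nested in `state` is NOT an instance of the new class
(instance? Point (:location state))  ; => false
```

Plain maps sidestep the problem entirely, which is the point being made above.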

👍 1

Thanks everyone for the informative responses. Besides Erlang, Smalltalk, and Common Lisp, any other languages worth looking at?


It sorta depends on your definition of hot-code-reloading. Depending on your definition, Excel is probably the most successful hot-code-reloading environment. Some other interesting examples:
• PHP and/or wordpress
• The now defunct Adobe Flash
• Scratch
• Minecraft
• LabVIEW
• emacs/elisp
I'm sure there are lots of others.


Never used LabVIEW. For PHP: is it 'hot reloading' in that every HTTP request gets the latest code? What part of wordpress does "hot code reloading"?


When I first started web development, I used PHP. It worked great. Just make a change, save, refresh, repeat. All the state was in the database.


It's been a long time since I've used wordpress, but it has (had?) the code for plugins available in the web UI. It's possible to edit the code within wordpress, hit save, and refresh to immediately see the changes.


Does Clojure compile to the same byte code regardless of JVM and JVM host machine?

Ben Sless10:01:02

Should, the compiler's ASM library is bundled with it

Ben Sless10:01:25

But you can be affected by how classes are admitted by the JVM itself

Ben Sless10:01:44

So if you want byte code that will run on lower versions of JVM you should tell that to the JVM you're using to compile it

Ben Sless10:01:04

It will be forward compatible, though


Thanks! I have a performance mystery that I am investigating (tearing my hair out over it, rather). Two almost identical functions perform about as fast on my machine; A is about 10% faster than B. While on another machine B is 3X faster than A. This shouldn't be because of different bytecode being produced, then, iiuc?

Ben Sless10:01:42

How are you measuring?

Ben Sless10:01:09

What's the order of magnitude?


When I test it on my machine I use two methods:
1. Criterium (most often quick-bench)
2. Let it run for 5 seconds and see how many times the function completes, using a first run of 5 seconds to warm things up.
The reason for method 2 is that this is how things are measured on the benchmarking site for this particular challenge. (Dave Plummer's programming languages drag-racing.)
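Method 2 can be sketched roughly like this (a simplified harness for illustration, not the drag race's actual one):

```clojure
(defn count-completions
  "Call f repeatedly until ms milliseconds have elapsed;
  return how many times it completed."
  [f ms]
  (let [deadline (+ (System/currentTimeMillis) ms)]
    (loop [n 0]
      (if (< (System/currentTimeMillis) deadline)
        (do (f) (recur (inc n)))
        n))))

;; Warm-up pass, then the measured pass
(count-completions #(reduce + (range 1000)) 5000)
(count-completions #(reduce + (range 1000)) 5000)
```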

Ben Sless10:01:33

Leiningen or deps?


deps.edn both of them

Ben Sless10:01:41

My first suspicion is the JIT, which might be configured differently between those machines. Second, different JVM versions might have better JIT compilers. Third, hardware differences (ARM vs x86).


The question in the OP asks about different JVMs. Different JVMs can easily have staggeringly different performance in different scenarios.


The solutions are run with Docker. I should probably check the deps.edn of B...


What I mean by run with Docker is that regardless of machine, the same JVM is always used for A, and the same always used for B.


Just in case - if you use some ...:latest image with Docker, it can be a different image on different machines, potentially even with different JVMs. The page linked above doesn't really say which tag it uses.

Ben Sless10:01:04

Since we're getting into drag racing you could even be influenced by cache sizes, pages sizes, CPU mitigation flags, how the JVM was compiled, and more


B deps.edn:

{:paths ["."]
 :deps {org.clojure/clojure {:mvn/version "1.10.3"}}}
Mine (A) looks the same, except it also includes Criterium.


Yes, probably a lot of things can influence it. A thing about the machine producing the results I posted above is that it is a monster. Tons of CPUs, RAM, cache and stuff. Here are the results from another machine, with less impressive specs.


@UK0810AQ2 With that in mind, one cannot really optimize something for a different machine and environment, can they? Without perfectly replicating that whole environment.

Ben Sless10:01:29

This is why I said it depends on the order of magnitude. If we're talking microseconds the machine probably matters less than when dealing with nanoseconds


That first result table has another interesting thing. When I use a BitSet instead of a boolean-array, things run about half as fast, on my machine and on almost any machine. But on Dave's super monster machine my two solutions run about equally fast.

Ben Sless10:01:09

Would you mind profiling both solutions on your machine?


One run of the function runs in about 0.8ms on the monster machine (and on my M1 machine).


How do I profile Clojure/Java? Never done that.

Ben Sless10:01:57

But each op is very quick

Ben Sless10:01:05

Pull in clj-async-profiler

Ben Sless10:01:28

Wrap an expression with dotimes enough times to run for 30-60 seconds

Ben Sless10:01:37

Then wrap it in profile macro

Ben Sless10:01:47

Give the JVM a bunch of flags

Ben Sless10:01:57

Make sure debug symbols are installed

Ben Sless10:01:07

It's a fun process

Ben Sless10:01:11

Refer to preparation section
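Putting those steps together, a rough sketch (the dependency coordinates, version, and that `sieve-ba` from this thread is loaded are all assumptions; check the clj-async-profiler README for current setup details):

```clojure
;; Start the REPL with self-attach allowed and accurate debug info, e.g.:
;;   clj -Sdeps '{:deps {com.clojure-goes-fast/clj-async-profiler {:mvn/version "1.0.0"}}}' \
;;       -J-Djdk.attach.allowAttachSelf \
;;       -J-XX:+UnlockDiagnosticVMOptions -J-XX:+DebugNonSafepoints
(require '[clj-async-profiler.core :as prof])

;; Wrap enough iterations to run for 30-60 seconds, then profile it
(prof/profile
  (dotimes [_ 10000]
    (sieve-ba 1000000)))

;; Browse the generated flamegraphs in a local web UI
(prof/serve-ui 8080)
```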


Funny, in the video description to the episode we recorded about this drag-racing thing we added a link to that site of yours. I should have read it more carefully. 😃


Maybe you should make a video together with @U02KYBC5WJY about clojure-goes-fast, @UK0810AQ2?


I'm sure Daniel is too and am looking forward to watch!

Ben Sless11:01:01

Anyhow, profiling your code might point out why it's slower, where CPU cycles are consumed


On my machine my code is faster though…

Ben Sless11:01:09

Will still give you a decent view of things. Always good to profile, you might find surprising results


And it ”should” be, if we only look at what's going on in the code.


I’m dying to profile this! Also ordering a machine that's similar to Dave’s and will profile it there as well.

Ben Sless11:01:14

@U0ETXRFEW the bitset solution is about 2x slower on my machine

Ben Sless12:01:28

but let's analyze the differences between the byte arrays


2X slower on my machine as well. And it's what I have seen on every machine. That's why the 1:1 on one of the drag-racing machines is so strange.

Ben Sless12:01:58

I suspect intrinsics


> intrinsics Speak to me like I am 5 yo 😃

Ben Sless12:01:47

Things which let the JIT compiler swap a piece of code for hand tuned assembly

🙏 1
Ben Sless12:01:40

Basically a lookup table from method to assembly, pretty neat


While on this subject. During my tuning of this solution I have several times all-of-a-sudden had performance of a function drop by 4X and such. Restarting the REPL has fixed it. Is this something you have seen happen?

Ben Sless12:01:46

I also compared the two byte array implementations, there were a few places where the compiler emits intCast, after making sure the arguments to it will always be int I get the same results

Ben Sless12:01:03

Yes, there's a cache of compiled methods =\

Ben Sless12:01:07

it has a limited size

Ben Sless12:01:30

;; `>>`, `<<` and `sqr` are shorthand macros defined elsewhere in the project
;; (bit-shift-right, bit-shift-left and squaring, respectively).
(defn sieve_
  "This returns a boolean-array where the index of each false bit is the prime
  number (+ 1 (* 2 index)).  Despite this being unidiomatic, the result of this
  function can be used by the benchmark without needing to convert it into
  a proper list of primes. [1]
  [1]: <>"
  [^long limit]
  (let [q (unchecked-inc (long (Math/sqrt limit)))
        ^booleans sieve (boolean-array (>> limit 1))]
    (loop [factor 3]
      (when (< factor q)
        (when-not (aget sieve (unchecked-int (>> factor 1)))
          (let [factor*2 (<< factor 1)]
            (loop [num (sqr factor)]
              (when (< num limit)
                (aset sieve (unchecked-int (>> num 1)) true)
                (recur (unchecked-add num factor*2))))))
        (recur (unchecked-add 2 factor))))
    sieve))
(defn sieve-ba
  "boolean-array storage
   Returns the raw sieve with each index representing the odd numbers * 2
   Skips even indexes.
   No parallelisation."
  [^long n]
  (if (< n 2)
    (boolean-array n)
    (let [primes (boolean-array (bit-shift-right n 1) true)
          sqrt-n (long (Math/ceil (Math/sqrt (double n))))
          half-n (bit-shift-right n 1)]
      (loop [p 3]
        (when (< p sqrt-n)
          (when (aget primes (bit-shift-right p 1))
            (loop [i (bit-shift-right (unchecked-multiply p p) 1)]
              (when (< i half-n)
                (aset primes i false)
                (recur (unchecked-add i p)))))
          (recur (unchecked-add p 2))))
      primes)))

Ben Sless12:01:40

these should give you about the same perf on a fresh repl

Ben Sless12:01:56

The places where I stuck the unchecked-int calls were to make sure the argument to RT.intCast would be int and then you go through a happy path


Awesome. Thanks! How do the implementations compare on your machine?

Ben Sless12:01:00

I managed to force them to exactly the same perf

Ben Sless12:01:14

Also changed sqr

(defmacro sqr [n]
  `(let [n# (unchecked-int ~n)] (unchecked-multiply-int n# n#))) 


On my machine the performance drops a bit with the changes. Or at least they make no difference. Is that to be expected?


Will try with the sqr macro now as well.


Hmmm, I think it is a slight performance-drop. Even with the sqr macro added.


I'll just have to get going with the profiling now. Haha.

Ben Sless12:01:14

Since it's all static method calls you won't see much with async-profiler

Ben Sless12:01:22

need even more fine-grained tools

Ben Sless12:01:34

I'll try JITWatch later


I guess the "Running on non-x64 platforms" section is relevant for me?


I'll try this on my x64 mac then. 😃


No profiling yet. But three observations from my x64 Mac:
1. With JDK 12 my solution (A) is almost twice as fast as Alex's (B).
2. With JDK 18, about the same diff as on my M1. Maybe A is more clearly faster than B on my x64.
3. The optimizations you added slow my solution down slightly on my x64 as well. At least in the REPL; I must fix a Docker issue to test with that.


It is beyond me why the "unchecked" stuff would slow things down. But I also recall having seen this before; it is the reason I don't have them in my submitted version. I should probably use a Linux machine for the tuning for this drag-racing thing.


Sorry if I missed it in the discussion so far or you checked it already: If you run it from the same Docker image on both machines, and the JVM is x86, is the container maybe running emulated on the M1 machine?


Thanks! Run x64 emulation on the M1, you mean?


I'm not super savvy with docker, but will figure out how.


Yeah, most images are built for x86 architectures. As Docker shares the host kernel, the architecture matters. On M1 Macs, x86 images run under Rosetta 2 by default, which emulates an x86 arch and is therefore often much slower.


Also, one datapoint here is that my x64 Mac behaves very similarly to my M1. About the same performance even, which leads me to wonder if maybe I am running in some emulated mode already. The M1 otherwise runs circles around the x64, without ever getting warm at all. (I love my M1! 😃)


Was running my workloads on my M1 with an x86 JDK in the beginning. When I switched to an ARM-compatible one, the build times for some apps were cut in half.


When I run the Rust solutions, my M1 beats that drag-racing water-cooled monster on per-thread performance. But when I run it in the Docker container things slow down significantly.


Now, I need to figure out how to check if the JVM I'm using is x64. 😃


You can use docker image inspect <image_name>


Part of the output


I also get arm64.


That sounds good


Not sure how to see if the JVM is in emulation though.

(System/getProperty "sun.arch.data.model") => "64"
java -version     
java version "17.0.1" 2021-10-19 LTS
Java(TM) SE Runtime Environment (build 17.0.1+12-LTS-39)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.1+12-LTS-39, mixed mode, sharing)
(On the Mac.)
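For the JVM side specifically, the `os.arch` system property is a quick check (a sketch; the exact value strings vary by vendor):

```clojure
;; "os.arch" reports the architecture the JVM binary was built for.
;; An x64 JDK running under Rosetta 2 still reports an x86 value
;; ("x86_64" or "amd64"); a native Apple Silicon JDK reports "aarch64".
(System/getProperty "os.arch")
```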


From which source did you get the JDK/JVM?


I'm using sdk-man. Not sure what repositories it uses...


Another datapoint is that this previous version of mine ran much faster on the drag-racing machine than my current version.

(defn sieve-ba-skip-even
  "boolean-array storage
   Returns the raw sieve with each index representing the odd numbers * 2
   Skips even indexes.
   No parallelisation."
  [^long n]
  (if (< n 2)
    (boolean-array n)
    (let [primes (boolean-array (bit-shift-right n 1) true)
          sqrt-n (long (Math/ceil (Math/sqrt n)))]
      (loop [p 3]
        (if-not (< p sqrt-n)
          primes
          (do
            (loop [i (* p p)]
              (when (< i n)
                (aset primes (bit-shift-right i 1) false)
                (recur (+ i p p))))
            (recur (+ p 2))))))))
The current version fixes a major algorithmic flaw with this one. And on my machines this fix improves performance a lot. But on that drag-racing machine... Pasting the current one for reference:
(defn sieve-ba
  "boolean-array storage
   Returns the raw sieve with each index representing the odd numbers * 2
   Skips even indexes.
   No parallelisation."
  [^long n]
  (if (< n 2)
    (boolean-array n)
    (let [primes (boolean-array (bit-shift-right n 1) true)
          sqrt-n (long (Math/ceil (Math/sqrt n)))
          half-n (bit-shift-right n 1)]
      (loop [p 3]
        (when (< p sqrt-n)
          (when (aget primes (bit-shift-right p 1))
            (loop [i (bit-shift-right (* p p) 1)]
              (when (< i half-n)
                (aset primes i false)
                (recur (+ i p)))))
          (recur (+ p 2))))
      primes)))
Here's a run from some days ago before the algo-fix, and here's one from today. (Even if only for my own use. 😃 I'll be looting this thread when I try to blog something about this ghost chase.)


Seems I need to disable Rosetta compatibility in sdk-man to get Apple Silicon versions. Will try that now.


This gets weirder and weirder. Yesterday I submitted a solution where I applied your changes, @UK0810AQ2, to both the boolean-array and the BitSet implementations. The 8-core machine has run it. The expected effects of the optimizations show for the BitSet solution, but for the boolean-array one things have gotten even worse. Now it is being clearly outperformed by the BitSet solution. They implement the algorithm the exact same way and have the same boxing-avoidance measures applied. Unless I am missing something. I am probably missing something. However, none of my machines expose the problem. Fwiw, I can profile on both, though, using clj-async-profiler, but I don't really know how to interpret the flame graph. (And it might not be granular enough anyway, I think you concluded yesterday.)


Now I should probably set up profiling. Is it still worth it with clj-async-profiler, you think, @UK0810AQ2?

Ben Sless14:01:11

I'll start by testing the bitset implementation on different JVM versions. You could profile it to satisfy curiosity


If I use JDK 17 both functions perform as they should in a reliable way. So that probably solves my problem with not having the fastest Clojure solutions in that drag race. But it does not satisfy my curiosity. I have to know what is going on with JDK 18 on these machines. Maybe I should start with summarizing the situation in a blog article. That'll hopefully help me formulate a plan for how to move forward.

Ben Sless16:01:05

Do you know who built and distributed the JVM 18 in both cases?


I'm using OpenJDKs installed with sdkman.


I've just tried to AOT compile the program on JDK17 and run it on JVM18. Same slowness.


I have a Java solution to the sieve as well. Maybe I should try that one with Java 18...


Is there a common naming convention for asynchronous functions, e.g. ones returning a core.async channel or a js/Promise?

👀 1

a prefix < is suggested here, but I have not encountered this anywhere in any code


most other programming languages simply add async as a suffix, but this imho looks bad in clj

Noah Bogart12:01:14

JavaScript/node has converged on functionName and functionNameSync as the ecosystem adopted Promises


in js almost everything is async


in clj exactly opposite


If I have a function that is supposed to return an async channel then I think of such a function as a "channel builder" and usually give it a name with a -chan suffix

👍 1

Which clojure retry library (linear, exponential-backoff) do you recommend?


I just use failsafe via interop

👍 3

I have code that mutates a couple of record fields often. What's more performant:
1. returning a new record each time
2. using (assoc the-record :key new-value)

Alex Miller (Clojure team)13:01:58

assoc is also going to make a new record

Alex Miller (Clojure team)13:01:21

So not sure those are any different, but why guess when you can measure?


because I’m deep enough in the rabbit hole and trying not to dig deeper 🙂 right, a record is not a map, so assoc is not as cheap as simple “branching”


Measure first, but I would expect re-constructing with the Java constructor to be the fastest way to get a modified record. But measure, because 'I expect' is often not the case in reality.


thanks both

Alex Miller (Clojure team)15:01:51

re-constructing with the java constructor is what assoc does

Alex Miller (Clojure team)15:01:06

so I don't think these are likely to be meaningfully different
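A quick REPL check of that point, with a hypothetical `Conn` record: `assoc` hands back an instance of the same record class, built through the record's own constructor, not a plain map.

```clojure
(defrecord Conn [host port])

(def c (->Conn "localhost" 5432))

;; assoc on a record goes through the record's positional constructor
;; under the hood, so the result is still a Conn
(= (class c) (class (assoc c :port 5433)))  ; => true
(:port (assoc c :port 5433))                ; => 5433
```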

👍 1

I'm trying to figure out some stuff in the AWS x-ray Java SDK and my lord what a mess. I understand now what people mean when they talk about bloated and complected code.


Deep down it's just sending JSON to a UDP port. Can't use that on my own. Can I create the data objects independently? No. Can I use the thread local stuff? No, that's also tangled together.


all the java tracing libs are like that


traces: they're just maps™. There's a touch of thread locals to convey those maps, and a background publisher that takes maps and squirts them onto the wire.


doing it fully in Clojure is straightforward, but you may trade off being on an island (if you are using tools that already integrate with the OpenTelemetry java lib)


I have found so far all the OpenTelemetry stuff a bit half baked. At least on AWS you need to work extra to push to XRay. I am leaning towards creating internal namespaces anyway, so hopefully migrating will be straightforward.


Is there anything I'm missing re: OpenTelemetry?

Chad Angelelli15:01:32

does anyone have a working example of using deps.edn with a private GCP Artifact Registry Maven repository?

Alex Miller (Clojure team)16:01:22

looking at, it seems like they use a custom url protocol and wagon for access to artifact registry, which is not supported by the Clojure CLI currently

Alex Miller (Clojure team)16:01:55

although the "configuring password authentication" section of that makes it seem like you can also use https. I don't know of anyone that's tried this, but you could try that and see if it works

Alex Miller (Clojure team)16:01:50

you would make the same settings.xml changes and then use

:mvn/repos {"artifact-registry" {:url "..."}}
in deps.edn. Not sure if all the http config would correctly get picked up from settings.xml.

Chad Angelelli22:01:32

yea, i felt like i was hitting a wall yesterday trying to bend it far out of the shape it seemed it wanted. not to say it can't be done but i'll have to dig further to see if i can get it w/ straight HTTPS/settings.xml. maybe i'll just use S3 for now. do you think it's likely to see wagons in tools deps in the next few months?

Alex Miller (Clojure team)23:01:22

it is actually an open extension system if you use tools.deps via the library

Alex Miller (Clojure team)23:01:40

we do not currently have any way in the Clojure CLI to plug in an extension

Alex Miller (Clojure team)23:01:12

installed extension code is not a hard problem, but what this means for consumers of transitive deps that might not have the extension is a harder thing to figure out

Chad Angelelli23:01:09

got it. thanks for the quick responses Alex! i'll find a way. this one little thing is far from turning me back to Lein/Boot. like i said, i'd probably just use S3 or a different storage before that 😉 appreciate it, man! have a good one

Nom Nom Mousse16:01:15

I have started a web app in a REPL. It sometimes prints output to stdout. Is there a way to view that output in another terminal too, not just in the repl? There is an nrepl server on 7000. Is there a way to only listen to it, not be in a prompt?


sounds like you are using #clojurescript ?

Nom Nom Mousse16:01:02

No, the stdout is from the backend. But good that you asked for clarification :thumbsup:


hm, what prints to stdout only? java code? because clojure’s println should be instrumented

Nom Nom Mousse16:01:43

I should have been more precise. I have some printlns I want to see in the terminal, not just my CIDER REPL. :)

Nom Nom Mousse16:01:32

And before anyone asks: that is because the output is color formatted, so I want to see what it looks like in another color scheme


I don’t have an answer, maybe digging into the middlewares could help

🙏 1

(binding [*out* (java.io.OutputStreamWriter. System/out)]
  (println "Hello, world!"))
Should probably do the trick.

Ben Sless16:01:41

Anything in particular I should do to ensure keywords are encoded when using data.fressian? Want to make sure I'm getting the most out of the encoding mechanism

Noah Bogart16:01:28

Is there any performance penalty to using vars instead of function references when, for instance, putting a function reference in a map?

Ben Sless16:01:44

Yes, but it only matters when they're on a hot path or very fast. If they take microseconds or more to execute I wouldn't be bothered by it

👍 1
Alex Miller (Clojure team)16:01:10

you're basically adding one deref into the path

👍 1
Ben Sless16:01:32

Won't it prevent some JIT optimizations and churn cache because you're derefing a volatile?

Noah Bogart16:01:12

As I expected. Thanks for the quick answers

Alex Miller (Clojure team)16:01:35

I mean, yes ... but I haven't looked at this on recent JVMs and it would not surprise me if there are cases where hotspot can optimize in spite of this

Ben Sless18:01:23

Aren't volatile reads guaranteed to entail cache eviction? Not sure if that can be optimized away, especially as some algorithms rely on these properties

Alex Miller (Clojure team)18:01:18

If you can prove everything is in a single thread and the value doesn't escape, I imagine you could potentially have nothing in cache to evict

Alex Miller (Clojure team)18:01:06

I'm not saying the jvm can do this, but ... it can do many things that are surprising to us plebes on the outside

Ben Sless18:01:57

JIT compilation is generally magical to me. "I took your bytecode and replaced it with assembly, woosh"

Alex Miller (Clojure team)18:01:45

if you haven't played with the jvm settings that print the assembly as the jit works, it's pretty illuminating

Ben Sless18:01:46

I have, and have analyzed these logs, too

Alex Miller (Clojure team)18:01:20

-XX:+PrintInlining is sometimes interesting too

Ben Sless18:01:41

I threw all the flags at it and analyzed it with jitwatch


I'd like to extend a protocol to anything that extends another protocol. is there an elegant way to do that?

Alex Miller (Clojure team)20:01:46

there are several approaches, but really you should probably consider deeply whether you really want to do that


More elegant than the hack I suggested for cljs? Nope. Not if you want to support everything that implements a protocol

Alex Miller (Clojure team)20:01:14

some options:
• extend it to a set of known concrete implementors (if that's fixed)
• extend to Object, determine in that impl whether the concrete type you have is an instance of the protocol and if so, dynamically install the concrete extension (I have an example of this in Clojure Applied)
• wrap protocol use in a function that can route appropriately (a good idea generally)
• extend to the interface backing the protocol (kind of dirty, but will work for direct extensions, and not for others)
• in a wrapper function, dynamically wrap in metadata extension if appropriate
• programmatically modify the protocol map


if you want to abandon the protocol mechanism a tad, you can in clj make a definterface and expose functions that work with that interface. Then you know there is always a type to dispatch on and you can extend your protocol to that custom type

Alex Miller (Clojure team)20:01:38

that's basically the same as using the interface under the defprotocol


yeah, but you don't leave open the unsupported extension points

Alex Miller (Clojure team)20:01:58

which kind of takes advantage of implementation details you might want to pretend you don't know


oh and one i forgot - you can bake in support for protocol B into protocol A


(defprotocol A
  (do-thing-a [_])
  (as-b [_]))

(defprotocol B 
  (do-thing-b [_]))

Alex Miller (Clojure team)20:01:37

yep, add that to the list :)


so anything that properly implements the A protocol provides a way to provide something that implements the B protocol
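Restating the two protocols so the sketch runs standalone, here is how an `A` implementor can supply its own `B` (the vector extension is purely hypothetical, for illustration):

```clojure
(defprotocol A
  (do-thing-a [_])
  (as-b [_]))

(defprotocol B
  (do-thing-b [_]))

;; Hypothetical extension: vectors implement A, and produce
;; a B on demand via reify
(extend-protocol A
  clojure.lang.IPersistentVector
  (do-thing-a [v] (count v))
  (as-b [v] (reify B (do-thing-b [_] (first v)))))

(do-thing-b (as-b [:x :y :z]))  ; => :x
```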


in my case it needs to be both fast and open to outside extension


extending to the interface might be ok in clojure

Alex Miller (Clojure team)22:01:06

extenders then have to use direct extension (not external or metadata extension), which might be ok for some uses


I guess I'll extend the concrete types I know of for now